path: root/glusterfsd/src/glusterfsd-mgmt.c
author    Mohit Agrawal <moagrawal@redhat.com>    2018-07-12 13:29:48 +0530
committer Atin Mukherjee <amukherj@redhat.com>    2018-07-27 01:24:09 +0000
commit    9400b6f2c8aa219a493961e0ab9770b7f12e80d2 (patch)
tree      50e31f0467b154d39f3d602e5a0f05d65d38a76c /glusterfsd/src/glusterfsd-mgmt.c
parent    2836e158f38eb9ed070de88b64a3a8758cd2d4c0 (diff)
glusterd: Add multiple checks before attach/start a brick
Problem: In a brick-mux scenario glusterd is sometimes unable to start/attach a brick, yet gluster v status shows the brick as already running.

Solution:
1) To make sure the brick is running, check for the brick_path in /proc/<pid>/fd; if the brick is consumed by the brick process, the brick stack has come up, otherwise it has not.
2) Before starting/attaching a brick, check whether the brick is mounted.
3) While printing volume status, check whether the brick is consumed by any brick process.

Test: The following procedure was used to verify the change:
1) Set up a brick-mux environment on a VM.
2) Put a breakpoint in gdb in posix_health_check_thread_proc at the point where the GF_EVENT_CHILD_DOWN event is notified.
3) Forcefully unmount any one brick path.
4) Check gluster v status; it shows N/A for that brick.
5) Try to start the volume with the force option; glusterd throws the message "No device available for mount brick".
6) Mount the brick_root path.
7) Try to start the volume with the force option again.
8) The down brick is started successfully.

Change-Id: I91898dad21d082ebddd12aa0d1f7f0ed012bdf69
fixes: bz#1595320
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
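Solution point 1 above relies on scanning /proc/<pid>/fd to verify that the brick process actually holds the brick path open. The sketch below shows one way such a check could look; the helper name is_brick_consumed and its return convention are illustrative assumptions, not the actual glusterd implementation.

    /* Hypothetical sketch: return 1 if any fd of <pid> resolves to a path
     * under brick_path, 0 if not, -1 on error. Not the actual glusterd code. */
    #include <stdio.h>
    #include <string.h>
    #include <dirent.h>
    #include <unistd.h>
    #include <limits.h>
    #include <sys/types.h>

    static int
    is_brick_consumed (pid_t pid, const char *brick_path)
    {
            char            fd_dir[64]            = {0,};
            char            link_path[PATH_MAX]   = {0,};
            char            target[PATH_MAX]      = {0,};
            struct dirent  *entry                 = NULL;
            DIR            *dir                   = NULL;
            ssize_t         len                   = 0;
            int             found                 = 0;

            snprintf (fd_dir, sizeof (fd_dir), "/proc/%d/fd", (int) pid);
            dir = opendir (fd_dir);
            if (!dir)
                    return -1;

            while ((entry = readdir (dir)) != NULL) {
                    if (entry->d_name[0] == '.')
                            continue;
                    snprintf (link_path, sizeof (link_path), "%s/%s",
                              fd_dir, entry->d_name);
                    len = readlink (link_path, target, sizeof (target) - 1);
                    if (len < 0)
                            continue;
                    target[len] = '\0';
                    /* An open fd pointing under the brick root means the
                     * brick stack has come up and consumed the brick. */
                    if (strncmp (target, brick_path, strlen (brick_path)) == 0) {
                            found = 1;
                            break;
                    }
            }
            closedir (dir);
            return found;
    }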
Diffstat (limited to 'glusterfsd/src/glusterfsd-mgmt.c')
-rw-r--r-- glusterfsd/src/glusterfsd-mgmt.c | 3
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/glusterfsd/src/glusterfsd-mgmt.c b/glusterfsd/src/glusterfsd-mgmt.c
index f1b2f9e7123..1e9015440ae 100644
--- a/glusterfsd/src/glusterfsd-mgmt.c
+++ b/glusterfsd/src/glusterfsd-mgmt.c
@@ -919,6 +919,9 @@ glusterfs_handle_attach (rpcsvc_request_t *req)
"got attach for %s but no active graph",
xlator_req.name);
}
+ if (ret) {
+ ret = -1;
+ }
glusterfs_translator_info_response_send (req, ret, NULL, NULL);
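The hunk above normalizes any non-zero ret to -1 before glusterfs_translator_info_response_send() is called, so glusterd receives a consistent failure code when the attach cannot proceed. Solution point 2, checking that the brick root is still mounted before start/attach, can be implemented by comparing the device IDs of the brick directory and its parent. The helper below is a minimal sketch under that assumption; is_path_mounted is a hypothetical name and not the function added by this patch.

    /* Hypothetical sketch: return 1 if 'path' is a mount point, 0 if not,
     * -1 on stat failure. Not the code introduced by this patch. */
    #include <stdio.h>
    #include <limits.h>
    #include <sys/stat.h>

    static int
    is_path_mounted (const char *path)
    {
            struct stat st_path   = {0,};
            struct stat st_parent = {0,};
            char        parent[PATH_MAX] = {0,};

            if (stat (path, &st_path) != 0)
                    return -1;

            snprintf (parent, sizeof (parent), "%s/..", path);
            if (stat (parent, &st_parent) != 0)
                    return -1;

            /* A mount point lives on a different device than its parent,
             * or is the filesystem root (same inode as its parent). */
            if (st_path.st_dev != st_parent.st_dev ||
                st_path.st_ino == st_parent.st_ino)
                    return 1;

            return 0;
    }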