author    Mohit Agrawal <moagrawa@redhat.com>    2017-08-08 14:36:17 +0530
committer Jeff Darcy <jeff@pl.atyp.us>    2017-08-08 22:28:30 +0000
commit    c13d69babc228a2932994962d6ea8afe2cdd620a (patch)
tree      6d453f44fd5986e3b72b9aacc5b163dddc4dfd86 /xlators/mgmt/glusterd/src/glusterd-pmap.c
parent    c63aa2239bc682739328e0aa6cbcb3279a72a8e2 (diff)
glusterd: Block brick attach request till the brick's ctx is set
Problem: In a brick-multiplexing setup in a container environment we hit
a race where, before the first brick finished its handshake with
glusterd, the subsequent attach requests went through; those attaches
actually failed, and glusterd had no mechanism to detect this. As a
result none of those bricks became active, so clients could not connect.

Solution: Introduce a new flag, port_registered, in glusterd_brickinfo
to ensure that pmap_signin has finished before the subsequent
attach-brick requests are processed.

Test: To reproduce the issue, follow the steps below:
1) Create 100 volumes on 3 nodes (1x3) in a CNS environment
2) Enable brick multiplexing
3) Reboot one container
4) Run the command below:
   for v in `gluster v list`; do glfsheal $v | grep -i "transport"; done
After applying the patch, the command should not fail.

Note: A big thanks to Atin for suggesting the fix.

BUG: 1478710
Change-Id: I8e1bd6132122b3a5b0dd49606cea564122f2609b
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://review.gluster.org/17984
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
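To make the intent of the new flag concrete, here is a minimal,
self-contained C sketch of the consumer side: an attach path that defers
attach requests until the target brick's pmap signin has set
port_registered. This is not code from the patch; brickinfo_t,
attach_brick_sketch(), and send_attach_req_sketch() are hypothetical,
simplified stand-ins for glusterd_brickinfo_t and glusterd's real attach
helpers.

/*
 * Illustrative sketch only -- NOT part of the patch below. Shows how an
 * attach path can be gated on the port_registered flag that this patch
 * introduces. All names here are hypothetical stand-ins.
 */
#include <stdio.h>

typedef enum { _gf_false = 0, _gf_true = 1 } gf_boolean_t;

typedef struct {
        gf_boolean_t port_registered; /* set in __gluster_pmap_signin,
                                         cleared in __gluster_pmap_signout */
} brickinfo_t;

/* Hypothetical stand-in for submitting the attach RPC to the
 * already-running multiplexed brick process. */
static int
send_attach_req_sketch (brickinfo_t *target)
{
        (void) target;
        return 0; /* pretend the RPC was submitted successfully */
}

static int
attach_brick_sketch (brickinfo_t *target, brickinfo_t *new_brick)
{
        /* Block the attach until the target brick's signin has set the
         * flag; otherwise the attach RPC races the brick handshake and
         * can fail without glusterd noticing. Callers retry later. */
        if (target->port_registered == _gf_false)
                return -1;

        (void) new_brick;
        return send_attach_req_sketch (target);
}

int
main (void)
{
        brickinfo_t first  = { .port_registered = _gf_false };
        brickinfo_t second = { .port_registered = _gf_false };

        /* Before signin: attach must be deferred (prints -1). */
        printf ("before signin: %d\n", attach_brick_sketch (&first, &second));

        first.port_registered = _gf_true; /* __gluster_pmap_signin ran */

        /* After signin: attach may proceed (prints 0). */
        printf ("after signin:  %d\n", attach_brick_sketch (&first, &second));
        return 0;
}

With this gating in place, pmap signin becomes the synchronization
point: __gluster_pmap_signin (first hunk below) sets the flag once the
brick's handshake completes, and __gluster_pmap_signout (second hunk)
clears it so a restarted brick is again shielded from premature
attaches.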
Diffstat (limited to 'xlators/mgmt/glusterd/src/glusterd-pmap.c')
-rw-r--r--  xlators/mgmt/glusterd/src/glusterd-pmap.c  7
1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-pmap.c b/xlators/mgmt/glusterd/src/glusterd-pmap.c
index 1fc7a250748..2a754769c95 100644
--- a/xlators/mgmt/glusterd/src/glusterd-pmap.c
+++ b/xlators/mgmt/glusterd/src/glusterd-pmap.c
@@ -502,6 +502,9 @@ __gluster_pmap_signin (rpcsvc_request_t *req)
                                 GF_PMAP_PORT_BRICKSERVER, req->trans);
 
         ret = glusterd_get_brickinfo (THIS, args.brick, args.port, &brickinfo);
+        /* Update portmap status in brickinfo */
+        if (brickinfo)
+                brickinfo->port_registered = _gf_true;
 
 fail:
         glusterd_submit_reply (req, &rsp, NULL, 0, NULL,
@@ -554,6 +557,10 @@ __gluster_pmap_signout (rpcsvc_request_t *req)
                                         brick_path, GF_PMAP_PORT_BRICKSERVER,
                                         req->trans);
         }
+        /* Update portmap status on brickinfo */
+        if (brickinfo)
+                brickinfo->port_registered = _gf_false;
+
         /* Clean up the pidfile for this brick given glusterfsd doesn't clean it
          * any more. This is required to ensure we don't end up with having
          * stale pid files in case a brick is killed from the backend