path: root/tests/bugs
author    Atin Mukherjee <amukherj@redhat.com>    2018-03-27 16:53:33 +0530
committer Atin Mukherjee <amukherj@redhat.com>    2018-04-05 07:18:03 +0000
commit    d70529701f09f89c7e4f578446d55de31497361d (patch)
tree      716cacd3adc6612cba90d581da95e034257d9d85 /tests/bugs
parent    1dfd17216ddd11d85842380f4afbb9267bd597c6 (diff)
glusterd: mark port_registered to true for all running bricks with brick mux
glusterd maintains a boolean flag 'port_registered', used to determine whether a brick has completed its portmap sign-in process. This flag is (re)set on pmap_signin and pmap_signout events. With brick multiplexing, this flag identifies whether the very first brick with which the process was spawned has completed its sign-in.

However, on glusterd restart, when a brick is already identified as running, glusterd does a pmap_registry_bind to ensure its portmap table is updated, but this flag is not set. That is fine without brick multiplexing, but it causes an issue if the very first brick that came up as part of the process is replaced: the subsequent brick attach will fail. One way to reproduce this is to create and start a volume, remove the first brick, and then add-brick a new one. The add-brick operation takes a very long time, and afterwards volume status shows all bricks other than the new one as down.

The solution is to set brickinfo->port_registered to true for all running bricks when brick multiplexing is enabled.

Change-Id: Ib0662d99d0fa66b1538947fd96b43f1cbc04e4ff
Fixes: bz#1560957
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
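The change itself lands in glusterd's C code; only the regression test appears in this diff. As a minimal sketch of the idea (not the actual patch: the types, helper names, and restart hook here are simplified stand-ins for glusterd's real glusterd_brickinfo_t, is_brick_mx_enabled(), and brick-restart path), the fix amounts to marking an already-running brick as signed in when multiplexing is on:

    #include <stdbool.h>

    /* Simplified stand-in for glusterd_brickinfo_t. */
    typedef struct brickinfo {
        bool port_registered; /* set on pmap sign-in, cleared on sign-out */
        bool already_running; /* brick process found alive on glusterd restart */
    } brickinfo_t;

    /* Stand-in for is_brick_mx_enabled(). */
    static bool brick_mux_enabled = true;

    /* Hypothetical hook called for each brick while glusterd
     * restarts and rebinds its portmap table. */
    static void
    mark_port_registered_on_restart(brickinfo_t *brick)
    {
        if (brick->already_running && brick_mux_enabled) {
            /* The multiplexed process already completed its portmap
             * sign-in via the first brick it was spawned with. Record
             * that here, so replacing that first brick does not make
             * subsequent brick-attach requests fail. */
            brick->port_registered = true;
        }
    }

Without this, port_registered stays false after a restart, and the remove-brick/add-brick sequence exercised by the test below hangs the attach and leaves bricks reported as down.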
Diffstat (limited to 'tests/bugs')
-rw-r--r-- tests/bugs/glusterd/brick-mux-validation.t | 15
1 file changed, 13 insertions, 2 deletions
diff --git a/tests/bugs/glusterd/brick-mux-validation.t b/tests/bugs/glusterd/brick-mux-validation.t
index 3c6ad49686e..9e1c2c21752 100644
--- a/tests/bugs/glusterd/brick-mux-validation.t
+++ b/tests/bugs/glusterd/brick-mux-validation.t
@@ -43,6 +43,17 @@ EXPECT_WITHIN $PROCESS_UP_TIMEOUT 1 count_brick_processes
EXPECT_WITHIN $PROCESS_UP_TIMEOUT 1 count_brick_pids
EXPECT_WITHIN $PROCESS_UP_TIMEOUT 6 online_brick_count
+#bug-1560957 - brick status goes offline after remove-brick followed by add-brick
+
+pkill glusterd
+TEST glusterd
+TEST $CLI volume remove-brick $V0 $H0:$B0/${V0}1 force
+TEST $CLI volume add-brick $V0 $H0:$B0/${V0}1_new force
+
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT 1 count_brick_processes
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT 1 count_brick_pids
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT 6 online_brick_count
+
#bug-1446172 - reset brick with brick multiplexing enabled
TEST glusterfs --volfile-id=$V0 --volfile-server=$H0 $M0;
@@ -52,7 +63,7 @@ do
echo $i > $M0/file$i.txt
done
-TEST $CLI volume reset-brick $V0 $H0:$B0/${V0}1 start
+TEST $CLI volume reset-brick $V0 $H0:$B0/${V0}1_new start
EXPECT_WITHIN $PROCESS_DOWN_TIMEOUT 5 online_brick_count
EXPECT 1 count_brick_processes
@@ -61,7 +72,7 @@ EXPECT 1 count_brick_processes
TEST ! $CLI volume reset-brick $V0 $H0:$B0/${V0}1 $H0:$B0/${V0}1 commit
# reset-brick commit force should work and should bring up the brick
-TEST $CLI volume reset-brick $V0 $H0:$B0/${V0}1 $H0:$B0/${V0}1 commit force
+TEST $CLI volume reset-brick $V0 $H0:$B0/${V0}1_new $H0:$B0/${V0}1_new commit force
EXPECT_WITHIN $PROCESS_UP_TIMEOUT 6 online_brick_count
EXPECT 1 count_brick_processes