author     Ravishankar N <ravishankar@redhat.com>    2018-01-28 13:50:47 +0530
committer  Ravishankar N <ravishankar@redhat.com>    2018-02-01 14:17:50 +0000
commit     0e6e8216823c2d9dafb81aae0f6ee3497c23d140 (patch)
tree       06481a13e40b18796cbef6a8248f539e6a739951 /tests/basic/afr/arbiter-add-brick.t
parent     d663b9a323f34919da3f35bfc221a0aa91d9ab94 (diff)
afr: don't treat all cases of all bricks being blamed as split-brain
Problem: We currently don't have a roll-back/undoing of post-ops if quorum is
not met. Though the FOP is still unwound with failure, the xattrs remain on
disk. Due to these partial post-ops and partial heals (healing only when 2
bricks are up), we can end up in split-brain purely from the afr xattrs' point
of view, i.e. each brick is blamed by at least one of the others. These
scenarios are hit when there are frequent connects/disconnects of the
client/shd to the bricks while I/O or heal is in progress.

Fix: Instead of undoing the post-op, pick a source based on the xattr values.
If 2 bricks blame one, the blamed one must be treated as a sink. If there is
no majority, all are sources. Once we pick a source, self-heal will then do
the heal instead of erroring out due to split-brain.

Change-Id: I3d0224b883eb0945785ade0e9697a1c828aec0ae
BUG: 1539358
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
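
As a rough illustration of the source-picking rule described above, here is a
minimal C sketch. It is an assumption for illustration only, not the actual
afr implementation; blamed, sources, CHILD_COUNT and pick_sources_by_blame are
invented names:

/* Sketch only: blamed[i][j] != 0 means brick i's pending xattrs blame
 * brick j. A brick blamed by a majority of its peers is a sink;
 * otherwise it stays a source, so heal can proceed instead of being
 * flagged as split-brain. */
#include <stdbool.h>

#define CHILD_COUNT 3   /* assumed replica 3 (including arbiter) for this example */

static void
pick_sources_by_blame(const int blamed[CHILD_COUNT][CHILD_COUNT],
                      bool sources[CHILD_COUNT])
{
        for (int j = 0; j < CHILD_COUNT; j++) {
                int accusers = 0;
                for (int i = 0; i < CHILD_COUNT; i++) {
                        if (i != j && blamed[i][j])
                                accusers++;
                }
                /* 2 of 3 bricks blame brick j => sink; no majority => source. */
                sources[j] = (accusers < 2);
        }
}

For example, if bricks 0 and 2 both blame brick 1 and nothing else is blamed,
only brick 1 becomes a sink; if each brick is blamed by exactly one peer,
there is no majority and all three remain sources, so self-heal proceeds
instead of reporting split-brain.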
Diffstat (limited to 'tests/basic/afr/arbiter-add-brick.t')
-rw-r--r--    tests/basic/afr/arbiter-add-brick.t    16
1 file changed, 16 insertions(+), 0 deletions(-)
diff --git a/tests/basic/afr/arbiter-add-brick.t b/tests/basic/afr/arbiter-add-brick.t
index fe919de0ab4..77b93d9a210 100644
--- a/tests/basic/afr/arbiter-add-brick.t
+++ b/tests/basic/afr/arbiter-add-brick.t
@@ -12,6 +12,8 @@ TEST $CLI volume set $V0 performance.stat-prefetch off
TEST $CLI volume start $V0
TEST $CLI volume set $V0 self-heal-daemon off
TEST $GFS --volfile-id=$V0 --volfile-server=$H0 $M0;
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 0
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 1
TEST mkdir $M0/dir1
TEST dd if=/dev/urandom of=$M0/file1 bs=1024 count=1
@@ -24,6 +26,7 @@ TEST dd if=/dev/urandom of=$M0/file1 bs=1024 count=1024
#convert replica 2 to arbiter volume
TEST $CLI volume start $V0 force
EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" brick_up_status $V0 $H0 $B0/${V0}1
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 1
#syntax check for add-brick.
TEST ! $CLI volume add-brick $V0 replica 2 arbiter 1 $H0:$B0/${V0}2
@@ -31,6 +34,19 @@ TEST ! $CLI volume add-brick $V0 replica 3 arbiter 2 $H0:$B0/${V0}2
TEST $CLI volume add-brick $V0 replica 3 arbiter 1 $H0:$B0/${V0}2
EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" brick_up_status $V0 $H0 $B0/${V0}2
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 2
+
+#Trigger name heals from client. If we just rely on index heal, the first index
+#crawl on B0 fails for /, dir2 and /file either due to lock collision or files
+#not being present on the other 2 bricks yet. It is getting healed only in the
+#next crawl after priv->shd.timeout (600 seconds) or by manually launching
+#index heal again.
+TEST $CLI volume set $V0 data-self-heal off
+TEST $CLI volume set $V0 metadata-self-heal off
+TEST $CLI volume set $V0 entry-self-heal off
+TEST stat $M0/dir1
+TEST stat $M0/dir2
+TEST stat $M0/file1
#Heal files
TEST $CLI volume set $V0 self-heal-daemon on