author    Pranith Kumar K <pkarampu@redhat.com>    2018-08-08 15:26:42 +0530
committer Atin Mukherjee <amukherj@redhat.com>    2018-08-09 11:32:09 +0000
commit    22d5540f5618ea0726a5eab1252163c48124cc06 (patch)
tree      b3c2c3dc8a5e07fbd49d4ec360a132a0b87f176a /tests
parent    511af2e786f5e881bc4c53287407235dd6fe2704 (diff)
tests: Set heal-timeout to 5 seconds
Shd (the self-heal daemon) keeps doing heals in a loop as long as it healed at least one entry in the previous run. A heal is counted as successful only if it completes both the metadata heal and the entry/data heal, i.e. the entry needs to be completely healed by just that healer.

In the tests/basic/afr/granular-esh/replace-brick.t test, brick-0 is old and brick-1 is new. After replace-brick, only root-gfid will be present in brick-0's index:

1) The shd-thread corresponding to brick-0 does a metadata heal; this creates root-gfid in brick-0's 'dirty' index.

2) Both healer threads, corresponding to brick-0 and brick-1, now try to heal root-gfid, and brick-1 gets the heal-domain lock. brick-0's shd-thread experiences a failure and goes back to waiting for 10 minutes (cluster.heal-timeout).

3) When brick-1's healer-thread completes healing root-gfid, it creates 5 files which create indices in brick-0, so no further heal will happen until brick-0 triggers one more heal. $HEAL_TIMEOUT is set at 120 seconds, which is less than cluster.heal-timeout, so decrease cluster.heal-timeout to 5 seconds so that the next heal is triggered in time and performs the remaining heals.

fixes bz#1613807
Change-Id: I881133fc28880d8615fbc4558a0dfa0dc63d7798
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
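For context, a minimal sketch of how the lowered timeout plays out in a .t test. It assumes the GlusterFS test harness, which provides TEST, EXPECT_WITHIN, $CLI, $V0, $HEAL_TIMEOUT (120 seconds) and the get_pending_heal_count helper; the assertions below are illustrative, not the test's verbatim contents:

# Illustrative sketch, not the verbatim test.
# Lower the retry interval so shd re-scans its index every 5 seconds
# instead of the default 600 seconds (10 minutes).
TEST $CLI volume set $V0 cluster.heal-timeout 5

# Re-enable the self-heal daemon and wait for all pending heals.
# With heal-timeout=5, brick-0's healer retries well inside the
# 120-second $HEAL_TIMEOUT window; with the 600-second default it
# would still be sleeping when this assertion times out.
TEST $CLI volume set $V0 self-heal-daemon on
EXPECT_WITHIN $HEAL_TIMEOUT "^0$" get_pending_heal_count $V0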
Diffstat (limited to 'tests')
-rw-r--r--  tests/basic/afr/granular-esh/replace-brick.t  1
1 file changed, 1 insertion, 0 deletions
diff --git a/tests/basic/afr/granular-esh/replace-brick.t b/tests/basic/afr/granular-esh/replace-brick.t
index 639ed81b95c..5fc7811a8d8 100644
--- a/tests/basic/afr/granular-esh/replace-brick.t
+++ b/tests/basic/afr/granular-esh/replace-brick.t
@@ -12,6 +12,7 @@ TEST $CLI volume set $V0 cluster.data-self-heal off
TEST $CLI volume set $V0 cluster.metadata-self-heal off
TEST $CLI volume set $V0 cluster.entry-self-heal off
TEST $CLI volume set $V0 self-heal-daemon off
+TEST $CLI volume set $V0 cluster.heal-timeout 5
TEST $CLI volume heal $V0 granular-entry-heal enable
TEST glusterfs --volfile-id=$V0 --volfile-server=$H0 $M0;
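Outside the test harness, the same option can be set and read back with the gluster CLI; a quick sanity check (the volume name myvol is a placeholder, and the output format may vary by release):

# myvol is a hypothetical volume name; substitute your own.
gluster volume set myvol cluster.heal-timeout 5
gluster volume get myvol cluster.heal-timeout
# Expected output, roughly:
#   Option                Value
#   cluster.heal-timeout  5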