From b63dfd84fc8b3e08e3f005f71bf493c633452612 Mon Sep 17 00:00:00 2001
From: Ravishankar N
Date: Sun, 21 Oct 2018 17:32:52 +0530
Subject: tests: check for shd up status in
 bug-1637802-arbiter-stale-data-heal-lock.t

Problem:
https://review.gluster.org/#/c/glusterfs/+/21427/ seems to be failing
this .t spuriously. On checking one of the failure logs, I see:

22:05:44 Launching heal operation to perform index self heal on volume
patchy has been unsuccessful:
22:05:44 Self-heal daemon is not running. Check self-heal daemon log
file.
22:05:44 not ok 20 , LINENUM:38

In the glusterd log:
[2018-10-18 22:05:44.298832] E [MSGID: 106301]
[glusterd-syncop.c:1352:gd_stage_op_phase] 0-management: Staging of
operation 'Volume Heal' failed on localhost : Self-heal daemon is not
running. Check self-heal daemon log file

But the tests which precede this check, via a statedump, whether the
shd is connected to the bricks; those checks have succeeded and the shd
has even started healing. From glustershd.log:

[2018-10-18 22:05:40.975268] I [MSGID: 108026]
[afr-self-heal-common.c:1732:afr_log_selfheal] 0-patchy-replicate-0:
Completed data selfheal on 3b83d2dd-4cf2-4ea3-a33e-4275be40f440.
sources=[0] 1 sinks=2

So the only explanation I can see for launching heal via the CLI
failing is a race where the shd has been spawned but glusterd has not
yet updated its in-memory state that the shd is up, and hence fails
the CLI.

Fix:
Check for shd up status before launching heal via the CLI.

Change-Id: Ic88abf14ad3d51c89cb438db601fae4df179e8f4
fixes: bz#1641761
Signed-off-by: Ravishankar N
(cherry picked from commit 3dea105556130abd4da0fd3f8f2c523ac52398d1)
---
 tests/bugs/replicate/bug-1637802-arbiter-stale-data-heal-lock.t | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tests/bugs/replicate/bug-1637802-arbiter-stale-data-heal-lock.t b/tests/bugs/replicate/bug-1637802-arbiter-stale-data-heal-lock.t
index 91ed39beb96..d7d1f285e01 100644
--- a/tests/bugs/replicate/bug-1637802-arbiter-stale-data-heal-lock.t
+++ b/tests/bugs/replicate/bug-1637802-arbiter-stale-data-heal-lock.t
@@ -32,6 +32,7 @@ EXPECT 2 get_pending_heal_count $V0
 # Bring it back up and let heal complete.
 TEST $CLI volume start $V0 force
 EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" brick_up_status $V0 $H0 $B0/${V0}2
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" glustershd_up_status
 EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status_in_shd $V0 0
 EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status_in_shd $V0 1
 EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status_in_shd $V0 2
--
cgit
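
Note on the added check: glustershd_up_status is a helper supplied by the
test harness (tests/volume.rc), not defined in this .t file. A minimal
sketch of what such a helper could look like, assuming it parses the
"Online" column of 'gluster volume status' output; the exact command and
field position here are assumptions, and the in-tree definition may differ:

function glustershd_up_status {
        # Report glusterd's in-memory view of the self-heal daemon by
        # reading the "Online" column (Y/N) of its status output.
        # glusterd prints "Y" only after it has marked shd as up, which
        # is exactly the state the race above depends on; EXPECT_WITHIN
        # retries this until it prints "Y" or $PROCESS_UP_TIMEOUT expires.
        gluster volume status | grep "Self-heal Daemon" | awk '{print $7}' | head -1
}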