path: root/tests
author     Pranith Kumar K <pkarampu@redhat.com>            2016-05-04 19:05:28 +0530
committer  Pranith Kumar Karampuri <pkarampu@redhat.com>    2016-05-05 20:50:54 -0700
commit     e66add8a304ca610b74ecbbe48cec72dba582340 (patch)
tree       a95017750eaeebea911fc1a358f076922ceea712 /tests
parent     74837896c38bafdd862f164d147b75fcbb619e8f (diff)
cluster/afr: Do heals with shd pid
Multi-threaded healing doesn't create the synctask with the shd pid; this leads to healing problems when quota is exceeded.

BUG: 1332994
Change-Id: I80f57c1923756f3298730b8820498127024e1209
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/14211
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
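Note: the failure mode being fixed is that heal I/O issued without the self-heal daemon's internal client pid is subject to quota enforcement, so heals stall once the volume is over its limit. A rough way to observe this by hand (a sketch only; the volume name "repvol" is a placeholder, not taken from the patch):

    gluster volume quota repvol list     # usage is at or over the configured limit
    gluster volume heal repvol           # trigger an index heal
    gluster volume heal repvol info      # without the fix, entries stay pending
                                         # because quota rejects the heal writes

With the fix, the same sequence drains the pending heal entries, which is what the test below asserts via get_pending_heal_count.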
Diffstat (limited to 'tests')
-rw-r--r--  tests/basic/afr/heal-quota.t | 35
1 file changed, 35 insertions(+), 0 deletions(-)
diff --git a/tests/basic/afr/heal-quota.t b/tests/basic/afr/heal-quota.t
new file mode 100644
index 00000000000..2663906f9d5
--- /dev/null
+++ b/tests/basic/afr/heal-quota.t
@@ -0,0 +1,35 @@
+#!/bin/bash
+
+#This file tests that heal succeeds even when quota is exceeded
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../volume.rc
+
+cleanup;
+
+TEST glusterd
+TEST pidof glusterd
+TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}{0,1}
+TEST $CLI volume set $V0 cluster.self-heal-daemon off
+TEST $CLI volume start $V0
+
+TEST glusterfs --attribute-timeout=0 --entry-timeout=0 --volfile-id=/$V0 --volfile-server=$H0 $M0;
+TEST $CLI volume quota $V0 enable
+TEST $CLI volume quota $V0 limit-usage / 10MB
+TEST $CLI volume quota $V0 soft-timeout 0
+TEST $CLI volume quota $V0 hard-timeout 0
+
+TEST touch $M0/a $M0/b
+dd if=/dev/zero of=$M0/b bs=1M count=7
+TEST kill_brick $V0 $H0 $B0/${V0}0
+dd if=/dev/zero of=$M0/a bs=1M count=12 #This shall fail
+TEST $CLI volume start $V0 force
+TEST $CLI volume set $V0 cluster.self-heal-daemon on
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" glustershd_up_status
+EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status_in_shd $V0 0
+EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status_in_shd $V0 1
+
+TEST $CLI volume heal $V0
+EXPECT_WITHIN $HEAL_TIMEOUT "0" get_pending_heal_count $V0
+
+cleanup
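
Note (not part of the patch): in a glusterfs source tree this test can typically be run on its own, for example

    prove -vf tests/basic/afr/heal-quota.t        # run the single .t test
    ./run-tests.sh tests/basic/afr/heal-quota.t   # or via the repo's wrapper script

Both invocations assume you are at the top of the checkout with the regression-test prerequisites installed; treat the exact command lines as an assumption, since harness details vary between releases.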