author	Atin Mukherjee <amukherj@redhat.com>	2015-04-20 17:37:21 +0530
committer	Krishnan Parthasarathi <kparthas@redhat.com>	2015-04-28 01:53:35 -0700
commit	018a0a5b846ed903d5d2545c2c353281e1e9949d (patch)
tree	1fe0f9a9b0e2e8f93ccdb73a614f3ccd1b9d1b10 /tests
parent	00b02d1f18308016fa9a134bea593d3095b1157f (diff)
glusterd: initialize snapd svc at volume restore path
In the restore path the snapd svc was not initialized, so any glusterd instance that went down and came back up may end up with an uninitialized snapd svc. Whether this actually happens depends on the number of nodes in the cluster: in a single-node cluster it is not a problem, since glusterd_spawn_daemon takes care of initializing it.

Backport of http://review.gluster.org/10304

Change-Id: I2da1e419a0506d3b2742c1cf39a3b9416eb3c305
BUG: 1215518
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/10304
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Kaushal M <kaushal@redhat.com>
(cherry picked from commit 18fd2fdd60839d737ab0ac64f33a444b54bdeee4)
Reviewed-on: http://review.gluster.org/10397
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
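The regression test added by this patch (shown below) drives exactly that scenario through the test harness. For orientation, here is a rough operator-level sketch of the same failure mode on a real two-node cluster; the hostnames, brick paths and the systemd service name are illustrative assumptions, not part of the patch:

# Hypothetical manual reproduction of the pre-fix behaviour on two nodes
# (node1/node2, brick paths and the glusterd service name are placeholders).
gluster peer probe node2                     # run on node1
gluster volume create vol0 node1:/bricks/b1 node2:/bricks/b1
gluster volume start vol0

# Restart glusterd on node2 so that it restores the volume configuration
# from /var/lib/glusterd at startup -- the code path the fix touches.
systemctl restart glusterd                   # run on node2

# Before the fix, operations that reach the snapd svc object on the
# restarted node, such as stopping the volume, could crash glusterd.
gluster volume stop vol0                     # run on node2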
Diffstat (limited to 'tests')
-rw-r--r--	tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t	26
1 file changed, 26 insertions, 0 deletions
diff --git a/tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t b/tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t
new file mode 100644
index 00000000000..1dbfdf8697b
--- /dev/null
+++ b/tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t
@@ -0,0 +1,26 @@
+#!/bin/bash
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../cluster.rc
+
+cleanup
+
+TEST launch_cluster 2;
+TEST $CLI_1 peer probe $H2;
+EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count
+
+TEST $CLI_1 volume create $V0 $H1:$B1/$V0 $H2:$B2/$V0
+TEST $CLI_1 volume start $V0
+
+kill_glusterd 2
+TEST start_glusterd 2
+
+EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count
+
+# volume stop should not crash glusterd
+TEST $CLI_2 volume stop $V0
+
+# check whether glusterd instance is running on H2 as this is the node which
+# restored the volume configuration after a restart
+EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count
+cleanup
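For completeness, a sketch of how a single regression test like this is typically run from a glusterfs source tree; the prove invocation is the standard harness entry point, and run-tests.sh is assumed to be present at the repository root as in contemporary branches:

# Run only this test from the top of a glusterfs checkout (needs root,
# since the harness starts real glusterd instances and mounts volumes).
cd /path/to/glusterfs
prove -v tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t

# Alternatively, through the bundled wrapper if the branch ships one:
./run-tests.sh tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t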