author    Atin Mukherjee <amukherj@redhat.com>    2015-04-20 17:37:21 +0530
committer Kaushal M <kaushal@redhat.com>    2015-04-26 21:54:51 -0700
commit    18fd2fdd60839d737ab0ac64f33a444b54bdeee4 (patch)
tree      f5bab9a0e010221794615498881a1473d3952ee9 /tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t
parent    4c3724f195240e40994b71add255f85ee1b025fb (diff)
glusterd: initialize snapd svc at volume restore path
In the restore path the snapd svc was not initialized, because of which any glusterd instance that went down and came back up may have an uninitialized snapd svc. The reason 'may' is used is that the behaviour depends on the number of nodes in the cluster: in a single-node cluster this wouldn't be a problem, since glusterd_spawn_daemon takes care of initializing it.

Change-Id: I2da1e419a0506d3b2742c1cf39a3b9416eb3c305
BUG: 1213295
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/10304
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Kaushal M <kaushal@redhat.com>
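For context, a minimal manual reproduction of the scenario this test automates might look like the shell session below. This is a sketch, not part of the patch: the hostnames host1/host2 and the brick paths are hypothetical, and the restart step assumes a systemd-managed glusterd.

    # on host1: form a two-node cluster and create a plain two-brick volume
    gluster peer probe host2
    gluster volume create testvol host1:/bricks/b1 host2:/bricks/b2
    gluster volume start testvol

    # on host2: restart glusterd so it walks the volume restore path
    systemctl restart glusterd

    # on host2: before this fix, a volume operation here could crash glusterd
    # because the restored volinfo carried an uninitialized snapd svc
    gluster volume stop testvol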
Diffstat (limited to 'tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t')
-rw-r--r--    tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t | 26
1 file changed, 26 insertions(+), 0 deletions(-)
diff --git a/tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t b/tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t
new file mode 100644
index 00000000000..1dbfdf8697b
--- /dev/null
+++ b/tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t
@@ -0,0 +1,26 @@
+#!/bin/bash
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../cluster.rc
+
+cleanup
+
+TEST launch_cluster 2;
+TEST $CLI_1 peer probe $H2;
+EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count
+
+TEST $CLI_1 volume create $V0 $H1:$B1/$V0 $H2:$B2/$V0
+TEST $CLI_1 volume start $V0
+
+kill_glusterd 2
+TEST start_glusterd 2
+
+EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count
+
+# volume stop should not crash glusterd
+TEST $CLI_2 volume stop $V0
+
+# Check that the glusterd instance on H2 is still running and connected, as
+# H2 is the node that restored the volume configuration after its restart.
+EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count
+cleanup
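To exercise this regression test locally, it can be run through the test harness from a glusterfs source checkout. The invocation below is a sketch assuming the standard prove-based runner used for the scripts under tests/:

    # from the root of the glusterfs source tree (run as root; the
    # harness creates volumes and starts daemons on the local machine)
    prove -vf tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t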