authorGaurav Yadav <gyadav@redhat.com>2017-03-16 14:56:39 +0530
committerAtin Mukherjee <amukherj@redhat.com>2017-03-31 21:53:10 -0400
commit1c92f83ec041176ad7c42ef83525cda7d3eda3c5 (patch)
tree1826ab841b94ceec21be71297b8cf4d1e5a09c0a /tests/bugs/glusterd/bug-1322145-disallow-detatch-peer.t
parent4c3aa910e7913c34db24f864a33dfb6d1e0234a4 (diff)
glusterd : Disallow peer detach if snapshot bricks exist on it
Problem:
- Deploy gluster on 2 nodes, one brick each, one volume replicated
- Create a snapshot
- Lose one server
- Add a replacement peer and new brick with a new IP address
- replace-brick the missing brick onto the new server (wait for replication to finish)
- peer detach the old server
- After the above steps, glusterd fails to restart.

Solution: With the fix, peer detach fails with an error: "N2 is part of existing snapshots. Remove those snapshots before proceeding". This forces the user to either keep that peer or delete all snapshots first.

Change-Id: I3699afb9b2a5f915768b77f885e783bd9b51818c
BUG: 1322145
Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
Reviewed-on: https://review.gluster.org/16907
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
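The guard this fix introduces can be sketched as a small standalone shell function. This is an illustrative model only, not glusterd's internal API: the `SNAP_TABLE` data and the `snaps_on_peer` and `peer_detach` helper names are hypothetical, and the error text mirrors the message quoted above.

```shell
#!/bin/bash
# Hypothetical snapshot table: "snapshot peer" pairs (illustrative data,
# standing in for glusterd's in-memory snapshot brick list).
SNAP_TABLE="snap1 N2
snap2 N1"

# List snapshots that still have bricks hosted on the given peer.
snaps_on_peer() {
    local peer=$1
    echo "$SNAP_TABLE" | awk -v p="$peer" '$2 == p {print $1}'
}

# Refuse to detach a peer that still hosts snapshot bricks, as the fix does.
peer_detach() {
    local peer=$1
    local snaps
    snaps=$(snaps_on_peer "$peer")
    if [ -n "$snaps" ]; then
        echo "peer detach: failed: $peer is part of existing snapshots." \
             "Remove those snapshots before proceeding" >&2
        return 1
    fi
    echo "peer detach: success"
}

peer_detach N2   # rejected: snap1 still references N2
peer_detach N3   # allowed: no snapshots reference N3
```

The regression test below exercises the same check end to end: it snapshots a replicated volume, replaces the dead peer's brick, and then asserts that `peer detach` of the old peer fails.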
Diffstat (limited to 'tests/bugs/glusterd/bug-1322145-disallow-detatch-peer.t')
-rw-r--r--tests/bugs/glusterd/bug-1322145-disallow-detatch-peer.t36
1 file changed, 36 insertions, 0 deletions
diff --git a/tests/bugs/glusterd/bug-1322145-disallow-detatch-peer.t b/tests/bugs/glusterd/bug-1322145-disallow-detatch-peer.t
new file mode 100644
index 00000000000..60eceb4f44d
--- /dev/null
+++ b/tests/bugs/glusterd/bug-1322145-disallow-detatch-peer.t
@@ -0,0 +1,36 @@
+#!/bin/bash
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../cluster.rc
+. $(dirname $0)/../../volume.rc
+. $(dirname $0)/../../snapshot.rc
+
+
+cleanup;
+TEST verify_lvm_version
+TEST launch_cluster 3;
+TEST setup_lvm 3;
+
+TEST $CLI_1 peer probe $H2;
+EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count
+
+TEST $CLI_1 volume create $V0 replica 2 $H1:$L1 $H2:$L2
+EXPECT 'Created' volinfo_field $V0 'Status'
+
+TEST $CLI_1 volume start $V0
+EXPECT 'Started' volinfo_field $V0 'Status'
+
+TEST $CLI_1 snapshot create snap1 $V0 no-timestamp;
+
+kill_glusterd 2
+
+TEST $CLI_1 peer probe $H3;
+EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count
+
+TEST $CLI_1 volume replace-brick $V0 $H2:$L2 $H3:$L3 commit force
+
+
+# A peer hosting snapshot bricks must not be detachable while those snapshots exist
+TEST ! $CLI_1 peer detach $H2
+cleanup;
+