From a238ad371c32feddb5af8a48642870bc6b9ee767 Mon Sep 17 00:00:00 2001
From: Atin Mukherjee
Date: Thu, 9 Jun 2016 18:22:43 +0530
Subject: glusterd: fail volume delete if one of the nodes is down

Backport of http://review.gluster.org/14681

Deleting a volume while one of the nodes in the cluster is down is
buggy, because once that node comes back up it resyncs the deleted
volume into the cluster. Until the soft-delete feature tracked in
http://review.gluster.org/12963 is available, this change is a
safeguard that blocks the volume deletion.

Please note that the test file backported from the original commit had
an issue: it started the volume and then tried to delete it, which is
going to fail anyway, so the test did not actually validate the fix.
http://review.gluster.org/#/c/14693/ fixed that problem in master, and
the same fix is ported as part of this commit as well.

Cherry picked from commit 5016cc548d4368b1c180459d6fa8ae012bb21d6e:
> Change-Id: I9c13869c4a7e7a947f88842c6dc6f231c0eeda6c
> BUG: 1344407
> Signed-off-by: Atin Mukherjee
> Reviewed-on: http://review.gluster.org/14681
> Smoke: Gluster Build System
> CentOS-regression: Gluster Build System
> Reviewed-by: Kaushal M
> NetBSD-regression: NetBSD Build System

Change-Id: I9c13869c4a7e7a947f88842c6dc6f231c0eeda6c
BUG: 1344631
Signed-off-by: Atin Mukherjee
Reviewed-on: http://review.gluster.org/14691
Reviewed-by: Niels de Vos
Smoke: Gluster Build System
NetBSD-regression: NetBSD Build System
CentOS-regression: Gluster Build System
---
 .../glusterd/bug-1344407-volume-delete-on-node-down.t | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)
 create mode 100755 tests/bugs/glusterd/bug-1344407-volume-delete-on-node-down.t

diff --git a/tests/bugs/glusterd/bug-1344407-volume-delete-on-node-down.t b/tests/bugs/glusterd/bug-1344407-volume-delete-on-node-down.t
new file mode 100755
index 00000000000..5081c373e47
--- /dev/null
+++ b/tests/bugs/glusterd/bug-1344407-volume-delete-on-node-down.t
@@ -0,0 +1,19 @@
+#!/bin/bash
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../volume.rc
+. $(dirname $0)/../../cluster.rc
+
+cleanup;
+
+TEST launch_cluster 2;
+
+TEST $CLI_1 peer probe $H2;
+EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count
+
+TEST $CLI_1 volume create $V0 $H1:$B1/$V0
+
+TEST kill_glusterd 2
+TEST ! $CLI_1 volume delete $V0
+
+cleanup;
--
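
For readers who want to see the enforced behaviour outside the regression
framework, the sketch below reproduces it with the plain gluster CLI on a
two-node cluster. It is illustrative only: the hostnames (node1, node2), the
brick path, the volume name, and the use of ssh plus systemctl to stop
glusterd are assumptions about the local setup, not part of this patch.

#!/bin/bash
# Illustrative reproduction of the safeguard this patch introduces.
# Assumes a two-node cluster (node1, node2); adjust names and paths.

VOLNAME=testvol

# Form the cluster and create a single-brick volume from node1.
gluster peer probe node2
gluster volume create $VOLNAME node1:/bricks/$VOLNAME/brick

# Simulate a node outage: stop the management daemon on node2.
ssh node2 systemctl stop glusterd

# With this patch the delete is rejected while a peer is down, so the
# volume cannot be resynced back into the cluster when node2 returns.
# --mode=script suppresses the interactive confirmation prompt.
if gluster --mode=script volume delete $VOLNAME; then
    echo "unexpected: delete succeeded with a peer down"
else
    echo "expected: delete blocked while a peer is down"
fi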