author     anand <anekkunt@redhat.com>  2015-07-21 15:42:24 +0530
committer  Atin Mukherjee <amukherj@redhat.com>  2015-07-27 20:42:17 -0700
commit     65e6ab1bfbbec7755f7ae2294cb83334ac65a296 (patch)
tree       e8c0febdc939c1baa3df83b7656dc58bd93a3a4f /tests/bugs
parent     ca67ac071c56a3bd6f2b2ba3a958f0305db50a3d (diff)
glusterd: getting txn_id from frame->cookie in op_sm callback
RCA: If rebalance start is triggered from one node and another node in the cluster goes down at the same time, the callback can end up using the txn_id from priv->global_txn_id, which is always zero; injecting an event with that incorrect txn_id leaves the op-sm stuck.

Fix: Set txn_id in frame->cookie during submit_and_request, so that the txn_id is available in the callback functions.

Change-Id: I519176c259ea9d37897791a77a7c92eb96d10052
BUG: 1245142
Signed-off-by: anand <anekkunt@redhat.com>
Reviewed-on: http://review.gluster.org/11728
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
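The fix boils down to carrying the per-transaction id with the call frame instead of relying on a global default. Below is a minimal sketch of that pattern, not the actual glusterd code: it assumes libuuid's uuid_t for the transaction id, reduces call_frame_t to the one field that matters here, and the names my_submit_request and my_op_cbk are hypothetical stand-ins.

/*
 * Sketch only: stash the transaction id in frame->cookie at submit time
 * and read it back in the callback.
 */
#include <stdlib.h>
#include <uuid/uuid.h>

typedef struct call_frame {
        void *cookie;   /* opaque per-call data handed to the callback */
} call_frame_t;

static int
my_submit_request(call_frame_t *frame, uuid_t txn_id)
{
        uuid_t *cookie_txn = malloc(sizeof(*cookie_txn));

        if (!cookie_txn)
                return -1;

        uuid_copy(*cookie_txn, txn_id);   /* copy: the originating context may vanish */
        frame->cookie = cookie_txn;       /* the callback reads the id from here */

        /* ... serialize and submit the RPC request ... */
        return 0;
}

static int
my_op_cbk(call_frame_t *frame)
{
        uuid_t txn_id;

        /* Use the id carried by the frame, not a zeroed global default. */
        uuid_copy(txn_id, *(uuid_t *)frame->cookie);

        /* ... inject the op-sm event for this txn_id ... */

        free(frame->cookie);
        frame->cookie = NULL;
        return 0;
}

Copying the id into heap memory owned by the frame means the callback can recover it even when the node that is party to the transaction has already gone down.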
Diffstat (limited to 'tests/bugs')
-rw-r--r--  tests/bugs/glusterd/bug-1245142-rebalance_test.t | 28
1 file changed, 28 insertions(+), 0 deletions(-)
diff --git a/tests/bugs/glusterd/bug-1245142-rebalance_test.t b/tests/bugs/glusterd/bug-1245142-rebalance_test.t
new file mode 100644
index 00000000000..a28810ea71c
--- /dev/null
+++ b/tests/bugs/glusterd/bug-1245142-rebalance_test.t
@@ -0,0 +1,28 @@
+#!/bin/bash
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../cluster.rc
+. $(dirname $0)/../../volume.rc
+
+
+cleanup;
+TEST launch_cluster 2;
+TEST $CLI_1 peer probe $H2;
+
+EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count
+
+$CLI_1 volume create $V0 $H1:$B1/$V0 $H2:$B2/$V0
+EXPECT 'Created' cluster_volinfo_field 1 $V0 'Status';
+
+$CLI_1 volume start $V0
+EXPECT 'Started' cluster_volinfo_field 1 $V0 'Status';
+
+$CLI_1 volume rebalance $V0 start &
+#Kill glusterd on the second node after the request is sent, so that the callback is
+#invoked with rpc->status failing; a rough 1-second delay is used to hit this window (see the sketch after this diff).
+sleep 1
+kill_glusterd 2
+#Check that glusterd commands still work after the rebalance start command
+EXPECT 'Started' cluster_volinfo_field 1 $V0 'Status';
+
+cleanup;
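The comment in the test explains why glusterd on the second node is killed only after the rebalance request goes out: it forces the op callback to run with a failed RPC status. A hedged sketch of that failure path, again with hypothetical names (fake_rpc_req, my_op_cbk, inject_op_sm_event) rather than glusterd's real callback signature:

#include <stdlib.h>
#include <uuid/uuid.h>

typedef struct call_frame {
        void *cookie;                 /* txn_id stashed here at submit time */
} call_frame_t;

struct fake_rpc_req {
        int rpc_status;               /* negative when the request failed (peer down) */
};

/* stand-in for the op state machine's event injection */
static void
inject_op_sm_event(int accepted, uuid_t txn_id)
{
        (void)accepted;
        (void)txn_id;
}

static int
my_op_cbk(struct fake_rpc_req *req, call_frame_t *frame)
{
        uuid_t txn_id;
        int    accepted = (req->rpc_status >= 0);

        /* Even on failure the correct txn_id is available from the cookie,
         * so the state machine can unwind this transaction instead of
         * waiting forever on a txn_id of all zeros. */
        uuid_copy(txn_id, *(uuid_t *)frame->cookie);
        inject_op_sm_event(accepted, txn_id);

        free(frame->cookie);
        frame->cookie = NULL;
        return 0;
}

The test's final EXPECT then verifies that glusterd on the surviving node still answers volume status queries, which it would not if the op-sm were stuck on the bogus transaction id.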