author     karthik-us <ksubrahm@redhat.com>        2017-01-05 14:06:21 +0530
committer  Atin Mukherjee <amukherj@redhat.com>    2017-01-06 01:19:38 -0800
commit     c916a2ffc257b0cfa493410e31b6af28f428c53a (patch)
tree       89eeaf1be42ad00d00543ec474f1a9826e4db9f7 /xlators
parent     395c55483724912821929ff2a793e99fd7bf4e71 (diff)
glusterd: Fail add-brick on replica count change, if brick is down
Problem:
1. Have a replica 2 volume with bricks b1 and b2
2. Before setting the layout, b1 goes down
3. Set the layout, write some data, which gets populated on b2
4. b2 goes down, then b1 comes up
5. Add another brick b3; heal will take place from b1 to b3, which
   basically has no data
6. Write some data. Both b1 and b3 will mark b2 for pending writes
7. b1 goes down, and b2 comes up
8. b2 gets healed from b1. During heal it removes the data which is
   already on b2, considering it stale data. This leads to data loss.

Solution:
1. In the glusterd stage-op, while adding bricks, check whether the
   replica count is being increased
2. If yes, check whether any of the bricks are down at that time
3. If yes, fail the add-brick to avoid such data loss
4. Else continue the normal operation.
This check works even when a plain distribute volume is being converted
to a replicate volume.

Test:
1. Create a replica 2 volume
2. Kill one brick of the volume
3. Try adding a brick to the volume
4. It should fail with an "all bricks are not up" error
5. Create a distribute volume and kill one of its bricks
6. Try to convert it to a replicate volume by adding bricks
7. This should also fail.

Change-Id: I9c8d2ab104263e4206814c94c19212ab914ed07c
BUG: 1406411
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Reviewed-on: http://review.gluster.org/16330
Tested-by: Ravishankar N <ravishankar@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: N Balachandran <nbalacha@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Diffstat (limited to 'xlators')
-rw-r--r--  xlators/mgmt/glusterd/src/glusterd-brick-ops.c  19
1 file changed, 19 insertions, 0 deletions
diff --git a/xlators/mgmt/glusterd/src/glusterd-brick-ops.c b/xlators/mgmt/glusterd/src/glusterd-brick-ops.c
index 19db03a5201..c2c4cf4606f 100644
--- a/xlators/mgmt/glusterd/src/glusterd-brick-ops.c
+++ b/xlators/mgmt/glusterd/src/glusterd-brick-ops.c
@@ -1724,6 +1724,25 @@ glusterd_op_stage_add_brick (dict_t *dict, char **op_errstr, dict_t *rsp_dict)
}
}
+ if (volinfo->replica_count < replica_count) {
+ cds_list_for_each_entry (brickinfo, &volinfo->bricks,
+ brick_list) {
+ if (gf_uuid_compare (brickinfo->uuid, MY_UUID))
+ continue;
+ if (brickinfo->status == GF_BRICK_STOPPED) {
+ ret = -1;
+ snprintf (msg, sizeof (msg), "Brick %s is down,"
+ " changing replica count needs all "
+ "the bricks to be up to avoid data "
+ "loss", brickinfo->path);
+ gf_msg (THIS->name, GF_LOG_ERROR, 0,
+ GD_MSG_BRICK_ADD_FAIL, "%s", msg);
+ *op_errstr = gf_strdup (msg);
+ goto out;
+ }
+ }
+ }
+
if (conf->op_version > GD_OP_VERSION_3_7_5 &&
is_origin_glusterd (dict)) {
ret = glusterd_validate_quorum (this, GD_OP_ADD_BRICK, dict,
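
The guard added above only walks the bricks owned by the local glusterd
(the MY_UUID comparison) and rejects the add-brick as soon as one of them
is stopped. What follows is a rough, self-contained sketch of that same
logic for illustration; struct brick, the is_local flag, and
check_bricks_up_for_replica_change are stand-ins invented for this sketch,
not glusterd's real types.

#include <stdio.h>

/* Simplified stand-ins for glusterd's structures; the real patch walks
 * volinfo->bricks with cds_list_for_each_entry and skips bricks whose
 * uuid does not match MY_UUID. */
enum brick_status { BRICK_STARTED, BRICK_STOPPED };

struct brick {
        const char        *path;
        int                is_local; /* stands in for the MY_UUID check */
        enum brick_status  status;
};

/* Returns 0 if the replica-count change may proceed, -1 otherwise,
 * writing the reason into errbuf (mirrors *op_errstr in the patch). */
static int
check_bricks_up_for_replica_change (int old_replica, int new_replica,
                                    const struct brick *bricks, size_t n,
                                    char *errbuf, size_t errlen)
{
        size_t  i;

        /* Only an increase of the replica count triggers the check. */
        if (old_replica >= new_replica)
                return 0;

        for (i = 0; i < n; i++) {
                if (!bricks[i].is_local)
                        continue; /* each peer validates its own bricks */
                if (bricks[i].status == BRICK_STOPPED) {
                        snprintf (errbuf, errlen, "Brick %s is down, "
                                  "changing replica count needs all "
                                  "the bricks to be up to avoid data "
                                  "loss", bricks[i].path);
                        return -1;
                }
        }
        return 0;
}

int
main (void)
{
        struct brick bricks[] = {
                { "/bricks/b1", 1, BRICK_STARTED },
                { "/bricks/b2", 1, BRICK_STOPPED },
        };
        char err[256] = "";

        /* Simulate growing a replica 2 volume to replica 3 while b2
         * is down: the check fails, as the patch intends. */
        if (check_bricks_up_for_replica_change (2, 3, bricks, 2,
                                                err, sizeof (err)) != 0)
                fprintf (stderr, "add-brick rejected: %s\n", err);
        return 0;
}

In the real code the same rejection surfaces through *op_errstr, so the
CLI prints the message when an add-brick that raises the replica count is
attempted while any brick of the volume is stopped.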