path: root/tests/basic
author    Ashish Pandey <aspandey@redhat.com>  2017-04-03 12:46:29 +0530
committer Xavier Hernandez <xhernandez@datalab.es>  2017-06-06 14:41:52 +0000
commit    88c67b72b1d5843d11ce7cba27dd242bd0c23c6a (patch)
tree      5df681630650442107f9c03f52cdc273f8872113 /tests/basic
parent    4dba0d5f8d9ac13504087f553d7c730060f0f9c7 (diff)
cluster/ec: Update xattr and heal size properly
Problem 1: Recursive healing of the same file keeps happening when IO is in progress, even after data heal completes.

RCA: At the end of a write, when ec_update_size_version() is called, the xattrop is sent only to the good bricks, not to the healing brick. As a result, the xattrs on the healing brick always remain out of sync, and when the background heal compares source and sink it finds this brick in need of healing and starts healing it from scratch. That involves an ftruncate and rewriting all of the data.

Solution: Send the xattrop to the healing bricks as well as the good bricks.

Problem 2: The above fix exposes data corruption during heal. If a write to a file is in progress when heal finishes, the file ends up corrupted.

RCA: The real problem is in ec_rebuild_data(). It receives a 'size' argument containing the real file size at the time self-heal started, which is assigned to heal->total_size. After that, a sequence of calls to ec_sync_heal_block() is made. Each call ends up in ec_manager_heal_block(), which does the actual work of healing a block. First a lock on the inode is taken in state EC_STATE_INIT using ec_heal_inodelk(). When the lock is acquired, ec_heal_lock_cbk() is called; it calls ec_set_inode_size() to store the real size of the inode (it uses heal->total_size). The next step is to read the block to be healed, via a regular ec_readv(). One of the things this call does is trim the returned size if the file is smaller than the requested size. So when we read the last block of a file whose size was = 512 mod 1024 at the time self-heal started, ec_readv() returns only the first 512 bytes, not the whole 1024 bytes. This alone is not a problem, since the following ec_writev() issued by the heal code only writes the amount of data read, so it should not modify the remaining 512 bytes.

However, ec_writev() also checks the file size. If we are writing the last block of the file (determined by the size stored on the inode, which we have set to heal->total_size), any data beyond the (imposed) end of file is cleared with 0's. This clears the 512 bytes after heal->total_size. Since the file was written to after heal started, those bytes contained data, so the block written to the damaged brick is incorrect.

Solution: Align heal->total_size up to a multiple of the stripe size.

Thanks to Xavier Hernandez <xhernandez@datalab.es> for finding the root cause and fixing the issue.

Change-Id: I6c9f37b3ff9dd7f5dc1858ad6f9845c05b4e204e
BUG: 1428673
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
Reviewed-on: https://review.gluster.org/16985
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
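The alignment fix described above can be sketched as a small shell computation (variable names here are illustrative, not the actual GlusterFS code): rounding the recorded size up to the next stripe multiple means the heal's notion of end-of-file never falls in the middle of a stripe, so ec_writev() no longer zero-fills live data past it.

```shell
#!/bin/sh
# Hypothetical sketch: round total_size up to a multiple of the stripe size.
stripe=1024        # example stripe size
total_size=1536    # real file size when heal started (= 512 mod 1024)

# Integer round-up: (n + stripe - 1) / stripe * stripe
aligned=$(( (total_size + stripe - 1) / stripe * stripe ))

echo "$aligned"    # 2048: heal now covers the whole last stripe
```

With the unaligned value (1536), the last heal write would have treated byte 1536 as end-of-file and cleared bytes 1536-2047 on the healed brick; aligning to 2048 avoids that.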
Diffstat (limited to 'tests/basic')
-rwxr-xr-x  tests/basic/ec/ec-data-heal.t  87
1 file changed, 87 insertions(+), 0 deletions(-)
diff --git a/tests/basic/ec/ec-data-heal.t b/tests/basic/ec/ec-data-heal.t
new file mode 100755
index 00000000000..4599c8a336b
--- /dev/null
+++ b/tests/basic/ec/ec-data-heal.t
@@ -0,0 +1,87 @@
+#!/bin/bash
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../volume.rc
+
+#This test checks for data corruption after heal while IO is in progress
+
+cleanup
+TEST glusterd
+TEST pidof glusterd
+TEST $CLI volume create $V0 disperse 3 redundancy 1 $H0:$B0/${V0}{0..2}
+TEST $CLI volume start $V0
+
+TEST $GFS --volfile-id=/$V0 --volfile-server=$H0 $M0;
+EXPECT_WITHIN $CHILD_UP_TIMEOUT "3" ec_child_up_count $V0 0
+
+############ Start IO ###########
+TEST touch $M0/file
+#start background IO on file
+dd if=/dev/urandom of=$M0/file conv=fdatasync &
+iopid=$!
+
+
+############ Kill and start brick0 for heal ###########
+brick0=$(ps -p $(get_brick_pid $V0 $H0 $B0/${V0}0) -o args)
+WORDTOREMOVE=COMMAND
+brick0=${brick0//$WORDTOREMOVE/}
+TEST kill_brick $V0 $H0 $B0/${V0}0
+EXPECT_WITHIN $CHILD_UP_TIMEOUT "2" ec_child_up_count $V0 0
+#sleep so that data can be written which will be healed later
+sleep 10
+TEST eval $brick0
+##wait for heal info to become 0 and kill IO
+EXPECT_WITHIN $IO_HEAL_TIMEOUT "^0$" get_pending_heal_count $V0
+kill $iopid
+EXPECT_WITHIN $IO_HEAL_TIMEOUT "^0$" get_pending_heal_count $V0
+
+############### Check md5sum #########################
+
+## unmount and mount get md5sum after killing brick0
+
+brick0=$(ps -p $(get_brick_pid $V0 $H0 $B0/${V0}0) -o args)
+WORDTOREMOVE=COMMAND
+brick0=${brick0//$WORDTOREMOVE/}
+TEST kill_brick $V0 $H0 $B0/${V0}0
+
+EXPECT_WITHIN $UMOUNT_TIMEOUT "Y" force_umount $M0
+TEST $GFS --volfile-id=/$V0 --volfile-server=$H0 $M0;
+EXPECT_WITHIN $CHILD_UP_TIMEOUT "2" ec_child_up_count $V0 0
+mdsum0=`md5sum $M0/file | awk '{print $1}'`
+TEST eval $brick0
+EXPECT_WITHIN $CHILD_UP_TIMEOUT "3" ec_child_up_count $V0 0
+
+## unmount and mount get md5sum after killing brick1
+
+brick1=$(ps -p $(get_brick_pid $V0 $H0 $B0/${V0}1) -o args)
+WORDTOREMOVE=COMMAND
+brick1=${brick1//$WORDTOREMOVE/}
+TEST kill_brick $V0 $H0 $B0/${V0}1
+
+EXPECT_WITHIN $UMOUNT_TIMEOUT "Y" force_umount $M0
+TEST $GFS --volfile-id=/$V0 --volfile-server=$H0 $M0;
+EXPECT_WITHIN $CHILD_UP_TIMEOUT "2" ec_child_up_count $V0 0
+mdsum1=`md5sum $M0/file | awk '{print $1}'`
+TEST eval $brick1
+EXPECT_WITHIN $CHILD_UP_TIMEOUT "3" ec_child_up_count $V0 0
+
+## unmount and mount get md5sum after killing brick2
+
+brick2=$(ps -p $(get_brick_pid $V0 $H0 $B0/${V0}2) -o args)
+WORDTOREMOVE=COMMAND
+brick2=${brick2//$WORDTOREMOVE/}
+TEST kill_brick $V0 $H0 $B0/${V0}2
+
+EXPECT_WITHIN $UMOUNT_TIMEOUT "Y" force_umount $M0
+TEST $GFS --volfile-id=/$V0 --volfile-server=$H0 $M0;
+EXPECT_WITHIN $CHILD_UP_TIMEOUT "2" ec_child_up_count $V0 0
+mdsum2=`md5sum $M0/file | awk '{print $1}'`
+TEST eval $brick2
+EXPECT_WITHIN $CHILD_UP_TIMEOUT "3" ec_child_up_count $V0 0
+
+# compare all the three md5sums
+EXPECT "$mdsum0" echo $mdsum1
+EXPECT "$mdsum0" echo $mdsum2
+EXPECT "$mdsum1" echo $mdsum2
+
+cleanup
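An aside on the `WORDTOREMOVE` dance repeated in the test above: `ps -p <pid> -o args` prints a `COMMAND` header row, which the test strips with `${brick0//COMMAND/}`. A POSIX-specified alternative is an empty header in the format spec (`-o args=`), which suppresses the header directly. A minimal sketch (assuming a POSIX-compliant `ps`):

```shell
#!/bin/sh
# Capture the current shell's full command line without the header row.
# "-o args" would prepend a COMMAND header line; "-o args=" omits it.
cmdline=$(ps -p $$ -o args=)
echo "$cmdline"
```

This keeps the captured string directly usable for `eval` restarts without any post-processing.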