| author | Krutika Dhananjay <kdhananj@redhat.com> | 2018-03-29 17:21:32 +0530 |
|---|---|---|
| committer | Pranith Kumar Karampuri <pkarampu@redhat.com> | 2018-06-13 09:57:14 +0000 |
| commit | c30aca6a5b25223e36b4ea812af544e348952138 (patch) | |
| tree | ec851479d54cb892e8e76d8f811a42bca2924630 /tests/bugs/shard/bug-1568521-EEXIST.t | |
| parent | 5702ff3012f6b97f6b497b5c2e89e8700caf8bc1 (diff) | |
features/shard: Introducing ".shard/.remove_me" for atomic shard deletion (part 1)
PROBLEM:
Shards are deleted synchronously when a sharded file is unlinked, or
when a sharded file participating as the dst in a rename() is about to
be replaced. The problem with this approach is that it makes the
operation very slow, sometimes causing the application to time out,
especially with large files.
SOLUTION:
To make this operation atomic, we introduce a ".remove_me" directory.
Now renames and unlinks will simply involve two steps:
1. creating an empty file under .remove_me named after the gfid of the file
participating in unlink/rename
2. carrying out the actual rename/unlink
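The two steps above can be sketched as follows; a temp directory stands in for a brick backend and the gfid string is purely illustrative, not a real GlusterFS gfid:

```shell
#!/bin/sh
# Sketch of the two-step unlink scheme. Assumptions: a temp dir stands in
# for the brick backend; the gfid string is an illustrative placeholder.
set -e
BRICK=$(mktemp -d)
mkdir -p "$BRICK/.shard/.remove_me"
GFID="0f4ddec1-2d21-4345-8939-a58a6b4c8b37"

echo data > "$BRICK/file"

touch "$BRICK/.shard/.remove_me/$GFID"   # step 1: empty marker named after the gfid
rm "$BRICK/file"                         # step 2: the actual unlink

# The marker outlives the unlink, so the background task can find it later.
marker_state=$([ -e "$BRICK/.shard/.remove_me/$GFID" ] && echo present || echo missing)
echo "after unlink, marker is $marker_state"
rm -rf "$BRICK"
```

Because the marker is created before the unlink, a crash between the two steps at worst leaves a stale marker, never an orphaned set of shards.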
A synctask is created (more on that in part 2) to scan this directory
after every unlink/rename operation (or upon a volume mount) and clean
up all shards associated with the gfids listed there. All of this
happens in the background. The task deletes the shards associated with
a gfid in .remove_me only if that gfid no longer exists in the backend,
which ensures the file was successfully renamed/unlinked and its shards
can now be safely discarded.
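A minimal sketch of that cleanup rule follows. Assumptions: temp dirs stand in for the brick, a file under .glusterfs stands in for the backend gfid lookup, and shards are named <gfid>.N, which simplifies the real on-disk layout:

```shell
#!/bin/sh
# Sketch of the background cleanup rule: delete shards for a gfid listed in
# .remove_me only when that gfid no longer exists in the backend. Paths and
# the <gfid>.N shard naming are simplifications, not GlusterFS internals.
set -e
BRICK=$(mktemp -d)
mkdir -p "$BRICK/.shard/.remove_me" "$BRICK/.glusterfs"
GFID="0f4ddec1-2d21-4345-8939-a58a6b4c8b37"

touch "$BRICK/.shard/.remove_me/$GFID"                 # marker from step 1
touch "$BRICK/.shard/$GFID.1" "$BRICK/.shard/$GFID.2"  # leftover shards

cleaned=0
for marker in "$BRICK/.shard/.remove_me"/*; do
    gfid=$(basename "$marker")
    if [ ! -e "$BRICK/.glusterfs/$gfid" ]; then        # gfid gone: unlink succeeded
        rm -f "$BRICK/.shard/$gfid".* "$marker"
        cleaned=$((cleaned + 1))
    fi
done
remaining=$(ls "$BRICK/.shard" | grep -c "^$GFID" || true)
echo "cleaned $cleaned gfid(s), $remaining shard(s) remaining"
rm -rf "$BRICK"
```

If the gfid still exists in the backend, the rename/unlink has not completed and the marker is simply skipped until a later scan.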
Change-Id: Ia1d238b721a3e99f951a73abbe199e4245f51a3a
updates: bz#1568521
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Diffstat (limited to 'tests/bugs/shard/bug-1568521-EEXIST.t')
| -rw-r--r-- | tests/bugs/shard/bug-1568521-EEXIST.t | 79 |
1 file changed, 79 insertions, 0 deletions
diff --git a/tests/bugs/shard/bug-1568521-EEXIST.t b/tests/bugs/shard/bug-1568521-EEXIST.t
new file mode 100644
index 00000000000..e4c3d41098c
--- /dev/null
+++ b/tests/bugs/shard/bug-1568521-EEXIST.t
@@ -0,0 +1,79 @@
+#!/bin/bash
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../volume.rc
+
+cleanup
+
+TEST glusterd
+TEST pidof glusterd
+TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}{0,1}
+TEST $CLI volume set $V0 features.shard on
+TEST $CLI volume set $V0 features.shard-block-size 4MB
+TEST $CLI volume start $V0
+TEST glusterfs --volfile-id=$V0 --volfile-server=$H0 $M0
+
+TEST mkdir $M0/dir
+# Unlink a temporary file to trigger creation of .remove_me
+TEST touch $M0/tmp
+TEST unlink $M0/tmp
+
+TEST stat $B0/${V0}0/.shard/.remove_me
+TEST stat $B0/${V0}1/.shard/.remove_me
+
+TEST dd if=/dev/zero of=$M0/dir/file bs=1024 count=9216
+gfid_file=$(get_gfid_string $M0/dir/file)
+
+# Create marker file from the backend to simulate ENODATA.
+touch $B0/${V0}0/.shard/.remove_me/$gfid_file
+touch $B0/${V0}1/.shard/.remove_me/$gfid_file
+
+# Set block and file size to incorrect values of 64MB and 5MB to simulate "stale xattrs" case
+# and confirm that the correct values are set when the actual unlink takes place
+
+TEST setfattr -n trusted.glusterfs.shard.block-size -v 0x0000000004000000 $B0/${V0}0/.shard/.remove_me/$gfid_file
+TEST setfattr -n trusted.glusterfs.shard.block-size -v 0x0000000004000000 $B0/${V0}1/.shard/.remove_me/$gfid_file
+
+TEST setfattr -n trusted.glusterfs.shard.file-size -v 0x0000000000500000000000000000000000000000000000000000000000000000 $B0/${V0}0/.shard/.remove_me/$gfid_file
+TEST setfattr -n trusted.glusterfs.shard.file-size -v 0x0000000000500000000000000000000000000000000000000000000000000000 $B0/${V0}1/.shard/.remove_me/$gfid_file
+
+# Sleep for 2 seconds to prevent posix_gfid_heal() from believing marker file is "fresh" and failing lookup with ENOENT
+sleep 2
+
+TEST unlink $M0/dir/file
+EXPECT "0000000000400000" get_hex_xattr trusted.glusterfs.shard.block-size $B0/${V0}0/.shard/.remove_me/$gfid_file
+EXPECT "0000000000400000" get_hex_xattr trusted.glusterfs.shard.block-size $B0/${V0}1/.shard/.remove_me/$gfid_file
+EXPECT "0000000000900000000000000000000000000000000000000000000000000000" get_hex_xattr trusted.glusterfs.shard.file-size $B0/${V0}0/.shard/.remove_me/$gfid_file
+EXPECT "0000000000900000000000000000000000000000000000000000000000000000" get_hex_xattr trusted.glusterfs.shard.file-size $B0/${V0}1/.shard/.remove_me/$gfid_file
+
+##############################
+### Repeat test for rename ###
+##############################
+
+TEST touch $M0/src
+TEST dd if=/dev/zero of=$M0/dir/dst bs=1024 count=9216
+gfid_dst=$(get_gfid_string $M0/dir/dst)
+
+# Create marker file from the backend to simulate ENODATA.
+touch $B0/${V0}0/.shard/.remove_me/$gfid_dst
+touch $B0/${V0}1/.shard/.remove_me/$gfid_dst
+
+# Set block and file size to incorrect values of 64MB and 5MB to simulate "stale xattrs" case
+# and confirm that the correct values are set when the actual unlink takes place
+
+TEST setfattr -n trusted.glusterfs.shard.block-size -v 0x0000000004000000 $B0/${V0}0/.shard/.remove_me/$gfid_dst
+TEST setfattr -n trusted.glusterfs.shard.block-size -v 0x0000000004000000 $B0/${V0}1/.shard/.remove_me/$gfid_dst
+
+TEST setfattr -n trusted.glusterfs.shard.file-size -v 0x0000000000500000000000000000000000000000000000000000000000000000 $B0/${V0}0/.shard/.remove_me/$gfid_dst
+TEST setfattr -n trusted.glusterfs.shard.file-size -v 0x0000000000500000000000000000000000000000000000000000000000000000 $B0/${V0}1/.shard/.remove_me/$gfid_dst
+
+# Sleep for 2 seconds to prevent posix_gfid_heal() from believing marker file is "fresh" and failing lookup with ENOENT
+sleep 2
+
+TEST mv -f $M0/src $M0/dir/dst
+EXPECT "0000000000400000" get_hex_xattr trusted.glusterfs.shard.block-size $B0/${V0}0/.shard/.remove_me/$gfid_dst
+EXPECT "0000000000400000" get_hex_xattr trusted.glusterfs.shard.block-size $B0/${V0}1/.shard/.remove_me/$gfid_dst
+EXPECT "0000000000900000000000000000000000000000000000000000000000000000" get_hex_xattr trusted.glusterfs.shard.file-size $B0/${V0}0/.shard/.remove_me/$gfid_dst
+EXPECT "0000000000900000000000000000000000000000000000000000000000000000" get_hex_xattr trusted.glusterfs.shard.file-size $B0/${V0}1/.shard/.remove_me/$gfid_dst
+
+cleanup
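The hex strings in the setfattr and EXPECT lines are 16-hex-digit (64-bit) big-endian byte counts; for the file-size xattr, the size sits in the leading 8 bytes and the remaining bytes are zero here. A quick sanity check of the values the test uses, assuming only that encoding:

```shell
#!/bin/sh
# Cross-check the hex size values used in the test: each size is encoded
# as a 16-hex-digit (64-bit) big-endian byte count.
to_hex64() { printf "%016x" "$1"; }

stale_block=$(to_hex64 $((64 * 1024 * 1024)))  # stale block-size: 64MB
stale_file=$(to_hex64 $((5 * 1024 * 1024)))    # stale file-size: 5MB
good_block=$(to_hex64 $((4 * 1024 * 1024)))    # expected block-size: 4MB
good_file=$(to_hex64 $((9216 * 1024)))         # expected file-size: 9216 KiB from dd

echo "stale block-size: $stale_block"   # 0000000004000000
echo "stale file-size:  $stale_file"    # 0000000000500000
echo "good block-size:  $good_block"    # 0000000000400000
echo "good file-size:   $good_file"     # 0000000000900000
```

This is why the EXPECT lines look for 0000000000400000 (4MB, the volume's shard-block-size) and a file-size xattr beginning 0000000000900000 (the 9216 KiB written by dd) once the real unlink/rename overwrites the deliberately stale values.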
