author     Pranith Kumar K <pkarampu@redhat.com>     2020-04-13 19:31:51 +0530
committer  Ravishankar N <ravishankar@redhat.com>    2020-10-01 12:03:25 +0000
commit     9ecbd69127d373ac000e9e1be00c1829e49e64a4 (patch)
tree       66e5276ca1fbcc50913e29fe004a3c6ffbc919ae /tests/bugs/replicate/bug-1744548-heal-timeout.t
parent     4ec05d087e07dc1ae2ada9d36ab2597c175890b4 (diff)
cluster/afr: Heal directory rename without rmdir/mkdir
Problem1:
When a directory is renamed while a brick is down, entry-heal always
did an rm -rf of that directory on the sink at the old location, then
a mkdir and recreated the entire directory hierarchy at the new
location. This is inefficient.
Problem2:
The order in which a renamed directory's old and new locations are
healed may lead to a scenario where the directory is created at the
new location before it is deleted from the old location, leaving two
directories with the same gfid in posix.
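To make the duplicate-gfid state concrete, here is a small self-contained
illustration on a scratch directory standing in for the sink brick; every
name is hypothetical, and a plain .tag file stands in for the real
trusted.gfid xattr:

    #!/bin/bash
    # Models the Problem2 end state on a plain filesystem: heal creates the
    # directory at the new location before the old location is removed, so
    # one gfid briefly maps to two directories.
    brick=$(mktemp -d)
    gfid="deadbeef-0000-hypothetical"
    mkdir "$brick/old-name" && echo "$gfid" > "$brick/old-name/.tag"
    # heal of the new location runs first and recreates the directory there:
    mkdir "$brick/new-name" && echo "$gfid" > "$brick/new-name/.tag"
    # until the old location is healed, two directories share one identity:
    grep -H . "$brick"/*/.tag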
Fix:
As part of heal, if the old location is healed first and the directory
is not present on the source brick, always rename it into a hidden
directory inside the sink brick, so that when heal is triggered at the
new location, shd can rename it from this hidden directory into the
new location.
If the new-location heal is triggered first and it detects that the
directory already exists on the brick, it should skip healing the
directory until it appears in the hidden directory.
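A minimal shell sketch of that two-phase flow, standing in for shd's C
implementation; the hidden-directory name, function names, and gfid
handling below are illustrative assumptions, not the actual code:

    #!/bin/bash
    # Illustrative sketch only: models the fix's rename-through-hidden-
    # directory flow. ANON_DIR and the gfid argument are hypothetical
    # stand-ins for shd's internal state.
    ANON_DIR=".glusterfs-anonymous-inode"

    heal_old_location() {    # old location healed first, dir gone on source
        local brick=$1 old_path=$2 gfid=$3
        mkdir -p "$brick/$ANON_DIR"
        # park the directory instead of rm -rf'ing it; contents and gfid survive
        mv "$brick/$old_path" "$brick/$ANON_DIR/$gfid"
    }

    heal_new_location() {    # new-location heal, whenever it is triggered
        local brick=$1 new_path=$2 gfid=$3
        if [ -d "$brick/$ANON_DIR/$gfid" ]; then
            # the parked copy exists: move it into place with one rename
            mv "$brick/$ANON_DIR/$gfid" "$brick/$new_path"
        fi    # else skip this directory until it shows up under ANON_DIR
    }

Because the directory is only ever moved, never deleted and recreated, its
gfid exists at exactly one visible path at any time, which addresses both
problems above.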
Credits: Ravi for rename-data-loss.t script
Fixes: #1211
Change-Id: I0cba2006f35cd03d314d18211ce0bd530e254843
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Diffstat (limited to 'tests/bugs/replicate/bug-1744548-heal-timeout.t')
-rw-r--r--  tests/bugs/replicate/bug-1744548-heal-timeout.t  6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/tests/bugs/replicate/bug-1744548-heal-timeout.t b/tests/bugs/replicate/bug-1744548-heal-timeout.t
index c208112c8b0..011535066f9 100644
--- a/tests/bugs/replicate/bug-1744548-heal-timeout.t
+++ b/tests/bugs/replicate/bug-1744548-heal-timeout.t
@@ -25,14 +25,14 @@ TEST ! $CLI volume heal $V0
 TEST $CLI volume profile $V0 start
 TEST $CLI volume profile $V0 info clear
 TEST $CLI volume heal $V0 enable
-# Each brick does 3 opendirs, corresponding to dirty, xattrop and entry-changes
-EXPECT_WITHIN $HEAL_TIMEOUT "^333$" get_cumulative_opendir_count
+# Each brick does 4 opendirs, corresponding to dirty, xattrop and entry-changes, anonymous-inode
+EXPECT_WITHIN 4 "^444$" get_cumulative_opendir_count
 
 # Check that a change in heal-timeout is honoured immediately.
 TEST $CLI volume set $V0 cluster.heal-timeout 5
 sleep 10
 # Two crawls must have happened.
-EXPECT_WITHIN $HEAL_TIMEOUT "^999$" get_cumulative_opendir_count
+EXPECT_WITHIN $CHILD_UP_TIMEOUT "^121212$" get_cumulative_opendir_count
 
 # shd must not heal if it is disabled and heal-timeout is changed.
 TEST $CLI volume heal $V0 disable
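For reference on the patterns asserted above: the helper concatenates one
cumulative OPENDIR count per brick, so three bricks doing 4 opendirs each
match ^444$, and after two further crawls each brick has done 3 x 4 = 12
opendirs, matching ^121212$. A hedged sketch of what such a helper can look
like; picking alternate lines (the Cumulative rows) and awk field $8 are
assumptions about the profile output's layout:

    # Illustrative shape of the helper the test relies on.
    function get_cumulative_opendir_count {
        # one cumulative OPENDIR count per brick, concatenated: 4,4,4 -> "444"
        $CLI volume profile $V0 info | grep OPENDIR | sed 'n;d' | awk '{print $8}' | tr -d '\n'
    }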