Commit log
Fixes: bz#1708929
Change-Id: I9cc81a9047ff874df752ca5552e00bf033485bd8
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
If the heal from the next brick starts after the first brick completes its heal,
an opendir on the brick can change the atime, leading to failure of the test.
When ctime is disabled it is better to just check that the mtime is the same after the heal.
fixes: bz#1751134
Change-Id: Ia03e30fd547e6bbe85c1e299845ffa122f3a2692
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
fixes: #721
Change-Id: I5333540e3c635ccf441cf1f4696e4c8986e38ea8
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Problem:
If the update of size/version is not successful on the file, subsequent
updates on the same stripe can lead to data corruption if the earlier
unaligned write did not succeed on all the bricks. The application has
no knowledge of this because the size/version update happens in the
background.
Fix:
Fail fsync/flush on fds that were opened before the update of
size/version went bad.
fixes: bz#1748836
Change-Id: I9d323eddcda703bd27d55f340c4079d76e06e492
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
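A minimal, self-contained sketch of this fix's idea, with made-up stand-ins for the EC xlator's fd/inode contexts (none of these names are the real symbols): each fd records a generation number at open time, the inode records the generation at which the size/version update failed, and fsync fails with EIO for fds opened at or before that point.

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for the EC xlator's inode/fd contexts. */
    typedef struct {
        uint64_t gen;     /* monotonically increasing op generation */
        uint64_t bad_gen; /* generation at which the size/version update failed; 0 = healthy */
    } inode_ctx_stub;

    typedef struct {
        uint64_t open_gen; /* generation recorded when the fd was opened */
    } fd_ctx_stub;

    /* Fail fsync/flush on fds opened before the size/version update went bad. */
    static int ec_fsync_check(const inode_ctx_stub *inode, const fd_ctx_stub *fd)
    {
        if (inode->bad_gen != 0 && fd->open_gen <= inode->bad_gen)
            return -EIO; /* data on this fd may be inconsistent across bricks */
        return 0;
    }

    int main(void)
    {
        inode_ctx_stub inode = { .gen = 7, .bad_gen = 5 };
        fd_ctx_stub old_fd = { .open_gen = 3 }; /* opened before the failure */
        fd_ctx_stub new_fd = { .open_gen = 6 }; /* opened after the failure */

        printf("old fd fsync -> %d\n", ec_fsync_check(&inode, &old_fd)); /* -EIO */
        printf("new fd fsync -> %d\n", ec_fsync_check(&inode, &new_fd)); /* 0 */
        return 0;
    }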
We were unconditionally cleaning up the graph when we got a child_down
followed by a parent_down. But this is prone to a race condition when
some of the bricks are already disconnected: their child_down events
have already been received, so we might free the graph even before the
last child_down is executed in the client xlator code.
To fix this race, we introduce a check of whether every client xlator
has cleared its reconnect chain and called child_down for the last time.
Change-Id: I7d02813bc366dac733a836e0cd7b14a6fac52042
fixes: bz#1727329
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
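A minimal model of the fixed shutdown ordering, with hypothetical names: the graph is cleaned up only once parent_down has been seen and every client xlator has sent its final child_down.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical model: the graph may be freed only after every client
     * xlator has cleared its reconnect chain and sent its final CHILD_DOWN,
     * not merely after the first child_down/parent_down pair. */
    typedef struct {
        pthread_mutex_t lock;
        int clients_total;
        int clients_down;  /* clients that sent their final child_down */
        bool parent_down;  /* parent_down already received */
    } graph_state;

    /* Returns true when it is now safe to clean up the graph. */
    static bool on_last_child_down(graph_state *g)
    {
        bool cleanup = false;
        pthread_mutex_lock(&g->lock);
        g->clients_down++;
        cleanup = g->parent_down && (g->clients_down == g->clients_total);
        pthread_mutex_unlock(&g->lock);
        return cleanup;
    }

    int main(void)
    {
        graph_state g = { PTHREAD_MUTEX_INITIALIZER, 3, 0, true };
        for (int i = 0; i < 3; i++)
            printf("child_down %d -> cleanup=%d\n", i + 1,
                   (int)on_last_child_down(&g)); /* only the 3rd returns 1 */
        return 0;
    }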
We were not passing xattr_req when doing a name self-heal or a metadata
heal. Because of this, some xdata was missing, which caused I/O errors.
Change-Id: Ibfb1205a7eb0195632dc3820116ffbbb8043545f
Fixes: bz#1728770
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Problem:
Since commit 600ba94183333c4af9b4a09616690994fd528478, shd starts
healing as soon as it is toggled from disabled to enabled. This was
causing the following line in the .t to fail on a 'fast' machine (always
on my laptop and sometimes on the jenkins slaves).
EXPECT_NOT "^0$" get_pending_heal_count $V0
because by the time shd was disabled, the heal was already completed.
Fix:
Increase the number of files to be healed and make it a variable called
FILE_COUNT, in case we need to bump it up further as machines become
even faster. Also create pending metadata heals to increase the time
taken to heal a file.
fixes: bz#1748744
Change-Id: I5a26b08e45b8c19bce3c01ce67bdcc28ed48198d
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
...whenever shd is re-enabled after being disabled, or when there is a
change in `cluster.heal-timeout`, without needing to restart shd or wait
for the current `cluster.heal-timeout` seconds to expire.
See BZ 1743988 for more details.
Change-Id: Ia5ebd7c8e9f5b54cba3199c141fdd1af2f9b9bfe
fixes: bz#1744548
Reported-by: Glen Kiessling <glenk1973@hotmail.com>
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
A null-character string is a valid xattr value in a file system, but for
xattrs processed by md-cache, entries are not updated when the value is
null ('\0'). This results in ENODATA when those xattrs are queried
afterwards via getxattr(), causing failures in basic operations like
create and copy in a specially configured Samba setup for Mac OS clients.
On the other hand, snapview-server internally sets an empty string ("")
as the value for xattrs received as part of listxattr() which are not
intended to be cached. We therefore preserve that behaviour using an
additional dictionary key to prevent updating entries in the getxattr()
and fgetxattr() callbacks in md-cache.
Credits: Poornima G <pgurusid@redhat.com>
Change-Id: I7859cbad0a06ca6d788420c2a495e658699c6ff7
Fixes: bz#1726205
Signed-off-by: Anoop C S <anoopcs@redhat.com>
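A simplified model of the caching rule described above; the marker key name below is an assumption for illustration, not the real dict key:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical flattened view of the xdata dictionary: key/value pairs,
     * where a value may legitimately be zero-length ("" or '\0'). */
    typedef struct {
        const char *key;
        const char *value;
        size_t len;
    } xattr_pair;

    #define NO_CACHE_KEY "glusterfs.skip-cache" /* assumed marker name */

    /* Cache every xattr (even zero-length values) unless the marker key
     * set by snapview-server is present in the dictionary. */
    static bool should_cache(const xattr_pair *pairs, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (strcmp(pairs[i].key, NO_CACHE_KEY) == 0)
                return false; /* snapview-server asked us not to cache */
        return true;          /* zero-length values are cached like any other */
    }

    int main(void)
    {
        xattr_pair from_samba[] = { { "user.some-stream", "", 0 } };
        xattr_pair from_svs[] = { { "user.x", "", 0 }, { NO_CACHE_KEY, "", 0 } };
        printf("samba xattr cached: %d\n", (int)should_cache(from_samba, 1)); /* 1 */
        printf("svs xattr cached:   %d\n", (int)should_cache(from_svs, 2));   /* 0 */
        return 0;
    }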
Problem: ./tests/bugs/glusterd/bug-1595320.t sometimes fails when
checking brick_process after sending a kill signal to the brick process.
Solution: Wait for some time after sending the kill signal to make sure
the brick process has actually stopped.
Change-Id: Iee9e91284618abfc62a550d47e4f9117785def58
Fixes: bz#1743200
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
Test the various combinations of hashed and cached
subvols for the src and dst.
Change-Id: I41416f9e5f2b7ea1c880d1913fdd6576da1ee868
fixes: bz#1626543
Signed-off-by: N Balachandran <nbalacha@redhat.com>
bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t as BRICK_MUX_BAD_TEST
Updates: bz#1743069
Change-Id: I1eea0186ca0c1b1226f4b3d0d7c0e41fc7821cbd
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Fixes: bz#1734370
Change-Id: I29e338bac62104233a6f80212df8d0fb016affda
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Create a separate logdir for each host instance created by
cluster.rc. This makes it easier to determine the files
belonging to a particular instance.
Change-Id: Ic8321f83f98995412b7d5f095b3d3f0391767a8b
Fixes: bz#1733042
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Change-Id: I0cebaaf55c09eb1fb77a274268ff564e871b743b
fixes: bz#1738419
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Problem:
https://build.gluster.org/job/centos7-regression/7337/consoleFull
indicates the shd crashing for this .t. On looking at the core, I see
the crash is at the time of shd init and glusterfs context is null:
(gdb) bt
(gdb) p ctx
$2 = (glusterfs_ctx_t *) 0xf00000000
The .t is killing all gluster processes immediately after volume start,
so it looks like a race between shd coming up and it being killed.
Fix: Kill gluster processes only after they are up and running.
Fixes: bz#1740017
Change-Id: I7cf589201669bd9f535e968d147015dc99e9a4b6
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
The test file tests/basic/shd-mux.t was taking longer than 200 seconds
in some iterations, so this patch breaks the test case into three files.
Change-Id: I1430f58798f876edf6368d6f4b8b5a75f0114c31
Updates: bz#1708929
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Updates: bz#1693692
Change-Id: Ia10ccca5e1fed6c4269842ebb4d507662ca0f6a6
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Updates: bz#1693692
Change-Id: I145df4c0d0b0ce738f1d34b02341ec606e38522e
Signed-off-by: Amar Tumballi <amarts@redhat.com>
* Add a few more mgmt functions to the coverage
* While testing the mgmt functions, found an issue where if
'glfs_set_volfile_server()' is not called before calling
'glfs_unset_volfile_server()', the unset would cause a crash.
A NULL check of a few variables fixes the issue, and is handled
in this patch itself.
* Added a test for the volfile API
Updates: bz#1693692
Change-Id: Iba151f8da1b64107e2f436ddbfef9da45b1c1588
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Updates: bz#1693692
Change-Id: I13de7cf4c380b9663e6258e2fd62ce1f180591b4
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Updates: bz#1693692
Change-Id: I07371ea1e2613fed6dd9ea67c40cbb3ebcb9387e
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Make sure to provide the 'log-file' option so we can see the logs.
This test exercises volgen inserting the trace xlator into the server
graph.
Updates: bz#1693692
Change-Id: I26c736b04376674b4c094d48060660421e6c983c
Signed-off-by: Amar Tumballi <amarts@redhat.com>
EC doesn't allow concurrent writes on overlapping areas, they are
serialized. However non-overlapping writes are serviced in parallel.
When a write is not aligned, EC first needs to read the entire chunk
from disk, apply the modified fragment and write it again.
The problem appears on sparse files because a write to an offset
implicitly creates data on offsets below it (so, in some way, they
are overlapping). For example, if a file is empty and we read 10 bytes
from offset 10, read() will return 0 bytes. Now, if we write one byte
at offset 1M and retry the same read, the system call will return 10
bytes (all containing 0's).
So if we have two writes, the first one at offset 10 and the second one
at offset 1M, EC will send both in parallel because they do not overlap.
However, the first one will try to read missing data from the first chunk
(i.e. offsets 0 to 9) to recombine the entire chunk and do the final write.
This read will happen in parallel with the write to 1M. What can happen
is that half of the bricks process the write before the read, and the
other half process the read before the write. Some bricks will return
10 bytes of data while the others will return 0 bytes (because the file
on those bricks has not been expanded yet).
When EC tries to recombine the answers from the bricks, it can't, because
it needs more than half consistent answers to recover the data. So this
read fails with EIO error. This error is propagated to the parent write,
which is aborted and EIO is returned to the application.
The issue happened because EC assumed that a write to a given offset
implies that offsets below it exist.
This fix prevents the read of the chunk from bricks if the current size
of the file is smaller than the read chunk offset. This size is
correctly tracked, so this fixes the issue.
Also modified Test #13 in the ec-stripe.t file.
With this patch, if the file size is less than the offset we are writing
to, we fill the head and tail with zeros and do not consider it a stripe
cache miss. That makes sense, as we already know what data that part
holds, and there is no need to read it from the bricks.
Change-Id: Ic342e8c35c555b8534109e9314c9a0710b6225d6
Fixes: bz#1730715
Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
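The core decision, reduced to a self-contained sketch (the chunk size and all names are illustrative, not the real EC code): skip the read-modify step when the tracked file size ends at or before the start of the chunk being written, since that region is known to be zeros.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CHUNK_SIZE 4096 /* assumed stripe/chunk size for illustration */

    /* Decide whether an unaligned write needs to read the existing chunk
     * from the bricks. If the tracked file size ends at or before the
     * chunk start, the chunk's prior content is known to be zeros. */
    static bool need_chunk_read(uint64_t file_size, uint64_t write_offset)
    {
        uint64_t chunk_start = (write_offset / CHUNK_SIZE) * CHUNK_SIZE;
        return file_size > chunk_start;
    }

    int main(void)
    {
        /* Empty sparse file, two parallel writes: offset 10 and offset 1M. */
        uint64_t size = 0;
        printf("write@10 needs read: %d\n", (int)need_chunk_read(size, 10));      /* 0 */
        printf("write@1M needs read: %d\n", (int)need_chunk_read(size, 1 << 20)); /* 0 */
        /* Once the file has grown, an unaligned write must read the chunk. */
        size = 100;
        printf("write@10 needs read: %d\n", (int)need_chunk_read(size, 10));      /* 1 */
        return 0;
    }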
Problem:
Files created before the ctime feature was enabled do not have the
"trusted.glusterfs.mdata" xattr (which stores the time attributes).
Upon fops which modify either ctime or mtime, the xattr gets created
with the latest ctime, mtime and atime, which is incorrect. It should
update only the corresponding time attribute and take the rest from
the backend.
Solution:
Creating the xattr with values from the brick is not possible, as each
brick of the replica set would have different times. So create the
xattr upon a successful lookup if it is not present yet.
Note To Reviewers:
The time attributes used to set the xattr are taken from the successful
lookup. Instead of sending the whole iatt over the wire via setxattr, a
structure called mdata_iatt is sent. The mdata_iatt contains only the
time attributes.
Change-Id: I5e535631ddef04195361ae0364336410a2895dd4
fixes: bz#1593542
Signed-off-by: Kotresh HR <khiremat@redhat.com>
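A rough sketch of the mdata_iatt idea described in the note above; the field names are assumptions for illustration, not the actual wire layout:

    #include <stdint.h>
    #include <stdio.h>

    /* A sketch of the reduced structure sent over the wire: only the time
     * attributes from a successful lookup, not the whole iatt. */
    typedef struct {
        int64_t ia_atime; uint32_t ia_atime_nsec;
        int64_t ia_mtime; uint32_t ia_mtime_nsec;
        int64_t ia_ctime; uint32_t ia_ctime_nsec;
    } mdata_iatt_stub;

    int main(void)
    {
        /* Times obtained from a successful lookup on the file; this is what
         * would be packed into the setxattr request that creates
         * "trusted.glusterfs.mdata" for files predating the ctime feature. */
        mdata_iatt_stub m = { 1560000000, 0, 1560000100, 0, 1560000100, 0 };
        printf("atime=%lld mtime=%lld ctime=%lld\n",
               (long long)m.ia_atime, (long long)m.ia_mtime,
               (long long)m.ia_ctime);
        return 0;
    }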
Added a test case for the patch
https://review.gluster.org/#/c/glusterfs/+/22894/4
Also updated the if/else structure in gsyncdconfig.py to avoid repeated
occurrences of values in the new config file.
fixes: bz#1707731
Change-Id: If97e1d37ac52dbd17d47be6cb659fc5a3ccab6d7
Signed-off-by: Shwetha K Acharya <sacharya@redhat.com>
Problem:
tests/bugs/replicate/bug-1717819-metadata-split-brain-detection.t fails
intermittently in test cases #49 & #50, which compare the values of the
user-set xattrs after enabling the heal. We are not waiting for the heal
to complete before comparing those values, which can cause those tests
to fail.
Fix:
Wait till the HEAL-TIMEOUT before comparing the xattr values.
Also check, in another case, that the shd comes up and the bricks
connect to the shd process.
Change-Id: I0e245b328da9df23ce70c5300278fad1c1d9f7ff
Fixes: bz#1729847
Signed-off-by: karthik-us <ksubrahm@redhat.com>
We need to send the commit req to peers for geo-rep operations even
though they are no-volname operations. In the commit phase, peers try
to set the txn_opinfo, which will fail because it is a no-volname
operation where we don't require a commit phase. We mark skip_locking
as true for no-volname operations, but we have to make an exception for
geo-rep operations so that they can set txn_opinfo in the commit phase.
Please refer to the detailed RCA in the bug: 1729463
fixes: bz#1729463
Change-Id: I9f2478b12a281f6e052035c0563c40543493a3fc
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Problems:
1. When checking for type and gfid mismatch, if the type or gfid
is unknown because of a missing gfid handle and gfid xattr,
it will be reported as a type or gfid mismatch and the heal will
not complete.
2. If the source selected during entry heal has a null gfid, the same
will be sent to afr_lookup_and_heal_gfid(). In this function, when
we try to assign the gfid on the bricks where it does not exist,
we take that same (null) gfid and try to assign it on those
bricks. This will fail in posix_gfid_set() since the gfid sent
is null.
Fix:
If the gfid sent to afr_lookup_and_heal_gfid() is null, choose a
valid gfid before proceeding to assign the gfid on the bricks
where it is missing.
In afr_selfheal_detect_gfid_and_type_mismatch(), do not report a
type/gfid mismatch if the type/gfid is unknown or not set.
Change-Id: Ia06552e4dc4a9f89cb7f5302833604bd21bbf7da
fixes: bz#1722507
Signed-off-by: karthik-us <ksubrahm@redhat.com>
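A small self-contained sketch of the gfid-selection part of the fix; gfid_t and the helpers below are stand-ins, not the real libglusterfs uuid utilities:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    typedef unsigned char gfid_t[16];

    /* A gfid of all zeros means "unknown": missing gfid handle and xattr. */
    static bool gfid_is_null(const gfid_t g)
    {
        static const gfid_t zero = { 0 };
        return memcmp(g, zero, sizeof(gfid_t)) == 0;
    }

    /* Sketch of the fix: if the chosen source's gfid is null, pick any
     * valid gfid from the replies before assigning it on the other bricks. */
    static const unsigned char *pick_valid_gfid(const gfid_t replies[], int n)
    {
        for (int i = 0; i < n; i++)
            if (!gfid_is_null(replies[i]))
                return replies[i];
        return NULL; /* no brick knows the gfid; nothing to assign yet */
    }

    int main(void)
    {
        gfid_t replies[3] = { { 0 }, { 0 }, { 0xab, 0xcd } };
        const unsigned char *g = pick_valid_gfid(replies, 3);
        printf("picked gfid starting with 0x%02x\n", g ? g[0] : 0);
        return 0;
    }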
With the group-metadata-cache group profile, turning on the
performance.cache-invalidation option enables the cache-invalidation
feature of both the md-cache and quick-read xlators. While the intent
of group-metadata-cache is to enable md-cache's cache-invalidation
feature, quick-read gets affected by the same option. The md-cache
feature and its profile have existed since release-3.9, but quick-read
cache-invalidation was introduced in release-4; because of this
op-version mismatch, applying this group profile on any cluster that is
>= glusterfs-4 breaks backward compatibility with old clients.
The proposed fix here is to rename the key in quick-read to
'quick-read-cache-invalidation' so that both features have distinct
identities. This by itself brings in a backward-compatibility
challenge: where this feature is enabled in an existing cluster and the
cluster is then upgraded to a version where this change exists, it will
lead to an unidentified old key. But as a workaround we can always ask
users upgrading to release-7 to turn off this option, upgrade the
cluster, and turn it back on with the new key. This needs to be
documented once the patch is accepted.
Fixes: bz#1698042
Change-Id: I30422ba6496208e21191a8d78ad29b2e21078664
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
gluster volume create <VOLNAME> replica 2 thin-arbiter 1 <host1>:<brick1> <host2>:<brick2>
<thin-arbiter-host>:<path-to-store-replica-id-file> [force]
The changes have been made in such a way that the last brick in the
bricks list will be treated as the thin-arbiter.
GD1 is manipulated to consider the replica count to be 2 and to continue
creating the volume like any other replica-2 volume; but since
thin-arbiter volumes need ta-brick client xlator entries for each
subvolume in the fuse volfile, volfile generation is modified to inject
these entries separately into the volfile for every subvolume.
A few more additions -
1- Save the volinfo with the new fields ta_bricks list and thin_arbiter_count.
2- Introduce a new option client.ta-brick-port to add a remote-port to the
ta-brick xlator entry in fuse volfiles. The option can be set using the
following CLI syntax -
gluster volume set <VOLNAME> client.ta-brick-port <PORTNO.>
3- Volume Info will contain a Thin-Arbiter-path entry to distinguish it
from other replicate volumes.
Change-Id: Ib434e2313b29716f32476c6c211d282c4ef39406
Updates #687
Signed-off-by: Vishal Pandey <vpandey@redhat.com>
* Reverting the changes made in
https://review.gluster.org/#/c/glusterfs/+/21686/
* Now storage.reserve can take a value in percent or bytes
fixes: bz#1651445
Change-Id: Id4826210ec27991c55b17d1fecd90356bff3e036
Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>
Problem: glusterd populates the unix socket file name based on the
volname; if the volname is lengthy, socket system calls fail
because the name breaches the maximum length defined in the
kernel.
Solution: Convert the unix socket name to a hash to resolve the issue.
Change-Id: I5072e8184013095587537dbfa4767286307fff65
fixes: bz#1720566
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
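A self-contained illustration of the idea. DJB2 below stands in for whatever hash glusterd actually uses; the point is that the socket path length stops depending on the volname length and stays within sun_path's ~108-byte limit:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* sun_path in struct sockaddr_un is typically 108 bytes on Linux. */
    #define UNIX_PATH_MAX 108

    /* DJB2 is only a stand-in hash for illustration. */
    static uint64_t djb2(const char *s)
    {
        uint64_t h = 5381;
        for (; *s; s++)
            h = h * 33 + (unsigned char)*s;
        return h;
    }

    int main(void)
    {
        const char *volname =
            "a-very-long-volume-name-that-would-overflow-sun-path-when-"
            "embedded-verbatim-into-the-unix-socket-file-name";
        char path[UNIX_PATH_MAX];

        /* Fixed-length socket name derived from the volname hash. */
        snprintf(path, sizeof(path), "/var/run/gluster/%016llx.socket",
                 (unsigned long long)djb2(volname));
        printf("%s (%zu bytes)\n", path, strlen(path));
        return 0;
    }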
* found a bug with quiesce fallocate() - fixed.
* found a bug with cloudsync part of code in posix - fixed
updates: bz#1693692
Change-Id: I4f315ffebb612de072ae08761b8cd0f47714080a
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Problem: The test case fails when the stat command is run on the mount
point just after starting the volume, with the client getting a
'transport endpoint is not connected' error.
Solution: To avoid the error, make sure all brick instances are up and
the mount point is active.
Change-Id: I49553a04d5b13e155ee02f4a1888a07fe3ee2ff5
fixes: bz#1721590
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
Problem:
Race between two writes:
1) Thread-1 does ec_get_size_version() to perform the pre-op fxattrop
as part of write-1.
2) Thread-2 calls ec_set_dirty_flag() in ec_get_size_version() for
write-2. This sets dirty[] to 1.
3) Thread-1 completes executing ec_prepare_update_cbk, leading to
ctx->dirty[] = '1'.
4) Thread-2 takes LOCK(inode->lock) to check if there are any flags and
sets the dirty flag because lock->waiting_flag is 0 now. This leads to
an fxattrop that increments the on-disk dirty[] to '2'.
At the end of the writes the file will be marked for heal even when it
doesn't need heal.
Fix:
Perform ec_set_dirty_flag() and the other checks inside LOCK() to
prevent dirty[] from being marked as '1' in step 2) above.
Updates: bz#1593224
Change-Id: Icac2ab39c0b1e7e154387800fbededc561612865
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
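The shape of the fix, modelled with a plain pthread mutex standing in for LOCK(inode->lock) (structure and field names are illustrative, not the real ec types): the waiting-flag check and the dirty-flag set happen in one critical section, so only one of two racing writers performs the on-disk dirty increment.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        pthread_mutex_t lock;
        int waiting_writers; /* writers already holding/waiting on the lock */
        bool dirty_set;      /* in-memory dirty flag for this inode */
    } ec_inode_stub;

    /* Returns true if this writer must do the on-disk dirty fxattrop.
     * Deciding and recording happen under one LOCK(), unlike the racy
     * two-step sequence before the patch. */
    static bool set_dirty_flag(ec_inode_stub *ino)
    {
        bool do_fxattrop = false;
        pthread_mutex_lock(&ino->lock);
        if (!ino->dirty_set && ino->waiting_writers == 0) {
            ino->dirty_set = true;
            do_fxattrop = true;
        }
        ino->waiting_writers++;
        pthread_mutex_unlock(&ino->lock);
        return do_fxattrop;
    }

    int main(void)
    {
        ec_inode_stub ino = { PTHREAD_MUTEX_INITIALIZER, 0, false };
        printf("write-1 marks dirty on disk: %d\n", (int)set_dirty_flag(&ino)); /* 1 */
        printf("write-2 marks dirty on disk: %d\n", (int)set_dirty_flag(&ino)); /* 0 */
        return 0;
    }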
Problem:
On an EC volume, during upgrade from an older version where the ctime
feature is not enabled (or not present) to a newer version where the
ctime feature is available (enabled by default), self-heal hangs and
doesn't complete.
Cause:
The ctime feature has both client-side code (utime) and server-side
code (posix). The feature is driven from the client: only if the
client side sets the time in the frame should the server side set the
time attributes in the xattr. But posix setattr/fsetattr was not doing
that. When one of the server nodes is updated, since ctime is enabled
by default, it starts setting the xattr on setattr/fsetattr on the
updated node/brick.
On an EC volume the first two updated nodes (bricks) are not a problem,
because there are 4 other bricks with consistent data. However, once
the third brick is updated, the new attribute (mdata xattr) causes an
inconsistency in the metadata on 3 bricks, which prevents the file from
being repaired.
Fix:
Don't create the mdata xattr with the utimes/utimensat system calls.
Only update it if it is already present.
Change-Id: Ieacedecb8a738bb437283ef3e0f042fd49dc4c8c
fixes: bz#1720201
Signed-off-by: Kotresh HR <khiremat@redhat.com>
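The fix in miniature, with stand-in types (not the real posix xlator code): a utimes/utimensat-driven setattr may update the mdata xattr but never creates it.

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-in for the on-brick state of one file. */
    typedef struct {
        bool has_mdata_xattr; /* "trusted.glusterfs.mdata" present? */
        long mtime;           /* value stored inside the xattr */
    } brick_file_stub;

    /* A utimes/utimensat-driven setattr may update the mdata xattr but must
     * not create it; creation happens only via the client-driven ctime path
     * (or the lookup-based healing of bz#1593542). */
    static int posix_update_mdata(brick_file_stub *f, long new_mtime)
    {
        if (!f->has_mdata_xattr)
            return -ENODATA; /* leave the brick consistent with its peers */
        f->mtime = new_mtime;
        return 0;
    }

    int main(void)
    {
        brick_file_stub old_file = { false, 0 };   /* created before upgrade */
        brick_file_stub new_file = { true, 100 };
        printf("old file: %d\n", posix_update_mdata(&old_file, 200)); /* -ENODATA */
        printf("new file: %d\n", posix_update_mdata(&new_file, 200)); /* 0 */
        return 0;
    }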
The feature is not supported and was moved out of the codebase from the
glusterfs-5.x release. It doesn't make sense to keep the code to
support it.
For those who want to upgrade from a version supporting it to a higher
version, please do a 'gluster volume reset $VOL encryption reset' and
then continue with the upgrade process.
updates: bz#1648169
Change-Id: I8cf822c0d7195940bd37f6af2432a3cac68d44d1
Signed-off-by: Amar Tumballi <amarts@redhat.com>
To avoid the failure, wait for the hook script S13create-subdir-mounts.sh
to run after the test case executes the add-brick command.
Change-Id: I063b6d0f86a550ed0a0527255e4dfbe8f0a8c02e
fixes: bz#1720993
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
Also fixed some issues in the ec-1468261.t test.
Change-Id: If156f86af986d9eed13cdd1f15c5a7214cd11706
Updates: bz#1193929
Signed-off-by: Xavier Hernandez <jahernan@redhat.com>
... without which volume creation fails with "volume create: <xyz>: failed:
Failed to create volume files"
Fixes: bz#1716812
Change-Id: I2f4c2c6d5290f066b54e1c1db19e25db9937bedb
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
$SRC/glusterfs/bugs/nfs/showmount-many-clients.t
Change-Id: I48758cc66fcb55f48c4a8a0a738b06867f6814a1
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Updates: bz#1193929
Currently, for an application using gfapi to use glusterfs, when a
statedump is taken, it uses the /var/run/gluster dir to dump the info.
There can be concerns here, as this directory may be owned by some other
user, and hence taking the statedump may fail. Such applications
should have an option to use a different path.
This patch provides an API to do so.
Updates: bz#1689097
Change-Id: I8918e002bc823d83614c972b6c738baa04681b23
Signed-off-by: Amar Tumballi <amarts@redhat.com>
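A hedged usage sketch. glfs_new/glfs_set_volfile_server/glfs_init/glfs_fini are standard gfapi calls; the statedump-path setter's name and signature below are assumed from this description and should be checked against the installed glfs.h:

    #include <stdio.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        glfs_t *fs = glfs_new("testvol");
        if (!fs)
            return 1;
        glfs_set_volfile_server(fs, "tcp", "localhost", 24007);

        /* Assumed API from this change (name/signature unverified): point
         * statedumps at a directory the application owns instead of
         * /var/run/gluster. */
        if (glfs_set_statedump_path(fs, "/tmp/app-statedumps") != 0)
            fprintf(stderr, "failed to set statedump path\n");

        if (glfs_init(fs) != 0)
            return 1;
        /* ... application I/O; later statedumps land under the set path. */
        glfs_fini(fs);
        return 0;
    }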
This is critical so that all the tests are contained in the same
directory: one can just 'cp -a tests/ <any-location>/' and run the
glusterfs tests.
Only 'glfsxmp.c' was an exception, as it was just copying the file from
the api examples directory. It has now been moved into tests.
updates: bz#1193929
Change-Id: I00359d64be580bffc5b3c3a090968d86c2c6952a
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Problem:
The test case is failing to heal the volume within $HEAL_TIMEOUT @195.
This is happening because, as part of split-brain resolution, the file
gets expunged from the sink and the new entry mark for that file is
done on the source bricks as part of impunging. Since the source
bricks' shd threads failed to get the heal-domain lock, they will wait
for the heal-timeout of 10 minutes, which is greater than $HEAL_TIMEOUT.
Fix:
Set cluster.heal-timeout to 5 seconds to trigger the heal so that
one of the source bricks heals the file within the $HEAL_TIMEOUT.
Change-Id: Ie73c578cc5361c0d617a48ccc86026734d20ba8c
fixes: bz#1718998
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Problem:
We currently don't have a roll-back/undoing of post-ops if quorum is not met.
Though the FOP is still unwound with failure, the xattrs remain on the disk.
Due to these partial post-ops and partial heals (healing only when 2 bricks
are up), we can end up in metadata split-brain purely from the afr-xattrs
point of view, i.e. each brick is blamed by at least one of the others for
metadata. These scenarios are hit when there is frequent connect/disconnect
of the client/shd to the bricks.
Fix:
Pick a source based on the xattr values. If 2 bricks blame one, the blamed
one must be treated as the sink. If there is no majority, all are sources. Once
we pick a source, self-heal will then do the heal instead of erroring out
due to split-brain.
This patch also adds the restriction that all bricks must be up to perform
metadata heal, to avoid any metadata loss.
Removed the test case tests/bugs/replicate/bug-1468279-source-not-blaming-sinks.t
as it was doing metadata heal even when only 2 of 3 bricks were up.
Change-Id: I07a9d62f84ceda329dcab1f02a33aeed258dcb09
fixes: bz#1717819
Signed-off-by: karthik-us <ksubrahm@redhat.com>
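A toy model of the source-picking heuristic described above (replica-3, illustrative names only): count how many bricks blame each brick; a brick blamed by two others becomes the sink, and with no such majority every brick is treated as a source.

    #include <stdio.h>

    #define BRICKS 3

    /* blame[i][j] != 0 means brick i blames brick j for metadata. */
    static void pick_sources(const int blame[BRICKS][BRICKS], int sources[BRICKS])
    {
        int blamed_by[BRICKS] = { 0 };
        for (int i = 0; i < BRICKS; i++)
            for (int j = 0; j < BRICKS; j++)
                if (i != j && blame[i][j])
                    blamed_by[j]++;

        /* Is any brick blamed by a majority (at least two others)? */
        int have_majority_sink = 0;
        for (int j = 0; j < BRICKS; j++)
            if (blamed_by[j] >= 2)
                have_majority_sink = 1;

        /* Blamed-by-majority bricks are sinks; otherwise all are sources. */
        for (int j = 0; j < BRICKS; j++)
            sources[j] = have_majority_sink ? (blamed_by[j] < 2) : 1;
    }

    int main(void)
    {
        /* Bricks 0 and 1 both blame brick 2. */
        int blame[BRICKS][BRICKS] = { { 0, 0, 1 }, { 0, 0, 1 }, { 0, 0, 0 } };
        int sources[BRICKS];
        pick_sources(blame, sources);
        for (int j = 0; j < BRICKS; j++)
            printf("brick %d: %s\n", j, sources[j] ? "source" : "sink");
        return 0;
    }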
Problem: useradd fails with:
Cannot allocate memory
useradd: cannot lock /etc/passwd; try again later.
Solution:
Lock files should be removed automatically once the "useradd" or
"groupadd" command finishes. But sometimes we encounter situations
(bugs) where some of these files do not get properly unlocked after
the command runs. In that case, the next useradd may show the error
"cannot lock /etc/passwd" or "unable to lock group file".
So, to avoid any such errors, check for any lock files under /etc and
remove them.
updates: bz#1193929
Change-Id: If6456a271c2bc0717f768d7101a40ce44a9af3d7
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
Long tale of double unref! But do read...
In cases where a shard base inode is evicted from lru list while still
being part of fsync list but added back soon before its unlink, there
could be an extra inode_unref() leading to premature inode destruction
leading to crash.
One such specific case is the following -
Consider features.shard-deletion-rate = features.shard-lru-limit = 2.
This is an oversimplified example but explains the problem clearly.
First, a file is FALLOCATE'd to a size so that number of shards under
/.shard = 3 > lru-limit.
Shards 1, 2 and 3 need to be resolved. 1 and 2 are resolved first.
Resultant lru list:
1 -----> 2
refs on base inode - (1) + (1) = 2
3 needs to be resolved. So 1 is lru'd out. Resultant lru list -
2 -----> 3
refs on base inode - (1) + (1) = 2
Note that 1 is inode_unlink()d but not destroyed because there are
non-zero refs on it since it is still participating in this ongoing
FALLOCATE operation.
FALLOCATE is sent on all participant shards. In the cbk, all of them are
added to the fsync list.
Resulting fsync list -
1 -----> 2 -----> 3 (order doesn't matter)
refs on base inode - (1) + (1) + (1) = 3
Total refs = 3 + 2 = 5
Now an attempt is made to unlink this file. Background deletion is triggered.
The first $shard-deletion-rate shards need to be unlinked in the first batch.
So shards 1 and 2 need to be resolved. inode_resolve fails on 1 but succeeds
on 2 and so it's moved to tail of list.
lru list now -
3 -----> 2
No change in refs.
shard 1 is looked up. In lookup_cbk, it's linked and added back to lru list
at the cost of evicting shard 3.
lru list now -
2 -----> 1
refs on base inode: (1) + (1) = 2
fsync list now -
1 -----> 2 (again order doesn't matter)
refs on base inode - (1) + (1) = 2
Total refs = 2 + 2 = 4
After eviction, it is found 3 needs fsync. So fsync is wound, yet to be ack'd.
So it is still inode_link()d.
Now deletion of shards 1 and 2 completes. lru list is empty. Base inode unref'd and
destroyed.
In the next batched deletion, 3 needs to be deleted. It is inode_resolve()able.
It is added back to the lru list, but the base inode passed to
__shard_update_shards_inode_list() is NULL since the inode is destroyed.
Yet its ctx->inode still contains the base inode ptr from its first
addition to the lru list, with no additional ref on it.
lru list now -
3
refs on base inode - (0)
Total refs on base inode = 0
Unlink is sent on 3. It completes. Now, since the ctx contains a ptr to the
base_inode and the shard is part of the lru list, the base inode is unref'd,
leading to a crash.
FIX:
When a shard is re-added to the lru list, copy the base inode pointer as-is
into its inode ctx, even if it is NULL. This is needed to prevent double
unrefs at the time of deleting it.
Change-Id: I99a44039da2e10a1aad183e84f644d63ca552462
Updates: bz#1696136
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
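The fix in miniature, with stand-in types for the shard xlator's inode ctx (none of these are the real symbols): on re-adding a shard to the lru list, the base-inode pointer is copied verbatim (even when NULL), so the later unlink path skips the unref instead of dropping a reference that was never taken.

    #include <stdio.h>

    typedef struct inode_stub { int refs; } inode_stub;

    typedef struct {
        inode_stub *base_inode; /* may legitimately be NULL after destruction */
        int in_lru;
    } shard_ctx_stub;

    /* On re-adding a shard to the lru list, store the base-inode pointer
     * exactly as passed in, even when it is NULL, instead of keeping a
     * stale pointer from the first addition. */
    static void shard_readd_to_lru(shard_ctx_stub *ctx, inode_stub *base_inode)
    {
        ctx->base_inode = base_inode; /* copy as-is; NULL = "no extra ref" */
        ctx->in_lru = 1;
    }

    static void shard_unlink_done(shard_ctx_stub *ctx)
    {
        if (ctx->in_lru && ctx->base_inode != NULL)
            ctx->base_inode->refs--; /* only unref what was actually ref'd */
    }

    int main(void)
    {
        shard_ctx_stub ctx = { (inode_stub *)0x1, 0 }; /* stale ptr, 1st add */
        shard_readd_to_lru(&ctx, NULL); /* base inode already destroyed */
        shard_unlink_done(&ctx);        /* no unref, no crash */
        printf("base_inode=%p in_lru=%d\n", (void *)ctx.base_inode, ctx.in_lru);
        return 0;
    }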
* Also some logging enhancements in snapview-server
Change-Id: I6a7646771cedf4bd1c62806eea69d720bbaf0c83
fixes: bz#1715921
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
The following files are fixed.
tests/bugs/distribute/overlap.py
tests/utils/changelogparser.py
tests/utils/create-files.py
tests/utils/gfid-access.py
tests/utils/libcxattr.py
Change-Id: I3db857cc19e19163d368d913eaec1269fbc37140
updates: bz#1193929
Signed-off-by: Kotresh HR <khiremat@redhat.com>