cut -d$'\n' does not separate the xattrs shown in the getfattr output.
Hence use awk to get the nth line of the getfattr output for the nth
iteration of the for loop.
Change-Id: I1a96cd3f72f4f407f9a783375f78d9a69d5d3885
fixes: bz#1574606
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
--------
Ask for confirmation of the force-migration config for the remove-brick
operation. The cli will take input from the user before starting the
"remove-brick start" operation. The message/confirmation looks like the
following:
<Running remove-brick with cluster.force-migration enabled can result
in data corruption. It is safer to disable this option so that files
that receive writes during migration are not migrated. Files that are
not migrated can then be manually copied after the remove-brick commit
operation. Do you want to continue with your current
cluster.force-migration settings? (y/n)>
The question asked for COMMIT_FORCE has also been changed.
Fixes: bz#1572586
Change-Id: Ifdb6b108a646f50339dd196d6e65962864635139
Signed-off-by: Susant Palai <spalai@redhat.com>
--------
See https://review.gluster.org/#/c/19788/.
Use the print function from __future__.
Change-Id: If5075d8d9ca9641058fbc71df8a52aa35804cda4
updates: #411
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
--------
Problem: Sometimes the brick process crashes when a brick is stopped
while brick multiplexing is enabled.
Solution: The brick process was crashing because the rpc connection was
not cleaned up properly while brick multiplexing is enabled. With this
patch, after sending the GF_EVENT_CLEANUP notification to the (server)
xlator, the process waits until all rpc client connections for that
specific xlator are destroyed. Once the rpc connections for all clients
associated with that brick are destroyed in server_rpc_notify,
xlator_mem_cleanup is called for the brick xlator as well as all of its
child xlators. To avoid races at the time of cleanup, two new flags are
introduced in each xlator: cleanup_starting and call_cleanup.
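As a side note, a minimal standalone C model of the two-flag idea; the
struct and helpers here are simplified stand-ins for illustration, not
the actual GlusterFS types:

    #include <stdio.h>
    #include <stdbool.h>

    /* Simplified stand-in for an xlator; only the two new flags matter. */
    struct xlator {
        bool cleanup_starting; /* set once GF_EVENT_CLEANUP handling begins */
        bool call_cleanup;     /* set when all rpc connections are gone, so
                                  memory cleanup may actually run */
    };

    /* Called when the brick receives GF_EVENT_CLEANUP. */
    static void begin_cleanup(struct xlator *xl)
    {
        if (xl->cleanup_starting)
            return;                /* another path already started cleanup */
        xl->cleanup_starting = true;
        /* ... wait for server_rpc_notify to drop all client connections ... */
    }

    /* Called from the last rpc-disconnect notification. */
    static void last_connection_destroyed(struct xlator *xl)
    {
        xl->call_cleanup = true;   /* now memory cleanup is safe */
        printf("xlator_mem_cleanup() for brick and child xlators\n");
    }

    int main(void)
    {
        struct xlator brick = {0};
        begin_cleanup(&brick);
        begin_cleanup(&brick);         /* racing caller: no double cleanup */
        last_connection_destroyed(&brick);
        return 0;
    }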
BUG: 1544090
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Note: All test cases were run in a separate build
(https://review.gluster.org/#/c/19700/) with the same patch after
forcefully enabling brick mux; all test cases passed.
Change-Id: Ic4ab9c128df282d146cf1135640281fcb31997bf
updates: bz#1544090
--------
Problem:
The values for inode/fd were populated from the ctx received from the
server xlator.
Without brick multiplexing, every brick process belonged to a single
brick of a single volume, so searching by the server xlator and
populating from it worked.
With brick multiplexing, a number of bricks can be confined to a single
process. These bricks can be from different volumes too (if the
max-bricks-per-process option is used). If they are from different
volumes, using the server xlator to populate the status causes problems.
Fix:
Use the brick to validate and populate the inode/fd status.
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Change-Id: I2543fa5397ea095f8338b518460037bba3dfdbfd
fixes: bz#1566067
--------
1. If pre-op fails on all bricks, set lock->release to true in
afr_handle_lock_acquire_failure so that the GF_ASSERT in afr_unlock() does not
crash.
2. Added a missing 'return' after handling pre-op failure in
afr_transaction_perform_fop(), fixing a use-after-free issue.
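A hedged C sketch of the control-flow pattern behind fix 2, with
hypothetical names rather than the actual afr_transaction_perform_fop()
body:

    #include <stdbool.h>
    #include <stdlib.h>

    struct txn { bool pre_op_failed; };

    static void handle_lock_acquire_failure(struct txn *t) { free(t); }
    static int wind_fop(struct txn *t) { (void)t; return 0; }

    int perform_fop(struct txn *t)
    {
        if (t->pre_op_failed) {
            handle_lock_acquire_failure(t); /* releases the transaction */
            return 0; /* the previously missing return: without it we fall
                         through and touch t after it was freed */
        }
        return wind_fop(t);
    }

    int main(void)
    {
        struct txn *t = malloc(sizeof(*t));
        if (!t)
            return 1;
        t->pre_op_failed = true;
        return perform_fop(t);
    }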
Change-Id: If0627a9124cb5d6405037cab3f17f8325eed2d83
fixes: bz#1561129
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
--------
Note 1) we're not supposed to be using #!/usr/bin/env python; see
https://fedoraproject.org/wiki/Packaging:Guidelines?rd=Packaging/Guidelines#Shebang_lines
Note 2) we're also not supposed to be using #!/usr/bin/python; see
https://fedoraproject.org/wiki/Changes/Avoid_usr_bin_python_in_RPM_Build#Quick_Opt-Out
The previous patch (https://review.gluster.org/19767) tried to do too
much in one patch, so it was abandoned.
This patch does two things:
1) minor cleanup of configure(.ac) to explicitly use python2
2) change all the shebang lines to #!/usr/bin/python2, and add them
where they were missing, based on warnings emitted during rpmbuild.
In a follow-up patch python2 will eventually be changed to python3.
Before that, python2-isms (e.g. print, string.join(), etc.) need to be
converted to python3. Some of those can be rewritten in version-agnostic
python; e.g. print statements become print() with "from __future__ import
print_function". The python 2to3 utility will be used for some of those.
Aravinda has also given guidance, in the comments on the first patch,
about the changes.
updates: #411
Change-Id: I471730962b2526022115a1fc33629fb078b74338
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
--------
spec-files:
https://review.gluster.org/#/c/18854/
Overview:
* Cloudsync maintains three file states in its inode-ctx, i.e.
  1 - LOCAL,
  2 - REMOTE,
  3 - DOWNLOADING.
* A data-modifying fop is allowed only if the state is LOCAL.
  If the state is REMOTE or DOWNLOADING, the client will download the
  file or wait for the download initiated by another client to finish.
* Multiple downloads and uploads from different clients are synchronized
  by inodelk.
* In POSIX a state check is done (part of a different commit) before
  allowing the fop to continue. If the state is REMOTE/DOWNLOADING the
  fop is unwound with EREMOTE. The client will then download the file
  and continue with the fop again.
* Basic algorithm for a fop (say, a write fop; sketched in C below):
  - If LOCAL -> resume fop
  - If REMOTE ->
    - INODELK
    - STAT (this gets the state and heals it if needed)
    - DOWNLOAD
    - resume fop
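A minimal C model of this algorithm, assuming invented names (the enum
values mirror the states above; nothing here is the actual cloudsync
code):

    #include <stdio.h>

    /* Illustrative model of the three states and the write path. */
    enum cs_state { CS_LOCAL, CS_REMOTE, CS_DOWNLOADING };

    static void write_fop(enum cs_state *state)
    {
        if (*state != CS_LOCAL) {
            /* serialize with other clients, re-check state, then fetch */
            printf("INODELK -> STAT -> DOWNLOAD\n");
            *state = CS_LOCAL;
        }
        printf("resume write fop\n");
    }

    int main(void)
    {
        enum cs_state s = CS_REMOTE;
        write_fop(&s);   /* downloads first, then writes */
        write_fop(&s);   /* already LOCAL: writes directly */
        return 0;
    }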
Note:
* Developers will need to write plugins for download, based on the
remote store they choose. In phase-1, support will be added for
one remote store per volume. In the future, more options for multiple
remote stores will be explored.
TODOs:
- Implement stat/lookup/readdirp to return size info from xattr
- Make plugins configurable
- Implement unlink fop
- Add metrics collection
- Add sharding support
Design Contributions:
Aravinda V K <avishwan@redhat.com>
Amar Tumballi <amarts@redhat.com>
Ram Ankireddypalle <areddy@commvault.com>
Susant Palai <spalai@redhat.com>
updates: #387
Change-Id: Iddf711ee7ab4e946ae3e472ff62791a7b85e6d4b
Signed-off-by: Susant Palai <spalai@redhat.com>
--------
Change-Id: I4648816af908539efdc2528608aa2ebf7f0d0e2f
fixes: bz#1559004
BUG: 1559004
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
--------
glusterd maintains a boolean flag, 'port_registered', which is used to
determine whether a brick has completed its portmap sign-in process. This
flag is (re)set during pmap_signin and pmap_signout events. In the case
of brick multiplexing, this flag is the identifier for determining
whether the very first brick with which the process was spawned has
completed its sign-in process. However, on a glusterd restart, when a
brick is already identified as running, glusterd does a
pmap_registry_bind to ensure its portmap table is updated, but this flag
isn't set. That is fine in the non-brick-multiplex case, but it causes an
issue if the very first brick which came up as part of the process is
replaced; the subsequent brick attach will then fail. One way to validate
this is to create and start a volume, remove the first brick, and then
add-brick a new one. The add-brick operation will take a very long time,
and after that the volume status will show the status of all the other
bricks, with the new brick shown as down.
The solution is to set brickinfo->port_registered to true for all the
running bricks when brick multiplexing is enabled.
Change-Id: Ib0662d99d0fa66b1538947fd96b43f1cbc04e4ff
Fixes: bz#1560957
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
--------
Lookup-optimize has been shown to improve create
performance. The code has been in the project for several
years and is considered stable.
Enable it by default in order to test it in the
upstream regression runs.
Change-Id: Iab792979ee34f0af4713931e0b5b399c23f65313
updates: bz#1557435
BUG: 1557435
Signed-off-by: N Balachandran <nbalacha@redhat.com>
--------
This reverts commit a60fc2ddc03134fb23c5ed5c0bcb195e1649416b.
This commit was causing multiple tests to time out when brick
multiplexing is enabled. Further debugging found that even though the
volume stop transaction was converted into mgmt_v3 to allow the remote
nodes to follow the synctask framework when processing the command,
there are other callers of glusterd_brick_stop () which are not
synctask based.
Change-Id: I7aee687abc6bfeaa70c7447031f55ed4ccd64693
updates: bz#1545048
--------
Updates: #363
This new value (3) will try to wind read requests to the AFR child
having the fewest pending requests in its queue.
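A standalone sketch of what "least pending" selection could look like;
the counters and function names are illustrative, not AFR's actual data
structures:

    #include <stdio.h>

    /* Pick the replica with the fewest outstanding requests. */
    static int pick_least_loaded(const int *pending, int children)
    {
        int best = 0;
        for (int i = 1; i < children; i++)
            if (pending[i] < pending[best])
                best = i;
        return best;
    }

    int main(void)
    {
        int pending[3] = {7, 2, 5};  /* per-child queued requests */
        printf("read goes to child %d\n", pick_least_loaded(pending, 3));
        return 0;
    }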
Change-Id: If6bda2aac9bf7aec3fc39622f78659313c4b6508
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
--------
The xattr trusted.glusterfs.list-node-uuids was only sent to a single
subvolume. This was returning null uuids from the other subvolumes as
if they were down.
This fix forces that xattr to be requested from all subvolumes.
Change-Id: If62eb39a6857258923ba625e153d4ad79018ea2f
fixes: bz#1561406
Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
--------
Problem: There's a race between the last glusterfs_handle_terminate()
response sent to glusterd and the kill that happens immediately if the
terminated brick is the last brick.
Solution: When it is the last brick of the brick process, instead of
glusterfsd killing itself, glusterd will kill the process in the case of
brick multiplexing. The gf_attach utility is also changed accordingly.
Change-Id: I386c19ca592536daa71294a13d9fc89a26d7e8c0
fixes: bz#1545048
BUG: 1545048
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
--------
Commit fef9293 changed network.inode-lru-limit from 50000 to 200000 in
the nl-cache group profile, but the test wasn't changed to reflect it.
Change-Id: Ibb5fb0a387f160f6b726246b161a9a7b33135755
fixes: bz#1560589
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
--------
The inode table size is now set to 200000. Hence the test case, which was
expecting the old value of 50000, needs to be changed.
Change-Id: I8e44b1d0a2da1e8100bebd25f48bb36e2897b4f8
fixes: bz#1560393
Signed-off-by: Susant Palai <spalai@redhat.com>
--------
`find_library()` doesn't consider LD_LIBRARY_PATH on Python < 3.6.
Change-Id: Iee26085cb5d14061001f19f032c2664d69a378a8
BUG: 1450593
Signed-off-by: Niklas Hambüchen <mail@nh2.me>
--------
BUG: 1557932
Change-Id: I3783e41b3812267bc10c0d05d062a31396ce135b
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
--------
Problem:
When dd happens on a sharded replicate volume, all the writes on shards
happen through an anon-fd. When the writes don't come quickly enough, the
old anon-fd closes and a new fd gets created to serve the new writes.
open-fd-count is decremented only after the fd is closed as part of
fd_destroy(). So even when one fd is on the way to being closed, a new
fd will be created, and during this short period it appears as though
there are multiple fds opened on the file. AFR thinks another application
opened the same file and switches off eager-lock, leading to extra
latency.
Fix:
Have a different counter, active-fd-count, whose life cycle starts at
fd_bind() and ends just before fd_destroy().
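A toy C model of the distinction; the counter names mirror the
description above and the asynchronous close is only simulated:

    #include <assert.h>

    /* open-fd-count drops only in fd_destroy(), while active-fd-count
     * drops as soon as the fd is unbound. */
    struct inode_counts { int open_fds; int active_fds; };

    static void fd_bind(struct inode_counts *c)    { c->open_fds++; c->active_fds++; }
    static void fd_unbind(struct inode_counts *c)  { c->active_fds--; } /* fd going away */
    static void fd_destroy(struct inode_counts *c) { c->open_fds--; }   /* later, async */

    int main(void)
    {
        struct inode_counts c = {0, 0};
        fd_bind(&c);       /* old anon-fd serving writes           */
        fd_unbind(&c);     /* it idles out; active count drops now */
        fd_bind(&c);       /* new anon-fd created for fresh writes */
        /* With open_fds the count here would be 2, tricking AFR into
         * disabling eager-lock; active_fds correctly stays at 1. */
        assert(c.active_fds == 1);
        fd_destroy(&c);    /* old fd finally destroyed */
        return 0;
    }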
BUG: 1557932
Change-Id: I2e221f6030feeedf29fbb3bd6554673b8a5b9c94
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
--------
Change-Id: I1508a336a7a927b389a19815ef57001cdf29b109
BUG: 1558074
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
--------
Problem:
1) AFR's eager-lock only works for data transactions.
2) When there are conflicting writes, a write with a conflicting region
initiates an unlock of the eager-lock, leading to extra pre-ops and
post-ops on the file. When eager-lock goes off, it leads to extra fsyncs
for random-write workloads in AFR.
Solution (modeled after EC):
In EC, when there is a conflicting write, it waits for the current write
to complete before it winds the conflicted write. This leads to better
utilization of the network and disk, because we will not be doing extra
xattrops, FSYNCs and inodelk/unlock. Fd-based counters are moved to
inode-based counters.
I tried to model the solution on EC's locking, but it is not identical
for AFR because we had to keep backward compatibility.
Lifecycle of the lock:
==================
The first transaction is added to the inode->owners list and an inodelk
is sent on the wire. All subsequent transactions are put in the
inode->waiters list until the first transaction completes the inodelk
and the [f]xattrop entirely. Once the [f]xattrop also completes, every
request in the inode->waiters list is checked for conflicts with the
existing locks in the inode->owners list; if there is no conflict, it is
added to the inode->owners list and resumed to perform its transaction.
When these transactions complete the fop phase, they are moved to the
inode->post_op list and resume the transactions that were paused because
of conflicts. Post-op and unlock will not be issued on the wire until
the transaction is the last one on that inode. The last transaction, when
it has to perform the post-op, can choose to sleep for the
delayed-post-op-secs value. During that time, if any other transaction
comes, it wakes up the sleeping transaction, takes over ownership of the
lock, and the cycle continues. If delayed-post-op-secs expires, the timer
thread wakes up the sleeping transaction, which sets lock->release to
true and starts the post-op and then the unlock. During this time, if
any other transactions come, they are put in the inode->frozen list.
Once the previous unlock completes, the frozen list is moved to the
waiters list, the first element of that waiters list is moved to the
owners list, the lock is attempted, and the cycle continues. This is the
general idea. There is logic at the time of delaying, at the time of a
new transaction, or in the flush fop to wake up existing sleeping
transactions or to choose whether to delay a transaction, etc., which is
subject to change based on future enhancements. (See the list-transition
sketch below.)
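A compressed, illustrative trace of the list transitions described above;
the real code keeps linked lists on the inode, this just names the
states:

    #include <stdio.h>

    /* Toy model of where a transaction sits during the eager-lock
     * lifecycle. */
    enum txn_list { OWNERS, WAITERS, POST_OP, FROZEN };

    static const char *name[] = {"owners", "waiters", "post_op", "frozen"};

    static void move(int txn, enum txn_list to, const char *why)
    {
        printf("txn%d -> inode->%s (%s)\n", txn, name[to], why);
    }

    int main(void)
    {
        move(1, OWNERS,  "first txn: wind inodelk + [f]xattrop");
        move(2, WAITERS, "arrived before txn1 finished lock+xattrop");
        move(2, OWNERS,  "no conflict with current owners: resume fop");
        move(1, POST_OP, "fop phase done; delay post-op/unlock");
        move(3, FROZEN,  "arrived while unlock was already in progress");
        move(3, WAITERS, "unlock finished: frozen list drains to waiters");
        return 0;
    }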
Fixes: #418
BUG: 1549606
Change-Id: I88b570bbcf332a27c82d2767dfa82472f60055dc
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
--------
Problem:
Whenever we read data from a file over NFS, NFS reads more data than
requested and caches it. Based on the stat information it checks whether
the cached/pre-read data is still valid.
Consider a 4 + 2 EC volume with all the bricks on different nodes.
In EC, with the round-robin read policy, reads are sent to different
sets of data bricks. This balances the read fops across all the bricks
and avoids overloading the same set of bricks.
Due to small differences in clock speed, it is possible that we get
minor differences in atime, mtime or ctime across bricks. That can cause
a different stat to be returned to NFS, based on which NFS will discard
cached/pre-read data that has not actually changed and could have been
used.
Solution:
Change the read policy for EC to gfid-hash. That forces all reads to go
to the same set of bricks.
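A hedged sketch of why gfid-hash gives stable stats: every client derives
the same brick set from the file's gfid. The hash below is a toy, not the
actual EC policy code:

    #include <stdint.h>
    #include <stdio.h>

    /* Same gfid -> same starting brick, on every client, every time. */
    static int read_subset(const uint8_t gfid[16], int data_bricks)
    {
        uint32_t h = 0;
        for (int i = 0; i < 16; i++)
            h = h * 31 + gfid[i];
        return h % data_bricks;
    }

    int main(void)
    {
        uint8_t gfid[16] = {0xde, 0xad, 0xbe, 0xef};
        /* 4 + 2 volume: reads for this file always hit the same set */
        printf("reads start at data brick %d\n", read_subset(gfid, 4));
        return 0;
    }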
Change-Id: I825441cc519e94bf3dc3aa0bd4cb7c6ae6392c84
BUG: 1554743
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
--------
In the jenkins regression tests, brick multiplexing is enabled by the
is_brick_mx_enabled function and not by setting the
cluster.brick-multiplex option. Hence check the brick count and the
brick logs; this fixes the failure.
Change-Id: Ibb2ed8fbffd3765f283da741689304a5579d447c
BUG: 1555167
Signed-off-by: Varsha Rao <varao@redhat.com>
--------
Self-heal creates a thread per brick to sweep the index looking for
files that need to be healed. These threads are started before the
volume comes online, so nothing is done but waiting for the next
sweep. This happens once per minute.
When a replace brick command is executed, the new graph is loaded and
all index sweeper threads started. When all bricks have reported, a
getxattr request is sent to the root directory of the volume. This
causes a heal on it (because the new brick doesn't have good data),
and marks its contents as pending to be healed. This is done by the
index sweeper thread on the next round, one minute later.
This patch solves this problem by waking all index sweeper threads
after a successful check on the root directory.
Additionally, the index sweep thread scans the index directory
sequentially, but it might happen that after healing a directory entry
more index entries are created but skipped by the current directory
scan. This causes the remaining entries to be processed on the next
round, one minute later. The same can happen in the next round, so
the heal runs in bursts and takes a long time to finish, especially
on volumes with many directory levels.
This patch solves this problem by immediately restarting the index
sweep if a directory has been healed.
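A minimal model of the second fix, assuming a hypothetical sweep
function:

    #include <stdbool.h>
    #include <stdio.h>

    /* Instead of deferring newly created index entries to the next
     * one-minute round, restart the scan whenever a directory was
     * healed during the current pass. */
    static bool sweep_index_once(int round)
    {
        printf("sweep round %d\n", round);
        return round < 2;   /* pretend the first two rounds healed a dir */
    }

    int main(void)
    {
        int round = 0;
        bool healed_a_directory;
        do {
            healed_a_directory = sweep_index_once(round++);
        } while (healed_a_directory);   /* re-scan now, no 60s wait */
        return 0;
    }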
Change-Id: I58d9ab6ef17b30f704dc322e1d3d53b904e5f30e
BUG: 1547662
Signed-off-by: Xavi Hernandez <jahernan@redhat.com>
--------
This test does:
1. mount a volume
2. kill a brick in the volume
3. mkdir (/somedir)
In my local tests and in [1], I see that mkdir in step 3 fails because
there is no dht-layout on the root directory.
The reason, I think, is that by the time the first lookup on "/" hit
dht, a brick had been killed as per step 2. This means the layout was
not healed for "/", and since this is a new volume, no layout is present
on it. Note that the first lookup done on "/" by fuse-bridge is not
synchronized with the parent process of the daemonized glusterfs mount
completing. IOW, by the time the glusterfs command has executed, there
is no guarantee that the lookup on "/" is complete. So, if step 2 races
ahead of fuse_first_lookup on "/", we end up with an invalid dht-layout
on "/", resulting in failures.
Doing an operation like ls makes sure that the lookup on "/" is
completed before we kill a brick.
Change-Id: Ie0c4e442c4c629fad6f7ae850437e3d63fe4bea9
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
BUG: 1543279
--------
Because bug-924726.t depends on netstat, the test failed before. This
got resolved by adding a corresponding check to run-tests.sh.
Enabled the test again.
Change-Id: I70c9bff03379ed9ee8cd95842c3501dfb50b8e86
BUG: 1312830
Signed-off-by: Sven Fischer <sven@fischer-abc.de>
--------
Instead, send SIGTERM (the default, 15) first, and only at the end send
SIGKILL. If SIGKILL is sent directly, tools like valgrind, lcov etc. are
not able to process their information properly, and we miss their
results in many tests.
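The change itself is in the test harness; as a generic illustration, the
same terminate-with-grace pattern in C:

    #include <signal.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* SIGTERM lets tools like valgrind/lcov flush their data;
     * SIGKILL is only the last resort. */
    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {          /* child: stand-in for a test process */
            pause();
            _exit(0);
        }
        kill(pid, SIGTERM);                      /* polite request first */
        sleep(1);                                /* grace period         */
        if (waitpid(pid, NULL, WNOHANG) == 0) {  /* still running?       */
            kill(pid, SIGKILL);                  /* then force           */
            waitpid(pid, NULL, 0);
        }
        return 0;
    }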
BUG: 1549000
Change-Id: I664de12ee7dbf47eb98b8141004cd51f6006b314
Signed-off-by: Amar Tumballi <amarts@redhat.com>
--------
The subdirectories are expected to be present for a subdir mount to be
successful. If they are not, client_handshake() itself fails. When a
volume is about to get mounted for the first time, this is easier to
handle: if the directory is not present on one brick, then it is mostly
not present on any other brick either. In the case of add-brick, the
directory is not present on the new brick, and there is no chance of
healing it from the subdirectory mount, because in those clients the
subdir itself is the 'root' ('/') of the filesystem. Hence we need a
volume-level mount to heal the directory before connections can succeed.
This patch takes care of that by healing the directories which are
expected to be mounted as subdirectories from the volume-level mount
point.
Change-Id: I2c2ac7b7567fe209aaa720006d09b68584d0dd14
BUG: 1549915
Signed-off-by: Amar Tumballi <amarts@redhat.com>
--------
We are not seeing much improvement with this change. So removing the
feature so that it doesn't need to be maintained anymore.
Fixes: #414
Change-Id: Ic7969b151544daf2547bd262a9fa03f575626411
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
--------
Change-Id: Ib74354f57a18569762ad45a51f182822a2537421
BUG: 1468483
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
--------
For as long as a shard's inode is in priv->lru_list, it should have a
non-zero ref-count. This patch achieves that by taking a ref on the
inode when it is added to the lru list. When it is time for the inode to
be evicted from the lru list, a corresponding unref is done.
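A toy refcount model of this invariant, with plain ints instead of
gluster's inode table:

    #include <assert.h>

    /* The lru list itself holds a ref, so a shard inode can never hit
     * zero refs while still sitting on the list. */
    struct inode { int ref; };

    static void inode_ref(struct inode *i)   { i->ref++; }
    static void inode_unref(struct inode *i) { i->ref--; }

    static void lru_add(struct inode *i)   { inode_ref(i);   /* list's ref */ }
    static void lru_evict(struct inode *i) { inode_unref(i); /* drop it    */ }

    int main(void)
    {
        struct inode shard = { .ref = 1 };   /* caller's ref */
        lru_add(&shard);
        inode_unref(&shard);                 /* caller lets go */
        assert(shard.ref == 1);              /* still pinned by lru list */
        lru_evict(&shard);
        assert(shard.ref == 0);              /* now it may be destroyed */
        return 0;
    }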
Change-Id: I289ffb41e7be5df7489c989bc1bbf53377433c86
BUG: 1468483
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
--------
This patch fixes the namespace test failure when brick multiplexing is
enabled, by changing the log file name, since only one log file is
generated for all bricks when brick multiplexing is enabled.
Change-Id: Ide941946e5e1b2676e7139e1b5bf6b93b93c0815
Signed-off-by: Varsha Rao <varao@redhat.com>
--------
The following release-3.8-fb branch patch is upstreamed:
> features/namespace: Add namespace xlator and link into brick graph
> Commit ID: dbd30776f26e
> https://review.gluster.org/#/c/18041/
> By Michael Goulet <mgoulet@fb.com>
Changes in this patch:
Remove the extra config.h and namespace.h includes in namespace.c
Add default_getspec_cbk to libglusterfs.sym
Rename dict_for_each to dict_foreach_inline
Remove fd.h header file stack.h
Add test cases for truncate, open and symlink
This patch is required to forward port io-threads namespace patch.
Updates: #401
Change-Id: Ib88c95b89eecee9b8957df8a4c8712c899c761d1
Signed-off-by: Varsha Rao <varao@redhat.com>
--------
There are a few tests that take more time on regression nodes.
Change-Id: If126d5ebd422cd6d99125db040e74f0d104af7bc
Signed-off-by: Nigel Babu <nigelb@redhat.com>
--------
This uses the 'timeout' command with a default of 300 seconds. Right
now, there is just one test which takes more than that on a properly
set up machine.
Ideally the default would be something like 30 seconds, and if a test is
supposed to take more than that, the owner should knowingly add a
timeout line to the test. That way, it also makes test writers think
about a time limit.
Change-Id: I747005ce1f208aeb2ecbf899e8feea487ecd21a0
Signed-off-by: Amar Tumballi <amarts@redhat.com>
--------
In bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t,
check for the peer count after starting the glusterd instance on node 2.
Change-Id: I3f92013719d94b6d92fb5db25efef1fb4b41d510
BUG: 1540607
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
--------
Updates: #389
Change-Id: Ic71632722effe4b8855d5de3e65688efd9afe1e3
Signed-off-by: Kinglong Mee <mijinlong@open-fs.com>
--------
Updates: #389
Change-Id: I8faea0828921fb17f05f7321c3cb01747373f21e
Signed-off-by: Kinglong Mee <mijinlong@open-fs.com>
--------
As in nfs-ganesha, wcc data containing pre/post attributes is returned
in read/write rpc replies. Right now nfs-ganesha gets those attributes
with two getattrs around the real read/write.
But gluster already returns pre/post attributes from glusterfsd; those
attributes are just skipped in syncop/gfapi. If gfapi returned them, the
upper user (nfs-ganesha) could use them directly without any duplicate
getattr.
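A hedged sketch of the idea, with invented struct and function names
rather than the real gfapi signatures:

    #include <stdio.h>
    #include <time.h>

    /* Hand the pre/post attributes that the brick already produces back
     * to the caller with the write result, so an NFS server can fill
     * its wcc_data without two extra getattrs. */
    struct iatt_pair { time_t pre_mtime; time_t post_mtime; };

    static long write_with_attrs(const char *buf, long len,
                                 struct iatt_pair *out)
    {
        out->pre_mtime  = 1000; /* as returned by the brick pre-write  */
        out->post_mtime = 1001; /* and post-write                      */
        (void)buf;
        return len;             /* bytes written */
    }

    int main(void)
    {
        struct iatt_pair attrs;
        long ret = write_with_attrs("x", 1, &attrs);
        printf("wrote %ld byte(s); mtime %ld -> %ld (no extra getattr)\n",
               ret, (long)attrs.pre_mtime, (long)attrs.post_mtime);
        return 0;
    }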
Updates: #389
Change-Id: I7b643ae4241cfe2aeb17063de00192d81674024a
Signed-off-by: Kinglong Mee <mijinlong@open-fs.com>
--------
To reduce the overall time taken by every regression job for all
glusterd test cases, avoid some duplicate tests by clubbing similar test
cases into one.
The real time taken for all glusterd regression jobs without this patch
is 1959 seconds; with this patch it is 1059 seconds.
See the document below for reference.
https://docs.google.com/document/d/1u8o4-wocrsuPDI8BwuBU6yi_x4xA_pf2qSrFY6WEQpo/edit?usp=sharing
Change-Id: Ib14c61ace97e62c3abce47230dd40598640fe9cb
BUG: 1530905
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
--------
In one of the cases in commit cb0339f, there is one particular code path
where, after removing the old snap, the new snap version was not
written; this resulted in one of the tests failing spuriously.
Change-Id: I3e83435fb62d6bba3bbe227e40decc6ce37ea77b
BUG: 1540607
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
--------
The following release-3.8-fb branch patch is upstreamed:
> io-stats: Expose io-thread queue depths
> Commit ID: 69509ee7d2
> https://review.gluster.org/#/c/18143/
> By Shreyas Siravara <sshreyas@fb.com>
Changes in this patch:
- Replace iot_pri_t with gf_fop_pri_t
- Replace IOT_PRI_{HI, LO, NORMAL, MAX, LEAST} with
GF_FOP_PRI_{HI, LO, NORMAL, MAX, LEAST}
- Use dict_unref() instead of dict_destroy()
This patch is required to forward port io-threads namespace patch.
Updates: #401
Change-Id: I1b47a63185a441a30fbc423ca1015df7b36c2518
Signed-off-by: Varsha Rao <varao@redhat.com>
--------
Test to check that non-root users can delete stale
linkto files
Change-Id: Ic9bc76bc485cab839927af60cfce78a058eee2e4
BUG: 1542318
Signed-off-by: N Balachandran <nbalacha@redhat.com>
--------
For more details on this issue see
https://github.com/gluster/glusterfs/issues/308
Solution:
This is a restrictive solution where a file will not be migrated
if a client writes to it during the migration. This does not
check if the writes from the rebalance and the client actually
do overlap.
If dht_writev_cbk finds that the file is being migrated (PHASE1)
it will set an xattr on the destination file indicating the file
was updated by a non-rebalance client.
Rebalance checks if any other client has written to the dst file
and aborts the file migration if it finds the xattr.
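A minimal model of the check, with the xattr reduced to a boolean and
names invented for illustration:

    #include <stdbool.h>
    #include <stdio.h>

    /* A client write during PHASE1 marks the destination; rebalance
     * checks the mark before committing the migration. */
    struct dst_file { bool written_by_client; };

    static void client_writev_cbk(struct dst_file *dst, bool in_phase1)
    {
        if (in_phase1)
            dst->written_by_client = true; /* "set an xattr on the dst" */
    }

    static void rebalance_commit(struct dst_file *dst)
    {
        if (dst->written_by_client)
            printf("abort migration: client wrote during PHASE1\n");
        else
            printf("commit migration\n");
    }

    int main(void)
    {
        struct dst_file dst = { false };
        client_writev_cbk(&dst, true);  /* a write raced with migration */
        rebalance_commit(&dst);         /* -> abort                     */
        return 0;
    }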
updates gluster/glusterfs#308
Change-Id: I73aec28bc9dbb8da57c7425ec88c6b6af0fbc9dd
Signed-off-by: Susant Palai <spalai@redhat.com>
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Signed-off-by: N Balachandran <nbalacha@redhat.com>
--------
* In the patch which got tested in the experimental branch, there was a
code cleanup involved which missed setting a local variable; this led
to a crash immediately after enabling the feature.
* Added a sanity test case to validate all the fops of sdfs.
Updates: #397
Change-Id: I7e0bebfc195c344620577cb16c1afc5f4e7d2d92
Signed-off-by: Amar Tumballi <amarts@redhat.com>
--------
Problem:
We currently don't have a roll-back/undoing of post-ops if quorum is not
met. Though the FOP is still unwound with failure, the xattrs remain on
the disk. Due to these partial post-ops and partial heals (healing only
when 2 bricks are up), we can end up in split-brain purely from the afr
xattrs' point of view, i.e. each brick is blamed by at least one of the
others. These scenarios are hit when there are frequent
connects/disconnects of the client/shd to the bricks while I/O or heal
is in progress.
Fix:
Instead of undoing the post-op, pick a source based on the xattr values.
If 2 bricks blame one, the blamed one must be treated as sink.
If there is no majority, all are sources. Once we pick a source,
self-heal will then do the heal instead of erroring out due to
split-brain.
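A toy model of the source/sink decision, with the pending xattrs reduced
to a blame matrix (illustrative only):

    #include <stdio.h>

    /* blame[i][j] != 0 means brick i blames brick j. A brick blamed by
     * both others is a sink; with no majority, all bricks are sources. */
    int main(void)
    {
        int blame[3][3] = {
            {0, 0, 1},   /* brick0 blames brick2 */
            {0, 0, 1},   /* brick1 blames brick2 */
            {1, 0, 0},   /* brick2 blames brick0 */
        };
        for (int j = 0; j < 3; j++) {
            int blamers = 0;
            for (int i = 0; i < 3; i++)
                if (i != j && blame[i][j])
                    blamers++;
            printf("brick%d: %s\n", j, blamers >= 2 ? "sink" : "source");
        }
        return 0;
    }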
Change-Id: I3d0224b883eb0945785ade0e9697a1c828aec0ae
BUG: 1539358
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
--------
Change-Id: Ifbf5e628ccb9a0ecb285f5884a41e70d935316bd
Signed-off-by: Csaba Henk <csaba@redhat.com>
--------
Updates: #242
Change-Id: I767e574a26e922760a7130bd209c178d74e8cf69
Signed-off-by: Poornima G <pgurusid@redhat.com>
--------
These tests are currently prone to issues that need further debugging
and fixing.
BUG: 1537602
Change-Id: Ic59ca620925c6f43948b8a751eaddb571b791969
Signed-off-by: Nigel Babu <nigelb@redhat.com>