path: root/xlators
Commit message | Author | Age | Files | Lines
* cluster/dht: EBADF handling for fremovexattr and fsetxattr (N Balachandran, 2017-10-03, 3 files, -3/+44)

    Add EBADF handling for dht_fremovexattr and dht_fsetxattr.

    > BUG: 1476665
    > Signed-off-by: N Balachandran <nbalacha@redhat.com>
    > Reviewed-on: https://review.gluster.org/17999
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    (cherry picked from commit 747a08d34e2a1e94d7fce68a3577370288bb1955)

    Change-Id: Ide0d5812dae79655d2565157e5baabcd753b4309
    BUG: 1467010
    Signed-off-by: N Balachandran <nbalacha@redhat.com>
* mount/fuse: Make event-history feature configurable (Krutika Dhananjay, 2017-10-02, 3 files, -14/+41)

    ... and disable it by default.

    Backport of:
    > Change-Id: Ia533788d309c78688a315dc8cd04d30fad9e9485
    > Reviewed-on: https://review.gluster.org/18242
    > BUG: 1467614
    > cherry-picked from commit 956d43d6e89d40ee683547003b876f1f456f03b6

    This is because having it disabled seems to improve performance. This could be due to the lock contention by the different epoll threads on the circular buff lock in the fop cbks just before writing their response to /dev/fuse.

    Just to provide some data - wrt an ovirt-gluster hyperconverged environment, I saw an increase in IOPs by 12K with event-history disabled for a random read workload.

    Usage:
        mount -t glusterfs -o event-history=on $HOSTNAME:$VOLNAME $MOUNTPOINT
    OR
        glusterfs --event-history=on --volfile-server=$HOSTNAME --volfile-id=$VOLNAME $MOUNTPOINT

    Change-Id: Ia533788d309c78688a315dc8cd04d30fad9e9485
    BUG: 1495430
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
* afr: auto-resolve split-brains for zero-byte files (Ravishankar N, 2017-10-02, 3 files, -0/+79)

    Backport of https://review.gluster.org/#/c/18283/

    Problems:
    As described in BZ 1491670, renaming hardlinks can result in data/mdata split-brain of the DHT link-to files (T files) without any mismatch of data and metadata.
    As described in BZ 1486063, for a zero-byte file with only dirty bits set, the arbiter brick will likely be chosen as the source brick.

    Fix:
    For zero byte files in split-brain, pick the first brick as
    a) data source if file size is zero on all bricks,
    b) metadata source if metadata is the same on all bricks.
    In the arbiter case, if file size is zero on all bricks and there are no pending afr xattrs, pick the 1st brick as data source.

    Change-Id: I0270a9a2f97c3b21087e280bb890159b43975e04
    BUG: 1496321
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reported-by: Rahul Hinduja <rhinduja@redhat.com>
    Reported-by: Mabi <mabi@protonmail.ch>
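    The source-selection rule above reduces to a small decision; a rough sketch in C (hypothetical helper and field names, not the actual afr code):

        /* Illustrative only: pick the first brick as source for a zero-byte
         * split-brain, mirroring the rule described in this commit message. */
        #include <stdbool.h>
        #include <stdint.h>

        struct brick_state {
            uint64_t size;          /* file size reported by this brick        */
            bool     pending_xattr; /* pending afr xattrs present on the brick */
        };

        /* Returns the index of the data source, or -1 if nothing can be done. */
        static int
        pick_zero_byte_source(const struct brick_state *bricks, int count, bool arbiter)
        {
            bool all_zero = true, no_pending = true;
            for (int i = 0; i < count; i++) {
                all_zero   = all_zero && (bricks[i].size == 0);
                no_pending = no_pending && !bricks[i].pending_xattr;
            }
            if (!all_zero)
                return -1;
            if (arbiter && !no_pending)
                return -1;          /* arbiter case also needs no pending xattrs */
            return 0;               /* first brick becomes the data source       */
        }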
* dht: add FOP check to dht_file_setattr_cbk (Ravishankar N, 2017-09-29, 1 file, -2/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | Problem: bug-797171.7 loaded error-gen xlator on the brick which sent EBADF for a non fd-based fop, namely setattr. This caused dht_check_and_open_fd_on_subvol_task() to crash as local->fd was NULL. Fix: Call dht_check_and_open_fd_on_subvol_task() from dht_file_setattr_cbk only for dht_fsetattr and not dht_setattr or dht_setattr2 > Reviewed-on: https://review.gluster.org/18208 > Smoke: Gluster Build System <jenkins@build.gluster.org> > Reviewed-by: Susant Palai <spalai@redhat.com> > Reviewed-by: Amar Tumballi <amarts@redhat.com> > Reviewed-by: Raghavendra G <rgowdapp@redhat.com> > Reviewed-by: N Balachandran <nbalacha@redhat.com> > CentOS-regression: Gluster Build System <jenkins@build.gluster.org> (cherry picked from commit 47188e9eac59de416a5c86c7ec7540ed6aaa1c98) Signed-off-by: Ravishankar N <ravishankar@redhat.com> Change-Id: Iab4999e213bf2065804f3f8237e470ad454e3c99 BUG: 1497122
* posix: add sanity checks for removing the gfid symlink for directories (Ravishankar N, 2017-09-23, 1 file, -14/+56)
| | | | | | | | | | | | | Backport of https://review.gluster.org/17945 ...during mkdir and rmdir. Otherwise, during entry self-heal, the directory could be left out without a .glusterfs symlink causing fops like opendir, readdir to fail. The only chance the missing symlink will be created is when a fresh lookup comes on it. Change-Id: I2e1cf1bce8962ea80187edd8f6d73e0a09cf9f8e BUG: 1491966 Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* afr: Prevent null gfids in self-heal entry re-creation (Ravishankar N, 2017-09-17, 1 file, -3/+11)
| | | | | | | | | | | | | | | | | | > Reviewed-on: https://review.gluster.org/17981 > Smoke: Gluster Build System <jenkins@build.gluster.org> > CentOS-regression: Gluster Build System <jenkins@build.gluster.org> > Reviewed-by: Amar Tumballi <amarts@redhat.com> > Reviewed-by: Karthik U S <ksubrahm@redhat.com> (cherry picked from commit bead74a6e085001225bc0704bad1a5db36dd75a1) Change-Id: I5acb8bd0a19fc4e764d61e349bb690b5236ee610 BUG: 1491985 Signed-off-by: Ravishankar N <ravishankar@redhat.com> Reviewed-on: https://review.gluster.org/18300 Smoke: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* afr: heal metadata in discover code path (Ravishankar N, 2017-09-17, 2 files, -17/+46)
| | | | | | | | | | | | | | | | | | | | | | | | | | | ****************************************************** Backport of: https://review.gluster.org/18202 Also added loc_is_nameless() to libglusterfs since the patch that introduced it in master was not backported to release-3.10. Note: 18202 is a squash of 17850 and 18187 in master. ****************************************************** During graph switch, if fuse sends nameless (gfid) lookups, afr takes the discover code path to serve it. If there are pending metadata heals, they do not happen unless an inode refresh happens as a part of discover (which is not guaranteed to happen always). This patch fixes it by attempting metadata heal as a part of discover, just like how it is done in lookup code path. Change-Id: I87c493045b9225741cad173bf3f645848697032e BUG: 1492010 Signed-off-by: Ravishankar N <ravishankar@redhat.com> Reviewed-on: https://review.gluster.org/18304 Smoke: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* glusterd: fix invalid memory reference returned (Xavier Hernandez, 2017-09-17, 1 file, -2/+9)
| | | | | | | | | | | | | | | | | | > BUG: 1490897 > Reviewed-on: https://review.gluster.org/18263 > Smoke: Gluster Build System <jenkins@build.gluster.org> > CentOS-regression: Gluster Build System <jenkins@build.gluster.org> > Reviewed-by: Jeff Darcy <jeff@pl.atyp.us> > Reviewed-by: Gaurav Yadav <gyadav@redhat.com> Change-Id: I0823c7b33060b48040c1d86ad346a5f6e15bc190 BUG: 1491166 Signed-off-by: Xavier Hernandez <xhernandez@datalab.es> Reviewed-on: https://review.gluster.org/18281 Smoke: Gluster Build System <jenkins@build.gluster.org> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
* posix: add null gfid checks (Ravishankar N, 2017-09-17, 4 files, -0/+50)
| | | | | | | | | | | | | | | | | | | | Backport of https://review.gluster.org/17975 ...in file/dir creation and lookup codepaths. The check is relaxed for fops coming from trash xlator at the moment until trash has client side logic to send the create fops with gfid-req. Also fixed the missing trash pid assignment in creates sent by trash xlator. Without this, truncated files won't be moved to .trashcan. Change-Id: Ieddd7f0634850e7c7010e4fbb4ad1eead35888c8 BUG: 1491985 Signed-off-by: Ravishankar N <ravishankar@redhat.com> Reviewed-on: https://review.gluster.org/18302 Smoke: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* afr: check validity of afr_reply (Ravishankar N, 2017-09-17, 5 files, -47/+71)
| | | | | | | | | | | | | | | | | | | | | | ...in various self-heal code paths. Originally found by Pranith in __afr_selfheal_name_impunge () Also change __afr_selfheal_assign_gfid() to send lookup only on those bricks that don't have a gfid matching that of the source. > Reviewed-on: https://review.gluster.org/18065 > Smoke: Gluster Build System <jenkins@build.gluster.org> > CentOS-regression: Gluster Build System <jenkins@build.gluster.org> (cherry picked from commit d594900dbca92c356152be65fce16f77c402117c) Change-Id: I70a2ccd750a2af92c5fc36e0eefb2b6125404b4a BUG: 1491995 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Signed-off-by: Ravishankar N <ravishankar@redhat.com> Reviewed-on: https://review.gluster.org/18303 Smoke: Gluster Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* features/shard: Return aggregated size in stbuf of LINK fop (Krutika Dhananjay, 2017-09-17, 1 file, -2/+42)
| | | | | | | | | | | | | | | | Backport of: > Change-Id: I42df7679d63fec9b4c03b8dbc66c5625f097fac0 > Reviewed-on: https://review.gluster.org/18209 > BUG: 1488546 > cherry-picked from 91430817ce5bcbeabf057e9c978485728a85fb2b Change-Id: I42df7679d63fec9b4c03b8dbc66c5625f097fac0 BUG: 1488719 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: https://review.gluster.org/18212 Smoke: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* features/shard: Increment counts in locks (Pranith Kumar K, 2017-09-17, 1 file, -2/+10)

    Backport of https://review.gluster.org/18203

    Problem:
    Because create_count/eexist_count are incremented without locks, all the shards may not be created because call_count will be lesser than what it needs to be. This can lead to a crash in shard_common_inode_write_do() because the inode on which we want to do fd_anonymous() is NULL.

    Fix:
    Increment the counts in frame->lock.

    > Change-Id: Ibc87dcb1021e9f4ac2929f662da07aa7662ab0d6
    > BUG: 1488354
    > Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>

    Change-Id: Ibc87dcb1021e9f4ac2929f662da07aa7662ab0d6
    BUG: 1488391
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: https://review.gluster.org/18206
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
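    The fix amounts to moving the shared counters under the frame lock so concurrent callbacks cannot lose increments; a minimal pthread-based sketch (made-up struct and function names, not the shard xlator's actual code):

        /* Counters shared by concurrent callbacks must be updated under a
         * lock, otherwise increments race and call_count ends up short.     */
        #include <errno.h>
        #include <pthread.h>

        struct shard_local_sketch {
            pthread_mutex_t lock;            /* stands in for frame->lock    */
            int             create_count;
            int             eexist_count;
        };

        static void
        shard_mknod_cbk_sketch(struct shard_local_sketch *local, int op_ret, int op_errno)
        {
            pthread_mutex_lock(&local->lock);
            {
                if (op_ret == 0)
                    local->create_count++;
                else if (op_errno == EEXIST)
                    local->eexist_count++;
            }
            pthread_mutex_unlock(&local->lock);
        }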
* glusterd: glusterd fails to start when peer's network interface is down (Gaurav Yadav, 2017-09-17, 3 files, -2/+17)
| | | | | | | | | | | | | | | | | | | | | | | | | | | Problem: glusterd fails to start on nodes where glusterd tries to come up even before network is up. Fix: On startup glusterd tries to resolve brick path which is based on hostname/ip, but in the above scenario when network interface is not up, glusterd is not able to resolve the brick path using ip_address or hostname With this fix glusterd will use UUID to resolve brick path. >Reviewed-on: https://review.gluster.org/17813 >Smoke: Gluster Build System <jenkins@build.gluster.org> >Reviewed-by: Prashanth Pai <ppai@redhat.com> >CentOS-regression: Gluster Build System <jenkins@build.gluster.org> >Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Change-Id: Icfa7b2652417135530479d0aa4e2a82b0476f710 BUG: 1482857 Signed-off-by: Gaurav Yadav <gyadav@redhat.com> Reviewed-on: https://review.gluster.org/18063 Reviewed-by: Prashanth Pai <ppai@redhat.com> Smoke: Gluster Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* cluster/dht: Check for open fd only on EBADF (N Balachandran, 2017-09-17, 4 files, -160/+160)

    DHT fd based fops used to check if the fd was open on the cached subvol before winding the call. However, this introduced a performance regression of about 30% for reads. This check was introduced to handle cases where files were migrated while IOs were happening. As this is not the common case, dht will now check if the fd is open on the cached subvol only if the call fails with EBADF. This will prevent a performance hit when a rebalance is not running.

    > BUG: 1476665
    > Signed-off-by: N Balachandran <nbalacha@redhat.com>
    > Reviewed-on: https://review.gluster.org/17976
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Amar Tumballi <amarts@redhat.com>
    > Reviewed-by: Susant Palai <spalai@redhat.com>
    > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

    Change-Id: I2035a858d63c3fcd22bb634055bbb0ad01686808
    BUG: 1467010
    Signed-off-by: N Balachandran <nbalacha@redhat.com>
    Reviewed-on: https://review.gluster.org/18057
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
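    The control flow described here is an optimistic wind with a retry path, roughly as below (simplified pseudo-C with invented helper names; the real DHT code operates on call frames and fd contexts):

        /* Sketch of "check for open fd only on EBADF": wind the fop first and
         * only fall back to opening the fd on the cached subvol if it fails. */
        #include <errno.h>

        int dht_wind_fop(void *fd, void *subvol);           /* placeholder helpers,  */
        int dht_open_fd_on_subvol(void *fd, void *subvol);   /* not real gluster APIs */

        static int
        dht_fd_fop_sketch(void *fd, void *cached_subvol)
        {
            int ret = dht_wind_fop(fd, cached_subvol);
            if (ret == -EBADF) {
                /* fd not yet opened on the cached subvol (file was migrated
                 * while IO was in flight); open it there and retry once.     */
                ret = dht_open_fd_on_subvol(fd, cached_subvol);
                if (ret == 0)
                    ret = dht_wind_fop(fd, cached_subvol);
            }
            return ret;
        }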
* cluster/ec: Node uuid xattr support update for EC (Sunil Kumar Acharya, 2017-09-13, 2 files, -6/+23)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Problem: The change in EC to return list of node uuids for GF_XATTR_NODE_UUID_KEY was causing problems with geo-rep. Fix: This patch will allow to get the single node uuid as it was doing before with the key "GF_XATTR_NODE_UUID_KEY", and will also allow to get the list of node uuids by using a new key "GF_XATTR_LIST_NODE_UUIDS_KEY". This will solve the problem with geo-rep and any other features which were depending on this. >BUG: 1462790 >Change-Id: I2d9214a9658d4a41a3d6de08600884d2bda5f3eb >Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com> >Reviewed-on: https://review.gluster.org/17594 >Smoke: Gluster Build System <jenkins@build.gluster.org> >Reviewed-by: Xavier Hernandez <xhernandez@datalab.es> >Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> >CentOS-regression: Gluster Build System <jenkins@build.gluster.org> >Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com> BUG: 1487647 Change-Id: I2d9214a9658d4a41a3d6de08600884d2bda5f3eb Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com> Reviewed-on: https://review.gluster.org/17667 Smoke: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Xavier Hernandez <xhernandez@datalab.es> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* xlators/ganesha: remove ganesha xlator code from 3.10 (Jiffin Tony Thottan, 2017-09-07, 1 file, -19/+0)
| | | | | | | | | | | | | | | The commit e4a4043 have removed ganesha xlator from glusterfs codebase. But while reverting back ganesha changes, the Makefile.am in xlators/ganesha got resurrected. Change-Id: I6efaacaf1fe426da974608ddac5eae4a43800983 BUG: 1486542 Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com> Reviewed-on: https://review.gluster.org/18147 Smoke: Gluster Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: soumya k <skoduri@redhat.com> Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* cluster/ec: return all node uuids from all subvolumes (Xavier Hernandez, 2017-09-01, 2 files, -105/+141)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | EC was retuning the UUID of the brick with smaller value. This had the side effect of not evenly balancing the load between bricks on rebalance operations. This patch modifies the common functions that combine multiple subvolume values into a single result to take into account the subvolume order and, optionally, other subvolumes that could be damaged. This makes easier to add future features where brick order is important. It also makes possible to easily identify the originating brick of each answer, in case some brick will have an special meaning in the future. >Change-Id: Iee0a4da710b41224a6dc8e13fa8dcddb36c73a2f >BUG: 1366817 >Signed-off-by: Xavier Hernandez <xhernandez@datalab.es> >Reviewed-on: https://review.gluster.org/17297 >Smoke: Gluster Build System <jenkins@build.gluster.org> >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> >CentOS-regression: Gluster Build System <jenkins@build.gluster.org> >Reviewed-by: Ashish Pandey <aspandey@redhat.com> >Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> BUG: 1487042 Change-Id: Iee0a4da710b41224a6dc8e13fa8dcddb36c73a2f Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com> Reviewed-on: https://review.gluster.org/18148 CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Smoke: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Xavier Hernandez <xhernandez@datalab.es> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
* nfs: add NULL check for call state in nfs3_call_state_wipe (Jiffin Tony Thottan, 2017-08-11, 1 file, -3/+4)

    Refcounting was added for the nfs call state in https://review.gluster.org/17696. This is based on the assumption that the call state won't be NULL when it is freed. But currently the gluster nfs server is crashing in different scenarios at nfs3_getattr() with the following bt:

    #0  0x00007ff1cfea9205 in _gf_ref_put (ref=ref@entry=0x0) at refcount.c:36
    #1  0x00007ff1c1997455 in nfs3_call_state_wipe (cs=cs@entry=0x0) at nfs3.c:559
    #2  0x00007ff1c1998931 in nfs3_getattr (req=req@entry=0x7ff1bc0b26d0, fh=fh@entry=0x7ff1c2f76ae0) at nfs3.c:962
    #3  0x00007ff1c1998c8a in nfs3svc_getattr (req=0x7ff1bc0b26d0) at nfs3.c:987
    #4  0x00007ff1cfbfd8c5 in rpcsvc_handle_rpc_call (svc=0x7ff1bc03e500, trans=trans@entry=0x7ff1bc0c8020, msg=<optimized out>) at rpcsvc.c:695
    #5  0x00007ff1cfbfdaab in rpcsvc_notify (trans=0x7ff1bc0c8020, mydata=<optimized out>, event=<optimized out>, data=<optimized out>) at rpcsvc.c:789
    #6  0x00007ff1cfbff9e3 in rpc_transport_notify (this=this@entry=0x7ff1bc0c8020, event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry=0x7ff1bc0038d0) at rpc-transport.c:538
    #7  0x00007ff1c4a2e3d6 in socket_event_poll_in (this=this@entry=0x7ff1bc0c8020, notify_handled=<optimized out>) at socket.c:2306
    #8  0x00007ff1c4a3097c in socket_event_handler (fd=21, idx=9, gen=19, data=0x7ff1bc0c8020, poll_in=1, poll_out=0, poll_err=0) at socket.c:2458
    #9  0x00007ff1cfe950f6 in event_dispatch_epoll_handler (event=0x7ff1c2f76e80, event_pool=0x5618154d5ee0) at event-epoll.c:572
    #10 event_dispatch_epoll_worker (data=0x56181551cbd0) at event-epoll.c:648
    #11 0x00007ff1cec99e25 in start_thread () from /lib64/libpthread.so.0
    #12 0x00007ff1ce56634d in clone () from /lib64/libc.so.6

    This patch moves the previous NULL check from __nfs3_call_state_wipe() to nfs3_call_state_wipe().

    Cherry picked from commit 111d6bda9259126b0429113c9b8ba479958a4398:
    > Change-Id: I2d73632f4be23f14d8467be3d908b09b3a2d87ea
    > BUG: 1479030
    > Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
    > Reviewed-on: https://review.gluster.org/17989
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Niels de Vos <ndevos@redhat.com>

    Change-Id: I2d73632f4be23f14d8467be3d908b09b3a2d87ea
    BUG: 1480594
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: https://review.gluster.org/18027
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* cluster/rebalance: Fix hardlink migration failures (Susant Palai, 2017-08-11, 2 files, -15/+82)

    A brief about how hardlink migration works:
    - Different hardlinks (to the same file) may hash to different bricks, but their cached subvol will be the same. Rebalance picks up the first hardlink, calculates its hash (call it TARGET) and sets the hashed subvolume as an xattr on the data file.
    - Now all the hardlinks that come after this will fetch that xattr and will create linkto files on TARGET (all linkto files for the hardlinks will be hardlinks to each other on TARGET).
    - When the number of hardlinks on source is equal to the number of hardlinks on TARGET, the data migration will happen.

    RACE:1
    Since rebalance is multi-threaded, the first lookup (which decides where the TARGET subvol should be) can be called by two hardlink migrations in parallel and they may end up creating linkto files on two different TARGET subvols. Hence, hardlinks won't be migrated.
    Fix: Rely on the xattr response of lookup inside gf_defrag_handle_hardlink since it is executed under synclock.

    RACE:2
    The linkto files on TARGET can be created by other clients also if they are doing lookup on the hardlinks. Consider a scenario where you have 100 hardlinks. When rebalance is migrating the 99th hardlink, as a result of continuous lookups from another client, the linkcount on TARGET is equal to the source linkcount. Rebalance will migrate data on the 99th hardlink itself. On the 100th hardlink migration, the hardlink will have TARGET as cached subvolume. If its hash is also the same, then a migration will be triggered from TARGET to TARGET leading to data loss.
    Fix: Make sure that before the final data migration, source is not the same as destination.

    RACE:3
    Since a hardlink can be migrating to a non-hashed subvolume, a lookup from another client or even the rebalance itself might delete the linkto file on TARGET, leading to hardlinks never getting migrated. This will be addressed in a different patch in future.

    > Change-Id: If0f6852f0e662384ee3875a2ac9d19ac4a6cea98
    > BUG: 1469964
    > Signed-off-by: Susant Palai <spalai@redhat.com>
    > Reviewed-on: https://review.gluster.org/17755
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: N Balachandran <nbalacha@redhat.com>
    > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    > Signed-off-by: Susant Palai <spalai@redhat.com>

    Change-Id: If0f6852f0e662384ee3875a2ac9d19ac4a6cea98
    BUG: 1473141
    Signed-off-by: Susant Palai <spalai@redhat.com>
    Reviewed-on: https://review.gluster.org/17838
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* cluster/dht: fix on demand migration files from client (Susant Palai, 2017-08-11, 4 files, -20/+35)

    On demand migration of files, i.e. migration done by clients triggered by a setfattr, was broken. Dependency on defrag led to a crash when migration was triggered from a client.

    Note: This functionality is not available for tiered volumes. Migration from a tier served client will fail with ENOTSUP.

    Usage (but refer to the steps mentioned below to avoid any issues):
        setfattr -n "trusted.distribute.migrate-data" -v "1" <filename>

    The purpose of fixing the on-demand client migration was to give a workaround where the user has lots of empty directories compared to files and wants to do a remove-brick process.

    Here are the steps to trigger file migration for the remove-brick process from a client. (It is highly recommended to follow the steps below as is.) Let's say it is a replica volume and the user wants to remove a replica pair named brick1 and brick2. (Make sure healing is completed before you run these steps.)

    Step-1: Start the remove-brick process
        - gluster v remove-brick <volname> brick1 brick2 start
    Step-2: Kill the rebalance daemon
        - ps aux | grep glusterfs | grep rebalance\/ | awk '{print $2}' | xargs kill
    Step-3: Do a fresh mount as mentioned here
        - glusterfs -s ${localhostname} --volfile-id rebalance/$volume-name /tmp/mount/point
    Step-4: Go to one of the bricks (among brick1 and brick2)
        - cd <brick1 path>
    Step-5: Run the following command.
        - find . -not \( -path ./.glusterfs -prune \) -type f -not -perm 01000 -exec bash -c 'setfattr -n "distribute.fix.layout" -v "1" ${mountpoint}/$(dirname '{}')' \; -exec setfattr -n "trusted.distribute.migrate-data" -v "1" ${mountpoint}/'{}' \;
      This command will ignore the linkto files and empty directories, do a fix-layout of the parent directory, and trigger a migration operation on the files.
    Step-6: Once this process is completed, do "remove-brick force"
        - gluster v remove-brick <volname> brick1 brick2 force

    Note: Use the above script only when there are a large number of empty directories. Since the script does a crawl on the brick side directly and avoids directories that are empty, the time spent on fixing layout on those directories is eliminated (even if the script does not do fix-layout on empty directories, post remove-brick a fresh layout will be built for the directory, hence not affecting application continuity).

    Detailing the expectation for hardlink migration with this patch:
    Hardlinks are migrated only for the remove-brick process. It is highly essential to have a new mount (step-3) for the hardlink migration to happen. Why?: setfattr is an inode based operation. Since we are doing setfattr from a fuse mount here, inode_path will try to build the path from the dentries linked to the inode. For a file without hardlinks the path construction will be correct. But for hardlinks, the inode will have multiple dentries linked. Without a fresh mount, inode_path will always get the most recently linked dentry. e.g. if there are three hardlinks named dir1/link1, dir2/link2, dir3/link3, on a client where these hardlinks are looked up, inode_path will always return the path dir3/link3 if dir3/link3 was looked up most recently. Hence, we won't be able to create linkto files for all other hardlinks on the destination (read gf_defrag_handle_hardlink for more details on hardlink migration).

    With a fresh mount, the lookup and setfattr become serialized, e.g. link2 won't be looked up until link1 is looked up and migrated. Hence, inode_path will always have the correct path; in this case the link1 dentry is picked up (as this is the most recently looked up inode) and the path is built right.

    Note: If you run the above script on an existing mount (all entries looked up), hard links may not be migrated, but there should not be any other issue. Please raise a bug if you find any issue.

    Tests: Manual

    > Change-Id: I9854cdd4955d9e24494f348fb29ba856ea7ac50a
    > BUG: 1450975
    > Signed-off-by: Susant Palai <spalai@redhat.com>
    > Reviewed-on: https://review.gluster.org/17115
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    > Signed-off-by: Susant Palai <spalai@redhat.com>

    Change-Id: I9854cdd4955d9e24494f348fb29ba856ea7ac50a
    BUG: 1473140
    Signed-off-by: Susant Palai <spalai@redhat.com>
    Reviewed-on: https://review.gluster.org/17837
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* cluster/dht: initialize throttle option "normal" to same in init and reconfigure (Susant Palai, 2017-08-11, 1 file, -77/+52)

    The "normal" value was different in dht_init and dht_reconfigure. Initialization/reconfigure of the throttle option is now carved out into a separate function (dht_configure_throttle). The "normal" value will be "2".

    > Change-Id: Ie323eae019af41d6bef0a136e3d284dc82bab9a1
    > BUG: 1451162
    > Signed-off-by: Susant Palai <spalai@redhat.com>
    > Reviewed-on: https://review.gluster.org/17303
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Zhou Zhengping <johnzzpcrystal@gmail.com>
    > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    > Signed-off-by: Susant Palai <spalai@redhat.com>

    Change-Id: Ie323eae019af41d6bef0a136e3d284dc82bab9a1
    BUG: 1473137
    Signed-off-by: Susant Palai <spalai@redhat.com>
    Reviewed-on: https://review.gluster.org/17836
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
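    The shape of the fix is to have init and reconfigure call one shared helper so the defaults cannot drift apart; a generic sketch (function name and non-"normal" values are illustrative, not the actual dht_configure_throttle code):

        #include <string.h>

        struct dht_conf_sketch { int rebal_threads; };

        /* One helper owns the throttle defaults, so init and reconfigure can
         * never disagree about what "normal" means (2, per this commit).     */
        static void
        configure_throttle_sketch(struct dht_conf_sketch *conf, const char *value)
        {
            if (value == NULL || strcmp(value, "normal") == 0)
                conf->rebal_threads = 2;
            else if (strcmp(value, "lazy") == 0)
                conf->rebal_threads = 1;      /* placeholder value            */
            else
                conf->rebal_threads = 4;      /* placeholder for "aggressive" */
        }

        static void dht_init_sketch(struct dht_conf_sketch *c)
        {
            configure_throttle_sketch(c, "normal");
        }

        static void dht_reconfigure_sketch(struct dht_conf_sketch *c, const char *v)
        {
            configure_throttle_sketch(c, v);
        }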
* cluster/dht: Make rebalance throttle option tuned by number (Susant Palai, 2017-08-11, 4 files, -30/+138)

    The current rebalance throttle options lazy/normal/aggressive may not always be sufficient for the purpose of throttling. In our recent tests, we observed that for certain setups the normal and aggressive modes behaved similarly, consuming full disk bandwidth. So in cases like this the admin should be able to tune it down (or vice versa) depending on the need.

    Along with the old throttle configurations, thread counts can now be tuned by number, e.g.:
        gluster v set vol-name cluster-rebal.throttle 5
    The admin can tune up/down between 0 and the number of cores available.

    Note: For heterogeneous servers, validation will fail on the old server if a "number" is given for the throttle configuration. The message looks something like this:
        "volume set: failed: Staging failed on vm2. Error: cluster.rebal-throttle should be {lazy|normal|aggressive}"

    Test: Manual test by logging the active thread number after reconfiguring the throttle option.
    testcase: tests/basic/distribute/throttle-rebal.t

    > Change-Id: I46e3cde546900307831028b344ecf601fd9b02c3
    > BUG: 1438370
    > Signed-off-by: Susant Palai <spalai@redhat.com>
    > Reviewed-on: https://review.gluster.org/16980
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    > Signed-off-by: Susant Palai <spalai@redhat.com>

    Change-Id: I46e3cde546900307831028b344ecf601fd9b02c3
    BUG: 1473136
    Signed-off-by: Susant Palai <spalai@redhat.com>
    Reviewed-on: https://review.gluster.org/17835
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* cluster/dht: rebalance perf enhancement (Susant Palai, 2017-08-11, 2 files, -108/+246)

    Problem:
    Throttle settings "normal" and "aggressive" for rebalance did not show a performance difference. normal mode spawns $(no. of cores - 4)/2 threads and aggressive spawns $(no. of cores - 4) threads. Though aggressive mode has twice the number of threads compared to normal mode, there was no performance gain when switching from normal to aggressive mode.

    RCA:
    During the course of debugging the above problem, we tried assigning the migration job to migration threads spawned by rebalance, rather than synctasks (as there is more overhead associated with managing the task queue and threads). This gave us a significant improvement over rebalance under synctasks. This patch does not really guarantee that there will be a clear performance difference between normal and aggressive mode, but it certainly maximized the disk utilization for the 1GB-files run.

    Results:
    Test environment:
    Gluster Config:
        Number of Bricks: 2 (one brick per disk (RAID-6, 12 disks))
        Bricks:
            Brick1: server1:/brick/test1/1
            Brick2: server2:/brick/test1/1
        Options Reconfigured:
            performance.readdir-ahead: on
            server.event-threads: 4
            client.event-threads: 4

    1000 files of 1GB each were created/renamed such that all files have server1 as cached and server2 as hashed, so that all files will be migrated. Test machines had 24 cores each.

    Results with/without synctask based migration:
    -----------------------------------------------
    mode                               normal (10 threads)   aggressive (20 threads)
    time taken with synctask           0:55:30 (h:m:s)       0:56:3 (h:m:s)
    time taken with migrator threads   0:38:3 (h:m:s)        0:23:41 (h:m:s)

    From the above table it can be seen that there is a clear 2x perf gain between rebalance with synctask vs rebalance with migrator threads.

    Additionally this patch modifies the code so that the caller will have the exact error number returned by dht_migrate_file (earlier the errno meaning was overloaded). This will help avoid scenarios where a migration failure due to ENOENT can result in rebalance abort/failure.

    > Change-Id: I8904e2fb147419d4a51c1267be11a08ffd52168e
    > BUG: 1420166
    > Signed-off-by: Susant Palai <spalai@redhat.com>
    > Reviewed-on: https://review.gluster.org/16427
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: N Balachandran <nbalacha@redhat.com>
    > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Signed-off-by: Susant Palai <spalai@redhat.com>

    Change-Id: I8904e2fb147419d4a51c1267be11a08ffd52168e
    BUG: 1473134
    Signed-off-by: Susant Palai <spalai@redhat.com>
    Reviewed-on: https://review.gluster.org/17834
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* cluster/dht: correct space check for rebalance (Susant Palai, 2017-08-11, 1 file, -1/+1)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | With rebalance doing fallocate on destination, we don't need to add file size to the "destination available space" to decide whether to migrate the file or not. Notes: Fallocate would have already occupied the file size space on destination > Change-Id: If7f6a6654e6257726680cf20d618482a6e9095a6 > BUG: 1441508 > Signed-off-by: Susant Palai <spalai@redhat.com> > Reviewed-on: https://review.gluster.org/17104 > Smoke: Gluster Build System <jenkins@build.gluster.org> > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> > CentOS-regression: Gluster Build System <jenkins@build.gluster.org> > Reviewed-by: Amar Tumballi <amarts@redhat.com> > Reviewed-by: N Balachandran <nbalacha@redhat.com> > Reviewed-by: Raghavendra G <rgowdapp@redhat.com> > Signed-off-by: Susant Palai <spalai@redhat.com> Change-Id: If7f6a6654e6257726680cf20d618482a6e9095a6 BUG: 1473133 Signed-off-by: Susant Palai <spalai@redhat.com> Reviewed-on: https://review.gluster.org/17833 CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Smoke: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* cluster/dht: Skip file migration if the subvol that meets min-free-disk (Susant Palai, 2017-08-11, 3 files, -25/+92)

    ... criteria happens to be the same subvol containing the data-file.

    Rebalance needs to figure out a new subvol in case the hashed subvol does not have enough space. In the process of figuring out the new subvol, we need to ignore the source subvol, otherwise it will lead to data loss.

    Test: Manual
    Ran the following:
        sizeof /tmp/1:   1.5GB
        sizeof /brick/1: 16GB
        sizeof /tmp/2:   1.5GB
    <start>
        glusterd;
        gluster v create test1 vm1:/brick/1 vm1:/tmp/1;
        gluster v start test1;
        mount -t glusterfs vm1:test1 /mnt;
        for i in {1..2000} do dd if=/dev/zero of=/mnt/file$i bs=1KB count=1 &> /dev/null; done
        gluster v add-brick test1 vm1:/tmp/2
        gluster v set test1 min-free-disk 12GB
        gluster v remove-brick test1 vm1:/tmp/1 start
    <end>
    File count and data were intact.

    > Change-Id: Ib8fc8467a3d48a7c12958824c4f0b88e160b86c1
    > BUG: 1441508
    > Signed-off-by: Susant Palai <spalai@redhat.com>
    > Reviewed-on: https://review.gluster.org/17064
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    > Signed-off-by: Susant Palai <spalai@redhat.com>

    Change-Id: Ib8fc8467a3d48a7c12958824c4f0b88e160b86c1
    BUG: 1473133
    Signed-off-by: Susant Palai <spalai@redhat.com>
    Reviewed-on: https://review.gluster.org/17832
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Tested-by: Shyamsundar Ranganathan <srangana@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* cluster/dht: Make rebalance honor min-free-disk (Susant Palai, 2017-08-11, 4 files, -17/+235)

    test: Manual
    Created files of size 1K on a 2 brick (of size 1GB) setup. Added a brick of size 16GB. Set min-free-disk to 12GB (so that the first two bricks won't receive any files). Removed the 1st brick of size 1GB.

    Logs from the test:
    [2017-04-12 08:52:08.196484] W [MSGID: 0] [dht-rebalance.c:895:__dht_check_free_space] 0-test1-dht: Write will cross min-free-disk for file - /tile32 on subvol - test1-client-1. Looking for new subvol.
    [2017-04-12 08:52:08.196904] I [MSGID: 0] [dht-rebalance.c:925:__dht_check_free_space] 0-test1-dht: new target found - test1-client-2 for file - /tile32

    - Post migration we have two files. The new destination (/brick/1) has the data file:
        [root@vm1 ~]# ll /brick/1/tile32
        -rw-r--r--. 2 root root 0 Apr 12 14:22 /brick/1/tile32
    - On the old target the linkto file is there with the linkto xattr pointing to /brick/1:
        [root@vm1 ~]# ll /tmp/2/tile32
        ---------T. 2 root root 1000 Apr 12 14:22 /tmp/2/tile32
        [root@vm1 ~]# getfattr -m . -de text /tmp/2/tile32
        getfattr: Removing leading '/' from absolute path names
        security.selinux="unconfined_u:object_r:user_tmp_t:s0"
        trusted.gfid="����:Aс�#�/'b2"
        trusted.glusterfs.dht.linkto="test1-client-2"

    Marking ./tests/features/worm_sh.t as a bad test. Reason being, this patch failed on the master branch as well and it has nothing to do with rebalance/remove-brick.

    > BUG: 1441508
    > Change-Id: I90bae251cda3d957a49cdceda90cd08311a392fb
    > Signed-off-by: Susant Palai <spalai@redhat.com>
    > Reviewed-on: https://review.gluster.org/17034
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > Reviewed-by: Amar Tumballi <amarts@redhat.com>
    > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Signed-off-by: Susant Palai <spalai@redhat.com>

    Change-Id: I90bae251cda3d957a49cdceda90cd08311a392fb
    BUG: 1473132
    Signed-off-by: Susant Palai <spalai@redhat.com>
    Reviewed-on: https://review.gluster.org/17831
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* nfs/nlm: keep track of the call-state and frame for notifications (Niels de Vos, 2017-08-11, 2 files, -24/+84)

    When blocking locks are used, a new frame is allocated that is used to send the notification to the client once the lock becomes available. In all other cases, the frame that contains the request from the client will be used for the reply.

    Because there was no way to track the different clients with their requests (captured in the call-state), the call-state could be free'd before the notification was sent to the client. This caused a use-after-free of the call-state and could trigger segfaults of the Gluster/NFS server or incorrect replies on (un)lock requests.

    By introducing a nlm4_notify_args structure, the call-state and frame can be tracked better. This prevents the possibility of segfaulting when the call-state is used after being free'd.

    Cherry picked from commit b81997264f079983fa02bd5fa2b3715224942b00:
    > BUG: 1467313
    > Change-Id: I285d2bc552f509e5145653b7a50afcff827cd612
    > Signed-off-by: Niels de Vos <ndevos@redhat.com>
    > Reviewed-on: https://review.gluster.org/17700
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    > Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>

    Change-Id: I285d2bc552f509e5145653b7a50afcff827cd612
    BUG: 1471870
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: https://review.gluster.org/17796
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
* dht/rebalance: Crawler performance improvement (Susant Palai, 2017-08-11, 1 file, -127/+33)

    The job of the crawler in rebalance is to fetch files from each local subvolume and push them to the migration queue if they are eligible for migration. We do a lookup on the entries received to figure out the eligibility. Since the lookup done is on a local subvolume, we receive linkto files as well as regular files, which requires us to do two lookups:
        first:  do a lookup on the file to figure out whether it is a linkto file
        second: do a lookup on the file to figure out if it should be migrated
    Note: The migrator thread also does one lookup for the file before migration.

    Optimization: Remove the lookup done by the crawler. Offload these tasks to the migrator threads. For linkto file verification, get the stat and xattr information from readdirp. So in total we have one lookup instead of three for each entry.

    Performance numbers:
    Created a two node, two brick setup. Created 100000 files. And started rebalance. Since there is no add-brick, no files will be migrated and we will get the crawler performance.

    Without patch:
    [root@gprfs039 ~]# grs
        Node   Rebalanced-files     size   scanned   failures   skipped      status   run time in h:m:s
    ---------   ----------------   ------   -------   --------   -------   ---------   -----------------
    localhost                  0   0Bytes     50070          0         0   completed   0:0:48
    server2                    0   0Bytes     49930          0         0   completed   0:0:44
    volume rebalance: test1: success
    Total: 48 seconds

    With the current patch:
    [root@gprfs039 mnt]# gluster v rebalance test1 status
        Node   Rebalanced-files     size   scanned   failures   skipped      status   run time in h:m:s
    ---------   ----------------   ------   -------   --------   -------   ---------   -----------------
    localhost                  0   0Bytes     50070          0         0   completed   0:0:12
    server2                    0   0Bytes     49930          0         0   completed   0:0:12
    volume rebalance: test1: success
    Total: 12 seconds

    That's 4X speed gain. :)

    > Updates glusterfs#155
    > Change-Id: Idc8e5b366e76c54aa40d698876ae62fe1630b6cc
    > BUG: 1439571
    > Signed-off-by: Susant Palai <spalai@redhat.com>
    > Reviewed-on: https://review.gluster.org/15781
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

    Updates glusterfs#155
    Change-Id: Idc8e5b366e76c54aa40d698876ae62fe1630b6cc
    BUG: 1473129
    Signed-off-by: Susant Palai <spalai@redhat.com>
    Reviewed-on: https://review.gluster.org/17830
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* nfs/nlm: use refcounting for nfs3_call_state_t (Niels de Vos, 2017-08-11, 1 file, -11/+35)
| | | | | | | | | | | | | | | | | | | | | | | | In order to track down a potential use-after-free of the nfs3_call_state_t structure in the NLM component, add reference counting where teh structure is used. This should prevent premature free'ing of the structure. Cherry picked from commit 01bfdd4d1759423681d311da33f4ac2346ace445: > Change-Id: Ib1f13b0463ab1e012b7b49a623c91f0f3e73e1fb > BUG: 1467313 > Signed-off-by: Niels de Vos <ndevos@redhat.com> > Reviewed-on: https://review.gluster.org/17699 > Reviewed-by: jiffin tony Thottan <jthottan@redhat.com> > Smoke: Gluster Build System <jenkins@build.gluster.org> > CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Change-Id: Ib1f13b0463ab1e012b7b49a623c91f0f3e73e1fb BUG: 1471870 Signed-off-by: Niels de Vos <ndevos@redhat.com> Reviewed-on: https://review.gluster.org/17795 CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Smoke: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* refcount: typecast function for calling on free (Niels de Vos, 2017-08-11, 3 files, -14/+4)

    All of the functions called to free the refcounted structure are doing a typecast from (void*) to their own type that is being free'd. This really is not needed and the refcount interface is made a little simpler without the requirement of typecasting. With this small improvement in the API, all callers are updated too.

    Cherry picked from commit f2ca301bd741e3e3f076cd3f72fcd377bcef2a1a:
    > Change-Id: I32473b6d1799f62861d4b2d78ea30c09e6c80ab1
    > BUG: 1416889
    > Signed-off-by: Niels de Vos <ndevos@redhat.com>
    > Reviewed-on: https://review.gluster.org/16471
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>

    Backport note: This patch makes it easier to backport changes that use gf_refcount_t. There is no functional change.

    Change-Id: I32473b6d1799f62861d4b2d78ea30c09e6c80ab1
    BUG: 1471870
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: https://review.gluster.org/17913
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
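    Conceptually the API change lets release callbacks take their own type instead of void *; a generic before/after sketch (not the exact gf_refcount_t declarations):

        struct my_obj { int dummy; };

        /* Before: every release callback received void* and had to cast.    */
        static void my_obj_release_old(void *data)
        {
            struct my_obj *obj = (struct my_obj *) data;   /* boilerplate cast */
            (void) obj;   /* ... free members and obj itself ... */
        }

        /* After: the refcount interface accepts a typed destructor, so the
         * cast disappears from every caller.                                 */
        static void my_obj_release_new(struct my_obj *obj)
        {
            (void) obj;   /* ... free members and obj itself ... */
        }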
* performance/io-cache: update inode contexts of each entry in readdirplus (Raghavendra G, 2017-08-11, 3 files, -39/+67)
| | | | | | | | | | | | | | | | | | | | | | | | io-cache stores read-cache in inode which is currently created only in lookup. But, with readdirplus and md-cache absorbing lookups, io-cache need not receive a lookup before a fop like readv. >Change-Id: I6eba995b0a90d4d5055a4aef0489707b852da1b8 >BUG: 1474180 >Signed-off-by: Raghavendra G <raghavendra@gluster.com> >Signed-off-by: Raghavendra G <rgowdapp@redhat.com> >Reviewed-on: https://review.gluster.org/5029 >Smoke: Gluster Build System <jenkins@build.gluster.org> >CentOS-regression: Gluster Build System <jenkins@build.gluster.org> (cherry picked from commit b90e12134af85635199750967c326761d6c06e86) Change-Id: I6eba995b0a90d4d5055a4aef0489707b852da1b8 BUG: 1475638 Signed-off-by: Raghavendra G <raghavendra@gluster.com> Signed-off-by: Raghavendra G <rgowdapp@redhat.com> Reviewed-on: https://review.gluster.org/17891 Smoke: Gluster Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* nfs/nlm: handle reconnect for non-NLM4_LOCK requests (Niels de Vos, 2017-08-11, 1 file, -22/+79)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | When a reply on an NLM-procedure gets stuck, the NFS-client will resend the request. This can happen through a re-connect in case the connection was terminated (long delay in the reply on the initial request). Once that happens, not all NLM-procedures are handled correctly. Testing this is difficult and time-consuming. There still may be problems with certain operations, but this definitely makes it behave much better than before. The problem occured due to a problem in EC, change-id I18a782903ba addressed the root cause. Cherry picked from commit fafe1491ead527ba1024c521013aa90d2ee2b355: > Change-Id: I23b385568e27232951fa3fbd7198a0e5d775a8c2 > BUG: 1467313 > Signed-off-by: Niels de Vos <ndevos@redhat.com> > Reviewed-on: https://review.gluster.org/17698 > Smoke: Gluster Build System <jenkins@build.gluster.org> > CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Change-Id: I23b385568e27232951fa3fbd7198a0e5d775a8c2 BUG: 1471870 Signed-off-by: Niels de Vos <ndevos@redhat.com> Reviewed-on: https://review.gluster.org/17794 Smoke: Gluster Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: jiffin tony Thottan <jthottan@redhat.com> Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* nfs/nlm: unref fds in nlm_client_free() (Niels de Vos, 2017-08-11, 1 file, -13/+12)
| | | | | | | | | | | | | | | | | | | | | | | | When a nlm_clnt is getting free'd, the FDs associated with this client should be unref'd as well. Cherry picked from commit e9a482f94e748ea12e73ddd2e275bad9aa314b4c: > Change-Id: Ifa4ea4b7ed45a454413cfc0c820f2516c534a9aa > BUG: 1467313 > Signed-off-by: Niels de Vos <ndevos@redhat.com> > Reviewed-on: https://review.gluster.org/17697 > Smoke: Gluster Build System <jenkins@build.gluster.org> > Reviewed-by: Amar Tumballi <amarts@redhat.com> > CentOS-regression: Gluster Build System <jenkins@build.gluster.org> > Reviewed-by: jiffin tony Thottan <jthottan@redhat.com> > Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com> Change-Id: Ifa4ea4b7ed45a454413cfc0c820f2516c534a9aa BUG: 1471870 Signed-off-by: Niels de Vos <ndevos@redhat.com> Reviewed-on: https://review.gluster.org/17793 Smoke: Gluster Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
* nfs: make nfs3_call_state_t refcounted (Niels de Vos, 2017-08-11, 3 files, -39/+42)

    There is no refcounting done of the nfs3_call_state_t structure, which seems to result in use-after-free problems in the NLM part of Gluster/NFS. The structure is initialized with two different functions; it is easier to have a single place to do this.

    The Gluster/NFS part will not use the refcounting, for now. This is being added to make the NLM code more stable. nfs3_call_state_wipe() will behave as before for Gluster/NFS, but cleanup is triggered through the refcounting now. This prevents major changes to the stable part of the NFS-server, and makes it possible to improve the NLM component separately.

    Cherry picked from commit daed52b8ebcac7ef36f11e944f83826f46593867:
    > Change-Id: I2e15bcf12af74e8a46c2727e4a160e9444d29ece
    > BUG: 1467313
    > Signed-off-by: Niels de Vos <ndevos@redhat.com>
    > Reviewed-on: https://review.gluster.org/17696
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Amar Tumballi <amarts@redhat.com>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    > Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>

    Change-Id: I2e15bcf12af74e8a46c2727e4a160e9444d29ece
    BUG: 1471870
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: https://review.gluster.org/17792
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
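    The ref/unref lifecycle being introduced follows the usual pattern sketched below (a generic counter with illustrative names; the actual change goes through gluster's refcount helpers):

        /* Take a reference for every user of the call-state and free it only
         * when the last reference is dropped.                                */
        #include <stdatomic.h>
        #include <stdlib.h>

        struct call_state_sketch {
            atomic_int refs;
            /* ... request data ... */
        };

        static struct call_state_sketch *
        cs_get(struct call_state_sketch *cs)
        {
            atomic_fetch_add(&cs->refs, 1);
            return cs;
        }

        static void
        cs_put(struct call_state_sketch *cs)
        {
            if (atomic_fetch_sub(&cs->refs, 1) == 1)
                free(cs);        /* the wipe runs only on the final unref    */
        }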
* cluster/dht: Fix fd check race (N Balachandran, 2017-07-20, 2 files, -1/+76)

    There is another race between the cached subvol being updated in the inode_ctx and the fd being opened on the target.

    1. fop1 -> fd1 -> subvol0
    2. file migrated from subvol0 to subvol1 and cached_subvol changed to subvol1 in inode_ctx
    3. fop2 -> fd1 -> subvol1 [takes new cached subvol]
    4. fop2 -> checks fd ctx (fd not open on subvol1) -> opens fd1 on subvol1
    5. fop1 -> checks fd ctx (fd not open on subvol0) -> tries to open fd1 on subvol0 -> fails with "No such file or directory".

    Fix:
    If dht_fd_open_on_dst fails with ENOENT or ESTALE, wind to the old subvol and let the phase1/phase2 checks handle it.

    Change-Id: I34f8011574a8b72e3bcfe03b0cc4f024b352f225
    BUG: 1467010

    > BUG: 1465075
    > Signed-off-by: N Balachandran <nbalacha@redhat.com>
    > Reviewed-on: https://review.gluster.org/17731
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    > Reviewed-by: Amar Tumballi <amarts@redhat.com>
    (cherry picked from commit f7a450c17fee7e43c544473366220887f0534ed7)

    Signed-off-by: N Balachandran <nbalacha@redhat.com>
    Reviewed-on: https://review.gluster.org/17829
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
* cluster/dht: Check if fd is opened on dst subvol (N Balachandran, 2017-07-19, 6 files, -30/+543)
| | | | | | | | | | | | | | | | | | | | | | | | | | | If an fd is opened on a file, the file is migrated and the cached subvol is updated in the inode_ctx before an fd based fop is sent, the fop is sent to the dst subvol on which the fd is not opened. This causes the FOP to fail with EBADF. Now, every fd based fop will check to see that the fd has been opened on the dst subvol before winding it down. > BUG: 1465075 > Signed-off-by: N Balachandran <nbalacha@redhat.com> > Reviewed-on: https://review.gluster.org/17630 > Smoke: Gluster Build System <jenkins@build.gluster.org> > Reviewed-by: Raghavendra G <rgowdapp@redhat.com> > Reviewed-by: Susant Palai <spalai@redhat.com> > CentOS-regression: Gluster Build System <jenkins@build.gluster.org> (cherry picked from commit 91db0d47ca267aecfc6124a3f337a4e2f2c9f1e2) Change-Id: Id92ef5eb7a5b5226688e2d2868b15e383f5f240e BUG: 1467010 Signed-off-by: N Balachandran <nbalacha@redhat.com> Reviewed-on: https://review.gluster.org/17752 Smoke: Gluster Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* fuse: memory leak fixes (Danny Couture, 2017-07-19, 1 file, -38/+30)
| | | | | | | | | | | | | | | | | | | | | | | | | | | Fix fuse ctx memory leak in case an error occurs and the cleanup path is different than usual. Also fix a memory leak in logging if eh_save_history() fails. Cherry picked from commit 5ee383fed9f6408d303aa539dda071275021f8e4: > Change-Id: I7ec967c807b0ed91184e5b958be70702215c46c9 > BUG: 1470220 > Signed-off-by: Danny Couture <couture.danny@gmail.com> > Reviewed-on: https://review.gluster.org/17759 > Reviewed-by: Niels de Vos <ndevos@redhat.com> > Smoke: Gluster Build System <jenkins@build.gluster.org> > Reviewed-by: N Balachandran <nbalacha@redhat.com> > Reviewed-by: Prashanth Pai <ppai@redhat.com> > Reviewed-by: Amar Tumballi <amarts@redhat.com> > Tested-by: Amar Tumballi <amarts@redhat.com> > CentOS-regression: Gluster Build System <jenkins@build.gluster.org> > Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Change-Id: I7ec967c807b0ed91184e5b958be70702215c46c9 BUG: 1471028 Signed-off-by: Niels de Vos <ndevos@redhat.com> Reviewed-on: https://review.gluster.org/17758 Smoke: Gluster Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* afr: mark non sources as sinks in metadata heal (Ravishankar N, 2017-07-19, 2 files, -3/+5)

    Problem:
    In a 3 way replica, when the source brick does not have pending xattrs for the sinks, but the 2 sinks blame each other, metadata heal was not happening because we were not setting all non-sources as sinks.

    Fix:
    Mark all non-sources as sinks, like it is done in data and entry heal.

    > Reviewed-on: https://review.gluster.org/17717
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    (cherry picked from commit 77c1ed5fd299914e91ff034d78ef6e3600b9151c)

    Change-Id: I534978940f5087302e307fcc810a48ffe898ce08
    BUG: 1471612
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: https://review.gluster.org/17782
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
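    The fix boils down to one rule over the source/sink bitmaps, roughly (illustrative arrays, not the actual afr structures):

        /* Anything that is not a source becomes a sink, so the two sinks
         * that blame each other are both repaired from the source.          */
        static void
        mark_non_sources_as_sinks(const int *sources, int *sinks, int child_count)
        {
            for (int i = 0; i < child_count; i++) {
                if (!sources[i])
                    sinks[i] = 1;
            }
        }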
* cluster/ec: correctly handle end of file for seekXavier Hernandez2017-07-071-0/+18
| | | | | | | | | | | | | | | | | | | | | | | | | | | | When a SEEK_HOLE was issued near the end of the file, sometimes an offset beyond the end of the file was returned. Another problem was that seeks to some offsets beyond the end of the file returned successfully instead of failing with ENXIO. >Change-Id: I238d2884ba02fd19a78116b0f8f8e8d6338fb3f5 >BUG: 1449348 >Signed-off-by: Xavier Hernandez <xhernandez@datalab.es> >Reviewed-on: https://review.gluster.org/17228 >Smoke: Gluster Build System <jenkins@build.gluster.org> >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> >CentOS-regression: Gluster Build System <jenkins@build.gluster.org> >Reviewed-by: Amar Tumballi <amarts@redhat.com> >Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> >(cherry picked from commit eb96dd45f8e583c6bad84bf32ca17e2bb01dd38f) Change-Id: I238d2884ba02fd19a78116b0f8f8e8d6338fb3f5 BUG: 1468126 Signed-off-by: Xavier Hernandez <xhernandez@datalab.es> Reviewed-on: https://review.gluster.org/17711 Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Smoke: Gluster Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
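A toy model of the expected lseek(2) semantics near end of file, for illustration only; the ec translator implements this per fragment and per brick, so the function below is an assumption, not its code.

    #include <errno.h>

    /* An offset at or beyond the file size must fail with ENXIO, and a
     * SEEK_HOLE that finds no hole before EOF must return the file size,
     * never an offset past it. */
    static long seek_hole_checked(long file_size, long offset)
    {
            if (offset >= file_size) {
                    errno = ENXIO;
                    return -1;
            }
            return file_size;   /* the only "hole" in this toy model is EOF */
    }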
* Revert "Revert "glusterd: disallow rebalance & remove-brick on a sharded volume""Shyamsundar Ranganathan2017-07-042-0/+19
| | | | | | | | | | | | | | | | | | | | This is being reverted as a new bug around rebalance has been uncovered. As a result we would like to retain the warning in the code and in the release-notes. The new bug is https://bugzilla.redhat.com/show_bug.cgi?id=1465075. A similar revert is being tracked for 3.11 here: https://review.gluster.org/17631. Based on votes to both, we may want to consider this for 3.10 as well. This reverts commit abaf577626650edb4b9dfdddd43ba04a2a8e8ef3. BUG: 1467010 Change-Id: Iecd0357c44e41e2b421222e8f98fe8300513f963 Reviewed-on: https://review.gluster.org/17632 Reviewed-by: Raghavendra Talur <rtalur@redhat.com> Tested-by: Raghavendra Talur <rtalur@redhat.com> Smoke: Gluster Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* cluster/dht: Additional checks for rebalance estimatesN Balachandran2017-07-041-5/+15
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The rebalance estimates calculation did not correctly handle the case where no files had been processed yet, i.e., when rate_lookedup was 0. Now, the estimated time is set to 0 in such scenarios, as there is no way for rebalance to figure out how long the process will take to complete without knowing the rate at which files are being processed. > BUG: 1457985 > Signed-off-by: N Balachandran <nbalacha@redhat.com> > Reviewed-on: https://review.gluster.org/17564 > Smoke: Gluster Build System <jenkins@build.gluster.org> > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> > CentOS-regression: Gluster Build System <jenkins@build.gluster.org> > Reviewed-by: Amar Tumballi <amarts@redhat.com> > Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Change-Id: I7b6378e297e1ba139852bcb2239adf2477336b5b BUG: 1460914 Signed-off-by: N Balachandran <nbalacha@redhat.com> Reviewed-on: https://review.gluster.org/17599 Smoke: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Reviewed-by: Raghavendra Talur <rtalur@redhat.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
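The guard amounts to the following, shown here as a self-contained sketch; the real calculation lives in the dht rebalance code and these names are assumptions.

    /* With nothing processed yet there is no rate to extrapolate from,
     * so report 0 instead of dividing by zero. */
    static double estimate_seconds_left(double files_remaining, double files_per_second)
    {
            if (files_per_second == 0)
                    return 0;
            return files_remaining / files_per_second;
    }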
* core: assorted typos and spelling mistakes from Debian lintianKaleb S. KEITHLEY2017-07-012-6/+8
| | | | | | | | | | | | | | | | | Plus minor readability improvements. Reported-by: pmatthaei@debian.org master BUG: 1466785 master https://review.gluster.org/17660 release-3.11 BUG: 1466801 release-3.11 https://review.gluster.org/17661 Change-Id: I5393819a2fc9f240a19811143bb57b127df717cf BUG: 1466852 Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com> Reviewed-on: https://review.gluster.org/17663 Smoke: Gluster Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
* cluster/afr: Returning single and list of node uuids from AFRkarthik-us2017-07-011-9/+79
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Problem: The change in afr to return a list of node uuids was causing problems with geo-rep. Fix: This patch will allow getting the single node uuid as before with the key "GF_XATTR_NODE_UUID_KEY", and will also allow getting the list of node uuids by using a new key "GF_XATTR_LIST_NODE_UUIDS_KEY". This will solve the problem with geo-rep and any other features which were depending on this. > Change-Id: I09885dac6dfca127be94b708470c8c2941356f9a > BUG: 1462790 > Signed-off-by: karthik-us <ksubrahm@redhat.com> > Reviewed-on: https://review.gluster.org/17576 > Smoke: Gluster Build System <jenkins@build.gluster.org> > Reviewed-by: Ravishankar N <ravishankar@redhat.com> > Reviewed-by: Kotresh HR <khiremat@redhat.com> > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> > CentOS-regression: Gluster Build System <jenkins@build.gluster.org> > Reviewed-by: Jeff Darcy <jeff@pl.atyp.us> (cherry picked from commit 475ec9928ef96b63a0bfa859a9ae68709275033c) Change-Id: I5e741a48a426ee9a3cc69612051e0e9bcf33b500 BUG: 1464078 Signed-off-by: karthik-us <ksubrahm@redhat.com> Reviewed-on: https://review.gluster.org/17603 Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Smoke: Gluster Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
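A toy model of the key dispatch follows. GF_XATTR_NODE_UUID_KEY and GF_XATTR_LIST_NODE_UUIDS_KEY are the macro names from the commit; the string values and everything else below are placeholders, not the actual afr code.

    #include <string.h>

    #define NODE_UUID_KEY        "node-uuid"         /* placeholder value */
    #define LIST_NODE_UUIDS_KEY  "list-node-uuids"   /* placeholder value */

    /* The old key keeps returning a single uuid (what geo-rep expects),
     * while the new key returns the whole list. */
    static const char *answer_uuid_xattr(const char *key, const char *first_uuid,
                                         const char *all_uuids)
    {
            if (strcmp(key, NODE_UUID_KEY) == 0)
                    return first_uuid;   /* single uuid, as before */
            if (strcmp(key, LIST_NODE_UUIDS_KEY) == 0)
                    return all_uuids;    /* full list of node uuids */
            return NULL;
    }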
* cluster:dht Fix crash in dht_rename_lock_cbkN Balachandran2017-07-011-2/+4
| | | | | | | | | | | | | | | | | | | | | | | | | Use a local variable to store the call count in the STACK_WIND for loop. Using frame->local is dangerous as it could be freed while the loop is still being processed > BUG: 1466110 > Signed-off-by: N Balachandran <nbalacha@redhat.com> > Reviewed-on: https://review.gluster.org/17645 > Smoke: Gluster Build System <jenkins@build.gluster.org> > Tested-by: Nigel Babu <nigelb@redhat.com> > Reviewed-by: Amar Tumballi <amarts@redhat.com> > Reviewed-by: Jeff Darcy <jeff@pl.atyp.us> > CentOS-regression: Gluster Build System <jenkins@build.gluster.org> > Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com> (cherry picked from commit 56da27cf5dc6ef54c7fa5282dedd6700d35a0ab0) Change-Id: Ie65cdcfb7868509b4a83bc2a5b5d6304eabfbc8e BUG: 1466863 Signed-off-by: N Balachandran <nbalacha@redhat.com> Reviewed-on: https://review.gluster.org/17665 Smoke: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Jeff Darcy <jeff@pl.atyp.us> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
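The nature of the race, as a minimal sketch under assumed names; the actual fix is in dht_rename_lock_cbk and uses gluster's STACK_WIND machinery rather than the placeholder loop below.

    /* The number of pending winds is read once, into a stack variable,
     * before the wind loop starts. Reading it from shared frame state
     * inside the loop is unsafe because the last callback may free that
     * state while the loop is still running. */
    static void wind_to_all_subvols(int pending_calls)
    {
            int call_count = pending_calls;   /* safe local copy */

            for (int i = 0; i < call_count; i++) {
                    /* a STACK_WIND to subvolume i would go here; once the
                     * final callback fires, frame-local data may be gone */
            }
    }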
* feature/changelog: Fix buffer overflow crashKotresh HR2017-06-301-2/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | The buffer used to hold the basename was hard-coded to NAME_MAX (255) bytes. This could lead to buffer overflow crashes when the basename sent is longer than NAME_MAX. Fixed the same. > Change-Id: I6c1cad3ccaeb8c55549b1d3c5f96a198f65ba2b7 > BUG: 1463178 > Signed-off-by: Kotresh HR <khiremat@redhat.com> > Reviewed-on: https://review.gluster.org/17579 > CentOS-regression: Gluster Build System <jenkins@build.gluster.org> > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> > Smoke: Gluster Build System <jenkins@build.gluster.org> > Reviewed-by: jiffin tony Thottan <jthottan@redhat.com> (cherry picked from commit b224f4253b7d3de3077ee35c8bdc20618eae4b7c) Change-Id: I6c1cad3ccaeb8c55549b1d3c5f96a198f65ba2b7 BUG: 1463623 Signed-off-by: Kotresh HR <khiremat@redhat.com> Reviewed-on: https://review.gluster.org/17592 Smoke: Gluster Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Zhou Zhengping <johnzzpcrystal@gmail.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
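The shape of the fix, as a self-contained sketch only; the changelog translator's own buffer handling differs, so this function is an assumption.

    #include <stdlib.h>
    #include <string.h>

    /* Size the buffer from the basename actually received instead of
     * assuming it fits in NAME_MAX bytes. */
    static char *copy_basename(const char *basename)
    {
            size_t len = strlen(basename);
            char *buf = malloc(len + 1);   /* sized to the input, not NAME_MAX */

            if (buf)
                    memcpy(buf, basename, len + 1);
            return buf;
    }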
* rpc: add options to manage socket keepalive lifespanMilind Changire2017-06-201-1/+55
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Problem: Default values for handling socket timeouts for brick responses are insufficient for aggressive applications such as databases. Solution: Add 1:1 gluster options for keepalive, keepalive-idle, keepalive-interval and keepalive-timeout, matching the socket-level options described in the tcp(7) man page. Default values for the options are NOT aggressive and continue to be values which result in the default timeout when only the keepalive option is turned on. These options are Linux-specific and will not be applicable to the *BSDs. mainline: > BUG: 1426059 > Signed-off-by: Milind Changire <mchangir@redhat.com> > Reviewed-on: https://review.gluster.org/16731 > Smoke: Gluster Build System <jenkins@build.gluster.org> > CentOS-regression: Gluster Build System <jenkins@build.gluster.org> > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> > Reviewed-by: Raghavendra G <rgowdapp@redhat.com> (cherry picked from commit 6b8df081b46ac4f485c86a5052fc30472e74bfbb) Change-Id: I2a08ecd949ca8ceb3e090d336ad634341e2dbf14 BUG: 1452038 Signed-off-by: Milind Changire <mchangir@redhat.com> Reviewed-on: https://review.gluster.org/17330 Smoke: Gluster Build System <jenkins@build.gluster.org> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
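For reference, the socket-level knobs behind such options, per tcp(7) and socket(7): SO_KEEPALIVE enables probing, TCP_KEEPIDLE sets the idle time before the first probe, TCP_KEEPINTVL the interval between probes, and TCP_KEEPCNT the number of unanswered probes tolerated. The sketch below only illustrates those standard options; it does not show the exact mapping or default values of each gluster option.

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Enable and tune TCP keepalive on an already-connected socket. */
    static int set_keepalive(int sock, int idle, int interval, int count)
    {
            int on = 1;

            if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
                    return -1;
            if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)
                    return -1;
            if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval)) < 0)
                    return -1;
            return setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
    }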
* cluster/dht: Include dirs in rebalance estimatesN Balachandran2017-06-203-31/+83
| | | | | | | | | | | | | | | | | | | | | | | | | Empty directories were not being considered while calculating rebalance estimates, leading to negative time-left values being displayed as part of the rebalance status. > BUG: 1457985 > Signed-off-by: N Balachandran <nbalacha@redhat.com> > Reviewed-on: https://review.gluster.org/17448 > Smoke: Gluster Build System <jenkins@build.gluster.org> > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> > CentOS-regression: Gluster Build System <jenkins@build.gluster.org> > Reviewed-by: Amar Tumballi <amarts@redhat.com> > Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Change-Id: I48d41d702e72db30af10e6b87b628baa605afa98 BUG: 1460914 Signed-off-by: N Balachandran <nbalacha@redhat.com> Reviewed-on: https://review.gluster.org/17530 Smoke: Gluster Build System <jenkins@build.gluster.org> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
* posix-acl: Whitelist virtual ACL xattrsSoumya Koduri2017-06-201-0/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Similar to system.posix_acl_* xattrs, all users should get permission to be able to read glusterfs.posix.acl* xattrs too. This is backport of below mainline patch - https://review.gluster.org/17493 >BUG: 1459971 >Signed-off-by: Soumya Koduri <skoduri@redhat.com> >Reviewed-on: https://review.gluster.org/17493 >Smoke: Gluster Build System <jenkins@build.gluster.org> >Reviewed-by: jiffin tony Thottan <jthottan@redhat.com> >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> >CentOS-regression: Gluster Build System <jenkins@build.gluster.org> >Reviewed-by: Raghavendra Talur <rtalur@redhat.com> >Reviewed-by: Niels de Vos <ndevos@redhat.com> >(cherry picked from commit 68f2192df570b5ee615d440c2e0c88d49a75a34f) BUG: 1460649 Change-Id: I1fc2b67c8a12113910e4ec57cd114e4baefe0d38 Signed-off-by: Soumya Koduri <skoduri@redhat.com> Reviewed-on: https://review.gluster.org/17513 NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Smoke: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
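The check amounts to a prefix match, sketched below as a toy model; the glusterfs.posix.acl prefix comes from the commit message, everything else is an assumption rather than the posix-acl translator's code.

    #include <string.h>

    /* Reads of ACL-related virtual xattrs are allowed for every user,
     * just like the system.posix_acl_* names already are. */
    static int is_whitelisted_acl_xattr(const char *name)
    {
            static const char prefix[] = "glusterfs.posix.acl";

            return strncmp(name, prefix, sizeof(prefix) - 1) == 0;
    }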
* Revert "glusterd: disallow rebalance & remove-brick on a sharded volume"Krutika Dhananjay2017-06-202-19/+0
| | | | | | | | | | | | | | | | | | | | | | This reverts commit 8375b3d70d5c6268c6770b42a18b2e1bc09e411e. Backport of: > Change-Id: I45493fcbb1f25fd0fff27b2b3526c42642ccb464 > BUG: 1460585 > Reviewed-on: https://review.gluster.org/17506 > (cherry-picked from c0d4081cf4b90a4316b786cc53263a7c56fdb344) Now that some of the users have confirmed rebalance works fine without causing corruption of VMs, time to revert the CLI restriction. Change-Id: I45493fcbb1f25fd0fff27b2b3526c42642ccb464 BUG: 1460993 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: https://review.gluster.org/17532 Smoke: Gluster Build System <jenkins@build.gluster.org> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* feature/bitrot: Fix ondemand scrubKotresh HR2017-06-202-6/+7
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The flag which keeps track of whether the scrub frequency has changed from its previous value should not be considered for on-demand scrubbing. It should be considered only for 'scrub-frequency', where the scrub should not be re-scheduled if the frequency is set to the same value again. But in the case of an on-demand scrub, the scrub should start immediately no matter what the scrub-frequency is. Reproducer: 1. Enable bitrot 2. Set scrub-throttle 3. Set ondemand scrub Make sure glusterd is not restarted while doing these steps > Change-Id: Ice5feaece7fff1579fb009d1a59d2b8292e23e0b > BUG: 1461845 > Signed-off-by: Kotresh HR <khiremat@redhat.com> > Reviewed-on: https://review.gluster.org/17552 > Smoke: Gluster Build System <jenkins@build.gluster.org> > CentOS-regression: Gluster Build System <jenkins@build.gluster.org> > Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com> > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> (cherry picked from commit f0fb166078d59cab2a33583591b6448326247c40) Change-Id: Ice5feaece7fff1579fb009d1a59d2b8292e23e0b BUG: 1462080 Signed-off-by: Kotresh HR <khiremat@redhat.com> Reviewed-on: https://review.gluster.org/17553 Smoke: Gluster Build System <jenkins@build.gluster.org> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
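The scheduling decision boils down to the following, shown as a toy model only; the bitrot scrubber's actual state machine is more involved and these names are assumptions.

    #include <stdbool.h>

    /* The "frequency changed" flag only gates periodic re-scheduling;
     * an on-demand request always starts a scrub immediately. */
    static bool should_start_scrub(bool ondemand, bool frequency_changed)
    {
            if (ondemand)
                    return true;              /* start right away */
            return frequency_changed;         /* periodic: only if frequency changed */
    }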