path: root/xlators/cluster
Commit message | Author | Date | Files | Lines
* cluster/ec: Fix race in timer cancellation
  Xavier Hernandez | 2016-07-17 | 1 file | -15/+56

  A race in timer cancellation for delayed unlock could cause a crash: if the cancelling thread fails to cancel the timer because it has already fired but not yet executed, and the callback is scheduled out of the CPU, the callback is delayed until after the cancelling thread has released resources that the callback still needs. This patch improves the handling of this case to make it robust.

  Backport of:
  > Change-Id: I5c8a8c6610c5136f71b938aa78b5878ba05238d4
  > BUG: 1345855
  > Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
  > Reviewed-on: http://review.gluster.org/14712
  > Smoke: Gluster Build System <jenkins@build.gluster.com>
  > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  > CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

  Change-Id: I5c8a8c6610c5136f71b938aa78b5878ba05238d4
  BUG: 1346156
  Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
  Reviewed-on: http://review.gluster.org/14724
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
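  The safe-cancellation pattern described above can be illustrated with a small, self-contained sketch (plain pthreads and a hypothetical ec_timer_job_t, not the actual EC code): whichever side loses the race, ownership of the shared resources is decided under a lock, so the callback never touches state that the canceller has already released.

      #include <pthread.h>
      #include <stdbool.h>

      /* Hypothetical delayed-unlock job; not the real ec_lock_t.  The job
       * object itself is assumed to outlive both paths (e.g. refcounted). */
      typedef struct {
          pthread_mutex_t mutex;
          bool            cancelled;  /* set by the cancelling thread        */
          bool            fired;      /* set when the callback starts        */
          void          (*release)(void *resources);
          void           *resources;  /* state both sides may need to free   */
      } ec_timer_job_t;

      /* Timer callback: may run long after the timer actually fired. */
      void ec_timer_cbk(ec_timer_job_t *job)
      {
          bool do_release;

          pthread_mutex_lock(&job->mutex);
          job->fired = true;
          do_release = !job->cancelled;   /* canceller already owns it?      */
          pthread_mutex_unlock(&job->mutex);

          if (do_release)
              job->release(job->resources);
      }

      /* Cancelling thread: returns true if it now owns the resources. */
      bool ec_timer_cancel(ec_timer_job_t *job)
      {
          bool owned;

          pthread_mutex_lock(&job->mutex);
          job->cancelled = true;
          owned = !job->fired;            /* callback started first: it owns */
          pthread_mutex_unlock(&job->mutex);

          return owned;
      }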
* afr: Don't wind reads for files in metadata split-brain
  Ravishankar N | 2016-06-27 | 1 file | -10/+4

  Backport of http://review.gluster.org/#/c/13389/

  Problem: For a read on a file in metadata split-brain:
  1. lookup_done resets event_generation to zero.
  2. readv is issued and goes to inode refresh due to the mismatching event_gen.
  3. After the refresh is successful, we update event_generation, data and metadata readable.
  4. We then call afr_read_txn_refresh_done(), which in turn calls afr_inode_get_readable() but doesn't check for EIO. So afr_readv_wind is called with local->readable (which is populated with data_readable), thus winding the read to a brick.
  5. Further parallel reads that come in go directly to the wind path because no inode_refresh is needed.

  Fix:
  1. For any afr_read_txn(), readable must be the intersection of data and metadata readable.
  2. Check for EIO in afr_read_txn_refresh_done().

  Change-Id: I22dd221fdfaf96d7aced2f474e28ed1337d69f0e
  BUG: 1349881
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  (cherry picked from commit 7a1c1e2904701496968ed14b6d7479fb706c3188)
  Reviewed-on: http://review.gluster.org/14791
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* cluster/dht: Fix unsafe iteration on inode->fd_list
  Xavier Hernandez | 2016-06-24 | 1 file | -16/+70

  When DHT traverses the inode->fd_list, it does so in an unsafe way that can generate races with fd_unref() called from other threads. This patch fixes the problem by taking the inode->lock and adding a reference to the fd while it's being used outside of the mutex-protected region. A minor change in storage/posix has been done to also access the inode->fd_list in a safe way.

  Backport of:
  > Change-Id: I10d469ca6a8f76e950a8c9779ae9c8b70f88ef93
  > BUG: 1344340
  > Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
  > Reviewed-on: http://review.gluster.org/14682
  > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  > Smoke: Gluster Build System <jenkins@build.gluster.org>
  > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

  Change-Id: I10d469ca6a8f76e950a8c9779ae9c8b70f88ef93
  BUG: 1346751
  Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
  Reviewed-on: http://review.gluster.org/14734
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
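  A minimal sketch of the lock-and-reference pattern described above, using simplified stand-in structures (my_inode, my_fd) rather than the real libglusterfs inode_t/fd_t: the list is snapshotted under the lock with one extra reference per fd, so a concurrent unref cannot free an fd that is still being processed outside the critical section.

      #include <pthread.h>
      #include <stdlib.h>

      struct my_fd {
          struct my_fd *next;
          int           refcount;      /* protected by the inode lock        */
      };

      struct my_inode {
          pthread_mutex_t lock;
          struct my_fd   *fd_list;     /* list of open fds on this inode     */
      };

      static void process_fd(struct my_fd *fd) { (void)fd; /* slow work */ }

      static void fd_unref(struct my_inode *in, struct my_fd *fd)
      {
          int last;
          pthread_mutex_lock(&in->lock);
          last = (--fd->refcount == 0);
          pthread_mutex_unlock(&in->lock);
          if (last)
              free(fd);   /* real code would also unlink it from fd_list     */
      }

      static void iterate_fds(struct my_inode *inode)
      {
          size_t count = 0, i;
          struct my_fd *fd, **pinned;

          pthread_mutex_lock(&inode->lock);
          for (fd = inode->fd_list; fd != NULL; fd = fd->next)
              count++;
          pinned = calloc(count, sizeof(*pinned));
          if (pinned != NULL) {
              for (fd = inode->fd_list, i = 0; fd != NULL; fd = fd->next, i++) {
                  fd->refcount++;          /* pin while used outside the lock */
                  pinned[i] = fd;
              }
          }
          pthread_mutex_unlock(&inode->lock);

          if (pinned == NULL)
              return;

          for (i = 0; i < count; i++) {
              process_fd(pinned[i]);       /* safe: we hold a reference       */
              fd_unref(inode, pinned[i]);  /* drop the extra reference        */
          }
          free(pinned);
      }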
* cluster/distribute: heal layout in discover codepath too
  Raghavendra G | 2016-06-22 | 1 file | -33/+7

  Change-Id: Ic559c220a1f0051e531314d13940604e2dead08c
  BUG: 1336284
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-on: http://review.gluster.org/14349
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* dht/afr/client/posix: Fail mkdir without gfid-req
  Pranith Kumar K | 2016-06-16 | 2 files | -6/+20

  Do not allow directory creation without a gfid: after such directories are created, operations on them fail anyway, so it is better to fail the mkdir itself.

  >BUG: 1317361
  >Change-Id: I8f8e3b38bbded1960b7215bac0432500f7e78038
  >Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  >Reviewed-on: http://review.gluster.org/13690
  >Smoke: Gluster Build System <jenkins@build.gluster.com>
  >Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
  >CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  >(cherry picked from commit b246b07896fefb261c9fb07f3f29f0d03b81b88d)

  Change-Id: Ibf9c84add7265e3e1755a37958e1de38307624b2
  BUG: 1332372
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/14188
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
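  The shape of the guard being added can be sketched as follows. This is purely illustrative: dict_has_key() is a hypothetical helper, and EINVAL is used only as a stand-in errno, not necessarily the one the actual patch returns.

      #include <errno.h>
      #include <stdbool.h>
      #include <stddef.h>

      typedef struct dict dict_t;                       /* opaque xattr request      */
      bool dict_has_key(dict_t *xattr_req, const char *key);   /* assumed helper    */

      /* Refuse to create the directory when the client did not supply a gfid
       * to assign ("gfid-req"), since later operations on a gfid-less
       * directory would fail anyway. */
      int mkdir_guard(dict_t *xattr_req)
      {
          if (xattr_req == NULL || !dict_has_key(xattr_req, "gfid-req"))
              return -EINVAL;   /* fail early, for illustration only          */
          return 0;             /* proceed with the actual mkdir              */
      }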
* core, shard: Make shards inherit main file's O_DIRECT flag if presentKrutika Dhananjay2016-06-161-1/+2
| | | | | | | | | | | | | | | | | | | Backport of: http://review.gluster.org/14191 If the application opens a file with O_DIRECT, the shards' anon fds would also need to inherit the flag. Towards this, shard xl would be passing the odirect flag in the @flags parameter to the WRITEV fop. This will be used in anon fd resolution and subsequent opening by posix xl. Change-Id: I3a0593fa46cc25e390a5762a0354b469c2a1532d BUG: 1342903 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/14663 Smoke: Gluster Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* cluster/ec: Fix invalid __fd_unref() callXavier Hernandez2016-06-131-4/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | __fd_unref() doesn't do any cleanup, so it cannot be called to release fd references, specially if it's the last reference. The code has been changed to avoid a call to this function. In the previous version we always tried to keep the newest fd in the ec_lock_t structure. However this is not necessary. We'll always keep one reference to an open file on the same inode. It's irrelevant if the reference is new or old. The function __fd_unref() has also been removed from fd.h to avoid being used in the future since it's useless as it's defined now. Backport of http://review.gluster.org/14683 Change-Id: Ia728777fc8e464758d5ea4d3bf020f0603919039 BUG: 1344422 Signed-off-by: Xavier Hernandez <xhernandez@datalab.es> Reviewed-on: http://review.gluster.org/14685 Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* cluster/ec: Pass xdata to dht in case of error
  Ashish Pandey | 2016-06-13 | 1 file | -4/+6

  Problem: In case of mkdir failure, dht expects error information so that it can act accordingly. After adding bricks and rebalancing, the layout gets changed. A "mkdir" fop with the old layout returns EIO. EC gets this error in xdata but does not pass it back to dht, so dht is not able to take corrective action.

  Solution: Return xdata back to dht.

  master - http://review.gluster.org/#/c/14679/

  Change-Id: I24def8038e6880607689b7b046dc6428f564c6ab
  BUG: 1344595
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
  Reviewed-on: http://review.gluster.org/14689
  Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
* afr: Consider ENOSPC and EDQUOT as symmetric errors
  Ravishankar N | 2016-06-12 | 1 file | -5/+1

  Backport of http://review.gluster.org/#/c/14604/

  Problem: Since commit 8eaa3506ead4f11b81b146a9e56575c79f3aad7b, in replica 3, if a brick is down and a create fails on the other 2 bricks with EDQUOT, we consider it an asymmetric error and hence do not do post-op. So the dirty xattr remains set on the parent dir, leading to conservative merges during heal when all bricks are up, i.e. a file deleted on the source might re-appear after heal.

  Fix: Consider ENOSPC and EDQUOT as symmetric errors since no partial inode or entry modification operations are possible when quota is enabled. IOW, if quota reports EDQUOT, the number of bytes written (or not written) will be the same on all bricks of the replica. Likewise, the entry operation (create, mkdir...) will either succeed or not succeed on all bricks.

  Change-Id: Iacb1108e9ef4a918e36242fb4a957455133744e9
  BUG: 1344561
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/14688
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
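  The classification described above boils down to a small predicate; a minimal sketch (not the actual afr code) is shown below. The idea is that the post-op, which clears the dirty/pending changelog xattrs, can still run when op_ret < 0 but the error is symmetric, because a symmetric error is guaranteed to have left all bricks of the replica in the same state.

      #include <errno.h>
      #include <stdbool.h>

      /* A "symmetric" error leaves every brick of the replica identical,
       * so performing the post-op after such a failure is safe. */
      static bool afr_error_is_symmetric(int op_errno)
      {
          switch (op_errno) {
          case ENOSPC:   /* with quota enabled, no partial writes across bricks */
          case EDQUOT:   /* quota rejects the operation identically everywhere  */
              return true;
          default:
              return false;
          }
      }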
* cluster/ec: Restrict the launch of replace brick heal
  Ashish Pandey | 2016-06-09 | 1 file | -1/+2

  Problem: When features.cache-invalidation is ON, ec_notify() gets called a lot, which leads to the launch of too many heals. This leads to no heal completion, which causes accumulation of heals.

  Solution: ec_launch_replace_heal should not be launched for every event. Replace brick triggers a child-up event, and only then should this heal function be called.

  master - http://review.gluster.org/#/c/14649/

  Change-Id: I57b44c6a279d57230daea1d93229be6069245b7d
  BUG: 1342964
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
  Reviewed-on: http://review.gluster.org/14652
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
* cluster/afr: Unwind xdata_rsp even in case of failuresPranith Kumar K2016-06-037-21/+120
| | | | | | | | | | | | | | | | | | | | | | | | | | | DHT expects GF_PREOP_CHECK_FAILED to be present in xdata_rsp in case of mkdir failures because of stale layout. But AFR was unwinding null xdata_rsp in case of failures. This was leading to mkdir failures just after remove-brick. Unwind the xdata_rsp in case of failures to make sure the response from brick reaches dht. >BUG: 1340623 >Change-Id: Idd3f7b95730e8ea987b608e892011ff190e181d1 >Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> >Reviewed-on: http://review.gluster.org/14553 >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> >Reviewed-by: Ravishankar N <ravishankar@redhat.com> >Smoke: Gluster Build System <jenkins@build.gluster.com> >CentOS-regression: Gluster Build System <jenkins@build.gluster.com> >Reviewed-by: Anuradha Talur <atalur@redhat.com> >Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com> BUG: 1340992 Change-Id: I2641d35a851be692aa223dfea5d082245ac6c2bc Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/14633 NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> Smoke: Gluster Build System <jenkins@build.gluster.com>
* cluster/afr: Don't let NFS cache stat after writes.
  Pranith Kumar K | 2016-06-03 | 4 files | -6/+69

  Problem: AFR does post-op after a write, but the stat buffer it unwinds is from the time of the write, so if the NFS client caches it, the client sees a different ctime when it stats the file after the post-op is done. From the NFS client's perspective the file appears to have changed. Tar, which depends on this being correct, keeps giving the 'file changed as we read it' warning. If AFR instead chose to unwind after the post-op, eager-lock and delayed-post-op would have to be disabled, which would lead to bad performance for all write use cases.

  Fix: Don't let the client cache stat after a write.

  >Change-Id: Ic6062acc6e5cdd97a9c83c56bd529ec83cee8a23
  >BUG: 1302948
  >Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  >Signed-off-by: Anuradha Talur <atalur@redhat.com>
  >Reviewed-on: http://review.gluster.org/13785
  >Smoke: Gluster Build System <jenkins@build.gluster.com>
  >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  >CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  >Reviewed-by: Niels de Vos <ndevos@redhat.com>

  BUG: 1312721
  Change-Id: I42a5d524bcf2a2034fe48ee8454812ca26a98c37
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/14454
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
* cluster/afr: Refresh inode for inode-write fops in needPranith Kumar K2016-05-294-37/+97
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Problem: If a named fresh-lookup is done on an loc and the fop fails on one of the bricks or not sent on one of the bricks, but by the time response comes to afr, if the brick is up, 'can_interpret' will be set to false in afr_lookup_done(), this will lead to inode-ctx for that inode to be not set, this can lead to EIO in case of a transaction as it depends on 'readable' array to be available by that point. Fix: Refresh inode for inode-write fops for the ctx to be set if it is not already done at the time of named fresh-lookup or if the file is in split-brain where we need to perform one more refresh before failing the fop to check if the file is still in split-brain or not. >BUG: 1336612 >Change-Id: I5c50b62c8de06129b8516039f7c252e5008c47a5 >Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> >Reviewed-on: http://review.gluster.org/14368 >Smoke: Gluster Build System <jenkins@build.gluster.com> >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> >Reviewed-by: Ravishankar N <ravishankar@redhat.com> >CentOS-regression: Gluster Build System <jenkins@build.gluster.com> >Backport of http://review.gluster.org/14545 BUG: 1337831 Change-Id: If4465ab8fc506e1f905b623b82a53bdab8f5cffd Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/14453 NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: Ravishankar N <ravishankar@redhat.com> Smoke: Gluster Build System <jenkins@build.gluster.com> CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
* dht: selfheal should wind mkdir call to subvols with ESTALE errorSakshi Bansal2016-05-261-1/+2
| | | | | | | | | | | | | | | | | Backport of http://review.gluster.org/#/c/14496/ > Change-Id: I7140e50263b5f28b900829592c664fa1d79f3f99 > BUG: 1338634 > Signed-off-by: Sakshi Bansal <sabansal@redhat.com> Change-Id: I7140e50263b5f28b900829592c664fa1d79f3f99 BUG: 1338668 Signed-off-by: Sakshi Bansal <sabansal@redhat.com> Reviewed-on: http://review.gluster.org/14497 Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* cluster/afr: Do not inode_link in afrPranith Kumar K2016-05-263-5/+46
| | | | | | | | | | | | | | | | | | | | | | | | | | Race is explained at https://bugzilla.redhat.com/show_bug.cgi?id=1337405#c0 This patch also handles performing of self-heal with shd-pid. Also performs the healing with this->itable's inode rather than main itable. >BUG: 1337405 >Change-Id: Id657a6623b71998b027b1dff6af5bbdf8cab09c9 >Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> >Reviewed-on: http://review.gluster.org/14422 >Smoke: Gluster Build System <jenkins@build.gluster.com> >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> >CentOS-regression: Gluster Build System <jenkins@build.gluster.com> >Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com> BUG: 1337872 Change-Id: I6d8e79a44e4cc1c5489d81f05c82510e4e90546f Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/14456 NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> Smoke: Gluster Build System <jenkins@build.gluster.com>
* cluster/afr: Check for required number of entrylks
  Ravishankar N | 2016-05-25 | 1 file | -5/+83

  Backport of: http://review.gluster.org/14358

  Problem: Parallel rmdir operations on the same directory result in ENOTCONN messages even though there was no network disconnect. In the blocking entry lock during rmdir, AFR takes two sets of locks on all its children: one on (parent dir, name of the dir to be deleted), the other a full lock on the dir being deleted. We proceed to the pre-op stage even if only a single lock (but not all the needed locks) was obtained, only to fail it with ENOTCONN because afr_locked_nodes_get() returns zero nodes in afr_changelog_pre_op().

  Fix: After we get replies for all blocking lock requests, if we don't have the minimum number of locks to carry out the FOP, unlock and fail the FOP. The op_errno will be that of the last failed reply we got, i.e. whatever is set in afr_lock_cbk().

  Change-Id: I9fcb6bec0335dd9cdd851a92cb08605b4a959e64
  BUG: 1339446
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/14528
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
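  An illustrative sketch of that decision point, with hypothetical structures rather than the real afr_local_t: once every blocking-lock reply has arrived, the fop proceeds only if enough children granted the lock; otherwise the caller releases whatever was taken and fails with the errno recorded from the last failed lock reply.

      #include <errno.h>
      #include <stdbool.h>

      struct lock_state {
          int child_count;     /* total children of the replica              */
          int locked_count;    /* children that granted the entry lock       */
          int last_op_errno;   /* errno recorded in the lock callback        */
      };

      static bool have_required_locks(const struct lock_state *ls, int required)
      {
          return ls->locked_count >= required;
      }

      /* Returns 0 to proceed with pre-op/fop, or a negative errno to fail.
       * The caller is responsible for unlocking the children that did grant
       * the lock before unwinding the failure. */
      static int decide_after_lock_phase(const struct lock_state *ls, int required)
      {
          if (!have_required_locks(ls, required))
              return -(ls->last_op_errno ? ls->last_op_errno : ENOTCONN);
          return 0;
      }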
* cluster/afr: Do heals with shd pidPranith Kumar K2016-05-241-1/+10
| | | | | | | | | | | | | | | | | | | | | | Multi-threaded healing doesn't create synctask with shd pid, this leads to healing problems when quota exceeds. >BUG: 1332994 >Change-Id: I80f57c1923756f3298730b8820498127024e1209 >Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> >Reviewed-on: http://review.gluster.org/14211 >Smoke: Gluster Build System <jenkins@build.gluster.com> >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> >CentOS-regression: Gluster Build System <jenkins@build.gluster.com> >Reviewed-by: Ravishankar N <ravishankar@redhat.com> Change-Id: Id3f3ee44b27db7dbf94f3e7a9a6bfd7412d44ab8 BUG: 1335686 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/14313 Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
* features/shard: Get hard-link-count in {unlink,rename}_cbk before deleting shards
  Krutika Dhananjay | 2016-05-23 | 1 file | -8/+13

  Backport of http://review.gluster.org/#/c/14334/

  Change-Id: I41321d8b00a10f1bd5b0a7b008f673b1aa240d0c
  BUG: 1337837
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/14450
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* cluster/afr: Do post-op in case of symmetric errors
  Anuradha Talur | 2016-05-23 | 1 file | -2/+6

  Backport of: http://review.gluster.org/#/c/14310/

  Problem: In afr_changelog_post_op_now(), if there was any error (op_ret < 0), the post-op was not being done even when the errors were symmetric and there were no "failed subvols".

  Fix: When the errors are symmetric, perform the post-op.

  How the bug was found: In a 1x3 volume with shard and write-behind enabled, when writes were done to a file with one brick down, the value of the trusted.afr.dirty xattr for the .shard directory would keep increasing because the post-op was not done but the pre-op was. This incorrectly showed .shard to be in split-brain.

  RCA: When write-behind is on, due to multiple writes being sent to offsets lying in the same shard, chances are that the same shard file will be created more than once, with the second attempt failing with op_ret < 0 and op_errno = EEXIST. As op_ret was negative, afr wouldn't do the post-op, so the trusted.afr.dirty xattr was never decremented, showing the .shard directory to be in split-brain.

  >Change-Id: I711bdeaa1397244e6a7790e96f0c84501798fc59
  >BUG: 1335652
  >Signed-off-by: Anuradha Talur <atalur@redhat.com>

  Change-Id: I711bdeaa1397244e6a7790e96f0c84501798fc59
  BUG: 1335836
  Signed-off-by: Anuradha Talur <atalur@redhat.com>
  Reviewed-on: http://review.gluster.org/14332
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* cluster/tier: downgrade max-cycle-time log message to INFODan Lambright2016-05-211-1/+1
| | | | | | | | | | | | | | | | | | | | The "max cycle time" log message was incorrectly logged as an error. Downgrade it to INFO. This is a backport of 14336 > Change-Id: Ia7d074423019fa79443bc6ea694148b7b8da455d > BUG: 1335973 > Signed-off-by: Dan Lambright <dlambrig@redhat.com> Change-Id: I29514c66781f49d5c36a0d3ad5dee6ab0c0368cd BUG: 1336470 Signed-off-by: Dan Lambright <dlambrig@redhat.com> Reviewed-on: http://review.gluster.org/14361 Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: N Balachandran <nbalacha@redhat.com>
* cluster/afr: If possible give errno received from lower xlatorsPranith Kumar K2016-05-201-1/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | | In case of 3 way replication with quorum enabled with sharding, if one bricks is brought down and brought back up sometimes fops fail with EROFS because the mknod of shard file fails with two good nodes with EEXIST. So even when quorum is not met, it makes sense to unwind with the errno returned by lower xlators as much as possible. >Change-Id: Iabd91cd7c270f5dfe6cbd18c50e59c299a331552 >BUG: 1336612 >Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> >Reviewed-on: http://review.gluster.org/14369 >Smoke: Gluster Build System <jenkins@build.gluster.com> >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> >CentOS-regression: Gluster Build System <jenkins@build.gluster.com> >Reviewed-by: Ravishankar N <ravishankar@redhat.com> BUG: 1337831 Change-Id: I18979db118911e588da318094b2d22f5d426efd5 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/14452 Reviewed-by: Ravishankar N <ravishankar@redhat.com> Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
* dht: rename takes lock on parent directory if destination existsSakshi Bansal2016-05-181-7/+32
| | | | | | | | | | | | | | | | | | | | | | For directory rename if destination exists the source directory is created as a child of the given destination directory. Since the new child directory does not exist take lock on parent of the child directory. Backport of http://review.gluster.org/#/c/14371/ > Change-Id: I24a34605a2cd65984910643ff5462f35e8fc7e71 > BUG: 1336698 > Signed-off-by: Sakshi Bansal <sabansal@redhat.com> Change-Id: I24a34605a2cd65984910643ff5462f35e8fc7e71 BUG: 1337022 Signed-off-by: Sakshi Bansal <sabansal@redhat.com> Reviewed-on: http://review.gluster.org/14407 Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* tier/detach: Clear tier-fix-layout-complete xattr after migration threads joinDan Lambright2016-05-161-33/+42
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Previously we had wrongly placed the clearing tier-fix-layout-complete xattr before the joining of migration threads. This would lead to situations where failure of clearing the xattr would cause the premature death of migration threads. Now we clear the xattr only after the data movement threads join, ensuring that all migration is done. This is a backport of 14285 > Change-Id: I829b671efa165ae13dbff7b00707434970b37a09 > BUG: 1334839 > Signed-off-by: Joseph Fernandes <josferna@redhat.com> Signed-off-by: Dan Lambright <dlambrig@redhat.com> Change-Id: I475242e6a05cacd2252dc5c29b160e7abc5d1791 BUG: 1336148 Reviewed-on: http://review.gluster.org/14341 Smoke: Gluster Build System <jenkins@build.gluster.com> Tested-by: N Balachandran <nbalacha@redhat.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: N Balachandran <nbalacha@redhat.com> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Dan Lambright <dlambrig@redhat.com>
* tier/detach : During detach check if background fixlayout is doneJoseph Fernandes2016-05-141-1/+16
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | During detach check if background fixlayout is done, if not done ignore the case and continue detach. Backport of http://review.gluster.org/14147 > Change-Id: I5d5cfc0e73d0eb217fdeab54c432dc4af8bc598d > BUG: 1332136 > Signed-off-by: Joseph Fernandes <josferna@redhat.com> > Reviewed-on: http://review.gluster.org/14147 > Smoke: Gluster Build System <jenkins@build.gluster.com> > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> > CentOS-regression: Gluster Build System <jenkins@build.gluster.com> > Reviewed-by: N Balachandran <nbalacha@redhat.com> > Reviewed-by: Dan Lambright <dlambrig@redhat.com> >Signed-off-by: Joseph Fernandes <josferna@redhat.com> Change-Id: I2161673cf6861b02a8e323366a13a13587258bef BUG: 1333934 Signed-off-by: Joseph Fernandes <josferna@redhat.com> Reviewed-on: http://review.gluster.org/14246 Smoke: Gluster Build System <jenkins@build.gluster.com> Tested-by: Joseph Fernandes CentOS-regression: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: N Balachandran <nbalacha@redhat.com> Reviewed-by: Dan Lambright <dlambrig@redhat.com>
* cluster/afr: Handle non-zero source in heal-info decision
  Pranith Kumar K | 2016-05-12 | 1 file | -2/+2

  Backport of http://review.gluster.org/14302

  Problem: Spurious entries are reported in heal info when the mount is on the second/third brick of the replica pair, because the local child is given preference when selecting the source. The code is supposed to report that the file needs heal when source < 0 (the failure code path), but it was written as if any non-zero value were a failure.

  Fix: Treat a positive source as the success case.

  BUG: 1334566
  Change-Id: Iac6d68cc429496756a9d8f6e21e71aa5f6b932ee
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/14304
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-by: Anuradha Talur <atalur@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
* dht: remember locked subvol and send unlock to the same
  Mohammed Rafi KC | 2016-05-06 | 5 files | -21/+221

  During locking we send the lock request to the cached subvol, and normally we also send the unlock to the cached subvol. But with a parallel fresh lookup on a directory, there is a race window where the cached subvol can change and the unlock can go to a different subvol from the one on which we took the lock. This results in a stale lock held on one of the subvols. So we now store the subvol on which the lock was taken and send the unlock to that same subvol.

  Backport of:
  >Change-Id: I47df99491671b10624eb37d1d17e40bacf0b15eb
  >BUG: 1311002
  >Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  >Reviewed-on: http://review.gluster.org/13492
  >Reviewed-by: N Balachandran <nbalacha@redhat.com>
  >Smoke: Gluster Build System <jenkins@build.gluster.com>
  >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  >Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  >CentOS-regression: Gluster Build System <jenkins@build.gluster.com>

  Change-Id: Ia847e7115d2296ae9811b14a956f3b6bf39bd86d
  BUG: 1333645
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/14236
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
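  The pattern is simply to snapshot the target at lock time and reuse it at unlock time. A minimal sketch with hypothetical types (not the real dht_local_t or its lock helpers):

      typedef struct xlator xlator_t;

      struct dht_local {
          xlator_t *cached_subvol;   /* may be updated by a parallel lookup */
          xlator_t *lock_subvol;     /* fixed at lock time                  */
      };

      int subvol_lock(xlator_t *subvol);     /* assumed lock helper         */
      int subvol_unlock(xlator_t *subvol);   /* assumed unlock helper       */

      int take_lock(struct dht_local *local)
      {
          local->lock_subvol = local->cached_subvol;   /* snapshot          */
          return subvol_lock(local->lock_subvol);
      }

      int release_lock(struct dht_local *local)
      {
          /* Unlock on the remembered subvol, not on cached_subvol, which a
           * parallel fresh lookup may have changed in the meantime. */
          return subvol_unlock(local->lock_subvol);
      }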
* cluster/dht: Perform NULL check on xdata before dict_get()Krutika Dhananjay2016-05-051-1/+1
| | | | | | | | | | | | | | | | Backport of: http://review.gluster.org/14212 .. to prevent unnecessary logs from gf_msg_callingfn() Change-Id: I443322d26f2b5238320bc14f1ddc94affe030943 BUG: 1333241 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/14216 Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* cluster/ec: Fix issues with eager lockingXavier Hernandez2016-05-042-75/+192
| | | | | | | | | | | | | | | | | | | | | | | | | | | Due to a race in timer cancellation, in some cases it was possible to unlock the lock while another concurrent fop that needed it continues execution as if it were not released. This patch also fixes an issue that caused a lock to not be released if an error was found while preparing ec_update_size_version(). > Change-Id: I1344a3f5ecfc333f05a09e62653838264c9c26b1 > BUG: 1331254 > Signed-off-by: Xavier Hernandez <xhernandez@datalab.es> > Reviewed-on: http://review.gluster.org/14112 > Smoke: Gluster Build System <jenkins@build.gluster.com> > CentOS-regression: Gluster Build System <jenkins@build.gluster.com> > Reviewed-by: Chen Chen <chenchen@smartquerier.com> > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> Change-Id: I21edd17d914dfa8d2f98e6bbde50830496e12a92 BUG: 1330132 Signed-off-by: Xavier Hernandez <xhernandez@datalab.es> Reviewed-on: http://review.gluster.org/14174 Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* cluster/dht: Handle rmdir failure correctly
  N Balachandran | 2016-05-02 | 2 files | -13/+108

  DHT did not handle rmdir failures on non-hashed subvols correctly in a 2x2 dist-rep volume, causing the directory to be deleted from the hashed subvol. Also fixed an issue where the dht_selfheal_restore error codes were overwriting the rmdir error codes.

  Change-Id: If2c6f8dc8ee72e3e6a7e04a04c2108243faca468
  BUG: 1331933
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: http://review.gluster.org/14123
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: Raghavendra G <rgowdapp@redhat.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* libglusterfs: Add debug and trace logs for stack traceRaghavendra Talur2016-05-022-4/+5
| | | | | | | | | | | | | | | | | Backport of http://review.gluster.org/13448 commit c4d67a8338b42d6485f49999f310cbb9ed5359c5 It has become very difficult to identify the xlator which returned negative op_ret. Being able to just change the log level and visualize the stack is helpful in such cases. Change-Id: I6545b4802c1ab4d0d230d5e9e036afb2384882e1 BUG: 1330739 Signed-off-by: Raghavendra Talur <rtalur@redhat.com> Reviewed-on: http://review.gluster.org/14099 Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
* cluster/afr: Do not fsync when durability is offPranith Kumar K2016-04-291-0/+3
| | | | | | | | | | | | | | | | | | | | | | | >BUG: 1329501 >Change-Id: Id402c20f2fa19b22bc402295e03e7a0ea96b0c40 >Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> >Reviewed-on: http://review.gluster.org/14048 >Reviewed-by: Ravishankar N <ravishankar@redhat.com> >Smoke: Gluster Build System <jenkins@build.gluster.com> >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> >CentOS-regression: Gluster Build System <jenkins@build.gluster.com> >Reviewed-by: Jeff Darcy <jdarcy@redhat.com> >(cherry picked from commit 302e218f68ef5edab6b369411d6f06cafea08ce1) Change-Id: Ifbf693f8de6765fca90a9ef3c11c1912c2e9885f BUG: 1331342 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/14104 Reviewed-by: Ravishankar N <ravishankar@redhat.com> Smoke: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
* tier/dht: check for rebalance completion for EIO errorMohammed Rafi KC2016-04-281-1/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | When an ongoing rebalance completion check task been triggered by dht, there is a possibility of a race between afr setting subvol as non-readable and dht updates the cached subvol. In this window a write can fail with EIO. Back port of> >Change-Id: I42638e6d4104c0dbe893d1bc73e1366188458c5d >BUG: 1329503 >Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> >Reviewed-on: http://review.gluster.org/14049 >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> >Smoke: Gluster Build System <jenkins@build.gluster.com> >CentOS-regression: Gluster Build System <jenkins@build.gluster.com> >Reviewed-by: N Balachandran <nbalacha@redhat.com> >Reviewed-by: Jeff Darcy <jdarcy@redhat.com> (cherry picked from commit a9ccd0c8ea6989c72073028b296f73a6fcb6b896) Change-Id: Ifc741f05a9cfe5c1d9c03085312ebbf21b8b798b BUG: 1330428 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/14070 Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: N Balachandran <nbalacha@redhat.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* afr: propagate child up event after timeout
  Ravishankar N | 2016-04-27 | 4 files | -99/+200

  Backport of: http://review.gluster.org/11113

  Problem: During mount, afr waits for a response from all its children before notifying the parent xlator. In a 1x2 replica volume, if one of the nodes is down, the mount will hang for more than a minute until child-down is received from the client xlator for that node.

  Fix: When parent-up is received by afr, start a 10 second timer. In the timer callback, if we have received a successful child-up from at least one brick, propagate the event to the parent xlator.

  Change-Id: I31e57c8802c1a03a4a5d581ee4ab82f3a9c8799d
  BUG: 1330855
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/14088
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
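  A minimal sketch of that idea using plain pthreads and a hypothetical notify_parent_up() (the real code uses the gluster timer facility and notify paths instead): notify the parent as soon as every child has replied, or when the timer fires if at least one child has come up by then, and never more than once.

      #include <pthread.h>
      #include <stdbool.h>
      #include <unistd.h>

      struct up_state {
          pthread_mutex_t lock;
          int  child_count;
          int  replies;        /* children that sent CHILD_UP or CHILD_DOWN  */
          int  up_children;    /* children that sent CHILD_UP                */
          bool notified;       /* parent already told about CHILD_UP?        */
      };

      void notify_parent_up(void);   /* assumed: propagates CHILD_UP upwards */

      static void maybe_notify(struct up_state *s, bool timer_expired)
      {
          bool do_notify = false;
          pthread_mutex_lock(&s->lock);
          if (!s->notified && s->up_children > 0 &&
              (timer_expired || s->replies == s->child_count)) {
              s->notified = true;
              do_notify = true;
          }
          pthread_mutex_unlock(&s->lock);
          if (do_notify)
              notify_parent_up();
      }

      void on_child_event(struct up_state *s, bool child_is_up)
      {
          pthread_mutex_lock(&s->lock);
          s->replies++;
          if (child_is_up)
              s->up_children++;
          pthread_mutex_unlock(&s->lock);
          maybe_notify(s, false);
      }

      void *timer_thread(void *arg)   /* started when PARENT_UP is received  */
      {
          sleep(10);                  /* stand-in for the real timer         */
          maybe_notify((struct up_state *)arg, true);
          return NULL;
      }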
* cluster/tier: fixes for migration over ec as cold tierDan Lambright2016-04-271-1/+1
| | | | | | | | | | | | | | | | Backport of http://review.gluster.org/11433 The above mentioned patch was backported but it was not complete. This patch backports whatever was left out. For more info, refer to previous backport at http://review.gluster.org/#/c/11652. Change-Id: I6ccda8ca8e43485b9b354341bbfcb302496f632c BUG: 1330765 Signed-off-by: Raghavendra Talur <rtalur@redhat.com> Reviewed-on: http://review.gluster.org/14084 Smoke: Gluster Build System <jenkins@build.gluster.com> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
* afr: replica pair going offline does not require CHILD_MODIFIED eventSakshi Bansal2016-04-272-1/+7
| | | | | | | | | | | | | | | | | | | | | | | | | | | | As a part of CHILD_MODIFIED event DHT forgets the current layout and performs fresh lookup. However this is not required when a replica pair goes offline as the xattrs can be read from other replica pairs. Hence setting different event to handle replica pair going down. > Backport of http://review.gluster.org/#/c/12573/ > Change-Id: I5ede2a6398e63f34f89f9d3c9bc30598974402e3 > BUG: 1281230 > Signed-off-by: Sakshi Bansal <sabansal@redhat.com> > Reviewed-on: http://review.gluster.org/12573 > Reviewed-by: Ravishankar N <ravishankar@redhat.com> > Reviewed-by: Susant Palai <spalai@redhat.com> > Tested-by: NetBSD Build System <jenkins@build.gluster.org> > Tested-by: Gluster Build System <jenkins@build.gluster.com> > Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Change-Id: Ida30240d1ad8b8730af7ab50b129dfb05264fdf9 BUG: 1283972 Signed-off-by: Sakshi Bansal <sabansal@redhat.com> Reviewed-on: http://review.gluster.org/12767 Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* quota: setting 'read-only' option in xdata to instruct DHT to not healSakshi Bansal2016-04-261-2/+10
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | When quota is enabled the quota enforcer tries to get the size of the source directory by sending nameless lookup to quotad. But if the rename is successful even on one subvol or the source layout has anomalies then this nameless lookup in quotad tries to heal the directory which requires a lock on as many subvols as it can. But src is already locked as part of rename. For rename to proceed in brick it needs to complete a cluster-wide lookup. But cluster-wide lookup in quotad is blocked on locks held by rename, hence a deadlock. To avoid this quota sends an option in xdata which instructs DHT not to heal. Backport of http://review.gluster.org/#/c/13988/ > Change-Id: I792f9322331def0b1f4e16e88deef55d0c9f17f0 > BUG: 1252244 > Signed-off-by: Sakshi Bansal <sabansal@redhat.com> > Reviewed-on: http://review.gluster.org/13988 > Smoke: Gluster Build System <jenkins@build.gluster.com> > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> > CentOS-regression: Gluster Build System <jenkins@build.gluster.com> > Tested-by: Gluster Build System <jenkins@build.gluster.com> > Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Change-Id: I792f9322331def0b1f4e16e88deef55d0c9f17f0 BUG: 1328473 Signed-off-by: Sakshi Bansal <sabansal@redhat.com> Reviewed-on: http://review.gluster.org/14031 Smoke: Gluster Build System <jenkins@build.gluster.com> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Tested-by: Raghavendra G <rgowdapp@redhat.com>
* dht/rebalance: Handle GF_DEFRAG_STOPSusant Palai2016-04-261-0/+17
| | | | | | | | | | | | | | | | | | Backport of http://review.gluster.org/14004 Problem: On a rebal stop, the migrator threads don't intimate the crawler thread to wake up in case it is waiting on signal from migrator thread. BUG: 1330529 Change-Id: I9019a715c7b4673b8bb5a75d7d33a18add85ce33 Reviewed-by: N Balachandran <nbalacha@redhat.com> Signed-off-by: Susant Palai <spalai@redhat.com> Reviewed-on: http://review.gluster.org/14076 NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Smoke: Gluster Build System <jenkins@build.gluster.com>
* dht: add "nuke" functionality for efficient server-side deletion
  Jeff Darcy | 2016-04-26 | 1 file | -0/+45

  This is a backport of the following two patches (of which the second is a trivial adjustment to a timeout for a test added by the first):
  http://review.gluster.org/13878
  http://review.gluster.org/13935

  This turns a special xattr into an rmdir with flags set. When that hits the posix translator on the server side, that causes the file/directory to be moved into the special "landfill" directory. From there, the posix janitor thread will take care of deleting it entirely on the server side, traversing it recursively if necessary.

  A couple of secondary issues were fixed to make this effective.
  * FUSE now ensures that setxattr values are NUL terminated.
  * The janitor thread now gets woken up immediately when something is placed in 'landfill' instead of only when file descriptors need to be closed.
  * The default landfill-emptying interval was reduced to 10s.

  To use the feature, issue a setxattr something like this:

      setfattr -n glusterfs.dht.nuke -v "" /mnt/glusterfs/vol/some_dir

  The value doesn't actually matter; the mere receipt of a request with this key is sufficient. Some day it might be useful to allow setting a required value as a sort of password, so that only those who know it can access the underlying special functionality.

  Change-Id: I4132a30d1faa53a6682399ad1d9041e2c4519951
  BUG: 1330241
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: http://review.gluster.org/14065
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* cluster/afr: Fix inode-leak in data self-healPranith Kumar K2016-04-251-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | Thanks to Olia-Kremmyda for finding the bug on github review, https://github.com/gluster/glusterfs/commit/b8106d1127f034ffa88b5dd322c23a10e023b9b6 >Change-Id: Ib8640ed0c331a635971d5d12052f0959c24f76a2 >BUG: 1329773 >Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> >Reviewed-on: http://review.gluster.org/14052 >Smoke: Gluster Build System <jenkins@build.gluster.com> >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> >CentOS-regression: Gluster Build System <jenkins@build.gluster.com> >Reviewed-by: Ravishankar N <ravishankar@redhat.com> >Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com> BUG: 1329779 Change-Id: I3d77f0b445fdedf2c582ea88f8d89e1da525638f Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/14053 Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
* cluster/distribute: detect stale layouts in entry fops
  Raghavendra G | 2016-04-25 | 5 files | -27/+645

      dht_mkdir ()
      {
          first-hashed-subvol = hashed-subvol for "bname" in in-memory layout of "parent";
          inodelk (SETLKW, parent, "LAYOUT_HEAL_DOMAIN",
                   "can be any subvol, but we choose first-hashed-subvol randomly");
          {
            begin:
              hashed-subvol = hashed-subvol for "bname" in in-memory layout of "parent";
              hash-range = extract hash-range from layout of "parent";
              ret = mkdir (parent/bname, hashed-subvol, hash-range);
              if (ret == "hash-value doesn't fall into layout stored on the brick
                          (this error is returned by posix-mkdir)")
              {
                  refresh_parent_layout ();
                  goto begin;
              }
          }
          inodelk (UNLCK, parent, "LAYOUT_HEAL_DOMAIN", "first-hashed-subvol");
          proceed with other parts of dht_mkdir;
      }

      posix_mkdir (parent/bname, client-hash-range)
      {
          disk-hash-range = getxattr (parent, "dht-layout-key");
          if (disk-hash-range != client-hash-range)
          {
              fail-with-error ("hash-value doesn't fall into layout stored on the brick");
              return 0;
          }
          continue-with-posix-mkdir;
      }

  Similar changes need to be done for dentry operations like create, symlink, link, unlink, rmdir, rename. These will be addressed in subsequent patches. This patch addresses only the mkdir codepath.

  This change breaks stripe tests, as on some striped subvols dht layout xattrs are not set for some reason. This results in failure of mkdir. Since striped volumes are always created with dht, some tests associated with stripe also fail. So, I am making the following test changes (since stripe is out of maintenance):
  * modify ./tests/basic/rpc-coverage.t to not use striped volumes
  * mark all (2) tests in tests/bugs/stripe/ as bad tests

  Change-Id: Idd1ae879f24a48303dc743c1bb4d91f89a629e25
  BUG: 1329062
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-on: http://review.gluster.org/14040
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
* cluster/afr: Fix spurious entries in heal infoPranith Kumar K2016-04-204-16/+48
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Problem: Locking schemes in afr-v1 were locking the directory/file completely during self-heal. Newer schemes of locking don't require Full directory, file locking. But afr-v2 still has compatibility code to work-well with older clients, where in entry-self-heal it takes a lock on a special 256 character name which can't be created on the fs. Similarly for data self-heal there used to be a lock on (LLONG_MAX-2, 1). Old locking scheme requires heal info to take sh-domain locks before examining heal-state. If it doesn't take sh-domain locks, then there is a possibility of heal-info hanging till self-heal completes because of compatibility locks. But the problem with heal-info taking sh-domain locks is that if two heal-info or shd, heal-info try to inspect heal state in parallel using trylocks on sh-domain, there is a possibility that both of them assuming a heal is in progress. This was leading to spurious entries being shown in heal-info. Fix: As long as there is afr-v1 way of locking, we can't fix this problem with simple solutions. If we know that the cluster is running newer versions of locking schemes, in those cases we can give accurate information in heal-info. So introduce a new option called 'locking-scheme' which if it is 'granular' will give correct information in heal-info. Not only that, Extra network hops for taking compatibility locks, sh-domain locks in heal info will not be necessary anymore. Thus it improves performance. >BUG: 1322850 >Change-Id: Ia563c5f096b5922009ff0ec1c42d969d55d827a3 >Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> >Reviewed-on: http://review.gluster.org/13873 >Smoke: Gluster Build System <jenkins@build.gluster.com> >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> >CentOS-regression: Gluster Build System <jenkins@build.gluster.com> >Reviewed-by: Ashish Pandey <aspandey@redhat.com> >Reviewed-by: Anuradha Talur <atalur@redhat.com> >Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com> >(cherry picked from commit b6a0780d86e7c6afe7ae0d9a87e6fe5c62b4d792) Change-Id: If7eee18843b48bbeff4c1355c102aa572b2c155a BUG: 1294675 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/14039 Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com> Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
* cluster/afr: Use parallel dir scan functionalityPranith Kumar K2016-04-174-13/+59
| | | | | | | | | | | | | | | | | | | | >BUG: 1221737 >Change-Id: I0ed71a72f0e33bd733723e00a01cf28378c5534e >Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> >Reviewed-on: http://review.gluster.org/13755 >Reviewed-on: http://review.gluster.org/13992 >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> >CentOS-regression: Gluster Build System <jenkins@build.gluster.com> >Smoke: Gluster Build System <jenkins@build.gluster.com> >Reviewed-by: Jeff Darcy <jdarcy@redhat.com> BUG: 1325857 Change-Id: I7c6b2ea065edd7f5dafffeb42fd6c601b4ab8d14 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/14010 Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
* cluster/afr: Don't lookup/forget inodesPranith Kumar K2016-04-175-43/+6
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Problem: All inodes that are looked-up are always forgotten without fail in afr removing the benefits of them being in lru. This same code can cause crashes if between inode_lookup, inode_forget in afr if the top xlator does inode_forget(0). Fix: Don't use lookup/forget in afr. No benefits are there at the moment for keeping this code. It is impossible to prevent top xlators to do inode_forget(0). Found similar instances in ec and removed them even though those code paths are not going to be executed in any place other than heal-daemon. >BUG: 1321554 >Change-Id: Ia4cb236178f7f129cc898d53f0bbd26f494a2a8d >Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> >Reviewed-on: http://review.gluster.org/13834 >Smoke: Gluster Build System <jenkins@build.gluster.com> >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> >CentOS-regression: Gluster Build System <jenkins@build.gluster.com> >Reviewed-by: Anuradha Talur <atalur@redhat.com> BUG: 1327864 Change-Id: I3507ed88cd75e069ed302525bfa259cf407871fb Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/14009 Smoke: Gluster Build System <jenkins@build.gluster.com> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
* cluster/afr: Fix partial heals in 3-way replicationPranith Kumar K2016-04-165-15/+138
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Problem: When there are 2 sources and one sink and if two self-heal daemons try to acquire locks at the same time, there is a chance that it gets a lock on one source and sink leading partial to heal. This will need one more heal from the remaining source to sink for the complete self-heal. This is not optimal. Fix: Upgrade non-blocking locks to blocking lock on all the subvolumes, if the number of locks acquired is majority and there were eagains. >BUG: 1318751 >Change-Id: Iae10b8d3402756c4164b98cc49876056ff7a61e5 >Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> >Reviewed-on: http://review.gluster.org/13766 >Smoke: Gluster Build System <jenkins@build.gluster.com> >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> >CentOS-regression: Gluster Build System <jenkins@build.gluster.com> >Reviewed-by: Ravishankar N <ravishankar@redhat.com> >(cherry picked from commit 8deedef565df49def75083678f8d1558c7b1f7d3) Change-Id: Ia164360dc1474a717f63633f5deb2c39cc15017c BUG: 1327863 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/14008 Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
* cluster/afr: Fix witness counting code in src/sink detectionPranith Kumar K2016-04-161-7/+19
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Problem: In afr-v1 pre-op, xattrop increments self xattr first then it increments the value on rest. In post-op, xattr value is decreased first on rest and at last it gets decremented on self. So for a possible operation to be witnessed i.e. a fop is seen by the brick it is important to have at least 1 pending op because without completing pre-op fop won't come. The other possibility is when fop completes but at the time of post-op after decrementing pending counts on others just before decrementing its own pending count, the brick dies. Fix: Fix witness detection code in afr_self_heal_find_direction() >BUG: 1322253 >Change-Id: Ia7e76482c0a46e775e269bb96ec1b9490a3ac18f >Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> >Reviewed-on: http://review.gluster.org/13811 >Smoke: Gluster Build System <jenkins@build.gluster.com> >CentOS-regression: Gluster Build System <jenkins@build.gluster.com> >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> >Reviewed-by: Ravishankar N <ravishankar@redhat.com> >(cherry picked from commit e88962f8c49ea1d65fa26703e5c11be3f21af2ba) Change-Id: I5d9a6d323b35409127c26f3ce61c5e1d91395b18 BUG: 1326212 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/13975 Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
* cluster/afr: Don't delete gfid-req from lookup requestPranith Kumar K2016-04-122-5/+9
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Problem: Afr does dict_ref of the xattr_req that comes to it and deletes "gfid-req" key. Dht uses same dict to send lookup to other subvolumes. So in case of directories and more than 1 dht subvolumes, second subvolume till the last subvolume won't get a lookup request with "gfid-req". So gfid reset never happens on the directories in distributed replicate subvolume for 2nd till last subvolumes. Fix: Make a copy of lookup xattr request. Also fixed replies_wipe possibly resetting gfid to NULL gfid >BUG: 1312816 >Change-Id: Ic16260e5a4664837d069c1dc05b9e96ca05bda88 >Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> >Reviewed-on: http://review.gluster.org/13545 >Smoke: Gluster Build System <jenkins@build.gluster.com> >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> >CentOS-regression: Gluster Build System <jenkins@build.gluster.com> >Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com> >(cherry picked from commit 9b022c3a3f2f774904b5b458ae065425b46cc15d) Change-Id: Ia68193b559ec1dfd841cc5a22ef1fa801b866200 BUG: 1313693 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/13574 CentOS-regression: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> Smoke: Gluster Build System <jenkins@build.gluster.com>
* arbiter: write performance improvement
  Ravishankar N | 2016-04-11 | 2 files | -12/+12

  Backport of: http://review.gluster.org/#/c/13906

  Problem: The throughput for a 'dd' workload was much lower for the arbiter configuration than for a normal replica-3 volume. There were 2 issues:
  i) arbiter_writev was using the request dict as the response dict while unwinding, leading to incorrect GLUSTERFS_WRITE_IS_APPEND and GLUSTERFS_OPEN_FD_COUNT values (=4), leading to immediate post-ops because is_afr_delayed_changelog_post_op_needed() failed due to the afr_are_multiple_fds_opened() check.
  ii) The arbiter code in afr was setting local->transaction.{start and len} = 0 to take full file locks. What this meant was that even for simultaneous but non-overlapping writevs, afr_transaction_eager_lock_init() was not happening because afr_locals_overlap() always stays true. Consequently is_afr_delayed_changelog_post_op_needed() failed due to local->delayed_post_op not being set.

  Fix:
  i) Send appropriate response dict values in arbiter_writev.
  ii) Modify flock params instead of local->transaction.{start and len} to take full file locks in the transaction.

  Also changed _fill_writev_xdata() in posix to fill rsp_xdata for whatever key is requested.

  Change-Id: I1c5fc5e98aba49ade540bb441a022e65b753432a
  BUG: 1324809
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reported-by: Robert Rauch <robert.rauch@gns-systems.de>
  Reported-by: Russel Purinton <russell.purinton@gmail.com>
  Reviewed-on: http://review.gluster.org/13925
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
* dht: lock on subvols to prevent rename and lookup selfheal raceSakshi2016-04-101-43/+158
| | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch addresses two races while renaming directories: 1) While renaming src to dst, if a lookup selfheal is triggered it can recreate src on those subvols where rename was successful. This leads to multiple directories (src and dst) having same gfid. To avoid this we must take locks on all subvols with src. 2) While renaming if the dst exists and a lookup selfheal is triggered it will find anomalies in the dst layout and try to heal the stale layout. To avoid this we must take lock on any one subvol with dst. Backport of http://review.gluster.org/#/c/11880/ > Change-Id: I637f637d3241d9065cd5be59a671c7e7ca3eed53 > BUG: 1252244 > Signed-off-by: Sakshi <sabansal@redhat.com> Change-Id: I637f637d3241d9065cd5be59a671c7e7ca3eed53 BUG: 1324381 Signed-off-by: Sakshi <sabansal@redhat.com> Reviewed-on: http://review.gluster.org/13917 Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* cluster/ec: Do not ref dictionary in lookupPranith Kumar K2016-04-091-7/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Problem: 1) dict_for_each loops over the elements without any locks, so the members of the dictionary can be ref/unrefed while dict_for_each is executed by another thread leading to crashes. Basically with distributed ec + disctributed replicate as cold, hot tiers. tier sends a lookup which fails on ec. (By this time dict already contains ec xattrs) After this lookup_everywhere code path is hit in tier which triggers lookup on each of distribute's hash lookup but fails which leads to the cold, hot dht's lookup_everywhere in two parallel epoll threads where in ec when it tries to set trusted.ec.version/dirty/size as keys in the dictionary, the older values against the same key get erased. While this erasing is going on if the thread that is doing lookup on afr's subvolume accesses these keys either in dict_copy_with_ref or client xlator trying to serialize, that can either lead to crash or hang based on if the spin/mutex lock is called on invalid memory. 2) EC deletes GF_CONTENT_KEY from the dictionary, this may lead to extra reads in case of lookup-everwhere for tiered volumes. Fix: Do dict_copy_with_ref() for the lookup-dictionary. This is avoiding the problem and is not actually fixing the 1st problem. 2nd problem will be fixed. >Change-Id: I5427aa14c48cb7572977d4de9a28c5ffff2b4b95 >BUG: 1315560 >Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> >Reviewed-on: http://review.gluster.org/13680 >Smoke: Gluster Build System <jenkins@build.gluster.com> >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> >CentOS-regression: Gluster Build System <jenkins@build.gluster.com> >Reviewed-by: Xavier Hernandez <xhernandez@datalab.es> >(cherry picked from commit 64cba025b13aad7fb3020a04930cfa22fbfcb859) Change-Id: I2828a0d9e730bc4b0ea6cee037365131767ae43e BUG: 1322520 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/13859 NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Ravishankar N <ravishankar@redhat.com> Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com> Smoke: Gluster Build System <jenkins@build.gluster.com>
* dht: lock on subvols to prevent lookup vs rmdir raceSakshi2016-04-065-88/+435
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | There is a possibility that while an rmdir is completed on some non-hashed subvol and proceeding to others, a lookup selfheal can recreate the same directory on those subvols for which the rmdir had succeeded. Now the deletion of the parent directory will fail with an ENOTEMPTY. To fix this take blocking inodelk on the subvols before starting rmdir. Selfheal must also take blocking inodelk before creating the entry. Backport of http://review.gluster.org/13528 > Change-Id: I168a195c35ac1230ba7124d3b0ca157755b3df96 > BUG: 1245065 > Signed-off-by: Sakshi <sabansal@redhat.com> > Reviewed-on: http://review.gluster.org/13528 > CentOS-regression: Gluster Build System <jenkins@build.gluster.com> > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> > Smoke: Gluster Build System <jenkins@build.gluster.com> > Reviewed-by: Raghavendra G <rgowdapp@redhat.com> > Tested-by: Raghavendra G <rgowdapp@redhat.com> Change-Id: I168a195c35ac1230ba7124d3b0ca157755b3df96 BUG: 1257894 Signed-off-by: Sakshi <sabansal@redhat.com> Reviewed-on: http://review.gluster.org/13915 Smoke: Gluster Build System <jenkins@build.gluster.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> CentOS-regression: Gluster Build System <jenkins@build.gluster.com>