path: root/xlators/cluster
Commit message | Author | Age | Files | Lines
* cluster/afr: Make read child match check in afr optional  [v3.5.4beta1]
  Krutika Dhananjay | 2015-03-18 | 3 files, -0/+21

    Backport of: http://review.gluster.org/#/c/9917

    This particular check, introduced by commit
    bb2df4e63fa8a5d65f18b4a5efc757e8d475fbff, causes a drop in readdirp
    performance. The behavior is therefore made configurable with this
    patch.

    Change-Id: I9012a6bb955229a0cbb48f06e4e2edc0782dfead
    BUG: 1202675
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/9924
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* afr: exit out of stack winds in for loops if call_count is zero
  Ravishankar N | 2015-03-11 | 5 files, -4/+16

    ...in order to avoid a race where the fop callback frees the frame's
    local variables and the fop path tries to access them at a later
    point in time.

    Change-Id: I91d2696e5e183c61ea1368b3a538f9ed7f3851de
    BUG: 1200764
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/9856
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: pranith karampuri <pranith.k@gmail.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
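  [Illustrative sketch] The race above amounts to reading the frame's
  local data after the final wind, by which time the callback may
  already have freed it. A minimal stand-alone C analogue of the fix
  (local_t, wind_to_child and fop_wind_all are hypothetical stand-ins,
  not the actual AFR code):

      #include <stdio.h>

      typedef struct { int call_count; } local_t;  /* stand-in for afr_local_t */

      static void wind_to_child (int child)
      {
              /* stands in for STACK_WIND(); the callback may run, and free
               * the frame's local, before this call even returns */
              printf ("wound fop to child %d\n", child);
      }

      static void fop_wind_all (local_t *local, int children)
      {
              /* copy the counter first: after the last wind, 'local' may
               * already be freed, so the loop must never read it again */
              int i, call_count = local->call_count;

              for (i = 0; i < children; i++) {
                      wind_to_child (i);
                      if (--call_count == 0)
                              break;  /* the fix: stop before the next iteration */
              }
      }

      int main (void)
      {
              local_t local = { .call_count = 3 };
              fop_wind_all (&local, 3);
              return 0;
      }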
* afr: Don't write to sparse regions of sink.
  Ravishankar N | 2015-02-12 | 1 file, -51/+146

    Corresponding afr-v2 fix: http://review.gluster.org/#/c/9480/

    Problem: When data-self-heal-algorithm is set to 'full', shd just
    reads from the source and writes to the sink. If the source file
    happens to be sparse (VM workloads), we end up actually writing 0s
    to the corresponding regions of the sink, causing it to lose its
    sparseness.

    Fix: If the source file is sparse, and the data read from source and
    sink are both zeros for that range, skip writing that range to the
    sink.

    Change-Id: Iade957e4173c87e45a2881df501ba2ad3eb1a172
    BUG: 1190633
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/9611
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
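  [Illustrative sketch] The skip condition described in the fix, as a
  minimal self-contained C helper (buf_is_zero and must_write_block are
  hypothetical names; the real heal code operates on iovecs inside the
  self-heal daemon):

      #include <stdbool.h>
      #include <stddef.h>

      static bool buf_is_zero (const char *buf, size_t len)
      {
              for (size_t i = 0; i < len; i++)
                      if (buf[i] != 0)
                              return false;
              return true;
      }

      /* decide whether a block read during a 'full' heal must be
       * written to the sink */
      static bool must_write_block (bool source_is_sparse,
                                    const char *src, const char *sink,
                                    size_t len)
      {
              /* if both source and sink read back zeroes for this range,
               * skipping the write preserves the hole in the sink */
              if (source_is_sparse &&
                  buf_is_zero (src, len) && buf_is_zero (sink, len))
                      return false;
              return true;
      }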
* features/marker: Filter internal xattrs in lookup
  Pranith Kumar K | 2015-02-12 | 3 files, -23/+20

    Backport of http://review.gluster.com/9061

    Afr should ignore quota-size-key as part of self-heal but should
    heal the quota-limit key.

    BUG: 1162230
    Change-Id: I639cfabbc44468da29914096afc7e2eca1ff1292
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/9091
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/afr: serialize inode locks
  Pranith Kumar K | 2015-02-11 | 2 files, -77/+221

    Backport of http://review.gluster.com/9372

    Problem: Afr winds inodelk calls without any order, so blocking
    inodelks from two different mounts can lead to a deadlock when
    mount1 gets the lock on brick-1 and blocks on brick-2, whereas
    mount2 gets the lock on brick-2 and blocks on brick-1.

    Fix: Serialize the inodelks, whether they are blocking or
    non-blocking. Non-blocking locks also need to be serialized;
    otherwise there is a chance that two mounts which issued the same
    non-blocking inodelk both end up not acquiring the lock on any
    brick. Ex: Mount1 and Mount2 both request a full-length lock on file
    f1. Mount1's afr may acquire the partial lock on brick-1 but not on
    brick-2, because Mount2 already got the lock on brick-2, and vice
    versa. Since both mounts only got partial locks, afr treats them as
    failures in gaining the locks and unwinds with errno EAGAIN.

    Change-Id: I939a1d101e313a9f0abf212b94cdce1392611a5e
    BUG: 1177928
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/9374
    Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
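  [Illustrative sketch] One way to picture the serialization: take the
  per-brick locks one at a time in a fixed child order instead of
  winding to all bricks in parallel, rolling everything back on the
  first failure, so two clients can never each end up holding half the
  locks. A simulated stand-alone C sketch, not the actual AFR lock
  code:

      #include <stdbool.h>

      #define NBRICKS 2

      static bool locked[NBRICKS];   /* simulated per-brick lock state */

      static bool try_lock (int b)
      {
              if (locked[b])
                      return false;  /* another client holds it: EAGAIN */
              locked[b] = true;
              return true;
      }

      static void unlock (int b) { locked[b] = false; }

      /* take non-blocking locks serially, in a fixed order; on the
       * first failure roll everything back and fail the whole attempt */
      static bool acquire_serialized (void)
      {
              for (int i = 0; i < NBRICKS; i++) {
                      if (try_lock (i))
                              continue;
                      for (int j = i - 1; j >= 0; j--)
                              unlock (j);
                      return false;
              }
              return true;
      }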
* cluster/afr: Preserve errno in case of failures on all subvols
  Pranith Kumar K | 2015-02-11 | 1 file, -0/+3

    Partly backported from http://review.gluster.org/8984

    Problem: When quorum is enabled and the fop fails on all the
    subvolumes, op_errno is set to EROFS, which overrides the actual
    errno returned from the bricks.

    Fix: Don't override the errno when the fop fails on all subvols.

    PS: Afr-v2 code differs from afr-v1, so the pre-op part of the code
    doesn't apply.

    Change-Id: I61e57bbf1a69407230ec172a983de18d1c624fd2
    BUG: 1162150
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/9088
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/dht: Fix incorrect updates to parent times
  Krutika Dhananjay | 2015-02-05 | 1 file, -6/+5

    Backport of: http://review.gluster.org/9457

    In directory write FOPs, as far as DHT's updates to the timestamps
    associated with the parent are concerned, there are three
    possibilities:

    a) time (in sec) gotten from the child of DHT < time (in sec) in the
       inode ctx
    b) time (in sec) gotten from the child of DHT = time (in sec) in the
       inode ctx
    c) time (in sec) gotten from the child of DHT > time (in sec) in the
       inode ctx

    In case (c), for the time in nsecs, it is the value returned by
    DHT's child that must be selected. But what DHT_UPDATE_TIME ends up
    doing is choosing the maximum of (time in nsec gotten from DHT's
    child, time in nsec in the inode ctx).

    Change-Id: I1388e374c8a2029f3b8919380e68620e7591bde6
    BUG: 1186121
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/9496
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
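  [Illustrative sketch] The correct merge keeps the nanosecond field
  tied to whichever second value wins, instead of maxing the two nsec
  values independently. A minimal C version of that logic (the struct
  and function names are hypothetical, not the actual DHT_UPDATE_TIME
  macro):

      #include <stdint.h>

      struct ts { uint32_t sec; uint32_t nsec; };

      static void update_parent_time (struct ts *ctx, const struct ts *child)
      {
              if (child->sec > ctx->sec) {
                      /* case (c): child is strictly newer, so its nsec
                       * must be taken as-is, even if it is smaller */
                      ctx->sec  = child->sec;
                      ctx->nsec = child->nsec;
              } else if (child->sec == ctx->sec && child->nsec > ctx->nsec) {
                      /* case (b): same second; keep the larger nsec */
                      ctx->nsec = child->nsec;
              }
              /* case (a): child is older; keep the inode-ctx value */
      }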
* cluster/afr: When parent and entry read subvols are different, set
  entry->inode to NULL
  Krutika Dhananjay | 2015-02-05 | 4 files, -12/+121

    Backport of: http://review.gluster.org/#/c/9477

    That way a lookup is forced on the entry, and its attributes will
    always be selected from its read subvol.

    Additionally, directory write fops as well as LOOKUP have been made
    to unwind parent attributes from the parent's read child in AFR.

    Change-Id: I9fca49fa91cc3a65f53db855fedb90b08f1ca7f4
    BUG: 1186121
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/9504
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* Cluster/DHT : Fixed crash due to null deref
  Nithya Balachandran | 2014-12-04 | 1 file, -3/+4

    A lookup on a linkto file whose trusted.glusterfs.dht.linkto xattr
    points to a subvol that is not part of the volume can cause the
    brick process to segfault due to a null dereference. Modified to
    check for a non-null value before attempting to access the variable.

    Change-Id: Ie8f9df058f842cfc0c2b52a8f147e557677386fa
    BUG: 1162767
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/9034
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: venkatesh somyajulu <vsomyaju@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    (cherry picked from commit 0da374020c17256141fb3971ae792b62097d72df)
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/9099
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* core: fix Ubuntu code audit (cppcheck) results
  Kaleb S. KEITHLEY | 2014-11-26 | 4 files, -10/+19

    See http://review.gluster.org/#/c/7583/ (BZ 1086460)

    AFAICT these are false positives:
      [geo-replication/src/gsyncd.c:99]: (error) Memory leak: str
      [geo-replication/src/gsyncd.c:395]: (error) Memory leak: argv
      [xlators/nfs/server/src/nlm4.c:1200]: (error) Possible null pointer dereference: fde

    Program exits, resource leak not an issue:
      [extras/geo-rep/gsync-sync-gfid.c:105]: (error) Resource leak: fp

    Test program:
      [extras/test/test-ffop.c:27]: (error) Buffer overrun possible for long command line arguments.

    Not built:
      [xlators/cluster/ha/src/ha.c:2699]: (error) Possible null pointer dereference: priv

    The remainder are fixed with this change-set:
      [heal/src/glfs-heal.c:357]: (error) Possible null pointer dereference: remote_subvol
      [libglusterfs/src/xlator.c:648]: (error) Uninitialized variable: gfid
      [libglusterfs/src/xlator.c:649]: (error) Uninitialized variable: gfid
      [xlators/cluster/afr/src/afr-inode-write.c:469]: (error) Possible null pointer dereference: frame
      [xlators/cluster/afr/src/afr-self-heal-common.c:1704]: (error) Possible null pointer dereference: local
      [xlators/cluster/dht/src/dht-rebalance.c:1643]: (error) Possible null pointer dereference: ctx
      [xlators/cluster/stripe/src/stripe.c:4963]: (error) Possible null pointer dereference: local
      [xlators/features/changelog/src/changelog.c:1464]: (error) Possible null pointer dereference: priv
      [xlators/mgmt/glusterd/src/glusterd-geo-rep.c:1656]: (error) Possible null pointer dereference: command
      [xlators/mgmt/glusterd/src/glusterd-replace-brick.c:914]: (error) Resource leak: file
      [xlators/mgmt/glusterd/src/glusterd-replace-brick.c:998]: (error) Resource leak: file
      [xlators/mgmt/glusterd/src/glusterd-sm.c:248]: (error) Possible null pointer dereference: new_ev_ctx
      [xlators/mgmt/glusterd/src/glusterd-store.c:1332]: (error) Possible null pointer dereference: handle
      [xlators/mgmt/glusterd/src/glusterd-utils.c:4706]: (error) Possible null pointer dereference: this
      [xlators/mgmt/glusterd/src/glusterd-utils.c:5613]: (error) Possible null pointer dereference: this
      [xlators/mgmt/glusterd/src/glusterd-utils.c:6342]: (error) Possible null pointer dereference: path_tokens
      [xlators/mgmt/glusterd/src/glusterd-utils.c:6343]: (error) Possible null pointer dereference: path_tokens
      [xlators/mount/fuse/src/fuse-bridge.c:4591]: (error) Uninitialized variable: finh
      [xlators/mount/fuse/src/fuse-bridge.c:3004]: (error) Possible null pointer dereference: state
      [xlators/nfs/server/src/nfs-common.c:89]: (error) Dangerous usage of 'volname' (strncpy doesn't always null-terminate it).
      [xlators/performance/quick-read/src/quick-read.c:585]: (error) Possible null pointer dereference: iobuf

    Rerunning cppcheck afterwards:

    As before, test program:
      [extras/test/test-ffop.c:27]: (error) Buffer overrun possible for long command line arguments.

    As before, believed to be false positive:
      [geo-replication/src/gsyncd.c:99]: (error) Memory leak: str
      [geo-replication/src/gsyncd.c:395]: (error) Memory leak: argv
      [xlators/nfs/server/src/nlm4.c:1200]: (error) Possible null pointer dereference: fde

    As before, not built:
      [xlators/cluster/ha/src/ha.c:2699]: (error) Possible null pointer dereference: priv

    False positive after fix:
      [heal/src/glfs-heal.c:356]: (error) Possible null pointer dereference: remote_subvol
      [xlators/cluster/stripe/src/stripe.c:4963]: (error) Possible null pointer dereference: local

    Change-Id: Ib3029d3223f5a13e2ac386a527d64d5ffe3ecb90
    BUG: 1092037
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/7605
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/afr : Prevent excessive logging of split-brain messages.
  Anuradha | 2014-11-13 | 5 files, -6/+15

    Running the volume heal info command would result in excessive
    logging of split-brain messages. After this patch, running the heal
    info command no longer logs them; the information is displayed in
    the output of the heal info command instead. If a file is in
    split-brain, the message "Is in split-brain" is written against its
    name.

    Change-Id: Ib8979be04f5ac7c59ce3ad1185886bb54b8be808
    BUG: 1161102
    Signed-off-by: Anuradha <atalur@redhat.com>
    Reviewed-on: http://review.gluster.org/9069
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/afr: Fix xattr heal comparison checks
  Pranith Kumar K | 2014-11-13 | 1 file, -2/+6

    Backport of part of the fixes in http://review.gluster.org/8558

    Problem: While implementing list-xattr based metadata self-heal for
    afr-v2, we found two issues with afr-v1's implementation:
    1) A change in the QUOTA_SIZE_KEY xattr value can trigger spurious
       metadata self-heal.
    2) The xattr comparison function implemented for afr-v1 checks that
       the number of xattrs in both responses is the same, and that the
       xattrs present in brick-1's response are present and equal. But
       what we observed was that the count also includes gluster
       internal/virtual xattrs, whereas the compare function should only
       compare on-disk external xattrs that can be healed. So the
       correct implementation should check that the external xattrs in
       the first brick's response are present in the second brick's
       response, and vice versa.

    Fix: This patch is partly backported from afr-v2's implementation
    (links provided where necessary).
    1) Added the QUOTA_SIZE_KEY xattr to the list of xattrs that need to
       be ignored.
       (http://review.gluster.org/#/c/8558/10/xlators/cluster/afr/src/afr-common.c line: 1155)
    2) For xattrs to be equal, check that all keys in xattr-dict1 are in
       xattr-dict2 and equal, and vice versa.
       (http://review.gluster.org/#/c/8558/10/xlators/cluster/afr/src/afr-common.c line: 1195)

    Change-Id: I63aa74858c6f608b98d1fe425b3fa56f925bb5b3
    BUG: 1162230
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/9090
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
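  [Illustrative sketch] The symmetric comparison described in fix (2),
  reduced to plain C over key/value arrays (is_ignored, lookup and
  xattrs_equal are hypothetical names; the real code walks dict_t
  objects and a larger ignore list):

      #include <stdbool.h>
      #include <stddef.h>
      #include <string.h>

      struct xattr { const char *key; const char *val; };

      static bool is_ignored (const char *key)
      {
              /* internal/virtual xattrs, e.g. the quota size key, must
               * not count toward the comparison */
              return strstr (key, "quota.size") != NULL;
      }

      static const char *lookup (const struct xattr *x, int n, const char *key)
      {
              for (int i = 0; i < n; i++)
                      if (strcmp (x[i].key, key) == 0)
                              return x[i].val;
              return NULL;
      }

      static bool subset_equal (const struct xattr *a, int na,
                                const struct xattr *b, int nb)
      {
              for (int i = 0; i < na; i++) {
                      if (is_ignored (a[i].key))
                              continue;
                      const char *v = lookup (b, nb, a[i].key);
                      if (!v || strcmp (v, a[i].val) != 0)
                              return false;
              }
              return true;
      }

      static bool xattrs_equal (const struct xattr *a, int na,
                                const struct xattr *b, int nb)
      {
              /* both directions, so a key present on only one brick
               * (or differing in value) is always caught */
              return subset_equal (a, na, b, nb) &&
                     subset_equal (b, nb, a, na);
      }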
* afr : Logging improvement
  Anuradha | 2014-11-13 | 7 files, -143/+188

    In case of a split-brain, log the type of split-brain that might
    have occurred. Also added a few details about entry self-heal to the
    self-heal completion status.

    Change-Id: Ie99e2ecdd8aa5b1c57d7d4515d33a17dfa0c67ad
    BUG: 1101138
    Signed-off-by: Anuradha <atalur@redhat.com>
    Reviewed-on: http://review.gluster.org/7870
    Reviewed-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Tested-by: Niels de Vos <ndevos@redhat.com>
* cluster/afr: Fix excessive logging in glfsheal log file
  Krutika Dhananjay | 2014-10-27 | 3 files, -3/+4

    The wrong afr_local_t instance was being used in the missing-entry
    self-heal check in afr_self_heal(), which was leading to entrylk
    failure messages of the following kind in the glfsheal logfile:

    [2014-10-21 12:39:04.109875] I [afr-self-heal-common.c:2146:afr_sh_post_nb_entrylk_missing_entry_sh_cbk] 0-vol-replicate-1: Non blocking entrylks failed

    The fix involves sending the right "local" to
    afr_can_start_missing_entry_gfid_self_heal(). After fixing this,
    there were two more codepaths giving out too many log messages of
    the following kinds:

    [2014-10-21 22:19:29.568533] E [afr-self-heal-data.c:1611:afr_sh_data_open_cbk] 0-dis-rep-replicate-1: open of 8a858b02-0fc7-4713-9f61-8ca28dea82c0 failed on child dis-rep-client-2 (Stale file handle)
    [2014-10-21 22:19:29.577948] E [afr-self-heal-entry.c:2353:afr_sh_post_nonblocking_entry_cbk] 0-dis-rep-replicate-1: Non Blocking entrylks failed for ff9c82c4-5c0c-4ed9-b745-604a28dc352d.

    These are also fixed appropriately as part of this patch.

    Change-Id: Idd8d8e5735ee7a4ac36f369525f96e53276e0859
    BUG: 1153629
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/8965
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* logs: Do selective logging for errnos
  Pranith Kumar K | 2014-10-22 | 1 file, -3/+4

    Backport of http://review.gluster.org/8918
                http://review.gluster.org/8955

    Problem: Just after replace-brick, the mount logs are filled with
    ENOENT/ESTALE warning logs because the files are yet to be
    self-healed now that the brick is new.

    Fix: Do conditional logging for these logs. ENOENT/ESTALE will be
    logged at a lower log level; only when debug logs are enabled will
    these messages be written to the logfile.

    BUG: 1155073
    Change-Id: Icf06f2fc4f2f91e199de24a88bcb0ce9b8955ebd
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/8960
    Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/afr: Fix sizeof typo
  Pranith Kumar K | 2014-10-18 | 1 file, -1/+1

    Change-Id: Ib82a1c4967f0880c91c114e4baae08bdbe77bb60
    BUG: 1153626
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/8935
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* Only cleanup priv->shd.statistics if created
  Tiziano Müller | 2014-10-07 | 1 file, -5/+8

    It is possible that the statistics array was never created, and
    dereferencing it may cause a segfault.

    BUG: 1147156
    Change-Id: If905457ba985add62c3ed543bced1313640af762
    Signed-off-by: Tiziano Müller <tiziano.mueller@stepping-stone.ch>
    Reviewed-on: http://review.gluster.org/8873
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/dht: fix memory corruption in locking api.
  Raghavendra G | 2014-10-01 | 1 file, -2/+2

    <man 3 qsort>
    The contents of the array are sorted in ascending order according
    to a comparison function pointed to by compar, which is called with
    two arguments that "point to the objects being compared".
    </man 3 qsort>

    qsort passes "pointers to members of the array" to the comparison
    function. Since the members of the array happen to be
    (dht_lock_t *), the arguments passed to dht_lock_request_cmp are of
    type (dht_lock_t **). Previously we assumed them to be of type
    (dht_lock_t *), which resulted in memory corruption.

    Change-Id: Iee0758704434beaff3c3a1ad48d549cbdc9e1c96
    BUG: 1140556
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/8659
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    (cherry picked from commit ed4a754f7b6b103b23b2c3e29b8b749cd9db89f3)
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8733
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
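  [Illustrative sketch] The general qsort pitfall being fixed: when the
  array elements are themselves pointers, the comparator receives
  pointers *to* those pointers and needs one extra dereference. A
  stand-alone C example with a stand-in struct (my_lock_t and lock_cmp
  are hypothetical names):

      #include <stdlib.h>
      #include <string.h>

      typedef struct { char volname[32]; } my_lock_t;

      static int lock_cmp (const void *a, const void *b)
      {
              /* the array members are (my_lock_t *), so qsort hands the
               * comparator (my_lock_t **) -- dereference once first */
              const my_lock_t *l1 = *(my_lock_t *const *) a;
              const my_lock_t *l2 = *(my_lock_t *const *) b;

              return strcmp (l1->volname, l2->volname);
      }

      /* usage: given my_lock_t *locks[n],
       *     qsort (locks, n, sizeof (my_lock_t *), lock_cmp);
       * the buggy version cast a/b straight to (my_lock_t *), reading
       * the pointer bytes themselves as struct contents. */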
* cluster/dht: Fixed double UNWIND in lookup everywhere code
  Shyam | 2014-10-01 | 1 file, -4/+4

    In dht_lookup_everywhere_done, line 1194, we call DHT_STACK_UNWIND
    and, in the same if condition, we go ahead and call
    "goto unwind_hashed_and_cached;", which at line 1371 calls another
    UNWIND.

    As is obvious, higher frames could clean up their locals and, on
    receiving the next unwind, cause a coredump of the process.

    Fixed the same by issuing the required return right after the first
    unwind.

    Change-Id: Ic5d57da98255b8616a65b4caaedabeba9144fd49
    BUG: 1140549
    Signed-off-by: Shyam <srangana@redhat.com>
    Reviewed-on: http://review.gluster.org/8666
    Reviewed-by: N Balachandran <nbalacha@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: susant palai <spalai@redhat.com>
    (cherry picked from commit b3314ea6e820fb659255d0e6e9a32ea259b7526d)
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8732
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
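  [Illustrative sketch] The shape of the bug, reduced to a runnable C
  toy: a frame may be unwound exactly once, and the fix is to return
  immediately after the first unwind instead of falling through to a
  label that unwinds again (struct frame, unwind and lookup_done are
  hypothetical stand-ins for the call-frame machinery):

      #include <assert.h>
      #include <stdbool.h>

      struct frame { bool unwound; };

      static void unwind (struct frame *f)
      {
              /* a second unwind of the same frame is a use-after-free
               * in the real code; modelled here as an assert */
              assert (!f->unwound);
              f->unwound = true;
      }

      static int lookup_done (struct frame *f, bool hashed_and_cached)
      {
              if (hashed_and_cached) {
                      unwind (f);
                      return 0;   /* the fix: stop here, the frame is gone */
              }
              unwind (f);
              return 0;
      }

      int main (void)
      {
              struct frame f = { false };
              return lookup_done (&f, true);
      }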
* cluster/dht: Added code to capture races in dht-lookup path
  Venkatesh Somyajulu | 2014-10-01 | 1 file, -6/+142

    Change-Id: I9270d2d40ebd4b113ff961583dfda7754741f15b
    BUG: 1129541
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/8430
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    (cherry picked from commit bb2d5f49b5684e6484af16a580870cfe104aecd2)
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8731
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* Cluster/DHT: Changing rename log severity
  Nithya Balachandran | 2014-10-01 | 1 file, -1/+1

    Changing the log level for a rename message from debug to info to
    improve debuggability.

    Change-Id: I53031fcf97fffd62095692477330ecde0cf47dcd
    BUG: 1140348
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8582
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    (cherry picked from commit c087e5f634a0b2262118d61ab9c1d5c8e18c8819)
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8730
    Reviewed-by: venkatesh somyajulu <vsomyaju@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/dht: Rename should not fail post hardlink creation
  Shyam | 2014-10-01 | 2 files, -41/+102

    In the rename path, we wind the creation of the newname hardlink and
    of the linkto file in the dst hashed subvol at the same time. If the
    linkto creation fails but the link creation succeeds, we enter the
    failure code path and clean up the created newname hardlink.

    In the interim, if another client looks up newname and finds it as a
    hardlink from FUSE, it could send an unlink for oldname instead of a
    rename. Combined with the above cleanup code, this could end up
    losing all copies of the file, and thereby losing data.

    This fix separates these steps into two parts, creating the linkto
    first and then the link file, so that post link-file creation no
    failure would clean up the newname file. If the linkto creation
    fails, the link is not attempted, thereby not polluting the
    namespace with newname.

    Change-Id: I61da8e906060da16a31ea1076eec2f01fd617f44
    BUG: 1140348
    Signed-off-by: Shyam <srangana@redhat.com>
    Reviewed-on: http://review.gluster.org/8570
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    (cherry picked from commit 4ce3db8e508e715a43352b082e861fd0e729951f)
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8728
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/dht: Treat linkto file rename failure as non-critical error
  Shyam | 2014-10-01 | 1 file, -6/+43

    It is a critical failure only if we fail to rename the cached file.
    If the rename of the linkto file failed, it is not a critical
    failure, and we do not want to lose the created hard link for the
    new name, as it could have been read by other clients already.

    NOTE: If another client is attempting the same oldname -> newname
    rename and finds both file names existing as hard links to each
    other, then FUSE would send in an unlink for oldname. If, in this
    time window, we treat the linkto failure as critical and unlink the
    newname we created, we would have effectively lost the file to
    rename operations.

    The repercussion of treating this as a non-critical error is that we
    could leave behind a stale linkto file and/or not create the new
    linkto file. The second case would be rectified by a subsequent
    lookup, the first case by a rebalance, as for all stale linkto
    files.

    Change-Id: Ia53ad8b43c3cf8f48ef5b43fd1fec4274e807556
    BUG: 1140348
    Signed-off-by: Shyam <srangana@redhat.com>
    Reviewed-on: http://review.gluster.org/8563
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    (cherry picked from commit 890ab583a519b3b189a61c5fd563b4326836b988)
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8727
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/dht: synchronize rename and file-migration
  Raghavendra G | 2014-10-01 | 3 files, -34/+291

    Change-Id: I4f243c946f76d440680b651235f925e3d0ebf0fd
    BUG: 1140348
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/8523
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    (cherry picked from commit 21c8946b0bc05d0bc8f84906e16b8c2cbca4c9f9)
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8726
    Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/dht: invoke callback when there are no locks to be unlocked.
  Raghavendra G | 2014-10-01 | 1 file, -0/+4

    Change-Id: I375cb68f1075c2d58cf9d09ed6bd5e2746e1637d
    BUG: 1140348
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/8549
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: N Balachandran <nbalacha@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    (cherry picked from commit a1b02e53a5fdf706290ce143fbbf8a09845105d0)
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8724
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/dht: introduce dht locking api.
  Raghavendra G | 2014-10-01 | 5 files, -1/+659

    Change-Id: I41389ba91951d3e63e617aa32cd0bee848261c72
    BUG: 1140348
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/8521
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    (cherry picked from commit a1fe3d72e373bf0deaed152842d12d94bb9129dc)
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8722
    Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/afr: Launch self-heal only when all the brick status is known
  Pranith Kumar K | 2014-10-01 | 1 file, -2/+20

    Problem: A file goes into split-brain because of wrong erasing of
    xattrs.

    RCA: The issue happens because index self-heal is triggered even
    before all the bricks are up. So what ends up happening while
    erasing the xattrs is that they are erased on the sink brick only
    for the brick that afr thinks is up, leading to split-brain.

    Example: let's say the xattrs before the heal started are:

    brick 2:
    trusted.afr.vol1-client-2=0x000000020000000000000000
    trusted.afr.vol1-client-3=0x000000020000000000000000

    brick 3:
    trusted.afr.vol1-client-2=0x000010040000000000000000
    trusted.afr.vol1-client-3=0x000000000000000000000000

    If only brick-2 came up at the time the self-heal was triggered,
    only 'trusted.afr.vol1-client-2' is erased, leading to the following
    xattrs:

    brick 2:
    trusted.afr.vol1-client-2=0x000000000000000000000000
    trusted.afr.vol1-client-3=0x000000020000000000000000

    brick 3:
    trusted.afr.vol1-client-2=0x000010040000000000000000
    trusted.afr.vol1-client-3=0x000000000000000000000000

    So the file goes into split-brain.

    Change-Id: I79f9a289d2118a715d262398221037b684a53d2a
    BUG: 1142614
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/8757
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
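  [Illustrative sketch] The guard this fix adds boils down to: do not
  launch index self-heal while any child's up/down status is still
  unknown. A minimal C rendition (the enum and function names are
  hypothetical, not AFR's actual notify/child_up machinery):

      #include <stdbool.h>

      enum child_status { CHILD_UNKNOWN = 0, CHILD_UP, CHILD_DOWN };

      static bool can_launch_index_heal (const enum child_status *st,
                                         int children)
      {
              for (int i = 0; i < children; i++)
                      if (st[i] == CHILD_UNKNOWN)
                              return false;  /* wait until every child has
                                              * reported up or down */
              return true;
      }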
* cluster/dht: Fix dht_access treating directory like files
  Shyam | 2014-09-29 | 1 file, -4/+5

    When the cluster topology changes due to add-brick, not all
    subvolumes of DHT will contain the directories until a rebalance is
    completed. Until the rebalance is run, if a caller bypasses lookup
    and calls access due to saved/cached inode information (like the NFS
    server does), dht_access misreads the error (ESTALE/ENOENT) from the
    new subvolumes and incorrectly tries to handle the inode as a file.
    This results in the in-memory directory state in DHT being corrupted
    and not healing even post a rebalance.

    This commit fixes the problem in dht_access, thereby preventing DHT
    from misrepresenting a directory as a file in the case presented
    above.

    Change-Id: Idcdaa3837db71c8fe0a40ec0084a6c3dbe27e772
    BUG: 1140338
    Signed-off-by: Shyam <srangana@redhat.com>
    Reviewed-on: http://review.gluster.org/8462
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    (cherry picked from commit 6630fff4812f4e8617336b98d8e3ac35976e5990)
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8721
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/afr: Handle EAGAIN properly in inodelk
  Pranith Kumar K | 2014-09-29 | 2 files, -14/+155

    Problem: When one of the bricks in a replica pair is taken down and
    brought back up, locks on that brick will be allowed. Afr returns
    inodelk success even when one of the bricks already has the lock
    taken.

    Fix: If any brick returns EAGAIN, return failure to the parent
    xlator.

    Note: This change only works for non-blocking inodelks. This patch
    addresses dht-synchronization, which uses non-blocking locks for
    rename. The blocking lock is issued by only one of the rebalance
    processes, so for now there is no possibility of deadlock.

    Change-Id: I07673f8873263da334e03f35c6cdb5db9410a616
    BUG: 1141733
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/8739
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
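  [Illustrative sketch] The aggregation rule from the fix: while
  collecting non-blocking inodelk replies from all children, any EAGAIN
  means some other client already holds a lock, so the attempt as a
  whole must fail upwards. A simplified C stand-in, not AFR's actual
  reply handling:

      #include <errno.h>

      static int aggregate_inodelk (const int *op_ret, const int *op_errno,
                                    int children, int *out_errno)
      {
              int success = 0;

              *out_errno = 0;
              for (int i = 0; i < children; i++) {
                      if (op_ret[i] == 0) {
                              success++;
                              continue;
                      }
                      if (op_errno[i] == EAGAIN) {
                              *out_errno = EAGAIN;
                              return -1;  /* the fix: propagate the failure */
                      }
                      *out_errno = op_errno[i];  /* remember a brick error */
              }
              return success ? 0 : -1;
      }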
* cluster/dht: Fix dict_t leaks in rebalance process' execution path
  Krutika Dhananjay | 2014-09-23 | 1 file, -4/+7

    Backport of: http://review.gluster.org/8763

    Two dict_t objects are leaked for every file migrated in the success
    codepath. It is the caller's responsibility to unref the dict that
    it gets from calls to syncop_getxattr(), and rebalance performs two
    syncop_getxattr()s per file without freeing them.

    Also, syncop_getxattr() on GF_XATTR_LINKINFO_KEY doesn't seem to be
    using the response dict. Hence, NULL is now passed as opposed to
    @dict to syncop_getxattr().

    Change-Id: I89d72bf5b8d75571ab33ff44953adf8e542826ef
    BUG: 1142052
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/8784
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
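  [Illustrative sketch] The ownership rule behind the fix: a dict
  handed back to the caller carries a reference the caller owns, so
  every successful get must be paired with an unref on every exit path.
  Generic refcounting stand-ins below; this is not the exact
  syncop_getxattr() signature:

      #include <stdlib.h>

      struct dict { int refcount; };

      /* stand-in for a call that returns a dict the caller must unref */
      static struct dict *getxattr_dict (void)
      {
              struct dict *d = calloc (1, sizeof (*d));
              if (d)
                      d->refcount = 1;   /* caller now owns this ref */
              return d;
      }

      static void dict_unref_ (struct dict *d)
      {
              if (d && --d->refcount == 0)
                      free (d);
      }

      static void migrate_one_file (void)
      {
              struct dict *d = getxattr_dict ();
              if (!d)
                      return;
              /* ... consume the xattrs ... */
              dict_unref_ (d);  /* the leak fix: drop the ref on every path */
      }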
* cluster/dht: Added keys in dht_lookup_everywhere_done
  Venkatesh Somyajulu | 2014-09-17 | 1 file, -4/+72

    In the case where both the cached (C1) and hashed file are found,
    but the hash does not point to the above cached node (C1), don't
    unlink if either an fd is open on the hashed file or the linkto
    xattr is not found.

    Change-Id: I7ef49b88d2c88bf9d25d3aa7893714e6c0766c67
    BUG: 1129541
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Change-Id: I86d0a21d4c0501c45d837101ced4f96d6fedc5b9
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/8429
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: susant palai <spalai@redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    (cherry picked from commit 718f10e0d68715be2d73e677974629452485c699)
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8720
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* cluster/dht: Modified logic of linkto file deletion on non-hashed
  Venkatesh Somyajulu | 2014-09-17 | 2 files, -21/+73

    Currently, whenever dht_lookup_everywhere gets called and
    dht_lookup_everywhere_cbk finds a linkto file on a non-hashed
    subvolume, the file is unlinked. But there are cases where this file
    is under migration, and under such conditions we should avoid
    deleting it. When some other rebalance process changes the layout of
    the parent such that the dst file (w.r.t. migration) falls on the
    non-hashed node, lookup may have found it as a linkto file, yet just
    before the unlink the file is under migration or already migrated.
    In such cases the unlink can be avoided.

    Race:
    -----
    Two bricks (brick-1 and brick-2) with the initial file "a" under
    BaseDir, which is hashed as well as cached on brick-1. Assume "a"
    hashes to 44.

                        Brick-1      Brick-2
    Initial setup:      BaseDir/a    BaseDir
                        [1-50]       [51-100]

    Now add new brick Brick-3. Rebalance-1 on Node-1 (the Brick-1 node)
    will (1) reset the BaseDir layout, and after that (2) perform
    a) creation of the linkto file on the new hashed subvol (brick-2)
    and b) the file migration.

    1. Rebalance-1 fixes the base layout:

        Brick-1      Brick-2      Brick-3
        ---------    ----------   ------------
        BaseDir/a    BaseDir      BaseDir
        [1-33]       [34-66]      [67-100]

    2. Only a) is performed, i.e. the linkto file is created:

        BaseDir/a    BaseDir/a(linkto)    BaseDir

    Now rebalance-2 on node-2 jumps in and performs steps 1 and 2-a.
    After (rebal-2, step-1) it changes the layout of BaseDir:

        BaseDir/a    BaseDir/a(link)    BaseDir
        [67-100]     [1-33]             [34-66]

    For (rebal-2, step-2) it performs a lookup at Brick-3, since 44
    falls to brick-3 in the new layout. But the lookup fails, so
    dht_lookup_everywhere gets called.

    NOTE: A linkto file was created on brick-2 by rebalance-1. Currently
    that linkto file gets deleted by rebalance-2's lookup, as it is
    considered a stale linkto file. With this patch, if rebalance is
    already in progress or is over, the linkto file will not be
    unlinked: if rebalance is in progress an fd will be open on it, and
    if rebalance is over the linkto xattr won't be set.

    Change-Id: I3fee0d28de3c76197325536a9e30099d2413f079
    BUG: 1129541
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/8345
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    (cherry picked from commit 966997992bdbd5fffc632bf705678e287ed50bf7)
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8719
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* dht: fix rename race
  Nithya Balachandran | 2014-09-17 | 1 file, -2/+6

    Additional check to verify that we created the linkto file before
    deleting it in the rename cleanup function.

    Change-Id: I919cd7cb24f948ba4917eb9cf50d5169bb730a67
    BUG: 1129527
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8338
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    (cherry picked from commit df770496ba5ed6d2c72bcfc76ca9e816a08c383a)
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8718
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* cluster/dht: Fix races to avoid deletion of linkto file
  Venkatesh Somyajulu | 2014-09-17 | 3 files, -29/+332

    Explanation of the race between rebalance processes:
    https://bugzilla.redhat.com/show_bug.cgi?id=1110694#c4

    STATE 1: BRICK-1 is the only brick; the cached file is in the
             system.
    STATE 2: Add brick-2 (BRICK-1, BRICK-2).
    STATE 3: Lookup of the file on brick-2 by this node's rebalance will
             fail because the hashed file is not created yet, so
             dht_lookup_everywhere is about to get called.
    STATE 4: As part of lookup, the link file at brick-2 will be
             created.
    STATE 5: getxattr is done to check that the cached file belongs to
             this node.
    STATE 6: dht_lookup_everywhere_cbk detects the link created by
             rebalance-1 and will unlink it.
    STATE 7: getxattr on the link file with the "pathinfo" key will
             fail, as the link file was deleted by the rebalance on
             node-2.

    Fix: In STATE 6, we should avoid the deletion of the link file.
    Every time dht_lookup_everywhere gets called, lookup is performed on
    all the nodes. So to avoid STATE 6, if a linkto file is found, it is
    not deleted until a valid case is found in
    dht_lookup_everywhere_done:

    Case 1: The linkto file points to the cached node, and the cached
            file exists: unwind with success.
    Case 2: The linkto file does not point to the current cached node,
            and the cached file exists:
            a) unlink the stale link file
            b) create a new link file
    Case 3: Only the linkto file exists: delete the linkto file.
    Case 4: Only the cached file exists: create the link file (handled
            even without this patch).
    Case 5: Neither the cached nor the hashed file is present: return
            ENOENT (handled even without this patch).

    Change-Id: Ibf53671410d8d613b8e2e7e5d0ec30fc7dcc0298
    BUG: 1129541
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/8231
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Vijay Bellur <vbellur@redhat.com>
    (cherry picked from commit 74d92e322e3c9f4f70ddfbf9b0e2140922009658)
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8717
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* DHT/Create : Failing to identify a linkto file in lookup_everywhere_cbk path
  Susant Palai | 2014-09-17 | 1 file, -7/+41

    In case a file is not found in its cached subvol, we proceed with
    dht_lookup_everywhere. But since we don't add the linkto xattr to
    the dictionary, we fail to identify any linkto file encountered.
    The implication being we end up treating the linkto file as a
    regular file and proceed with the fop.

    Change-Id: Iab02dc60e84bb1aeab49182f680c0631c33947e2
    BUG: 1139170
    Signed-off-by: Susant Palai <spalai@redhat.com>
    Reviewed-on: http://review.gluster.org/8277
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    (cherry picked from commit 52da727e7564963a8a244fc5cb7028315e458529)
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8715
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* dht: fix rename race
  Jeff Darcy | 2014-09-17 | 2 files, -3/+10

    If two clients try to rename the same file at the same time, we
    sometimes end up with *no file at all* in either the old or new
    location. That's kind of bad. The culprit seems to be some overly
    aggressive cleanup code.

    AFAICT, based on today's study of the code, the intent of the
    changed section is to remove any linkfile we might have created
    before the actual rename. However, what we're removing might not be
    our extra link. If we're racing with another client that's also
    doing a rename, it might be the only remaining link to the user's
    data.

    The solution, which is good enough to pass this test but almost
    certainly still not complete, is to be more selective about when we
    do this unlink. Now, we only do it if we know that, at some point,
    we did in fact create the link without error (notably ENOENT on the
    source or EEXIST on the destination) ourselves.

    Change-Id: I8d8cce150b6f8b372c9fb813c90be58d69f8eb7b
    BUG: 1129527
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-on: http://review.gluster.org/8269
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    (cherry picked from commit 950f9d8abe714708ca62b86f304e7417127e1132)
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8714
* DHT/readdirp: Directory not shown/healed on mount point if it exists
  on only a single brick (non-first-up subvolume)
  Susant Palai | 2014-09-17 | 3 files, -5/+52

    Problem: If a snapshot is taken when mkdir has succeeded only on the
    hashed subvolume, then after restoring the snapshot the directory is
    not shown on the mount point.

    Why: dht_readdirp takes only those directory entries into account
    which are present on the first_up_subvolume. Hence, if the hashed
    subvolume is not the same as the first_up_subvolume, the entry won't
    be listed on the mount point and also won't be healed.

    Solution:
    Case 1: (rebalance not running) If the hashed subvolume is NULL or
            down, then filter in the first_up_subvolume; otherwise the
            corresponding hashed subvolume will take care of the
            directory entry.
    Case 2: If the readdirp_optimize option is turned on, then read from
            the first_up_subvol.

    Change-Id: Idaad28f1c9f688dbfb1a8a3ab8b244510c02365e
    BUG: 1139103
    Signed-off-by: Susant Palai <spalai@redhat.com>
    Reviewed-on: http://review.gluster.org/7599
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    (cherry picked from commit b8f3aab95f01ac7d590a5ba490e890d9cf8c2e50)
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8713
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* dht/rebalance: Do not allow rebalance when gfid mismatch found
  Venkatesh Somyajulu | 2014-09-17 | 1 file, -1/+17

    Due to a race condition, it may so happen that the gfid obtained in
    readdirp and the gfid found by lookup differ for a given name; in
    that case, do not allow the rebalance. Readdirp of an entry brings
    the gfid, which is stored in the inode through inode_link; when
    lookup is done and the gfid it brings differs from the one stored in
    the inode, client3_3_lookup_cbk returns ESTALE and the error is
    captured by the rebalance process.

    Cherry picked from commit 72c7afcd:
    > Change-Id: Iad839177ef9b80c1dd0e87f3406bcf4cb018e6fa
    > BUG: 1104653
    > Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    > Reviewed-on: http://review.gluster.org/7973
    > Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
    > Tested-by: Gluster Build System <jenkins@build.gluster.com>
    > Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
    > Reviewed-by: Vijay Bellur <vbellur@redhat.com>

    Also merged the one-line change from commit de22a20a:
    > Change-Id: I979b7333efa93b1e8f4c73ccf048d48e308f9289
    > BUG: 1104653
    > Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    > Reviewed-on: http://review.gluster.org/8073
    > Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
    > Tested-by: Gluster Build System <jenkins@build.gluster.com>
    > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    > Reviewed-by: Vijay Bellur <vbellur@redhat.com>

    Change-Id: Iad839177ef9b80c1dd0e87f3406bcf4cb018e6fa
    BUG: 1138922
    Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/8712
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* NFS: stripe-xlator should pass EOF at end of READDIR
  Niels de Vos | 2014-08-26 | 1 file, -11/+8

    NFS READDIR replies are made of a header, a sequence of entries, and
    an EOF flag. When GlusterFS's NFS server is used along with the
    stripe xlator, it fails to set the EOF flag, which violates the NFS
    RFC and confuses some clients.

    The bug is caused because the nfs xlator sets EOF if it gets
    op_errno set to ENOENT. That value is produced in the storage xlator
    and propagated through server, client, and other xlators until the
    stripe xlator handles it. stripe only passed op_errno on if
    op_ret < 0, which is not the case here. This change set adds a
    special case for that situation to fix the problem.

    Cherry picked from commit 9b5231e5c98b8cfa116838287c7a14042702795f:
    > Change-Id: Ie6db94b0515292387cfb04c1e4a9363f34fcd19a
    > BUG: 1130969
    > Reported-by: Emmanuel Dreyfus <manu@netbsd.org>
    > Signed-off-by: Niels de Vos <ndevos@redhat.com>
    > Reviewed-on: http://review.gluster.org/8493
    > Tested-by: Gluster Build System <jenkins@build.gluster.com>
    > Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
    > Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    > Reviewed-by: Emmanuel Dreyfus <manu@netbsd.org>
    > Tested-by: Emmanuel Dreyfus <manu@netbsd.org>

    Change-Id: Ie6db94b0515292387cfb04c1e4a9363f34fcd19a
    BUG: 1132391
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/8509
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
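  [Illustrative sketch] The essence of the callback change: op_errno
  must be forwarded even on a successful reply (op_ret >= 0), because
  ENOENT is how the lower layers signal end-of-directory up to the NFS
  xlator. A simplified C stand-in, not the actual stripe callback:

      #include <errno.h>

      static void readdir_cbk (int op_ret, int op_errno,
                               int *out_ret, int *out_errno)
      {
              *out_ret = op_ret;

              /* before the fix, op_errno was only propagated when
               * op_ret < 0, so the EOF hint (ENOENT) was dropped on a
               * successful reply */
              if (op_ret < 0 || op_errno == ENOENT)
                      *out_errno = op_errno;
              else
                      *out_errno = 0;
      }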
* cluster/afr: Fix a minor typo.
  Vijay Bellur | 2014-08-26 | 1 file, -1/+1

    Change-Id: I2e1bb21febb6754ed8772df6342c5c06aac95046
    BUG: 1133949
    Signed-off-by: Vijay Bellur <vbellur@redhat.com>
    Reviewed-on: http://review.gluster.org/8545
    Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/afr: Fix leaks in self-heal code path
  Pranith Kumar K | 2014-07-18 | 5 files, -8/+15

    Change-Id: I5301ec9ebac27afe52e85cad75e6395d7f891355
    BUG: 1120151
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/8316
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/stripe: Fix EINVAL errors on quota enabled volumes
  Krutika Dhananjay | 2014-06-27 | 1 file, -1/+0

    Backport of http://review.gluster.org/8145

    Write operations on directories with quota enabled used to fail with
    EINVAL on stripe volumes. This was due to an assert failure in
    stripe_lookup(), meant to ensure loc->path is not NULL. However, in
    nameless lookup (in this particular case triggered by quotad, which
    has the stripe xlator in its graph), loc->path can be legitimately
    NULL. The fix involves removing this check in stripe_lookup().

    Change-Id: Ibbd4f68763fdd8a85f29da78b3937cef1ee4fd1e
    BUG: 1100050
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/8186
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/stripe: don't treat ESTALE as failure in lookup
  Ravishankar N | 2014-06-24 | 1 file, -2/+3

    Backport of: http://review.gluster.org/8135

    Problem: In a stripe volume, symlinks are created only on the first
    brick via the default_symlink() call. During gfid lookup, the server
    sends ESTALE from the other bricks, which is treated as an error in
    stripe_lookup_cbk().

    Fix: Don't treat ESTALE as an error in stripe_lookup_cbk().

    BUG: 1111454
    Change-Id: I337ef847f007b7c20feb365da329c79c121d20c4
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/8153
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/afr: Fix resolution issues with afr
  Pranith Kumar K | 2014-06-24 | 2 files, -10/+19

    Problem with afr: Let's say there is a directory hierarchy a/b/c/d
    on the mount and the user is cd'ed into the directory. Bring down
    one of the bricks of the replica and remove all directories/files on
    it to simulate disk replacement. Now this brick is brought back up.
    Creates on the cd'ed directory fail with ESTALE.

    Basically, before sending a create of 'f' inside 'd', fuse sends a
    lookup to make sure the file is not present. On one of the bricks
    'd' is present and 'f' is not, so it sends ENOENT as the response.
    On the new brick 'd' itself is not present, so it sends ESTALE. In
    afr, ESTALE is considered a special errno on witnessing which the
    lookup has to fail, and ESTALE is given more priority than ENOENT.
    For these reasons the lookup fails with ESTALE rather than ENOENT.
    Since the lookup didn't fail with ENOENT, 'create' can't be issued,
    so the command fails with ESTALE.

    Solution: Afr needs to consider ESTALE a normal errno and give
    ENOENT more priority, so that operations like create can proceed
    even when only one of the bricks is up and running. Whenever the
    client xlator identifies that the gfid changed, it sets that
    information in the lookup xdata. Afr uses this information to fail
    the lookup with ESTALE so that the top xlator can send a fresh
    lookup.

    Change-Id: Ie8e0e327542fd644409eb5dadf451679afa1c0e5
    BUG: 1112348
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/8154
    Tested-by: Justin Clift <justin@gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
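  [Illustrative sketch] The errno priority change, reduced to a small C
  helper over the per-brick lookup errors (pick_lookup_errno is a
  hypothetical name, not AFR's actual reply-selection code):

      #include <errno.h>

      static int pick_lookup_errno (const int *op_errno, int replies)
      {
              int chosen = 0;

              for (int i = 0; i < replies; i++) {
                      if (op_errno[i] == ENOENT)
                              return ENOENT;   /* now the highest priority:
                                                * lets create proceed */
                      if (op_errno[i] == ESTALE)
                              chosen = ESTALE; /* kept only if no ENOENT */
                      else if (chosen == 0)
                              chosen = op_errno[i];
              }
              return chosen;
      }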
* protocol/server: send ENOENT instead of ESTALE for older clients
  Ravishankar N | 2014-06-20 | 1 file, -0/+8

    Modify protocol/server and storage/posix to send ENOENT to older
    clients instead of ESTALE. http://goo.gl/t83hmL

    Change-Id: Ie63e91e73e33769ce9dc3d964938cfd6eb4c4be5
    BUG: 1109832
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/8080
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cluster/afr: remove unused variable
  Ravishankar N | 2014-05-15 | 1 file, -2/+0

    Fix a compiler warning about an unused variable. Missed catching
    this in http://review.gluster.org/7723.

    Change-Id: Iac97051edb97bdd7dbc5f0fd0f2528d4d404d16d
    BUG: 1096040
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/7765
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/afr: send opendirs to all children for entry self-heal
  Ravishankar N | 2014-05-14 | 2 files, -25/+10

    Problem: In entry self-heal, opendir was sent only to one source,
    because of which afr_sh_erase_pending() failed to clear the
    changelogs of the other sources. So heals were happening multiple
    times.

    Fix: Send opendir to all sources.

    Change-Id: Ief4f131848b24a0da782f29b9f1d40e136d3fcff
    BUG: 1096040
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/7723
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/afr: Remove eager-lock stub on finodelk failure
  Pranith Kumar K | 2014-05-14 | 3 files, -6/+23

    Problem: For write fops, afr's transaction eager-lock init adds
    transactions that can share the eager-lock to the fdctx list. But if
    the eager-lock finodelk fop fails, the stub remains in the list.
    This could later corrupt the list and lead to an infinite loop over
    it, resulting in a mount hang.

    Fix: Remove the stub when finodelk fails.

    Change-Id: Ic9d1368907c32edb4ea2e6db623e869e4f50180d
    BUG: 1063190
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/7748
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Tested-by: Niels de Vos <ndevos@redhat.com>
* cluster/afr: Fix bugs in quorum implementation
  Pranith Kumar K | 2014-05-10 | 8 files, -103/+166

    - Have a common place to perform the quorum fop-wind check.
    - Check whether the fop succeeded in a way that matches quorum, to
      avoid marking the changelog in split-brain.

    Change-Id: I663072ece0e1de6e1ee9fccb03e1b6c968793bc5
    BUG: 1066996
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/7513
    Reviewed-by: Ravishankar N <ravishankar@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* cluster/afr: Unable to self heal symbolic links
  Venkatesh Somyajulu | 2014-05-08 | 2 files, -2/+4

    Problem: Under entry self-heal, readlink is done at the source and
    the sink. When readlink is done at the sink, because the link is not
    present there, afr expects ENOENT. The AFR translator takes its
    decision to create a new link based on ENOENT, but the server
    translator was modified to return ESTALE, because of which the afr
    xlator is not able to heal.

    Fix: The check for inode absence at the server now includes ESTALE
    as well.

    Change-Id: I9218da214ed44f7219570ad9dae298d6b5cbded9
    BUG: 1046624
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/6600
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>