path: root/xlators
...
* cluster/dht: Fix dht_access treating directory like files
  Shyam | 2014-09-29 | 1 file, -4/+5

When the cluster topology changes due to add-brick, all subvolumes of DHT
will not contain the directories till a rebalance is completed. Till the
rebalance is run, if a caller bypasses lookup and calls access due to
saved/cached inode information (like the NFS server does), then dht_access
misreads the error (ESTALE/ENOENT) from the new subvolumes and incorrectly
tries to handle the inode as a file. This corrupts the in-memory directory
state in DHT, and it does not heal even after a rebalance.

This commit fixes the problem in dht_access, thereby preventing DHT from
misrepresenting a directory as a file in the case presented above.

Change-Id: Idcdaa3837db71c8fe0a40ec0084a6c3dbe27e772
BUG: 1140338
Signed-off-by: Shyam <srangana@redhat.com>
Reviewed-on: http://review.gluster.org/8462
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
(cherry picked from commit 6630fff4812f4e8617336b98d8e3ac35976e5990)
Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
Reviewed-on: http://review.gluster.org/8721
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* nfs: 'gluster volume help' should show the correct path for nfs.mount-rmtab
  Niels de Vos | 2014-09-29 | 1 file, -1/+1

This has been fixed in newer releases with a more invasive change:
- build: make GLUSTERD_WORKDIR rely on localstatedir
  http://review.gluster.org/8246

For this release, it is sufficient to only correct the help text.

Change-Id: Id5db126e3b5f8b98c2810d5b5a6f7079200df612
BUG: 1147243
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8874
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* cluster/afr: Handle EAGAIN properly in inodelk
  Pranith Kumar K | 2014-09-29 | 2 files, -14/+155

Problem:
When one of the bricks in a replica pair is taken down and brought back
up, locks on that brick will be allowed. AFR returns inodelk success even
when one of the bricks already has the lock taken.

Fix:
If any brick returns EAGAIN, return failure to the parent xlator.

Note: This change only works for non-blocking inodelks. This patch
addresses dht-synchronization, which uses non-blocking locks for rename.
The blocking lock is issued by only one of the rebalance processes, so for
now there is no possibility of deadlock.

Change-Id: I07673f8873263da334e03f35c6cdb5db9410a616
BUG: 1141733
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8739
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

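A minimal sketch of the reply scan this fix implies: if any brick answered
a non-blocking inodelk with EAGAIN, the whole lock attempt fails. The
reply structure and function names here are illustrative stand-ins for
AFR's internals, not the actual patch:

    #include <errno.h>

    /* Illustrative per-child reply record. */
    struct reply { int valid; int op_ret; int op_errno; };

    static int
    inodelk_result (struct reply *replies, int child_count, int *op_errno)
    {
            int i;

            for (i = 0; i < child_count; i++) {
                    if (replies[i].valid && replies[i].op_ret < 0 &&
                        replies[i].op_errno == EAGAIN) {
                            *op_errno = EAGAIN; /* propagate to parent xlator */
                            return -1;          /* someone else holds the lock */
                    }
            }
            return 0;
    }
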
* cluster/dht: Fix dict_t leaks in rebalance process' execution path
  Krutika Dhananjay | 2014-09-23 | 1 file, -4/+7

Backport of: http://review.gluster.org/8763

Two dict_t objects are leaked for every file migrated in the success
codepath. It is the caller's responsibility to unref the dict that it gets
from calls to syncop_getxattr(), and rebalance performs two
syncop_getxattr()s per file without freeing them.

Also, syncop_getxattr() on GF_XATTR_LINKINFO_KEY doesn't seem to be using
the response dict. Hence, NULL is now passed as opposed to @dict to
syncop_getxattr().

Change-Id: I89d72bf5b8d75571ab33ff44953adf8e542826ef
BUG: 1142052
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/8784
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

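A minimal sketch of the unref discipline this fix enforces, assuming the
3.5-era syncop_getxattr(subvol, loc, &dict, key) signature; the wrapper
function is illustrative, not the rebalance code itself:

    #include <glusterfs/syncop.h>

    static int
    check_migration_xattrs (xlator_t *subvol, loc_t *loc)
    {
            dict_t *xattr = NULL;
            int     ret   = -1;

            /* Response dict is unused for GF_XATTR_LINKINFO_KEY, so pass
             * NULL instead of collecting (and leaking) a dict. */
            ret = syncop_getxattr (subvol, loc, NULL, GF_XATTR_LINKINFO_KEY);
            if (ret < 0)
                    return ret;

            ret = syncop_getxattr (subvol, loc, &xattr, GF_XATTR_PATHINFO_KEY);
            if (ret == 0 && xattr)
                    dict_unref (xattr); /* caller owns this ref: drop it */

            return ret;
    }
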
* features/marker: Fill loc->path before sending the control to healing
  Varun Shastry | 2014-09-22 | 2 files, -24/+42

Backport of: http://review.gluster.org/8296

Problem:
The xattr healing part of the marker requires the path to be present in
the loc. Currently the path is not filled when healing is triggered from
readdirp_cbk.

Solution:
The current patch fills the loc with the path.

Change-Id: Icc16c740bc6453714306eae19526e18c1775c1d8
BUG: 1144315
Signed-off-by: Varun Shastry <vshastry@redhat.com>
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/8778
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

* cluster/dht: Added keys in dht_lookup_everywhere_done
  Venkatesh Somyajulu | 2014-09-17 | 1 file, -4/+72

In the case where both the cached file (C1) and the hashed file are found,
but the hash does not point to the above cached node (C1), do not unlink
if either an fd is open on the hashed subvolume or the linkto xattr is not
found.

Change-Id: I7ef49b88d2c88bf9d25d3aa7893714e6c0766c67
BUG: 1129541
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Change-Id: I86d0a21d4c0501c45d837101ced4f96d6fedc5b9
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/8429
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: susant palai <spalai@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
(cherry picked from commit 718f10e0d68715be2d73e677974629452485c699)
Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
Reviewed-on: http://review.gluster.org/8720
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* cluster/dht: Modified logic of linkto file deletion on non-hashed
  Venkatesh Somyajulu | 2014-09-17 | 3 files, -23/+75

Currently, whenever dht_lookup_everywhere gets called and
dht_lookup_everywhere_cbk finds a linkto file on a non-hashed subvolume,
the file is unlinked. But there are cases when this file is under
migration; under such a condition we should avoid deletion of the file.

When some other rebalance process changes the layout of the parent such
that the dst_file (w.r.t. migration) falls on the non-hashed node, lookup
could have found it as a linkto file, but just before the unlink the file
is under migration or already migrated. In such cases the unlink can be
avoided.

Race:
-----
Say we have two bricks (brick-1 and brick-2) with an initial file "a"
under BaseDir, which is hashed as well as cached on brick-1. Assume
hashing "a" gives 44.

                Brick-1         Brick-2
    Initial     BaseDir/a       BaseDir
    setup:      [1-50]          [51-100]

Now add a new brick, Brick-3.

1. Rebalance-1 on Node-1 (the Brick-1 node) will reset the BaseDir layout.
2. After that it will perform:
   a) Create a linkto file on the new hashed subvolume (brick-2).
   b) Perform the file migration.

1. Rebalance-1 fixes the base layout:

                Brick-1         Brick-2             Brick-3
                ---------       ----------          ------------
                BaseDir/a       BaseDir             BaseDir
                [1-33]          [34-66]             [67-100]

2. Only step a) is performed (create linkto file):

                BaseDir/a       BaseDir/a(linkto)   BaseDir

Now rebalance-2 on node-2 jumps in and performs steps 1 and 2-a. After
(rebalance-2, step-1), it changes the layout of BaseDir:

                BaseDir/a       BaseDir/a(link)     BaseDir
                [67-100]        [1-33]              [34-66]

For (rebalance-2, step-2), it will perform a lookup at Brick-3 because,
w.r.t. the new layout, 44 falls on brick-3. But the lookup will fail, so
dht_lookup_everywhere gets called.

NOTE: On brick-2 a linkto file was created by rebalance-1. Currently that
linkto file gets deleted by the rebalance-2 lookup as it is considered a
stale linkto file. With this patch, if rebalance is already in progress or
rebalance is over, the linkto file will not be unlinked: if rebalance is
in progress an fd will be open, and if rebalance is over then the linkto
xattr won't be set.

Change-Id: I3fee0d28de3c76197325536a9e30099d2413f079
BUG: 1129541
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/8345
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
(cherry picked from commit 966997992bdbd5fffc632bf705678e287ed50bf7)
Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
Reviewed-on: http://review.gluster.org/8719
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* dht: fix rename race
  Nithya Balachandran | 2014-09-17 | 1 file, -2/+6

Additional check to verify that we created the linkto file before deleting
it in the rename cleanup function.

Change-Id: I919cd7cb24f948ba4917eb9cf50d5169bb730a67
BUG: 1129527
Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
Reviewed-on: http://review.gluster.org/8338
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
(cherry picked from commit df770496ba5ed6d2c72bcfc76ca9e816a08c383a)
Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
Reviewed-on: http://review.gluster.org/8718
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* cluster/dht: Fix races to avoid deletion of linkto file
  Venkatesh Somyajulu | 2014-09-17 | 5 files, -40/+399

Explanation of the race between rebalance processes:
https://bugzilla.redhat.com/show_bug.cgi?id=1110694#c4

STATE 1: BRICK-1 (the only brick) holds the cached file.
STATE 2: Add brick-2; now BRICK-1 and BRICK-2.
STATE 3: Lookup of the file on brick-2 by this node's rebalance will fail
         because the hashed file is not created yet, so
         dht_lookup_everywhere is about to get called.
STATE 4: As part of the lookup, a link file will be created at brick-2.
STATE 5: A getxattr is done to check that the cached file belongs to this
         node.
STATE 6: dht_lookup_everywhere_cbk detects the link created by
         rebalance-1 and unlinks it.
STATE 7: getxattr at the link file with the "pathinfo" key will fail, as
         the link file was deleted by rebalance on node-2.

Fix: In STATE 6, we should avoid the deletion of the link file. Every time
dht_lookup_everywhere gets called, lookup will be performed on all the
nodes. So to avoid STATE 6, if a linkto file is found, it is not deleted
until a valid case is found in dht_lookup_everywhere_done:

Case 1: The linkto file points to the cached node and the cached file
        exists: unwind with success.
Case 2: The linkto file does not point to the current cached node and the
        cached file exists:
        a) Unlink the stale link file.
        b) Create a new link file.
Case 3: Only the linkto file exists: delete the linkto file.
Case 4: Only the cached file exists: create the link file (handled even
        without this patch).
Case 5: Neither the cached nor the hashed file is present: return ENOENT
        (handled even without this patch).

Change-Id: Ibf53671410d8d613b8e2e7e5d0ec30fc7dcc0298
BUG: 1129541
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/8231
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
(cherry picked from commit 74d92e322e3c9f4f70ddfbf9b0e2140922009658)
Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
Reviewed-on: http://review.gluster.org/8717
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* storage/posix: don't delete entries in case of creation failures
  Raghavendra G | 2014-09-17 | 4 files, -41/+91

The code is not atomic enough to avoid deleting a dentry created by a
parallel dentry-creation operation.

Change-Id: I9bd6d2aa9e7a1c0688c0a937b02a4b4f56d7aa3d
BUG: 1129527
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/8327
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
(cherry picked from commit 45fbf99cb669e891a84a8228cef27973f5e774bf)
Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
Reviewed-on: http://review.gluster.org/8716
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* DHT/Create: Failing to identify a linkto file in lookup_everywhere_cbk path
  Susant Palai | 2014-09-17 | 1 file, -7/+41

In case a file is not found in its cached subvolume, we proceed with
dht_lookup_everywhere. But as we don't add the linkto xattr to the
dictionary, we fail to identify any linkto file encountered. The
implication is that we end up treating the linkto file as a regular file
and proceed with the fop.

Change-Id: Iab02dc60e84bb1aeab49182f680c0631c33947e2
BUG: 1139170
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/8277
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
(cherry picked from commit 52da727e7564963a8a244fc5cb7028315e458529)
Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
Reviewed-on: http://review.gluster.org/8715
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* dht: fix rename race
  Jeff Darcy | 2014-09-17 | 2 files, -3/+10

If two clients try to rename the same file at the same time, we sometimes
end up with *no file at all* in either the old or new location. That's
kind of bad.

The culprit seems to be some overly aggressive cleanup code. AFAICT, based
on today's study of the code, the intent of the changed section is to
remove any linkfile we might have created before the actual rename.
However, what we're removing might not be our extra link. If we're racing
with another client that's also doing a rename, it might be the only
remaining link to the user's data.

The solution, which is good enough to pass this test but almost certainly
still not complete, is to be more selective about when we do this unlink.
Now, we only do it if we know that, at some point, we did in fact create
the link without error (notably ENOENT on the source or EEXIST on the
destination) ourselves.

Change-Id: I8d8cce150b6f8b372c9fb813c90be58d69f8eb7b
BUG: 1129527
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/8269
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
(cherry picked from commit 950f9d8abe714708ca62b86f304e7417127e1132)
Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
Reviewed-on: http://review.gluster.org/8714

* DHT/readdirp: Directory not shown/healed on mount point if it exists on a
  single brick (non-first-up subvolume)
  Susant Palai | 2014-09-17 | 3 files, -5/+52

Problem: If a snapshot is taken when mkdir has succeeded only on the
hashed subvolume, then after restoring the snapshot the directory is not
shown on the mount point.

Why: dht_readdirp takes only those directory entries into account which
are present on the first_up_subvolume. Hence, if the "hashed subvolume" is
not the same as the first_up_subvolume, the directory won't be listed on
the mount point and also won't be healed.

Solution:
Case 1 (rebalance not running): If the hashed subvolume is NULL or down,
filter in the first_up_subvolume. Otherwise the corresponding hashed
subvolume will take care of the directory entry.
Case 2: If the readdirp_optimize option is turned on, read from the
first_up_subvol.

Change-Id: Idaad28f1c9f688dbfb1a8a3ab8b244510c02365e
BUG: 1139103
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/7599
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
(cherry picked from commit b8f3aab95f01ac7d590a5ba490e890d9cf8c2e50)
Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
Reviewed-on: http://review.gluster.org/8713
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* dht/rebalance: Do not allow rebalance when gfid mismatch found
  Venkatesh Somyajulu | 2014-09-17 | 1 file, -1/+17

Due to a race condition it may so happen that the gfid obtained in
readdirp and the gfid found by lookup are different for a given name; in
that case do not allow the rebalance. Readdirp of an entry will bring the
gfid, which will be stored in the inode through inode_link, and when
lookup is done and the gfid brought by lookup is different from the one
stored in the inode, client3_3_lookup_cbk will return ESTALE and the error
will be captured by the rebalance process.

Cherry picked from commit 72c7afcd:
> Change-Id: Iad839177ef9b80c1dd0e87f3406bcf4cb018e6fa
> BUG: 1104653
> Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
> Reviewed-on: http://review.gluster.org/7973
> Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
> Reviewed-by: Vijay Bellur <vbellur@redhat.com>

Also merged the one-line change from commit de22a20a:
> Change-Id: I979b7333efa93b1e8f4c73ccf048d48e308f9289
> BUG: 1104653
> Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
> Reviewed-on: http://review.gluster.org/8073
> Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
> Reviewed-by: Vijay Bellur <vbellur@redhat.com>

Change-Id: Iad839177ef9b80c1dd0e87f3406bcf4cb018e6fa
BUG: 1138922
Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
Reviewed-on: http://review.gluster.org/8712
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* glusterd: fix compile warning
  Niels de Vos | 2014-09-09 | 1 file, -1/+0

The following warning has been promoted to an error and prevents the smoke
tests in Jenkins from succeeding:

    cc1: warnings being treated as errors
    /d/var_lib_jenkins_jobs/smoke/workspace/xlators/mgmt/glusterd/src/glusterd-utils.c: In function 'glusterd_add_inode_size_to_dict':
    /d/var_lib_jenkins_jobs/smoke/workspace/xlators/mgmt/glusterd/src/glusterd-utils.c:5038: error: unused variable 'inode_size'

The warning was introduced with http://review.gluster.org/8491.

Change-Id: I0c824aaf6df70dea35364af6fa72f34eea8c9829
BUG: 1081016
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8663
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>

* gNFS: Fix memory leak in setacl code path
  Santosh Kumar Pradhan | 2014-09-09 | 1 file, -0/+1

If an ACL is set on a file in a Gluster NFS mount (setfacl command) and it
succeeds, then the NFS call state data is leaked, though all the failure
code paths free up the memory.

Impact: There is an OOM kill, i.e. vdsm invoked the oom-killer during
rebalance and killed process 4305, UID 0 (the glusterfs nfs process).

FIX: Make sure to deallocate the memory for the call state in
acl3_setacl_cbk() using nfs3_call_state_wipe().

Cherry picked from commit 5c869aea79c0f304150eac014c7177e74ce0852e:
> Change-Id: I9caa3f851e49daaba15be3eec626f1f2dd8e45b3
> BUG: 1139195
> Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
> Reviewed-on: http://review.gluster.org/8651
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Niels de Vos <ndevos@redhat.com>

Change-Id: Ia4fd03ce53a729c1a2bca86e507c39822a35efe1
BUG: 1139245
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/8661
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>

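A minimal sketch of the free-on-all-paths pattern the fix restores. The
types and names below are illustrative stand-ins (the real ones are
nfs3_call_state_t and nfs3_call_state_wipe()):

    #include <stdlib.h>

    typedef struct call_state { char *reply_buf; } call_state_t;

    static void
    call_state_wipe (call_state_t *cs)
    {
            free (cs->reply_buf);
            free (cs);
    }

    static int
    setacl_cbk (call_state_t *cs, int op_ret)
    {
            if (op_ret < 0) {
                    call_state_wipe (cs); /* failure paths already freed this */
                    return -1;
            }

            /* ... submit the SETACL reply ... */

            call_state_wipe (cs);         /* the fix: free on success too */
            return 0;
    }
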
* NFS: stripe-xlator should pass EOF at end of READDIR
  Niels de Vos | 2014-08-26 | 1 file, -11/+8

NFS READDIR replies are made of a header, a sequence of entries, and an
EOF flag. When GlusterFS's NFS server is used along with the stripe
xlator, it fails to set the EOF flag, which violates the NFS RFC and
confuses some clients.

The bug is caused because the nfs xlator sets EOF if it gets op_errno set
to ENOENT. That value is produced in the storage xlator and propagated
through server, client, and other xlators until the stripe xlator handles
it. stripe only passed op_errno if op_ret < 0, which is not the case here.
This change set adds a special case for that situation to fix the problem.

Cherry picked from commit 9b5231e5c98b8cfa116838287c7a14042702795f:
> Change-Id: Ie6db94b0515292387cfb04c1e4a9363f34fcd19a
> BUG: 1130969
> Reported-by: Emmanuel Dreyfus <manu@netbsd.org>
> Signed-off-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-on: http://review.gluster.org/8493
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
> Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
> Reviewed-by: Emmanuel Dreyfus <manu@netbsd.org>
> Tested-by: Emmanuel Dreyfus <manu@netbsd.org>

Change-Id: Ie6db94b0515292387cfb04c1e4a9363f34fcd19a
BUG: 1132391
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8509
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>

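A minimal sketch of the special case described above: forward ENOENT even
on success so the nfs xlator can set the EOF flag. The function below is
an illustrative reduction of the callback logic, not the actual patch:

    #include <errno.h>

    static void
    stripe_readdir_errno (int op_ret, int op_errno,
                          int *unwind_ret, int *unwind_errno)
    {
            *unwind_ret = op_ret;

            /* Old behaviour: op_errno forwarded only when op_ret < 0,
             * losing the ENOENT that marks end-of-directory. */
            if (op_ret < 0 || op_errno == ENOENT)
                    *unwind_errno = op_errno;   /* ENOENT -> NFS EOF flag */
            else
                    *unwind_errno = 0;
    }
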
* cluster/afr: Fix a minor typo.
  Vijay Bellur | 2014-08-26 | 1 file, -1/+1

Change-Id: I2e1bb21febb6754ed8772df6342c5c06aac95046
BUG: 1133949
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/8545
Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* xlators/mgmt: don't allow glusterd fork bomb (cache the brick inode size)
  Niels de Vos | 2014-08-15 | 1 file, -30/+62

Was: don't leave zombies if required programs aren't installed.

Also, the existing if (strcmp (foo, bar) == 0) antipattern leaves me
underwhelmed -- table driven is better; I like fully qualified paths to
system tools too.

File systems aren't going to change their inode size. Rather than
fork-and-exec a tool repeatedly, hang on to the answer for subsequent use.
Even if there are hundreds of volumes, the size of a dict to keep this in
memory is small.

Cherry picked from commit f20d0ef8ad7d2f65a9234fc11101830873a9f6ab:
> Change-Id: I704a8b1215446488b6e9e051a3e031af21b37adb
> BUG: 1081013
> Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
> Reviewed-on: http://review.gluster.org/8134
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

Change-Id: I704a8b1215446488b6e9e051a3e031af21b37adb
BUG: 1081016
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8491
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>

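A minimal sketch of the memoization described above: consult a
process-lifetime dict before shelling out to the filesystem tool. The
helper run_inode_size_tool() is hypothetical and stands in for the real
runner-based code; the dict API is libglusterfs's:

    #include <glusterfs/dict.h>

    /* Hypothetical helper: forks tune2fs/xfs_info once for a device. */
    static int run_inode_size_tool (const char *device, char **inode_size);

    /* Process-lifetime cache: device path -> inode size string. */
    static dict_t *cached_fs;

    static int
    get_inode_size_cached (const char *device, char **inode_size)
    {
            if (!cached_fs)
                    cached_fs = dict_new ();

            /* Cache hit: no fork-and-exec at all. */
            if (dict_get_str (cached_fs, (char *) device, inode_size) == 0)
                    return 0;

            if (run_inode_size_tool (device, inode_size) != 0)
                    return -1;

            /* Remember the answer; inode sizes don't change. */
            return dict_set_dynstr_with_alloc (cached_fs, (char *) device,
                                               *inode_size);
    }
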
* glusterd: call runner_end even if runner_start fails
  Niels de Vos | 2014-08-15 | 1 file, -0/+11

Cherry picked from commit aa199093fdf37dcd87a73cea83f9b9164d5800c5:
> Change-Id: I5eca01a131307ba3be2aed4922eea73025ff284c
> BUG: 1081013
> Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
> Reviewed-on: http://review.gluster.org/7360
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
> Reviewed-by: Anand Avati <avati@redhat.com>

Change-Id: I5eca01a131307ba3be2aed4922eea73025ff284c
BUG: 1081016
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8490
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>

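A minimal sketch of the pairing rule this fix enforces, assuming the
libglusterfs run.h API (runinit/runner_add_args/runner_start/runner_end);
the probe function itself is illustrative:

    #include <glusterfs/run.h>

    static int
    probe_tool (const char *device)
    {
            runner_t runner = {0,};
            int      ret    = -1;

            runinit (&runner);
            runner_add_args (&runner, "/usr/sbin/tune2fs", "-l", device,
                             NULL);

            ret = runner_start (&runner);
            /* ... read the tool's output only if ret == 0 ... */

            /* The fix: reap/clean up unconditionally, not only on a
             * successful start -- otherwise pipes and zombies leak. */
            if (runner_end (&runner) != 0 && ret == 0)
                    ret = -1;

            return ret;
    }
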
* dict: add dict_set_dynstr_with_alloc
  Niels de Vos | 2014-08-15 | 1 file, -23/+11

There is an overwhelming no. of instances of the following pattern in the
glusterd module:

    ...
    char *dynstr = gf_strdup (str);
    if (!dynstr)
            goto err;
    ret = dict_set_dynstr (dict, key, dynstr);
    if (ret)
            goto err;
    ...

With this change it would look as below:

    ret = dict_set_dynstr_with_alloc (dict, key, str);
    if (ret)
            goto err;

Cherry picked from commit a9d4d369efc978511e3cb69e5643945710cc9416:
> Change-Id: I6a47b1cbab4834badadc48c56d0b5c8c06c6dd4d
> Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
> Reviewed-on: http://review.gluster.org/7379
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

Backport notes: Included this change to accommodate additional backports.

BUG: 1081016
Change-Id: I6a47b1cbab4834badadc48c56d0b5c8c06c6dd4d
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8489
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>

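Given the two patterns shown in the message, the helper plausibly boils
down to the following (a sketch based on that pattern, not a copy of the
libglusterfs source):

    #include <glusterfs/dict.h>
    #include <glusterfs/mem-pool.h>   /* gf_strdup */

    int
    dict_set_dynstr_with_alloc_sketch (dict_t *this, char *key,
                                       const char *str)
    {
            char *alloc_str = gf_strdup (str);

            if (!alloc_str)
                    return -1;

            /* dict_set_dynstr takes ownership of alloc_str: it is freed
             * when the entry is replaced or the dict is destroyed. */
            return dict_set_dynstr (this, key, alloc_str);
    }
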
* cluster/afr: Fix leaks in self-heal code path
  Pranith Kumar K | 2014-07-18 | 5 files, -8/+15

Change-Id: I5301ec9ebac27afe52e85cad75e6395d7f891355
BUG: 1120151
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8316
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* client_t: Fix memory leaks
  Pranith Kumar K | 2014-07-14 | 3 files, -10/+10

Backport of http://review.gluster.org/8247

- Assign frame->root->client so that gf_client_unref happens in
  server_connection_cleanup_flush_cbk.
- Avoid taking an extra ref in gf_client_get.

TODO: The whole reason why there are two types of refs (bind, ref-count)
is to avoid lock-inside-lock, which is not the case now. I will be sending
one more patch which will accomplish that as well as changing the table
array to a list.

BUG: 1116672
Change-Id: Ica87b9cbf02cae34c10789cfb56d1ccdc393cbf0
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8289
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* protocol/server: 's/ESTALE/ENOENT/' only in lookup path
  Ravishankar N | 2014-07-14 | 2 files, -22/+12

Problem: [1] modified the server resolver code to send ENOENT instead of
ESTALE to older clients for all FOPs. This caused dht_mkdir to fail under
certain conditions (see the bug description).

Fix: Since [1] is needed by AFR only in its lookup path, reverted the
changes introduced by [1] in resolve_entry_simple() and
resolve_inode_simple() and made the change instead in
server_lookup_resume().

[1] http://review.gluster.org/#/c/8080

Change-Id: Idb2de25839fe712550486f2263a60c0531530d8f
BUG: 1118574
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/8294
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* nfs: prevent assertion error with MOUNT over UDP
  Niels de Vos | 2014-07-10 | 2 files, -1/+10

The MOUNT service over UDP runs in a separate thread. This thread does not
have the correct *THIS xlator set. *THIS points to the global (base)
xlator structure, but GF_CALLOC() requires it to be the NFS xlator so that
assertions can get validated correctly. This is solved by passing the NFS
xlator to the pthread function and setting the *THIS pointer explicitly in
the new thread.

It seems that on occasion (needs further investigation) MOUNT over UDP
does not unregister itself. There can also be issues when the kernel NLM
implementation has been registered at portmap/rpcbind, so some unregister
procedures are added in the cleanup of the test cases.

Cherry picked from commit ec74ceedaa41047b88d270c00eeb071b73e19664:
> Change-Id: I3be5a420fc800bbcc14198d0b6faf4cf2c7300b1
> BUG: 1116503
> Signed-off-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-on: http://review.gluster.org/8241
> Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Vijay Bellur <vbellur@redhat.com>

Change-Id: I3be5a420fc800bbcc14198d0b6faf4cf2c7300b1
BUG: 1116997
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8258
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>

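A minimal sketch of the *THIS handoff described above, assuming the
libglusterfs THIS macro from globals.h; the thread function name is
illustrative:

    #include <glusterfs/globals.h>
    #include <glusterfs/xlator.h>
    #include <pthread.h>

    static void *
    mount_udp_thread (void *arg)
    {
            xlator_t *nfsx = arg;   /* the NFS xlator, handed in at create */

            THIS = nfsx;            /* GF_CALLOC assertions now see nfs,
                                       not the global/base xlator */
            /* ... register and serve MOUNT over UDP ... */
            return NULL;
    }

    /* caller: pthread_create (&tid, NULL, mount_udp_thread, nfsx); */
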
* rpcsvc: Validate RPC procedure number before fetch
  Santosh Kumar Pradhan | 2014-07-08 | 8 files, -20/+20

While accessing the procedures of a given RPC program in
rpcsvc_get_program_vector_sizer(), boundary conditions were not checked,
which could cause a buffer overflow and subsequently a SEGV. Make sure
rpcsvc_actor_t arrays have numactors number of actors.

FIX: Validate the RPC procedure number before fetching the actor.

Upstream main review: http://review.gluster.org/7726

BUG: 1096020
Change-Id: Iaf207ee976cb56fa9a554ec82c9eab36d3b289ed
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/8228
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

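A minimal sketch of the bounds check; the struct definitions are
illustrative stand-ins for the real types in rpcsvc.h:

    #include <stdint.h>
    #include <stddef.h>

    /* Minimal stand-ins; the real types live in rpc/rpc-lib/src/rpcsvc.h. */
    typedef struct { int (*actor) (void *); } rpcsvc_actor_t;
    typedef struct { rpcsvc_actor_t *actors; int numactors; } rpcsvc_program_t;

    static rpcsvc_actor_t *
    get_actor_checked (rpcsvc_program_t *program, uint32_t procnum)
    {
            /* The fix: reject out-of-range procedure numbers before
             * indexing, instead of reading past the end of the table. */
            if (!program || procnum >= (uint32_t) program->numactors)
                    return NULL;

            return &program->actors[procnum];
    }
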
* features/gfid-access: Fix entry operations
  Pranith Kumar K | 2014-07-08 | 1 file, -7/+6

Backport of http://review.gluster.org/8204

Problem: When more than one aux mount is performing
rmdir .gfid/<pargfid>/dir simultaneously, a hang is sometimes observed.
In the gfid-access xlator, when the virtual parent/inode are replaced with
the real parent/inode in the loc, the virtual pargfid/gfid are not
replaced with the real pargfid/gfid respectively. AFR uses
parent_loc->gfids to order the entry locks, but parent_loc->gfid contains
the random/virtual gfid generated by the gfid-access xlator. Entrylk in
the client xlator uses loc->inode->gfid for sending the entrylk, which has
the 'real' gfid. Because the ordering happens based on random gfids, one
mount orders the locks as (L1, L2) whereas the other orders them as
(L2, L1), leading to a deadlock and thus a hang.

Fix: Replace the virtual pargfid/gfid with the real pargfid/gfid when
virtual inodes are replaced with real inodes in the loc.

BUG: 1114501
Change-Id: I13016de1da11762e0697792d76e6e946d991c0a4
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8251
Reviewed-by: Kotresh HR <khiremat@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* features/gfid-access: Fix inode leaks and loc path corruption
  Pranith Kumar K | 2014-07-08 | 2 files, -123/+147

Backport of http://review.gluster.org/8009
Backport of http://review.gluster.org/8163

BUG: 1112659
Change-Id: Ic70a3ddfcfef88909c12ca6791a8e20c3fee2eae
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8250
Reviewed-by: Kotresh HR <khiremat@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* quotad: Remove dead code
  Pranith Kumar K | 2014-07-08 | 2 files, -11/+1

Backport of http://review.gluster.org/8180

client_t is created by the server xlator for managing connection-related
resources. Quotad doesn't do that, so there is no need to handle anything
related to it.

BUG: 1113403
Change-Id: I4f457b60c0b3377f8980857a883da1cf3e44d16e
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8227
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* feature/gfid-access: Fix EINVAL when stat on .gfid
  Kotresh H R | 2014-07-07 | 2 files, -14/+45

Problem: Some of the inode operations on the '.gfid' virtual directory
were resulting in the error EINVAL from dht after failing to find the
layout.

Solution: Inode operations on the '.gfid' virtual directory should not
wind further down and should be handled accordingly in the gfid-access
translator itself.

Change-Id: I156cb10ffea0c46b0d747e26f74538d7fb01a1dd
BUG: 1105891
Signed-off-by: Kotresh H R <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/8011
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/8234
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* gNFS: Support wildcard in RPC auth allow/reject
  Niels de Vos | 2014-07-02 | 1 file, -4/+4

RFE: Support wildcards in "nfs.rpc-auth-allow" and "nfs.rpc-auth-reject",
e.g.

    *.redhat.com
    192.168.1[1-5].*
    192.168.1[1-5].*, *.redhat.com, 192.168.21.9

Along with wildcards, support subnetworks or IP ranges, e.g.

    192.168.10.23/24

The option will be validated for the following categories:
1) Anonymous, i.e. "*"
2) Wildcard pattern, i.e. a string containing any of '*', '?', '['
3) IPv4 address
4) IPv6 address
5) FQDN
6) Subnetwork or IPv4 range

Currently this does not support IPv6 subnetworks.

Cherry-picked from 00e247ee44067f2b3e7ca5f7e6dc2f7934c97181:
> Change-Id: Iac8caf5e490c8174d61111dad47fd547d4f67bf4
> BUG: 1086097
> Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
> Reviewed-on: http://review.gluster.org/7485
> Reviewed-by: Poornima G <pgurusid@redhat.com>
> Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Vijay Bellur <vbellur@redhat.com>

Change-Id: I18ef0a914cd403c1f9e66d1b03ecd29465cbce95
BUG: 1115369
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8223
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>

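A minimal sketch of how the wildcard category can be matched; POSIX
fnmatch() already understands '*', '?' and '[a-b]' classes, which covers
the three wildcard characters the option validates for. The function
below is illustrative, not the actual mount-auth code:

    #include <fnmatch.h>
    #include <string.h>

    static int
    auth_entry_matches (const char *pattern, const char *client)
    {
            /* Category 1: anonymous. */
            if (strcmp (pattern, "*") == 0)
                    return 1;

            /* Category 2: wildcard pattern, e.g. "*.redhat.com" or
             * "192.168.1[1-5].*", matched against the client's hostname
             * or address string. */
            if (strpbrk (pattern, "*?["))
                    return fnmatch (pattern, client, 0) == 0;

            /* Categories 3-6 (plain IPv4/IPv6/FQDN/subnet) compare or
             * mask-match directly; omitted here. */
            return strcmp (pattern, client) == 0;
    }
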
* gNFS: Fix multi-homed m/c issue in NFS subdir auth
  Niels de Vos | 2014-07-02 | 2 files, -90/+105

NFS subdir authentication doesn't correctly handle multi-homed hosts
(hosts with multiple NICs having multiple IP addresses) or multi-protocol
(IPv4 and IPv6) network addresses. When a user/admin sets a HOSTNAME in
the gluster CLI for NFS subdir auth, the mnt3_verify_auth() routine does
not iterate over all the resolved network addresses returned by the
getaddrinfo() API; instead, it just tests the one returned first.

1. Iterate over all the network addresses (a linked list) returned by
   getaddrinfo().
2. Move the network-mask calculation to mnt3_export_fill_hostspec()
   instead of doing it in mnt3_verify_auth(), i.e. calculating it for
   each MOUNT request; it does not change per MOUNT request.
3. Integrate the "subnet support code for rpc-auth.addr.<volname>.allow"
   and the "NFS subdir auth code" to remove code duplication.

Cherry-picked from commit d3f0de90d0c5166e63f5764d2f21703fd29ce976:
> Change-Id: I26b0def52c22cda35ca11766afca3df5fd4360bf
> BUG: 1102293
> Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
> Reviewed-on: http://review.gluster.org/8048
> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Niels de Vos <ndevos@redhat.com>

Change-Id: Ie92a8ac602bec2cd77268acb7b23ad8ba3c52f5f
BUG: 1112980
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8198
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>

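A minimal sketch of point 1: walk the entire addrinfo list instead of
testing only the first entry. Standard getaddrinfo()/freeaddrinfo(); the
wrapper is illustrative, not the mnt3_verify_auth() code:

    #include <netdb.h>
    #include <stdbool.h>
    #include <string.h>
    #include <sys/socket.h>

    static bool
    host_resolves_to (const char *hostname, const struct sockaddr *want,
                      socklen_t want_len)
    {
            struct addrinfo *res = NULL, *ai = NULL;
            bool             match = false;

            if (getaddrinfo (hostname, NULL, NULL, &res) != 0)
                    return false;

            /* The bug: only res (the first entry) was checked.  The fix:
             * try every resolved address until one matches. */
            for (ai = res; ai; ai = ai->ai_next) {
                    if (ai->ai_addrlen == want_len &&
                        memcmp (ai->ai_addr, want, want_len) == 0) {
                            match = true;
                            break;
                    }
            }

            freeaddrinfo (res);
            return match;
    }
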
* features/index: Creation of indices directory as soon as brick is up.
  Anuradha | 2014-06-27 | 1 file, -0/+5

A missing indices directory in the bricks leads to unwanted log messages;
therefore, the indices directory needs to be created as soon as the brick
comes up. This patch results in the creation of the indices/xattrop
directory as required. Also includes a testcase to test the same.

Backport of http://review.gluster.org/#/c/6343/ and
http://review.gluster.org/#/c/6426/

Change-Id: I2a6f48b78aa09357ed60e45b62ec5fb08f816d76
BUG: 1112111
Signed-off-by: Anuradha <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/8152
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* cluster/stripe: Fix EINVAL errors on quota enabled volumes
  Krutika Dhananjay | 2014-06-27 | 1 file, -1/+0

Backport of http://review.gluster.org/8145

Write operations on directories with quota enabled used to fail with
EINVAL on stripe volumes. This was due to an assert failure in
stripe_lookup(), meant to ensure loc->path is not NULL. However, in a
nameless lookup (in this particular case triggered by quotad, which has
the stripe xlator in its graph), loc->path can be legitimately NULL. The
fix involves removing this check in stripe_lookup().

Change-Id: Ibbd4f68763fdd8a85f29da78b3937cef1ee4fd1e
BUG: 1100050
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/8186
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* protocol/client: conn-id should be unique when lk-heal is off
  Pranith Kumar K | 2014-06-27 | 2 files, -8/+31

Backport of http://review.gluster.org/6669

Problem: It was observed that in some cases the client disconnects and
re-connects before the server xlator could detect that a disconnect
happened, so it still uses the previous fdtable and ltable. But it can so
happen that in between the disconnect and re-connect an 'unlock' fop may
fail because the fds are marked 'bad' in the client xlator upon
disconnect. Due to this, stale locks remain on the brick, which leads to
hangs, self-heals not happening, etc. For the exact bug RCA please look
at https://bugzilla.redhat.com/show_bug.cgi?id=1049932#c0

Fix: When lk-heal is not enabled, make sure the connection-id is different
for every setvolume. This will make sure that a previous connection's
resources are not re-used in the server xlator.

BUG: 1113894
Change-Id: I5090f832730e4072c4b6b6758e64f757b911bd49
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8187
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* cluster/stripe: don't treat ESTALE as failure in lookup
  Ravishankar N | 2014-06-24 | 1 file, -2/+3

Backport of: http://review.gluster.org/8135

Problem: In a stripe volume, symlinks are created only on the first brick
via the default_symlink() call. During gfid lookup, the server sends
ESTALE from the other bricks, which is treated as an error in
stripe_lookup_cbk().

Fix: Don't treat ESTALE as an error in stripe_lookup_cbk().

BUG: 1111454
Change-Id: I337ef847f007b7c20feb365da329c79c121d20c4
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/8153
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* cluster/afr: Fix resolution issues with afr
  Pranith Kumar K | 2014-06-24 | 3 files, -10/+21

Problem with afr: Say there is a directory hierarchy a/b/c/d on the mount
and the user is cd'ed into the directory. Bring down one of the bricks of
the replica and remove all directories/files to simulate a disk
replacement on that brick. Now this brick is brought back up. Creates on
the cd'ed directory fail with ESTALE. Basically, before sending a create
of 'f' inside 'd', fuse sends a lookup to make sure the file is not
present. On one of the bricks 'd' is present and 'f' is not, so it sends
ENOENT as the response. On the new brick 'd' itself is not present, so it
sends ESTALE. In afr, ESTALE is considered to be a special errno on
witnessing which lookup has to fail, and ESTALE is given more priority
than ENOENT. Due to these reasons lookup fails with ESTALE rather than
ENOENT. Since lookup didn't fail with ENOENT, 'create' can't be issued, so
the command fails with ESTALE.

Solution: Afr needs to consider ESTALE errno normally and ENOENT needs to
be given more priority, so that operations like create can proceed even
when only one of the bricks is up and running. Whenever the client xlator
identifies that the gfid changed, it sets that information in the lookup
xdata. Afr uses this information to fail the lookup with ESTALE so that
the top xlator can send a fresh lookup.

Change-Id: Ie8e0e327542fd644409eb5dadf451679afa1c0e5
BUG: 1112348
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8154
Tested-by: Justin Clift <justin@gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* features/index: Don't delete current xattrop index.
  Ravishankar N | 2014-06-24 | 1 file, -0/+4

Problem: `gluster v heal <volname> statistics heal-count` was not able to
read the number of entries to be healed from the source brick, because the
base xattrop entries in indices/base_indices_holder and indices/xattrop
were getting deleted after a successful heal and the code flow prevented
them from being created again.

Fix: Don't delete the xattrop index unless it is stale (i.e. the brick is
restarted).

Change-Id: Ief4eee0ddf42c4d8b711d00751be92bbbc7bbbb0
BUG: 1101647
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/7897
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* protocol/client,server: Suppress ESTALE logs
  Pranith Kumar K | 2014-06-24 | 2 files, -59/+75

Backport of http://review.gluster.org/7696

Change-Id: I5372b45243ad9a68a7c20290e9f5fd5ca9ab28f2
BUG: 1095256
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8087
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* features/quota: Fix dict leak
  Varun Shastry | 2014-06-23 | 1 file, -0/+3

Change-Id: Id4542d1629175cce5fec5ab8f9a5899eec48e2eb
BUG: 1110777
Signed-off-by: Varun Shastry <vshastry@redhat.com>
Reviewed-on: http://review.gluster.org/8132
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* features/locks: Clean up logging of cleanup in DISCONNECT codepath
  Krutika Dhananjay | 2014-06-23 | 4 files, -69/+126

Backport of http://review.gluster.org/7981

Now the gfid is printed, as opposed to the path, in cleanup messages.
Also, the refkeeper update is eliminated in inodelk and entrylk. Instead,
the patch ensures the inode and pl_inode are kept alive as long as there
is at least one lock (granted/blocked) on an inode. Also, every inode is
unref'd appropriately on a DISCONNECT from the lock-owning client.

Change-Id: I234db688ad0d314f4936a16cc5af70a3bd071970
BUG: 1104915
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/8042
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* protocol/server: send ENOENT instead of ESTALE for older clients
  Ravishankar N | 2014-06-20 | 3 files, -5/+40

Modify protocol/server and storage/posix to send ENOENT to older clients
instead of ESTALE. http://goo.gl/t83hmL

Change-Id: Ie63e91e73e33769ce9dc3d964938cfd6eb4c4be5
BUG: 1109832
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/8080
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* features/gfid-access: Fix memory leaks. [tag: v3.5.1beta2]
  Raghavendra G | 2014-06-10 | 1 file, -1/+27

Change-Id: I90f6cdb1c8c4face1bb72a9cc77818d308389e45
BUG: 1104919
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/7983
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* gNFS: Make NFS DRC off by default
  Niels de Vos | 2014-06-10 | 2 files, -2/+2

DRC in NFS causes memory bloat and there are known memory corruptions. It
would be good to disable DRC by default till the feature is stable.

Cherry picked from 4215d071cec4fc8a62ca4fd6212d83f931838829:
> Change-Id: I93db6ef5298672c56fb117370bb582a5e5550b17
> BUG: 1105524
> Original-patch-by: Santosh Kumar Pradhan <spradhan@redhat.com>
> Signed-off-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-on: http://review.gluster.org/8004
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
> Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
> Reviewed-by: Vijay Bellur <vbellur@redhat.com>

Change-Id: I93db6ef5298672c56fb117370bb582a5e5550b17
BUG: 1105524
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8013
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>

* glusterd: Better op-version values and ranges
  Niels de Vos | 2014-06-10 | 1 file, -3/+3

Till now, the op-version was an incrementing integer that was incremented
by 1 for every Y release (when using the X.Y.Z release numbering). This is
not flexible enough to handle backports of features into Z releases.

Going forward, from the upcoming 3.6.0 and 3.5.1 releases, the op-versions
will be multi-digit integer values composed of the version numbers,
instead of a simple incrementing integer. An X.Y.Z release will have XYZ
as its op-version. Y and Z will always be 2 digits wide and will be padded
with 0 if required. This way of bumping op-versions allows for gaps in
between the subsequent Y releases. These gaps will allow backporting
features from new Y releases into old Z releases.

Change-Id: Ib6a09989f03521146e299ec0588fe36273191e47
Depends-on: http://review.gluster.org/7963
BUG: 1096425
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8010
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

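The encoding described above reduces to simple arithmetic; a sketch (the
macro name is illustrative, not necessarily the one in the patch):

    /* An X.Y.Z release gets op-version X*10000 + Y*100 + Z,
     * i.e. Y and Z zero-padded to two digits. */
    #define GD_OP_VERSION(x, y, z)  ((x) * 10000 + (y) * 100 + (z))

    /* 3.5.1 -> 30501 and 3.6.0 -> 30600; the gap 30502..30599 stays
     * free for features backported into later 3.5.z releases. */
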
* features/gfid-access: calloc gfid and set in xdata
  Ravishankar N | 2014-06-08 | 1 file, -5/+10

Backport of http://review.gluster.org/#/c/7978/

Problem: The gfid passed to ga_fill_tmp_loc() was a stack variable which
the function set in the xdata dictionary. Accessing it at a later point in
time gave unexpected values. This was easy to hit when AFR was involved,
like so: ga_mknod() ---> xxx ---> afr_mknod(). In the afr_mknod
transaction, once the stack-winds for the lock phase are sent, the gfid in
xdata goes out of scope. When we send the actual op, i.e.
afr_mknod_wind(), the gfid in xdata is stale, causing posix to set junk
gfids on the files.

Fix: calloc the gfid and set it in the dict.

Thanks to Pranith for the RCA!

Change-Id: Ief2080836dc2923dec4be44dda4f6211430e535e
BUG: 1104959
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/7985
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

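A minimal sketch of the bug-and-fix pattern: a pointer to a stack gfid in
xdata dangles once the frame unwinds, so the fix heap-allocates the gfid
for the dict to own. The key name and mem-type are assumptions here:

    #include <glusterfs/dict.h>
    #include <glusterfs/mem-pool.h>
    #include <string.h>

    static int
    set_tmp_gfid (dict_t *xdata, const unsigned char gfid[16])
    {
            unsigned char *heap_gfid;
            int            ret;

            /* Heap copy lives as long as the dict, unlike the caller's
             * stack buffer that goes out of scope after the wind. */
            heap_gfid = GF_CALLOC (1, 16, gf_common_mt_char);
            if (!heap_gfid)
                    return -1;

            memcpy (heap_gfid, gfid, 16);

            /* dict_set_bin takes ownership on success. */
            ret = dict_set_bin (xdata, "gfid-req", heap_gfid, 16);
            if (ret)
                    GF_FREE (heap_gfid);
            return ret;
    }
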
* glusterd: On gaining quorum spawn_daemons in new thread
  Kaushal M | 2014-06-08 | 4 files, -28/+44

Backport of http://review.gluster.org/7703 from master.

During startup, if a glusterd has peers, it waits till quorum is obtained
to spawn bricks and other services. If peers are not present, the daemons
are started during glusterd's startup itself.

The spawning of daemons as a quorum action was done without using a
separate thread, unlike the spawn on startup. Since quotad was launched
using the blocking runner_run API, this leads to the thread being blocked.
The calling thread is almost always the epoll thread and this leads to a
deadlock: the runner_run call blocks the epoll thread waiting for quotad
to start, so glusterd cannot serve any requests, but the startup of quotad
is itself blocked as it cannot fetch the volfile from glusterd.

The fix for this is to launch the spawn-daemons task in a separate thread.
This frees up the epoll thread and prevents the above deadlock from
happening.

BUG: 1105188
Change-Id: Idad1e96fbe1411dfd4b1a542fb5fa115673636c0
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/7995
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

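A minimal sketch of the fix's shape: run the quorum action in its own
thread so a blocking runner_run() never stalls the epoll thread. The
function names are illustrative:

    #include <pthread.h>

    /* The quorum action that may block, e.g. in runner_run() while
     * quotad starts up. */
    static void *
    spawn_daemons_task (void *arg)
    {
            /* ... spawn bricks, nfs, self-heal, quotad ... */
            return NULL;
    }

    static int
    on_quorum_gained (void *ctx)
    {
            pthread_t tid;

            /* The epoll thread returns immediately and can keep serving
             * volfile requests while the worker blocks. */
            return pthread_create (&tid, NULL, spawn_daemons_task, ctx);
    }
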
* features/glupy: GPLv2 or LGPLv3+ license
  Kaleb S. KEITHLEY | 2014-05-31 | 3 files, -30/+22

Change-Id: Id57fdb9d45a105701fff4e28818230a30a5215f9
BUG: 1102306
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/7919
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Niels de Vos <ndevos@redhat.com>

* rpc: implement server.manage-gids for group resolving on the bricks
  Niels de Vos | 2014-05-23 | 7 files, -2/+207

The new volume option 'server.manage-gids' can be enabled in environments
where a user belongs to more than the current absolute maximum of 93
groups. This option triggers the following behavior:

1. The AUTH_GLUSTERFS structure sent by GlusterFS clients (fuse, nfs or
   libgfapi) will contain only one (1) auxiliary group, instead of a full
   list. This reduces network usage and prevents problems in encoding the
   AUTH_GLUSTERFS structure, which should fit in 400 bytes.
2. The single group in the RPC calls received by the server is replaced
   by resolving the groups server-side. Permission checks and the like in
   lower xlators are applied against the full list of groups the user
   belongs to, and not the single auxiliary group that the client sent.

Cherry picked from commit 2fd499d148fc8865c77de8b2c73fe0b7e1737882:
> BUG: 1053579
> Signed-off-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-on: http://review.gluster.org/7501
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
> Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
> Reviewed-by: Anand Avati <avati@redhat.com>

Change-Id: I9e540de13e3022f8b63ff893ecba511129a47b91
BUG: 1096425
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/7830
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>

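A minimal sketch of point 2, server-side group resolution from the uid in
the RPC call; getgrouplist() is the glibc routine typically used for this,
and the wrapper below is illustrative rather than the actual patch:

    #include <grp.h>
    #include <pwd.h>
    #include <sys/types.h>

    static int
    resolve_server_side_groups (uid_t uid, gid_t *groups, int *ngroups)
    {
            struct passwd *pw = getpwuid (uid);

            if (!pw)
                    return -1;

            /* Fills groups[] with up to *ngroups entries; returns -1 and
             * sets *ngroups to the required size if the buffer is too
             * small.  Lower xlators then check permissions against this
             * full list, not the single group the client sent. */
            return getgrouplist (pw->pw_name, pw->pw_gid, groups, ngroups);
    }
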
* rpc: warn and truncate grouplist if RPC/AUTH can not hold everything
  Niels de Vos | 2014-05-22 | 1 file, -1/+26

The GlusterFS protocol currently uses AUTH_GLUSTERFS_V2 in the RPC/AUTH
header. This header contains the uid, gid and auxiliary groups of the
user/process that accesses the Gluster volume. The AUTH_GLUSTERFS_V2
structure allows up to 65535 auxiliary groups to be passed on.
Unfortunately, the RPC/AUTH header is limited to 400 bytes by the RPC
specification: http://tools.ietf.org/html/rfc5531#section-8.2

In order to not cause complete failures on the client side when trying to
encode an AUTH_GLUSTERFS_V2 that would result in more than 400 bytes, we
can calculate the expected size of the other elements:

       1 | pid
       1 | uid
       1 | gid
       1 | groups_len
      XX | groups_val (GF_MAX_AUX_GROUPS=65535)
       1 | lk_owner_len
      YY | lk_owner_val (GF_MAX_LOCK_OWNER_LEN=1024)
    -----+-------------------------------------------
       5 | total xdr-units of fixed fields

One XDR-unit is defined as BYTES_PER_XDR_UNIT = 4 bytes.
MAX_AUTH_BYTES = 400 is the maximum, which is 100 xdr-units.
XX + YY can be 95 to fill the 100 xdr-units.

Note that the on-wire protocol has tighter requirements than the internal
structures. It is possible for xlators to use more groups and a bigger
lk_owner than can be sent by a GlusterFS client.

This change prevents overflows when allocating the RPC/AUTH header. Two
new macros are introduced to calculate the number of groups that fit in
the RPC/AUTH header when taking the size of the lk_owner into account. In
case the list of groups exceeds the maximum possible, only the first
groups are passed over the RPC/GlusterFS protocol to the bricks. A warning
is added to the logs, so that most system administrators will get
informed.

Reducing the number of groups is not a new invention. The RPC/AUTH header
(AUTH_SYS or AUTH_UNIX) that NFS uses has a limit of 16 groups. Most, if
not all, NFS clients will reduce any bigger number of groups to 16.
(nfs.server-aux-gids can be used to work around the limit of 16 groups,
but the Gluster NFS server will be limited to a maximum of 93 groups, or
fewer in case the lk_owner structure contains more items.)

Cherry picked from commit 8235de189845986a535d676b1fd2c894b9c02e52:
> BUG: 1053579
> Signed-off-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-on: http://review.gluster.org/7202
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
> Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
> Reviewed-by: Vijay Bellur <vbellur@redhat.com>

Change-Id: I8410e59d0fd246d601b54b961d3ae9cb5a858c10
BUG: 1096425
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/7829
Reviewed-by: Lalatendu Mohanty <lmohanty@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>

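The arithmetic above reduces to a simple macro; a sketch with illustrative
names (not necessarily the two macros the patch introduces):

    #define MAX_AUTH_BYTES_SKETCH     400   /* RFC 5531 limit */
    #define BYTES_PER_XDR_UNIT_SKETCH 4
    #define AUTH_FIXED_UNITS          5     /* pid, uid, gid, 2 lengths */

    /* xdr-units left for groups once lk_owner has claimed its share */
    #define MAX_AUX_GROUPS_FOR(lk_owner_units)                         \
            ((MAX_AUTH_BYTES_SKETCH / BYTES_PER_XDR_UNIT_SKETCH) -     \
             AUTH_FIXED_UNITS - (lk_owner_units))

    /* e.g. an 8-byte lk_owner occupies 2 xdr-units: 100 - 5 - 2 = 93,
     * matching the "absolute maximum of 93 groups" cited in the
     * server.manage-gids commit above. */
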