path: root/xlators/cluster
Commit message | Author | Date | Files | Lines
* cluster/dht: Skip '..' for the volume root dir | N Balachandran | 2018-02-12 | 1 file | -0/+5
  dht_populate_inode_for_dentry tries to update the layout for the '..' entry when listing the root of the volume. This entry does not correspond to an entry in the volume and therefore does not have a gfid or a layout on disk, causing layout processing to fail.

  > Change-Id: I2b7470e1c5e20d87b5545160697f24d041045140
  > BUG: 1537457
  > Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Change-Id: I2b7470e1c5e20d87b5545160697f24d041045140
  BUG: 1539516
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* cluster/dht: Cleanup on fallocate failure | N Balachandran | 2018-02-09 | 1 file | -1/+17
  It looks like fallocate leaves a non-empty file behind in case of some failures. We now truncate the file to 0 bytes on failure in __dht_rebalance_create_dst_file.

  > Change-Id: Ia4ad7b94bb3624a301fcc87d9e36c4dc751edb59
  > BUG: 1541916
  > Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Change-Id: Ia4ad7b94bb3624a301fcc87d9e36c4dc751edb59
  BUG: 1542601
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* cluster/dht: Unlink linkto files as root | N Balachandran | 2018-02-08 | 1 file | -3/+7
  Non-privileged users cannot delete linkto files. However, the failure to unlink a stale linkto causes DHT to fail the lookup with EIO and hence prevents access to the file.

  > Change-Id: Id295362d41e52263790694602f36f1219f0646a2
  > BUG: 1542318
  > Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Change-Id: Id295362d41e52263790694602f36f1219f0646a2
  BUG: 1543016
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* cluster/dht: Fixed leak in dht_populate_inode_for_dentry | N Balachandran | 2018-02-07 | 2 files | -6/+10
  Fixed an issue in dht_populate_inode_for_dentry where a layout is set in the inode without checking if it is already set. This overwrites the value each time without freeing the already existing layout. Also includes the changes in https://review.gluster.org/19471.

  > Change-Id: I651bf539a0b82b4ddc4c355890c16a8e91f5f1fd
  > BUG: 1541264
  > Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Change-Id: I651bf539a0b82b4ddc4c355890c16a8e91f5f1fd
  BUG: 1541267
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* cluster/afr: remove unnecessary child_up initialization | Xavier Hernandez | 2018-02-06 | 1 file | -7/+0
  The child_up array was initialized with all elements being -1 to allow afr_notify() to differentiate down bricks from bricks that haven't reported yet. With the current implementation this is not needed anymore, and it was causing unexpected results when other parts of the code considered that if child_up[i] != 0, it meant that it was up.

  Backport of:
  > BUG: 1541038
  Change-Id: I2a9d712ee64c512f24bd5cd3a48dcb37e3139472
  BUG: 1541930
  Signed-off-by: Xavier Hernandez <jahernan@redhat.com>
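The pitfall described in the commit above can be shown with a small standalone C sketch (hypothetical array and names, not the actual AFR code): when -1 means "not reported yet", any check that treats every non-zero entry as "up" miscounts unknown bricks as up.

    #include <stdio.h>

    #define CHILD_COUNT 3

    int
    main(void)
    {
            /* -1 = not reported yet, 0 = down, 1 = up (old convention) */
            int child_up[CHILD_COUNT] = { 1, -1, 0 };
            int up_loose = 0, up_strict = 0;

            for (int i = 0; i < CHILD_COUNT; i++) {
                    if (child_up[i] != 0)      /* buggy: counts -1 as up */
                            up_loose++;
                    if (child_up[i] == 1)      /* correct: only confirmed up */
                            up_strict++;
            }

            printf("loose count = %d, strict count = %d\n", up_loose, up_strict);
            /* prints: loose count = 2, strict count = 1 */
            return 0;
    }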
* cluster/dht: Add migration checks to dht_(f)xattrop | N Balachandran | 2018-02-06 | 5 files | -45/+325
  The dht_(f)xattrop implementation did not implement migration phase1/phase2 checks, which could cause issues with rebalance on sharded volumes. This does not solve the issue where fops may reach the target out of order.

  > Change-Id: I2416fc35115e60659e35b4b717fd51f20746586c
  > BUG: 1471031
  > Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Change-Id: I2416fc35115e60659e35b4b717fd51f20746586c
  BUG: 1540224
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* cluster/ec: OpenFD heal implementation for EC | Sunil Kumar Acharya | 2018-02-02 | 7 files | -32/+184
  Existing EC code doesn't try to heal the OpenFD to avoid unnecessary healing of the data later. The fix implements the healing of open FDs before carrying out file operations on them, by making an attempt to open the FDs on the required up nodes.

  Backport of:
  > BUG: 1431955
  BUG: 1536334
  Change-Id: Ib696f59c41ffd8d5678a484b23a00bb02764ed15
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
* cluster/dht: Use percentages for space check | N Balachandran | 2018-01-12 | 1 file | -5/+20
  With heterogeneous bricks now being supported in DHT, we could run into issues where files are not migrated even though there is sufficient space in newly added bricks which just happen to be considerably smaller than older bricks. Using percentages instead of absolute available space for space checks can mitigate that to some extent.
  Marking bug-1247563.t as bad, as it used to depend on the easier code to prevent a file from migrating. This will be removed once we find a way to force a file migration failure.

  > Change-Id: I3452520511f304dbf5af86f0632f654a92fcb647
  > BUG: 1529440
  > Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Change-Id: I3452520511f304dbf5af86f0632f654a92fcb647
  BUG: 1530455
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* cluster/dht: don't overfill the buffer in readdir(p) | Raghavendra G | 2017-12-08 | 1 file | -3/+18
  Superfluous dentries that cannot fit in the buffer size provided by the kernel are thrown away by fuse-bridge. This means:
  * the next readdir(p) seen by readdir-ahead would have an offset of a dentry returned in a previous readdir(p) response. When readdir-ahead detects a non-monotonic offset it turns itself off, which can result in poor readdir performance.
  * readdirp can be cpu-intensive on the brick and there is no point in reading all those dentries just to have them thrown away by fuse-bridge.
  So, the best strategy would be to fill the buffer optimally - neither overfill nor underfill.

  > Change-Id: Idb3d85dd4c08fdc4526b2df801d49e69e439ba84
  > BUG: 1492625
  > Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
  (cherry picked from commit e785faead91f74dce7c832848f2e8f3f43bd0be5)
  Change-Id: Idb3d85dd4c08fdc4526b2df801d49e69e439ba84
  BUG: 1478411
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
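A small standalone C sketch of the "fill optimally" strategy described above (hypothetical names and sizes, not the fuse-bridge or readdir-ahead code): stop adding entries as soon as the next one would no longer fit, and leave it for the following request instead of reading it only to throw it away.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical dentry list; real dirents are variable-length records. */
    static const char *names[] = { "file1", "a_longer_file_name", "file2", "dir3" };

    /* Pack names into buf without exceeding buf_size; return how many entries
     * were consumed so the caller can resume from the first one that did not
     * fit, instead of reading it now and discarding it. */
    static size_t
    pack_dentries(char *buf, size_t buf_size, size_t count)
    {
            size_t used = 0, i;

            for (i = 0; i < count; i++) {
                    size_t rec_len = strlen(names[i]) + 1;
                    if (used + rec_len > buf_size)
                            break;            /* would overfill: stop here */
                    memcpy(buf + used, names[i], rec_len);
                    used += rec_len;
            }
            return i;   /* entries [i..count) wait for the next readdir(p) */
    }

    int
    main(void)
    {
            char buf[16];
            size_t n = pack_dentries(buf, sizeof(buf), 4);
            printf("packed %zu of 4 entries\n", n);   /* packs 1: 6 + 19 > 16 */
            return 0;
    }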
* cluster/dht: populate inode in dentry for single subvolume dht | Raghavendra G | 2017-12-06 | 2 files | -1/+69
  ... in the readdirp response if the dentry points to a directory inode. This is a special case where the entire layout is stored in one single subvolume, and hence there is no need for a lookup to construct the layout.

  > Change-Id: I44fd951e2393ec9dac2af120469be47081a32185
  > BUG: 1492625
  > Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
  (cherry picked from commit 59d1cc720f52357f7a6f20bb630febc6a622c99c)
  Change-Id: I44fd951e2393ec9dac2af120469be47081a32185
  BUG: 1478411
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
* cluster/afr: Honor default timeout of 5min for analyzing split-brain files | karthik-us | 2017-11-30 | 1 file | -1/+5
  Problem: After setting the split-brain-choice option to analyze the file to resolve the split brain, using the command "setfattr -n replica.split-brain-choice -v "choiceX" <path-to-file>" should allow the file to be accessed from the mount for a default timeout of 5 minutes. But the timeout was not honored, and the file could be accessed even after the timeout.
  Fix: Call inode_invalidate() in afr_set_split_brain_choice_cbk() so that it will trigger the cache invalidation after resetting the timer and the split-brain choice. So the next calls to access the file will fail with EIO.

  Change-Id: I698cb833676b22ff3e4c6daf8b883a0958f51a64
  BUG: 1514380
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
  (cherry picked from commit 933ec57ccda2c1ba5ce6f207313c3b6802e67ca3)
* cluster/dht: make rebalance use truncate in case | Susant Palai | 2017-11-23 | 3 files | -71/+99
  .. the brick file system does not support fallocate.

  > Change-Id: Id76cda2d8bb3b223b779e5e7a34f17c8bfa6283c
  > BUG: 1488103
  > Signed-off-by: Susant Palai <spalai@redhat.com>
  Change-Id: Id76cda2d8bb3b223b779e5e7a34f17c8bfa6283c
  BUG: 1516691
  Signed-off-by: Susant Palai <spalai@redhat.com>
* cluster/dht: Don't set ACLs on linkto file | N Balachandran | 2017-11-20 | 1 file | -0/+11
  The trusted.SGI_ACL_FILE xattr appears to set POSIX ACLs on the linkto file that is the target of a file migration. This can mess up file permissions and cause linkto identification to fail. Now we remove all ACL xattrs from the results of the listxattr call on the source before setting them on the target.

  > BUG: 1514329
  > Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Change-Id: I56802dbaed783a16e3fb90f59f4ce849f8a4a9b4
  BUG: 1515042
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* cluster/dht: fix crash when deleting directories | Zhang Huan | 2017-10-25 | 1 file | -2/+4
  In DHT, after locks on all subvolumes are acquired, it performs the following steps sequentially:
  1. send remove dir on all other subvolumes except the hashed one, in a loop;
  2. wait for all pending rmdirs to be done;
  3. remove the dir on the hashed subvolume.
  The problem is that in step 1 there is a check to skip the hashed subvolume in the loop. If the last subvolume to check is actually the hashed one, and step 3 completes quickly before that last (hashed) subvolume is checked, the loop accesses shared context data that has already been destroyed in step 3, which causes a crash.
  Fix by saving the shared data in a local variable to access later in the loop.

  > BUG: 1490642
  > Signed-off-by: Zhang Huan <zhanghuan@open-fs.com>
  (cherry picked from commit 206120126d455417a81a48ae473d49be337e9463)
  Change-Id: I8db7cf7cb262d74efcb58eb00f02ea37df4be4e2
  BUG: 1505221
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
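A small standalone C sketch of the fix pattern described above (hypothetical names, not the actual DHT code): copy the shared value the loop needs into a local before winding anything, so the loop never touches the shared context after a racing completion path may have destroyed it.

    #include <stdio.h>

    /* Hypothetical stand-ins; in the real code the rmdir calls are wound
     * asynchronously and the shared context can be freed by their completion. */
    struct dir_ctx {
            int hashed_subvol;      /* may be destroyed once step 3 finishes */
    };

    static void
    wind_rmdir_on(int subvol)
    {
            printf("rmdir wound on subvol %d\n", subvol);
    }

    static void
    rmdir_on_other_subvols(struct dir_ctx *ctx, const int *subvols, int count)
    {
            /* Fix pattern: take a local copy up front, so the loop never
             * dereferences ctx after a racing path has destroyed it. */
            int hashed = ctx->hashed_subvol;

            for (int i = 0; i < count; i++) {
                    if (subvols[i] == hashed)   /* compare against the local copy */
                            continue;
                    wind_rmdir_on(subvols[i]);
            }
    }

    int
    main(void)
    {
            struct dir_ctx ctx = { .hashed_subvol = 2 };
            int subvols[] = { 0, 1, 2, 3 };

            rmdir_on_other_subvols(&ctx, subvols, 4);
            return 0;
    }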
* cluster/ec: Improve performance with xattrop update | Sunil Kumar Acharya | 2017-10-12 | 1 file | -24/+102
  Existing EC code updates the xattr on the subvolume in a sequential pattern, resulting in very poor performance. With this fix, EC now updates the xattr on the subvolume in parallel, which improves the xattr update performance.

  > BUG: 1445663
  > Change-Id: I3fc40d66db0b88875ca96a9fa01002ba386c0486
  > Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
  BUG: 1499150
  Change-Id: I3fc40d66db0b88875ca96a9fa01002ba386c0486
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
* cluster/afr: Make choose-local "reconfigurable" | Krutika Dhananjay | 2017-10-12 | 1 file | -0/+11
  Backport of:
  > Change-Id: Ibab292ba705d993b475cd0303fb3318211fb2500
  > Reviewed-on: https://review.gluster.org/18026
  > BUG: 1480525
  > cherry-picked from commit 1e2d6537875d16b783e3c50ada7ee61487c6d796
  With this change, enabling choose-local (which means its state transitions from "off" to "on") will be effective after the first gfid-lookup on "/" since volume-set was executed.

  Change-Id: Ibab292ba705d993b475cd0303fb3318211fb2500
  BUG: 1501022
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
* cluster/dht: Don't store the entire uuid for subvols | N Balachandran | 2017-10-12 | 4 files | -17/+45
  Comparing the uuid string of the local node against that stored in the local_subvol information is inefficient, especially as it is done for every file to be migrated. The code has now been changed to set the value of info to 1 if the nodeuuid is that of the node making the comparison, so this becomes an integer comparison.

  > BUG: 1451434
  > Signed-off-by: N Balachandran <nbalacha@redhat.com>
  > https://review.gluster.org/#/c/17851
  (cherry picked from commit c4a608799a577a4f38139f6bb8a47da8efb0fec3)
  Change-Id: I7491d59caad3b71dbf5facc94dcde0cd53962775
  BUG: 1500472
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* afr: heal gfid as a part of entry heal | Ravishankar N | 2017-10-10 | 4 files | -67/+120
  Problem: If a brick crashes after an entry (file or dir) is created but before the gfid is assigned, the good bricks will have pending entry heal xattrs, but the heal won't complete because afr_selfheal_recreate_entry() tries to create the entry again and it fails with EEXIST.
  Fix: We could have fixed posix_mknod/mkdir etc. to assign the gfid if the file already exists, but the right thing to do seems to be to trigger a lookup on the bad brick and let it heal the gfid, instead of winding an mknod/mkdir in the first place.

  (cherry picked from commit 20fa80057eb430fd72b4fa31b9b65598b8ec1265)
  Change-Id: I82f76665a7541f1893ef8d847b78af6466aff1ff
  BUG: 1499202
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* cluster/afr: Sending subvol up/down events when subvol comes up or goes down | karthik-us | 2017-10-06 | 1 file | -0/+2
  > BUG: 1493539
  (cherry picked from commit 3bbb4fe4b33dc3a3ed068ed2284077f2a4d8265a)
  Change-Id: I6580351b245d5f868e9ddc6a4eb4dd6afa3bb6ec
  BUG: 1492066
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
* afr: don't check for file size in afr_mark_source_sinks_if_file_empty | Ravishankar N | 2017-10-05 | 1 file | -6/+7
  ... for AFR_METADATA_TRANSACTION and just mark source and sinks if metadata is the same.

  (cherry picked from commit 24637d54dcbc06de8a7de17c75b9291fcfcfbc84)
  Change-Id: I69e55d3c842c7636e3538d1b57bc4deca67bed05
  BUG: 1496317
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* afr: auto-resolve split-brains for zero-byte files | Ravishankar N | 2017-10-05 | 3 files | -0/+79
  Problems:
  As described in BZ 1491670, renaming hardlinks can result in data/mdata split-brain of the DHT link-to files (T files) without any mismatch of data and metadata.
  As described in BZ 1486063, for a zero-byte file with only dirty bits set, the arbiter brick will likely be chosen as the source brick.
  Fix: For zero-byte files in split-brain, pick the first brick as
  a) data source if the file size is zero on all bricks;
  b) metadata source if the metadata is the same on all bricks.
  In the arbiter case, if the file size is zero on all bricks and there are no pending afr xattrs, pick the 1st brick as the data source.

  (cherry picked from commit 1719cffa911c5287715abfdb991bc8862f0c994e)
  Change-Id: I0270a9a2f97c3b21087e280bb890159b43975e04
  BUG: 1496317
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reported-by: Rahul Hinduja <rhinduja@redhat.com>
  Reported-by: Mabi <mabi@protonmail.ch>
* afr: heal metadata in discover code path | Ravishankar N | 2017-09-11 | 2 files | -17/+46
  Combined backport of https://review.gluster.org/17850 and https://review.gluster.org/18187.
  During graph switch, if fuse sends nameless (gfid) lookups, afr takes the discover code path to serve them. If there are pending metadata heals, they do not happen unless an inode refresh happens as a part of discover (which is not guaranteed to happen always). This patch fixes it by attempting metadata heal as a part of discover, just like it is done in the lookup code path.

  Change-Id: I87c493045b9225741cad173bf3f645848697032e
  BUG: 1488168
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: https://review.gluster.org/18202
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Karthik U S <ksubrahm@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
* dht: add FOP check to dht_file_setattr_cbk | Ravishankar N | 2017-09-10 | 1 file | -2/+2
  Problem: bug-797171.7 loaded the error-gen xlator on the brick, which sent EBADF for a non fd-based fop, namely setattr. This caused dht_check_and_open_fd_on_subvol_task() to crash as local->fd was NULL.
  Fix: Call dht_check_and_open_fd_on_subvol_task() from dht_file_setattr_cbk only for dht_fsetattr, and not for dht_setattr or dht_setattr2.

  > Reviewed-on: https://review.gluster.org/18208
  > Smoke: Gluster Build System <jenkins@build.gluster.org>
  > Reviewed-by: Susant Palai <spalai@redhat.com>
  > Reviewed-by: Amar Tumballi <amarts@redhat.com>
  > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  > Reviewed-by: N Balachandran <nbalacha@redhat.com>
  > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  (cherry picked from commit 47188e9eac59de416a5c86c7ec7540ed6aaa1c98)
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Change-Id: Iab4999e213bf2065804f3f8237e470ad454e3c99
  BUG: 1489260
  Reviewed-on: https://review.gluster.org/18222
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: N Balachandran <nbalacha@redhat.com>
* cluster/dht: Log files skipped by rebalance | N Balachandran | 2017-09-07 | 2 files | -1/+19
  There was no easy way to find out which files were skipped during a rebalance. Rebalance now logs a message for every skipped file using msgid 109126, making it easier to find all files that were skipped.

  > BUG: 1480445
  > Signed-off-by: N Balachandran <nbalacha@redhat.com>
  > Reviewed-on: https://review.gluster.org/18021
  > Smoke: Gluster Build System <jenkins@build.gluster.org>
  > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  > Reviewed-by: hari gowtham <hari.gowtham005@gmail.com>
  > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Change-Id: I2cac7db7285e2f82354251f3ea4094827b0daf3e
  BUG: 1486557
  (cherry picked from commit a4c43ba9374b8f75a48d38a032353a0c7d311a73)
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: https://review.gluster.org/18149
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* cluster/dht: Aggregate xattrs only for dirs in dht_discover_cbk | N Balachandran | 2017-09-07 | 1 file | -2/+11
  If dht_discover finds data files on more than one subvol, racing calls to dht_discover_cbk could end up calling dht_aggregate_xattr, which could delete dictionary data that is being accessed by higher layer translators. Fixed to call dht_aggregate_xattr only for directories and consider only the first file to be found.

  > BUG: 1484709
  > Signed-off-by: N Balachandran <nbalacha@redhat.com>
  > Reviewed-on: https://review.gluster.org/18137
  > Smoke: Gluster Build System <jenkins@build.gluster.org>
  > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Change-Id: I4f3d2a405ec735d4f1bb33a04b7255eb2d179f8a
  BUG: 1486538
  (cherry picked from commit 9420022df0962b1fa4f3ea3774249be81bc945cc)
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: https://review.gluster.org/18146
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* afr: check validity of afr_reply | Ravishankar N | 2017-09-07 | 5 files | -47/+71
  ... in various self-heal code paths. Originally found by Pranith in __afr_selfheal_name_impunge().
  Also change __afr_selfheal_assign_gfid() to send lookup only on those bricks that don't have a gfid matching that of the source.

  > Reviewed-on: https://review.gluster.org/18065
  > Smoke: Gluster Build System <jenkins@build.gluster.org>
  > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  (cherry picked from commit d594900dbca92c356152be65fce16f77c402117c)
  Change-Id: I70a2ccd750a2af92c5fc36e0eefb2b6125404b4a
  BUG: 1487319
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: https://review.gluster.org/18173
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* cluster/dht: Reorder dir operations in gf_defrag_fix_layout | N Balachandran | 2017-08-21 | 1 file | -76/+71
  Earlier, rebalance performed a fix-layout on a directory before healing its subdirectories. If there were a lot of subdirs, it could take a while before all subdirs were created on the newly added bricks. As dht_readdirp only lists dirs from their hashed subvol, those dirs which hashed to the newly added bricks but were not yet created on them were not listed.
  Now, the child dirs are listed and processed before the layout of the parent is fixed. This introduces a change in behaviour where files in subdirs are migrated before those in parent directories.

  Credit: Shyam <srangana@redhat.com>
  Github issue: #239
  > BUG: 1248393
  > Signed-off-by: N Balachandran <nbalacha@redhat.com>
  > Reviewed-on: https://review.gluster.org/18045
  > Smoke: Gluster Build System <jenkins@build.gluster.org>
  > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  (cherry picked from commit 96b33b4b278391ca8a7755cf274931d4f1808cb5)
  Change-Id: I8ae7f24a510754cd8d1b31e5d608bcf1928599e2
  BUG: 1483402
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: https://review.gluster.org/18071
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* cluster/dht: EBADF handling for fremovexattr and fsetxattr | N Balachandran | 2017-08-12 | 3 files | -3/+44
  Add EBADF handling for dht_fremovexattr and dht_fsetxattr.

  > BUG: 1476665
  > Signed-off-by: N Balachandran <nbalacha@redhat.com>
  > Reviewed-on: https://review.gluster.org/17999
  > Smoke: Gluster Build System <jenkins@build.gluster.org>
  > Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
  > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  (cherry picked from commit 747a08d34e2a1e94d7fce68a3577370288bb1955)
  Change-Id: Ide0d5812dae79655d2565157e5baabcd753b4309
  BUG: 1479303
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: https://review.gluster.org/18012
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* afr: Prevent null gfids in self-heal entry re-creation | Ravishankar N | 2017-08-12 | 1 file | -3/+11
  > Reviewed-on: https://review.gluster.org/17981
  > Smoke: Gluster Build System <jenkins@build.gluster.org>
  > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  > Reviewed-by: Amar Tumballi <amarts@redhat.com>
  > Reviewed-by: Karthik U S <ksubrahm@redhat.com>
  (cherry picked from commit bead74a6e085001225bc0704bad1a5db36dd75a1)
  Change-Id: I5acb8bd0a19fc4e764d61e349bb690b5236ee610
  BUG: 1479474
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: https://review.gluster.org/17997
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* cluster/dht: Check for open fd only on EBADF | N Balachandran | 2017-08-09 | 4 files | -160/+160
  DHT fd-based fops used to check if the fd was open on the cached subvol before winding the call. However, this introduced a performance regression of about 30% for reads. This check was introduced to handle cases where files were migrated while IOs were happening. As this is not the common case, DHT will now check if the fd is open on the cached subvol only if the call fails with EBADF. This prevents a performance hit when a rebalance is not running.

  > BUG: 1476665
  > Signed-off-by: N Balachandran <nbalacha@redhat.com>
  > Reviewed-on: https://review.gluster.org/17976
  > Smoke: Gluster Build System <jenkins@build.gluster.org>
  > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  > Reviewed-by: Amar Tumballi <amarts@redhat.com>
  > Reviewed-by: Susant Palai <spalai@redhat.com>
  > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  (cherry picked from commit cdca1cb26a0aba390c6d8485c0d6d95e22ffc8bd)
  Change-Id: I2035a858d63c3fcd22bb634055bbb0ad01686808
  BUG: 1479303
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: https://review.gluster.org/17995
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
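A small standalone C sketch of the "check only on EBADF" approach described above (hypothetical helpers, not the actual DHT fop code): wind the call optimistically, and only fall back to opening the fd on the cached subvolume and retrying when EBADF comes back.

    #include <errno.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for winding a fop and for the fd-open fallback. */
    static int do_read(int fd)              { return (fd < 0) ? -EBADF : 0; }
    static int open_on_cached_subvol(void)  { return 42; /* pretend new fd */ }

    static int
    read_with_ebadf_fallback(int fd)
    {
            int ret = do_read(fd);           /* common case: no extra check */

            if (ret == -EBADF) {
                    /* Rare case: fd not open on the cached subvol (e.g. the
                     * file was migrated). Open it there and retry once. */
                    fd = open_on_cached_subvol();
                    ret = do_read(fd);
            }
            return ret;
    }

    int
    main(void)
    {
            printf("result = %d\n", read_with_ebadf_fallback(-1));
            return 0;
    }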
* cluster/dht: Handle wrong rebalance status reporting | Susant Palai | 2017-08-03 | 1 file | -29/+32
  > Change-Id: Id91ef35f890055cd42b9a94462f92297c77f1fff
  > Bug: 1475282
  > Signed-off-by: Susant Palai <spalai@redhat.com>
  > Reviewed-on: https://review.gluster.org/17868
  > Tested-by: Raghavendra G <rgowdapp@redhat.com>
  > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  > Smoke: Gluster Build System <jenkins@build.gluster.org>
  > Signed-off-by: Susant Palai <spalai@redhat.com>
  Change-Id: Id91ef35f890055cd42b9a94462f92297c77f1fff
  Bug: 1477152
  Signed-off-by: Susant Palai <spalai@redhat.com>
  Reviewed-on: https://review.gluster.org/17943
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* cluster/dht: rebalance min-free-disk fix | Susant Palai | 2017-08-02 | 1 file | -6/+12
  To calculate the available space on a subvolume we used to do the following in __dht_check_free_space:
  post_availspace = (dst_statfs.f_bavail * dst_statfs.f_frsize) - stbuf->ia_size
  Subtracting the file size from the available space is tricky here. Sometimes the available space will be less than the file size, and since all the participating members in the calculation are unsigned int, the result is a large number (integer overflow).
  Solution: We do not need to subtract the file size from the space available, since fallocate would have reserved the file size space already.

  > Change-Id: I4f724358c44b9911933742ff3ff8d55b3dfda1cb
  > BUG: 1475282
  > Signed-off-by: Susant Palai <spalai@redhat.com>
  > Reviewed-on: https://review.gluster.org/17876
  > Smoke: Gluster Build System <jenkins@build.gluster.org>
  > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  > Reviewed-by: N Balachandran <nbalacha@redhat.com>
  > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  > Signed-off-by: Susant Palai <spalai@redhat.com>
  Change-Id: I4f724358c44b9911933742ff3ff8d55b3dfda1cb
  BUG: 1477152
  Signed-off-by: Susant Palai <spalai@redhat.com>
  Reviewed-on: https://review.gluster.org/17942
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
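The unsigned wraparound described above can be reproduced in a few lines of standalone C (hypothetical numbers, not Gluster code): when the file size exceeds the free space, the subtraction wraps to a huge value and the space check wrongly passes.

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
            uint64_t frsize    = 4096;               /* fragment size          */
            uint64_t f_bavail  = 1024;               /* free fragments: 4 MiB  */
            uint64_t file_size = 16 * 1024 * 1024;   /* 16 MiB file to migrate */

            uint64_t avail = f_bavail * frsize;      /* 4 MiB available        */

            /* Buggy: unsigned subtraction wraps when file_size > avail. */
            uint64_t post_availspace = avail - file_size;
            printf("wrapped result: %llu\n",
                   (unsigned long long)post_availspace); /* enormous value */

            /* The fix: don't subtract at all; fallocate has already reserved
             * the file's space, so just use 'avail' for the check. */
            printf("actual free space: %llu\n", (unsigned long long)avail);
            return 0;
    }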
* ec/cluster: Update failure of fop on a brick properly | Ashish Pandey | 2017-08-01 | 1 file | -7/+16
  Problem: In case of truncate, if writev or open fails on a brick, in some cases the failure is not marked on lock->good_mask. This causes the update of size and version on all the bricks even if it has failed on one of the bricks. That ultimately causes data corruption.
  Solution: In the callback of such writev and open calls, mark fop->good for the parent too.
  Thanks to Pranith Kumar K <pkarampu@redhat.com> for finding the root cause.

  > Change-Id: I8a1da2888bff53b91a0d362b8c44fcdf658e7466
  > BUG: 1476205
  > Signed-off-by: Ashish Pandey <aspandey@redhat.com>
  > Reviewed-on: https://review.gluster.org/17906
  > Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  > Smoke: Gluster Build System <jenkins@build.gluster.org>
  > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  > Signed-off-by: Ashish Pandey <aspandey@redhat.com>
  Change-Id: I8a1da2888bff53b91a0d362b8c44fcdf658e7466
  BUG: 1476868
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
  Reviewed-on: https://review.gluster.org/17932
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* cluster/dht: Correct iterator for decommissioned bricks | N Balachandran | 2017-07-31 | 1 file | -1/+1
  Corrected the iterator for looping over the list of decommissioned bricks while checking if the new target determined because of min-free-disk values has been decommissioned.

  > BUG: 1474318
  > Signed-off-by: N Balachandran <nbalacha@redhat.com>
  > Reviewed-on: https://review.gluster.org/17861
  > Reviewed-by: Susant Palai <spalai@redhat.com>
  (cherry picked from commit 8c3e766fe0a473734e8eca0f70d0318a2b909e2e)
  Change-Id: Iee778547eb7370a8069e954b5d629fcedf54e59b
  BUG: 1475181
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: https://review.gluster.org/17872
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* cluster/dht: Update size processed for non-migrated files | N Balachandran | 2017-07-31 | 1 file | -6/+10
  The size of non-migrated files was not added to the size_processed, causing incorrect rebalance estimate calculations. This has been fixed.

  > BUG: 1467209
  > Signed-off-by: N Balachandran <nbalacha@redhat.com>
  > Reviewed-on: https://review.gluster.org/17867
  > Reviewed-by: Amar Tumballi <amarts@redhat.com>
  > Smoke: Gluster Build System <jenkins@build.gluster.org>
  > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  (cherry picked from commit 24ab0ef44a1646223b59e33d0109d8424f8eddd0)
  Change-Id: I9f338c44da22b856e9fdc6dc558f732ae9a22f15
  BUG: 1475192
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: https://review.gluster.org/17873
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* cluster/dht: Fix negative rebalance estimates | N Balachandran | 2017-07-31 | 1 file | -8/+20
  The calculation of the rebalance estimates will start after the rebalance operation has been running for 10 minutes. This patch also changes the cli rebalance status code to use unsigned variables for the time calculations.

  > BUG: 1457985
  > Signed-off-by: N Balachandran <nbalacha@redhat.com>
  > Reviewed-on: https://review.gluster.org/17863
  > Reviewed-by: Amar Tumballi <amarts@redhat.com>
  > Smoke: Gluster Build System <jenkins@build.gluster.org>
  > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  (cherry picked from commit e21c915679244ddc1fae886e52badf02b4d95efc)
  Change-Id: Ic76f517c59ad938a407f1cf5e3b9add571690a6c
  BUG: 1475399
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: https://review.gluster.org/17882
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* cluster/dht: change log level to debug for thread activity | Susant Palai | 2017-07-31 | 1 file | -13/+11
  Every time a thread sleeps or wakes up, we log a message about that event. Sometimes this can be noisy, e.g. where the files eligible to be migrated are placed far away from each other. Moving the logs to DEBUG.

  > Change-Id: I4dc2cc9fdf4f42d4001754532a5bc4aeb3f0f959
  > BUG: 1474639
  > Signed-off-by: Susant Palai <spalai@redhat.com>
  > Reviewed-on: https://review.gluster.org/17866
  > Smoke: Gluster Build System <jenkins@build.gluster.org>
  > Reviewed-by: Amar Tumballi <amarts@redhat.com>
  > Reviewed-by: N Balachandran <nbalacha@redhat.com>
  > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  > Signed-off-by: Susant Palai <spalai@redhat.com>
  Change-Id: I4dc2cc9fdf4f42d4001754532a5bc4aeb3f0f959
  BUG: 1475662
  Signed-off-by: Susant Palai <spalai@redhat.com>
  Reviewed-on: https://review.gluster.org/17893
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: N Balachandran <nbalacha@redhat.com>
* cluster/dht: Fixed crash in dht_rmdir_is_subvol_empty | N Balachandran | 2017-07-20 | 1 file | -13/+34
  The local->call_cnt was being accessed and updated inside the loop where the entries were being processed and the calls were being wound. This could end up in a scenario where the local->call_cnt became 0 before the processing was complete, causing a crash when the next entry was being processed.

  Change-Id: I930f61f1a1d1948f90d4e58e80b7d6680cf27f2f
  BUG: 1472949
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: https://review.gluster.org/17825
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
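A small standalone C sketch of the usual fix for this class of bug (hypothetical names, not the actual DHT code): take the full call count before winding any call, so callbacks that decrement it can never see it reach zero while the loop is still walking the entries.

    #include <stdio.h>

    static int call_cnt;   /* normally decremented by each callback */

    static void
    wind_call(int entry)
    {
            /* Pretend the callback runs immediately and decrements the count. */
            call_cnt--;
            printf("wound entry %d, remaining calls %d\n", entry, call_cnt);
    }

    int
    main(void)
    {
            int entries[] = { 10, 20, 30 };
            int n = (int)(sizeof(entries) / sizeof(entries[0]));

            /* Fix: take the full count up front instead of updating call_cnt
             * inside the loop while callbacks race to decrement it. */
            call_cnt = n;

            for (int i = 0; i < n; i++)
                    wind_call(entries[i]);

            return 0;
    }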
* libglusterfs: Name threads on creation | Raghavendra Talur | 2017-07-19 | 5 files | -22/+24
  Set names to threads on creation for easier debugging.

  Output of top -H -p <PID-OF-GLUSTERFSD>

  Before:
  19773 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19774 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19775 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19776 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19777 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19778 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19779 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19780 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19781 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19782 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19783 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19784 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19785 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterfsd
  19786 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterfsd
  19787 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterfsd
  19789 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19790 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  25178 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  5398 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  7881 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd

  After:
  19773 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19774 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustertimer
  19775 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
  19776 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustermemsweep
  19777 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustersproc0
  19778 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustersproc1
  19779 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterepoll0
  19780 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusteridxwrker
  19781 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusteriotwr0
  19782 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterbrssign
  19783 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterbrswrker
  19784 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterclogecon
  19785 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterclogd0
  19786 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterclogd1
  19787 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterclogd2
  19789 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterposixjan
  19790 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterposixfsy
  25178 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterepoll1
  5398 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterepoll2
  7881 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterposixhc

  Change-Id: Id5f333755c1ba168a2ffaa4fce6e71c375e10703
  BUG: 1254002
  Updates: #271
  Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
  Reviewed-on: https://review.gluster.org/11926
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
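The renaming itself relies on the standard pthread facility; a minimal standalone sketch follows (plain pthreads, not the libglusterfs thread-creation wrapper used by the patch). On Linux, pthread_setname_np() limits names to 15 characters plus the terminating NUL, which is why the names above are abbreviated.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>

    static void *
    worker(void *arg)
    {
            char name[16] = { 0 };

            /* Name this thread after its role (<= 15 chars + NUL on Linux). */
            pthread_setname_np(pthread_self(), (const char *)arg);
            pthread_getname_np(pthread_self(), name, sizeof(name));
            printf("running as '%s'\n", name);
            return NULL;
    }

    int
    main(void)
    {
            pthread_t t;

            pthread_create(&t, NULL, worker, "glustertimer");
            pthread_join(t, NULL);
            return 0;
    }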
* cluster/afr: GFID split-brain resolution with existing CLI | karthik-us | 2017-07-18 | 6 files | -135/+315
  Problem: Currently there is no way for the admin to resolve a gfid split-brain from the CLI based on some policy like choice of brick, mtime, or size.
  Fix: With the existing CLI options based on size, mtime, and choice of brick, we do a lookup on the parent for the specified file. As part of the lookup, if we find a gfid mismatch, we resolve it based on the policy and return. If the file is not in gfid split-brain, then we check for data and metadata split-brain in the getxattr code path, and resolve if any.
  This will work provided the absolute path to the file is given with the CLI, and not the gfid of the file. Hence the source-brick policy without any file path will also not resolve the gfid split-brain, since it uses the gfid of the files. But it can resolve any other type of split-brain and skip the gfid mismatch resolution with the usual error message.
  Reverting the change https://review.gluster.org/17290. This patch resolves the issue.
  Fixes gluster/glusterfs#135

  Change-Id: Iaeba6fc32f184a34255d03be87cda02773130a09
  BUG: 1459530
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
  Reviewed-on: https://review.gluster.org/17485
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
* cluster/ec: Non-disruptive upgrade on EC volume fails | Sunil Kumar Acharya | 2017-07-14 | 1 file | -1/+4
  Problem: Enabling optimistic changelog on an EC volume was not handling node-down scenarios appropriately, resulting in volume data inaccessibility.
  Solution: Update the dirty xattr appropriately on good bricks whenever nodes are down. This fixes the metadata information as part of heal and thus ensures data accessibility.

  BUG: 1468261
  Change-Id: I08b0d28df386d9b2b49c3de84b4aac1c729ac057
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
  Reviewed-on: https://review.gluster.org/17703
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* afr: mark non sources as sinks in metadata heal | Ravishankar N | 2017-07-13 | 2 files | -3/+5
  Problem: In a 3-way replica, when the source brick does not have pending xattrs for the sinks, but the 2 sinks blame each other, metadata heal was not happening because we were not setting all non-sources as sinks.
  Fix: Mark all non-sources as sinks, like it is done in data and entry heal.

  Change-Id: I534978940f5087302e307fcc810a48ffe898ce08
  BUG: 1468279
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: https://review.gluster.org/17717
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* cluster/ec: Get size of file in EC [f]xattrop | Pranith Kumar K | 2017-07-13 | 2 files | -4/+17
  Problem: To allow parallel writes we shouldn't depend on ia_size being the same for all the bricks in each write_cbk(). But we need to make sure the backend size is correct on all the bricks and that no crashes/manual modifications happened.
  Fix: At the time of get_size_version() we do one check to make sure the size of the file is the same across the bricks. From then on the FOPs will give the status of the fop, so we rely on this information to keep track of which bricks are good/bad.

  Updates #251
  Change-Id: I1df645347e2e9f2e09cfa4411b6cc305d7f4e4e5
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: https://review.gluster.org/17741
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
* cluster/rebalance: Fix hardlink migration failures | Susant Palai | 2017-07-13 | 2 files | -15/+82
  A brief overview of how hardlink migration works:
  - Different hardlinks (to the same file) may hash to different bricks, but their cached subvol will be the same. Rebalance picks up the first hardlink, calculates its hash (call it TARGET) and sets the hashed subvolume as an xattr on the data file.
  - Now all the hardlinks that come after this will fetch that xattr and will create linkto files on TARGET (all linkto files for the hardlinks will be hardlinks to each other on TARGET).
  - When the number of hardlinks on the source is equal to the number of hardlinks on TARGET, the data migration will happen.
  RACE 1: Since rebalance is multi-threaded, the first lookup (which decides where the TARGET subvol should be) can be called by two hardlink migrations in parallel, and they may end up creating linkto files on two different TARGET subvols. Hence, hardlinks won't be migrated.
  Fix: Rely on the xattr response of the lookup inside gf_defrag_handle_hardlink, since it is executed under a synclock.
  RACE 2: The linkto files on TARGET can be created by other clients also if they are doing lookups on the hardlinks. Consider a scenario where you have 100 hardlinks. When rebalance is migrating the 99th hardlink, as a result of continuous lookups from another client, the link count on TARGET is equal to the source link count. Rebalance will migrate the data on the 99th hardlink itself. On the 100th hardlink migration, the hardlink will have TARGET as the cached subvolume. If its hash is also the same, then a migration will be triggered from TARGET to TARGET, leading to data loss.
  Fix: Make sure that before the final data migration, the source is not the same as the destination.
  RACE 3: Since a hardlink can be migrating to a non-hashed subvolume, a lookup from another client, or even rebalance itself, might delete the linkto file on TARGET, leading to hardlinks never getting migrated. This will be addressed in a different patch in the future.

  Change-Id: If0f6852f0e662384ee3875a2ac9d19ac4a6cea98
  BUG: 1469964
  Signed-off-by: Susant Palai <spalai@redhat.com>
  Reviewed-on: https://review.gluster.org/17755
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* cluster/dht: Clear clean_dst flag on target change | N Balachandran | 2017-07-11 | 1 file | -0/+5
  If the target of a file migration was changed because of min-free-disk limits, the dst_fd was closed but the clean_dst flag was not set to false. If the file could not be created on the new target for some reason, the ftruncate call to clean up the dst was sent on the now invalid fd, causing the process to deadlock.

  Change-Id: I5bfa80f519b04567413d84229cf62d143c6e2f04
  BUG: 1469029
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: https://review.gluster.org/17735
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* cluster/dht: Fix fd check race | N Balachandran | 2017-07-11 | 2 files | -1/+34
  There is another race between the cached subvol being updated in the inode_ctx and the fd being opened on the target:
  1. fop1 -> fd1 -> subvol0
  2. file migrated from subvol0 to subvol1 and cached_subvol changed to subvol1 in inode_ctx
  3. fop2 -> fd1 -> subvol1 [takes new cached subvol]
  4. fop2 -> checks fd ctx (fd not open on subvol1) -> opens fd1 on subvol1
  5. fop1 -> checks fd ctx (fd not open on subvol0) -> tries to open fd1 on subvol0 -> fails with "No such file on directory".
  Fix: If dht_fd_open_on_dst fails with ENOENT or ESTALE, wind to the old subvol and let the phase1/phase2 checks handle it.

  Change-Id: I34f8011574a8b72e3bcfe03b0cc4f024b352f225
  BUG: 1465075
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: https://review.gluster.org/17731
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
* cluster/dht: Use size to calculate estimates | N Balachandran | 2017-07-10 | 3 files | -24/+188
  The earlier approach of using the number of files to determine when the rebalance would complete did not work well when file sizes differed widely. The new approach now gets the total data size and uses that information to determine how long the rebalance is expected to take.

  Change-Id: I84e80a0893efab72ff06130e4596fa71c9c8c868
  BUG: 1467209
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: https://review.gluster.org/17668
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: MOHIT AGRAWAL <moagrawa@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
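A small standalone C sketch of the size-based estimate described above (hypothetical numbers and variable names, not the actual rebalance code): derive a rate from the data migrated so far and extrapolate over what remains, using unsigned 64-bit arithmetic throughout.

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
            uint64_t total_size     = 500ULL << 30;  /* 500 GiB to rebalance */
            uint64_t size_processed = 50ULL << 30;   /* 50 GiB done so far   */
            uint64_t elapsed_secs   = 3600;          /* in one hour          */

            /* Rate so far, then extrapolate over the remaining data. */
            uint64_t rate      = size_processed / elapsed_secs;     /* bytes/s */
            uint64_t remaining = total_size - size_processed;
            uint64_t eta_secs  = rate ? remaining / rate : 0;

            printf("estimated time left: %llu seconds (~%llu hours)\n",
                   (unsigned long long)eta_secs,
                   (unsigned long long)(eta_secs / 3600));  /* ~9 hours here */
            return 0;
    }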
* cluster/ec: Don't try to heal when no sink is UP | Ashish Pandey | 2017-07-07 | 1 file | -0/+6
  Problem: Consider a 4+2 EC volume configuration. If an untar of linux is going on and we kill a brick, indices will be created for the files/dirs which need to be healed. ec_shd_index_sweep spawns threads to scan these entries and start the heal. If in the middle of this we kill one more brick, we end up in a situation where we cannot heal an entry, as only "ec->fragment" number of bricks are UP. However, the scan will continue and it will trigger the heal for those entries.
  Solution: When a heal is triggered for an entry, check if it *CAN* be healed or not. If not, come out with ENOTCONN.

  Change-Id: I305be7701c289f36bd7bde22491b71074771424f
  BUG: 1464359
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
  Reviewed-on: https://review.gluster.org/17692
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: Sunil Kumar Acharya <sheggodu@redhat.com>
  Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
* cluster/ec: correctly handle end of file for seek | Xavier Hernandez | 2017-07-06 | 1 file | -0/+18
  When a SEEK_HOLE was issued near the end of the file, sometimes an offset beyond the end of the file was returned. Another problem was that using some offsets greater than the end of the file returned successfully instead of failing with ENXIO.

  Change-Id: I238d2884ba02fd19a78116b0f8f8e8d6338fb3f5
  BUG: 1449348
  Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
  Reviewed-on: https://review.gluster.org/17228
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* core: assorted typos and spelling mistakes from Debian lintian | Kaleb S. KEITHLEY | 2017-07-03 | 1 file | -1/+1
  Plus minor readability improvements.

  Reported-by: pmatthaei@debian.org
  Change-Id: I5393819a2fc9f240a19811143bb57b127df717cf
  BUG: 1466785
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: https://review.gluster.org/17660
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>