path: root/tests
* cluster/ec: Test script failing with brick multiplexing enabled (Sunil Kumar Acharya, 2017-07-20; 1 file, -6/+2)

Problem: Killing the bricks (using the kill signal) in test scripts results in test failures when brick multiplexing is enabled.

Solution: Updated the script to use the kill_brick function to bring down the bricks.

>BUG: 1472094
>Change-Id: Ibbf1fdc1be660ad3cd93e95af2838c0aae0181af
>Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
>Reviewed-on: https://review.gluster.org/17809
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
>Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

BUG: 1472794
Change-Id: Ibbf1fdc1be660ad3cd93e95af2838c0aae0181af
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Reviewed-on: https://review.gluster.org/17823
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

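As an illustration of the fix above, a minimal .t-style sketch (the helpers and variables -- TEST, kill_brick, $V0, $H0, $B0 -- are assumed to come from the test framework's include.rc/volume.rc; the brick path is hypothetical):

    #!/bin/bash
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../volume.rc

    # Do not kill the brick pid directly: with brick multiplexing one process
    # hosts many bricks, so a raw `kill -9` takes all of them down.
    TEST kill_brick $V0 $H0 $B0/${V0}0   # brings down only this brick
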
* cluster/ec: Non-disruptive upgrade on EC volume fails (Sunil Kumar Acharya, 2017-07-19; 4 files, -1/+99)

Problem: Enabling optimistic changelog on an EC volume was not handling node-down scenarios appropriately, resulting in volume data inaccessibility.

Solution: Update the dirty xattr appropriately on good bricks whenever nodes are down. This fixes the metadata information as part of heal and thus ensures data accessibility.

>BUG: 1468261
>Change-Id: I08b0d28df386d9b2b49c3de84b4aac1c729ac057
>Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
>Reviewed-on: https://review.gluster.org/17703
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
>Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

BUG: 1470938
Change-Id: I08b0d28df386d9b2b49c3de84b4aac1c729ac057
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Reviewed-on: https://review.gluster.org/17773
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Ashish Pandey <aspandey@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>

* afr: mark non sources as sinks in metadata heal (Ravishankar N, 2017-07-19; 1 file, -0/+64)

Problem: In a 3-way replica, when the source brick does not have pending xattrs for the sinks, but the 2 sinks blame each other, metadata heal was not happening because we were not setting all non-sources as sinks.

Fix: Mark all non-sources as sinks, like it is done in data and entry heal.

> Reviewed-on: https://review.gluster.org/17717
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

(cherry picked from commit 77c1ed5fd299914e91ff034d78ef6e3600b9151c)
Change-Id: I534978940f5087302e307fcc810a48ffe898ce08
BUG: 1471611
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://review.gluster.org/17781
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

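For context, a hedged way to observe the blame/pending xattrs this heal decision is based on (volume name "testvol", brick paths and file name are assumptions):

    # On each brick backend, dump the AFR changelog xattrs for the file:
    for b in /bricks/testvol/brick0 /bricks/testvol/brick1 /bricks/testvol/brick2; do
        echo "== $b =="
        getfattr -d -m 'trusted.afr' -e hex "$b/file1"
    done
    # Non-zero metadata counters in trusted.afr.testvol-client-N mark a brick
    # as blamed; after a successful metadata heal they return to all-zero.
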
* cluster/ec: correctly handle end of file for seek (Xavier Hernandez, 2017-07-06; 2 files, -0/+242)

When a SEEK_HOLE was issued near the end of a file, sometimes an offset beyond the end of the file was returned. Another problem was that seeking to some offsets greater than the end of the file returned successfully instead of failing with ENXIO.

>Change-Id: I238d2884ba02fd19a78116b0f8f8e8d6338fb3f5
>BUG: 1449348
>Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
>Reviewed-on: https://review.gluster.org/17228
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
>Reviewed-by: Amar Tumballi <amarts@redhat.com>
>Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
>(cherry picked from commit eb96dd45f8e583c6bad84bf32ca17e2bb01dd38f)

Change-Id: I238d2884ba02fd19a78116b0f8f8e8d6338fb3f5
BUG: 1468118
Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-on: https://review.gluster.org/17710
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

* tests/gfapi: Adding testcase to check handling of "." and ".." (Mohammed Rafi KC, 2017-07-05; 2 files, -0/+165)

Adding a testcase to check the proper handling of "." and ".." in a gfapi path. The patch which fixes the issue is https://review.gluster.org/#/c/17177

Backport of:
>Change-Id: I5c9cceade30f7d8a3b451b5f34f1cf9815729c4a
>BUG: 1447266
>Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
>Reviewed-on: https://review.gluster.org/17216
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
>Reviewed-by: Niels de Vos <ndevos@redhat.com>
>Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

Change-Id: I5c9cceade30f7d8a3b451b5f34f1cf9815729c4a
BUG: 1463528
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: https://review.gluster.org/17238
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: Shyamsundar Ranganathan <srangana@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

* cluster/afr: Implement quorum for lk fop (Pranith Kumar K, 2017-06-20; 1 file, -0/+255)

Problem: At the moment, with a replica 3 or arbiter setup, even when lk succeeds on just one brick we report success to the application, which is wrong.

Fix: Consider quorum-number of successes as success when quorum is enabled.

>BUG: 1461792
>Change-Id: I5789e6eb5defb68f8a0eb9cd594d316f5cdebaea
>Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
>Reviewed-on: https://review.gluster.org/17524
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
>Reviewed-by: Ravishankar N <ravishankar@redhat.com>

BUG: 1462661
Change-Id: I5789e6eb5defb68f8a0eb9cd594d316f5cdebaea
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: https://review.gluster.org/17578
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

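A small usage sketch of the client-quorum options involved (the volume name is an assumption):

    gluster volume set repvol cluster.quorum-type fixed
    gluster volume set repvol cluster.quorum-count 2
    # With this fix, an lk (posix lock) fop is reported successful only when at
    # least quorum-count bricks grant it, instead of any single brick.
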
* nl-cache: add group volume set option for ease of use (Poornima G, 2017-06-20; 1 file, -3/+6)

> Reviewed-on: https://review.gluster.org/17495
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
> (cherry picked from commit 38780ff2d0717cd800a49072879a664b04385fd1)

Change-Id: Id03643a9598da53051a01ca09e1d2a62bc195ab6
BUG: 1460896
Signed-off-by: Poornima G <pgurusid@redhat.com>
Reviewed-on: https://review.gluster.org/17528
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

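Assumed usage of the group option added here:

    # Enables the nl-cache xlator together with the cache-invalidation settings
    # it depends on, in one step, instead of setting each option by hand:
    gluster volume set VOLNAME group nl-cache
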
* index: Do not proceed with init if brick is not mounted (Ravishankar N, 2017-06-19; 4 files, -0/+21)

...or else, when a 'volume start force' is given, we end up creating the /brick-path/.glusterfs/indices folder and various subdirs under it, and eventually starting the brick process. As a part of this patch, glusterd_get_index_basepath() is added in glusterd, which is then used to create the basepath during volume-create, add-brick, replace-brick and reset-brick. The same function is also used to set the 'index-base' xlator option for the index translator.

> Reviewed-on: https://review.gluster.org/17426
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

(cherry picked from commit b58a15948fb3fc37b6c0b70171482f50ed957f42)
Change-Id: Id018cf3cb6f1e2e35b5c4cf438d1e939025cb0fc
BUG: 1462636
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://review.gluster.org/17562
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* core: fix spelling errors (Kaleb S. KEITHLEY, 2017-06-13; 4 files, -6/+6)

Fixes for various minor spelling errors and typos.

master BUG: 1457808
master: https://review.gluster.org/17442
Reported-by: Patrick Matthäi <pmatthaei@debian.org>

Change-Id: Ic1be36f82e3d822bbdc9559878bd79520fc0fcd5
BUG: 1459090
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: https://review.gluster.org/17475
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

* upcall: Update the access time in missing cases (Poornima G, 2017-06-13; 1 file, -0/+36)

Issue: In fops like rename, link, unlink etc., the parent dir's client access time was not being updated. And in fops like create, link, symlink etc., the new file/dir's client access time was not updated.

Solution: Update the client access time for both the parent and the new entry.

> Reviewed-on: https://review.gluster.org/17450
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Niels de Vos <ndevos@redhat.com>
> (cherry picked from commit 149db390fd89beee1e8a3d946d4224ba2a9b4711)

Change-Id: Id9f63583216ae857f6251dca15797ac66fa85430
BUG: 1460895
Signed-off-by: Poornima G <pgurusid@redhat.com>
Reviewed-on: https://review.gluster.org/17526
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

* libglusterfs: Fix crash in glusterd while peer probing (Gaurav Yadav, 2017-06-13; 1 file, -0/+25)

glusterd crashes when the port is explicitly set to a range that exceeds the range of the short data type, e.g.:
sysctl net.ipv4.ip_local_reserved_ports="49152-49156"
In the above case glusterd crashes while parsing the port. With this fix glusterd is able to handle port values between INT_MIN and INT_MAX.

> Reviewed-on: https://review.gluster.org/17359
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
> Reviewed-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>

Change-Id: I7c75ee67937b0e3384502973d96b1c36c89e0fe1
BUG: 1459759
Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
Reviewed-on: https://review.gluster.org/17496
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

* cluster/ec: Update xattr and heal size properly (Ashish Pandey, 2017-06-07; 2 files, -0/+88)

Problem 1: Recursive healing of the same file keeps happening when IO is going on, even after data heal completes.

Solution: RCA: At the end of the write, when ec_update_size_version gets called, we send it only on good bricks and not on the healing brick. Due to this, the xattr on the healing brick will always remain out of sync, and when the background heal checks source and sink it finds this brick to be healed and starts healing from scratch. That involves ftruncate and writing all of the data again. To solve this, send xattrop on all the good bricks as well as the healing bricks.

Problem 2: The above fix exposes data corruption during heal. If a write on a file is going on and heal finishes, we find that the file gets corrupted.

RCA: The real problem happens in ec_rebuild_data(). Here we receive the 'size' argument which contains the real file size at the time of starting self-heal, and it's assigned to heal->total_size. After that, a sequence of calls to ec_sync_heal_block() is done. Each call ends up calling ec_manager_heal_block(), which does the actual work of healing a block. First a lock on the inode is taken in state EC_STATE_INIT using ec_heal_inodelk(). When the lock is acquired, ec_heal_lock_cbk() is called. This function calls ec_set_inode_size() to store the real size of the inode (it uses heal->total_size). The next step is to read the block to be healed. This is done using a regular ec_readv(). One of the things this call does is to trim the returned size if the file is smaller than the requested size. In our case, when we read the last block of a file whose size was = 512 mod 1024 at the time of starting self-heal, ec_readv() will return only the first 512 bytes, not the whole 1024 bytes. This isn't a problem since the following ec_writev() sent from the heal code only attempts to write the amount of data read, so it shouldn't modify the remaining 512 bytes. However, ec_writev() also checks the file size. If we are writing the last block of the file (determined by the size stored on the inode, which we have set to heal->total_size), any data beyond the (imposed) end of file will be cleared with 0's. This causes the 512 bytes after heal->total_size to be cleared. Since the file was written after heal started, these bytes contained data, so the block written to the damaged brick will be incorrect.

Solution: Align heal->total_size to a multiple of the stripe size.

Thanks to "Xavier Hernandez" <xhernandez@datalab.es> for finding the root cause and fixing the issue.

>Change-Id: I6c9f37b3ff9dd7f5dc1858ad6f9845c05b4e204e
>BUG: 1428673
>Signed-off-by: Ashish Pandey <aspandey@redhat.com>
>Reviewed-on: https://review.gluster.org/16985
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
>Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
>Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
>Signed-off-by: Ashish Pandey <aspandey@redhat.com>

Change-Id: I6c9f37b3ff9dd7f5dc1858ad6f9845c05b4e204e
BUG: 1459392
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
Reviewed-on: https://review.gluster.org/17482
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

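The alignment rule in the final solution, shown as a small arithmetic sketch (stripe size and file size are illustrative values, not taken from the patch):

    stripe_size=1024
    size_at_heal_start=261632      # 512 mod 1024, i.e. the file ends mid-stripe
    aligned=$(( (size_at_heal_start + stripe_size - 1) / stripe_size * stripe_size ))
    echo "$aligned"                # 262144: heal->total_size rounded up to a full
                                   # stripe, so ec_writev() no longer zeroes live
                                   # data written just past the original size
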
* features/shard: Handle offset in appending writes (Pranith Kumar K, 2017-05-29; 2 files, -0/+211)

When a file is opened with append, all writes are appended at the end of the file irrespective of the offset given in the write syscall. This needs to be considered in the shard size update function and also for choosing which shard to write to. At the moment shard piggybacks on queuing from the write-behind xlator for ordering of the operations. So if write-behind is disabled and two parallel appending writes come, both of which can increase the file size beyond shard-size, the file will be corrupted.

>BUG: 1455301
>Change-Id: I9007e6a39098ab0b5d5386367bd07eb5f89cb09e
>Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
>Reviewed-on: https://review.gluster.org/17387
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
>NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

BUG: 1456225
Change-Id: I9007e6a39098ab0b5d5386367bd07eb5f89cb09e
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: https://review.gluster.org/17404
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

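A hedged reproduction outline for the pre-fix corruption window (mount path, volume variable and shard block size are assumptions):

    gluster volume set $V0 features.shard on
    gluster volume set $V0 features.shard-block-size 4MB
    gluster volume set $V0 performance.write-behind off   # removes the ordering shard relied on
    # Two parallel appending writers crossing the shard boundary:
    dd if=/dev/zero of=/mnt/$V0/file bs=1M count=8 oflag=append conv=notrunc &
    dd if=/dev/zero of=/mnt/$V0/file bs=1M count=8 oflag=append conv=notrunc &
    wait
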
* nl-cache: In case of nameless operations do not cache (Poornima G, 2017-05-25; 1 file, -0/+25)

Issue: In nameless lookups and other nameless fops, the parent inode will be NULL; when we try to add the cache to the NULL inode, it causes a crash. Hence handle the scenario of nameless fops, and do not cache/serve the nameless fops.

>Reviewed-on: https://review.gluster.org/17316
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
>Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
>(cherry picked from commit 284cd8851bfe60984d2f11b5c52fe3204ff43b06)

Change-Id: I3b90f882ac89e6aaf3419db89e6f890797f37700
BUG: 1454569
Signed-off-by: Poornima G <pgurusid@redhat.com>
Reviewed-on: https://review.gluster.org/17361
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

* cluster/ec: Implement FALLOCATE FOP for EC (Sunil Kumar Acharya, 2017-05-25; 2 files, -0/+132)

The FALLOCATE file operation is not implemented in the existing EC code. This change set implements it for EC.

>BUG: 1448293
>Change-Id: Id9ed914db984c327c16878a5b2304a0ea461b623
>Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
>Reviewed-on: https://review.gluster.org/15200
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
>Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

BUG: 1454686
Change-Id: Id9ed914db984c327c16878a5b2304a0ea461b623
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Reviewed-on: https://review.gluster.org/17369
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Ashish Pandey <aspandey@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* Tier: Watermark check for hi and low value being equal (hari gowtham, 2017-05-23; 1 file, -0/+1)

Problem: Both the low and hi watermarks could be set to the same value because the check missed the case of the values being equal.

Fix: Reject the hi and low values being equal, in addition to rejecting a low value higher than the hi value.

>Change-Id: Ia235163aeefdcb2a059e2e58a5cfd8fb7f1a4c64
>BUG: 1447960
>Signed-off-by: hari gowtham <hgowtham@redhat.com>
>Reviewed-on: https://review.gluster.org/17175
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>Tested-by: hari gowtham <hari.gowtham005@gmail.com>
>Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
>Reviewed-by: Milind Changire <mchangir@redhat.com>
>NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

Signed-off-by: hari gowtham <hgowtham@redhat.com>
Change-Id: Ia235163aeefdcb2a059e2e58a5cfd8fb7f1a4c64
BUG: 1454597
Reviewed-on: https://review.gluster.org/17364
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: hari gowtham <hari.gowtham005@gmail.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

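Expected behaviour after the fix, sketched with assumed volume name and values:

    gluster volume set tiervol cluster.watermark-hi 50
    gluster volume set tiervol cluster.watermark-low 50   # now rejected: low must stay below hi
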
* tests/lock_revocation: mark as bad (Raghavendra G, 2017-05-23; 1 file, -0/+1)

The test is failing in master. Looks like a hang causing Aborted test runs.

> Reviewed-on: https://review.gluster.org/17234

(cherry picked from commit d5865881de5653a0e810093a9867ab3962d00f67)
Change-Id: I7a589ad2c54bd55d62f4e66fdf8037c19fc123ea
BUG: 1454533
Reviewed-on: https://review.gluster.org/17360
Tested-by: Shyamsundar Ranganathan <srangana@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

* rda, glusterd: Change the max of rda-cache-limit to INFINITY (Poornima G, 2017-05-22; 2 files, -1/+22)

Issue: The max value of rda-cache-limit is 1GB before this patch. When parallel-readdir is enabled, there will be many instances of readdir-ahead, hence the rda-cache-limit depends on the number of instances. E.g.: on a volume with distribute count 4, the rda-cache-limit when parallel-readdir is enabled will be 4GB instead of 1GB. Consider the following sequence of operations (written out as commands after this entry):
- Enable parallel-readdir
- Set rda-cache-limit to, let's say, 3GB
- Disable parallel-readdir; this results in one instance of readdir-ahead and the rda-cache-limit will be back to 1GB, but the current value is 3GB and hence the mount will stop working, as 3GB > max 1GB.

Solution: To fix this, we could limit the cache to 1GB even when parallel-readdir is enabled. But there is no necessity to limit the cache to 1GB; it can be increased if the system has enough resources. Hence getting rid of the rda-cache-limit max value is more apt. If we just change the rda-cache-limit max to INFINITY, we will render older (<3.11) clients broken when the rda-cache-limit is set to > 1GB (as the older clients still expect a value < 1GB). To safely change the max value of rda-cache-limit to INFINITY, add a check in glusterd to verify all the clients are > 3.11 if the value exceeds 1GB.

>Reviewed-on: https://review.gluster.org/17338
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
>NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
>(cherry picked from commit e43b40296956d132c70ffa3aa07b0078733b39d4)

Change-Id: Id0cdda3b053287b659c7bf511b13db2e45b92032
BUG: 1453152
Signed-off-by: Poornima G <pgurusid@redhat.com>
Reviewed-on: https://review.gluster.org/17354
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

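The problematic sequence from the description, as commands (the volume variable is an assumption; option names follow the parallel-readdir/readdir-ahead features):

    gluster volume set $V0 performance.parallel-readdir on
    gluster volume set $V0 performance.rda-cache-limit 3GB    # per readdir-ahead instance
    gluster volume set $V0 performance.parallel-readdir off   # previously left 3GB > the old 1GB max
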
* glusterd: Don't spawn new glusterfsds on node reboot with brick-mux (Samikshan Bairagya, 2017-05-22; 1 file, -0/+54)

With brick multiplexing enabled, upon a node reboot new bricks were not being attached to the first spawned brick process even though there weren't any compatibility issues. The reason for this is that upon glusterd restart after a node reboot, since brick services aren't running, glusterd starts the bricks in a "no-wait" mode. So after a brick process is spawned for the first brick, there isn't enough time for the corresponding pid file to get populated with a value before the compatibility check is made for the next brick.

This commit solves this by iteratively waiting for the pidfile to be populated in the brick compatibility comparison stage before checking if the brick process is alive.

> Reviewed-on: https://review.gluster.org/17307
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

(cherry picked from commit 13e7b3b354a252ad4065f7b2f0f805c40a3c5d18)
Change-Id: Ibd1f8e54c63e4bb04162143c9d70f09918a44aa4
BUG: 1453086
Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
Reviewed-on: https://review.gluster.org/17351
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* afr: gfid-mismatch-resolution-with-fav-child-policy.t to bad tests (Ravishankar N, 2017-05-17; 1 file, -0/+3)

gfid-mismatch-resolution-with-fav-child-policy.t does a `TEST ls $M0/f3` (line #170) to trigger healing of a file in gfid split-brain in a rep-3 volume. But the code to trigger name heal of a gfid split-brain file is not yet there. The test is passing due to a lookup/stat on $M0, which triggers a background entry self-heal (which has the code to heal gfid split-brain files) that may or may not complete the heal before line 170. If it doesn't, the lookup on f3 fails with EIO.

Add the .t to bad tests until Karthik's patch for CLI-based gfid split-brain resolution fixes name heal also.

> BUG: 1450730
> Signed-off-by: Ravishankar N <ravishankar@redhat.com>
> Reviewed-on: https://review.gluster.org/17290

(cherry picked from commit ba0fc77947c9e873350b58a0e3e93ab51cc56b37)
BUG: 1451887
Change-Id: Iba6e9d81db386bc406aff1ecb6a18851f09bf7c0
Reviewed-on: https://review.gluster.org/17319
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
Tested-by: Shyamsundar Ranganathan <srangana@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>

* Fixes quota aux mount failure (Sanoj Unnikrishnan, 2017-05-17; 15 files, -19/+71)

The aux mount is created on the first limit/remove_limit/list command and it remains until the volume is stopped / deleted / quota is disabled, at which point we do a lazy unmount. If the process is uncleanly terminated, then the mount entry remains and we get a (Transport disconnected) error on subsequent attempts to run quota list/limit-usage/remove commands.

Second issue: there is also a risk of an inadvertent rm -rf on /var/run/gluster causing data loss for the user. Ideally, /var/run is a temp path for application use and should not cause any data loss to persistent storage.

Solution:
1) Unmount the aux mount after each use.
2) Clean up any stale mount before mounting, if any.

One caveat with doing mount/unmount on each command is that we cannot use the same mount point for both list and limit commands. The reason for this is that the list command needs the mount to be accessible in the cli after the response from glusterd, so it could be unmounted by a limit command executed in parallel (had we used the same mount point). Hence we use separate mount points for the list and limit commands.

>Reviewed-on: https://review.gluster.org/16938
>NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>Reviewed-by: Manikandan Selvaganesh <manikandancs333@gmail.com>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
>Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
>Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
>(cherry picked from commit 2ae4b4058691b324535d802f4e6d24cce89a10e5)

Change-Id: I4f9e39da2ac2b65941399bffb6440db8a6ba59d0
BUG: 1449775
Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
Reviewed-on: https://review.gluster.org/17240
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

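For reference, the commands whose aux-mount lifecycle changes here (volume name and path are assumptions):

    gluster volume quota demovol enable
    gluster volume quota demovol limit-usage /projects 10GB
    gluster volume quota demovol list
    # After this patch each list/limit command mounts, does its work and unmounts,
    # rather than leaving a long-lived mount under /var/run/gluster.
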
* cluster/dht: Rebalance on all nodes should migrate files (N Balachandran, 2017-05-17; 2 files, -2/+165)

Problem: Rebalance compares the node-uuid of a file against its own and migrates a file only if they match. However, the current behaviour in both AFR and EC is to return the node-uuid of the first brick in a replica set for all files. This means a single node ends up migrating all the files if the first brick of every replica set is on the same node.

Fix: AFR and EC will return all node-uuids for the replica set. The rebalance process will divide the files to be migrated among all the nodes by hashing the gfid of the file and using that value to select a node to perform the migration. This patch makes the required DHT and tiering changes. Some tests in rebal-all-nodes-migrate.t will need to be uncommented once the AFR and EC changes are merged.

> BUG: 1366817
> Signed-off-by: N Balachandran <nbalacha@redhat.com>
> Reviewed-on: https://review.gluster.org/17239
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Amar Tumballi <amarts@redhat.com>
> Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
> Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

(cherry picked from commit b23bd3dbc2c153171d0bb1205e6804afe022a55f)
Change-Id: I5ce41600f5ba0e244ddfd986e2ba8fa23329ff0c
BUG: 1451573
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17312
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

* Tier/cli: detach status xml output (hari gowtham, 2017-05-17; 1 file, -0/+9)

Problem: The detach status xml output was broken because of a wrong argument. The status_op sent to verify whether it is a tier status command was passed as false.

Fix: The argument being passed was changed from false to true.

>Change-Id: I8cdd4dd972d6bfbb61c1182cbf4097767f83c7c5
>BUG: 1446362
>Signed-off-by: hari gowtham <hgowtham@redhat.com>
>Reviewed-on: https://review.gluster.org/17131
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>Tested-by: hari gowtham <hari.gowtham005@gmail.com>
>NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
>Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
>Reviewed-by: N Balachandran <nbalacha@redhat.com>

Change-Id: I8cdd4dd972d6bfbb61c1182cbf4097767f83c7c5
BUG: 1451591
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: https://review.gluster.org/17315
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: hari gowtham <hari.gowtham005@gmail.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

* glusterd: Make reset-brick work correctly if brick-mux is on (Samikshan Bairagya, 2017-05-16; 2 files, -1/+85)

Reset brick currently kills off the corresponding brick process. However, with brick multiplexing enabled, stopping the brick process would render all bricks attached to it unavailable. To handle this correctly, we need to make sure that the brick process is terminated only if brick multiplexing is disabled. Otherwise, we should send the GLUSTERD_BRICK_TERMINATE rpc to the respective brick process to detach the brick that is to be reset.

> Reviewed-on: https://review.gluster.org/17128
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

(cherry picked from commit 74383e3ec6f8244b3de9bf14016452498c1ddcf0)
Change-Id: I69002d66ffe6ec36ef48af09b66c522c6d35ac58
BUG: 1449933
Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
Reviewed-on: https://review.gluster.org/17245
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* afr: include quorum type and count when dumping afr priv (Ravishankar N, 2017-05-12; 1 file, -0/+47)

Squash of https://review.gluster.org/17196 and https://review.gluster.org/17215

Dump the client quorum type ('auto', 'fixed' or 'none'). If it is 'fixed', also dump the quorum-count. This information will be available in the client statedump and in /<fuse_mount>/.meta/graphs/active/testvol-replicate-X/private. Also added a test-case.

Change-Id: I91367c5250b26efb35e5f7d7c397def09cc77cbc
BUG: 1449921
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://review.gluster.org/17243
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

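An illustrative check through the .meta interface mentioned above (mount point and volume name are assumptions):

    grep -i quorum /mnt/testvol/.meta/graphs/active/testvol-replicate-0/private
    # Expected to show the quorum type ('auto', 'fixed' or 'none') and, for
    # 'fixed', the configured quorum-count.
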
* glusterd: socketfile & pidfile related fixes for brick multiplexing feature (Mohit Agrawal, 2017-05-10; 5 files, -11/+145)

Problem: While brick multiplexing is on, after restarting glusterd the CLI does not show the pid of all brick processes in all volumes.

Solution: While brick-mux is on, all local brick processes communicate through one UNIX socket, but as per the current code (glusterd_brick_start) glusterd tries to communicate with a separate UNIX socket for each volume, which is populated based on brick-name and vol-name. Because of the multiplexing design only one UNIX socket is opened, so it throws a poller error and is not able to fetch the correct status of the brick process through the cli process. To resolve the problem, write a new function glusterd_set_socket_filepath_for_mux, which is called by glusterd_brick_start to validate the existence of the socketpath. To avoid the continuous EPOLLERR errors in logs, update the socket_connect code.

Test: To reproduce the issue follow the steps below (written out as commands after this entry):
1) Create two distributed volumes (dist1 and dist2)
2) Set cluster.brick-multiplex to on
3) Kill glusterd
4) Run the command gluster v status
After applying the patch it shows the correct pid for all volumes.

> BUG: 1444596
> Change-Id: I5d10af69dea0d0ca19511f43870f34295a54a4d2
> Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
> Reviewed-on: https://review.gluster.org/17101
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Prashanth Pai <ppai@redhat.com>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
> (cherry picked from commit 21c7f7baccfaf644805e63682e5a7d2a9864a1e6)

Change-Id: Ia95b9d36e50566b293a8d6350f8316dafc27033b
BUG: 1449004
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://review.gluster.org/17212
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

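The reproduction steps above, written out as commands (hostnames, brick paths and exact ordering are assumptions):

    gluster volume set all cluster.brick-multiplex on
    gluster volume create dist1 vm1:/bricks/dist1/b1 vm1:/bricks/dist1/b2 force
    gluster volume create dist2 vm1:/bricks/dist2/b1 vm1:/bricks/dist2/b2 force
    gluster volume start dist1 && gluster volume start dist2
    pkill glusterd; glusterd
    gluster volume status    # with the fix, the (shared) brick pid is shown for every volume
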
* Halo Replication feature for AFR translator (Kevin Vigor, 2017-05-08; 3 files, -1/+27)

Backport of https://review.gluster.org/16177 and https://review.gluster.org/17174. Merged both these patches to make sure IPV6 changes don't make it to 3.11 at all.

Summary:
Halo Geo-replication is a feature which allows Gluster or NFS clients to write locally to their region (as defined by a latency "halo" or threshold if you like), and have their writes asynchronously propagate from their origin to the rest of the cluster. Clients can also write synchronously to the cluster simply by specifying a halo-latency which is very large (e.g. 10 seconds), which will include all bricks. In other words, it allows clients to decide at mount time if they desire synchronous or asynchronous IO into a cluster, and the cluster can support both of these modes to any number of clients simultaneously.

There are a few new volume options due to this feature:
- halo-shd-latency: The threshold below which self-heal daemons will consider children (bricks) connected.
- halo-nfsd-latency: The threshold below which NFS daemons will consider children (bricks) connected.
- halo-latency: The threshold below which all other clients will consider children (bricks) connected.
- halo-min-replicas: The minimum number of replicas which are to be enforced regardless of latency specified in the above 3 options. If the number of children falls below this threshold the next best (chosen by latency) shall be swapped in.

New FUSE mount options: halo-latency & halo-min-replicas, as described above.

This feature combined with multi-threaded SHD support (D1271745) results in some pretty cool geo-replication possibilities.

Operational Notes:
- Global consistency is guaranteed for synchronous clients; this is provided by the existing entry-locking mechanism.
- Asynchronous clients, on the other hand, are merely consistent to their region. Writes & deletes will be protected via entry-locks as usual, preventing concurrent writes into files which are undergoing replication. Read operations should never block.
- Writes are allowed from _any_ region and propagated from the origin to all other regions. The take-away from this is that care should be taken to ensure multiple writers do not write the same files, resulting in a gfid split-brain which will require resolution via split-brain policies (majority, mtime & size). The recommended method for preventing this is using the nfs-auth feature to define which region for each share has RW permissions; tiers not in the origin region should have RO perms.

TODO:
- Synchronous clients (including the SHD) should choose clients from their own region as preferred sources for reads. Most of the plumbing is in place for this via the child_latency array.
- Better GFID split brain handling & better dent type split brain handling (i.e. create a trash can and move the offending files into it).
- Tagging in addition to latency as a means of defining which children you wish to synchronously write to.

Test Plan:
- The usual suspects: clang, gcc w/ address sanitizer & valgrind
- Prove tests

Reviewers: jackl, dph, cjh, meyering
Reviewed By: meyering
Subscribers: ethanr
Differential Revision: https://phabricator.fb.com/D1272053
Tasks: 4117827

>Change-Id: I694a9ab429722da538da171ec528406e77b5e6d1
>BUG: 1428061
>Signed-off-by: Kevin Vigor <kvigor@fb.com>
>Reviewed-on: http://review.gluster.org/16099
>Reviewed-on: https://review.gluster.org/16177
>Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
>Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

BUG: 1448416
Change-Id: I694a9ab429722da538da171ec528406e77b5e6d1
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: https://review.gluster.org/17192
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Kaushal M <kaushal@redhat.com>

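A rough sketch of how a client would opt into the asynchronous ("halo") mode described above. The option names are taken from the text; the exact mount/volume option spellings in the merged code may differ, so treat this as an assumption:

    # Asynchronous client: only bricks within a 10 ms latency halo are written
    # synchronously, but never fewer than 2 replicas.
    mount -t glusterfs -o halo-latency=10,halo-min-replicas=2 server1:/geo-vol /mnt/geo

    # Synchronous client: a very large halo includes every brick.
    mount -t glusterfs -o halo-latency=10000 server1:/geo-vol /mnt/geo-sync
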
* gfapi/handleops: Introducing glfs_xreaddirplus_r() fop for handleops (Soumya Koduri, 2017-05-05; 2 files, -0/+134)

It's known that the readdirplus operation fetches stat as well for each of the dirents. But often applications may need extra information; for e.g., NFS-Ganesha, which operates on handles, needs handles for each of those dirents returned. That would require extra calls to the backend, in this case LOOKUP (which is a very expensive operation), resulting in very low readdir performance.

To address that, introduce this new API, using which applications can request any extra information to be returned as part of the readdirplus response. Currently this new API returns stat and handles as demanded by the application. The synopsis of the API is noted in glfs.h.

@todo:
* Enhance the test script using this new API

Below are the perf results on a single-brick volume with and without these changes. Dataset used: 10*100 directories, each directory containing 100 empty files. NFS-Ganesha was used to test these changes:

>for i in {1..5}; do systemctl restart nfs-ganesha; sleep 10; mount -t nfs -o vers=4 localhost:/brick_vol /mnt; cd /mnt; echo "ITERATION$i"; date; find . > tmp-nfs.log; date; cd /; umount /mnt; sleep 2; done;

Without these changes:
ITERATION1: Mon Mar 20 17:22:26 IST 2017 - Mon Mar 20 17:23:18 IST 2017
ITERATION2: Mon Mar 20 17:23:39 IST 2017 - Mon Mar 20 17:24:28 IST 2017
ITERATION3: Mon Mar 20 17:24:49 IST 2017 - Mon Mar 20 17:25:36 IST 2017
ITERATION4: Mon Mar 20 17:30:57 IST 2017 - Mon Mar 20 17:31:37 IST 2017
ITERATION5: Mon Mar 20 17:31:57 IST 2017 - Mon Mar 20 17:32:40 IST 2017
On average ~46.2 sec.

With these changes applied:
ITERATION1: Mon Mar 20 17:35:03 IST 2017 - Mon Mar 20 17:35:15 IST 2017
ITERATION2: Mon Mar 20 17:35:36 IST 2017 - Mon Mar 20 17:35:46 IST 2017
ITERATION3: Mon Mar 20 17:36:06 IST 2017 - Mon Mar 20 17:36:17 IST 2017
ITERATION4: Mon Mar 20 17:41:38 IST 2017 - Mon Mar 20 17:41:49 IST 2017
ITERATION5: Mon Mar 20 17:42:10 IST 2017 - Mon Mar 20 17:42:20 IST 2017
On average ~10.8 sec.

This is a backport of the upstream patch below: https://review.gluster.org/15663

>Updates #174
>BUG: 1442950
>Change-Id: I0f74f74dc62085ca4c4a23c38e3edc84bd850876
>Signed-off-by: Soumya Koduri <skoduri@redhat.com>
>Reviewed-on: https://review.gluster.org/15663
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
>Reviewed-by: Niels de Vos <ndevos@redhat.com>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

BUG: 1447571
Change-Id: I0f74f74dc62085ca4c4a23c38e3edc84bd850876
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Reviewed-on: https://review.gluster.org/17164
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

* core: remove experimental xlators and associated tests (Kaleb S. KEITHLEY, 2017-05-03; 5 files, -252/+0)

Experimental xlators are not included in 3.11.

Cherry picked from 4231c40973c60999f5ef759db450d25e129ef6ba:
> Change-Id: I547480ee5e7912664784643e436feb198b6d16d0
> BUG: 1415866
> Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
> Reviewed-on: https://review.gluster.org/16447
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
> Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

Change-Id: I547480ee5e7912664784643e436feb198b6d16d0
BUG: 1447543
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: https://review.gluster.org/17154
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* cluster/dht: Make rebalance throttle option tuned by number (Susant Palai, 2017-04-29; 1 file, -0/+14)

The current rebalance throttle options lazy/normal/aggressive may not always be sufficient for the purpose of throttling. In our recent test we observed that, for certain setups, normal and aggressive modes behaved similarly, consuming the full disk bandwidth. So in cases like this the admin should be able to tune it down (or vice versa) depending on the need.

Along with the old throttle configurations, the thread count can now be tuned by number, e.g. gluster v set vol-name cluster.rebal-throttle 5. The admin can tune up/down between 0 and the number of cores available.

Note: For heterogeneous servers, validation will fail on the old server if a number is given for the throttle configuration. The message looks something like this:
"volume set: failed: Staging failed on vm2. Error: cluster.rebal-throttle should be {lazy|normal|aggressive}"

Test: Manual test by logging the active thread count after reconfiguring the throttle option.
testcase: tests/basic/distribute/throttle-rebal.t

Change-Id: I46e3cde546900307831028b344ecf601fd9b02c3
BUG: 1438370
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: https://review.gluster.org/16980
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

* libglusterfs: accept random volname in glusterfs_graph_prepare() (Niels de Vos, 2017-04-26; 4 files, -1/+108)

When the call to glfs_new("volname") passes a name for the volume and it does not match the name of the subvolume in the graph, glfs_init() will fail. This is easily reproducible by a gfapi program that loads the volume from a .vol file, and not from a GlusterD server.

Change-Id: I33e77fbee7d12eaefe7c384fad6aecfa3582ea5a
BUG: 1425623
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/16796
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

* debug/sink: add xlator to aid in resource leak debugging (Niels de Vos, 2017-04-25; 2 files, -0/+37)

This new xlator does not allocate any resources on init(). This makes it a good option to use for debugging xlator-related resource leaks on fini().

By putting the sink xlator as the single xlator in a .vol file and loading it through gfapi, we can investigate the resource leaks that happen through gfapi (and the Gluster core). By extending the .vol file with additional xlators, it is possible to analyze resource leaks of single xlators.

Change-Id: Idb5faa861b623dd5b2a988b181e669b0d52c2a0e
BUG: 1425623
Fixes: #176
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/16806
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

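A minimal sketch of such a single-xlator volfile (the file name and the xlator instance name are arbitrary assumptions):

    cat > /tmp/sink.vol <<'EOF'
    volume leak-check-sink
        type debug/sink
    end-volume
    EOF
    # Load it from a small gfapi program (glfs_new/glfs_set_volfile/glfs_init)
    # and run that under valgrind; anything that still leaks then belongs to
    # gfapi or the Gluster core rather than to a "real" xlator.
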
* cluster/afr: GFID split brain resolution with favorite-child-policy (karthik-us, 2017-04-20; 1 file, -0/+228)

Problem: Currently the automatic split-brain resolution with favorite child policy is not resolving the GFID split-brains.

Fix: When there is a GFID split-brain and the favorite child policy is set to size/mtime/ctime/majority, decide on the source and sinks based on the policy. Delete the entry from the sinks and recreate it from the source. Mark the appropriate pending attributes and resolve the GFID split-brain. When the heal takes place it will complete the pending heals and reset the attributes.

Change-Id: Ie30e5373f94ca6f276745d9c3ad662b8acca6946
BUG: 1430719
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Reviewed-on: https://review.gluster.org/16878
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

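Assumed CLI flow for the automatic resolution this enables (the volume name is an assumption):

    gluster volume set repvol cluster.favorite-child-policy size
    gluster volume heal repvol
    # With this change, entries in GFID split-brain are also resolved: the file
    # is deleted on the sinks and recreated from the source chosen by the policy.
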
* Implement negative lookup cache (Poornima G, 2017-04-20; 1 file, -0/+64)

Before creating any file, negative lookups (1 in FUSE, 4 in SMB etc.) are sent to verify if the file already exists. Serving these lookups from the cache when possible increases the create performance by multiple folds in SMB access and by some percentage in FUSE/NFS access.

Feature page: https://review.gluster.org/#/c/16436
Updates #82

Change-Id: Ib1c0e7ac7a386f943d84f6398c27f9a03665b2a4
BUG: 1442569
Signed-off-by: Poornima G <pgurusid@redhat.com>
Reviewed-on: https://review.gluster.org/16952
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* features/bit-rot-stub: bring in optional versioning (Raghavendra Bhat, 2017-04-18; 3 files, -0/+9)

As of now bit-rot-stub does versioning always. This leads to lots of getxattr calls being made in lookups. So make object versioning optional.

Change-Id: I83713e45ae59fb28004bb3cfa008f2d69edebbfa
BUG: 1359599
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: https://review.gluster.org/14442
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* afr: don't do a post-op on a brick if op failed (Ravishankar N, 2017-04-18; 1 file, -0/+46)

Problem: In afr-v2, self-blaming xattrs are not there by design. But if the FOP failed on a brick due to an error other than ENOTCONN (or even due to ENOTCONN, but we regained connection before the post-op was wound), we wind the post-op also on the failed brick, leading to setting self-blaming xattrs on that brick. This can lead to undesired results like healing of files in split-brain etc.

Fix: If a fop failed on a brick on which pre-op was successful, do not perform post-op on it. This also produces the desired effect of not resetting the dirty xattr on the brick, which is how it should be, because if the fop failed on a brick there is no reason to clear the dirty bit, which actually serves as an indication of the failure.

Change-Id: I5f1caf4d1b39f36cf8093ccef940118638caa9c4
BUG: 1438255
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://review.gluster.org/16976
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* worm: add check for internal processes in ftruncate() (Amar Tumballi, 2017-04-18; 1 file, -2/+0)

This patch fixes the recently seen issues with the worm_sh.t test.

RCA:
$ git log --oneline xlators/features/read-only/src/worm.c
1b01bdc worm: allow Self-heal-Daemon to perform some operations
c5a4a77 features/worm: Adding implementation for ftruncate

These two patches were merged in the reverse order of their submission, and hence the check added for internal processes got missed in the new fop 'ftruncate()'. worm_sh.t passed its tests because, at the time that patch was submitted, there was no ftruncate() in the worm xlator.

Change-Id: I81a8a45fa2679917a2c859c4f5224a2c3edbc784
BUG: 1423413
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Reviewed-on: https://review.gluster.org/17048
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Zhou Zhengping <johnzzpcrystal@gmail.com>
Reviewed-by: David Spisla <david.spisla@iternity.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* dht: Add readdir-ahead in rebalance graph if parallel-readdir is on (Poornima G, 2017-04-18; 2 files, -0/+45)

Issue: The value of the linkto xattr is generally the name of dht's next subvol; this requires that the next subvol of dht does not change for the lifetime of the volume. But with parallel-readdir enabled, the readdir-ahead loaded below dht is optional. The linkto xattr for the first subvol is:
- parallel-readdir enabled: "<volname>-readdir-head-0"
- plain distribute volume: "<volname>-client-0"
- distribute-replicate volume: "<volname>-afr-0"

The value of the linkto xattr is "<volname>-readdir-head-0" when parallel-readdir is enabled, and "<volname>-client-0" if it is disabled. dht_lookup takes care of healing if it cannot identify which linkto subvol the xattr points to. In dht_lookup_cbk, if the linkto xattr is found to be "<volname>-client-0" and parallel-readdir is enabled, it cannot understand the value "<volname>-client-0", as it expects "<volname>-readdir-head-0". In that case dht_lookup_everywhere is issued, and then the linkto file is unlinked and recreated with the right linkto xattr.

The issue arises when parallel-readdir is enabled and the mount point accesses a file that is currently being migrated. Since the rebalance process doesn't have the parallel-readdir feature, it expects "<volname>-client-0", whereas the mount expects "<volname>-readdir-head-0". Thus at some point either the mount or the rebalance will fail.

Solution: Enable parallel-readdir for rebalance as well, and then do not allow enabling/disabling parallel-readdir if rebalance is in progress.

Change-Id: I241ab966bdd850e667f7768840540546f5289483
BUG: 1436090
Signed-off-by: Poornima G <pgurusid@redhat.com>
Reviewed-on: https://review.gluster.org/17056
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

* cluster/dht: Make rebalance honor min-free-diskSusant Palai2017-04-132-2/+25
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | test: Manually created files of size 1K on a 2-brick (1GB each) setup, added a brick of size 16GB, set min-free-disk to 12GB (so that the first two bricks won't receive any files), and removed one of the original 1GB bricks. Logs from the test: [2017-04-12 08:52:08.196484] W [MSGID: 0] [dht-rebalance.c:895:__dht_check_free_space] 0-test1-dht: Write will cross min-free-disk for file - /tile32 on subvol - test1-client-1. Looking for new subvol. [2017-04-12 08:52:08.196904] I [MSGID: 0] [dht-rebalance.c:925:__dht_check_free_space] 0-test1-dht: new target found - test1-client-2 for file - /tile32 - Post migration we have two files. The new destination (/brick/1) has the data file: [root@vm1 ~]# ll /brick/1/tile32 -rw-r--r--. 2 root root 0 Apr 12 14:22 /brick/1/tile32 - On the old target the linkto file is present, with its linkto xattr pointing to /brick/1: [root@vm1 ~]# ll /tmp/2/tile32 ---------T. 2 root root 1000 Apr 12 14:22 /tmp/2/tile32 [root@vm1 ~]# getfattr -m . -de text /tmp/2/tile32 getfattr: Removing leading '/' from absolute path names security.selinux="unconfined_u:object_r:user_tmp_t:s0" trusted.gfid="����:Aс�#�/'b2" trusted.glusterfs.dht.linkto="test1-client-2" Marking ./tests/features/worm_sh.t as a bad test. The reason: this failure is seen on the master branch as well and has nothing to do with rebalance/remove-brick. BUG: 1441508 Change-Id: I90bae251cda3d957a49cdceda90cd08311a392fb Signed-off-by: Susant Palai <spalai@redhat.com> Reviewed-on: https://review.gluster.org/17034 Smoke: Gluster Build System <jenkins@build.gluster.org> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: Amar Tumballi <amarts@redhat.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
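A sketch of the manual steps described above, assuming volume "test1" and illustrative host/brick paths (not the exact ones used in the test):
    gluster volume add-brick test1 vm1:/brick/1               # add the 16GB brick
    gluster volume set test1 cluster.min-free-disk 12GB       # keep the small bricks out of the picture
    gluster volume remove-brick test1 vm1:/tmp/1 start        # drain one of the 1GB bricks
    gluster volume remove-brick test1 vm1:/tmp/1 status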
* glusterd: Add client details to get-state outputSamikshan Bairagya2017-04-121-16/+17
| | | | | | | | | | | | | | | | | | | | | | | | This commit optionally adds client details corresponding to the locally running bricks to the get-state output. Since getting the client details involves sending RPC requests to the respective local bricks, this is a relatively more costly operation. These client details would be added to the get-state output only if the get-state command is invoked with the 'detail' option. This commit therefore also changes the get-state CLI usage. The modified usage is as follows: # gluster get-state [<daemon>] [[odir </path/to/output/dir/>] \ [file <filename>]] [detail] Change-Id: I42cd4ef160f9e96d55a08a10d32c8ba44e4cd3d8 BUG: 1431183 Signed-off-by: Samikshan Bairagya <samikshan@gmail.com> Reviewed-on: https://review.gluster.org/17003 NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> Smoke: Gluster Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
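For example, a detailed get-state dump could be requested like this (the daemon, directory and file name here are just placeholders following the usage string above):
    gluster get-state glusterd odir /var/run/gluster/ file gstate-with-clients.txt detail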
* worm: allow Self-heal-Daemon to perform some operationsDavid Spisla2017-04-121-0/+75
| | | | | | | | | | | | | | | | | | The Self-Heal Daemon should be allowed to trigger unlink, link, truncate, rename and write operations. The value of frame->root->pid can be used to detect internal (SHD-initiated) operations. Change-Id: I7526148100bef1e2837d69df5c119dc97d91fffd BUG: 1423413 Signed-off-by: David Spisla <david.spisla@iternity.com> Reviewed-on: https://review.gluster.org/16661 Tested-by: jiffin tony Thottan <jthottan@redhat.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Niels de Vos <ndevos@redhat.com> Smoke: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: jiffin tony Thottan <jthottan@redhat.com> Reviewed-by: Amar Tumballi <amarts@redhat.com>
* tests: remove tests/bugs/core/bug-1421590-brick-mux-reuse-ports.tAtin Mukherjee2017-04-121-60/+0
| | | | | | | | | | | | | | | | | | | | | | | | | bug-1421590-brick-mux-reuse-ports.t seems to be a bad test to me, and here is my reasoning: This test tries to check whether the ports are reused or not. When a volume is restarted, by the time glusterd tries to allocate a new port to one of the volume's brick processes there is no guarantee that the older port will be allocated, given that the kernel might take some extra time to free up the port in that time frame. From https://build.gluster.org/job/regression-test-burn-in/2932/console we can clearly see that, post restart of the volume, glusterd allocated ports 49153 & 49155 for brick1 & brick2 respectively, but the test was expecting the ports to match 49155 & 49156, which were allocated before the volume was restarted. Change-Id: Id887bf28445261d4de04fc7502e58057659c9512 BUG: 1441035 Signed-off-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-on: https://review.gluster.org/17033 Smoke: Gluster Build System <jenkins@build.gluster.org> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Amar Tumballi <amarts@redhat.com>
* geo-rep: filter out xtime attribute during getxattrSaravanakumar Arumugam2017-04-113-13/+11
| | | | | | | | | | | | | | | | | | | | | | geo-rep gsyncd's xtime needs to be filtered out irrespective of which process accesses it. This way, we can avoid (unnecessarily) syncing the xtime attribute to the slave, which may raise permission-denied errors. The test case is modified to check for the xtime xattr only on the backend. Change-Id: I2390b703048d5cc747d91fa2ae884dc55de58669 BUG: 1353952 Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com> Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: https://review.gluster.org/14880 Smoke: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Kotresh HR <khiremat@redhat.com> Tested-by: Kotresh HR <khiremat@redhat.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
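A sketch of the backend-only check meant here; the brick and mount paths are illustrative, and the exact xtime key embeds the master volume's UUID, so it is shown as a placeholder:
    # on the brick backend the marker-maintained xtime xattr is expected to be visible
    getfattr -d -m . -e hex /bricks/brick0/dir | grep xtime
    # trusted.glusterfs.<master-vol-uuid>.xtime=0x...
    # through a client mount the same key should now be filtered out of getxattr results
    getfattr -d -m . -e hex /mnt/glusterfs/dir | grep xtime || echo "xtime not exposed on the mount"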
* glusterd: Add validation for options rda-cache-limit rda-request-sizePoornima G2017-04-111-0/+30
| | | | | | | | | | | | | | | | | Currently, when parallel readdir is enabled, setting any junk value for rda-cache-limit and rda-request-size succeeds. This is because of a bug in the special handling of these options. This patch fixes that. Change-Id: I902cd9ac9134c158ab6f8aea4b001254a03547bd BUG: 1439640 Signed-off-by: Poornima G <pgurusid@redhat.com> Reviewed-on: https://review.gluster.org/17008 NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> Smoke: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
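For illustration (the volume name is hypothetical, and the exact accepted units are defined by the option's validation, not restated here):
    gluster volume set patchy performance.parallel-readdir on
    gluster volume set patchy performance.rda-cache-limit 10MB      # a sane value should be accepted
    gluster volume set patchy performance.rda-request-size junk123  # a junk value should now be rejected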
* TIER/TESTS: improving regression test for tierhari gowtham2017-04-107-151/+12
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The test files that were marked as bad tests were checked and updated for CentOS. The tests that had issues were fixed. Tests that aren't needed anymore are removed. REASON: tests/basic/tier/tier-file-create.t This test checks one line after creating a tiered volume (which is done in every tier test). That line is moved in with the other tier tests and the file is deleted. tests/bugs/tier/bug-1286974.t This test checks for tier as a task, and tier has moved from a task to a service as part of the tier-as-a-service patch https://review.gluster.org/#/c/13365/ so it is removed from the bad tests. tests/basic/tier/record-metadata-heat.t This test had a bug, which has been fixed. tests/basic/tier/bug-1214222-directories_missing_after_attach_tier.t tests/basic/tier/fops-during-migration.t tests/basic/tier/tier-snapshot.t tests/basic/tier/tier_lookup_heal.t These tests seem to work fine on CentOS now. Change-Id: I05537f4bbb91584410177ce43543897eff8761a1 BUG: 1421600 Signed-off-by: hari gowtham <hgowtham@redhat.com> Reviewed-on: https://review.gluster.org/16605 Smoke: Gluster Build System <jenkins@build.gluster.org> Tested-by: hari gowtham <hari.gowtham005@gmail.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: Jeff Darcy <jeff@pl.atyp.us> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
* cli/auth : auth.allow and auth.reject does not accept FQDN/host nameMohit Agrawal2017-04-101-0/+37
| | | | | | | | | | | | | | | | | | | | | | | | | | | Problem: When an FQDN is set for "auth.allow/auth.reject" through the gluster CLI, it is not accepted as a host name. Solution: The conditions in verify_host_name and gf_auth need to be updated to accept an FQDN/host name. Fix: Change the condition to accept an FQDN/host name. The patch was verified with the following procedure: 1) Try to set an FQDN for the auth.allow or auth.reject parameter: gluster v set myvol auth.reject <fqdn name> This gives the error "fqdn-name" is not a valid internet-address-list. 2) After applying the patch it does not give any error. 3) To verify auth.allow/reject, try to mount the volume on some client. Change-Id: Ieb76cbb93d43323fd29c7ca04efe3790edb4281b BUG: 1321578 Signed-off-by: Mohit Agrawal <moagrawa@redhat.com> Reviewed-on: https://review.gluster.org/15086 Smoke: Gluster Build System <jenkins@build.gluster.org> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Niels de Vos <ndevos@redhat.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
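A rough illustration of that verification flow (host and volume names are examples, not from the commit):
    gluster volume set myvol auth.allow client1.example.com
    gluster volume set myvol auth.reject client2.example.com
    # a mount from client1.example.com should succeed, one from client2.example.com should be denied
    mount -t glusterfs server1.example.com:/myvol /mnt/glusterfs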
* tests: track EW_RETRIES for debuggingJeff Darcy2017-04-091-0/+4
| | | | | | | | | | | | | | | | | It can often be useful while debugging to know how many times EXPECT_WITHIN had to retry a command before it got the result we were looking for. This patch just adds a variable EW_RETRIES that can be inspected to find this info for the last EXPECT_WITHIN. Change-Id: I1bcb09bb7eb118c3d76c60317ef99e02df6b6ee6 Signed-off-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-on: https://review.gluster.org/16451 Tested-by: Jeff Darcy <jeff@pl.atyp.us> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Smoke: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Raghavendra Talur <rtalur@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
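A small sketch of how this might be used in a .t test (the heal-count check is just an illustrative EXPECT_WITHIN call from the test framework):
    EXPECT_WITHIN $HEAL_TIMEOUT "^0$" get_pending_heal_count $V0
    echo "EXPECT_WITHIN retried $EW_RETRIES times before the value matched"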
* glusterd : Disallow peer detach if snapshot bricks exist on itGaurav Yadav2017-03-311-0/+36
| | | | | | | | | | | | | | | | | | | | | | | | | | | Problem: - Deploy gluster on 2 nodes, one brick each, one replicated volume - Create a snapshot - Lose one server - Add a replacement peer and a new brick with a new IP address - replace-brick the missing brick onto the new server (wait for replication to finish) - peer detach the old server - After doing the above steps, glusterd fails to restart. Solution: With the fix, peer detach reports an error: "N2 is part of existing snapshots. Remove those snapshots before proceeding". This forces the user either to keep that peer or to delete all snapshots first. Change-Id: I3699afb9b2a5f915768b77f885e783bd9b51818c BUG: 1322145 Signed-off-by: Gaurav Yadav <gyadav@redhat.com> Reviewed-on: https://review.gluster.org/16907 Smoke: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* glusterd: hold off volume deletes while still restarting bricksJeff Darcy2017-03-302-0/+96
| | | | | | | | | | | | | | | | We need to do this because modifying the volume/brick tree while glusterd_restart_bricks is still walking it can lead to segfaults. Without waiting, we could accidentally "slip in" and make such a modification while attach_brick has released big_lock between retries. Change-Id: I30ccc4efa8d286aae847250f5d4fb28956a74b03 BUG: 1432542 Signed-off-by: Jeff Darcy <jeff@pl.atyp.us> Reviewed-on: https://review.gluster.org/16927 Smoke: Gluster Build System <jenkins@build.gluster.org> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* protocol : fix auth-allow regressionAtin Mukherjee2017-03-301-0/+39
| | | | | | | | | | | | | | | | | | | | | | | | | | | One of the brick multiplexing patches (commit 1a95fc3) made some changes in the gf_auth() & server_setvolume() functions which broke the auth-allow feature: a mount doesn't succeed even if the client is part of the auth-allow list. This fix does the following: 1. Reintroduce the peer-info data in gf_auth() so that fnmatch has valid input and can decide on the result. 2. The config-params dict should capture key-value pairs for all the bricks in case brick multiplexing is on. In case brick multiplexing isn't enabled, config-params should carry attributes from protocol/server such that all rpc-auth-related attributes stay intact in the dictionary. Change-Id: I007c4c6d78620a896b8858a29459a77de8b52412 BUG: 1433815 Signed-off-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-on: https://review.gluster.org/16920 Tested-by: Jeff Darcy <jeff@pl.atyp.us> Smoke: Gluster Build System <jenkins@build.gluster.org> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Jeff Darcy <jeff@pl.atyp.us> Reviewed-by: MOHIT AGRAWAL <moagrawa@redhat.com>