path: root/tests
Commit message | Author | Age | Files | Lines
* glusterd: ensure volinfo->caps is set to correct value | Sanju Rakonde | 2018-11-05 | 1 file | -0/+9

  With commit febf5ed4848, during the volume create op we set
  volinfo->caps to 0 only if any of the bricks belong to the same node
  and brickinfo->vg[0] is null. Previously, we set volinfo->caps to 0
  when either a brick doesn't belong to the same node or
  brickinfo->vg[0] is null. With this patch, we again set volinfo->caps
  to 0 when either a brick doesn't belong to the same node or
  brickinfo->vg[0] is null (as we did before commit febf5ed4848).

  > BUG: bz#1635820
  > Change-Id: I00a97415786b775fb088ac45566ad52b402f1a49
  > Signed-off-by: Sanju Rakonde <srakonde@redhat.com>

  fixes: bz#1643052
  Change-Id: I00a97415786b775fb088ac45566ad52b402f1a49
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>

* tests: correction in tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t | Sanju Rakonde | 2018-10-25 | 1 file | -9/+18

  Patch https://review.gluster.org/#/c/glusterfs/+/19135/ optimised
  glusterd test cases by clubbing similar test cases into a single one.
  The test case
  https://review.gluster.org/#/c/glusterfs/+/19135/15/tests/bugs/glusterd/bug-1293414-import-brickinfo-uuid.t
  was deleted and added as part of
  tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t.

  In the original test case, we create a volume with two bricks, each on
  a separate node (N1 & N2). From another node in the cluster (N3), we
  try to detach a node which is hosting bricks; it fails. In the new
  test, we created a volume with a single brick on N1, and from another
  node in the cluster we tried to detach N1. We expected peer detach to
  fail, but it succeeded, as the node is hosting all the bricks of the
  volume. The new test case is now changed to cover the original test
  case scenario.

  Please refer to https://bugzilla.redhat.com/show_bug.cgi?id=1642597#c1
  to understand why the new test case is not failing in
  centos-regression.

  > BUG: bz#1642597
  > Change-Id: Ifda12b5677143095f263fbb97a6808573f513234
  > Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
  (cherry picked from commit 0ca6773eaf5aeb507ebc72d2c2f61902eeff414c)

  fixes: bz#1643075
  Change-Id: Ifda12b5677143095f263fbb97a6808573f513234
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>

* gfapi: Bug fixes in leases processing code-path | Soumya Koduri | 2018-10-22 | 1 file | -2/+2

  This patch fixes the below issues in the gfapi lease code-path:

  * 'glfs_setfsleasid' should allow NULL input to be able to reset the
    leaseid
  * Applications should be allowed to (un)register for upcall
    notifications of type GLFS_EVENT_LEASE_RECALL
  * APIs added to read the contents of the GLFS_EVENT_LEASE_RECALL
    argument, which is of type "struct glfs_upcall_lease"

  This is a backport of the below mainline patch -
  https://review.gluster.org/#/c/glusterfs/+/21391

  Change-Id: I3320ddf235cc82fad561e13b9457ebd64db6c76b
  updates: #350
  Signed-off-by: Soumya Koduri <skoduri@redhat.com>

* tests: check for shd up status in bug-1637802-arbiter-stale-data-heal-lock.t | Ravishankar N | 2018-10-22 | 1 file | -0/+1

  Problem: https://review.gluster.org/#/c/glusterfs/+/21427/ seems to be
  failing this .t spuriously. In one of the failure logs, I see:

  22:05:44 Launching heal operation to perform index self heal on volume patchy has been unsuccessful:
  22:05:44 Self-heal daemon is not running. Check self-heal daemon log file.
  22:05:44 not ok 20 , LINENUM:38

  In the glusterd log:
  [2018-10-18 22:05:44.298832] E [MSGID: 106301] [glusterd-syncop.c:1352:gd_stage_op_phase] 0-management: Staging of operation 'Volume Heal' failed on localhost : Self-heal daemon is not running. Check self-heal daemon log file

  But the tests which precede this one check, via a statedump, whether
  the shd is connected to the bricks, and they have succeeded and even
  started healing. From glustershd.log:

  [2018-10-18 22:05:40.975268] I [MSGID: 108026] [afr-self-heal-common.c:1732:afr_log_selfheal] 0-patchy-replicate-0: Completed data selfheal on 3b83d2dd-4cf2-4ea3-a33e-4275be40f440. sources=[0] 1 sinks=2

  So the only reason I can see for launching heal via the cli to fail is
  a race where the shd has been spawned but glusterd has not yet updated
  in-memory that it is up, and hence fails the CLI.

  Fix: Check for shd up status before launching heal via the CLI.

  Change-Id: Ic88abf14ad3d51c89cb438db601fae4df179e8f4
  fixes: bz#1641761
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  (cherry picked from commit 3dea105556130abd4da0fd3f8f2c523ac52398d1)

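  As a sketch, the added check is of this form (assuming the
  glustershd_up_status helper from the test library and the usual .t
  macros; exact names may differ):

    # wait until glusterd reports the self-heal daemon as up ...
    EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" glustershd_up_status
    # ... only then launch the index heal via the CLI
    TEST $CLI volume heal $V0
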
* afr: prevent winding inodelks twice for arbiter volumes | Ravishankar N | 2018-10-10 | 1 file | -0/+44

  Backport of https://review.gluster.org/#/c/glusterfs/+/21380/

  Problem: In an arbiter volume, if there is a pending data heal of a
  file only on the arbiter brick, self-heal takes inodelks twice due to
  a code bug but unlocks only once, leaving behind a stale lock on the
  brick. This causes the next write to the file to hang.

  Fix: Fix the code bug to take the lock only once. This bug was
  introduced in master with commit
  eb472d82a083883335bc494b87ea175ac43471ff.

  Thanks to Pranith Kumar K <pkarampu@redhat.com> for finding the RCA.

  fixes: bz#1637953
  Change-Id: I15ad969e10a6a3c4bd255e2948b6be6dcddc61e1
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>

* afr: fix incorrect reporting of directory split-brain | Ravishankar N | 2018-10-05 | 2 files | -1/+63

  Backport of https://review.gluster.org/#/c/glusterfs/+/21135/

  Problem: When a directory has dirty xattrs due to failed post-ops, or
  when replace/reset brick is performed, AFR does a conservative merge
  as expected, but heal-info reports it as split-brain because there are
  no clear sources.

  Fix: Modify the pending flag to contain information about pending
  heals and split-brains. For directories, if the split-brain flag is
  not set, just show them as needing heal and not as being in
  split-brain.

  Change-Id: I09ef821f6887c87d315ae99e6b1de05103cd9383
  fixes: bz#1633634
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>

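  For context, the report in question comes from the heal-info CLI;
  illustrative invocations (volume name is a placeholder):

    # directories needing only a conservative merge should show up here ...
    $ gluster volume heal <volname> info
    # ... and no longer be listed here
    $ gluster volume heal <volname> info split-brain
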
* cluster/afr: Delegate name-heal when possible | Pranith Kumar K | 2018-09-21 | 2 files | -0/+120

  Problem: When name self-heal is triggered on the mount, it blocks
  lookup until the name self-heal completes. That can lead to hangs when
  a lot of clients are accessing a directory which needs name heal and
  all of them trigger heals while waiting for the other clients to
  complete the heal.

  Fix: When a name heal is needed, but a quorum number of names have the
  file and pending xattrs exist on the parent, it is better to delegate
  the heal to the SHD, which will complete it as part of the entry heal
  of the parent directory. We could do the same when a quorum number of
  names are not present, but we don't have any known use case where that
  is a frequent occurrence, so that part is not being changed at the
  moment. When there is a gfid mismatch or a missing gfid, it is
  important to complete the heal so that the next rename doesn't assume
  everything is fine and perform a rename, etc.

  fixes bz#1625575
  Change-Id: I8b002c85dffc6eb6f2833e742684a233daefeb2c
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>

* cluster/afr: Delegate metadata heal with pending xattrs to SHD | Pranith Kumar K | 2018-09-21 | 2 files | -13/+25

  Problem: When metadata self-heal is triggered on the mount, it blocks
  lookup until the metadata self-heal completes. That can lead to hangs
  when a lot of clients are accessing a directory which needs metadata
  heal and all of them trigger heals while waiting for the other clients
  to complete the heal.

  Fix: Only when the heal is needed but the pending xattrs are not set,
  trigger the metadata heal that could block lookup. This is the only
  case where, without a heal, different clients may serve different
  metadata, which should be avoided.

  Updates bz#1625575
  Change-Id: I6089e9fda0770a83fb287941b229c882711f4e66
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>

* libgfchangelog: Fix changelog history API | Kotresh HR | 2018-09-21 | 3 files | -0/+279

  Problem: If the requested start and end times don't fall into the
  first HTIME file, the history API fails even though continuous
  changelogs are available for the requested range in other HTIME files.
  This is induced by changelog disable/enable, which creates a fresh
  HTIME index file.

  Cause and Analysis: Each HTIME index file represents the availability
  of continuous changelogs. If changelog is disabled and enabled, a new
  HTIME index file is created, representing the non-availability of
  continuous changelogs. So as long as the requested start and end fall
  into a single HTIME index file, and not across files, the history API
  should succeed. But the history API checks for changelogs only in the
  first HTIME index file and errors out if they are not available.

  Fix: Check all HTIME index files for the availability of continuous
  changelogs for the requested range.

  Backport of:
  > Patch: https://review.gluster.org/21016/
  > BUG: bz#1622549
  > Change-Id: I80eeceb5afbd1b89f86a9dc4c320e161907d3559
  > Signed-off-by: Kotresh HR <khiremat@redhat.com>
  (cherry picked from commit 35aa67001c8fac99b040fbc61f36ef4f1b1590ac)

  fixes: bz#1630141
  Change-Id: I80eeceb5afbd1b89f86a9dc4c320e161907d3559
  Signed-off-by: Kotresh HR <khiremat@redhat.com>

* geo-rep: Fix issues related to config set | Kotresh HR | 2018-09-21 | 3 files | -49/+235

  1. The '--ignore-missing-args' option for rsync was not being used
     even though the rsync version is greater than 3.1.0. Fixed the
     same.
  2. The '--existing' option for rsync was also not being used. Fixed
     the same.
  3. geo-rep config fails to set rsync-options, as the value contains
     '--'. Interestingly, python argparse treats a value with '--'
     (e.g., --ignore-missing-args) as an option, but when it is passed
     as --value=--ignore-missing-args, it succeeds. Fixed the same.

  Backport of:
  > Patch: https://review.gluster.org/21191
  > Change-Id: Iaeb838acaff1c2920fee9c7f920c99edce13a0a1
  > Signed-off-by: Kotresh HR <khiremat@redhat.com>
  > BUG: 1629561

  Change-Id: Iaeb838acaff1c2920fee9c7f920c99edce13a0a1
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  fixes: bz#1630140

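  A hedged example of the kind of config set this fixes (master volume
  and slave names are placeholders):

    $ gluster volume geo-replication master-vol slave-host::slave-vol \
          config rsync-options "--ignore-missing-args --existing"
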
* geo-rep/hook-script: Fix ssh/scp options | Kotresh HR | 2018-09-21 | 4 files | -13/+72

  Always use ssh and scp with the "-oPasswordAuthentication=no" and
  "-oStrictHostKeyChecking=no" options. Otherwise they might hang the
  post script, leading to geo-rep setup failure.

  Also increased the geo-rep timeout. Occasionally it takes more time to
  reach Active/Passive status, especially on the first start after
  create.

  Backport of:
  > Patch: https://review.gluster.org/20601
  > BUG: bz#1610405
  > Change-Id: I9560d64dbe0edf5db73446a9fc97dda19b88d233
  > Signed-off-by: Kotresh HR <khiremat@redhat.com>

  fixes: bz#1630144
  Change-Id: I9560d64dbe0edf5db73446a9fc97dda19b88d233
  Signed-off-by: Kotresh HR <khiremat@redhat.com>

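  Illustrative invocations with the enforced options (host and paths are
  placeholders):

    $ ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no root@slave-host hostname
    $ scp -oPasswordAuthentication=no -oStrictHostKeyChecking=no ./secret.pem.pub root@slave-host:/tmp/
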
* io-stats: dump io-stats info in /var/run/gluster | Amar Tumballi | 2018-09-06 | 1 file | -6/+6

  It wouldn't make sense to allow the iostats file to be written in
  *any* directory. While the formatting makes sure we try to append the
  io-stats name to the file, so the chance of overwriting an existing
  file is slim, in any case it makes sense to restrict dumping to one
  directory.

  Below are sample commands and the files created for the corresponding
  values:

  $ setfattr -n trusted.io-stats-dump -v file-for-dump $M0
  In this case, the file would be /var/run/gluster/file-for-dump

  $ setfattr -n trusted.io-stats-dump -v /dir1/dir2/file-for-dump $M0
  In this case, the dump file is /var/run/gluster/dir1-dir2-file-for-dump

  Note that the value passed for this virtual xattr is treated as a
  file, and even if the value has '/' in it, it is changed to '-' for
  sanity.

  Fixes: bz#1625106
  Change-Id: Id9ae6a40a190b8937c51662e6e1c2a0f6c86a0e0
  Signed-off-by: Amar Tumballi <amarts@redhat.com>

* storage/posix: Increment trusted.pgfid in posix_mknod | N Balachandran | 2018-08-29 | 1 file | -0/+57

  The value of trusted.pgfid.xx was always set to 1 in posix_mknod. This
  is incorrect if posix_mknod calls posix_create_link_if_gfid_exists.

  Change-Id: Ibe87ca6f155846b9a7c7abbfb1eb8b6a99a5eb68
  fixes: bz#1623317
  Signed-off-by: N Balachandran <nbalacha@redhat.com>

* afr: heal gfids when file is not present on all bricks | Ravishankar N | 2018-07-09 | 1 file | -0/+128

  Commit 20fa80057eb430fd72b4fa31b9b65598b8ec1265 introduced a
  regression wherein if a file is present on only one brick of a replica
  *and* doesn't have a gfid associated with it, it doesn't get healed
  upon the next lookup from the client. Fix it.

  Change-Id: I7d1111dcb45b1b8b8340a7d02558f05df70aa599
  fixes: bz#1597117
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  (cherry picked from commit eb472d82a083883335bc494b87ea175ac43471ff)

* ctime: Fix self heal of symlink in EC volume | Kotresh HR | 2018-07-02 | 1 file | -0/+69

  [1] states that since IEEE Std 1003.1-2001 does not require any
  association of file times with symbolic links, there is no requirement
  that file times be updated by readlink(). A stat on a symlink file was
  generating a readlink fop on one of the subvolumes of the EC set,
  which in turn updated atime on that subvolume. This caused the mdata
  xattr to differ across the EC set, and hence self-heal fails. So,
  based on [1], atime is no longer updated by the readlink fop.

  [1] http://pubs.opengroup.org/onlinepubs/009695399/functions/readlink.html

  Backport of:
  > Patch: https://review.gluster.org/#/c/20311/
  > BUG: 1592509
  > Change-Id: I08bd3ca3bdb222bd18160b1aa58fc2f7630c8083
  > Signed-off-by: Kotresh HR <khiremat@redhat.com>
  (cherry picked from commit c097a7894d458e33a41f6db6092677108ef30fec)

  fixes: bz#1593536
  Change-Id: I08bd3ca3bdb222bd18160b1aa58fc2f7630c8083
  Signed-off-by: Kotresh HR <khiremat@redhat.com>

* storage/posix: Fix posix_symlinks_match() | Pranith Kumar K | 2018-07-02 | 1 file | -5/+10

  1) snprintf into linkname_expected should happen with PATH_MAX
  2) the comparison with linkname_actual should happen against the
     complete string linkname_expected

  fixes bz#1595524
  Change-Id: Ic3b3c362dc6c69c046b9a13e031989be47ecff14
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  (cherry picked from commit 3099d3e6ba81d3e1abf37385b13aabf5837b9c5e)

* storage/posix: Handle ENOSPC correctly in zero_fill | Pranith Kumar K | 2018-06-14 | 2 files | -0/+99

  Change-Id: Icc521d86cc510f88b67d334b346095713899087a
  fixes: bz#1591185
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  (cherry picked from commit 6ef91480f9e75f63100585bfd19694deb0c2457b)

* libgfapi: Fix lookup on root | Kotresh HR | 2018-05-29 | 2 files | -0/+92

  Lookup on root was sending "/" as the path. This breaks the basename
  calculation in loc_copy, and hence lookup on root was failing whenever
  loc_copy was involved in the stack. With ctime, a first lookup on root
  initiates a metadata self-heal because the ctime xattr is not the same
  on all afr subvolumes. This results in a loc_copy and hence the
  failure of the lookup. The fix is to send the path as "." for root.

  Backport of:
  > Patch: https://review.gluster.org/#/c/20086/
  > BUG: 1582516
  > Change-Id: Iafe4b99f249a4f5034ad34c1d30590de0e35aa0d
  (cherry picked from commit 3d38e4e47f129bdb36c3fbbd481dabe4ba4413d6)

  fixes: bz#1583016
  Change-Id: Iafe4b99f249a4f5034ad34c1d30590de0e35aa0d
  Signed-off-by: Kotresh HR <khiremat@redhat.com>

* afr: fix bug-1363721.t failure | Ravishankar N | 2018-05-25 | 1 file | -2/+8

  Problem: In the .t, when the only good brick was brought down, writes
  on the fd were still succeeding on the bad bricks. The inflight
  split-brain check was marking the write as a failure, but since the
  write succeeded on all the bad bricks, afr_txn_nothing_failed() was
  set to true and we were unwinding the writev with success to DHT, then
  catching the failure in the post-op in the background.

  Fix: Don't wind the FOP phase if the write_subvol (which is populated
  with readable subvols obtained in the pre-op cbk) does not have at
  least one good brick which was up when the transaction started.

  Note: This fix is not related to brick multiplexing. I ran the .t 10
  times with this fix and brick-mux enabled without any failures.

  Change-Id: I915c9c366aa32cd342b1565827ca2d83cb02ae85
  updates: bz#1581548
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  (cherry picked from commit 985a1d15db910e012ddc1dcdc2e333cc28a9968b)

* Revert "gluster: Sometimes Brick process is crashed at the time of stopping ↵Mohit Agrawal2018-05-252-30/+3
| | | | | | | | brick" Updates: bz#1582286 This reverts commit 0043c63f70776444f69667a4ef9596217ecb42b7. Change-Id: Iab3b4f4a54e122c589e515add93c6effc966b3e0
* test: Marking test case bug-1309462.t as bad | ShyamsundarR | 2018-05-23 | 1 file | -1/+2

  Details in the bug.

  Updates: bz#1581745
  Change-Id: Id984e10b60daf274d5510e3ccbf7abf0cb19f368
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
  (cherry picked from commit 92839c5a4a6c87973e4f6f2f96b359e9c2a0f5c0)

* make posix return errors when gfid2path key is absent | Raghavendra Bhat | 2018-05-22 | 1 file | -0/+70

  Change-Id: I3a8d452d00560dac5e0b7ff0b1835d1f20a59f91
  updates: bz#1580540
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  (cherry picked from commit c2cf3f686f3ea0efd936d2eafc404fc9d2e0acc7)

* core: remove experimental xlators and associated tests | Kaleb S. KEITHLEY | 2018-05-09 | 5 files | -252/+0

  Experimental xlators removed from 4.1.

  Signed-off-by: ShyamsundarR <srangana@redhat.com>
  Change-Id: I34419ce22ca09b7626b8f9382c377a614fd9fed8
  Updates: bz#1575386

* Revert "gfapi: return pre/post attributes from glfs_pread/pwrite"ShyamsundarR2018-05-083-3/+3
| | | | | | | | | | | | | | | | | | This reverts commit d01f7244e9d9f7e3ef84e0ba7b48ef1b1b09d809. This is being reverted as the API signatures should adapt to a statx like structure, and also all APIs that need to return pre/post attrs are not complete. As a result, instead of fixing up part of the APIs and then refixing the same in a later release, removing these set of fixes from the branch Additionally fixed up posix-entry-ops.c which was using the new syncop signature Updates: bz#1575386 Change-Id: I35222dadc4a2e97010bc1e6b97b6f83583c311f6
* Revert "gfapi: return pre/post attributes from glfs_ftruncate"ShyamsundarR2018-05-081-1/+1
| | | | | | | | | | | | | | | | | | This reverts commit 248152767b0599986bbb6bb35fc27197f6be6964. This is being reverted as the API signatures should adapt to a statx like structure, and also all APIs that need to return pre/post attrs are not complete. As a result, instead of fixing up part of the APIs and then refixing the same in a later release, removing these set of fixes from the branch. Additionally fixed up cloudsync.c code that was using the new syncop signature. Updates: bz#1575386 Change-Id: Idb59d20666c0d7b0c83e7fdc31dd68b8c7db9550
* Revert "gfapi: return pre/post attributes at callback for glfs api"ShyamsundarR2018-05-082-4/+2
| | | | | | | | | | | | | | | This reverts commit 384562b294e9a7847403961e878a4daa0fff33eb. This is being reverted as the API signatures should adapt to a statx like structure, and also all APIs that need to return pre/post attrs are not complete. As a result, instead of fixing up part of the APIs and then refixing the same in a later release, removing these set of fixes from the branch. Updates: bz#1575386 Change-Id: Ia071797bec1e2ac085818e3909771f9ddeac6676
* tests: Add lease test case | Poornima G | 2018-05-05 | 2 files | -2/+704

  Updates: #350
  Change-Id: Iee78ab4baf48c481de1e13ff2b0393bc106b7d0e
  Signed-off-by: Poornima G <pgurusid@redhat.com>
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>

* use awk to get a specific line from the output instead of cut | Raghavendra Bhat | 2018-05-04 | 1 file | -7/+1

  cut -d$'\n' does not separate the xattrs shown as part of the getfattr
  output. Hence use awk to get the nth line of the getfattr output for
  the nth iteration of the for loop.

  Change-Id: I1a96cd3f72f4f407f9a783375f78d9a69d5d3885
  fixes: bz#1574606
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>

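  A sketch of the pattern (illustrative; the file path and loop variable
  are placeholders):

    # print only the i-th line of the getfattr output
    $ getfattr -d -m . -e hex /path/to/file | awk -v i="$i" 'NR == i'
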
* cli/glusterd: Add warning message in cli for user to check force-migration config for remove-brick operation | Susant Palai | 2018-05-03 | 1 file | -1/+1

  The cli will take input from the user before starting the
  "remove-brick" start operation. The message/confirmation looks like
  the following:

  <Running remove-brick with cluster.force-migration enabled can result
  in data corruption. It is safer to disable this option so that files
  that receive writes during migration are not migrated. Files that are
  not migrated can then be manually copied after the remove-brick commit
  operation. Do you want to continue with your current
  cluster.force-migration settings? (y/n)>

  The question for COMMIT_FORCE is also changed.

  Fixes: bz#1572586
  Change-Id: Ifdb6b108a646f50339dd196d6e65962864635139
  Signed-off-by: Susant Palai <spalai@redhat.com>

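  The prompt is shown on the start phase; an assumed example invocation
  (volume and brick are placeholders):

    $ gluster volume remove-brick <volname> server1:/bricks/b1 start
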
* core/various: python3 compat, prepare for python2 -> python3 | Kaleb S. KEITHLEY | 2018-05-02 | 6 files | -21/+27

  See https://review.gluster.org/#/c/19788/.

  Use the print function from __future__.

  Change-Id: If5075d8d9ca9641058fbc71df8a52aa35804cda4
  updates: #411
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>

* gluster: Sometimes Brick process is crashed at the time of stopping brick | Mohit Agrawal | 2018-04-19 | 2 files | -3/+30

  Problem: Sometimes the brick process crashes at the time of stopping a
  brick while brick mux is enabled.

  Solution: The brick process was crashing because the rpc connection
  was not being cleaned up properly while brick mux is enabled. With
  this patch, after sending the GF_EVENT_CLEANUP notification to the
  xlator (server), we wait until all rpc client connections for that
  specific xlator are destroyed. Once the rpc connections for all
  associated clients of that brick are destroyed in server_rpc_notify,
  xlator_mem_cleanup is called for the brick xlator as well as all child
  xlators. To avoid races at the time of cleanup, two new flags are
  introduced in each xlator: cleanup_starting and call_cleanup.

  BUG: 1544090
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>

  Note: All test cases were run in a separate build
  (https://review.gluster.org/#/c/19700/) with the same patch after
  forcefully enabling brick mux; all test cases passed.

  Change-Id: Ic4ab9c128df282d146cf1135640281fcb31997bf
  updates: bz#1544090

* glusterd: volume inode/fd status broken with brick mux | hari gowtham | 2018-04-19 | 1 file | -0/+12

  Problem: The values for inode/fd were populated from the ctx received
  from the server xlator. Without brick mux, every brick process served
  a single brick of the volume, so searching for the server xlator and
  populating from it worked. With brick mux, a number of bricks can be
  confined to a single process. These bricks can be from different
  volumes too (if the max-bricks-per-process option is used). If they
  are from different volumes, using the server xlator to populate causes
  problems.

  Fix: Use the brick to validate and populate the inode/fd status.

  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Change-Id: I2543fa5397ea095f8338b518460037bba3dfdbfd
  fixes: bz#1566067

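  The status in question is surfaced by the volume-status CLI;
  illustrative commands (volume name is a placeholder):

    $ gluster volume status <volname> inode
    $ gluster volume status <volname> fd
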
* afr: fixes to afr-eager locking | Ravishankar N | 2018-04-18 | 1 file | -0/+24

  1. If the pre-op fails on all bricks, set lock->release to true in
     afr_handle_lock_acquire_failure so that the GF_ASSERT in
     afr_unlock() does not crash.
  2. Added a missing 'return' after handling the pre-op failure in
     afr_transaction_perform_fop(), fixing a use-after-free issue.

  Change-Id: If0627a9124cb5d6405037cab3f17f8325eed2d83
  fixes: bz#1561129
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>

* core/build/various: python3 compat, prepare for python2 -> python3 | Kaleb S. KEITHLEY | 2018-04-12 | 8 files | -8/+8

  Note 1) We're not supposed to be using #!/usr/bin/env python, see
  https://fedoraproject.org/wiki/Packaging:Guidelines?rd=Packaging/Guidelines#Shebang_lines

  Note 2) We're also not supposed to be using #!/usr/bin/python, see
  https://fedoraproject.org/wiki/Changes/Avoid_usr_bin_python_in_RPM_Build#Quick_Opt-Out

  The previous patch (https://review.gluster.org/19767) tried to do too
  much in one patch, so it was abandoned. This patch does two things:

  1) minor cleanup of configure(.ac) to explicitly use python2
  2) change all the shebang lines to #!/usr/bin/python2, and add them
     where they were missing, based on warnings emitted during rpmbuild.

  In a follow-up patch python2 will eventually be changed to python3.
  Before that, python2-isms (e.g. print, string.join(), etc.) need to be
  converted to python3. Some of those can be rewritten in
  version-agnostic python. E.g. print statements become print() with
  "from __future__ import print_function". The python 2to3 utility will
  be used for some of those. Aravinda has also given guidance in the
  comments of the first patch for the changes.

  updates: #411
  Change-Id: I471730962b2526022115a1fc33629fb078b74338
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>

* experimental/cloudsync: Download xlator for archival feature | Susant Palai | 2018-04-10 | 1 file | -0/+22

  spec-files: https://review.gluster.org/#/c/18854/

  Overview:
  * Cloudsync maintains three file states in its inode-ctx, i.e.
    1 - LOCAL, 2 - REMOTE, 3 - DOWNLOADING.
  * A data-modifying fop is allowed only if the state is LOCAL. If the
    state is REMOTE or DOWNLOADING, the client will download the file or
    wait for the download initiated by another client to finish.
  * Multiple downloads and uploads from different clients are
    synchronized by inodelk.
  * In POSIX a state check is done (part of a different commit) before
    allowing the fop to continue. If the state is remote/downloading,
    the fop is unwound with EREMOTE. The client will then download the
    file and continue with the fop again.

  Basic algo for a fop (say the write fop):
  - If LOCAL -> resume fop
  - If REMOTE ->
      - INODELK
      - STAT (this gets the state and heals the state if needed)
      - DOWNLOAD
      - resume fop

  Note:
  * Developers will need to write plugins for download, based on the
    remote store they choose. In phase-1, support will be added for one
    remote store per volume. In the future, more options for multiple
    remote stores will be explored.

  TODOs:
  - Implement stat/lookup/readdirp to return size info from the xattr
  - Make plugins configurable
  - Implement the unlink fop
  - Add metrics collection
  - Add sharding support

  Design contributions:
  Aravinda V K <avishwan@redhat.com>
  Amar Tumballi <amarts@redhat.com>
  Ram Ankireddypalle <areddy@commvault.com>
  Susant Palai <spalai@redhat.com>

  updates: #387
  Change-Id: Iddf711ee7ab4e946ae3e472ff62791a7b85e6d4b
  Signed-off-by: Susant Palai <spalai@redhat.com>

* features/index: Choose different base file on EMLINK error | Pranith Kumar K | 2018-04-06 | 1 file | -0/+52

  Change-Id: I4648816af908539efdc2528608aa2ebf7f0d0e2f
  fixes: bz#1559004
  BUG: 1559004
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>

* glusterd: mark port_registered to true for all running bricks with brick mux | Atin Mukherjee | 2018-04-05 | 1 file | -2/+13

  glusterd maintains a boolean flag, 'port_registered', which is used to
  determine if a brick has completed its portmap sign-in process. This
  flag is (re)set on the pmap_signin and pmap_signout events. In case of
  brick multiplexing, this flag is the identifier to determine if the
  very first brick with which the process was spawned has completed its
  sign-in process. However, in case of a glusterd restart, when a brick
  is already identified as running, glusterd does a pmap_registry_bind
  to ensure its portmap table is updated, but this flag isn't. That is
  fine in the non-brick-multiplex case, but it causes an issue if the
  very first brick which came as part of the process is replaced: the
  subsequent brick attach will then fail.

  One way to validate this is to create and start a volume, remove the
  first brick, and then add-brick a new one. The add-brick operation
  will take a very long time, and after it the volume status will show
  the status of all bricks apart from the new brick as down.

  Solution: Set brickinfo->port_registered to true for all running
  bricks when brick multiplexing is enabled.

  Change-Id: Ib0662d99d0fa66b1538947fd96b43f1cbc04e4ff
  Fixes: bz#1560957
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>

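  The reproducer described above, as an assumed command sequence (names
  and paths are placeholders):

    $ gluster volume create <volname> node1:/bricks/b1 node2:/bricks/b2
    $ gluster volume start <volname>
    $ gluster volume remove-brick <volname> node1:/bricks/b1 force
    # before the fix, with brick mux on, this attach takes very long
    $ gluster volume add-brick <volname> node1:/bricks/b1-new
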
* cluster/dht: enable lookup-optimize by default | N Balachandran | 2018-04-04 | 1 file | -1/+1

  Lookup-optimize has been shown to improve create performance. The code
  has been in the project for several years and is considered stable.
  Enabling this by default in order to test it in the upstream
  regression runs.

  Change-Id: Iab792979ee34f0af4713931e0b5b399c23f65313
  updates: bz#1557435
  BUG: 1557435
  Signed-off-by: N Balachandran <nbalacha@redhat.com>

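  Before this change the option had to be enabled per volume; a hedged
  example of toggling it explicitly (volume name is a placeholder):

    # now "on" by default; can still be controlled per volume
    $ gluster volume set <volname> cluster.lookup-optimize on
    $ gluster volume set <volname> cluster.lookup-optimize off
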
* Revert "glusterd: handling brick termination in brick-mux"Sanju Rakonde2018-03-291-32/+0
| | | | | | | | | | | | | This reverts commit a60fc2ddc03134fb23c5ed5c0bcb195e1649416b. This commit was causing multiple tests to time out when brick multiplexing is enabled. With further debugging, it's found that even though the volume stop transaction is converted into mgmt_v3 to allow the remote nodes to follow the synctask framework to process the command, there are other callers of glusterd_brick_stop () which are not synctask based. Change-Id: I7aee687abc6bfeaa70c7447031f55ed4ccd64693 updates: bz#1545048
* afr: add new value for read-hash-mode volume option | Ravishankar N | 2018-03-29 | 1 file | -0/+56

  Updates: #363

  This new value (3) will try to wind read requests to the child of AFR
  having the least amount of pending requests in its queue.

  Change-Id: If6bda2aac9bf7aec3fc39622f78659313c4b6508
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>

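  A hedged example of selecting the new policy (volume name is a
  placeholder):

    # 3: pick the replica child with the fewest pending requests
    $ gluster volume set <volname> cluster.read-hash-mode 3
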
* cluster/ec: send list-node-uuids request to all subvolumes | Xavi Hernandez | 2018-03-28 | 1 file | -0/+1

  The xattr trusted.glusterfs.list-node-uuids was only sent to a single
  subvolume. This was returning null uuids from the other subvolumes, as
  if they were down. This fix forces that xattr to be requested from all
  subvolumes.

  Change-Id: If62eb39a6857258923ba625e153d4ad79018ea2f
  fixes: bz#1561406
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>

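  The xattr is a virtual one, read from the mount point; an assumed
  illustrative query (mount path is a placeholder):

    $ getfattr -e text -n trusted.glusterfs.list-node-uuids /mnt/glusterfs
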
* glusterd: handling brick termination in brick-mux | Sanju Rakonde | 2018-03-28 | 1 file | -0/+32

  Problem: There's a race between the last glusterfs_handle_terminate()
  response sent to glusterd and the kill that happens immediately if the
  terminated brick is the last brick.

  Solution: When it is the last brick for the brick process, instead of
  glusterfsd killing itself, glusterd will kill the process in case of
  brick multiplexing. The gf_attach utility is changed accordingly.

  Change-Id: I386c19ca592536daa71294a13d9fc89a26d7e8c0
  fixes: bz#1545048
  BUG: 1545048
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>

* tests: fix nl-cache.t failure | Atin Mukherjee | 2018-03-26 | 1 file | -1/+1

  Commit fef9293 changed network.inode-lru-limit from 50000 to 200000 in
  the nl-cache group profile, but the test wasn't changed to reflect it.

  Change-Id: Ibb5fb0a387f160f6b726246b161a9a7b33135755
  fixes: bz#1560589
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>

* md-cache: fix ./tests/basic/md-cache/bug-1418249.t | Susant Palai | 2018-03-26 | 1 file | -1/+1

  The inode table size is currently set to 200000. Hence the testcase,
  which was expecting the old value 50000, needs to change.

  Change-Id: I8e44b1d0a2da1e8100bebd25f48bb36e2897b4f8
  fixes: bz#1560393
  Signed-off-by: Susant Palai <spalai@redhat.com>

* python: Remove all uses of find_library. Fixes #1450593 | Niklas Hambüchen | 2018-03-24 | 2 files | -11/+4

  `find_library()` doesn't consider LD_LIBRARY_PATH on Python < 3.6.

  Change-Id: Iee26085cb5d14061001f19f032c2664d69a378a8
  BUG: 1450593
  Signed-off-by: Niklas Hambüchen <mail@nh2.me>

* cluster/afr: Switch to active-fd-count for open-fd checks | Pranith Kumar K | 2018-03-21 | 1 file | -0/+20

  BUG: 1557932
  Change-Id: I3783e41b3812267bc10c0d05d062a31396ce135b
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>

* storage/posix: Add active-fd-count option in gluster | Pranith Kumar K | 2018-03-21 | 1 file | -1/+13

  Problem: When dd runs on a sharded replicate volume, all the writes on
  the shards happen through an anon-fd. When the writes don't come
  quickly enough, the old anon-fd closes and a new fd gets created to
  serve the new writes. open-fd-count is decremented only after the fd
  is closed, as part of fd_destroy(). So even when one fd is on the way
  to being closed, a new fd is created, and during this short period it
  appears as though there are multiple fds opened on the file. AFR
  thinks another application opened the same file and switches off
  eager-lock, leading to extra latency.

  Fix: Have a different counter called active-fd whose life cycle starts
  at fd_bind() and ends just before fd_destroy().

  BUG: 1557932
  Change-Id: I2e221f6030feeedf29fbb3bd6554673b8a5b9c94
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>

* cluster/ec: Add test cases for stripe-cache option | Ashish Pandey | 2018-03-20 | 1 file | -0/+227

  Change-Id: I1508a336a7a927b389a19815ef57001cdf29b109
  BUG: 1558074
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>

* cluster/afr: Make AFR eager-locking similar to EC | Pranith Kumar K | 2018-03-14 | 1 file | -36/+0

  Problem:
  1) AFR's eager-lock only works for data transactions.
  2) When there are conflicting writes, a write with a conflicting
     region initiates an unlock of the eager-lock, leading to extra
     pre-ops and post-ops on the file. When the eager-lock goes off, it
     leads to extra fsyncs for random-write workloads in AFR.

  Solution (modeled after EC): In EC, when there is a conflicting write,
  it waits for the current write to complete before it winds the
  conflicting write. This leads to better utilization of network and
  disk, because we will not be doing extra xattrops, FSYNCs and
  inodelk/unlocks. Moved the fd-based counters to inode-based counters.
  I tried to model the solution on EC's locking, but it is not identical
  in AFR because we had to keep backward compatibility.

  Lifecycle of the lock:
  The first transaction is added to the inode->owners list and an
  inodelk is sent on the wire. All subsequent transactions are put in
  the inode->waiters list until the first transaction completes the
  inodelk and the [f]xattrop completely. Once the [f]xattrop also
  completes, each request in the inode->waiters list is checked for
  conflicts with any of the existing locks in the inode->owners list;
  if there is no conflict, it is added to the inode->owners list and
  resumed to do its transaction. When these transactions complete the
  fop phase, they are moved to the inode->post_op list and resume the
  transactions that were paused because of conflicts. Post-op and unlock
  are not issued on the wire until that is the last transaction on that
  inode. The last transaction, when it has to perform the post-op, can
  choose to sleep for the delayed-post-op-secs value. During that time,
  if any other transaction comes, it wakes up the sleeping transaction,
  takes over the ownership of the lock, and the cycle continues. If
  delayed-post-op-secs expires, the timer thread wakes up the sleeping
  transaction, which sets lock->release to true and starts doing the
  post-op and then the unlock. During this time, if any other
  transactions come, they are put in the inode->frozen list. Once the
  previous unlock completes, the frozen list is moved to the waiters
  list, the first element of the waiters list is moved to the owners
  list, the lock is attempted, and the cycle continues. This is the
  general idea. There is logic at the time of delaying, at the time of a
  new transaction, and in the flush fop, to wake up existing sleeping
  transactions or to choose whether to delay a transaction, etc., which
  is subject to change based on future enhancements.

  Fixes: #418
  BUG: 1549606
  Change-Id: I88b570bbcf332a27c82d2767dfa82472f60055dc
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>

* cluster/ec: Change default read policy to gfid-hash | Ashish Pandey | 2018-03-14 | 1 file | -4/+3

  Problem: Whenever we read data from a file over NFS, NFS reads more
  data than requested and caches it. Based on the stat information, it
  decides whether the cached/pre-read data is still valid. Consider a
  4+2 EC volume with all the bricks on different nodes. In EC, with the
  round-robin read policy, reads are sent to different sets of data
  bricks. This balances the read fops across all the bricks and avoids
  overloading the same set of bricks. But due to small differences in
  clock speed, it is possible to get minor differences in atime, mtime
  or ctime from different bricks. That can cause a different stat to be
  returned to NFS, based on which NFS discards cached/pre-read data that
  has not actually changed and could have been used.

  Solution: Change the read policy for EC to gfid-hash. That forces all
  reads to go to the same set of bricks.

  Change-Id: I825441cc519e94bf3dc3aa0bd4cb7c6ae6392c84
  BUG: 1554743
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>

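  A hedged example of setting the policy explicitly, using what I
  believe is the matching volume option (now the default; volume name is
  a placeholder):

    $ gluster volume set <volname> disperse.read-policy gfid-hash
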