path: root/xlators
Commit message | Author | Age | Files | Lines
* cloudsync/cvlt: Cloudsync plugin for commvault store
  Anuradha Talur | 2019-04-26 | 11 files, -3/+1213

    Change-Id: Icbe53e78e9c4f6699c7a26a806ef4b14b39f5019
    updates: bz#1642168
    Signed-off-by: Anuradha Talur <atalur@commvault.com>
* glusterd: coverity fixes
  Atin Mukherjee | 2019-04-26 | 5 files, -8/+9

    1400775 - USE_AFTER_FREE
    1400742 - Missing Unlock
    1400736 - CHECKED_RETURN
    1398470 - Missing Unlock

    Missing unlock is the tricky one: an annotation had been added, but
    Coverity still continued to complain. Added pthread_mutex_unlock to
    release the lock before destroying it, to see if that makes Coverity
    happy.

    Updates: bz#789278
    Change-Id: I1d892612a17f805144d96c1b15004a85a1639414
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
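    A minimal sketch of the unlock-before-destroy pattern described above;
    the helper and lock names are illustrative, not the actual glusterd
    identifiers:

        /* Teardown path: make the unlock explicit before
         * pthread_mutex_destroy().  Destroying a locked mutex is
         * undefined behaviour, and static analysers flag the missing
         * unlock even when an annotation claims it is intentional. */
        static void
        svc_teardown(pthread_mutex_t *lock)
        {
            pthread_mutex_lock(lock);
            /* ... final bookkeeping protected by the lock ... */
            pthread_mutex_unlock(lock);  /* pairs the lock for Coverity */
            pthread_mutex_destroy(lock); /* destroy only after release */
        }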
* glusterd: enable fips-mode-rchecksum for new volumes
  Ravishankar N | 2019-04-26 | 1 file, -0/+13

    ...during volume create, if the cluster op-version is >= GD_OP_VERSION_7_0.

    This option itself was introduced in GD_OP_VERSION_4_0_0 via commit
    6daa65356. We missed enabling it by default for new volume creates in
    that commit. If we are to do it now safely, we need to use op-version
    GD_OP_VERSION_7_0 and target it for release-7.

    fixes: bz#1702303
    Change-Id: I7c6d4a8abe0816367e7069cb5cad01744f04858f
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
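    Roughly how such an op-version-gated default can look during volume
    create — a sketch only, with the surrounding function and error
    handling simplified; just the option key and op-version macro come
    from the commit message:

        /* Enable the option for newly created volumes only when the
         * whole cluster understands it.  conf is the glusterd
         * configuration; volinfo->dict holds the volume's options. */
        if (conf->op_version >= GD_OP_VERSION_7_0) {
            ret = dict_set_dynstr_with_alloc(volinfo->dict,
                                             "storage.fips-mode-rchecksum",
                                             "on");
            if (ret)
                goto out;
        }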
* features/locks: error-out {inode,entry}lk fops with all-zero lk-owner
  Pranith Kumar K | 2019-04-26 | 5 files, -15/+53

    Problem:
    Sometimes we find that developers forget to assign an lk-owner for an
    inodelk/entrylk/lk before writing code to wind these fops. The locks
    xlator at the moment allows this operation. This leads to multiple
    threads in the same client being able to get locks on the inode,
    because the lk-owner is the same and the transport is the same, so
    isolation with locks can't be achieved.

    Fix:
    Disallow locks with an all-zero lk-owner.

    fixes: bz#1624701
    Change-Id: I1aadcfbaaa4d49308f7c819505857e201809b3bc
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
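    A sketch of the kind of guard the locks xlator could apply before
    processing an inodelk/entrylk; the helper name is made up for
    illustration, and the gf_lkowner_t len/data layout is assumed from
    libglusterfs:

        /* Treat an all-zero lk-owner as a programming error. */
        static gf_boolean_t
        lk_owner_is_all_zero(gf_lkowner_t *owner)
        {
            int i;

            for (i = 0; i < owner->len; i++) {
                if (owner->data[i] != 0)
                    return _gf_false;
            }
            return _gf_true;
        }

        /* In the inodelk/entrylk handlers, before granting anything:
         *
         *     if (lk_owner_is_all_zero(&frame->root->lk_owner)) {
         *         op_errno = EINVAL;   // fail fast instead of granting
         *         goto unwind;
         *     }
         */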
* cloudsync: Make readdirp return stat info of all the dirents
  Anuradha Talur | 2019-04-25 | 3 files, -1/+38

    This change got missed while the initial changes were sent.
    Should have been a part of:
    https://review.gluster.org/#/c/glusterfs/+/21757/

    Gist of the change:
    The function that fills in stat info for dirents is invoked in
    readdirp in posix when cloudsync populates the xdata request with
    GF_CS_OBJECT_STATUS.

    Change-Id: Ide0c4e80afb74cd2120f74ba934ed40123152d69
    updates: bz#1642168
    Signed-off-by: Anuradha Talur <atalur@commvault.com>
* glusterd: coverity fixes
  Atin Mukherjee | 2019-04-25 | 5 files, -13/+50

    Addresses the following:

    * CID 1124776: Resource leaks (RESOURCE_LEAK) - Variable "aa" going
      out of scope leaks the storage it points to in glusterd-volgen.c
    * Bunch of CHECKED_RETURN defects in the callers of
      synctask_barrier_init
    * CID 1400755: Error handling issues (CHECKED_RETURN) - Calling
      "gf_is_service_running" without checking return value in
      xlators/mgmt/glusterd/src/glusterd-shd-svc.c: 671 in
      glusterd_shdsvc_stop()
    * CID 1400745: Memory - illegal accesses (USE_AFTER_FREE) -
      Dereferencing freed pointer "volinfo" in
      /xlators/mgmt/glusterd/src/glusterd-shd-svc.c: 460 in
      glusterd_shdsvc_start()
    * CID 1400742: Program hangs (LOCK) - adding annotation to fix this
      false positive

    Updates: bz#789278
    Change-Id: I02f16e7eeb8c5cf72f7d0b29d00df4f03b3718b3
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* features/bit-rot: Unconditionally sign the files during oneshot crawl
  Raghavendra Bhat | 2019-04-25 | 1 file, -1/+14

    Currently the bit-rot feature has an issue with disabling and
    re-enabling it on the same volume. Consider enabling bit-rot
    detection, which goes on to crawl and sign all the files present in
    the volume. Then some files are modified, and the bit-rot daemon signs
    the modified files with the correct signature. Now, disable the
    bit-rot feature. While signing and scrubbing are not happening, the
    previous checksums of the files continue to exist as extended
    attributes. If some files with checksum xattrs now get modified, they
    are not signed with a new signature because the feature is off.

    At this point, if the feature is enabled again, the bit-rot daemon
    will sign those files which do not have any bit-rot-specific xattrs
    (i.e. those files which were created after bit-rot was disabled),
    whereas the files with bit-rot xattrs won't get signed with the proper
    new checksum. If the scrubber then runs, it finds the on-disk checksum
    and the actual checksum of the file to be different (because the file
    got modified) and marks the file as corrupted.

    FIX:
    Unconditionally sign the files when the bit-rot daemon comes up
    (instead of skipping the files with bit-rot xattrs).

    Change-Id: Iadfb47dd39f7e2e77f22d549a4a07a385284f4f5
    fixes: bz#1700078
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
* cluster/dht: Refactor dht lookup functions
  N Balachandran | 2019-04-25 | 1 file, -74/+30

    Part 2: Modify dht_revalidate_cbk to call dht_selfheal_directory
    instead of separate calls to heal attrs and xattrs.

    Change-Id: Id41ac6c4220c2c35484812bbfc6157fc3c86b142
    updates: bz#1590385
    Signed-off-by: N Balachandran <nbalacha@redhat.com>
* ctime: Fix repeated logging during open
  Kotresh HR | 2019-04-24 | 1 file, -10/+5

    The log "posix set mdata failed, No ctime" was logged repeatedly after
    the fix [1]; those could be internal fops. This patch fixes the same.

    [1] https://review.gluster.org/22540

    fixes: bz#1701457
    Change-Id: I42799a90b976982cedb0ca11fa224d555eb05650
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
* core: avoid dynamic TLS allocation when possible
  Xavi Hernandez | 2019-04-24 | 3 files, -54/+6

    Some interdependencies between logging and memory management functions
    make it impossible to use the logging framework before initializing
    the memory subsystem, because they both depend on Thread Local Storage
    allocated through pthread_key_create() during initialization. This
    causes a crash when we try to log something very early in the
    initialization phase.

    To prevent this, several dynamically allocated TLS structures have
    been replaced by static TLS reserved at compile time using the
    '__thread' keyword. This also reduces the number of error sources,
    making initialization simpler.

    Updates: bz#1193929
    Change-Id: I8ea2e072411e30790d50084b6b7e909c7bb01d50
    Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
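    A simplified before/after of the TLS change being described; the
    variable names are illustrative:

        /* Before: the TLS slot is created at runtime, so nothing may
         * log until pthread_key_create() has run during init. */
        static pthread_key_t log_ctx_key;

        void
        log_tls_init(void)
        {
            pthread_key_create(&log_ctx_key, free);
        }

        /* After: the slot is reserved at compile time with __thread,
         * so it is usable from the very first instruction. */
        static __thread char log_ctx_buf[256];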
* glusterd/shd: Keep a ref on volinfo until attach rpc execute cbk
  Mohammed Rafi KC | 2019-04-24 | 2 files, -0/+7

    When svc attach executes for multiplexing a daemon, we have to keep a
    ref on volinfo until it finishes execution. Because the attach is an
    async call, a parallel volume delete can otherwise free the volinfo.

    Change-Id: Ibc02b89557baaed2f63db63d7fb1a7480444ae0d
    fixes: bz#1702185
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
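    The ref/unref discipline described above, sketched with glusterd's
    volinfo ref-counting helpers; the attach call and callback names are
    placeholders:

        /* Pin volinfo for the lifetime of the asynchronous attach. */
        glusterd_volinfo_ref(volinfo);

        ret = my_svc_attach_async(svc, volinfo, my_attach_cbk); /* placeholder */
        if (ret)
            glusterd_volinfo_unref(volinfo);  /* attach was never issued */

        /* ... and in my_attach_cbk(), once the RPC reply is processed:
         *     glusterd_volinfo_unref(volinfo);
         */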
* cluster/ec: fix fd reopen
  Xavi Hernandez | 2019-04-23 | 14 files, -274/+328

    Currently EC tries to reopen fd's that have been opened while a brick
    was down. This is done as part of regular write operations, just after
    having acquired the locks, and it's sent as a sub-fop of the main
    write fop.

    There were two problems:

    1. The reopen was attempted on all UP bricks, even if a previous lock
       didn't succeed. This is incorrect because most probably the open
       will fail.

    2. If reopen is sent and fails, the error is propagated to the main
       operation, causing it to fail when it shouldn't.

    To fix this, we only attempt reopens on bricks where the current fop
    owns a lock, and we prevent any error from being propagated to the
    main fop.

    To implement this behaviour, an argument used to indicate the minimum
    number of required answers has been overloaded to also include some
    flags. To make the change consistent, it has been necessary to rename
    the argument, which means that a lot of files have been changed.
    However there are no functional changes.

    This change has also uncovered a problem in discard code, which didn't
    correctly process requests of small sizes because no real discard fop
    was being processed, only a write of 0's on some region. In this case
    some fields of the fop remained uninitialized or with incorrect
    values. To fix this, a new function has been created to simulate
    success on a fop, and it's used in the discard case.

    Thanks to Pranith for providing a test script that has also detected
    an issue in this patch. This patch includes a small modification of
    this script to force data to be written into bricks before stopping
    them.

    Change-Id: If272343873369186c2fb8f43c1d9c52c3ea304ec
    Fixes: bz#1699866
    Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* features/sdfs: Assign unique lk-owner for entrylk fop
  Pranith Kumar K | 2019-04-22 | 1 file, -0/+6

    sdfs is supposed to serialize entry fops by doing entrylk, but all the
    locks are being taken with an all-zero lk-owner. In essence, sdfs
    doesn't achieve its goal of mutual exclusion when conflicting
    operations are executed by the same client, because two locks on the
    same entry with the same all-zero owner will both be granted. Fixed
    this by assigning an lk-owner before taking the entrylk.

    updates: bz#1624701
    Change-Id: Ifabfc998c9f1724915d38e90ed8287e05797d769
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
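    One way to give each wound entrylk a distinct owner, assuming the
    lk-owner helpers from libglusterfs' lkowner.h; deriving the owner from
    the frame pointer is just one possible choice:

        /* Derive a non-zero, per-operation lk-owner from the call frame
         * before winding the entrylk, so the locks xlator sees distinct
         * owners for concurrent entry fops from the same client. */
        set_lk_owner_from_ptr(&frame->root->lk_owner, frame->root);

        /* ... STACK_WIND the entrylk as before; conflicting operations
         * now block on each other instead of both being granted. */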
* cluster/afr: Set lk-owner before inodelk/entrylk/lk
  Pranith Kumar K | 2019-04-22 | 2 files, -19/+23

    Updates: bz#1624701
    Change-Id: I7152c28ad85925abccdcc4cd6de8cb2a2b847a51
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* GlusterD: Checking for null value to avoid explicit dereferencing of null pointer
  rishubhjain | 2019-04-22 | 1 file, -0/+14

    Before calling strtok_r, a check for a null pointer is necessary to
    avoid dereferencing a null pointer.

    CID: 1398617
    CID: 1274074

    Change-Id: I34956c6e04af1faa22d550e6474909ecd36f5d6c
    updates: bz#789278
    Signed-off-by: rishubhjain <rishubhjain47@gmail.com>
* features/locks: fix coverity issues
  Xavi Hernandez | 2019-04-19 | 2 files, -1/+6

    This patch fixes the following NULL dereferences identified by
    Coverity:

    * CID 1398619
    * CID 1398621
    * CID 1398623
    * CID 1398625
    * CID 1398626

    Change-Id: Id6af0d7cba0bb3346005376bc27180e8476255a4
    Updates: bz#789278
    Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* glusterd: fix loading ctime in client graph logic
  Atin Mukherjee | 2019-04-17 | 1 file, -3/+9

    Commit efbf8ab wasn't handling all the scenarios of toggling the ctime
    option correctly, and moreover a '!' had completely tossed up the
    logic.

    Fixes: bz#1697907
    Change-Id: If12e2f69045e59878992ee2cd0518cc0eabcce0d
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* Revert "features/locks: error-out {inode,entry}lk fops with all-zero lk-owner"Atin Mukherjee2019-04-175-53/+15
| | | | | | | | | This reverts commit 3883887427a7f2dc458a9773e05f7c8ce8e62301 as it has broken sdfs-sanity.t. Updates: bz#1624701 Change-Id: Icb2b0d6bfcce4d556f1cd0f11695c03ffc138736 Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* features/bit-rot-stub: clean the mutex after cancelling the signer thread
  Raghavendra Bhat | 2019-04-17 | 2 files, -7/+59

    When the bit-rot feature is disabled, the signer thread from the
    bit-rot-stub xlator (the thread which performs the setxattr of the
    signature on to the disk) is cancelled. But, if the cancelled signer
    thread had already held the mutex (&priv->lock) which it uses to
    monitor the queue of files to be signed, then the mutex is never
    released. This creates problems in future when the feature is enabled
    again: both the new instance of the signer thread and the regular
    thread which enqueues the files to be signed will be blocked on this
    mutex.

    So, as part of cancelling the signer thread, unlock the mutex
    associated with it as well, using pthread_cleanup_push and
    pthread_cleanup_pop.

    Change-Id: Ib761910caed90b268e69794ddeb108165487af40
    updates: bz#1700078
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
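    A condensed sketch of the cancellation-safe pattern the commit
    describes; the priv->lock / priv->cond names and the queue predicate
    are illustrative:

        static void
        signer_unlock(void *arg)
        {
            pthread_mutex_unlock((pthread_mutex_t *)arg);
        }

        /* Inside the signer thread's loop; pthread_cond_wait() is a
         * cancellation point, so without a cleanup handler the mutex
         * could be left held when the thread is cancelled. */
        pthread_mutex_lock(&priv->lock);
        pthread_cleanup_push(signer_unlock, &priv->lock);

        while (queue_is_empty(priv))              /* illustrative */
            pthread_cond_wait(&priv->cond, &priv->lock);
        /* ... dequeue the next file to sign ... */

        pthread_cleanup_pop(1);  /* 1 => run the handler, so the mutex is
                                  * released on normal exit and on
                                  * cancellation alike */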
* glusterd: fix op-version of glusterd.vol_count_per_thread
  Atin Mukherjee | 2019-04-16 | 1 file, -1/+1

    It was hardcoded, and with a wrong value.

    Fixes: bz#1699339
    Change-Id: Ibabe2424a0d35e172a9259bd8849c9bb7cebff1e
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* features/locks: error-out {inode,entry}lk fops with all-zero lk-owner
  Pranith Kumar K | 2019-04-16 | 5 files, -15/+53

    Problem:
    Sometimes we find that developers forget to assign an lk-owner for an
    inodelk/entrylk/lk before writing code to wind these fops. The locks
    xlator at the moment allows this operation. This leads to multiple
    threads in the same client being able to get locks on the inode,
    because the lk-owner is the same and the transport is the same, so
    isolation with locks can't be achieved.

    Fix:
    Disallow locks with an all-zero lk-owner.

    fixes: bz#1624701
    Change-Id: I1c816280cffd150ebb392e3dcd4d21007cdd767f
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* glusterd: Optimize glusterd handshaking code path
  Mohit Agrawal | 2019-04-15 | 5 files, -48/+290

    Problem: At the time of handshaking, glusterd populates volume data in
    a dictionary. When more than 1500 volumes are configured, glusterd
    takes more than 10 minutes to generate the data. Because this takes so
    long, the rpc request times out and rpc starts bailing out call
    frames.

    Solution: To optimize the code, the following changes were made:

    1) Spawn multiple threads to populate volume data in bulk in separate
       dictionaries, and introduce an option
       glusterd.brick-dict-thread-count to configure the number of threads
       used to populate volume data.
    2) Populate tier data only when the volume type is tier.
    3) Compare snap data only when snap_count is non-zero.

    Fixes: bz#1699339
    Change-Id: I38dc71970c049217f9d1a06fc0aaf4c26eab18f5
    Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* libgfchangelog: use find_library to locate shared library
  Sunny Kumar | 2019-04-15 | 1 file, -1/+3

    Issue: libgfchangelog.so: cannot open shared object file

    Due to the hardcoded shared-library name, the runtime loader looks for
    a particular version of the shared library.

    Solution: Using find_library to locate the shared library at runtime
    solves this issue.

    Traceback (most recent call last):
      File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 323, in main
        func(args)
      File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 82, in subcmd_worker
        local.service_loop(remote)
      File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1261, in service_loop
        changelog_agent.init()
      File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 233, in __call__
        return self.ins(self.meth, *a)
      File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 215, in __call__
        raise res
    OSError: libgfchangelog.so: cannot open shared object file: No such file or directory

    Change-Id: I3dd013d701ed1cd99ba7ef20d1898f343e1db8f5
    fixes: bz#1699394
    Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* shd/mux: Fix coverity issues introduced by shd mux patch
  Mohammed Rafi KC | 2019-04-15 | 2 files, -7/+23

    CID 1400475: Null pointer dereferences (FORWARD_NULL)
    CID 1400474: Null pointer dereferences (FORWARD_NULL)
    CID 1400471: Code maintainability issues (UNUSED_VALUE)
    CID 1400470: Null pointer dereferences (FORWARD_NULL)
    CID 1400469: Memory - illegal accesses (USE_AFTER_FREE)
    CID 1400467: Code maintainability issues (UNUSED_VALUE)

    Change-Id: I0ca1c733be335c6e5844f44850f8066626ac40d4
    updates: bz#789278
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* cluster/afr: Remove local from owners_list on failure of lock-acquisition
  Pranith Kumar K | 2019-04-15 | 4 files, -18/+14

    When eager-lock acquisition fails because of, say, network failures,
    the local is not removed from owners_list. This leads to accumulation
    of waiting frames, and the application hangs: the waiting frames
    assume that another transaction is in the process of acquiring the
    lock because owners_list is not empty.

    Handled this case as well in this patch. Added asserts to make it
    easier to find these problems in future.

    fixes: bz#1696599
    Change-Id: I3101393265e9827755725b1f2d94a93d8709e923
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* core: Log level changes do not take effect on a running client process
  Mohit Agrawal | 2019-04-15 | 2 files, -9/+20

    Problem: Commit c34e4161f3cb6539ec83a9020f3d27eb4759a975 set the
    log-level per xlator during reconfigure only for a brick process, not
    for the client process.

    Solution:
    1) Change the per-xlator log-level only if brick_mux is enabled. To
       make brick multiplexing detectable, introduce a flag brick_mux at
       ctx->cmd_args.

    Note: There are two other changes done with this patch:
    1) Ignore the client-log-level option when attaching a brick to an
       already running brick process if brick_mux is enabled.
    2) Add a log to print the pid of the running process to make
       debugging easier.

    Change-Id: I39e85de778e150d0685cd9a79425ce8b4783f9c9
    Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
    Fixes: bz#1696046
* posix/ctime: Fix stat (time attributes) inconsistency during readdirp
  Kotresh HR | 2019-04-15 | 2 files, -26/+44

    Problem: Creation of a tar file on a gluster volume throws the warning
    'file changed as we read it'.

    Cause: During readdirp, for a few of the files whose inode is not
    present, the time attributes were served from the backend. This caused
    the ctime of those files to differ between before and after the
    readdir done by tar.

    Solution: If the ctime feature is enabled and the inode is not
    present, don't serve the time attributes from the backend file; serve
    them from the xattr.

    fixes: bz#1698078
    Change-Id: I427ef865f97399475faf5aa6ca495f7e317603ae
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
* marker-quota: remove dead code
  Amar Tumballi | 2019-04-15 | 1 file, -37/+4

    Also make minor changes to signatures (int -> void) where the return
    value was not checked anywhere.

    updates: bz#1693692
    Change-Id: Iff117712eb65e0b6b8b441a779202a117fcdf1fb
    Signed-off-by: Amar Tumballi <amarts@redhat.com>
* core: Brick is not able to detach successfully in brick_mux environment
  Mohit Agrawal | 2019-04-14 | 1 file, -0/+1

    Problem: In a brick_mux environment, while volumes are stopped in a
    loop, bricks are not detached successfully, because xprtrefcnt never
    becomes 0 for the detached brick. At the time of initiating the brick
    detach process, server_notify saves xprtrefcnt for the detached brick,
    and once the counter reaches 0, server_rpc_notify spawns a
    server_graph_janitor_threads to clean up brick resources. xprtrefcnt
    does not reach 0 because the socket framework stops working after 0 is
    assigned as the socket's fd: in commit
    dc25d2c1eeace91669052e3cecc083896e7329b2 changelog fini was changed to
    close htime_fd if htime_fd is not negative, but by default htime_fd is
    0, so it closes fd 0 as well.

    Solution: Initialize htime_fd to -1 just after allocating
    changelog_priv with GF_CALLOC.

    Fixes: bz#1699025
    Change-Id: I5f7ca62a0eb1c0510c3e9b880d6ab8af8d736a25
    Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
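    The fix is essentially a one-liner after the allocation; a sketch,
    with the memory-type constant only illustrative:

        priv = GF_CALLOC(1, sizeof(*priv), gf_changelog_mt_priv_t);
        if (!priv)
            goto out;

        /* GF_CALLOC zero-fills, so htime_fd would otherwise be 0 and the
         * "close if not negative" check in fini would close fd 0, which
         * belongs to the socket layer in a multiplexed brick.  Mark the
         * fd as "not open" explicitly. */
        priv->htime_fd = -1;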
* glusterd-volgen.c: skip fetching some vol settings in a bricks loop.
  Yaniv Kaul | 2019-04-13 | 1 file, -13/+15

    The values are per volume and are not going to change while processing
    its bricks, as far as I can understand the code. Fetch them and store
    them outside the loop.

    updates: bz#1193929
    Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
    Change-Id: I2bc263f92f9141ea26a9dfb8265225f38307cbac
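    The shape of the change, sketched; the option key and helper are only
    meant to show the loop-invariant hoisting, not the exact settings that
    were moved:

        /* Fetch per-volume settings once, before walking the bricks... */
        int quota_on = glusterd_volinfo_get_boolean(volinfo,
                                                    VKEY_FEATURES_QUOTA);

        cds_list_for_each_entry(brickinfo, &volinfo->bricks, brick_list)
        {
            /* ...and reuse the cached value for every brick instead of
             * re-querying the volume dictionary on each iteration. */
        }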
* Replace memdup() with gf_memdup()
  Vijay Bellur | 2019-04-12 | 4 files, -7/+6

    memdup() and gf_memdup() have the same implementation. Removed one API
    as the presence of both can be confusing.

    Change-Id: I562130c668457e13e4288e592792872d2e49887e
    updates: bz#1193929
    Signed-off-by: Vijay Bellur <vbellur@redhat.com>
* ec: fix truncate lock to cover the write in truncate clean
  Kinglong Mee | 2019-04-12 | 1 file, -2/+6

    ec_truncate_clean does its writing under the lock granted for
    truncate, but that lock's range is calculated by ec_adjust_offset_up,
    so the write in ec_truncate_clean falls outside the lock.

    Updates: bz#1699189
    Change-Id: Idbe1fd48d26afe49c36b77db9f12e0907f5a4134
    Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
* cluster/afr: Thin-arbiter SHD fixes
  karthik-us | 2019-04-12 | 2 files, -13/+13

    This patch addresses post-merge review comments for commit
    5784a00f997212d34bd52b2303e20c097240d91c.

    Change-Id: I7ed954664a2ae8e1091d23ee3ceb9c66e83bfeac
    fixes: bz#1697930
    Signed-off-by: karthik-us <ksubrahm@redhat.com>
* glusterd: remove glusterd_check_volume_exists() call
  Atin Mukherjee | 2019-04-11 | 5 files, -23/+16

    As the same functionality is covered in glusterd_volinfo_find.

    Updates: bz#1193929
    Change-Id: I2308c5fa9b2ca9edaa95f172d0bd914103808c36
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: provide a way to detach failed node
  Sanju Rakonde | 2019-04-11 | 1 file, -2/+5

    When a gluster node in a trusted storage pool has failed due to
    hardware issues, the volume delete operation fails saying "Not all
    peers are up", and peer detach for the failed node fails saying
    "Brick(s) with peer <peer_ip> exists in cluster".

    The idea here is to use either the replace-brick or remove-brick
    command to remove all the bricks hosted by the failed node and then
    re-attempt the peer detach. This change adds this trick to the peer
    detach error message.

    fixes: bz#1697866
    Change-Id: I0c58887479d31db603ad8d6535ea9d547880ccc8
    Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* client/fini: return fini after rpc cleanup
  Mohammed Rafi KC | 2019-04-11 | 2 files, -6/+25

    There is a race condition between the rpc_transport layer and client
    fini. Sequence of events that leads to the race:

    1) When we want to destroy a graph, we send a parent-down event first.
    2) Once parent-down is received on a client xlator, it initiates an
       rpc disconnect.
    3) This in turn generates a child-down event.
    4) When we process child-down, we first do fini for every xlator.
    5) On successful return of fini, we delete the graph.

    After step 5 there is a chance that the fini on the client has not
    actually finished, because an rpc_transport ref can race with the
    above sequence. So we have to wait until all rpc refs are successfully
    freed before returning from the client's fini.

    Change-Id: I20145662d71fb837e448a4d3210d1fcb2855f2d4
    fixes: bz#1659708
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
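    One way to make fini block until the transport is really gone,
    sketched with a condition variable; the conf fields used here are
    invented for illustration:

        /* client fini: wait for the last rpc_transport reference to
         * drop before allowing the caller to delete the graph. */
        pthread_mutex_lock(&conf->lock);
        {
            while (!conf->rpc_destroyed)   /* set by the rpc notify path */
                pthread_cond_wait(&conf->fini_cond, &conf->lock);
        }
        pthread_mutex_unlock(&conf->lock);

        return 0;  /* only now is it safe to free the graph */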
* features/cloudsync: Added some new functions
  Anuradha Talur | 2019-04-10 | 7 files, -94/+597

    This patch contains the following changes:
    1) Store ID info will now be stored in the inode ctx.
    2) Added a new readv type where the read is made directly from the
       remote store. This choice is made by a volume set operation.
    3) cs_forget() was missing. Added it.

    Change-Id: Ie3232b3d7ffb5313a03f011b0553b19793eedfa2
    fixes: bz#1642168
    Signed-off-by: Anuradha Talur <atalur@commvault.com>
* storage/posix: changes with respect to cloudsync
  Anuradha Talur | 2019-04-10 | 4 files, -15/+177

    Main changes include the logic to update the iatt buf with the file
    size from extended attributes in posix, rather than having this logic
    in the cloudsync xlator.

    Change-Id: I44f5f8df7a01e496372557fe2f4eff368dbdaa33
    fixes: bz#1642168
    Signed-off-by: Anuradha Talur <atalur@commvault.com>
* mgmt/glusterd: Make changes related to cloudsync xlator
  Anuradha Talur | 2019-04-10 | 1 file, -12/+12

    1) The placement of the cloudsync xlator has been changed to make it
       the shard xlator's child: if cloudsync has to work with shard in
       the graph, it needs to be a child of shard.

    Change-Id: Ib55424fdcb7ce8edae9f19b8a6e3d3ba86c1f0c4
    fixes: bz#1642168
    Signed-off-by: Anuradha Talur <atalur@commvault.com>
* protocol: add an option to force using old-protocol
  Amar Tumballi | 2019-04-10 | 3 files, -3/+24

    The protocol xlators implement every fop and, in general, a large part
    of the codebase. Considering that our regression runs mostly on one
    machine, there was no way of forcing the client to use the old
    protocol while the new one is available.

    With this patch, a new 'testing' option is provided which forces the
    client to use the old protocol if found. This should help increase
    the code coverage by at least 10k lines overall.

    updates: bz#1693692
    Change-Id: Ie45256f7dea250671b689c72b4b6f25037cef948
    Signed-off-by: Amar Tumballi <amarts@redhat.com>
* build: glusterfs build is failing on RHEL-6
  Mohit Agrawal | 2019-04-10 | 1 file, -1/+1

    Problem: The glusterfs build throws the error "undefined reference to
    `dlclose'" on RHEL 6.

    Solution: Add the LIB_DL link in Makefile.am to resolve the same.

    Fixes: bz#1696512
    Change-Id: I58019ca9e29d569d8e6df282b8ab178ad540843b
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* glusterd: load ctime in the client graph only if it's not turned off
  Atin Mukherjee | 2019-04-09 | 1 file, -1/+2

    Considering ctime is a client-side feature, we can't blindly load the
    ctime xlator into the client graph if it's explicitly turned off; that
    would result in a backward-compatibility issue where an old client
    can't mount a volume configured on a server which has the ctime
    feature.

    Fixes: bz#1697907
    Change-Id: I6ae7b96d056073aa6746de9a449cf319786d45cc
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd-volgen.c: skip fetching skip-CLIOT in a loop.
  Yaniv Kaul | 2019-04-08 | 1 file, -2/+3

    Its value is not going to change within the loop, as far as I can
    understand the code. Fetch and store it outside the loop.

    Change-Id: I6327c23212dceec6006349421ef185495892dd8a
    updates: bz#1193929
    Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* glusterd: remove redundant glusterd_check_volume_exists () calls
  Atin Mukherjee | 2019-04-08 | 6 files, -117/+23

    The following pattern was found in multiple places, where both
    glusterd_check_volume_exists and glusterd_volinfo_find do the same
    job. We just need one of them, not both. In a scaled environment with
    many volumes, iterating over the volume list to find a volume twice is
    a bottleneck!

    exists = glusterd_check_volume_exists(volname);
    ret = glusterd_volinfo_find(volname, &volinfo);
    if ((ret) || (!exists)) {

    Credits: ykaul@redhat.com for finding this out.

    Updates: bz#1193929
    Change-Id: Ie116fe5c93e261a2bddd267c28ccb20a2884a36f
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
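    After the cleanup, the pattern above collapses to a single lookup,
    roughly:

        /* glusterd_volinfo_find() already tells us whether the volume
         * exists, so the extra glusterd_check_volume_exists() walk over
         * the volume list is dropped. */
        ret = glusterd_volinfo_find(volname, &volinfo);
        if (ret) {
            /* volume does not exist */
            goto out;
        }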
* GlusterD: Resolves the issue of referencing memory after it has been freed
  rishubhjain | 2019-04-08 | 1 file, -4/+0

    Setting the pointer to NULL after GF_FREE() and checking the pointer
    value before calling GF_FREE() to avoid referencing memory after it
    has been freed.

    CID: 1398622

    Change-Id: Iba0d8879abccf5923a69132a207d53bb94551417
    updates: bz#789278
    Signed-off-by: rishubhjain <rishubhjain47@gmail.com>
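    The defensive pattern the commit describes, as a generic sketch; the
    field name is made up:

        /* Guard the free and poison the pointer so a later code path
         * cannot dereference or double-free it. */
        if (brickinfo->mount_dir_dup) {        /* illustrative field */
            GF_FREE(brickinfo->mount_dir_dup);
            brickinfo->mount_dir_dup = NULL;
        }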
* cluster/dht: refactor dht lookup functions
  N Balachandran | 2019-04-05 | 2 files, -124/+119

    Part 1: refactor the dht_lookup_dir_cbk and dht_selfheal_directory
    functions. Added a simple dht selfheal directory test.

    Change-Id: I1410c26359e3c14b396adbe751937a52bd2fcff9
    updates: bz#1590385
    Signed-off-by: N Balachandran <nbalacha@redhat.com>
* cluster/afr: Invalidate inode on change of split-brain-choice
  Pranith Kumar K | 2019-04-05 | 1 file, -4/+1

    When the split-brain choice is changed from one brick to another,
    inode-invalidate is not called, so the readv call is served from the
    cache, leading to failures in split-brain-resolution.t. Fixed it by
    calling inode_invalidate() when this happens.

    updates: bz#1193929
    Change-Id: I2624614eec38c0303f3e1dc55dfae3d4b864218b
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* cluster/ec: Fix handling of heal info cases without locks
  Ashish Pandey | 2019-04-04 | 1 file, -25/+17

    When we use the heal info command, it takes a lot of time, as in some
    cases it takes a lock on entries to find out whether the entry
    actually needs heal or not. There are some cases where we can avoid
    these locks and still conclude whether the entry needs heal:

    1 - We do a lookup (without lock) on an entry, which we found in
        .glusterfs/indices/xattrop, and find that the lock count is zero.
        Now if the file has the dirty bit set on all or any brick, we can
        say that this entry needs heal.

    2 - If the lock count is one and dirty is greater than 1, then it also
        means that some fop had left the dirty bit set, which made the
        dirty count of the current fop (which has taken the lock) more
        than one. At this point also we can definitely say that this entry
        needs heal.

    This patch modifies the code to take the above two points into
    consideration. It also changes the code to not call ec_heal_inspect if
    ec_heal_do was called from client-side heal. Client-side heal triggers
    heal only when it is sure that heal is required. [We have changed the
    code to not call heal for lookup.]

    updates: bz#1689799
    Change-Id: I7f09f0ecd12f65a353297aefd57026fd2bebdf9c
    Signed-off-by: Ashish Pandey <aspandey@redhat.com>
* sdfs: enable pass-through
  Amar Tumballi | 2019-04-03 | 1 file, -0/+5

    We have 'sdfs-sanity.t', which covers at least 90% of the functions
    and 70% of the lines in the translator, but the recent changes that
    disabled sdfs due to its performance impact meant even that test no
    longer exercised the translator.

    updates: bz#1693692
    Change-Id: I0ebcb307c4ab48a6e59ded27bf39f72ce2304ebc
    Signed-off-by: Amar Tumballi <amarts@redhat.com>
* changelog: remove unused code.
  Yaniv Kaul | 2019-04-03 | 4 files, -32/+0

    Seems to be unused.

    Change-Id: I75eed9641dd030a1fbb1b942a9d818f10a7e1437
    updates: bz#1193929
    Signed-off-by: Yaniv Kaul <ykaul@redhat.com>