path: root/xlators/cluster
Commit message (Author, Date, Files changed, Lines -/+)
...
* dht: gf_defrag_process_dir is called even if gf_defrag_fix_layout has failed (Susant Palai, 2020-03-24, 1 file, -0/+1)
  Currently, even though gf_defrag_fix_layout fails with ENOENT or ESTALE, a subsequent call is made to gf_defrag_process_dir, leading to rebalance failure.
  fixes: #1102 Change-Id: Ib0c309fd78e89a000fed3feb4bbe2c5b48e61478 Signed-off-by: Susant Palai <spalai@redhat.com>
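  A hedged sketch of the corrected control flow in C (the real helpers in dht-rebalance.c take more arguments and report errors differently; these signatures are simplified stand-ins):

      #include <errno.h>

      extern int gf_defrag_fix_layout(void *loc);   /* stand-in: <0 means -errno */
      extern int gf_defrag_process_dir(void *loc);  /* stand-in signature */

      static int
      defrag_handle_dir(void *loc)
      {
          int ret = gf_defrag_fix_layout(loc);

          if (ret == -ENOENT || ret == -ESTALE)
              return 0;   /* directory vanished mid-crawl: skip it, not fatal */
          if (ret < 0)
              return ret; /* genuine failure: propagate */
          return gf_defrag_process_dir(loc); /* only after fix-layout succeeded */
      }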
* cluster/afr: Fixes for halo (Pranith Kumar K, 2020-03-13, 3 files, -5/+19)
  The current implementation assumes that the ping event will come after the connect event, but that may not be the case when fds need to be re-opened after the socket connection, which consumes more time. So handle the ping/child-up events arriving in any order.
  fixes: bz#1800583 Change-Id: I6bcdc0caa503bdc039ef2b4739fbf4afae121f05 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* dht - selfheal code cleaning (Barak Sason Rofman, 2020-03-12, 1 file, -135/+20)
  1 - Converted methods to static
  2 - Removed unused code
  Change-Id: I49db3e28116da1c3c9ff0a33dcce7281bc3856f7 updates: bz#1193929 Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
* dht/rebalance - fixing failure occurrence due to rebalance stop (Barak Sason Rofman, 2020-03-04, 1 file, -0/+8)
  Problem description: When stopping rebalance, the following error messages appear in the rebalance log file:
  [2020-01-28 14:31:42.452070] W [dht-rebalance.c:3447:gf_defrag_process_dir] 0-distrep-dht: Found error from gf_defrag_get_entry
  [2020-01-28 14:31:42.452764] E [MSGID: 109111] [dht-rebalance.c:3971:gf_defrag_fix_layout] 0-distrep-dht: gf_defrag_process_dir failed for directory: /0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31
  [2020-01-28 14:31:42.453498] E [MSGID: 109016] [dht-rebalance.c:3906:gf_defrag_fix_layout] 0-distrep-dht: Fix layout failed for /0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30
  In order to avoid seeing these error messages, a modification to the error handling mechanism has been made. In addition, several log messages have been added in order to improve debugging efficiency.
  fixes: bz#1800956 Change-Id: Ifc82dae79ab3da9fe22ee25088a2a6b855afcfcf Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
* xlator/dht-helper: structure logging (yatipadia, 2020-03-03, 2 files, -97/+75)
  Convert gf_msg() to gf_smsg().
  Updates: #657 Change-Id: Iab35ac89b7d7fb6fb0074fc61b11bf679c517c9d Signed-off-by: yatipadia <ypadia@redhat.com> Signed-off-by: yatip <ypadia@redhat.com>
* cluster/afr: fix race when bricks come up (Xavi Hernandez, 2020-03-02, 3 files, -6/+9)
  There was a problem when self-heal was sending lookups at the same time that one of the bricks was coming up. In this case there was a chance that the number of 'up' bricks would change in the middle of sending the requests to subvolumes, causing a discrepancy between the expected number of replies and the actual number of sent requests. This discrepancy caused AFR to continue executing requests before all requests were complete. Eventually, the frame of the pending request was destroyed when the operation terminated, causing a use-after-free issue when the answer was finally received. In theory the same thing could happen in the reverse way, i.e. AFR tries to wait for more replies than sent requests, causing a hang.
  Change-Id: I7ed6108554ca379d532efb1a29b2de8085410b70 Signed-off-by: Xavi Hernandez <xhernandez@redhat.com> Fixes: bz#1808875
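  A sketch of the idea with stand-in types: latch the set of 'up' children once at dispatch time, so the number of sent requests and the number of expected replies cannot diverge when a brick comes up mid-operation:

      #include <string.h>

      #define MAX_CHILDREN 16

      struct dispatch_state {
          unsigned char targets[MAX_CHILDREN]; /* latched copy of child_up */
          int expected_replies;
      };

      static void
      latch_targets(struct dispatch_state *st,
                    const unsigned char *live_child_up, int child_count)
      {
          int i;

          memcpy(st->targets, live_child_up, child_count);
          st->expected_replies = 0;
          for (i = 0; i < child_count; i++)
              if (st->targets[i])
                  st->expected_replies++;
          /* Requests go only to targets[]; each callback decrements
           * expected_replies, so the frame is never torn down early. */
      }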
* xlator/dht-lock: structure logging (yatipadia, 2020-02-26, 2 files, -105/+113)
  Convert gf_msg() to gf_smsg().
  Change-Id: If540ca921b1cd8ca75b92b3d72eb9eb61bdaaa10 Updates: #657 Signed-off-by: yatip <ypadia@redhat.com>
* afr: prevent spurious entry heals leading to gfid split-brain (Ravishankar N, 2020-02-18, 5 files, -15/+7)
  Problem: In a hyperconverged setup with granular-entry-heal enabled, if a file is recreated while one of the bricks is down, and an index heal is triggered (with the brick still down), entry self-heal was doing a spurious heal with just the 2 good bricks. It was doing a post-op leading to removal of the filename from .glusterfs/indices/entry-changes, as well as erroneous setting of afr xattrs on the parent. When the brick came up, the xattrs were cleared, resulting in the renamed file not getting healed and leading to gfid split-brain and EIO on the mount.
  Fix: Proceed with entry heal only when shd can connect to all bricks of the replica, just like in data and metadata heal.
  fixes: bz#1801624 Change-Id: I916ae26ad1fabf259bc6362da52d433b7223b17e Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* cluster/thin-arbiter: Wait for TA connection before ta-file lookup (Ashish Pandey, 2020-02-17, 1 file, -17/+21)
  Problem: When we mount a ta volume, as soon as 2 data bricks are connected we consider the mount to be done, and then send a lookup/create on the ta file on the ta node. However, this connection with the ta node might not have completed yet. Due to this delay, the ta replica id file will not be created and we will see an ENOTCONN error in the log file if we do a lookup.
  Solution: As we know that this ta node could have a higher latency, we should wait a reasonable time for the connection to happen before sending the lookup/create on the replica id file.
  fixes: bz#1720463 Change-Id: I36f90865afe617e4e84cee57fec832a16f5dd6cc
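  One plausible shape for such a bounded wait, sketched with plain pthreads (the 10-second budget and all names are illustrative, not what the patch actually uses):

      #include <pthread.h>
      #include <time.h>

      static int
      wait_for_ta_connection(pthread_mutex_t *mtx, pthread_cond_t *cond,
                             const int *connected)
      {
          struct timespec deadline;

          clock_gettime(CLOCK_REALTIME, &deadline);
          deadline.tv_sec += 10;              /* illustrative budget */

          pthread_mutex_lock(mtx);
          while (!*connected) {
              if (pthread_cond_timedwait(cond, mtx, &deadline) != 0)
                  break;                      /* ETIMEDOUT: stop waiting */
          }
          pthread_mutex_unlock(mtx);
          return *connected;                  /* caller decides how to proceed */
      }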
* dht - Reducing methods scope (Barak Sason Rofman, 2020-02-13, 6 files, -104/+60)
  1. Reduced methods scope in the following: inode read&write, layout, linkfile, shard
  2. Removed dead code @ dht-linkfile.c:174-228 & dht-shard.c:44
  Change-Id: I2d08a10c7b074fccdb0c020845cad60c6ea32db5 updates: bz#1193929 Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
* cluster/afr: Check for lock on source & sink before doing data heal (karthik-us, 2020-02-13, 1 file, -3/+19)
  Problem: In the function afr_selfheal_data_block(), we only check that the lock count is equal to or greater than the number of sinks. There can be a case where we have 2 source bricks and one sink, and locking is successful only on the source brick(s). In this case we continue with the healing on the sink without having a lock on it, which is not correct.
  Fix: Check for a lock on at least the source & one sink before starting the data heal.
  Change-Id: Iebcb57dcaa4b31831fedfee63d6ca16e9d6c8df8 fixes: bz#1688115 Signed-off-by: karthik-us <ksubrahm@redhat.com>
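  A sketch of the strengthened precondition, using AFR-style flag arrays as stand-ins:

      /* Heal may proceed only if the lock covers the source brick and
       * at least one sink, not merely 'enough' bricks overall. */
      static int
      locks_cover_source_and_sink(const unsigned char *locked_on, int source,
                                  const unsigned char *sinks, int child_count)
      {
          int i;

          if (source < 0 || !locked_on[source])
              return 0;
          for (i = 0; i < child_count; i++)
              if (sinks[i] && locked_on[i])
                  return 1;     /* at least one sink is locked too */
          return 0;
      }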
* tests: Fix spurious self-heald.t failure (Pranith Kumar K, 2020-02-11, 1 file, -23/+15)
  Problem: The heal-info code assumes that all indices in the xattrop directory definitely need heal. There is one corner case: the very first xattrop on a file adds the gfid to the 'xattrop' index in the fop path, and in the _cbk path it is removed because the fop is a zero-xattr xattrop in the success case. These gfids could be read by heal-info and shown as needing heal.
  Fix: Check the pending flag to see whether the file definitely needs heal or not, instead of relying on which index is being crawled at the moment.
  fixes: bz#1801623 Change-Id: I79f00dc7366fedbbb25ec4bec838dba3b34c7ad5 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* dht: Fix stale-layout and create issue (Susant Palai, 2020-02-09, 2 files, -15/+140)
  Problem: With lookup-optimize set to on by default, a client with a stale layout can create a new file on a wrong subvol. This will lead to possible duplicate files if two different clients attempt to create the same file with two different layouts.
  Solution: Send the in-memory layout to be cross-checked at posix before committing a "create". In case of a mismatch, sync the client layout with that of the server and attempt the create fop one more time.
  test: Manual, testcase(attached)
  fixes: bz#1786679 Change-Id: Ife0941f105113f1c572f4363cbcee65e0dd9bd6a Signed-off-by: Susant Palai <spalai@redhat.com>
* dht-hashfn.c: ensure we do not try to calculate hash on NULL path (Yaniv Kaul, 2020-02-05, 1 file, -0/+3)
  For some reason, dht_selfheal_layout_alloc_start() sends a NULL loc->path string to dht_hash_compute(). Until we understand why it happens, we should strive not to crash on a strlen() of a NULL pointer.
  Change-Id: I8c2a22602cfccba9af85f432a1841556f6978450 updates: bz#1793378 Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
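  The defensive check itself is simple; a self-contained sketch (the toy loop stands in for the real hash computation):

      static int
      hash_compute_checked(const char *path, unsigned int *hash)
      {
          if (path == NULL)
              return -1;        /* caller logs and aborts the layout op */

          *hash = 0;
          while (*path)         /* stand-in for the real hash function */
              *hash = *hash * 31 + (unsigned char)*path++;
          return 0;
      }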
* xlator/dht-selfheal: structure logging (yatipadia, 2020-02-04, 3 files, -258/+222)
  Convert gf_msg() to gf_smsg().
  Change-Id: Ic72f2513e641cfcbe074933cb2697ee9fc05a766 Updates: #657 Signed-off-by: yatip <ypadia@redhat.com>
* xlator/dht-linkfile: structure logging (yatip, 2020-02-04, 2 files, -26/+269)
  Convert all gf_msg() to gf_smsg().
  Updates: #657 Change-Id: I9104ba8a8102f04d031a208abb06b6cf8ea8fd13 Signed-off-by: yatip <ypadia@redhat.com>
* Improve logging in EC, client and lock translator (Ashish Pandey, 2020-02-03, 2 files, -3/+4)
  Change-Id: I98af8672a25ff9fd9dba91a2e1384719f9155255 Fixes: bz#1779760
* geo-rep: Fix for "Transport End Point not connected" issue (Harpreet Kaur, 2020-01-31, 2 files, -0/+63)
  problem: The geo-rep gsyncd process mounts the master and slave volumes on the master and slave nodes respectively and starts the sync. But it doesn't wait for the mount to be in a ready state to accept I/O. The gluster mount is considered to be ready when all the distribute sub-volumes are up. If all the distribute subvolumes are not up, it can cause an ENOTCONN error when a lookup comes for a file on a subvol that is down.
  solution: Added a virtual xattr "dht.subvol.status" which returns "1" if all subvols are up and "0" otherwise. Geo-rep then uses this virtual xattr after a fresh mount to check whether all subvols are up or not, and only then starts the I/O.
  fixes: bz#1664335 Change-Id: If3ad01d728b1372da7c08ccbe75a45bdc1ab2a91 Signed-off-by: Harpreet Kaur <hlalwani@redhat.com> Signed-off-by: Kotresh HR <khiremat@redhat.com>
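  A sketch of how a client such as gsyncd could consume this xattr after mounting. The key name comes from the commit message; the helper, its error semantics, and the use of Linux getxattr(2) directly on the mount root are assumptions:

      #include <sys/types.h>
      #include <sys/xattr.h>

      static int
      all_dht_subvols_up(const char *mount_path)
      {
          char val[2] = {0};
          ssize_t n = getxattr(mount_path, "dht.subvol.status",
                               val, sizeof(val) - 1);

          if (n <= 0)
              return -1;        /* mount not ready or xattr unsupported */
          return val[0] == '1'; /* "1" => all subvols up, safe to start I/O */
      }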
* afr: restore timestamp of files during metadata heal (Sheetal Pamecha, 2020-01-24, 1 file, -6/+2)
  During metadata heal, we restore timestamps only for non-regular (char, block etc.) files. Extending it to regular files, as the timestamp can also be updated via the touch command.
  fixes: bz#1787274 Change-Id: I26fe4fb6dff679422ba4698a7f828bf62ca7ca18 Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>
* dht-hashfn.c: remove a strlen() (Yaniv Kaul, 2020-01-14, 1 file, -16/+19)
  We already have the length of the name, or when we munge it, we can return its new length instead of calling strlen() again. Also, slightly reduce the code under the lock.
  Change-Id: I0141b0725ed1a4134d8d9f81ed1187b551b038b5 updates: bz#1193929 Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* multiple xlators: reduce key length (Yaniv Kaul, 2020-01-14, 1 file, -3/+3)
  In many cases, we were freely allocating long keys with no need. Smaller char arrays are just fine almost anywhere, so we went ahead and looked at where smaller ones can be used. In some cases, annotated the functions as static and the prefixes passed as const, as it was easier to read and understand. Where relevant, converted the dict functions to use a known key length.
  Change-Id: I882ab33ea20d90b63278336cd1370c09ffdab7f2 updates: bz#1193929 Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* dht-rename.c: fix Coverity issues 1397018/7 - strcat into uninitialized value (Yaniv Kaul, 2020-01-10, 1 file, -0/+4)
  Initialize both src and dst if they were not initialized already.
  fixes: CID#1397018 Change-Id: Ic91954423953e8bf24eaa11fc2798c554f304d28 updates: bz#1193929 Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
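  The fix pattern in miniature (buffer names are illustrative, not the ones in dht-rename.c):

      #include <limits.h>
      #include <string.h>

      static void
      build_paths(const char *parent, char *src_out, char *dst_out)
      {
          char src[PATH_MAX] = {0};   /* zeroed: strcat() now appends to "" */
          char dst[PATH_MAX] = {0};   /* instead of to indeterminate memory */

          strcat(src, parent);
          strcat(dst, parent);
          strcpy(src_out, src);
          strcpy(dst_out, dst);
      }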
* afr: expose cluster.optimistic-change-log to CLI. (Ravishankar N, 2020-01-07, 1 file, -0/+2)
  This volume option was not made available to the `gluster volume set` CLI.
  Reported-by: epolakis(https://github.com/kinsu) in https://github.com/gluster/glusterfs/issues/781
  fixes: bz#1787554 Change-Id: I7141bdd4e53ee99e22b354edde8d023bfc0b2cd7 Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* afr: simplify afr_has_quorum() (Yaniv Kaul, 2020-01-02, 3 files, -29/+10)
  1. Perform AFR_COUNT() once, in afr_has_quorum(), and pass the result to afr_lookup_has_quorum().
  2. Simplify afr_lookup_has_quorum() by passing fewer parameters to it (via the change in item 1 above).
  3. Make afr_is_add_replica_mount_lookup_on_root() a static function.
  4. Remove dead code: afr_decide_heal_info(), which was not used.
  Change-Id: If9168cd01e22788a0e60b91e315787d2aa60e97b updates: bz#1193929 Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* Avoid buffer overwrite due to uuid_utoa() misuse (Dmitry Antipov, 2019-12-27, 1 file, -2/+3)
  Code like: f(..., uuid_utoa(x), uuid_utoa(y)); is not valid (it causes undefined behaviour) because uuid_utoa() uses a single static thread-local buffer which will be overwritten by the subsequent call. All such cases should be converted to use uuid_utoa_r() with an explicitly specified buffer.
  Change-Id: I5e72bab806d96a9dd1707c28ed69ca033b9c8d6c Updates: bz#1193929 Signed-off-by: Dmitry Antipov <dmantipov@yandex.ru>
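  A toy model of the bug and the fix, with an int formatter standing in for uuid_utoa()/uuid_utoa_r():

      #include <stdio.h>

      static char *
      fmt(int v)
      {
          static char buf[16];          /* one shared buffer, like uuid_utoa() */
          snprintf(buf, sizeof(buf), "%d", v);
          return buf;
      }

      static void
      fmt_r(int v, char *dst, size_t len) /* caller-supplied buffer */
      {
          snprintf(dst, len, "%d", v);
      }

      int
      main(void)
      {
          char a[16], b[16];

          /* printf("%s %s\n", fmt(1), fmt(2));  -- undefined behaviour:
           * both arguments alias the same static buffer. */
          fmt_r(1, a, sizeof(a));
          fmt_r(2, b, sizeof(b));
          printf("%s %s\n", a, b);      /* prints "1 2" as intended */
          return 0;
      }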
* afr_inode/xlator: structure logging (yatipadia, 2019-12-20, 3 files, -57/+159)
  Convert gf_msg() into gf_smsg().
  Change-Id: I8f5b7bbb9caa78902b06f67257502b67adab7405 Updates: #657 Signed-off-by: yatipadia <ypadia@redhat.com>
* DHT - Reduce methods scope (dht-common.c) (Barak Sason Rofman, 2019-12-17, 3 files, -804/+708)
  Methods that should have been static were defined as global, and the other way around. This patch fixes the issue in order to enforce encapsulation.
  updates: bz#1776757 Change-Id: I3eb5781849c5e597c1dd347e03f356c00db62a39 Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
* dht-common.h/dht-helper.c: extract the LOCK() from DHT_UPDATE_TIME (Yaniv Kaul, 2019-12-17, 2 files, -20/+20)
  Currently, the only code using this macro is in dht_inode_ctx_time_update(), where it is called 3 times in a row, which is essentially 3 cycles of LOCK/UNLOCK on the same lock. Instead, extract the LOCK()/UNLOCK() part of the macro and wrap those calls with it.
  Change-Id: I6312b985e3d97517857b55f342440accc4063db6 updates: bz#1193929 Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
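  A sketch of the refactor with stand-in types: hoist the lock out of the per-field macro so three updates cost one LOCK/UNLOCK cycle instead of three:

      #include <pthread.h>

      struct time_ctx {
          long mtime, ctime, atime;
          pthread_mutex_t lock;
      };

      #define CTX_TIME_SET(ctx, field, val)                              \
          do {                                                           \
              if ((val) > (ctx)->field)                                  \
                  (ctx)->field = (val); /* lock no longer taken here */  \
          } while (0)

      static void
      ctx_time_update(struct time_ctx *ctx, long m, long c, long a)
      {
          pthread_mutex_lock(&ctx->lock); /* once, for all three fields */
          CTX_TIME_SET(ctx, mtime, m);
          CTX_TIME_SET(ctx, ctime, c);
          CTX_TIME_SET(ctx, atime, a);
          pthread_mutex_unlock(&ctx->lock);
      }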
* afr: make heal info lockless (Ravishankar N, 2019-12-12, 3 files, -203/+211)
  Changes in the locks xlator: Added support for per-domain inodelk count requests. The caller needs to set the GLUSTERFS_MULTIPLE_DOM_LK_CNT_REQUESTS key in the dict and then set each key with the name 'GLUSTERFS_INODELK_DOM_PREFIX:<domain name>'. In the response dict, the xlator will send the per-domain count as the value for each of these keys.
  Changes in AFR: Replaced afr_selfheal_locked_inspect() with afr_lockless_inspect(). Logic has been added to make the latter behave the same as the former, thus not breaking the current heal info output behaviour.
  fixes: bz#1774011 Change-Id: Ie9e83c162aa77f44a39c2ba7115de558120ada4d Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* afr/self-heald - Missing error logs (Barak Sason Rofman, 2019-12-10, 2 files, -24/+77)
  As a follow-up on https://review.gluster.org/#/c/glusterfs/+/23749/, adding error logging for the entire method. In addition, converted the logging in the method to structured logging.
  Fixes: bz#1778457 Change-Id: I1f412159e6849d6f6ddbde53ec4a85ad709bbdf4 Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
* cluster/dht: Add comments to code (N Balachandran, 2019-12-10, 2 files, -2/+14)
  Change-Id: Ieb7531af19ae89fb8a8387e81663c7f157b10c02 Updates: bz#1765421 Signed-off-by: N Balachandran <nbalacha@redhat.com>
* cluster/afr - coverity issue fix (Barak Sason Rofman, 2019-11-28, 1 file, -0/+5)
  Added a log for a failure in order to avoid an "unused variable" coverity issue.
  fixes: CID#1274209 Change-Id: Ibc6b0ab4bdff482096e42e88fd4c8c7eadfeeadb Updates: bz#789278 Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
* Revert "afr: make heal info lockless"Ravishankar N2019-11-283-178/+68
| | | | | | | | This reverts commit fce5f68bc72d448490a0d41be494ac54a9181b3c. I merged the wrong patch by mistake! Hence reverting it. updates: bz#1774011 Change-Id: Id7d6ed1d727efc02467c8a9aea3374331261ebd5
* afr: make heal info lockless (Ravishankar N, 2019-11-28, 3 files, -68/+178)
  Changes in the locks xlator: Added support for per-domain inodelk count requests. The caller needs to set the GLUSTERFS_MULTIPLE_DOM_LK_CNT_REQUESTS key in the dict and then set each key with the name 'GLUSTERFS_INODELK_DOM_PREFIX:<domain name>'. In the response dict, the xlator will send the per-domain count as the value for each of these keys.
  Changes in AFR: Replaced afr_selfheal_locked_inspect() with afr_lockless_inspect(). Logic has been added to make the latter behave the same as the former, thus not breaking the current heal info output behaviour.
  fixes: bz#1774011 Change-Id: I9ae08ce768b39aeb6ee230207b5b7fa744176952 Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* cluster/dht: Add comments to the code (N Balachandran, 2019-11-16, 1 file, -12/+75)
  Add comments to the code to explain what is being done and why.
  Change-Id: I50831d7bd4bb73e75f6cda05fafaeb5a8619baae Updates: bz#1765421 Signed-off-by: N Balachandran <nbalacha@redhat.com>
* afr: fix log flooding (Ravishankar N, 2019-11-16, 2 files, -1/+4)
  Commit "ccf33e789 - dict.c: remove redundant checks" removed some NULL checks in certain dict functions. This caused flooding of fuse mount logs when I/O was done on the mount on a replica volume. Message:
  W [dict.c:1478:dict_get_with_refn] (-->/usr/local/lib/libglusterfs.so.0(dict_get_uint32+0x4d) [0x7ff9121ec963] -->/usr/local/lib/libglusterfs.so.0(dict_get_with_ref+0x90) [0x7ff9121eb93f] -->/usr/local/lib/libglusterfs.so.0(+0x229be) [0x7ff9121eb9be] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid argument]
  Fix: In the relevant AFR functions, check that the dict is not NULL before trying to perform operations on it. See the bug description for more details.
  fixes: bz#1772006 Change-Id: I30c89c0b5d6c80cc86a6047aae70127769412120 Signed-off-by: Ravishankar N <ravishankar@redhat.com>
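  The defensive pattern of the fix, sketched with a stand-in dict API (the key string appears in the quoted log message):

      typedef struct dict dict_t;
      extern int dict_get_uint32(dict_t *d, char *key, unsigned int *val);

      static unsigned int
      get_lk_mode(dict_t *xdata)
      {
          unsigned int mode = 0;

          if (xdata != NULL)    /* the dict can legitimately be absent */
              dict_get_uint32(xdata, "glusterfs.lk.lkmode", &mode);
          return mode;
      }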
* cluster/dht: Don't skip entries with invalid stat (N Balachandran, 2019-11-13, 1 file, -5/+4)
  Don't strip out entries with invalid stats in dht_readdirp_cbk.
  Change-Id: I136ab342762d020a3c0f43e51e0090aed2af4120 Fixes: bz#1769754 Signed-off-by: N Balachandran <nbalacha@redhat.com>
* cluster/ec: Change handling of heal failure to avoid crash (Ashish Pandey, 2019-11-04, 2 files, -13/+13)
  Problem: ec_getxattr_heal_cbk was called with NULL as its second argument in case heal was failing. This function was dereferencing the "cookie" argument, which caused a crash.
  Solution: The cookie is changed to carry the value that was supposed to be stored in fop->data, so even when fop is NULL in the error case, there won't be any NULL dereference.
  Thanks to Xavi for the suggestion about the fix.
  Change-Id: I0798000d5cadb17c3c2fbfa1baf77033ffc2bb8c fixes: bz#1729085
* afr: lock healing changes (Ravishankar N, 2019-10-30, 7 files, -36/+849)
  Implements lock healing for the gluster-block fencing use case. If mandatory lock is enabled:
  - Add domain lock/unlock to the afr_lk fop.
  - Maintain a list of locks to be healed in afr_private_t.
  - Add a lock to the list if afr_lk(F_SETLK or F_SETLKW) was successful.
  - Remove it from the list during afr_lk(F_UNLCK).
  - On child_down, mark the lock as needing heal on that child. If the lock is lost on a quorum no. of bricks, remove it from the list and mark the fd bad.
  - For fds marked as bad, fail the subsequent fd-based fops.
  - On parent up, traverse the list and heal the locks IFF the client is the lk owner and has quorum. (shd does not heal any locks.)
  updates: #613 Change-Id: I03c46ceaea30f5e6236d5ec13f71d843d827f1bc Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* cluster/afr: Take a copy of xattr-req (Pranith Kumar K, 2019-10-30, 1 file, -2/+7)
  Afr adds its own xattrs to the req, so it should take a copy of the dictionary to prevent the parent xlator from re-using the modified xattr-req with another subvolume.
  fixes: bz#1765155 Change-Id: I268e2dbd1b12323135d369e90a22a8bdde2cf7c2 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* dht: Rebalance causing IO Error - File descriptor in bad state (Mohit Agrawal, 2019-10-15, 5 files, -17/+116)
  Problem: When a file is migrated, dht attempts to re-open all open fds on the new cached subvol. Earlier, if dht had not opened the fd, the client xlator would be unable to find the remote fd and would fall back to using an anon fd for the fop. That behavior changed with https://review.gluster.org/#/c/glusterfs/+/15804, causing fops to fail with EBADFD if the fd was not available on the cached subvol. The client xlator returns EBADFD if the remote fd is not found, but dht only checks for EBADF before re-opening fds on the new cached subvol.
  Solution: Handle EBADFD in the dht code path to avoid the issue.
  Change-Id: I43c51995cdd48d05b12e4b2889c8dbe2bb2a72d8 Fixes: bz#1758579
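  The widened check is small; a hedged sketch (EBADFD is a Linux-specific errno value, and the helper name is illustrative):

      #include <errno.h>

      /* Treat both errno values as "remote fd missing on the new cached
       * subvol, attempt a re-open" rather than as a hard failure. */
      static int
      fd_needs_reopen(int op_errno)
      {
          return (op_errno == EBADF || op_errno == EBADFD);
      }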
* cluster/afr: Add afr_seek to fops table (Pranith Kumar K, 2019-10-14, 2 files, -0/+4)
  fixes: bz#1760189 Change-Id: Iffbf8d6f4c50b8e2de8364658697bdbe96549f5d Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* Multiple files: make root gfid a static variable (Yaniv Kaul, 2019-10-14, 4 files, -15/+4)
  In many places we use it, compare to it, etc. It could be a static variable, as it really doesn't change. I think it's better than initializing to 0 and then doing gfid[15] = 1 or other tricks. I think there are additional opportunities to make more variables static. This is an attempt at an easy one.
  Change-Id: I7f23a30a94056d8f043645371ab841cbd0f90d19 updates: bz#1193929 Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
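  The idea in miniature (type and helper names are illustrative):

      #include <string.h>

      typedef unsigned char gfid_raw_t[16]; /* a gfid is 16 raw bytes */

      static const gfid_raw_t root_gfid = {
          0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1
      };

      static int
      is_root_gfid(const unsigned char *gfid)
      {
          return memcmp(gfid, root_gfid, sizeof(root_gfid)) == 0;
      }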
* afr: align structs (Yaniv Kaul, 2019-10-14, 2 files, -138/+138)
  Squash >50 warnings about padding of structs in afr structures. The warnings were found by manually adding '-Wpadded' to the GCC command line.
  Change-Id: I961fbdeb33715cedf3dd10db8e4f8ef40cd3e867 updates: bz#1193929 Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* cluster/afr: Heal entries when there is a source & no healed_sinks (karthik-us, 2019-10-09, 1 file, -0/+15)
  Problem: In a situation where B1 blames B2, B2 blames B1 and B3 doesn't blame anything for entry heal, the heal will not complete even though we have a clear source and sinks. This happens because in afr_selfheal_find_direction() only the bricks which are blamed by non-accused bricks are considered as sinks. Later in __afr_selfheal_entry_finalize_source(), when it tries to mark all the non-sources as sinks, it fails to do so because there won't be any healed_sinks marked, no witness present, and there will be a source.
  Fix: If there is a source and no healed_sinks, then reset all the locked sources to 0 and healed sinks to 1 to do a conservative merge.
  Change-Id: If40d8bc95d52a52b2730f55bdcf135109b421548 Fixes: bz#1749322 Signed-off-by: karthik-us <ksubrahm@redhat.com>
* afr: support split-brain CLI for replica 3 (Ravishankar N, 2019-10-09, 1 file, -1/+2)
  Ever since we added quorum checks for lookups in afr via commit bd44d59741bb8c0f5d7a62c5b1094179dd0ce8a4, the split-brain resolution commands would not work for replica 3 because there would be no readables for the lookup fop.
  The argument was that split-brains do not occur in replica 3, but we do see (data/metadata) split-brain cases once in a while, which indicate that there are a few bugs/corner cases yet to be discovered and fixed.
  Fortunately, commit 8016d51a3bbd410b0b927ed66be50a09574b7982 added GF_CLIENT_PID_GLFS_HEALD as the pid for all fops made by glfsheal. If we leverage this and allow lookups in afr when the pid is GF_CLIENT_PID_GLFS_HEALD, split-brain resolution commands will work for replica 3 volumes too.
  Likewise, the check is added in shard_lookup as well to permit resolving split-brains by specifying "/.shard/shard-file.xx" as the file name (which previously used to fail with EPERM).
  Change-Id: I3c543dea79caf7cfbc1633e9089cb1cdd2538ba9 Fixes: bz#1756938 Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* afr: replace afr_frame_return() when possible with direct call (Yaniv Kaul, 2019-10-07, 5 files, -15/+9)
  If you are already under lock, just decrement the call count directly instead of releasing the lock, re-taking it and decrementing.
  Implements https://github.com/gluster/glusterfs/issues/728
  updates: bz#1193929 Signed-off-by: Yaniv Kaul <ykaul@redhat.com> Change-Id: I3fa20b4651fbdb826655c5a03baeed46e99b5487
* cluster/dht: Correct fd processing loop (N Balachandran, 2019-10-02, 1 file, -22/+62)
  The fd processing loops in the dht_migration_complete_check_task and the dht_rebalance_inprogress_task functions were unsafe and could cause an open to be sent on an already freed fd. This has been fixed.
  Change-Id: I0a3c7d2fba314089e03dfd704f9dceb134749540 Fixes: bz#1757399 Signed-off-by: N Balachandran <nbalacha@redhat.com>
* cluster/ec: Implement read-mask feature (Pranith Kumar K, 2019-09-27, 3 files, -0/+82)
  fixes: #725 Change-Id: Iaaefe6f49c8193c476b987b92df6bab3e2f62601 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* cluster/ec: prevent filling shd log with "table not found" messages (Xavi Hernandez, 2019-09-26, 1 file, -2/+13)
  When the self-heal daemon receives an inodelk contention notification, it tries to locate the related inode using inode_find() and the inode table owned by the top-most xlator, which in this case doesn't have any inode table. This causes many messages to be logged by the inode_find() function because the inode table passed is NULL.
  This patch prevents this by making sure the inode table is not NULL before calling inode_find().
  Change-Id: I8d001bd180aaaf1521ba40a536b097fcf70c991f Fixes: bz#1755344 Signed-off-by: Xavi Hernandez <jahernan@redhat.com>