path: root/tests
Commit message (Author, Date, Files changed, Lines -/+)
* ssl/test: ssl test case is failing when using specific cipher (Mohit Agrawal, 2019-11-15, 1 file, -19/+40)
    Problem: On RHEL-8 the ssl test case fails when trying to connect
    with a peer after using a specific cipher.

    Solution: If a cipher is not supported by openssl on RHEL-8, the
    test case fails. To avoid the issue, validate the cipher before
    connecting with the peer.

    Change-Id: I96d92d3602cf7fd40337126c8305a3f8925faf9b
    Fixes: bz#1756900
    Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
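    A minimal sketch of such a pre-check in a .t test, assuming the
    shell test framework and an illustrative cipher name (the cipher
    actually exercised by the test may differ):

        # Skip the SSL test if this openssl build rejects the cipher;
        # "openssl ciphers" exits non-zero for an unknown cipher list.
        CIPHER="AES256-SHA"
        if ! openssl ciphers "$CIPHER" >/dev/null 2>&1; then
            echo "cipher $CIPHER not supported by openssl; skipping"
            exit 0
        fi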
* glusterd: Client Handling of Elastic Clusters (Mohit Agrawal, 2019-11-12, 2 files, -0/+65)
    Configure the list of gluster servers in the key
    GLUSTERD_BRICK_SERVERS at the time of the GETSPEC RPC CALL and
    access the value on the client side to update the volfile server
    list, so that the client is able to connect to the next volfile
    server when the current volfile server is down.

    Updates #741
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    Change-Id: I23f36ddb92982bb02ffd83937a8bd8a2c97e8104
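    For context, clients can already be pointed at fallback volfile
    servers explicitly at mount time; the change above populates such a
    list automatically. A sketch with placeholder hostnames:

        # Fall back to server2/server3 for volfile fetches if server1
        # is unreachable.
        mount -t glusterfs -o backup-volfile-servers=server2:server3 \
            server1:/myvol /mnt/glusterfs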
* cli: display detailed rebalance info (Sanju Rakonde, 2019-11-12, 1 file, -0/+9)
    Problem: When one of the nodes in a cluster is down, rebalance
    status does not display detailed information.

    Cause: In glusterd_volume_rebalance_use_rsp_dict() we aggregate the
    rsp from all the nodes into a dictionary and send it to cli for
    printing. While assigning an index to the keys we consider all the
    peers instead of only the peers which are up. Because of this the
    index may never reach 1, so while parsing the rsp the cli is unable
    to find the status-1 key in the dictionary and exits without
    printing any information.

    Solution: The simplest fix for this without much code change is to
    continue to look for other keys when the status-1 key is not found.

    fixes: bz#1764119
    Change-Id: I0062839933c9706119eb85416256eade97e976dc
    Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
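    A hedged sketch of the reproducer as a .t test using the cluster.rc
    helpers from the test framework (the exact steps in the real test
    may differ):

        launch_cluster 3
        TEST $CLI_1 peer probe $H2
        TEST $CLI_1 peer probe $H3
        TEST $CLI_1 volume create $V0 $H1:$B1/$V0 $H2:$B2/$V0 $H3:$B3/$V0
        TEST $CLI_1 volume start $V0
        TEST $CLI_1 volume rebalance $V0 start
        TEST kill_glusterd 3
        # Detailed per-node output should still be printed for live nodes
        TEST $CLI_1 volume rebalance $V0 status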
* rpc: Cleanup SSL specific data at the time of freeing rpc object (l17zhou, 2019-11-08, 1 file, -3/+20)
    Problem: At the time of cleaning up the rpc object, ssl specific
    data is not freed, so it leaks.

    Solution: To avoid the leak, clean up ssl specific data at the time
    of cleaning up the rpc object.

    Credits: l17zhou <cynthia.zhou@nokia-sbell.com.cn>
    Fixes: bz#1768407
    Change-Id: I37f598673ae2d7a33c75f39eb8843ccc6dffaaf0
* glusterfsd-mgmt: unify read and write tests (Yaniv Kaul, 2019-11-07, 1 file, -0/+3)
    1. Both read and write tests required writing first: either just
       writing (write test) or writing and then reading (read test).
       So the code is now unified.
    2. There's no reason to read zeros from /dev/zero. Just use a
       CALLOC'ed buffer. I don't think we should read and write zeros,
       but I did not change that code yet (I think compression and/or
       dedup will offset results).

    It appears neither read-perf nor write-perf were tested, so basic
    tests were added for them.

    Change-Id: I24b1f249fa0335ed652a8982e99c0687d940230e
    updates: bz#1193929
    Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
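    The read-perf/write-perf code paths referred to here back the
    "volume top" CLI; a hedged usage sketch (block size and count
    values are arbitrary):

        TEST $CLI volume top $V0 write-perf bs 4096 count 100
        TEST $CLI volume top $V0 read-perf bs 4096 count 100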
* afr: lock healing changes (Ravishankar N, 2019-10-30, 4 files, -0/+612)
    Implements lock healing for the gluster-block fencing use case.

    If mandatory lock is enabled:
    - Add domain lock/unlock to the afr_lk fop.
    - Maintain a list of locks to be healed in afr_private_t.
    - Add a lock to the list if afr_lk(F_SETLK or F_SETLKW) was
      successful.
    - Remove it from the list during afr_lk(F_UNLCK).
    - On child_down, mark the lock as needing heal on that child. If
      the lock is lost on a quorum no. of bricks, remove it from the
      list and mark the fd bad.
    - For fds marked as bad, fail the subsequent fd based fops.
    - On parent up, traverse the list and heal the locks IFF the client
      is the lk owner and has quorum. (shd does not heal any locks.)

    updates: #613
    Change-Id: I03c46ceaea30f5e6236d5ec13f71d843d827f1bc
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
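    The precondition above is set per volume; a sketch, assuming the
    locks xlator's mandatory-locking option ("forced" is one of its
    modes):

        TEST $CLI volume set $V0 locks.mandatory-locking forced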
* ssl/test: Change the rsa key length to 2048 (Mohammed Rafi KC, 2019-10-29, 5 files, -5/+5)
    On a rhel-8 machine, we need to have a key length greater than or
    equal to 2048. So changing the values to 2048 to pass the test.

    Credits: Mohit Agrawal <moagrawal@redhat.com>
    Change-Id: I0f21db4d737203d0b2e44e7e61f50ae1279795ad
    Updates: bz#1756900
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
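    A minimal sketch of generating a 2048-bit key and a self-signed
    certificate the way the SSL tests typically do ($SSL_KEY and
    $SSL_CERT are placeholder paths defined by the individual tests):

        TEST openssl genrsa -out $SSL_KEY 2048
        TEST openssl req -new -x509 -key $SSL_KEY -subj /CN=$H0 -out $SSL_CERT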
* test: fix suspicious non-root geo-rep test failures (Sunny Kumar, 2019-10-25, 1 file, -1/+1)
    Export of an env variable is required for the ssh-copy-id command.

    fixes: bz#1765426
    Change-Id: Icaf7a848cb8f4ae9f887d885a8c5bb71f26633b4
    Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* Worm: xattr update on changing access time of a WORM-Retained file (Vishal Pandey, 2019-10-23, 1 file, -0/+31)
    The retention period must be updated on changing the access time of
    a worm-retained file. The retention period must be changed in the
    "trusted.reten-state" xattr.

    Change-Id: Ieab758a4cf6da3b4bb1d6a3e4f95f400c8a11f1d
    Fixes: bz#1554286
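    A hedged sketch of verifying the behaviour (file name is a
    placeholder; trusted.* xattrs are read from the brick path since
    they are filtered on the mount):

        # Bump atime on the retained file, then confirm the retention
        # state xattr reflects the change.
        TEST touch -a -d "2030-01-01 00:00:00" $M0/retained-file
        getfattr -d -m. -e hex $B0/${V0}1/retained-file | grep reten-state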
* geo-rep: Fix Permission denied traceback on non root setup (Kotresh HR, 2019-10-21, 1 file, -6/+24)
    Problem: While syncing a rename of a directory in hybrid crawl,
    geo-rep crashes as below.

    Traceback (most recent call last):
      File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 118, in worker
        res = getattr(self.obj, rmeth)(*in_data[2:])
      File "/usr/local/libexec/glusterfs/python/syncdaemon/resource.py", line 588, in entry_ops
        src_entry = get_slv_dir_path(slv_host, slv_volume, gfid)
      File "/usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 687, in get_slv_dir_path
        [ENOENT], [ESTALE])
      File "/usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 546, in errno_wrap
        return call(*arg)
    PermissionError: [Errno 13] Permission denied: '/bricks/brick1/b1/.glusterfs/8e/c0/8ec0fcd4-d50f-4a6e-b473-a7943ab66640'

    Cause: Conversion of gfid to path for a directory uses readlink on
    the backend .glusterfs gfid path. But this fails for a non root
    user with permission denied.

    Fix: Use the gfid2path interface to get the path from the gfid.

    Change-Id: I9d40c713a1b32cea95144cbc0f384ada82972222
    fixes: bz#1763439
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
* geo-rep: Fix config upgrade on non-participating node (Kotresh HR, 2019-10-17, 2 files, -0/+179)
    After upgrade, if the config files are in the old format, they get
    migrated to the new format. The monitor process migrates them.
    Since the monitor doesn't run on nodes where bricks are not hosted,
    the config doesn't get migrated there. So this patch fixes the
    config upgrade on nodes which don't host bricks. The upgrade
    happens during config get/set/reset.

    Change-Id: Ibade2f2310b0f3affea21a3baa1ae0eb71162cba
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    fixes: bz#1762220
* tests: add tests for bz#1758878 (Ravishankar N, 2019-10-17, 4 files, -34/+67)
    updates: bz#1193929
    Change-Id: I517fa29e57bde970c2c22ebc2de80fec1509cd2d
    Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* tests: Fix spurious failure (Pranith Kumar K, 2019-10-16, 1 file, -2/+2)
    fixes: bz#1759002
    Change-Id: I4d49e1c2ca9b3c1d74b9dd5a30f1c66983a76529
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* tests: Specify bs for dd (Pranith Kumar K, 2019-10-16, 1 file, -1/+1)
    On some distros the default bs for dd is very slow and the test
    takes close to 2 minutes instead of 20 seconds.

    fixes: bz#1761769
    Change-Id: If10d595a7ca05f053237f3c5ffbb09c5151eab35
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
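    A minimal example of the fix pattern: dd defaults to a 512-byte
    block size on most distros, so specifying a larger bs turns the
    copy into far fewer, larger writes (file name and sizes are
    illustrative):

        TEST dd if=/dev/zero of=$M0/datafile bs=1M count=100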
* tests: Remove hard-coding of lower bound (Pranith Kumar K, 2019-10-16, 1 file, -4/+7)
    Problem: Total blocks of an XFS partition depend on the XFS
    parameters used for formatting, so a hardcoded lower bound may not
    be the lower bound for different default parameters. Used blocks
    are lower on my machine for a freshly formatted XFS partition
    compared to the machine where the test fails.

    On my machine:
    Filesystem   1024-blocks  Used  Available  Capacity  Mounted on
    /dev/loop0         98980  5472      93508        6%  /d/backends/patchy1
    /dev/loop1         98980  5472      93508        6%  /d/backends/patchy2

    On a machine where this test fails:
    Filesystem   1024-blocks  Used  Available  Capacity  Mounted on
    /dev/loop0         96928  6112      90816        7%  /d/backends/patchy1
    /dev/loop1         96928  6112      90816        7%  /d/backends/patchy2

    Fix: Make the lower bound 2% less than the blocks available on the
    brick.

    fixes: bz#1761759
    Change-Id: I974d5e75766f7ff44780a2e4c2a19cd5d1d14a79
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
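    A sketch of deriving the bound from the brick itself instead of
    hardcoding it (the awk field position assumes the POSIX df output
    shown above):

        brick_blocks=$(df -P $B0/${V0}1 | awk 'NR==2 {print $2}')
        lower_bound=$((brick_blocks * 98 / 100))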
* tests: mark tests/bugs/glusterd/quorum-value-check.t as NFS test (Sanju Rakonde, 2019-10-15, 1 file, -0/+2)
    Fixes: bz#1665358
    Change-Id: Iea000dd839d4e4dbef45941f97ab3725a2aa1726
    Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* cluster/afr: Add afr_seek to fops table (Pranith Kumar K, 2019-10-14, 3 files, -1/+57)
    fixes: bz#1760189
    Change-Id: Iffbf8d6f4c50b8e2de8364658697bdbe96549f5d
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* glusterd: rebalance start should fail when quorum is not met (Sanju Rakonde, 2019-10-10, 1 file, -0/+2)
    Rebalance start should not succeed if quorum is not met. This patch
    adds a condition to check whether quorum is met in the
    pre-validation stage.

    fixes: bz#1760467
    Change-Id: Ic7d0d08f69e4bc6d5e7abae713ec1881531c8ad4
    Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
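    A hedged sketch of the expected behaviour, continuing a cluster.rc
    style setup like the one sketched earlier: with server quorum
    enforced and a majority of glusterds down, rebalance start must be
    rejected in pre-validation.

        TEST $CLI_1 volume set $V0 cluster.server-quorum-type server
        TEST kill_glusterd 2
        TEST kill_glusterd 3
        TEST ! $CLI_1 volume rebalance $V0 start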
* Fix spurious failure in bug-1744548-heal-timeout.t (Pranith Kumar K, 2019-10-09, 1 file, -6/+11)
    The script was assuming that the heal would have been triggered by
    the time the test was executed, which may not be the case. It can
    lead to the following failures when the race happens:

    ...
    18:29:45 not ok 14 [ 85/ 1] < 26> '[ 331 == 333 ]' -> ''
    ...
    18:29:45 not ok 16 [ 10097/ 1] < 33> '[ 668 == 666 ]' -> ''

    Heal on the 3rd brick didn't start completely the first time the
    command was executed, so the extra count got added to the next
    profile info. Fixed it by depending on cumulative stats and waiting
    until the count is satisfied using EXPECT_WITHIN.

    fixes: bz#1759002
    Change-Id: I3b410671c902d6b1458a757fa245613cb29d967d
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
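    A sketch of the fix pattern: poll cumulative profile stats with
    EXPECT_WITHIN instead of asserting a count immediately
    (get_cumulative_opendir_count is a hypothetical helper; the real
    test's function may differ):

        function get_cumulative_opendir_count {
            $CLI volume profile $V0 info cumulative | grep -w OPENDIR | wc -l
        }
        EXPECT_WITHIN $HEAL_TIMEOUT "^3$" get_cumulative_opendir_count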
* cluster/afr: Heal entries when there is a source & no healed_sinks (karthik-us, 2019-10-09, 1 file, -0/+89)
    Problem: In a situation where B1 blames B2, B2 blames B1 and B3
    doesn't blame anything for entry heal, heal will not complete even
    though we have a clear source and sinks. This happens because in
    afr_selfheal_find_direction() only the bricks which are blamed by
    non-accused bricks are considered as sinks. Later in
    __afr_selfheal_entry_finalize_source(), when it tries to mark all
    the non-sources as sinks, it fails to do so because there won't be
    any healed_sinks marked, no witness present, and there will be a
    source.

    Fix: If there is a source and no healed_sinks, then reset all the
    locked sources to 0 and healed sinks to 1 to do a conservative
    merge.

    Change-Id: If40d8bc95d52a52b2730f55bdcf135109b421548
    Fixes: bz#1749322
    Signed-off-by: karthik-us <ksubrahm@redhat.com>
* tests: Fix spurious failure in bug-1134691-afr-lookup-metadata-heal.t (Ravishankar N, 2019-10-09, 1 file, -0/+2)
    Problem: The .t was examining the sink brick's iatt value before
    the launched client-side metadata heal got a chance to complete.

    Fix: Wait for heal completion.

    Fixes: bz#1759081
    Change-Id: I4dd4e3a1cccf35fd18e8cdfea6aa76a726a4763b
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
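    The usual "wait for heal completion" idiom in afr tests, using the
    get_pending_heal_count helper from the test framework:

        EXPECT_WITHIN $HEAL_TIMEOUT "^0$" get_pending_heal_count $V0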
* afr: support split-brain CLI for replica 3 (Ravishankar N, 2019-10-09, 1 file, -0/+111)
    Ever since we added quorum checks for lookups in afr via commit
    bd44d59741bb8c0f5d7a62c5b1094179dd0ce8a4, the split-brain
    resolution commands would not work for replica 3 because there
    would be no readables for the lookup fop.

    The argument was that split-brains do not occur in replica 3, but
    we do see (data/metadata) split-brain cases once in a while which
    indicate that there are a few bugs/corner cases yet to be
    discovered and fixed.

    Fortunately, commit 8016d51a3bbd410b0b927ed66be50a09574b7982 added
    GF_CLIENT_PID_GLFS_HEALD as the pid for all fops made by glfsheal.
    If we leverage this and allow lookups in afr when the pid is
    GF_CLIENT_PID_GLFS_HEALD, split-brain resolution commands will work
    for replica 3 volumes too.

    Likewise, the check is added in shard_lookup as well to permit
    resolving split-brains by specifying "/.shard/shard-file.xx" as the
    file name (which previously used to fail with EPERM).

    Change-Id: I3c543dea79caf7cfbc1633e9089cb1cdd2538ba9
    Fixes: bz#1756938
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
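    For reference, the resolution commands this change makes usable on
    replica 3 (file path and brick are placeholders):

        TEST $CLI volume heal $V0 split-brain latest-mtime /file1
        TEST $CLI volume heal $V0 split-brain source-brick $H0:$B0/${V0}0 /file1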
* tests: add a pending test case (Amar Tumballi, 2019-10-03, 1 file, -1/+17)
    While merging the protocol handshake fixes (bz#1620580), there was
    a case which was left out. Adding it separately now.

    Change-Id: I52133d5fe160b4567400a65e60aac8f7bc20697f
    Updates: bz#1193929
    Signed-off-by: Amar Tumballi <amarts@gmail.com>
* ssl: fix RHEL8 regression failure (Sanju Rakonde, 2019-10-01, 1 file, -1/+1)
    This test fails with "SSL routines:SSL_CTX_use_certificate:ee key
    too small" on RHEL8. This change is made according to
    https://access.redhat.com/solutions/4157431

    updates: bz#1756900
    Change-Id: Ib436372c3bd94bcf7324976337add7da4088b3d5
    Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* protocol/handshake: pass volume-id for extra check (Amar Tumballi, 2019-09-30, 1 file, -0/+51)
    With the added check of volume-id during handshake, we can be sure
    not to connect with a brick if it gets re-used in another volume.
    This prevents any accidental issues which can happen with a stale
    client process lurking along.

    Also added a test case for testing the same volume name which would
    fetch a different volfile (ie, different bricks, different type),
    and a different volume name, but the same brick.

    For reference: Currently a client<->server handshake happens in
    glusterfs through the protocol/client translator (setvolume) to
    protocol/server using a dictionary which contains many keys.
    Rejection happens on the server side if some of the required keys
    are missing in the handshake dictionary.

    Till now, there was no single unique identifier for a client to
    validate that it is actually talking to the corresponding server.
    All we look at in protocol/client is a key called
    'remote-subvolume', which should match a subvolume name in the
    server volume file, and for any volume with the same brick name
    (which can be present in the same cluster due to recreate), it
    would be the same. This could cause a major issue: a client which
    was connected to a given brick in one volume would be connected to
    another volume's brick if it is re-created/re-used.

    To prevent this behavior, we now pass along 'volume-id' in the
    handshake, which is preserved for the life of the client process
    and can prevent these accidental connections.

    NOTE: This behavior wouldn't be applicable for user-snapshot
    enabled volumes, as snapshotted volumes would have different
    volume-ids.

    Fixes: bz#1620580
    Change-Id: Ie98286e94ce95ae09c2135fd6ec7d7c2ca1e8095
    Signed-off-by: Amar Tumballi <amarts@redhat.com>
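    A hedged sketch of the scenario the new check guards against:
    recreate a volume reusing the same brick path and verify that a
    client holding the old volfile can no longer attach to the brick
    (assumes "force" suffices to reuse the brick path in a test):

        TEST $CLI volume create $V0 $H0:$B0/brick0
        TEST $CLI volume start $V0
        TEST glusterfs -s $H0 --volfile-id $V0 $M0
        TEST $CLI volume stop $V0
        TEST $CLI volume delete $V0
        TEST $CLI volume create $V1 $H0:$B0/brick0 force
        TEST $CLI volume start $V1
        # With the volume-id check, fops on the stale mount must fail
        # instead of silently landing on $V1's brick.
        TEST ! ls $M0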
* tests/shard: Remove dependence on distributed cache (Pranith Kumar K, 2019-09-27, 1 file, -3/+3)
    fixes: bz#1756211
    Change-Id: Iee5b37af89ab624c16a45df364806003238280e5
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* cluster/ec: Implement read-mask feature (Pranith Kumar K, 2019-09-27, 1 file, -0/+114)
    fixes: #725
    Change-Id: Iaaefe6f49c8193c476b987b92df6bab3e2f62601
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* read-ahead/io-cache: turn off by default (Raghavendra Gowdappa, 2019-09-26, 2 files, -1/+3)
    We've found that the perf xlators io-cache and read-ahead do not
    add any performance improvement. At best read-ahead is redundant
    due to kernel read-ahead, and at worst io-cache degrades
    performance for workloads that don't involve re-reads. Given that
    VFS already has both of these functionalities, this patch turns
    these two translators off by default for native fuse mounts.

    For non-native fuse mounts like gfapi (NFS-ganesha/samba) we can
    have these xlators on by using custom profiles.

    Change-Id: Ie7535788909d4c741844473696f001274dc0bb60
    Signed-off-by: Raghavendra Gowdappa <rgowdapp@redhat.com>
    fixes: bz#1676479
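    The options remain tunable per volume for consumers that benefit
    from them:

        TEST $CLI volume set $V0 performance.read-ahead on
        TEST $CLI volume set $V0 performance.io-cache on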
* gfapi: 'glfs_h_creat_open' - new API to create handle and open fd (Soumya Koduri, 2019-09-25, 2 files, -0/+145)
    Right now we have two separate APIs: one, 'glfs_h_creat_handle', to
    create a handle, and another, 'glfs_h_open', to create a glfd to
    return to the application.

    Having two separate routines can result in access errors while
    trying to create and write into a read-only file. Since an fd is
    opened even during file/directory creation, a new API is introduced
    to make these two operations atomic, i.e., one which can create
    both the handle and the fd and pass them to the application.

    Change-Id: Ibf513fcfcdad175f4d7eb6fa7a61b8feec6d33b5
    Fixes: bz#1753569
    Signed-off-by: Soumya Koduri <skoduri@redhat.com>
* tests: test case for non-root geo-rep setup (Sunny Kumar, 2019-09-25, 1 file, -0/+251)
    Added a test case for non-root geo-rep setup.

    Change-Id: Ib6ebee79949a9f61bdc5c7b5e11b51b262750e98
    fixes: bz#1717827
    Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
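    A hedged sketch of the non-root geo-rep creation flow (user, host
    and volume names are placeholders; the mountbroker setup the real
    test performs is elided):

        TEST $CLI volume geo-replication $V0 geoaccount@$H0::$V1 create push-pem
        TEST $CLI volume geo-replication $V0 geoaccount@$H0::$V1 start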
* ctime/rebalance: Heal ctime xattr on directory during rebalance (Kotresh HR, 2019-09-16, 8 files, -1/+497)
    After add-brick and rebalance, the ctime xattr is not present on
    rebalanced directories on the new brick. This patch fixes the same.
    Note that ctime still doesn't support consistent time across the
    distribute sub-volume.

    This patch also fixes the in-memory inconsistency of time
    attributes when metadata is self healed.

    Change-Id: Ia20506f1839021bf61d4753191e7dc34b31bb2df
    fixes: bz#1734026
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
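    A sketch of the verification, assuming trusted.glusterfs.mdata is
    the xattr the ctime feature stores timestamps in (verify against
    your build) and the rebalance_status_field helper from the test
    framework; the brick and directory names are placeholders:

        TEST $CLI volume rebalance $V0 start
        EXPECT_WITHIN $REBALANCE_TIMEOUT "completed" rebalance_status_field $V0
        # The xattr should now exist for the directory on the new brick
        getfattr -n trusted.glusterfs.mdata $B0/${V0}-newbrick/dir0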
* tests/shd: Mark "tests/basic/volume-scale-shd-mux.t" as bad (Mohammed Rafi KC, 2019-09-16, 1 file, -0/+2)
    This test case is failing in upstream. Marking this test as bad for
    now.

    Change-Id: I014c67628c14683c32a3c1dd770b10aaf35ad4cc
    Updates: bz#1752331
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* protocol/client: don't reopen fds on which POSIX locks are held after a reconnect (Raghavendra G, 2019-09-12, 1 file, -0/+63)
    Bricks clean up any granted locks after a client disconnects, and
    currently these locks are not healed after a reconnect. This means
    that post reconnect a competing process could be granted a lock
    even though the first process which was granted locks has not
    unlocked. By not re-opening fds, subsequent operations on such fds
    will fail, forcing the application to close the current fd and
    reopen a new one. This way we prevent any silent corruption.

    A new option "client.strict-locks" is introduced to control this
    behaviour. This option is set to "off" by default.

    Change-Id: Ieed545efea466cb5e8f5a36199aa26380c301b9e
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
    updates: bz#1694920
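    Opting in to the new behaviour per volume:

        # With this on, fds holding POSIX locks are not reopened after
        # a reconnect, and fops on them fail instead.
        TEST $CLI volume set $V0 client.strict-locks on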
* libgfapi: return correct errno on invalid volume name (Sheetal Pamecha, 2019-09-12, 3 files, -8/+95)
    glfs_init, when called with a volume name prefixed by '/', sets
    errno to 0. Setting errno to EINVAL to resolve the issue.

    Also, volname is a parameter to glfs_new. Thus, validating volname
    in glfs_new itself and returning EINVAL from that function.

    fixes: bz#1507896
    Change-Id: I0d4d2423e26cc07644d50ec8cce788ecc639203d
    Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>
* api: Cleanup of executable not done (Sheetal, 2019-09-12, 1 file, -1/+1)
    In the test tests/bugs/gfapi/bug-1447266/bug-1447266.t, the actual
    file created is tests/bugs/gfapi/bug-1447266/bug-1447266, which is
    not cleaned up later.

    fixes: bz#1750618
    Change-Id: I93120418e54b95018a7213d106a1f1c990766281
    Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>
* tests: revive back volume-scale-shd-mux.t (Atin Mukherjee, 2019-09-12, 5 files, -30/+28)
    Fixes: bz#1708929
    Change-Id: I9cc81a9047ff874df752ca5552e00bf033485bd8
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* tests: Fix spurious failure (Pranith Kumar K, 2019-09-11, 1 file, -2/+20)
    If heal from the next brick starts after the first brick completes
    its heal, then an opendir on the brick can change atime, leading to
    failure of the test. When ctime is disabled it is better to just
    check that mtime is the same after heal.

    fixes: bz#1751134
    Change-Id: Ia03e30fd547e6bbe85c1e299845ffa122f3a2692
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
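    A sketch of the mtime-only comparison across bricks (paths are
    placeholders):

        mtime1=$(stat -c %Y $B0/${V0}0/file)
        mtime2=$(stat -c %Y $B0/${V0}1/file)
        TEST [ "$mtime1" == "$mtime2" ]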
* cluster/ec: quorum-count implementation (Pranith Kumar K, 2019-09-08, 3 files, -0/+224)
    fixes: #721
    Change-Id: I5333540e3c635ccf441cf1f4696e4c8986e38ea8
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* cluster/ec: Fail fsync/flush for files on update size/version failure (Pranith Kumar K, 2019-09-06, 2 files, -0/+150)
    Problem: If the update of size/version is not successful on the
    file, updates on the same stripe could lead to data corruption if
    the earlier unaligned write is not successful on all the bricks.
    The application won't have any knowledge of this because the
    size/version update happens in the background.

    Fix: Fail fsync/flush on fds that were opened before the
    update-size-version went bad.

    fixes: bz#1748836
    Change-Id: I9d323eddcda703bd27d55f340c4079d76e06e492
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* graph/cleanup: Fix race in graph cleanup (Mohammed Rafi KC, 2019-09-05, 2 files, -3/+68)
    We were unconditionally cleaning up the graph when we got
    child_down followed by parent_down. But this is prone to a race
    condition when some of the bricks are already disconnected. In this
    case, even before the last child_down is executed in the client
    xlator code, we might have freed the graph, because the child_down
    event was already received.

    To fix this race, we have introduced a check to see if all client
    xlators have cleared their reconnect chain and called child_down
    for the last time.

    Change-Id: I7d02813bc366dac733a836e0cd7b14a6fac52042
    fixes: bz#1727329
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* afr/lookup: Pass xattr_req in while doing a selfheal in lookup (Mohammed Rafi KC, 2019-09-05, 2 files, -0/+53)
    We were not passing xattr_req when doing a name self heal as well
    as a metadata heal. Because of this, some xdata was missing, which
    caused I/O errors.

    Change-Id: Ibfb1205a7eb0195632dc3820116ffbbb8043545f
    Fixes: bz#1728770
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* tests: fix spurious failure of bug-1402841.t-mt-dir-scan-race.t (Ravishankar N, 2019-09-04, 1 file, -4/+5)
    Problem: Since commit 600ba94183333c4af9b4a09616690994fd528478, shd
    starts healing as soon as it is toggled from disabled to enabled.
    This was causing the following line in the .t to fail on a 'fast'
    machine (always on my laptop and sometimes on the jenkins slaves):

    EXPECT_NOT "^0$" get_pending_heal_count $V0

    because by the time shd was disabled, the heal had already
    completed.

    Fix: Increase the no. of files to be healed and make it a variable
    called FILE_COUNT, should we need to bump it up further because the
    machines become even faster. Also created pending metadata heals to
    increase the time taken to heal a file.

    fixes: bz#1748744
    Change-Id: I5a26b08e45b8c19bce3c01ce67bdcc28ed48198d
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
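    A sketch of the fix pattern with illustrative values, assuming one
    brick is already down so both data and metadata heals are left
    pending:

        FILE_COUNT=50
        for i in $(seq 1 $FILE_COUNT); do
            dd if=/dev/zero of=$M0/file$i bs=1K count=1 2>/dev/null
            chmod 0666 $M0/file$i   # adds a pending metadata heal too
        done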
* afr: wake up index healer threads (Ravishankar N, 2019-08-30, 1 file, -0/+42)
    ...whenever shd is re-enabled after disabling, or there is a change
    in `cluster.heal-timeout`, without needing to restart shd or
    waiting for the current `cluster.heal-timeout` seconds to expire.
    See BZ 1743988 for more details.

    Change-Id: Ia5ebd7c8e9f5b54cba3199c141fdd1af2f9b9bfe
    fixes: bz#1744548
    Reported-by: Glen Kiessling <glenk1973@hotmail.com>
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
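    With this change, a sketch like the following should see healing
    kick in promptly after the option change, rather than after the
    previously configured interval:

        TEST $CLI volume set $V0 cluster.heal-timeout 5
        EXPECT_WITHIN $HEAL_TIMEOUT "^0$" get_pending_heal_count $V0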
* performance/md-cache: Do not skip caching of null character xattr values (Anoop C S, 2019-08-20, 1 file, -0/+22)
    A null character string is a valid xattr value in a file system.
    But for those xattrs processed by md-cache, it does not update its
    entries if the value is null ('\0'). This results in ENODATA when
    those xattrs are queried afterwards via getxattr(), causing
    failures in basic operations like create, copy etc. in a specially
    configured Samba setup for Mac OS clients.

    On the other side, snapview-server internally sets an empty string
    ("") as the value for xattrs received as part of listxattr() which
    are not intended to be cached. Therefore we try to maintain that
    behaviour using an additional dictionary key to prevent updation of
    entries in the getxattr() and fgetxattr() callbacks in md-cache.

    Credits: Poornima G <pgurusid@redhat.com>
    Change-Id: I7859cbad0a06ca6d788420c2a495e658699c6ff7
    Fixes: bz#1726205
    Signed-off-by: Anoop C S <anoopcs@redhat.com>
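    A hedged sketch of the case being fixed: set an xattr whose value
    is a single null byte and read it back repeatedly; with the fix,
    the cached reads must not fail with ENODATA (file name is a
    placeholder):

        TEST setfattr -n user.attr -v 0x00 $M0/file1
        TEST getfattr -n user.attr -e hex $M0/file1
        TEST getfattr -n user.attr -e hex $M0/file1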
* glusterd: ./tests/bugs/glusterd/bug-1595320.t is failing (Mohit Agrawal, 2019-08-19, 1 file, -1/+1)
    Problem: Sometimes ./tests/bugs/glusterd/bug-1595320.t fails at the
    time of checking brick_process after sending a kill signal to the
    brick process.

    Solution: Wait some time after sending the kill signal to the brick
    process to make sure the brick process has stopped.

    Change-Id: Iee9e91284618abfc62a550d47e4f9117785def58
    Fixes: bz#1743200
    Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* tests/dht: Add a test file for file renames (N Balachandran, 2019-08-19, 1 file, -0/+1021)
    Test the various combinations of hashed and cached subvols for the
    src and dst.

    Change-Id: I41416f9e5f2b7ea1c880d1913fdd6576da1ee868
    fixes: bz#1626543
    Signed-off-by: N Balachandran <nbalacha@redhat.com>
* tests: mark bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t as BRICK_MUX_BAD_TEST (Atin Mukherjee, 2019-08-19, 1 file, -0/+1)
    Updates: bz#1743069
    Change-Id: I1eea0186ca0c1b1226f4b3d0d7c0e41fc7821cbd
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* afr: restore timestamp of parent dir during entry-heal (Ravishankar N, 2019-08-14, 1 file, -0/+78)
    Fixes: bz#1734370
    Change-Id: I29e338bac62104233a6f80212df8d0fb016affda
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* glusterd: create separate logdirs for cluster.rc instances (N Balachandran, 2019-08-14, 1 file, -5/+6)
    Create a separate logdir for each host instance created by
    cluster.rc. This makes it easier to determine the files belonging
    to a particular instance.

    Change-Id: Ic8321f83f98995412b7d5f095b3d3f0391767a8b
    Fixes: bz#1733042
    Signed-off-by: N Balachandran <nbalacha@redhat.com>
* features/shard: Send correct size when reads are sent beyond file size (Krutika Dhananjay, 2019-08-12, 1 file, -0/+29)
    Change-Id: I0cebaaf55c09eb1fb77a274268ff564e871b743b
    fixes: bz#1738419
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>