* [Lib] Add is_shd_daemon_running method (Pranav, 2020-04-23; 1 file, -0/+30)

  Verifies whether the shd daemon is up and running on a particular node.
  The method checks whether an shd pid is present on the given node and,
  as an additional verification, cross-checks the result against the
  'Self-heal Daemon' entry for that node in the gluster volume status
  output.

  Change-Id: I4865dc5c493a72ed7334ea998d0a231f4f8c75c8
  Signed-off-by: Pranav <prprakas@redhat.com>
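  A minimal sketch of such a check, assuming a pgrep-based pid lookup and a
  plain parse of the volume-status output (illustrative, not the committed
  implementation):

  ```
  from glusto.core import Glusto as g

  def is_shd_daemon_running(mnode, node, volname):
      """Sketch: True only when an shd pid exists on 'node' and the
      Self-heal Daemon for that node is listed in volume status."""
      # look for a self-heal daemon process on the target node
      ret, _, _ = g.run(node, "pgrep -f glustershd")
      if ret != 0:
          return False
      # cross-check the cluster view from the management node
      ret, out, _ = g.run(mnode, "gluster volume status %s shd" % volname)
      return ret == 0 and node in out
  ```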
* [TestFix] Add code to override the volume config (sayaleeraut, 2020-04-23; 1 file, -38/+73)

  Change the distribute count to 4 for the distributed-replicated and
  distributed-dispersed volume types. Earlier, with a distribute count
  of 2, remove-brick converted the dist-rep and dist-disp volumes to pure
  replicated or pure dispersed volumes, which caused a "layout not
  complete" error: with the DHT pass-through feature on gluster 6.0, the
  layout is not set on bricks if the volume type is pure replicated or
  pure dispersed.
  Also add the distributed-arbiter volume type, with code to override its
  configuration as well.

  Change-Id: Ic7a3404ed49d24f956de33f7bd5ca8ea61297e5b
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Add tc to check get-state when brick is killed (Pranav, 2020-04-23; 1 file, -0/+124)

  The test case verifies whether 'gluster get-state' shows the proper
  brick status in its output. It checks the brick status while the brick
  is up and again after killing the brick process, and also verifies that
  the other bricks stay up when a particular brick process is killed.

  Change-Id: I9801249d25be2817104194bb0a8f6a16271d662a
  Signed-off-by: Pranav <prprakas@redhat.com>
* [Test] Add TC to check SEL context on glusterfs.xml file (Leela Venkaiah G, 2020-04-22; 1 file, -0/+75)

  Test Steps:
  1. Check the existence of '/usr/lib/firewalld/services/glusterfs.xml'
  2. Validate the owner of this file as 'glusterfs-server'
  3. Validate the SELinux label context as 'system_u:object_r:lib_t:s0'

  Change-Id: I55bfb3b51a9188e2088459eaf5304b8b73f2834a
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Test] Add tc to stop rebalance with migration in progress (kshithijiyer, 2020-04-22; 1 file, -0/+167)

  Description: This test case creates a large file at the mount point,
  adds an extra brick and initiates rebalance. While migration is in
  progress, it stops the rebalance process and checks that it has stopped.

  Testcase Steps:
  1. Create and start a volume.
  2. Mount the volume on a client and create a large file.
  3. Add bricks to the volume and check the layout.
  4. Rename the file such that it hashes to a different subvol.
  5. Start rebalance on the volume.
  6. Stop rebalance on the volume.

  Change-Id: I7edd37a548467d6624ffe1efa64b0c1b56ff26ed
  Co-authored-by: Kartik_Burmee <kburmee@redhat.com>
  Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [TestFix] Fix to make TC compatible with DHT lib (sayaleeraut, 2020-04-22; 1 file, -38/+52)

  The TC was failing with "AssertionError: ('hash range is not there %s',
  False)" even though the bricks were healed and the directory was created
  on the non-hashed bricks. This was due to a conflict between the TC and
  the DHT library changes (added to fix the issues caused by the DHT
  pass-through functionality). The code is now modified according to the
  library changes, and the TC works fine.

  Change-Id: I501e7db89643822fbc711e631ceacda79e4c4ea4
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Libfix] Fix .next() to work with python 3 (kshithijiyer, 2020-04-21; 1 file, -1/+4)

  Problem: item.next() is not supported in Python 3.
  Solution: Add a try/except block to take care of both Python 2 and
  Python 3.

  Change-Id: I4c88804e45eee2a2ace24a982447000027e6ca3c
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
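  The compatibility pattern the fix describes, as a self-contained sketch:

  ```
  bricks = iter(['brick0', 'brick1', 'brick2'])
  try:
      # Python 2: iterators expose a .next() method
      entry = bricks.next()
  except AttributeError:
      # Python 3: .next() is gone; the built-in next() works everywhere
      entry = next(bricks)
  ```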
* [Libfix] Add brick sharing spt to replace_brick_from_volume() (kshithijiyer, 2020-04-21; 1 file, -5/+17)

  Problem: The `replace_brick_from_volume()` function doesn't support
  brick sharing, which is a problem because when we try to perform
  replace-brick with a large number of volumes it is unable to get bricks
  and hence fails.
  Solution: Add a boolean kwarg multi_vol, and use either
  form_bricks_for_multivol() or form_bricks_list() according to its value.
  The default value of multi_vol is False, which uses only
  form_bricks_list(), as before the change.
  Blocks: This patch currently blocks the patch below:
  https://review.gluster.org/#/c/glusto-tests/+/19483/

  Change-Id: I842a4ebea81e53e694b5b194294f1b941f47d380
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
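  A sketch of the described dispatch; form_bricks_for_multivol() and
  form_bricks_list() are the helpers named above, but their signatures
  here are abbreviated assumptions:

  ```
  def replace_brick_from_volume(mnode, volname, servers, all_servers_info,
                                multi_vol=False):
      # Pick the brick-forming helper based on whether bricks are shared
      # across volumes; one brick is formed for the replacement target.
      if multi_vol:
          new_brick = form_bricks_for_multivol(mnode, volname, 1, servers,
                                               all_servers_info)[0]
      else:
          new_brick = form_bricks_list(mnode, volname, 1, servers,
                                       all_servers_info)[0]
      # ... the existing replace-brick logic then uses new_brick ...
      return new_brick
  ```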
* [py2py3] Fixing a bunch of python3 incompatibilities (kshithijiyer, 2020-04-20; 2 files, -3/+9)

  Problem: There are two Python 2 to Python 3 incompatibilities present in
  test_add_brick_when_quorum_not_met.py and
  test_add_identical_brick_new_node.py.

  In test_add_brick_when_quorum_not_met.py the testcase fails with the
  error below:

    > for node in range(num_of_nodes_to_bring_down, num_of_servers):
    E TypeError: 'float' object cannot be interpreted as an integer

  This is because a = 10/5 returns a float in Python 3 but an int in
  Python 2, as the interpreter sessions below show:

    Python 2.7.15 (default, Oct 15 2018, 15:26:09)
    [GCC 8.2.1 20180801 (Red Hat 8.2.1-2)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> a = 10/5
    >>> type(a)
    <type 'int'>

    Python 3.7.3 (default, Mar 27 2019, 13:41:07)
    [GCC 8.3.1 20190223 (Red Hat 8.3.1-2)] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> a = 10/5
    >>> type(a)
    <class 'float'>

  In test_add_identical_brick_new_node.py the testcase fails with the
  error below:

    > add_bricks.append(string.replace(bricks_list[0], self.servers[0], self.servers[1]))
    E AttributeError: module 'string' has no attribute 'replace'

  This is because string.replace() was removed in Python 3; str.replace()
  replaces it.

  Solution: For the first issue, change a = 10/5 to a = 10//5, which is an
  int on both Python versions. For the second issue, the try/except block
  below suffices:

    except AttributeError:
        add_bricks.append(str.replace(bricks_list[0], self.servers[0],
                                      self.servers[1]))

  Change-Id: I9ec325760b279032af3748101bd2bfc58589d57d
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [testfix] Add sleep after glusterd restart (Sri Vignesh, 2020-04-20; 1 file, -1/+7)

  Add a sleep after glusterd restart, which runs asynchronously on the
  servers, to avoid "another transaction in progress" failures in the
  testcase.

  Change-Id: I514c24813dc7c102b807a582ae2b0d19069e0d34
  Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [Lib] Add kill_process method (Pranav, 2020-04-17; 1 file, -2/+49)

  The method kills the given set of processes running on the specified
  node. It takes process ids or process names, uses the kill command to
  terminate the processes, and returns the status as a boolean value.

  Change-Id: Ic6c316dac6b3496d34614c568115b0fa0f40d07d
  Signed-off-by: Pranav <prprakas@redhat.com>
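  A minimal sketch of what such a method might look like, assuming a
  pgrep-based name lookup (illustrative, not the committed implementation):

  ```
  from glusto.core import Glusto as g

  def kill_process(mnode, process_ids=None, process_names=None):
      """Sketch: kill processes on mnode given pids and/or names;
      returns True only if every kill succeeds."""
      pids = [str(pid) for pid in process_ids or []]
      # resolve names to pids first (pgrep lookup is an assumption)
      for name in process_names or []:
          ret, out, _ = g.run(mnode, "pgrep -f %s" % name)
          if ret != 0:
              return False
          pids.extend(out.split())
      for pid in pids:
          ret, _, _ = g.run(mnode, "kill -9 %s" % pid)
          if ret != 0:
              return False
      return True
  ```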
* [Testfix] Fix rpyc_get_connection() issue in test_metadata_self_heal (kshithijiyer, 2020-04-16; 1 file, -16/+15)

  Problem: `g.rpyc_get_connection()` has a limitation where it can't
  convert Python 2 calls to Python 3 calls. Due to this, a large number of
  testcases fail when executed from a Python 2 machine on a Python 3-only
  setup or vice versa, with the stack trace below:

  ```
  E   ========= Remote Traceback (1) =========
  E   Traceback (most recent call last):
  E     File "/root/tmp.tL8Eqx7d8l/rpyc/core/protocol.py", line 323, in _dispatch_request
  E       res = self._HANDLERS[handler](self, *args)
  E     File "/root/tmp.tL8Eqx7d8l/rpyc/core/protocol.py", line 591, in _handle_inspect
  E       if hasattr(self._local_objects[id_pack], '____conn__'):
  E     File "/root/tmp.tL8Eqx7d8l/rpyc/lib/colls.py", line 110, in __getitem__
  E       return self._dict[key][0]
  E   KeyError: (b'rpyc.core.service.SlaveService', 94282642994712, 140067150858560)
  ```

  Solution: Write generic code which can run from Python 2 to Python 3 and
  vice versa.

  Change-Id: I7783485a784ef4b57f626f77e6012d918fee6032
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Lib] Add get_gluster_state method (Pranav, 2020-04-15; 1 file, -1/+79)

  The method executes the 'gluster get-state' command on the specified
  node, verifies the glusterd state dump, reads it, and returns the
  content as a dictionary.

  Change-Id: I0356ccf740fd97d1930e9f09d6111304b14cd015
  Signed-off-by: Pranav <prprakas@redhat.com>
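  A sketch of the described flow, assuming the command reports the dump
  path on stdout and the dump is an INI-like file (both assumptions, not
  the committed implementation):

  ```
  from glusto.core import Glusto as g

  def get_gluster_state(mnode):
      """Sketch: trigger a state dump and parse it into a dict."""
      ret, out, _ = g.run(mnode, "gluster get-state")
      if ret != 0:
          return None
      # assumed output: "glusterd state dumped to /var/run/gluster/<file>"
      dump_path = out.split()[-1]
      ret, content, _ = g.run(mnode, "cat %s" % dump_path)
      if ret != 0:
          return None
      state, section = {}, None
      for line in content.splitlines():
          line = line.strip()
          if line.startswith('[') and line.endswith(']'):
              section = line[1:-1]
              state[section] = {}
          elif ':' in line and section is not None:
              key, _, value = line.partition(':')
              state[section][key.strip()] = value.strip()
      return state
  ```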
* [testfix] Add steps in teardown to wait for all bricks online (Sri Vignesh, 2020-04-14; 1 file, -1/+7)

  Add wait_for_bricks_to_be_online in teardown, after glusterd is started
  in the test steps.

  Change-Id: Id30a3d870c6ba7c77b0e79604521ec41fe624822
  Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [Test]: Check replace brick in EC (ubansal, 2020-04-13; 1 file, -0/+336)

  Steps:
  1. Check replace-brick and data integrity after it
  2. Check replace-brick while I/O is in progress

  Change-Id: Idfc801fde50967924696b2e909633b9ca95ac721
  Signed-off-by: ubansal <ubansal@redhat.com>
* [Testfix] Fix logging error (kshithijiyer, 2020-04-08; 1 file, -2/+2)

  Problem: Line 135 is missing (), which leads to the traceback below when
  the testcase fails:

  ```
  Traceback (most recent call last):
    File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
      msg = self.format(record)
    File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
      return fmt.format(record)
    File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
      record.message = record.getMessage()
    File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
      msg = msg % self.args
  TypeError: not all arguments converted during string formatting
  Logged from file test_volume_start_stop_while_rebalance_in_progress.py, line 135
  ```

  Solution: Add the missing () brackets on line 135.

  Change-Id: I318a5b838f01840afee5d4109645cc7dcd86c8fa
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
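  The class of failure, reconstructed generically with plain logging (the
  message and arguments are hypothetical, not the actual line 135):

  ```
  import logging

  logging.basicConfig(level=logging.INFO)
  log = logging.getLogger(__name__)

  # Broken: two arguments, one '%s' target. At emit time logging runs
  # msg % args, hits "TypeError: not all arguments converted during
  # string formatting", and prints the same traceback quoted above.
  log.info("Rebalance status on volume %s", "testvol", "completed")

  # Fixed: every argument has a matching format target.
  log.info("Rebalance status on volume %s: %s", "testvol", "completed")
  ```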
* [Libfix] Change daemon_reload() to support newer platforms (kshithijiyer, 2020-04-07; 1 file, -4/+4)

  Problem: The code supports both the service and systemctl commands, but
  it fails on the latest platforms with the error below:

  ```
  service glusterd reload
  Redirecting to /bin/systemctl reload glusterd.service
  Failed to reload glusterd.service: Job type reload is not applicable for unit glusterd.service.
  ```

  This is because the latest platforms use systemctl instead of service to
  reload daemon processes:

  ```
  systemctl daemon-reload
  ```

  Solution: The present code doesn't work properly because the check is
  specific to only one platform. The fix is to check for older platforms
  and run the service command only there; on all other platforms, run the
  systemctl command.

  Change-Id: I19b24652b96c4794553d3659eaf0301395929bca
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
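  A sketch of the described platform check; the release-file probe is an
  assumption, not the committed detection logic:

  ```
  from glusto.core import Glusto as g

  def daemon_reload(node):
      """Sketch: use the legacy service command only on older platforms
      and systemctl everywhere else."""
      ret, out, _ = g.run(node, "cat /etc/redhat-release")
      if ret == 0 and "release 6" in out:
          cmd = "service glusterd reload"
      else:
          cmd = "systemctl daemon-reload"
      ret, _, _ = g.run(node, cmd)
      return ret == 0
  ```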
* [Libfix] Remove rpyc_get_connection() dependency from code (kshithijiyer, 2020-04-01; 8 files, -124/+161)

  Problem: `g.rpyc_get_connection()` has a limitation where it can't
  convert Python 2 calls to Python 3 calls. Due to this, a large number of
  testcases fail when executed from a Python 2 machine on a Python 3-only
  setup or vice versa, with the same KeyError stack trace quoted in the
  test_metadata_self_heal fix above.

  Solution: Modify the code to not use `g.rpyc_get_connection()`. The
  following changes accomplish that:
  1) Remove code which uses g.rpyc_get_connection() and use generic logic
     in the functions:
     a. do_bricks_exist_in_shd_volfile()
     b. get_disk_usage()
     c. mount_volume()
     d. list_files()
     e. append_string_to_file()
  2) Create files which can be uploaded to and executed on clients/servers
     to avoid rpc calls in the functions:
     a. calculate_hash()
     b. validate_files_in_dir()
  3) Modify setup.py to push the files below to
     /usr/share/glustolibs/scripts/:
     a. compute_hash.py
     b. walk_dir.py

  Change-Id: I00a81a88382bf3f8b366753eebdb2999260788ca
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test]: check heal of custom xattr on newly added brick (sayaleeraut, 2020-03-26; 1 file, -0/+172)

  BZ#1702298 - Custom xattrs are not healed on newly added brick

  Test Steps:
  1) Create a volume.
  2) Mount the volume using FUSE.
  3) Create 100 directories on the mount point.
  4) Set the xattr on the directories.
  5) Add bricks to the volume and trigger rebalance.
  6) Wait for rebalance to complete.
  7) After rebalance completes, check if all the bricks have healed.
  8) Check the xattr for the dirs on the newly added bricks.

  Change-Id: If83f65ea163ccf16f9024d6b3a867ba7b35773f0
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Libfix] Add function in baseclass for cleanup (Sri Vignesh, 2020-03-24; 1 file, -1/+109)

  Add docleanup and docleanupclass in the baseclass; these call
  fresh_setup_cleanup, which restores the nodes to a fresh setup whenever
  the flag is set to true or the testcase fails.

  Change-Id: I951ff59cc3959ede5580348b7f93b57683880a23
  Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [Testfix] Fix error in logging (kshithijiyer, 2020-03-24; 1 file, -1/+1)

  Problem: Testcase test_ec_version was failing with the traceback below:

    Traceback (most recent call last):
      File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
        msg = self.format(record)
      File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
        return fmt.format(record)
      File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
        record.message = record.getMessage()
      File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
        msg = msg % self.args
    TypeError: %d format: a number is required, not str
    Logged from file test_ec_version_healing_whenonebrickdown.py, line 233

  This was due to a missing 's' in the log message on line 233.
  Solution: Add the missing 's' in the log message on line 233, as shown
  below:

    g.log.info('Brick %s is offline successfully', brick_b2_down)

  Also rename the file for more clarity on what the testcase does.

  Change-Id: I626fbe23dfaab0dd6d77c75329664a81a120c638
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test+libfix] Add testcase to access of file (kshithijiyer, 2020-03-19; 2 files, -3/+187)

  Testcase steps (file access):
  - Rename the file so that the hashed and cached subvols are different
  - Make sure the file can be accessed as long as the cached subvol is up

  Also fixes a library issue in find_new_hashed().

  Change-Id: Id81264848d6470b9fe477b50290f5ecf917ceda3
  Co-authored-by: Susant Palai <spalai@redhat.com>
  Signed-off-by: Susant Palai <spalai@redhat.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test+libfix] Testcases for rename with subvol down (kshithijiyer, 2020-03-17; 2 files, -4/+231)

  Case 1:
  1. mkdir srcdir and dstdir (such that srcdir and dstdir hash to
     different subvols)
  2. Bring down srcdir's hashed subvol
  3. mv srcdir dstdir (should fail)

  Case 2:
  1. mkdir srcdir dstdir
  2. Bring down srcdir's hashed subvol
  3. Bring down dstdir's hashed subvol
  4. mv srcdir dstdir (should fail)

  Case 3:
  1. mkdir srcdir dstdir
  2. Bring down dstdir's hashed subvol
  3. mv srcdir dstdir (should fail)

  Additional library fix details: also fix the library function to work
  with distributed-disperse volumes by removing `if oldhashed._host !=
  brickdir._host:`, as the same node can host multiple bricks of the same
  volume.

  Change-Id: Iaa472d1eb304b547bdec7a8e6b62c1df1a0ce591
  Co-authored-by: Susant Palai <spalai@redhat.com>
  Signed-off-by: Susant Palai <spalai@redhat.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test]: Add checks for peer detach of offline volumes (nchilaka, 2020-03-17; 1 file, -43/+60)

  Changes in this patch include:
  1. Reduced the runtime of the test by removing multiple volume configs
  2. Added extra validation for a node that is already peer-detached
  3. Added test steps to cover peer detach when a volume is offline

  Change-Id: I80413594e90b59dc63b7f4f52e6e348ddb7a9fa0
  Signed-off-by: nchilaka <nchilaka@redhat.com>
* [Lib] Library for multi volume creation (Bala Konda Reddy M, 2020-03-16; 2 files, -5/+220)

  Earlier, brick creation was carried out based on the difference between
  used and unused bricks. This is a bottleneck for implementing brick
  multiplexing testcases; moreover, we can't create more than 10 volumes.
  This library implements a way to create bricks on top of the existing
  servers in a cyclic way, so that each brick partition on each server
  holds an equal number of bricks.
  Added a parameter to the setup_volume function: if the multi_vol flag is
  set, it fetches bricks in a cyclic manner using
  form_bricks_for_multi_vol(); otherwise it falls back to the old
  mechanism.
  Added a bulk_volume_creation function to create the multiple volumes the
  user has specified. See the sketch below for the cyclic placement idea.

  Change-Id: I2103ec6ce2be4e091e0a96b18220d5e3502284a0
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
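  A self-contained sketch of the cyclic placement idea (names are
  illustrative, not the library's API):

  ```
  from itertools import cycle

  def form_bricks_cyclically(servers, brick_roots, volname, number_of_bricks):
      """Walk the servers round-robin so the brick count per server
      stays even."""
      bricks = []
      server_cycle = cycle(servers)
      for num in range(number_of_bricks):
          server = next(server_cycle)
          bricks.append('%s:%s/%s_brick%d'
                        % (server, brick_roots[server], volname, num))
      return bricks

  # e.g. 4 bricks over 3 servers land on s1, s2, s3, s1
  print(form_bricks_cyclically(
      ['s1', 's2', 's3'],
      {'s1': '/bricks/brick1', 's2': '/bricks/brick1',
       's3': '/bricks/brick1'},
      'testvol', 4))
  ```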
* [testfix] Add sleep in snapshot failures (Sri Vignesh, 2020-03-12; 1 file, -22/+27)

  Add a sleep in test_snap_delete_original_volume.py after cloning a
  volume, before starting the volume. Changed the I/O to write to one
  mount point, as otherwise there were issues with validate io. Removed
  the baseclass cleanup because the original volume is already cleaned up
  in the testcase.

  Change-Id: I7bf9686384e238e1afe8491013a3058865343eee
  Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [Test][BUG] Add testcase to replace-brick and test self-heal of files (kshithijiyer, 2020-03-11; 1 file, -0/+263)

  Testcase steps:
  1. Create a directory on the mount point and write files/dirs
  2. Create another set of files (1K files)
  3. While creation of files/dirs is in progress, kill one brick
  4. Remove the contents of the killed brick (simulating disk replacement)
  5. While the I/O is still in progress, restart glusterd on the nodes
     where disk replacement was simulated, to bring the bricks back online
  6. Start volume heal
  7. Wait for the I/O to complete
  8. Verify whether the files are self-healed
  9. Calculate arequals of the mount point and all the bricks

  CentOS-CI failure is due to the following bug:
  https://bugzilla.redhat.com/show_bug.cgi?id=1807384

  Change-Id: I9e9f58a16a7950fd7d6493cbb5c4f5483892851e
  Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Testfix] Remove python version dependency (Part 5) (kshithijiyer, 2020-03-09; 21 files, -98/+72)

  Please refer to the commit message of patch [1].
  [1] https://review.gluster.org/#/c/glusto-tests/+/24140/

  Change-Id: I5319ce497ca3359e0e7dbd9ece481bada1ee2205
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test for manual heal full of file using cmd (kshithijiyer, 2020-03-06; 1 file, -0/+182)

  Testcase steps:
  1. Create a single-brick volume
  2. Add some files and directories
  3. Get the arequal from the mountpoint
  4. Add a brick such that the volume becomes a 1x3 replica volume
  5. Start heal full
  6. Make sure heal is completed
  7. Get arequals from all bricks and compare with the arequal from the
     mountpoint

  Change-Id: I4ef140b326b3d9edcbd5b1f0b7d9c43f38ccfe66
  Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test]: check peer probe behavior when glusterd is down (nchilaka, 2020-03-05; 1 file, -0/+160)

  BZ#1257394 - Provide meaningful errors on peer probe and peer detach

  Test Steps:
  1. Check the current peer status
  2. Detach one of the valid nodes which is already part of the cluster
  3. Stop glusterd on that node
  4. Try to attach the above node to the cluster; this must fail with a
     transport endpoint error
  5. Recheck the test using the hostname; the same result is expected
  6. Start glusterd on that node
  7. Halt/reboot the node
  8. Try to peer probe the halted node; this must fail again
  9. The only error accepted is:
     "peer probe: failed: Probe returned with Transport endpoint is not
     connected"
  10. Check peer status and make sure no other nodes are in peer reject
      state

  Change-Id: Ic0a083d5cb150275e927723d960e89fe1a5528fb
  Signed-off-by: nchilaka <nchilaka@redhat.com>
* [testfix] Add timeout to fix failures (Sri Vignesh, 2020-03-03; 4 files, -11/+31)

  Add extra time for beaker machines to validate the testcases. For
  test_rebalance_spurious.py, added cleanup in teardown because the
  fix-layout patch is still not merged.

  Change-Id: I7ee8324ff136bbdb74600b730b4b802d86116427
  Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [libfix] Fix framework break due to geo_rep patch (kshithijiyer, 2020-03-03; 1 file, -3/+3)

  Problem: Due to patch [1], the framework broke and all testcases failed
  with the backtrace below:

  ```
  > mount_dict['server'] = cls.snode
  E AttributeError: type object 'VolumeAccessibilityTests_cplex_replicated_glusterf' has no attribute 'snode'
  ```

  Solution: This was because mnode_slave was accidentally written as
  snode, and cls.geo_rep_info wasn't a safe condition operator, so it was
  changed to cls.slaves.

  Testcase results with the patch:
    test_cvt.py::TestGlusterHealSanity_cplex_replicated_glusterfs::test_self_heal_when_io_in_progress PASSED
    test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed-dispersed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
    test_cvt.py::TestGlusterHealSanity_cplex_dispersed_glusterfs::test_self_heal_when_io_in_progress PASSED
    test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed_nfs::test_expanding_volume_when_io_in_progress PASSED
    test_cvt.py::TestQuotaSanity_cplex_replicated_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
    test_cvt.py::TestSnapshotSanity_cplex_distributed-dispersed_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
    test_cvt.py::TestGlusterReplaceBrickSanity_cplex_distributed-replicated_glusterfs::test_replace_brick_when_io_in_progress PASSED
    test_cvt.py::TestQuotaSanity_cplex_distributed_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
    test_cvt.py::TestGlusterExpandVolumeSanity_cplex_dispersed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
    test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed-dispersed_glusterfs::test_shrinking_volume_when_io_in_progress PASSED
    test_cvt.py::TestGlusterExpandVolumeSanity_cplex_dispersed_nfs::test_expanding_volume_when_io_in_progress PASSED
    test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed_glusterfs::test_shrinking_volume_when_io_in_progress PASSED
    test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
    test_cvt.py::TestQuotaSanity_cplex_dispersed_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
    test_cvt.py::TestSnapshotSanity_cplex_distributed-replicated_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
    test_cvt.py::TestSnapshotSanity_cplex_dispersed_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
    test_cvt.py::TestQuotaSanity_cplex_distributed-replicated_glusterfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
    test_cvt.py::TestSnapshotSanity_cplex_replicated_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
    test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed-dispersed_nfs::test_shrinking_volume_when_io_in_progress PASSED
    test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed_nfs::test_shrinking_volume_when_io_in_progress PASSED

  links:
  [1] https://review.gluster.org/#/c/glusto-tests/+/24029/

  Change-Id: If7b329e232ab61df9f9d38f5491c58693336dd48
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [lib] Add functionality to setup master and slave volumes (kshithijiyer, 2020-03-02; 2 files, -1/+118)

  Adds code for the following:
  1. Add the function setup_master_and_slave_volumes() to geo_rep_libs.
  2. Add variables for master_mounts, slave_mounts, master_volume and
     slave_volume to gluster_base_class.py.
  3. Add the class method
     setup_and_mount_geo_rep_master_and_slave_volumes to
     gluster_base_class.py.

  Change-Id: Ic8ae1cb1c8b5719d4774996c3e9e978551414b44
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test][Bug] Test to check directory gfid mismatch per BZ#1661258 (kshithijiyer, 2020-03-02; 1 file, -0/+147)

  Testcase steps:
  1. Create a volume and mount it.
  2. Create a directory on the mount and check whether all the bricks have
     the same gfid.
  3. Now delete the gfid attr from all but one backend brick.
  4. Do a lookup from the mount.
  5. Check whether all the bricks have the same gfid assigned.

  Failing in CentOS-CI due to the following bug:
  https://bugzilla.redhat.com/show_bug.cgi?id=1696075

  Change-Id: I4eebc247b15c488cfa24599e0afec2fa5671656f
  Co-authored-by: Anees Patel <anepatel@redhat.com>
  Signed-off-by: Anees Patel <anepatel@redhat.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [LibFix] Update in run_layout_tests() and validate_files_in_dir() (sayaleeraut, 2020-03-02; 1 file, -27/+43)

  The new function volume_type() checks whether the volume under test is
  of pure Replicated/Disperse/Arbiter type and returns the result as a
  string. The functions run_layout_tests() and validate_files_in_dir()
  have been modified to check the Gluster version and volume type, in
  order to fix the issues caused by DHT pass-through.

  Change-Id: Ie7ad259883907c1fdc0b54e6743636fdab793272
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [LibFix] Adding code to fix issues caused by DHT pass-through (sayaleeraut, 2020-02-28; 1 file, -45/+74)

  The issue was that whenever a TC called the _get_layout() and
  _is_complete() methods, it failed on Replicate/Arbiter/Disperse volume
  types because of DHT pass-through. The functions get_layout() and
  is_complete() have been modified to check the Gluster version and volume
  type before running, in order to fix the issue.
  About DHT pass-through, please refer to
  https://github.com/gluster/glusterfs/issues/405 for details.

  Change-Id: I0b0dc0ac3cbdef070a20854fbc89442fee1da8b6
  Signed-off-by: sayaleeraut <saraut@redhat.com>
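  A sketch of the gating described above; get_gluster_version() and
  volume_type() are helpers named elsewhere in this log, but the exact
  calls and return values here are assumptions:

  ```
  def _layout_check_applies(mnode, volname):
      # helper names are from this log; signatures are assumptions
      if get_gluster_version(mnode) >= 6.0 and volume_type(volname) in (
              'Replicated', 'Dispersed', 'Arbiter'):
          # DHT pass-through: pure volumes carry no DHT layout xattrs,
          # so layout validation is skipped instead of failing
          return False
      return True
  ```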
* [Testfix] Fix timeout issue in test_heal_full_node_reboot (kshithijiyer, 2020-02-26; 1 file, -1/+1)

  Problem: The current timeout for reboot in test_heal_full_node_reboot is
  about 350 seconds, which works with most hardware configurations.
  However, when the reboot is done on slower systems which take longer to
  come up, this logic fails, and this testcase and the ones following it
  fail.
  Solution: Change the reboot timeout from 350 to 700. This doesn't affect
  the testcase's performance on good hardware configurations, as the
  timeout is a maximum and the wait exits as soon as the node is up.

  Change-Id: I60d05236e8b08ba7d0fec29657a93f2ae53404d4
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Testfix] Remove python version dependency (Part 4) (kshithijiyer, 2020-02-26; 19 files, -98/+58)

  Please refer to the commit message of patch [1].
  [1] https://review.gluster.org/#/c/glusto-tests/+/24140/

  Change-Id: I25d30f7bdb20f0825709c4c852140e1906870ce7
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Testfix] Remove python version dependency (Part 3) (kshithijiyer, 2020-02-26; 18 files, -95/+63)

  Please refer to the commit message of patch [1].
  [1] https://review.gluster.org/#/c/glusto-tests/+/24140/

  Change-Id: Ib357d5690bb28131d788073b80a088647167fe80
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Testfix] Remove python version dependency (Part 2) (kshithijiyer, 2020-02-26; 22 files, -137/+89)

  Please refer to the commit message of patch [1].
  [1] https://review.gluster.org/#/c/glusto-tests/+/24140/

  Change-Id: Ic0b3b1333ac7b1ae02f701943d49510e6d46c259
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Testfix] Remove python version dependency (Part 1) (kshithijiyer, 2020-02-26; 27 files, -257/+208)

  The sys library was added to all the testcases to fetch
  `sys.version_info.major`, which returns the major version of the Python
  interpreter that glusto and glusto-tests were installed with; the I/O
  script file_dir_ops.py was then run with that version of Python. This
  creates a problem: older jobs running on older platforms won't run the
  way they used to. For example, if the older platform has Python 2 by
  default and we run the tests from a slave which has Python 3, they fail,
  and vice versa. The problem is introduced by the code below:

  ```
  cmd = ("/usr/bin/env python%d %s create_deep_dirs_with_files "
         "--dirname-start-num 10 --dir-depth 1 --dir-length 1 "
         "--max-num-of-dirs 1 --num-of-files 5 %s" % (
             sys.version_info.major, self.script_upload_path,
             self.mounts[0].mountpoint))
  ```

  The solution is to change `python%d` to `python`, which lets the code
  run with whatever version of Python is available on the client. This
  enables us to run any version of the framework on both older and newer
  platforms.

  Change-Id: I7c8200a7578f03c482f0c6a91832b8c0fdb33e77
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
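  The fixed invocation, per the solution described (same arguments as the
  quoted snippet):

  ```
  cmd = ("/usr/bin/env python %s create_deep_dirs_with_files "
         "--dirname-start-num 10 --dir-depth 1 --dir-length 1 "
         "--max-num-of-dirs 1 --num-of-files 5 %s" % (
             self.script_upload_path, self.mounts[0].mountpoint))
  ```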
* [Testfix] Fix test_excerise_rebalance_command script failures (kshithijiyer, 2020-02-25; 1 file, -35/+68)

  The script sometimes failed at expand-volume with an "Already part of
  volume" error; this patch fixes that.

  Change-Id: I628bbdb268e5a42112f68d9148da6bdb775acd26
  Co-authored-by: Prasad Desala <tdesala@redhat.com>, Milind Waykole <milindwaykole96@gmail.com>
  Signed-off-by: Prasad Desala <tdesala@redhat.com>
  Signed-off-by: Milind Waykole <milindwaykole96@gmail.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Testfix] Modify test to work with new performance.io-cache default (kshithijiyer, 2020-02-25; 1 file, -3/+8)

  Problem: The default value of performance.io-cache was ON before gluster
  6.0; in gluster 6.0 it was changed to OFF.
  Solution: Add code to check the gluster version and then verify whether
  the option is ON or OFF accordingly, as shown below:

  ```
  if get_gluster_version(self.mnode) >= 6.0:
      self.assertIn("off", ret['performance.io-cache'],
                    "io-cache value is not correct")
  else:
      self.assertIn("on", ret['performance.io-cache'],
                    "io-cache value is not correct")
  ```

  CentOS-CI failure analysis: This patch is expected to fail, because
  running `gluster --version` on nightly builds returns the output below:

  ```
  # gluster --version
  glusterfs 20200220.a0e0890
  Repository revision: git://git.gluster.org/glusterfs.git
  Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
  GlusterFS comes with ABSOLUTELY NO WARRANTY.
  It is licensed to you under your choice of the GNU Lesser
  General Public License, version 3 or any later version (LGPLv3
  or later), or the GNU General Public License, version 2 (GPLv2),
  in all cases as published by the Free Software Foundation.
  ```

  This output can't be parsed by the get_gluster_version() function, which
  this patch uses to get the gluster version and check
  performance.io-cache's default value accordingly.

  Change-Id: I00b652a9d5747cbf3006825bb17b9ca2f69cb9cd
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [testfix] Add steps to stabilise glusterd (Sri Vignesh, 2020-02-25; 6 files, -54/+78)

  Moved steps from the teardown class to teardown and removed the unwanted
  teardown class. Rectified the testcase failing at "wait for io to
  complete" by removing that step, because the subprocess terminates after
  validate io, which resulted in failure.

  Change-Id: I2eaf05680b817b681aff8b48683fc9dac88896b0
  Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [testfix] Add steps to add peer_probe_servers in cleanup (Sri Vignesh, 2020-02-20; 10 files, -89/+60)

  Change-Id: I0fa6bbacda16fb97d3454a8510a937442b5755a4
  Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [testfix] removed the rmdir which was merged in baseclass (Sri Vignesh, 2020-02-19; 8 files, -47/+4)

  Change-Id: I04f7b7c894d48d0188379028412d9c6b48eac210
  Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [testfix] Add steps to stabilise content in glusterd - part2 (Sri Vignesh, 2020-02-19; 12 files, -93/+57)

  Used the 'wait for peer to connect' and 'wait for glusterd to connect'
  functions in the testcases, added fixes to check that files exist, and
  increased the timeout value for failure cases.

  Change-Id: I9d5692f635ed324ffe7dac9944ec9b8f3b933fd1
  Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [TestFix] Changing code used to create arbiter volumes (kshithijiyer, 2020-02-19; 22 files, -289/+89)

  As distributed-arbiter and arbiter weren't present before patch [1],
  arbiter and distributed-arbiter volumes were created by the hack shown
  below, where a distributed-replicated or replicated volume's
  configuration was modified to create an arbiter volume:

  ```
  @runs_on([['replicated', 'distributed-replicated'],
            ['glusterfs', 'nfs']])
  class TestSelfHeal(GlusterBaseClass):
      .................
      @classmethod
      def setUpClass(cls):
          ...............
          # Overriding the volume type to specifically test the volume
          # type Change from distributed-replicated to arbiter
          if cls.volume_type == "distributed-replicated":
              cls.volume['voltype'] = {
                  'type': 'distributed-replicated',
                  'dist_count': 2,
                  'replica_count': 3,
                  'arbiter_count': 1,
                  'transport': 'tcp'}
  ```

  This code is now updated to remove the volume-configuration override and
  just declare arbiter or distributed-arbiter in `@runs_on([],[])`, as
  shown below:

  ```
  @runs_on([['replicated', 'distributed-arbiter'],
            ['glusterfs', 'nfs']])
  class TestSelfHeal(GlusterBaseClass):
  ```

  Links:
  [1] https://github.com/gluster/glusto-tests/commit/08b727842bc66603e3b8d1160ee4b15051b0cd20

  Change-Id: I4c44c2f3506bd0183fd991354fb723f8ec235a4b
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Lib Fix] Typo leading to error not being published (nchilaka, 2020-02-19; 1 file, -1/+1)

  Instead of calling g.log.error, we were calling g.log.err. Due to this,
  instead of throwing the right error message (say, when doing volume
  cleanup), it was throwing an ambiguous traceback.

  Change-Id: I39887ce08756eaf29df2d99f73cc7795a4d2c065
* [LibFix] Add code to check brick (sayaleeraut, 2020-02-18; 1 file, -4/+7)

  Earlier the method get_volume_type() passed when run on the interpreter,
  but when it was called in a TC it failed at a condition (line 2235).
  For example, the value of brickdir_path is
  "dhcp47-3.lab.eng.blr.redhat.com:/bricks/brick2/testvol_replicated_brick2/"
  and the method tries to find that value in the list
  ['10.70.46.172:/bricks/brick2/testvol_replicated_brick0',
   '10.70.46.195:/bricks/brick2/testvol_replicated_brick1',
   '10.70.47.3:/bricks/brick2/testvol_replicated_brick2']
  returned by get_all_bricks(), which fails because the host appears as an
  FQDN on one side and an IP address on the other.
  With the fix it runs successfully, as it checks whether, for host
  dhcp47-3.lab.eng.blr.redhat.com, the brick
  /bricks/brick2/testvol_replicated_brick2 is present in the list
  brick_paths[], which consists of only the paths, not the IP addresses,
  of the bricks present on that host.

  Change-Id: Ie595faba1e92c559293ddd04f46b85065b23dfc5
  Signed-off-by: sayaleeraut <saraut@redhat.com>
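  The corrected comparison, reconstructed from the example above (a
  sketch, not the library code):

  ```
  all_bricks = ['10.70.46.172:/bricks/brick2/testvol_replicated_brick0',
                '10.70.46.195:/bricks/brick2/testvol_replicated_brick1',
                '10.70.47.3:/bricks/brick2/testvol_replicated_brick2']
  brickdir_path = ('dhcp47-3.lab.eng.blr.redhat.com:'
                   '/bricks/brick2/testvol_replicated_brick2/')

  # Compare only the path component, so an FQDN on one side and an IP
  # address on the other can no longer break the membership test
  host, _, path = brickdir_path.rstrip('/').partition(':')
  brick_paths = [brick.split(':')[1] for brick in all_bricks]
  assert path in brick_paths
  ```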