path: root/glustolibs-gluster/glustolibs
Commit message · Author · Age · Files · Lines
* [Lib] Library to set up CTDB for Samba (vdas-redhat, 2020-07-20; 1 file, -0/+142)
  CTDB provides HA for Samba; configuring CTDB is mandatory for Samba.
  Change-Id: I5ee28afb86dbc5853e5d54ad2b4460d37c8bfcef
  Signed-off-by: vdas-redhat <vdas@redhat.com>
* [Libfix] Fix rpyc dependency for NFS-Ganesha libs (Pranav, 2020-07-10; 2 files, -45/+51)
  Problem: The rpyc connection fails in environments where the Python
  versions differ, resulting in test failures.
  Fix: Replace rpyc with the standard ssh approach to overcome this issue.
  Change-Id: Iee4bb968b8b94a6ab3e0fe0d16babacad914a92d
  Signed-off-by: Pranav <prprakas@redhat.com>
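  As a rough illustration of the substitution pattern (a hypothetical helper, not the actual NFS-Ganesha library code): an rpyc remote call can be replaced by running the equivalent command over ssh with g.run(), which is indifferent to the remote Python version.
  ```python
  from glusto.core import Glusto as g

  def read_remote_file(host, path):
      """Hypothetical helper: fetch a remote file's content over ssh
      rather than through an rpyc connection."""
      ret, out, err = g.run(host, "cat %s" % path)
      if ret != 0:
          g.log.error("Failed to read %s on %s: %s", path, host, err)
          return None
      return out
  ```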
* [lib] CTDB library operations (vdas-redhat, 2020-07-08; 1 file, -0/+478)
  Change-Id: Ia323aa80efdf5331d58c57be1f087b012fc94e1a
  Signed-off-by: vdas-redhat <vdas@redhat.com>
* [Libfix] Remove tier libraries from glusto-tests (Bala Konda Reddy M, 2020-07-08; 5 files, -2138/+161)
  Tier libraries are not used by any test cases, and the tier checks in
  brick_libs.py and volume_libs.py degrade the performance (execution
  time) of regular test cases. A further reason to remove the tier
  libraries from glusto-tests is that the functionality is deprecated.
  Change-Id: Ie56955800515b2ff5bb3b55debaad0fd88b5ab5e
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Libfix] Skip gluster_shared_storage deletion (kshithijiyer, 2020-07-02; 1 file, -9/+24)
  Problem: In the present logic gluster_shared_storage gets deleted
  during force cleanup, which causes nfs-ganesha test cases to fail.
  Fix: Add logic to check whether shared_storage is enabled; if it is,
  skip:
  1. Peer cleanup and peer probe
  2. Deleting the gluster_shared_storage vol files
  Change-Id: I5219491e081bd36dd40342262eaba540ccf00f51
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Libfix] Make is_linkto_file() compatible with newer platforms (Pranav, 2020-06-30; 1 file, -5/+2)
  Problem: The 'file <file_path>' command output differs on newer
  platforms: an additional ',' is present in the latest builds of the
  package, causing tests that use this method to fail there.
  Fix: Modify the method to handle the latest package output as well.
  Change-Id: I3e59a69b09b960e3a38131a3e76d664b34799ab1
  Signed-off-by: Pranav <prprakas@redhat.com>
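  A hedged sketch of the compatibility idea (the real method name and the exact 'file' output strings are assumptions): normalizing away the ',' lets one check match both output styles.
  ```python
  def _is_sticky_empty(file_cmd_output):
      # Older 'file' builds print "... sticky empty", newer ones
      # "... sticky, empty"; dropping commas lets one check match both.
      return 'sticky empty' in file_cmd_output.replace(',', '')
  ```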
* [LibFix] Monitor heal only on specific bricks (Leela Venkaiah G, 2020-06-24; 1 file, -3/+12)
  - Add an optional argument (bricks) to monitor_heal_completion
  - If provided, heal is monitored only on this set of bricks
  - Useful when dealing with EC volumes (see the sketch below)
  Change-Id: I1c3b137e98966e21c52e0e212efc493aca9c5da0
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
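  Hypothetical usage under the documented signature (the kwarg name 'bricks' is taken from the message above; offline_bricks_list is a stand-in variable):
  ```python
  from glustolibs.gluster.heal_libs import monitor_heal_completion

  # Monitor heal only on the bricks that were brought down earlier,
  # e.g. a subset of an EC volume's bricks.
  ret = monitor_heal_completion(self.mnode, self.volname,
                                bricks=offline_bricks_list)
  ```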
* [Test] Start/Stop of snapd daemon on the cloned volume (srivickynesh, 2020-06-24; 1 file, -1/+27)
  Test cases in this module exercise the USS functionality of snapd on a
  cloned volume: snapd is terminated on the nodes one by one, while
  validating that the snapshots remain present and the .snaps directory
  is still accessible.
  Change-Id: I98d48268e7c5c5952a7f0f544960203d8634b7ac
  Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [Lib] Add parse_vol_file method (Pranav, 2020-06-24; 1 file, -0/+59)
  This method parses the given .vol file and returns the content as a
  dictionary.
  Change-Id: I6d57366ddf4d4c0249fff6faaca2ed005cd89e7d
  Signed-off-by: Pranav <prprakas@redhat.com>
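  Hypothetical usage (the module path and the example .vol file path are assumptions):
  ```
  >>> vol_file = ("/var/lib/glusterd/vols/testvol/"
  ...             "trusted-testvol.tcp-fuse.vol")
  >>> vol_dict = parse_vol_file(self.mnode, vol_file)
  ```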
* [Libfix] Add retry logic to restart_glusterd() (kshithijiyer, 2020-06-17; 2 files, -19/+14)
  Problem: Patches [1] and [2] sent to glusterfs modify
  glusterd.service.in so that glusterd cannot be restarted more than 6
  times within an hour. Because of this, glusterd restarts in test cases
  may fail, as there is no way to know when the 6-restart limit has been
  reached.
  Fix: Add code to check whether a glusterd restart has failed; if so,
  call reset_failed_glusterd() and redo the restart.
  Links:
  [1] https://review.gluster.org/#/c/glusterfs/+/23751/
  [2] https://review.gluster.org/#/c/glusterfs/+/23970/
  Change-Id: I041a019f9a8757d8fead00302e6bbcd6563dc74e
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
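  A minimal sketch of the retry flow described above (function locations are assumptions; reset_failed_glusterd() is the helper added in the next entry):
  ```python
  from glustolibs.gluster.gluster_init import (restart_glusterd,
                                               reset_failed_glusterd)

  def restart_glusterd_with_retry(servers):
      # First attempt; may fail once the 6-restarts-per-hour
      # systemd start limit is reached.
      if restart_glusterd(servers):
          return True
      # Clear the unit's failure counter and redo the restart.
      if not reset_failed_glusterd(servers):
          return False
      return restart_glusterd(servers)
  ```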
* [Lib] Add library for reset-failed (kshithijiyer, 2020-06-16; 2 files, -4/+39)
  Add the library function reset_failed_glusterd() and modify
  scratch_cleanup() to use it. This is needed because of patches [1] and
  [2] sent to glusterfs, where glusterd.service.in was changed to not
  allow more than 6 glusterd restarts within an hour.
  Links:
  [1] https://review.gluster.org/#/c/glusterfs/+/23751/
  [2] https://review.gluster.org/#/c/glusterfs/+/23970/
  Change-Id: I25f982517420f20f11a610e8a68afc71f3b7f2a9
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
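  The systemd mechanics behind such a helper, as a hedged sketch: 'systemctl reset-failed' clears the unit's failure counter, lifting the start-limit restriction that the glusterfs patches introduce.
  ```python
  from glusto.core import Glusto as g

  # Clearing the failure counter lets glusterd be restarted again
  # within the hour despite the start-limit in glusterd.service.
  ret, _, err = g.run(server, "systemctl reset-failed glusterd")
  ```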
* [Libfix] Fix find_specific_hashed method (Pranav, 2020-06-10; 1 file, -2/+6)
  Problem: There are scenarios where multiple files have to be renamed
  so that they hash to a particular subvol. The existing method returns
  the same name every time, as the loop always starts from 1.
  Fix: Add an optional argument, existing_names, containing the names
  already hashed to the subvol, plus a check to ensure the name found is
  not already in use.
  Change-Id: I453ee290c8462322194cebb42c40e8fbc7c373ed
  Signed-off-by: Pranav <prprakas@redhat.com>
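  Hypothetical usage of the new kwarg (the attribute holding the chosen name, here 'newname', and the positional arguments are assumptions):
  ```python
  used_names = []
  for _ in range(3):
      # Each call skips the names already hashed to the target subvol.
      new_hashed = find_specific_hashed(self.subvols, "/", target_subvol,
                                        existing_names=used_names)
      used_names.append(new_hashed.newname)
  ```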
* [Lib] Add method set_rebalance_throttle() (sayaleeraut, 2020-06-09; 1 file, -0/+30)
  The method takes mnode, volname and throttle-type as parameters and
  sets the rebal-throttle for the volume as per the given throttle-type.
  Change-Id: I9eb14e39f87158c9ae7581636c2cad1333fd573c
  Signed-off-by: sayaleeraut <saraut@redhat.com>
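  Hypothetical usage; gluster's rebal-throttle option accepts lazy, normal and aggressive, and the kwarg name and (ret, out, err) return convention below are assumptions:
  ```python
  # Set an aggressive rebalance throttle on the volume under test.
  ret, _, _ = set_rebalance_throttle(self.mnode, self.volname,
                                     throttle_type='aggressive')
  ```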
* [Libfix] Change sequence of option set & start op (Pranav, 2020-06-03; 1 file, -8/+24)
  As SSL cannot be set after the volume start op, move
  set_volume_option prior to volume start.
  Change-Id: I14e1dc42deb0c0c28736f03e07cf25f3adb48349
  Signed-off-by: Pranav <prprakas@redhat.com>
* [Libfix] Fix get_bricks_to_bring_offline_from_replicated_volume (Pranav, 2020-06-03; 1 file, -2/+2)
  Finding the offline-brick limit using ceil returns an incorrect value:
  for replica count 3, ceil(3/2) returns 2, and the subsequent method
  uses this value to bring down 2 of the 3 available bricks, resulting
  in IO and many other failures.
  Fix: Change ceil to floor. Also change the '/' operator to '//' for
  Python 2/3 compatibility.
  Change-Id: I3ee10647bb037a3efe95d1b04e0864cf61e2499e
  Signed-off-by: Pranav <prprakas@redhat.com>
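  The arithmetic in question, illustrated:
  ```python
  import math

  # Replica count 3: how many bricks can safely be brought offline?
  print(math.ceil(3 / 2.0))  # 2 -> offlines 2 of 3 bricks, quorum lost
  print(3 // 2)              # 1 -> floor division, same on Python 2 and 3
  ```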
* [Lib] Add find_specific_hashed method (Pranav, 2020-06-02; 1 file, -0/+34)
  This method helps in rename scenarios where the new filename has to be
  hashed to a specific subvol.
  Change-Id: Ia36ea8e3d279ddf130f3a8a940dbe1fcb1910974
  Signed-off-by: Pranav <prprakas@redhat.com>
* [Libfix] Fetch all entries under a directory in recursive fashion (nchilaka, 2020-05-22; 1 file, -5/+12)
  This method fetches all entries under a directory in recursive
  fashion.
  Change-Id: I4fc066ccf7a3a4730d568f96d926e46dea7b20a1
  Signed-off-by: nchilaka <nchilaka@redhat.com>
* [Libfix] Add parameter for volume create only (Bala Konda Reddy M, 2020-05-18; 2 files, -7/+30)
  Problem: Currently setup_volume() in volume_libs.py and
  gluster_base_class.py creates a volume and starts it. Some tests
  require only volume create, and if such a test has to run on all
  volume types, any contributor has to reimplement in their test all the
  validations already present in setup_volume() and in the setup_volume
  classmethod of the gluster base class.
  Solution: Added a "create_only" parameter to setup_volume(); it
  defaults to false, so unless specified, setup_volume() works as
  before. Similarly, added an "only_volume_create" parameter to the
  setup_volume classmethod in gluster_base_class.py, also defaulting to
  false.
  Note: calling setup_volume() followed by volume_stop is not the same
  as just volume_create() in the actual test.
  Change-Id: I76cde1b668b3afcac41dd882c2a376cb6fac88a3
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
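  Hypothetical usage of the new flag (the surrounding argument names follow the library's usual conventions and are assumptions):
  ```python
  # Create the volume without starting it, while keeping all of
  # setup_volume()'s built-in validations.
  ret = setup_volume(self.mnode, self.all_servers_info, self.volume,
                     create_only=True)
  ```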
* [Libfix] Assign correct atime, ctime, mtime values (nchilaka, 2020-05-15; 1 file, -2/+2)
  Changed the get_file_stat function to assign the correct key-value
  pairs for atime, mtime and ctime respectively. Previously, all
  timestamp keys were assigned the atime value.
  Change-Id: I471ec341d1a213395a89f6c01315f3d0f2e976af
  Signed-off-by: nchilaka <nchilaka@redhat.com>
* [Libfix] Fixing the pkill command (Bala Konda Reddy M, 2020-05-15; 1 file, -1/+1)
  Problem: The command "pkill pidof glusterd" is wrong, as it never
  evaluates the PID of glusterd:
    cmd = "pkill pidof glusterd"
    ret, out, err = g.run("10.20.30.40", cmd, "root")
    >>> ret, out, err
    (2, '', "pkill: only one pattern can be provided\nTry `pkill --help' for more information.\n")
  Here the command fails.
  Solution: Use backquotes, `pidof glusterd`, so the shell substitutes
  the proper glusterd PID and the stale PID is killed after a failed
  glusterd stop:
    cmd = "pkill `pidof glusterd`"
    ret, out, err = g.run("10.20.30.40", cmd, "root")
    >>> ret, out, err
    (1, '', '')
  Note: The ret value is 1 because this was tried on a machine where
  glusterd is running; the purpose of the fix is to get the proper PID.
  Change-Id: Iacba3712852b9d16546ced9a4c071c62182fe385
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Libfix] Kill stale bricks in scratch_cleanup (Bala Konda Reddy M, 2020-05-15; 1 file, -0/+9)
  Problem: While performing scratch cleanup, posix health-check warnings
  were observed once glusterd was started, as shown below:
    [2020-05-05 12:19:10.633623] M [MSGID: 113075]
    [posix-helpers.c:2194:posix_health_check_thread_proc]
    0-testvol_distributed-dispersed-posix: health-check failed, going down
  Solution: In scratch cleanup, once glusterd is stopped and the runtime
  socket file for the glusterd daemon is removed, stale glusterfsd
  processes remain on a few of the machines. Add a step to fetch any
  glusterfsd processes, kill the stale ones using the kill_process
  method, and continue with the existing procedure. Once glusterd is
  started, no posix health-check warnings are seen.
  Change-Id: Ib3e9492ec029b5c9efd1c07b4badc779375a66d6
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
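  A hedged sketch of the added step (the real code uses the library's kill_process method; the direct shell variant below is illustrative):
  ```python
  from glusto.core import Glusto as g

  # After glusterd is stopped, look for stale glusterfsd processes and
  # kill them before glusterd is started again.
  ret, out, _ = g.run(server, "pgrep -x glusterfsd")
  if ret == 0:
      g.run(server, "kill -9 %s" % ' '.join(out.split()))
  ```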
* [Libfix] Added atime, ctime and mtime for files (Bala Konda Reddy M, 2020-05-14; 1 file, -2/+10)
  The get_file_stat function did not include the access, modify and
  change times for a file or directory. Added the respective parameters
  so those values are included in the returned dictionary.
  Changed the separator from ':' to '$' to overcome a tuple-unpacking
  error: timestamps such as
    2020-04-02 19:27:45.962477021
  themselves contain ':', so using ':' as the separator hits
  "ValueError: too many values to unpack". '$' is used instead, as it is
  neither used in filenames in glusto-tests nor part of the stat output.
  Change-Id: I40b0c1fd08a5175d3730c1cf8478d5ad8df6e8dd
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
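  Why ':' over-splits, illustrated (the exact stat format string used by the library is an assumption):
  ```python
  # A human-readable stat timestamp contains ':' characters, so a
  # colon-separated record over-splits on unpacking.
  stat_out_colon = '644:2020-04-02 19:27:45.962477021'
  try:
      perm, atime = stat_out_colon.split(':')
  except ValueError:
      pass  # too many values to unpack

  # With '$' as the separator, the split is unambiguous.
  stat_out_dollar = '644$2020-04-02 19:27:45.962477021'
  perm, atime = stat_out_dollar.split('$')
  ```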
* [Libfix] Remove ssl_ops.py library (kshithijiyer, 2020-05-12; 1 file, -226/+0)
  Problem: Ideally the operations in ssl_ops.py should be performed on a
  gluster cluster even before peer probing the nodes. This makes the
  library useless, as no library in glusto-tests can run without peer
  probing.
  Solution: Enable SSL on the gluster cluster before passing it to
  glusto-tests to run the tests.
  Change-Id: If803179c67d5b3271b70c1578269350444aa3cf6
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Libfix] Fix georep_config_get method (Pranav, 2020-05-12; 1 file, -1/+1)
  The command creation with a specific user had six substitutions but
  only five placeholders.
  Change-Id: I2c9f63213f78e5cec9e5bd30cac8d75eb8dbd6ce
  Signed-off-by: Pranav <prprakas@redhat.com>
* [Lib] Add create_link_file() to glusterfile.py (kshithijiyer, 2020-05-04; 1 file, -0/+39)
  Adding function create_link_file() to create soft and hard links for
  an existing file.
  Change-Id: I6be313ded1a640beb450425fbd29374df51fbfa3
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] functional/disperse: Verify remove brick operation (Sri Vignesh, 2020-04-29; 1 file, -1/+44)
  This test verifies remove-brick operations on a disperse volume.
  Change-Id: If4be3ffc39a8b58e4296d58b288e3843a218c468
  Co-authored-by: Sunil Kumar Acharya <sheggodu@redhat.com>
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
  Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [Test+Lib] No fresh lookups on directory (Bala Konda Reddy M, 2020-04-28; 1 file, -0/+33)
  Test steps:
  1. Create a volume, set the volume option
     'diagnostics.client-log-level' to DEBUG, and mount the volume on
     one client.
  2. Create a directory.
  3. Validate the number of lookups for the directory creation from the
     log file.
  4. Perform a new lookup of the directory.
  5. No new lookups should have happened on the directory; validate from
     the log file.
  6. Bring down one subvol of the volume and repeat steps 4 and 5.
  7. Bring down one brick from the online bricks and repeat steps 4
     and 5.
  8. Start the volume with force and wait for all processes to be
     online.
  Change-Id: I162766837fd7e61625238a669c4050c2ec9c8a8b
  Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
* [Libfix] Fix check_brick_pid_matches_glusterfsd_pid() to use pgrep (kshithijiyer, 2020-04-28; 1 file, -2/+2)
  Problem: On the latest platforms the pidof command returns multiple
  pids, as shown below:
    27190 27078 26854
  This is because it also returns the glusterd, glusterfsd and glusterfs
  processes. The problem is that /usr/sbin/glusterd is a link to
  glusterfsd, and 'pidof' has a new feature where it searches for the
  pattern in /proc/PID/cmdline, /proc/PID/stat and finally
  /proc/PID/exe. Hence pidof matches the realpath of
  /proc/<pid_of_glusterd>/exe as /usr/sbin/glusterfsd, and the glusterd,
  glusterfs and glusterfsd pids are all returned in the output.
  Fix: Use pgrep instead of pidof to get the glusterfsd pids, and change
  the split logic accordingly.
  Change-Id: I729e05c3f4cacf7bf826592da965a94a49bb6f33
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
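  The difference, sketched (the exact pgrep flags used by the fix are an assumption):
  ```python
  from glusto.core import Glusto as g

  # pidof also matches /proc/PID/exe, so glusterd (a symlink to
  # glusterfsd) shows up too; pgrep matches the process name alone.
  ret, out, _ = g.run(node, "pgrep -x glusterfsd")
  glusterfsd_pids = out.split() if ret == 0 else []
  ```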
* [Libfix] Fix get_brick_processes_count() to use pgrep (kshithijiyer, 2020-04-27; 1 file, -3/+5)
  Problem: On the latest platforms the pidof command returns multiple
  pids, as shown below:
    27190 27078 26854
  This is because it also returns the glusterd, glusterfsd and glusterfs
  processes; /usr/sbin/glusterd is a link to glusterfsd, and pidof
  matches the realpath of /proc/<pid_of_glusterd>/exe as
  /usr/sbin/glusterfsd (see the previous entry for the full details).
  Fix: Use pgrep instead of pidof to get the glusterfsd pids, and change
  the split logic accordingly.
  Change-Id: Ie215734387989f2d8cb19e4b4f7cddc73d2a5608
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Lib] Function to check xattr of the bricks (ubansal, 2020-04-24; 1 file, -1/+41)
  Check whether the xattrs of the given bricks are the same.
  Change-Id: Ib1ba010bfeafc132123a88a893017f870a989789
  Signed-off-by: ubansal <ubansal@redhat.com>
* [Lib] Add is_shd_daemon_running method (Pranav, 2020-04-23; 1 file, -0/+30)
  Verifies whether the shd daemon is up and running on a particular node
  by checking whether the shd pid is present on the given node. If
  present, as an additional verification, it verifies that the
  'self-heal daemon' for the specified node is not there in the get
  volume status output.
  Change-Id: I4865dc5c493a72ed7334ea998d0a231f4f8c75c8
  Signed-off-by: Pranav <prprakas@redhat.com>
* [Libfix] Fix .next() to work with Python 3 (kshithijiyer, 2020-04-21; 1 file, -1/+4)
  Problem: item.next() is not supported in Python 3.
  Solution: Add a try/except block to take care of both Python 2 and
  Python 3.
  Change-Id: I4c88804e45eee2a2ace24a982447000027e6ca3c
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
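  The pattern described above, illustrated:
  ```python
  it = iter([1, 2, 3])
  try:
      first = it.next()   # works on Python 2 only
  except AttributeError:
      first = next(it)    # works on both Python 2 and 3
  ```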
* [Libfix] Add brick sharing support to replace_brick_from_volume() (kshithijiyer, 2020-04-21; 1 file, -5/+17)
  Problem: The replace_brick_from_volume() function doesn't support
  brick sharing, which is a problem when performing replace-brick with a
  large number of volumes: it is unable to get bricks and hence fails.
  Solution: Add a boolean kwarg, multi_vol, and use
  form_bricks_for_multivol() or form_bricks_list() according to its
  value. The default value of multi_vol is false, which uses only
  form_bricks_list(), as before the change.
  Blocks: This patch currently blocks the below-mentioned patch:
  https://review.gluster.org/#/c/glusto-tests/+/19483/
  Change-Id: I842a4ebea81e53e694b5b194294f1b941f47d380
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Lib] Add get_gluster_state method (Pranav, 2020-04-15; 1 file, -1/+79)
  The method executes the 'gluster get-state' command on the specified
  node, verifies the glusterd state dump, reads it and returns the
  content as a dictionary.
  Change-Id: I0356ccf740fd97d1930e9f09d6111304b14cd015
  Signed-off-by: Pranav <prprakas@redhat.com>
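  Hypothetical usage (the exact keys of the returned dictionary depend on the get-state dump sections and are not shown here):
  ```
  >>> state = get_gluster_state(self.mnode)
  >>> type(state)
  <class 'dict'>
  ```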
* [Libfix] Remove rpyc_get_connection() dependency from code (kshithijiyer, 2020-04-01; 5 files, -122/+105)
  Problem: g.rpyc_get_connection() has a limitation where it can't
  convert Python 2 calls to Python 3 calls. Due to this, a large number
  of test cases fail when executed from a Python 2 machine on a
  Python 3-only setup, or vice versa, with the below stack trace:
  ```
  E ========= Remote Traceback (1) =========
  E Traceback (most recent call last):
  E   File "/root/tmp.tL8Eqx7d8l/rpyc/core/protocol.py", line 323, in _dispatch_request
  E     res = self._HANDLERS[handler](self, *args)
  E   File "/root/tmp.tL8Eqx7d8l/rpyc/core/protocol.py", line 591, in _handle_inspect
  E     if hasattr(self._local_objects[id_pack], '____conn__'):
  E   File "/root/tmp.tL8Eqx7d8l/rpyc/lib/colls.py", line 110, in __getitem__
  E     return self._dict[key][0]
  E KeyError: (b'rpyc.core.service.SlaveService', 94282642994712, 140067150858560)
  ```
  Solution: Modify the code to not use g.rpyc_get_connection(). The
  following changes accomplish this:
  1) Remove the code using g.rpyc_get_connection() and use generic logic
     in these functions:
     a. do_bricks_exist_in_shd_volfile()
     b. get_disk_usage()
     c. mount_volume()
     d. list_files()
     e. append_string_to_file()
  2) Create files that can be uploaded to and executed on
     clients/servers, to avoid rpc calls in these functions:
     a. calculate_hash()
     b. validate_files_in_dir()
  3) Modify setup.py to push the below files to
     /usr/share/glustolibs/scripts/:
     a. compute_hash.py
     b. walk_dir.py
  Change-Id: I00a81a88382bf3f8b366753eebdb2999260788ca
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Libfix] Add function in baseclass for cleanup (Sri Vignesh, 2020-03-24; 1 file, -1/+109)
  Add docleanup and docleanupclass to the base class; they call the
  function fresh_setup_cleanup, which cleans the nodes back to a fresh
  setup when the flag is set to true or whenever the test case fails.
  Change-Id: I951ff59cc3959ede5580348b7f93b57683880a23
  Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [Test+libfix] Add testcase for access of file (kshithijiyer, 2020-03-19; 1 file, -3/+15)
  Testcase steps (file access):
  - Rename the file so that the hashed and cached subvols are different.
  - Make sure the file can be accessed as long as the cached subvol is
    up.
  Fixes a library issue as well, in find_new_hashed().
  Change-Id: Id81264848d6470b9fe477b50290f5ecf917ceda3
  Co-authored-by: Susant Palai <spalai@redhat.com>
  Signed-off-by: Susant Palai <spalai@redhat.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test+libfix] Testcases for rename with subvol down (kshithijiyer, 2020-03-17; 1 file, -4/+3)
  Case 1:
  1. mkdir srcdir and dstdir (such that srcdir and dstdir hash to
     different subvols)
  2. Bring down srcdir's hashed subvol
  3. mv srcdir dstdir (should fail)
  Case 2:
  1. mkdir srcdir dstdir
  2. Bring down srcdir's hashed subvol
  3. Bring down dstdir's hashed subvol
  4. mv srcdir dstdir (should fail)
  Case 3:
  1. mkdir srcdir dstdir
  2. Bring down dstdir's hashed subvol
  3. mv srcdir dstdir (should fail)
  Additional library fix details: also fix the library function to work
  with distributed-disperse volumes by removing
  `if oldhashed._host != brickdir._host:`, as the same node can host
  multiple bricks of the same volume.
  Change-Id: Iaa472d1eb304b547bdec7a8e6b62c1df1a0ce591
  Co-authored-by: Susant Palai <spalai@redhat.com>
  Signed-off-by: Susant Palai <spalai@redhat.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Lib] Library for multi volume creation (Bala Konda Reddy M, 2020-03-16; 2 files, -5/+220)
  Earlier, brick creation was carried out based on the difference
  between used and unused bricks. This is a bottleneck for implementing
  brick-multiplexing test cases; moreover, no more than 10 volumes could
  be created. This library implements a way to create bricks on top of
  the existing servers in a cyclic fashion, so that each brick partition
  on each server holds an equal number of bricks.
  Added a parameter to the setup_volume function: if the multi_vol flag
  is set, bricks are fetched in a cyclic manner using
  form_bricks_for_multi_vol; otherwise the old mechanism is used.
  Added a bulk_volume_creation function to create as many volumes as the
  user specifies.
  Change-Id: I2103ec6ce2be4e091e0a96b18220d5e3502284a0
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [libfix] Fix framework break due to geo_rep patch (kshithijiyer, 2020-03-03; 1 file, -3/+3)
  Problem: Due to patch [1], the framework broke and all test cases
  failed with the below backtrace:
  ```
  > mount_dict['server'] = cls.snode
  E AttributeError: type object 'VolumeAccessibilityTests_cplex_replicated_glusterf' has no attribute 'snode'
  ```
  Solution: mnode_slave had accidentally been written as snode. Also,
  cls.geo_rep_info wasn't a safe condition operand, so it was changed to
  cls.slaves.
  Testcase results with the patch:
  ```
  test_cvt.py::TestGlusterHealSanity_cplex_replicated_glusterfs::test_self_heal_when_io_in_progress PASSED
  test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed-dispersed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
  test_cvt.py::TestGlusterHealSanity_cplex_dispersed_glusterfs::test_self_heal_when_io_in_progress PASSED
  test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed_nfs::test_expanding_volume_when_io_in_progress PASSED
  test_cvt.py::TestQuotaSanity_cplex_replicated_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
  test_cvt.py::TestSnapshotSanity_cplex_distributed-dispersed_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
  test_cvt.py::TestGlusterReplaceBrickSanity_cplex_distributed-replicated_glusterfs::test_replace_brick_when_io_in_progress PASSED
  test_cvt.py::TestQuotaSanity_cplex_distributed_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
  test_cvt.py::TestGlusterExpandVolumeSanity_cplex_dispersed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
  test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed-dispersed_glusterfs::test_shrinking_volume_when_io_in_progress PASSED
  test_cvt.py::TestGlusterExpandVolumeSanity_cplex_dispersed_nfs::test_expanding_volume_when_io_in_progress PASSED
  test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed_glusterfs::test_shrinking_volume_when_io_in_progress PASSED
  test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
  test_cvt.py::TestQuotaSanity_cplex_dispersed_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
  test_cvt.py::TestSnapshotSanity_cplex_distributed-replicated_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
  test_cvt.py::TestSnapshotSanity_cplex_dispersed_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
  test_cvt.py::TestQuotaSanity_cplex_distributed-replicated_glusterfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
  test_cvt.py::TestSnapshotSanity_cplex_replicated_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
  test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed-dispersed_nfs::test_shrinking_volume_when_io_in_progress PASSED
  test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed_nfs::test_shrinking_volume_when_io_in_progress PASSED
  ```
  Links:
  [1] https://review.gluster.org/#/c/glusto-tests/+/24029/
  Change-Id: If7b329e232ab61df9f9d38f5491c58693336dd48
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [lib] Add functionality to setup master and slave volumes (kshithijiyer, 2020-03-02; 2 files, -1/+118)
  Adds the code for the following:
  1. The function setup_master_and_slave_volumes() in geo_rep_libs.
  2. Variables for master_mounts, slave_mounts, master_volume and
     slave_volume in gluster_base_class.py.
  3. The classmethod setup_and_mount_geo_rep_master_and_slave_volumes in
     gluster_base_class.py.
  Change-Id: Ic8ae1cb1c8b5719d4774996c3e9e978551414b44
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [LibFix] Update run_layout_tests() and validate_files_in_dir() (sayaleeraut, 2020-03-02; 1 file, -27/+43)
  The new function volume_type() checks whether the volume under test is
  of pure Replicate/Disperse/Arbiter type and returns the result as a
  string. The functions run_layout_tests() and validate_files_in_dir()
  have been modified to check the Gluster version and volume type, in
  order to fix the issues caused by DHT pass-through.
  Change-Id: Ie7ad259883907c1fdc0b54e6743636fdab793272
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [LibFix] Adding code to fix issues caused by DHT pass-through (sayaleeraut, 2020-02-28; 1 file, -45/+74)
  The issue was that whenever a TC called the _get_layout() and
  _is_complete() methods, it failed on the Replicate/Arbiter/Disperse
  volume types because of DHT pass-through. The functions get_layout()
  and is_complete() have been modified to check the Gluster version and
  volume type before running, in order to fix the issue.
  About DHT pass-through: please refer to
  https://github.com/gluster/glusterfs/issues/405 for the details.
  Change-Id: I0b0dc0ac3cbdef070a20854fbc89442fee1da8b6
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Lib Fix] Typo leading to error not being published (nchilaka, 2020-02-19; 1 file, -1/+1)
  Instead of calling g.log.error, we were calling g.log.err. Due to
  this, instead of throwing the right error message when, say, doing
  volume cleanup, it was throwing an ambiguous traceback.
  Change-Id: I39887ce08756eaf29df2d99f73cc7795a4d2c065
* [LibFix] Add code to check brick (sayaleeraut, 2020-02-18; 1 file, -4/+7)
  Earlier the method get_volume_type() passed when run in the
  interpreter, but when called from a TC it failed at the condition
  (Line: 2235), because, e.g., the value of brickdir_path is
  "dhcp47-3.lab.eng.blr.redhat.com:/bricks/brick2/testvol_replicated_brick2/"
  and it tries to find that value in the list
  ['10.70.46.172:/bricks/brick2/testvol_replicated_brick0',
   '10.70.46.195:/bricks/brick2/testvol_replicated_brick1',
   '10.70.47.3:/bricks/brick2/testvol_replicated_brick2']
  returned by get_all_bricks(), which fails. With the fix, it runs
  successfully: for host dhcp47-3.lab.eng.blr.redhat.com it checks
  whether the brick /bricks/brick2/testvol_replicated_brick2 is present
  in the list brick_paths[], which consists of only the paths, not the
  IP addresses, of the bricks present on that host.
  Change-Id: Ie595faba1e92c559293ddd04f46b85065b23dfc5
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Libfix] Add steps to bring bricks online and volume reset (Sri Vignesh, 2020-02-17; 1 file, -1/+28)
  Add steps to bring offline bricks back online and to reset the volume
  in failure scenarios.
  Change-Id: I9bdadd8a80ded81cf7cb4e324a18321400bfcc4c
  Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [LibFix] Fix volume type check (sayaleeraut, 2020-02-14; 1 file, -4/+4)
  Earlier the elements passed in the list for the volume type check were
  ('replicate', 'disperse', 'arbiter'), but the volume type returned by
  get_volume_type() is in the format 'Replicate', 'Disperse', 'Arbiter',
  and list membership checks are case-sensitive. This change makes sure
  the check matches.
  Change-Id: Ic73ca946cd9c06bfa5b92605dbeba74d6ffa83d9
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Lib] Add function get_volume_type() (sayaleeraut, 2020-02-14; 1 file, -1/+35)
  The function get_volume_type() returns the type of the volume under
  test (distributed/replicate/disperse/arbiter/distributed-replicated/
  distributed-dispersed/distributed-arbiter).
  Change-Id: Ib23ae1ad18ef65d0520fe041a5f80211030a034b
  Signed-off-by: sayaleeraut <saraut@redhat.com>
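  Hypothetical usage, following the entries above where the method is called with a <host>:<brick-path> string and returns capitalized type names:
  ```
  >>> get_volume_type("dhcp47-3.lab.eng.blr.redhat.com:/bricks/brick2/"
  ...                 "testvol_replicated_brick2/")
  'Replicate'
  ```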
* [LibFix] Fix issue caused by DHT pass-through (sayaleeraut, 2020-02-13; 1 file, -9/+96)
  The DHT pass-through functionality was introduced in Gluster 6, due to
  which TCs were failing for the Replicate, Disperse and Arbiter volume
  types whenever the function to get the hashrange was called. With this
  fix, first the Gluster version and then the volume type are checked
  before calling the function to get the hashrange. If the Gluster
  version is greater than or equal to 6, the layout is not checked for
  pure AFR/Arbiter/EC volumes.
  About the DHT pass-through option: the distribute xlator now skips
  unnecessary checks and operations when the distribute count is one for
  a volume, resulting in improved performance. It comes into play when
  there is only one brick, or the volume is a pure replicate, pure
  disperse or pure arbiter volume.
  Change-Id: I55634f495a54e3c9909b1e1c716990b9ee9834a3
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [lib] Adding collect_bricks_arequal() to lib_utils.py (kshithijiyer, 2020-02-04; 1 file, -1/+47)
  Adding the function collect_bricks_arequal() to lib_utils.py to
  collect the arequal-checksum on all the bricks of all the nodes used
  to create a volume, using the below command:
  ```
  arequal-checksum -p <BrickPath> -i .glusterfs -i .landfill -i .trashcan
  ```
  Usage:
  ```
  >>> all_bricks = get_all_bricks(self.mnode, self.volname)
  >>> ret, arequal = collect_bricks_arequal(all_bricks)
  >>> ret
  True
  ```
  Change-Id: Id42615469be18d84e5691c982369634c436ed0cf
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>