Commit message | Author | Age | Files | Lines
* [TestFix] Break down sparsefile creation into chunks (pre_glusterfs_6_0) | Leela Venkaiah G | 2020-07-22 | 1 | -17/+43
- Remove decimal before passing to `head` command
- Break up sparsefile into chunks of ~half the brick size
- Whole test has to be skipped due to BZ #1339144

Change-Id: I7a9ae25798b442c74248954023dd821c3442f8f9
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
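A minimal sketch of the chunking idea described above, with illustrative sizes and paths rather than the test's actual helper names:

```
def chunk_sizes(total_kb, brick_size_kb):
    """Yield integer chunk sizes (KB), capped at ~half a brick."""
    chunk = int(brick_size_kb // 2)  # integer only: no decimal may reach `head`
    while total_kb > 0:
        size = min(chunk, total_kb)
        yield size
        total_kb -= size


# Each chunk could then be materialized with something like
#   head -c <size>K /dev/zero >> <file>
for size in chunk_sizes(10240, 4096):
    print("head -c %dK /dev/zero >> /mnt/testvol/sparsefile" % size)
```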
* [Testfix] Skip test unless 3 clients are provided | Bala Konda Reddy M | 2020-07-22 | 1 | -24/+9
Problem: Creating a third mount obj works for the glusterfs protocol,
but running with nfs/cifs in future might hit complications and the
test might fail.

Solution: Skip the test unless three clients are provided. Also
removes redundant logging and includes minor fixes.

Change-Id: Ie657975a46b6989cb9f057f5cc337333bbf1010d
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
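The skip itself is the stock unittest pattern; a sketch assuming the Glusto base class exposes `self.clients`:

```
def setUp(self):
    # Faking a third mount obj works for glusterfs, but nfs/cifs runs
    # may break, so skipping is the safer contract.
    self.get_super_method(self, 'setUp')()
    if len(self.clients) < 3:
        self.skipTest("This test requires at least 3 clients")
```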
* [TestFix] Make test compatible with Python 2 | Leela Venkaiah G | 2020-07-20 | 1 | -8/+16
- The translate function is available on `unicode` strings in Python 2

Change-Id: I6aa01606acc73b18d889a965f1c01f9a393c2c46
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
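For context, Python 2's mapping-based translate() exists only on `unicode` strings; a small compat sketch (not the patch itself):

```
import sys


def strip_chars(text, chars):
    """Remove every character in `chars` from `text` on py2 and py3."""
    if sys.version_info[0] == 2:
        # In Python 2, only `unicode` has the mapping-based translate()
        text = unicode(text)  # noqa: F821 -- py2-only builtin
    return text.translate({ord(c): None for c in chars})


print(strip_chars("f-i-l-e", "-"))  # -> file
```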
* [Test] Truncate a file with brick down | Bala Konda Reddy M | 2020-07-20 | 1 | -0/+124
Test steps:
1. Create a volume, start and mount it on a client
2. Bring down redundant bricks in the subvol
3. Create a file on the volume using "touch"
4. Truncate the file using "O_TRUNC"
5. Bring the bricks online
6. Write data to the file and wait for heal completion
7. Check for crashes and coredumps

Change-Id: Ie02a56ab5180f6a88e4499c8cf6e5fe5019e8df1
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
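Step 4 boils down to an open(2) with O_TRUNC on the mounted file; a sketch with an illustrative mount path:

```
import os

path = "/mnt/testvol/file1"  # illustrative mount-point path

# Opening an existing file with O_TRUNC truncates it to 0 bytes,
# which must later be healed onto the offline bricks.
fd = os.open(path, os.O_WRONLY | os.O_TRUNC)
os.close(fd)
```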
* [Lib] Library to set up CTDB for Samba | vdas-redhat | 2020-07-20 | 1 | -0/+142
CTDB provides HA for Samba; configuring CTDB is mandatory for Samba.

Change-Id: I5ee28afb86dbc5853e5d54ad2b4460d37c8bfcef
Signed-off-by: vdas-redhat <vdas@redhat.com>
* [Test] mv and ls operations with bricks offline | Bala Konda Reddy M | 2020-07-17 | 1 | -2/+121
Test Steps:
1. Create a volume and mount it on 3 clients.
2. Bring two bricks offline in each subvol.
3. On client1: under dir1, create files f{1..10000}, run in background.
4. On client2: under /, touch x{1..1000}.
5. On client3: start creating x{1001..10000}.
6. Bring the offline bricks online (all the bricks which were down,
   2 in each of the two subvols).
7. While IO on client1 and client3 is happening, on client2 move all
   the x* files into dir1.
8. Perform a lookup from client3.

Change-Id: Ib72648af783535557e20cea7e64ea68036b23121
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Test] Check data integrity for EC volume | ubansal | 2020-07-17 | 1 | -0/+314
Steps:
1. Create an EC volume and mount it
2. Run different types of IOs
3. Take an arequal of the mountpoint
4. Bring down redundant bricks
5. Take an arequal of the mountpoint
6. Bring down another set of redundant bricks
7. Take an arequal of the mountpoint

Change-Id: If253cdfe462c6671488e858871ec904fbb2f9ead
Signed-off-by: ubansal <ubansal@redhat.com>
* [Test] Add tc for NFS-Ganesha ExportID | Manisha Saini | 2020-07-17 | 1 | -0/+53
Change-Id: I8ae78b06706bc4818cbd2b00b386f362883cb9d7
Signed-off-by: Manisha Saini <msaini@redhat.com>
* [Test] Rootsquash functionality test with glusterd restart | Manisha Saini | 2020-07-15 | 1 | -0/+190
Verification of rootsquash functionality with glusterd restart:
* Create some files and dirs inside the mount point
* Set permission as 777 for the mount point
* Enable root-squash on the volume
* Create some more files and dirs
* Restart glusterd on all the nodes
* Try to edit a file created in step 1;
  the nfsnobody user should not be allowed to edit it
* Try to edit a file created in step 5;
  the nfsnobody user should be allowed to edit it

Change-Id: Id2208127ce3c3ea2181d64af0e5e114c49f196ba
Signed-off-by: Manisha Saini <msaini@redhat.com>
* [Tool] Add tool to split log into tc-wise logs | kshithijiyer | 2020-07-15 | 3 | -0/+170
Adding a tool to split glusto-tests logs into tc-wise (per-testcase) logs.

usage: log_splitter [-h] -f LOG_FILE [-d DESTINATION_DIR]

Tool to split glusto logs to individual testcase logs.

optional arguments:
  -h, --help            show this help message and exit
  -f LOG_FILE, --log_file LOG_FILE
                        Glusto test log file
  -d DESTINATION_DIR, --dist-dir DESTINATION_DIR
                        Path where individual test logs are to be stored.

Change-Id: I776a1455f9f70c13ae6ad9d11f23a4b5366c5f6f
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Validate open file migration | sayaleeraut | 2020-07-14 | 1 | -0/+131
Description: Checks that files with open fds are migrated successfully.

Steps:
1) Create a volume.
2) Mount the volume using FUSE.
3) Create files on the volume mount.
4) Open fds for the files and keep doing read/write operations on them.
5) While the fds are open, add bricks to the volume and trigger
   rebalance.
6) Wait for rebalance to complete.
7) Wait for writes on the open fds to complete.
8) Check for any data loss during rebalance.
9) Check if rebalance has any failures.

Change-Id: I9345827ae36eb6d2c264d0e0874738211aadc55e
Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Validate EIO changes to EDQUOT errors | Leela Venkaiah G | 2020-07-14 | 1 | -0/+389
- Tests to check that EIO changes to EDQUOT errors on reaching quota
- Scenarios covered:
  - Redundant bricks are down in a volume
  - Multiple IOs happening from clients
  - Single IO session from a client

Change-Id: Ie15244231dae7fe2e61cc6df0d7f35d2231d9bdf
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Libfix] Fix rpyc dependency for NFS-Ganesha libs | Pranav | 2020-07-10 | 2 | -45/+51
Problem:
The rpyc connection fails in envs where the python versions differ,
resulting in test failures.

Fix:
Replace rpyc with the standard ssh approach to overcome this issue.

Change-Id: Iee4bb968b8b94a6ab3e0fe0d16babacad914a92d
Signed-off-by: Pranav <prprakas@redhat.com>
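The ssh approach can ride on Glusto's own remote execution instead of an rpyc channel; a sketch using `g.run`, which returns `(rc, stdout, stderr)`, with an illustrative host and command:

```
from glusto.core import Glusto as g

# Run the command on the remote ganesha node over ssh and parse the
# output locally, instead of shipping python objects over rpyc.
ret, out, err = g.run("ganesha1.example.com",  # illustrative host
                      "showmount -e localhost")
if ret != 0:
    g.log.error("showmount failed: %s", err)
```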
* [TestFix] Fix failures in test_dht_create_dir.py | sayaleeraut | 2020-07-09 | 1 | -80/+44
Added the following changes:
1) The test script consists of 2 test cases, hence changed
   setUpClass(cls) to setUp(self).
2) Changed the code that checks whether the symlink points to the
   correct location in test_create_link_for_directory(self). It was
   earlier failing with "AssertionError: sym link does not point to
   correct location" because the output of the 'stat' command for a
   symlink file varies per platform.

Change-Id: I43f98a0d60b3ebf30236ff7e702667373a39a0e1
Signed-off-by: sayaleeraut <saraut@redhat.com>
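A platform-independent way to assert a symlink target, rather than parsing `stat` output, is to read the link directly; a sketch with illustrative paths (the test itself drives this on the client):

```
import os

link = "/mnt/testvol/dir1_symlink"   # illustrative
expected = "/mnt/testvol/dir1"

# os.readlink returns the raw target and is identical on every
# platform, unlike the formatted text emitted by `stat`.
assert os.readlink(link) == expected, \
    "sym link does not point to correct location"
```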
* [Lib] CTDB library operations | vdas-redhat | 2020-07-08 | 1 | -0/+478
Change-Id: Ia323aa80efdf5331d58c57be1f087b012fc94e1a
Signed-off-by: vdas-redhat <vdas@redhat.com>
* [Tool] Add tool to log memory and CPU usage | kshithijiyer | 2020-07-08 | 1 | -0/+108
Adding a tool to log the memory and CPU usage of a given process.

usage: memory_and_cpu_logger.py [-h] [-p PROCESS_NAME] [-i INTERVAL]
                                [-c COUNT] [-t TESTNAME]

A tool to log memory and CPU usage of a given process

optional arguments:
  -h, --help            show this help message and exit
  -p PROCESS_NAME, --process_name PROCESS_NAME
                        Name of process for which cpu and memory is to
                        be logged
  -i INTERVAL, --interval INTERVAL
                        Time interval to wait between consecutive logs
                        (Default: 60)
  -c COUNT, --count COUNT
                        Number of times memory and CPU have to be
                        logged (Default: 10)
  -t TESTNAME, --testname TESTNAME
                        Test name for which memory is logged

Tasks to be done:
1. Add a library to run the tool on clients and servers.
2. Add a base_class function to log all values.
3. Add a library function to read the csv files and compute
   information.

Change-Id: I9e2e8825b103cf941c0a7e1f7eadadd65fc670d1
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
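A minimal sketch of the polling loop such a tool needs, shelling out to `ps` and appending CSV rows; everything here is illustrative rather than the tool's actual code:

```
import csv
import subprocess
import time


def log_usage(process_name, interval=60, count=10, out="usage.csv"):
    """Append one (timestamp, %cpu, %mem) row per matching pid."""
    with open(out, "a") as csv_file:
        writer = csv.writer(csv_file)
        for _ in range(count):
            # -C selects by command name; the `=` suppresses headers
            res = subprocess.run(
                ["ps", "-C", process_name, "-o", "%cpu=,%mem="],
                capture_output=True, text=True)
            for line in res.stdout.splitlines():
                cpu, mem = line.split()
                writer.writerow([time.time(), cpu, mem])
            time.sleep(interval)


log_usage("glusterfsd", interval=1, count=2)
```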
* [Libfix] Remove tier libraries from glusto-tests | Bala Konda Reddy M | 2020-07-08 | 5 | -2138/+161
Tier libraries are not used across test cases, and the tier checks in
brick_libs.py and volume_libs.py degrade the performance (execution
time) of regular test cases. Another reason to remove the tier
libraries from glusto-tests is that the functionality is deprecated.

Change-Id: Ie56955800515b2ff5bb3b55debaad0fd88b5ab5e
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Test] Add file rename test when dest exists in diff subvol combinations | Pranav | 2020-07-08 | 1 | -0/+919
Tests to validate the behaviour of rename cases when the destination
file exists and is hashed or cached to different subvol combinations.

Change-Id: I44752a444d9c112d590efd66c48ff095c22fcecd
Signed-off-by: Pranav <prprakas@redhat.com>
* [TestFix] Remove hot and cold bricks list - Part2 | Bala Konda Reddy M | 2020-07-08 | 4 | -22/+6
For the non-tiered volume types, a few test cases were collecting both
hot_tier_bricks and cold_tier_bricks while bringing bricks offline,
which is not needed. Also removes the tier kwarg in one of the tests,
and collects only the bricks of the particular volume, as below.

Removing the below section:
```
bricks_to_bring_offline_dict = (select_bricks_to_bring_offline(
    self.mnode, self.volname))
bricks_to_bring_offline = list(filter(None, (
    bricks_to_bring_offline_dict['hot_tier_bricks'] +
    bricks_to_bring_offline_dict['cold_tier_bricks'] +
    bricks_to_bring_offline_dict['volume_bricks'])))
```

Modifying as below for bringing bricks offline:
```
bricks_to_bring_offline = bricks_to_bring_offline_dict['volume_bricks']
```

Change-Id: I4f59343b380ced498516794a8cc7c968390a8459
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Test] IO continuity on brick down in EC volume | Leela Venkaiah G | 2020-07-08 | 1 | -0/+215
Test Steps:
- Create, start and mount an EC volume on two clients
- Create multiple files and directories, including all file types,
  in one directory from client 1
- Take an arequal checksum of the above data
- Create another folder and pump different fops from client 2
- Fail and bring up redundant bricks in a cyclic fashion in all
  subvols, maintaining a minimum delay between each operation
- In every cycle, create a new dir while a brick is down and wait
  for heal
- Validate that heal info on the volume does not error out instantly
  while a brick is down
- Validate arequal on bringing the brick offline

Change-Id: Ied5e0787eef786e5af7ea70191f5521b9d5e34f6
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Testfix] Fix test_mount_point_not_go_to_rofs failure | kshithijiyer | 2020-07-06 | 1 | -34/+27
Problem:
Testcase test_mount_point_not_go_to_rofs fails every time in the CI
runs with the below traceback:

> ret = wait_for_io_to_complete(self.all_mounts_procs, self.mounts)
tests/functional/arbiter/test_mount_point_while_deleting_files.py:137:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
build/bdist.linux-x86_64/egg/glustolibs/io/utils.py:290: in wait_for_io_to_complete
    ???
/usr/lib/python2.7/site-packages/glusto/connectible.py:247: in async_communicate
    stdout, stderr = p.communicate()
/usr/lib64/python2.7/subprocess.py:800: in communicate
    return self._communicate(input)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <subprocess.Popen object at 0x7febb64238d0>, input = None

    def _communicate(self, input):
        if self.stdin:
            # Flush stdio buffer. This might block, if the user has
            # been writing to .stdin in an uncontrolled fashion.
>           self.stdin.flush()
E           ValueError: I/O operation on closed file

/usr/lib64/python2.7/subprocess.py:1396: ValueError

This happens because self.io_validation_complete is never set to True
in the testcase.

Fix:
Set self.io_validation_complete to True, move code from tearDownClass
to tearDown, and modify the logic to not add both clients to
self.mounts.

Change-Id: I51ed635e713838ee3054c4d1dd8c6cdc16bbd8bf
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Libfix] Skip gluster_shared_storage deletion | kshithijiyer | 2020-07-02 | 1 | -9/+24
Problem:
In the present logic gluster_shared_storage gets deleted during force
cleanup, which causes nfs-ganesha testcases to fail.

Fix:
Add logic to check whether shared storage is enabled; if it is, skip:
1. Peer cleanup and peer probe
2. Deleting gluster_shared_storage vol files

Change-Id: I5219491e081bd36dd40342262eaba540ccf00f51
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
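Detecting shared storage can be as simple as checking the volume list; a sketch using Glusto's `g.run` (the library's actual helper may differ):

```
from glusto.core import Glusto as g


def is_shared_storage_enabled(mnode):
    """gluster_shared_storage appears as a regular volume when the
    cluster.enable-shared-storage option is on."""
    ret, out, _ = g.run(mnode, "gluster volume list")
    return ret == 0 and "gluster_shared_storage" in out

# Force cleanup can then skip peer detach/probe and vol-file deletion
# whenever is_shared_storage_enabled(mnode) returns True.
```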
* [Libfix] Make is_linkto_file() compatible with newer platforms | Pranav | 2020-06-30 | 1 | -5/+2
Problem:
The 'file <file_path>' command output differs on newer platforms. An
additional ',' is present in the latest builds of the packages,
causing tests which use this method to fail on newer platforms.

Fix:
Modify the method to handle the latest package output as well.

Change-Id: I3e59a69b09b960e3a38131a3e76d664b34799ab1
Signed-off-by: Pranav <prprakas@redhat.com>
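A version-tolerant parse simply normalizes the separators before matching; a simplified sketch (the real helper may also consult the dht linkto xattr):

```
from glusto.core import Glusto as g


def is_linkto_file(host, path):
    """True when `file` reports a sticky empty file (a dht linkto)."""
    ret, out, _ = g.run(host, "file %s" % path)
    if ret != 0:
        return False
    # Newer platforms emit "sticky, empty"; older ones "sticky empty".
    # Dropping commas makes the check work on both.
    return "sticky empty" in out.replace(",", "")
```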
* [TestFix] Increased timeout for some TCs | ubansal | 2020-06-29 | 1 | -3/+5
A few TCs were failing due to timeout issues; increased the timeout
for those TCs.

Change-Id: Id62bee81e1cb6b8bb3a712858404c7092142072b
Signed-off-by: ubansal <ubansal@redhat.com>
* [TestFix] Increasing timeout for heal | ubansal | 2020-06-26 | 1 | -2/+2
As heal completion is failing intermittently for disperse volumes,
increased the timeout for heal.

Change-Id: I5e7b7c8eb332ada1abc72389fc8ce883e269d226
Signed-off-by: ubansal <ubansal@redhat.com>
* [TestFix] Test FD IOs on replace-brick in EC volume | ubansal | 2020-06-26 | 1 | -4/+41
Change-Id: Ib39894e9f44c41f5539377c5c124ad45a786cbb3
Signed-off-by: ubansal <ubansal@redhat.com>
* [Test] Add file rename tests when dest file in src hashed/cached subvol | Pranav | 2020-06-25 | 1 | -0/+639
Tests to validate the behaviour of different scenarios of file rename
cases, when the destination file initially exists and is hashed to
the source file's hashed or cached subvol.

Change-Id: Iec12d33c459cb966861d2efac2bae85103555cc1
Signed-off-by: Pranav <prprakas@redhat.com>
* [TestFix] Change method name | sayaleeraut | 2020-06-24 | 1 | -1/+1
Changing the method name from test_readdirp_with_rebalance(self) to
test_access_file_with_stale_linkto_xattr(self).

Change-Id: I5503e301d65f96e38aa135827d8bc698a0371281
Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Doc] Fix tox command code formatting | kshithijiyer | 2020-06-24 | 1 | -4/+0
Change-Id: I0ac4fca1b41921e01ee6003d01fd1557df97053c
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Check access file with stale linkto xattr | sayaleeraut | 2020-06-24 | 1 | -0/+169
Description:
The test script verifies that a file with a stale linkto xattr can be
accessed by a non-root user.

Steps:
1) Create a volume and start it.
2) Mount the volume on the client node using FUSE.
3) Create a file.
4) Enable performance.parallel-readdir and performance.readdir-ahead
   on the volume.
5) Rename the file in order to create a linkto file.
6) Force the linkto xattr values to become stale by changing the dht
   subvols in the graph.
7) Log in as a non-root user and access the file.

Change-Id: I4f275dedd47a851c2c4839f51cf1867638a66667
Signed-off-by: sayaleeraut <saraut@redhat.com>
* [LibFix] Monitor heal only on specific bricks | Leela Venkaiah G | 2020-06-24 | 1 | -3/+12
- Add an optional argument (bricks) to monitor_heal_completion
- If provided, heal will be monitored on this set of bricks
- Useful when dealing with EC volumes

Change-Id: I1c3b137e98966e21c52e0e212efc493aca9c5da0
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
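A sketch of the resulting shape, where get_all_bricks is the real brick_libs helper and pending_heal_entries is a hypothetical stand-in for the library's per-brick heal-info parsing:

```
import time

from glustolibs.gluster.brick_libs import get_all_bricks


def monitor_heal_completion(mnode, volname, timeout_period=1200,
                            bricks=None):
    """Wait until no heal entries are pending.

    bricks: optional list of "host:/brick" strings; when given, only
    those bricks are polled, which is handy for EC volumes where some
    bricks are deliberately offline.
    """
    to_check = bricks or get_all_bricks(mnode, volname)
    deadline = time.time() + timeout_period
    while time.time() < deadline:
        # pending_heal_entries() is hypothetical: it would return the
        # heal-info entry count for a single brick.
        if all(pending_heal_entries(mnode, volname, b) == 0
               for b in to_check):
            return True
        time.sleep(120)
    return False
```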
* [TestFix] Remove tier related kwarg from test | Bala Konda Reddy M | 2020-06-24 | 1 | -1/+1
Removing the 'add_to_hot_tier' parameter as it defaults to False and
is not needed for the add-brick operation in the test, as the volume
type is not tier.

Change-Id: I4a697a453e368197dfaf143d344a623d449e2614
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Test] Start/stop of snapd daemon on the cloned volume | srivickynesh | 2020-06-24 | 2 | -1/+404
Test cases in this module test the USS functionality of snapd on a
cloned volume: snapshots are validated to be present inside the
.snaps directory by terminating snapd on the nodes one by one and
checking that .snaps is still accessible.

Change-Id: I98d48268e7c5c5952a7f0f544960203d8634b7ac
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [Test] Set disperse quorum count to 5 and test EC volume | ubansal | 2020-06-24 | 1 | -0/+309
On setting the disperse quorum count to 5, at least 5 bricks should
be online for writes on the volume to succeed.

Steps:
1. Set disperse quorum count to 5
2. Write and read IOs
3. Bring down the 1st brick
4. Writes and reads are successful
5. Bring down the 2nd brick
6. Writes should fail and reads are successful
7. Write and read again; writes should fail and reads succeed
8. Rebalance should fail as quorum is not met
9. Reset the volume
10. Write and read IOs and validate them
11. Bring down redundant bricks
12. Write and read IOs and validate them

Change-Id: Ib825783f01a394918c9016808cc62f6530fe8c67
Signed-off-by: ubansal <ubansal@redhat.com>
* [Lib] Add parse_vol_file method | Pranav | 2020-06-24 | 1 | -0/+59
This method parses the given .vol file and returns the content as a
dictionary.

Change-Id: I6d57366ddf4d4c0249fff6faaca2ed005cd89e7d
Signed-off-by: Pranav <prprakas@redhat.com>
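.vol files are blocks of `volume <name> ... end-volume` containing type/option/subvolumes lines; a standalone parsing sketch along those lines (not the library's exact code):

```
def parse_vol_file(path):
    """Parse a .vol file into {volume_name: {'type': ...,
    'options': {...}, 'subvolumes': [...]}}."""
    volumes, current, name = {}, None, None
    with open(path) as vol_file:
        for line in vol_file:
            words = line.split()
            if not words or words[0].startswith('#'):
                continue
            if words[0] == 'volume':
                name, current = words[1], {'options': {}}
            elif words[0] == 'type':
                current['type'] = words[1]
            elif words[0] == 'option':
                current['options'][words[1]] = ' '.join(words[2:])
            elif words[0] == 'subvolumes':
                current['subvolumes'] = words[1:]
            elif words[0] == 'end-volume':
                volumes[name] = current
                current = None
    return volumes
```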
* [LibFix] Add kwargs start_range and end_range | sayaleeraut | 2020-06-24 | 1 | -3/+21
Adding the kwargs start_range and end_range to the method
open_file_fd() so that FDs can be opened for multiple files if
required.

Change-Id: Ia6d78941935c7fb26045d000c428aba9b9f2425b
Signed-off-by: sayaleeraut <saraut@redhat.com>
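A sketch of how the extended helper might hold FDs over a file range; the file names and the fd trick are illustrative, not the library's exact code:

```
from glusto.core import Glusto as g


def open_file_fd(mountpoint, time, client, start_range=0, end_range=0):
    """Hold an open fd on each file in [start_range, end_range]."""
    # One background subshell per file keeps all fds open concurrently
    # for `time` seconds before the shells exit and release them.
    cmd = ("cd %s; for i in `seq %d %d`; do "
           "(exec 30<> fd_test_file_$i; sleep %d) & done; wait"
           % (mountpoint, start_range, end_range, time))
    return g.run_async(client, cmd)
```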
* [TestFix] Remove hot and cold bricks list on regular volumes | Bala Konda Reddy M | 2020-06-24 | 15 | -60/+18
For the non-tiered volume types, a few test cases were collecting both
hot_tier_bricks and cold_tier_bricks while bringing bricks offline,
which is not needed. Removing the hot and cold tier bricks and
collecting only the bricks of the particular volume, as below.

Removing the below section:
```
bricks_to_bring_offline_dict = (select_bricks_to_bring_offline(
    self.mnode, self.volname))
bricks_to_bring_offline = list(filter(None, (
    bricks_to_bring_offline_dict['hot_tier_bricks'] +
    bricks_to_bring_offline_dict['cold_tier_bricks'] +
    bricks_to_bring_offline_dict['volume_bricks'])))
```

Modifying as below for bringing bricks offline:
```
bricks_to_bring_offline = bricks_to_bring_offline_dict['volume_bricks']
```

Change-Id: Icb1dc4a79cf311b686d839f2c9390371e42142f7
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Testfix] Fix I/O of test_snap_delete_multiple | kshithijiyer | 2020-06-22 | 1 | -7/+8
Problem:
test_snap_delete_multiple fails on I/O validation across all
automation runs constantly with the below traceback:

Traceback (most recent call last):
  File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 1246, in <module>
    rc = args.func(args)
  File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 374, in create_files
    base_file_name, file_types)
  File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 341, in _create_files
    ret = pool.map(lambda file_tuple: _create_file(*file_tuple), files)
  File "/usr/lib64/python2.7/multiprocessing/pool.py", line 250, in map
    return self.map_async(func, iterable, chunksize).get()
  File "/usr/lib64/python2.7/multiprocessing/pool.py", line 554, in get
    raise self._value
IOError: [Errno 17] File exists: '/mnt/testvol_distributed_glusterfs/testfile42.txt'

Fix:
Change the I/O to use the --base-file-name parameter when running the
I/O scripts.

Change-Id: Ic5a8222f4fafeac4ac9aadc9c4d23327711ed9f0
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Set disperse quorum count to 6 and test EC volume | ubansal | 2020-06-22 | 1 | -0/+286
On setting the disperse quorum count to 6, all bricks should be
online for writes on the volume to succeed.

Steps:
1. Set disperse quorum count to 6
2. Write and read IOs
3. Bring down 1 brick
4. Writes should fail and reads are successful
5. Write and read again; writes should fail and reads succeed
6. Rebalance should fail as quorum is not met
7. Reset the volume
8. Write and read IOs and validate them
9. Bring down redundant bricks
10. Write and read IOs and validate them

Change-Id: I93d418fd75d75fa3563d23f52fdd5aed71cfe540
Signed-off-by: ubansal <ubansal@redhat.com>
* [Test] mv and ls operations on EC volumes | Bala Konda Reddy M | 2020-06-19 | 1 | -0/+155
Test Steps:
1. Create a volume and mount it on 3 clients: c1 (client1),
   c2 (client2) and c3 (client3)
2. On c1, mkdir /c1/dir
3. On c2, create 4000 files on the mount point, i.e. "/"
4. After step 3, create the next 4000 files on c2 on the mount point,
   i.e. "/"
5. On c1, create 10000 files on /dir/
6. On c3, start moving the 4000 files created in step 3 from the
   mount point to /dir/
7. On c3, start ls in a loop for 20 iterations

Note: Used upload scripts in setUpClass, as there is one more test to
be added in the same file.

Change-Id: Ibab74433cbec4d6a4f9b494f257b3e517b8fbfbc
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Test] Validate EC Eagerlock Behavior and Performance | nchilaka | 2020-06-19 | 1 | -0/+264
Description:
This script tests the Disperse (EC) eager-lock default values and the
performance impact on lookups with the eager-lock and
other-eager-lock default values.

Change-Id: Ia083d0d00f99a42865fb6f06eda75ecb18ff474f
Signed-off-by: nchilaka <nchilaka@redhat.com>
* [Test] Add tc to check eager lock cli | kshithijiyer | 2020-06-19 | 1 | -0/+71
Testcase steps:
1. Create an EC volume
2. Set the eager lock option by turning on disperse.eager-lock
   using different inputs:
   - Try non-boolean values (must fail)
   - Try boolean values

Change-Id: Iec875ce9fb4c8f7c68b012ede98bd94b82d04d7e
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
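The checks can be driven through set_volume_options, which wraps `gluster volume set`; a sketch with illustrative node and volume names:

```
from glustolibs.gluster.volume_ops import set_volume_options

mnode, volname = "server1.example.com", "testvol"  # illustrative

# Non-boolean values must be rejected by the CLI ...
for value in ("10", "disperse", "every-second"):
    assert not set_volume_options(
        mnode, volname, {'disperse.eager-lock': value}), \
        "CLI accepted bogus value: %s" % value

# ... while boolean spellings must be accepted.
for value in ("on", "off"):
    assert set_volume_options(
        mnode, volname, {'disperse.eager-lock': value}), \
        "failed to set disperse.eager-lock to %s" % value
```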
* [Test] Validate delete file picked for migration | sayaleeraut | 2020-06-19 | 1 | -0/+165
Description:
The test script verifies that if a file is picked for migration and
then deleted, the file is removed successfully.

Steps:
1) First create a big data file of 10GB.
2) Rename that file, such that after rename a linkto file is created
   (this makes sure the file is picked for migration).
3) Add bricks to the volume and trigger rebalance using the force
   option.
4) When the file has been picked for migration, delete that file from
   the mount point.
5) Check whether the file has been deleted on the mount point as well
   as on the back-end bricks.

Change-Id: I137512a1d94a89aa811a3a9d61a9fb4002bf26be
Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Add dht file rename cases where the destination file exists | Pranav | 2020-06-19 | 1 | -0/+540
Tests to validate the behaviour of different scenarios of file rename
cases, when the destination file initially exists.

Change-Id: I12cd2568540bec198f1c3cf85213e0107c9ddd6b
Signed-off-by: Pranav <prprakas@redhat.com>
* [Testfix] Add reset-failed cmd | kshithijiyer | 2020-06-18 | 1 | -3/+7
Problem:
The testcase test_volume_create_with_glusterd_restarts consists of an
asynchronous loop of glusterd restarts, which fails in the latest
runs due to patches [1] and [2] added to glusterfs, which limit
glusterd restarts to 6.

Fix:
Add `systemctl reset-failed glusterd` to the asynchronous loop.

Links:
[1] https://review.gluster.org/#/c/glusterfs/+/23751/
[2] https://review.gluster.org/#/c/glusterfs/+/23970/

Change-Id: Idd52bfeb99c0c43afa45403d71852f5f7b4514fa
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
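`systemctl reset-failed` clears the unit's start counter, so each pass of the loop stays under systemd's StartLimitBurst ceiling; a sketch of the loop command with an illustrative node:

```
from glusto.core import Glusto as g

node = "server2.example.com"  # illustrative

# Clearing the failed/start counter before every restart keeps the
# loop below the 6-restarts-per-hour limit from patches [1] and [2].
cmd = ("for i in `seq 1 5`; do systemctl reset-failed glusterd; "
       "systemctl restart glusterd; done")
proc = g.run_async(node, cmd)
```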
* [TestFix] Remove 'replicated' from volume type | Leela Venkaiah G | 2020-06-17 | 1 | -1/+1
- Test is designed to run on EC volumes only

Change-Id: Ice6a77422695ebabbec6b9cfd910e453e5b2c81a
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Test] Checks all types of heal on EC volume | ubansal | 2020-06-17 | 1 | -0/+285
Steps:
1. Create a volume and mount it
2. Create a directory, dir1, and run different types of IOs
3. Create a directory, dir2
4. Bring down redundant bricks
5. Write IOs to directory dir2
6. Create a directory, dir3, and run IOs (read, write, amend)
7. Bring up the bricks
8. Monitor heal
9. Check the data integrity of dir1

Change-Id: I9a7e366084bb46dcfc769b1d98b89b303fc16150
Signed-off-by: ubansal <ubansal@redhat.com>
* [Libfix] Add retry logic to restart_glusterd() | kshithijiyer | 2020-06-17 | 2 | -19/+14
Problem:
Patches [1] and [2] sent to glusterfs modify glusterd.service.in to
not allow glusterd to restart more than 6 times within an hour. Due
to this, glusterd restarts present in testcases may fail, as there is
no way to figure out when we reach the 6-restart limit.

Fix:
Add code to check if a glusterd restart has failed; if true, call
reset_failed_glusterd() and redo the restart.

Links:
[1] https://review.gluster.org/#/c/glusterfs/+/23751/
[2] https://review.gluster.org/#/c/glusterfs/+/23970/

Change-Id: I041a019f9a8757d8fead00302e6bbcd6563dc74e
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
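The retry shape described by the fix, as a sketch; restart_glusterd and reset_failed_glusterd are the gluster_init helpers this patch touches, though the actual retry may live inside restart_glusterd itself:

```
from glustolibs.gluster.gluster_init import (
    reset_failed_glusterd, restart_glusterd)


def restart_glusterd_with_retry(servers):
    """Restart glusterd, clearing systemd's start-limit on failure."""
    if restart_glusterd(servers):
        return True
    # A failure here usually means the 6-restarts-per-hour limit was
    # hit; reset the counter and retry once.
    if not reset_failed_glusterd(servers):
        return False
    return restart_glusterd(servers)
```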
* [Test] Rename of filetypes on brick down | Leela Venkaiah G | 2020-06-17 | 1 | -0/+221
Test Steps:
1. Create an EC volume
2. Mount the volume using FUSE on two different clients
3. Create ~9 files from one of the clients
4. Create ~9 dirs with ~9 files each from the other client
5. Create soft-links and hard-links for file{4..6}, file{7..9}
6. Create soft-links for dir{4..6}
7. Begin renaming the files, in multiple iterations
8. Bring down a brick while renaming the files
9. Bring the brick online after renaming some of the files
10. Wait for the renaming of the files
11. Validate no data loss and that files are renamed successfully

Change-Id: I6d98c00ff510cb473978377bb44221908555681e
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Test] Git clone on glusterfs volume | Bala Konda Reddy M | 2020-06-16 | 1 | -0/+80
Test Steps:
1. Create a volume and mount it on one client
2. git clone the glusterfs repo on the glusterfs volume
3. Set the performance options to off
4. Repeat step 2 on a different directory

Change-Id: Iaecce7cd14ecf84058c75847a037c6589d3833e9
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>