path: root/tests
Commit message | Author | Age | Files | Lines
...
* [Testfix] Fix hostname issue with special file cases | kshithijiyer | 2020-09-04 | 1 | -3/+3

  Problem: The code fails if we give a hostname in the glusto-tests
  config file. This is because the testcase contains conversion logic
  which converts an IP to a hostname.

  Solution: Add a check for whether the value is an IP, and only then
  run the conversion code (sketched below).

  Change-Id: I3bb1a566d469a4c32161c91fa610da378d46e77e
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
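  For illustration, a minimal sketch of the guard this fix describes,
  assuming the config value arrives as a plain string (the function
  name and values are hypothetical, not the testcase's own):

  ```
  import socket

  def to_hostname(value):
      """Convert only if `value` is an IPv4 address; hostnames pass through."""
      try:
          socket.inet_aton(value)   # raises socket.error when not an IP
      except socket.error:
          return value              # already a hostname, leave it as-is
      return socket.gethostbyaddr(value)[0]

  print(to_hostname("127.0.0.1"))
  ```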
* [Test] Add basic tests for different device files | kshithijiyer | 2020-09-02 | 1 | -0/+328

  Adding the following testcases for block, character and pipe files:

  Test case 1:
  1. Create a distributed volume with 5 sub-volumes, start and mount it.
  2. Create character and block device files.
  3. Check the filetype of the files from the mount point.
  4. Verify that the files are stored only on the bricks mentioned in
     the trusted.glusterfs.pathinfo xattr.
  5. Verify stat output from the mount point and bricks.
  (steps 2-4 are sketched after this entry)

  Test case 2:
  1. Create a distributed volume with 5 sub-volumes, start and mount it.
  2. Create character and block device files.
  3. Check the filetype of the files from the mount point.
  4. Verify that each file is stored only on the one brick mentioned in
     the trusted.glusterfs.pathinfo xattr.
  5. Delete the files.
  6. Verify that the files are deleted from all the bricks.

  Test case 3:
  1. Create a distributed volume with 5 sub-volumes, start and mount it.
  2. Create character and block device files.
  3. Check the filetype of the files from the mount point.
  4. Set a custom xattr on the files.
  5. Verify that the xattr for the files is displayed on the mount point
     and bricks.
  6. Modify the custom xattr value and verify that it is displayed on
     the mount point and bricks.
  7. Remove the xattr and verify that the custom xattr is no longer
     displayed.
  8. Verify that the mount point and bricks show the pathinfo xattr
     properly.

  Test case 4:
  1. Create a distributed volume with 5 sub-volumes, start and mount it.
  2. Create a pipe file.
  3. Check the filetype of the file from the mount point.
  4. Verify that the file is stored only on the bricks mentioned in the
     trusted.glusterfs.pathinfo xattr.
  5. Verify stat output from the mount point and bricks.
  6. Write data to the fifo file and read it back from another instance
     of the same client.

  Upstream bug: https://github.com/gluster/glusterfs/issues/1461
  Change-Id: I0e72246ba3d6d20a5de95a95d51271337b6b5a57
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
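  A hedged sketch of steps 2-4 of test case 1 (not the testcase
  itself), using glusto's `g.run`; the client name, mount path and
  device numbers are example values:

  ```
  from glusto.core import Glusto as g

  client, mnt = "client1.example.com", "/mnt/testvol"

  # Step 2: create one block and one character device file on the mount
  g.run(client, "mknod %s/blockfile b 1 5" % mnt)
  g.run(client, "mknod %s/charfile c 1 5" % mnt)

  # Step 3: filetype of the file as seen from the mount point
  g.run(client, "stat -c '%%F' %s/blockfile" % mnt)

  # Step 4: which brick holds the file, per the pathinfo xattr
  g.run(client, "getfattr -n trusted.glusterfs.pathinfo %s/blockfile" % mnt)
  ```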
* [Test] Validate data, metadata, entry split-brain | Leela Venkaiah G | 2020-09-01 | 1 | -0/+264

  Steps:
  - Create and mount a replicated volume; disable quorum and the
    self-heal daemon
  - Create ~10 files from the mount point and simulate data and metadata
    split-brain for 2 files each
  - Create a dir with some files and simulate entry/gfid split-brain
  - Validate that the volume successfully recognizes split-brain
  - Validate that a lookup on split-brain files fails with an EIO error
    on the mount
  - Validate that `heal info` and `heal info split-brain` show only the
    files that are in split-brain (see the sketch after this entry)
  - Validate that new files and dirs can be created from the mount

  Change-Id: I8caeb284c53304a74473815ae5181213c710b085
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
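  A hedged sketch of the `heal info split-brain` and EIO checks above;
  node, volume and file names are example values:

  ```
  from glusto.core import Glusto as g

  mnode, client, vol = "server1.example.com", "client1.example.com", "testvol"

  # Only files in split-brain should be listed here
  ret, out, _ = g.run(mnode, "gluster volume heal %s info split-brain" % vol)

  # A lookup on a split-brain file should fail with EIO on the mount;
  # ret is non-zero and err carries "Input/output error"
  ret, _, err = g.run(client, "cat /mnt/testvol/file1")
  ```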
* [Testfix] Fix wrong comparison in test_create_file | kshithijiyer | 2020-08-31 | 1 | -1/+1

  Problem: brickdir.hashrange_contains_hash() returns True or False.
  However, test_create_file checks whether ret == 1 or not.

  Fix: Change the check from ret == 1 to ret (illustrated below).

  Change-Id: I53655794f10fc5d778790bdffbe65563907bef6d
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
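  A self-contained illustration of the corrected assertion style; the
  real check calls brickdir.hashrange_contains_hash(), mocked here as a
  plain boolean:

  ```
  import unittest

  class DemoHashrangeCheck(unittest.TestCase):
      def test_boolean_check(self):
          # Stand-in for: ret = brickdir.hashrange_contains_hash(filehash)
          ret = True
          # Fixed style: assert the boolean directly instead of "ret == 1"
          self.assertTrue(ret, "hash is not within the brick's hashrange")

  if __name__ == "__main__":
      unittest.main()
  ```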
* [TestFix] Fix py2/3 compatibility in `str.rsplit` | Leela Venkaiah G | 2020-08-31 | 1 | -1/+1

  - `str.rsplit` doesn't accept named args in py2
  - Removed the named args to make it compatible with both versions
    (see the example below)

  Change-Id: Iba287ef4c98ebcbafe55f2166c99aef0c20ed9aa
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
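  An illustration of the compatibility fix; the path is an example
  value, not taken from the test:

  ```
  path = "/bricks/brick0/testvol_brick0"

  # py3-only: the keyword form raises TypeError on py2
  # base, leaf = path.rsplit(sep="/", maxsplit=1)

  # works on both py2 and py3: positional arguments
  base, leaf = path.rsplit("/", 1)
  print(base, leaf)   # /bricks/brick0 testvol_brick0
  ```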
* [Test] Checks data rename and delete in arbiter volume | ubansal | 2020-08-24 | 1 | -0/+110

  Steps:
  1. Create a volume and mount it
  2. Create deep directories and a file in each directory
  3. Rename the file
  4. Check if the brickpath contains old files
  5. Delete all data
  6. Check that .glusterfs/indices/xattrop is empty
  7. Check if the brickpath is empty

  Change-Id: I04e50ef94379daa344be1ae1d19cf2d66f8f460b
  Signed-off-by: ubansal <ubansal@redhat.com>
* [Test] Creates a split brain and verifies IO failure | ubansal | 2020-08-24 | 1 | -0/+165

  Steps:
  1. Create a volume and mount it
  2. Disable heal and cluster-quorum-count
  3. Bring down one data brick and the arbiter brick from one subvol
  4. Write IO and validate it
  5. Bring up the bricks
  6. Bring down another data brick and the arbiter brick from the same
     subvol
  7. Write IO and validate it
  8. Bring up the bricks
  9. Check if split-brain is created
  10. Write IO -> should fail
  11. Enable heal and cluster-quorum-count
  12. Write IO -> should fail

  Change-Id: I229b58c1bcd70dcd87d35dc410e12f51b032b9c4
  Signed-off-by: ubansal <ubansal@redhat.com>
* [Test] Directory creation with subvol down | Bala Konda Reddy M | 2020-08-21 | 1 | -0/+194

  Test Steps:
  1. Create a distributed-replicated (3x3) or distributed-arbiter
     (3x(2+1)) volume and mount it on one client
  2. Kill the 3 bricks corresponding to the 1st subvol
  3. Unmount and remount the volume on the same client
  4. Create a deep dir from the mount point: 'dir1/subdir1/deepdir1'
  5. Create files under dir1/subdir1/deepdir1; touch <filename>
  6. Now bring all sub-vols up by volume start force
  7. Validate the backend bricks for dir creation: the subvol which was
     offline will have no dirs created, whereas the other subvols will
     have the dirs created in step 4
  8. Trigger heal from the client by '#find . | xargs stat'
  9. Verify that the directory entries are created on all back-end
     bricks
  10. Create a new dir (dir2) under dir1/subdir1/deepdir1
  11. Trigger rebalance and wait for its completion
  12. Check the backend bricks for all entries of dirs

  Change-Id: I4d8f39e69c84c28ec238ea73935cd7ca0288bffc
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Testfix] Remove redundant logging - Part 2 | Bala Konda Reddy M | 2020-08-21 | 5 | -155/+9

  Problem: In most of the testcases, redundant logging inflates the
  completion time of the whole suite.

  Solution: The BVT test suite has 184 g.log.info messages, more than
  half of them redundant. Removed logs wherever they are not required.
  Added the missing get_super_method calls for setUp and tearDown in
  one testcase, and modified an increment in the test.

  Change-Id: I19e4462f2565906710c2be117bc7c16c121ddd32
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Test] Test conservative merge between two bricks | Bala Konda Reddy M | 2020-08-21 | 1 | -0/+175

  Test Steps:
  1) Create a 1x3 volume and fuse mount the volume
  2) On the mount, create a dir dir1
  3) Pkill glusterfsd on node n1 (b2 on node2 and b3 on node3 stay up)
  4) touch f{1..10} on the mountpoint
  5) b2 and b3 xattrs would be blaming b1, as the files were created
     while b1 was down
  6) Reset the b3 xattrs to NOT blame b1 by using setfattr
  7) Now pkill glusterfsd of b2 on node2
  8) Restart glusterd on node1 to bring up b1
  9) Now brick b1 is online, b2 down, b3 online
  10) touch x{1..10} under dir1 itself
  11) Again reset the xattr on node3 for b3 so that it doesn't blame
      b2, as done for b1 in step 6 (see the sketch after this entry)
  12) Restart glusterd on node2 hosting b2 to bring all bricks online
  13) Check heal info, split-brain and arequal for the bricks

  Change-Id: Ieea875dd7243c7f8d2c6959aebde220508134d7a
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
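  A hedged sketch of the xattr reset in steps 6 and 11: zeroing the AFR
  changelog xattr on the backend brick so it stops blaming its peer.
  The volume name, client index and brick path are assumptions made for
  illustration:

  ```
  from glusto.core import Glusto as g

  node3 = "node3.example.com"                    # hosts brick b3
  brick_path = "/bricks/brick0/testvol_brick2"   # b3's backend path

  # trusted.afr.<volname>-client-0 records pending ops blaming b1; an
  # all-zero value (three 4-byte data/metadata/entry counters) resets it
  g.run(node3, "setfattr -n trusted.afr.testvol-client-0 "
               "-v 0x000000000000000000000000 %s/dir1" % brick_path)
  ```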
* [Testfix] Add logic to log more info | kshithijiyer | 2020-08-20 | 1 | -6/+26

  Add logic to do ls -l before and after.
  Add logic to set all log-levels to debug.

  Change-Id: I512e3b229fe9e2126f6c596fdc031c00a25fbe0b
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Tests read and write on afr volume | ubansal | 2020-08-19 | 1 | -0/+192

  Steps:
  1. Create a volume and mount it
  2. Start writing and reading data on a file
  3. Bring down 1 brick
  4. Validate reads and writes to the file
  5. Bring up the brick and start healing
  6. Monitor healing and completion
  7. Bring down the 2nd brick
  8. Read and write to the same file
  9. Bring up the brick and start healing
  10. Monitor healing and completion
  11. Check split-brain

  Change-Id: Ib03a1ad7ee626337904b084e85eee38750fea141
  Signed-off-by: ubansal <ubansal@redhat.com>
* [Test] Validate AFR, arbiter self-heal with IO | Leela Venkaiah G | 2020-08-19 | 1 | -0/+306

  - Validate `heal info` returns before timeout with IO
  - Validate `heal info` returns before timeout with IO and a brick down
  - Validate data heal on file append in AFR, arbiter
  - Validate entry heal on file append in AFR, arbiter

  Change-Id: I803b931cd82d97b5c20bd23cd5670cb9e6f04176
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Testfix] Remove redundant logging - Part 1 | Bala Konda Reddy M | 2020-08-18 | 7 | -79/+32

  Problem: In most of the testcases, redundant logging inflates the
  completion time of the whole suite.

  Solution: Currently there are 100+ g.log.info statements in the
  authentication suite and half of them are redundant. Removed the
  g.log.info statements wherever they are not required. After the
  changes, around 50 g.log.info statements remain; the removals are not
  about reducing the number of lines but about improving the whole
  suite. Modified a few line indents as well and added teardown where
  it was missing.

  Note: Will be submitting the changes for each component separately.

  Change-Id: I63973e115dd5dbbc7fc9462978397e7915181265
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Libfix] Fix python3 getfattr() issues | kshithijiyer | 2020-08-17 | 2 | -16/+22

  Problem: Patch [1], which was sent for issue #24, causes a large
  number of testcases to fail or get stuck in the latest DHT run.

  Solution: Make changes so that the getfattr command sends back the
  output in text wherever needed (see the sketch below).

  Links:
  [1] https://review.gluster.org/#/c/glusto-tests/+/24841/

  Change-Id: I6390e38130b0699ceae652dee8c3b2db2ef3f379
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
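  A hedged sketch of the idea behind the fix: requesting text-encoded
  output from getfattr so py3 string handling does not receive raw
  bytes. The host, path and xattr name are example values:

  ```
  from glusto.core import Glusto as g

  host = "server1.example.com"
  fqpath = "/bricks/brick0/testvol_brick0/file1"

  # "-e text" makes getfattr print the value as text instead of hex
  ret, out, _ = g.run(host, "getfattr -n user.foo -e text %s" % fqpath)
  ```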
* [Testfix] Change g.log.error() to self.assertTrue() | kshithijiyer | 2020-08-17 | 1 | -2/+2

  Problem: Testcase test_volume_start_stop_while_rebalance_is_in_progress
  throws the below traceback when run:

  ```
  Traceback (most recent call last):
    File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
      msg = self.format(record)
    File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
      return fmt.format(record)
    File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
      record.message = record.getMessage()
    File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
      msg = msg % self.args
  TypeError: not all arguments converted during string formatting
  Logged from file test_volume_start_stop_while_rebalance_in_progress.py, line 135
  ```

  This is because g.log.error() was used instead of self.assertTrue()
  (the pattern is illustrated below).

  Solution: Changing to self.assertTrue().

  Change-Id: If926eb834c0128a4e507da9fdd805916196432cb
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
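  A self-contained reproduction of the TypeError pattern the commit
  describes: a %-style log call handed arguments that were meant for an
  assertion. The message text is illustrative:

  ```
  import logging

  logging.basicConfig()
  log = logging.getLogger("demo")

  ret = False
  # Buggy pattern: an extra argument with no matching %s placeholder;
  # logging prints the "not all arguments converted" traceback on emit
  log.error("rebalance failed", ret)

  # Fixed pattern in the testcase (an assertion instead of a log):
  # self.assertTrue(ret, "rebalance failed")
  ```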
* [Test] Reset brick and trigger heal full | Bala Konda Reddy M | 2020-08-13 | 1 | -0/+157

  Test steps:
  1. Create a volume and create files/dirs from the mount point
  2. With IO in progress, execute reset-brick start
  3. Now format the disk from the back-end, using rm -rf <brick path>
  4. Execute reset-brick commit and check that the brick is online
  5. Issue volume heal using "gluster vol heal <volname> full"
  6. Check arequal for all bricks to verify that all backend bricks,
     including the reset brick, have the same data
  (steps 2-5 are sketched below)

  Change-Id: I06b93d79200decb25f863e7a3f72fc8e8b1c4ab4
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
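  A hedged sketch of steps 2-5 using the gluster CLI through `g.run`;
  the node, volume and brick names are example values:

  ```
  from glusto.core import Glusto as g

  node, vol = "server1.example.com", "testvol"
  brick = "server1.example.com:/bricks/brick0/testvol_brick0"

  g.run(node, "gluster volume reset-brick %s %s start" % (vol, brick))
  g.run(node, "rm -rf /bricks/brick0/testvol_brick0")  # format the back-end
  g.run(node, "gluster volume reset-brick %s %s %s commit force"
              % (vol, brick, brick))
  g.run(node, "gluster volume heal %s full" % vol)
  ```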
* [Test] Replace brick after add brick on disperse | Bala Konda Reddy M | 2020-08-13 | 1 | -0/+168

  Test Steps:
  1. Create a pure-ec volume (say 1x(4+2))
  2. Mount the volume on two clients
  3. Create some files and dirs from both mounts
  4. Add bricks, in this case (4+2), i.e. 6 bricks
  5. Create a new dir (common_dir) and in that directory create a
     distinct directory for each client (using the hostname as dirname)
     and pump IOs from the clients (dd)
  6. While IOs are in progress, replace any of the bricks
  7. Check for errors, if any, collected after step 6

  Change-Id: I3125fc5906b5d5e0bc40477e1ed88825f53fa758
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [TestFix] Increased timeout for rebalance | ubansal | 2020-08-12 | 1 | -1/+1

  TCs were failing due to a timeout issue; increased the rebalance
  timeout from 900 to 1800 seconds.

  Change-Id: I726217a21ebbde6391660dd3c9dc096cc9ca6bb4
  Signed-off-by: ubansal <ubansal@redhat.com>
* [Test] Rootsquash functionality with multiple clients | Manisha Saini | 2020-08-12 | 1 | -0/+174

  Change-Id: I813f3e78ad8b0b79940635df6721e34e6bc93f34
  Signed-off-by: Manisha Saini <msaini@redhat.com>
* [Test] Tests gfid self heal on arbiter volume | ubansal | 2020-08-10 | 1 | -0/+206

  Steps:
  1. Create a volume and mount it
  2. Create a directory, say d1
  3. Create deep directories and files in d1
  4. Bring down redundant bricks
  5. Delete d1
  6. Create d1 and the same data again
  7. Bring the bricks up
  8. Monitor heal
  9. Verify split-brain

  Change-Id: I778fab6bf6d9f81fca79fe18285073e1f7ccc7e7
  Signed-off-by: ubansal <ubansal@redhat.com>
* [Testfix] Increase sleep time in test_alert_time_out | kshithijiyer | 2020-08-08 | 1 | -2/+2

  Problem: The test script test_alert_time_out currently fails 2 out of
  6 times when executed on the same setup, because the log files do not
  yet have the 120004 A alert message. This issue is only observed on
  the distributed volume type mounted over the fuse protocol.

  Solution: There is no permanent solution to this problem: even if we
  increase the sleep to 20 seconds there is still a chance that it
  might fail. The optimal sleep time, with which the test failed only
  5 times in 15 attempts, is 6 seconds. Hence changing the sleep time
  to 6 seconds.

  Change-Id: I9e9bd41321e24f502d90c3c34edce9113133755e
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Testfix] Fix missing param in test_rmdir_subvol_down.py | Pranav | 2020-08-07 | 1 | -1/+2

  The assertIsNotNone call is missing a param.

  Change-Id: Iddff9b203672b2edf702ada624bfac1892641712
  Signed-off-by: Pranav <prprakas@redhat.com>
* [Testfix] Add logic to log more info | kshithijiyer | 2020-08-06 | 1 | -1/+19

  Adding code to get the dir tree and dump all xattrs in hex for
  Bug 1810901 before remove-brick; also adding logic to set the
  log-level to debug.

  Change-Id: I9c9c970c4de7d313832f6f189cdca8428a073b1e
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [TestFix] Fix test test_root_squash_with_volume_restart | Pranav | 2020-08-06 | 1 | -25/+6

  Integrate the changes made in the library into the test.

  Change-Id: I9bf8c3f1f732132170a96405a4a12839463a2eaa
  Signed-off-by: Pranav <prprakas@redhat.com>
* [Test] Validate data integrity | sayaleeraut | 2020-08-05 | 1 | -0/+169

  Description: Checks that there is no data loss when a remove-brick
  operation is stopped and new bricks are then added to the volume.

  Steps:
  1) Create a volume.
  2) Mount the volume using FUSE.
  3) Create files and dirs on the mount-point.
  4) Calculate the arequal-checksum on the mount-point.
  5) Start a remove-brick operation on the volume.
  6) While migration is in progress, stop the remove-brick operation.
  7) Add bricks to the volume and trigger rebalance.
  8) Wait for rebalance to complete.
  9) Calculate the arequal-checksum on the mount-point.

  Change-Id: I96a7311f5acd0ae19b17d7b7c7da4d3899cdef77
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Test self heal of data and hardlink on arbiter | ubansal | 2020-07-31 | 1 | -0/+256

  Steps:
  - Create a volume and mount it
  - Disable metadata, data and entry heal
  - Create files and take arequal of the mount point
  - Bring down redundant bricks
  - Append data and create hardlinks
  - Bring up the bricks
  - Check healing and split-brain
  - Bring down redundant bricks
  - Truncate data
  - Check that file and hardlink stat match
  - Bring up the bricks

  Change-Id: I9b26f2fb26d72b71abd63a25ef8d9173f32997d4
  Signed-off-by: ubansal <ubansal@redhat.com>
* [Testfix] Fix I/O logic to use distinct file names | kshithijiyer | 2020-07-31 | 2 | -2/+5

  Problem: Testcases test_mount_snap_delete and test_restore_online_vol
  were failing in the latest runs with the below traceback:

  ```
  Traceback (most recent call last):
    File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 1246, in <module>
      rc = args.func(args)
    File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 374, in create_files
      base_file_name, file_types)
    File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 341, in _create_files
      ret = pool.map(lambda file_tuple: _create_file(*file_tuple), files)
    File "/usr/lib64/python3.6/multiprocessing/pool.py", line 266, in map
      return self._map_async(func, iterable, mapstar, chunksize).get()
    File "/usr/lib64/python3.6/multiprocessing/pool.py", line 644, in get
      raise self._value
    File "/usr/lib64/python3.6/multiprocessing/pool.py", line 119, in worker
      result = (True, func(*args, **kwds))
    File "/usr/lib64/python3.6/multiprocessing/pool.py", line 44, in mapstar
      return list(map(*args))
    File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 341, in <lambda>
      ret = pool.map(lambda file_tuple: _create_file(*file_tuple), files)
    File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 270, in _create_file
      with open(file_abs_path, "w+") as new_file:
  FileExistsError: [Errno 17] File exists: '/mnt/testvol_distributed_glusterfs/file1.txt'
  ```

  This was because the I/O logic was trying to create 2 files with the
  same name from 2 clients.

  Fix: Modify the logic to use counters to create files with different
  names (sketched below).

  Change-Id: I2896736d28f6bd17435f941088fd634347e3f4fd
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
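  A minimal sketch of the fix's idea: derive a distinct base file name
  per client from a counter so two clients never collide on file1.txt.
  The mount objects are mocked here; the script path follows the
  traceback above and the flags are assumptions:

  ```
  from collections import namedtuple
  from glusto.core import Glusto as g

  Mount = namedtuple("Mount", "client_system mountpoint")
  mounts = [Mount("client1.example.com", "/mnt/testvol"),
            Mount("client2.example.com", "/mnt/testvol")]

  for counter, mount in enumerate(mounts):
      # each client gets its own base name: file0_*, file1_*, ...
      cmd = ("/usr/bin/env python /usr/share/glustolibs/io/scripts/"
             "file_dir_ops.py create_files -f 10 --base-file-name file%d %s"
             % (counter, mount.mountpoint))
      g.run(mount.client_system, cmd)
  ```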
* [Test] Test self-heal of 50k files | ubansal | 2020-07-31 | 1 | -0/+140

  Steps:
  - Create a volume and mount it
  - Bring bricks offline
  - Write 50k files
  - Bring bricks online
  - Monitor heal completion
  - Check for split-brain

  Change-Id: I40739effdfa1c1068fa0628467154b9a667161a3
  Signed-off-by: ubansal <ubansal@redhat.com>
* [Testfix] Add logic to collect additional info | kshithijiyer | 2020-07-31 | 1 | -1/+16

  Adding code to get the dir tree and dump all xattrs for Bug 1810901.

  Change-Id: Ia59dcd2623e845066e31037c96a64249efa074c2
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Convert arb to x3 repl volume with IO | Leela Venkaiah G | 2020-07-30 | 1 | -0/+221

  Steps:
  - Create, start and mount an arbiter volume on two clients
  - Create two dirs, fill IO in the first dir and take note of arequal
  - Start continuous IO from the second directory
  - Convert arbiter to x2 replicated volume (remove brick)
  - Convert x2 replicated to x3 replicated volume (add brick)
  - Wait ~5 min for the vol file to be updated on all clients
  - Enable client-side heal options and issue volume heal
  - Validate that heal completes with no errors and that arequal of the
    first dir matches the initial checksum

  Change-Id: I291acf892b72bc8a05e76d0cffde44d517d05f06
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Test] Test absence of `healed` and `heal-failed` options | Leela Venkaiah G | 2020-07-30 | 1 | -0/+104

  Steps:
  - Create and mount a replicated volume
  - Kill one of the bricks and write IO from the mount point
  - Verify that `gluster volume heal <volname> info healed` and
    `gluster volume heal <volname> info heal-failed` result in an error
  - Validate that `gluster volume help` doesn't list the `healed` and
    `heal-failed` commands

  Change-Id: Ie1c3db12cdfbd54914e61f812cbdac382c9c723e
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Libfix] Move NFS Ganesha support to GlusterBaseClass | Pranav | 2020-07-30 | 12 | -239/+42

  Problem: NFS-Ganesha tests inherit 'NfsGaneshaClusterSetupClass'
  whereas the other tests inherit 'GlusterBaseClass'. This causes a
  cyclic dependency when trying to run other modules with NFS-Ganesha.

  Fix:
  1. Move the NFS-Ganesha dependencies to GlusterBaseClass
  2. Modify the NFS-Ganesha tests to inherit from GlusterBaseClass
  3. Remove the setup_nfs_ganesha method call from existing Ganesha
     tests, as it is invoked by default from GlusterBaseClass.setUpClass

  Change-Id: I1e382fdb2b29585c097dfd0fea0b45edafb6442b
  Signed-off-by: Pranav <prprakas@redhat.com>
* [Test] Rootsquash functionality with volume restart | Manisha Saini | 2020-07-29 | 1 | -0/+196

  Change-Id: I16c5f070d807673662e5ac3583aace06873a9c14
  Signed-off-by: Manisha Saini <msaini@redhat.com>
* [Test] Test sosreport interoperability with GlusterFS | nchilaka | 2020-07-29 | 1 | -0/+143

  Description: sos must be able to capture the required logs, including
  gluster logs, in the sosreport without compromising the integrity of
  Gluster, e.g. by deleting socket files.

  Change-Id: Ifec57778ff5d1fc0ceaa3ecf94a9851244076d2b
  Signed-off-by: nchilaka <nchilaka@redhat.com>
* [Testfix] Fix TypeError for python3 | Bala Konda Reddy M | 2020-07-28 | 1 | -5/+6

  Problem: The test fails with the below traceback when run with
  python3 as default:

  ```
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
  TypeError: a bytes-like object is required, not 'str'
  ```

  Solution: Added ''.encode(), which fixes the issue when run under
  both python2 and python3 (illustrated below). Also added a check for
  a core file on the client node.

  Change-Id: I8f800f5fad97c3b7591db79ea51203e5293a1f69
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
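  A self-contained illustration of the py2/py3 fix: APIs that expect
  bytes (os.write here) reject a py3 str, while .encode() yields bytes
  on both versions. The file path is an example:

  ```
  import os

  fd = os.open("/tmp/demo_bytes", os.O_CREAT | os.O_WRONLY, 0o644)
  os.write(fd, "some data".encode())   # bytes on py2 and py3 alike
  os.close(fd)
  ```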
* [TestFix] AFR Data heal by default and explicit self-heal command | nchilaka | 2020-07-27 | 1 | -126/+29

  - Remove unnecessary disablement of client-side heal options
  - Check whether client-side heal options are disabled by default
  - Test data heal via the default method
  - Trigger explicit data heal by calling the self-heal command

  Change-Id: I3be9001fc1cf124a4cf5a290cee985e166c0b685
  Signed-off-by: nchilaka <nchilaka@redhat.com>
* [Test] Validate readdirp with rebalance | sayaleeraut | 2020-07-27 | 1 | -0/+173

  Description: Check that all directories are read and listed while
  rebalance is still in progress.

  Steps:
  1) Create a volume.
  2) Mount the volume using FUSE.
  3) Create a dir "master" on the mount-point.
  4) Create 8000 empty dirs (dir1 to dir8000) inside dir "master".
  5) Now inside a few dirs (e.g. dir1 to dir10), create deep dirs and
     inside every dir, create 50 files.
  6) Collect the number of dirs present in /mnt/<volname>/master
  7) Change the rebalance throttle to lazy.
  8) Add bricks to the volume (at least 3 replica sets).
  9) Start rebalance using the "force" option on the volume.
  10) List the directories in dir "master".

  Change-Id: I4d04b3e2be93b5c25b5ed70516bb99d99fb1fb8a
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [TestFix] Replace `translate` with `replace` | Leela Venkaiah G | 2020-07-27 | 1 | -8/+1

  - The `replace` function is used to forgo the version check (see
    below)
  - `unicode` is not recognized from builtins in py2
  - `replace` seems a better alternative than fixing unicode

  Change-Id: Ieb9b5ad283e1a31d65bd8a9715b80f9deb0c05fe
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
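  An illustration of why `replace` sidesteps the py2/py3 split that
  `translate` has; the sample string is an assumption:

  ```
  brick = "server1.example.com:/bricks/brick0/testvol_brick0"

  # py2 str.translate(table, deletechars) and py3 str.translate(dict)
  # have different signatures; `replace` behaves identically on both:
  cleaned = brick.replace(":", "").replace("/", "-")
  print(cleaned)
  ```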
* [TestFix] Breakdown sparsefile creation into chunks (tag: pre_glusterfs_6_0) | Leela Venkaiah G | 2020-07-22 | 1 | -17/+43

  - Remove the decimal before passing to the `head` command
  - Break up the sparsefile into chunks of ~half of the brick size
  - The whole test has to be skipped due to BZ #1339144

  Change-Id: I7a9ae25798b442c74248954023dd821c3442f8f9
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Testfix] Skip test unless 3 clients are provided | Bala Konda Reddy M | 2020-07-22 | 1 | -24/+9

  Problem: Creating a third mount object works for the glusterfs
  protocol, but future runs with nfs/cifs might hit complications and
  the test might fail.

  Solution: Skip the test unless three clients are provided. Also
  removed redundant logging and made minor fixes.

  Change-Id: Ie657975a46b6989cb9f057f5cc337333bbf1010d
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [TestFix] Make test compatible with Python 2 | Leela Venkaiah G | 2020-07-20 | 1 | -8/+16

  - The translate function is available only on `unicode` strings in
    Python 2

  Change-Id: I6aa01606acc73b18d889a965f1c01f9a393c2c46
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Test] Truncate a file with brick down | Bala Konda Reddy M | 2020-07-20 | 1 | -0/+124

  Test steps:
  1. Create a volume, start and mount it on a client
  2. Bring down redundant bricks in the subvol
  3. Create a file on the volume using "touch"
  4. Truncate the file using "O_TRUNC" (sketched below)
  5. Bring the brick online
  6. Write data to the file and wait for heal completion
  7. Check for crashes and coredumps

  Change-Id: Ie02a56ab5180f6a88e4499c8cf6e5fe5019e8df1
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
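  A hedged sketch of steps 3-4: create the file with "touch" semantics,
  then reopen it with O_TRUNC. The mount path is an example value:

  ```
  import os

  path = "/mnt/testvol/trunc_file"               # example mount path
  os.close(os.open(path, os.O_CREAT, 0o644))     # step 3: "touch"
  fd = os.open(path, os.O_WRONLY | os.O_TRUNC)   # step 4: truncate
  os.close(fd)
  ```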
* [Test] mv and ls operations with bricks offline | Bala Konda Reddy M | 2020-07-17 | 1 | -2/+121

  Test Steps:
  1. Create a volume and mount it on 3 clients.
  2. Bring two bricks offline in each subvol.
  3. On client1: under dir1, create files f{1..10000}; run in background
  4. On client2: under /, touch x{1..1000}
  5. On client3: start creating x{1001..10000}
  6. Bring online the bricks which were offline (all the bricks which
     were down, 2 in each of the two subvols)
  7. While IO on client1 and client3 is happening, on client2 move all
     the x* files into dir1
  8. Perform lookup from client3

  Change-Id: Ib72648af783535557e20cea7e64ea68036b23121
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Test] Check data integrity for EC volume | ubansal | 2020-07-17 | 1 | -0/+314

  Steps:
  1. Create an EC volume and mount it
  2. Run different types of IOs
  3. Take arequal of the mountpoint
  4. Bring down redundant bricks
  5. Take arequal of the mountpoint
  6. Bring down another set of redundant bricks
  7. Take arequal of the mountpoint

  Change-Id: If253cdfe462c6671488e858871ec904fbb2f9ead
  Signed-off-by: ubansal <ubansal@redhat.com>
* [Test] Add tc for NFS-Ganesha ExportID | Manisha Saini | 2020-07-17 | 1 | -0/+53

  Change-Id: I8ae78b06706bc4818cbd2b00b386f362883cb9d7
  Signed-off-by: Manisha Saini <msaini@redhat.com>
* [Test] Rootsquash functionality test with glusterd restart | Manisha Saini | 2020-07-15 | 1 | -0/+190

  Verification of rootsquash functionality with glusterd restart:
  * Create some files and dirs inside the mount point
  * Set permission 777 on the mount point
  * Enable root-squash on the volume
  * Create some more files and dirs
  * Restart glusterd on all the nodes
  * Try to edit a file created in step 1:
    the nfsnobody user should not be allowed to edit it
  * Try to edit a file created in step 5:
    the nfsnobody user should be allowed to edit it

  Change-Id: Id2208127ce3c3ea2181d64af0e5e114c49f196ba
  Signed-off-by: Manisha Saini <msaini@redhat.com>
* [Test] Validate open file migration | sayaleeraut | 2020-07-14 | 1 | -0/+131

  Description: Checks that files with open fds are migrated
  successfully.

  Steps:
  1) Create a volume.
  2) Mount the volume using FUSE.
  3) Create files on the volume mount.
  4) Open fds for the files and keep doing read/write operations on
     these files.
  5) While the fds are open, add bricks to the volume and trigger
     rebalance.
  6) Wait for rebalance to complete.
  7) Wait for writes on the open fds to complete.
  8) Check for any data loss during rebalance.
  9) Check if rebalance has any failures.

  Change-Id: I9345827ae36eb6d2c264d0e0874738211aadc55e
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Validate EIO changes to EDQUOT errors | Leela Venkaiah G | 2020-07-14 | 1 | -0/+389

  - Tests to check that EIO changes to EDQUOT errors on reaching quota
  - Scenarios covered:
    - Redundant bricks are down in a volume
    - Multiple IO sessions happening from clients
    - A single IO session from a client

  Change-Id: Ie15244231dae7fe2e61cc6df0d7f35d2231d9bdf
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [TestFix] Fix failures in test_dht_create_dir.py | sayaleeraut | 2020-07-09 | 1 | -80/+44

  Added the following changes:
  1) The test script consists of 2 test cases, hence changed
     setUpClass(cls) to setUp(self).
  2) Changed the code that checks whether the symlink points to the
     correct location in test_create_link_for_directory(self). It was
     earlier failing with "AssertionError: sym link does not point to
     correct location" because the output of the 'stat' command for a
     symlink file varies per platform.

  Change-Id: I43f98a0d60b3ebf30236ff7e702667373a39a0e1
  Signed-off-by: sayaleeraut <saraut@redhat.com>