path: root/tests
Commit message (Author, Age, Files, Lines)
* [Test] Self-heal, add-brick on replicated volume types (Bala Konda Reddy M, 2020-10-23, 1 file, -0/+199)
  1. Create a replicated/distributed-replicate volume and mount it
  2. Start IO from the clients
  3. Bring down a brick from the subvol and validate it is offline
  4. Bring back the brick online and wait for heal to complete
  5. Once the heal is completed, expand the volume
  6. Trigger rebalance and wait for rebalance to complete
  7. Validate IO, no errors during the steps performed from step 2
  8. Check arequal of the subvol; all the bricks in the same subvol should have the same checksum

  Note: This test is only for replicated volume types.

  Change-Id: I2286e75cbee4f22a0ed14d6c320a4496dc3c3905
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>

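  The heal-wait, add-brick and rebalance steps map onto plain gluster CLI calls; a rough
  standalone sketch is below (volume, server and brick names are placeholders, and this is
  not the glustolibs-based test itself):

      import subprocess
      import time

      VOL = "testvol"  # hypothetical volume name

      def gluster(*args):
          """Run a gluster CLI command and return its stdout."""
          res = subprocess.run(["gluster", *args], capture_output=True, text=True, check=True)
          return res.stdout

      def heal_pending(vol):
          """True while any brick still reports pending heal entries."""
          info = gluster("volume", "heal", vol, "info")
          return any(line.startswith("Number of entries:")
                     and line.split(":")[1].strip() not in ("0", "-")
                     for line in info.splitlines())

      # Step 4: wait for self-heal to finish after the brick is back online.
      while heal_pending(VOL):
          time.sleep(10)

      # Step 5: expand the volume (a replica-3 subvol is added as a whole).
      gluster("volume", "add-brick", VOL,
              "server4:/bricks/b1", "server5:/bricks/b1", "server6:/bricks/b1")

      # Step 6: trigger rebalance and poll until it reports completion.
      gluster("volume", "rebalance", VOL, "start")
      while "completed" not in gluster("volume", "rebalance", VOL, "status"):
          time.sleep(10)
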
* [Test]: Add tc to check profile simultaneously on 2 different nodes (nik-redhat, 2020-10-22, 1 file, -0/+185)
  Test Steps:
  1) Create a volume and start it.
  2) Mount the volume on a client and start IO.
  3) Start profile on the volume.
  4) Create another volume.
  5) Start profile on the volume.
  6) Run volume status in a loop for 100 times on one node.
  7) Run profile info for the new volume on one of the other nodes.
  8) Run profile info for the new volume in a loop for 100 times on the other node.

  Change-Id: I1c32a938bf434a88aca033c54618dca88623b9d1
  Signed-off-by: nik-redhat <nladha@redhat.com>

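  The profile and status commands exercised by the steps above are plain CLI calls; a
  minimal sketch of the loops (run locally here for illustration, whereas the test drives
  two different nodes; the volume name is a placeholder):

      import subprocess

      def gluster(*args):
          subprocess.run(["gluster", *args], check=True)

      gluster("volume", "profile", "testvol", "start")   # steps 3/5: enable profiling
      for _ in range(100):                               # step 6: volume status in a loop
          gluster("volume", "status", "testvol")
      for _ in range(100):                               # steps 7/8: profile info in a loop
          gluster("volume", "profile", "testvol", "info")
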
* [Test] Add TC to check glusterd config file (“Milind”, 2020-10-22, 1 file, -0/+29)
  1. Check the location of the glusterd socket file (glusterd.socket):
     ls /var/run/ | grep -i glusterd.socket
  2. systemctl is-enabled glusterd -> enabled

  Change-Id: I6557c27ffb7e91482043741eeac0294e171a0925
  Signed-off-by: “Milind” <mwaykole@redhat.com>

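  A minimal sketch of those two checks in Python (the socket path is the one named in the
  commit message; this is not the glustolibs test itself):

      import os
      import subprocess

      # 1. glusterd's socket file should be present under /var/run/.
      assert os.path.exists("/var/run/glusterd.socket"), "glusterd.socket not found"

      # 2. The glusterd service should be enabled.
      state = subprocess.run(["systemctl", "is-enabled", "glusterd"],
                             capture_output=True, text=True).stdout.strip()
      assert state == "enabled", "glusterd service is not enabled"
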
* [Test] Add 2 memory leak tests and fix library issues (kshithijiyer, 2020-10-21, 3 files, -0/+237)
  Scenarios added:
  ----------------
  Test case:
  1. Create a volume, start it and mount it.
  2. Start I/O from the mount point.
  3. Check if there are any memory leaks and OOM killers.

  Test case:
  1. Create a volume, start it and mount it.
  2. Set features.cache-invalidation to ON.
  3. Start I/O from the mount point.
  4. Run the gluster volume heal command in a loop.
  5. Check if there are any memory leaks and OOM killers on servers.

  Design change:
  --------------
  - self.id() is moved into the test class as it was hitting bound errors in the original logic.
  - Changed the logic for checking fuse leaks.
  - Fixed breakage in methods wherever needed.

  Change-Id: Icb600d833d0c08636b6002abb489342ea1f946d7
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>

* [Test] Default volume behavior and quorum options (srijan-sivakumar, 2020-10-20, 1 file, -0/+129)
  Steps:
  1. Create and start a volume.
  2. Check that the quorum options aren't coming up in the vol info.
  3. Kill two glusterd processes.
  4. There shouldn't be any effect on the glusterfsd processes.

  Change-Id: I40e6ab5081e723ae41417f1e5a6ece13c65046b3
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>

* [Test] Add test that gluster does not release posix locks with multiple clients (“Milind”, 2020-10-19, 1 file, -0/+91)
  Steps:
  1. Create all types of volumes.
  2. Mount the volume on two clients.
  3. Run the same flock script on both clients; while the script runs it should not hang.
  4. Wait till 300 iterations complete on both nodes.

  Change-Id: I53e5c8b3b924ac502e876fb41dee34e9b5a74ff7
  Signed-off-by: “Milind” <mwaykole@redhat.com>

* [Test] Test that eager lock reduces the number of locks during write (Sheetal, 2020-10-19, 1 file, -0/+161)
  Steps:
  1. Create a disperse volume and start it.
  2. Set the eager lock option.
  3. Mount the volume and create a file.
  4. Check the profile info of the volume for the inodelk count.
  5. Check the xattrs of the file for the dirty bit.
  6. Reset the eager lock option and check the attributes again.

  Change-Id: I0ef1a0e89c1bc202e5df4022c6d98ad0de0c1a68
  Signed-off-by: Sheetal <spamecha@redhat.com>

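  Steps 2, 4 and 5 boil down to a volume option, a profile query and a getfattr on the
  backend brick; a rough sketch is below (volume, mount and brick paths are placeholders,
  and trusted.ec.dirty as the dirty-bit xattr is an assumption about the EC translator):

      import subprocess

      def sh(cmd):
          return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

      sh("gluster volume set testvol disperse.eager-lock on")                # step 2
      sh("gluster volume profile testvol start")
      sh("dd if=/dev/zero of=/mnt/testvol/file1 bs=1M count=100")            # step 3
      print(sh("gluster volume profile testvol info | grep -i inodelk"))     # step 4
      print(sh("getfattr -d -m . -e hex /bricks/brick0/file1"))              # step 5: look for trusted.ec.dirty
      sh("gluster volume reset testvol disperse.eager-lock")                 # step 6
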
* [TestFix] Changing the assert statement (“Milind”, 2020-10-19, 1 file, -1/+5)
  Changed from
    `self.validate_vol_option('storage.reserve', '1 (DEFAULT)')`
  to
    `self.validate_vol_option('storage.reserve', '1')`

  Change-Id: If75820b4ab3c3b04454e232ea1eccc4ee5f7be0b
  Signed-off-by: “Milind” <mwaykole@redhat.com>

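  For context, the expected value corresponds to what `gluster volume get` reports for the
  option; a tiny illustrative check (the volume name is a placeholder, and whether the
  "(DEFAULT)" suffix appears depends on the gluster version):

      import subprocess

      out = subprocess.run(["gluster", "volume", "get", "testvol", "storage.reserve"],
                           capture_output=True, text=True, check=True).stdout
      # The output is an "Option / Value" table; the fix asserts on the bare value "1"
      # rather than "1 (DEFAULT)".
      assert "storage.reserve" in out and " 1" in out
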
* [Test] Test mountpoint ownership post volume restart (srijan-sivakumar, 2020-10-19, 1 file, -0/+109)
  Steps:
  1. Create a volume and mount it.
  2. Set ownership permissions on the mountpoint and validate it.
  3. Restart the volume.
  4. Validate the permissions set on the mountpoint.

  Change-Id: I1bd3f0b5181bc93a7afd8e77ab5244224f2f4fed
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>

* [Test] Add test to check glusterd crash when firewall ports are not opened (Pranav, 2020-10-12, 1 file, -0/+140)
  Add a test to verify whether a glusterd crash occurs while performing a peer probe
  with firewall services removed.

  Change-Id: If68c3da2ec90135a480a3cb1ffc85a6b46b1f3ef
  Signed-off-by: Pranav <prprakas@redhat.com>

* [Test]: Add tc to check volume status with brick removal (nik-redhat, 2020-10-12, 1 file, -12/+69)
  Steps:
  1. Create a volume and start it.
  2. Fetch the brick list.
  3. Bring any one brick down and unmount the brick.
  4. Force start the volume and check that not all of the bricks are online.
  5. Remount the removed brick and bring the brick back online.
  6. Force start the volume and check if all the bricks are online.

  Change-Id: I464d3fe451cb7c99e5f21835f3f44f0ea112d7d2
  Signed-off-by: nik-redhat <nladha@redhat.com>

* [Test] Add test to fill brick and perform rename (kshithijiyer, 2020-10-12, 1 file, -0/+85)
  Test case:
  1. Create a volume, start it and mount it.
  2. Calculate the usable size and fill the volume until it reaches the min-free limit.
  3. Rename the file.
  4. Try to perform I/O from the mount point (this should fail).

  Change-Id: Iaee9944b6ba676157ee2453d734a4335aac27811
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>

* [Test] Rebalance preserves / and user subdirs permissions (Tamar Shacked, 2020-10-12, 1 file, -0/+192)
  Test case:
  1. Create a volume, start it and mount it on the client.
  2. Set full permission on the mount point.
  3. Add a new user to the client.
  4. As the new user, create dirs/files.
  5. Compute the arequal checksum and verify permissions on / and subdirs.
  6. Add a brick to the volume and start rebalance.
  7. After rebalance is completed:
     7.1 Check the arequal checksum.
     7.2 Verify there is no change in permissions on / and subdirs.
     7.3 As the new user, create and delete files/dirs.

  Change-Id: Iacd829c0714c28e231c9fc52df6526200cb53041
  Signed-off-by: Tamar Shacked <tshacked@redhat.com>

* [TestFix]: Add tc to check volume status with bricks absent (nik-redhat, 2020-10-09, 1 file, -45/+30)
  Fix: Added more volume types to perform tests and optimized the code for a better flow.

  Change-Id: I8249763161f30109d068da401504e0a24cde4d78
  Signed-off-by: nik-redhat <nladha@redhat.com>

* [TestFix] Add check to verify glusterd Error (Pranav, 2020-10-07, 1 file, -0/+27)
  Adding a check to verify that gluster volume status doesn't cause any error
  messages in the glusterd logs.

  Change-Id: I5666aa7fb7932a7b61a56afa7d60341ef66a978e
  Signed-off-by: Pranav <prprakas@redhat.com>

* [TestFix] Add check of vol size after bringing min brick down (Pranav, 2020-10-07, 1 file, -37/+67)
  Added a check to verify the behavior after bringing down the smallest brick:
  the available volume size should not be greater than the initial volume size.

  Test skipped due to bug:
  https://bugzilla.redhat.com/show_bug.cgi?id=1883429

  Change-Id: I00c0310210f6fe218cedd23e055dfaec3632ec8d
  Signed-off-by: Pranav <prprakas@redhat.com>

* [Test] Volume profile info without starting profile (nik-redhat, 2020-10-06, 1 file, -0/+188)
  Steps:
  1. Create a volume and start it.
  2. Mount the volume on the client and start IO.
  3. Start profile on the volume.
  4. Run profile info and see if all bricks are present or not.
  5. Create another volume and start it.
  6. Run profile info without starting profile.
  7. Run profile info with all possible options without starting profile.

  Change-Id: I0eb2424f385197c45bc0c4e3084c053a9498ae7d
  Signed-off-by: nik-redhat <nladha@redhat.com>

* [Test] Replica 3 to arbiter conversion with ongoing IO's (Arthy Loganathan, 2020-10-06, 1 file, -13/+106)
  Change-Id: I3920be66ac84fe700c4d0d6a1d2c1750efb43335
  Signed-off-by: Arthy Loganathan <aloganat@redhat.com>

* [Test] Multiple clients dd on same file (Arthy Loganathan, 2020-10-01, 1 file, -9/+14)
  Change-Id: I465fefeae36a5b700009bb1d6a3c6639ffafd6bd
  Signed-off-by: Arthy Loganathan <aloganat@redhat.com>

* [Test] Add tests to check rebalance of files with holes (kshithijiyer, 2020-09-30, 1 file, -0/+128)
  Scenarios:
  ----------
  Test case:
  1. Create a volume, start it and mount it using fuse.
  2. On the volume root, create files with holes.
  3. After the file creation is complete, add bricks to the volume.
  4. Trigger rebalance on the volume.
  5. Wait for rebalance to complete.

  Test case:
  1. Create a volume, start it and mount it using fuse.
  2. On the volume root, create files with holes.
  3. After the file creation is complete, remove-brick from the volume.
  4. Wait for remove-brick to complete.

  Change-Id: Icf512685ed8d9ceeb467fb694d3207797aa34e4c
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>

* [Test] Glusterfind --full --type option (Shwetha K Acharya, 2020-09-30, 1 file, -0/+175)
  * Create a volume
  * Create a session on the volume
  * Create various files on the mount point
  * Create various directories on the mount point
  * Perform glusterfind pre with --full --type f --regenerate-outfile
  * Check the contents of the outfile
  * Perform glusterfind pre with --full --type d --regenerate-outfile
  * Check the contents of the outfile
  * Perform glusterfind pre with --full --type both --regenerate-outfile
  * Check the contents of the outfile
  * Perform glusterfind query with --full --type f
  * Check the contents of the outfile
  * Perform glusterfind query with --full --type d
  * Check the contents of the outfile
  * Perform glusterfind query with --full --type both
  * Check the contents of the outfile

  Change-Id: I5c4827ff2052a90613de7bd38d61aaf23cb3284b
  Signed-off-by: Shwetha K Acharya <sacharya@redhat.com>

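  The glusterfind sequence above is plain CLI; a rough sketch of the calls (session,
  volume and outfile names are placeholders):

      import subprocess

      def run(cmd):
          subprocess.run(cmd, shell=True, check=True)

      run("glusterfind create sess1 testvol")
      for ftype in ("f", "d", "both"):
          run("glusterfind pre sess1 testvol /tmp/outfile.txt "
              "--full --type {} --regenerate-outfile".format(ftype))
          print(open("/tmp/outfile.txt").read())        # check the contents of the outfile
      for ftype in ("f", "d", "both"):
          run("glusterfind query testvol /tmp/outfile.txt --full --type {}".format(ftype))
          print(open("/tmp/outfile.txt").read())
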
* [Test] Validate copy of files (sayaleeraut, 2020-09-29, 1 file, -0/+336)
  This test script covers the following scenarios:
  1) Sub-volume is down - copy file where source and destination files are on the up sub-volume
  2) Sub-volume is down - copy file where source - hashed down, cached up; destination - hashed down
  3) Sub-volume is down - copy file where source - hashed down, cached up; destination - hashed to up
  4) Sub-volume is down - copy file where source and destination files hash to the down sub-volume
  5) Sub-volume is down - copy file where the source file is stored on the down sub-volume and the destination file is stored on the up sub-volume
  6) Sub-volume is down - copy file where the source file is stored on the up sub-volume and the destination file is stored on the down sub-volume

  Change-Id: I2765857950723aa8907456364aee9159f9a529ed
  Signed-off-by: sayaleeraut <saraut@redhat.com>

* [Test] Test Quorum specific CLI commands (srijan-sivakumar, 2020-09-29, 1 file, -0/+97)
  Steps:
  1. Create a volume and start it.
  2. Set the quorum-type to 'server' and verify it.
  3. Set the quorum-type to 'none' and verify it.
  4. Set the quorum-ratio to some value and verify it.

  Change-Id: I08715972c13fc455cee25f25bdda852b92a48e10
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>

* [Test] Test set and reset of storage.reserve limit on glusterd (srijan-sivakumar, 2020-09-29, 1 file, -0/+91)
  Steps:
  1. Create a volume and start it.
  2. Set the storage.reserve limit on the created volume and verify it.
  3. Reset the storage.reserve limit on the created volume and verify it.

  Change-Id: I6592d19463696ba2c43efbb8f281024fc610d18d
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>

* [Test] Validate peer probe with hostname, IP and FQDN (Pranav, 2020-09-29, 1 file, -0/+146)
  Test to validate gluster peer probe scenarios using IP address, hostname and FQDN,
  verifying each with the peer status output, pool list and cmd_history.log.

  Change-Id: I77512cfcf62b28e70682405c47014646be71593c
  Signed-off-by: Pranav <prprakas@redhat.com>

* [Test]: Volume status shows bricks online though brick path is deleted (nik-redhat, 2020-09-28, 1 file, -0/+81)
  Steps:
  1) Create a volume and start it.
  2) Fetch the brick list.
  3) Remove any brick path.
  4) Check that the number of bricks online is equal to the number of bricks in the volume.

  Change-Id: I4c3a6692fc88561a47a7d2564901f21dfe0073d4
  Signed-off-by: nik-redhat <nladha@redhat.com>

* [Test] Add test to validate glusterd.info configuration file (“Milind”, 2020-09-28, 1 file, -0/+67)
  1. Check for the presence of the /var/lib/glusterd/glusterd.info file.
  2. Get the UUID of the current node.
  3. Check the value of the UUID returned by executing the command
     "gluster system:: uuid get".
  4. Check the UUID value shown by the other nodes in the cluster for the same node;
     "gluster peer status" on one node will give the UUID of the other node.

  Change-Id: I61dfb227e37b87e889577b77283d65eda4b3cd29
  Signed-off-by: “Milind” <mwaykole@redhat.com>

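  A minimal sketch of the UUID cross-check, run on a storage node (not the glustolibs
  test itself):

      import subprocess

      # Steps 1/2: read the UUID stored in glusterd's own configuration file.
      with open("/var/lib/glusterd/glusterd.info") as conf:
          pairs = dict(line.strip().split("=", 1) for line in conf if "=" in line)
      file_uuid = pairs["UUID"]

      # Step 3: compare it with what the CLI reports on the same node.
      cli_out = subprocess.run("gluster system:: uuid get", shell=True,
                               capture_output=True, text=True).stdout
      assert file_uuid in cli_out, "UUID in glusterd.info does not match the CLI output"
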
* [TestFix] Removing the 'cd' in file operations (srijan-sivakumar, 2020-09-25, 1 file, -4/+4)
  Reason: the cd changes the working directory to root, so renames and softlink
  creations for subsequent files fail, as seen in the glusto logs.

  Change-Id: I174ac11007dc301ba6ec8ccddaeb919a181b1c30
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>

* [Test] Add test to check invalid mem read after free (kshithijiyer, 2020-09-25, 1 file, -0/+102)
  Test case:
  1. Create a volume and start it.
  2. Mount the volume using FUSE.
  3. Create multiple levels of dirs and files inside every dir.
  4. Rename files such that linkto files are created.
  5. From the mount point do an rm -rf * and check whether all files are deleted
     from the mount point as well as the backend bricks.

  Change-Id: I658f67832715dde7260827cc0a27b005b6df5fe3
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>

* [Test] Add test to check for data loss with readdirp off (kshithijiyer, 2020-09-24, 1 file, -0/+103)
  Test case:
  1. Create a 2 x (4+2) disperse volume and start it.
  2. Disable performance.force-readdirp and dht.force-readdirp.
  3. Mount the volume on one client and create 8 directories.
  4. Do a lookup on the mount using the same mount point; the number of directories should be 8.
  5. Mount the volume again on a different client and check if the number of directories is the same or not.

  Change-Id: Id94db2bc9200ab2ce4ca2fb604f38ca4525e6ed1
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>

* [Test] Add test rm -rf * with self-pointing linkto files (kshithijiyer, 2020-09-24, 1 file, -0/+140)
  Test case:
  1. Create a pure distribute volume with 2 bricks, start and mount it.
  2. Create dir dir0/dir1/dir2, inside which create 1000 files and rename all the files.
  3. Start a remove-brick operation on the volume.
  4. Check remove-brick status till the status is completed.
  5. When remove-brick status is completed, stop it.
  6. Go to the brick used for remove-brick and perform a lookup on the files.
  7. Change the linkto xattr value for every file in the brick used for remove-brick
     to point to itself.
  8. Perform rm -rf * from the mount point.

  Change-Id: Ic4a5e0ff93485c9c7d9a768093a52e1d34b78bdf
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>

* [Test] Add test to create and delete sparse files (kshithijiyer, 2020-09-24, 1 file, -0/+156)
  Test case:
  1. Create a volume with 5 sub-volumes, start and mount it.
  2. Check df -h for the available size.
  3. Create 2 sparse files, one from /dev/null and one from /dev/zero.
  4. Find out the size of the files and compare them through du and ls (they shouldn't match).
  5. Check df -h for the available size (it should be less than in step 2).
  6. Remove the files using rm -rf.

  CentOS-CI failure analysis:
  The testcase fails on CentOS-CI on distributed-disperse volumes as it requires
  30 bricks, which aren't available on CentOS-CI.

  Change-Id: Ie53b2531cf6105117625889d21c6e27ad2c10667
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>

* [Test] Add test to nuke happy path (kshithijiyer, 2020-09-22, 1 file, -0/+95)
  Test case:
  1. Create a distributed volume, start and mount it.
  2. Create 1000 dirs and 1000 files under a directory, say 'dir1'.
  3. Set the xattr glusterfs.dht.nuke to "test" for dir1.
  4. Validate dir1 is not seen from the mount point.
  5. Validate that the entry is moved to '/brickpath/.glusterfs/landfill' and deleted eventually.

  Change-Id: I6359ee3c39df4e9e024a1536c95d861966f78ce5
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>

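  Step 3 is a single setfattr from the mount point; a rough sketch (mount path and
  directory name are placeholders):

      import subprocess

      # Step 3: trigger the nuke by setting the virtual xattr on dir1.
      subprocess.run(["setfattr", "-n", "glusterfs.dht.nuke", "-v", "test",
                      "/mnt/testvol/dir1"], check=True)

      # Steps 4/5: dir1 should disappear from the mount, while on each brick its
      # contents land under <brickpath>/.glusterfs/landfill before being purged.
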
* [TestFix] Add node restart validation (Pranav, 2020-09-22, 1 file, -64/+84)
  Extending the existing validation by adding node restart as a method to bring back
  offline bricks, along with the existing volume-start approach.

  Change-Id: I1291b7d9b4a3c299859175b4cdcd2952339c48a4
  Signed-off-by: Pranav <prprakas@redhat.com>

* [Test] Add test to check directory permissions wipe out (kshithijiyer, 2020-09-22, 1 file, -0/+132)
  Test case:
  1. Create a 1 brick pure distributed volume.
  2. Start the volume and mount it on a client node using FUSE.
  3. Create a directory on the mount point.
  4. Check the trusted.glusterfs.dht xattr on the backend brick.
  5. Add a brick to the volume using force.
  6. Do a lookup from the mount point.
  7. Check the directory permissions from the backend bricks.
  8. Check the trusted.glusterfs.dht xattr on the backend bricks.
  9. From the mount point cd into the directory.
  10. Check the directory permissions from the backend bricks.
  11. Check the trusted.glusterfs.dht xattr on the backend bricks.

  Change-Id: I1ba2c07560bf4bdbf7de5d3831e5de71173b64a2
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>

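  The layout-xattr and permission checks on the backend bricks (steps 4, 7, 8, 10 and 11)
  can be expressed with getfattr and stat; a small sketch (brick path and directory name
  are placeholders):

      import subprocess

      def brick_dir_state(path):
          """Return (dht layout xattr dump, octal permissions) for a directory on a brick."""
          xattr = subprocess.run(
              ["getfattr", "-n", "trusted.glusterfs.dht", "-e", "hex", path],
              capture_output=True, text=True).stdout
          perms = subprocess.run(["stat", "-c", "%a", path],
                                 capture_output=True, text=True).stdout.strip()
          return xattr, perms

      print(brick_dir_state("/bricks/brick0/dir1"))
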
* [Test] Rebalance with quota on mountpoint (srijan-sivakumar, 2020-09-21, 1 file, -0/+188)
  Steps:
  1. Create a volume of type distribute.
  2. Set a quota limit on the root directory.
  3. Do some IO to reach the hard limit.
  4. After IO ends, compute the arequal checksum.
  5. Add bricks to the volume.
  6. Start rebalance.
  7. After rebalance is completed, check the arequal checksum.

  Change-Id: I1cffafbe90dd30013e615c353d6fd7daa5990a86
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>

* [Test] Rebalance with special files (srijan-sivakumar, 2020-09-21, 1 file, -0/+158)
  Steps:
  1. Create and start a volume.
  2. Create some special files on the mount point.
  3. Once that is complete, start some IO.
  4. Add a brick to the volume and start rebalance.
  5. All IO should be successful.

  Failing on CentOS-CI due to:
  https://github.com/gluster/glusterfs/issues/1461

  Change-Id: If91886afb3f44d5ede09dfc84e966f66c89ff709
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>

* [Test] Rebalance with quota on subdirectory (srijan-sivakumar, 2020-09-18, 1 file, -0/+195)
  Steps:
  1. Create a volume of type distribute.
  2. Set a quota limit on a subdirectory.
  3. Do some IO to reach the hard limit.
  4. After IO ends, compute the arequal checksum.
  5. Add bricks to the volume.
  6. Start rebalance.
  7. After rebalance is completed, check the arequal checksum.

  Change-Id: I0a431ffb5d1c957e8d11817dd8142d9551323a65
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>

* [Test] Rename files after rebalance (srijan-sivakumar, 2020-09-18, 1 file, -0/+181)
  Steps:
  1. Create a volume.
  2. Create directories or files.
  3. Calculate the checksum using arequal.
  4. Add a brick and start rebalance.
  5. While rebalance is running, rename the files or directories.
  6. After rebalance is completed, calculate the checksum.
  7. Compare the checksums.

  Change-Id: I59f80b06a23f6b4c406907673d71b254d054461d
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>

* [Test] Check heal of custom xattr on directory (sayaleeraut, 2020-09-18, 1 file, -0/+332)
  This test script covers the below scenarios:
  1) Sub-volume is down - directory - verify extended attribute creation, display,
     modification and removal.
  2) Directory self-heal of the extended custom attribute when the sub-volume is up again.
  3) Sub-volume is down - create a new directory - verify extended attribute creation,
     display, modification and removal.
  4) Self-heal of the newly created directory's extended custom attribute when the
     sub-volume is up again.

  Change-Id: I35f8772d7758c2e9c02558b46301681d6c0f319b
  Signed-off-by: sayaleeraut <saraut@redhat.com>

* [Test] Brick removal with Quota in Distribute volume (srijan-sivakumar, 2020-09-18, 1 file, -0/+160)
  Steps:
  1. Create a distribute volume.
  2. Set a quota limit on a directory on the mount.
  3. Do IO to reach the hard limit on the directory.
  4. After IO is completed, remove a brick.
  5. Check if quota is validated, i.e. hard limit exceeded is true after rebalance.

  Change-Id: I8408cc31f70019c799df91e1c3faa7dc82ee5519
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>

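  Steps 2, 4 and 5 are quota and remove-brick CLI calls; a rough sketch (volume, directory
  and brick names are placeholders, and the 1GB limit is illustrative):

      import subprocess

      def gluster(*args):
          return subprocess.run(["gluster", *args], capture_output=True,
                                text=True, check=True).stdout

      gluster("volume", "quota", "testvol", "enable")
      gluster("volume", "quota", "testvol", "limit-usage", "/dir1", "1GB")         # step 2
      # ... write data under /mnt/testvol/dir1 until the hard limit is hit (step 3) ...
      gluster("volume", "remove-brick", "testvol", "server1:/bricks/b3", "start")   # step 4
      # Step 5: after the remove-brick rebalance, the quota list should still show
      # the hard limit as exceeded for /dir1.
      print(gluster("volume", "quota", "testvol", "list"))
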
* [TestFix] As bug #BZ1761932 is fixed, removing skip logic (“Milind”, 2020-09-18, 1 file, -3/+0)
  Change-Id: I54ecce22f243b10248eea78b52c6b8cf2c7fd338
  Signed-off-by: “Milind” <mwaykole@redhat.com>

* [Test] Rebalance with brick down in replica (srijan-sivakumar, 2020-09-18, 1 file, -0/+171)
  Steps:
  1. Create a replica volume.
  2. Bring down one of the bricks in the replica pair.
  3. Do some IO and create files on the mount point.
  4. Add a pair of bricks to the volume.
  5. Initiate rebalance.
  6. Bring back the brick which was down.
  7. After self-heal happens, all the files should be present.

  Change-Id: I78a42866d585b00c40a2712c4ae8f2ab3552adca
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>

* [Testfix] Added reboot scenario to shared_storage test (Bala Konda Reddy M, 2020-09-17, 1 file, -74/+129)
  Currently there is no validation of whether shared storage is mounted or not post
  reboot. Added the validation for the reboot scenario and made the testcase modular
  for future updates to the test.

  Change-Id: I9d39beb3c6718e648eabe15a409c4b4985736645
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>

* [TestFix] Adding sleep so that the brick will get a port (“Milind”, 2020-09-15, 1 file, -0/+3)
  Problem: ValueError: invalid literal for int() with base 10: 'N/A'
  Solution: Wait for 5 sec so that the brick will get the port.

  Change-Id: Idf518392ba5584d09e81e76fca6e29037ac43e90
  Signed-off-by: “Milind” <mwaykole@redhat.com>

* [Testfix] Increase timeouts and fix I/O errors (kshithijiyer, 2020-09-14, 5 files, -6/+11)
  Problem:
  --------
  Problem 1:
  In the latest runs the following testcases fail with wait timeout, mostly on
  rebalance, with the exception of test_stack_overflow which fails on layout:
  1. functional.dht.test_stack_overflow.TestStackOverflow_cplex_dispersed_glusterfs.test_stack_overflow
  2. functional.dht.test_rebalance_dir_file_from_multiple_clients.RebalanceValidation_cplex_dispersed_glusterfs.test_expanding_volume_when_io_in_progress
  3. functional.dht.test_restart_glusterd_after_rebalance.RebalanceValidation_cplex_dispersed_glusterfs.test_restart_glusterd_after_rebalance
  4. functional.dht.test_stop_glusterd_while_rebalance_in_progress.RebalanceValidation_cplex_dispersed_glusterfs.test_stop_glusterd_while_rebalance_in_progress
  5. functional.dht.test_rebalance_with_hidden_files.RebalanceValidation_cplex_dispersed_glusterfs.test_rebalance_with_hidden_files

  This is mostly observed on disperse volumes, which is expected, as in most cases
  disperse volumes take more time than pure replicated or distributed volumes due
  to their design.

  Problem 2:
  Another issue observed was that test_rebalance_with_hidden_files fails on I/O
  with the distributed volume type, with the below stack trace:

  Traceback (most recent call last):
    File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 1246, in <module>
      rc = args.func(args)
    File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 374, in create_files
      base_file_name, file_types)
    File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 341, in _create_files
      ret = pool.map(lambda file_tuple: _create_file(*file_tuple), files)
    File "/usr/lib64/python2.7/multiprocessing/pool.py", line 250, in map
      return self.map_async(func, iterable, chunksize).get()
    File "/usr/lib64/python2.7/multiprocessing/pool.py", line 554, in get
      raise self._value
  IOError: [Errno 17] File exists: '/mnt/testvol_distributed_glusterfs/.1.txt'

  Solution:
  ---------
  Problem 1: Increase or add timeouts so that wait timeouts are not observed.
  Problem 2: Add counter logic to fix the I/O failure.

  Change-Id: I917137abdeb2e3844ee666258235f6ccc854ee9f
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>

* [Test] Validate copy of directory (sayaleeraut, 2020-09-11, 1 file, -0/+308)
  This test script verifies the below scenarios:
  1) Sub-volume is down - copy directory
  2) Sub-volume is down - copy directory - destination dir hashes to the up sub-volume
  3) Sub-volume is down - copy newly created directory - destination dir hashes to the up sub-volume
  4) Sub-volume is down - copy newly created directory - destination dir hashes to the down sub-volume

  Change-Id: I22b9bf79ef4775b1128477fb858c509a719efb4a
  Signed-off-by: sayaleeraut <saraut@redhat.com>

* [Test] Resolve and validate gfid split-brain files (Leela Venkaiah G, 2020-09-08, 1 file, -228/+200)
  Three scenarios:
  - Simulate gfid split-brain files under a directory
  - Resolve gfid splits using the `source-brick`, `bigger-file` and `latest-mtime` methods
  - Validate that all the files are healed and the data is consistent

  Change-Id: I8b143f341c0db2f32086ecb6878cbfe3bdb247ce
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>

* [Test] Add test to add brick with IO & rsync running (kshithijiyer, 2020-09-07, 1 file, -0/+151)
  Test case:
  1. Create, start and mount a volume.
  2. Create a directory on the mount point and start linux untar.
  3. Create another directory on the mount point and start an rsync of the linux untar directory.
  4. Add bricks to the volume.
  5. Trigger rebalance on the volume.
  6. Wait for rebalance to complete on the volume.
  7. Wait for I/O to complete.
  8. Validate that the checksums of the untar and rsync directories are the same.

  Change-Id: I008c65b1783d581129b4c35f3ff90642fffe29d8
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>

* [Test] Check gluster compilation at glusterfs mountpoint (ubansal, 2020-09-07, 1 file, -0/+209)
  Steps:
  1. Create a volume and mount it.
  2. Start gluster compilation.
  3. Bring down redundant bricks.
  4. Wait for compilation to complete.
  5. Bring up the bricks.
  6. Check if the mountpoint is accessible.
  7. Delete glusterfs from the mountpoint and start gluster compilation again.
  8. Bring down redundant bricks.
  9. Wait for compilation to complete.
  10. Bring up the bricks.
  11. Check if the mountpoint is accessible.

  Change-Id: Ic5a272fba7db9707c4acf776d5a505a31a34b915
  Signed-off-by: ubansal <ubansal@redhat.com>