path: root/tests/functional/dht
Commit message | Author | Date | Files, Lines (-/+)
* [Test] Add TC to test multiple volume shrinks and rebalance (HEAD, master) | Barak Sason Rofman | 2021-03-09 | 1 file, -0/+87
  Test case:
  1. Modify the distribution count of a volume
  2. Create a volume, start it and mount it
  3. Create some files on the mount point
  4. Collect arequal checksum on mount point pre-rebalance
  5. Do the following 3 times:
  6. Shrink the volume
  7. Collect arequal checksum on mount point post-rebalance and compare with the value from step 4
  Change-Id: Ib64575e759617684009c68d8b6bb5f011c553b55
  Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
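  A minimal sketch of the shrink-and-compare loop in steps 5-7, assuming the glustolibs helpers shrink_volume() and collect_mounts_arequal() with the signatures used elsewhere in this tree (verify against your checkout):

    from glustolibs.gluster.volume_libs import shrink_volume
    from glustolibs.io.utils import collect_mounts_arequal

    def shrink_three_times(mnode, volname, mounts):
        # Checksum taken once, before any shrink (step 4).
        ret, arequal_before = collect_mounts_arequal(mounts)
        assert ret, "failed to collect pre-shrink arequal"
        for _ in range(3):
            # shrink_volume() picks a distribute subvol, migrates its data
            # off via the remove-brick rebalance and commits the removal.
            assert shrink_volume(mnode, volname), "volume shrink failed"
            ret, arequal_after = collect_mounts_arequal(mounts)
            assert ret and arequal_after == arequal_before, \
                "arequal mismatch after shrink"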
* [Test] Add TC to rebalance highly nested dir structure | Barak Sason Rofman | 2021-03-09 | 1 file, -0/+99
  Test case:
  1. Create a volume, start it and mount it
  2. On mount point, create a large nested dir structure with files in the inner-most dir
  3. Collect arequal checksum on mount point pre-rebalance
  4. Expand the volume
  5. Start rebalance and wait for it to finish
  6. Collect arequal checksum on mount point post-rebalance and compare with the value from step 3
  Change-Id: I87f0e8df8c4ca850bdf749583635fc8cd2ba1b86
  Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
* [Test] Add test to verify permission changes made on mount dir | Pranav | 2021-02-14 | 1 file, -0/+134
  Adding a test to verify whether permission changes made on the mount point dir are reflected on the brick dirs as well, even when the change is made while a brick was down.
  1. Create a pure distribute volume
  2. Mount it on a client
  3. Check the default permission (should be 755)
  4. Change the permission to 444 and verify
  5. Kill a brick
  6. Change root permission to 755
  7. Verify permission changes on all bricks, except the down brick
  8. Bring back the brick and verify the changes are reflected
  Change-Id: I0a24b94fc5b75706c664f9e2d1363d38b77f9e3a
  Signed-off-by: Pranav <prprakas@redhat.com>
* [Test] Add TC to test multiple volume expansions and rebalance | Barak Sason Rofman | 2021-01-20 | 1 file, -0/+100
  Test case:
  1. Create a volume, start it and mount it
  2. On mount point, create some files
  3. Collect arequal checksum on mount point pre-rebalance
  4. Do the following 3 times:
  5. Expand the volume
  6. Start rebalance and wait for it to finish
  7. Collect arequal checksum on mount point post-rebalance and compare with value from step 3
  Change-Id: I8a455ad9baf2edb336258448965b54a403a48ae1
  Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
* [Test] Test read file from a stale layout | Leela Venkaiah G | 2021-01-08 | 1 file, -0/+181
  Test Steps:
  1. Create, start and mount a volume consisting of 2 subvols on 2 clients
  2. Create a dir `dir` and file `dir/file` from client0
  3. Take note of the layouts of `brick1`/dir and `brick2`/dir of the volume
  4. Validate that lookup succeeds from only one brick path
  5. Re-assign layouts, i.e., brick1/dir to brick2/dir and vice-versa
  6. Remove `dir/file` from client0 and recreate the same file from client0 and client1
  7. Validate that lookup succeeds from only one brick path (as the layout changed, the file creation path will change)
  8. Validate that the checksum matches from both clients
  Change-Id: I91ec020ee616a0f60be9eff92e71b12d20a5cadf
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
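  Step 5's layout re-assignment can be done with plain xattr commands on the backend bricks; below is a hypothetical helper over glusto's g.run() (the directory paths and output parsing are illustrative):

    from glusto.core import Glusto as g

    def swap_dir_layouts(server, brick1_dir, brick2_dir):
        # Read the dht layout xattr from each brick's copy of the directory.
        cmd = "getfattr -n trusted.glusterfs.dht -e hex %s"
        _, out1, _ = g.run(server, cmd % brick1_dir)
        _, out2, _ = g.run(server, cmd % brick2_dir)
        layout1 = out1.strip().split("=")[-1]  # "0x..." hex layout range
        layout2 = out2.strip().split("=")[-1]
        # Re-assign: write brick1's layout onto brick2's dir and vice-versa.
        g.run(server, "setfattr -n trusted.glusterfs.dht -v %s %s"
              % (layout2, brick1_dir))
        g.run(server, "setfattr -n trusted.glusterfs.dht -v %s %s"
              % (layout1, brick2_dir))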
* [Test] Add TC to rebalance two volumes simultaneously | Barak Sason Rofman | 2021-01-04 | 1 file, -0/+163
  Test case:
  1. Create a volume, start it and mount it
  2. Create a 2nd volume, start it and mount it
  3. Create files on mount points
  4. Collect arequal checksum on mount point pre-rebalance
  5. Expand the volumes
  6. Start rebalance simultaneously on the 2 volumes
  7. Wait for rebalance to complete
  8. Collect arequal checksum on mount point post-rebalance and compare with the value from step 4
  Change-Id: I6120bb5f96ff5cfc345d2b0f84dd99ca749ffc74
  Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
* [Test] Add TC to add peer to cluster while rebalance is in progress | Barak Sason Rofman | 2021-01-04 | 1 file, -0/+130
  Test case:
  1. Detach a peer
  2. Create a volume, start it and mount it
  3. Start creating a few files on mount point
  4. Collect arequal checksum on mount point pre-rebalance
  5. Expand the volume
  6. Start rebalance
  7. While rebalance is in progress, probe a peer and check if the peer was probed successfully
  8. Collect arequal checksum on mount point post-rebalance and compare with the value from step 4
  Change-Id: Ifee9f3dcd69e87ba1d5b4b97c29c0a6cb7491e60
  Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
* [Test] Add double-expand test to rebalance-preserves-permission TC | Tamar Shacked | 2020-12-30 | 1 file, -55/+57
  To cover a similar scenario, add a test to the rebalance-preserves-permission TC which runs a double expand before rebalance.
  Change-Id: I4a37c383bb8e823c6ca84c1a6e6699b18e80a450
  Signed-off-by: Tamar Shacked <tshacked@redhat.com>
* [Test+Lib] Add test to check rebalance impact on acl | kshithijiyer | 2020-12-18 | 1 file, -0/+129
  Test case:
  1. Create a volume, start it and mount it to a client.
  2. Create 10 files on the mount point and set acls on the files.
  3. Check the acl value and collect arequal-checksum.
  4. Add bricks to the volume and start rebalance.
  5. Check the value of acl (it should be the same as step 3), collect and compare arequal-checksum with the one collected in step 3
  Additional functions added:
  a. set_acl(): Set acl rule on a specific file
  b. get_acl(): Get all acl rules set to a file
  c. delete_acl(): Delete a specific or all acl rules set on a file
  Change-Id: Ia420cbcc8daea272cd4a282ae27d24f13b4991fe
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
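  A rough sketch of what the three helpers could look like, assuming they wrap the standard setfacl/getfacl CLI via g.run(); the real signatures in glustolibs may differ:

    from glusto.core import Glusto as g

    def set_acl(client, rule, fqpath):
        """Apply one acl rule, e.g. set_acl(client, 'u:tester:rwx', '/mnt/f1')."""
        ret, _, _ = g.run(client, "setfacl -m %s %s" % (rule, fqpath))
        return not ret

    def get_acl(client, path, filename):
        """Return all acl rules set on a file as a list of lines."""
        ret, out, _ = g.run(client, "cd %s && getfacl %s" % (path, filename))
        return out.splitlines() if not ret else None

    def delete_acl(client, fqpath, rule=None):
        """Delete one rule (-x) or all rules (-b) set on a file."""
        opt = ("-x %s" % rule) if rule else "-b"
        ret, _, _ = g.run(client, "setfacl %s %s" % (opt, fqpath))
        return not ret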
* [Test] Add test to check rebalance with self heal running | kshithijiyer | 2020-12-17 | 1 file, -0/+136
  Test case:
  1. Create a volume, start it and mount it.
  2. Start creating a few files on mount point.
  3. While file creation is going on, kill one of the bricks in the replica pair.
  4. After file creation is complete, collect arequal checksum on mount point.
  5. Bring back the brick online by starting volume with force.
  6. Check if all bricks are online and if heal is in progress.
  7. Add bricks to the volume and start rebalance.
  8. Wait for rebalance and heal to complete on volume.
  9. Collect arequal checksum on mount point and compare it with the one taken in step 4.
  Change-Id: I2999b81443e8acabdb976401b0a56566a6740a39
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to add brick with symlink pointing out | kshithijiyer | 2020-12-17 | 1 file, -0/+133
  Test case:
  1. Create a volume, start it and mount it.
  2. Create symlinks on the volume such that the files for the symlinks are outside the volume.
  3. Once all the symlinks are created, create a data file using dd:
     dd if=/dev/urandom of=FILE bs=1024 count=100
  4. Start copying the file's data to all the symlinks.
  5. When data is getting copied to all files through the symlinks, add brick and start rebalance.
  6. Once rebalance is complete, check the md5sum of each file through its symlink and compare it with that of the original file.
  Change-Id: Icbeaa75f11e7605e13fa4a64594137c8f4ae8aa2
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Testfix] Fixing minor typos in comment and docstring | kshithijiyer | 2020-12-10 | 2 files, -2/+2
  Change-Id: I7cbc6422a6a6d2946440e51e8d540f47ccc9bf46
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Check performance of ls on distributed volumes | kshithijiyer | 2020-12-03 | 1 file, -0/+105
  Test case:
  1. Create a volume of type distributed-replicated or distributed-arbiter or distributed-dispersed and start it.
  2. Mount the volume to clients and create 2000 directories and 10 files inside each directory.
  3. Wait for I/O to complete on mount point and perform ls (ls should complete within 10 seconds).
  Change-Id: I5c08c185f409b23bd71de875ad1d0236288b0dcc
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Rebalance with add brick and lookup on mount | srijan-sivakumar | 2020-11-18 | 1 file, -0/+113
  Steps:
  1. Create a distributed-replicated volume, start and mount it.
  2. Create deep dirs (200) and create some 100 files in the deepest directory.
  3. Expand volume.
  4. Start rebalance.
  5. Once rebalance is completed, do a lookup on the mount and log the time taken.
  Change-Id: I3a55d2670cc6bda7670f97f0cd6208dc9e36a5d6
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Add test to add brick, replace brick and fix layout | kshithijiyer | 2020-11-12 | 1 file, -0/+124
  Test case:
  1. Create a volume, start it and mount it.
  2. Create files and dirs on the mount point.
  3. Add bricks to the volume.
  4. Replace 2 old bricks of the volume.
  5. Trigger rebalance fix layout and wait for it to complete.
  6. Check layout on all the bricks through trusted.glusterfs.dht.
  Change-Id: Ibc8ded6ce2a54b9e4ec8bf0dc82436fcbcc25f56
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
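  Steps 5-6 in raw commands; 'fix-layout start' recalculates directory layouts without migrating data, after which every brick copy of a directory should carry a trusted.glusterfs.dht range. The brick loop is illustrative:

    from glusto.core import Glusto as g

    def fix_layout_and_check(mnode, volname, bricks):
        ret, _, _ = g.run(mnode, "gluster volume rebalance %s fix-layout start"
                          % volname)
        assert ret == 0, "fix-layout failed to start"
        for brick in bricks:  # brick is "host:/brick/path"
            host, path = brick.split(":")
            ret, out, _ = g.run(host, "getfattr -n trusted.glusterfs.dht -e hex %s"
                                % path)
            assert ret == 0 and "trusted.glusterfs.dht=0x" in out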
* [Test] Add test with filled bricks + add brick + rebalance | kshithijiyer | 2020-11-12 | 1 file, -0/+120
  Test case:
  1. Create a volume, start it and mount it.
  2. Create a data set on the client node such that all the available space is used and a "No space left on device" error is generated.
  3. Set cluster.min-free-disk to 30%.
  4. Add bricks to the volume, trigger rebalance and wait for rebalance to complete.
  Change-Id: I69c9d447b4713b107f15b4801f4371c33f5fb2fc
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
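  Step 3 boils down to one volume-set call; cluster.min-free-disk makes dht route new files away from bricks whose free space has dropped below the limit:

    from glusto.core import Glusto as g

    def set_min_free_disk(mnode, volname, limit="30%"):
        ret, _, _ = g.run(mnode, "gluster volume set %s cluster.min-free-disk %s"
                          % (volname, limit))
        return ret == 0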
* [Test] Add tests for add brick with hard links and sticky bit | kshithijiyer | 2020-11-12 | 1 file, -0/+171
  Scenarios:
  ----------
  Test case 1:
  1. Create a volume, start it and mount it using fuse.
  2. Create 50 files on the mount point and create 50 hardlinks for the files.
  3. After the file and hard link creation is complete, add bricks to the volume and trigger rebalance on the volume.
  4. Wait for rebalance to complete and check if files are skipped or not.
  5. Trigger rebalance on the volume with force and repeat step 4.
  Test case 2:
  1. Create a volume, start it and mount it using fuse.
  2. Create 50 files on the mount point and set the sticky bit on the files.
  3. After the file creation and sticky bit addition is complete, add bricks to the volume and trigger rebalance on the volume.
  4. Wait for rebalance to complete.
  5. Check for data corruption by comparing arequal before and after.
  Change-Id: I61bcf14185b0fe31b44e9d2b0a58671f21752633
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
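  Illustrative setup for the two scenarios (file names and sizes are made up); a hard link pins a file so a plain rebalance skips it, while chmod +t sets the sticky bit:

    from glusto.core import Glusto as g

    def make_files_with_hardlinks(client, mountpoint):
        # Test case 1: hard-linked files are skipped unless rebalance
        # is started with force.
        for i in range(50):
            fname = "%s/file_%d" % (mountpoint, i)
            g.run(client, "dd if=/dev/urandom of=%s bs=1024 count=10" % fname)
            g.run(client, "ln %s %s_hardlink" % (fname, fname))

    def make_files_with_sticky_bit(client, mountpoint):
        # Test case 2: sticky-bit files, checked later via arequal.
        for i in range(50):
            fname = "%s/sfile_%d" % (mountpoint, i)
            g.run(client, "dd if=/dev/urandom of=%s bs=1024 count=10" % fname)
            g.run(client, "chmod +t %s" % fname)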
* [Test] Add test for full brick + add brick + remove brick | kshithijiyer | 2020-11-12 | 1 file, -0/+111
  Test case:
  1. Create a volume, start it and mount it.
  2. Fill a few bricks till the min-free-limit is reached.
  3. Add brick to the volume.
  4. Set cluster.min-free-disk to 30%.
  5. Remove bricks from the volume. (Remove brick should pass without any errors)
  6. Check for data loss by comparing arequal before and after.
  Change-Id: I0033ec47ab2a2958178ce23c9d164939c9bce2f3
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to check kill brick with remove brick running | kshithijiyer | 2020-11-12 | 1 file, -0/+128
  Test case:
  1. Create a volume, start it and mount it.
  2. Create some data on the volume.
  3. Start remove-brick on the volume.
  4. When remove-brick is in progress, kill the brick process of a brick which is being removed.
  5. Remove-brick should complete without any failures.
  Change-Id: I8b8740d0db82d3345279dee3f0f5f6e17160df47
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add tests to check remove brick with different options | kshithijiyer | 2020-11-12 | 1 file, -0/+113
  Test scenarios:
  ===============
  Test case 1:
  1. Create a volume, start it and mount it.
  2. Create some data on the volume.
  3. Run remove-brick start, status and finally commit.
  4. Check if there is any data loss or not.
  Test case 2:
  1. Create a volume, start it and mount it.
  2. Create some data on the volume.
  3. Run remove-brick with force.
  4. Check if bricks are still seen on the volume or not.
  Change-Id: I2cfd324093c0a835811a682accab8fb0a19551cb
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
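  The start/status/commit cycle from test case 1 as plain CLI calls; the polling loop is a sketch and 'bricks' is an illustrative space-separated brick list:

    import time
    from glusto.core import Glusto as g

    def remove_brick_cycle(mnode, volname, bricks, timeout=600):
        base = "gluster volume remove-brick %s %s" % (volname, bricks)
        assert g.run(mnode, base + " start")[0] == 0
        # Poll status until the data migration reports 'completed'.
        deadline = time.time() + timeout
        while time.time() < deadline:
            _, out, _ = g.run(mnode, base + " status")
            if "completed" in out:
                break
            time.sleep(10)
        else:
            raise AssertionError("remove-brick did not complete in time")
        assert g.run(mnode, base + " commit")[0] == 0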
* [Test] Add test to add & remove bricks with lookups & I/O runningkshithijiyer2020-11-121-0/+162
| | | | | | | | | | | | | | | | | | | | Test case: 1. Enable brickmux on cluster, create a volume, start it and mount it. 2. Start the below I/O from 4 clients: From client-1 : run script to create folders and files continuously From client-2 : start linux kernel untar From client-3 : while true;do find;done From client-4 : while true;do ls -lRt;done 3. Kill brick process on one of the nodes. 4. Add brick to the volume. 5. Remove bricks from the volume. 6. Validate if I/O was successful or not. Skip reason: Test case skipped due to bug 1571317. Change-Id: I48bdb433230c0b13b0738bbebb5bb71a95357f57 Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to check remove brick with open fd | kshithijiyer | 2020-11-11 | 1 file, -0/+107
  Test case:
  1. Create volume, start it and mount it.
  2. Open file datafile on mount point and start copying /etc/passwd line by line (make sure that the copy is slow).
  3. Start remove-brick of the subvol to which datafile is hashed.
  4. Once remove-brick is complete, compare the checksum of /etc/passwd and datafile.
  Change-Id: I278e819731af03094dcee93963ec1da115297bef
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to add brick rebalance with one brick full | kshithijiyer | 2020-11-09 | 1 file, -0/+139
  Test case:
  1. Create a pure distribute volume with 3 bricks.
  2. Start it and mount it on client.
  3. Fill one disk of the volume till it's full.
  4. Add brick to volume, start rebalance and wait for it to complete.
  5. Check arequal checksum before and after add brick; they should be the same.
  6. Check if link files are present on bricks or not.
  Change-Id: I4645a3eea33fefe78d48805a3794556b81b189bc
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Lib] Add get_usable_size_per_disk() to library | kshithijiyer | 2020-10-29 | 1 file, -5/+2
  Changes done in this patch:
  1. Adding get_usable_size_per_disk() to lib_utils.py.
  2. Removing the redundant code from dht/test_rename_with_brick_min_free_limit_crossed.py.
  Change-Id: I80c1d6124b7f0ce562d8608565f7c46fd8612d0d
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to copy huge file with remove-brick | kshithijiyer | 2020-10-29 | 1 file, -0/+111
  Test case:
  1. Create a volume, start it and mount it.
  2. Create files and dirs on the mount point.
  3. Start remove-brick and copy a huge file when remove-brick is in progress.
  4. Commit remove-brick and check the checksum of the original and copied file.
  Change-Id: I487ca05114c1f36db666088f06cf5512671ee7d7
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Validate creation of different file types | sayaleeraut | 2020-10-28 | 1 file, -0/+494
  This test script covers the below scenarios:
  1) Creation of various file types - regular, block, character and pipe file
  2) Hard link create, validate
  3) Symbolic link create, validate
  Issue: Fails on CI due to https://github.com/gluster/glusterfs/issues/1461
  Change-Id: If50b8d697115ae7c23b4d30e0f8946e9fe705ece
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Add test to fill brick and perform rename | kshithijiyer | 2020-10-12 | 1 file, -0/+85
  Test case:
  1. Create a volume, start it and mount it.
  2. Calculate the usable size and fill till it reaches the min free limit
  3. Rename the file
  4. Try to perform I/O from mount point. (This should fail)
  Change-Id: Iaee9944b6ba676157ee2453d734a4335aac27811
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Rebalance preserves / and user subdirs permissions | Tamar Shacked | 2020-10-12 | 1 file, -0/+192
  Test case:
  1. Create a volume, start it and mount it on the client.
  2. Set full permission on the mount point.
  3. Add a new user to the client.
  4. As the new user, create dirs/files.
  5. Compute arequal checksum and verify permission on / and subdir.
  6. Add brick into the volume and start rebalance.
  7. After rebalance is completed:
     7.1 check arequal checksum
     7.2 verify no change in permission on / and sub dir
     7.3 as the new user, create and delete file/dir.
  Change-Id: Iacd829c0714c28e231c9fc52df6526200cb53041
  Signed-off-by: Tamar Shacked <tshacked@redhat.com>
* [Test] Add tests to check rebalance of files with holes | kshithijiyer | 2020-09-30 | 1 file, -0/+128
  Scenarios:
  ---------
  Test case:
  1. Create a volume, start it and mount it using fuse.
  2. On the volume root, create files with holes.
  3. After the file creation is complete, add bricks to the volume.
  4. Trigger rebalance on the volume.
  5. Wait for rebalance to complete.
  Test case:
  1. Create a volume, start it and mount it using fuse.
  2. On the volume root, create files with holes.
  3. After the file creation is complete, remove-brick from volume.
  4. Wait for remove-brick to complete.
  Change-Id: Icf512685ed8d9ceeb467fb694d3207797aa34e4c
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Validate copy of file | sayaleeraut | 2020-09-29 | 1 file, -0/+336
  This test script covers the following scenarios:
  1) Sub-volume is down, copy file where source and destination files are on the up sub-volume
  2) Sub-volume is down, copy file where source - hashed down, cached up, destination - hashed down
  3) Sub-volume is down, copy file where source - hashed down, cached up, destination hashed to up
  4) Sub-volume is down, copy file where source and destination files are hashing to the down sub-volume
  5) Sub-volume is down, copy file where source file is stored on the down sub-volume and destination file is stored on the up sub-volume
  6) Sub-volume is down, copy file where source file is stored on the up sub-volume and destination file is stored on the down sub-volume
  Change-Id: I2765857950723aa8907456364aee9159f9a529ed
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Add test to check invalid mem read after freed | kshithijiyer | 2020-09-25 | 1 file, -0/+102
  Test case:
  1. Create a volume and start it.
  2. Mount the volume using FUSE.
  3. Create multiple levels of dirs and files inside every dir.
  4. Rename files such that linkto files are created.
  5. From the mount point do an rm -rf * and check if all files are deleted or not from the mount point as well as the backend bricks.
  Change-Id: I658f67832715dde7260827cc0a27b005b6df5fe3
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to check for data loss with readdirp off | kshithijiyer | 2020-09-24 | 1 file, -0/+103
  Test case:
  1. Create a 2 x (4+2) disperse volume and start it.
  2. Disable performance.force-readdirp and dht.force-readdirp.
  3. Mount the volume on one client and create 8 directories.
  4. Do a lookup on the mount using the same mount point; the number of directories should be 8.
  5. Mount the volume again on a different client and check if the number of directories is the same or not.
  Change-Id: Id94db2bc9200ab2ce4ca2fb604f38ca4525e6ed1
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
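  Step 2 as volume-set calls; both option names are taken from the commit message above:

    from glusto.core import Glusto as g

    def disable_force_readdirp(mnode, volname):
        for opt in ("performance.force-readdirp", "dht.force-readdirp"):
            ret, _, _ = g.run(mnode, "gluster volume set %s %s off"
                              % (volname, opt))
            assert ret == 0, "failed to disable %s" % opt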
* [Test] Add test for rm -rf * with self pointing linkto files | kshithijiyer | 2020-09-24 | 1 file, -0/+140
  Test case:
  1. Create a pure distribute volume with 2 bricks, start and mount it.
  2. Create dir dir0/dir1/dir2 inside which create 1000 files and rename all the files.
  3. Start remove-brick operation on the volume.
  4. Check remove-brick status till status is completed.
  5. When remove-brick status is completed, stop it.
  6. Go to the brick used for remove-brick and perform lookup on the files.
  7. Change the linkto xattr value for every file in the brick used for remove-brick to point to itself.
  8. Perform rm -rf * from the mount point.
  Change-Id: Ic4a5e0ff93485c9c7d9a768093a52e1d34b78bdf
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
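  Step 7 in raw commands: trusted.glusterfs.dht.linkto holds the name of the client xlator a linkto file points at, so writing the brick's own subvol name makes it self-pointing. The subvol name and file path here are illustrative:

    from glusto.core import Glusto as g

    def make_linkto_self_pointing(server, brickpath, fname,
                                  subvol_name="testvol-client-0"):
        # subvol_name should be the dht subvol backed by this same brick.
        g.run(server, "setfattr -n trusted.glusterfs.dht.linkto -v %s %s/%s"
              % (subvol_name, brickpath, fname))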
* [Test] Add test to create and delete sparse files | kshithijiyer | 2020-09-24 | 1 file, -0/+156
  Test case:
  1. Create volume with 5 sub-volumes, start and mount it.
  2. Check df -h for available size.
  3. Create 2 sparse files, one from /dev/null and one from /dev/zero.
  4. Find out the size of the files and compare them through du and ls. (They shouldn't match.)
  5. Check df -h for available size. (It should be less than step 2.)
  6. Remove the files using rm -rf.
  CentOS-CI failure analysis:
  The testcase fails on CentOS-CI on distributed-disperse volumes as it requires 30 bricks, which aren't available on CentOS-CI.
  Change-Id: Ie53b2531cf6105117625889d21c6e27ad2c10667
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
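  One way to create the two sparse files from step 3; dd writes nothing from /dev/null but seek extends the file, so du (allocated blocks) and ls (apparent size) disagree:

    from glusto.core import Glusto as g

    def make_sparse_files(client, mountpoint):
        # Fully sparse: /dev/null yields EOF immediately, seek sets the size.
        g.run(client, "dd if=/dev/null of=%s/sparse_null bs=1M seek=1024"
              % mountpoint)
        # Mostly sparse: 1M of zeroes written at a 1G offset.
        g.run(client, "dd if=/dev/zero of=%s/sparse_zero bs=1M seek=1024 count=1"
              % mountpoint)
        # du reports allocated blocks, ls -l the apparent size.
        g.run(client, "du -h %s/sparse_null; ls -lh %s/sparse_null"
              % (mountpoint, mountpoint))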
* [Test] Add test to nuke happy path | kshithijiyer | 2020-09-22 | 1 file, -0/+95
  Test case:
  1. Create a distributed volume, start and mount it
  2. Create 1000 dirs and 1000 files under a directory say 'dir1'
  3. Set xattr glusterfs.dht.nuke to "test" for dir1
  4. Validate dir1 is not seen from mount point
  5. Validate if the entry is moved to '/brickpath/.glusterfs/landfill' and deleted eventually.
  Change-Id: I6359ee3c39df4e9e024a1536c95d861966f78ce5
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
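  Steps 3-5 in raw commands; glusterfs.dht.nuke is the virtual xattr this test exercises: setting it moves the directory to .glusterfs/landfill on the bricks, where the janitor thread deletes it in the background:

    from glusto.core import Glusto as g

    def nuke_dir(client, mountpoint, dirname="dir1"):
        g.run(client, "setfattr -n glusterfs.dht.nuke -v test %s/%s"
              % (mountpoint, dirname))
        # dir1 disappears from the mount right away, and shows up under
        # <brickpath>/.glusterfs/landfill until it is reaped.
        g.run(client, "ls %s" % mountpoint)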
* [Test] Add test to check directory permissions wipe out | kshithijiyer | 2020-09-22 | 1 file, -0/+132
  Test case:
  1. Create a 1 brick pure distributed volume.
  2. Start the volume and mount it on a client node using FUSE.
  3. Create a directory on the mount point.
  4. Check trusted.glusterfs.dht xattr on the backend brick.
  5. Add brick to the volume using force.
  6. Do lookup from the mount point.
  7. Check the directory permissions from the backend bricks.
  8. Check trusted.glusterfs.dht xattr on the backend bricks.
  9. From mount point cd into the directory.
  10. Check the directory permissions from backend bricks.
  11. Check trusted.glusterfs.dht xattr on the backend bricks.
  Change-Id: I1ba2c07560bf4bdbf7de5d3831e5de71173b64a2
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Rebalance with quota on mountpoint | srijan-sivakumar | 2020-09-21 | 1 file, -0/+188
  Steps:
  1. Create volume of type distribute
  2. Set quota limit on the root directory
  3. Do some IO to reach the hard limit
  4. After IO ends, compute arequal checksum
  5. Add bricks to the volume.
  6. Start rebalance
  7. After rebalance is completed, check arequal checksum
  Change-Id: I1cffafbe90dd30013e615c353d6fd7daa5990a86
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
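  Steps 1-2 as standard quota CLI calls; the limit value is illustrative:

    from glusto.core import Glusto as g

    def set_root_quota(mnode, volname, limit="1GB"):
        assert g.run(mnode, "gluster volume quota %s enable" % volname)[0] == 0
        assert g.run(mnode, "gluster volume quota %s limit-usage / %s"
                     % (volname, limit))[0] == 0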
* [Test] Rebalance with special files | srijan-sivakumar | 2020-09-21 | 1 file, -0/+158
  Steps:
  1. Create and start volume.
  2. Create some special files on mount point.
  3. Once it is complete, start some IO.
  4. Add brick into the volume and start rebalance.
  5. All IO should be successful.
  Failing on centos-ci due to: https://github.com/gluster/glusterfs/issues/1461
  Change-Id: If91886afb3f44d5ede09dfc84e966f66c89ff709
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Rebalance with quota on subdirectory | srijan-sivakumar | 2020-09-18 | 1 file, -0/+195
  Steps:
  1. Create volume of type distribute
  2. Set quota limit on a subdirectory
  3. Do some IO to reach the hard limit
  4. After IO ends, compute arequal checksum
  5. Add bricks to the volume.
  6. Start rebalance
  7. After rebalance is completed, check arequal checksum
  Change-Id: I0a431ffb5d1c957e8d11817dd8142d9551323a65
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Rename Files after Rebalance | srijan-sivakumar | 2020-09-18 | 1 file, -0/+181
  Steps:
  1. Create a volume
  2. Create directories or files
  3. Calculate checksum using arequal
  4. Add brick and start rebalance
  5. While rebalance is running, rename the files or directories
  6. After rebalance is completed, calculate checksum
  7. Compare the checksum
  Change-Id: I59f80b06a23f6b4c406907673d71b254d054461d
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Check heal of custom xattr on directory | sayaleeraut | 2020-09-18 | 1 file, -0/+332
  This test script covers the below scenarios:
  1) Sub-volume is down - directory - verify extended attribute creation, display, modification and removal.
  2) Directory self heal - extended custom attribute when sub-volume is up again.
  3) Sub-volume is down - create new directory - verify extended attribute creation, display, modification and removal.
  4) Newly created directory self heal - extended custom attribute when sub-volume is up again.
  Change-Id: I35f8772d7758c2e9c02558b46301681d6c0f319b
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Brick removal with Quota in Distribute volume | srijan-sivakumar | 2020-09-18 | 1 file, -0/+160
  Steps:
  1. Create a distribute volume.
  2. Set quota limit on a directory on the mount.
  3. Do IO to reach the hard limit on the directory.
  4. After IO is completed, remove a brick.
  5. Check if quota is validated, i.e., hard limit exceeded is true after rebalance.
  Change-Id: I8408cc31f70019c799df91e1c3faa7dc82ee5519
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Rebalance with brick down in replica | srijan-sivakumar | 2020-09-18 | 1 file, -0/+171
  Steps:
  1. Create a replica volume.
  2. Bring down one of the bricks in the replica pair.
  3. Do some IO and create files on the mount point.
  4. Add a pair of bricks to the volume.
  5. Initiate rebalance.
  6. Bring back the brick which was down.
  7. After self heal happens, all the files should be present.
  Change-Id: I78a42866d585b00c40a2712c4ae8f2ab3552adca
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Testfix] Increase timeouts and fix I/O errors | kshithijiyer | 2020-09-14 | 5 files, -6/+11
  Problem:
  --------
  Problem 1:
  In the latest runs the following testcases fail with wait timeout, mostly on rebalance, with the exception of test_stack_overflow which fails on layout:
  1. functional.dht.test_stack_overflow.TestStackOverflow_cplex_dispersed_glusterfs.test_stack_overflow
  2. functional.dht.test_rebalance_dir_file_from_multiple_clients.RebalanceValidation_cplex_dispersed_glusterfs.test_expanding_volume_when_io_in_progress
  3. functional.dht.test_restart_glusterd_after_rebalance.RebalanceValidation_cplex_dispersed_glusterfs.test_restart_glusterd_after_rebalance
  4. functional.dht.test_stop_glusterd_while_rebalance_in_progress.RebalanceValidation_cplex_dispersed_glusterfs.test_stop_glusterd_while_rebalance_in_progress
  5. functional.dht.test_rebalance_with_hidden_files.RebalanceValidation_cplex_dispersed_glusterfs.test_rebalance_with_hidden_files
  This is mostly observed on disperse volumes, which is expected as in most cases disperse volumes take more time than pure replicated or distributed volumes due to their design.
  Problem 2:
  Another issue observed was test_rebalance_with_hidden_files failing on I/O with the distributed volume type with the below stack trace:
    Traceback (most recent call last):
      File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 1246, in <module>
        rc = args.func(args)
      File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 374, in create_files
        base_file_name, file_types)
      File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 341, in _create_files
        ret = pool.map(lambda file_tuple: _create_file(*file_tuple), files)
      File "/usr/lib64/python2.7/multiprocessing/pool.py", line 250, in map
        return self.map_async(func, iterable, chunksize).get()
      File "/usr/lib64/python2.7/multiprocessing/pool.py", line 554, in get
        raise self._value
    IOError: [Errno 17] File exists: '/mnt/testvol_distributed_glusterfs/.1.txt'
  Solution:
  ---------
  Problem 1: Increase or add timeouts so that wait timeouts are not observed.
  Problem 2: Add counter logic to fix the I/O failure.
  Change-Id: I917137abdeb2e3844ee666258235f6ccc854ee9f
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Validate copy of directory | sayaleeraut | 2020-09-11 | 1 file, -0/+308
  This test script verifies the below scenarios:
  1) Sub-volume is down, copy directory
  2) Sub-volume is down, copy directory - destination dir hashes to an up sub-volume
  3) Sub-volume is down, copy newly created directory - destination dir hashes to an up sub-volume
  4) Sub-volume is down, copy newly created directory - destination dir hashes to the down sub-volume
  Change-Id: I22b9bf79ef4775b1128477fb858c509a719efb4a
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Add test to add brick with IO & rsync running | kshithijiyer | 2020-09-07 | 1 file, -0/+151
  Test case:
  1. Create, start and mount a volume.
  2. Create a directory on the mount point and start linux untar.
  3. Create another directory on the mount point and start rsync of the linux untar directory.
  4. Add bricks to the volume.
  5. Trigger rebalance on the volume.
  6. Wait for rebalance to complete on volume.
  7. Wait for I/O to complete.
  8. Validate if the checksum of both the untar and rsync is the same.
  Change-Id: I008c65b1783d581129b4c35f3ff90642fffe29d8
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Testfix] Fix hostname issue with special file cases | kshithijiyer | 2020-09-04 | 1 file, -3/+3
  Problem:
  The code fails if a hostname is given in the glusto-tests config file. This is because the testcase contains conversion logic which converts IP to hostname.
  Solution:
  Add code to check if it's an IP, and only then run the code to convert it.
  Change-Id: I3bb1a566d469a4c32161c91fa610da378d46e77e
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add basic tests for different device files | kshithijiyer | 2020-09-02 | 1 file, -0/+328
  Adding the following testcases for block, character and pipe files:
  Test case 1:
  1. Create distributed volume with 5 sub-volumes, start and mount it.
  2. Create character and block device files.
  3. Check filetype of files from mount point.
  4. Verify that the files are stored only on the bricks which are mentioned in the trusted.glusterfs.pathinfo xattr.
  5. Verify stat output from mount point and bricks.
  Test case 2:
  1. Create distributed volume with 5 sub-volumes, start and mount it.
  2. Create character and block device files.
  3. Check filetype of files from mount point.
  4. Verify that the files are stored on only one brick, which is mentioned in the trusted.glusterfs.pathinfo xattr.
  5. Delete the files.
  6. Verify if the files are deleted from all the bricks.
  Test case 3:
  1. Create distributed volume with 5 sub-volumes, start and mount it.
  2. Create character and block device files.
  3. Check filetype of files from mount point.
  4. Set a custom xattr for the files.
  5. Verify that the xattr for the files is displayed on mount point and bricks.
  6. Modify the custom xattr value and verify that the xattr for the files is displayed on mount point and bricks.
  7. Remove the xattr and verify that the custom xattr is not displayed.
  8. Verify that mount point and brick show the pathinfo xattr properly.
  Test case 4:
  1. Create distributed volume with 5 sub-volumes, start and mount it.
  2. Create a pipe file.
  3. Check filetype of files from mount point.
  4. Verify that the files are stored only on the bricks which are mentioned in the trusted.glusterfs.pathinfo xattr.
  5. Verify stat output from mount point and bricks.
  6. Write data to the fifo file and read data from the fifo file from the other instance of the same client.
  Upstream bug: https://github.com/gluster/glusterfs/issues/1461
  Change-Id: I0e72246ba3d6d20a5de95a95d51271337b6b5a57
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
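  Illustrative creation of the three special file types used across these scenarios; device major/minor numbers and names are made up:

    from glusto.core import Glusto as g

    def create_special_files(client, mountpoint):
        # Block and character device nodes; stat should report the type
        # both from the mount and from the single brick holding each file.
        g.run(client, "mknod %s/blockfile b 7 0" % mountpoint)
        g.run(client, "mknod %s/charfile c 1 3" % mountpoint)
        # A named pipe: a reader on a second mount instance of the same
        # client consumes what a writer feeds in.
        g.run(client, "mkfifo %s/fifofile" % mountpoint)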
* [Testfix] Fix wrong comparison in test_create_file | kshithijiyer | 2020-08-31 | 1 file, -1/+1
  Problem:
  brickdir.hashrange_contains_hash() returns True or False. However, test_create_file checks whether ret == 1.
  Fix:
  Change ret == 1 to ret.
  Change-Id: I53655794f10fc5d778790bdffbe65563907bef6d
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Libfix] Fix python3 getfattr() issues | kshithijiyer | 2020-08-17 | 2 files, -16/+22
  Problem:
  Patch [1], which was sent for issue #24, causes a large number of testcases to fail or get stuck in the latest DHT run.
  Solution:
  Make changes so that the getfattr command sends back its output as text wherever needed.
  Links:
  [1] https://review.gluster.org/#/c/glusto-tests/+/24841/
  Change-Id: I6390e38130b0699ceae652dee8c3b2db2ef3f379
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
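  The gist of the fix, as a standalone illustration (not the actual library change): under python3, command output arrives as bytes unless text mode is requested, so getfattr-based helpers must ask for and return text:

    import subprocess

    out = subprocess.run(
        ["getfattr", "-n", "trusted.glusterfs.dht", "-e", "hex",
         "/bricks/brick0/dir"],
        capture_output=True, text=True)  # text=True -> str, not bytes
    print(out.stdout)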