path: root/tests/functional/disperse
Commit message | Author | Date | Files | Lines
* [Test] IO continuity on brick down in EC volume | Leela Venkaiah G | 2020-07-08 | 1 file | -0/+215
    Test Steps:
    - Create, start and mount an EC volume on two clients
    - From client 1, create multiple files and directories, covering all file types, in one directory
    - Take an arequal checksum of the above data
    - From client 2, create another folder and pump different fops into it
    - Fail and bring up the redundant bricks in a cyclic fashion across all subvols, maintaining a minimum delay between operations
    - In every cycle, create a new directory while a brick is down and wait for heal
    - Validate that 'heal info' on the volume errors out instantly while a brick is down
    - Validate the arequal checksum after bringing a brick offline

    Change-Id: Ied5e0787eef786e5af7ea70191f5521b9d5e34f6
    Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
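    A minimal sketch of the cyclic brick-down/heal loop described above, assuming the usual glusto g.run helper; host, volume and brick names are illustrative:

        from glusto.core import Glusto as g

        MNODE, VOLNAME = "server1", "ec-vol"            # illustrative names
        # one redundant brick per 4+2 subvol; paths are illustrative
        redundant_bricks = ["server2:/bricks/b1", "server3:/bricks/b2"]

        for brick in redundant_bricks:
            host, path = brick.split(":")
            # kill the brick process to simulate a brick failure (real tests
            # look up the brick pid via 'gluster volume status' instead)
            g.run(host, "pkill -f 'glusterfsd.*%s'" % path)
            # create a new directory on the mount while the brick is down
            g.run("client1", "mkdir -p /mnt/ec-vol/dir_%s" % host)
            # bring the brick back and wait until 'heal info' shows no entries
            g.run(MNODE, "gluster volume start %s force" % VOLNAME)
            g.run(MNODE, "gluster volume heal %s info" % VOLNAME)  # poll in real tests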
* [TestFix] Test FD IOs on replace-brick in EC volume | ubansal | 2020-06-26 | 1 file | -4/+41
    Change-Id: Ib39894e9f44c41f5539377c5c124ad45a786cbb3
    Signed-off-by: ubansal <ubansal@redhat.com>
* [Test] Set disperse quorum count to 5 and test EC volume | ubansal | 2020-06-24 | 1 file | -0/+309
    On setting the disperse quorum count to 5, at least 5 bricks must be online for writes on the volume to succeed.

    Steps:
    1. Set disperse quorum count to 5
    2. Write and read IOs
    3. Bring down the 1st brick
    4. Writes and reads are still successful
    5. Bring down the 2nd brick
    6. Write and read again
    7. Writes should fail and reads should succeed
    8. Rebalance should fail as quorum is not met
    9. Reset the volume
    10. Write and read IOs and validate them
    11. Bring down the redundant bricks
    12. Write and read IOs and validate them

    Change-Id: Ib825783f01a394918c9016808cc62f6530fe8c67
    Signed-off-by: ubansal <ubansal@redhat.com>
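    The quorum knob itself is an ordinary volume option; a hedged sketch of steps 1 and 5-7 (host, volume and mount names are illustrative, g.run as elsewhere in glusto-tests):

        from glusto.core import Glusto as g

        MNODE, VOLNAME, CLIENT = "server1", "ec-vol", "client1"   # illustrative

        # at least 5 of the 6 (4+2) bricks must stay up for writes to succeed
        g.run(MNODE, "gluster volume set %s disperse.quorum-count 5" % VOLNAME)

        # ... bring down two bricks, dropping below quorum ...

        # writes are expected to fail now, while reads still succeed
        ret, _, _ = g.run(CLIENT, "dd if=/dev/zero of=/mnt/ec-vol/new_file "
                                  "bs=1M count=1 conv=fsync")
        assert ret != 0, "write unexpectedly succeeded below quorum"
        # read back a file written before quorum was lost
        ret, _, _ = g.run(CLIENT, "cat /mnt/ec-vol/existing_file > /dev/null")
        assert ret == 0, "read failed although quorum only gates writes"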
* [Test] Set disperse quorum count to 6 and test EC volume | ubansal | 2020-06-22 | 1 file | -0/+286
    On setting the disperse quorum count to 6, all bricks must be online for writes on the volume to succeed.

    Steps:
    1. Set disperse quorum count to 6
    2. Write and read IOs
    3. Bring down 1 brick
    4. Writes should fail and reads should succeed
    5. Write and read again
    6. Writes should fail and reads should succeed
    7. Rebalance should fail as quorum is not met
    8. Reset the volume
    9. Write and read IOs and validate them
    10. Bring down the redundant bricks
    11. Write and read IOs and validate them

    Change-Id: I93d418fd75d75fa3563d23f52fdd5aed71cfe540
    Signed-off-by: ubansal <ubansal@redhat.com>
* [Test] mv and ls operations on EC volumes | Bala Konda Reddy M | 2020-06-19 | 1 file | -0/+155
    Test Steps:
    1. Create a volume and mount it on 3 clients: c1 (client1), c2 (client2) and c3 (client3)
    2. On c1, mkdir /c1/dir
    3. On c2, create 4000 files on the mount point, i.e. "/"
    4. After step 3, create the next 4000 files on c2 on the mount point, i.e. "/"
    5. On c1, create 10000 files on /dir/
    6. On c3, start moving the 4000 files created in step 3 from the mount point to /dir/
    7. On c3, start ls in a loop for 20 iterations

    Note: Upload scripts were used in setUpClass, as one more test is to be added to the same file.

    Change-Id: Ibab74433cbec4d6a4f9b494f257b3e517b8fbfbc
    Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Test] Validate EC eager-lock behavior and performance | nchilaka | 2020-06-19 | 1 file | -0/+264
    Description: This script tests the disperse (EC) eager-lock default values and the performance impact on lookups with the eager-lock and other-eager-lock default values.

    Change-Id: Ia083d0d00f99a42865fb6f06eda75ecb18ff474f
    Signed-off-by: nchilaka <nchilaka@redhat.com>
* [Test] Add TC to check eager lock CLI | kshithijiyer | 2020-06-19 | 1 file | -0/+71
    Testcase Steps:
    1. Create an EC volume
    2. Set the eager lock option by turning on disperse.eager-lock, using different inputs:
       - Try non-boolean values (must fail)
       - Try boolean values

    Change-Id: Iec875ce9fb4c8f7c68b012ede98bd94b82d04d7e
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
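    A sketch of the validation loop; disperse.eager-lock only accepts boolean strings, so the CLI should reject anything else (names are illustrative):

        from glusto.core import Glusto as g

        MNODE, VOLNAME = "server1", "ec-vol"   # illustrative names

        # non-boolean inputs must be rejected by 'volume set'
        for bad in ("10", "maybe", "-1"):
            ret, _, _ = g.run(MNODE, "gluster volume set %s disperse.eager-lock %s"
                              % (VOLNAME, bad))
            assert ret != 0, "non-boolean value '%s' was accepted" % bad

        # boolean inputs must be accepted
        for good in ("on", "off"):
            ret, _, _ = g.run(MNODE, "gluster volume set %s disperse.eager-lock %s"
                              % (VOLNAME, good))
            assert ret == 0, "boolean value '%s' was rejected" % good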
* [TestFix] Remove 'replicated' from volume type | Leela Venkaiah G | 2020-06-17 | 1 file | -1/+1
    - Test is designed to run on EC volumes only

    Change-Id: Ice6a77422695ebabbec6b9cfd910e453e5b2c81a
    Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Test] Checks all types of heal on EC volume | ubansal | 2020-06-17 | 1 file | -0/+285
    Steps:
    1. Create a volume and mount it
    2. Create a directory, dir1, and run different types of IOs
    3. Create a directory, dir2
    4. Bring down the redundant bricks
    5. Write IOs to directory dir2
    6. Create a directory, dir3, and run IOs (read, write, amend)
    7. Bring up the bricks
    8. Monitor heal
    9. Check the data integrity of dir1

    Change-Id: I9a7e366084bb46dcfc769b1d98b89b303fc16150
    Signed-off-by: ubansal <ubansal@redhat.com>
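    Steps 8 and 9 typically lean on the repo's own helpers; a sketch assuming the usual glustolibs signatures (mnode, volname and mounts come from the test class):

        from glustolibs.gluster.heal_libs import monitor_heal_completion
        from glustolibs.io.utils import collect_mounts_arequal

        # checksum taken after the IO to dir1 and before bricks go down
        ret, before = collect_mounts_arequal(mounts)

        # ... bricks down, IO to dir2/dir3, bricks back up ...

        # step 8: wait for heal to finish; step 9: recompute and compare
        # (the real test restricts the checksum to dir1, which is untouched)
        assert monitor_heal_completion(mnode, volname), "heal did not complete"
        ret, after = collect_mounts_arequal(mounts)
        assert before == after, "arequal mismatch after heal"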
* [Test] Rename of filetypes on brick down | Leela Venkaiah G | 2020-06-17 | 1 file | -0/+221
    Test Steps:
    1. Create an EC volume
    2. Mount the volume using FUSE on two different clients
    3. Create ~9 files from one of the clients
    4. Create ~9 dirs with ~9 files each from the other client
    5. Create soft-links and hard-links for file{4..6}, file{7..9}
    6. Create soft-links for dir{4..6}
    7. Begin renaming the files, in multiple iterations
    8. Bring down a brick while renaming the files
    9. Bring the brick online after renaming some of the files
    10. Wait for the renaming of the files
    11. Validate no data loss and that the files are renamed successfully

    Change-Id: I6d98c00ff510cb473978377bb44221908555681e
    Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
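    The link fan-out in steps 5-7 is plain client-side file manipulation; a sketch with an illustrative mount path, reading step 5 as soft-links for file{4..6} and hard-links for file{7..9}:

        import os

        MOUNT = "/mnt/ec-vol"   # illustrative mount point

        # soft-links for file{4..6}, hard-links for file{7..9}
        for i in range(4, 7):
            os.symlink("%s/file%d" % (MOUNT, i), "%s/slink%d" % (MOUNT, i))
        for i in range(7, 10):
            os.link("%s/file%d" % (MOUNT, i), "%s/hlink%d" % (MOUNT, i))

        # rename in iterations; the test fails and revives a brick mid-way
        for i in range(1, 10):
            os.rename("%s/file%d" % (MOUNT, i), "%s/renamed%d" % (MOUNT, i))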
* [Test] Check file descriptor in EC | ubansal | 2020-06-16 | 1 file | -0/+174
    Steps:
    1. Open a file descriptor while a brick is down
    2. Write to the file descriptor once the brick has come up, and check whether healing completes

    Change-Id: I721cedf4dc6a420f0c153d4232b046f780da201b
    Signed-off-by: ubansal <ubansal@redhat.com>
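    The essence is that the descriptor is opened while the brick is down and only written to after it returns; a minimal client-side sketch (path illustrative):

        import os

        # brick is DOWN here: open (and hold) a descriptor on the mount
        fd = os.open("/mnt/ec-vol/fd_file", os.O_CREAT | os.O_WRONLY, 0o644)

        # ... bring the brick back online ...

        # write through the already-open fd; the test then waits for heal
        # and verifies all fragments agree
        os.write(fd, b"written after the brick came back\n")
        os.fsync(fd)
        os.close(fd)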
* [Test] Verify heal on EC volume on file appends | Leela Venkaiah G | 2020-06-05 | 1 file | -0/+186
    Test Steps:
    - Create and mount a 4+2 EC volume
    - Start appending to a file from a client
    - Bring down one of the bricks (say b1)
    - Wait ~a minute and bring down another brick (say b2)
    - After ~a minute, bring up the first brick (b1)
    - Check the xattrs 'ec.size' and 'ec.version'
    - The xattrs of the online bricks should match, as an indication that heal can succeed

    Change-Id: I81a5bad4a91dd891fbbc9d93ae3f76610237789e
    Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
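    The xattr check runs against the brick back-end, not the mount; a sketch using getfattr (hosts and fragment paths are illustrative):

        from glusto.core import Glusto as g

        # back-end fragment paths of the appended file on the online bricks
        fragments = [("server3", "/bricks/b3/test_file"),
                     ("server4", "/bricks/b4/test_file")]   # illustrative

        seen = set()
        for host, path in fragments:
            # trusted.ec.version / trusted.ec.size track EC fragment metadata
            _, out, _ = g.run(host, "getfattr -n trusted.ec.version -e hex %s" % path)
            seen.add(out.strip().splitlines()[-1])
        # identical values across the online bricks indicate consistency
        assert len(seen) == 1, "trusted.ec.version differs across bricks"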
* [Test] Validates data deletion on EC volume | ubansal | 2020-06-04 | 1 file | -0/+270
    Test Steps:
    1. Create a volume, start and mount it
    2. Create directories and files
    3. Rename files and change their permissions
    4. Create hardlinks and softlinks and run different types of IOs
    5. Delete all the data
    6. Check that no heals are pending
    7. Check that all bricks are empty

    Change-Id: Ic8f5dad1a44de71688a6b0a2fcfb4a25cef435ba
    Signed-off-by: ubansal <ubansal@redhat.com>
* [Test] Read from hardlink in disperse | Bala Konda Reddy M | 2020-06-03 | 1 file | -0/+111
    Test steps:
    1. Create a volume, start it and mount it on one client.
    2. Enable the metadata-cache (md-cache) options on the volume.
    3. Touch a file and create a hardlink for it.
    4. Read data from the hardlink.
    5. Read data from the actual file.

    Change-Id: Ibf4b8757262707fcfb4d09b4b031ff9dea166570
    Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
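    Step 2 can be done with the metadata-cache option group — assumed available here, as on recent GlusterFS releases; names are illustrative:

        from glusto.core import Glusto as g

        MNODE, VOLNAME, CLIENT = "server1", "ec-vol", "client1"   # illustrative

        # enable the md-cache related options in one shot
        g.run(MNODE, "gluster volume set %s group metadata-cache" % VOLNAME)

        g.run(CLIENT, "touch /mnt/ec-vol/f1")
        g.run(CLIENT, "ln /mnt/ec-vol/f1 /mnt/ec-vol/f1_hard")

        # read via the hardlink first, then via the original name; both
        # must succeed even with cached metadata in play
        ret, _, _ = g.run(CLIENT, "cat /mnt/ec-vol/f1_hard")
        assert ret == 0
        ret, _, _ = g.run(CLIENT, "cat /mnt/ec-vol/f1")
        assert ret == 0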
* [Test] Quota validation on EC volume | ubansal | 2020-05-21 | 1 file | -0/+159
    Change-Id: I3f77dc73044a5bc59a26319c55e8e024e2edf449
    Signed-off-by: ubansal <ubansal@redhat.com>
* [Test][BUG] USS and snapshot in EC volume | ubansal | 2020-05-13 | 1 file | -0/+328
    Steps:
    1. Enable USS; create, list and delete a snapshot
    2. Create a snapshot with the same name and list it

    GitHub issue for the CentOS-CI failure:
    https://github.com/gluster/glusterfs/issues/1203

    Testcase failing due to:
    https://bugzilla.redhat.com/show_bug.cgi?id=1828820

    Change-Id: I829e6b340dfb4963355b445259fcb011b62ba057
    Signed-off-by: ubansal <ubansal@redhat.com>
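    The USS/snapshot flow maps onto a handful of CLI calls; a sketch with illustrative names ('--mode=script' skips the interactive delete confirmation):

        from glusto.core import Glusto as g

        MNODE, VOLNAME = "server1", "ec-vol"   # illustrative names

        g.run(MNODE, "gluster volume set %s features.uss enable" % VOLNAME)
        g.run(MNODE, "gluster snapshot create snap1 %s no-timestamp" % VOLNAME)
        g.run(MNODE, "gluster snapshot list %s" % VOLNAME)

        # step 2: a snapshot with an already-used name must be rejected
        ret, _, _ = g.run(MNODE, "gluster snapshot create snap1 %s no-timestamp"
                          % VOLNAME)
        assert ret != 0, "duplicate snapshot name was accepted"

        g.run(MNODE, "gluster --mode=script snapshot delete snap1")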
* [TestFix] Reduce the workload as rebalance was failing | ubansal | 2020-05-11 | 1 file | -5/+5
    Change-Id: Id94870735b26fbeab2bf448d4f80341c92beb5ba
    Signed-off-by: ubansal <ubansal@redhat.com>
* [Test] functional/disperse: Verify remove brick operation | Sri Vignesh | 2020-04-29 | 1 file | -0/+324
    This test verifies remove-brick operations on a disperse volume.

    Change-Id: If4be3ffc39a8b58e4296d58b288e3843a218c468
    Co-authored-by: Sunil Kumar Acharya <sheggodu@redhat.com>
    Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
    Signed-off-by: Sri Vignesh <sselvan@redhat.com>
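    On a (distributed-)dispersed volume, remove-brick has to take out whole subvolumes — disperse-count bricks at a time; the start/status/commit sequence is the standard one (brick names illustrative):

        from glusto.core import Glusto as g

        MNODE, VOLNAME = "server1", "ec-vol"   # illustrative names
        # one complete 4+2 subvolume of a distributed-dispersed volume
        subvol = " ".join("server%d:/bricks/b%d" % (i, i) for i in range(1, 7))

        g.run(MNODE, "gluster volume remove-brick %s %s start" % (VOLNAME, subvol))
        # poll until the data-migration part of remove-brick completes
        g.run(MNODE, "gluster volume remove-brick %s %s status" % (VOLNAME, subvol))
        g.run(MNODE, "gluster volume remove-brick %s %s commit" % (VOLNAME, subvol))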
* [Test+Lib] No fresh lookups on directory | Bala Konda Reddy M | 2020-04-28 | 1 file | -0/+183
    Test Steps:
    1. Create a volume, set the volume option 'diagnostics.client-log-level' to DEBUG and mount the volume on one client.
    2. Create a directory.
    3. Validate the number of lookups for the directory creation from the log file.
    4. Perform a new lookup of the directory.
    5. No new lookups should have happened on the directory; validate from the log file.
    6. Bring down one subvol of the volume and repeat steps 4 and 5.
    7. Bring down one brick from the online bricks and repeat steps 4 and 5.
    8. Start the volume with force and wait for all processes to be online.

    Change-Id: I162766837fd7e61625238a669c4050c2ec9c8a8b
    Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
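    Counting lookups reduces to grepping the FUSE client log at DEBUG level; the log path and message pattern below are assumptions based on typical client logs:

        from glusto.core import Glusto as g

        CLIENT = "client1"                               # illustrative
        LOG = "/var/log/glusterfs/mnt-ec-vol.log"        # assumed mount-log path

        g.run("server1", "gluster volume set ec-vol diagnostics.client-log-level DEBUG")

        def lookup_count(dirname):
            # count DEBUG-level LOOKUP messages that mention the directory
            _, out, _ = g.run(CLIENT, "grep -c 'LOOKUP.*%s' %s" % (dirname, LOG))
            return int(out.strip() or 0)

        before = lookup_count("dir1")
        g.run(CLIENT, "ls /mnt/ec-vol/dir1")   # lookup of an already-known dir
        # step 5: no fresh LOOKUP should have been sent for the directory
        assert lookup_count("dir1") == before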
* [Test] Check replace brick in EC | ubansal | 2020-04-13 | 1 file | -0/+336
    Steps:
    1. Check replace-brick and data integrity after it
    2. Check replace-brick while IOs are in progress

    Change-Id: Idfc801fde50967924696b2e909633b9ca95ac721
    Signed-off-by: ubansal <ubansal@redhat.com>
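    The replace-brick itself is a one-liner followed by a heal wait; a sketch with illustrative brick names, reusing the glustolibs heal helper as elsewhere in the repo:

        from glusto.core import Glusto as g
        from glustolibs.gluster.heal_libs import monitor_heal_completion

        MNODE, VOLNAME = "server1", "ec-vol"                      # illustrative
        old, new = "server2:/bricks/b1", "server2:/bricks/b1_new"

        g.run(MNODE, "gluster volume replace-brick %s %s %s commit force"
              % (VOLNAME, old, new))
        # heal repopulates the new brick; data integrity is compared afterwards
        assert monitor_heal_completion(MNODE, VOLNAME), "heal did not complete"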
* [Testfix] Fix error in logging | kshithijiyer | 2020-03-24 | 1 file | -1/+1
    Problem: Testcase test_ec_version was failing with the below traceback:

        Traceback (most recent call last):
          File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
            msg = self.format(record)
          File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
            return fmt.format(record)
          File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
            record.message = record.getMessage()
          File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
            msg = msg % self.args
        TypeError: %d format: a number is required, not str
        Logged from file test_ec_version_healing_whenonebrickdown.py, line 233

    This was due to a missing 's' in the log message on line 233.

    Solution: Add the missing 's' in the log message on line 233, as shown below:

        g.log.info('Brick %s is offline successfully', brick_b2_down)

    Also rename the file for more clarity on what the testcase does.

    Change-Id: I626fbe23dfaab0dd6d77c75329664a81a120c638
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Testfix] Fix timeout issue in test_heal_full_node_reboot | kshithijiyer | 2020-02-26 | 1 file | -1/+1
    Problem: The current timeout for reboot in test_heal_full_node_reboot is about 350 seconds, which works with most hardware configurations. However, when the reboot is done on slower systems that take longer to come up, this logic fails, causing this testcase and the subsequent ones to fail.

    Solution: Change the reboot timeout from 350 to 700. This doesn't affect the testcase's performance on good hardware configurations, as the timeout is a maximum and the check exits as soon as the node is up.

    Change-Id: I60d05236e8b08ba7d0fec29657a93f2ae53404d4
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Testfix] Remove python version dependency (Part 2) | kshithijiyer | 2020-02-26 | 7 files | -47/+31
    Please refer to the commit message of patch [1].

    [1] https://review.gluster.org/#/c/glusto-tests/+/24140/

    Change-Id: Ic0b3b1333ac7b1ae02f701943d49510e6d46c259
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* TC to check if EC version and size update when another brick is down | ubansal | 2020-01-08 | 1 file | -0/+258
    Change-Id: I33e75fe773ee26a2d205f5ebd29198968bfe6c59
    Signed-off-by: ubansal <ubansal@redhat.com>
* [Fix] Remove variable script_local_path (Part 1) | kshithijiyer | 2020-01-07 | 4 files | -12/+4
    Removing script_local_path, as both script_local_path and cls.script_upload_path hold the same value, which makes each script slow. This will help decrease the execution time of the test suite.

    PoC:

        $ cat test.py
        a = ("/usr/share/glustolibs/io/scripts/"
             "file_dir_ops.py")
        b = ("/usr/share/glustolibs/io/scripts/"
             "file_dir_ops.py")

        $ time python test.py

        real    0m0.063s
        user    0m0.039s
        sys     0m0.019s

        $ cat test.py
        a = ("/usr/share/glustolibs/io/scripts/"
             "file_dir_ops.py")

        $ time python test.py

        real    0m0.013s
        user    0m0.009s
        sys     0m0.003s

    Code changes needed:

    From:

        script_local_path = ("/usr/share/glustolibs/io/scripts/"
                             "file_dir_ops.py")
        cls.script_upload_path = ("/usr/share/glustolibs/io/scripts/"
                                  "file_dir_ops.py")
        ret = upload_scripts(cls.clients, script_local_path)

    To:

        cls.script_upload_path = ("/usr/share/glustolibs/io/scripts/"
                                  "file_dir_ops.py")
        ret = upload_scripts(cls.clients, cls.script_upload_path)

    Change-Id: I7908b3b418bbc929b7cc3ff81e3675310eecdbeb
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [py2to3] Fix bunch of py3 incompatibilities | Valerii Ponomarov | 2020-01-02 | 3 files | -27/+40
    Change-Id: I2e85670e50e3dab8727295c34aa6ec4f1326c19d
    Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
* [TC] Adding TC to verify there is no data corruption during full heal with reboot | kshithijiyer | 2019-12-17 | 1 file | -0/+250
    This test case verifies that disruption during a full heal doesn't result in data corruption.

    Testcase steps:
    1. Create IO from the mountpoint.
    2. Calculate arequal from the mount.
    3. Delete data from the backend bricks of the EC volume.
    4. Trigger a full heal.
    5. Disable heal.
    6. Enable heal again and do a full heal.
    7. Reboot a node.
    8. Calculate the arequal checksum and compare it.

    Change-Id: I1fac53df30106ff98fdd270b210aca90a53a1ac5
    Co-authored-by: Sunil Kumar Acharya <sheggodu@redhat.com>
    Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
    Signed-off-by: Karan Sandha <ksandha@redhat.com>
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
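    Steps 4-6 map to plain heal CLI calls (volume name illustrative):

        from glusto.core import Glusto as g

        MNODE, VOLNAME = "server1", "ec-vol"   # illustrative names

        g.run(MNODE, "gluster volume heal %s full" % VOLNAME)     # step 4
        g.run(MNODE, "gluster volume heal %s disable" % VOLNAME)  # step 5
        g.run(MNODE, "gluster volume heal %s enable" % VOLNAME)   # step 6
        g.run(MNODE, "gluster volume heal %s full" % VOLNAME)
        # steps 7-8: reboot a node, wait for it to return, compare arequals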
* This test case verifies reset of brick for an EC volume | ubansal | 2019-12-12 | 1 file | -0/+596
    Change-Id: Iafa09988617e2e29942aa6ceb003eac2ddf2b561
    Signed-off-by: ubansal <ubansal@redhat.com>
* [py2to3] Add py3 support in 'tests/functional/disperse' | Valerii Ponomarov | 2019-12-12 | 5 files | -27/+44
    Change-Id: I308f95d16ac18ec80c5c78aac9152d9ae41449bb
    Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
* This TC is to check FOPs in an EC volume with redundant bricks down | ubansal | 2019-12-10 | 1 file | -0/+350
    Change-Id: I97bcbc3f9b75129be833ffa7def1b00cfd32a474
    Signed-off-by: ubansal <ubansal@redhat.com>
* Add 7 negative scenarios for EC volume create | ubansal | 2019-09-11 | 1 file | -1/+73
    Covers permutations with a negative redundancy count, a negative disperse count, and the disperse data count equal to the disperse count.

    Change-Id: I761851c64833256532464f56a9a78e20ceb8a4e1
    Signed-off-by: ubansal <ubansal@redhat.com>
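    For instance, creates along these lines are expected to be refused (brick list illustrative; a valid EC geometry needs redundancy >= 1 and 2 x redundancy < disperse count):

        from glusto.core import Glusto as g

        MNODE = "server1"   # illustrative
        bricks = " ".join("server%d:/bricks/b%d" % (i, i) for i in range(1, 7))

        bad_cmds = (
            # negative redundancy count
            "gluster volume create badvol disperse 6 redundancy -2 %s" % bricks,
            # disperse data count equal to disperse count leaves no redundancy
            "gluster volume create badvol disperse 6 disperse-data 6 %s" % bricks,
        )
        for cmd in bad_cmds:
            ret, _, _ = g.run(MNODE, cmd)
            assert ret != 0, "invalid EC geometry accepted: %s" % cmd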
* [EC] Fix a script issue to change permission of the directory | ubansal | 2019-07-29 | 1 file | -2/+7
    Added a line to change the permission of the directory so that client-side healing happens for the directory as well.

    Change-Id: If4a24f2dbd6c9c85d4cb2944d1ad4795dbc39adb
    Signed-off-by: ubansal <ubansal@redhat.com>
* This TC is to check for file operations in an EC volume | ubansal | 2019-04-08 | 1 file | -0/+337
    Change-Id: Ib1aff1c1bf843dddac5862e55a049d7b47603049
    Signed-off-by: ubansal <ubansal@redhat.com>
* This TC is used to check remove-brick on a dispersed volume | ubansal | 2019-02-07 | 1 file | -0/+166
    Change-Id: Id8cfc0dd31cf4f6f381ec7bb07d4aba06d52b43e
    Signed-off-by: ubansal <ubansal@redhat.com>
* This testcase checks add-brick on a dispersed volume | ubansal | 2019-02-07 | 1 file | -0/+186
    Change-Id: I640f5c554fab791aa5f196415c5204f7cbca83a4
    Signed-off-by: ubansal <ubansal@redhat.com>
* This testcase checks that the EC version of bricks is not updated when non-FOP operations, like chmod, are performed | ubansal | 2019-01-25 | 1 file | -0/+115
    Change-Id: I797253cd4454359bd8f0596c322b2eb71a8a4751
* Changing testcase names to reflect uniqueness | nagpavanchilakam | 2019-01-04 | 2 files | -2/+2
    Change-Id: I35dd3c387c7b6eb3957c5a790af9ff8693403202
* functional/disperse: verifies full heal functionality | Sunil Kumar Acharya | 2018-09-12 | 1 file | -0/+191
    This test case validates that a full heal crawls the whole filesystem instead of just scanning indices/xattrop.

    RHG3-9691

    Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
    Change-Id: I7064b86750c37f8619960574a93b0da470b225ad
    Signed-off-by: Karan Sandha <ksandha@redhat.com>
* Fix spelling mistake across the codebase | Nigel Babu | 2018-08-07 | 1 file | -7/+7
    Change-Id: I46fc2feffe6443af6913785d67bf310838532421
* functional/disperse: Verify IO hang during server side heal | Ashish Pandey | 2018-06-26 | 1 file | -0/+139
    When the IOs are done with client-side heal disabled, they should not hang.

    RHG3-11098

    Change-Id: I2f180dd1ba2f45ae0f302a730a02b90ae77b99ad
    Signed-off-by: Ashish Pandey <aspandey@redhat.com>
* functional/disperse: verify IO hang during client side heal | Sunil Kumar Acharya | 2018-06-25 | 1 file | -0/+150
    When the IOs are done with server-side heal disabled, they should not hang.

    The ec_check_heal_comp function will fail because of bug 1593224 - "Client side heal is not removing dirty flag for some of the files". While this bug has been raised and is being investigated by dev, this patch is doing its job and testing the target functionality.

    RHG3-11097

    Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
    Signed-off-by: Ashish Pandey <aspandey@redhat.com>
    Change-Id: I841285c9b1a747f5800ec8cdd29a099e5fcc08c5
    Signed-off-by: Ashish Pandey <aspandey@redhat.com>
* functional/disperse: Validate EC volume creation | Sunil Kumar Acharya | 2018-06-20 | 1 file | -0/+137
    This test tries to create and validate an EC volume with various combinations of input parameters.

    RHG3-12926

    Change-Id: Icfc15e069d04475ca65b4d7c1dd260434f104cdb
    Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
* tests: Test case to verify brick consumable size | Sunil Kumar Acharya | 2018-06-14 | 1 file | -0/+113
    When bricks of various sizes are used to create a disperse volume, the volume size should be (number of data bricks * smallest brick size).

    RHG3-11124

    Change-Id: Ic791212bf028328996b896ae4896cf860c153264
    Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
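    A worked example of the rule: for a 4+2 disperse volume whose bricks are 10, 10, 10, 10, 5 and 10 GB, the usable size is bounded by the smallest brick:

        # 4 data + 2 redundancy bricks, sizes in GB
        brick_sizes = [10, 10, 10, 10, 5, 10]
        data_bricks = 4

        # usable capacity = data bricks * smallest brick = 4 * 5 = 20 GB
        usable = data_bricks * min(brick_sizes)
        print("expected volume size: %d GB" % usable)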
* Adding basic test case for EC | Sunil Kumar Acharya | 2018-06-06 | 2 files | -0/+73
    Change-Id: I389aaa59db10b40d3ec117b8bb23d76fad29b41b
    Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
    Signed-off-by: Ashish Pandey <aspandey@redhat.com>