path: root/tests/functional/afr
Commit message | Author | Date | Files | Lines
* [Test] Test split-brain with hard link and brick down scenario | Manisha Saini | 2021-02-10 | 1 | -0/+175
    Change-Id: Ib58a45522fc57b5a55207d03b297060b29ab27cf
    Signed-off-by: Manisha Saini <msaini@redhat.com>
* [Testfix] Fix I/O logic and change lookup command | kshithijiyer | 2021-02-02 | 1 | -6/+17
    Problem:
    1. The I/O command has %s instead of %d for an int.
    2. The lookup logic doesn't trigger client heal.

    Fix: Change to %d and add a command which automatically triggers heal.

    Change-Id: Ibc8a1817894ef755b13c3fee21218adce3ed9c77
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
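A minimal sketch of the two fixes above, with an assumed mount path and file count (not the test's actual values): the integer gets a %d format specifier, and a recursive lookup walks the mount so the client triggers heal.
```
# Illustrative only; mount path and file count are assumptions.
mountpoint = "/mnt/afr-test"
file_count = 50  # an int, so the format specifier should be %d, not %s

# Build the I/O command with %d for the integer argument.
io_cmd = ("cd %s; for i in $(seq 1 %d); do "
          "dd if=/dev/urandom of=file_$i bs=1M count=1; done"
          % (mountpoint, file_count))

# A recursive lookup over the mount makes the client access every file,
# which is a common way to trigger client-side heal.
heal_trigger_cmd = "cd %s; find . | xargs stat > /dev/null" % mountpoint

print(io_cmd)
print(heal_trigger_cmd)
```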
* [Test] Add test to test split brain with node reboot | Manisha Saini | 2021-01-25 | 1 | -0/+149
    Change-Id: Ic5258b83b92f503c1ee50368668bd7e1244ac822
    Signed-off-by: Manisha Saini <msaini@redhat.com>
* [Test] Add test to check default granular entry heal | kshithijiyer | 2021-01-25 | 1 | -0/+235
    Test case:
    1. Create a cluster.
    2. Create a volume, start it and mount it.
    3. Check if cluster.granular-entry-heal is ON by default or not.
    4. Check /var/lib/glusterd/<volname>/info for cluster.granular-entry-heal=on.
    5. Check if option granular-entry-heal is present in the volume graph or not.
    6. Kill one or two bricks of the volume depending on volume type.
    7. Create all types of files on the volume like text files, hidden files,
       link files, dirs, char device, block device and so on.
    8. Bring back the killed brick by restarting the volume.
    9. Wait for heal to complete.
    10. Check arequal-checksum of all the bricks and see if it's proper or not.

    Reference BZ: #1890506

    Change-Id: Ic264600e8d1e29c78e40ab7f93709a31ba2b883c
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
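Steps 3 and 4 can be pictured with the sketch below; the volume name and the glusterd info-file path (assumed here to be /var/lib/glusterd/vols/<volname>/info) are assumptions, not taken from the test.
```
import subprocess

volname = "testvol"  # assumed volume name

# Step 3: query the effective value of the option; expected to be "on".
result = subprocess.run(
    ["gluster", "volume", "get", volname, "cluster.granular-entry-heal"],
    capture_output=True, text=True, check=True)
print(result.stdout)

# Step 4: confirm the option is persisted in the volume's info file
# (path is the usual glusterd location, assumed here).
info_path = "/var/lib/glusterd/vols/%s/info" % volname
with open(info_path) as info_file:
    assert "cluster.granular-entry-heal=on" in info_file.read()
```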
* [Testfix] Fix I/O logic and use text for getfattr | kshithijiyer | 2021-01-22 | 1 | -21/+20
    Problem:
    1. The code uses both clients to create files with the same names,
       causing I/O failures on one set of clients.
    2. The code tries to match the hex value of replica.split-brain-status
       to a text string.

    Solution: Fix the I/O logic to use only one client, and use getfattr
    in text encoding for a proper comparison.

    Change-Id: Ia0786a018973a23835cd2fecd57db92aa860ddce
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
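For the getfattr side of the fix, the idea is to read the virtual xattr in text encoding so it can be compared against a readable string; the file path below is illustrative.
```
import subprocess

path = "/mnt/afr-test/file1"  # assumed mount-side path of the file being checked

# Read replica.split-brain-status as text (-e text) rather than hex,
# so the output can be compared with a plain string.
result = subprocess.run(
    ["getfattr", "--absolute-names", "-e", "text",
     "-n", "replica.split-brain-status", path],
    capture_output=True, text=True)
print(result.stdout)
```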
* [Test] Add test to check self heal with expand volume | kshithijiyer | 2021-01-21 | 1 | -0/+221
    Test case:
    1. Create a 2X3 volume.
    2. Mount the volume using FUSE and give 777 permissions to the mount.
    3. Add a new user.
    4. Login as the new user and create 100 files from the new user:
       for i in {1..100};do dd if=/dev/urandom of=$i bs=1024 count=1;done
    5. Kill a brick which is part of the volume.
    6. On the mount, login as root user and create 1000 files:
       for i in {1..1000};do dd if=/dev/urandom of=f$i bs=10M count=1;done
    7. On the mount, login as the new user, and copy existing data to the mount.
    8. Start the volume using force.
    9. While heal is in progress, add-brick and start rebalance.
    10. Wait for rebalance and heal to complete.
    11. Check for MSGID: 108008 errors in rebalance logs.

    Reference BZ: #1821599

    Change-Id: I0782d4b6e44782fd612d4f2ced248c3737132855
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test+Lib] Add tests to check self heal | kshithijiyer | 2021-01-20 | 1 | -0/+600
    Test scenarios added:
    1. Test to check entry self heal.
    2. Test to check meta data self heal.
    3. Test self heal when files are removed and dirs created with the same name.

    Additional libraries added:
    1. group_del(): Deletes groups created
    2. enable_granular_heal(): Enables granular heal
    3. disable_granular_heal(): Disables granular heal

    Change-Id: Iffa9a100fddaecae99c384afe3aaeaf13dd37e0d
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test+Libfix] Add test to add brick followed by remove brick | kshithijiyer | 2021-01-18 | 1 | -0/+170
    Test case:
    1. Create a volume, start it and mount it to a client.
    2. Start I/O on the volume.
    3. Add brick and trigger rebalance, wait for rebalance to complete.
       (The volume which was 1x3 should now be 2x3)
    4. Add brick and trigger rebalance, wait for rebalance to complete.
       (The volume which was 2x3 should now be 3x3)
    5. Remove brick from the volume such that it becomes a 2x3.
    6. Remove brick from the volume such that it becomes a 1x3.
    7. Wait for I/O to complete and check for any input/output errors in
       both the I/O and rebalance logs.

    Additional library fix: Add `return True` at the end of
    is_layout_complete() so it returns True if no issues are found in the
    layout.

    Reference BZ: #1726673

    Change-Id: Ifd0360f948b334bfcf341c1015a731274acdb2bf
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
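A hedged sketch of the volume-reshaping commands the steps revolve around; volume name, hosts and brick paths are placeholders, and the waits/validations between steps are elided.
```
import subprocess


def gluster(*args):
    """Run a gluster CLI command and return its stdout (sketch only)."""
    return subprocess.run(["gluster", *args], capture_output=True,
                          text=True, check=True).stdout


volname = "testvol"                      # placeholder volume name
new_bricks = ["server4:/bricks/brick1",  # placeholder bricks forming one
              "server5:/bricks/brick1",  # new replica set (1x3 -> 2x3)
              "server6:/bricks/brick1"]

# Steps 3/4: expand the volume and rebalance the layout.
gluster("volume", "add-brick", volname, *new_bricks)
gluster("volume", "rebalance", volname, "start")

# Steps 5/6: shrink the volume again; data is migrated off before commit.
gluster("volume", "remove-brick", volname, *new_bricks, "start")
# ... wait until remove-brick status shows "completed", then:
gluster("volume", "remove-brick", volname, *new_bricks, "commit")
```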
* [Testfix] Fix setup bug in test_no_glustershd_with_distribute | kshithijiyer | 2021-01-13 | 1 | -2/+2
    Problem:
    In test_no_glustershd_with_distribute, we are trying to set up all
    volume types at once, which fails during setup in CI as we don't have
    sufficient bricks.

    Solution:
    Enable brick sharing in setup_volume() by setting multi_vol to True.

    Change-Id: I2129e3059fd156138d0a874d6aa6904f3cb0cb9b
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add tests to check heal with hard and soft links | kshithijiyer | 2021-01-04 | 1 | -0/+405
    Test Scenarios:
    ---------------
    1. Test heal of hard links through default heal
    2. Test heal of soft links through default heal

    CentOS-CI failing due to issue:
    https://github.com/gluster/glusterfs/issues/1954

    Change-Id: I9fd7695de6271581fed7f38ba41bda8634ee0f28
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Check directory time stamps are healed during entry heal | Ravishankar N | 2020-12-23 | 1 | -0/+160
    After an entry heal is complete, verify that the atime/mtime/ctime of
    the parent directory is the same on all bricks of the replica. The
    test is run with features.ctime enabled as well as disabled.

    Change-Id: Iefb6a8b50bd31cf5c5aae72e4030239cc0f1a43d
    Reference: BZ# 1572163
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
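A simplified, single-host sketch of the timestamp comparison; the real test stats the directory on each brick's server, whereas the placeholder brick paths below are assumed to be reachable locally.
```
import os

# Placeholder backend paths of the same directory on each brick of the replica.
brick_dirs = ["/bricks/brick1/dir1", "/bricks/brick2/dir1", "/bricks/brick3/dir1"]

stats = [os.stat(path) for path in brick_dirs]
reference = stats[0]
for stat in stats[1:]:
    # After entry heal, atime/mtime/ctime should match the first brick.
    assert (stat.st_atime, stat.st_mtime, stat.st_ctime) == \
           (reference.st_atime, reference.st_mtime, reference.st_ctime), \
        "directory timestamps differ across bricks after entry heal"
```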
* check that heal info does not hang | Ravishankar N | 2020-11-12 | 1 | -0/+162
    Check that when there are pending heals and healing and I/O are going
    on, heal info completes successfully.

    Change-Id: I7b00c5b6446d6ec722c1c48a50e5293272df0fdf
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* [Test] - Check self heal with data-self-heal-algorithm set to diff | karthik-us | 2020-10-30 | 1 | -0/+162
    Steps:
    1. Create a replicated/distributed-replicate volume and mount it
    2. Set data/metadata/entry-self-heal to off and
       data-self-heal-algorithm to diff
    3. Create few files inside a directory with some data
    4. Check arequal of the subvol; all the bricks in the subvol should
       have the same checksum
    5. Bring down a brick from the subvol and validate it is offline
    6. Modify the data of existing files under the directory
    7. Bring the brick back online and wait for heal to complete
    8. Check arequal of the subvol; all the bricks in the same subvol
       should have the same checksum

    Change-Id: I568a932c6e1db4a9084c01556c5fcca7c8e24a49
    Signed-off-by: karthik-us <ksubrahm@redhat.com>
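The options in step 2 map onto standard `gluster volume set` calls; a sketch with a placeholder volume name, using the fully qualified option names.
```
import subprocess

volname = "testvol"  # placeholder

# Step 2: turn off client-side heals and select the "diff" algorithm,
# so only changed blocks are healed rather than whole files.
options = {
    "cluster.data-self-heal": "off",
    "cluster.metadata-self-heal": "off",
    "cluster.entry-self-heal": "off",
    "cluster.data-self-heal-algorithm": "diff",
}
for option, value in options.items():
    subprocess.run(["gluster", "volume", "set", volname, option, value],
                   check=True)
```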
* [Test] Self-heal, add-brick on replicated volume types | Bala Konda Reddy M | 2020-10-23 | 1 | -0/+199
    1. Create a replicated/distributed-replicate volume and mount it
    2. Start IO from the clients
    3. Bring down a brick from the subvol and validate it is offline
    4. Bring the brick back online and wait for heal to complete
    5. Once the heal is completed, expand the volume
    6. Trigger rebalance and wait for rebalance to complete
    7. Validate IO, no errors during the steps performed from step 2
    8. Check arequal of the subvol; all the bricks in the same subvol
       should have the same checksum

    Note: This test is only for replicated volume types.

    Change-Id: I2286e75cbee4f22a0ed14d6c320a4496dc3c3905
    Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Test] multiple clients dd on same-file | Arthy Loganathan | 2020-10-01 | 1 | -9/+14
    Change-Id: I465fefeae36a5b700009bb1d6a3c6639ffafd6bd
    Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
* [Test] Resolve and validate gfid split files | Leela Venkaiah G | 2020-09-08 | 1 | -228/+200
    Three Scenarios:
    - Simulate gfid split brain files under a directory
    - Resolve gfid splits using `source-brick`, `bigger-file` and
      `latest-mtime` methods
    - Validate all the files are healed and data is consistent

    Change-Id: I8b143f341c0db2f32086ecb6878cbfe3bdb247ce
    Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
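The three resolution methods correspond to `gluster volume heal ... split-brain` sub-commands; the sketch below uses placeholder volume, brick and file names, one file per method.
```
import subprocess

volname = "testvol"                      # placeholder
source_brick = "server1:/bricks/brick1"  # placeholder brick treated as good

# Pick the bigger copy of one file as the heal source.
subprocess.run(["gluster", "volume", "heal", volname, "split-brain",
                "bigger-file", "/dir1/file1"], check=True)

# Pick the copy with the latest mtime as the source for another file.
subprocess.run(["gluster", "volume", "heal", volname, "split-brain",
                "latest-mtime", "/dir1/file2"], check=True)

# Use one brick as the source for a given file.
subprocess.run(["gluster", "volume", "heal", volname, "split-brain",
                "source-brick", source_brick, "/dir1/file3"], check=True)
```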
* [Test] Validate data, metadata, entry split-brain | Leela Venkaiah G | 2020-09-01 | 1 | -0/+264
    Steps:
    - Create and mount a replicated volume and disable quorum and the
      self-heal daemon
    - Create ~10 files from the mount point and simulate data, metadata
      split-brain for 2 files each
    - Create a dir with some files and simulate entry/gfid split brain
    - Validate volume successfully recognizing split-brain
    - Validate a lookup on split-brain files fails with EIO error on mount
    - Validate `heal info` and `heal info split-brain` commands show only
      the files that are in split-brain
    - Validate new files and dirs can be created from the mount

    Change-Id: I8caeb284c53304a74473815ae5181213c710b085
    Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [TestFix] Fix py2/3 compatibility in `str.rsplit` | Leela Venkaiah G | 2020-08-31 | 1 | -1/+1
    - `str.rsplit` doesn't accept named args in py2
    - Removed the named arg to make it compatible with both versions

    Change-Id: Iba287ef4c98ebcbafe55f2166c99aef0c20ed9aa
    Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
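For reference, the incompatibility this fix works around (a generic example, not the test's actual call site).
```
value = "server1:/bricks/brick1/subdir"

# Works on both py2 and py3: positional maxsplit argument.
head, tail = value.rsplit("/", 1)

# py3-only: on py2 the keyword form raises
# "TypeError: rsplit() takes no keyword arguments".
# head, tail = value.rsplit("/", maxsplit=1)

print(head, tail)
```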
* [Test] Directory creation with subvol down | Bala Konda Reddy M | 2020-08-21 | 1 | -0/+194
    Test Steps:
    1. Create a distributed-replicated(3X3)/distributed-arbiter(3X(2+1))
       and mount it on one client
    2. Kill 3 bricks corresponding to the 1st subvol
    3. Unmount and remount the volume on the same client
    4. Create deep dir from mount point 'dir1/subdir1/deepdir1'
    5. Create files under dir1/subdir1/deepdir1; touch <filename>
    6. Now bring all sub-vols up by volume start force
    7. Validate backend bricks for dir creation; the subvol which is
       offline will have no dirs created, whereas other subvols will have
       dirs created from step 4
    8. Trigger heal from client by '#find . | xargs stat'
    9. Verify that the directory entries are created on all back-end bricks
    10. Create new dir (dir2) on location dir1/subdir1/deepdir1
    11. Trigger rebalance and wait for the completion
    12. Check backend bricks for all entries of dirs

    Change-Id: I4d8f39e69c84c28ec238ea73935cd7ca0288bffc
    Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Test] Test conservative merge between two bricks | Bala Konda Reddy M | 2020-08-21 | 1 | -0/+175
    Test Steps:
    1) Create a 1x3 volume and fuse mount the volume
    2) On the mount, create a dir dir1
    3) Pkill glusterfsd on node n1 (b2 on node2 and b3 on node3 up)
    4) touch f{1..10} on the mountpoint
    5) b2 and b3 xattrs would be blaming b1 as files are created while b1
       is down
    6) Reset the b3 xattrs to NOT blame b1 by using setfattr
    7) Now pkill glusterfsd of b2 on node2
    8) Restart glusterd on node1 to bring up b1
    9) Now bricks: b1 online, b2 down, b3 online
    10) touch x{1..10} under dir1 itself
    11) Again reset the xattr on node3 of b3 so that it doesn't blame b2,
        as done for b1 in step 6
    12) Restart glusterd on node2 hosting b2 to bring all bricks online
    13) Check heal info, split-brain and arequal for the bricks

    Change-Id: Ieea875dd7243c7f8d2c6959aebde220508134d7a
    Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
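Steps 6 and 11 rely on clearing the AFR changelog xattr so one brick stops blaming another; a hedged sketch with placeholder volume, brick path and client index (the index in trusted.afr.<volname>-client-<N> depends on which brick is being un-blamed).
```
import subprocess

# Placeholders: backend path of dir1 on brick b3, and the AFR changelog
# xattr recording pending operations against b1 (client index 0 is an
# assumption; it depends on the brick order in the volume).
dir_on_b3 = "/bricks/brick3/dir1"
xattr_name = "trusted.afr.testvol-client-0"

# Zeroing the 12-byte changelog (data/metadata/entry counters) makes b3
# stop blaming b1 for entries created while b1 was down.
subprocess.run(["setfattr", "-n", xattr_name,
                "-v", "0x000000000000000000000000", dir_on_b3], check=True)
```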
* [Test] Validate AFR, arbiter self-heal with IO | Leela Venkaiah G | 2020-08-19 | 1 | -0/+306
    - Validate `heal info` returns before timeout with IO
    - Validate `heal info` returns before timeout with IO and brick down
    - Validate data heal on file append in AFR, arbiter
    - Validate entry heal on file append in AFR, arbiter

    Change-Id: I803b931cd82d97b5c20bd23cd5670cb9e6f04176
    Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Test] Reset brick and trigger heal full | Bala Konda Reddy M | 2020-08-13 | 1 | -0/+157
    1. Create a volume and create files/dirs from the mount point
    2. With IO in progress, execute reset-brick start
    3. Now format the disk from the back-end, using rm -rf <brick path>
    4. Execute reset-brick commit and check that the brick is online
    5. Issue volume heal using "gluster vol heal <volname> full"
    6. Check arequal for all bricks to verify that all backend bricks,
       including the reset brick, have the same data

    Change-Id: I06b93d79200decb25f863e7a3f72fc8e8b1c4ab4
    Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
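The reset-brick flow in steps 2 to 5 maps onto the CLI calls below; host, brick path and volume name are placeholders.
```
import subprocess

volname = "testvol"                 # placeholder
brick = "server1:/bricks/brick1"    # placeholder host:brick being reset

# Step 2: take the brick out of the volume while keeping its definition.
subprocess.run(["gluster", "volume", "reset-brick", volname, brick,
                "start"], check=True)

# Step 3 happens on the brick host: rm -rf the brick directory contents.

# Step 4: bring the (now empty) brick back in place.
subprocess.run(["gluster", "volume", "reset-brick", volname, brick, brick,
                "commit", "force"], check=True)

# Step 5: trigger a full heal so the emptied brick is repopulated.
subprocess.run(["gluster", "volume", "heal", volname, "full"], check=True)
```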
* [Test] Convert arb to x3 repl volume with IO | Leela Venkaiah G | 2020-07-30 | 1 | -0/+221
    Steps:
    - Create, start and mount an arbiter volume in two clients
    - Create two dirs, fill IO in the first dir and take note of arequal
    - Start a continuous IO from the second directory
    - Convert arbiter to x2 replicated volume (remove brick)
    - Convert x2 replicated to x3 replicated volume (add brick)
    - Wait for ~5 min for the vol file to be updated on all clients
    - Enable client side heal options and issue volume heal
    - Validate heal completes with no errors and arequal of the first dir
      matches against the initial checksum

    Change-Id: I291acf892b72bc8a05e76d0cffde44d517d05f06
    Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Test] Test absence of `healed` and `heal-failed` options | Leela Venkaiah G | 2020-07-30 | 1 | -0/+104
    Steps:
    - Create and mount a replicated volume
    - Kill one of the bricks and write IO from the mount point
    - Verify `gluster volume heal <volname> info healed` and
      `gluster volume heal <volname> info heal-failed` commands result in
      an error
    - Validate `gluster volume help` doesn't list `healed` and
      `heal-failed` commands

    Change-Id: Ie1c3db12cdfbd54914e61f812cbdac382c9c723e
    Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [TestFix] AFR Data heal by default and explicit self-heal command | nchilaka | 2020-07-27 | 1 | -126/+29
    - Remove unnecessary disablement of client side heal options
    - Check if client side heal options are disabled by default
    - Test data heal by the default method
    - Explicit data heal by calling the self heal command

    Change-Id: I3be9001fc1cf124a4cf5a290cee985e166c0b685
    Signed-off-by: nchilaka <nchilaka@redhat.com>
* [TestFix] Remove hot and cold bricks list - Part2 | Bala Konda Reddy M | 2020-07-08 | 2 | -16/+4
    For the non-tiered volume types, a few test cases were collecting both
    hot_tier_bricks and cold_tier_bricks while bringing bricks offline,
    which is not needed. Also removing the tier kwarg in one of the tests.
    Removing the hot and cold tiered bricks and collecting only the bricks
    of the particular volume as mentioned below.

    Removing the below section
    ```
    bricks_to_bring_offline_dict = (select_bricks_to_bring_offline(
        self.mnode, self.volname))
    bricks_to_bring_offline = list(filter(None, (
        bricks_to_bring_offline_dict['hot_tier_bricks'] +
        bricks_to_bring_offline_dict['cold_tier_bricks'] +
        bricks_to_bring_offline_dict['volume_bricks'])))
    ```

    Modifying as below for bringing bricks offline.
    ```
    bricks_to_bring_offline = bricks_to_bring_offline_dict['volume_bricks']
    ```

    Change-Id: I4f59343b380ced498516794a8cc7c968390a8459
    Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [TestFix] Remove hot and cold bricks list on regular volumes | Bala Konda Reddy M | 2020-06-24 | 3 | -12/+3
    For the non-tiered volume types, a few test cases were collecting both
    hot_tier_bricks and cold_tier_bricks while bringing bricks offline,
    which is not needed. Removing the hot and cold tiered bricks and
    collecting only the bricks of the particular volume as mentioned below.

    Removing the below section
    ```
    bricks_to_bring_offline_dict = (select_bricks_to_bring_offline(
        self.mnode, self.volname))
    bricks_to_bring_offline = list(filter(None, (
        bricks_to_bring_offline_dict['hot_tier_bricks'] +
        bricks_to_bring_offline_dict['cold_tier_bricks'] +
        bricks_to_bring_offline_dict['volume_bricks'])))
    ```

    Modifying as below for bringing bricks offline.
    ```
    bricks_to_bring_offline = bricks_to_bring_offline_dict['volume_bricks']
    ```

    Change-Id: Icb1dc4a79cf311b686d839f2c9390371e42142f7
    Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Test] Git clone on glusterfs volume | Bala Konda Reddy M | 2020-06-16 | 1 | -0/+80
    Test Steps:
    1. Create a volume and mount it on one client
    2. git clone the glusterfs repo on the glusterfs volume
    3. Set the performance options to off
    4. Repeat step 2 on a different directory

    Change-Id: Iaecce7cd14ecf84058c75847a037c6589d3833e9
    Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [TestFix] Correct the library name for a function | sayaleeraut | 2020-06-08 | 1 | -3/+3
    The function "set_volume_options()" is a part of the "volume_ops" lib,
    but was wrongly imported from the "volume_libs" lib earlier. Corrected
    the import statement.

    Change-Id: I7295684e7a564468ac42bbe1f00643ee150f769d
    Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Testfix] Move teardownclass to teardown | kshithijiyer | 2020-06-08 | 4 | -43/+43
    Problem:
    A pattern was observed where testcases which were passing were
    throwing errors in teardownclass. This was because docleanup was
    running before teardownclass, and when teardownclass was executed it
    failed as the setup was already cleaned.

    Solution:
    Change the code to use teardown instead of teardownclass, and move
    the setup volume code from setupclass to setup.

    Change-Id: I37c6fde1f592224c114148f0ed7215b2494b4502
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
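A stripped-down sketch of the pattern this fix moves to; the class and helper names are illustrative and not the framework's actual base class or libraries.
```
import unittest


class ExampleAfrTest(unittest.TestCase):
    """Illustrative skeleton: setup/cleanup now run per test method."""

    def setUp(self):
        # Previously done once in setUpClass; doing it here gives every
        # test method a freshly set up volume.
        super().setUp()
        self.volume_ready = self._setup_volume()   # hypothetical helper

    def tearDown(self):
        # Runs after every test method, before any framework-level
        # cleanup, so the volume still exists when it is torn down.
        self._cleanup_volume()                     # hypothetical helper
        super().tearDown()

    def _setup_volume(self):
        return True

    def _cleanup_volume(self):
        pass

    def test_example(self):
        self.assertTrue(self.volume_ready)


if __name__ == "__main__":
    unittest.main()
```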
* [BUG][Test] Add tc to check heal with only shd running | Pranav | 2020-05-11 | 1 | -0/+245
    Failing in CentOS-CI due to this bug:
    https://bugzilla.redhat.com/show_bug.cgi?id=1768380

    Description: Test script which verifies that server side healing must
    happen only if the heal daemon is running on the node where the
    source brick resides.
    * Create and start the Replicate volume
    * Check the glustershd processes - Only 1 glustershd should be listed
    * Bring down the bricks without affecting the cluster
    * Create files on the volume
    * Kill the glustershd on the node where the bricks are running
    * Bring up the bricks which were killed in the previous steps
    * Check the heal info - heal info must show pending heal info; heal
      shouldn't happen since glustershd is down on the source node
    * Issue heal
    * Trigger client side heal
    * Heal should complete successfully

    Change-Id: I1fba01f980a520b607c38d8f3371bcfe086f7783
    Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>, Milind Waykole <milindwaykole96@gmail.com>
    Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
    Signed-off-by: Milind Waykole <milindwaykole96@gmail.com>
    Signed-off-by: Pranav <prprakas@redhat.com>
* [Test] Add tc to check volume metadata-self-heal | kshithijiyer | 2020-05-11 | 1 | -0/+606
    Testcase steps:
    1. Turn off the self heal daemon option
    2. Create IO
    3. Calculate arequal of the bricks and mount point
    4. Bring down the "brick1" process
    5. Change the permissions of the directories and files
    6. Change the ownership of the directories and files
    7. Change the group of the directories and files
    8. Bring back the brick "brick1" process
    9. Execute "find . | xargs stat" from the mount point to trigger heal
    10. Verify the changes in permissions are not self healed on brick1
    11. Verify the changes in permissions on all bricks but brick1
    12. Verify the changes in ownership are not self healed on brick1
    13. Verify the changes in ownership on all the bricks but brick1
    14. Verify the changes in group are not successfully self-healed on brick1
    15. Verify the changes in group on all the bricks but brick1
    16. Turn on the option metadata-self-heal
    17. Execute "find . | xargs md5sum" from the mount point to trigger heal
    18. Wait for heal to complete
    19. Verify the changes in permissions are self-healed on brick1
    20. Verify the changes in ownership are successfully self-healed on brick1
    21. Verify the changes in group are successfully self-healed on brick1
    22. Calculate arequal check on all the bricks and mount point

    Change-Id: Ia7fb1b272c3c6bf85093690819b68bd83efefe14
    Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add tc to check impact of replace brick on shd | kshithijiyer | 2020-04-29 | 1 | -0/+186
    Description: Test script to verify the glustershd server vol file has
    only entries for replicate volumes.

    Testcase steps:
    1. Create multiple volumes and start all volumes
    2. Check the glustershd processes (Only 1 glustershd should be listed)
    3. Do replace brick on the replicate volume
    4. Confirm that the brick is replaced
    5. Check the glustershd processes (Only 1 glustershd should be listed
       and pid should be different)
    6. glustershd server vol should be updated with new bricks

    Change-Id: I09245c8ff6a2b31a038749643af294aa8b81a51a
    Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>, Vitalii Koriakov <vkoriako@redhat.com>
    Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test][BUG] Add testcase to replace-brick and test self-heal of files | kshithijiyer | 2020-03-11 | 1 | -0/+263
    Testcase steps:
    1. Create directory on mount point and write files/dirs
    2. Create another set of files (1K files)
    3. While creation of files/dirs is in progress, kill one brick
    4. Remove the contents of the killed brick (simulating disk replacement)
    5. When the IO's are still in progress, restart glusterd on the nodes
       where we simulated disk replacement to bring back bricks online
    6. Start volume heal
    7. Wait for IO's to complete
    8. Verify whether the files are self-healed
    9. Calculate arequals of the mount point and all the bricks

    CentOS-CI failure due to the following bug:
    https://bugzilla.redhat.com/show_bug.cgi?id=1807384

    Change-Id: I9e9f58a16a7950fd7d6493cbb5c4f5483892851e
    Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test for manual heal full of file using cmd | kshithijiyer | 2020-03-06 | 1 | -0/+182
    Testcase steps:
    1. Create a single brick volume
    2. Add some files and directories
    3. Get arequal from mountpoint
    4. Add-brick such that this brick makes the volume a replica vol 1x3
    5. Start heal full
    6. Make sure heal is completed
    7. Get arequals from all bricks and compare with arequal from mountpoint

    Change-Id: I4ef140b326b3d9edcbd5b1f0b7d9c43f38ccfe66
    Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test][Bug] Test to check directory gfid mismatch per BZ#1661258 | kshithijiyer | 2020-03-02 | 1 | -0/+147
    Testcase steps:
    1. Create a volume and mount it.
    2. Create a directory on the mount and check whether all the bricks
       have the same gfid.
    3. Now delete the gfid attr from all but one backend brick.
    4. Do a lookup from the mount.
    5. Check whether all the bricks have the same gfid assigned.

    Failing in CentOS-CI due to the following bug:
    https://bugzilla.redhat.com/show_bug.cgi?id=1696075

    Change-Id: I4eebc247b15c488cfa24599e0afec2fa5671656f
    Co-authored-by: Anees Patel <anepatel@redhat.com>
    Signed-off-by: Anees Patel <anepatel@redhat.com>
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
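The gfid comparison in steps 2 and 5 is typically done by reading trusted.gfid from the brick backends; a sketch with placeholder brick paths.
```
import subprocess

# Placeholder backend paths of the same directory on each brick.
brick_dirs = ["/bricks/brick1/dir1", "/bricks/brick2/dir1", "/bricks/brick3/dir1"]

gfids = []
for path in brick_dirs:
    out = subprocess.run(
        ["getfattr", "--absolute-names", "-e", "hex",
         "-n", "trusted.gfid", path],
        capture_output=True, text=True, check=True).stdout
    gfids.append(out)
    print(out)

# All bricks should report the same trusted.gfid value for the directory.
```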
* [Testfix] Remove python version dependency (Part 1) | kshithijiyer | 2020-02-26 | 25 | -247/+202
    The sys library was added to all the testcases to fetch
    `sys.version_info.major`, which returns the version of python with
    which glusto and glusto-tests are installed, and to run the I/O script
    (file_dir_ops.py) with that version of python. This creates a problem:
    older jobs running on older platforms won't run the way they used to.
    For example, if the older platform has python2 by default and the
    tests run from a slave which has python3, they fail, and vice versa.

    The problem is introduced by the code below:
    ```
    cmd = ("/usr/bin/env python%d %s create_deep_dirs_with_files "
           "--dirname-start-num 10 --dir-depth 1 --dir-length 1 "
           "--max-num-of-dirs 1 --num-of-files 5 %s" % (
               sys.version_info.major, self.script_upload_path,
               self.mounts[0].mountpoint))
    ```

    The solution is to change `python%d` to `python`, which enables the
    code to run with whatever version of python is available on the
    client. This lets us run any version of the framework on both older
    and newer platforms.

    Change-Id: I7c8200a7578f03c482f0c6a91832b8c0fdb33e77
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [testfix] Add steps to stabilize afr testcases | Sri Vignesh | 2020-02-18 | 6 | -49/+62
    Added steps to reset volume and resolved teardown class cleanup
    failures.

    Change-Id: I06b0ed8810c9b064fd2ee7c0bfd261928d8c07db
* [Test] Test entry transaction crash consistency with fops | kshithijiyer | 2020-02-06 | 1 | -0/+384
    Testcase 1: Test entry transaction crash consistency : create
    - Create IO
    - Calculate arequal before creating snapshot
    - Create snapshot
    - Modify the data
    - Stop the volume
    - Restore snapshot
    - Start the volume
    - Get arequal after restoring snapshot
    - Compare arequals

    Testcase 2: Test entry transaction crash consistency : delete
    - Create IO of 50 files
    - Delete 20 files
    - Calculate arequal before creating snapshot
    - Create snapshot
    - Delete 20 files more
    - Stop the volume
    - Restore snapshot
    - Start the volume
    - Get arequal after restoring snapshot
    - Compare arequals

    Testcase 3: Test entry transaction crash consistency : rename
    - Create IO of 50 files
    - Rename 20 files
    - Calculate arequal before creating snapshot
    - Create snapshot
    - Rename 20 files more
    - Stop the volume
    - Restore snapshot
    - Start the volume
    - Get arequal after restoring snapshot
    - Compare arequals

    Change-Id: I7cb9182f91ae50c47d5ae9b3f8031413b2bbfbbf
    Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Creating/Deleting sparse files should be consistent to reflect available space | kshithijiyer | 2020-01-24 | 1 | -0/+164
    Testcase:
    - note the current available space on the mount
    - create 1M file on the mount
    - note the current available space on the mountpoint and compare with
      space before creation
    - remove the file
    - note the current available space on the mountpoint and compare with
      space before creation

    Change-Id: Iff017039d1888d03f067ee2a9f26aff327bd4059
    Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
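A minimal sketch of the space accounting described above, using os.statvfs on an assumed mount path; the file creation and removal steps are left as comments.
```
import os

mountpoint = "/mnt/afr-test"  # assumed mount path


def available_bytes(path):
    """Free space visible to unprivileged users on the given mount."""
    stat = os.statvfs(path)
    return stat.f_bavail * stat.f_frsize


before = available_bytes(mountpoint)

# ... create a 1M file on the mount here ...

after_create = available_bytes(mountpoint)
assert after_create <= before

# ... remove the file again ...

after_remove = available_bytes(mountpoint)
# Available space should return to (roughly) its original value.
```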
* [Fix] Removing additional underscore (_) from testcase name | kshithijiyer | 2020-01-16 | 1 | -2/+2
    Change-Id: Idcc40442869cb3e44873625887409592d9e0710d
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Fix] Remove variable script_local_path/script_abs_path (Part 4) | kshithijiyer | 2020-01-07 | 17 | -50/+25
    Please refer to the commit message of the below patch:
    https://review.gluster.org/#/c/glusto-tests/+/23902/

    Change-Id: I1df0324dac2da5aad4064cc72ef77dcb5bf67e4f
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Fix] Remove variable script_local_path (Part 3) | kshithijiyer | 2020-01-07 | 11 | -33/+11
    Please refer to the commit message of the below patch:
    https://review.gluster.org/#/c/glusto-tests/+/23902/

    Change-Id: Icf32bb20b7eaf2eabb07b59be813997a28872565
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [py2to3] Add py3 support for tests in 'tests/functional/afr' | Valerii Ponomarov | 2019-12-18 | 31 | -284/+408
    Change-Id: Ic14be81f1cd42c470d2bb5c15505fc1bc168a393
    Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
* [py2to3] Add py3 support for tests in 'tests/functional/afr/heal' | Valerii Ponomarov | 2019-12-12 | 7 | -42/+60
    Change-Id: Id4df838565ec3f9ad765cf223bb5115e43dac1c5
    Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
* [py2to3] Replace usage of ".iteritems()" attr with ".items()" | Valerii Ponomarov | 2019-11-21 | 1 | -2/+2
    The dict attribute called "iteritems()" is not supported in py3. So,
    replace its usage with the similar attr called "items()".

    Change-Id: I130b7f67f0a2d5da5ed6c3d792f5ff024ba148f4
    Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
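The change is mechanical; for reference, a generic example (not the test's actual dict).
```
volume_options = {"cluster.self-heal-daemon": "on",
                  "cluster.entry-self-heal": "off"}

# py2-only: volume_options.iteritems() raises AttributeError on py3.
# Portable form, used after this change:
for option, value in volume_options.items():
    print(option, value)
```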
* Adding test case: test_no_glustershd_with_distribute | Milind Waykole | 2019-11-20 | 1 | -0/+176
    Change-Id: I12b5586bdcef128df64fcd8a0ba80f193395f313
    Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>
    Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
    Signed-off-by: Milind Waykole <milindwaykole96@gmail.com>
* Enabling client side heal, as client side heal is disabled by default in RHGS 3.5 | milindw96 | 2019-10-15 | 1 | -5/+42
    Change-Id: I7f8769defd34d55d8eec720c40ed55e69523f917
    Signed-off-by: Anees Patel <anepatel@redhat.com>
    Signed-off-by: milindw96 <milindwaykole96@gmail.com>
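Re-enabling client side heal amounts to flipping three volume options back on; a sketch with a placeholder volume name.
```
import subprocess

volname = "testvol"  # placeholder

# Client-side heals are off by default in newer releases, so tests that
# depend on them re-enable the options explicitly.
for option in ("cluster.data-self-heal",
               "cluster.metadata-self-heal",
               "cluster.entry-self-heal"):
    subprocess.run(["gluster", "volume", "set", volname, option, "on"],
                   check=True)
```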
* [Fix] Fixing string formatting errors and client heal errors | kshithijiyer | 2019-09-19 | 1 | -37/+51
    Change-Id: Ifef2ffe022accf59edcbc949c505f47931b19fe4
    Signed-off-by: Anees Patel <anepatel@redhat.com>
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Fix AFR test case tearDown and library import | Vinayak Papnoi | 2019-09-11 | 1 | -3/+16
    The test case 'test_client_side_quorum_with_fixed_validate_max_bricks'
    sets volume options inside the test case but has no tearDown part
    where those options are reset to default. Also, the library function
    'set_volume_options' was being imported from the wrong library. This
    fix corrects the import along with adding the tearDown steps.

    Change-Id: Ic57494e7a7e8a25303b7979f98cc2dfbc9a7d7b6
    Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>