path: root/tests/functional/afr/heal
Commit message | Author | Date | Files | Lines (-/+)
* check that heal info does not hang | Ravishankar N | 2020-11-12 | 1 | -0/+162
  Check that heal info completes successfully when there are pending
  heals and both healing and I/O are in progress.
  Change-Id: I7b00c5b6446d6ec722c1c48a50e5293272df0fdf
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
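A minimal sketch of the core assertion behind this test, assuming the gluster CLI is on PATH: `gluster volume heal <vol> info` must return within a bounded time even while heals are pending. The volume name and timeout are illustrative.
```python
import subprocess

def heal_info_completes(volname, timeout=120):
    """Return True if 'gluster volume heal <vol> info' finishes in time."""
    try:
        proc = subprocess.run(
            ["gluster", "volume", "heal", volname, "info"],
            capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return False  # heal info hung past the timeout
    return proc.returncode == 0

assert heal_info_completes("testvol"), "heal info hung or failed"
```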
* [Test] Self-heal, add-brick on replicated volume types | Bala Konda Reddy M | 2020-10-23 | 1 | -0/+199
  1. Create a replicated/distributed-replicated volume and mount it
  2. Start IO from the clients
  3. Bring down a brick from the subvol and validate that it is offline
  4. Bring the brick back online and wait for heal to complete
  5. Once the heal is completed, expand the volume
  6. Trigger rebalance and wait for rebalance to complete
  7. Validate IO; there should be no errors during steps 2-6
  8. Check arequal of the subvol; all bricks in the same subvol should
     have the same checksum
  Note: This test applies only to replicated volume types.
  Change-Id: I2286e75cbee4f22a0ed14d6c320a4496dc3c3905
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
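A minimal sketch of steps 5-6 above, using the real `add-brick` and `rebalance` CLI verbs; volume and brick names are illustrative.
```python
import subprocess
import time

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True, check=True)

volname = "testvol"
new_bricks = ["node4:/bricks/b4", "node5:/bricks/b5", "node6:/bricks/b6"]

# Step 5: expand the volume once heal has completed.
run(["gluster", "volume", "add-brick", volname] + new_bricks)

# Step 6: trigger rebalance and poll its status until completion.
run(["gluster", "volume", "rebalance", volname, "start"])
while "completed" not in run(
        ["gluster", "volume", "rebalance", volname, "status"]).stdout:
    time.sleep(10)
```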
* [Test] Test conservative merge between two bricks | Bala Konda Reddy M | 2020-08-21 | 1 | -0/+175
  Test Steps:
  1) Create a 1x3 volume and fuse mount the volume
  2) On the mount, create a dir dir1
  3) Pkill glusterfsd on node n1 (b2 on node2 and b3 on node3 stay up)
  4) touch f{1..10} on the mountpoint
  5) b2 and b3 xattrs would be blaming b1, as the files were created
     while b1 was down
  6) Reset the b3 xattrs to NOT blame b1 by using setfattr
  7) Now pkill glusterfsd of b2 on node2
  8) Restart glusterd on node1 to bring up b1
  9) Now bricks: b1 online, b2 down, b3 online
  10) touch x{1..10} under dir1 itself
  11) Again reset the xattr on node3 of b3 so that it doesn't blame b2,
      as done for b1 in step 6
  12) Restart glusterd on node2 hosting b2 to bring all bricks online
  13) Check heal info, split-brain and arequal for the bricks
  Change-Id: Ieea875dd7243c7f8d2c6959aebde220508134d7a
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
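A minimal sketch of the xattr reset in step 6, assuming the usual AFR layout where `trusted.afr.<volname>-client-N` holds the pending data/metadata/entry counters; brick path and volume name are illustrative.
```python
import subprocess

brick_dir = "/bricks/b3/dir1"           # dir1 as seen on brick b3
xattr = "trusted.afr.testvol-client-0"  # pending counters blaming b1

# Inspect the current pending counters (hex-encoded data/metadata/entry).
subprocess.run(["getfattr", "-d", "-m", ".", "-e", "hex", brick_dir],
               check=True)

# Zero them out so b3 no longer blames b1.
subprocess.run(
    ["setfattr", "-n", xattr, "-v", "0x000000000000000000000000", brick_dir],
    check=True)
```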
* [TestFix] AFR Data heal by default and explicit self-heal command | nchilaka | 2020-07-27 | 1 | -126/+29
  - Remove unnecessary disablement of client-side heal options
  - Check that client-side heal options are disabled by default
  - Test the default data-heal method
  - Test explicit data heal by calling the self-heal command
  Change-Id: I3be9001fc1cf124a4cf5a290cee985e166c0b685
  Signed-off-by: nchilaka <nchilaka@redhat.com>
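A minimal sketch of the two checks listed above, assuming the gluster CLI: client-side heal options should report off by default, and an explicit heal can still be launched. The volume name is illustrative.
```python
import subprocess

def cli(args):
    return subprocess.run(["gluster"] + args, capture_output=True,
                          text=True, check=True).stdout

volname = "testvol"
for opt in ("cluster.data-self-heal", "cluster.metadata-self-heal",
            "cluster.entry-self-heal"):
    out = cli(["volume", "get", volname, opt])
    assert "off" in out.lower(), "%s is not disabled by default" % opt

# Explicitly launch the self-heal daemon's index heal.
cli(["volume", "heal", volname])
```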
* [TestFix] Remove hot and cold bricks list - Part2 | Bala Konda Reddy M | 2020-07-08 | 2 | -16/+4
  For the non-tiered volume types, a few test cases were collecting both
  hot_tier_bricks and cold_tier_bricks while bringing bricks offline,
  which is not needed. Also removing the tier kwarg in one of the tests.
  Dropping the hot and cold tiered bricks and collecting only the bricks
  of the particular volume, as shown below.
  Removing this section:
  ```
  bricks_to_bring_offline_dict = (select_bricks_to_bring_offline(
      self.mnode, self.volname))
  bricks_to_bring_offline = list(filter(None, (
      bricks_to_bring_offline_dict['hot_tier_bricks'] +
      bricks_to_bring_offline_dict['cold_tier_bricks'] +
      bricks_to_bring_offline_dict['volume_bricks'])))
  ```
  Modifying as below for bringing bricks offline:
  ```
  bricks_to_bring_offline = bricks_to_bring_offline_dict['volume_bricks']
  ```
  Change-Id: I4f59343b380ced498516794a8cc7c968390a8459
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Testfix] Move teardownclass to teardown | kshithijiyer | 2020-06-08 | 1 | -21/+15
  Problem: A pattern was observed where testcases which were passing
  were throwing errors in teardownclass. This was because docleanup was
  running before teardownclass, and when teardownclass was executed it
  failed as the setup was already cleaned.
  Solution: Change the code to tear down from teardown, and move the
  volume setup code from setupclass to setup.
  Change-Id: I37c6fde1f592224c114148f0ed7215b2494b4502
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
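A minimal sketch of the pattern change, using plain unittest in place of the real GlusterBaseClass; the volume helpers here are hypothetical stand-ins for the framework's own.
```python
import unittest

class TestHealExample(unittest.TestCase):
    """Sketch: per-test setUp/tearDown instead of class-level hooks."""

    def setUp(self):                  # moved here from setUpClass
        super().setUp()
        self.setup_volume_and_mount()

    def tearDown(self):               # moved here from tearDownClass,
        self.cleanup_volume()         # so it runs before any docleanup
        super().tearDown()

    # Hypothetical stand-ins for the glusto-tests volume helpers:
    def setup_volume_and_mount(self): pass
    def cleanup_volume(self): pass

    def test_noop(self):
        self.assertTrue(True)
```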
* [BUG][Test] Add tc to check heal with only shd running | Pranav | 2020-05-11 | 1 | -0/+245
  Failing in CentOS-CI due to this bug:
  https://bugzilla.redhat.com/show_bug.cgi?id=1768380
  Description: Test script which verifies that server-side healing must
  happen only if the heal daemon is running on the node where the source
  brick resides.
  * Create and start the Replicate volume
  * Check the glustershd processes - only 1 glustershd should be listed
  * Bring down the bricks without affecting the cluster
  * Create files on the volume
  * Kill the glustershd on the node where the brick is running
  * Bring up the bricks which were killed in the previous steps
  * Check the heal info - it must show pending heals, and heal shouldn't
    happen since glustershd is down on the source node
  * Issue heal
  * Trigger client-side heal
  * Heal should complete successfully
  Change-Id: I1fba01f980a520b607c38d8f3371bcfe086f7783
  Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>, Milind Waykole <milindwaykole96@gmail.com>
  Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
  Signed-off-by: Milind Waykole <milindwaykole96@gmail.com>
  Signed-off-by: Pranav <prprakas@redhat.com>
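A minimal sketch of two of the checks above: exactly one glustershd process on the node, and pending entries visible in heal info. Matching on "glustershd" in the process command line is an assumption about how the daemon is launched.
```python
import subprocess

def glustershd_count():
    out = subprocess.run(["pgrep", "-fc", "glustershd"],
                         capture_output=True, text=True)
    return int(out.stdout or 0)

def pending_heal_entries(volname):
    out = subprocess.run(["gluster", "volume", "heal", volname, "info"],
                         capture_output=True, text=True, check=True).stdout
    # heal info prints one "Number of entries: N" line per brick
    return sum(int(line.split(":")[1])
               for line in out.splitlines()
               if line.startswith("Number of entries:"))

assert glustershd_count() == 1, "expected exactly one glustershd"
assert pending_heal_entries("testvol") > 0, "expected pending heals"
```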
* [Test] Add tc to check impact of replace brick on shd | kshithijiyer | 2020-04-29 | 1 | -0/+186
  Description: Test script to verify that the glustershd server vol file
  has only entries for replicate volumes.
  Testcase steps:
  1. Create multiple volumes and start all volumes
  2. Check the glustershd processes (only 1 glustershd should be listed)
  3. Do replace-brick on the replicate volume
  4. Confirm that the brick is replaced
  5. Check the glustershd processes (only 1 glustershd should be listed
     and the pid should be different)
  6. The glustershd server vol file should be updated with the new bricks
  Change-Id: I09245c8ff6a2b31a038749643af294aa8b81a51a
  Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>, Vitalii Koriakov <vkoriako@redhat.com>
  Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
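A minimal sketch of steps 3-5: replace a brick with the real `replace-brick ... commit force` verb, then verify glustershd was restarted (same count, new pid). Brick names are illustrative, and pid discovery again assumes "glustershd" appears in the command line.
```python
import subprocess

def glustershd_pids():
    out = subprocess.run(["pgrep", "-f", "glustershd"],
                         capture_output=True, text=True)
    return {int(p) for p in out.stdout.split()}

old_pids = glustershd_pids()
subprocess.run(["gluster", "volume", "replace-brick", "testvol",
                "node1:/bricks/b1", "node1:/bricks/b1_new",
                "commit", "force"], check=True)
new_pids = glustershd_pids()
assert len(new_pids) == 1 and new_pids.isdisjoint(old_pids), \
    "glustershd should be restarted with a new pid"
```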
* [Testfix] Remove python version dependency (Part 1) | kshithijiyer | 2020-02-26 | 3 | -21/+14
  The sys library was added to all the testcases to fetch
  `sys.version_info.major`, which returns the major version of the
  python with which glusto and glusto-tests are installed, and runs the
  I/O script (file_dir_ops.py) with that version of python. This creates
  a problem, as older jobs running on older platforms won't run the way
  they used to: if the older platform has python2 by default and we run
  the tests from a slave which has python3, it'll fail, and vice versa.
  The problem is introduced by the code below:
  ```
  cmd = ("/usr/bin/env python%d %s create_deep_dirs_with_files "
         "--dirname-start-num 10 --dir-depth 1 --dir-length 1 "
         "--max-num-of-dirs 1 --num-of-files 5 %s" % (
             sys.version_info.major, self.script_upload_path,
             self.mounts[0].mountpoint))
  ```
  The solution is to change `python%d` to `python`, which lets the code
  run with whatever version of python is available on that client. This
  enables us to run any version of the framework on both older and newer
  platforms.
  Change-Id: I7c8200a7578f03c482f0c6a91832b8c0fdb33e77
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
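For comparison, a sketch of the fixed command as it would read inside the test (same attributes as the snippet above): the interpolated major version is dropped so the client's default `python` is used.
```python
cmd = ("/usr/bin/env python %s create_deep_dirs_with_files "
       "--dirname-start-num 10 --dir-depth 1 --dir-length 1 "
       "--max-num-of-dirs 1 --num-of-files 5 %s" % (
           self.script_upload_path, self.mounts[0].mountpoint))
```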
* [testfix] Add steps to stabilize afr testcases | Sri Vignesh | 2020-02-18 | 3 | -23/+32
  Added steps to reset the volume and resolved teardown-class cleanup
  failures.
  Change-Id: I06b0ed8810c9b064fd2ee7c0bfd261928d8c07db
* [Fix] Remove variable script_local_path (Part 3) | kshithijiyer | 2020-01-07 | 5 | -15/+5
  Please refer to the commit message of the patch below:
  https://review.gluster.org/#/c/glusto-tests/+/23902/
  Change-Id: Icf32bb20b7eaf2eabb07b59be813997a28872565
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [py2to3] Add py3 support for tests in 'tests/functional/afr' | Valerii Ponomarov | 2019-12-18 | 1 | -8/+8
  Change-Id: Ic14be81f1cd42c470d2bb5c15505fc1bc168a393
  Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
* [py2to3] Add py3 support for tests in 'tests/functional/afr/heal' | Valerii Ponomarov | 2019-12-12 | 7 | -42/+60
  Change-Id: Id4df838565ec3f9ad765cf223bb5115e43dac1c5
  Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
* Adding test case: test_no_glustershd_with_distribute | Milind Waykole | 2019-11-20 | 1 | -0/+176
  Change-Id: I12b5586bdcef128df64fcd8a0ba80f193395f313
  Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>
  Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
  Signed-off-by: Milind Waykole <milindwaykole96@gmail.com>
* Moved test_glustershd_on_all_volume_types to separate folder | Vitalii Koriakov | 2018-12-27 | 1 | -223/+2
  Change-Id: I3d749c5d131973217d18fc1158236806645e4ab4
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Fixing the test-case to delete brick directories of replaced bricks, as this was not handled by teardown class | Anees Patel | 2018-12-11 | 1 | -158/+0
  Change-Id: I789adbf0909c5edd0a2eb19ed4ccebcb654700fd
  Signed-off-by: Anees Patel <anepatel@redhat.com>
* Moved test_data_self_heal_algorithm_full_default from afr to arbiter folder | Vitalii Koriakov | 2018-12-06 | 1 | -148/+0
  Change-Id: I04ffdedb1ce25ab05239c77b4dd5893ce18b32f7
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Moved test_self_heal_differing_in_file_type from afr to arbiter folder | Vitalii Koriakov | 2018-12-06 | 1 | -193/+0
  Change-Id: I9f33c84be39bdca85909c2ae337bd4482532d061
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Moved test_existing_glustershd_should_take_care_of_self_healing to separate folder | Vitalii Koriakov | 2018-12-06 | 2 | -230/+256
  Change-Id: I1fb4497ac915c7a93f223ef4e6946eeb4dcd0e90
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Moved test_self_heal_symbolic_links from afr to arbiter folder | Vitalii Koriakov | 2018-11-23 | 1 | -245/+0
  Change-Id: I6a95e82977f4ac6092716c064597931768023710
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Deleting test_metadata_self_heal from test_self_heal file | Vitalii Koriakov | 2018-11-15 | 1 | -353/+0
  Change-Id: I4560b425aa470da27631eb6401e3775fb90c2330
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Moved test_self_heal_algorithm_full_daemon_off from afr to arbiter folder | Vitalii Koriakov | 2018-11-08 | 1 | -111/+1
  Change-Id: I0143a4ffa16fa0c3ea240f5debbdc5519a9e5445
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Moved test_data_self_heal_algorithm_diff_default from afr to arbiter folder | Vitalii Koriakov | 2018-10-23 | 1 | -148/+0
  Change-Id: I6462446cce6c06a7559028eee1a6968af093c959
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Moved test_data_self_heal_algorithm_diff_heal_command from afr to arbiter folder | Vitalii Koriakov | 2018-10-23 | 1 | -204/+0
  Change-Id: Id32859df069106d6c9913147ecfa8d378dfa8e9d
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Moved test_entry_self_heal_heal_command from afr to arbiter folder | Vitalii Koriakov | 2018-10-22 | 1 | -246/+0
  Change-Id: Id9face2267b9f702bb2b0b5b3c294b3e4082cdf7
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Fix spelling mistake across the codebase | Nigel Babu | 2018-08-07 | 1 | -9/+9
  Change-Id: I46fc2feffe6443af6913785d67bf310838532421
* Shorten all the logs around verify_io_procs | Yaniv Kaul | 2018-07-17 | 3 | -80/+80
  No functional change, just making the tests a bit more readable. It
  could be moved to a decorator later on, wrapping tests.
  Change-Id: I484bb8b46907ee8f33dfcf4c960737a21819cd6a
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
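A hypothetical sketch of the decorator idea floated in that message, assuming glustolibs' `validate_io_procs` helper (which exists in glusto-tests); the decorator itself is not part of the codebase.
```python
import functools

from glustolibs.io.utils import validate_io_procs  # existing helper

def verify_io(test_method):
    """Run the test, then assert all recorded IO procs succeeded."""
    @functools.wraps(test_method)
    def wrapper(self, *args, **kwargs):
        result = test_method(self, *args, **kwargs)
        # self.all_mounts_procs is populated by the wrapped test
        self.assertTrue(
            validate_io_procs(self.all_mounts_procs, self.mounts),
            "IO failed on some of the clients")
        return result
    return wrapper
```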
* afr: Test metadata split-brain resolution using heal CLI | karthik-us | 2018-07-01 | 1 | -0/+282
  Change-Id: I634d11cb582521b03f0bb481172e2f4f68d1c2ce
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
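A minimal sketch of the split-brain policies the heal CLI offers (this entry and the data split-brain one below both exercise them): `bigger-file`, `latest-mtime`, and `source-brick`. File and brick names are illustrative; for metadata split-brain, `source-brick` or `latest-mtime` applies.
```python
import subprocess

def heal_split_brain(volname, policy, *args):
    """policy: 'bigger-file', 'latest-mtime' or 'source-brick'."""
    cmd = ["gluster", "volume", "heal", volname, "split-brain",
           policy] + list(args)
    subprocess.run(cmd, check=True)

# Resolve one file by picking a source brick explicitly:
heal_split_brain("testvol", "source-brick",
                 "node1:/bricks/b1", "/file-in-metadata-split-brain.txt")
```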
* afr: Test data split-brain resolution using heal CLI | karthik-us | 2018-06-28 | 1 | -0/+270
  Change-Id: I525f50a42e29270d9ac445d62e12c7e7e25a7ae3
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
* Adding test_glustershd_on_all_volume_types test case | Vijay Avuthu | 2018-05-11 | 1 | -2/+223
  Description: Test script to verify that the glustershd server vol file
  has only entries for replicate volumes.
  * Create multiple volumes and start all volumes
  * Check the glustershd processes - only 1 glustershd should be listed
  * Check the glustershd server vol file - it should contain entries
    only for the replicated volumes involved
  * Add bricks to the replicate volume - it should convert to
    distributed-replicate
  * Check the glustershd server vol file - the newly added bricks should
    be present
  * Check the glustershd processes - only 1 glustershd should be listed
  Change-Id: Ie110a0312e959e23553417975aa2189ed01be6a4
  Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
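A minimal sketch of the vol-file check, assuming the usual glusterd layout where the self-heal daemon's volfile lives at /var/lib/glusterd/glustershd/glustershd-server.vol on each server (path is an assumption; volume names are illustrative).
```python
# Assumed location of the shd volfile on a glusterd node.
SHD_VOLFILE = "/var/lib/glusterd/glustershd/glustershd-server.vol"

def volfile_mentions(volname):
    """True if any entry for `volname` appears in the shd volfile."""
    with open(SHD_VOLFILE) as f:
        return any(volname in line for line in f)

assert volfile_mentions("replicated_vol")
assert not volfile_mentions("pure_distribute_vol")
```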
* Test MetaData Self-Heal (heal command) | Vitalii Koriakov | 2018-05-08 | 1 | -115/+461
  Change-Id: I32fefdab769e5a361e4dcb5f1328b2c8da2e4f1a
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* selfheal daemon cases | Karan Sandha | 2018-05-08 | 1 | -5/+116
  Change-Id: I24e2baddc4f5cdb2c9ae0ab6b9020b2eb9b42a05
  Signed-off-by: Karan Sandha <ksandha@redhat.com>
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Adding Test Case: test_existing_glustershd_should_take_care_of_self_healing | Vijay Avuthu | 2018-05-04 | 1 | -7/+232
  Description: Test script which verifies that the existing glustershd
  should take care of self-healing.
  * Create and start the Replicate volume
  * Check the glustershd processes - note the pids
  * Bring down one brick (say brick1) without affecting the cluster
  * Create 5000 files on the volume
  * Bring up brick1, which was killed in the previous steps
  * Check the heal info - proactive self-healing should start
  * Bring down brick1 again
  * Wait 60 sec and bring up brick1
  * Check the glustershd processes - the pids should be different
  * Monitor the heal till it is complete
  Change-Id: Ib044ec60214171f136cc4c2f9225b8fe62e6214d
  Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
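A minimal sketch of the final step above: poll heal info until every brick reports zero pending entries, with an overall timeout. Volume name and timings are illustrative.
```python
import subprocess
import time

def monitor_heal(volname, timeout=1200, interval=30):
    """Poll heal info; True once all bricks report zero pending entries."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.run(
            ["gluster", "volume", "heal", volname, "info"],
            capture_output=True, text=True, check=True).stdout
        counts = [int(line.split(":")[1]) for line in out.splitlines()
                  if line.startswith("Number of entries:")]
        if counts and all(c == 0 for c in counts):
            return True
        time.sleep(interval)
    return False

assert monitor_heal("testvol"), "heal did not complete in time"
```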
* Test self-heal of 50k files (heal command) | Vitalii Koriakov | 2018-05-02 | 1 | -0/+189
  Change-Id: I221b49315db8bc02873fc133ff12837954f0c232
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Test Self-Heal of Symbolic Links (heal command) | Vitalii Koriakov | 2018-04-30 | 1 | -0/+245
  Change-Id: Ie4a4b323e2b7e57e3896550b6f9b7db28fba03b7
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Test self-heal of files with different file types with default configuration | Vitalii Koriakov | 2018-04-23 | 1 | -0/+193
  Change-Id: I84b789f9c0204ca0f0efb40a9a01215902c0ee1d
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Test data self-heal of files when "self-heal-algorithm" option value is "full" (default) | Vitalii Koriakov | 2018-04-20 | 1 | -0/+148
  Change-Id: If916d20b0d7c9ded6fb1fc929d9ff1e7719d9594
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Test data self-heal of files when "self-heal-algorithm" option value is "diff" (default) | Vitalii Koriakov | 2018-04-20 | 1 | -0/+150
  Change-Id: I34a196e8fc764d87e877a082be2b0575bb1b3b40
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Test data self-heal of files when "self-heal-algorithm" option value is "diff" (heal command) | Vitalii Koriakov | 2018-04-20 | 1 | -0/+204
  Change-Id: Id310e0c17a872d8586ad8c7de79f1f68b93edb0a
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
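A minimal sketch of the option the three tests above toggle: `cluster.data-self-heal-algorithm` selects how file contents are healed ("full" rewrites the whole file from the source; "diff" copies only the blocks whose checksums differ). Volume name is illustrative.
```python
import subprocess

def set_heal_algorithm(volname, algo):
    assert algo in ("full", "diff")
    subprocess.run(["gluster", "volume", "set", volname,
                    "cluster.data-self-heal-algorithm", algo], check=True)

set_heal_algorithm("testvol", "diff")
```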
* Test brick process should not be started on read-only storage_node disks | Vitalii Koriakov | 2018-04-19 | 1 | -3/+171
  Change-Id: Id0d9e468aaf0061e9ff0f5cc534c06017e97b793
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Fix up coding style issues in tests | Nigel Babu | 2018-03-27 | 4 | -82/+941
  Change-Id: I14609030983d4485dbce5a4ffed1e0353e3d1bc7
* Test Entry-Self-Heal (heal command) | Vitalii Koriakov | 2018-03-05 | 1 | -0/+251
  Change-Id: Iaecdf6ad44677891340713a5c945a4bdc30ce527
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Test Data-Self-Heal daemons off (heal command) | Vitalii Koriakov | 2018-02-07 | 1 | -0/+416
  Change-Id: If92b6f756f362cb4ae90008c6425b6c6652e3758
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>