path: root/tests/functional/glusterd
Commit message | Author | Age | Files | Lines
* glusterd test cases: Enabling and disabling shared storage (kshithijiyer, 2019-08-19, 1 file changed, -0/+192)
  Steps:
  -> Enable shared storage
  -> Disable shared storage
  -> Create a volume of any type with the name gluster_shared_storage
  -> Disable the shared storage
  -> Check that the volume created in step 3 is not deleted
  -> Delete the volume
  -> Enable the shared storage
  -> Check that a volume with the name gluster_shared_storage is created
  -> Disable the shared storage
  Change-Id: I1fd29d51e32cadd7978771f4a37ac87176d90372
  Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
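The shared-storage toggle above is driven by a single cluster-wide option; below is a minimal sketch of the enable/verify/disable commands, assuming glusto's `g.run(host, cmd)` helper and a placeholder node name (not the test's actual code).

```python
# Hedged sketch; "server1" is a placeholder for any node in the trusted storage pool.
from glusto.core import Glusto as g

node = "server1"

# Enable shared storage; glusterd auto-creates the gluster_shared_storage volume.
ret, out, err = g.run(node, "gluster volume set all cluster.enable-shared-storage enable")
assert ret == 0, "Enabling shared storage failed: %s" % err

# Verify the volume now exists.
ret, out, err = g.run(node, "gluster volume list")
assert "gluster_shared_storage" in out, "gluster_shared_storage volume not found"

# Disable shared storage; glusterd removes the auto-created volume again.
ret, out, err = g.run(node, "gluster volume set all cluster.enable-shared-storage disable")
assert ret == 0, "Disabling shared storage failed: %s" % err
```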
* Adding test case to mount volume, remove /var/log/glusterfs/ and remount the volume (kshithijiyer, 2019-08-06, 1 file changed, -0/+60)
  Test case:
  1. Create all types of volumes and start them.
  2. Mount all volumes on clients.
  3. Delete the /var/log/glusterfs folder on the client.
  4. Run IO on all the mount points.
  5. Unmount and remount all volumes.
  6. Check whether the logs are regenerated or not.
  Change-Id: I4f90d709c4da6e1c73cf95f4075c50aa44cdd811
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Optimized test case: tests/functional/glusterd/test_remove_brick_scenarios.py (hadarsharon, 2019-08-05, 1 file changed, -16/+28)
  Improved the I/O performance of the test (writing 100k files to a mounted volume) with the following changes:
  1. Modified the touch command to write as many files as possible per process, so that fewer processes are needed to write the 100k files.
  2. Used threads to parallelize the touch processes from within the test, for better efficiency.
  Change-Id: Id969f387f4b7b8e88daf688f7bada950cff2c412
  Signed-off-by: hadarsharon <hsharon@redhat.com>
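The batching-plus-threads idea can be illustrated with plain Python; this is a hedged sketch (not the test's code), with a placeholder mount point and batch size.

```python
# Illustrative sketch: create 100k files with a few touch invocations run in parallel.
# /mnt/testvol and the batch size are placeholders, not values from the test.
import subprocess
from concurrent.futures import ThreadPoolExecutor

MOUNT = "/mnt/testvol"
TOTAL_FILES = 100000
BATCH = 1000  # files created per touch invocation

def touch_batch(start):
    # One touch process creates a whole batch of files instead of a single file.
    names = ["%s/file_%d" % (MOUNT, i) for i in range(start, start + BATCH)]
    subprocess.run(["touch"] + names, check=True)

with ThreadPoolExecutor(max_workers=10) as pool:
    # Threads overlap the touch processes so the 100k files finish sooner.
    list(pool.map(touch_batch, range(0, TOTAL_FILES, BATCH)))
```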
* Adding test case to enable brickmux, then create, start and stop 3 volumes (kshithijiyer, 2019-07-29, 1 file changed, -0/+115)
  Test case:
  1. Set cluster.brick-multiplex to enabled.
  2. Create three 1x3 replica volumes.
  3. Start all three volumes.
  4. Stop the three volumes one by one.
  Change-Id: Ibf3e81e7424d6a429da0aa12efeae7fffd3338f2
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
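A minimal sketch of the multiplexing setting and the start/stop loop, assuming glusto's `g.run` helper and placeholder node/volume names.

```python
from glusto.core import Glusto as g

node = "server1"                                # placeholder cluster node
volumes = ("testvol1", "testvol2", "testvol3")  # placeholder 1x3 replica volumes

# Step 1: enable brick multiplexing cluster-wide.
ret, _, err = g.run(node, "gluster volume set all cluster.brick-multiplex on")
assert ret == 0, err

# Steps 3-4: start all three volumes, then stop them one by one.
for vol in volumes:
    ret, _, err = g.run(node, "gluster volume start %s" % vol)
    assert ret == 0, err
for vol in volumes:
    ret, _, err = g.run(node, "gluster --mode=script volume stop %s" % vol)
    assert ret == 0, err
```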
* Adding testcase to remove /var/log/glusterfs and mount the volume (kshithijiyer, 2019-07-23, 1 file changed, -0/+125)
  Test case:
  1. Create all types of volumes.
  2. Start all volumes.
  3. Delete the /var/log/glusterfs folder on the client.
  4. Mount all the volumes one by one.
  5. Run IO on all the mount points.
  6. Check whether logs are generated in /var/log/glusterfs/.
  Change-Id: I7a3275aad940116c3506b22b13a670e455d9ef00
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Adding code to not stop glusterd on mnode (kshithijiyer, 2019-07-08, 1 file changed, -6/+28)
  This test case was failing in a test run because the mnode was not removed from the list self.servers, so in some runs glusterd was stopped on the mnode and the command was then executed on that same node. Also adding code to check and start glusterd on the node in case the test case fails.
  Change-Id: Id203102d3f0ec82af0ac215f0ecaf7ae22b630f5
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Modifying test_enabling_gluster_debug_mode to do more operations (kshithijiyer, 2019-07-02, 1 file changed, -3/+7)
  While running test_enabling_gluster_debug_mode through Jenkins it was observed that running a volume operation once wasn't generating enough logs by the time the logs were checked, which led to failure of the test case in the Jenkins run. So the log-generating logic is modified to run the operation in a loop and produce a good amount of logs.
  Change-Id: Id7a12c86a04dc86d4856dbe30d945e70e64ea4f7
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Adding code to clean up bricks in test_add_brick.py (kshithijiyer, 2019-06-13, 1 file changed, -9/+18)
  While executing the test suite it was observed that the test case test_add_remove_brick was failing due to leftovers from the test case test_add_brick_functionality. Hence adding code to clean all the bricks after the test in test_add_brick.py.
  Change-Id: Iace9e51582ab4fa1f0f184283e6205aa6140b4a2
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Optimized test case: tests/functional/glusterd/test_add_brick (hadarsharon, 2019-06-09, 1 file changed, -65/+54)
  Worked on the following:
  1. Removed redundant throwaway variables (ret, _, _)
  2. More consistent exceptions
  3. Added comments within code
  4. Clarified error messages in case of Assertion Errors
  Change-Id: I8ca0acce848bd9a8a5d217b5a4e247590177154d
  Signed-off-by: hadarsharon <hsharon@redhat.com>
* glusto-tests/glusterd: enable glusterd in debug mode (kshithijiyer, 2019-05-29, 1 file changed, -0/+169)
  In this test case we enable glusterd in debug mode and check the glusterd log for the debug messages.
  Steps followed:
  1. Stop glusterd.
  2. Change the log level to DEBUG in /usr/local/lib/systemd/system/glusterd.service.
  3. Remove the glusterd log.
  4. Start glusterd.
  5. Issue some gluster commands.
  6. Check for debug messages in the glusterd log.
  Change-Id: Id1173be6da2ef1c2233459fb23f4b27308c923f2
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
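Step 2 amounts to switching the log level that the systemd unit passes to glusterd; here is a hedged sketch of the whole sequence, assuming glusto's `g.run`, a placeholder node, an INFO-to-DEBUG substitution in the unit file quoted above, and the single-letter level marker that gluster logs use.

```python
from glusto.core import Glusto as g

node = "server1"  # placeholder
unit = "/usr/local/lib/systemd/system/glusterd.service"

g.run(node, "systemctl stop glusterd")
# Assumes the unit file carries an INFO log-level setting that can be flipped to DEBUG.
g.run(node, "sed -i 's/INFO/DEBUG/g' %s" % unit)
g.run(node, "systemctl daemon-reload")
g.run(node, "rm -f /var/log/glusterfs/glusterd.log")
g.run(node, "systemctl start glusterd")

# Issue a command so glusterd writes something, then look for DEBUG entries
# (gluster log lines mark the level with a single letter, e.g. " D ").
g.run(node, "gluster volume list")
ret, out, _ = g.run(node, "grep -c ' D ' /var/log/glusterfs/glusterd.log")
assert ret == 0 and int(out.strip()) > 0, "no DEBUG messages found in glusterd.log"
```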
* Adding the assert statement removed during changes (kshithijiyer, 2019-05-23, 1 file changed, -0/+2)
  It seems an assert statement got missed during the recent changes. Adding back the assert and submitting a patch.
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
  Change-Id: Ic8ab0d6e54da510faf479cd09cf122ccf8cedfbb
* Changing systemctl to service to fix jira issue RHGSQE-197 (kshithijiyer, 2019-05-23, 1 file changed, -2/+4)
  Bug https://bugzilla.redhat.com/show_bug.cgi?id=1690254 has to be fixed before merging this patch.
  Change-Id: I90e669269fafa9d0a064a64883c3e4b88080d25f
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Changing error messages to be checked as per new messages (kshithijiyer, 2019-05-08, 1 file changed, -2/+18)
  Changing the error message displayed when peer detach is issued while bricks are present on the node being detached. Adding logic to handle both the new and the old error message.
  Old msg: peer detach: failed: Brick(s) with the peer <my_server> exist in cluster
  New msg: peer detach: failed: Peer <my_server> hosts one or more bricks. If the peer is in not recoverable state then use either replace-brick or remove-brick command with force to remove all bricks from the peer and attempt the peer detach again.
  Change-Id: I3d8fdac2c33638ecc2a8b5782c68caebbf17cf41
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
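The old/new handling is just a containment check against either string; an illustrative sketch (not the test's code):

```python
# Accept either the pre-change or the post-change peer-detach error text.
OLD_MSG = "Brick(s) with the peer"
NEW_MSG = "hosts one or more bricks"

def is_expected_detach_error(stderr):
    """True if stderr matches either glusterd message for detaching a peer with bricks."""
    return OLD_MSG in stderr or NEW_MSG in stderr

# Example with the new message (stderr captured from a failed `gluster peer detach`):
err = "peer detach: failed: Peer server2 hosts one or more bricks. ..."
assert is_expected_detach_error(err)
```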
* Changing error message to be checked in test_volume_get (kshithijiyer, 2019-05-06, 1 file changed, -0/+2)
  The error message displayed when we do a gluster v get for options which don't exist has been changed. Adding an if-based logic which can check for the old as well as the new error message.
  Old msg: volume get option: failed: Did you mean auth.allow or ...reject?
  New msg: volume get option: failed: Did you mean ctime.noatime?
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
  Change-Id: I9496d391a7da9dba64d3426a024c2b1b68455f20
* Adding test to validate output of profile info (kshithijiyer, 2019-05-03, 1 file changed, -0/+223)
  Test case:
  1) Create a volume and start it.
  2) Mount volume on client and start IO.
  3) Start profile info on the volume.
  4) Run profile info with different parameters and see if all bricks are present or not.
  5) Stop profile on the volume.
  6) Create another volume.
  7) Start profile without starting the volume.
  Change-Id: I6e8ec9285d48c1c828cd1d20bff6ea8f3de064f7
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
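Steps 3-5 map onto the gluster profile CLI roughly as follows; a sketch with placeholder names, assuming glusto's `g.run`.

```python
from glusto.core import Glusto as g

node, vol = "server1", "testvol"  # placeholders

# Step 3: start profiling on the volume.
g.run(node, "gluster volume profile %s start" % vol)

# Step 4: run `profile info` with its different parameters and check the output
# (the test verifies every brick of the volume shows up for each variant).
for variant in ("", "peek", "incremental", "cumulative", "clear"):
    cmd = ("gluster volume profile %s info %s" % (vol, variant)).strip()
    ret, out, err = g.run(node, cmd)
    assert ret == 0, err

# Step 5: stop profiling.
g.run(node, "gluster volume profile %s stop" % vol)
```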
* Adding test for profile operations with one node down (kshithijiyer, 2019-04-29, 1 file changed, -0/+220)
  Test case:
  1) Create a volume and start it.
  2) Mount volume on client and start IO.
  3) Start profile info on the volume.
  4) Stop glusterd on one node.
  5) Run profile info with different parameters and see if all bricks are present or not.
  6) Stop profile on the volume.
  Change-Id: Ie573414816362ebbe30d2c419fd0e348522ceaec
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Test case to detach node used for mounting (kshithijiyer, 2019-04-17, 1 file changed, -0/+219)
  Test case:
  1. Create a 1x3 volume with only 3 nodes from the cluster.
  2. Mount the volume on the client node using the IP of the fourth node.
  3. Write IO to the volume.
  4. Detach node N4 from the cluster.
  5. Create a new directory on the mount point.
  6. Create a few files using the same command used in step 3.
  7. Add three more bricks to make the volume 2x3 using the add-brick command.
  8. Do a gluster volume rebalance on the volume.
  9. Create more files from the client on the mount point.
  10. Check for files on bricks from both replica sets.
  11. Create a new directory from the client on the mount point.
  12. Check for the directory in both replica sets.
  Change-Id: I228b79955dca565a40994919b2903e59cad7d8f5
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
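Steps 7-8 are a straightforward add-brick followed by a rebalance; a hedged sketch with placeholder brick paths, assuming glusto's `g.run`.

```python
from glusto.core import Glusto as g

node, vol = "server1", "testvol"  # placeholders
# Placeholder bricks that turn the 1x3 volume into a 2x3 volume.
new_bricks = "server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/brick1"

# Step 7: grow the volume while keeping replica count 3.
ret, _, err = g.run(node, "gluster volume add-brick %s replica 3 %s" % (vol, new_bricks))
assert ret == 0, err

# Step 8: rebalance the layout across the new replica set.
ret, _, err = g.run(node, "gluster volume rebalance %s start" % vol)
assert ret == 0, err
```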
* Test to profile start when quorum not met (Bala Konda Reddy M, 2019-04-16, 1 file changed, -0/+147)
  1. Create a volume
  2. Set the quorum type to server and ratio to 90
  3. Stop glusterd randomly on one of the nodes
  4. Start profile on the volume
  5. Start glusterd on the node where it is stopped
  6. Start profile on the volume
  7. Stop profile on the volume where it is started
  Change-Id: Ifeb9fddf6f1a14c9df73ed2f0453636d2853e944
  Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
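Step 2 is two volume-set calls, one per-volume and one cluster-wide; a minimal sketch with placeholders, assuming glusto's `g.run`.

```python
from glusto.core import Glusto as g

node, vol = "server1", "testvol"  # placeholders

# Server-side quorum is enabled per volume; the ratio is a cluster-wide ("all") option.
g.run(node, "gluster volume set %s cluster.server-quorum-type server" % vol)
g.run(node, "gluster volume set all cluster.server-quorum-ratio 90%")

# With glusterd stopped on one node the 90% ratio is not met, so starting
# profile is expected to fail until that glusterd is brought back up.
ret, _, err = g.run(node, "gluster volume profile %s start" % vol)
```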
* Adding test to run gluster commands when glusterd is down on one node (kshithijiyer, 2019-04-12, 1 file changed, -0/+102)
  Change-Id: Ibf41c11a4e98baeaad658ee10ba8a807318504be
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Validating whether peers are connected or not before volume creation (Bala Konda Reddy M, 2019-04-12, 1 file changed, -1/+17)
  In Jenkins this case was failing with "peers are not connected" during volume creation. Now having a check before creating the volume to make sure that the peers are in the cluster and in connected state after the peer probe.
  Change-Id: I8aa9d2c4d1669475dd8867d42752a31604ff572f
  Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
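The added check amounts to polling `gluster peer status` until every peer reports "Peer in Cluster (Connected)"; an illustrative sketch with placeholder values, assuming glusto's `g.run`.

```python
import time
from glusto.core import Glusto as g

node = "server1"   # placeholder
other_peers = 3    # placeholder: number of peers besides this node

def wait_for_peers_connected(host, count, timeout=60):
    """Poll peer status until every peer shows 'Peer in Cluster (Connected)'."""
    end = time.time() + timeout
    while time.time() < end:
        _, out, _ = g.run(host, "gluster peer status")
        if out.count("Peer in Cluster (Connected)") == count:
            return True
        time.sleep(5)
    return False

assert wait_for_peers_connected(node, other_peers), "Peers not connected; aborting volume create"
```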
* Adding code for cleanup of all bricks on each server (kshithijiyer, 2019-04-09, 1 file changed, -2/+18)
  Change-Id: I405843e0093ddb7138ee0a8afbfd4cd2f91e6284
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Checking if peers are connected after peer probe (kshithijiyer, 2019-04-09, 1 file changed, -0/+24)
  Change-Id: I252ab0c0f6248b9a5c1d7977146c15876e144b38
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Adding code to check if peers are connected in test_spurious_rebalance (kshithijiyer, 2019-03-29, 1 file changed, -1/+13)
  Change-Id: I4a1097fbdebd49555fffcfa5fe609f4070e39182
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Validating whether the peer is connected or not (Bala Konda Reddy M, 2019-03-28, 1 file changed, -0/+15)
  In Jenkins, right after a peer probe, the add-brick function was failing with "peer not in cluster". So adding a check that the peer is connected before proceeding to the next step.
  Change-Id: I73bf92819ad44f7a6a14795ab07c45d260cd04eb
  Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
* Test case to change reserve limit with remove-brick in progress (kshithijiyer, 2019-03-13, 1 file changed, -0/+270)
  Change-Id: I53fb7f4cceae395698568129669dc5f3a9a5e4bb
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Test to check remove-brick scenarios w.r.t. glusterd (kshithijiyer, 2019-02-20, 1 file changed, -0/+188)
  Change-Id: I1bfa2fb3ae4ff1fc247b40c73f4fade9a3afeede
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Test to reduce volume from 2x3 to 2x2 and to dist (kshithijiyer, 2019-02-12, 1 file changed, -0/+115)
  Change-Id: I64309c3b46dc9087eeb3181acba63b981b2ecc6f
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Test to check volume create with glusterd restart on one node (kshithijiyer, 2019-02-12, 1 file changed, -0/+116)
  Change-Id: Ica0771bdee1e96e9d6bb5157fb6c2125a4b419f1
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Test case to check peer detach warning (kshithijiyer, 2019-02-11, 1 file changed, -0/+83)
  Change-Id: Idc379ad7f31274cc63f384d7223bf769bb89ace3
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Test to check if setting volume-level options at cluster level is crashing glusterd (kshithijiyer, 2019-02-07, 1 file changed, -0/+72)
  Change-Id: I6ee034f019a4aa36a83e087f2d9fed007e4fd9d7
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Fixed jira issue RHGSQE-33 (kshithijiyer, 2019-01-28, 1 file changed, -3/+10)
  Altered code to check for daemons only on servers where the bricks for a given volume are present.
  Change-Id: I79312f3b09fd5e1b0fdf6db40e29481662e56303
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Fixed jira issue RHGSQE-29 and added quorum reset code (kshithijiyer, 2019-01-24, 2 files changed, -30/+50)
  Change-Id: Ibd50170d2c3172d7b98c2174630d31a066762f7c
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Fixed jira issue RHGSQE-31 (kshithijiyer, 2019-01-23, 1 file changed, -13/+16)
  Change-Id: I627c78792c6c1ea12c4a023095a4a983f8cee9b0
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Reset quorum value to default (kshithijiyer, 2019-01-23, 1 file changed, -0/+9)
  Change-Id: I0486abff96ea3ea626ce4d18ac0c24f10ed6a846
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Reverting quorum value back to 51% (kshithijiyer, 2019-01-23, 1 file changed, -0/+7)
  Change-Id: I9e61480ff93d0c16b66eeddacdaef715a0b47d1c
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Fix in teardown() (kshithijiyer, 2019-01-22, 1 file changed, -10/+19)
  Change-Id: Ie44703a9d114b9ecaa5bbce07a98c8a040393f2c
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Fix: Volume create should fail when a node is down (Bala Konda Reddy M, 2019-01-22, 1 file changed, -2/+6)
  Earlier the test randomly selected one node from the whole pool and stopped glusterd on it before volume creation, but the volume type used in the test is pure distribute and the default glusto config takes 4 bricks. With the fix, the node is randomly selected from only the first 4 nodes used for volume creation, so the create fails as expected.
  Change-Id: I3cf2fc8281c9747b190e1fe9ef471edbb2c4d2ca
  Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
* Reverting back quorum value to 51% (kshithijiyer, 2019-01-22, 1 file changed, -6/+8)
  Change-Id: I7494b877dff64e195a3517af0176e2a00fa2a86c
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Reverting back quorum value to 51% (kshithijiyer, 2019-01-22, 1 file changed, -7/+8)
  Change-Id: I7361059b663ea19c08a44c4763881634703d36a4
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Reverting quorum value back to 51% (kshithijiyer, 2019-01-18, 1 file changed, -0/+8)
  Change-Id: Ic7f61241addec2ad81c4ef05dd3268c013dbb083
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Reverting the quorum ratio to 51 (Bala Konda Reddy M, 2019-01-18, 1 file changed, -1/+10)
  Change-Id: Iac928311281fac68a5e0c773d87842a35c8d499e
  Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
* Quorum ratio not reset to default (root, 2018-12-06, 1 file changed, -1/+8)
  Earlier the quorum ratio was not reset to its default value, which affects the remaining test cases.
  Change-Id: I5b2e6c23c5b9cd4cfb310534a7ff89a99bf0931b
  Signed-off-by: root <root@localhost.localdomain>
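Putting the ratio back explicitly in teardown keeps later quorum tests from inheriting an earlier 90% setting; a small sketch, assuming glusto's `g.run` and a placeholder node.

```python
from glusto.core import Glusto as g

node = "server1"  # placeholder

# Revert the cluster-wide server-quorum ratio to its default of 51%.
ret, _, err = g.run(node, "gluster volume set all cluster.server-quorum-ratio 51%")
assert ret == 0, err
```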
* Fix for test_add_identical_brick_new_node.py (Sri, 2018-09-21, 1 file changed, -1/+4)
  Change-Id: Ic60e9da0a97818cba59b5be5048492e54fc0edb3
  Signed-off-by: Sri <sselvan@redhat.com>
* Fix for tests/functional/glusterd/test_brick_status_when_quorum_not_met.py (Sri, 2018-09-20, 1 file changed, -1/+7)
  Change-Id: I3c6a9169f3b6afb7f92aca28f59efa42ca2a6b21
  Signed-off-by: Sri <sselvan@redhat.com>
* Fix for test_add_brick_when_quorum_not_met.py (Sri, 2018-09-18, 1 file changed, -20/+18)
  Change-Id: If8d18cd60d1993ce46fa019b659770cf6e7aa6b8
  Signed-off-by: Sri <sselvan@redhat.com>
* Fix spelling mistake across the codebase (Nigel Babu, 2018-08-07, 18 files changed, -33/+33)
  Change-Id: I46fc2feffe6443af6913785d67bf310838532421
* Tests: ensure volume deletion works when intended (Yaniv Kaul, 2018-07-17, 2 files changed, -24/+31)
  It could have failed without anyone noticing. Added 'xfail' - do we expect the deletion to fail - and changed the tests accordingly. On the way, ensured stdout and stderr are logged in case of such failures.
  Change-Id: Ibdf7a43cadb0393707a6c68c19a664453a971eb1
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* Shorten all the logs around verify_io_procs (Yaniv Kaul, 2018-07-17, 8 files changed, -27/+25)
  No functional change, just make the tests a bit more readable. It could be moved to a decorator later on, wrapping tests.
  Change-Id: I484bb8b46907ee8f33dfcf4c960737a21819cd6a
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* glusto/glusterd: Peer status should have FQDN (Sanju Rakonde, 2018-07-01, 1 file changed, -0/+152)
  In this test case we check whether the peer status shows the FQDN or not.
  Steps followed:
  1. Peer probe from N1 to a new node N2 using the hostname.
  2. Check peer status on N2; it should have the FQDN of N1.
  3. Check peer status on N1; it should have the FQDN of N2.
  4. Create a distributed volume with a single brick on each node.
  5. Start the volume.
  6. Peer probe a new node N3 using its IP.
  7. Add a brick from node3 to the volume; add-brick should succeed.
  8. Get volume info; it should have the correct information.
  Change-Id: I7f2bb8cecf28e61273ca83d7e3ad502ced979c5c
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
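Steps 1-3 are a hostname-based probe followed by peer-status checks on both sides; a hedged sketch with placeholder FQDNs, assuming glusto's `g.run`.

```python
from glusto.core import Glusto as g

n1, n2 = "node1.example.com", "node2.example.com"  # placeholder FQDNs

# Probe N2 from N1 by hostname.
ret, _, err = g.run(n1, "gluster peer probe %s" % n2)
assert ret == 0, err

# Peer status on each node should list the other node's FQDN.
_, out, _ = g.run(n2, "gluster peer status")
assert n1 in out, "peer status on N2 does not show N1's FQDN"

_, out, _ = g.run(n1, "gluster peer status")
assert n2 in out, "peer status on N1 does not show N2's FQDN"
```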
* Test peer probe while snapd is running (Rajesh Madaka, 2018-07-01, 1 file changed, -0/+107)
  -> Create a volume
  -> Create a snap of that volume
  -> Enable uss
  -> Check whether snapd is running or not
  -> Probe a new node while snapd is running
  Change-Id: Ic28036436dc501ed894f3f99060d0297dd9d3c8a
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
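The snap/uss/probe sequence maps onto the CLI roughly as below; a sketch with placeholder names, assuming glusto's `g.run`.

```python
from glusto.core import Glusto as g

node, vol, new_node = "server1", "testvol", "server4"  # placeholders

# Create a snapshot and enable USS so snapd starts for the volume.
g.run(node, "gluster snapshot create snap1 %s no-timestamp" % vol)
g.run(node, "gluster volume set %s features.uss enable" % vol)

# Volume status lists the snapshot daemon once USS is enabled.
_, out, _ = g.run(node, "gluster volume status %s" % vol)
assert "Snapshot Daemon" in out, "snapd is not running"

# Probe a new node while snapd is running.
ret, _, err = g.run(node, "gluster peer probe %s" % new_node)
assert ret == 0, err
```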