path: root/tests/functional/glusterd
Commit log: message (author, date, files changed, lines -/+)
* Changing systemctl to service to fix jira issue RHGSQE-197  (kshithijiyer, 2019-05-23, 1 file, -2/+4)
  Bug https://bugzilla.redhat.com/show_bug.cgi?id=1690254 has to be fixed
  before merging this patch.
  Change-Id: I90e669269fafa9d0a064a64883c3e4b88080d25f
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Changing error messages to be checked as per new messages.  (kshithijiyer, 2019-05-08, 1 file, -2/+18)
  Changing the error message checked when peer detach is issued while bricks
  are present on the node being detached. Adding logic to handle both the new
  and the old error message (a sketch follows this entry).
  Old msg: peer detach: failed: Brick(s) with the peer <my_server> exist in
  cluster
  New msg: peer detach: failed: Peer <my_server> hosts one or more bricks. If
  the peer is in not recoverable state then use either replace-brick or
  remove-brick command with force to remove all bricks from the peer and
  attempt the peer detach again.
  Change-Id: I3d8fdac2c33638ecc2a8b5782c68caebbf17cf41
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
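
A minimal sketch of the dual-message check this entry describes, assuming the
gluster CLI is on PATH; the peer hostname and helper name are illustrative and
not taken from the test code:

    import subprocess

    # Accept either the old or the new "peer detach" failure text.
    OLD_MSG = "Brick(s) with the peer"
    NEW_MSG = "hosts one or more bricks"

    def detach_fails_with_brick_error(peer):
        # Run "gluster peer detach <peer>" and capture stdout and stderr.
        proc = subprocess.run(["gluster", "peer", "detach", peer],
                              capture_output=True, text=True)
        output = proc.stdout + proc.stderr
        # The detach must fail, and the failure text must match one of the
        # two known messages (older vs. newer glusterfs releases).
        return proc.returncode != 0 and (OLD_MSG in output or NEW_MSG in output)

    if __name__ == "__main__":
        print(detach_fails_with_brick_error("server2.example.com"))
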
* Changing error message to be checked in test_volume_get  (kshithijiyer, 2019-05-06, 1 file, -0/+2)
  The error message displayed when we do a gluster v get for options which
  don't exist has been changed. Adding an if-based logic which can check for
  the old as well as the new error message.
  Old msg: volume get option: failed: Did you mean auth.allow or ...reject?
  New msg: volume get option: failed: Did you mean ctime.noatime?
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
  Change-Id: I9496d391a7da9dba64d3426a024c2b1b68455f20
* Adding test to validate output of profile info  (kshithijiyer, 2019-05-03, 1 file, -0/+223)
  Test Case:
  1) Create a volume and start it.
  2) Mount volume on client and start IO.
  3) Start profile info on the volume.
  4) Run profile info with different parameters and see if all bricks are
     present or not (see the sketch after this entry).
  5) Stop profile on the volume.
  6) Create another volume.
  7) Start profile without starting the volume.
  Change-Id: I6e8ec9285d48c1c828cd1d20bff6ea8f3de064f7
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
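
A rough sketch of step 4, assuming the gluster CLI is on PATH; the volume name
and brick list are placeholders rather than values from the test:

    import subprocess

    VOLNAME = "testvol"
    BRICKS = ["server1:/bricks/brick0/b0", "server2:/bricks/brick0/b1"]

    def profile_info_lists_all_bricks(volname, bricks, option=""):
        # Run "gluster volume profile <vol> info [<option>]" and check that
        # every brick appears somewhere in the output.
        cmd = ["gluster", "volume", "profile", volname, "info"]
        if option:
            cmd.append(option)
        out = subprocess.run(cmd, capture_output=True, text=True).stdout
        return all(brick in out for brick in bricks)

    if __name__ == "__main__":
        # "peek", "incremental" and "cumulative" are the profile info
        # variants exercised in addition to the default output.
        for opt in ("", "peek", "incremental", "cumulative"):
            ok = profile_info_lists_all_bricks(VOLNAME, BRICKS, opt)
            print("profile info", opt or "(default)", "-> all bricks present:", ok)
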
* Adding test for profile operations with one node down  (kshithijiyer, 2019-04-29, 1 file, -0/+220)
  Test Case:
  1) Create a volume and start it.
  2) Mount volume on client and start IO.
  3) Start profile info on the volume.
  4) Stop glusterd on one node.
  5) Run profile info with different parameters and see if all bricks are
     present or not.
  6) Stop profile on the volume.
  Change-Id: Ie573414816362ebbe30d2c419fd0e348522ceaec
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Test case to detach node used for mounting  (kshithijiyer, 2019-04-17, 1 file, -0/+219)
  Test case:
  1. Create a 1x3 volume with only 3 nodes from the cluster.
  2. Mount the volume on the client node using the IP of the fourth node.
  3. Write IOs to the volume.
  4. Detach node N4 from the cluster.
  5. Create a new directory on the mount point.
  6. Create a few files using the same command used in step 3.
  7. Add three more bricks to make the volume 2x3 using the add-brick command
     (see the sketch after this entry).
  8. Do a gluster volume rebalance on the volume.
  9. Create more files from the client on the mount point.
  10. Check for files on bricks from both replica sets.
  11. Create a new directory from the client on the mount point.
  12. Check for the directory in both replica sets.
  Change-Id: I228b79955dca565a40994919b2903e59cad7d8f5
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
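
An illustrative sketch of steps 7 and 8: grow the 1x3 volume to 2x3 with
add-brick and then rebalance it. The volume and brick names are examples, not
the ones used in the test; "--mode=script" suppresses interactive CLI prompts.

    import subprocess

    VOLNAME = "testvol"
    NEW_BRICKS = ["server1:/bricks/brick1/b3",
                  "server2:/bricks/brick1/b4",
                  "server3:/bricks/brick1/b5"]

    def gluster(*args):
        # Run a gluster CLI command and raise if it fails.
        cmd = ["gluster", "--mode=script"] + list(args)
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Step 7: add one more replica set of three bricks (1x3 -> 2x3).
    gluster("volume", "add-brick", VOLNAME, "replica", "3", *NEW_BRICKS)
    # Step 8: spread the existing data across both replica sets.
    gluster("volume", "rebalance", VOLNAME, "start")
    gluster("volume", "rebalance", VOLNAME, "status")
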
* Test profile start when quorum is not met  (Bala Konda Reddy M, 2019-04-16, 1 file, -0/+147)
  1. Create a volume
  2. Set the quorum type to server and ratio to 90 (see the sketch after this
     entry)
  3. Stop glusterd randomly on one of the nodes
  4. Start profile on the volume
  5. Start glusterd on the node where it was stopped
  6. Start profile on the volume
  7. Stop profile on the volume where it was started
  Change-Id: Ifeb9fddf6f1a14c9df73ed2f0453636d2853e944
  Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
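
A short sketch of steps 2 and 4, assuming the gluster CLI is on PATH. The
volume name is a placeholder, and the sketch only reports the outcome of the
profile start instead of asserting what it should be.

    import subprocess

    VOLNAME = "testvol"

    def gluster(*args):
        return subprocess.run(["gluster"] + list(args),
                              capture_output=True, text=True)

    # Step 2: server-side quorum with a 90% ratio (the ratio is a
    # cluster-wide option, so it is set on "all", not on a single volume).
    gluster("volume", "set", VOLNAME, "cluster.server-quorum-type", "server")
    gluster("volume", "set", "all", "cluster.server-quorum-ratio", "90%")

    # Step 4: start profiling while glusterd is down on one node and report
    # whatever the CLI returns.
    result = gluster("volume", "profile", VOLNAME, "start")
    print(result.returncode, (result.stdout + result.stderr).strip())
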
* Adding test to run gluster commands when glusterd is down on one node  (kshithijiyer, 2019-04-12, 1 file, -0/+102)
  Change-Id: Ibf41c11a4e98baeaad658ee10ba8a807318504be
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Validating whether peers are connected or not before volume creation  (Bala Konda Reddy M, 2019-04-12, 1 file, -1/+17)
  In Jenkins this case is failing because the peers are not connected at the
  time of volume creation. Now adding a check before creating the volume to
  make sure that the peers are in the cluster and in connected state after
  peer probe (a sketch of such a check follows this entry).
  Change-Id: I8aa9d2c4d1669475dd8867d42752a31604ff572f
  Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
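
A minimal sketch of such a guard, assuming the gluster CLI is on PATH and
polling "gluster pool list"; the server names, timeout, and poll interval are
illustrative:

    import subprocess
    import time

    SERVERS = ["server2.example.com", "server3.example.com"]

    def peer_connected(server, pool_output):
        # "gluster pool list" prints one line per peer: UUID, hostname, state.
        for line in pool_output.splitlines():
            if server in line:
                return "Connected" in line and "Disconnected" not in line
        return False

    def wait_for_peers(servers, timeout=120, interval=5):
        # Poll until every probed server is listed as Connected or time out.
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.run(["gluster", "pool", "list"],
                                 capture_output=True, text=True).stdout
            if all(peer_connected(s, out) for s in servers):
                return True
            time.sleep(interval)
        return False

    if __name__ == "__main__":
        assert wait_for_peers(SERVERS), "peers did not reach Connected state"
        # ... safe to run "gluster volume create ..." from here
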
* Adding code for cleanup of all bricks on each server  (kshithijiyer, 2019-04-09, 1 file, -2/+18)
  Change-Id: I405843e0093ddb7138ee0a8afbfd4cd2f91e6284
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Checking if peers are connected after peer probe  (kshithijiyer, 2019-04-09, 1 file, -0/+24)
  Change-Id: I252ab0c0f6248b9a5c1d7977146c15876e144b38
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Adding code to check if peers are connected in test_spurious_rebalance  (kshithijiyer, 2019-03-29, 1 file, -1/+13)
  Change-Id: I4a1097fbdebd49555fffcfa5fe609f4070e39182
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Validating whether the peer is connected or not  (Bala Konda Reddy M, 2019-03-28, 1 file, -0/+15)
  In Jenkins, right after peer probe, the add-brick function is failing with
  "peer not in cluster". So adding a check that the peer is connected before
  proceeding to the next step.
  Change-Id: I73bf92819ad44f7a6a14795ab07c45d260cd04eb
  Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
* Test case to change reserve limit with remove brick in progress  (kshithijiyer, 2019-03-13, 1 file, -0/+270)
  Change-Id: I53fb7f4cceae395698568129669dc5f3a9a5e4bb
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Test to check remove-brick scenarios w.r.t. glusterd  (kshithijiyer, 2019-02-20, 1 file, -0/+188)
  Change-Id: I1bfa2fb3ae4ff1fc247b40c73f4fade9a3afeede
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Test to reduce volume from 2x3 to 2x2 and to dist  (kshithijiyer, 2019-02-12, 1 file, -0/+115)
  Change-Id: I64309c3b46dc9087eeb3181acba63b981b2ecc6f
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Test to check volume create with glusterd restart on one node  (kshithijiyer, 2019-02-12, 1 file, -0/+116)
  Change-Id: Ica0771bdee1e96e9d6bb5157fb6c2125a4b419f1
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Test case to check peer detach warning  (kshithijiyer, 2019-02-11, 1 file, -0/+83)
  Change-Id: Idc379ad7f31274cc63f384d7223bf769bb89ace3
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Test to check if setting volume level options at cluster level is crashing glusterd  (kshithijiyer, 2019-02-07, 1 file, -0/+72)
  Change-Id: I6ee034f019a4aa36a83e087f2d9fed007e4fd9d7
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Fixed jira issue RHGSQE-33  (kshithijiyer, 2019-01-28, 1 file, -3/+10)
  Altered code to check for daemons only on servers where the bricks for a
  given volume are present.
  Change-Id: I79312f3b09fd5e1b0fdf6db40e29481662e56303
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Fixed jira issue RHGSQE-29 and added quorum reset code  (kshithijiyer, 2019-01-24, 2 files, -30/+50)
  Change-Id: Ibd50170d2c3172d7b98c2174630d31a066762f7c
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Fixed jira issue RHGSQE-31  (kshithijiyer, 2019-01-23, 1 file, -13/+16)
  Change-Id: I627c78792c6c1ea12c4a023095a4a983f8cee9b0
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Reset quorum value to default  (kshithijiyer, 2019-01-23, 1 file, -0/+9)
  Change-Id: I0486abff96ea3ea626ce4d18ac0c24f10ed6a846
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Reverting quorum value back to 51%  (kshithijiyer, 2019-01-23, 1 file, -0/+7)
  Change-Id: I9e61480ff93d0c16b66eeddacdaef715a0b47d1c
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Fix in teardown()  (kshithijiyer, 2019-01-22, 1 file, -10/+19)
  Change-Id: Ie44703a9d114b9ecaa5bbce07a98c8a040393f2c
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Fix: Volume create should fail when a node is down  (Bala Konda Reddy M, 2019-01-22, 1 file, -2/+6)
  Earlier, volume creation randomly selected one node and stopped glusterd on
  it, but the volume type used in the test is pure distribute and the default
  glusto config takes 4 bricks, so the stopped node was not always one of the
  brick nodes. With the fix, the node is now picked randomly from only the
  first 4 nodes used for volume creation, so the create will fail as expected.
  Change-Id: I3cf2fc8281c9747b190e1fe9ef471edbb2c4d2ca
  Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
* Reverting back quorum value to 51%  (kshithijiyer, 2019-01-22, 1 file, -6/+8)
  Change-Id: I7494b877dff64e195a3517af0176e2a00fa2a86c
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Reverting back quorum value to 51%  (kshithijiyer, 2019-01-22, 1 file, -7/+8)
  Change-Id: I7361059b663ea19c08a44c4763881634703d36a4
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Reverting quorum value back to 51%  (kshithijiyer, 2019-01-18, 1 file, -0/+8)
  Change-Id: Ic7f61241addec2ad81c4ef05dd3268c013dbb083
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Reverting the quorum ratio to 51  (Bala Konda Reddy M, 2019-01-18, 1 file, -1/+10)
  Change-Id: Iac928311281fac68a5e0c773d87842a35c8d499e
  Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
* Quorum ratio not reset to default  (root, 2018-12-06, 1 file, -1/+8)
  Earlier the quorum ratio was not reset to default, which affected the
  remaining cases.
  Change-Id: I5b2e6c23c5b9cd4cfb310534a7ff89a99bf0931b
  Signed-off-by: root <root@localhost.localdomain>
* Fix for test_add_identical_brick_new_node.py  (Sri, 2018-09-21, 1 file, -1/+4)
  Change-Id: Ic60e9da0a97818cba59b5be5048492e54fc0edb3
  Signed-off-by: Sri <sselvan@redhat.com>
* Fix for tests/functional/glusterd/test_brick_status_when_quorum_not_met.py  (Sri, 2018-09-20, 1 file, -1/+7)
  Change-Id: I3c6a9169f3b6afb7f92aca28f59efa42ca2a6b21
  Signed-off-by: Sri <sselvan@redhat.com>
* Fix for test_add_brick_when_quorum_not_met.py  (Sri, 2018-09-18, 1 file, -20/+18)
  Change-Id: If8d18cd60d1993ce46fa019b659770cf6e7aa6b8
  Signed-off-by: Sri <sselvan@redhat.com>
* Fix spelling mistake across the codebase  (Nigel Babu, 2018-08-07, 18 files, -33/+33)
  Change-Id: I46fc2feffe6443af6913785d67bf310838532421
* Tests: ensure volume deletion works when intended.  (Yaniv Kaul, 2018-07-17, 2 files, -24/+31)
  It could have failed without anyone noticing.
  Added 'xfail' - do we expect to fail in deletion (and changed tests
  accordingly).
  On the way, ensure stdout and stderr are logged in case of such failures.
  Change-Id: Ibdf7a43cadb0393707a6c68c19a664453a971eb1
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* Shorten all the logs around verify_io_procs  (Yaniv Kaul, 2018-07-17, 8 files, -27/+25)
  No functional change, just make the tests a bit more readable. It could be
  moved to a decorator later on, wrapping tests.
  Change-Id: I484bb8b46907ee8f33dfcf4c960737a21819cd6a
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* glusto/glusterd: Peer status should have FQDN  (Sanju Rakonde, 2018-07-01, 1 file, -0/+152)
  In this test case we are checking whether the peer status shows the FQDN or
  not. Steps followed are:
  1. Peer probe a new node, i.e., N1 to N2, using the hostname
  2. Check peer status on N2; it should have the FQDN of N1
  3. Check peer status on N1; it should have the FQDN of N2
  4. Create a distributed volume with a single brick on each node.
  5. Start the volume
  6. Peer probe a new node N3 using its IP
  7. Add a brick from node3 to the volume; the add-brick should succeed.
  8. Get volume info; it should have the correct information
  Change-Id: I7f2bb8cecf28e61273ca83d7e3ad502ced979c5c
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* Test peer probe while snapd is running.  (Rajesh Madaka, 2018-07-01, 1 file, -0/+107)
  -> Create Volume
  -> Create snap for that volume
  -> Enable uss
  -> Check whether snapd is running or not
  -> Probe a new node while snapd is running
  Change-Id: Ic28036436dc501ed894f3f99060d0297dd9d3c8a
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Test remove-brick operation when quorum not met  (Rajesh Madaka, 2018-06-30, 1 file, -0/+181)
  -> Create volume
  -> Enable server quorum
  -> Set server quorum ratio to 95%
  -> Stop glusterd on any one of the nodes
  -> Perform remove-brick operation
  -> Start glusterd
  -> Check gluster vol info; bricks should be the same before and after
     performing the remove-brick operation.
  Change-Id: I4b7e97ecc6cf6854ec8ff36d296824e549bf9b97
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Volume start and status when one of the bricks is absent  (Rajesh Madaka, 2018-06-28, 1 file, -0/+118)
  -> Create Volume
  -> Remove any one brick directory
  -> Start Volume
  -> Check the gluster volume status
  Change-Id: I83c25c59607d065f8e411e7befa8f934009a9d64
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Test remove brick after restart glusterd  (Mohit Agrawal, 2018-06-27, 1 file, -0/+174)
  1. Trusted storage pool of 4 nodes
  2. Create a distributed-replicated volume with 4 bricks
  3. Start the volume
  4. Fuse mount the gluster volume on a node outside the trusted storage pool
  5. Create some data files
  6. Start the remove-brick operation for one replica pair
  7. Restart glusterd on all nodes
  8. Try to commit the remove-brick operation while rebalance is in progress;
     it should fail (see the sketch after this entry)
  Change-Id: I64901078865ef282b86c9b3ff54d065f976b9e84
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
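
A rough sketch of steps 6 and 8: start remove-brick for one replica pair, then
try to commit while the data migration is still running, which should fail.
The volume and brick names are placeholders, and "--mode=script" suppresses
interactive CLI prompts.

    import subprocess

    VOLNAME = "testvol"
    REPLICA_PAIR = ["server1:/bricks/brick0/b0", "server2:/bricks/brick0/b1"]

    def gluster(*args):
        proc = subprocess.run(["gluster", "--mode=script"] + list(args),
                              capture_output=True, text=True)
        return proc.returncode, (proc.stdout + proc.stderr).strip()

    # Step 6: start the remove-brick operation (this kicks off data migration).
    gluster("volume", "remove-brick", VOLNAME, *REPLICA_PAIR, "start")
    print(gluster("volume", "remove-brick", VOLNAME, *REPLICA_PAIR, "status")[1])

    # Step 8: committing while migration is still in progress must not succeed.
    rc, out = gluster("volume", "remove-brick", VOLNAME, *REPLICA_PAIR, "commit")
    print("commit return code:", rc)
    print(out)
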
* Test brick status after stop glusterd and modify volume  (Mohit Agrawal, 2018-06-27, 1 file, -0/+193)
  1. Trusted storage pool of 2 nodes
  2. Create a distributed volume with 2 bricks
  3. Start the volume
  4. Stop glusterd on node 2
  5. Modify any of the volume options on node 1
  6. Start glusterd on node 2
  7. Check volume status; the brick should get a port
  Change-Id: I688f954f5f53678290e84df955f5529ededaf78f
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* Test restart glusterd while rebalance is in progress  (Rajesh Madaka, 2018-06-27, 1 file, -0/+172)
  -> Create Volume
  -> Fuse mount the volume
  -> Perform I/O on fuse mount
  -> Add bricks to the volume
  -> Perform rebalance on the volume
  -> While rebalance is in progress, restart glusterd on all the nodes in the
     cluster
  Change-Id: I522d7aa55adedc2363bf315f96e51469b6565967
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Test rebalance spurious failure  (Mohit Agrawal, 2018-06-26, 1 file, -0/+174)
  1. Trusted storage pool of 3 nodes
  2. Create a distributed volume with 3 bricks
  3. Start the volume
  4. Fuse mount the gluster volume on a node outside the trusted storage pool
  5. Remove a brick from the volume
  6. Check remove-brick status
  7. Stop the remove-brick process
  8. Perform fix-layout on the volume
  9. Get the rebalance fix-layout status
  10. Create a directory from the mount point
  11. Check the trusted.glusterfs.dht extended attribute for the newly created
      directory on the removed brick (see the sketch after this entry)
  Change-Id: I055438056a9b5df26599a503dd413225eb6f87f5
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
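
A sketch of steps 10 and 11: after creating a directory from the mount point,
read its trusted.glusterfs.dht extended attribute directly on the brick being
removed. The brick-side path is a placeholder, and the command has to run on
the node hosting that brick.

    import subprocess

    BRICK_SIDE_DIR = "/bricks/brick0/b0/newdir"

    # "-e hex" prints the raw DHT layout value of the directory.
    proc = subprocess.run(
        ["getfattr", "-n", "trusted.glusterfs.dht", "-e", "hex", BRICK_SIDE_DIR],
        capture_output=True, text=True)
    print(proc.stdout or proc.stderr)
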
* No errors should be generated in brick logs after deleting files from mountpoint  (Rajesh Madaka, 2018-06-26, 1 file, -0/+179)
  -> Create volume
  -> Mount volume
  -> Write files on mount point
  -> Delete files from mount point
  -> Check for any errors logged in all brick logs (see the sketch after this
     entry)
  Change-Id: Ic744ad04daa0bdb7adcc672360c9ed03f56004ab
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
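
A sketch of the final step: scan the brick logs for error entries after the
files have been deleted. It assumes the usual /var/log/glusterfs/bricks/
location and the standard glusterfs log format where the severity letter
follows the timestamp, e.g. "[2018-06-26 ...] E [MSGID: ...] ...".

    import glob
    import re

    LOG_GLOB = "/var/log/glusterfs/bricks/*.log"

    errors = []
    for path in glob.glob(LOG_GLOB):
        with open(path, errors="replace") as logfile:
            for line in logfile:
                # Collect lines whose severity field is "E" (error).
                if re.search(r"\]\s+E\s+\[", line):
                    errors.append((path, line.strip()))

    print("error lines found:", len(errors))
    for path, line in errors[:10]:
        print(path, ":", line)
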
* Test rebalance hang after stopping glusterd on the other node  (Mohit Agrawal, 2018-06-25, 1 file, -0/+201)
  1. Trusted storage pool of 2 nodes
  2. Create a distributed volume with 2 bricks
  3. Start the volume
  4. Mount the volume
  5. Add some data files on the mount
  6. Start rebalance with force
  7. Stop glusterd on the 2nd node
  8. Check rebalance status; it should not hang
  9. Issue volume-related commands
  Change-Id: Ie3e809e5fe24590eec070607ee99417d0bea0aa0
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* Validate snap info from detached node in the cluster  (Bala Konda Reddy Mekala, 2018-06-25, 1 file, -0/+143)
  Change-Id: Ica3d1175ee5d2c6a45e7b7d6513885ee2b84d960
  Signed-off-by: Bala Konda Reddy Mekala <bmekala@redhat.com>
* glusto-tests/glusterd: add brick should fail, when server quorum is not met  (Sanju Rakonde, 2018-06-25, 1 file, -0/+159)
  Steps followed are:
  1. Create and start a volume
  2. Set cluster.server-quorum-type as server
  3. Set cluster.server-quorum-ratio as 95%
  4. Bring down glusterd on half of the nodes
  5. Confirm that quorum is not met by checking whether the bricks are down.
  6. Perform an add-brick operation, which should fail.
  7. Check whether the added brick is part of the volume.
  Change-Id: I93e3676273bbdddad4d4920c46640e60c7875964
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* Test peer probe after setting global options to the volume  (Rajesh Madaka, 2018-06-24, 1 file, -0/+136)
  -> Set global options and other volume-specific options on the volume:
     gluster volume set VOL nfs.rpc-auth-allow 1.1.1.1
     gluster volume set VOL nfs.addr-namelookup on
     gluster volume set VOL cluster.server-quorum-type server
     gluster volume set VOL network.ping-timeout 20
     gluster volume set VOL nfs.port 2049
     gluster volume set VOL performance.nfs.write-behind on
  -> Peer probe for a new node
  Change-Id: Ifc06159b436e74ae4865ffcbe877b84307d517fd
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>