path: root/tests
...
* Glusterd split brain scenario with volume options reset and quorum
  (Bala Konda Reddy Mekala, 2018-05-07, 1 file, -0/+151)
      Change-Id: I2802171403490c9de715aa281fefb562e08249fe
      Signed-off-by: Bala Konda Reddy Mekala <bmekala@redhat.com>
* Test case involving killing bricks in cyclic order and listing the
  directories after healing from the mount point
  (Karan Sandha, 2018-05-04, 1 file, -0/+206)
      Change-Id: Ifcd1bac10982a0a2f348e4475ad167921625affa
      Signed-off-by: Karan Sandha <ksandha@redhat.com>
      Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Adding test case: test_existing_glustershd_should_take_care_of_self_healing
  (Vijay Avuthu, 2018-05-04, 1 file, -7/+232)
      Description: Test script which verifies that the existing glustershd
      should take care of self-healing
      * Create and start a Replicate volume
      * Check the glustershd processes - note the pids
      * Bring down one brick (say brick1) without affecting the cluster
      * Create 5000 files on the volume
      * Bring brick1, which was killed in the previous step, back up
      * Check the heal info - proactive self-healing should start
      * Bring down brick1 again
      * Wait for 60 sec and bring brick1 back up
      * Check the glustershd processes - the pids should be different
      * Monitor the heal till it is complete
      Change-Id: Ib044ec60214171f136cc4c2f9225b8fe62e6214d
      Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
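The pid comparison and heal monitoring in this test boil down to a few gluster CLI calls. A minimal sketch, assuming a volume named testvol and the gluster CLI on PATH (the volume name and polling interval are placeholders, not taken from the test itself):

    # Sketch only: snapshot glustershd pids, then poll heal info until no
    # entries remain.  "testvol" is a placeholder volume name.
    import subprocess
    import time

    def shd_pids():
        out = subprocess.run(["pgrep", "-f", "glustershd"],
                             capture_output=True, text=True)
        return set(out.stdout.split())

    def heal_pending(volname):
        info = subprocess.run(["gluster", "volume", "heal", volname, "info"],
                              capture_output=True, text=True).stdout
        counts = [line.split(":", 1)[1].strip()
                  for line in info.splitlines()
                  if line.startswith("Number of entries:")]
        return any(c not in ("0", "-") for c in counts)

    pids_before = shd_pids()
    # ... bring a brick down, create files, bring it back up, cycle it again ...
    while heal_pending("testvol"):
        time.sleep(10)                  # wait for self-heal to finish
    pids_after = shd_pids()
    print("glustershd restarted:", pids_before != pids_after)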
* Test that write I/O on the mount point is resumed when client-side quorum
  is restored (x3)
  (Vitalii Koriakov, 2018-05-04, 1 file, -0/+561)
      Change-Id: Ic0aaccdbf6938702ec1dbb44e888e45eb9f21e28
      Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Adding test case: test_client_side_quorum_with_fixed_for_cross2
  (Vijay Avuthu, 2018-05-04, 1 file, -1/+497)
      Description: Test script to verify client-side quorum with "fixed" for a
      cross-2 volume
      * Disable the self-heal daemon
      * Set cluster.quorum-type to fixed
      * Start I/O (write and read) from the mount point - must succeed
      * Bring down brick1
      * Start I/O (write and read) - must succeed
      * Set cluster.quorum-count to 1
      * Start I/O (write and read) - must succeed
      * Set cluster.quorum-count to 2
      * Start I/O (write and read) - read must pass, write will fail
      * Bring brick1 back online
      * Start I/O (write and read) - must succeed
      * Bring down brick2
      * Start I/O (write and read) - read must pass, write will fail
      * Set cluster.quorum-count to 1
      * Start I/O (write and read) - must succeed
      * Set cluster.quorum-count back to 2 and cluster.quorum-type to auto
      * Start I/O (write and read) - must succeed
      * Bring brick2 back online
      * Bring down brick1
      * Start I/O (write and read) - read must pass, write will fail
      * Set quorum-type to none
      * Start I/O (write and read) - must succeed
      Change-Id: I415aba5db211607476fd7345c8ca6f4d49373402
      Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
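Every quorum transition in that sequence is a single volume-option change. A hedged sketch of the option flips the test walks through (the volume name testvol is a placeholder):

    # Sketch: the client-side quorum option changes exercised by the test.
    # "testvol" is a placeholder volume name, not from the test itself.
    import subprocess

    def vol_set(option, value, volname="testvol"):
        subprocess.run(["gluster", "volume", "set", volname, option, value],
                       check=True)

    vol_set("cluster.self-heal-daemon", "off")  # disable the self-heal daemon
    vol_set("cluster.quorum-type", "fixed")     # quorum governed by quorum-count
    vol_set("cluster.quorum-count", "1")        # writes succeed with 1 brick up
    vol_set("cluster.quorum-count", "2")        # writes need both bricks up
    vol_set("cluster.quorum-type", "auto")      # automatic (majority) quorum
    vol_set("cluster.quorum-type", "none")      # quorum enforcement disabled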
* Create cluster with short names and create volumes using IP and FQDN
  (Gaurav Yadav, 2018-05-03, 1 file, -0/+184)
      - Peer probe using the short name
      - Create a volume using the IP
      - Start/stop/get volume info
      - Create a volume using the FQDN
      - Start/stop/get volume info
      Change-Id: I2d55944035c44e8ee360beb4ce41550338586d15
      Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
      Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
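The same flow can be reproduced directly with the gluster CLI. A minimal sketch, with hostnames, IPs, and brick paths that are purely illustrative:

    # Sketch: probe a peer by short name, then create/start/query volumes
    # addressed by IP and by FQDN.  All hosts and brick paths are placeholders.
    import subprocess

    def gluster(*args):
        # --mode=script avoids interactive y/n prompts (e.g. on volume stop)
        return subprocess.run(["gluster", "--mode=script", *args], check=True)

    gluster("peer", "probe", "server2")                    # short name
    gluster("volume", "create", "vol_ip",
            "10.70.35.11:/bricks/brick1/vol_ip", "force")
    gluster("volume", "start", "vol_ip")
    gluster("volume", "info", "vol_ip")
    gluster("volume", "stop", "vol_ip")

    gluster("volume", "create", "vol_fqdn",
            "server2.example.com:/bricks/brick1/vol_fqdn", "force")
    gluster("volume", "start", "vol_fqdn")
    gluster("volume", "info", "vol_fqdn")
    gluster("volume", "stop", "vol_fqdn")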
* Test directory - custom extended attribute validation (getfattr, setfattr)
  (Miroslav Asenov, 2018-05-03, 1 file, -0/+362)
      Test optimisations:
      - Optimized imports
      - Reduced the count of local variables
      - Added more logging points with information about the mount point and
        directories
      - Included more log end points and used the glusto framework dht
        functions
      Changes:
      - Copyright years
      - Removed the NFS mount point, since NFS does not support extra
        attributes
      - Improved layout validations
      - Fixed typos in logs
      - Updated comments
      Change-Id: If51d033d726edf2344af9aeba1246d4d6591f5c0
* Adding test case test_client_side_quorum_with_auto_option_overwrite_fixed
  (Vijay Avuthu, 2018-05-02, 1 file, -2/+152)
      Description: Test script to verify client-side quorum with the auto
      option
      * Check the default value of cluster.quorum-type
      * Try to set any junk value to cluster.quorum-type other than
        {none, auto, fixed}
      * Check the default value of cluster.quorum-count
      * Set cluster.quorum-type to fixed and cluster.quorum-count to 1
      * Start I/O from the mount point
      * Kill 2 of the brick processes from each replica set
      * Set cluster.quorum-type to auto
      Change-Id: I102373d1a53635563909e4fb80a01d98c24d3355
      Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
* Test self-heal of 50k files (heal command)
  (Vitalii Koriakov, 2018-05-02, 1 file, -0/+189)
      Change-Id: I221b49315db8bc02873fc133ff12837954f0c232
      Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Creation of clone from snapshot of one volume
  (srivickynesh, 2018-04-30, 1 file, -0/+248)
      Change-Id: Ice1e7139613c0d2f15c95e86e6c1e7b595d390a5
      Signed-off-by: srivickynesh <sselvan@redhat.com>
* Test Self-Heal of Symbolic Links (heal command)
  (Vitalii Koriakov, 2018-04-30, 1 file, -0/+245)
      Change-Id: Ie4a4b323e2b7e57e3896550b6f9b7db28fba03b7
      Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Remove logical error in CVT test
  (Nigel Babu, 2018-04-26, 1 file, -8/+0)
      This step is not needed at all. It appears to be a copy-paste error or
      some similar mistake that was missed in review and a few rounds of
      debugging.
      Change-Id: I232f68c846ebf18a106554c1b0214748f2cdc391
* Test self-heal of files with different file types with default configuration
  (Vitalii Koriakov, 2018-04-23, 1 file, -0/+193)
      Change-Id: I84b789f9c0204ca0f0efb40a9a01215902c0ee1d
      Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Adding snapshot delete testcase
  (srivickynesh, 2018-04-23, 1 file, -0/+196)
      Change-Id: Ide757078782ad5337d501f3c3ca39036910d995b
      Signed-off-by: srivickynesh <sselvan@redhat.com>
* Test data self-heal of files when "self-heal-algorithm" option value is
  "full" (default)
  (Vitalii Koriakov, 2018-04-20, 1 file, -0/+148)
      Change-Id: If916d20b0d7c9ded6fb1fc929d9ff1e7719d9594
      Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Test data self-heal of files when "self-heal-algorithm" option value is
  "diff" (default)
  (Vitalii Koriakov, 2018-04-20, 1 file, -0/+150)
      Change-Id: I34a196e8fc764d87e877a082be2b0575bb1b3b40
      Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Test data self-heal of files when "self-heal-algorithm" option value is
  "diff" (heal command)
  (Vitalii Koriakov, 2018-04-20, 1 file, -0/+204)
      Change-Id: Id310e0c17a872d8586ad8c7de79f1f68b93edb0a
      Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Test case for verifying the checksum of the mount point before and after
  changing the network ping timeout
  (Rajesh Madaka, 2018-04-20, 1 file, -0/+179)
      Change-Id: I8f3636cd899e536d2401a8cd93b98bf66ceea0f7
      Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Test brick process should not be started on read only storage_node disks
  (Vitalii Koriakov, 2018-04-19, 1 file, -3/+171)
      Change-Id: Id0d9e468aaf0061e9ff0f5cc534c06017e97b793
      Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Test volume set option data-self-heal
  (Vitalii Koriakov, 2018-04-19, 1 file, -0/+434)
      Change-Id: I1e0e291954533e602a50d3f6c25365bb0b68b926
      Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Fix pylint failures on master
  (Nigel Babu, 2018-04-17, 2 files, -57/+52)
      Change-Id: I43a5b87c4acfd3df9483ca869d926714325ae1b9
* Optimized imports and updated comments
  (Miroslav Asenov, 2018-04-16, 1 file, -22/+43)
      Change-Id: Ib9f4ca5cda02ac1fe66a5c7cdc599255f2fadb4d
* Added test case - dht rename directory
  (Miroslav Asenov, 2018-04-16, 1 file, -0/+273)
      Change-Id: I243a8ecf57483c20e5060351a9f24e7687ccdcf4
* Test client-side quorum with the auto option for a x2 volume; with client
  quorum as auto, the first brick must be up to have a rw filesystem in a x2
  volume
  (Vitalii Koriakov, 2018-03-28, 1 file, -0/+285)
      Change-Id: I98b0808070e6d254b1deeb1a3a744d19adccbf03
      Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Fix up coding style issues in tests
  (Nigel Babu, 2018-03-27, 39 files, -816/+686)
      Change-Id: I14609030983d4485dbce5a4ffed1e0353e3d1bc7
* Fixing the test_self_heal_when_io_is_in_progress testcase for dispersed
  volume
  (ShwethaHP, 2018-03-20, 1 file, -0/+21)
      Refer to bug: https://bugzilla.redhat.com/show_bug.cgi?id=1470938
      Change-Id: Iea327d87c6decbd0d607cb4abcb55384e8463614
      Signed-off-by: ShwethaHP <spandura@redhat.com>
* Test Entry-Self-Heal (heal command)
  (Vitalii Koriakov, 2018-03-05, 1 file, -0/+251)
      Change-Id: Iaecdf6ad44677891340713a5c945a4bdc30ce527
      Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Impact of replace brick for glustershd on replicate and non-replicate volumes
  (Vitalii Koriakov, 2018-03-05, 1 file, -0/+159)
      Change-Id: I99da69377658f3c5f47722dbc3edb216995e9fa4
      Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* glusterd test case: check rebalance status from a newly probed node
  (Sanju Rakonde, 2018-02-27, 1 file, -0/+162)
      In this test case:
      1. Create a volume and mount it
      2. Create some data
      3. Add a brick
      4. Start rebalance
      5. Probe a new node
      6. Check the rebalance status from the new node
      We should be able to check the rebalance status from the newly probed
      node.
      Change-Id: Ib09b468dcd3e81eb01f873e0491afe5ecf5124cc
      Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
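The CLI sequence behind those steps is short; a hedged sketch (volume name, brick path, and host names are placeholders, and the last command would be issued on the newly probed node):

    # Sketch: expand a volume, start rebalance, probe a new peer, and query
    # rebalance status.  Volume, brick, and hosts are placeholders.
    import subprocess

    def gluster(*args):
        return subprocess.run(["gluster", "--mode=script", *args], check=True)

    gluster("volume", "add-brick", "testvol",
            "server1:/bricks/brick2/testvol")          # step 3: add a brick
    gluster("volume", "rebalance", "testvol", "start")  # step 4
    gluster("peer", "probe", "server4")                 # step 5: new node
    # step 6: run on server4; the status should be visible there as well
    gluster("volume", "rebalance", "testvol", "status")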
* glusterd test cases: validate volume create operation
  (Sanju Rakonde, 2018-02-27, 1 file, -0/+200)
      In this test case the following volume create operations are validated:
      - creating a volume with a non-existing brick path
      - creating a volume with an already used brick
      - creating a volume with an already existing volume name
      - bringing the bricks online with volume start force
      - creating a volume with bricks in another cluster
      - creating a volume when one of the brick nodes is down
      Change-Id: I796c8e9023244c592c88116cf3baff52ddade48f
      Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
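Most of those negative checks can be expressed as CLI calls that are expected to fail. A minimal sketch under assumed names (the existing volume testvol, its brick path, and the bogus path are all placeholders):

    # Sketch: negative volume-create checks - each command is expected to
    # return a non-zero exit code.  Names and paths are placeholders.
    import subprocess

    def gluster_fails(*args):
        rc = subprocess.run(["gluster", "--mode=script", *args]).returncode
        assert rc != 0, "command unexpectedly succeeded: %s" % (args,)

    # non-existing brick path
    gluster_fails("volume", "create", "newvol", "server1:/no/such/brick")
    # brick already used by the existing volume "testvol"
    gluster_fails("volume", "create", "newvol",
                  "server1:/bricks/brick1/testvol")
    # volume name that already exists
    gluster_fails("volume", "create", "testvol",
                  "server1:/bricks/brick3/testvol")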
* Enable snapshot auto-delete and validate with multiple snapshot creation
  (srivickynesh, 2018-02-27, 1 file, -0/+147)
      Change-Id: Iabe08da9676b027de7b46622ee73162dcbffd98c
      Signed-off-by: srivickynesh <sselvan@redhat.com>
* Changing replica 3 to Arbiter Volume without IO
  (Karan Sandha, 2018-02-16, 3 files, -0/+178)
      Change-Id: I35568ef8234bc11a8bcf775315c24d9914fbb99d
      Signed-off-by: Karan Sandha <ksandha@redhat.com>
* test: snapshot create
  (Sunny Kumar, 2018-02-16, 1 file, -0/+204)
      The purpose of this test is to validate snapshot create.
      Change-Id: Ia2941a45ee62661bcef855ed4ed05a5c0aba6fb7
      Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* test: snapshot restore
  (Sunny Kumar, 2018-02-16, 1 file, -0/+279)
      The purpose of this test is to validate restore of a snapshot.
      Change-Id: Icd73697b10bbec4a1a9576420207ebb26cd69139
      Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* test: create >256 snaps for a volume
  (Sunny Kumar, 2018-02-16, 1 file, -0/+174)
      The purpose of this test is to validate creating more than 256 snapshots.
      Change-Id: Iea5e2ddcc3a5ef066cf4f55e1895947326a07904
      Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* Test client-side quorum option as auto should be local to the volume and not
  global to the whole cluster
  (Vitalii Koriakov, 2018-02-16, 1 file, -3/+369)
      Change-Id: I9cd8ae1f490bc870540657b4f309197f8cee737e
      Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* glusterd test case: validating add-brick functionality
  (Sanju Rakonde, 2018-02-14, 1 file, -0/+134)
      In this test case the following are validated:
      1. adding a single brick to a replicated volume
      2. adding non-existing bricks to a volume
      3. adding bricks from a node which is not part of the cluster
      4. triggering rebalance start after add-brick
      Change-Id: I982ff42dcbe6cd0cfbf3653b8cee0b269314db3f
      Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd test case: validating volume operations
  (Sanju Rakonde, 2018-02-14, 1 file, -0/+148)
      In this test case basic volume operations are validated, i.e., starting,
      stopping and deleting a non-existing volume, creating all types of
      volumes, creating a volume using a brick from a node which is not part
      of the cluster, starting an already started volume, stopping a volume
      twice, deleting a volume twice, and validating the volume info and
      volume list commands. These commands internally validate the xml output
      as well.
      Change-Id: Ibf44d24e678d8bb14aa68bdeff988488b74741c6
      Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* Concurrent volume set on different volumes simultaneously should succeed
  (Gaurav Yadav, 2018-02-14, 1 file, -0/+107)
      It includes:
      - Create 2 volumes
      - Run concurrent set operations on both the volumes
      - Check for errors or whether any core was generated
      Change-Id: I5f735290ff57ec5e9ad8d85fd5d822c739dbbb5c
      Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
* Test cases for performing NFS disable and enable, performing NFS mount and
  unmount on all volumes, and performing different types of quorum settings
  (Rajesh Madaka, 2018-02-14, 1 file, -0/+172)
      -> Set nfs.disable off
      -> Mount it with NFS and unmount it
      -> Set nfs.disable to enable
      -> Mount it with NFS
      -> Set nfs.disable to disable
      -> Enable server quorum
      -> Set the quorum ratio to numbers and percentages; negative numbers
         should fail, negative percentages should fail, fractions should fail,
         negative fractions should fail
      Change-Id: I6c4f022d571378f726b1cdbb7e74fdbc98d7f8cb
      Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
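The NFS toggle and the quorum-ratio checks are plain volume-set calls; the ratio is a cluster-wide option set on the reserved volume name "all". A hedged sketch (volume name, host, and mount path are placeholders):

    # Sketch: toggle Gluster NFS and feed server-quorum-ratio various values.
    # "testvol", "server1", and /mnt/nfs are placeholders.
    import subprocess

    def gluster(*args):
        return subprocess.run(["gluster", "--mode=script", *args])

    gluster("volume", "set", "testvol", "nfs.disable", "off")   # enable gNFS
    subprocess.run(["mount", "-t", "nfs", "-o", "vers=3",
                    "server1:/testvol", "/mnt/nfs"])
    subprocess.run(["umount", "/mnt/nfs"])
    gluster("volume", "set", "testvol", "nfs.disable", "on")    # disable gNFS

    gluster("volume", "set", "testvol",
            "cluster.server-quorum-type", "server")             # server quorum
    # the cluster-wide ratio lives on the special volume name "all";
    # negative and fractional values are expected to be rejected
    for ratio in ("51%", "90", "-10", "0.5", "-0.5%"):
        gluster("volume", "set", "all", "cluster.server-quorum-ratio", ratio)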
* Adding test case: Negative test - Exercise add-brick command
  (Prasad Desala, 2018-02-14, 1 file, -0/+188)
      This test case covers the below scenarios:
      1) Add-brick without a volname
      2) Adding a duplicate brick
      3) Adding a brick which is already part of another volume
      4) Adding a nested brick, i.e. a brick inside another brick
      5) Adding a brick to a non-existent volume
      6) Adding a brick from a peer which is not in the cluster
      Change-Id: I2d68715facabaa172db94afc7e1b64f95fb069a7
      Signed-off-by: Prasad Desala <tdesala@redhat.com>
* Test Self Heal Info while accessing file
  (Vitalii Koriakov, 2018-02-13, 1 file, -0/+230)
      Change-Id: Ibef22a1719fe44aac20024d82fd7f2425945149c
      Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Quorum related messages in logs
  (Rajesh Madaka, 2018-02-08, 1 file, -0/+292)
      Description:
      - Create two volumes
      - Set server quorum on both the volumes
      - Set the server quorum ratio to 90%
      - Stop the glusterd service on any one of the nodes; a quorum regain
        message should be recorded with message id 106002 for both the volumes
        in /var/log/messages and /var/log/glusterfs/glusterd.log
      - Start the glusterd service on the same node; a quorum regain message
        should be recorded with message id 106003 for both the volumes in
        /var/log/messages and /var/log/glusterfs/glusterd.log
      Change-Id: I9ecab59b6131fc9c4c58bb972b3a41f15af1b87c
      Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
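Checking for those message ids reduces to toggling glusterd and grepping the log named above. A minimal sketch, assuming systemd-managed glusterd and the default log location (which node's log records the event depends on where quorum is evaluated):

    # Sketch: stop/start glusterd and look for the quorum message ids in
    # glusterd.log.  Assumes systemd and the default log path.
    import subprocess

    def log_has(msg_id, path="/var/log/glusterfs/glusterd.log"):
        with open(path) as f:
            return any(msg_id in line for line in f)

    subprocess.run(["systemctl", "stop", "glusterd"], check=True)
    # the quorum event (message id 106002) should now appear in the logs
    print("106002 seen:", log_has("106002"))

    subprocess.run(["systemctl", "start", "glusterd"], check=True)
    # the regain event (message id 106003) should follow the restart
    print("106003 seen:", log_has("106003"))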
* quota: check if a quota limit can be set on a directory that does not exist
  (hari gowtham, 2018-02-07, 1 file, -0/+81)
      Once a volume is mounted and quota is enabled, we have to make sure that
      a quota limit cannot be set on a directory that does not exist.
      Change-Id: Ic89551c6d96b628fe04c19605af696800695721d
      Signed-off-by: hari gowtham <hgowtham@redhat.com>
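In CLI terms this is a quota enable followed by a limit-usage call that must fail. A hedged sketch (volume name and directory are placeholders):

    # Sketch: quota limit-usage on a non-existent directory should be rejected.
    # "testvol" and the directory name are placeholders.
    import subprocess

    subprocess.run(["gluster", "volume", "quota", "testvol", "enable"],
                   check=True)
    rc = subprocess.run(["gluster", "volume", "quota", "testvol",
                         "limit-usage", "/no_such_dir", "1GB"]).returncode
    assert rc != 0, "setting a quota limit on a missing directory must fail"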
* Test Data-Self-Heal daemons off (heal command)
  (Vitalii Koriakov, 2018-02-07, 1 file, -0/+416)
      Change-Id: If92b6f756f362cb4ae90008c6425b6c6652e3758
      Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Creating snapshot when brick is down
  (srivickynesh, 2018-02-06, 2 files, -0/+146)
      Change-Id: Ic1191e993db10a110fc753436ec60051adfd5350
      Signed-off-by: srivickynesh <sselvan@redhat.com>
* Test case for performing different combinations of gluster volume get
  functionalities
  (Rajesh Madaka, 2018-02-05, 1 file, -0/+207)
      Test steps:
      1. Create a gluster cluster
      2. Get an option from a non-existing volume
         # gluster volume get <non-existing vol> io-cache
      3. Get all options from a non-existing volume
         # gluster volume get <non-existing volume> all
      4. Provide an incorrect command syntax to get options from a volume
         # gluster volume get <vol-name>
         # gluster volume get
         # gluster volume get io-cache
      5. Create any type of volume in the cluster
      6. Get the value of a non-existing option
         # gluster volume get <vol-name> temp.key
      7. Get all options set on the volume
         # gluster volume get <vol-name> all
      8. Get a specific option set on the volume
         # gluster volume get <vol-name> io-cache
      9. Set an option on the volume
         # gluster volume set <vol-name> performance.low-prio-threads 14
      10. Get all the options set on the volume and check for low-prio-threads
          # gluster volume get <vol-name> all | grep -i low-prio-threads
      11. Get all the options set on the volume
          # gluster volume get <vol-name> all
      12. Check for any cores in "cd /"
      Change-Id: Ifd7697e68d7ecf297d7be75680a5681686c51ca0
      Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
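The positive-path part (steps 9-10) can be checked programmatically by setting the option and confirming it shows up in "volume get all". A minimal sketch, with the volume name as a placeholder:

    # Sketch: set performance.low-prio-threads and confirm it is reported
    # by "volume get all".  "testvol" is a placeholder volume name.
    import subprocess

    subprocess.run(["gluster", "volume", "set", "testvol",
                    "performance.low-prio-threads", "14"], check=True)
    out = subprocess.run(["gluster", "volume", "get", "testvol", "all"],
                         capture_output=True, text=True).stdout
    assert any("low-prio-threads" in line and "14" in line
               for line in out.splitlines()), "option not reflected in get all"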
* Adding test case test_client_side_quorum_with_fixed_validate_max_bricks
  (Vijay Avuthu, 2018-02-02, 1 file, -0/+50)
      Description: Test script for client-side quorum with "fixed", which
      should validate the maximum number of bricks it will accept
      * Set cluster.quorum-type to fixed
      * Set cluster.quorum-count to a number higher than the number of
        replicas in a sub-volume
      * The above step should fail
      Change-Id: I83952a07d36f5f890f3649a691afad2d0ccf037f
      Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
* glusterd: Peer detach patch for test case
  (Rajesh Madaka, 2018-02-02, 1 file, -0/+134)
      -> Detaching a specified server from the cluster
      -> Detaching the detached server again
      -> Detaching an invalid host
      -> Detaching a non-existent host
      -> Checking whether a core file was created or not
      -> Peer detach of a node which hosts the bricks of a created volume
      -> Peer detach force of a node which is hosting bricks of a volume
      Change-Id: I6a1fce6e7c626f822ddbc43ea4d2fcd4bc3262c8
      Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
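These scenarios map onto a handful of "gluster peer detach" invocations with the expected outcome asserted per call. A hedged sketch (host names are placeholders, and the force-detach outcome is left unasserted since it depends on the gluster version):

    # Sketch: peer-detach scenarios - expected pass/fail asserted per call.
    # Host names are placeholders.
    import subprocess

    def detach(host, force=False):
        cmd = ["gluster", "--mode=script", "peer", "detach", host]
        if force:
            cmd.append("force")
        return subprocess.run(cmd).returncode

    assert detach("server2") == 0         # detach a server from the cluster
    assert detach("server2") != 0         # detaching it again should fail
    assert detach("invalid..host") != 0   # invalid host
    assert detach("no-such-host") != 0    # non-existent host
    # detaching a peer that still hosts bricks of a volume should be rejected
    assert detach("server3") != 0
    detach("server3", force=True)         # force variant; outcome not asserted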
* Test case for validating peer probe with an invalid IP, a non-existing host,
  and a non-existing IP
  (Rajesh Madaka, 2018-02-02, 2 files, -0/+103)
      Library for checking whether a core file was created or not: added an
      is_core_file_created() function to lib_utils.py
      Test description: Test script to verify peer probe with a non-existing
      host and an invalid IP. The peer probe has to fail for the non-existing
      host, glusterd services must be up and running after the invalid peer
      probe, and no core file should get created under "/", /tmp, or the
      /var/log/core directory.
      Adding glusterd peer probe test cases with modifications according to
      review comments; adding a lib for core file verification.
      Change-Id: I0ebd6ee2b340d1f1b01878cb0faf69f41fec2e10
      Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
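The core of that check is a probe that must fail, a liveness check on glusterd, and a scan of the listed directories for core files. A minimal sketch, assuming an illustrative unresolvable host name:

    # Sketch: probe an invalid host (must fail), confirm glusterd is still
    # running, and look for core files.  The host name is a placeholder.
    import glob
    import subprocess

    rc = subprocess.run(["gluster", "peer", "probe",
                         "invalid.host.name"]).returncode
    assert rc != 0, "peer probe of a non-existing host must fail"

    assert subprocess.run(["pidof", "glusterd"],
                          capture_output=True).returncode == 0, "glusterd died"

    cores = (glob.glob("/core*") + glob.glob("/tmp/core*")
             + glob.glob("/var/log/core/core*"))
    assert not cores, "unexpected core files: %s" % cores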