path: root/tests
Commit log (each entry: subject, author, date, files changed, lines changed)
* Optimized imports and updated comments (Miroslav Asenov, 2018-04-16; 1 file changed, -22/+43)
  Change-Id: Ib9f4ca5cda02ac1fe66a5c7cdc599255f2fadb4d
* Added test case - dht rename directory (Miroslav Asenov, 2018-04-16; 1 file changed, -0/+273)
  Change-Id: I243a8ecf57483c20e5060351a9f24e7687ccdcf4
* Test client-side quorum with the auto option for a x2 volume: with client quorum set to auto, the first brick must be up to have a rw filesystem on a x2 volume (Vitalii Koriakov, 2018-03-28; 1 file changed, -0/+285)
  Change-Id: I98b0808070e6d254b1deeb1a3a744d19adccbf03
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Fix up coding style issues in tests (Nigel Babu, 2018-03-27; 39 files changed, -816/+686)
  Change-Id: I14609030983d4485dbce5a4ffed1e0353e3d1bc7
* Fixing the test_self_heal_when_io_is_in_progress testcase for dispersed volumes (ShwethaHP, 2018-03-20; 1 file changed, -0/+21)
  Refer to bug: https://bugzilla.redhat.com/show_bug.cgi?id=1470938
  Change-Id: Iea327d87c6decbd0d607cb4abcb55384e8463614
  Signed-off-by: ShwethaHP <spandura@redhat.com>
* Test Entry-Self-Heal (heal command) (Vitalii Koriakov, 2018-03-05; 1 file changed, -0/+251)
  Change-Id: Iaecdf6ad44677891340713a5c945a4bdc30ce527
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Impact of replace-brick for glustershd on replicate and non-replicate volumes (Vitalii Koriakov, 2018-03-05; 1 file changed, -0/+159)
  Change-Id: I99da69377658f3c5f47722dbc3edb216995e9fa4
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* glusterd test case: check rebalance status from newly probed node (Sanju Rakonde, 2018-02-27; 1 file changed, -0/+162)
  In this test case:
  1. Create a volume and mount it
  2. Create some data
  3. Add a brick
  4. Start rebalance
  5. Probe a new node
  6. Check rebalance status from the new node
  We should be able to check rebalance status from the newly probed node, as sketched below.
  Change-Id: Ib09b468dcd3e81eb01f873e0491afe5ecf5124cc
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
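A minimal sketch of that flow driving the gluster CLI from Python; the volume name, brick path, and host names are hypothetical, and the commands are assumed to run on a node that is already part of the trusted storage pool.

```python
import subprocess

VOLNAME = "testvol"                                    # hypothetical volume
NEW_BRICK = "server2:/bricks/brick1/testvol_brick2"    # hypothetical brick
NEW_NODE = "server5"                                   # node to be probed

def gluster(*args):
    """Run a gluster CLI command and return its stdout."""
    result = subprocess.run(["gluster", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Grow the volume, start rebalance, then probe a new peer.
gluster("volume", "add-brick", VOLNAME, NEW_BRICK)
gluster("volume", "rebalance", VOLNAME, "start")
gluster("peer", "probe", NEW_NODE)

# Rebalance status should be queryable cluster-wide, including from the
# newly probed node (there it would be run locally the same way).
print(gluster("volume", "rebalance", VOLNAME, "status"))
```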
* glusterd test cases: validate volume create operation (Sanju Rakonde, 2018-02-27; 1 file changed, -0/+200)
  In this test case, volume create operations are validated: creating a volume with a non-existent
  brick path, with an already used brick, with an already existing volume name, bringing the bricks
  online with 'volume start force', creating a volume with bricks from another cluster, and
  creating a volume when one of the brick nodes is down.
  Change-Id: I796c8e9023244c592c88116cf3baff52ddade48f
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* Enable snapshot auto-delete and validate with multiple snapshot creation (srivickynesh, 2018-02-27; 1 file changed, -0/+147)
  Change-Id: Iabe08da9676b027de7b46622ee73162dcbffd98c
  Signed-off-by: srivickynesh <sselvan@redhat.com>
* Changing replica 3 to Arbiter Volume without IO (Karan Sandha, 2018-02-16; 3 files changed, -0/+178)
  Change-Id: I35568ef8234bc11a8bcf775315c24d9914fbb99d
  Signed-off-by: Karan Sandha <ksandha@redhat.com>
* test: snapshot create (Sunny Kumar, 2018-02-16; 1 file changed, -0/+204)
  The purpose of this test is to validate snapshot create.
  Change-Id: Ia2941a45ee62661bcef855ed4ed05a5c0aba6fb7
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* test: snapshot restore (Sunny Kumar, 2018-02-16; 1 file changed, -0/+279)
  The purpose of this test is to validate restore of a snapshot.
  Change-Id: Icd73697b10bbec4a1a9576420207ebb26cd69139
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* test: create >256 snaps for a volume (Sunny Kumar, 2018-02-16; 1 file changed, -0/+174)
  The purpose of this test is to validate creation of more than 256 snapshots for a volume.
  Change-Id: Iea5e2ddcc3a5ef066cf4f55e1895947326a07904
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* Test that the client-side quorum option set to auto is local to a volume and not global to the whole cluster (Vitalii Koriakov, 2018-02-16; 1 file changed, -3/+369)
  Change-Id: I9cd8ae1f490bc870540657b4f309197f8cee737e
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* glusterd test case: validating add brick functionality (Sanju Rakonde, 2018-02-14; 1 file changed, -0/+134)
  In this test case the following are validated:
  1. Adding a single brick to a replicated volume
  2. Adding non-existent bricks to a volume
  3. Adding bricks from a node which is not part of the cluster
  4. Triggering rebalance start after add-brick
  Change-Id: I982ff42dcbe6cd0cfbf3653b8cee0b269314db3f
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd test case: validating volume operations (Sanju Rakonde, 2018-02-14; 1 file changed, -0/+148)
  In this test case basic volume operations are validated: starting, stopping and deleting a
  non-existent volume, creating all types of volumes, creating a volume using a brick from a node
  which is not part of the cluster, starting an already started volume, stopping a volume twice,
  deleting a volume twice, and validating the volume info and volume list commands. These commands
  also internally validate the xml output.
  Change-Id: Ibf44d24e678d8bb14aa68bdeff988488b74741c6
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* Concurrent volume set on different volumes simultaneously should succeed (Gaurav Yadav, 2018-02-14; 1 file changed, -0/+107)
  It includes:
  - Create 2 volumes
  - Run concurrent set operations on both the volumes (see the sketch below)
  - Check for errors or any generated core files
  Change-Id: I5f735290ff57ec5e9ad8d85fd5d822c739dbbb5c
  Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
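A minimal sketch of issuing two volume-set operations concurrently from Python threads; the volume names and the chosen option are hypothetical, and both operations are expected to succeed.

```python
import subprocess
import threading

def set_option(volname, key, value):
    """Run `gluster volume set <volname> <key> <value>` and report the result."""
    result = subprocess.run(["gluster", "volume", "set", volname, key, value],
                            capture_output=True, text=True)
    print(volname, result.returncode, result.stdout.strip(), result.stderr.strip())

# Fire a set operation at two different volumes at the same time.
threads = [
    threading.Thread(target=set_option,
                     args=("testvol1", "performance.readdir-ahead", "on")),
    threading.Thread(target=set_option,
                     args=("testvol2", "performance.readdir-ahead", "on")),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```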
* Test cases for performing NFS disable/enable, NFS mount and unmount on all volumes, and different quorum settings (Rajesh Madaka, 2018-02-14; 1 file changed, -0/+172)
  -> Set nfs.disable off
  -> Mount it with NFS and unmount it
  -> Set nfs.disable on
  -> Mount it with NFS
  -> Set nfs.disable off again
  -> Enable server quorum
  -> Set the quorum ratio to numbers and percentages; negative numbers should fail, negative
     percentages should fail, fractions should fail, negative fractions should fail
     (a sketch of the quorum-ratio checks follows below)
  Change-Id: I6c4f022d571378f726b1cdbb7e74fdbc98d7f8cb
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
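A minimal sketch of the server-quorum-ratio part, assuming (per the commit message) that whole numbers and percentages are accepted while negative or fractional values are rejected; the volume name is hypothetical.

```python
import subprocess

VOLNAME = "testvol"  # hypothetical volume

def run(*args):
    return subprocess.run(["gluster", *args], capture_output=True, text=True)

# Enable server-side quorum on the volume.
run("volume", "set", VOLNAME, "cluster.server-quorum-type", "server")

# Valid ratios: plain numbers and percentages should be accepted.
for ratio in ("51", "90%"):
    assert run("volume", "set", "all", "cluster.server-quorum-ratio", ratio).returncode == 0

# Invalid ratios: negative numbers, negative percentages and fractions should be rejected.
for ratio in ("-10", "-25%", "0.5", "-0.5"):
    assert run("volume", "set", "all", "cluster.server-quorum-ratio", ratio).returncode != 0
```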
* Adding testcase: Negative test - Exercise Add-brick command (Prasad Desala, 2018-02-14; 1 file changed, -0/+188)
  This testcase covers the below scenarios (a sketch of a few of them follows):
  1) Add-brick without a volname
  2) Adding a duplicate brick
  3) Adding a brick which is already part of another volume
  4) Adding a nested brick, i.e. a brick inside another brick
  5) Adding a brick to a non-existent volume
  6) Adding a brick from a peer which is not in the cluster
  Change-Id: I2d68715facabaa172db94afc7e1b64f95fb069a7
  Signed-off-by: Prasad Desala <tdesala@redhat.com>
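A minimal sketch of a few of those negative add-brick checks; the volume name and brick paths are hypothetical, and every command is expected to fail (non-zero exit status).

```python
import subprocess

VOLNAME = "testvol"  # hypothetical existing volume

def add_brick(*args):
    """Attempt `gluster volume add-brick ...` and return the exit status."""
    return subprocess.run(["gluster", "volume", "add-brick", *args],
                          capture_output=True, text=True).returncode

# 1) add-brick without a volume name
assert add_brick("server1:/bricks/brick1/new") != 0
# 2) duplicate brick (already part of this volume)
assert add_brick(VOLNAME, "server1:/bricks/brick1/testvol_brick0") != 0
# 5) add-brick to a non-existent volume
assert add_brick("no_such_volume", "server1:/bricks/brick1/new") != 0
```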
* Test Self Heal Info while accessing file (Vitalii Koriakov, 2018-02-13; 1 file changed, -0/+230)
  Change-Id: Ibef22a1719fe44aac20024d82fd7f2425945149c
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* quorum related messages in logs (Rajesh Madaka, 2018-02-08; 1 file changed, -0/+292)
  Description:
  - Create two volumes
  - Set server quorum on both the volumes
  - Set the server quorum ratio to 90%
  - Stop the glusterd service on any one of the nodes; a quorum message with message id 106002
    should be recorded for both the volumes in /var/log/messages and
    /var/log/glusterfs/glusterd.log
  - Start the glusterd service on the same node; a quorum regain message with message id 106003
    should be recorded for both the volumes in /var/log/messages and
    /var/log/glusterfs/glusterd.log
  (A log-check sketch follows.)
  Change-Id: I9ecab59b6131fc9c4c58bb972b3a41f15af1b87c
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
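A minimal sketch of the log check, assuming the ids appear as "[MSGID: ...]" tags in glusterd.log; the log path is the one named in the commit message.

```python
import re

def count_msg_id(logfile, msg_id):
    """Count log lines carrying the given glusterd message id."""
    pattern = re.compile(r"\[MSGID:\s*%s\]" % msg_id)
    with open(logfile) as f:
        return sum(1 for line in f if pattern.search(line))

for msg_id in ("106002", "106003"):
    hits = count_msg_id("/var/log/glusterfs/glusterd.log", msg_id)
    print(msg_id, "occurrences in glusterd.log:", hits)
```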
* quota: check if quota limit can be set on a directory that does not exist (hari gowtham, 2018-02-07; 1 file changed, -0/+81)
  Once a volume is mounted and quota is enabled, we have to make sure that a quota limit cannot be
  set on a directory that does not exist, as sketched below.
  Change-Id: Ic89551c6d96b628fe04c19605af696800695721d
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
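A minimal sketch of that check with the quota CLI; the volume name and directory path are hypothetical.

```python
import subprocess

VOLNAME = "testvol"  # hypothetical mounted volume

def gluster(*args):
    return subprocess.run(["gluster", *args], capture_output=True, text=True)

# Enable quota on the volume.
assert gluster("volume", "quota", VOLNAME, "enable").returncode == 0

# Setting a limit on a directory that does not exist on the mounted volume
# is expected to fail.
result = gluster("volume", "quota", VOLNAME, "limit-usage", "/no_such_dir", "10MB")
assert result.returncode != 0, result.stdout + result.stderr
```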
* Test Data-Self-Heal daemons off (heal command) (Vitalii Koriakov, 2018-02-07; 1 file changed, -0/+416)
  Change-Id: If92b6f756f362cb4ae90008c6425b6c6652e3758
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Creating snapshot when brick is down (srivickynesh, 2018-02-06; 2 files changed, -0/+146)
  Change-Id: Ic1191e993db10a110fc753436ec60051adfd5350
  Signed-off-by: srivickynesh <sselvan@redhat.com>
* Test case for performing different combinations of gluster volume get functionalities (Rajesh Madaka, 2018-02-05; 1 file changed, -0/+207)
  Test steps (a sketch of steps 9-10 follows after the list):
  1. Create a gluster cluster
  2. Get an option from a non-existing volume:
     # gluster volume get <non-existing vol> io-cache
  3. Get all options from a non-existing volume:
     # gluster volume get <non-existing volume> all
  4. Provide incorrect command syntax to get options from a volume:
     # gluster volume get <vol-name>
     # gluster volume get
     # gluster volume get io-cache
  5. Create any type of volume in the cluster
  6. Get the value of a non-existing option:
     # gluster volume get <vol-name> temp.key
  7. Get all options set on the volume:
     # gluster volume get <vol-name> all
  8. Get a specific option set on the volume:
     # gluster volume get <vol-name> io-cache
  9. Set an option on the volume:
     # gluster volume set <vol-name> performance.low-prio-threads 14
  10. Get all the options set on the volume and check for low-prio-threads:
      # gluster volume get <vol-name> all | grep -i low-prio-threads
  11. Get all the options set on the volume:
      # gluster volume get <vol-name> all
  12. Check for any cores in "cd /"
  Change-Id: Ifd7697e68d7ecf297d7be75680a5681686c51ca0
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
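A minimal sketch of steps 9 and 10, setting performance.low-prio-threads and reading it back via `volume get`; the volume name is hypothetical.

```python
import subprocess

VOLNAME = "testvol"  # hypothetical volume

def gluster(*args):
    return subprocess.run(["gluster", *args],
                          capture_output=True, text=True, check=True).stdout

# Step 9: set the option.
gluster("volume", "set", VOLNAME, "performance.low-prio-threads", "14")

# Step 10: read back all options and confirm the value that was just set.
all_options = gluster("volume", "get", VOLNAME, "all")
line = next(l for l in all_options.splitlines() if "low-prio-threads" in l)
assert line.split()[-1] == "14", line
```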
* Adding test case test_client_side_quorum_with_fixed_validate_max_bricks (Vijay Avuthu, 2018-02-02; 1 file changed, -0/+50)
  Description: with client-side quorum set to fixed, the test should validate the maximum number of
  bricks it accepts (see the sketch below):
  * Set cluster.quorum-type to fixed
  * Set cluster.quorum-count to a number greater than the number of replicas in a sub-volume
  * The above step should fail
  Change-Id: I83952a07d36f5f890f3649a691afad2d0ccf037f
  Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
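A minimal sketch, assuming a replica 3 volume (so a quorum-count above 3 should be rejected); the volume name is hypothetical.

```python
import subprocess

VOLNAME = "testvol"  # hypothetical replica 3 volume

def set_option(key, value):
    return subprocess.run(["gluster", "volume", "set", VOLNAME, key, value],
                          capture_output=True, text=True).returncode

# Switch client-side quorum to fixed mode.
assert set_option("cluster.quorum-type", "fixed") == 0

# quorum-count larger than the replica count of a sub-volume must be rejected.
assert set_option("cluster.quorum-count", "4") != 0
```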
* glusterd: Peer detach patch for test case (Rajesh Madaka, 2018-02-02; 1 file changed, -0/+134)
  -> Detaching a specified server from the cluster
  -> Detaching an already detached server again
  -> Detaching an invalid host
  -> Detaching a non-existent host
  -> Checking whether a core file was created or not
  -> Peer detach of a node which hosts the bricks of a created volume
  -> Peer detach force of a node which hosts the bricks of a volume
  (A sketch of the detach scenarios follows.)
  Change-Id: I6a1fce6e7c626f822ddbc43ea4d2fcd4bc3262c8
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
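A minimal sketch of the detach scenarios with the peer CLI (run in non-interactive script mode); the host names are hypothetical.

```python
import subprocess

def peer_detach(host, force=False):
    """Attempt `gluster peer detach <host> [force]` and return the exit status."""
    cmd = ["gluster", "--mode=script", "peer", "detach", host]
    if force:
        cmd.append("force")
    return subprocess.run(cmd, capture_output=True, text=True).returncode

# Detach a peer that is in the cluster and hosts no bricks: should succeed.
assert peer_detach("server4") == 0
# Detaching the same, already detached server again should fail.
assert peer_detach("server4") != 0
# Invalid and non-existent hosts should fail as well.
assert peer_detach("invalid..host") != 0
assert peer_detach("nonexistent.example.com") != 0
# A peer hosting bricks of a volume cannot be detached without force.
assert peer_detach("server1") != 0
```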
* Test case to validate peer probe with an invalid ip, a non-existent host, and a non-existent ip (Rajesh Madaka, 2018-02-02; 2 files changed, -0/+103)
  Library for checking whether a core file was created: added an is_core_file_created() function
  to lib_utils.py.
  Test description: verify peer probe of a non-existent host and an invalid ip; the peer probe has
  to fail for a non-existent host, glusterd services must stay up and running after the invalid
  peer probe, and no core file should get created under "/", /tmp, or /var/log/core.
  Adding glusterd peer probe test cases with modifications according to review comments; adding a
  library for core file verification.
  Change-Id: I0ebd6ee2b340d1f1b01878cb0faf69f41fec2e10
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Adding Client Side Quorum Test Case (Vijay Avuthu, 2018-02-01; 1 file changed, -0/+294)
  Change-Id: I73512dde33207295fa954a3b3949f653f03f23c0
  Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
* glusterd test cases: validating volume creation with bricks on root path (Sanju Rakonde, 2018-02-01; 1 file changed, -0/+169)
  In this test case:
  1. Volume creation on a root brick path without force and with force is validated.
  2. After deleting a brick manually, starting the volume with force should not bring that brick
     online; this is validated.
  3. After clearing all attributes, we should be able to create another volume with the previously
     used bricks; this is validated.
  Change-Id: I7fbf241c7e0fee276ff5f68b47a7a89e928f367c
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd test case: volume delete when one of the brick nodes is down (Sanju Rakonde, 2018-02-01; 1 file changed, -0/+114)
  The volume delete operation should fail when one of the brick nodes is down; this is validated in
  this test case.
  Change-Id: I17649de2837f4aee8b50a5fcd760eb9f7c88f3cd
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* Adding test case: test_glustershd_with_restarting_glusterd (Vijay Avuthu, 2018-02-01; 1 file changed, -2/+207)
  Change-Id: Ica984b28d2c23e3d0d716d8c0dde6ab6ef69dc8f
  Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
* Test case for continuously running the "gluster volume status volname inode" command a limited number of times while IO is in progress (Rajesh Madaka, 2018-02-01; 1 file changed, -0/+170)
  Description: Create any type of volume and mount it. Once the volume is mounted successfully on
  the client, start running IO on the mount point, then run the "gluster volume status volname
  inode" command on all cluster nodes randomly. The command should not hang while IO is in
  progress. Then check that the IO completed successfully on the mount point and that files on the
  mount point are listed properly. (A sketch of the status loop follows.)
  Change-Id: I48285ecb25235dadc82e30a750ad303b6e45fffd
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
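A minimal sketch of the status-polling loop; the volume name and iteration count are hypothetical, and a per-command timeout stands in for the "should not hang" check.

```python
import subprocess

VOLNAME = "testvol"   # hypothetical volume
ITERATIONS = 20       # hypothetical limited number of runs

for _ in range(ITERATIONS):
    # If the command hangs, the timeout raises TimeoutExpired and makes the
    # failure visible instead of blocking forever.
    result = subprocess.run(
        ["gluster", "volume", "status", VOLNAME, "inode"],
        capture_output=True, text=True, timeout=300)
    assert result.returncode == 0, result.stderr
```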
* Use wait_for_volume_process_to_be_online (ShwethaHP, 2018-01-30; 2 files changed, -16/+47)
  Replace all the time.sleep() instances with the wait_for_volume_process_to_be_online function.
  Change-Id: Id7e34979f811bd85f7475748406803026741a3a8
  Signed-off-by: ShwethaHP <spandura@redhat.com>
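A sketch of the intent: poll for readiness instead of sleeping a fixed time. Only the function name comes from the commit message; the import location, argument names, and values below are assumptions for illustration.

```python
# Assumed import location; the commit only names the function.
from glustolibs.gluster.volume_libs import wait_for_volume_process_to_be_online

mnode = "server1"    # hypothetical management node
volname = "testvol"  # hypothetical volume

# Before: an arbitrary fixed delay after starting the volume, e.g. time.sleep(15).
# After: wait until the volume processes report online, and fail the test if
# they never do within the library's timeout.
ret = wait_for_volume_process_to_be_online(mnode, volname)
assert ret, "Volume processes did not come online in time"
```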
* Disable HealSanity tests on NFS (Nigel Babu, 2018-01-30; 1 file changed, -1/+3)
  This test does not work on NFS mounts due to bug 1473668. Disable this test until we can find a
  workaround that makes this test actually green.
  Change-Id: Icd93cd796be5e8a72e144ba09e66733d6dcf5913
* Test case for bitd, scrub, snapd checking after volume reset and volume reset force (Rajesh Madaka, 2018-01-28; 1 file changed, -0/+200)
  Description: Create a Distribute volume, then enable bitrot and uss on that volume, then check
  whether the bitd, scrub and snapd daemons are running. Then perform a volume reset; after the
  volume reset only the snap daemon is killed, while the bitd and scrub daemons keep running. Then
  perform a volume reset with force; after the volume reset with force all three daemons (bitd,
  scrub, snapd) are killed and no longer running.
  Steps performed in this test case (see the sketch below):
  -> Create a Distributed volume
  -> Enable BitD, Scrub and Uss on the volume
  -> Verify the BitD, Scrub and Uss daemons are running on every node
  -> Reset the volume
  -> Verify whether the daemons (BitD, Scrub & Uss) are running or not
  -> Enable Uss on the same volume
  -> Reset the volume with force
  -> Verify whether all the daemons (BitD, Scrub & Uss) are running or not
  Change-Id: I15d71d1434ec84d80293fda2ab6a8d02a3af5fd6
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
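A minimal sketch of the enable/reset part using the bitrot, uss and reset CLI; the volume name is hypothetical, and the per-node daemon check is reduced to a local pgrep on the process names.

```python
import subprocess

VOLNAME = "testvol"  # hypothetical distribute volume

def gluster(*args):
    return subprocess.run(["gluster", *args], capture_output=True, text=True, check=True)

def daemon_running(name):
    """True if a process whose command line contains the given name is running locally."""
    return subprocess.run(["pgrep", "-f", name], capture_output=True).returncode == 0

# Enable bitrot (starts bitd and the scrubber) and USS (starts snapd).
gluster("volume", "bitrot", VOLNAME, "enable")
gluster("volume", "set", VOLNAME, "features.uss", "on")

# Plain reset: snapd should stop, bitd and scrub should keep running.
gluster("volume", "reset", VOLNAME)
print("bitd:", daemon_running("bitd"), "scrub:", daemon_running("scrub"),
      "snapd:", daemon_running("snapd"))

# Re-enable USS, then reset with force: all three daemons should stop.
gluster("volume", "set", VOLNAME, "features.uss", "on")
gluster("volume", "reset", VOLNAME, "force")
print("bitd:", daemon_running("bitd"), "scrub:", daemon_running("scrub"),
      "snapd:", daemon_running("snapd"))
```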
* [NFS-Ganesha]: Test to create an HA cluster and run I/O (Ambarish Soman, 2018-01-28; 1 file changed, -0/+101)
  Change-Id: Ia0f07590ceb8f680a8e750f793f37a63177904dc
  Signed-off-by: Ambarish Soman <asoman@redhat.com>
* Adding AFR self heal daemon test cases (Vijay Avuthu, 2018-01-23; 2 files changed, -0/+299)
  Gave meaningful names to functions.
  Returning -1 if there is no process running.
  Replaced numbers with words.
  Reworded the message "More than 1 or 0 self heal daemon".
  Review comments incorporated.
  Change-Id: If424a6f78536279c178ee45d62099fd8f63421dd
  Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
* Fixing the typo error of not enclosing the message inside "()" (ShwethaHP, 2018-01-18; 1 file changed, -2/+2)
  This should fix the following failure in test_cvt test_self_heal_when_io_is_in_progress:
      self.assertTrue(ret, "Not all the bricks in list:%s are offline",
  >                   bricks_to_bring_offline)
  E   TypeError: assertTrue() takes at most 3 arguments (4 given)
  Change-Id: Ibfee5253020c2f8927c4fd22a992f7cff7509a5d
  Signed-off-by: ShwethaHP <spandura@redhat.com>
* Adding example files to demonstrate how to use functions available in glustolibs-gluster libs (ShwethaHP, 2018-01-16; 1 file changed, -127/+0)
  Change-Id: I44f559dd0477f97278b1444e7a6d292ca58b99dc
  Signed-off-by: ShwethaHP <spandura@redhat.com>
* Override the default volume_type configuration in the gluster base class if a volume type configuration is defined in the config file (ShwethaHP, 2018-01-16; 1 file changed, -0/+2)
  Also provides an option in the config file to create volumes with the 'force' option.
  Change-Id: Ifeac20685f0949f7573257f30f05df6f79ce1dbd
  Signed-off-by: ShwethaHP <spandura@redhat.com>
* Adding missing parameters (Vijay Avuthu, 2018-01-08; 1 file changed, -2/+4)
  Changes incorporated as per comments.
  Change-Id: I9a21e0350400198806644c07474ae6aeeeae6c58
  Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
* Modifying gluster_base_class to have static methods for setup/cleanup (ShwethaHP, 2017-12-13; 3 files changed, -81/+172)
  Static methods added for:
  1. setup_volume
  2. mount_volume
  3. setup_volume_and_mount
  4. cleanup_volume
  5. unmount_volume
  6. unmount_and_cleanup_volume
  These are added as static methods to give the test developer the flexibility to call the
  setup/cleanups or any other function from anywhere in the test class which inherits
  GlusterBaseClass. This also removes the need for GlusterVolumeBaseClass, and hence removes the
  hardcoding of volume creation and mounting in setUpClass of GlusterVolumeBaseClass. It will also
  help in writing new base classes, for example Block, which can have class functions specific to
  block and inherit all the functions from GlusterBaseClass.
  Change-Id: I3f0709af75e5bb242d265d04ada3a747c155211d
  Signed-off-by: ShwethaHP <spandura@redhat.com>
* Adding an example test file (ShwethaHP, 2017-12-05; 1 file changed, -0/+127)
  Change-Id: I766c9e1f905728618549a7484a70008d91959538
  Signed-off-by: ShwethaHP <spandura@redhat.com>
* Do not validate the return code of the 'rebalance status' command when just logging the output [stable] (ShwethaHP, 2017-09-11; 1 file changed, -7/+3)
  Change-Id: I6ff7e363871607c2f9d4272be7198150db59af5d
  Signed-off-by: ShwethaHP <spandura@redhat.com>
* 1) bring_bricks_online: Wait for bricks to be online for 30 seconds after bringing them online;
  2) log all the xml output/error at DEBUG log level (ShwethaHP, 2017-08-28; 1 file changed, -8/+17)
  Change-Id: If6bb758ac728f299292def9d72c0ef166a1569ae
  Signed-off-by: ShwethaHP <spandura@redhat.com>
* Providing configs to set volume options and group options when exporting a volume as an 'smb share' or 'nfs-ganesha export' in the config yml (ShwethaHP, 2017-08-09; 1 file changed, -1/+12)
  The configs are read in the gluster base class and set when exporting the volumes as
  'smb share' or 'nfs-ganesha export'.
  Recommended options when exporting a volume as an 'smb share':
    group: "metadata-cache"
    cache-samba-metadata: "on"
  Change-Id: I86a118c7015eaedd849a0f6e8b613605df5b6c32
  Signed-off-by: ShwethaHP <spandura@redhat.com>
* Set volume options on all the volume types (ShwethaHP, 2017-08-03; 1 file changed, -0/+10)
  Provides a section in the config file to set volume options that are applicable to any volume
  type created. The gluster base class also reads the volume_options, if provided in the config
  file, and sets them on all the volumes being created. These volume options are overwritten if
  any volume options are specified while defining the volumes under the 'volumes' section.
  Change-Id: I0003312251b4f8b151c9ba5c71d1b6a8884cc85e
  Signed-off-by: ShwethaHP <spandura@redhat.com>
* glusto nfs-ganesha: Added test to verify nfsv4 acl functionality with glusterfs (Arthy Loganathan, 2017-08-01; 1 file changed, -0/+113)
  Change-Id: I2747c3770925b8d8f05e10fb7da49d105b7130e6
  Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>