path: root/tests/functional/glusterd
Commit message | Author | Age | Files | Lines
...
* glusterd: Peer detach patch for test case | Rajesh Madaka | 2018-02-02 | 1 | -0/+134
-> Detaching a specified server from the cluster
-> Detaching the already detached server again
-> Detaching an invalid host
-> Detaching a non-existent host
-> Checking whether a core file is created or not
-> Peer detach of a node that contains bricks of a created volume
-> Peer detach force of a node that is hosting bricks of a volume

Change-Id: I6a1fce6e7c626f822ddbc43ea4d2fcd4bc3262c8
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
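A minimal Python sketch of the detach checks listed above, driving the gluster CLI directly; the host names and the gluster() helper are illustrative assumptions, not code from the commit (--mode=script simply suppresses interactive confirmation prompts):

    import subprocess

    def gluster(*args):
        # Run a gluster CLI command non-interactively; return (rc, stdout, stderr).
        proc = subprocess.run(["gluster", "--mode=script", *args],
                              capture_output=True, text=True)
        return proc.returncode, proc.stdout, proc.stderr

    # Detaching a server that is part of the cluster is expected to succeed.
    rc, out, err = gluster("peer", "detach", "server2.example.com")
    assert rc == 0, err

    # Detaching the already detached server again is expected to fail.
    rc, _, _ = gluster("peer", "detach", "server2.example.com")
    assert rc != 0

    # Detaching an invalid or non-existent host is expected to fail.
    rc, _, _ = gluster("peer", "detach", "no-such-host.invalid")
    assert rc != 0

    # A node hosting bricks of a volume cannot be detached without force.
    rc, _, _ = gluster("peer", "detach", "server3.example.com")
    assert rc != 0
    rc, _, _ = gluster("peer", "detach", "server3.example.com", "force")
    assert rc == 0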
* Test case to validate peer probe with an invalid IP and a non-existing host / non-existing IP | Rajesh Madaka | 2018-02-02 | 2 | -0/+103
Library for checking whether a core file is created or not: added an is_core_file_created() function to lib_utils.py.

Test description: verify peer probe of a non-existing host and of an invalid IP. The peer probe has to fail for the non-existing host, glusterd services must stay up and running after the invalid peer probe, and no core file should get created under "/", /tmp, or the /var/log/core directory.

Adds the glusterd peer probe test cases with modifications according to review comments, along with the library support for core file verification.

Change-Id: I0ebd6ee2b340d1f1b01878cb0faf69f41fec2e10
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
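A hedged sketch of the probe and core-file checks described above; the probe targets and the watched directories mirror the commit message, while the specific host name and IP are illustrative:

    import glob
    import subprocess

    def gluster(*args):
        # Run a gluster CLI command non-interactively and return its exit status.
        return subprocess.run(["gluster", "--mode=script", *args],
                              capture_output=True, text=True).returncode

    # Peer probe of a non-existing host and of an invalid IP must both fail.
    assert gluster("peer", "probe", "no-such-host.invalid") != 0
    assert gluster("peer", "probe", "999.999.999.999") != 0

    # glusterd should still be up and running after the failed probes.
    assert subprocess.run(["pidof", "glusterd"],
                          stdout=subprocess.DEVNULL).returncode == 0

    # No core file should have appeared in the directories the test watches.
    core_files = [f for d in ("/", "/tmp", "/var/log/core")
                  for f in glob.glob(d + "/core*")]
    assert not core_files, "unexpected core files: %s" % core_files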
* glusterd test cases: validating volume creation with bricks on root path | Sanju Rakonde | 2018-02-01 | 1 | -0/+169
In this test case:
1. Volume creation on a root brick path, both without force and with force, is validated.
2. Deleting a brick manually and then starting the volume with force should not bring that brick online; this is validated.
3. After clearing all attributes, we should be able to create another volume with the previously used bricks; this is validated.

Change-Id: I7fbf241c7e0fee276ff5f68b47a7a89e928f367c
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
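A minimal sketch of steps 1 and 3 above, assuming a single illustrative brick path and volume names that are not from the commit; the xattr/.glusterfs cleanup is the usual way to make a previously used brick reusable and is shown here only as an assumption about what "clearing all attributes" means:

    import subprocess

    def sh(cmd):
        # Run a shell command and return its exit status.
        return subprocess.run(cmd, shell=True).returncode

    brick = "server1.example.com:/bricks/brick1"   # brick on the root partition

    # Step 1: creation on a root brick path is refused without force,
    # accepted with force.
    assert sh("gluster --mode=script volume create testvol " + brick) != 0
    assert sh("gluster --mode=script volume create testvol " + brick + " force") == 0

    # Step 3: after deleting the volume, clearing the gluster xattrs and the
    # internal .glusterfs directory lets the same brick path be reused
    # (the cleanup commands run on the brick node).
    sh("gluster --mode=script volume delete testvol")
    sh("setfattr -x trusted.glusterfs.volume-id /bricks/brick1")
    sh("setfattr -x trusted.gfid /bricks/brick1")
    sh("rm -rf /bricks/brick1/.glusterfs")
    assert sh("gluster --mode=script volume create newvol " + brick + " force") == 0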
* glusterd test case: volume delete when one of the brick nodes is down | Sanju Rakonde | 2018-02-01 | 1 | -0/+114
The volume delete operation should fail when one of the brick nodes is down; this is validated in this test case.

Change-Id: I17649de2837f4aee8b50a5fcd760eb9f7c88f3cd
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
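A short sketch of the scenario, assuming a hypothetical brick node and volume name and using ssh/systemctl to take glusterd down on that node; it is not the test's actual code:

    import subprocess

    def sh(cmd):
        # Run a shell command and return its exit status.
        return subprocess.run(cmd, shell=True).returncode

    # Stop glusterd on one of the brick nodes (executed there over ssh).
    assert sh("ssh server2.example.com systemctl stop glusterd") == 0

    # Even with the volume stopped, the delete should fail while a brick
    # node is down.
    sh("gluster --mode=script volume stop testvol")
    assert sh("gluster --mode=script volume delete testvol") != 0

    # Restore the downed node so the cluster is healthy again.
    assert sh("ssh server2.example.com systemctl start glusterd") == 0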
* Test case for continuously running "gluster volume status volname inode" a limited number of times while IO is in progress | Rajesh Madaka | 2018-02-01 | 1 | -0/+170
Description: Create any type of volume and mount it. Once the volume is mounted successfully on the client, start running IOs on the mount point, then run the "gluster volume status volname inode" command on all cluster nodes randomly. The "gluster volume status volname inode" command should not hang while IOs are in progress. Then check whether the IOs completed successfully on the mount point, and check whether the files on the mount point are listed properly.

Change-Id: I48285ecb25235dadc82e30a750ad303b6e45fffd
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
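A minimal sketch of the status loop described above; the node list, volume name, iteration count and timeout are illustrative assumptions, and the timeout is only one way to detect a hanging command:

    import random
    import subprocess

    cluster_nodes = ["server1.example.com", "server2.example.com",
                     "server3.example.com"]          # illustrative node list

    # While client IO is running on the mount point, issue the status
    # command a bounded number of times on randomly chosen nodes; the
    # timeout guards against the command hanging.
    for _ in range(30):
        node = random.choice(cluster_nodes)
        subprocess.run(
            ["ssh", node, "gluster", "volume", "status", "testvol", "inode"],
            stdout=subprocess.DEVNULL, timeout=300, check=True)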
* Test case for bitd, scrub, snapd checking after volume reset and volume reset force | Rajesh Madaka | 2018-01-28 | 1 | -0/+200
Description: Create a Distribute volume, then enable bitrot and uss on that volume, and check whether the bitd, scrub and snapd daemons are running. Then perform a volume reset; after the volume reset only the snap daemon gets killed, while the bitd and scrub daemons remain running. Then perform a volume reset with force; after the volume reset with force all three daemons (bitd, scrub, snapd) get killed and are no longer running.

Below are the steps performed in this test case:
-> Create a Distributed volume
-> Enable BitD, Scrub and Uss on the volume
-> Verify the BitD, Scrub and Uss daemons are running on every node
-> Reset the volume
-> Verify whether the daemons (BitD, Scrub & Uss) are running
-> Enable Uss on the same volume
-> Reset the volume with force
-> Verify whether all the daemons (BitD, Scrub & Uss) are running

Change-Id: I15d71d1434ec84d80293fda2ab6a8d02a3af5fd6
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
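A hedged sketch of the reset checks listed above, run against a single node for brevity; the volume name is illustrative, the pgrep-based daemon_running() check is a crude assumption (the real test inspects every node), and the expected daemon states after each reset are taken from the commit message:

    import subprocess

    def sh(cmd):
        # Run a shell command and return its exit status.
        return subprocess.run(cmd, shell=True).returncode

    def daemon_running(name):
        # Crude process check by name on the local node only.
        return subprocess.run(["pgrep", "-f", name],
                              stdout=subprocess.DEVNULL).returncode == 0

    # Enable bitrot (which starts bitd and the scrubber) and USS (snapd).
    assert sh("gluster --mode=script volume bitrot testvol enable") == 0
    assert sh("gluster --mode=script volume set testvol features.uss enable") == 0
    assert all(daemon_running(d) for d in ("bitd", "scrub", "snapd"))

    # Plain reset: only snapd is expected to stop; bitd and scrub stay up.
    assert sh("gluster --mode=script volume reset testvol") == 0
    assert daemon_running("bitd") and daemon_running("scrub")
    assert not daemon_running("snapd")

    # Re-enable USS, then reset with force: all three daemons should stop.
    assert sh("gluster --mode=script volume set testvol features.uss enable") == 0
    assert sh("gluster --mode=script volume reset testvol force") == 0
    assert not any(daemon_running(d) for d in ("bitd", "scrub", "snapd"))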