path: root/tests/functional/glusterd
Commit message | Author | Age | Files | Lines
...
* glusterd test cases: validating volume creation with bricks on root path | Sanju Rakonde | 2018-02-01 | 1 | -0/+169
In this test case:
1. Volume creation on a root brick path is validated both without force and with force.
2. After deleting a brick directory manually and then starting the volume with force, that brick should not come online; this is validated.
3. After clearing all extended attributes, we should be able to create another volume with the previously used bricks; this is validated.

Change-Id: I7fbf241c7e0fee276ff5f68b47a7a89e928f367c
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
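As a rough illustration of steps 1 and 3, here is a minimal sketch using glusto's g.run(host, cmd) helper (which returns retcode, stdout, stderr); the hostname, volume names and the brick path are hypothetical placeholders, not taken from the actual test.

```python
# Hedged sketch of steps 1 and 3 above; MNODE, VOLNAME and the brick path are
# hypothetical placeholders, and g.run() is glusto's remote-command helper.
from glusto.core import Glusto as g

MNODE = "server1.example.com"        # node where the gluster CLI is run
BRICK_DIR = "/root-brick"            # brick directory on the root partition
BRICK = "%s:%s" % (MNODE, BRICK_DIR)
VOLNAME = "rootpath-vol"

# Step 1: creation on a root-partition brick path is expected to fail without
# force and to succeed with force.
ret, _, _ = g.run(MNODE, "gluster volume create %s %s" % (VOLNAME, BRICK))
assert ret != 0, "volume create on a root brick path succeeded without force"
ret, _, _ = g.run(MNODE, "gluster volume create %s %s force" % (VOLNAME, BRICK))
assert ret == 0, "volume create with force failed"

# Step 3: delete the volume, clear the brick's gluster xattrs and .glusterfs
# directory, then reuse the same brick for a new volume.
g.run(MNODE, "gluster --mode=script volume delete %s" % VOLNAME)
g.run(MNODE, "setfattr -x trusted.glusterfs.volume-id %s" % BRICK_DIR)
g.run(MNODE, "setfattr -x trusted.gfid %s" % BRICK_DIR)
g.run(MNODE, "rm -rf %s/.glusterfs" % BRICK_DIR)
ret, _, _ = g.run(MNODE, "gluster volume create reused-vol %s force" % BRICK)
assert ret == 0, "brick reuse after clearing the xattrs failed"
```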
* glusterd test case: volume delete when one of the brick nodes is down | Sanju Rakonde | 2018-02-01 | 1 | -0/+114
The volume delete operation should fail when one of the brick nodes is down; this is validated in this test case.

Change-Id: I17649de2837f4aee8b50a5fcd760eb9f7c88f3cd
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
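A minimal sketch of this negative check, assuming glusto's g.run(host, cmd) helper and an already created and started volume; the hostnames and volume name are hypothetical placeholders.

```python
# Hedged sketch of the check above; hosts and VOLNAME are placeholders.
from glusto.core import Glusto as g

MNODE = "server1.example.com"      # node where the gluster CLI is run
DOWN_NODE = "server2.example.com"  # brick node that will be brought down
VOLNAME = "testvol"

# Stop the volume first (delete requires a stopped volume), then stop glusterd
# on one of the brick nodes to simulate that node being down.
g.run(MNODE, "gluster --mode=script volume stop %s" % VOLNAME)
g.run(DOWN_NODE, "systemctl stop glusterd")

# Volume delete must fail while a brick node is down.
ret, _, err = g.run(MNODE, "gluster --mode=script volume delete %s" % VOLNAME)
assert ret != 0, "volume delete unexpectedly succeeded with a brick node down"

# Bring glusterd back so later cleanup can proceed.
g.run(DOWN_NODE, "systemctl start glusterd")
```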
* Test case for running the "gluster volume status volname inode" command a limited number of times while IO is in progress | Rajesh Madaka | 2018-02-01 | 1 | -0/+170
Desc: Create any type of volume and mount it. Once the volume is mounted successfully on the client, start running IOs on the mount point, then run the "gluster volume status volname inode" command on all cluster nodes randomly. The "gluster volume status volname inode" command should not hang while IOs are in progress. Then check whether the IOs completed successfully on the mount point and whether the files on the mount point are listed properly.

Change-Id: I48285ecb25235dadc82e30a750ad303b6e45fffd
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
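A minimal sketch of the core loop, assuming glusto's g.run(host, cmd) helper; the server list, volume name, iteration count and the timeout guard are hypothetical, and the IO workload on the client mount is assumed to be running already.

```python
# Hedged sketch of the status-while-IO loop above; SERVERS, VOLNAME and the
# iteration count are placeholders, and client IO is assumed to be in progress.
import random
from glusto.core import Glusto as g

SERVERS = ["server1.example.com", "server2.example.com", "server3.example.com"]
VOLNAME = "testvol"

# Run the inode status command a limited number of times on randomly chosen
# cluster nodes while IO runs on the client mount (not shown here).
for _ in range(20):
    node = random.choice(SERVERS)
    # A shell timeout guards against the command hanging while IO is in progress.
    ret, out, err = g.run(
        node, "timeout 300 gluster volume status %s inode" % VOLNAME)
    assert ret == 0, "'volume status %s inode' failed or hung on %s: %s" % (
        VOLNAME, node, err)
```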
* Test case for bitd, scrub, snapd checking after volume reset and volume reset force | Rajesh Madaka | 2018-01-28 | 1 | -0/+200
Description: Create a Distribute volume, then enable bitrot and USS on that volume, and check whether the bitd, scrub and snapd daemons are running. Then perform a volume reset; after a plain volume reset only the snap daemon gets killed, while the bitd and scrub daemons keep running. Then perform a volume reset with force; after a volume reset with force all three daemons (bitd, scrub, snapd) get killed and are no longer running. Below are the steps performed for developing this test case (see the sketch after this list):

-> Create a Distributed volume
-> Enable BitD, Scrub and USS on the volume
-> Verify that the BitD, Scrub and USS daemons are running on every node
-> Reset the volume
-> Verify whether the daemons (BitD, Scrub & USS) are running
-> Enable USS on the same volume
-> Reset the volume with force
-> Verify whether all the daemons (BitD, Scrub & USS) are running

Change-Id: I15d71d1434ec84d80293fda2ab6a8d02a3af5fd6
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
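A rough sketch of these checks, assuming glusto's g.run(host, cmd) helper; the node list and volume name are hypothetical, and the pgrep-based daemon check is a simplification of whatever status helpers the real test uses.

```python
# Hedged sketch of the reset / reset-force daemon checks above.
from glusto.core import Glusto as g

SERVERS = ["server1.example.com", "server2.example.com"]  # placeholder cluster
VOLNAME = "distvol"                                        # placeholder volume
MNODE = SERVERS[0]

def daemon_running(node, name):
    """Return True if a gluster process whose command line mentions `name`
    (bitd, scrub or snapd) is running on `node`. The [x] bracket trick keeps
    pgrep from matching the remote shell that carries the pattern itself."""
    pattern = "gluster.*[%s]%s" % (name[0], name[1:])
    ret, _, _ = g.run(node, "pgrep -f '%s'" % pattern)
    return ret == 0

# Enable bitrot (which starts bitd and scrub) and USS (which starts snapd).
g.run(MNODE, "gluster --mode=script volume bitrot %s enable" % VOLNAME)
g.run(MNODE, "gluster --mode=script volume set %s features.uss enable" % VOLNAME)

# Plain reset: snapd should stop, bitd and scrub should keep running.
g.run(MNODE, "gluster --mode=script volume reset %s" % VOLNAME)
for node in SERVERS:
    assert daemon_running(node, "bitd"), "bitd not running after plain reset"
    assert daemon_running(node, "scrub"), "scrub not running after plain reset"
    assert not daemon_running(node, "snapd"), "snapd still running after plain reset"

# Re-enable USS, then reset with force: all three daemons should be stopped.
g.run(MNODE, "gluster --mode=script volume set %s features.uss enable" % VOLNAME)
g.run(MNODE, "gluster --mode=script volume reset %s force" % VOLNAME)
for node in SERVERS:
    for name in ("bitd", "scrub", "snapd"):
        assert not daemon_running(node, name), \
            "%s still running after reset force" % name
```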