-> Detaching a specified server from the cluster
-> Detaching an already detached server again
-> Detaching an invalid host
-> Detaching a non-existent host
-> Checking whether a core file got created
-> Peer detaching a node which contains bricks of a created volume
-> Peer detach force of a node which is hosting bricks of a volume
   (a CLI sketch of these scenarios follows below)
Change-Id: I6a1fce6e7c626f822ddbc43ea4d2fcd4bc3262c8
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
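A minimal standalone sketch of these detach scenarios, driving the gluster CLI
through subprocess rather than the glusto-tests helpers; the hostnames and the
pool layout are placeholder assumptions for illustration only.

    # Hypothetical sketch of the peer-detach scenarios above; hostnames are
    # placeholders, run from any node of an existing trusted storage pool.
    import subprocess

    def gluster(*args):
        """Run a gluster CLI command and return the completed process."""
        return subprocess.run(["gluster", "--mode=script", *args],
                              capture_output=True, text=True)

    # 1. Detach a pool member that hosts no bricks: expected to succeed.
    res = gluster("peer", "detach", "server2.example.com")
    assert res.returncode == 0, res.stderr

    # 2. Detach the same (already detached) server again: expected to fail.
    assert gluster("peer", "detach", "server2.example.com").returncode != 0

    # 3./4. Detach an invalid host and a non-existent host: both must fail.
    for host in ("invalid..host", "nonexistent.example.invalid"):
        assert gluster("peer", "detach", host).returncode != 0

    # 5./6. Plain detach of a node that still hosts bricks of a volume is
    # expected to be rejected; whether "detach ... force" succeeds depends
    # on the gluster version, so only the plain case is asserted here.
    assert gluster("peer", "detach", "server3.example.com").returncode != 0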
Test case for peer probe of an invalid IP, a non-existent host and a non-existent IP
Core file library: added an is_core_file_created() function to lib_utils.py.
Test Desc:
Test script to verify peer probe of a non-existent host
and an invalid IP. Peer probe has to fail for the
non-existent host, glusterd services must stay up and running
after the invalid peer probe, and no core file should
get created under "/", /tmp or the /var/log/core directory.
Adding glusterd peer probe test cases with modifications according to review comments.
Adding a library function for core file verification.
Change-Id: I0ebd6ee2b340d1f1b01878cb0faf69f41fec2e10
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
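A rough sketch of the negative peer probe checks and of the kind of scan
is_core_file_created() performs; the directory list comes from the message above,
while the "core*" filename pattern, the probe targets and the pgrep check are
assumptions, not the actual lib_utils.py implementation.

    # Illustrative check: invalid peer probes must fail, glusterd must survive,
    # and no core file may appear under the directories named in the commit.
    import glob
    import os
    import subprocess

    def peer_probe(host):
        return subprocess.run(["gluster", "peer", "probe", host],
                              capture_output=True, text=True).returncode

    # Probing a non-existent host and an invalid IP is expected to fail.
    for target in ("nonexistent.example.invalid", "999.999.999.999"):
        assert peer_probe(target) != 0, f"peer probe of {target} should fail"

    # glusterd must still be alive after the failed probes.
    assert subprocess.run(["pgrep", "-x", "glusterd"],
                          capture_output=True).returncode == 0

    # No core file should have been dumped under /, /tmp or /var/log/core.
    cores = [f for d in ("/", "/tmp", "/var/log/core")
             for f in glob.glob(os.path.join(d, "core*"))]
    assert not cores, f"unexpected core files: {cores}"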
In this test case,
1. Volume creation on a root brick path, both without force and with force, is validated.
2. Deleting a brick directory manually and then starting the volume with force should not
   bring that brick online; this is validated.
3. After clearing all attributes, we should be able to create another volume
   with the previously used bricks; this is validated (see the sketch after this list).
Change-Id: I7fbf241c7e0fee276ff5f68b47a7a89e928f367c
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
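A sketch of step 3 under the usual assumption that "clearing all attributes"
means removing the GlusterFS extended attributes and the internal .glusterfs
directory from the brick; the brick path, host and volume name are placeholders.

    # Clear the gluster metadata on a previously used brick so it can be
    # reused, then create a new volume with it. The exact xattr set can
    # vary between gluster versions; this is an illustration only.
    import subprocess

    BRICK = "/bricks/brick1/b0"

    def run(cmd):
        return subprocess.run(cmd, capture_output=True, text=True)

    run(["setfattr", "-x", "trusted.glusterfs.volume-id", BRICK])
    run(["setfattr", "-x", "trusted.gfid", BRICK])
    run(["rm", "-rf", f"{BRICK}/.glusterfs"])

    # With the attributes cleared, volume create should now succeed.
    res = run(["gluster", "volume", "create", "newvol",
               f"server1.example.com:{BRICK}"])
    assert res.returncode == 0, res.stderr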
Volume delete operation should fail when one of the brick nodes is
down; this is validated in this test case.
Change-Id: I17649de2837f4aee8b50a5fcd760eb9f7c88f3cd
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
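A minimal sketch of the expectation, assuming one brick node has already been
taken down out of band (for example by stopping glusterd on it); the volume
name is a placeholder.

    # With one brick node down, "gluster volume delete" must be rejected.
    import subprocess

    def gluster(*args):
        return subprocess.run(["gluster", "--mode=script", *args],
                              capture_output=True, text=True)

    gluster("volume", "stop", "testvol")   # delete requires a stopped volume
    res = gluster("volume", "delete", "testvol")
    assert res.returncode != 0, "volume delete must fail while a brick node is down"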
Run the "gluster volume status volname inode" command a limited number of times while IO is in progress.
Desc:
Create any type of volume and mount it. Once the
volume is mounted successfully on the client, start running IO on
the mount point, then run the "gluster volume status volname inode"
command on random cluster nodes.
The "gluster volume status volname inode" command should not
hang while IO is in progress.
Then check whether the IO completed successfully on the mount point.
Check that the files on the mount point are listed properly.
Change-Id: I48285ecb25235dadc82e30a750ad303b6e45fffd
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
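A sketch of the "limited number of times" loop, run from one of the servers while
the client IO is in progress; the volume name, iteration count and timeout are
placeholder assumptions, and a hang simply surfaces as a timeout failure.

    # Run "gluster volume status <vol> inode" a bounded number of times and
    # treat any hang (no answer within the timeout) as a test failure.
    import subprocess

    VOLNAME = "testvol"

    for _ in range(5):
        try:
            res = subprocess.run(["gluster", "volume", "status", VOLNAME, "inode"],
                                 capture_output=True, text=True, timeout=300)
        except subprocess.TimeoutExpired:
            raise AssertionError("'volume status inode' appears to be hung")
        assert res.returncode == 0, res.stderr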
Verify the bitd, scrub and snapd daemons after volume reset and volume reset force.
Description: Create a Distribute volume, then enable bitrot and USS on that volume,
then check whether the bitd, scrub and snapd daemons are running.
Then perform a volume reset; after the reset only the snapd
daemon gets killed, while the bitd and scrub daemons remain running.
Then perform a volume reset with force; after the reset with
force all three daemons (bitd, scrub, snapd) get killed and
are no longer running.
Below are the steps performed for this test case (a CLI sketch follows the list):
-> Create a Distributed volume
-> Enable BitD, Scrub and USS on the volume
-> Verify the BitD, Scrub and USS daemons are running on every node
-> Reset the volume
-> Verify whether the daemons (BitD, Scrub & USS) are still running
-> Enable USS on the same volume
-> Reset the volume with force
-> Verify whether all the daemons (BitD, Scrub & USS) are running
Change-Id: I15d71d1434ec84d80293fda2ab6a8d02a3af5fd6
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
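A CLI sketch of the reset and reset-force steps, checking the daemons by process
pattern; the volume name, the pgrep patterns and the settle-time sleeps are
assumptions for illustration, not the glusto-tests implementation.

    # Enable bitrot and USS, then verify which daemons survive a plain
    # "volume reset" versus a "volume reset ... force".
    import subprocess
    import time

    VOL = "distvol"

    def gluster(*args):
        return subprocess.run(["gluster", "--mode=script", *args],
                              capture_output=True, text=True).returncode

    def running(pattern):
        return subprocess.run(["pgrep", "-f", pattern],
                              capture_output=True).returncode == 0

    assert gluster("volume", "bitrot", VOL, "enable") == 0
    assert gluster("volume", "set", VOL, "features.uss", "enable") == 0
    time.sleep(10)                      # give the daemons time to spawn
    assert all(running(p) for p in ("bitd", "scrub", "snapd"))

    # Plain reset: only snapd is expected to stop.
    assert gluster("volume", "reset", VOL) == 0
    time.sleep(10)
    assert running("bitd") and running("scrub") and not running("snapd")

    # Re-enable USS, then reset with force: all three daemons should stop.
    assert gluster("volume", "set", VOL, "features.uss", "enable") == 0
    assert gluster("volume", "reset", VOL, "force") == 0
    time.sleep(10)
    assert not any(running(p) for p in ("bitd", "scrub", "snapd"))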