* glusterfile.py - helper for gluster client and backend files.
* glusterdir.py - helper for gluster client and backend dirs.
* brickdir.py - helper for collection and hashing of brickdirs (from pathinfo data).
* layout.py - base class for simple DHT layout validation.
* dht_test_util.py - utility module to walk a directory tree and run tests against files (see the sketch after this entry).
* constants.py - definitions for constants used in DHT libraries.
* exceptions.py - definitions for exceptions raised in DHT libraries.
Change-Id: I44770a822e0ec79561b3aa048e555320f622116a
Signed-off-by: Jonathan Holloway <jholloway@redhat.com>
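A minimal sketch of the kind of tree walk dht_test_util provides, with a hypothetical per-file validator standing in for the real DHT checks (the module's actual API may differ):

    import os

    def walk_and_validate(root, file_test):
        # Walk a directory tree and apply file_test to every file;
        # file_test is any callable that validates a single file,
        # e.g. a DHT hash/layout check.
        results = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                results.append((path, file_test(path)))
        return results

    # Example: flag zero-byte files (a stand-in for a real layout test).
    # walk_and_validate("/mnt/glusterfs", lambda p: os.path.getsize(p) > 0)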
Change-Id: I35568ef8234bc11a8bcf775315c24d9914fbb99d
Signed-off-by: Karan Sandha <ksandha@redhat.com>
The purpose of this test is to validate snapshot creation (sketched below).
Change-Id: Ia2941a45ee62661bcef855ed4ed05a5c0aba6fb7
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
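A minimal sketch of the operation under test, driving the standard gluster CLI from Python (the volume and snapshot names are placeholders):

    import subprocess

    # Create a snapshot of an existing, started volume; a zero return
    # code indicates the snapshot was taken successfully.
    ret = subprocess.run(["gluster", "snapshot", "create", "snap1", "testvol"])
    assert ret.returncode == 0, "snapshot create failed"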
The purpose of this test is to validate restore of a snapshot.
Change-Id: Icd73697b10bbec4a1a9576420207ebb26cd69139
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
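A minimal sketch of a snapshot restore with placeholder names; gluster requires the volume to be stopped before a restore:

    import subprocess

    subprocess.run(["gluster", "--mode=script", "volume", "stop", "testvol"])
    ret = subprocess.run(["gluster", "snapshot", "restore", "snap1"])
    assert ret.returncode == 0, "snapshot restore failed"
    subprocess.run(["gluster", "volume", "start", "testvol"])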
The purpose of this test is to validate snapshot creation beyond the 256-snapshot limit (sketched below).
Change-Id: Iea5e2ddcc3a5ef066cf4f55e1895947326a07904
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
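A minimal sketch of the limit check, assuming the default snap-max-hard-limit of 256 and a placeholder volume name:

    import subprocess

    # Fill the volume up to the default hard limit of 256 snapshots ...
    for i in range(256):
        subprocess.run(["gluster", "snapshot", "create",
                        "snap%d" % i, "testvol", "no-timestamp"], check=True)
    # ... then the 257th create must be rejected.
    ret = subprocess.run(["gluster", "snapshot", "create",
                          "snap256", "testvol", "no-timestamp"])
    assert ret.returncode != 0, "snapshot beyond the hard limit should fail"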
global to whole cluster
Change-Id: I9cd8ae1f490bc870540657b4f309197f8cee737e
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
In this test case, the following are validated (see the CLI sketch after this entry):
1. adding a single brick to a replicated volume
2. adding non-existent bricks to a volume
3. adding bricks from a node which is not part of the cluster
4. triggering rebalance start after add-brick
Change-Id: I982ff42dcbe6cd0cfbf3653b8cee0b269314db3f
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
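A minimal CLI sketch of the scenarios above (server names and brick paths are placeholders; expected outcomes follow the test description):

    import subprocess

    def gluster(*args):
        return subprocess.run(["gluster", "--mode=script"]
                              + list(args)).returncode

    # 1. a single brick breaks the replica multiple, so it must fail
    assert gluster("volume", "add-brick", "testvol",
                   "server1:/bricks/new") != 0
    # 2. a non-existent brick path must fail
    assert gluster("volume", "add-brick", "testvol",
                   "server1:/no/such/brick") != 0
    # 3. a brick on a node outside the cluster must fail
    assert gluster("volume", "add-brick", "testvol",
                   "stranger:/bricks/b1") != 0
    # 4. rebalance can be started after a (valid) add-brick
    assert gluster("volume", "rebalance", "testvol", "start") == 0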
In this test case basic volume operations are validated, i.e.
starting, stopping and deleting a non-existent volume, creating
all types of volumes, creating a volume using a brick from a
node which is not part of the cluster, starting an already started
volume, stopping a volume twice, deleting a volume twice, and
validating the volume info and volume list commands. These commands
also validate the XML output internally. (A CLI sketch follows below.)
Change-Id: Ibf44d24e678d8bb14aa68bdeff988488b74741c6
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
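A minimal sketch of the same lifecycle checks via the CLI (names and bricks are placeholders; the test also validates the --xml variants of info and list):

    import subprocess

    def gluster(*args):
        return subprocess.run(["gluster", "--mode=script"]
                              + list(args)).returncode

    assert gluster("volume", "start", "no_such_vol") != 0   # non-existent
    assert gluster("volume", "create", "testvol", "replica", "3",
                   "server1:/bricks/b1", "server2:/bricks/b1",
                   "server3:/bricks/b1") == 0
    assert gluster("volume", "start", "testvol") == 0
    assert gluster("volume", "start", "testvol") != 0       # already started
    assert gluster("volume", "info", "testvol") == 0
    assert gluster("volume", "list") == 0
    assert gluster("volume", "stop", "testvol") == 0
    assert gluster("volume", "stop", "testvol") != 0        # stopped twice
    assert gluster("volume", "delete", "testvol") == 0
    assert gluster("volume", "delete", "testvol") != 0      # deleted twice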
It includes:
- Create 2 volumes
- Run concurrent set operations on both the volumes (sketched below)
- Check for errors and whether any core file is generated
Change-Id: I5f735290ff57ec5e9ad8d85fd5d822c739dbbb5c
Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
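A minimal sketch of the concurrent set operations (volume names and the chosen option are placeholders):

    import subprocess
    import threading

    def churn(vol):
        # Repeatedly set a harmless option on the given volume.
        for _ in range(50):
            subprocess.run(["gluster", "volume", "set", vol,
                            "performance.cache-size", "64MB"], check=True)

    threads = [threading.Thread(target=churn, args=(v,))
               for v in ("vol1", "vol2")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Afterwards, verify glusterd reported no errors and no core file
    # appeared (e.g. under /).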
performing NFS mount and unmount on all volumes,
and performing different types of quorum settings:
-> Set nfs.disable off
-> Mount it with nfs and unmount it
-> Set nfs.disable enable
-> Mount it with nfs
-> Set nfs.disable disable
-> Enable server quorum
-> Set the quorum ratio to numbers and percentages;
negative numbers should fail, negative percentages should fail,
fractions should fail, negative fractions should fail
(the ratio values are sketched below)
Change-Id: I6c4f022d571378f726b1cdbb7e74fdbc98d7f8cb
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
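A minimal sketch of the quorum-ratio checks (expected outcomes follow the test description):

    import subprocess

    def set_ratio(value):
        return subprocess.run(["gluster", "volume", "set", "all",
                               "cluster.server-quorum-ratio",
                               value]).returncode

    assert set_ratio("51%") == 0    # percentage: accepted
    assert set_ratio("51") == 0     # plain number: accepted
    assert set_ratio("-1") != 0     # negative number: rejected
    assert set_ratio("-51%") != 0   # negative percentage: rejected
    assert set_ratio("1/2") != 0    # fraction: rejected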
This testcase covers the scenarios below:
1) Add-brick without volname
2) Adding a duplicate brick
3) Adding a brick which is already part of another volume
4) Adding a nested brick, i.e. a brick inside another brick
5) Adding a brick to a non-existent volume
6) Adding a brick from a peer which is not in the cluster
Change-Id: I2d68715facabaa172db94afc7e1b64f95fb069a7
Signed-off-by: Prasad Desala <tdesala@redhat.com>
Needs to be done for cases where we don't use runs_on
Change-Id: I0d5b424621706842fb1a8cccb17c653c6dcff72d
Change-Id: Ibef22a1719fe44aac20024d82fd7f2425945149c
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I591c9ca5da52e76b3300c243a5121d27ac89a8f1
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Change-Id: Ib7dcf883d9aacd89c652d8cee9e66d5bc44169b0
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
desc:
Create two volumes.
Set server quorum on both the volumes.
Set the server quorum ratio to 90%.
Stop the glusterd service on any one of the nodes;
a quorum lost message should be recorded with message id 106002
for both the volumes in /var/log/messages and
/var/log/glusterfs/glusterd.log.
Start the glusterd service on the same node;
a quorum regained message should be recorded with message id 106003
for both the volumes in /var/log/messages and
/var/log/glusterfs/glusterd.log. (A log-check sketch follows below.)
Change-Id: I9ecab59b6131fc9c4c58bb972b3a41f15af1b87c
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
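A minimal sketch of the log check; gluster tags its log lines as "[MSGID: <id>]", so grepping both files for the ids is enough:

    import subprocess

    for msgid in ("106002", "106003"):
        for log in ("/var/log/messages",
                    "/var/log/glusterfs/glusterd.log"):
            hit = subprocess.run(["grep", "-q", "MSGID: %s" % msgid, log])
            print(msgid, log,
                  "found" if hit.returncode == 0 else "missing")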
Once a volume is mounted and quota is enabled, we have to make sure
that quota cannot be set on a directory that does not exist (sketched
below).
Change-Id: Ic89551c6d96b628fe04c19605af696800695721d
Signed-off-by: hari gowtham <hgowtham@redhat.com>
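A minimal sketch of the negative check (volume name and path are placeholders):

    import subprocess

    subprocess.run(["gluster", "volume", "quota", "testvol", "enable"],
                   check=True)
    # Setting a limit on a path that does not exist on the volume
    # must be rejected.
    ret = subprocess.run(["gluster", "volume", "quota", "testvol",
                          "limit-usage", "/no_such_dir", "10MB"])
    assert ret.returncode != 0, "limit-usage on a missing directory should fail"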
Change-Id: I382efec40d425c21ee5e0b91d0715fcdb07eae9f
Signed-off-by: Nigel Babu <nigelb@redhat.com>
Change-Id: If92b6f756f362cb4ae90008c6425b6c6652e3758
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: Ic1191e993db10a110fc753436ec60051adfd5350
Signed-off-by: srivickynesh <sselvan@redhat.com>
Change-Id: If20aac0247dc42194a23c2b64952aac83234292e
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Add a try around the directory copy to eliminate the readthedocs.org
"cannot open" error (pattern sketched below).
Change-Id: Ie9160a8b7dc42839fe4c176c89aa67ae26c1266e
Signed-off-by: Jonathan Holloway <jholloway@redhat.com>
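A sketch of the workaround pattern, with placeholder paths (the actual source and destination live in the docs build script):

    import shutil

    # Tolerate a pre-existing or unreadable destination instead of
    # failing the whole documentation build.
    try:
        shutil.copytree("docs", "_build/docs")
    except (shutil.Error, OSError) as err:
        print("doc copy skipped: %s" % err)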
volume get functionalities
Test steps:
1. Create a gluster cluster
2. Get an option from a non-existing volume,
# gluster volume get <non-existing vol> io-cache
3. Get all options from a non-existing volume,
# gluster volume get <non-existing volume> all
4. Provide an incorrect command syntax to get the options
from the volume
# gluster volume get <vol-name>
# gluster volume get
# gluster volume get io-cache
5. Create any type of volume in the cluster
6. Get the value of a non-existing option
# gluster volume get <vol-name> temp.key
7. Get all options set on the volume
# gluster volume get <vol-name> all
8. Get a specific option set on the volume
# gluster volume get <vol-name> io-cache
9. Set an option on the volume
# gluster volume set <vol-name> performance.low-prio-threads 14
10. Get all the options set on the volume and check
for low-prio-threads
# gluster volume get <vol-name> all | grep -i low-prio-threads
11. Get all the options set on the volume
# gluster volume get <vol-name> all
12. Check for any core files under /
Change-Id: Ifd7697e68d7ecf297d7be75680a5681686c51ca0
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Description:
Test script for client-side quorum with quorum-type fixed; it should
validate the maximum number of bricks to accept:
* set cluster.quorum-type to fixed
* set cluster.quorum-count to a number greater than the
number of replicas in a subvolume
* the above step should fail (sketched below)
Change-Id: I83952a07d36f5f890f3649a691afad2d0ccf037f
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
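A minimal sketch of the failing step, assuming a replica-3 volume with a placeholder name:

    import subprocess

    subprocess.run(["gluster", "volume", "set", "testvol",
                    "cluster.quorum-type", "fixed"], check=True)
    # For a replica-3 subvolume, a quorum-count above 3 must be rejected.
    ret = subprocess.run(["gluster", "volume", "set", "testvol",
                          "cluster.quorum-count", "4"])
    assert ret.returncode != 0, "quorum-count > replica count should fail"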
-> Detaching a specified server from the cluster
-> Detaching the detached server again
-> Detaching an invalid host
-> Detaching a non-existent host
-> Checking whether a core file is created
-> Peer detaching a node which contains the bricks of a created volume
-> Peer detach force on a node which is hosting bricks of a volume
(the detach scenarios are sketched below)
Change-Id: I6a1fce6e7c626f822ddbc43ea4d2fcd4bc3262c8
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
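A minimal CLI sketch of the detach scenarios (host names are placeholders; expected outcomes follow the test description):

    import subprocess

    def detach(host, *extra):
        return subprocess.run(["gluster", "--mode=script", "peer",
                               "detach", host] + list(extra)).returncode

    assert detach("server2") == 0        # detach a brick-less member
    assert detach("server2") != 0        # detaching it again fails
    assert detach("invalid..host") != 0  # invalid host
    assert detach("no.such.host") != 0   # non-existent host
    # A node still hosting bricks cannot be detached without force;
    # the force variant's behaviour is what the test then verifies.
    assert detach("server3") != 0
    detach("server3", "force")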
This is a library to check whether quota's hard and soft limits are
exceeded, based on the output of the quota list xml command (parsing
sketched below).
Change-Id: Ie02ab9fcbf2aa2d248e0cb6385ab3d3f0554dec0
Signed-off-by: hari gowtham <hgowtham>
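A minimal sketch of parsing that xml output; the limit fields (sl_exceeded, hl_exceeded) are the tag names the CLI emits, but verify them against your gluster version:

    import subprocess
    import xml.etree.ElementTree as ET

    out = subprocess.run(["gluster", "volume", "quota", "testvol",
                          "list", "--xml"],
                         capture_output=True, text=True).stdout
    root = ET.fromstring(out)
    for limit in root.iter("limit"):
        print(limit.findtext("path"),
              "soft exceeded:", limit.findtext("sl_exceeded"),
              "hard exceeded:", limit.findtext("hl_exceeded"))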
host, non-existing ip
Library for checking whether a core file is created; added an
is_core_file_created() function to lib_utils.py.
Test Desc:
Test script to verify peer probe of a non-existing host
and an invalid ip: peer probe has to fail for a
non-existing host, glusterd services must be up and running
after the invalid peer probe, and a core file should not
get created under "/", /tmp, or the /var/log/core directory
(sketched below).
Adds glusterd peer probe test cases with modifications according to
review comments, plus a lib for core file verification.
Change-Id: I0ebd6ee2b340d1f1b01878cb0faf69f41fec2e10
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
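A minimal sketch of the probe checks and the core-file scan (host names are placeholders; the scanned locations follow the description above):

    import glob
    import subprocess

    assert subprocess.run(["gluster", "peer", "probe",
                           "no.such.host"]).returncode != 0
    assert subprocess.run(["gluster", "peer", "probe",
                           "999.999.999.999"]).returncode != 0
    # glusterd must still respond after the failed probes ...
    assert subprocess.run(["gluster", "peer", "status"]).returncode == 0
    # ... and no core file may appear in the usual locations.
    cores = (glob.glob("/core*") + glob.glob("/tmp/core*")
             + glob.glob("/var/log/core/core*"))
    assert not cores, "unexpected core files: %s" % cores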
Change-Id: I73512dde33207295fa954a3b3949f653f03f23c0
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
mount_type is 'glusterfs'
Change-Id: I00f0b5edfea0e09381d1404a0cfd16396a8fbde9
Signed-off-by: ShwethaHP <spandura@redhat.com>
In this test case,
1. volume creation on a root brick path, without force and with force,
is validated.
2. after deleting a brick manually, starting the volume with force
should not bring that brick back online; this is validated.
3. after clearing all attributes, we should be able to create another
volume with the previously used bricks (the cleanup is sketched below);
this is validated.
Change-Id: I7fbf241c7e0fee276ff5f68b47a7a89e928f367c
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
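A minimal sketch of the attribute cleanup that makes a used brick reusable (the brick path is a placeholder; run on the brick's node):

    import shutil
    import subprocess

    brick = "/bricks/brick1"
    # Strip the gluster volume id and gfid markers ...
    subprocess.run(["setfattr", "-x", "trusted.glusterfs.volume-id", brick])
    subprocess.run(["setfattr", "-x", "trusted.gfid", brick])
    # ... and drop the internal metadata directory.
    shutil.rmtree(brick + "/.glusterfs", ignore_errors=True)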
The volume delete operation should fail when one of the brick nodes is
down; this is validated in this test case.
Change-Id: I17649de2837f4aee8b50a5fcd760eb9f7c88f3cd
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Change-Id: Ica984b28d2c23e3d0d716d8c0dde6ab6ef69dc8f
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
command a limited number of times while IO is in progress.
Desc:
Create any type of volume, then mount the volume; once the
volume is mounted successfully on the client, start running IO on the
mount point, then run the "gluster volume status volname inode"
command on all cluster nodes randomly (sketched below).
The "gluster volume status volname inode" command should not hang
while IO is in progress.
Then check whether the IO completed successfully on the mount point,
and check that the files on the mount point are listed properly.
Change-Id: I48285ecb25235dadc82e30a750ad303b6e45fffd
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
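A minimal sketch of the repeated status query (the volume name is a placeholder; IO is assumed to be running on the mount):

    import subprocess

    for _ in range(10):
        # The command must return (not hang) while IO is in progress.
        ret = subprocess.run(["gluster", "volume", "status", "testvol",
                              "inode"], timeout=300)
        assert ret.returncode == 0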
Description:
Function to check whether the given nodes are online (idea sketched
below).
Change-Id: I92a40ac1a6bcdbdfb845413902dd0a798c68ed5c
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
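A sketch of the idea behind such a check, pinging each node once (the library function's actual signature may differ):

    import subprocess

    def are_nodes_online(nodes):
        # Accept a single node (str) or a list of nodes.
        if isinstance(nodes, str):
            nodes = [nodes]
        return {node: subprocess.run(
                    ["ping", "-c", "1", "-W", "2", node],
                    stdout=subprocess.DEVNULL).returncode == 0
                for node in nodes}

    # are_nodes_online(["server1", "server2"]) -> {"server1": True, ...}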
Replace all the time.sleep() instances with the
wait_for_volume_process_to_be_online function (poll pattern sketched
below).
Change-Id: Id7e34979f811bd85f7475748406803026741a3a8
Signed-off-by: ShwethaHP <spandura@redhat.com>
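A simplified stand-in showing the poll-with-timeout pattern that replaces the fixed sleeps (the real library function inspects per-process status rather than just the command's return code):

    import subprocess
    import time

    def wait_for_volume_process_to_be_online(volname, timeout=300):
        # Poll instead of sleeping a fixed time; return True as soon
        # as the status query succeeds, False on timeout.
        deadline = time.time() + timeout
        while time.time() < deadline:
            if subprocess.run(["gluster", "volume", "status",
                               volname]).returncode == 0:
                return True
            time.sleep(5)
        return False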
are online
Change-Id: I25424dd182c347a0570713ada8d2de611840fef3
Signed-off-by: ShwethaHP <spandura@redhat.com>
Change-Id: Ifc6aa4d106dadf97e1741ec54a3323ea96e33101
Signed-off-by: ShwethaHP <spandura@redhat.com>
This test does not work on NFS mounts due to bug 1473668. Disable this
test until we can find a workaround that makes it actually green.
Change-Id: Icd93cd796be5e8a72e144ba09e66733d6dcf5913
Description:
Bring the self-heal daemon process offline for the nodes
Change-Id: I55301fb86a97147920991aa4455e8e5d80b1c5c3
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
This function parses the output of the 'gluster vol remove-brick
status' command for the given volume and returns the remove-brick
status in dictionary format (sketched below).
Change-Id: I91b0bf9221b2645041abc5bbc016e356d0072b0b
Signed-off-by: Prasad Desala <tdesala@redhat.com>
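A minimal sketch of such parsing via the CLI's --xml output (volume and brick are placeholders; child tag names should be verified against your gluster version):

    import subprocess
    import xml.etree.ElementTree as ET

    out = subprocess.run(["gluster", "volume", "remove-brick", "testvol",
                          "server1:/bricks/b1", "status", "--xml"],
                         capture_output=True, text=True).stdout
    root = ET.fromstring(out)
    # One dict per participating node, keyed by the xml child tags.
    status = [{child.tag: child.text for child in node}
              for node in root.iter("node")]
    print(status)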
Change-Id: I7a8465a60c8e5d8f84a647ae65dbabcab2184516
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Change-Id: If303d22f52d31e99676a6e97fbe0b9cb7d5a1234
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
reset force
Description: Create a Distribute volume, then enable bitrot and uss on
that volume, then check whether the bitd, scrub and snapd daemons are
running. Then perform a volume reset; after the volume reset only the
snap daemon gets killed, but the bitd and scrub daemons remain running.
Then perform a volume reset with force; after the volume reset with
force all three daemons (bitd, scrub, snapd) get killed and are no
longer running.
Below are the steps performed for this test case (sketched after this
entry):
-> Create a Distributed volume
-> Enable BitD, Scrub and Uss on the volume
-> Verify the BitD, Scrub and Uss daemons are running on every node
-> Reset the volume
-> Verify whether the daemons (BitD, Scrub & Uss) are running
-> Enable Uss on the same volume
-> Reset the volume with force
-> Verify whether all the daemons (BitD, Scrub & Uss) are running
Change-Id: I15d71d1434ec84d80293fda2ab6a8d02a3af5fd6
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
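A minimal CLI sketch of the enable/reset sequence (the volume name is a placeholder; daemon liveness would be checked via 'gluster volume status' between steps):

    import subprocess

    def gluster(*args):
        return subprocess.run(["gluster", "--mode=script"]
                              + list(args)).returncode

    assert gluster("volume", "bitrot", "testvol", "enable") == 0
    assert gluster("volume", "set", "testvol", "features.uss", "enable") == 0
    # Plain reset only kills snapd; bitd and scrub keep running.
    assert gluster("volume", "reset", "testvol") == 0
    assert gluster("volume", "set", "testvol", "features.uss", "enable") == 0
    # Reset with force kills bitd, scrub and snapd.
    assert gluster("volume", "reset", "testvol", "force") == 0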
Change-Id: Ia0f07590ceb8f680a8e750f793f37a63177904dc
Signed-off-by: Ambarish Soman <asoman@redhat.com>
Added is_snapd_running function to uss_ops.py file
Change-Id: Ib1ff0a16550c94604209588bd5221f9ee6e9db92
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Gave meaningful names to functions
Returning -1 if there is no process running
Replace numbers with words
Rewording the msg "More than 1 or 0 self heal daemon"
Review Comments incorporated
Change-Id: If424a6f78536279c178ee45d62099fd8f63421dd
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Change-Id: Id33b1de7c01cd7774d3c4cce3c40ddfe2dc0d884
Signed-off-by: Prasad Desala <tdesala@redhat.com>
This should fix the test_cvt_test_self_heal_when_io_is_in_progress
failure:
    self.assertTrue(ret, "Not all the bricks in list:%s are offline",
                    bricks_to_bring_offline)
    E TypeError: assertTrue() takes at most 3 arguments (4 given)
Change-Id: Ibfee5253020c2f8927c4fd22a992f7cff7509a5d
Signed-off-by: ShwethaHP <spandura@redhat.com>
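The corrected call formats the brick list into the message first, since assertTrue() only accepts a single msg argument:

    self.assertTrue(ret, "Not all the bricks in list: %s are offline"
                    % bricks_to_bring_offline)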
is str, i.e. a single node is passed to the function. If it is a
str, then convert it to a list (sketched below).
Change-Id: I1abacf62fdbe1ec56fe85c86d8e2a323a2c3971b
Signed-off-by: ShwethaHP <spandura@redhat.com>
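A minimal sketch of the normalisation (names are illustrative):

    def normalize_nodes(nodes):
        # Accept either one node (str) or a list of nodes.
        if isinstance(nodes, str):
            return [nodes]
        return list(nodes)

    # normalize_nodes("server1")              -> ["server1"]
    # normalize_nodes(["server1", "server2"]) -> ["server1", "server2"]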
glustolibs-gluster libs
Change-Id: I44f559dd0477f97278b1444e7a6d292ca58b99dc
Signed-off-by: ShwethaHP <spandura@redhat.com>