Change-Id: Iabe08da9676b027de7b46622ee73162dcbffd98c
Signed-off-by: srivickynesh <sselvan@redhat.com>
Change-Id: I35568ef8234bc11a8bcf775315c24d9914fbb99d
Signed-off-by: Karan Sandha <ksandha@redhat.com>
The purpose of this test is to validate snapshot creation.
Change-Id: Ia2941a45ee62661bcef855ed4ed05a5c0aba6fb7
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
The purpose of this test is to validate restore of a snapshot.
Change-Id: Icd73697b10bbec4a1a9576420207ebb26cd69139
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
The purpose of this test is to validate creating more than 256 snapshots (snap count > 256).
Change-Id: Iea5e2ddcc3a5ef066cf4f55e1895947326a07904
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
global to whole cluster
Change-Id: I9cd8ae1f490bc870540657b4f309197f8cee737e
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
In this test case, the following are validated:
1. Adding a single brick to a replicated volume
2. Adding non-existing bricks to a volume
3. Adding bricks from a node which is not part of the cluster
4. Triggering rebalance start after add-brick
Change-Id: I982ff42dcbe6cd0cfbf3653b8cee0b269314db3f
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
In this test case basic volume operations are validated, i.e.,
starting, stopping and deleting a non-existing volume, creating
all types of volumes, creating a volume using a brick from a
node which is not part of the cluster, starting an already started
volume, stopping a volume twice, deleting a volume twice and
validating the volume info and volume list commands. These commands
also validate the xml output internally.
Change-Id: Ibf44d24e678d8bb14aa68bdeff988488b74741c6
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
It includes:
- Create 2 volumes
- Run concurrent set operations on both the volumes
- Check for errors or whether any core file is generated
Change-Id: I5f735290ff57ec5e9ad8d85fd5d822c739dbbb5c
Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
performing NFS mount and unmount on all volumes,
performing different types of quorum settings.
-> Set nfs.disable off
-> Mount it with nfs and unmount it
-> Set nfs.disable enable
-> Mount it with nfs
-> Set nfs.disable disable
-> Enable server quorum
-> Set the quorum ratio to numbers and percentages;
negative numbers should fail, negative percentages should fail,
fractions should fail, negative fractions should fail
Change-Id: I6c4f022d571378f726b1cdbb7e74fdbc98d7f8cb
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
This testcase covers the below scenarios:
1) Add-brick without volname
2) Adding a duplicate brick
3) Adding a brick which is already part of another volume
4) Adding a nested brick, i.e. a brick inside another brick
5) Adding a brick to a non-existent volume
6) Adding a brick from a peer which is not in the cluster
Change-Id: I2d68715facabaa172db94afc7e1b64f95fb069a7
Signed-off-by: Prasad Desala <tdesala@redhat.com>
Change-Id: Ibef22a1719fe44aac20024d82fd7f2425945149c
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Desc:
Create two volumes.
Set server quorum on both the volumes.
Set the server quorum ratio to 90%.
Stop the glusterd service on any one of the nodes;
a quorum lost message should be recorded with message id 106002
for both the volumes in /var/log/messages and
/var/log/glusterfs/glusterd.log.
Start the glusterd service on the same node;
a quorum regain message should be recorded with message id 106003
for both the volumes in /var/log/messages and
/var/log/glusterfs/glusterd.log.
Change-Id: I9ecab59b6131fc9c4c58bb972b3a41f15af1b87c
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
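
Verifying the message IDs described above amounts to a substring search over the log files; a minimal local sketch (the log line below is fabricated for illustration, not verbatim glusterd output):

```python
import tempfile

def log_contains_msgid(log_path, msgid):
    """Return True if any line of the log mentions the given message id."""
    with open(log_path) as log:
        return any(msgid in line for line in log)

# Demo: write a fabricated quorum-lost log line and search it.
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as f:
    f.write("[2018-03-01 10:00:00.000000] C [MSGID: 106002] "
            "[glusterd-server-quorum.c] 0-management: Server quorum lost\n")
    sample_log = f.name

print(log_contains_msgid(sample_log, "106002"))  # → True
print(log_contains_msgid(sample_log, "106003"))  # → False
```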
Once a volume is mounted and quota is enabled, we have to make sure
that quota cannot be set on a directory that does not exist.
Change-Id: Ic89551c6d96b628fe04c19605af696800695721d
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Change-Id: If92b6f756f362cb4ae90008c6425b6c6652e3758
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: Ic1191e993db10a110fc753436ec60051adfd5350
Signed-off-by: srivickynesh <sselvan@redhat.com>
volume get functionalities
Test steps:
1. Create a gluster cluster
2. Get an option from a non-existing volume,
# gluster volume get <non-existing vol> io-cache
3. Get all options from a non-existing volume,
# gluster volume get <non-existing volume> all
4. Provide an incorrect command syntax to get the options
from the volume
# gluster volume get <vol-name>
# gluster volume get
# gluster volume get io-cache
5. Create any type of volume in the cluster
6. Get the value of a non-existing option
# gluster volume get <vol-name> temp.key
7. Get all options set on the volume
# gluster volume get <vol-name> all
8. Get a specific option set on the volume
# gluster volume get <vol-name> io-cache
9. Set an option on the volume
# gluster volume set <vol-name> performance.low-prio-threads 14
10. Get all the options set on the volume and check
for low-prio-threads
# gluster volume get <vol-name> all | grep -i low-prio-threads
11. Get all the options set on the volume
# gluster volume get <vol-name> all
12. Check for any core files under "/"
Change-Id: Ifd7697e68d7ecf297d7be75680a5681686c51ca0
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Description:
Test script for client side quorum set to fixed; it should validate
the maximum number of bricks to accept:
* Set cluster quorum-type to fixed
* Set cluster.quorum-count to a number greater than the
number of replicas in a sub-volume
* The above step should fail
Change-Id: I83952a07d36f5f890f3649a691afad2d0ccf037f
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
-> Detaching a specified server from the cluster
-> Detaching the detached server again
-> Detaching an invalid host
-> Detaching a non-existent host
-> Checking whether a core file is created
-> Peer detach of a node which contains the bricks of a created volume
-> Peer detach force of a node which is hosting bricks of a volume
Change-Id: I6a1fce6e7c626f822ddbc43ea4d2fcd4bc3262c8
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
host, non-existing ip
Added an is_core_file_created() function to lib_utils.py as a library
to check whether a core file was created.
Test Desc:
Test script to verify peer probe of a non-existing host
and an invalid ip; peer probe has to fail for a
non-existing host, glusterd services should be up and running
after the invalid peer probe, and a core file should not
get created under the "/", /tmp and /var/log/core directories.
Glusterd peer probe test cases added with modifications according to
review comments; added a lib for core file verification.
Change-Id: I0ebd6ee2b340d1f1b01878cb0faf69f41fec2e10
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
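
The core-file check described above can be sketched as a scan of the usual dump locations. This is an illustrative local version only; the real lib_utils.py helper runs the check on the cluster nodes and its name/return convention may differ:

```python
import glob
import os
import tempfile

def is_core_file_created(dirs=("/", "/tmp", "/var/log/core")):
    """Return True if a file matching core* exists in any of the given
    directories (taken literally from the function name; the actual
    glusto-tests helper may use the opposite convention)."""
    for directory in dirs:
        for path in glob.glob(os.path.join(directory, "core*")):
            if os.path.isfile(path):
                return True
    return False

# Demo: a freshly created empty directory contains no core files.
empty_dir = tempfile.mkdtemp()
print(is_core_file_created(dirs=(empty_dir,)))  # → False
```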
Change-Id: I73512dde33207295fa954a3b3949f653f03f23c0
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
In this test case,
1. Volume creation on a root brick path without force and with force
is validated.
2. Deleting a brick manually and then starting the volume with force
should not bring that brick online; this is validated.
3. After clearing all attributes, we should be able to create another
volume with the previously used bricks; this is validated.
Change-Id: I7fbf241c7e0fee276ff5f68b47a7a89e928f367c
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
A volume delete operation should fail when one of the brick nodes is
down; this is validated in this test case.
Change-Id: I17649de2837f4aee8b50a5fcd760eb9f7c88f3cd
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Change-Id: Ica984b28d2c23e3d0d716d8c0dde6ab6ef69dc8f
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
command a limited number of times while IO is in progress.
Desc:
Create any type of volume and mount it; once the
volume is mounted successfully on the client, start running IOs on
the mount point, then run the "gluster volume status volname inode"
command on all cluster nodes randomly.
The "gluster volume status volname inode" command should not
hang while IOs are in progress.
Then check whether the IOs completed successfully on the mount point,
and that the files on the mount point are listed properly.
Change-Id: I48285ecb25235dadc82e30a750ad303b6e45fffd
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Replace all the time.sleep() instances with
wait_for_volume_process_to_be_online function
Change-Id: Id7e34979f811bd85f7475748406803026741a3a8
Signed-off-by: ShwethaHP <spandura@redhat.com>
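
The replacement follows the usual poll-with-timeout pattern: instead of sleeping for a fixed interval and hoping the processes are up, poll until the condition holds or a deadline passes. A generic sketch (the real wait_for_volume_process_to_be_online signature may differ):

```python
import time

def wait_for_condition(check, timeout=300, interval=5):
    """Poll `check` until it returns True or `timeout` seconds elapse.

    Unlike a fixed time.sleep(), this returns as soon as the condition
    holds, and returns False only after the full timeout expires.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Example: a condition that becomes true on the second poll.
state = {"calls": 0}
def volume_process_online():
    state["calls"] += 1
    return state["calls"] >= 2

print(wait_for_condition(volume_process_online, timeout=10, interval=0.01))  # → True
```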
This test does not work on NFS mounts due to bug 1473668. Disable this
test until we can find a workaround that makes this test actually green.
Change-Id: Icd93cd796be5e8a72e144ba09e66733d6dcf5913
reset force
Description: Create a Distribute volume, then enable bitrot and uss on
that volume, then check whether the bitd, scrub and snapd daemons are
running.
Then perform a volume reset; after volume reset only the snap
daemon will get killed, but the bitd and scrub daemons will remain
running.
Then perform volume reset with force; after volume reset with
force all three (bitd, scrub, snapd) daemons will get killed and
will not be running.
Below are the steps performed in this test case:
-> Create a Distributed volume
-> Enable BitD, Scrub and Uss on the volume
-> Verify the BitD, Scrub and Uss daemons are running on every node
-> Reset the volume
-> Verify whether the daemons (BitD, Scrub & Uss) are running
-> Enable Uss on the same volume
-> Reset the volume with force
-> Verify whether all the daemons (BitD, Scrub & Uss) are running
Change-Id: I15d71d1434ec84d80293fda2ab6a8d02a3af5fd6
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Change-Id: Ia0f07590ceb8f680a8e750f793f37a63177904dc
Signed-off-by: Ambarish Soman <asoman@redhat.com>
Gave meaningful names to functions
Return -1 if there is no process running
Replaced numbers with words
Reworded the msg "More than 1 or 0 self heal daemon"
Review comments incorporated
Change-Id: If424a6f78536279c178ee45d62099fd8f63421dd
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
This should fix the test_cvt_test_self_heal_when_io_is_in_progress:
self.assertTrue(ret, "Not all the bricks in list:%s are offline",
> bricks_to_bring_offline)
E TypeError: assertTrue() takes at most 3 arguments (4 given)
Change-Id: Ibfee5253020c2f8927c4fd22a992f7cff7509a5d
Signed-off-by: ShwethaHP <spandura@redhat.com>
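
The fix boils down to formatting the message before passing it, since assertTrue accepts at most one msg argument after the expression. A minimal sketch (test and variable names hypothetical):

```python
import unittest

class TestBrickOffline(unittest.TestCase):
    def test_message_formatting(self):
        bricks_to_bring_offline = ["node1:/bricks/b0"]
        ret = True
        # Wrong: self.assertTrue(ret, "msg %s", arg) passes an extra
        # positional argument -> TypeError as in the traceback above.
        # Right: format the message first, then pass a single msg argument.
        self.assertTrue(ret, "Not all the bricks in list:%s are offline"
                        % bricks_to_bring_offline)

# Run the single test case and report whether it passed.
case = TestBrickOffline("test_message_formatting")
result = unittest.TestResult()
case.run(result)
print(result.wasSuccessful())  # → True
```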
Changes incorporated as per comments
Change-Id: I9a21e0350400198806644c07474ae6aeeeae6c58
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
1. setup_volume
2. mount_volume
3. setup_volume_and_mount
4. cleanup_volume
5. unmount_volume
6. unmount_and_cleanup_volume
These are added as static methods to give the test developer the
flexibility to call the setup/cleanups or any other function
from anywhere in the test class which inherits GlusterBaseClass.
Also, this removes the need for GlusterVolumeBaseClass and
hence removes the hardcoded creation and mounting of the volume
in setUpClass of GlusterVolumeBaseClass.
This will also help in writing new base classes, for example a
Block class which can have class functions specific to block
and inherit all the functions from GlusterBaseClass.
Change-Id: I3f0709af75e5bb242d265d04ada3a747c155211d
Signed-off-by: ShwethaHP <spandura@redhat.com>
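
The shape described above can be sketched as follows; method bodies are placeholders for the real gluster operations, and classmethods are used for brevity where the commit says static methods:

```python
class GlusterBaseClass:
    """Base class exposing volume setup/cleanup as callable class
    methods, so each test decides when (and whether) to create and
    mount a volume, instead of a hardwired setUpClass doing it."""

    volume_created = False
    volume_mounted = False

    @classmethod
    def setup_volume(cls):
        cls.volume_created = True   # placeholder: volume create + start

    @classmethod
    def mount_volume(cls):
        cls.volume_mounted = True   # placeholder: mount on clients

    @classmethod
    def setup_volume_and_mount(cls):
        cls.setup_volume()
        cls.mount_volume()

    @classmethod
    def unmount_volume(cls):
        cls.volume_mounted = False  # placeholder: unmount from clients

    @classmethod
    def cleanup_volume(cls):
        cls.volume_created = False  # placeholder: volume stop + delete

    @classmethod
    def unmount_and_cleanup_volume(cls):
        cls.unmount_volume()
        cls.cleanup_volume()

# A test class inherits the hooks and calls them wherever it needs.
class MyTest(GlusterBaseClass):
    pass

MyTest.setup_volume_and_mount()
print(MyTest.volume_created, MyTest.volume_mounted)  # → True True
```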
logging
the output
Change-Id: I6ff7e363871607c2f9d4272be7198150db59af5d
Signed-off-by: ShwethaHP <spandura@redhat.com>
after bringing them online.
2) log all the xml output/error to DEBUG log level.
Change-Id: If6bb758ac728f299292def9d72c0ef166a1569ae
Signed-off-by: ShwethaHP <spandura@redhat.com>
Change-Id: I2747c3770925b8d8f05e10fb7da49d105b7130e6
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Change-Id: Ic066b8ad452b297a2c48e912883536ce3960c0eb
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
IO is in progress and subdir mounts from client and server side
Change-Id: I80b22e6602bbc18652135211ea08710392c04cb6
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
in progress, enable and disable cluster
Change-Id: I15adbc73d72a67bd6b4189298631c9374540f2bb
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
checks nfs ganesha behaviour
Change-Id: I2dc7f0fb016982b7b7fa4a87c0310e4c96376f94
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
1) Test heal with replace-brick when IO is in progress
2) Test heal when bricks go offline and come back online while IO is
in progress.
Change-Id: Id9002c465aec8617217a12fa36846cdc1f61d7a4
Signed-off-by: Shwetha Panduranga <spandura@redhat.com>
Signed-off-by: ShwethaHP <spandura@redhat.com>
The remove-brick sanity case covers testing remove-brick of a
subvolume, waiting for rebalance to complete, committing the operation
and validating that IO is successful on the mount.
Change-Id: I5912f62b3df5dfb5bf5339de036967f83b6a5117
Signed-off-by: Shwetha Panduranga <spandura@redhat.com>
1) Quota: enabling, setting a limit, disabling and listing of quota is
tested.
2) Snapshot: creating, listing, activating, viewing the snap from the
mount and deactivating of snapshots is tested.
Change-Id: Ia91e86e121d5d3fcc038704031617594d3d601d4
Signed-off-by: Shwetha Panduranga <spandura@redhat.com>
- expanding the volume, i.e. testing that add-brick is successful on the volume.
Change-Id: I8110eea97cf46e3ccc24156d6c67cae0cbf5a7c1
Signed-off-by: Shwetha Panduranga <spandura@redhat.com>
Test volume set option while IO is in progress. This basically tests
IO to be successful after the client graph changes.
(Note: This case will be run as part of Build Verification Test Suite)
Change-Id: I111cf0214596fe32c872fdc73c5ccb8ab4a308be
Signed-off-by: Shwetha Panduranga <spandura@redhat.com>
validate all peers are in connected state.
Change-Id: I3aa725aea35d404326610a2490b3f48e7fa46546
Signed-off-by: Shwetha Panduranga <spandura@redhat.com>
1) glusterbaseclass:
- Making changes in glusterbaseclass to not necessarily have
volume_type and mount_type.
2) volume_libs:
- setup_volume doesn't have to export the volume. It just creates,
starts and sets up any operation on the volume.
- Moved sharing/exporting of the volume to the BaseClass.
3) Renamed samba_ops to samba_libs for better naming practice.
4) Added nfs_ganesha_libs for any nfs-related helper functions.
5) Added a new vvt case which creates, deletes and re-creates the
volume.
Signed-off-by: Shwetha Panduranga <spandura@redhat.com>
1) Tests glusterd start, stop and restart services
Change-Id: Ib424e24be49a7100808449e3e82706564088dcf6
Signed-off-by: Shwetha Panduranga <spandura@redhat.com>
The setup should never raise an assert failure in a test. Only tests
should fail an assert. If an essential test setup doesn't work, we
should be raising custom exceptions instead.
Change-Id: I6d5cce448132b71b6fde3a39fef894be8b1216d3
Signed-off-by: Nigel Babu <nigelb@redhat.com>
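
The pattern described can be sketched as follows; the exception and method names here are hypothetical, not the framework's actual API:

```python
class ExecutionError(Exception):
    """Raised when essential test setup fails; distinct from a test
    assertion, so setup problems surface as errors, not failures."""

class BaseTest:
    @classmethod
    def setUpClass(cls):
        # Setup raises a custom exception instead of failing an assert.
        if not cls._create_volume():
            raise ExecutionError("Failed to create volume in setUpClass")

    @classmethod
    def _create_volume(cls):
        return True  # placeholder for the real setup step

class BrokenSetup(BaseTest):
    @classmethod
    def _create_volume(cls):
        return False  # simulate an essential setup step failing

BaseTest.setUpClass()  # succeeds: setup step returned True
try:
    BrokenSetup.setUpClass()
except ExecutionError as err:
    print("setup error:", err)  # → setup error: Failed to create volume in setUpClass
```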
Change-Id: I9284eb7ddaf727eef2d107e1e886fc60ec760446
Signed-off-by: Shwetha Panduranga <spandura@redhat.com>