| Commit message | Author | Age | Files | Lines |
Description:
Test script to verify client-side quorum with quorum-type fixed
on a cross-2 (replica 2) volume:
* Disable the self-heal daemon
* Set cluster.quorum-type to fixed
* Start I/O (write and read) from the mount point - must succeed
* Bring down brick1
* Start I/O (write and read) - must succeed
* Set cluster.quorum-count to 1
* Start I/O (write and read) - must succeed
* Set cluster.quorum-count to 2
* Start I/O (write and read) - read must pass, write will fail
* Bring brick1 back online
* Start I/O (write and read) - must succeed
* Bring down brick2
* Start I/O (write and read) - read must pass, write will fail
* Set cluster.quorum-count to 1
* Start I/O (write and read) - must succeed
* Set cluster.quorum-count back to 2 and cluster.quorum-type to auto
* Start I/O (write and read) - must succeed
* Bring brick2 back online
* Bring down brick1
* Start I/O (write and read) - read must pass, write will fail
* Set quorum-type to none
* Start I/O (write and read) - must succeed
Change-Id: I415aba5db211607476fd7345c8ca6f4d49373402
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
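The quorum settings exercised by the steps above map onto gluster CLI calls roughly as follows (a sketch only; the volume name `testvol` is a placeholder, and a running Gluster cluster is assumed):

```shell
# Sketch only: "testvol" is a placeholder; a running Gluster cluster is assumed.
gluster volume set testvol cluster.self-heal-daemon off  # disable self-heal daemon
gluster volume set testvol cluster.quorum-type fixed     # client-side quorum: fixed
gluster volume set testvol cluster.quorum-count 1        # one live brick is enough for writes
gluster volume set testvol cluster.quorum-count 2        # both bricks required; writes fail if one is down
gluster volume set testvol cluster.quorum-type auto      # majority-based quorum
gluster volume set testvol cluster.quorum-type none      # disable quorum enforcement
```

With quorum-count 2 on a two-brick replica, taking either brick down leaves the mount readable but not writable, which is the read-pass/write-fail behaviour the test checks.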
- Peer probe using short name
- Create volume using IP
- Start/stop/getvolumeinfo
- Create volume using FQDN
- Start/stop/getvolumeinfo
Change-Id: I2d55944035c44e8ee360beb4ce41550338586d15
Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Test optimisations:
- Optimised imports
- Reduced the number of local variables
- Added more logging points with information about the mount point
  and directories
- Included more log endpoints and used glusto's framework DHT functions
Changes:
- Updated copyright years
- Removed the NFS mount point, since NFS does not support extended
  attributes
- Improved layout validations
- Fixed typos in logs
- Updated comments
Change-Id: If51d033d726edf2344af9aeba1246d4d6591f5c0
Description:
Test script to verify client-side quorum with the auto option:
* Check the default value of cluster.quorum-type
* Try to set a junk value on cluster.quorum-type,
  i.e. anything other than {none, auto, fixed}
* Check the default value of cluster.quorum-count
* Set cluster.quorum-type to fixed and cluster.quorum-count to 1
* Start I/O from the mount point
* Kill two of the brick processes from each replica set
* Set cluster.quorum-type to auto
Change-Id: I102373d1a53635563909e4fb80a01d98c24d3355
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Change-Id: I221b49315db8bc02873fc133ff12837954f0c232
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: Ice1e7139613c0d2f15c95e86e6c1e7b595d390a5
Signed-off-by: srivickynesh <sselvan@redhat.com>
Change-Id: Ie4a4b323e2b7e57e3896550b6f9b7db28fba03b7
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
This step is not needed at all. It appears to be a copy-paste error
or a similar mistake that was missed through review and a few rounds
of debugging.
Change-Id: I232f68c846ebf18a106554c1b0214748f2cdc391
Change-Id: I84b789f9c0204ca0f0efb40a9a01215902c0ee1d
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: Ide757078782ad5337d501f3c3ca39036910d995b
Signed-off-by: srivickynesh <sselvan@redhat.com>
"full" (default)
Change-Id: If916d20b0d7c9ded6fb1fc929d9ff1e7719d9594
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
"diff" (default)
Change-Id: I34a196e8fc764d87e877a082be2b0575bb1b3b40
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
"diff" (heal command)
Change-Id: Id310e0c17a872d8586ad8c7de79f1f68b93edb0a
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
before and after changing network ping timeout
Change-Id: I8f3636cd899e536d2401a8cd93b98bf66ceea0f7
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Change-Id: Id0d9e468aaf0061e9ff0f5cc534c06017e97b793
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I1e0e291954533e602a50d3f6c25365bb0b68b926
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I43a5b87c4acfd3df9483ca869d926714325ae1b9
Change-Id: Ib9f4ca5cda02ac1fe66a5c7cdc599255f2fadb4d
Change-Id: I243a8ecf57483c20e5060351a9f24e7687ccdcf4
as with quorum-type auto, the first brick must be up to have a rw
filesystem in a x2 volume
Change-Id: I98b0808070e6d254b1deeb1a3a744d19adccbf03
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I14609030983d4485dbce5a4ffed1e0353e3d1bc7
dispersed volume.
Refer to bug: https://bugzilla.redhat.com/show_bug.cgi?id=1470938
Change-Id: Iea327d87c6decbd0d607cb4abcb55384e8463614
Signed-off-by: ShwethaHP <spandura@redhat.com>
Change-Id: Iaecdf6ad44677891340713a5c945a4bdc30ce527
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I99da69377658f3c5f47722dbc3edb216995e9fa4
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
In this test case:
1. Create a volume and mount it
2. Create some data
3. Add a brick
4. Start rebalance
5. Probe a new node
6. Check rebalance status from the new node
We should be able to check rebalance status from the newly probed node.
Change-Id: Ib09b468dcd3e81eb01f873e0491afe5ecf5124cc
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
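The numbered steps above can be sketched with the gluster CLI (node names, volume name and brick paths are placeholders; a live cluster is assumed, so this is illustrative only):

```shell
# Sketch only: node names, volume name and brick paths are placeholders.
gluster volume add-brick testvol node1:/bricks/brick3   # 3. add a brick
gluster volume rebalance testvol start                  # 4. start rebalance
gluster peer probe node4                                # 5. probe a new node
# 6. run from the newly probed node4; the rebalance status should be
#    reported there as well:
gluster volume rebalance testvol status
```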
In this test case, volume create operations are validated: creating
a volume with a non-existent brick path, with an already used brick,
and with an already existing volume name; bringing the bricks online
with volume start force; creating a volume with bricks in another
cluster; and creating a volume when one of the brick nodes is down.
Change-Id: I796c8e9023244c592c88116cf3baff52ddade48f
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Change-Id: Iabe08da9676b027de7b46622ee73162dcbffd98c
Signed-off-by: srivickynesh <sselvan@redhat.com>
Change-Id: I35568ef8234bc11a8bcf775315c24d9914fbb99d
Signed-off-by: Karan Sandha <ksandha@redhat.com>
The purpose of this test is to validate snapshot create
Change-Id: Ia2941a45ee62661bcef855ed4ed05a5c0aba6fb7
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
The purpose of this test is to validate restore of a snapshot.
Change-Id: Icd73697b10bbec4a1a9576420207ebb26cd69139
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
The purpose of this test is to validate creating more than 256
snapshots (snap > 256).
Change-Id: Iea5e2ddcc3a5ef066cf4f55e1895947326a07904
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
global to whole cluster
Change-Id: I9cd8ae1f490bc870540657b4f309197f8cee737e
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
In this test case, the following are validated:
1. Adding a single brick to a replicated volume
2. Adding non-existent bricks to a volume
3. Adding bricks from a node which is not part of the cluster
4. Triggering rebalance start after add-brick
Change-Id: I982ff42dcbe6cd0cfbf3653b8cee0b269314db3f
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
In this test case basic volume operations are validated: starting,
stopping and deleting a non-existent volume; creating all types of
volumes; creating a volume using a brick from a node which is not
part of the cluster; starting an already started volume; stopping a
volume twice; deleting a volume twice; and validating the volume
info and volume list commands. These commands also internally
validate the XML output.
Change-Id: Ibf44d24e678d8bb14aa68bdeff988488b74741c6
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
It includes:
- Create 2 volumes
- Run concurrent set operations on both volumes
- Check for errors and whether any core file was generated
Change-Id: I5f735290ff57ec5e9ad8d85fd5d822c739dbbb5c
Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
performing NFS mount and unmount on all volumes, and
applying different quorum settings.
-> Set nfs.disable off
-> Mount it with NFS and unmount it
-> Set nfs.disable enable
-> Mount it with NFS
-> Set nfs.disable disable
-> Enable server quorum
-> Set the quorum ratio to numbers and percentages;
   negative numbers should fail, negative percentages should fail,
   fractions should fail, negative fractions should fail
Change-Id: I6c4f022d571378f726b1cdbb7e74fdbc98d7f8cb
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
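The NFS and quorum-ratio settings above correspond to commands along these lines (a sketch with a placeholder volume name; note that cluster.server-quorum-ratio is a cluster-wide option set on `all`, and a live cluster is assumed):

```shell
# Sketch only: "testvol" is a placeholder; a live cluster is assumed.
gluster volume set testvol nfs.disable off                    # allow NFS mounts of the volume
gluster volume set testvol cluster.server-quorum-type server  # enable server quorum
gluster volume set all cluster.server-quorum-ratio 51%        # percentage: accepted
gluster volume set all cluster.server-quorum-ratio -10        # negative number: must fail
gluster volume set all cluster.server-quorum-ratio 0.5        # fraction: must fail
```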
This test case covers the below scenarios:
1) Add-brick without a volname
2) Adding a duplicate brick
3) Adding a brick which is already part of another volume
4) Adding a nested brick, i.e. a brick inside another brick
5) Adding a brick to a non-existent volume
6) Adding a brick from a peer which is not in the cluster
Change-Id: I2d68715facabaa172db94afc7e1b64f95fb069a7
Signed-off-by: Prasad Desala <tdesala@redhat.com>
Change-Id: Ibef22a1719fe44aac20024d82fd7f2425945149c
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Desc:
Create two volumes
Set server quorum on both the volumes
Set the server quorum ratio to 90%
Stop the glusterd service on any one of the nodes
A quorum lost message should be recorded with message id 106002
for both the volumes in /var/log/messages and
/var/log/glusterfs/glusterd.log
Start the glusterd service on the same node
A quorum regained message should be recorded with message id 106003
for both the volumes in /var/log/messages and
/var/log/glusterfs/glusterd.log
Change-Id: I9ecab59b6131fc9c4c58bb972b3a41f15af1b87c
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Once a volume is mounted and quota is enabled, we have to make sure
that a quota limit cannot be set on a directory that does not exist.
Change-Id: Ic89551c6d96b628fe04c19605af696800695721d
Signed-off-by: hari gowtham <hgowtham@redhat.com>
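A minimal sketch of the check (volume name and directory paths are placeholders; a live cluster is required):

```shell
# Sketch only: "testvol" and the paths are placeholders.
gluster volume quota testvol enable                         # turn quota accounting on
gluster volume quota testvol limit-usage /existing-dir 1GB  # succeeds for a directory that exists
gluster volume quota testvol limit-usage /no-such-dir 1GB   # must fail: directory does not exist
```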
Change-Id: If92b6f756f362cb4ae90008c6425b6c6652e3758
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: Ic1191e993db10a110fc753436ec60051adfd5350
Signed-off-by: srivickynesh <sselvan@redhat.com>
volume get functionalities
Test steps:
1. Create a gluster cluster
2. Get an option from a non-existing volume:
# gluster volume get <non-existing vol> io-cache
3. Get all options from a non-existing volume:
# gluster volume get <non-existing vol> all
4. Provide incorrect command syntax to get options
from a volume:
# gluster volume get <vol-name>
# gluster volume get
# gluster volume get io-cache
5. Create any type of volume in the cluster
6. Get the value of a non-existing option:
# gluster volume get <vol-name> temp.key
7. Get all options set on the volume:
# gluster volume get <vol-name> all
8. Get a specific option set on the volume:
# gluster volume get <vol-name> io-cache
9. Set an option on the volume:
# gluster volume set <vol-name> performance.low-prio-threads 14
10. Get all the options set on the volume and check
for low-prio-threads:
# gluster volume get <vol-name> all | grep -i low-prio-threads
11. Get all the options set on the volume:
# gluster volume get <vol-name> all
12. Check for any core files under "/"
Change-Id: Ifd7697e68d7ecf297d7be75680a5681686c51ca0
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Description:
Test script for client-side quorum with fixed, which should validate
the maximum number of bricks to accept:
* Set cluster.quorum-type to fixed
* Set cluster.quorum-count to a number higher than the number of
  replicas in a sub-volume
* The above step should fail
Change-Id: I83952a07d36f5f890f3649a691afad2d0ccf037f
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
-> Detaching a specified server from the cluster
-> Detaching the already detached server again
-> Detaching an invalid host
-> Detaching a non-existent host
-> Checking whether a core file was created
-> Peer detach of a node which contains the bricks of a created volume
-> Peer detach force of a node which is hosting bricks of a volume
Change-Id: I6a1fce6e7c626f822ddbc43ea4d2fcd4bc3262c8
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
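The detach scenarios map onto the CLI as in this sketch (host names are placeholders; a live cluster is assumed):

```shell
# Sketch only: host names are placeholders.
gluster peer detach node2          # detach a specified server from the cluster
gluster peer detach node2          # detaching the already detached server must fail
gluster peer detach invalid..host  # invalid host name: must fail
gluster peer detach no-such-host   # non-existent host: must fail
gluster peer detach node3          # node hosting bricks of a volume: plain detach must fail
gluster peer detach node3 force    # force detach of a brick-hosting node
```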
host, non-existing ip
Library for checking whether a core file was created: added an
is_core_file_created() function to lib_utils.py.
Test Desc:
Test script to verify peer probe of a non-existing host
and an invalid IP; peer probe has to fail for a
non-existing host, glusterd services must be up and running
after the invalid peer probe, and a core file should not
get created under "/", /tmp, or the /var/log/core directory.
Adding glusterd peer probe test cases with modifications according
to review comments; adding a library function for core file
verification.
Change-Id: I0ebd6ee2b340d1f1b01878cb0faf69f41fec2e10
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Change-Id: I73512dde33207295fa954a3b3949f653f03f23c0
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
In this test case:
1. Volume creation on a root brick path, without force and with
   force, is validated.
2. Deleting a brick manually and then starting the volume with force
   should not bring that brick online; this is validated.
3. After clearing all attributes, we should be able to create another
   volume with the previously used bricks; this is validated.
Change-Id: I7fbf241c7e0fee276ff5f68b47a7a89e928f367c
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
The volume delete operation should fail when one of the brick nodes
is down; this is validated in this test case.
Change-Id: I17649de2837f4aee8b50a5fcd760eb9f7c88f3cd
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Change-Id: Ica984b28d2c23e3d0d716d8c0dde6ab6ef69dc8f
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>