-> Set global options and other volume-specific options on the volume
-> gluster volume set VOL nfs.rpc-auth-allow 1.1.1.1
-> gluster volume set VOL nfs.addr-namelookup on
-> gluster volume set VOL cluster.server-quorum-type server
-> gluster volume set VOL network.ping-timeout 20
-> gluster volume set VOL nfs.port 2049
-> gluster volume set VOL performance.nfs.write-behind on
-> Peer probe for a new node
Change-Id: Ifc06159b436e74ae4865ffcbe877b84307d517fd
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
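
A minimal Python sketch of the option/probe sequence described above, driving the gluster CLI through subprocess; the volume name, the peer hostname and the allowed client address are placeholders, not values taken from the test itself.

    # Sketch: apply the volume options listed above, then probe a new peer.
    # VOLNAME, NEW_PEER and the allowed client address are placeholders.
    import subprocess

    VOLNAME = "testvol"
    NEW_PEER = "server-new.example.com"

    OPTIONS = {
        "nfs.rpc-auth-allow": "1.1.1.1",
        "nfs.addr-namelookup": "on",
        "cluster.server-quorum-type": "server",
        "network.ping-timeout": "20",
        "nfs.port": "2049",
        "performance.nfs.write-behind": "on",
    }

    def run(cmd):
        # Run a gluster CLI command and fail loudly on a non-zero exit.
        subprocess.run(cmd, check=True)

    for key, value in OPTIONS.items():
        run(["gluster", "volume", "set", VOLNAME, key, value])

    # Probe a new node into the trusted storage pool.
    run(["gluster", "peer", "probe", NEW_PEER])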
-> Create volume
-> Mount a volume
-> Set 'read-only on' on the volume
-> Perform some I/O on the mount point
-> Set 'read-only off' on the volume
-> Perform some I/O on the mount point
Change-Id: Iab980b1fd51edd764ef38b329275d72f875bf3c0
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
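
A rough sketch of the read-only toggle described above, assuming a volume named testvol already mounted at /mnt/testvol; an option change may take a moment to propagate to the client graph.

    # Sketch: toggle features.read-only and try a small write on the mount.
    # VOLNAME and MOUNTPOINT are placeholders for this illustration.
    import os
    import subprocess

    VOLNAME = "testvol"
    MOUNTPOINT = "/mnt/testvol"

    def set_read_only(value):
        subprocess.run(["gluster", "volume", "set", VOLNAME,
                        "features.read-only", value], check=True)

    def try_write(path):
        # Return True if a small write succeeds, False otherwise.
        try:
            with open(path, "w") as f:
                f.write("io-check\n")
            return True
        except OSError:
            return False

    set_read_only("on")
    assert not try_write(os.path.join(MOUNTPOINT, "ro_check.txt")), \
        "write unexpectedly succeeded while read-only is on"

    set_read_only("off")
    assert try_write(os.path.join(MOUNTPOINT, "rw_check.txt")), \
        "write failed after read-only was turned off"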
-> Create volume
-> Set quorum type
-> Set quorum ratio to 95%
-> Start the volume
-> Stop the glusterd on one node
-> Now quorum is not met
-> Check whether all bricks went offline
-> Perform a replace-brick operation
-> Start glusterd on the same node which was stopped
-> Check whether all bricks are back online
-> Verify in volume info that the old brick is not replaced with the new brick
Change-Id: Iab84df9449feeaba66ff0df2d0acbddb6b4e7591
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
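
A sketch of the expected-failure check for the replace-brick step above; the volume name and brick paths are placeholders, and the assertion only relies on the CLI returning a non-zero exit code while quorum is lost.

    # Sketch: with server quorum lost, replace-brick is expected to fail.
    # The volume name and brick paths below are placeholders.
    import subprocess

    VOLNAME = "testvol"
    OLD_BRICK = "server2:/bricks/brick1/b1"
    NEW_BRICK = "server2:/bricks/brick1/b1_new"

    proc = subprocess.run(
        ["gluster", "volume", "replace-brick", VOLNAME,
         OLD_BRICK, NEW_BRICK, "commit", "force"],
        capture_output=True, text=True)

    # A non-zero exit code means the operation was rejected, so the old
    # brick should still appear unchanged in `gluster volume info`.
    assert proc.returncode != 0, "replace-brick succeeded despite lost quorum"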
the cluster
In this test case, we set some volume options while one of the nodes in
the cluster is down and, after the node comes back up, check whether the
volume info is synced. We also peer probe a new node while bringing down
glusterd on one of the nodes in the cluster, and after that node is back
up, check whether the peer status has the correct information.
Steps followed are:
1. Create a cluster
2. Create a 2x3 distributed-replicated volume
3. Start the volume
4. From N1, issue 'gluster volume set <vol-name> stat-prefetch on'
5. While Step 4 is in progress, bring down glusterd on N2
6. Start glusterd on N2
7. Verify volume info is synced
8. From N1, issue 'gluster peer probe <new-host>'
9. While Step 8 is in progress, bring down glusterd on N2
10. Start glusterd on N2
11. Check that the peer status has correct information across the cluster.
Change-Id: Ib95268a3fe11cfbc5c76aa090658133ecc8a0517
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
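
One way to sketch the volume-info sync check from steps 4-7, assuming passwordless ssh between the nodes and a placeholder hostname for N2.

    # Sketch: compare the option value reported locally (N1) and on the
    # restarted node (N2). N2's hostname, the volume name and passwordless
    # ssh between the nodes are assumptions of this illustration.
    import subprocess

    VOLNAME = "testvol"
    N2 = "server2.example.com"
    QUERY = ["gluster", "volume", "get", VOLNAME, "performance.stat-prefetch"]

    local = subprocess.run(QUERY, capture_output=True, text=True, check=True)
    remote = subprocess.run(["ssh", N2] + QUERY,
                            capture_output=True, text=True, check=True)

    # Both nodes should report the same value once the volume info is synced.
    assert local.stdout.strip() == remote.stdout.strip(), \
        "volume option not yet synced to the restarted node"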
-> Detach the node from the cluster
-> Check for any error messages related to peer detach
in the glusterd log file
-> No errors should be present in the glusterd log file
Change-Id: I481df5b15528fb6fd77cd1372110d7d23dd5cdef
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Steps followed are:
1. Create and start a volume
2. Set cluster.server-quorum-type as server
3. Set cluster.server-quorum-ratio as 95%
4. Bring down glusterd in half of the nodes
5. Confirm that quorum is not met, by checking whether the bricks are down.
6. Perform a remove-brick operation, which should fail.
Change-Id: I69525651727ec92dce2f346ad706ab0943490a2d
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
In this test case we set the auth.allow option to a value longer than 4096
characters and restart glusterd; glusterd should restart successfully.
Steps followed:
1. Create and start a volume
2. Set auth.allow with <4096 characters
3. Restart glusterd, it should succeed
4. Set auth.allow with >4096 characters
5. Restart glusterd, it should succeed
6. Confirm that glusterd is up and running after the restart
Change-Id: I7a5a8e49a798238bd88e5da54a8f4857c039ca07
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
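
A hedged sketch of steps 4-6: build an auth.allow value longer than 4096 characters, apply it, and restart glusterd via systemd. The volume name and the address pattern used to pad the value are illustrative.

    # Sketch: set an auth.allow value longer than 4096 characters and make
    # sure glusterd restarts cleanly. VOLNAME and the padded address list
    # are illustrative.
    import subprocess

    VOLNAME = "testvol"

    # A comma-separated address list comfortably longer than 4096 characters.
    allow_list = ",".join("192.168.%d.%d" % (i // 256, i % 256)
                          for i in range(400))
    assert len(allow_list) > 4096

    subprocess.run(["gluster", "volume", "set", VOLNAME,
                    "auth.allow", allow_list], check=True)

    # glusterd should come back even with the long option value persisted.
    subprocess.run(["systemctl", "restart", "glusterd"], check=True)
    subprocess.run(["systemctl", "is-active", "glusterd"], check=True)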
1. Create Dist Volume on Node 1
2. Bring down a brick on Node 1
3. Peer probe N2 from N1
4. Add an identical brick on the newly added node
5. Check volume status
Change-Id: I17c4769df6e4ec2f11b7d948ca48a006cf301073
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
-> Create volume
-> Mount the volume on 2 clients
-> Run I/O on the mount point
-> While I/O is in progress
-> Perform 'gluster volume status fd' repeatedly
-> List all files and directories and check that they are listed properly
Change-Id: I2d979dd79fa37ad270057bd87d290c84569c4a3d
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
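
A small sketch of the repeated status polling described above; a hung `gluster volume status <vol> fd` call would surface as a TimeoutExpired exception. The volume name and timeout are placeholders.

    # Sketch: poll `gluster volume status <vol> fd` while I/O runs elsewhere;
    # every invocation should return promptly instead of hanging.
    import subprocess

    VOLNAME = "testvol"   # placeholder

    for _ in range(30):
        # A hung CLI call would raise subprocess.TimeoutExpired here.
        subprocess.run(["gluster", "volume", "status", VOLNAME, "fd"],
                       check=True, timeout=60)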
-> Create a distributed-replicated volume
-> Add 6 bricks to the volume
-> Mount the volume
-> Perform some I/O on the mount point
-> Unmount the volume
-> Stop and delete the volume
-> Create another volume using bricks of deleted volume
Change-Id: I263d2f0a359ccb0409dba620363a39d92ea8d2b9
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
In this test case, we check gluster volume status and gluster
volume status --xml from a node which is part of the cluster but does
not host any bricks of the volume.
Steps followed are:
1. Create a two node cluster
2. Create a distributed volume with one brick (assume the brick belongs to N1)
3. From the node which does not host any bricks, i.e. N2, check gluster v
status, which should fail saying the volume is not started.
4. From N2, check gluster v status --xml. It should fail because the volume
is not started yet.
5. Start the volume
6. From N2, check gluster v status, this should succeed.
7. From N2, check gluster v status --xml, this should succeed.
Change-Id: I1a230b82c0628c66c16f25f89dd4e6d1d0b3f443
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
In this test case, we check the UUIDs of bricks in the output of
gluster volume info --xml from a newly probed node.
Steps followed are:
1. Create a two node cluster
2. Create and start a 2x2 volume
3. From the existing cluster, probe a new node
4. Check gluster volume info --xml from newly probed node
5. In the gluster volume info --xml output, the UUIDs of the bricks
should be non-zero
Change-Id: I73d07f1b91b5beab26cc87217defb8999fba474e
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
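
A sketch of the UUID check in step 5, parsing `gluster volume info --xml` with ElementTree. The exact XML layout can differ slightly between glusterfs releases; this assumes each <brick> element carries a uuid attribute and/or a <hostUuid> child.

    # Sketch: on the newly probed node, no brick in `volume info --xml`
    # should report an all-zero UUID. Assumes each <brick> element carries
    # a uuid attribute and/or a <hostUuid> child.
    import subprocess
    import xml.etree.ElementTree as ET

    ZERO_UUID = "00000000-0000-0000-0000-000000000000"

    out = subprocess.run(["gluster", "volume", "info", "--xml"],
                         capture_output=True, text=True, check=True).stdout
    root = ET.fromstring(out)

    for brick in root.iter("brick"):
        uuid = brick.get("uuid") or brick.findtext("hostUuid") or ""
        assert uuid and uuid != ZERO_UUID, "brick reports a zeroed UUID"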
-> Create Volume
-> Start rebalance
-> Check task type in volume status
-> Check task status string in volume status
-> Check task type in volume status xml
-> Check task status string in volume status xml
-> Start a remove-brick operation
-> Check task type in volume status
-> Check task status string in volume status
-> Check task type in volume status xml
-> Check task status string in volume status xml
Change-Id: I9de53008e19f1965dac21d4b80b9b271bbcf53a1
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
-> Create Volume
-> Get the current op-version
-> Get the max supported op-version
-> Verify whether the vol info file exists on all servers
-> Get the version number from the vol info file
-> If the current op-version is less than the max op-version,
set the current op-version to the max op-version
-> After the vol set operation, verify whether the version number
in the vol info file increased by one
-> Verify whether the current op-version and max op-version are the same.
Change-Id: If56210a406b15861b0a261e29d2e5f45e14301fd
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
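
A sketch of the op-version comparison and bump described above; `cluster.op-version` and `cluster.max-op-version` are the real option keys, while the parsing of the two-column CLI output is a simplifying assumption.

    # Sketch: read the current and maximum op-versions and bump the cluster
    # to the maximum when it is lagging behind.
    import subprocess

    def get_global_option(key):
        out = subprocess.run(["gluster", "volume", "get", "all", key],
                             capture_output=True, text=True,
                             check=True).stdout
        # Assumes the last output line looks like "<key>    <value>".
        return int(out.strip().splitlines()[-1].split()[-1])

    current = get_global_option("cluster.op-version")
    maximum = get_global_option("cluster.max-op-version")

    if current < maximum:
        subprocess.run(["gluster", "volume", "set", "all",
                        "cluster.op-version", str(maximum)], check=True)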
-> Create volume
-> Enable server quorum on volume
-> Stop glusterd on all nodes except first node
-> Verify the brick status of nodes where glusterd is running with the
default quorum ratio (51%)
-> Change the cluster.server-quorum-ratio from the default to 95%
-> Start glusterd on all servers except the last node
-> Verify the brick status again
Change-Id: I249574fe6c758e6b8e5bea603f36dcf8698fc1de
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Change-Id: I2a3427cb9165cb2b06a1c72962071e286a65e0a8
Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
Signed-off-by: Bala Konda Reddy Mekala <bmekala@redhat.com>
Change-Id: I5ffd826bd375956e29ef6f52913fa7dabf8bc7ce
Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
The replace brick setUp function had a syntax error and a wrong assert.
The peer probe tearDown method did not work in a situation where the
test failed, leading to cascading failures in other tests.
Change-Id: Ia7e0d85bb88c0c9bc6d489b4d03dc7610fd4f129
Change-Id: I774f64e2f355e2ca2f41c7a5c472aeae5adcd3dc
Change-Id: Ic6cb4e96d8f14558c0f9d4eb5e24cbb507578f4c
Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
In this test case, setting the lowest Gluster op-version and an
invalid op-version are validated
Change-Id: Ie45859228e35b7cb171493dd22e30e2f26b70631
Signed-off-by: Bala Konda Reddy Mekala <bmekala@redhat.com>
Change-Id: I2802171403490c9de715aa281fefb562e08249fe
Signed-off-by: Bala Konda Reddy Mekala <bmekala@redhat.com>
- Peer probe using short name
- Create volume using IP
- Start/stop/get volume info
- Create volume using FQDN
- Start/stop/get volume info
Change-Id: I2d55944035c44e8ee360beb4ce41550338586d15
Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
before and after changing network ping timeout
Desc:
Change-Id: I8f3636cd899e536d2401a8cd93b98bf66ceea0f7
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Change-Id: I14609030983d4485dbce5a4ffed1e0353e3d1bc7
In this test case,
1. Create a volume and mount it
2. Create some data
3. Add a brick
4. Start rebalance
5. Probe a new node
6. Check rebalance status from the new node.
We should be able to check the rebalance status from the newly probed node.
Change-Id: Ib09b468dcd3e81eb01f873e0491afe5ecf5124cc
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
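
A sketch of step 6, querying rebalance status from the newly probed node; the node name and volume name are placeholders and passwordless ssh is assumed.

    # Sketch: after the probe, rebalance status queried from the new node
    # should succeed and show the ongoing rebalance. NEW_NODE and VOLNAME
    # are placeholders.
    import subprocess

    VOLNAME = "testvol"
    NEW_NODE = "server-new.example.com"

    status = subprocess.run(
        ["ssh", NEW_NODE, "gluster", "volume", "rebalance", VOLNAME, "status"],
        capture_output=True, text=True)

    assert status.returncode == 0, "rebalance status failed from the new node"
    print(status.stdout)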
In this test case, volume create operations are validated: creating a
volume with a non-existing brick path, with an already used brick, or
with an already existing volume name; bringing the bricks online with
volume start force; creating a volume with bricks in another cluster;
and creating a volume when one of the brick nodes is down.
Change-Id: I796c8e9023244c592c88116cf3baff52ddade48f
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
In this test case,
1. adding a single brick to a replicated volume
2. adding non-existing bricks to a volume
3. adding bricks from a node which is not a part of the cluster
4. triggering rebalance start after add-brick
are validated.
Change-Id: I982ff42dcbe6cd0cfbf3653b8cee0b269314db3f
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
In this test case basic volume operations are validated, i.e.,
starting, stopping and deleting a non-existing volume, creating
all types of volumes, creating a volume using a brick from a
node which is not a part of the cluster, starting an already started
volume, stopping a volume twice, deleting a volume twice, and
validating the volume info and volume list commands. These commands
also validate the XML output internally.
Change-Id: Ibf44d24e678d8bb14aa68bdeff988488b74741c6
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
It includes:
- Create 2 volumes
- Run concurrent set operations on both the volumes
- Check for errors and whether any core file is generated
Change-Id: I5f735290ff57ec5e9ad8d85fd5d822c739dbbb5c
Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
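
A sketch of the concurrent set operations described above, hammering one boolean option from two threads and collecting any CLI failures. Volume names, the option and the iteration count are illustrative.

    # Sketch: run volume-set operations on two volumes from two threads and
    # make sure none of the CLI calls errors out.
    import subprocess
    import threading

    VOLUMES = ["testvol1", "testvol2"]
    errors = []

    def hammer(volname):
        for i in range(50):
            value = "on" if i % 2 else "off"
            proc = subprocess.run(["gluster", "volume", "set", volname,
                                   "performance.readdir-ahead", value],
                                  capture_output=True, text=True)
            if proc.returncode != 0:
                errors.append((volname, proc.stderr.strip()))

    threads = [threading.Thread(target=hammer, args=(v,)) for v in VOLUMES]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    assert not errors, "concurrent volume set failed: %s" % errors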
performing NFS mount and unmount on all volumes, and
performing different types of quorum settings.
-> Set nfs.disable off
-> Mount it with nfs and unmount it
-> Set nfs.disable enable
-> Mount it with nfs
-> Set nfs.disable disable
-> Enable server quorum
-> Set the quorum ratio to numbers and percentages;
negative numbers should fail, negative percentages should fail,
fractions should fail, negative fractions should fail
Change-Id: I6c4f022d571378f726b1cdbb7e74fdbc98d7f8cb
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
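
A sketch of the quorum-ratio value checks in the last step above; the ratio is a cluster-wide option, hence `volume set all`, and the particular good/bad values are illustrative.

    # Sketch: valid percentages should be accepted for the server quorum
    # ratio, while negative values and fractions should be rejected.
    import subprocess

    def set_ratio(value):
        return subprocess.run(["gluster", "volume", "set", "all",
                               "cluster.server-quorum-ratio", value],
                              capture_output=True, text=True).returncode

    for good in ("51%", "95%"):
        assert set_ratio(good) == 0, "valid ratio %s was rejected" % good

    for bad in ("-10", "-25%", "0.75", "-0.5"):
        assert set_ratio(bad) != 0, "invalid ratio %s was accepted" % bad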
Desc:
Create two volumes
Set server quorum on both the volumes
Set the server quorum ratio to 90%
Stop the glusterd service on any one of the nodes
A quorum-lost message should be recorded with message id 106002
for both the volumes in /var/log/messages and
/var/log/glusterfs/glusterd.log
Start the glusterd service on the same node
A quorum-regained message should be recorded with message id 106003
for both the volumes in /var/log/messages and
/var/log/glusterfs/glusterd.log
Change-Id: I9ecab59b6131fc9c4c58bb972b3a41f15af1b87c
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
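
A sketch of the log check described above, assuming the default glusterd log path and the usual "[MSGID: nnnnnn]" log format; 106002 corresponds to the quorum-lost message and 106003 to the quorum-regained one.

    # Sketch: check the glusterd log for the quorum message IDs mentioned
    # above. Assumes the default log path and the "[MSGID: nnnnnn]" format.
    import re

    GLUSTERD_LOG = "/var/log/glusterfs/glusterd.log"

    def msgid_seen(msgid, logfile=GLUSTERD_LOG):
        pattern = re.compile(r"\[MSGID:\s*%s\]" % msgid)
        with open(logfile) as f:
            return any(pattern.search(line) for line in f)

    # 106002 is expected once quorum is lost, 106003 once it is regained.
    assert msgid_seen("106002"), "quorum-lost message (106002) not logged"
    assert msgid_seen("106003"), "quorum-regained message (106003) not logged"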
volume get functionalities
Test steps:
1. Create a gluster cluster
2. Get an option from a non-existing volume,
# gluster volume get <non-existing vol> io-cache
3. Get all options from a non-existing volume,
# gluster volume get <non-existing vol> all
4. Provide an incorrect command syntax to get the options
from the volume
# gluster volume get <vol-name>
# gluster volume get
# gluster volume get io-cache
5. Create any type of volume in the cluster
6. Get the value of a non-existing option
# gluster volume get <vol-name> temp.key
7. get all options set on the volume
# gluster volume get <vol-name> all
8. get the specific option set on the volume
# gluster volume get <vol-name> io-cache
9. Set an option on the volume
# gluster volume set <vol-name> performance.low-prio-threads 14
10. Get all the options set on the volume and check
for low-prio-threads
# gluster volume get <vol-name> all | grep -i low-prio-threads
11. Get all the options set on the volume
# gluster volume get <vol-name> all
12. Check for any core files under "/"
Change-Id: Ifd7697e68d7ecf297d7be75680a5681686c51ca0
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
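
A condensed sketch of a few of the checks above (steps 4, 6, 9 and 10), using exit codes for the negative cases; the volume name is a placeholder.

    # Sketch: negative checks rely on a non-zero exit code, positive checks
    # on the option showing up in `volume get ... all`.
    import subprocess

    VOLNAME = "testvol"

    def gluster(*args):
        return subprocess.run(["gluster"] + list(args),
                              capture_output=True, text=True)

    # Bad syntax and unknown keys must fail.
    assert gluster("volume", "get", VOLNAME).returncode != 0
    assert gluster("volume", "get", VOLNAME, "temp.key").returncode != 0

    # Set an option, then read it back via `get all`.
    assert gluster("volume", "set", VOLNAME,
                   "performance.low-prio-threads", "14").returncode == 0
    out = gluster("volume", "get", VOLNAME, "all").stdout
    assert any("low-prio-threads" in line and "14" in line
               for line in out.splitlines()), "option not visible in 'get all'"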
-> Detach a specified server from the cluster
-> Detach the already detached server again
-> Detach an invalid host
-> Detach a non-existent host
-> Check whether a core file is created or not
-> Peer detach a node which hosts the bricks of the created volume
-> Peer detach force a node which is hosting bricks of a volume
Change-Id: I6a1fce6e7c626f822ddbc43ea4d2fcd4bc3262c8
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
host, non-existing IP
Library to check whether a core file is created or not: added the
is_core_file_created() function to lib_utils.py.
Test Desc:
Test script to verify peer probe with a non-existing host
and an invalid IP; peer probe has to fail for a
non-existing host, glusterd services should be up and running
after the invalid peer probe, and no core file should
get created under the "/", /tmp, or /var/log/core directories
Adding glusterd peer probe test cases with modifications according to comments
Adding a library for core file verification
Change-Id: I0ebd6ee2b340d1f1b01878cb0faf69f41fec2e10
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
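
A sketch of what a core-file check along the lines of is_core_file_created() might look like; this is an illustration of the idea, not the actual lib_utils.py implementation.

    # Sketch of the idea behind a core-file check: scan the locations named
    # above for core files newer than a recorded start time.
    import glob
    import os

    CORE_DIRS = ["/", "/tmp", "/var/log/core"]

    def core_files_since(timestamp):
        # Return core files newer than `timestamp` in the well-known places.
        found = []
        for directory in CORE_DIRS:
            for path in glob.glob(os.path.join(directory, "core*")):
                if os.path.isfile(path) and os.path.getmtime(path) > timestamp:
                    found.append(path)
        return found

    # Usage: record time.time() before the peer probe attempts and assert
    # that core_files_since(start_time) is empty afterwards.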
In this test case,
1. Volume creation on a root brick path, without force and with force, is validated.
2. Deleting a brick manually and then starting the volume with force should not
bring that brick online; this is validated.
3. After clearing all attributes, we should be able to create another volume
with the previously used bricks; this is validated.
Change-Id: I7fbf241c7e0fee276ff5f68b47a7a89e928f367c
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
The volume delete operation should fail when one of the brick nodes is
down; this is validated in this test case.
Change-Id: I17649de2837f4aee8b50a5fcd760eb9f7c88f3cd
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
command a limited number of times while I/O is in progress.
Desc:
Create any type of volume and mount the volume. Once the
volume is mounted successfully on the client, start running I/O on the
mount point, then run the "gluster volume status volname inode"
command randomly across the cluster nodes.
The "gluster volume status volname inode" command should not
hang while I/O is in progress.
Then check whether the I/O completed successfully on the mount point,
and check whether the files on the mount point are listed properly.
Change-Id: I48285ecb25235dadc82e30a750ad303b6e45fffd
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
reset force
Description: Create a distributed volume, then enable bitrot and uss on that
volume, then check whether the bitd, scrub and snapd daemons are running.
Then perform a volume reset; after the volume reset only the snapd
daemon gets killed, while the bitd and scrub daemons remain running.
Then perform a volume reset with force; after the volume reset with
force all three daemons (bitd, scrub, snapd) get killed and will
not be running.
Below are the steps performed in this test case:
-> Create a distributed volume
-> Enable BitD, Scrub and Uss on the volume
-> Verify the BitD, Scrub and Uss daemons are running on every node
-> Reset the volume
-> Verify whether the daemons (BitD, Scrub & Uss) are running
-> Enable Uss on the same volume
-> Reset the volume with force
-> Verify whether all the daemons (BitD, Scrub & Uss) are running
Change-Id: I15d71d1434ec84d80293fda2ab6a8d02a3af5fd6
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
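
A sketch of the reset / reset-force daemon checks above, inferring which daemons are still up from the rows `gluster volume status` prints; the row labels used here ("Snapshot Daemon", "Bitrot Daemon", "Scrubber Daemon") are assumptions based on current CLI output, and the volume name is a placeholder.

    # Sketch: infer which helper daemons are still up from the rows printed
    # by `gluster volume status` after reset and reset force.
    import subprocess

    VOLNAME = "distvol"
    DAEMONS = ("Snapshot Daemon", "Bitrot Daemon", "Scrubber Daemon")

    def daemons_listed():
        out = subprocess.run(["gluster", "volume", "status", VOLNAME],
                             capture_output=True, text=True, check=True).stdout
        return {name for name in DAEMONS if name in out}

    subprocess.run(["gluster", "volume", "reset", VOLNAME], check=True)
    after_reset = daemons_listed()   # snapd expected gone, bitd/scrub kept

    subprocess.run(["gluster", "volume", "set", VOLNAME,
                    "features.uss", "enable"], check=True)
    subprocess.run(["gluster", "volume", "reset", VOLNAME, "force"],
                   check=True)
    after_force = daemons_listed()   # all three expected gone

    print("after reset:", after_reset, "after reset force:", after_force)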