Change-Id: I80c73f4bb7de90faf482f6c2559d364692067768
Signed-off-by: Manisha Saini <msaini@redhat.com>
Test cases in this module test the
snapshot listing before and after
glusterd restart.
Change-Id: I7cabe284c52974256e1a48807a5fc1787789583a
Signed-off-by: srivickynesh <sselvan@redhat.com>
1. Trusted storage pool of 3 nodes
2. Create a distributed volume with 3 bricks
3. Start the volume
4. Fuse mount the gluster volume on a node outside the trusted storage pool
5. Remove a brick from the volume
6. Check remove-brick status
7. Stop the remove-brick process
8. Perform fix-layout on the volume
9. Get the rebalance fix-layout status
10. Create a directory from the mount point
11. Check the trusted.glusterfs.dht extended attribute for the newly
created directory on the removed brick
Change-Id: I055438056a9b5df26599a503dd413225eb6f87f5
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
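
A rough CLI sketch of the remove-brick and fix-layout steps above; the volume name, hostnames and brick paths are illustrative:

  gluster volume remove-brick testvol server1:/bricks/brick3 start
  gluster volume remove-brick testvol server1:/bricks/brick3 status
  # stop the in-progress remove-brick, then trigger a fix-layout rebalance
  gluster volume remove-brick testvol server1:/bricks/brick3 stop
  gluster volume rebalance testvol fix-layout start
  gluster volume rebalance testvol status
  # after creating a directory from the mount point, inspect its layout
  # xattr on the brick that was part of the remove-brick
  getfattr -n trusted.glusterfs.dht -e hex /bricks/brick3/newdir
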
Test cases in this module test the
USS functionality while I/O is going on.
Change-Id: Ie7bf440b02980a0606bf4c4061f5a1628179c128
Signed-off-by: srivickynesh <sselvan@redhat.com>
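
For reference, a minimal sketch of toggling USS on a volume; the volume name and mount path are illustrative:

  # enable User Serviceable Snapshots so snapshots appear under .snaps
  gluster volume set testvol features.uss enable
  # while I/O is running on the mount, snapshots can be browsed from the client
  ls /mnt/testvol/.snaps
  gluster volume set testvol features.uss disable
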
When I/O is done with client-side heal disabled,
it should not hang.
RHG3-11098
Change-Id: I2f180dd1ba2f45ae0f302a730a02b90ae77b99ad
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
test case:
- Verify that the file is created on the hashed subvol alone
- Verify that the trusted.glusterfs.pathinfo reflects the file location
- Verify that the file creation fails if the hashed subvol is down
Change-Id: I951c20f03772a0c5739244ec354f9bbfd6d0ea65
Signed-off-by: Susant Palai <spalai@redhat.com>
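
A short sketch of how the file location can be checked from the mount point; the mount path and file name are illustrative:

  # the pathinfo xattr reports the brick(s) where the file physically resides
  getfattr -n trusted.glusterfs.pathinfo /mnt/testvol/file1
  # bring down the bricks of the hashed subvol and retry the create,
  # which is expected to fail
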
Change-Id: I9caa937f74fefad3b9cf13fc0abd0e6a4b380b96
Change-Id: I0360e6590425aea48d7acf2ddb10d9fbfe9fdeef
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
-> Create volume
-> Mount volume
-> Write files on mount point
-> Delete files from mount point
-> Check for any errors logged in all brick logs
Change-Id: Ic744ad04daa0bdb7adcc672360c9ed03f56004ab
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
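
One possible way to scan the brick logs for errors after the I/O, assuming the default log location:

  # brick logs live under /var/log/glusterfs/bricks/ by default;
  # look for messages logged at error severity
  grep " E " /var/log/glusterfs/bricks/*.log
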
The below operations are performed on various types of volumes and mounts.
* Enable Quota
* Create 10 directories one inside the other and set limit of
1GB on each directory
* Perform a quota list operation
* Create some random amount of data inside each directory
* Perform a quota list operation
* Remove the quota limit and delete the data
Change-Id: I2a706eba5c23909e2e6996f485b3f4ead9d5dbca
Signed-off-by: harigowtham <hgowtham@redhat.com>
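
A condensed CLI sketch of the quota steps above; the volume name, mount path and directory depth are illustrative:

  gluster volume quota testvol enable
  # create nested directories and set a 1GB limit on each level
  mkdir -p /mnt/testvol/dir1/dir2/dir3
  gluster volume quota testvol limit-usage /dir1 1GB
  gluster volume quota testvol limit-usage /dir1/dir2 1GB
  gluster volume quota testvol list
  # remove the limit once the data has been deleted
  gluster volume quota testvol remove /dir1
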
Change-Id: I02d3b6f77276e11417bab6236e74d1be0e6a3b32
This test case checks whether a directory with a null gfid gets the gfid
assigned on all the subvols of a dist-rep volume when a lookup comes on
that directory from the mount point.
Change-Id: Ie68cd0e8b293e9380532e2ccda3d53659854de9b
Signed-off-by: karthik-us <ksubrahm@redhat.com>
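
For reference, a rough way such an assignment can be verified on the backend; the mount path and brick paths are illustrative:

  # trigger a lookup on the directory from the mount point
  stat /mnt/testvol/dir1
  # the directory should now carry the same trusted.gfid on every brick
  getfattr -n trusted.gfid -e hex /bricks/brick*/dir1
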
This test case validates snapshot scheduler behaviour
when the scheduler is enabled/disabled.
Change-Id: Ia6f01a9853aaceb05155bfc92cccba686d320e43
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
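
A minimal sketch of driving the scheduler from the CLI, assuming the snap_scheduler.py helper shipped with glusterfs; shared storage is a prerequisite:

  gluster volume set all cluster.enable-shared-storage enable
  snap_scheduler.py init
  snap_scheduler.py enable
  snap_scheduler.py status
  snap_scheduler.py disable
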
Change-Id: I14807c51fb534e5b729da6de69eb062601e80b42
Signed-off-by: Manisha Saini <msaini@redhat.com>
Change-Id: I29eefb9ba5bbe46ba79267b85fb8814a14d10b00
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
1. Trusted storage pool of 2 nodes
2. Create a distributed volume with 2 bricks
3. Start the volume
4. Mount the volume
5. Add some data files on the mount
6. Start rebalance with force
7. Stop glusterd on the 2nd node
8. Check rebalance status; it should not hang
9. Issue a volume related command
Change-Id: Ie3e809e5fe24590eec070607ee99417d0bea0aa0
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
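
A rough outline of the rebalance check; node and volume names are illustrative:

  gluster volume rebalance testvol start force
  # on the 2nd node
  systemctl stop glusterd
  # back on the 1st node: status should return, not hang
  gluster volume rebalance testvol status
  gluster volume info testvol
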
When I/O is done with server-side heal disabled,
it should not hang.
The ec_check_heal_comp function will fail because of
bug 1593224 - client-side heal is not removing the dirty
flag for some of the files.
While this bug has been raised and is being investigated by
dev, this patch does its job and tests the
target functionality.
RHG3-11097
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
Change-Id: I841285c9b1a747f5800ec8cdd29a099e5fcc08c5
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
This test case validates USS behaviour when USS is
enabled on the volume while a brick is down.
Change-Id: I9be021135c1f038a0c6949ce2484b47cd8634c1e
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
Change-Id: Ica3d1175ee5d2c6a45e7b7d6513885ee2b84d960
Signed-off-by: Bala Konda Reddy Mekala <bmekala@redhat.com>
Steps followed are:
1. Create and start a volume
2. Set cluster.server-quorum-type as server
3. Set cluster.server-quorum-ratio as 95%
4. Bring down glusterd in half of the nodes
5. Confirm that quorum is not met by checking whether the bricks are down
6. Perform an add-brick operation, which should fail
7. Check whether the added brick is part of the volume
Change-Id: I93e3676273bbdddad4d4920c46640e60c7875964
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
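
The quorum settings above roughly map to the following CLI; names and paths are illustrative, and the ratio option is cluster wide, hence 'all':

  gluster volume set testvol cluster.server-quorum-type server
  gluster volume set all cluster.server-quorum-ratio 95%
  # with glusterd down on half of the nodes, quorum is lost and this should fail
  gluster volume add-brick testvol server3:/bricks/brick4
  gluster volume info testvol
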
- Enable Quota
- Set quota limit of 1 GB on /
- Create 10 directories inside volume
- Set quota limit of 100 MB on directories
- Fill data inside the directories till the quota limit is reached
- Validate the size fields using quota list
Change-Id: I917da8cdf0d78afd6eeee22b6cf6a4d580ac0c9f
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Change-Id: Iaaa78c071bd7ee3ad3ed222957e71aec61f80045
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
subdir
Change-Id: I8c71470a67fef17d54d5fdfbcf0d36eb156c07dd
Signed-off-by: ubansal <ubansal@redhat.com>
Change-Id: I84c4375c38ef7322e65f113db6c6229620c57214
Signed-off-by: ubansal <ubansal@redhat.com>
-> Set global options and other volume specific options on the volume
-> gluster volume set VOL nfs.rpc-auth-allow 1.1.1.1
-> gluster volume set VOL nfs.addr-namelookup on
-> gluster volume set VOL cluster.server-quorum-type server
-> gluster volume set VOL network.ping-timeout 20
-> gluster volume set VOL nfs.port 2049
-> gluster volume set VOL performance.nfs.write-behind on
-> Peer probe for a new node
Change-Id: Ifc06159b436e74ae4865ffcbe877b84307d517fd
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Change-Id: I5dd80e8b1ec8a0e3ab7f565c478be368c2e7c73d
Signed-off-by: ubansal <ubansal@redhat.com>
Test quota limit-usage by setting limits of various values -
big, small and decimal, e.g. 1GB, 10GB, 2.5GB, etc. -
and validate the limits by creating more data than the
hard limits.
(After reaching the hard limit, data creation should stop.)
Addressed review comments.
Change-Id: If2801cf13ea22c253b22ecb41fc07f2f1705a6d7
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
-> Create volume
-> Mount the volume
-> Set 'read-only on' on the volume
-> Perform some I/O on the mount point
-> Set 'read-only off' on the volume
-> Perform some I/O on the mount point
Change-Id: Iab980b1fd51edd764ef38b329275d72f875bf3c0
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
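
The read-only toggle above corresponds to the features.read-only volume option; the volume name is illustrative:

  # writes from the mount should fail while this is on
  gluster volume set testvol features.read-only on
  gluster volume set testvol features.read-only off
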
Rebalance should fail on a pure distribute volume when glusterd is down
on one of the nodes.
Change-Id: I5a871a7783b434ef61f0f1cf4b262db9f5148af6
Signed-off-by: Prasad Desala <tdesala@redhat.com>
Added:
1. georep_root_prerequisites
2. georep_create_root_session
Change-Id: Iac026322bc387c6b54bcd81a734785eb9d5cae9d
Signed-off-by: rallan <rallan@redhat.com>
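
As a reference for what a root geo-rep session setup looks like from the CLI (master/slave names are illustrative; the helpers above wrap steps of this kind):

  # passwordless SSH from the master node to the slave node is a prerequisite
  gluster system:: execute gsec_create
  gluster volume geo-replication mastervol slavehost::slavevol create push-pem
  gluster volume geo-replication mastervol slavehost::slavevol start
  gluster volume geo-replication mastervol slavehost::slavevol status
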
Also updated the following:
1. Renamed the API 'setup_mountbroker_prerequisites'
to 'georep_nonroot_prerequisites'
2. Added setting up of passwordless SSH to the slave node
in the prerequisites
Change-Id: I15e567100750d88d7e9e698308c852ad6afbf082
Signed-off-by: Kotresh HR <khiremat@redhat.com>
1. API to generate an SSH key pair (ssh-keygen)
2. API to copy the SSH key to a remote node
Change-Id: I0b89ce9d77d4a16eaa3ad10f646d412f1190f56e
Signed-off-by: Kotresh HR <khiremat@redhat.com>
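
A minimal sketch of what those two helpers would do at the shell level; the host name and key path are illustrative:

  # generate a key pair without a passphrase
  ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
  # push the public key to the remote node
  ssh-copy-id -i /root/.ssh/id_rsa.pub root@slavehost
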
Also fix the typo in georep_create
Change-Id: I58ca115da827b458d07ce38806a40b1d5bfae643
Signed-off-by: Kotresh HR <khiremat@redhat.com>
-> Create volume
-> Set quorum type
-> Set quorum ratio to 95%
-> Start the volume
-> Stop glusterd on one node
-> Now quorum is in the 'not met' condition
-> Check whether all bricks went offline
-> Perform a replace-brick operation
-> Start glusterd on the same node where it was stopped
-> Check whether all bricks are online
-> Verify in volume info that the old brick was not replaced with the new brick
Change-Id: Iab84df9449feeaba66ff0df2d0acbddb6b4e7591
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
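
A condensed sketch of the replace-brick attempt under lost quorum; the volume name, host and brick paths are illustrative:

  # with quorum not met (glusterd stopped on one node), this should fail
  gluster volume replace-brick testvol server2:/bricks/brick2 server2:/bricks/brick2_new commit force
  # after glusterd is started again, confirm the old brick is still present
  gluster volume info testvol
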
Change-Id: Icd529b26c66f7fc8e39f620276d9fb7053cd7547
Signed-off-by: Kotresh HR <khiremat@redhat.com>
API to set the password for the given user
and group on all the slave nodes.
Change-Id: I69dc150c598c9101be825f159f037f4ad43706ed
Signed-off-by: Kotresh HR <khiremat@redhat.com>
While a remove-brick operation is in progress on a volume, glusterd
should not allow an add-brick operation on the same volume.
Change-Id: Iddcbbdb1a5a444ea88995f176c0a18df932dea41
Signed-off-by: Prasad Desala <tdesala@redhat.com>
Setting up mountbroker along with adding a user
and checking status. Setting up pem keys specific
to non-root.
Change-Id: Ic8e38087d118f43aea0da270ea8f8f9da81286c1
Signed-off-by: rallan <rallan@redhat.com>
Signed-off-by: Kotresh HR <khiremat@redhat.com>
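
For context, a hedged sketch of the mountbroker setup on the slave side, assuming the gluster-mountbroker helper available in recent glusterfs releases; the user, group, volume and path names are illustrative and may differ by version:

  # run on the slave node as root
  gluster-mountbroker setup /var/mountbroker-root geogroup
  gluster-mountbroker add slavevol geoaccount
  gluster-mountbroker status
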
Contains ops for restarting glusterd (used for slave nodes
in non-root), adding a group as well as a user to set up
a non-root geo-rep session
Change-Id: Iec0990e86fbb5a92a70f26820d43529c21e1742f
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Signed-off-by: rallan <rallan@redhat.com>
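
A rough shell equivalent of those ops, with illustrative user and group names:

  # on the slave nodes: create the group and user used by the non-root session,
  # then restart glusterd so the mountbroker settings take effect
  groupadd geogroup
  useradd -m -G geogroup geoaccount
  systemctl restart glusterd
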
Change-Id: I6cb2a321cd774e5c1008d27c42d9df9219e74ff0
Signed-off-by: rallan <rallan@redhat.com>
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Deletion of a geo-rep session and getting the status
of the geo-rep session
Change-Id: I94f3c1877c4530246e1cc7077085c92ee7c72101
Signed-off-by: rallan <rallan@redhat.com>
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Change-Id: I47236b1b1bbd40b52bc85ad59ed7b78faa432410
Signed-off-by: rallan <rallan@redhat.com>
Signed-off-by: Kotresh HR <khiremat@redhat.com>
If a rebalance is in progress on a volume, glusterd should fail a
remove-brick operation on the same volume.
Change-Id: I2f15023870f342c98186b1860b960cb3c04c0572
Signed-off-by: Prasad Desala <tdesala@redhat.com>
the cluster
In this test case, we set some volume options when one of the
nodes in the cluster is down and, after the node is back up, check whether
the volume info is synced. We also try to peer probe a new
node while bringing down glusterd on some node in the cluster.
After the node is up, we check whether peer status has the correct information.
Steps followed are:
1. Create a cluster
2. Create a 2x3 distributed-replicated volume
3. Start the volume
4. From N1 issue 'gluster volume set <vol-name> stat-prefetch on'
5. At the same time when Step4 is happening, bring down glusterd of N2
6. Start glusterd on N2
7. Verify volume info is synced
8. From N1, issue 'gluster peer probe <new-host>'
9. At the same time when Step8 is happening, bring down glusterd of N2
10. Start glusterd on N2
11. Check the peer status has correct information across the cluster.
Change-Id: Ib95268a3fe11cfbc5c76aa090658133ecc8a0517
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
-> Detach the node from the peer
-> Check for any error messages related to peer detach
in the glusterd log file
-> No errors should be present in the glusterd log file
Change-Id: I481df5b15528fb6fd77cd1372110d7d23dd5cdef
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
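
One way the check could look on the shell, assuming the default glusterd log path; the host name is illustrative:

  gluster peer detach server2
  # the glusterd log should not gain new error-severity entries for the detach
  grep " E " /var/log/glusterfs/glusterd.log | grep -i "peer detach"
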
This test tries to create and validate an EC volume
with various combinations of input parameters.
RHG3-12926
Change-Id: Icfc15e069d04475ca65b4d7c1dd260434f104cdb
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
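
An example of one such combination from the CLI; the volume name, brick counts and paths are illustrative:

  # 4+2 dispersed volume: 6 bricks, tolerating the loss of any 2
  gluster volume create ec_vol disperse 6 redundancy 2 \
      server{1..6}:/bricks/brick1/ec_vol
  gluster volume start ec_vol
  gluster volume info ec_vol
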
Change-Id: Icd5c423ad1b2fee770680cc66d9919c930c4780f
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Steps followed are:
1. Create and start a volume
2. Set cluster.server-quorum-type as server
3. Set cluster.server-quorum-ratio as 95%
4. Bring down glusterd in half of the nodes
5. Confirm that quorum is not met by checking whether the bricks are down
6. Perform a remove-brick operation, which should fail
Change-Id: I69525651727ec92dce2f346ad706ab0943490a2d
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Self heal should heal the files even if the quota object limit
is exceeded on a directory.
Change-Id: Icc63b1794f82aef708832d0b207ded5f13391b85
Signed-off-by: karthik-us <ksubrahm@redhat.com>
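
A short sketch of the quota object limit involved; the volume name, directory and count are illustrative:

  gluster volume quota testvol enable
  # cap the number of files/directories that can be created under /dir1
  gluster volume quota testvol limit-objects /dir1 10
  # even with the limit exceeded, self-heal should still heal pending files
  gluster volume heal testvol
  gluster volume heal testvol info
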
Change-Id: I94b67fe9a810f020fef36ec9ab00ce7182c9e5c0
Signed-off-by: Manisha Saini <msaini@redhat.com>