| Commit message | Author | Age | Files | Lines |
-> Set global options and other volume-specific options on the volume
-> gluster volume set VOL nfs.rpc-auth-allow 1.1.1.1
-> gluster volume set VOL nfs.addr-namelookup on
-> gluster volume set VOL cluster.server-quorum-type server
-> gluster volume set VOL network.ping-timeout 20
-> gluster volume set VOL nfs.port 2049
-> gluster volume set VOL performance.nfs.write-behind on
-> Peer probe for a new node
Change-Id: Ifc06159b436e74ae4865ffcbe877b84307d517fd
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
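A minimal dry-run sketch of the option set and peer probe above (the `gluster` CLI is stubbed with an echo wrapper so the sequence can be read without a live cluster; the volume name `testvol` and host `new-node.example.com` are placeholders):

```shell
# Stub: echo each command instead of contacting glusterd.
# Remove this function to run against a real cluster.
gluster() { echo "gluster $*"; }

VOL=testvol  # placeholder volume name
while read -r opt val; do
    gluster volume set "$VOL" "$opt" "$val"
done <<'EOF'
nfs.rpc-auth-allow 1.1.1.1
nfs.addr-namelookup on
cluster.server-quorum-type server
network.ping-timeout 20
nfs.port 2049
performance.nfs.write-behind on
EOF

# Peer probe for a new node (placeholder hostname):
gluster peer probe new-node.example.com
```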
Change-Id: I5dd80e8b1ec8a0e3ab7f565c478be368c2e7c73d
Signed-off-by: ubansal <ubansal@redhat.com>
Test quota limit-usage by setting limits of various values:
large, small and decimal, e.g. 1GB, 10GB, 2.5GB, etc.
Validate the limits by creating more data than the
hard limits allow (after the hard limit is reached,
data creation should stop).
Addressed review comments.
Change-Id: If2801cf13ea22c253b22ecb41fc07f2f1705a6d7
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
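The limit checks above can be sketched as a dry run (`gluster` is stubbed with echo, and `testvol`, the directory names and the mount path are placeholders):

```shell
gluster() { echo "gluster $*"; }  # stub; remove for a real cluster

VOL=testvol
gluster volume quota "$VOL" enable
# Big, small and decimal limits on separate directories:
for lim in 1GB 10GB 2.5GB; do
    gluster volume quota "$VOL" limit-usage "/dir_$lim" "$lim"
done
gluster volume quota "$VOL" list
# On the mount, writing past a hard limit should fail with EDQUOT, e.g.:
#   dd if=/dev/zero of=/mnt/testvol/dir_1GB/file bs=1M count=2048
```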
-> Create volume
-> Mount the volume
-> Set 'read-only on' on the volume
-> Perform some I/O on the mount point
-> Set 'read-only off' on the volume
-> Perform some I/O on the mount point
Change-Id: Iab980b1fd51edd764ef38b329275d72f875bf3c0
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
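A dry-run sketch of the steps above (stubbed `gluster`; the volume and brick names are placeholders):

```shell
gluster() { echo "gluster $*"; }  # stub; remove for a real cluster

VOL=testvol
gluster volume create "$VOL" server1:/bricks/b1
gluster volume start "$VOL"
# After mounting, toggling read-only flips I/O behaviour on the mount point:
gluster volume set "$VOL" read-only on   # writes should now fail (EROFS)
gluster volume set "$VOL" read-only off  # writes should succeed again
```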
Rebalance should fail on a pure distribute volume when glusterd is down
on one of the nodes.
Change-Id: I5a871a7783b434ef61f0f1cf4b262db9f5148af6
Signed-off-by: Prasad Desala <tdesala@redhat.com>
Added:
1. georep_root_prerequisites
2. georep_create_root_session
Change-Id: Iac026322bc387c6b54bcd81a734785eb9d5cae9d
Signed-off-by: rallan <rallan@redhat.com>
Also updated the following:
1. Renamed API 'setup_mountbroker_prerequisites'
   to 'georep_nonroot_prerequisites'
2. Added setup of passwordless SSH to the slave node
   in the prerequisites
Change-Id: I15e567100750d88d7e9e698308c852ad6afbf082
Signed-off-by: Kotresh HR <khiremat@redhat.com>
1. API to generate an ssh key pair (ssh-keygen)
2. API to copy the ssh key to a remote node
Change-Id: I0b89ce9d77d4a16eaa3ad10f646d412f1190f56e
Signed-off-by: Kotresh HR <khiremat@redhat.com>
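The two APIs correspond to standard OpenSSH commands; a minimal sketch (the key path is a scratch directory and the remote host is a placeholder, so the copy step is only shown as a comment):

```shell
# Generate a passphrase-less RSA key pair in a temporary directory.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$keydir/id_rsa"

# Copying the public key to a remote node would then be:
#   ssh-copy-id -i "$keydir/id_rsa.pub" root@remote-node.example.com
ls "$keydir"
```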
Also fix the typo in georep_create
Change-Id: I58ca115da827b458d07ce38806a40b1d5bfae643
Signed-off-by: Kotresh HR <khiremat@redhat.com>
-> Create volume
-> Set quorum type
-> Set quorum ratio to 95%
-> Start the volume
-> Stop glusterd on one node
-> Now quorum is not met
-> Check whether all bricks went offline
-> Perform a replace-brick operation
-> Start glusterd on the same node that was stopped
-> Check whether all bricks are back online
-> Verify in volume info that the old brick was not replaced with the new brick
Change-Id: Iab84df9449feeaba66ff0df2d0acbddb6b4e7591
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Change-Id: Icd529b26c66f7fc8e39f620276d9fb7053cd7547
Signed-off-by: Kotresh HR <khiremat@redhat.com>
API to set the password for the given user
and group on all the slave nodes.
Change-Id: I69dc150c598c9101be825f159f037f4ad43706ed
Signed-off-by: Kotresh HR <khiremat@redhat.com>
While remove-brick operation is in-progress on a volume, glusterd
should not allow add-brick operation on the same volume.
Change-Id: Iddcbbdb1a5a444ea88995f176c0a18df932dea41
Signed-off-by: Prasad Desala <tdesala@redhat.com>
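The expected CLI behaviour can be sketched as below (dry run: `gluster` is stubbed with echo; the volume and brick paths are placeholders):

```shell
gluster() { echo "gluster $*"; }  # stub; remove for a real cluster

VOL=testvol
gluster volume remove-brick "$VOL" server3:/bricks/b3 start
gluster volume remove-brick "$VOL" server3:/bricks/b3 status
# While the remove-brick above is still in progress, glusterd must reject:
gluster volume add-brick "$VOL" server4:/bricks/b4
```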
Setting up the mountbroker along with adding a user
and checking status. Setting up pem keys specific
to non-root.
Change-Id: Ic8e38087d118f43aea0da270ea8f8f9da81286c1
Signed-off-by: rallan <rallan@redhat.com>
Signed-off-by: Kotresh HR <khiremat@redhat.com>
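The non-root setup maps onto the `gluster-mountbroker` helper; a dry-run sketch (the commands are echoed through a stub named `mb`, and the broker root, group, user and volume names are placeholders):

```shell
mb() { echo "gluster-mountbroker $*"; }   # stub for gluster-mountbroker
gluster() { echo "gluster $*"; }          # stub for gluster

mb setup /var/mountbroker-root geogroup   # broker root directory + group
mb add slavevol geoaccount                # map the non-root user to the slave volume
mb status
# Pem keys for the non-root session:
gluster system:: execute gsec_create
```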
Contains ops for restarting glusterd (used for slave nodes
in non-root), adding a group as well as a user to set up
a non-root geo-rep session
Change-Id: Iec0990e86fbb5a92a70f26820d43529c21e1742f
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Signed-off-by: rallan <rallan@redhat.com>
Change-Id: I6cb2a321cd774e5c1008d27c42d9df9219e74ff0
Signed-off-by: rallan <rallan@redhat.com>
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Deletion of a geo-rep session and getting the status
of the geo-rep session
Change-Id: I94f3c1877c4530246e1cc7077085c92ee7c72101
Signed-off-by: rallan <rallan@redhat.com>
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Change-Id: I47236b1b1bbd40b52bc85ad59ed7b78faa432410
Signed-off-by: rallan <rallan@redhat.com>
Signed-off-by: Kotresh HR <khiremat@redhat.com>
If a rebalance is in-progress on a volume, glusterd should fail a
remove-brick operation on the same volume.
Change-Id: I2f15023870f342c98186b1860b960cb3c04c0572
Signed-off-by: Prasad Desala <tdesala@redhat.com>
the cluster
In this test case, we set some volume options while one of the
nodes in the cluster is down and, after the node comes back up, check whether
the volume info is synced. We also peer probe a new node while
bringing down glusterd on one of the nodes in the cluster.
After the node is up, we check whether the peer status has the correct information.
Steps followed are:
1. Create a cluster
2. Create a 2x3 distributed-replicated volume
3. Start the volume
4. From N1 issue 'gluster volume set <vol-name> stat-prefetch on'
5. At the same time when Step4 is happening, bring down glusterd of N2
6. Start glusterd on N2
7. Verify volume info is synced
8. From N1, issue 'gluster peer probe <new-host>'
9. At the same time when Step8 is happening, bring down glusterd of N2
10. Start glusterd on N2
11. Check the peer status has correct information across the cluster.
Change-Id: Ib95268a3fe11cfbc5c76aa090658133ecc8a0517
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
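Steps 4-11 above can be sketched as a dry run (both `gluster` and `systemctl` are stubbed with echo; the N1/N2 actions are marked in comments and all names are placeholders):

```shell
gluster() { echo "gluster $*"; }      # stub
systemctl() { echo "systemctl $*"; }  # stub

VOL=testvol
gluster volume set "$VOL" stat-prefetch on &  # step 4, from N1
systemctl stop glusterd                       # step 5, on N2 (concurrently)
wait
systemctl start glusterd                      # step 6, on N2
gluster volume info "$VOL"                    # step 7: must match on every node
gluster peer probe new-node.example.com &     # step 8, from N1
systemctl stop glusterd                       # step 9, on N2 (concurrently)
wait
systemctl start glusterd                      # step 10, on N2
gluster peer status                           # step 11: consistent cluster-wide
```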
-> Detach the node from the peer
-> Check for any error messages related to peer detach
   in the glusterd log file
-> No errors should be present in the glusterd log file
Change-Id: I481df5b15528fb6fd77cd1372110d7d23dd5cdef
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
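A sketch of the detach-and-scan flow (stubbed `gluster`; the peer hostname is a placeholder, and the log path shown is the common default, which may vary by version):

```shell
gluster() { echo "gluster $*"; }  # stub; remove for a real cluster

gluster peer detach server2.example.com
# On a healthy run the scan should find nothing (grep exits 1):
#   grep -i error /var/log/glusterfs/glusterd.log | grep -i detach
```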
This test tries to create and validate EC volume
with various combinations of input parameters.
RHG3-12926
Change-Id: Icfc15e069d04475ca65b4d7c1dd260434f104cdb
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Change-Id: Icd5c423ad1b2fee770680cc66d9919c930c4780f
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Steps followed are:
1. Create and start a volume
2. Set cluster.server-quorum-type to server
3. Set cluster.server-quorum-ratio to 95%
4. Bring down glusterd on half of the nodes
5. Confirm that quorum is not met, by checking whether the bricks are down
6. Perform a remove-brick operation, which should fail
Change-Id: I69525651727ec92dce2f346ad706ab0943490a2d
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
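A dry-run sketch of the quorum steps (stubbed CLI; volume, brick and node names are placeholders, and `cluster.server-quorum-ratio` is a cluster-wide option set on `all`):

```shell
gluster() { echo "gluster $*"; }      # stub
systemctl() { echo "systemctl $*"; }  # stub

VOL=testvol
gluster volume set "$VOL" cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 95%
systemctl stop glusterd          # on half of the nodes: quorum is lost
gluster volume status "$VOL"     # bricks should be shown offline
# Without quorum, this must fail:
gluster volume remove-brick "$VOL" server2:/bricks/b2 start
```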
Self heal should heal the files even if the quota object limit
is exceeded on a directory.
Change-Id: Icc63b1794f82aef708832d0b207ded5f13391b85
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Change-Id: I94b67fe9a810f020fef36ec9ab00ce7182c9e5c0
Signed-off-by: Manisha Saini <msaini@redhat.com>
sub-directory level using both IP and hostname of clients.
Change-Id: I3822b2cfd0fbadcdcbc679f046b299d84e741f19
Change-Id: I8770aa4fdfd4bf94ecdda3e80a79c6717e2974dd
Activated snaps should get listed in the .snaps directory while deactivated
snaps should not.
Change-Id: I04a61c49dcbc9510d60cc8ee6b1364742271bbf0
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
Test case objective:
Restarting glusterd should not restart a completed
rebalance operation.
Change-Id: I52b808d91d461048044ac742185ddf4696bf94a3
Signed-off-by: Prasad Desala <tdesala@redhat.com>
In this test case we set the auth.allow option to a value of more than 4096
characters and restart glusterd. glusterd should restart successfully.
Steps followed:
1. Create and start a volume
2. Set auth.allow with <4096 characters
3. Restart glusterd, it should succeed
4. Set auth.allow with >4096 characters
5. Restart glusterd, it should succeed
6. Confirm that glusterd is running on the stopped node
Change-Id: I7a5a8e49a798238bd88e5da54a8f4857c039ca07
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
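Building an over-4096-character `auth.allow` value can be sketched as below (stubbed `gluster`; the addresses come from the 192.0.2.0/24 documentation range and the volume name is a placeholder):

```shell
gluster() { echo "gluster $*"; }  # stub; remove for a real cluster

# Grow a comma-separated address list until it exceeds 4096 characters.
allow=""
i=0
while [ ${#allow} -le 4200 ]; do
    allow="$allow,192.0.2.$((i % 254 + 1))"
    i=$((i + 1))
done
allow=${allow#,}            # drop the leading comma
echo "auth.allow length: ${#allow}"
gluster volume set testvol auth.allow "$allow"
# ...then restart glusterd; it should come back up cleanly.
```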
1. Create a distribute volume on Node 1
2. Bring down a brick on Node 1
3. Peer probe N2 from N1
4. Add an identical brick on the newly added node
5. Check volume status
Change-Id: I17c4769df6e4ec2f11b7d948ca48a006cf301073
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Create 10 clones of snapshot and verify gluster volume list
information
Change-Id: Ibd813680d1890e239deaf415469f7f4dccfa6867
Signed-off-by: srivickynesh <sselvan@redhat.com>
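A dry-run sketch of the clone loop (stubbed `gluster`; the snapshot, clone and volume names are placeholders):

```shell
gluster() { echo "gluster $*"; }  # stub; remove for a real cluster

gluster snapshot create snap1 testvol no-timestamp
i=1
while [ "$i" -le 10 ]; do
    gluster snapshot clone "clone$i" snap1
    i=$((i + 1))
done
gluster volume list   # clone1..clone10 should appear here
```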
Change-Id: I2ba674b8ea97964040f2e7d47a169c1e41808116
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
-> Create volume
-> Mount the volume on 2 clients
-> Run I/O on the mount point
-> While I/O is in progress
-> Perform 'gluster volume status fd' repeatedly
-> List all files and directories
Change-Id: I2d979dd79fa37ad270057bd87d290c84569c4a3d
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Change-Id: I40f41c03e5ea8130a7374579b249bdd113b4a842
Change-Id: I624e041271d3b776e243aebfab43e081ccfd7946
Signed-off-by: Manisha Saini <msaini@redhat.com>
Change-Id: Iaad1dcb4339aa752a45e39d7bca338d1fdc87da0
This test case enables quota on a directory of the volume, renames
the directory, and checks whether the quota list shows the renamed directory.
Incorporated the changes made to quota_ops and quota_libs.
Change-Id: I7166a9810614c966a4a656b5e8976df55b102c01
Signed-off-by: venkata edara <redara@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
-> Create a distributed-replicated volume
-> Add 6 bricks to the volume
-> Mount the volume
-> Perform some I/O on the mount point
-> Unmount the volume
-> Stop and delete the volume
-> Create another volume using the bricks of the deleted volume
Change-Id: I263d2f0a359ccb0409dba620363a39d92ea8d2b9
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
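A dry-run sketch of the delete-and-reuse flow (stubbed `gluster`; names and paths are placeholders). Reusing bricks of a deleted volume normally requires clearing the stale brick xattrs, or `force`:

```shell
gluster() { echo "gluster $*"; }  # stub; remove for a real cluster

VOL=testvol
gluster volume stop "$VOL"
gluster volume delete "$VOL"
# On each brick host, clear the old volume's marks before reuse, e.g.:
#   setfattr -x trusted.glusterfs.volume-id /bricks/b1
#   rm -rf /bricks/b1/.glusterfs
gluster volume create newvol replica 3 \
    server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 force
```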
Change-Id: Iff0e832ebcad14968328c7d7575d120ba8152252
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
The following modifications have been made:
1. Added a function for snap scheduler initialization.
2. Snap scheduler initialization should not be combined with snap
   schedule enable.
3. Snap scheduler enable/disable should be issued on a single node instead
   of on every node in the cluster.
Change-Id: I23650f48b152debdfb4d7bc8af6f65ecb2bcddfb
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
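The split between init and enable maps onto the `snap_scheduler.py` helper roughly as follows (an echo stub named `sched` stands in for `snap_scheduler.py`; the job name, cron schedule and volume are placeholders):

```shell
sched() { echo "snap_scheduler.py $*"; }  # stub for snap_scheduler.py

sched init                                # initialization: run once on each node
sched enable                              # enable: issue on a single node only
sched status
sched add "job1" "*/30 * * * *" testvol   # schedule snapshots of testvol
```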
This test case verifies rebalance behaviour while I/O is in progress from
multiple clients.
Change-Id: Id87472a8194d31e5de181827cfcf30ccacc346c0
Signed-off-by: Prasad Desala <tdesala@redhat.com>
Change-Id: I3ad5486c5b507fa82ac2f4c0b7c0bdadfc523220
Test Cases in this module tests the snapshot information after
glusterd is restarted.
Change-Id: I7c5e761d8a8cd261841d064dbd94093e1c5b6edd
Signed-off-by: srivickynesh <sselvan@redhat.com>
Change-Id: I3fbb764925fb19b3e4808711eadbf51090ed98b3
Signed-off-by: Manisha Saini <msaini@redhat.com>
Self heal should heal the files even if the quota limit on a
directory is reached.
Change-Id: I336b78eb55cd5c7ec6b3236f95ce9f0cb8423667
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Fix for function view_snaps_from_mounts:
1. Iteration through snap_list was incorrect, as we were comparing
snaps taken from snap_list against snap_list itself.
2. It is now changed to compare against 'snaps', which is a superset of all snaps.
Change-Id: Ib14e7819f6fd49e563fd9e8a8f7699581a8900b4
Signed-off-by: srivickynesh <sselvan@redhat.com>
Deletion of a file on the source bricks must be reflected on the sink brick
after bringing it up (a conservative merge must NOT happen) when quota is enabled.
Change-Id: I8c3f55ddd1eee9a211674c8759b94aa801f6f174
Change-Id: I01ad84cc2e35873b985d5d86d3cbacd226b42ae1
Signed-off-by: ubansal <ubansal@redhat.com>