| Commit message | Author | Age | Files | Lines |
Change-Id: I8770aa4fdfd4bf94ecdda3e80a79c6717e2974dd
Activated snaps should get listed in the .snap directory while deactivated
snaps should not.
Change-Id: I04a61c49dcbc9510d60cc8ee6b1364742271bbf0
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
Test case objective:
Restarting glusterd should not restart a completed
rebalance operation.
Change-Id: I52b808d91d461048044ac742185ddf4696bf94a3
Signed-off-by: Prasad Desala <tdesala@redhat.com>
In this test case we set the auth.allow option with more than 4096
characters and restart glusterd; glusterd should restart successfully.
Steps followed:
1. Create and start a volume
2. Set auth.allow with fewer than 4096 characters
3. Restart glusterd; it should succeed
4. Set auth.allow with more than 4096 characters
5. Restart glusterd; it should succeed
6. Confirm that glusterd is running on the restarted node
Change-Id: I7a5a8e49a798238bd88e5da54a8f4857c039ca07
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
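The steps above can be sketched with the gluster CLI; the volume name, address lists and service commands below are hypothetical, not taken from the patch:

```shell
# Sketch only: testvol and the allow-lists are made-up names.
gluster volume create testvol server1:/bricks/b1 force
gluster volume start testvol

# auth.allow shorter than 4096 characters, then restart glusterd
gluster volume set testvol auth.allow "192.168.1.10,192.168.1.11"
systemctl restart glusterd

# Build an allow-list longer than 4096 characters (600 addresses), then restart
long_list=$(seq -f "10.0.%g.1" 1 600 | paste -sd, -)
gluster volume set testvol auth.allow "$long_list"
systemctl restart glusterd

# glusterd should still be running
systemctl is-active glusterd
```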
1. Create a distribute volume on Node 1 (N1)
2. Bring down a brick on N1
3. Peer probe N2 from N1
4. Add an identical brick on the newly added node
5. Check volume status
Change-Id: I17c4769df6e4ec2f11b7d948ca48a006cf301073
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
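A possible CLI sketch of the five steps; host names and brick paths are hypothetical:

```shell
gluster volume create testvol N1:/bricks/b1 force     # 1. distribute volume on N1
kill -9 "$(pgrep -f 'glusterfsd.*bricks/b1')"         # 2. one way to bring the brick down
gluster peer probe N2                                 # 3. probe N2 from N1
gluster volume add-brick testvol N2:/bricks/b1 force  # 4. identical brick on the new node
gluster volume status testvol                         # 5. check volume status
```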
Create 10 clones of a snapshot and verify the gluster volume list
information.
Change-Id: Ibd813680d1890e239deaf415469f7f4dccfa6867
Signed-off-by: srivickynesh <sselvan@redhat.com>
Change-Id: I2ba674b8ea97964040f2e7d47a169c1e41808116
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
-> Create volume
-> Mount the volume on 2 clients
-> Run I/O on the mount points
-> While I/O is in progress, perform gluster volume status fd repeatedly
-> Verify that all files and dirs are listed
Change-Id: I2d979dd79fa37ad270057bd87d290c84569c4a3d
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
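The repeated status query could look like the loop below; the volume name and the assumption that the client I/O is driven by dd are hypothetical:

```shell
# Poll open-fd information while client I/O runs on the mounts.
while pgrep -f dd >/dev/null; do
    gluster volume status testvol fd
    sleep 5
done
```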
Change-Id: I40f41c03e5ea8130a7374579b249bdd113b4a842
Change-Id: I624e041271d3b776e243aebfab43e081ccfd7946
Signed-off-by: Manisha Saini <msaini@redhat.com>
Change-Id: Iaad1dcb4339aa752a45e39d7bca338d1fdc87da0
This test case enables quota on a directory of the volume, renames the
directory, and checks whether the quota list shows the renamed directory.
Incorporates the changes made to quota_ops and quota_libs.
Change-Id: I7166a9810614c966a4a656b5e8976df55b102c01
Signed-off-by: venkata edara <redara@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
-> Create a distributed-replicated volume
-> Add 6 bricks to the volume
-> Mount the volume
-> Perform some I/O on the mount point
-> Unmount the volume
-> Stop and delete the volume
-> Create another volume using the bricks of the deleted volume
Change-Id: I263d2f0a359ccb0409dba620363a39d92ea8d2b9
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Change-Id: Iff0e832ebcad14968328c7d7575d120ba8152252
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
The following modifications have been made:
1. Added a function for snap scheduler initialization.
2. Snap scheduler initialization should not be combined with snap
schedule enable.
3. Snap scheduler enable/disable should be issued on a single node
instead of on every node in the cluster.
Change-Id: I23650f48b152debdfb4d7bc8af6f65ecb2bcddfb
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
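For reference, a minimal sketch of the intended split between initialization and enable, using the snap_scheduler.py tool that ships with gluster (shared storage is assumed to be mounted already):

```shell
snap_scheduler.py init      # initialization is its own step ...
snap_scheduler.py enable    # ... and enable is issued on a single node only
snap_scheduler.py status
```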
This test case verifies rebalance behaviour while I/O is in progress from
multiple clients.
Change-Id: Id87472a8194d31e5de181827cfcf30ccacc346c0
Signed-off-by: Prasad Desala <tdesala@redhat.com>
Change-Id: I3ad5486c5b507fa82ac2f4c0b7c0bdadfc523220
Test cases in this module test the snapshot information after
glusterd is restarted.
Change-Id: I7c5e761d8a8cd261841d064dbd94093e1c5b6edd
Signed-off-by: srivickynesh <sselvan@redhat.com>
Change-Id: I3fbb764925fb19b3e4808711eadbf51090ed98b3
Signed-off-by: Manisha Saini <msaini@redhat.com>
Self heal should heal the files even if the quota limit on a
directory is reached.
Change-Id: I336b78eb55cd5c7ec6b3236f95ce9f0cb8423667
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Fix for the function view_snaps_from_mounts:
1. Iteration through snap_list was incorrect, as snaps taken from
snap_list were compared against snap_list itself.
2. It now iterates through snaps, which is a superset of all snaps.
Change-Id: Ib14e7819f6fd49e563fd9e8a8f7699581a8900b4
Signed-off-by: srivickynesh <sselvan@redhat.com>
Deletion of a file on the source bricks must be reflected on the sink brick
after bringing it up (a conservative merge must NOT happen) when quota is enabled.
Change-Id: I8c3f55ddd1eee9a211674c8759b94aa801f6f174
Change-Id: I01ad84cc2e35873b985d5d86d3cbacd226b42ae1
Signed-off-by: ubansal <ubansal@redhat.com>
In this test case, we check gluster volume status and gluster
volume status --xml from a node which is part of the cluster but does
not host any bricks of the volume.
Steps followed are:
1. Create a two node cluster
2. Create a distributed volume with one brick (assume the brick belongs to N1)
3. From the node which has no bricks, i.e. N2, check gluster v status;
it should fail, saying the volume is not started
4. From N2, check gluster v status --xml; it should fail because the volume
is not started yet
5. Start the volume
6. From N2, check gluster v status; this should succeed
7. From N2, check gluster v status --xml; this should succeed
Change-Id: I1a230b82c0628c66c16f25f89dd4e6d1d0b3f443
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
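A sketch of the checks as run from N2, the brickless peer (all names hypothetical):

```shell
gluster peer probe N2                  # from N1: form the two-node cluster
gluster volume create testvol N1:/bricks/b1 force
gluster volume status testvol          # on N2: fails, volume is not started
gluster volume status testvol --xml    # on N2: fails for the same reason
gluster volume start testvol
gluster volume status testvol          # on N2: succeeds
gluster volume status testvol --xml    # on N2: succeeds
```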
Change-Id: I4d2767acb4fff7ab972e11d13ef4c547914110d9
Signed-off-by: rallan <rallan@redhat.com>
Change-Id: I3e789d2e8ad24cca62d8a5ef8cfb9511375cfe0e
Signed-off-by: Manisha Saini <msaini@redhat.com>
Change-Id: Ic356857db199529a4eaacb9140d71a8fd7c70375
Signed-off-by: Manisha Saini <msaini@redhat.com>
bricks are down
Change-Id: I1169250706494b1b833d3b7e8a1ee148426e224b
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Adding quota_libs.py with the quota_fetch_daemon_pid library.
This library gets the PID of the quota daemon from all the nodes.
Change-Id: Icc5ecba5649a2f7fd48d5cedda6c1dd3ad8b50c0
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Change-Id: I6cdb3122af071f9f7bcfaeba8427e5c4aad8a4ec
When bricks of various sizes are used to create a
disperse volume, the volume size should be
(number of data bricks * smallest brick size).
RHG3-11124
Change-Id: Ic791212bf028328996b896ae4896cf860c153264
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
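The expected capacity can be checked with plain shell arithmetic; the 4+2 layout and the GiB sizes below are made-up values, not from the test:

```shell
# A 4+2 disperse volume has 4 data bricks; brick sizes (GiB) are made up.
data_bricks=4
least=$(printf '%s\n' 10 12 8 9 11 10 | sort -n | head -1)
usable=$((data_bricks * least))
echo "${usable} GiB"   # prints "32 GiB": capacity is 4 x the smallest brick
```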
that nodes are online even if nodes are still offline
Change-Id: I57da740fbc8eef2e41d5dfe3bb82a8d487630893
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
We renamed the quota functions, which caused some conflicts on master.
Change-Id: I5eb6381dc77dcd99929cbc20173941bf1bd2290d
Change-Id: I1b06d935ae056dd0fe583f3c1a9f181ec2a40076
Signed-off-by: rallan <rallan@redhat.com>
Change-Id: Ie5072e8f29185104bcd54348465cc6e96cce5f00
Signed-off-by: Akarsha <akrai@redhat.com>
Information about the snapshots taken for a specified snapshot/volume,
or about all snapshots present in the system.
Change-Id: Ibe355053848b234e0892c0b7f68bfed053f8867a
Signed-off-by: srivickynesh <sselvan@redhat.com>
Change-Id: I0f01234c66a42844bfa5b6c548cd17f4512d98e2
Signed-off-by: Manisha Saini <msaini@redhat.com>
In this test case, we check the UUIDs of bricks in the output of
gluster volume info --xml from a newly probed node.
Steps followed are:
1. Create a two node cluster
2. Create and start a 2x2 volume
3. From the existing cluster, peer probe a new node
4. Check gluster volume info --xml from the newly probed node
5. In the gluster volume info --xml output, the UUIDs of the bricks
should be non-zero
Change-Id: I73d07f1b91b5beab26cc87217defb8999fba474e
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
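A short sketch of the check; the peer and volume names are hypothetical:

```shell
gluster peer probe N3                             # probe the new node from the cluster
# then, on N3:
gluster volume info testvol --xml | grep -i uuid  # brick UUIDs must be non-zero
```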
- Added quota_libs.py with the quota_validate library
- Removed a redundant function in quota_ops
- Changed the naming in quota_ops to be consistent and
intuitive w.r.t. the CLI
Change-Id: I4faf448ea308c9e04b548d6174d900fcf56978a5
Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
description.
Change-Id: I895b1eb51e0bc425ba1aff559374ea9383894ccb
-> Create Volume
-> Start rebalance
-> Check task type in volume status
-> Check task status string in volume status
-> Check task type in volume status xml
-> Check task status string in volume status xml
-> Start Remove brick operation
-> Check task type in volume status
-> Check task status string in volume status
-> Check task type in volume status xml
-> Check task status string in volume status xml
Change-Id: I9de53008e19f1965dac21d4b80b9b271bbcf53a1
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
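The checks above could be sketched as follows; the volume, brick path, and the grep on the xml task element are hypothetical:

```shell
gluster volume rebalance testvol start
gluster volume status testvol                            # task type and status string
gluster volume status testvol --xml | grep -A3 '<task>'  # same fields in xml
gluster volume remove-brick testvol N1:/bricks/b2 start
gluster volume status testvol
gluster volume status testvol --xml | grep -A3 '<task>'
```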
options should not be applicable to glfsheal
Change-Id: I019b0299dd7f907446e85f6de0186fb61a3ce1f1
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Description: Test case which checks for gfid self heal
of a file on 1x3 replicated volume
Change-Id: I3bad7c16435bd99fa3f5b812c65970bebdbd18ac
Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
Description: This test case runs the split-brain resolution CLIs
on a file in gfid split-brain on a 1x2 volume.
1. Kill 1 brick
2. Create a file at the mount point
3. Bring back the killed brick
4. Kill the other brick
5. Create the same file at the mount point
6. Bring back the killed brick
7. Try heal from CLI and check if it gets completed
Change-Id: Iddd386741c3c672cda90db46facd7b04feaa2181
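The split-brain resolution CLIs in question, sketched with a hypothetical volume, brick and file:

```shell
gluster volume heal testvol split-brain bigger-file /file1
gluster volume heal testvol split-brain latest-mtime /file1
gluster volume heal testvol split-brain source-brick N1:/bricks/b1 /file1
gluster volume heal testvol               # step 7: trigger heal from CLI
gluster volume heal testvol info          # ... and check that it completed
```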
may be down
Change-Id: I0515680e2cbe582917f0034461b305a33b75ca94
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
This test case covers the quota functionality for
a volume with a single brick (1x1).
The quota help CLI command is also validated here.
Change-Id: I772f4646e2229c21f4547122410633715ef47668
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Verifying directory quota functionality with respect to the
limit-usage option. Set limits on various directories [breadth]
and check for the quota list of all the directories.
* Enable Quota
* Create 10 directories and set limit of 1GB on each directory
* Perform a quota list operation
* Create some random amount of data inside each directory
* Perform a quota list operation
Change-Id: I3ffc5b99018365eca21ecbdd55d6d9c176f36d6f
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
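The listed steps can be sketched as below; the volume name, mount path and dd-based data generation are hypothetical:

```shell
gluster volume quota testvol enable
for i in $(seq 1 10); do
    mkdir -p /mnt/testvol/dir$i
    gluster volume quota testvol limit-usage /dir$i 1GB
done
gluster volume quota testvol list
for i in $(seq 1 10); do                  # random amount of data per directory
    dd if=/dev/urandom of=/mnt/testvol/dir$i/f1 bs=1M count=$((RANDOM % 100 + 1))
done
gluster volume quota testvol list
```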
-> Create volume
-> Get the current op-version
-> Get the max supported op-version
-> Verify that the vol info file exists on all servers
-> Get the version number from the vol info file
-> If the current op-version is less than the max-op-version,
   set the current op-version to the max-op-version
-> After the vol set operation, verify that the version number
   in the vol info file increased by one
-> Verify that the current op-version and max-op-version are the same
Change-Id: If56210a406b15861b0a261e29d2e5f45e14301fd
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
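A hedged sketch of the op-version steps; cluster.max-op-version exists only in recent glusterfs releases, and the awk field positions and vol info path are assumptions:

```shell
current=$(gluster volume get all cluster.op-version | awk 'NR==3 {print $2}')
maxop=$(gluster volume get all cluster.max-op-version | awk 'NR==3 {print $2}')
if [ "$current" -lt "$maxop" ]; then
    gluster volume set all cluster.op-version "$maxop"
fi
grep '^version' /var/lib/glusterd/vols/testvol/info   # bumped by the vol set
```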
Change-Id: I47d0ac4afac44442bd877243c45581df83c6a2e7
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I734f85671f17e9a7e9d863aa3a0ef8f632182d48
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>