| Commit message | Author | Age | Files | Lines |
This test case contains tests that verify the snapshot max-hard-limit
and soft-limit settings and the deletion of snapshots for a volume.
Change-Id: Iff00475abdaf01b2beac72545774f477e99a13ce
Signed-off-by: srivickynesh <sselvan@redhat.com>
On a quota-enabled volume, validate that the alert is printed only
after the soft limit is crossed.
Change-Id: Ia94ae9dd760fed644841df11fe046c184cdd3398
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
This test verifies that file/directory creation or
deletion does not leave behind any stale space.
Change-Id: I02b16625d64426abef390c5b00473ad8a7b7d84d
Signed-off-by: Akarsha <akrai@redhat.com>
Run I/O over a CIFS mount and simultaneously toggle the
stat-prefetch option on and off via 'gluster volume set'.
The I/O should not fail.
Change-Id: I9327e599eb3536f3c49a90d468391055ea4c3bf9
Signed-off-by: vivek das <vdas@redhat.com>
This test case attempts split-brain resolution on a file
that is not in split-brain. The operation should fail.
Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
Change-Id: I01b9a41530498e96f6092283372798e61a9ac2b2
Change-Id: I7534850d317993ee0b4c81ec06c1bdaeaf0d7535
Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
Deletion could have failed without anyone noticing.
Added 'xfail', indicating whether we expect the deletion to fail
(and changed tests accordingly).
Along the way, ensure stdout and stderr are logged in case of such failures.
Change-Id: Ibdf7a43cadb0393707a6c68c19a664453a971eb1
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
No functional change, just make the tests a bit more readable.
It could be moved to a decorator later on, wrapping tests.
Change-Id: I484bb8b46907ee8f33dfcf4c960737a21819cd6a
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
Change-Id: I9a0cb923c7e3ec0146c2b3b9bf0dcafe6ab892e8
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
The following operations are performed on a distributed-replicated volume:
* Enable quota
* Set a limit of 1 GB on /
* Mount the volume
* Create a random amount of data inside each directory until the quota
  is reached
* Perform a quota list operation
* Perform add-brick
* Try to add files and check whether the quota is honored.
Change-Id: Idd6a647c65a9e2dc7322bc729b48366cf6a46c85
Signed-off-by: harigowtham <hgowtham@redhat.com>
The following steps are followed:
* Enable quota on the volume
* Set the quota limit on a non-existent directory
* Create that directory and set the limit
* List the quota on the volume
* Delete the directory
* Recreate the directory
* List the quota on the volume
* Check the volume status
Change-Id: Ieb37b24715f4c4a9689db582489a7d9a965d54ad
Signed-off-by: harigowtham <hgowtham@redhat.com>
Test cases in this module test the
snapshot activation and deactivation status
when glusterd is down.
Change-Id: I9ae97ceba40ce2511b1b731d5e02990f78f424e9
Signed-off-by: srivickynesh <sselvan@redhat.com>
Signed-off-by: Vignesh <vignesh@localhost.localdomain>
Test quota with respect to the limit-usage and soft-limit
quota volume options. Set a unique soft limit on each directory,
create data such that the soft limit is exceeded,
and validate the result with the quota list command.
Change-Id: I80fbb93072a9d8c2fa861c931042ae2fba1fefe1
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Quota limit-usage should fail when trying to set the limit
on a symlink.
- Enable quota
- Set a limit on the volume
- Create a directory from the mount point
- Create a symlink of the directory from the mount point
- Try to set a quota limit on the symlink
Change-Id: Ic9979a5c67af74d08527acc0f9a30c1d97291906
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
This test creates gfid split-brain of files and uses the source-brick
option in the CLI to resolve them.
Polarion test: RHG3-4402
Change-Id: I4fb3f16bfcdf77afe92c3a6f98f259147fef30c2
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
This test case validates snapshot restore on an online volume.
Restoring an online volume should fail.
Change-Id: If542a6052ee6e9eec7d5e1268f480f047916e66c
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
Change-Id: I634d11cb582521b03f0bb481172e2f4f68d1c2ce
Signed-off-by: karthik-us <ksubrahm@redhat.com>
If a limit is set and a file larger than the limit is created,
the file creation stops once the limit is reached.
- Enable quota
- Create a directory from the mount point
- Set a limit of 10 MB on the directory
- Set the quota soft-timeout and hard-timeout to 0 seconds
- Create a file larger than the quota limit,
  e.g. a 20 MB file
- Perform a quota list operation to check that all the fields are
  appropriate, such as hard_limit, available_space, sl_exceeded,
  hl_exceeded, etc.
Change-Id: I44d7e23e60589cbe022735e9b4b5f3bfd1b7458e
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Polarion test case #RHG3-4094
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Change-Id: I1f7100ddb6697cfc8749f8cd2c29e14e9bfdb5ce
Polarion test case #RHG3-4094
Change-Id: Id7492b1e0a7a000ece788c7a0cc4ed9dd8743700
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
This test case checks whether the peer status shows the FQDN.
Steps followed:
1. Peer probe to a new node, i.e., N1 to N2, using the hostname
2. Check the peer status on N2; it should have the FQDN of N1
3. Check the peer status on N1; it should have the FQDN of N2
4. Create a distributed volume with a single brick on each node
5. Start the volume
6. Peer probe to a new node N3 using its IP
7. Add a brick from node N3 to the volume; add-brick should succeed
8. Get the volume info; it should have the correct information
Change-Id: I7f2bb8cecf28e61273ca83d7e3ad502ced979c5c
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
If directories are present on only one brick without a
gfid (created from the backend), heal them and assign gfids when a
named lookup comes on those directories.
Change-Id: I32c27f0b04c8eb36b25899ca9fbe7aef141f13b9
Signed-off-by: karthik-us <ksubrahm@redhat.com>
The test case tests directory healing.
If a mkdir succeeds while a subvol is down, the
directory should heal and have a zeroed layout once the
subvol is up.
Change-Id: Ia2f5747a1008112a5dcebda8a953ee3d2de9f75f
Signed-off-by: Susant Palai <spalai@redhat.com>
Test cases in this module test USS
where the .snaps folder is already created,
and check the behaviour before
enabling uss and after disabling uss.
Change-Id: I9c6dcca1198b7fed37103cc21f3ecba72bfa20a5
Signed-off-by: srivickynesh <sselvan@redhat.com>
-> Create a volume
-> Create a snap for that volume
-> Enable uss
-> Check whether snapd is running
-> Probe a new node while snapd is running
Change-Id: Ic28036436dc501ed894f3f99060d0297dd9d3c8a
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Test cases in this module test
snapshot creation and listing with invalid names
and parameters.
Change-Id: Iab3dd71a83176d850780c0834f95d4e8ac1e8997
Signed-off-by: srivickynesh <sselvan@redhat.com>
Objective:
When holes are induced in the layout by remove-brick force, a lookup
sent on the directory from the client should fix the directory layout
without any holes or overlaps.
Change-Id: If4af10929e8e7d4da93cea80c515e37acf53d34e
Signed-off-by: Prasad Desala <tdesala@redhat.com>
-> Create a volume
-> Enable server quorum
-> Set the server quorum ratio to 95%
-> Stop glusterd on any one of the nodes
-> Perform a remove-brick operation
-> Start glusterd
-> Check gluster vol info; the bricks should be the same before and after
performing the remove-brick operation.
Change-Id: I4b7e97ecc6cf6854ec8ff36d296824e549bf9b97
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
test:
Case 1:
- bring down a subvol
- create a directory so that it does not hash to the down subvol
- make sure stat is successful on the dir
Case 2:
- create a directory
- bring down the hashed subvol
- make sure stat is successful on the dir
Case 3:
- create a dir
- bring down the non-hashed subvol
- make sure stat is successful on the dir
Change-Id: I9cbd2e7f04c885eaa09414d6b49632cf77dd72ec
Signed-off-by: Susant Palai <spalai@redhat.com>
Create a clone and check the self-heal operation on
the cloned volume.
Change-Id: Icf61f996fcd503a6c0d0bf936900858b715a4742
Signed-off-by: srivickynesh <sselvan@redhat.com>
1) Create a dir with some files inside it
2) Verify the dir exists on all bricks as well as on the mount point
3) Compare the dir stat with the mount point and brick location path
4) Change the ownership of the directory
5) Compare the dir stats with the mount point and brick path
6) Try to change the permission with a different user for the directory
7) Compare the dir stat with the mount point and brick path
8) Try to change the permission with a different user for the directory
9) Change the permission of the directory
10) Compare the dir stat with the mount point and brick path
11) Try to change the permission with a different user for the same directory
Change-Id: I284842be8c7562d4618d4e69e202c4d80945f1c5
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Objective:
When holes are induced in the layout by remove-brick force, a
fix-layout start should fix the layout without any holes or overlaps.
Change-Id: Ie4c47fff11957784044e717c644743263812a0e4
Signed-off-by: Prasad Desala <tdesala@redhat.com>
The test case verifies that a mkdir of a directory hashed to a down
subvolume fails.
Change-Id: I8465f4869c9283d4339c50cdbd56b0256fa11bb9
Signed-off-by: Susant Palai <spalai@redhat.com>
This test case validates snapshot delete behaviour
when glusterd is down on one node. When the brought-down node is
brought back up, the number of snaps should be consistent after the handshake.
Change-Id: If7ed6c3f384658faa4eb3de8577e38d3fe55f980
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
-> Create a volume
-> Remove any one brick directory
-> Start the volume
-> Check the gluster volume status
Change-Id: I83c25c59607d065f8e411e7befa8f934009a9d64
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Change-Id: I525f50a42e29270d9ac445d62e12c7e7e25a7ae3
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Test cases in this module test the snapshot scheduler
for deleting existing scheduled jobs.
Change-Id: Ibd1a00fb336f279d8c13a89ae513a914977f593d
Signed-off-by: srivickynesh <sselvan@redhat.com>
Test quota with respect to the quota daemon and
features.quota-deem-statfs quota volume option when
quota is enabled/disabled on the volume.
Change-Id: I91a4ced6a5d31fe93c6bb9b0aa842cd5baf38ee0
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Rebalance should proceed even if glusterd is down on a node.
Change-Id: I499e8a4e6b42bd7a8153c1d82c8b329a1933e748
Signed-off-by: Prasad Desala <tdesala@redhat.com>
Test cases in this module test the
creation of a clone from a snapshot of a volume,
then delete the snapshot and the original volume,
and validate that the cloned volume is not affected.
Change-Id: Ic61defe96fed6adeab21b3715878ac8093156645
Signed-off-by: srivickynesh <sselvan@redhat.com>
This test case exercises the rebalance commands below:
1) Rebalance with fix-layout
2) Rebalance start --> status --> stop
3) Rebalance with the force option
Changes:
- Remove pytest.mark from test cases
Change-Id: I467de068dabac90018f6241472b2d91d9d9e85a8
Signed-off-by: Prasad Desala <tdesala@redhat.com>
1. Trusted storage pool of 4 nodes
2. Create a distributed-replicated volume with 4 bricks
3. Start the volume
4. Fuse mount the gluster volume on one of the trusted nodes
5. Create some data files
6. Start a remove-brick operation for one replica pair
7. Restart glusterd on all nodes
8. Try to commit the remove-brick operation while rebalance
is in progress; it should fail
Change-Id: I64901078865ef282b86c9b3ff54d065f976b9e84
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Test cases in this module test
snapshot deletion by snapname, by volumename,
and with the delete-all snapshots command.
Change-Id: I1e4e361e58b35744e08e63c48b43d9e8caf2e953
Signed-off-by: srivickynesh <sselvan@redhat.com>
1. Trusted storage pool of 2 nodes
2. Create a distributed volume with 2 bricks
3. Start the volume
4. Stop glusterd on node 2
5. Modify any of the volume options on node 1
6. Start glusterd on node 2
7. Check the volume status; the brick should get a port
Change-Id: I688f954f5f53678290e84df955f5529ededaf78f
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
-> Create a volume
-> Fuse mount the volume
-> Perform I/O on the fuse mount
-> Add bricks to the volume
-> Perform rebalance on the volume
-> While rebalance is in progress,
restart glusterd on all the nodes in the cluster
Change-Id: I522d7aa55adedc2363bf315f96e51469b6565967
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
While remove-brick operation is in-progress on a volume, glusterd
should not allow rebalance on the same volume.
Change-Id: Ic94754bc12c86a32f2f5fd064129bf6bc038ed6a
Signed-off-by: Prasad Desala <tdesala@redhat.com>
Change-Id: I80c73f4bb7de90faf482f6c2559d364692067768
Signed-off-by: Manisha Saini <msaini@redhat.com>
Test cases in this module test the
snapshot listing before and after
a glusterd restart.
Change-Id: I7cabe284c52974256e1a48807a5fc1787789583a
Signed-off-by: srivickynesh <sselvan@redhat.com>
1. Trusted storage pool of 3 nodes
2. Create a distributed volume with 3 bricks
3. Start the volume
4. Fuse mount the gluster volume on one of the trusted nodes
5. Remove a brick from the volume
6. Check the remove-brick status
7. Stop the remove-brick process
8. Perform fix-layout on the volume
9. Get the rebalance fix-layout status
10. Create a directory from the mount point
11. Check the trusted.glusterfs.dht extended attribute for the newly
created directory on the removed brick
Change-Id: I055438056a9b5df26599a503dd413225eb6f87f5
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Test cases in this module test the
USS functionality while I/O is in progress.
Signed-off-by: srivickynesh <sselvan@redhat.com>
|