1) Create a directory with some files inside it
2) Verify the directory exists on all bricks as well as on the mount point
3) Compare the directory stat from the mount point with the brick paths
4) Change the ownership of the directory
5) Compare the directory stat from the mount point with the brick paths
6) Try to change the permissions of the directory as a different user
7) Compare the directory stat from the mount point with the brick paths
8) Try to change the permissions of the directory as a different user
9) Change the permissions of the directory
10) Compare the directory stat from the mount point with the brick paths
11) Try to change the permissions of the same directory as a different user
Change-Id: I284842be8c7562d4618d4e69e202c4d80945f1c5
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
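A minimal sketch of the stat comparison used in the steps above, assuming a FUSE mount at /mnt/glusterfs and a brick backend path such as /bricks/brick0 (both hypothetical); it checks that owner, group and permission bits of the directory agree between the mount point and the brick.

    import os

    def stats_match(mount_dir, brick_dir):
        """Compare uid, gid and mode of a directory as seen from the
        mount point and from a brick backend path."""
        m, b = os.stat(mount_dir), os.stat(brick_dir)
        return (m.st_uid, m.st_gid, m.st_mode) == (b.st_uid, b.st_gid, b.st_mode)

    # Hypothetical paths; the real test repeats this for every brick of the volume.
    assert stats_match("/mnt/glusterfs/dir1", "/bricks/brick0/dir1")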
Objective:
When holes are induced in the layout by a remove-brick force operation,
a subsequent fix-layout start should repair the layout so that it has no
holes or overlaps.
Change-Id: Ie4c47fff11957784044e717c644743263812a0e4
Signed-off-by: Prasad Desala <tdesala@redhat.com>
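A hedged sketch of the scenario, with placeholder volume and brick names: force-remove a brick to punch holes in the layout, then start fix-layout and poll the rebalance status until it reports completion.

    import subprocess, time

    VOL = "distvol"                          # hypothetical volume name
    BRICK = "server2:/bricks/brick1"         # hypothetical brick to remove

    def g(*args):
        # --mode=script suppresses the interactive confirmation prompts.
        return subprocess.run(["gluster", "--mode=script", *args],
                              capture_output=True, text=True)

    g("volume", "remove-brick", VOL, BRICK, "force")      # induces layout holes
    g("volume", "rebalance", VOL, "fix-layout", "start")

    # Poll until the status output reports completion (exact wording varies by version).
    while "completed" not in g("volume", "rebalance", VOL, "status").stdout:
        time.sleep(5)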
The test case verifies that a mkdir of a directory that hashes to a down
subvolume fails.
Change-Id: I8465f4869c9283d4339c50cdbd56b0256fa11bb9
Signed-off-by: Susant Palai <spalai@redhat.com>
This test case validates snapshot delete behaviour
when glusterd is down on one node. When the downed node is brought back
up, the number of snapshots should be consistent after the handshake.
Change-Id: If7ed6c3f384658faa4eb3de8577e38d3fe55f980
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
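A rough sketch of the final consistency check, assuming passwordless SSH to each peer (hostnames are placeholders); it counts the snapshots reported by `gluster snapshot list` on every node and expects all counts to match once the downed node has rejoined.

    import subprocess

    nodes = ["server1", "server2", "server3"]    # hypothetical peer hostnames

    def snap_count(node):
        # `gluster snapshot list` prints one snapshot name per line.
        out = subprocess.run(["ssh", node, "gluster", "snapshot", "list"],
                             capture_output=True, text=True, check=True).stdout
        return len([l for l in out.splitlines()
                    if l.strip() and "No snapshots" not in l])

    counts = {n: snap_count(n) for n in nodes}
    assert len(set(counts.values())) == 1, "snapshot count differs: %s" % counts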
-> Create a volume
-> Remove any one brick directory
-> Start the volume
-> Check the gluster volume status
Change-Id: I83c25c59607d065f8e411e7befa8f934009a9d64
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Change-Id: I525f50a42e29270d9ac445d62e12c7e7e25a7ae3
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Test cases in this module test the snapshot scheduler
for deleting existing scheduled jobs.
Change-Id: Ibd1a00fb336f279d8c13a89ae513a914977f593d
Signed-off-by: srivickynesh <sselvan@redhat.com>
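A sketch of driving the scheduler through the snap_scheduler.py CLI; the job name, schedule and volume are placeholders, it assumes the scheduler has already been initialised and enabled on the cluster, and the exact output strings may differ between versions.

    import subprocess

    def sched(*args):
        return subprocess.run(["snap_scheduler.py", *args],
                              capture_output=True, text=True)

    # Add a scheduled job, confirm it is listed, then delete it again.
    sched("add", "job1", "*/30 * * * *", "testvol")
    assert "job1" in sched("list").stdout
    sched("delete", "job1")
    assert "job1" not in sched("list").stdout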
Test quota with respect to the quota daemon and the
features.quota-deem-statfs volume option when
quota is enabled/disabled on the volume.
Change-Id: I91a4ced6a5d31fe93c6bb9b0aa842cd5baf38ee0
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
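A hedged sketch of the toggles involved, with a placeholder volume name: enabling quota starts the quota daemon, and features.quota-deem-statfs controls whether df on the mount reflects the quota limit rather than the backend size.

    import subprocess

    VOL = "testvol"   # hypothetical volume name

    subprocess.run(["gluster", "volume", "quota", VOL, "enable"], check=True)
    subprocess.run(["gluster", "volume", "set", VOL,
                    "features.quota-deem-statfs", "on"], check=True)

    # With quota enabled, `gluster volume status` should list the quota daemon
    # (the exact label may vary by version).
    status = subprocess.run(["gluster", "volume", "status", VOL],
                            capture_output=True, text=True).stdout
    assert "Quota Daemon" in status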
Rebalance should proceed even if glusterd is down on a node.
Change-Id: I499e8a4e6b42bd7a8153c1d82c8b329a1933e748
Signed-off-by: Prasad Desala <tdesala@redhat.com>
Test cases in this module test the
creation of a clone from a snapshot of a volume,
then delete the snapshot and the original volume
and validate that the cloned volume is not affected.
Change-Id: Ic61defe96fed6adeab21b3715878ac8093156645
Signed-off-by: srivickynesh <sselvan@redhat.com>
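A minimal sketch of the clone flow with placeholder names; the snapshot has to be activated before it can be cloned, and the clone should remain usable after the snapshot and the original volume are gone.

    import subprocess

    def g(*args):
        # --mode=script suppresses the interactive confirmation prompts.
        subprocess.run(["gluster", "--mode=script", *args], check=True)

    g("snapshot", "create", "snap1", "testvol", "no-timestamp")
    g("snapshot", "activate", "snap1")
    g("snapshot", "clone", "clone_vol", "snap1")

    # Deleting the snapshot and the original volume should leave the clone usable.
    g("snapshot", "delete", "snap1")
    g("volume", "stop", "testvol", "force")
    g("volume", "delete", "testvol")
    g("volume", "start", "clone_vol")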
This testcase exercises the rebalance commands below:
1) Rebalance with fix-layout
2) Rebalance start --> status --> stop
3) Rebalance with force option
Changes:
- Remove pytest.mark from test cases
Change-Id: I467de068dabac90018f6241472b2d91d9d9e85a8
Signed-off-by: Prasad Desala <tdesala@redhat.com>
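The three command flavours exercised above, sketched against a placeholder volume name:

    import subprocess

    def rebalance(vol, *args):
        return subprocess.run(["gluster", "volume", "rebalance", vol, *args],
                              capture_output=True, text=True)

    rebalance("testvol", "fix-layout", "start")   # 1) fix the layout only, no data movement
    rebalance("testvol", "start")                 # 2) full rebalance ...
    print(rebalance("testvol", "status").stdout)  #    ... check its progress ...
    rebalance("testvol", "stop")                  #    ... and stop it
    rebalance("testvol", "start", "force")        # 3) force variant, migrates files even
                                                  #    when the target is lower on space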
1. Create a trusted storage pool of 4 nodes
2. Create a distributed-replicated volume with 4 bricks
3. Start the volume
4. Fuse mount the gluster volume on a node outside the trusted storage pool
5. Create some data files
6. Start a remove-brick operation for one replica pair
7. Restart glusterd on all nodes
8. Try to commit the remove-brick operation while rebalance
   is in progress; it should fail
Change-Id: I64901078865ef282b86c9b3ff54d065f976b9e84
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
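A hedged sketch of the failing commit, with placeholder volume and brick names; the commit is expected to be rejected while the migration triggered by remove-brick start is still running.

    import subprocess

    VOL = "distrep-vol"                                      # hypothetical volume
    BRICKS = ["server3:/bricks/b2", "server4:/bricks/b3"]    # one replica pair

    def g(*args):
        return subprocess.run(["gluster", "--mode=script", *args])

    g("volume", "remove-brick", VOL, *BRICKS, "start")

    # ... glusterd is restarted on all nodes here ...

    ret = g("volume", "remove-brick", VOL, *BRICKS, "commit")
    assert ret.returncode != 0, "commit should fail while rebalance is in progress"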
Test cases in this module test snapshot deletion
by snap name, by volume name,
and with the delete-all-snapshots command.
Change-Id: I1e4e361e58b35744e08e63c48b43d9e8caf2e953
Signed-off-by: srivickynesh <sselvan@redhat.com>
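The three delete flavours referenced above, as a sketch with placeholder names (--mode=script skips the interactive confirmation):

    import subprocess

    def g(*args):
        return subprocess.run(["gluster", "--mode=script", *args], check=True)

    g("snapshot", "delete", "snap1")              # delete a single snapshot by name
    g("snapshot", "delete", "volume", "testvol")  # delete every snapshot of one volume
    g("snapshot", "delete", "all")                # delete all snapshots in the cluster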
1. Create a trusted storage pool of 2 nodes
2. Create a distributed volume with 2 bricks
3. Start the volume
4. Stop glusterd on node 2
5. Modify any volume option on node 1
6. Start glusterd on node 2
7. Check the volume status; the brick should get a port
Change-Id: I688f954f5f53678290e84df955f5529ededaf78f
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
-> Create a volume
-> Fuse mount the volume
-> Perform I/O on the fuse mount
-> Add bricks to the volume
-> Perform rebalance on the volume
-> While rebalance is in progress, restart glusterd on all the nodes
   in the cluster
Change-Id: I522d7aa55adedc2363bf315f96e51469b6565967
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
While a remove-brick operation is in progress on a volume, glusterd
should not allow a rebalance on the same volume.
Change-Id: Ic94754bc12c86a32f2f5fd064129bf6bc038ed6a
Signed-off-by: Prasad Desala <tdesala@redhat.com>
Change-Id: I80c73f4bb7de90faf482f6c2559d364692067768
Signed-off-by: Manisha Saini <msaini@redhat.com>
Test cases in this module test
snapshot listing before and after
a glusterd restart.
Change-Id: I7cabe284c52974256e1a48807a5fc1787789583a
Signed-off-by: srivickynesh <sselvan@redhat.com>
1. Create a trusted storage pool of 3 nodes
2. Create a distributed volume with 3 bricks
3. Start the volume
4. Fuse mount the gluster volume on a node outside the trusted storage pool
5. Remove a brick from the volume
6. Check the remove-brick status
7. Stop the remove-brick process
8. Perform fix-layout on the volume
9. Get the rebalance fix-layout status
10. Create a directory from the mount point
11. Check the trusted.glusterfs.dht extended attribute for the newly
    created directory on the removed brick
Change-Id: I055438056a9b5df26599a503dd413225eb6f87f5
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
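A sketch of step 11, reading the layout xattr directly from a brick backend path (paths are placeholders); a directory covered by the fix-layout should carry a trusted.glusterfs.dht value.

    import subprocess

    def dht_xattr(brick_dir):
        """Return the raw trusted.glusterfs.dht layout xattr of a directory, or None."""
        out = subprocess.run(["getfattr", "-n", "trusted.glusterfs.dht", "-e", "hex",
                              brick_dir], capture_output=True, text=True)
        return out.stdout if out.returncode == 0 else None

    # Hypothetical backend path of the newly created directory on the removed brick.
    assert dht_xattr("/bricks/brick2/newdir") is not None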
Test cases in this module test
USS functionality while IO is going on.
Change-Id: Ie7bf440b02980a0606bf4c4061f5a1628179c128
Signed-off-by: srivickynesh <sselvan@redhat.com>
When IO is done with client-side heal disabled,
it should not hang.
RHG3-11098
Change-Id: I2f180dd1ba2f45ae0f302a730a02b90ae77b99ad
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
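A hedged sketch of what "client side heal disabled" typically means in these tests: the three client-side self-heal options are switched off before the IO is driven (the volume name is a placeholder).

    import subprocess

    VOL = "ec-vol"   # hypothetical disperse volume
    for opt in ("cluster.data-self-heal",
                "cluster.metadata-self-heal",
                "cluster.entry-self-heal"):
        subprocess.run(["gluster", "volume", "set", VOL, opt, "off"], check=True)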
test case:
- Verify that the file is created on the hashed subvol alone
- Verify that the trusted.glusterfs.pathinfo reflects the file location
- Verify that the file creation fails if the hashed subvol is down
Change-Id: I951c20f03772a0c5739244ec354f9bbfd6d0ea65
Signed-off-by: Susant Palai <spalai@redhat.com>
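A minimal sketch of the pathinfo check, assuming a FUSE mount at /mnt/glusterfs (hypothetical); the virtual xattr trusted.glusterfs.pathinfo reports the brick path(s) that actually hold the file, which should point at the hashed subvolume alone.

    import subprocess

    def pathinfo(mount_path):
        out = subprocess.run(["getfattr", "-n", "trusted.glusterfs.pathinfo",
                              mount_path], capture_output=True, text=True, check=True)
        return out.stdout

    # Expect exactly one backend location for a newly created file.
    print(pathinfo("/mnt/glusterfs/file1"))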
Change-Id: I9caa937f74fefad3b9cf13fc0abd0e6a4b380b96
Change-Id: I0360e6590425aea48d7acf2ddb10d9fbfe9fdeef
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
-> Create a volume
-> Mount the volume
-> Write files on the mount point
-> Delete files from the mount point
-> Check for any errors logged in any of the brick logs
Change-Id: Ic744ad04daa0bdb7adcc672360c9ed03f56004ab
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
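A rough sketch of the final check, assuming the default brick log directory; it scans every brick log for lines logged at error level (" E " in the gluster log format).

    import glob

    def brick_log_errors(logdir="/var/log/glusterfs/bricks"):
        errors = []
        for logfile in glob.glob(logdir + "/*.log"):
            with open(logfile) as f:
                errors += [line for line in f if " E " in line]
        return errors

    assert not brick_log_errors(), "unexpected error messages in brick logs"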
The below operations are performed on various types of volumes and mounts.
* Enable quota
* Create 10 directories one inside the other and set a limit of
  1GB on each directory
* Perform a quota list operation
* Create some random amount of data inside each directory
* Perform a quota list operation
* Remove the quota limit and delete the data
Change-Id: I2a706eba5c23909e2e6996f485b3f4ead9d5dbca
Signed-off-by: harigowtham <hgowtham@redhat.com>
Change-Id: I02d3b6f77276e11417bab6236e74d1be0e6a3b32
This test case checks whether a directory with a null gfid gets
gfids assigned on all the subvols of a dist-rep volume when
a lookup on that directory comes from the mount point.
Change-Id: Ie68cd0e8b293e9380532e2ccda3d53659854de9b
Signed-off-by: karthik-us <ksubrahm@redhat.com>
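A hedged sketch of the verification, with placeholder mount and brick paths: trigger a lookup from the mount and then confirm that each brick copy of the directory now carries a trusted.gfid xattr and that all copies agree.

    import os, subprocess

    def gfid(brick_dir):
        out = subprocess.run(["getfattr", "-n", "trusted.gfid", "-e", "hex", brick_dir],
                             capture_output=True, text=True)
        for line in out.stdout.splitlines():
            if line.startswith("trusted.gfid="):
                return line.split("=", 1)[1]
        return None

    os.stat("/mnt/glusterfs/dir1")   # lookup from the mount point assigns the gfid
    gfids = [gfid(b + "/dir1") for b in ("/bricks/b0", "/bricks/b1", "/bricks/b2")]
    assert None not in gfids and len(set(gfids)) == 1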
This test case validates snapshot scheduler behaviour
when the scheduler is enabled/disabled.
Change-Id: Ia6f01a9853aaceb05155bfc92cccba686d320e43
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
Change-Id: I14807c51fb534e5b729da6de69eb062601e80b42
Signed-off-by: Manisha Saini <msaini@redhat.com>
Change-Id: I29eefb9ba5bbe46ba79267b85fb8814a14d10b00
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
1. Create a trusted storage pool of 2 nodes
2. Create a distributed volume with 2 bricks
3. Start the volume
4. Mount the volume
5. Add some data files on the mount
6. Start rebalance with force
7. Stop glusterd on the 2nd node
8. Check the rebalance status; it should not hang
9. Issue a volume-related command
Change-Id: Ie3e809e5fe24590eec070607ee99417d0bea0aa0
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
When IO is done with server-side heal disabled,
it should not hang.
The ec_check_heal_comp function will fail because of
bug 1593224 (client-side heal is not removing the dirty
flag for some of the files).
While this bug has been raised and is being investigated by
dev, this patch is doing its job of testing the
target functionality.
RHG3-11097
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
Change-Id: I841285c9b1a747f5800ec8cdd29a099e5fcc08c5
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
This test case validates USS behaviour when USS is
enabled on the volume while a brick is down.
Change-Id: I9be021135c1f038a0c6949ce2484b47cd8634c1e
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
Change-Id: Ica3d1175ee5d2c6a45e7b7d6513885ee2b84d960
Signed-off-by: Bala Konda Reddy Mekala <bmekala@redhat.com>
Steps followed are:
1. Create and start a volume
2. Set cluster.server-quorum-type to server
3. Set cluster.server-quorum-ratio to 95%
4. Bring down glusterd on half of the nodes
5. Confirm that quorum is not met by checking whether the bricks are down.
6. Perform an add-brick operation, which should fail.
7. Check whether the added brick is part of the volume.
Change-Id: I93e3676273bbdddad4d4920c46640e60c7875964
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
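A hedged sketch of steps 2, 3 and 6, with placeholder volume, host and brick names; glusterd would be stopped on half of the nodes between setting the options and attempting the add-brick.

    import subprocess

    VOL = "testvol"   # hypothetical volume name

    subprocess.run(["gluster", "volume", "set", VOL,
                    "cluster.server-quorum-type", "server"], check=True)
    # The quorum ratio is a cluster-wide option, hence set on "all".
    subprocess.run(["gluster", "volume", "set", "all",
                    "cluster.server-quorum-ratio", "95%"], check=True)

    # ... stop glusterd on half of the nodes so that server quorum is lost ...

    ret = subprocess.run(["gluster", "volume", "add-brick", VOL,
                          "server5:/bricks/newbrick"])
    assert ret.returncode != 0, "add-brick must fail when server quorum is not met"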
- Enable quota
- Set a quota limit of 1 GB on /
- Create 10 directories inside the volume
- Set a quota limit of 100 MB on the directories
- Fill data inside the directories till the quota limit is reached
- Validate the size fields using quota list
Change-Id: I917da8cdf0d78afd6eeee22b6cf6a4d580ac0c9f
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
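A sketch of the limit-setting loop with placeholder volume and directory names; the quota list output is then checked for every path.

    import subprocess

    VOL = "testvol"   # hypothetical volume name

    subprocess.run(["gluster", "volume", "quota", VOL, "enable"], check=True)
    subprocess.run(["gluster", "volume", "quota", VOL, "limit-usage", "/", "1GB"],
                   check=True)
    for i in range(10):
        subprocess.run(["gluster", "volume", "quota", VOL,
                        "limit-usage", "/dir%d" % i, "100MB"], check=True)

    listing = subprocess.run(["gluster", "volume", "quota", VOL, "list"],
                             capture_output=True, text=True).stdout
    assert all("/dir%d" % i in listing for i in range(10))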
Change-Id: Iaaa78c071bd7ee3ad3ed222957e71aec61f80045
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
subdir
Change-Id: I8c71470a67fef17d54d5fdfbcf0d36eb156c07dd
Signed-off-by: ubansal <ubansal@redhat.com>
Change-Id: I84c4375c38ef7322e65f113db6c6229620c57214
Signed-off-by: ubansal <ubansal@redhat.com>
-> Set global options and other volume specific options on the volume
-> gluster volume set VOL nfs.rpc-auth-allow 1.1.1.1
-> gluster volume set VOL nfs.addr-namelookup on
-> gluster volume set VOL cluster.server-quorum-type server
-> gluster volume set VOL network.ping-timeout 20
-> gluster volume set VOL nfs.port 2049
-> gluster volume set VOL performance.nfs.write-behind on
-> Peer probe for a new node
Change-Id: Ifc06159b436e74ae4865ffcbe877b84307d517fd
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
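A sketch that applies the option set listed above in a loop and then probes a new peer; the volume name and the new host are placeholders, while the option values are taken from the commit text.

    import subprocess

    VOL = "testvol"   # hypothetical volume name
    options = {
        "nfs.rpc-auth-allow": "1.1.1.1",
        "nfs.addr-namelookup": "on",
        "cluster.server-quorum-type": "server",
        "network.ping-timeout": "20",
        "nfs.port": "2049",
        "performance.nfs.write-behind": "on",
    }
    for opt, val in options.items():
        subprocess.run(["gluster", "volume", "set", VOL, opt, val], check=True)

    subprocess.run(["gluster", "peer", "probe", "new-node.example.com"], check=True)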
Change-Id: I5dd80e8b1ec8a0e3ab7f565c478be368c2e7c73d
Signed-off-by: ubansal <ubansal@redhat.com>
Test quota limit-usage by setting limits of various values,
big, small and decimal, e.g. 1GB, 10GB, 2.5GB, etc.,
and validate the limits by creating more data than the
hard limits allow
(after reaching the hard limit, data creation should stop).
Addressed review comments.
Change-Id: If2801cf13ea22c253b22ecb41fc07f2f1705a6d7
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
-> Create a volume
-> Mount the volume
-> Set 'read-only on' on the volume
-> Perform some I/O on the mount point
-> Set 'read-only off' on the volume
-> Perform some I/O on the mount point
Change-Id: Iab980b1fd51edd764ef38b329275d72f875bf3c0
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
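A hedged sketch of the toggle, assuming a FUSE mount at /mnt/glusterfs and a placeholder volume name; the expectation of the test is that writes fail with EROFS while the option is on and succeed again once it is off.

    import errno, subprocess

    VOL, F = "testvol", "/mnt/glusterfs/testfile"   # hypothetical volume and path

    subprocess.run(["gluster", "volume", "set", VOL, "read-only", "on"], check=True)
    try:
        with open(F, "w") as f:
            f.write("data")
        raise AssertionError("write unexpectedly succeeded on a read-only volume")
    except OSError as e:
        assert e.errno == errno.EROFS

    subprocess.run(["gluster", "volume", "set", VOL, "read-only", "off"], check=True)
    with open(F, "w") as f:   # now the write should go through
        f.write("data")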
Rebalance should fail on a pure distribute volume when glusterd is down
on one of the nodes.
Change-Id: I5a871a7783b434ef61f0f1cf4b262db9f5148af6
Signed-off-by: Prasad Desala <tdesala@redhat.com>
-> Create a volume
-> Set the quorum type
-> Set the quorum ratio to 95%
-> Start the volume
-> Stop glusterd on one node
-> Now quorum is not met
-> Check whether all bricks went offline
-> Perform a replace-brick operation
-> Start glusterd on the node where it was stopped
-> Check whether all bricks are back online
-> Verify in volume info that the old brick was not replaced with the new brick
Change-Id: Iab84df9449feeaba66ff0df2d0acbddb6b4e7591
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
While a remove-brick operation is in progress on a volume, glusterd
should not allow an add-brick operation on the same volume.
Change-Id: Iddcbbdb1a5a444ea88995f176c0a18df932dea41
Signed-off-by: Prasad Desala <tdesala@redhat.com>
If a rebalance is in progress on a volume, glusterd should fail a
remove-brick operation on the same volume.
Change-Id: I2f15023870f342c98186b1860b960cb3c04c0572
Signed-off-by: Prasad Desala <tdesala@redhat.com>
the cluster
In this test case, we set some volume options while one of the
nodes in the cluster is down and, after the node comes back up, check whether
the volume info is synced. We also try to peer probe a new
node while bringing down glusterd on some node in the cluster.
After the node is up, we check whether the peer status has the correct information.
Steps followed are:
1. Create a cluster
2. Create a 2x3 distributed-replicated volume
3. Start the volume
4. From N1 issue 'gluster volume set <vol-name> stat-prefetch on'
5. At the same time when Step4 is happening, bring down glusterd of N2
6. Start glusterd on N2
7. Verify volume info is synced
8. From N1, issue 'gluster peer probe <new-host>'
9. At the same time when Step8 is happening, bring down glusterd of N2
10. Start glusterd on N2
11. Check that the peer status has correct information across the cluster.
Change-Id: Ib95268a3fe11cfbc5c76aa090658133ecc8a0517
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
-> Detach a node from the peer
-> Check for any error messages related to peer detach
   in the glusterd log file
-> No errors should be present in the glusterd log file
Change-Id: I481df5b15528fb6fd77cd1372110d7d23dd5cdef
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>