Commit log for path: root/tests (most recent first)
* snapshot: Delete snaps and list snaps after restarting glusterd (srivickynesh, 2018-06-26, 1 file, +129/-0)
  Test cases in this module verify snapshot listing before and after a glusterd restart.
  Change-Id: I7cabe284c52974256e1a48807a5fc1787789583a
  Signed-off-by: srivickynesh <sselvan@redhat.com>
* Test rebalance spurious failure (Mohit Agrawal, 2018-06-26, 1 file, +174/-0)
  1. Trusted storage pool of 3 nodes
  2. Create a distributed volume with 3 bricks
  3. Start the volume
  4. Fuse mount the gluster volume on a node outside the trusted storage pool
  5. Remove a brick from the volume
  6. Check remove-brick status
  7. Stop the remove-brick process
  8. Perform fix-layout on the volume
  9. Get the rebalance fix-layout status
  10. Create a directory from the mount point
  11. Check the trusted.glusterfs.dht extended attribute of the newly created directory on the removed brick
  Change-Id: I055438056a9b5df26599a503dd413225eb6f87f5
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
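  A minimal sketch of the gluster CLI sequence these steps describe; volume, brick, and mount paths are hypothetical:
      gluster volume create distvol server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3
      gluster volume start distvol
      mount -t glusterfs server1:/distvol /mnt/distvol            # from a client outside the pool
      gluster volume remove-brick distvol server3:/bricks/b3 start
      gluster volume remove-brick distvol server3:/bricks/b3 status
      gluster volume remove-brick distvol server3:/bricks/b3 stop
      gluster volume rebalance distvol fix-layout start
      gluster volume rebalance distvol status
      mkdir /mnt/distvol/newdir
      getfattr -n trusted.glusterfs.dht -e hex /bricks/b3/newdir  # inspect the layout xattr on the removed brick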
* Snapshot: Enable USS on a volume when snapshots are taken while IO is going on (srivickynesh, 2018-06-26, 1 file, +178/-0)
  Test cases in this module verify USS functionality while IO is going on.
  Change-Id: Ie7bf440b02980a0606bf4c4061f5a1628179c128
  Signed-off-by: srivickynesh <sselvan@redhat.com>
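  A minimal sketch of the commands involved, with hypothetical volume and snapshot names:
      gluster snapshot create snap1 testvol no-timestamp
      gluster snapshot activate snap1
      gluster volume set testvol features.uss enable
      ls /mnt/testvol/.snaps                                      # activated snapshots are listed here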
* functional/disperse: Verify IO hang during server side heal (Ashish Pandey, 2018-06-26, 1 file, +139/-0)
  When the IOs are done with client side heal disabled, they should not hang.
  RHG3-11098
  Change-Id: I2f180dd1ba2f45ae0f302a730a02b90ae77b99ad
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
* glusto/dht: Verify create file operation (Susant Palai, 2018-06-26, 1 file, +166/-0)
  Test case:
  - Verify that the file is created on the hashed subvol alone
  - Verify that trusted.glusterfs.pathinfo reflects the file location
  - Verify that file creation fails if the hashed subvol is down
  Change-Id: I951c20f03772a0c5739244ec354f9bbfd6d0ea65
  Signed-off-by: Susant Palai <spalai@redhat.com>
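  A small sketch of how the file location can be checked from the mount point (paths are hypothetical):
      touch /mnt/distvol/file1
      getfattr -n trusted.glusterfs.pathinfo /mnt/distvol/file1   # shows the brick(s) backing the file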
* Test to verify whether invalid values are accepted in auth options (Jilju Joy, 2018-06-26, 1 file, +164/-0)
  Change-Id: I9caa937f74fefad3b9cf13fc0abd0e6a4b380b96
* Fuse mount point should not go to read-only while deleting files (Vitalii Koriakov, 2018-06-26, 1 file, +253/-0)
  Change-Id: I0360e6590425aea48d7acf2ddb10d9fbfe9fdeef
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* No errors should be generated in brick logs after deleting files from the mount point (Rajesh Madaka, 2018-06-26, 1 file, +179/-0)
  -> Create volume
  -> Mount volume
  -> Write files on the mount point
  -> Delete files from the mount point
  -> Check all brick logs for any errors
  Change-Id: Ic744ad04daa0bdb7adcc672360c9ed03f56004ab
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
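  A minimal sketch of the check, assuming the default brick log location and hypothetical volume and mount names:
      mount -t glusterfs server1:/testvol /mnt/testvol
      touch /mnt/testvol/file{1..100}
      rm -f /mnt/testvol/file*
      grep " E " /var/log/glusterfs/bricks/*.log                  # expect no error-level entries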
* Quota : Validate limit on deep directories (harigowtham, 2018-06-26, 1 file, +228/-0)
  The below operations are performed on various types of volumes and mounts:
  * Enable quota
  * Create 10 directories one inside the other and set a limit of 1GB on each directory
  * Perform a quota list operation
  * Create some random amount of data inside each directory
  * Perform a quota list operation
  * Remove the quota limit and delete the data
  Change-Id: I2a706eba5c23909e2e6996f485b3f4ead9d5dbca
  Signed-off-by: harigowtham <hgowtham@redhat.com>
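  A minimal sketch of the quota commands these steps use, with hypothetical volume and directory names:
      gluster volume quota testvol enable
      mkdir -p /mnt/testvol/d1/d2/d3                              # the test nests 10 levels deep
      gluster volume quota testvol limit-usage /d1 1GB
      gluster volume quota testvol limit-usage /d1/d2 1GB
      gluster volume quota testvol list
      gluster volume quota testvol remove /d1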
* Testcase to verify authentication feature using a combination of IP and FQDN (Jilju Joy, 2018-06-26, 1 file, +162/-0)
  Change-Id: I02d3b6f77276e11417bab6236e74d1be0e6a3b32
* afr: Test gfid assignment on dist-rep volume (karthik-us, 2018-06-25, 1 file, +135/-0)
  This test case checks whether a directory with a null gfid gets gfids assigned on all the subvols of a dist-rep volume when a lookup comes on that directory from the mount point.
  Change-Id: Ie68cd0e8b293e9380532e2ccda3d53659854de9b
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
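  A rough sketch of how such a check can be done; brick and mount paths are hypothetical:
      mkdir /bricks/b1/dir1 /bricks/b2/dir1                       # create the directory directly on the backend bricks (no gfid yet)
      stat /mnt/testvol/dir1                                      # trigger a lookup from the mount point
      getfattr -n trusted.gfid -e hex /bricks/b1/dir1 /bricks/b2/dir1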
* snapshot : snapshot scheduler behaviour (Sunny Kumar, 2018-06-25, 1 file, +151/-0)
  This test case validates snapshot scheduler behaviour when the scheduler is enabled/disabled.
  Change-Id: Ia6f01a9853aaceb05155bfc92cccba686d320e43
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* FuseSubdir test with replace-brick functionality (Manisha Saini, 2018-06-25, 1 file, +228/-0)
  Change-Id: I14807c51fb534e5b729da6de69eb062601e80b42
  Signed-off-by: Manisha Saini <msaini@redhat.com>
* Remove faulty subvolume and add a new subvol (Vitalii Koriakov, 2018-06-25, 1 file, +221/-0)
  Change-Id: I29eefb9ba5bbe46ba79267b85fb8814a14d10b00
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Test rebalance hang after stopping glusterd on another node (Mohit Agrawal, 2018-06-25, 1 file, +201/-0)
  1. Trusted storage pool of 2 nodes
  2. Create a distributed volume with 2 bricks
  3. Start the volume
  4. Mount the volume
  5. Add some data files on the mount
  6. Start rebalance with force
  7. Stop glusterd on the 2nd node
  8. Check rebalance status; it should not hang
  9. Issue a volume related command
  Change-Id: Ie3e809e5fe24590eec070607ee99417d0bea0aa0
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
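  A minimal sketch of the key commands, with a hypothetical volume name:
      gluster volume rebalance distvol start force
      systemctl stop glusterd                                     # on the 2nd node
      gluster volume rebalance distvol status                     # should return, not hang
      gluster volume info distvol                                 # volume commands should still work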
* functional/disperse: verify IO hang during client side heal (Sunil Kumar Acharya, 2018-06-25, 1 file, +150/-0)
  When the IOs are done with server side heal disabled, they should not hang.
  The ec_check_heal_comp function will fail because of bug 1593224 (client side heal is not removing the dirty flag for some of the files). While this bug has been raised and is being investigated by dev, this patch is doing its job of testing the target functionality.
  RHG3-11097
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
  Change-Id: I841285c9b1a747f5800ec8cdd29a099e5fcc08c5
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
* snapshot : validate uss when brick is down (Sunny Kumar, 2018-06-25, 1 file, +202/-0)
  This test case validates USS behaviour when USS is enabled on the volume while a brick is down.
  Change-Id: I9be021135c1f038a0c6949ce2484b47cd8634c1e
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* Validate snap info from detached node in the cluster (Bala Konda Reddy Mekala, 2018-06-25, 1 file, +143/-0)
  Change-Id: Ica3d1175ee5d2c6a45e7b7d6513885ee2b84d960
  Signed-off-by: Bala Konda Reddy Mekala <bmekala@redhat.com>
* glusto-tests/glusterd: add brick should fail when server quorum is not met (Sanju Rakonde, 2018-06-25, 1 file, +159/-0)
  Steps followed are:
  1. Create and start a volume
  2. Set cluster.server-quorum-type as server
  3. Set cluster.server-quorum-ratio as 95%
  4. Bring down glusterd on half of the nodes
  5. Confirm that quorum is not met by checking whether the bricks are down
  6. Perform an add-brick operation, which should fail
  7. Check whether the added brick is part of the volume
  Change-Id: I93e3676273bbdddad4d4920c46640e60c7875964
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
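  A minimal sketch of the CLI sequence, with hypothetical volume and brick names:
      gluster volume set testvol cluster.server-quorum-type server
      gluster volume set all cluster.server-quorum-ratio 95%
      systemctl stop glusterd                                     # on half of the nodes
      gluster volume status testvol                               # bricks on the remaining nodes go offline
      gluster volume add-brick testvol server1:/bricks/b_new      # expected to fail without quorum
      gluster volume info testvol                                 # confirm the brick was not added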
* Quota : Limit-Usage on volume as well as sub-directories (Vinayak Papnoi, 2018-06-25, 1 file, +180/-0)
  - Enable quota
  - Set quota limit of 1 GB on /
  - Create 10 directories inside the volume
  - Set quota limit of 100 MB on the directories
  - Fill data inside the directories till the quota limit is reached
  - Validate the size fields using quota list
  Change-Id: I917da8cdf0d78afd6eeee22b6cf6a4d580ac0c9f
  Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
* afr: heal gfids for files created from backend (Ravishankar N, 2018-06-25, 1 file, +178/-0)
  Change-Id: Iaaa78c071bd7ee3ad3ed222957e71aec61f80045
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* This TC verifies that auth.allow fails with a blank string for volumes and subdir (ubansal, 2018-06-24, 1 file, +80/-0)
  Change-Id: I8c71470a67fef17d54d5fdfbcf0d36eb156c07dd
  Signed-off-by: ubansal <ubansal@redhat.com>
* Testcase to verify that a client rejected by hostname is not allowed to FUSE mount the volume (ubansal, 2018-06-24, 1 file, +191/-0)
  Change-Id: I84c4375c38ef7322e65f113db6c6229620c57214
  Signed-off-by: ubansal <ubansal@redhat.com>
* Test peer probe after setting global options on the volume (Rajesh Madaka, 2018-06-24, 1 file, +136/-0)
  -> Set global options and other volume specific options on the volume:
     gluster volume set VOL nfs.rpc-auth-allow 1.1.1.1
     gluster volume set VOL nfs.addr-namelookup on
     gluster volume set VOL cluster.server-quorum-type server
     gluster volume set VOL network.ping-timeout 20
     gluster volume set VOL nfs.port 2049
     gluster volume set VOL performance.nfs.write-behind on
  -> Peer probe a new node
  Change-Id: Ifc06159b436e74ae4865ffcbe877b84307d517fd
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* TC for volume authentication using auth.reject (ubansal, 2018-06-23, 1 file, +169/-0)
  Change-Id: I5dd80e8b1ec8a0e3ab7f565c478be368c2e7c73d
  Signed-off-by: ubansal <ubansal@redhat.com>
* Quota : Test case for quota with multiple values of limit-usage (Vinayak Papnoi, 2018-06-23, 1 file, +277/-0)
  Test quota limit-usage by setting limits of various values: big, small and decimal, e.g. 1GB, 10GB, 2.5GB, etc., and validate the limits by creating more data than the hard limits allow (after reaching the hard limit, data creation should stop).
  Addressed review comments.
  Change-Id: If2801cf13ea22c253b22ecb41fc07f2f1705a6d7
  Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
* Test read-only option on volumes (Rajesh Madaka, 2018-06-23, 1 file, +156/-0)
  -> Create volume
  -> Mount the volume
  -> Set 'read-only on' on the volume
  -> Perform some I/O on the mount point
  -> Set 'read-only off' on the volume
  -> Perform some I/O on the mount point
  Change-Id: Iab980b1fd51edd764ef38b329275d72f875bf3c0
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
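  A minimal sketch of the toggle, with a hypothetical volume and mount path:
      gluster volume set testvol read-only on
      touch /mnt/testvol/f1                                       # expected to fail with a read-only error
      gluster volume set testvol read-only off
      touch /mnt/testvol/f1                                       # expected to succeed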
* Start rebalance when glusterd is down on one of the nodes (Prasad Desala, 2018-06-23, 1 file, +184/-0)
  Rebalance should fail on a pure distribute volume when glusterd is down on one of the nodes.
  Change-Id: I5a871a7783b434ef61f0f1cf4b262db9f5148af6
  Signed-off-by: Prasad Desala <tdesala@redhat.com>
* Test replace brick when quorum is not met (Rajesh Madaka, 2018-06-21, 1 file, +225/-0)
  -> Create volume
  -> Set quorum type
  -> Set quorum ratio to 95%
  -> Start the volume
  -> Stop glusterd on one node
  -> Quorum is now not met
  -> Check whether all bricks went offline
  -> Perform a replace-brick operation
  -> Start glusterd on the same node that was stopped
  -> Check whether all bricks are online
  -> Verify in volume info that the old brick was not replaced with the new brick
  Change-Id: Iab84df9449feeaba66ff0df2d0acbddb6b4e7591
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
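  A minimal sketch of the replace-brick attempt under lost quorum; volume and brick names are hypothetical:
      systemctl stop glusterd                                     # on one node, so quorum is lost
      gluster volume replace-brick testvol server2:/bricks/b2 server2:/bricks/b2_new commit force   # expected to fail
      systemctl start glusterd
      gluster volume info testvol                                 # the old brick should still be listed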
* Adding Testcase: add_brick_while_remove_brick_in_progress (Prasad Desala, 2018-06-21, 1 file, +183/-0)
  While a remove-brick operation is in progress on a volume, glusterd should not allow an add-brick operation on the same volume.
  Change-Id: Iddcbbdb1a5a444ea88995f176c0a18df932dea41
  Signed-off-by: Prasad Desala <tdesala@redhat.com>
* Adding Testcase: Remove_brick_while_rebalance_in_progress (Prasad Desala, 2018-06-21, 1 file, +212/-0)
  If a rebalance is in progress on a volume, glusterd should fail a remove-brick operation on the same volume.
  Change-Id: I2f15023870f342c98186b1860b960cb3c04c0572
  Signed-off-by: Prasad Desala <tdesala@redhat.com>
* glusto-tests/glusterd: setting volume option when one of the nodes is down in the cluster (Sanju Rakonde, 2018-06-20, 1 file, +203/-0)
  In this test case, we set some volume options while one of the nodes in the cluster is down and, after the node is back up, check whether the volume info is synced. We also try to peer probe a new node while bringing down glusterd on some node in the cluster; after the node is up, we check whether peer status has the correct information.
  Steps followed are:
  1. Create a cluster
  2. Create a 2x3 distribute-replicated volume
  3. Start the volume
  4. From N1, issue 'gluster volume set <vol-name> stat-prefetch on'
  5. At the same time as Step 4, bring down glusterd on N2
  6. Start glusterd on N2
  7. Verify volume info is synced
  8. From N1, issue 'gluster peer probe <new-host>'
  9. At the same time as Step 8, bring down glusterd on N2
  10. Start glusterd on N2
  11. Check that the peer status has correct information across the cluster
  Change-Id: Ib95268a3fe11cfbc5c76aa090658133ecc8a0517
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* No errors should be generated in glusterd.log while detaching a node from gluster (Rajesh Madaka, 2018-06-20, 1 file, +88/-0)
  -> Detach the node from the peer
  -> Check for any error messages related to peer detach in the glusterd log file
  -> No errors should be present in the glusterd log file
  Change-Id: I481df5b15528fb6fd77cd1372110d7d23dd5cdef
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
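  A minimal sketch of the check, assuming the default glusterd log location and a hypothetical peer name:
      gluster peer detach server3
      grep " E " /var/log/glusterfs/glusterd.log                  # expect no peer-detach related errors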
* functional/disperse: Validate EC volume creation (Sunil Kumar Acharya, 2018-06-20, 1 file, +137/-0)
  This test tries to create and validate an EC volume with various combinations of input parameters.
  RHG3-12926
  Change-Id: Icfc15e069d04475ca65b4d7c1dd260434f104cdb
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
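  One hypothetical example of the kind of disperse (EC) volume creation this test validates:
      gluster volume create ecvol disperse 6 redundancy 2 server{1..6}:/bricks/ecb1
      gluster volume start ecvol
      gluster volume info ecvol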
* Test multiple clients "dd" on same file (default) (Vitalii Koriakov, 2018-06-20, 1 file, +263/-0)
  Change-Id: Icd5c423ad1b2fee770680cc66d9919c930c4780f
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* glusto-tests/glusterd: remove brick should fail when server quorum is not met (Sanju Rakonde, 2018-06-19, 1 file, +140/-0)
  Steps followed are:
  1. Create and start a volume
  2. Set cluster.server-quorum-type as server
  3. Set cluster.server-quorum-ratio as 95%
  4. Bring down glusterd on half of the nodes
  5. Confirm that quorum is not met by checking whether the bricks are down
  6. Perform a remove-brick operation, which should fail
  Change-Id: I69525651727ec92dce2f346ad706ab0943490a2d
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* afr: Test self heal when quota object limit is set (karthik-us, 2018-06-19, 1 file, +196/-0)
  Self heal should heal the files even if the quota object limit is exceeded on a directory.
  Change-Id: Icc63b1794f82aef708832d0b207ded5f13391b85
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
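  A minimal sketch of setting a quota object limit on a directory; names are hypothetical, and depending on the Gluster version inode quota may need to be enabled separately:
      gluster volume quota testvol enable
      gluster volume quota testvol limit-objects /dir1 10         # cap the number of entries under /dir1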
* FuseSubdir validation for quota limits (Manisha Saini, 2018-06-19, 1 file, +277/-0)
  Change-Id: I94b67fe9a810f020fef36ec9ab00ce7182c9e5c0
  Signed-off-by: Manisha Saini <msaini@redhat.com>
* Testcase to verify auth.reject and auth.allow volume options at volume and sub-directory level using both IP and hostname of clients (Jilju Joy, 2018-06-19, 1 file, +329/-0)
  Change-Id: I3822b2cfd0fbadcdcbc679f046b299d84e741f19
* Testcase to verify precedence of auth.reject over auth.allow volume option (Jilju Joy, 2018-06-19, 1 file, +327/-0)
  Change-Id: I8770aa4fdfd4bf94ecdda3e80a79c6717e2974dd
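  A minimal sketch of the precedence check, with a hypothetical volume and client IP:
      gluster volume set testvol auth.allow 10.70.1.1
      gluster volume set testvol auth.reject 10.70.1.1
      mount -t glusterfs server1:/testvol /mnt/testvol            # run from 10.70.1.1; per the test objective, the mount is expected to be denied since auth.reject takes precedence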
* snapshot : validating USS functionality (Sunny Kumar, 2018-06-19, 1 file, +235/-0)
  Activated snaps should get listed in the .snaps directory, while deactivated snaps should not.
  Change-Id: I04a61c49dcbc9510d60cc8ee6b1364742271bbf0
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* Restart glusterd after rebalance is over (Prasad Desala, 2018-06-19, 1 file, +177/-0)
  Test case objective: restarting glusterd should not restart a completed rebalance operation.
  Change-Id: I52b808d91d461048044ac742185ddf4696bf94a3
  Signed-off-by: Prasad Desala <tdesala@redhat.com>
* glusto-tests/glusterd: setting auth.allow with more than 4096 characters (Sanju Rakonde, 2018-06-19, 1 file, +102/-0)
  In this test case we set the auth.allow option with more than 4096 characters and restart glusterd. glusterd should restart successfully.
  Steps followed:
  1. Create and start a volume
  2. Set auth.allow with <4096 characters
  3. Restart glusterd; it should succeed
  4. Set auth.allow with >4096 characters
  5. Restart glusterd; it should succeed
  6. Confirm that glusterd is running on the stopped node
  Change-Id: I7a5a8e49a798238bd88e5da54a8f4857c039ca07
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* Add identical brick on new node after bringing down a brick on another node (Mohit Agrawal, 2018-06-19, 1 file, +126/-0)
  1. Create a distributed volume on node N1
  2. Bring down a brick on N1
  3. Peer probe N2 from N1
  4. Add an identical brick on the newly added node
  5. Check volume status
  Change-Id: I17c4769df6e4ec2f11b7d948ca48a006cf301073
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
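  A rough sketch of the sequence; node names and brick paths are hypothetical:
      gluster volume create distvol N1:/bricks/b1
      gluster volume start distvol
      kill -15 <brick-pid>                                        # one way to bring down the brick process on N1
      gluster peer probe N2
      gluster volume add-brick distvol N2:/bricks/b1              # identical brick path on the new node
      gluster volume status distvol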
* Snapshot: Create many clones and verify volume list information (srivickynesh, 2018-06-19, 1 file, +166/-0)
  Create 10 clones of a snapshot and verify gluster volume list information.
  Change-Id: Ibd813680d1890e239deaf415469f7f4dccfa6867
  Signed-off-by: srivickynesh <sselvan@redhat.com>
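  A minimal sketch with hypothetical snapshot and clone names (the snapshot must be activated before cloning):
      gluster snapshot clone clone1 snap1                         # repeated for 10 clones in the test
      gluster volume list                                         # cloned volumes should be listed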
* Resolving meta-data split-brain from client using extended attributes (Vitalii Koriakov, 2018-06-19, 1 file, +257/-0)
  Change-Id: I2ba674b8ea97964040f2e7d47a169c1e41808116
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Test volume status fd while IO is in progress (Rajesh Madaka, 2018-06-19, 1 file, +152/-0)
  -> Create volume
  -> Mount the volume on 2 clients
  -> Run I/O on the mount point
  -> While I/O is in progress, perform 'gluster volume status fd' repeatedly
  -> List all files and dirs
  Change-Id: I2d979dd79fa37ad270057bd87d290c84569c4a3d
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
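  A minimal sketch with a hypothetical volume name and mount path:
      gluster volume status testvol fd                            # run repeatedly while IO is in progress
      ls -lR /mnt/testvol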
* Testcase to verify auth.allow feature (Jilju Joy, 2018-06-19, 2 files, +211/-0)
  Change-Id: I40f41c03e5ea8130a7374579b249bdd113b4a842
* Adding fuse subdir add-brick and rebalance test (Manisha Saini, 2018-06-19, 1 file, +231/-0)
  Change-Id: I624e041271d3b776e243aebfab43e081ccfd7946
  Signed-off-by: Manisha Saini <msaini@redhat.com>
* Test to verify auth.allow setting using FQDN of clients (Jilju Joy, 2018-06-19, 1 file, +175/-0)
  Change-Id: Iaad1dcb4339aa752a45e39d7bca338d1fdc87da0