Commit log (each entry: subject, author, date; files changed, lines -deleted/+added)
* Testcase to verify precedence of auth.reject over auth.allow volume option. (Jilju Joy, 2018-06-19; 1 file, -0/+327)
    Change-Id: I8770aa4fdfd4bf94ecdda3e80a79c6717e2974dd
* snapshot: validating USS functionality (Sunny Kumar, 2018-06-19; 1 file, -0/+235)
    Activated snaps should get listed in the .snaps directory while deactivated snaps should not.
    Change-Id: I04a61c49dcbc9510d60cc8ee6b1364742271bbf0
    Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* Restart glusterd after rebalance is over (Prasad Desala, 2018-06-19; 1 file, -0/+177)
    Test case objective: restarting glusterd should not restart a completed rebalance operation.
    Change-Id: I52b808d91d461048044ac742185ddf4696bf94a3
    Signed-off-by: Prasad Desala <tdesala@redhat.com>
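    A minimal sketch of the flow this test exercises, using glusto's g.run(); the node and volume names are placeholders, not taken from the commit:

        from glusto.core import Glusto as g

        NODE, VOL = "server1.example.com", "testvol"  # hypothetical names

        # Start rebalance and poll until the run reports 'completed'
        g.run(NODE, "gluster volume rebalance %s start" % VOL)
        ret, out, _ = g.run(NODE, "gluster volume rebalance %s status" % VOL)
        # ... wait here until 'completed' appears in out ...

        # Restart glusterd on the node
        g.run(NODE, "systemctl restart glusterd")

        # The finished rebalance must not be started again
        ret, out, _ = g.run(NODE, "gluster volume rebalance %s status" % VOL)
        assert "in progress" not in out, "rebalance was unexpectedly restarted"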
* glusto-tests/glusterd: setting auth.allow with more than 4096 characters (Sanju Rakonde, 2018-06-19; 1 file, -0/+102)
    In this test case we set the auth.allow option with more than 4096 characters and restart glusterd; glusterd should restart successfully.
    Steps followed:
    1. Create and start a volume
    2. Set auth.allow with <4096 characters
    3. Restart glusterd; it should succeed
    4. Set auth.allow with >4096 characters
    5. Restart glusterd; it should succeed
    6. Confirm that glusterd is running on the restarted node
    Change-Id: I7a5a8e49a798238bd88e5da54a8f4857c039ca07
    Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
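    A rough sketch of steps 4-6, assuming glusto's g.run() and placeholder node/volume names:

        from glusto.core import Glusto as g

        node, vol = "server1.example.com", "testvol"  # hypothetical names

        # Build an auth.allow value longer than 4096 characters
        # (a comma-separated list of dummy IPs)
        long_value = ",".join("192.168.%d.%d" % (i // 256, i % 256)
                              for i in range(500))
        assert len(long_value) > 4096

        ret, _, _ = g.run(node, "gluster volume set %s auth.allow '%s'"
                                % (vol, long_value))
        assert ret == 0

        # glusterd must come back up cleanly after a restart
        g.run(node, "systemctl restart glusterd")
        ret, _, _ = g.run(node, "pgrep glusterd")
        assert ret == 0, "glusterd did not restart successfully"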
* Add identical brick on new node after bringing down a brick on another node (Mohit Agrawal, 2018-06-19; 1 file, -0/+126)
    1. Create a distribute volume on Node 1
    2. Bring down a brick on Node 1
    3. Peer probe N2 from N1
    4. Add an identical brick on the newly added node
    5. Check volume status
    Change-Id: I17c4769df6e4ec2f11b7d948ca48a006cf301073
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* Snapshot: Create many clones and verify volume list information (srivickynesh, 2018-06-19; 1 file, -0/+166)
    Create 10 clones of a snapshot and verify gluster volume list information.
    Change-Id: Ibd813680d1890e239deaf415469f7f4dccfa6867
    Signed-off-by: srivickynesh <sselvan@redhat.com>
* Resolving meta-data split-brain from client using extended attributes (Vitalii Koriakov, 2018-06-19; 1 file, -0/+257)
    Change-Id: I2ba674b8ea97964040f2e7d47a169c1e41808116
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Test volume status fd while IO is in progress (Rajesh Madaka, 2018-06-19; 1 file, -0/+152)
    -> Create volume
    -> Mount the volume on 2 clients
    -> Run I/O on the mountpoint
    -> While I/O is in progress, perform gluster volume status fd repeatedly
    -> List all files and dirs
    Change-Id: I2d979dd79fa37ad270057bd87d290c84569c4a3d
    Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
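    A minimal sketch of the polling loop, using glusto's g.run(); node and volume names are placeholders:

        import time
        from glusto.core import Glusto as g

        node, vol = "server1.example.com", "testvol"  # hypothetical names

        # While clients generate IO on the mount, poll the open-fd view
        # repeatedly; the command must keep succeeding and never hang.
        for _ in range(10):
            ret, out, err = g.run(node, "gluster volume status %s fd" % vol)
            assert ret == 0, "volume status fd failed: %s" % err
            time.sleep(5)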
* Testcase to verify auth.allow feature (Jilju Joy, 2018-06-19; 2 files, -0/+211)
    Change-Id: I40f41c03e5ea8130a7374579b249bdd113b4a842
* Adding fuse subdir add brick and rebalance test (Manisha Saini, 2018-06-19; 1 file, -0/+231)
    Change-Id: I624e041271d3b776e243aebfab43e081ccfd7946
    Signed-off-by: Manisha Saini <msaini@redhat.com>
* Test to verify auth.allow setting using FQDN of clients (Jilju Joy, 2018-06-19; 1 file, -0/+175)
    Change-Id: Iaad1dcb4339aa752a45e39d7bca338d1fdc87da0
* Quota: Test quota by renaming the dir (venkata edara, 2018-06-19; 1 file, -0/+130)
    This testcase enables quota on a directory of the volume, renames the directory, and checks whether quota list shows the renamed directory. Incorporates the changes made to quota_ops and quota_libs.
    Change-Id: I7166a9810614c966a4a656b5e8976df55b102c01
    Signed-off-by: venkata edara <redara@redhat.com>
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
    Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
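    A minimal sketch of the rename check, assuming glusto's g.run(); host, volume, and mount names are placeholders:

        from glusto.core import Glusto as g

        node, client = "server1.example.com", "client1.example.com"  # placeholders
        vol, mnt = "testvol", "/mnt/testvol"                          # placeholders

        g.run(node, "gluster volume quota %s enable" % vol)
        g.run(client, "mkdir %s/olddir" % mnt)
        g.run(node, "gluster volume quota %s limit-usage /olddir 1GB" % vol)

        # Rename the directory from the client side
        g.run(client, "mv %s/olddir %s/newdir" % (mnt, mnt))

        # quota list should now show the renamed path
        ret, out, _ = g.run(node, "gluster volume quota %s list" % vol)
        assert "/newdir" in out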
* Create volume using bricks of deleted volume (Rajesh Madaka, 2018-06-18; 1 file, -0/+179)
    -> Create a distributed-replicated volume
    -> Add 6 bricks to the volume
    -> Mount the volume
    -> Perform some I/O on the mount point
    -> Unmount the volume
    -> Stop and delete the volume
    -> Create another volume using the bricks of the deleted volume
    Change-Id: I263d2f0a359ccb0409dba620363a39d92ea8d2b9
    Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Test replacing all arbiter bricks in the volume (Vitalii Koriakov, 2018-06-18; 1 file, -0/+242)
    Change-Id: Iff0e832ebcad14968328c7d7575d120ba8152252
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* snapshot:: Modification in library for snap scheduler (Sunny Kumar, 2018-06-18; 1 file, -38/+50)
    The following modifications have been made:
    1. Added a function for snap scheduler initialization.
    2. Snap scheduler initialization should not be combined with snap schedule enable.
    3. Snap scheduler enable/disable should be issued for a node instead of for every node in the cluster.
    Change-Id: I23650f48b152debdfb4d7bc8af6f65ecb2bcddfb
    Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* Rebalance: Directory and file creation from multiple clients (Prasad Desala, 2018-06-17; 1 file, -0/+249)
    This testcase verifies rebalance behaviour while IO is in progress from multiple clients.
    Change-Id: Id87472a8194d31e5de181827cfcf30ccacc346c0
    Signed-off-by: Prasad Desala <tdesala@redhat.com>
* Test to verify auth.reject feature on sub directories (Jilju Joy, 2018-06-17; 1 file, -0/+159)
    Change-Id: I3ad5486c5b507fa82ac2f4c0b7c0bdadfc523220
* Snapshot: Information of snapshots after restarting glusterd (srivickynesh, 2018-06-17; 1 file, -0/+136)
    Test cases in this module verify snapshot information after glusterd is restarted.
    Change-Id: I7c5e761d8a8cd261841d064dbd94093e1c5b6edd
    Signed-off-by: srivickynesh <sselvan@redhat.com>
* Adding fuse subdir rename case (Manisha Saini, 2018-06-17; 1 file, -0/+250)
    Change-Id: I3fbb764925fb19b3e4808711eadbf51090ed98b3
    Signed-off-by: Manisha Saini <msaini@redhat.com>
* afr: Test self heal when quota limit is exceeded (karthik-us, 2018-06-17; 1 file, -0/+197)
    Self heal should heal the files even if the quota limit on a directory is reached.
    Change-Id: I336b78eb55cd5c7ec6b3236f95ce9f0cb8423667
    Signed-off-by: karthik-us <ksubrahm@redhat.com>
* Snapshot:: Fix: view_snaps_from_mounts function (srivickynesh, 2018-06-15; 1 file, -1/+1)
    Fix for the view_snaps_from_mounts function:
    1. Iteration through snap_list was incorrect, as snaps taken from snap_list were being compared against snap_list itself.
    2. It now iterates through snaps, which is a superset of all snaps.
    Change-Id: Ib14e7819f6fd49e563fd9e8a8f7699581a8900b4
    Signed-off-by: srivickynesh <sselvan@redhat.com>
* afr: test entry selfheal with quota limit object (Ravishankar N, 2018-06-15; 1 file, -0/+224)
    Deletion of a file on the source bricks must be reflected on the sink brick after bringing it up (a conservative merge must NOT happen) when quota is enabled.
    Change-Id: I8c3f55ddd1eee9a211674c8759b94aa801f6f174
* Function for reset-brick added in brick_ops (ubansal, 2018-06-15; 1 file, -1/+52)
    Change-Id: I01ad84cc2e35873b985d5d86d3cbacd226b42ae1
    Signed-off-by: ubansal <ubansal@redhat.com>
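    For reference, a rough sketch of the CLI sequence such a helper presumably wraps; the reset-brick syntax follows the gluster CLI, and host/brick names are placeholders:

        from glusto.core import Glusto as g

        node = "server1.example.com"                                  # placeholder
        vol = "testvol"                                               # placeholder
        brick = "server1.example.com:/bricks/brick1"                  # placeholder

        # Take the brick offline for maintenance ...
        g.run(node, "gluster volume reset-brick %s %s start" % (vol, brick))
        # ... then bring the same (or a reformatted) brick back
        g.run(node, "gluster volume reset-brick %s %s %s commit force"
                    % (vol, brick, brick))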
* glusto-tests/glusterd: gluster volume status with/without xml tag (Sanju Rakonde, 2018-06-15; 1 file, -0/+112)
    In this test case, we check gluster volume status and gluster volume status --xml from a node which is part of the cluster but does not host any bricks of the volume.
    Steps followed:
    1. Create a two-node cluster
    2. Create a distributed volume with one brick (assume the brick is on N1)
    3. From the node not having any bricks, i.e. N2, check gluster v status, which should fail saying the volume is not started
    4. From N2, check gluster v status --xml; it should fail because the volume is not started yet
    5. Start the volume
    6. From N2, check gluster v status; this should succeed
    7. From N2, check gluster v status --xml; this should succeed
    Change-Id: I1a230b82c0628c66c16f25f89dd4e6d1d0b3f443
    Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
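    A minimal sketch of the checks from N2, assuming glusto's g.run(); node and volume names are placeholders:

        from glusto.core import Glusto as g

        n2, vol = "server2.example.com", "testvol"  # N2 holds no bricks

        # Before the volume is started, both forms should fail from N2
        ret, _, _ = g.run(n2, "gluster volume status %s" % vol)
        assert ret != 0
        ret, _, _ = g.run(n2, "gluster volume status %s --xml" % vol)
        assert ret != 0

        # After the volume is started, both forms should succeed from N2
        g.run(n2, "gluster volume start %s" % vol)
        for cmd in ("gluster volume status %s" % vol,
                    "gluster volume status %s --xml" % vol):
            ret, _, _ = g.run(n2, cmd)
            assert ret == 0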
* Pausing and resuming a geo-rep session (rallan, 2018-06-15; 1 file, -0/+63)
    Change-Id: I4d2767acb4fff7ab972e11d13ef4c547914110d9
    Signed-off-by: rallan <rallan@redhat.com>
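    A minimal sketch of pausing and resuming a session via the geo-replication CLI; master node, volume, and slave names are placeholders:

        from glusto.core import Glusto as g

        node = "master1.example.com"                               # placeholder
        mvol, slave = "mastervol", "slave1.example.com::slavevol"  # placeholders
        georep = "gluster volume geo-replication %s %s" % (mvol, slave)

        g.run(node, georep + " pause")
        ret, out, _ = g.run(node, georep + " status")
        assert "Paused" in out

        g.run(node, georep + " resume")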
* Fuse subdir multiclient authentication (Manisha Saini, 2018-06-14; 1 file, -0/+141)
    Change-Id: I3e789d2e8ad24cca62d8a5ef8cfb9511375cfe0e
    Signed-off-by: Manisha Saini <msaini@redhat.com>
* Adding Fuse Subdir LevelDown test case (Manisha Saini, 2018-06-14; 1 file, -0/+148)
    Change-Id: Ic356857db199529a4eaacb9140d71a8fd7c70375
    Signed-off-by: Manisha Saini <msaini@redhat.com>
* All the fields in heal info must be mentioned consistently when a few of the bricks are down (Vitalii Koriakov, 2018-06-14; 1 file, -0/+192)
    Change-Id: I1169250706494b1b833d3b7e8a1ee148426e224b
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Quota: Library to get the quotad pid (Vinayak Papnoi, 2018-06-14; 1 file, -0/+67)
    Adding quota_libs.py with the quota_fetch_daemon_pid library. This library gets the PID of the quota daemon from all the nodes.
    Change-Id: Icc5ecba5649a2f7fd48d5cedda6c1dd3ad8b50c0
    Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
* Test to verify Fuse sub directory mount operation (Jilju Joy, 2018-06-14; 1 file, -0/+99)
    Change-Id: I6cdb3122af071f9f7bcfaeba8427e5c4aad8a4ec
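    A minimal sketch of a fuse subdir mount, using glusto's g.run(); client, server, volume, and subdir names are placeholders:

        from glusto.core import Glusto as g

        client = "client1.example.com"                                  # placeholder
        server, vol, subdir = "server1.example.com", "testvol", "dir1"  # placeholders

        # Mount only a sub-directory of the volume instead of the volume root
        g.run(client, "mkdir -p /mnt/subdir_mount")
        ret, _, err = g.run(client, "mount -t glusterfs %s:/%s/%s /mnt/subdir_mount"
                                    % (server, vol, subdir))
        assert ret == 0, "subdir mount failed: %s" % err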
* tests: Test case to verify brick consumable size (Sunil Kumar Acharya, 2018-06-14; 2 files, -0/+132)
    When bricks of various sizes are used to create a disperse volume, the volume size should be (number of data bricks * smallest brick size).
    RHG3-11124
    Change-Id: Ic791212bf028328996b896ae4896cf860c153264
    Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
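    A worked example of the size rule, with made-up brick sizes:

        # A 4+2 disperse volume with one smaller brick is limited by
        # that smallest brick (sizes in GiB, invented for illustration).
        brick_sizes = [10, 10, 10, 10, 10, 8]
        data_bricks = 4                            # 4 data + 2 redundancy
        usable = data_bricks * min(brick_sizes)
        print(usable)                              # 32, not 4 * 10 = 40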
* Fixed a problem where, after rebooting nodes and completion of the timeout, nodes were reported online even though they were still offline (Vitalii Koriakov, 2018-06-13; 1 file, -3/+7)
    Change-Id: I57da740fbc8eef2e41d5dfe3bb82a8d487630893
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Fix tests with old quota functions (Nigel Babu, 2018-06-13; 2 files, -17/+17)
    We renamed the quota functions, which caused some conflicts on master.
    Change-Id: I5eb6381dc77dcd99929cbc20173941bf1bd2290d
* Initial steps for geo-rep setup (rallan, 2018-06-13; 1 file, -0/+110)
    Change-Id: I1b06d935ae056dd0fe583f3c1a9f181ec2a40076
    Signed-off-by: rallan <rallan@redhat.com>
* Libraries to enable and disable shared storage (Akarsha, 2018-06-13; 1 file, -0/+116)
    Change-Id: Ie5072e8f29185104bcd54348465cc6e96cce5f00
    Signed-off-by: Akarsha <akrai@redhat.com>
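    A minimal sketch of what such library wrappers would drive, using the cluster.enable-shared-storage volume option; the node name is a placeholder:

        from glusto.core import Glusto as g

        node = "server1.example.com"  # placeholder

        # Enabling shared storage auto-creates and mounts the
        # gluster_shared_storage volume across the cluster
        g.run(node, "gluster volume set all cluster.enable-shared-storage enable")
        ret, out, _ = g.run(node, "gluster volume list")
        assert "gluster_shared_storage" in out

        # Disabling removes it again
        g.run(node, "gluster volume set all cluster.enable-shared-storage disable")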
* Snapshot: snapshot information testcases (srivickynesh, 2018-06-13; 1 file, -0/+113)
    Information of snapshots taken for a specified snapshot/volume, or of all snapshots present in the system.
    Change-Id: Ibe355053848b234e0892c0b7f68bfed053f8867a
    Signed-off-by: srivickynesh <sselvan@redhat.com>
* Adding fuse subdir remove brick test (Manisha Saini, 2018-06-13; 1 file, -0/+234)
    Change-Id: I0f01234c66a42844bfa5b6c548cd17f4512d98e2
    Signed-off-by: Manisha Saini <msaini@redhat.com>
* glusto-tests/glusterd: check uuid of bricks in vol info --xml (Sanju Rakonde, 2018-06-13; 1 file, -0/+105)
    In this test case, we check the UUIDs of bricks in the output of gluster volume info --xml from a newly probed node.
    Steps followed:
    1. Create a two-node cluster
    2. Create and start a 2x2 volume
    3. From the existing cluster, probe a new node
    4. Check gluster volume info --xml from the newly probed node
    5. In the gluster volume info --xml output, the UUIDs of the bricks should be non-zero
    Change-Id: I73d07f1b91b5beab26cc87217defb8999fba474e
    Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
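    A rough sketch of the UUID check, assuming glusto's g.run() and assuming each <brick> element in the XML output carries a uuid attribute; node and volume names are placeholders:

        import xml.etree.ElementTree as ET
        from glusto.core import Glusto as g

        new_node, vol = "server3.example.com", "testvol"  # placeholders

        ret, out, _ = g.run(new_node, "gluster volume info %s --xml" % vol)
        assert ret == 0

        root = ET.fromstring(out)
        zero_uuid = "00000000-0000-0000-0000-000000000000"
        for brick in root.iter("brick"):
            uuid = brick.get("uuid")           # assumed attribute name
            if uuid is not None:
                assert uuid != zero_uuid, "brick reported an all-zero uuid"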
* Quota: Fixing quota libs in glusto (Sanoj Unnikrishnan, 2018-06-13; 7 files, -226/+142)
    - Added quota_libs.py with the quota_validate library
    - Removed a redundant function in quota_ops
    - Changed naming of quota_ops to be consistent and intuitive w.r.t. the CLI
    Change-Id: I4faf448ea308c9e04b548d6174d900fcf56978a5
    Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
* Changes to accept '*' as authentication value and minor enhancement in description (Jilju Joy, 2018-06-12; 1 file, -16/+22)
    Change-Id: I895b1eb51e0bc425ba1aff559374ea9383894ccb
* Verifying task type and status in vol status (Rajesh Madaka, 2018-06-12; 1 file, -0/+192)
    -> Create volume
    -> Start rebalance
    -> Check task type in volume status
    -> Check task status string in volume status
    -> Check task type in volume status xml
    -> Check task status string in volume status xml
    -> Start remove-brick operation
    -> Check task type in volume status
    -> Check task status string in volume status
    -> Check task type in volume status xml
    -> Check task status string in volume status xml
    Change-Id: I9de53008e19f1965dac21d4b80b9b271bbcf53a1
    Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Setting "data-self-heal", "metadata-self-heal", "entry-self-heal" volume ↵Vitalii Koriakov2018-06-121-0/+272
| | | | | | | options should not be applicable to glfsheal Change-Id: I019b0299dd7f907446e85f6de0186fb61a3ce1f1 Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Added an afr test case: test gfid heal on 1x3 vol (root, 2018-06-12; 1 file, -0/+188)
    Description: Test case which checks gfid self-heal of a file on a 1x3 replicated volume.
    Change-Id: I3bad7c16435bd99fa3f5b812c65970bebdbd18ac
    Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
* afr: test to resolve split-brain using CLI (source-brick) (root, 2018-06-12; 1 file, -0/+191)
    Description: This test case runs the split-brain resolution CLIs on a file in gfid split-brain on a 1x2 volume.
    1. Kill 1 brick
    2. Create a file at the mount point
    3. Bring back the killed brick
    4. Kill the other brick
    5. Create the same file at the mount point
    6. Bring back the killed brick
    7. Try heal from CLI and check if it gets completed
    Change-Id: Iddd386741c3c672cda90db46facd7b04feaa2181
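    A minimal sketch of step 7 with the source-brick resolution CLI; node, volume, brick, and file names are placeholders:

        from glusto.core import Glusto as g

        node, vol = "server1.example.com", "testvol"         # placeholders
        source_brick = "server1.example.com:/bricks/brick0"  # copy to keep
        fpath = "/file1.txt"                                 # path from volume root

        # Pick the good copy and let AFR heal the other brick from it
        ret, out, err = g.run(node,
            "gluster volume heal %s split-brain source-brick %s %s"
            % (vol, source_brick, fpath))
        assert ret == 0, err

        # Afterwards the file should no longer be listed in split-brain
        ret, out, _ = g.run(node, "gluster volume heal %s info split-brain" % vol)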
* Heal command should say that triggering heal is unsuccessful as some bricks may be down (Vitalii Koriakov, 2018-06-11; 1 file, -0/+220)
    Change-Id: I0515680e2cbe582917f0034461b305a33b75ca94
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Quota: Test case for quota on single brick volume (Vinayak Papnoi, 2018-06-11; 1 file, -0/+159)
    This test case covers the quota functionality for a volume with a single brick (1x1). The quota help CLI command is also validated here.
    Change-Id: I772f4646e2229c21f4547122410633715ef47668
    Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
* Quota: Test case to validate limits on various directories [breadth] (Vinayak Papnoi, 2018-06-10; 1 file, -0/+185)
    Verifying directory quota functionality with respect to the limit-usage option. Set limits on various directories [breadth] and check the quota list for all the directories.
    * Enable quota
    * Create 10 directories and set a limit of 1GB on each directory
    * Perform a quota list operation
    * Create some random amount of data inside each directory
    * Perform a quota list operation
    Change-Id: I3ffc5b99018365eca21ecbdd55d6d9c176f36d6f
    Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
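    A minimal sketch of the breadth setup, assuming glusto's g.run(); host, volume, and mount names are placeholders:

        from glusto.core import Glusto as g

        node, client = "server1.example.com", "client1.example.com"  # placeholders
        vol, mnt = "testvol", "/mnt/testvol"                          # placeholders

        g.run(node, "gluster volume quota %s enable" % vol)
        for i in range(10):
            g.run(client, "mkdir -p %s/dir%d" % (mnt, i))
            g.run(node, "gluster volume quota %s limit-usage /dir%d 1GB" % (vol, i))

        # All ten directories should appear in the quota list with a 1GB limit
        ret, out, _ = g.run(node, "gluster volume quota %s list" % vol)
        for i in range(10):
            assert "/dir%d" % i in out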
* Verifying max supported op-version and vol info file (Rajesh Madaka, 2018-06-09; 1 file, -0/+129)
    -> Create volume
    -> Get the current op-version
    -> Get the max supported op-version
    -> Verify that the vol info file exists on all servers
    -> Get the version number from the vol info file
    -> If the current op-version is less than the max op-version, set the current op-version to the max op-version
    -> After the vol set operation, verify that the version number in the vol info file increased by one
    -> Verify that the current op-version and max op-version are the same
    Change-Id: If56210a406b15861b0a261e29d2e5f45e14301fd
    Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
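    A rough sketch of these steps, assuming glusto's g.run(), assuming (per the steps above) that the op-version bump increments the version= counter in the vol info file, and using placeholder node/volume names:

        from glusto.core import Glusto as g

        node, vol = "server1.example.com", "testvol"  # placeholders

        _, cur, _ = g.run(node, "gluster volume get all cluster.op-version")
        _, mx, _ = g.run(node, "gluster volume get all cluster.max-op-version")
        cur_op, max_op = cur.split()[-1], mx.split()[-1]  # last column of the table

        # The op-version bump is itself a vol-set operation, so the version=
        # counter in the vol info file is expected to grow by one
        _, before, _ = g.run(node, "grep ^version= /var/lib/glusterd/vols/%s/info" % vol)
        if cur_op != max_op:
            g.run(node, "gluster volume set all cluster.op-version %s" % max_op)
        _, after, _ = g.run(node, "grep ^version= /var/lib/glusterd/vols/%s/info" % vol)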
* Test OOM kill on client when heal is in progress on 1x3 arbiter volume (Vitalii Koriakov, 2018-06-08; 1 file, -0/+201)
    Change-Id: I47d0ac4afac44442bd877243c45581df83c6a2e7
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Automatic heal should be triggered and all files must be available (Vitalii Koriakov, 2018-06-08; 1 file, -0/+269)
    Change-Id: I734f85671f17e9a7e9d863aa3a0ef8f632182d48
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>