path: root/tests
Commit message (author, date, files changed, lines +/-)
* Adding Fuse Subdir LevelDown Test Case (Manisha Saini, 2018-06-14, 1 file, +148/-0)
    Change-Id: Ic356857db199529a4eaacb9140d71a8fd7c70375
    Signed-off-by: Manisha Saini <msaini@redhat.com>
* All the fields in heal info must be mentioned consistently when few of the bricks are down (Vitalii Koriakov, 2018-06-14, 1 file, +192/-0)
    Change-Id: I1169250706494b1b833d3b7e8a1ee148426e224b
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Test to verify Fuse sub directory mount operation (Jilju Joy, 2018-06-14, 1 file, +99/-0)
    Change-Id: I6cdb3122af071f9f7bcfaeba8427e5c4aad8a4ec
* tests: Test case to verify brick consumable size (Sunil Kumar Acharya, 2018-06-14, 1 file, +113/-0)
    When bricks of various sizes are used to create a disperse volume, the
    volume size should be (number of data bricks * least brick size).
    RHG3-11124
    Change-Id: Ic791212bf028328996b896ae4896cf860c153264
    Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
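The size rule this test verifies can be sketched as a quick calculation (a toy model with made-up brick sizes, not code from the repository):

```python
# Toy model: usable size of a disperse volume built from unequal bricks.
# For an N+R disperse volume (N data bricks, R redundancy bricks), the
# consumable size is bounded by the smallest brick:
#   usable = data_bricks * min(brick_sizes)

def disperse_usable_size(brick_sizes_gb, redundancy):
    """Return the expected usable size in GB for a disperse volume."""
    data_bricks = len(brick_sizes_gb) - redundancy
    return data_bricks * min(brick_sizes_gb)

# e.g. a 4+2 volume from bricks of 10, 12, 15, 20, 20 and 25 GB
print(disperse_usable_size([10, 12, 15, 20, 20, 25], redundancy=2))  # 40
```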
* Fix tests with old quota functions (Nigel Babu, 2018-06-13, 2 files, +17/-17)
    We renamed the quota functions, so this caused some conflicts on master.
    Change-Id: I5eb6381dc77dcd99929cbc20173941bf1bd2290d
* Snapshot: snapshot information testcases (srivickynesh, 2018-06-13, 1 file, +113/-0)
    Validates the information of snapshots taken for a specified
    snapshot/volume, or of all snapshots present in the system.
    Change-Id: Ibe355053848b234e0892c0b7f68bfed053f8867a
    Signed-off-by: srivickynesh <sselvan@redhat.com>
* Adding fuse subdir remove brick test (Manisha Saini, 2018-06-13, 1 file, +234/-0)
    Change-Id: I0f01234c66a42844bfa5b6c548cd17f4512d98e2
    Signed-off-by: Manisha Saini <msaini@redhat.com>
* glusto-tests/glusterd: check uuid of bricks in vol info --xml (Sanju Rakonde, 2018-06-13, 1 file, +105/-0)
    This test case checks the UUIDs of bricks in the output of
    `gluster volume info --xml` from a newly probed node. Steps followed:
    1. Create a two-node cluster
    2. Create and start a 2x2 volume
    3. From the existing cluster, probe a new node
    4. Check `gluster volume info --xml` from the newly probed node
    5. In the output, the UUIDs of bricks should be non-zero
    Change-Id: I73d07f1b91b5beab26cc87217defb8999fba474e
    Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
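The non-zero-UUID check (step 5) can be sketched as below; the XML is a hand-made sample only roughly shaped like gluster's output, and the attribute/element names are an assumption, not the exact gluster schema:

```python
# Sketch: flag any brick in a `gluster volume info --xml`-style document
# whose uuid attribute is the all-zero UUID.
import xml.etree.ElementTree as ET

ZERO_UUID = "00000000-0000-0000-0000-000000000000"

SAMPLE_XML = """<cliOutput>
  <volInfo><volumes><volume><bricks>
    <brick uuid="d2f1c3a4-9b8e-4c6d-a1f0-5e7b9c2d4e6f">
      <name>server1:/bricks/brick0</name>
    </brick>
    <brick uuid="7c3e5a1b-2d4f-4e6a-b8c0-1f3d5e7a9b2c">
      <name>server2:/bricks/brick1</name>
    </brick>
  </bricks></volume></volumes></volInfo>
</cliOutput>"""

def bricks_with_zero_uuid(xml_text):
    """Return brick names whose uuid is missing or all zeros."""
    root = ET.fromstring(xml_text)
    return [brick.findtext("name")
            for brick in root.iter("brick")
            if brick.get("uuid", ZERO_UUID) == ZERO_UUID]

print(bricks_with_zero_uuid(SAMPLE_XML))  # [] when every uuid is non-zero
```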
* Quota: Fixing quota libs in glusto (Sanoj Unnikrishnan, 2018-06-13, 4 files, +32/-32)
    - Added quota_libs.py with the quota_validate library
    - Removed a redundant function in quota_ops
    - Changed the naming in quota_ops to be consistent and intuitive w.r.t. the CLI
    Change-Id: I4faf448ea308c9e04b548d6174d900fcf56978a5
    Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
* Verifying task type and status in vol status (Rajesh Madaka, 2018-06-12, 1 file, +192/-0)
    -> Create volume
    -> Start rebalance
    -> Check task type in volume status
    -> Check task status string in volume status
    -> Check task type in volume status xml
    -> Check task status string in volume status xml
    -> Start remove-brick operation
    -> Check task type in volume status
    -> Check task status string in volume status
    -> Check task type in volume status xml
    -> Check task status string in volume status xml
    Change-Id: I9de53008e19f1965dac21d4b80b9b271bbcf53a1
    Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Setting "data-self-heal", "metadata-self-heal", "entry-self-heal" volume options should not be applicable to glfsheal (Vitalii Koriakov, 2018-06-12, 1 file, +272/-0)
    Change-Id: I019b0299dd7f907446e85f6de0186fb61a3ce1f1
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Added an afr test case: test gfid heal on 1x3 vol (root, 2018-06-12, 1 file, +188/-0)
    Description: Test case which checks the gfid self-heal of a file on a
    1x3 replicated volume.
    Change-Id: I3bad7c16435bd99fa3f5b812c65970bebdbd18ac
    Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
* afr: test to resolve split-brain using CLI (source-brick) (root, 2018-06-12, 1 file, +191/-0)
    Description: This test case runs split-brain resolution CLIs on a file
    in gfid split-brain on a 1x2 volume.
    1. Kill 1 brick
    2. Create a file at the mount point
    3. Bring back the killed brick
    4. Kill the other brick
    5. Create the same file at the mount point
    6. Bring back the killed brick
    7. Try heal from the CLI and check if it gets completed
    Change-Id: Iddd386741c3c672cda90db46facd7b04feaa2181
* Heal command should say that triggering heal is unsuccessful as some bricks may be down (Vitalii Koriakov, 2018-06-11, 1 file, +220/-0)
    Change-Id: I0515680e2cbe582917f0034461b305a33b75ca94
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Quota : Test case for quota on single brick volume (Vinayak Papnoi, 2018-06-11, 1 file, +159/-0)
    This test case covers the quota functionality for a volume with a
    single brick (1x1). The `quota help` CLI command is also validated here.
    Change-Id: I772f4646e2229c21f4547122410633715ef47668
    Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
* Quota : Test case to validate limits on various directories [breadth] (Vinayak Papnoi, 2018-06-10, 1 file, +185/-0)
    Verifying directory quota functionality with respect to the
    limit-usage option. Set limits on various directories [breadth] and
    check the quota list of all the directories.
    * Enable quota
    * Create 10 directories and set a limit of 1GB on each directory
    * Perform a quota list operation
    * Create some random amount of data inside each directory
    * Perform a quota list operation
    Change-Id: I3ffc5b99018365eca21ecbdd55d6d9c176f36d6f
    Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Verifying max supported op-version and vol info file (Rajesh Madaka, 2018-06-09, 1 file, +129/-0)
    -> Create volume
    -> Get the current op-version
    -> Get the max supported op-version
    -> Verify that the vol info file exists on all servers
    -> Get the version number from the vol info file
    -> If the current op-version is less than the max op-version, set the
       current op-version to the max op-version
    -> After the vol set operation, verify that the version number in the
       vol info file increased by one
    -> Verify that the current op-version and max op-version are the same
    Change-Id: If56210a406b15861b0a261e29d2e5f45e14301fd
    Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
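The bookkeeping these steps verify, i.e. that a successful `volume set` bumps the `version=` counter in glusterd's vol info file, can be sketched as a toy model (the key name follows glusterd's info-file layout; the sample contents are made up):

```python
# Toy model: every successful `gluster volume set` increments the
# `version=` line in /var/lib/glusterd/vols/<vol>/info by one.

SAMPLE_INFO = "type=2\ncount=4\nversion=7\n"

def read_version(info_text):
    """Extract the integer after `version=` from vol info file text."""
    for line in info_text.splitlines():
        if line.startswith("version="):
            return int(line.split("=", 1)[1])
    raise ValueError("no version key in vol info file")

def bump_version(info_text):
    """Simulate the bump a successful volume-set operation performs."""
    v = read_version(info_text)
    return info_text.replace(f"version={v}", f"version={v + 1}")

before = read_version(SAMPLE_INFO)
after = read_version(bump_version(SAMPLE_INFO))
print(before, "->", after)  # 7 -> 8
```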
* Test OOM Kill on client when heal is in progress on 1*3 arbiter volume (Vitalii Koriakov, 2018-06-08, 1 file, +201/-0)
    Change-Id: I47d0ac4afac44442bd877243c45581df83c6a2e7
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Automatic heal should be triggered and all files must be available (Vitalii Koriakov, 2018-06-08, 1 file, +269/-0)
    Change-Id: I734f85671f17e9a7e9d863aa3a0ef8f632182d48
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Test conservative merge of files (heal command and 1x2 replicate volume) (Vitalii Koriakov, 2018-06-07, 1 file, +325/-0)
    Change-Id: I90c51f0e945cfe85e60bc97e1ed3b617a0a7eba5
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Test brick status when quorum not met (Rajesh Madaka, 2018-06-07, 1 file, +151/-0)
    -> Create volume
    -> Enable server quorum on volume
    -> Stop glusterd on all nodes except the first node
    -> Verify the brick status of nodes where glusterd is running with the
       default quorum ratio (51%)
    -> Change cluster.server-quorum-ratio from the default to 95%
    -> Start glusterd on all servers except the last node
    -> Verify the brick status again
    Change-Id: I249574fe6c758e6b8e5bea603f36dcf8698fc1de
    Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
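The behaviour these steps probe can be approximated with a tiny predicate (a simplification of glusterd's server quorum, not glusterfs code; the exact boundary handling in glusterd may differ):

```python
# Toy model of server quorum: a node keeps its bricks online only while
# the fraction of reachable glusterd peers meets
# cluster.server-quorum-ratio (a percentage).

def bricks_online(active_nodes, total_nodes, quorum_ratio_percent):
    """True if server quorum is met, so bricks stay up."""
    return active_nodes * 100 >= quorum_ratio_percent * total_nodes

# 6-node cluster, glusterd running only on node 1, default 51% ratio:
print(bricks_online(1, 6, 51))   # False: bricks go offline
# glusterd up on 5 of 6 nodes with the ratio raised to 95%:
print(bricks_online(5, 6, 95))   # False: 5/6 is about 83%, below 95%
print(bricks_online(6, 6, 95))   # True
```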
* Manual heal command should trigger heal of the file (Vitalii Koriakov, 2018-06-06, 1 file, +213/-0)
    Change-Id: Ie685a2e60c19bc096c54034a6b2f7d4380441f3d
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Adding basic test case for EC. (Sunil Kumar Acharya, 2018-06-06, 2 files, +73/-0)
    Change-Id: I389aaa59db10b40d3ec117b8bb23d76fad29b41b
    Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
    Signed-off-by: Ashish Pandey <aspandey@redhat.com>
* Setting volume option when quorum is not met (Bala Konda Reddy M, 2018-06-06, 1 file, +129/-0)
    Change-Id: I2a3427cb9165cb2b06a1c72962071e286a65e0a8
    Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
    Signed-off-by: Bala Konda Reddy Mekala <bmekala@redhat.com>
* Test to check vol stop, start, reset options when sub-dirs are mounted. (Jilju Joy, 2018-06-05, 2 files, +173/-0)
    Change-Id: Ic9dc93a8a433e7630c669c924b54916f4b838809
* Adding testcase: Rebalance with hidden files (Prasad Desala, 2018-06-05, 1 file, +204/-0)
    If the dataset has hidden files, and bricks are added and rebalance is
    triggered, rebalance should be able to pick the hidden files for
    migration and should migrate them without any issues, and the checksum
    should match post rebalance.
    Changes:
    - Minor fixes
    - Improved imports
    - Removed logs of rebalance status
    Change-Id: I31c5859e112ad3a6efef7e008995090afda677cc
    Signed-off-by: Prasad Desala <tdesala@redhat.com>
* Snapshot: Status and Info Invalid cases (srivickynesh, 2018-06-04, 1 file, +137/-0)
    Change-Id: I70dc1930dabebc8e0af4c83c8b70842f92f9865a
    Signed-off-by: srivickynesh <sselvan@redhat.com>
* glusterd test cases: Peer probe from a standalone node to existing gluster (Gaurav Yadav, 2018-06-01, 1 file, +240/-0)
    Change-Id: I5ffd826bd375956e29ef6f52913fa7dabf8bc7ce
    Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
* Snapshot: Testcase to validate snapshot mount (srivickynesh, 2018-05-31, 1 file, +207/-0)
    Change-Id: I3af948e10673737d06e352e3bb8a1bec58ea3c55
    Signed-off-by: srivickynesh <sselvan@redhat.com>
* The purpose of this test case is to ensure USS validation (vivek das, 2018-05-31, 1 file, +225/-0)
    Ensures the .snaps folder is read-only and lists all the snapshots and
    their contents. Also ensures that a deactivated snapshot does not get
    listed.
    Change-Id: I25ad451986c861038450c0369e6cbc130b8945bb
    Signed-off-by: vivek das <vdas@redhat.com>
    Signed-off-by: srivickynesh <sselvan@redhat.com>
* Adding Test case test_glustershd_on_newly_probed_server (Vijay Avuthu, 2018-05-31, 1 file, +219/-0)
    Description: Test script to verify the glustershd process on a newly
    probed server.
    * Check glustershd process - only 1 glustershd process should be running
    * Add a new node to the cluster
    * Check glustershd process - only 1 glustershd process should be
      running on all servers, including the newly probed server
    * Stop the volume
    * Add another node to the cluster
    * Check glustershd process - no glustershd process should be running
      on any server, including the newly probed server
    * Start the volume
    * Check glustershd process - only 1 glustershd process should be
      running on all servers, including the newly probed server
    Change-Id: I6142000ee8322b7ab27dbcd27e05088d1c8be806
    Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Rebalance: starting and stopping the volume while rebalance is in progress (Prasad Desala, 2018-05-23, 1 file, +224/-0)
    This testcase verifies that volume stop should not be allowed while
    rebalance is in progress, and that it throws an appropriate error.
    Change-Id: I24ffc263f26eb99fff774cb851ac98ac6fed2bee
    Signed-off-by: Prasad Desala <tdesala@redhat.com>
* Add test rebalance positive test - add brick command (Miroslav Asenov, 2018-05-23, 1 file, +185/-0)
    Changes:
    - Updated test case: add bricks to volume with IO operations
    Change-Id: I61e83c0d58d65783da4e54bc2a8a32a35b515c07
* Add test case DHT Distribution based on key value (Miroslav Asenov, 2018-05-23, 1 file, +219/-0)
    Changes:
    - Add more log end points
    - Include dht glusto's functions
    - Fix count log parameters
    - Convert docstring to google style docstring on helper function
    - Renamed test class
    Change-Id: Ib919e86c8c79e8bdad4007bc9d77d76b031ecb3d
* Quota: Negative test cases for Quota (venkata edara, 2018-05-23, 1 file, +145/-0)
    This testcase enables/disables quota with negative inputs and also
    tries to set timeouts to a huge value; all cases have to return false.
    Change-Id: I3996a38a728c20199ef969d03ff9e11dc774ee6c
    Signed-off-by: venkata edara <redara@redhat.com>
* Quota : Test case for quota list (Vinayak Papnoi, 2018-05-17, 1 file, +153/-0)
    This test case covers the quota list functionality with and without
    the path. Both outputs for quota list, with and without the path,
    should match.
    Change-Id: I251cae896558e09bee3d4a689f8287df2b0bb585
    Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
* TestCase Testing VolumeType change from replicated to Arbiter volume along with volume operations add-brick, remove-brick, replace-brick post volume type change (root, 2018-05-11, 1 file, +394/-0)
    Change-Id: I44a1ff6fab3228736ae9c83fe67b16c2e8c40adc
    Signed-off-by: Karan Sandha <ksandha@redhat.com>
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Adding test_glustershd_on_all_volume_types test case (Vijay Avuthu, 2018-05-11, 1 file, +223/-2)
    Description: Test script to verify that the glustershd server vol file
    has entries only for replicate volumes.
    * Create multiple volumes and start all volumes
    * Check the glustershd processes - only 1 glustershd should be listed
    * Check the glustershd server vol file - should contain entries only
      for the replicated volumes involved
    * Add bricks to the replicate volume - it should convert to
      distributed-replicate
    * Check the glustershd server vol file - the newly added bricks should
      be present
    * Check the glustershd processes - only 1 glustershd should be listed
    Change-Id: Ie110a0312e959e23553417975aa2189ed01be6a4
    Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
* Adding test case : test_client_side_quorum_with_fixed_for_cross3 (Vijay Avuthu, 2018-05-11, 1 file, +651/-0)
    Description: Test script to verify client-side quorum with
    quorum-type fixed for a cross-3 volume.
    * Disable the self-heal daemon
    * Set cluster.quorum-type to fixed
    * Start I/O (write and read) from the mount point - must succeed
    * Bring down brick1
    * Start I/O (write and read) - must succeed
    * Bring down brick2
    * Start I/O (write and read) - must succeed
    * Set cluster.quorum-count to 1
    * Start I/O (write and read) - must succeed
    * Set cluster.quorum-count to 2
    * Start I/O (write and read) - read must pass, write will fail
    * Bring brick1 back online
    * Start I/O (write and read) - must succeed
    * Bring brick2 back online
    * Start I/O (write and read) - must succeed
    * Set cluster.quorum-type to auto
    * Start I/O (write and read) - must succeed
    * Bring down brick1 and brick2
    * Start I/O (write and read) - read must pass, write will fail
    * Set cluster.quorum-count to 1
    * Start I/O (write and read) - read must pass, write will fail
    * Set cluster.quorum-count to 3
    * Start I/O (write and read) - read must pass, write will fail
    * Set quorum-type to none
    * Start I/O (write and read) - must succeed
    Change-Id: Ic159aee3ca80f6a584a46e2ac7986f4007346968
    Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
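The write-allowed rule these steps exercise can be sketched as a small predicate (a toy simplification of AFR client-side quorum, not glusterfs code; it ignores auto-quorum's special handling of the first brick):

```python
# Toy model of client-side quorum for writes:
#   quorum-type=fixed : writes need at least quorum-count bricks up
#   quorum-type=auto  : writes need a majority of the replica set
#   quorum-type=none  : writes are always allowed

def writes_allowed(bricks_up, replica_count, quorum_type, quorum_count=None):
    if quorum_type == "none":
        return True
    if quorum_type == "fixed":
        return bricks_up >= quorum_count
    if quorum_type == "auto":
        return bricks_up * 2 > replica_count
    raise ValueError(f"unknown quorum type: {quorum_type}")

# replica 3, two bricks down, quorum-count 1 -> writes still succeed
print(writes_allowed(1, 3, "fixed", quorum_count=1))  # True
# replica 3, two bricks down, quorum-count 2 -> writes fail
print(writes_allowed(1, 3, "fixed", quorum_count=2))  # False
# replica 3, one brick up, quorum-type auto -> no majority, writes fail
print(writes_allowed(1, 3, "auto"))  # False
```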
* Removing files when one of the bricks is in offline state (Karan Sandha, 2018-05-11, 1 file, +225/-0)
    Change-Id: I2e4cf5c4280351d7cfaa25ffb53cd081227d7e9e
    Signed-off-by: Karan Sandha <ksandha@redhat.com>
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Snapshot: validate Snapshot activation and deactivation (vivek das, 2018-05-09, 1 file, +170/-0)
    The purpose of this test case is to validate snapshot activation and
    deactivation: pre-activate, after activate, and after deactivate.
    Change-Id: I7b502922171247954801feefe0409b1751a1d07f
    Signed-off-by: vivek das <vdas@redhat.com>
    Signed-off-by: srivickynesh <sselvan@redhat.com>
* Fix replace brick failure and peer probe failure (Nigel Babu, 2018-05-09, 2 files, +12/-15)
    The replace brick setUp function had a syntax error and a wrong assert.
    The peer probe tearDown method did not work in a situation where the
    test failed, leading to cascading failures in other tests.
    Change-Id: Ia7e0d85bb88c0c9bc6d489b4d03dc7610fd4f129
* Fix assertIsNotNone call in glusterd tests (Nigel Babu, 2018-05-08, 1 file, +1/-2)
    Change-Id: I774f64e2f355e2ca2f41c7a5c472aeae5adcd3dc
* Test MetaData Self-Heal (heal command) (Vitalii Koriakov, 2018-05-08, 1 file, +461/-115)
    Change-Id: I32fefdab769e5a361e4dcb5f1328b2c8da2e4f1a
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* selfheal daemon cases (Karan Sandha, 2018-05-08, 1 file, +116/-5)
    Change-Id: I24e2baddc4f5cdb2c9ae0ab6b9020b2eb9b42a05
    Signed-off-by: Karan Sandha <ksandha@redhat.com>
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Replace brick with valid brick path and non-existing brick path (Bala Konda Reddy M, 2018-05-08, 1 file, +114/-0)
    Change-Id: Ic6cb4e96d8f14558c0f9d4eb5e24cbb507578f4c
    Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
* Glusterd Testcase: Setting lower gluster opversion (Bala Konda Reddy Mekala, 2018-05-07, 1 file, +89/-0)
    In this testcase, setting the lowest Gluster op-version and an invalid
    op-version are validated.
    Change-Id: Ie45859228e35b7cb171493dd22e30e2f26b70631
    Signed-off-by: Bala Konda Reddy Mekala <bmekala@redhat.com>
* test : snapshot creation during rebalance (Sunny Kumar, 2018-05-07, 1 file, +251/-0)
    The purpose of this test is to validate snapshot creation during
    rebalance.
    Change-Id: I8c58e0dd6571e648c1410342db988b95b82aaa1b
    Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
    Signed-off-by: srivickynesh <sselvan@redhat.com>
* Validate activate on create set command for snapshot. (vivek das, 2018-05-07, 1 file, +174/-0)
    Change-Id: I418d3e1900acb8e79e4063a191164866544acd5d
    Signed-off-by: vivek das <vdas@redhat.com>
    Signed-off-by: srivickynesh <sselvan@redhat.com>
* test : snapshot max-hard-limit and max-soft-limit (Sunny Kumar, 2018-05-07, 1 file, +280/-0)
    The purpose of this test is to validate the snapshot max-hard-limit
    and max-soft-limit options.
    Change-Id: I63e661549977251104c120d7b25422ff57fdadeb
    Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
    Signed-off-by: srivickynesh <sselvan@redhat.com>