path: root/tests
Commit message — Author, Date (Files changed, Lines -/+)
* [lib]: Adding the default configs for arbiter and dist-arbiter — sayaleeraut, 2019-11-29 (1 file, -0/+11)
  Adding the default configurations for the arbiter and distributed-arbiter volume types, as they were missing from gluster_base_class.py. Also adding the arbiter and distributed-arbiter configuration to glusto_tests_config.yml.
  Change-Id: Ic078505975ff1a1171a4bc6ee6ad2c67f0fb45f1
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [py2to3] Refactor gluster_base_class.py — Valerii Ponomarov, 2019-11-28 (1 file, -2/+2)
  The following changes were implemented:
  - Delete unused imports and place the used ones in alphabetical order. Imports are split into 3 groups: built-ins, third parties and local modules/libs.
  - Make changes to support py3 in addition to py2.
  - Minimize the number of code lines while keeping the same behaviour and improving readability.
  - Add the possibility to call 'bound' (cls) methods using the 'get_super_method' staticmethod from the base class. Before, it was possible to call only unbound (self) methods.
  - Update the 'test_add_brick.py' module as a PoC for running base class bound methods in both py2 and py3. This module is now py2/3 compatible.
  Change-Id: I1b66b3a91084b2487c26bec8763ab2b4e12ac482
  Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
* [py2to3] Add a method to the base class for proper calling of its methods — Valerii Ponomarov, 2019-11-22 (1 file, -3/+2)
  Lots of test classes are wrapped by the 'runs_on' decorator. Under py3 this decorator replaces the original test class with a copy whose parent class is the original test class. That makes the following approach impossible in py3:
      super(SomeClass, some_class_instance).some_method()
  even though it is the py2/3-compatible way of calling a parent class's methods: the call falls into unexpected recursion. So, add 'get_super_method' to the base class, which detects this situation and returns the proper method of the proper parent class. Also, fix the test class located in the 'glusterd/test_peer_status.py' module as a proof of concept. With this change the 'test_peer_probe_status' test case becomes completely py2/3 compatible.
  Example of the new method's usage:
      @runs_on([['distributed'], ['glusterfs']])
      class TestDecoratedClass(GlusterBaseClass):
          ...
          def setUp(self):
              self.get_super_method(self, 'setUp')()
          ...
  This approach must be used instead of the existing calls to the 'im_func' function if we want to support both python2 and python3 at once.
  Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
  Change-Id: I23f4462b64f9d4dd90812273f08fb756d073ab76
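  The recursion mentioned above can be reproduced outside the framework. A minimal, self-contained sketch follows; 'fake_runs_on' is a simplified stand-in for the real 'runs_on' decorator, not its actual implementation:
      def fake_runs_on(cls):
          # Like 'runs_on', return a generated subclass; the module-level
          # name of the decorated class gets rebound to this subclass.
          return type(cls.__name__, (cls,), {})

      class BaseStub(object):
          def setUp(self):
              print("base setUp")

      @fake_runs_on
      class TestSomething(BaseStub):
          def setUp(self):
              # 'TestSomething' now refers to the generated subclass, so super()
              # resolves back to this inherited setUp -> infinite recursion.
              super(TestSomething, self).setUp()

      TestSomething().setUp()   # RecursionError on py3 (RuntimeError on py2)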
* [py2to3] Replace usage of the ".iteritems()" attr with ".items()" — Valerii Ponomarov, 2019-11-21 (1 file, -2/+2)
  The dict attribute called "iteritems()" is not supported in py3, so replace its usage with the similar attribute called "items()".
  Change-Id: I130b7f67f0a2d5da5ed6c3d792f5ff024ba148f4
  Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
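  For illustration, a py2-only iteration and its py2/3-compatible replacement (the dict contents here are made up):
      options = {'cluster.brick-multiplex': 'enable'}

      # py2 only:
      #     for key, value in options.iteritems():
      #         ...
      # works on both py2 and py3:
      for key, value in options.items():
          print(key, value)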
* [Snapshot] Fix teardown for test_snap_scheduler_status — Vinayak Papnoi, 2019-11-21 (1 file, -57/+83)
  The tearDown function was incomplete: since the test case enables shared storage, it needs to be disabled as part of the teardown. This fix adds the necessary tearDown steps along with some cosmetic changes.
  Change-Id: Id421c840a1c7606ecf185c9520ca436d47911f45
  Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
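  A minimal sketch of the kind of teardown step described above (not the actual patch); it assumes glusto's g.run helper and drives the standard gluster CLI, although glustolibs also ships dedicated shared-storage helpers:
      from glusto.core import Glusto as g

      def disable_shared_storage(mnode):
          # --mode=script skips the CLI's interactive y/n confirmation.
          cmd = ("gluster --mode=script volume set all "
                 "cluster.enable-shared-storage disable")
          ret, _, _ = g.run(mnode, cmd)
          return ret == 0

      # in tearDown():
      #     if not disable_shared_storage(self.mnode):
      #         raise Exception("Failed to disable shared storage")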
* [TC] Resubmitting testcase test_glusterd_quorum_validation after bug fix — kshithijiyer, 2019-11-21 (1 file, -0/+301)
  As the below-mentioned bug is fixed, resubmitting the testcase:
  https://bugzilla.redhat.com/show_bug.cgi?id=1690753
  Test case:
  -> Create two volumes and start them, then stop the second volume
  -> Set the server quorum and set the ratio to 90
  -> Stop glusterd on one of the nodes, so the quorum is not met
  -> Peer probing a new node should fail
  -> Volume stop will fail
  -> Volume delete will fail
  -> Volume reset will fail
  -> Start glusterd on the node where it was stopped
  -> Volume stop, start and delete will succeed once quorum is met
  Change-Id: Ic9dea44364d4cb84b6170eb1f1cfeff1398b7a9b
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Adding test case: test_no_glustershd_with_distribute — Milind Waykole, 2019-11-20 (1 file, -0/+176)
  Change-Id: I12b5586bdcef128df64fcd8a0ba80f193395f313
  Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>
  Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
  Signed-off-by: Milind Waykole <milindwaykole96@gmail.com>
* [TC] Test quota limits are honored on rebalance + heal — kshithijiyer, 2019-11-18 (1 file, -0/+196)
  Test case:
  * Enable quota on the volume.
  * Set the hard and soft timeouts to zero.
  * Create some files and directories from the mount point so that the limits are reached.
  * Perform an add-brick operation on the volume.
  * Start rebalance on the volume.
  * While rebalance is running, kill one of the bricks of the volume and start it again after a while.
  * While rebalance + self-heal is in progress, create some more files and directories from the mount point until the limit is hit.
  Change-Id: Ic7d2ac92b4e132ab1018242c17bed5e888e86cf3
  Co-authored-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
  Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
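  A hedged sketch of the quota setup described in the first steps above, using the standard gluster quota CLI through glusto's g.run (the host and volume names are placeholders):
      from glusto.core import Glusto as g

      mnode, volname = "server1.example.com", "testvol"   # placeholders
      for cmd in ("gluster volume quota %s enable" % volname,
                  "gluster volume quota %s soft-timeout 0" % volname,
                  "gluster volume quota %s hard-timeout 0" % volname,
                  "gluster volume quota %s limit-usage / 1GB" % volname):
          ret, _, err = g.run(mnode, cmd)
          assert ret == 0, "command failed: %s (%s)" % (cmd, err)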
* [TC] Testcase to test that statvfs calls honor quota limits — Sanoj Unnikrishnan, 2019-11-18 (1 file, -0/+120)
  Test that statvfs calls return the appropriate available size with quota.
  * Enable quota
  * Save the result from the statvfs call
  * Set a quota limit of 1 GB on the root of the volume
  * Validate that the statvfs call honors quota
  * Remove the quota limit from the volume
  * Validate that the statvfs call reports the old value of available space
  Change-Id: I5f6271e0acdba13d483eb321f62ca9fdc5360859
  Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
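  For reference, the available size a client sees can be read with the standard library; a minimal sketch (the mount path is a placeholder):
      import os

      def available_bytes(mount_point):
          """Free space visible to a non-root user at mount_point."""
          st = os.statvfs(mount_point)
          return st.f_bavail * st.f_frsize

      # With a 1 GB quota on the volume root, the value reported on the mount
      # should drop to roughly 1 GB and return to the original size once the
      # quota limit is removed.
      print(available_bytes("/mnt/testvol"))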
* [TC] Posix Compliance — kshithijiyer, 2019-11-04 (1 file, -3/+94)
  Adding a test case to check POSIX compliance for NFS Ganesha.
  Change-Id: I8188fba9245cfe27ae30b7818cfe7d2b624e9d87
  Signed-off-by: Ambarish Soman <asoman@redhat.com>
  Signed-off-by: Arjun Sharma <arjsharm@redhat.com>
* Test to verify failover when subdirectories are mounted on fuse clients — Jilju Joy, 2019-11-04 (1 file, -0/+196)
  Change-Id: Iec8471de0add8cb6eaf6c80fb24c631e992aad4d
* [fix] Turning off only client-side heal options at the start of the test — Milind Waykole, 2019-10-23 (1 file, -8/+10)
  Earlier in the testcase we were turning off shd, which is not correct; we have to turn off only the client-side heal options mentioned below:
  - metadata-self-heal
  - entry-self-heal
  - data-self-heal
  After renaming files we have to turn these options back on while doing a lookup from the client.
  Change-Id: I8c76abb8e79620c412e5991f5d8255b6b2a850e8
  Signed-off-by: Milind Waykole <milindwaykole96@gmail.com>
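  A minimal sketch of toggling only the client-side heal options, assuming glustolibs' set_volume_options(mnode, volname, options) helper keeps its usual signature (mnode and volname are placeholders):
      from glustolibs.gluster.volume_ops import set_volume_options

      mnode, volname = "server1.example.com", "testvol"   # placeholders
      options = {'metadata-self-heal': 'off',
                 'entry-self-heal': 'off',
                 'data-self-heal': 'off'}
      # Note: the self-heal daemon (shd) itself is intentionally left untouched.
      assert set_volume_options(mnode, volname, options), (
          "Failed to disable client-side heal options")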
* nfs-ganesha: Modifying test case based on latest libraries — Jilju Joy, 2019-10-23 (1 file, -8/+58)
  Change-Id: I2afd326fe0df1959a8fd20bc7325159688c1a73e
  Signed-off-by: Jilju Joy <jijoy@redhat.com>
* TC: test_add_brick_while_remove_brick_is_in_progress — Prasad Desala, 2019-10-20 (1 file, -1/+10)
  An add-brick command failure was leaving behind the directories created on the backend bricks, which resulted in failures of the subsequent cases. Added some changes to clean up the bricks.
  Change-Id: I108efbcaef2010f6fd52c334446059f96fff3741
  Signed-off-by: Prasad Desala <tdesala@redhat.com>
* Test rebalance operation when quorum not met — Rajesh Madaka, 2019-10-16 (1 file, -0/+161)
  -> Create a volume
  -> Stop the volume
  -> Enable server quorum
  -> Start the volume
  -> Set the server quorum ratio to 95%
  -> Stop glusterd on any one of the nodes
  -> Perform the rebalance operation
  -> Check gluster volume status
  -> Start glusterd
  Change-Id: I3bb42a83414dbcabdc61178e11d584eaf90c3b40
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
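  A hedged sketch of the quorum setup from the steps above, driving the gluster CLI through glusto's g.run (host and volume names are placeholders):
      from glusto.core import Glusto as g

      mnode, volname = "server1.example.com", "testvol"   # placeholders

      g.run(mnode, "gluster volume set %s cluster.server-quorum-type server" % volname)
      g.run(mnode, "gluster volume set all cluster.server-quorum-ratio 95%")
      g.run("server2.example.com", "systemctl stop glusterd")   # quorum no longer met

      # The test then observes how rebalance and volume status behave
      # while quorum is not met, and finally restores glusterd.
      g.run(mnode, "gluster volume rebalance %s start" % volname)
      g.run(mnode, "gluster volume status %s" % volname)
      g.run("server2.example.com", "systemctl start glusterd")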
* [Fix] Adding count++ to prevent tests from running in an infinite loop — yinkui, 2019-10-15 (2 files, -0/+4)
  In a few glusterd testcases the count increment is missing, due to which the testcase runs in an infinite loop. Fixing that and sending the patch.
  Change-Id: I56a355f6ea3ae79231e09d7aee80031da3ebec52
  Signed-off-by: yinkui <13965432176@163.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
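  An illustration of the bug class being fixed; the polled condition is a placeholder, not the actual check from the affected testcases:
      from time import sleep

      def condition_met():
          """Placeholder for whatever state the affected testcase polls."""
          return False

      count = 0
      while count < 60:
          if condition_met():
              break
          sleep(2)
          count += 1   # the missing increment; without it the loop never terminates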
* Enabling client-side heal, as client-side heal is disabled by default in RHGS 3.5 — milindw96, 2019-10-15 (1 file, -5/+42)
  Change-Id: I7f8769defd34d55d8eec720c40ed55e69523f917
  Signed-off-by: Anees Patel <anepatel@redhat.com>
  Signed-off-by: milindw96 <milindwaykole96@gmail.com>
* Modifying ACL test file — Arjun Sharma, 2019-10-10 (1 file, -8/+50)
  Since NfsGaneshaVolumeBaseClass has been removed, the ACL test needs to be modified to use the alternative to the above-mentioned class.
  Change-Id: I398dfef6dd334a3e3f3871d44705af312d81318a
  Signed-off-by: Arjun Sharma <arjsharm@redhat.com>
* fuse-subdir: Reverting the change made in patch 22019 — Jilju Joy, 2019-10-07 (1 file, -3/+4)
  * The issue with the disperse and distributed-disperse volume types is now fixed.
  * Reference: Bugzilla 1663375
  * Using the mount object instead of the clients list for setting authentication
  Change-Id: I914cee7fb790dc65e947e0b6db40d02e23575e65
  Signed-off-by: Jilju Joy <jijoy@redhat.com>
* Using set_acl in the testcase as existing functions are removed — Bala Konda Reddy M, 2019-10-03 (1 file, -3/+5)
  Earlier, two library functions, enable_acl and disable_acl, were implemented in nfs_ganesha_ops; later, with changes to the NFS Ganesha base class, both functions were removed and merged into the single function set_acl.
  Change-Id: I5456adeeffa49c35a5ea19c8d11272f91ec4bdbf
  Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
* [Fix] Fixing string formatting errors and client heal errors — kshithijiyer, 2019-09-19 (1 file, -37/+51)
  Change-Id: Ifef2ffe022accf59edcbc949c505f47931b19fe4
  Signed-off-by: Anees Patel <anepatel@redhat.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* nfs-ganesha: Modifying test case based on latest library — Jilju Joy, 2019-09-19 (1 file, -45/+94)
  Change-Id: I5372caf852b41e127c46f6afa697435dcde9dbf6
  Signed-off-by: Jilju Joy <jijoy@redhat.com>
* nfs-ganesha: Test case to verify replace brick operation while IO is in progress — Jilju Joy, 2019-09-19 (1 file, -0/+176)
  Change-Id: I40e36e1d4d404fcb9709fa9e50d33cad77172350
  Signed-off-by: Jilju Joy <jijoy@redhat.com>
* nfs_ganesha: Test case to validate add brick operation while IO is in progress — Jilju Joy, 2019-09-19 (1 file, -0/+185)
  Steps:
  1. Start IO on mount points
  2. Add bricks to expand the volume
  3. Start rebalance and wait for its completion
  4. Validate IOs
  Change-Id: I97c09c9fc226afeff1446d225959730715f89aef
  Signed-off-by: Jilju Joy <jijoy@redhat.com>
* nfs-ganesha: Testcase to create and export new volume while IOs are in progress — Jilju Joy, 2019-09-19 (1 file, -0/+250)
  Steps:
  1. Start IO on mount points
  2. Create another volume 'volume_new'
  3. Export volume_new through nfs-ganesha
  4. Mount the volume on clients
  Change-Id: I2c4fe59e9a85e6668672d31a0a6c27d11c7f03f8
  Signed-off-by: Jilju Joy <jijoy@redhat.com>
* Cthon test case for NFS Ganesha — Arjun Sharma, 2019-09-12 (1 file, -0/+121)
  Change-Id: I3fb826bd0ecbe46bee4b9f8594b23f16921adbec
  Signed-off-by: Arjun Sharma <arjsharm@redhat.com>
* Add 7 negative scenarios for EC volume create — ubansal, 2019-09-11 (1 file, -1/+73)
  Covers the redundancy count being negative, the disperse count being negative (in different permutations), and the disperse data count being equal to the disperse count.
  Change-Id: I761851c64833256532464f56a9a78e20ceb8a4e1
  Signed-off-by: ubansal <ubansal@redhat.com>
* Fix AFR test case tearDown and library import — Vinayak Papnoi, 2019-09-11 (1 file, -3/+16)
  The test case 'test_client_side_quorum_with_fixed_validate_max_bricks' did not have a tearDown part, so the volume options set inside the test case were not being reset to default. The library function 'set_volume_options' was also being imported from the wrong library. This fix corrects the import and adds the tearDown steps.
  Change-Id: Ic57494e7a7e8a25303b7979f98cc2dfbc9a7d7b6
  Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
* Checking quota daemon and self-heal daemon processes after reboot — kshithijiyer, 2019-09-11 (1 file, -0/+167)
  Each node should have one self-heal daemon and one quota daemon up and running, which means the total number of self-heal and quota daemon processes that should be up and running is (number of nodes * 2). In the code, the check is that the count equals (number of nodes * 2).
  Change-Id: I79d40467edc255a479a369f19a6fd1fec9111f53
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
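  A hedged sketch of that counting logic, using glusto's g.run to count glustershd and quotad processes on each node (the servers list is a placeholder):
      from glusto.core import Glusto as g

      def count_heal_and_quota_daemons(servers):
          """Count glustershd and quotad processes across all nodes."""
          total = 0
          for server in servers:
              for daemon in ("glustershd", "quotad"):
                  cmd = "ps -ef | grep %s | grep -v grep | wc -l" % daemon
                  _, out, _ = g.run(server, cmd)
                  total += int(out.strip())
          return total

      servers = ["server1.example.com", "server2.example.com"]   # placeholders
      assert count_heal_and_quota_daemons(servers) == len(servers) * 2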
* [DHT]: Modifying the messages in g.log.info and ExecutionError — sayaleeraut, 2019-09-11 (1 file, -4/+4)
  The ExecutionError msg at line 66 should be "failed to clean-up volume" because it is under tearDownClass. The g.log.info msg at line 123 should be "Checking if gfid xattr of directories is displayed and is same on all the bricks on the server node", as the code below it checks for the gfid xattr on the bricks on the server node and not whether it is displayed on the mount point.
  Change-Id: If4e20e3487a44c1cc7047504d19cc9859424ccd4
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* Do getfattr in brick_path rather than m_point — yinkui, 2019-09-11 (1 file, -2/+3)
  Change-Id: I240ecb0b4a9c99134b7a5cd237a59c2857d0fb7b
  Signed-off-by: yinkui <13965432176@163.com>
* Access self.mnode, so we must get the layout in mount_point/dirpath; also change the log message — yinkui, 2019-09-04 (1 file, -2/+2)
  Change-Id: I361b2c59108b19480906f6dfd49b023ed1eb05cd
  Signed-off-by: yinkui <13965432176@163.com>
* Added a library for daemon reload and fixed the testcase — Bala Konda Reddy M, 2019-09-04 (1 file, -8/+26)
  After changing the log level in the unit file from INFO to DEBUG, perform a daemon reload. Earlier the test ran commands continuously to generate debug messages; instead of doing that, glusterd is restarted on one of the nodes so that during the handshake the logs will be in DEBUG mode. After validation, the unit file is reverted back to INFO and a daemon reload is performed again.
  Change-Id: I8c99407eff2ea98a836f37fc2d89bb99f7eeccb7
  Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
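  A hedged sketch of that reload sequence via glusto's g.run; the unit-file path and the sed expression are assumptions for illustration, not taken from the actual library:
      from glusto.core import Glusto as g

      server = "server1.example.com"                            # placeholder
      unit_file = "/usr/lib/systemd/system/glusterd.service"    # assumed path

      g.run(server, "sed -i 's/INFO/DEBUG/g' %s" % unit_file)
      g.run(server, "systemctl daemon-reload")
      g.run(server, "systemctl restart glusterd")
      # ... validate DEBUG messages in /var/log/glusterfs/glusterd.log ...
      g.run(server, "sed -i 's/DEBUG/INFO/g' %s" % unit_file)
      g.run(server, "systemctl daemon-reload")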
* Fixes: changed triggering heal by getting md5sum — kshithijiyer, 2019-09-04 (1 file, -8/+7)
  Change-Id: I4d056b94b4ea59beee7eb24e7e5d5f65d7256b4a
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [glusterfind] Test case to verify file renames — Vinayak Papnoi, 2019-08-20 (1 file, -0/+231)
  Verifying the glusterfind functionality with renames of files.
  * Create a session on the volume
  * Create various files from the mount point
  * Perform glusterfind pre
  * Perform glusterfind post
  * Check the contents of the outfile
  * Rename the files created from the mount point
  * Perform glusterfind pre
  * Perform glusterfind post
  * Check the contents of the outfile
  The renamed files must be listed.
  Change-Id: Ib7682e86d59f0519b267ec01cda999920a30de86
  Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
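  A hedged sketch of the glusterfind session/pre/post flow named above, run through glusto's g.run with the standard glusterfind CLI (host, volume, session and outfile names are placeholders):
      from glusto.core import Glusto as g

      mnode, volname = "server1.example.com", "testvol"          # placeholders
      session, outfile = "rename_session", "/tmp/glusterfind_renames.txt"

      g.run(mnode, "glusterfind create %s %s" % (session, volname))
      # ... create and later rename files on the mount point ...
      g.run(mnode, "glusterfind pre %s %s %s" % (session, volname, outfile))
      g.run(mnode, "glusterfind post %s %s" % (session, volname))
      _, out, _ = g.run(mnode, "cat %s" % outfile)   # renamed files should be listed here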
* [glusterfind] Test case to verify file deletes — Vinayak Papnoi, 2019-08-20 (1 file, -0/+239)
  Verifying the glusterfind functionality with deletion of files.
  * Create a session on the volume
  * Create various files from the mount point
  * Perform glusterfind pre
  * Perform glusterfind post
  * Check the contents of the outfile
  * Delete the files created from the mount point
  * Perform glusterfind pre
  * Perform glusterfind post
  * Check the contents of the outfile
  The deleted files must be listed.
  Change-Id: I2ee05a2c97983fb521648e372db21c8361a2c835
  Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
* glusterd test cases: Enabling and disabling shared storage — kshithijiyer, 2019-08-19 (1 file, -0/+192)
  Steps:
  -> Enable shared storage
  -> Disable shared storage
  -> Create a volume of any type with the name gluster_shared_storage
  -> Disable the shared storage
  -> Check that the volume created in step 3 is not deleted
  -> Delete the volume
  -> Enable the shared storage
  -> Check that a volume with the name gluster_shared_storage is created
  -> Disable the shared storage
  Change-Id: I1fd29d51e32cadd7978771f4a37ac87176d90372
  Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Snapshot] Change shared volume mount validation function — Vinayak Papnoi, 2019-08-08 (1 file, -65/+51)
  The test case makes use of the 'is_shared_volume_unmounted' function, which is redundant as the same purpose is served by the very similar function 'is_shared_volume_mounted'. This change has been implemented, along with various cosmetic changes.
  Change-Id: I560b464b4bcc436658db49c0a5ed8c7aadacfb6a
  Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
* Fix AFR test case tearDown — Vinayak Papnoi, 2019-08-08 (1 file, -6/+18)
  The test case 'test_client_side_quorum_with_fixed_for_cross3' does not include the tearDown part where the volume options which have been set inside the test case have to be reset to default. This fix includes the necessary tearDown steps along with a few cosmetic changes.
  Change-Id: I86187cef4523492ec97707ff93d0eca365293008
  Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
* Adding test case to mount volume, remove /var/log/glusterfs/ and remount the volume — kshithijiyer, 2019-08-06 (1 file, -0/+60)
  Test case:
  1. Create all types of volumes and start them.
  2. Mount all volumes on clients.
  3. Delete the /var/log/glusterfs folder on the client.
  4. Run IO on all the mount points.
  5. Unmount and remount all volumes.
  6. Check if logs are regenerated or not.
  Change-Id: I4f90d709c4da6e1c73cf95f4075c50aa44cdd811
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Enable client-side heal as client-side healing is disabled in RHGS 3.5 — Anees Patel, 2019-08-05 (1 file, -0/+10)
  Change-Id: I0992b1b9af4e12f4e20d7a5dc184048de104d89d
  Signed-off-by: Anees Patel <anepatel@redhat.com>
* Optimized test case: tests/functional/glusterd/test_remove_brick_scenarios.py — hadarsharon, 2019-08-05 (1 file, -16/+28)
  Improved the I/O performance of the test (writing 100k files to a mounted volume) by applying the following changes:
  1. Modified the touch command to write as many files as possible per process, thus requiring fewer processes to write the 100k files.
  2. Used threads to parallelize the touch processes from within the test, for better efficiency.
  Change-Id: Id969f387f4b7b8e88daf688f7bada950cff2c412
  Signed-off-by: hadarsharon <hsharon@redhat.com>
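  A minimal local sketch of that pattern (batched touch invocations fanned out over a thread pool); in the real test the commands run on a remote client, and the mount point and batch sizes here are placeholders:
      from concurrent.futures import ThreadPoolExecutor
      import subprocess

      MOUNT, TOTAL, BATCH = "/mnt/testvol", 100000, 1000   # placeholders

      def touch_batch(start):
          # One 'touch' invocation creates a whole batch of files.
          names = " ".join("%s/file_%d" % (MOUNT, i)
                           for i in range(start, start + BATCH))
          subprocess.check_call("touch " + names, shell=True)

      with ThreadPoolExecutor(max_workers=10) as pool:
          futures = [pool.submit(touch_batch, s) for s in range(0, TOTAL, BATCH)]
      for f in futures:
          f.result()   # surface any failure from the worker threads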
* Adding test case to enable brick multiplexing, then create, start and stop 3 volumes — kshithijiyer, 2019-07-29 (1 file, -0/+115)
  Test case:
  1. Set cluster.brick-multiplex to enabled.
  2. Create three 1x3 replica volumes.
  3. Start all the three volumes.
  4. Stop the three volumes one by one.
  Change-Id: Ibf3e81e7424d6a429da0aa12efeae7fffd3338f2
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [EC] Fix a script issue to change the permissions of the directory — ubansal, 2019-07-29 (1 file, -2/+7)
  Added a line to change the permissions of the directory so that client-side healing happens for the directory as well.
  Change-Id: If4a24f2dbd6c9c85d4cb2944d1ad4795dbc39adb
  Signed-off-by: ubansal <ubansal@redhat.com>
* Enable client-side heal as client-side healing is disabled by default in RHGS 3.5 — Anees Patel, 2019-07-29 (1 file, -0/+10)
  Change-Id: I500912b5217b675f9fdff4fe1cb518b465de245c
  Signed-off-by: Anees Patel <anepatel@redhat.com>
* [glusterfind] Addition of 'Test' in the class name of the glusterfind create test — Vinayak Papnoi, 2019-07-29 (1 file, -32/+24)
  The class name was missing 'Test' at the beginning. Changing the name of the class from 'GlusterFindCreateCLI' to 'TestGlusterFindCreateCLI'. Removing setUpClass and tearDownClass and replacing them with setUp and tearDown. Changing the variable names to be intelligible.
  Change-Id: Ibb5d9c6ef75ef11960aad35d65c343fa08fc9de1
  Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
* Adding testcase to remove /var/log/glusterfs and mount the volume — kshithijiyer, 2019-07-23 (1 file, -0/+125)
  Test case:
  1. Create all types of volumes.
  2. Start all volumes.
  3. Delete the /var/log/glusterfs folder on the client.
  4. Mount all the volumes one by one.
  5. Run IO on all the mount points.
  6. Check if logs are generated in /var/log/glusterfs/.
  Change-Id: I7a3275aad940116c3506b22b13a670e455d9ef00
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
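  A hedged sketch of step 6: after mounting, verify that client log files were regenerated under /var/log/glusterfs/ (glusto's g.run is assumed and the client hostname is a placeholder):
      from glusto.core import Glusto as g

      client = "client1.example.com"   # placeholder
      ret, out, _ = g.run(client, "ls /var/log/glusterfs/*.log 2>/dev/null | wc -l")
      assert ret == 0 and int(out.strip()) > 0, \
          "Log files were not regenerated under /var/log/glusterfs/"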
* Adding code to not stop glusterd on the mnode — kshithijiyer, 2019-07-08 (1 file, -6/+28)
  This test case was failing in a test run because the mnode was not removed from the self.servers list, due to which there were runs where glusterd was stopped on the mnode while the command was executed on it. Also adding code to check and start glusterd on the node in instances where the test case fails.
  Change-Id: Id203102d3f0ec82af0ac215f0ecaf7ae22b630f5
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Modifying test_enabling_gluster_debug_mode to do more operations — kshithijiyer, 2019-07-02 (1 file, -3/+7)
  While running test_enabling_gluster_debug_mode through Jenkins it was observed that running a volume operation once wasn't generating enough logs by the time the logs were checked, which led to failure of the test case in the Jenkins run. So the log-generating logic is modified to run the operation in a loop, so that a good amount of logs is generated.
  Change-Id: Id7a12c86a04dc86d4856dbe30d945e70e64ea4f7
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Converting text files to markdown files — kshithijiyer, 2019-06-30 (2 files, -21/+20)
  It's better to use markdown files instead of text files for readme files. Hence converting the readme files to readme.md files.
  Change-Id: I41c1b2f065895d885f4b1fabdc9b9e4051810e80
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>