path: root/tests/functional/gluster_stability
Commit message (Author, Date; Files, Lines changed)
* [TestFix] Remove redundant skip statement from test case (Arun Kumar, 2020-07-31; 1 file, -6/+0)
  'TestGlusterBlockStability.SetUp' already has a check for 'crs' or 'cns'.
  Removed the redundant skip statement that performed the same check in test
  case 'test_initiator_side_failures_initiator_and_target_on_same_node'.
  Change-Id: Ia587f9d17b9752a218ead030faa6941938b0d4c7
  Signed-off-by: Arun Kumar <arukumar@redhat.com>
* [Test] Add TC to kill targetcli during block PVC deletion (Arun Kumar, 2020-07-29; 1 file, -31/+72)
  Change-Id: I336fd244c74a1afca6ba1e1e5b6e6b30e14bfad2
  Signed-off-by: Arun Kumar <aanand01762@gmail.com>
* [Test] Add TC app pod deletion with different initiator and target (Arun Kumar, 2020-07-10; 1 file, -27/+45)
  Change-Id: I1d5e8ce9a12b693b1a4a6fbc93718c3b6e71c801
  Signed-off-by: Arun Kumar <aanand01762@gmail.com>
* [Test] Add TC to validate block behaviour when target node is down (Arun Kumar, 2020-07-10; 1 file, -29/+58)
  Change-Id: I4535d249d4d271f6b622f82002eddd33f3d4b01e
  Signed-off-by: Arun Kumar <aanand01762@gmail.com>
* [Test] Add TC to validate initiator node reboot when target node is the same (Arun Kumar, 2020-07-06; 1 file, -15/+55)
  Change-Id: Icd7b817b2ba680c47cd7c27fe95ad791d2e217b8
  Signed-off-by: Arun Kumar <aanand01762@gmail.com>
* [TestFix] Add steps to wait_for_events after reboot (Sri Vignesh, 2020-06-24; 1 file, -1/+27)
  The fix consists of:
  1. Add conv=fsync to dd so I/O is synced after pod reboot
  2. Add steps to wait for events after node reboot
  Change-Id: I53d1d9262fe05f9acd68ad18b8a0f48fd6716dea
  Signed-off-by: Sri Vignesh <sselvan@redhat.com>
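The conv=fsync addition above can be illustrated in Python terms. This is a minimal sketch (the helper name and file path are hypothetical, not from the test suite) of why forcing a sync before returning makes a write survive a reboot, which is what dd's conv=fsync flag does:

```python
import os
import tempfile

def write_durably(path, data):
    # Mirrors the intent of dd's conv=fsync: flush Python's buffer to the
    # OS, then fsync so the OS commits the data to stable storage before
    # we return. A node reboot right after this cannot lose the write.
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    return os.path.getsize(path)

size = write_durably(
    os.path.join(tempfile.gettempdir(), "fsync_demo.bin"), b"x" * 4096)
```

Without the fsync, data can still sit in the page cache when the node goes down, which is exactly the race the fix closes.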
* [TestFix] Remove TC w.r.t. scaling with 100 pods (Sri Vignesh, 2020-06-10; 1 file, -45/+0)
  TC 'test_initiator_side_failures_create_100_app_pods_with_block_pv' creates
  100 pods, which is part of scaling, and fails due to setup issues.
  Change-Id: I923feaf37c6c0442632bf14ee88e0e27f414d8b0
  Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [TestFix] Fix 'RuntimeError' while updating dict in loop (vamahaja, 2020-04-29; 1 file, -1/+1)
  On Python 3, 'dict.items()' returns a view over the dict, and updating the
  dict while iterating over that view causes a 'RuntimeError'. Convert the
  view into a list before looping over the dict items.
  Change-Id: Ia339bb89c4a8dbacae1c66ff41110975db73e9d7
  Signed-off-by: vamahaja <vamahaja@redhat.com>
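A minimal reproduction of this failure mode and its fix (the dict contents here are illustrative, not from the test suite):

```python
# Python 3's dict.items() returns a live view, and inserting or deleting
# keys while iterating over it raises "RuntimeError: dictionary changed
# size during iteration". Snapshotting the view with list() makes the
# loop safe.
settings = {"a": 1, "b": 2}
for key, value in list(settings.items()):
    settings[key + "_copy"] = value  # would raise RuntimeError without list()
```

On Python 2 the same loop worked because `dict.items()` returned a plain list, which is why the bug only surfaced after the py2-to-py3 migration.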
* [Test] Add test case to verify tcmu log levels (crombus, 2020-04-27; 1 file, -0/+80)
  Change-Id: I41d4817fe39ad9c4b26eee659eaaeda563f72496
  Signed-off-by: vamahaja <vamahaja@redhat.com>
  Co-authored-by: crombus <pkundra@redhat.com>
* [Test] Add TC to validate BV creation after stopping gluster services (crombus, 2020-04-20; 1 file, -1/+139)
  Change-Id: Ib4cd81191be5c53d104e3460274e18c94f070dd2
  Signed-off-by: vamahaja <vamahaja@redhat.com>
  Co-authored-by: crombus <pkundra@redhat.com>
* [TestFix] Move tier1 test case (vamahaja, 2020-04-07; 1 file, -1/+1)
  Test case 'test_initiator_side_failures_create_100_app_pods_with_block_pv'
  creates 100 I/O pods with PVCs, which takes time and may fail if the
  cluster is slow. Move this test case to tier2.
  Change-Id: I540e268eae81f7934e0d0f2d2aa6168c388c1a66
  Signed-off-by: vamahaja <vamahaja@redhat.com>
* [TestFix] Add pytest marker for tier2 test cases (vamahaja, 2020-03-31; 1 file, -8/+8)
  Change-Id: I43ebf7f489f0e80f33992edc7cea6a54dcc8a531
  Signed-off-by: vamahaja <vamahaja@redhat.com>
* [TestFix] Add pytest marker for tier1 test cases (vamahaja, 2020-03-17; 3 files, -0/+27)
  Change-Id: I1d497a9b61762e68558026ddc49e5269b0354ce1
  Signed-off-by: vamahaja <vamahaja@redhat.com>
* [TestFix] Add pytest marker for tier0 test cases (vamahaja, 2020-03-13; 1 file, -0/+3)
  Change-Id: I29093a09c3f0cc09eaa9c6d94bad882c0bafd91c
  Signed-off-by: vamahaja <vamahaja@redhat.com>
* Fix TC 'test_delete_block_pvcs_with_network_failure' (Arun, 2020-01-17; 1 file, -11/+37)
  The TC fails when there are no BHVs before running it, and during the
  validation of the total free space. Add the following steps to fix the
  problem:
  - Get the default BHV size if there are no BHVs before executing the TC
  - Calculate the total initial free space of all the BHVs
  - Compare the initial and final free space of the BHVs
  Change-Id: Ic13c201ad04b02b80ca73d41b3c42451202ed181
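The free-space bookkeeping in those steps can be sketched as follows; the 'free' field name and dict shape are assumptions for illustration, not the suite's actual data model:

```python
def total_free_space(bhvs):
    # bhvs: list of per-BHV dicts carrying a 'free' size (hypothetical
    # shape). sum() over an empty list is 0, which is the edge case the
    # fix had to handle: no BHVs exist before the TC runs.
    return sum(v["free"] for v in bhvs)

initial = total_free_space([{"free": 90}, {"free": 60}])  # before the TC
after = total_free_space([])                              # no BHVs present
```

Comparing `initial` against the value recomputed after the TC is what lets the test assert that deleting the block PVCs actually returned the space.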
* Fix test to run when iscsi-initiator-utils package is the expected version (kasturiNarra, 2019-12-12; 1 file, -2/+13)
  Fix test to run when the iscsi-initiator-utils package is not less than
  6.2.0.874-13.el7.
  Change-Id: Ic3af5302a415c2d2420ad9b92f2d82246420beea
* Add skip for 'iscsi-initiator-utils' version in test case (vamahaja, 2019-12-12; 1 file, -0/+13)
  Add skip for package 'iscsi-initiator-utils' in test case
  'test_initiator_side_failures_create_100_app_pods_with_block_pv' due to
  issue 'BZ-1624670'.
  Change-Id: I4730b298e367fdacbaac1f314b760df6d7642e14
* Fix usage of class instance attributes in 'test_gluster_block_stability.py' (Valerii Ponomarov, 2019-12-11; 1 file, -4/+4)
  Make test cases use a local 'prefix' variable instead of depending on
  'self.prefix'. We get an 'AttributeError' when 'self.prefix' is not set,
  and it is set only when one of the test cases gets run.
  Change-Id: I767e037e70e019ecb3a719d898dfe2b020dddff0
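The AttributeError hazard reduces to a toy class like this (names are illustrative, not the actual test code):

```python
class Demo:
    def some_test(self):
        # 'prefix' exists on the instance only after this method has run.
        self.prefix = "autotests"

    def other_test(self):
        # Reading self.prefix directly would raise AttributeError if
        # some_test never ran; deriving a local value removes the hidden
        # ordering dependency between test cases.
        prefix = getattr(self, "prefix", "autotests")
        return prefix

result = Demo().other_test()  # safe even though some_test was never called
```

Test frameworks run cases independently, so any attribute set inside one test and read in another is exactly this kind of latent ordering bug.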
* [py2to3] Fix various py3 incompatibilities (Valerii Ponomarov, 2019-12-11; 1 file, -4/+4)
  Change-Id: I6c517278c3f8bf6f374ab60bc27768e503161278
* Add TC creating and deleting bunch of PVCs during network failure (Arun, 2019-12-11; 1 file, -0/+50)
  Create a network-side failure during creation and deletion of the PVCs.
  The network-side failure is introduced by opening and closing the ports
  related to gluster-blockd.
  Change-Id: I0e7d97f0bf4a786f9ebb4cb5ccba5e5fd5812fc6
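One plausible way to simulate such a port-level failure is dropping traffic to gluster-blockd's port with iptables. The sketch below only builds the command strings; the exact rules and ports the test suite toggles may differ:

```python
GLUSTER_BLOCKD_PORT = 24010  # gluster-blockd's default port

def toggle_port_cmds(port):
    # Append a DROP rule to "close" the port, then delete the exact same
    # rule to reopen it; running these on a gluster node simulates a
    # network failure on the block-volume control path.
    close_cmd = "iptables -A INPUT -p tcp --dport %d -j DROP" % port
    open_cmd = "iptables -D INPUT -p tcp --dport %d -j DROP" % port
    return close_cmd, open_cmd

close_cmd, open_cmd = toggle_port_cmds(GLUSTER_BLOCKD_PORT)
```

Using `-D` with the identical rule text removes precisely the rule that `-A` appended, which keeps the node's firewall state clean after the failure window.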
* Add tc with deletion of block-vol while IO is in progress (kasturiNarra, 2019-11-29; 1 file, -0/+138)
  Simulate a situation where a block PVC is attached to a pod and some I/O
  is going on there. After sending the "delete" command, the I/O must not
  be interrupted and should finish successfully. The state of the PV is
  expected to change to the "Released" state.
  Change-Id: If8fb27b35253f01ab67772e72e5fb9f2f335d51b
* Add TC deleting bunch of PVCs during network failure (Arun Kumar, 2019-11-26; 1 file, -0/+80)
  Create a network failure while deleting PVCs. The network-side failure is
  introduced by opening and closing the ports related to gluster-blockd.
  Change-Id: Id3a749aa1a051bbce99b85046fa0a79831e85dd5
* Add TC restart initiator node when gluster node is down (Nitin Goyal, 2019-11-22; 1 file, -0/+107)
  Add new TC that verifies iscsi rediscovery happens properly on restart of
  initiator nodes when one gluster node is down.
  Change-Id: I515bb27d43843f7c19c4a12b6531a212e9c3285a
* Add new TC verify PIDs of gluster volume (Nitin Goyal, 2019-11-21; 1 file, -0/+72)
  Add new TC that verifies the PIDs of gluster volumes are the same when the
  volume option values are different.
  Change-Id: Ie0cae1ad3fdfd35e4c0e7f01e3a048b62b185369
* Add TCs related to network-side failures while I/Os are running (Arun Kumar, 2019-11-21; 1 file, -0/+50)
  Verify I/Os keep running when we close ports 3260 and 24010 on the active
  or passive paths.
  Change-Id: Ib1d69fca8f4894fd50bcc30f23ec0304f2fb5230
* Add TC creating bunch of PVCs during network failure (Arun Kumar, 2019-11-08; 1 file, -0/+36)
  Create a network-side failure by opening and closing the ports related to
  gluster-blockd during creation of the PVCs. Verify the PVCs are bound and
  validate multipath.
  Change-Id: Ibc53a13e2abb8674661da83d5881a13bbb2ad7fb
* Fix 'test_volume_create_delete_when_block_services_are_down' tc (Valerii Ponomarov, 2019-10-28; 1 file, -54/+46)
  It happens that the regex fails to find the expected message in oc events
  related to the stopped service. The reason is the non-static part of the
  error message, which may vary. So, fix the mentioned tc by removing the
  part of the regex that can vary. Also, update several other parts of the
  tc to make it use fewer code lines.
  Change-Id: I905f63b2b9aefbe72df610d223da4569ccb14d5e
* Add possibility to check Heketi DB inconsistencies after each tc (Valerii Ponomarov, 2019-10-25; 1 file, -0/+1)
  Define a 'check_heketi_db_inconsistencies' config option accepting 'False'
  or 'True' values; the default value is 'True'. If the heketi client
  doesn't support the 'heketi db check' feature, then the 'heketi db check'
  just won't be performed.
  Change-Id: I7faff35b15e40d864c0377ae7fee154e217d8eae
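The option handling described here might look roughly like this; the config dict shape and helper name are assumptions for illustration:

```python
def should_check_db(config):
    # 'check_heketi_db_inconsistencies' defaults to True when the key is
    # absent from the config, matching the documented default. A separate
    # capability check (not shown) would still skip the actual
    # 'heketi db check' call when the client lacks the feature.
    return bool(config.get("check_heketi_db_inconsistencies", True))

enabled = should_check_db({})                                          # default
disabled = should_check_db({"check_heketi_db_inconsistencies": False})  # opted out
```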
* Fix 'test_tcmu_runner_failure_while_creating_and_deleting_pvc' tc (Valerii Ponomarov, 2019-10-18; 1 file, -20/+2)
  The 'test_tcmu_runner_failure_while_creating_and_deleting_pvc' tc may fail
  occasionally when one of the following situations happens:
  - PVs get deleted very fast, so the tc may fail to get their statuses.
  - PVs may not be updated in time due to the 'slowness' of a cluster.
  So, to avoid problems related to the above-mentioned situations, just
  check all the deleted and created PVCs, without filtering them.
  Change-Id: Ib24c5fd7c3310daa0e5523f2c6c1fd90bd958e60
* Add library to get heketi block volumes by name prefix (vamahaja, 2019-10-17; 2 files, -72/+16)
  This change is required because:
  1. Getting block volumes by prefix is a common step, used in two places
     for now and to be used in other places too.
  2. Hence, add library "heketi_blockvolume_list_by_name_prefix" in
     heketi_ops.py.
  3. Use the added library and update code in classes
     "GlusterStabilityTestSetup" and "TestGlusterBlockStability".
  Change-Id: I9e60d58d5c84380104081335745270b3d21ff071
  Signed-off-by: vamahaja <vamahaja@redhat.com>
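A simplified sketch of such a prefix-filter helper. The real heketi_blockvolume_list_by_name_prefix parses heketi client output, so this signature and data shape are assumptions:

```python
def blockvolume_list_by_name_prefix(volumes, prefix):
    # volumes: iterable of (volume_id, name) pairs, a simplified stand-in
    # for parsed 'heketi-cli blockvolume list' output. Returns only the
    # volumes whose name starts with the given prefix.
    return [(vid, name) for vid, name in volumes if name.startswith(prefix)]

vols = [("id1", "autotests-vol1"), ("id2", "other-vol"),
        ("id3", "autotests-vol2")]
matched = blockvolume_list_by_name_prefix(vols, "autotests-")
```

Centralizing the filter in one library function is what lets both test classes drop their duplicated parsing code, as the commit describes.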
* Add TC create and delete PVCs when gluster-blockd is down (Nitin Goyal, 2019-09-25; 1 file, -7/+12)
  Add TC that verifies PVC create and delete fail while the gluster-blockd
  service is down on one node.
  Change-Id: I73332f667526891e0fadf050f53bb33394519566
* Delete block volume when one node is down (adityaramteke, 2019-09-25; 1 file, -0/+43)
  Create a block volume with hacount 4 when one of the four nodes is down.
  Also, create a block volume with hacount 3 and verify volume creation is
  successful.
  Change-Id: I4bf8b178ac92a71fa5e6b7e9e8220da04c6e872d
* Add TC to validate creation of 100 app pods with block volumes (rgeorge, 2019-09-25; 1 file, -0/+95)
  Validate the creation of 100 app pods with block volumes attached to them.
  Verify iscsi login and multipath, and check for volume mismatch across
  OCP, heketi, and gluster.
  Change-Id: If400b8d3fe3d0ba0f22169633f0bb537f3f237e5
  Signed-off-by: rgeorge <rgeorge@redhat.com>
* Add TC Fail tcmu-runner service while creating and deleting pvc (Arun Kumar, 2019-09-25; 1 file, -0/+89)
  Create 8 block PVCs and wait for them to get bound. Delete the created
  PVCs and create 8 more PVCs simultaneously. Kill the tcmu-runner service
  while PVC creation and deletion is in progress. Restart all the services
  related to block volumes and verify all pending operations complete
  successfully.
  Change-Id: I0cb4cd29b92233a65be93f4b96f1a9a0cb8bed9f
* Add testcase to validate path failures (kasturiNarra, 2019-09-17; 1 file, -0/+80)
  Simulate a situation where a block PVC is attached to a pod and some I/O
  is created on the pod. After sending the "delete" command, the IQNs
  present on the nodes where the app pods are running are expected to be
  logged out and migrated to a new node where the new pods are running.
  The data in the app pod is also expected to be the same.
  Change-Id: Ia3a9e77902a29b942b151ea4874125423520f46f
* Add TC create and delete PVCs when block services are down (Nitin Goyal, 2019-09-16; 1 file, -1/+181)
  Add new TC that verifies PVC create and delete fail while the
  gluster-blockd and tcmu-runner services are down on one node.
  Change-Id: I5acbcfd6c9a227c0cce21c62f5e162caf970aa22
* Add tc to validate all gluster IPs utilized by blockvolumes (kasturiNarra, 2019-09-11; 1 file, -0/+67)
  Testcase to check whether all gluster nodes are utilized for block volume
  creation.
  Change-Id: Id3762d3aff85a628ed972b19fbe15bfa223376c6
* Move func get_block_hosting_volume_by_pvc_name to baseclass (kasturiNarra, 2019-09-09; 1 file, -24/+0)
  Move get_block_hosting_volume_by_pvc_name from
  test_restart_gluster_services to the baseclass to be able to reuse it in
  other test modules.
  Change-Id: I65847792601b422293d4baed2ccade664ad7e54b
* Add TC abrupt reboot of initiator node (Nitin Goyal, 2019-09-06; 1 file, -0/+99)
  Add new TC that verifies the rediscovery of iscsi paths which happens
  after an abrupt reboot of the initiator node.
  Change-Id: I841e875881c47f8215d48821cd76c0399d43badc
* Add functionality to create more than 1 DC in parallel (Valerii Ponomarov, 2019-09-04; 1 file, -27/+8)
  We have test cases which create more than one app DC at once. So, add
  functionality to create a bunch of DCs in parallel and reuse it in one of
  the test cases.
  Change-Id: Id606d02c31a919bbc6d49d59714dd5628c6a835d
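Parallel creation of several DCs can be sketched with a thread pool; create_dc here is a hypothetical stand-in for the suite's real DC-creation helper:

```python
from concurrent.futures import ThreadPoolExecutor

def create_dc(name):
    # Placeholder for the real "create app DC" call, which would block on
    # the OpenShift API; a thread pool lets several such calls overlap.
    return "dc-%s" % name

names = ["app-%d" % i for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    # map() runs the calls concurrently but yields results in input order,
    # so each result can be matched back to its DC name.
    results = list(pool.map(create_dc, names))
```

Threads (rather than processes) fit here because DC creation is I/O-bound: the workers spend their time waiting on the API server, not on the CPU.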
* Add TC app pod deletion when initiator and target on same node (Nitin Goyal, 2019-08-28; 1 file, -0/+109)
  This TC verifies that iscsi login and logout happen properly on deletion
  of the app pod when the initiator and target are on the same node.
  Change-Id: Ia812cbbddef4fcab2f3762c930a38c0c8af62417
* Fix test cases skipped due to closed bugs (vamahaja, 2019-08-19; 1 file, -1/+28)
  Add fix to test cases which are skipped due to bugs BZ-1644685,
  BZ-1652913, BZ-1634745, BZ-1635736, BZ-1636477, and BZ-1641668.
  Change-Id: I03a7f7dddaba5d3e41770969352b2263c84cb920
  Signed-off-by: vamahaja <vamahaja@redhat.com>
* Add OCS 3.9 version check for test cases which use multipath validation (vamahaja, 2019-08-14; 1 file, -0/+14)
  Change-Id: I5bb2760ff284cdc83424388bfdfc79d5fd112f21
  Signed-off-by: vamahaja <vamahaja@redhat.com>
* Add OCS version check for test cases (vamahaja, 2019-08-14; 1 file, -0/+25)
  Change-Id: I4196cb1395a3720e03f62473d11e5f46d797c355
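A version gate like the ones added in these commits can be sketched with tuple comparison. This handles dotted numeric versions only; the suite's real version helper is assumed to be more general (e.g. for release strings like 6.2.0.874-13.el7):

```python
def version_lt(installed, minimum):
    # Compare dotted numeric versions component-wise via tuples, so that
    # "3.9" < "3.11" holds (a plain string compare would get this wrong,
    # since "3.9" > "3.11" lexicographically).
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return as_tuple(installed) < as_tuple(minimum)

needs_skip = version_lt("3.9", "3.11")  # skip tests below the minimum OCS
```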
* Add TC restart app pod when target node is down (Nitin Goyal, 2019-08-13; 1 file, -0/+80)
  The new TC ensures that the app pod is restarted properly when one of the
  target nodes is down. This patch includes libs for the VMware API; these
  libraries can perform VM operations like power on and off via the VMware
  client APIs.
  Change-Id: I11ad4dc3318cb0583de5882d8067ed7e30ea9962
* Reuse more common functions in the GlusterStabilityTestSetup (vamahaja, 2019-08-13; 1 file, -89/+28)
  Change-Id: Ia4e0fb737b16ea7bdc8ffd5ae44cdd418471552a
  Signed-off-by: vamahaja <vamahaja@redhat.com>
* Add TC run IOs when tcmu-runner service is down (Nitin Goyal, 2019-07-19; 1 file, -0/+91)
  This test case verifies that when tcmu-runner is down we are able to run
  IOs on block volumes, and that stopping tcmu-runner affects the
  gluster-blockd and gluster-block-target services.
  Change-Id: I0b4a23f7c2dce909f07a22893f83a6c1d0285091
* Add TC run IOs when gluster-blockd service is down (nigoyal, 2019-07-09; 1 file, -0/+60)
  New TC verifies that it is possible to run IOs on block volumes when the
  gluster-blockd service is down.
  Change-Id: Ia1de14d2990f833221f6725e9b0e48d77ef85c10
* Separate out gluster-block stability TC (Nitin Goyal, 2019-07-05; 1 file, -0/+114)
  Move gluster-block stability TCs to a new module named
  'gluster_stability/test_gluster_block_stability.py'.
  Change-Id: Iac1f432e438a2815bdc2115ab19e0170e46930c1
* Reshuffle TC modules to the correct component (Nitin Goyal, 2019-07-04; 2 files, -0/+50)
  Rename TC modules to appropriate names and move them to the correct
  component.
  Change-Id: I87c9bb7822c17c955dd9c2d780ef08e4d4e0d7ee