| Commit message | Author | Age | Files | Lines |
The fix consists of two parts:
- Correctly calculate the remaining wait time in the
  'scale_dcs_pod_amount_and_wait' function. Previously, a bug made us
  wait less time than requested, which led to unexpected timeout
  errors. For example, it could fail after waiting only 180 sec when
  the real timeout was 600 sec.
- Reset 'attempts' on the instantiated waiters to avoid redundant
  waiting when a single waiter instance is reused.
  For example, on deletion of 20 PVCs this saves about 1.5 minutes,
  and the whole test suite creates far more than 20 PVCs.
Change-Id: I5d06a63dd0c2c5bd67fdb09fef87948d65e6bf22
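The two parts of the fix can be sketched roughly like this; the `Waiter` class and helper names are hypothetical stand-ins, not the actual test-suite code:

```python
import time

class Waiter(object):
    """Tiny stand-in for the test suite's waiter object (hypothetical)."""
    def __init__(self, timeout, attempts):
        self.timeout = timeout
        self.attempts = attempts

def remaining_timeout(deadline):
    """Time left until a shared deadline, never negative."""
    return max(0.0, deadline - time.time())

def reuse_waiter(waiter, deadline):
    """Refresh a reused waiter: recompute the time left against the
    overall deadline (instead of keeping a stale value) and reset
    'attempts' so no redundant waiting happens."""
    waiter.timeout = remaining_timeout(deadline)
    waiter.attempts = 0
    return waiter
```

Recomputing against one shared deadline is what keeps the total wait equal to the requested timeout, e.g. the full 600 sec rather than an accidental 180 sec.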
Add a new TC with missing parameters for the file storage class and
verify that the PVC stays in the Pending state.
Change-Id: Ibd8f418934da261058adb4127e60f85465a5ef75
As part of the fix, load the output as JSON, which was previously missed.
Change-Id: Ia3d07f768362232ec2b34641be1a6ae4c4eec399
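A minimal sketch of the missing step, assuming the command output is a JSON string (the function name is hypothetical):

```python
import json

def parse_cli_output(raw_output):
    # Previously the raw string was used directly; loading it as JSON
    # gives callers dicts/lists instead of unparsed text.
    return json.loads(raw_output)
```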
Add 10 test cases for the Heketi zones feature.
To make all the Heketi zones test cases run in a single suite using the
same cluster, set the 'common.allow_heketi_zones_update' config option
to "True". By default it is set to "False", and with that value only a
small subset of the test cases can run.
Change-Id: I69a1f7c96c9f52a06134e715e113ccd9b06764e6
In test cases that use a large disk size, Heketi takes a long time to
create the volume, and the operation errors out due to a timeout.
Fix such test cases to check, after catching the exception, whether the
volume was actually created: get the details of such volumes, or
re-raise the exception if the volume creation really failed.
Change-Id: I1c23a8c6558c23edf8947771e4f41a4bd3ffd66a
Signed-off-by: vamahaja <vamahaja@redhat.com>
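The retry-check described above could look roughly like this; 'heketi_create' and 'list_volumes' are hypothetical callables standing in for the real heketi helpers:

```python
def create_volume_with_retry_check(heketi_create, list_volumes, name):
    """Create a volume; on a timeout error, check whether Heketi
    actually finished creating it before giving up."""
    try:
        return heketi_create(name)
    except Exception:
        # The client may time out even though the server-side creation
        # completed, so look the volume up by name first.
        for vol in list_volumes():
            if vol.get("name") == name:
                return vol
        # The volume really was not created: propagate the original error.
        raise
```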
Gluster block test cases failed with a "self.node variable not found"
error, as the self.node variable is not created in the
"GlusterStabilityTestSetup" class.
Fix baseclass.py by using the "self.ocp_client[0]" variable instead.
Change-Id: I2affabbb626b7e266a9bb243d3ec249229ec9670
Signed-off-by: vamahaja <vamahaja@redhat.com>
The "get_ocp_gluster_pod_details" library raises an
"IndexError: list index out of range" error on independent
mode setups, because "get_custom_resource"
returns "[[]]", which is not None.
Add a fix to check whether any element is present in the list.
Change-Id: Iffd081f835ffaf9cb50c020cd297444ea2678950
Signed-off-by: vamahaja <vamahaja@redhat.com>
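A sketch of the guard, assuming the "[[]]"-shaped return value described above (the function name is hypothetical):

```python
def first_pod_or_none(custom_resource):
    """Return the first pod entry, or None when the nested list is empty.

    'custom_resource' looks like [["pod-a"]] normally, but [[]] on
    independent mode setups -- truthy as a whole, yet empty inside,
    which is exactly what triggered the IndexError.
    """
    if not custom_resource or not custom_resource[0]:
        return None
    return custom_resource[0][0]
```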
Change-Id: If30737958bc667264fc01fe81d411f406b501918
Change-Id: I5886680a0d5666c68c677893e0fb327be0e80760
Signed-off-by: vamahaja <vamahaja@redhat.com>
Change-Id: I9cffdb09826e993de6db3d558996c7b46c92a03f
Signed-off-by: vamahaja <vamahaja@redhat.com>
Test case to check whether all gluster nodes are utilized
for block volume creation.
Change-Id: Id3762d3aff85a628ed972b19fbe15bfa223376c6
Move get_block_hosting_volume_by_pvc_name from
test_restart_gluster_services to baseclass to
be able to reuse it in other test modules.
Change-Id: I65847792601b422293d4baed2ccade664ad7e54b
Add a new TC where we verify the rediscovery of iscsi paths
that happens after an abrupt reboot of the initiator node.
Change-Id: I841e875881c47f8215d48821cd76c0399d43badc
Add a test case where we set and test Heketi config options
to limit the lower/upper allowed volume size
and the number of bricks per volume.
Change-Id: Ifb5cf64bba34dbf4e89f2fe9364263385a04cfa7
Verify creation of a replica 2 volume and validate that the bricks
created in the gluster backend are the same.
Change-Id: I9fcc090e909d9bf578cf8eca6e12e4f785140e3f
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Validate creation of a block volume with a name.
Change-Id: I08ee31201d42a95f8a829eb54ce68421903fdbbf
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Add a new TC where we verify that the heketi mount point given in
volume info is working.
Change-Id: Ia03af9ea6729cdfadf663abe46db7184a8b6fe2c
Add an error message in GlusterBlockBaseClass for better visibility
in case of failures and for debugging purposes.
Change-Id: Ia92cd6f6129cb5aa55f6f8e807ef056e54691956
Fix incorrect descriptions of libraries in gluster_ops.
Change-Id: I8ea9ec4ff1f28250d4544d6329b386271cf1e551
Validate creation and deletion of a
distributed-replicated BHV.
Change-Id: Ia42ace7e5be53fa7a00f88378575205e7fd5ba97
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Fix the block volume name parameter being passed twice during
creation.
Change-Id: Ieb827a8e7fa40a84eed7f8c5e90760710e6615b2
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Validate block volume creation with
different auth values.
Change-Id: I820f65a5aaa5adc6cb58b16b18b6c93b22177d45
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
We have test cases which create more than one app DC at once.
So, add functionality to create a bunch of DCs in parallel
and reuse it in one of the test cases.
Change-Id: Id606d02c31a919bbc6d49d59714dd5628c6a835d
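A minimal sketch of parallel DC creation using a thread pool; 'create_dc' is a hypothetical callable, not the suite's real helper:

```python
from concurrent.futures import ThreadPoolExecutor

def create_dcs_in_parallel(create_dc, pvc_names):
    """Create one app DC per PVC concurrently and collect the results."""
    with ThreadPoolExecutor(max_workers=len(pvc_names)) as pool:
        futures = {name: pool.submit(create_dc, name) for name in pvc_names}
        # result() blocks and re-raises any exception from the worker thread.
        return {name: fut.result() for name, fut in futures.items()}
```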
Change-Id: Icdbb3a626d96c1f762f5616623ea6bc99d56ef3c
Signed-off-by: vamahaja <vamahaja@redhat.com>
Change-Id: Iaa7deca275958d4de68601dc16d1920f3dab85f2
Change-Id: I4b469969a041eaf4ccb6d95a59d6d2332c6c845c
Change-Id: I26c750c68055c6cc50de8015942d0d9725819aaf
By default, lots of rules are disabled, and we track them manually.
So, fix it by setting an explicit list of rules we skip.
The only skipped rule for now is W503, which is the opposite of the
enabled W504.
Change-Id: Ib4d17177ac0a5cd11d8ff389883dbf83743faf35
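The resulting style configuration would look roughly like this (a sketch of a flake8 section, not the exact committed file):

```ini
[flake8]
# W503 (line break before binary operator) is the opposite of the
# enabled W504 (line break after binary operator), so it is the only
# rule we skip explicitly.
ignore = W503
```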
This TC verifies that iscsi login and logout happen properly on deletion
of an app pod when the initiator and target are on the same node.
Change-Id: Ia812cbbddef4fcab2f3762c930a38c0c8af62417
Change-Id: I8e3aed5e26eff3e76246c03cbd13f0f84b6a29f6
Signed-off-by: Manisha Saini <msaini@redhat.com>
Signed-off-by: vamahaja <vamahaja@redhat.com>
Change-Id: Ia0325c4309837fe3f89eab3c066775f9fb1ab1de
Signed-off-by: vamahaja <vamahaja@redhat.com>
Fix the error handling logic in the "vmware" cloud provider
module by adding the "raise" statement in places where
it was missing.
Change-Id: I92e97da4109bc6ab7368b41d2a6886e9f2be31c1
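The pattern being restored is a bare 'raise' after logging, so errors are not silently swallowed; 'op' is a hypothetical callable standing in for a vmware module call:

```python
def run_provider_op(op):
    """Run a cloud-provider operation, logging and re-raising failures."""
    try:
        return op()
    except Exception as exc:
        print("vmware operation failed: %s" % exc)
        # Without this bare 'raise', the caller would get None back and
        # continue as if the operation had succeeded.
        raise
```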
Change-Id: I2ebc1c2e7e7aefb5e0d70342a7ec243a12b0663b
Add fixes to test cases which are skipped due to bugs:
BZ-1644685, BZ-1652913, BZ-1634745, BZ-1635736, BZ-1636477, BZ-1641668
Change-Id: I03a7f7dddaba5d3e41770969352b2263c84cb920
Signed-off-by: vamahaja <vamahaja@redhat.com>
Change-Id: I5bb2760ff284cdc83424388bfdfc79d5fd112f21
Signed-off-by: vamahaja <vamahaja@redhat.com>
Change-Id: I4196cb1395a3720e03f62473d11e5f46d797c355
A new TC ensures that the app pod is restarted properly when one of
the target nodes is down.
This patch includes vmware API libraries that can perform VM
operations, such as power on and off, via the vmware client APIs.
Change-Id: I11ad4dc3318cb0583de5882d8067ed7e30ea9962
Change-Id: Ia4e0fb737b16ea7bdc8ffd5ae44cdd418471552a
Signed-off-by: vamahaja <vamahaja@redhat.com>
Separate the node reboot functionality from a single test case to
reuse it in other places. Update the test case accordingly.
Change-Id: Ib9a7f15d29237e4f21aafc408c074e799e706740
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Signed-off-by: vamahaja <vamahaja@redhat.com>
The "get_gluster_pod_names_by_pvc_name" function depends
on an OCP 3.11-specific shell command, which fails on
older OCP versions. So, fix it by getting the info in a compatible way.
Also, update usages of this function to match the new return data
structure.
Change-Id: Ibb8559590a1288c032630b3292f631d28bc87263
Signed-off-by: vamahaja <vamahaja@redhat.com>
This test case validates BHV options like
features.shard, shard.size, shd max threads,
and performance-related ones.
Change-Id: I991f13e1b744486281e98813f169630a666eeb59
Add steps to verify endpoints
Change-Id: Ibf09e8f0e2cefb0d45755225c051f2a11fe43860
Create 3 volumes using the heketi-cli command, verify their presence in
heketi topology info, delete 2 volumes, and validate their deletion
and the presence of the 3rd volume in heketi topology info.
Change-Id: I78298d2aec21ff8031ff01efd53f11ba31e269c9
Change-Id: I7ead23c46a472fee70d684c45f32f5e4efb0674f
Signed-off-by: kasturiNarra <knarra@redhat.com>
The 'playbooks/generate-tests-config.yaml' playbook fails when the
cluster has 'glusterfs-registry' nodes. So, fix it by using proper
filtering in the shell commands.
Change-Id: I884f57e646a513b1ceddc5345099fcd8379fce72
This test case verifies that when tcmu-runner is down we are able to
run IOs on block volumes, and that stopping tcmu-runner affects the
gluster-blockd and gluster-block-target services.
Change-Id: I0b4a23f7c2dce909f07a22893f83a6c1d0285091
Previously, when a persistent networking issue kept us from connecting
to a host, we fell into an endless loop of connection recreation,
eventually failing with an unexpected 'recursion limit exceeded' error.
So, change this logic to avoid recursion and make just one connection
recreation attempt, which is enough to fix broken connections.
Change-Id: Id808edbad7e6d69ad58a75bfbae176fddb173d18
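The single-retry logic can be sketched like this; 'conn' and 'reconnect' are hypothetical stand-ins for the real connection objects:

```python
def run_command(conn, cmd, reconnect):
    """Run 'cmd', recreating the connection at most once on failure."""
    try:
        return conn.run(cmd)
    except ConnectionError:
        conn = reconnect()    # exactly one recreation attempt, no recursion
        return conn.run(cmd)  # a second failure simply propagates
```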
The heketi client located outside the Heketi POD may fail to reach the
server side. So, add a back-up approach where we run Heketi commands
on the Heketi POD when the main commands fail.
Change-Id: Ie6ae5be82082f34426f9288b02575e3abd4940f5
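A sketch of the fallback, with both runners as hypothetical callables standing in for the real command helpers:

```python
def heketi_cmd_with_fallback(run_local, run_in_pod, cmd):
    """Try the external heketi client first; if it cannot reach the
    server, fall back to executing the command inside the Heketi POD."""
    try:
        return run_local(cmd)
    except Exception:
        return run_in_pod(cmd)
```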
A new TC verifies that it is possible to run IOs on block
volumes when the gluster-blockd service is down.
Change-Id: Ia1de14d2990f833221f6725e9b0e48d77ef85c10
Move the gluster-block stability TC to a new module named
'gluster_stability/test_gluster_block_stability.py'.
Change-Id: Iac1f432e438a2815bdc2115ab19e0170e46930c1