Commit messages

Create a network failure while deleting PVCs
The network-side failure is introduced by opening and closing the ports
related to gluster-blockd.
Change-Id: Id3a749aa1a051bbce99b85046fa0a79831e85dd5
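
The port flapping described above can be sketched as follows. This is a minimal illustration, assuming the default gluster-blockd TCP port 24010 and iptables DROP rules; the helper only builds the command strings, and running them on the gluster node (e.g. over SSH) is left to the test harness.

```python
GLUSTER_BLOCKD_PORT = 24010  # assumed default gluster-blockd port

def iptables_toggle_cmds(port=GLUSTER_BLOCKD_PORT, close=True):
    """Build iptables commands that close (-A ... DROP) or reopen (-D) a TCP port."""
    action = "-A" if close else "-D"  # append a DROP rule to close, delete it to reopen
    return [
        "iptables %s INPUT -p tcp --dport %d -j DROP" % (action, port),
        "iptables %s OUTPUT -p tcp --sport %d -j DROP" % (action, port),
    ]
```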
Add a new TC verifying that iSCSI rediscovery happens properly
on restart of initiator nodes while one gluster node is down.
Change-Id: I515bb27d43843f7c19c4a12b6531a212e9c3285a
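
One way such a rediscovery check could work is to parse 'iscsiadm -m session' style output (the line format below is an assumption, not the suite's actual parser) and compare the logged-in target IQNs against the set recorded before the reboot:

```python
def logged_in_iqns(iscsiadm_output):
    """Collect target IQNs from 'iscsiadm -m session' style output lines."""
    iqns = set()
    for line in iscsiadm_output.splitlines():
        # assumed line shape:
        # "tcp: [1] 10.0.0.1:3260,1 iqn.2016-12.org.gluster-block:<id> (non-flash)"
        for token in line.split():
            if token.startswith("iqn."):
                iqns.add(token)
    return iqns
```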
Verify that I/O keeps running when ports 3260 and 24010 are closed on
the active or passive paths.
Change-Id: Ib1d69fca8f4894fd50bcc30f23ec0304f2fb5230
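
Telling active from passive paths is the prerequisite for this test. A rough sketch, assuming 'multipath -ll' style path lines of the shape shown in the comment (the real output format can differ between multipath versions):

```python
def path_states(multipath_ll_output):
    """Map block device name (e.g. 'sdb') to its path state ('active'/'failed')."""
    states = {}
    for line in multipath_ll_output.splitlines():
        # assumed path line: "  |- 3:0:0:0 sdb 8:16 active ready running"
        tokens = line.replace("|", " ").replace("`", " ").replace("-", " ").split()
        if len(tokens) >= 6 and tokens[-1] == "running":
            states[tokens[-5]] = tokens[-3]
    return states
```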
Create a network-side failure by opening and closing the ports
related to gluster-blockd during creation of the PVCs.
Verify the PVCs get bound and validate multipath.
Change-Id: Ibc53a13e2abb8674661da83d5881a13bbb2ad7fb
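
Waiting for PVCs to reach the Bound state while the ports flap is essentially a polling loop. A generic sketch of such a helper (the names in the usage comment are hypothetical, not the suite's actual API):

```python
import time

def wait_for(predicate, timeout=120, interval=5):
    """Poll 'predicate' until it returns a truthy value or 'timeout' seconds pass."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %s seconds" % timeout)

# e.g.: wait_for(lambda: get_pvc_status(pvc_name) == "Bound")
# where get_pvc_status is a hypothetical helper querying 'oc get pvc'.
```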
The regex occasionally fails to find the expected message in 'oc' events
related to the stopped service, because part of the error message is
not static and may vary between runs.
So, fix the mentioned TC by removing the variable part of the regex.
Also, update several other parts of the TC so it uses fewer lines of code.
Change-Id: I905f63b2b9aefbe72df610d223da4569ccb14d5e
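
The fix boils down to anchoring the regex on the stable part of the event message only. A hedged illustration (the message text below is hypothetical, not the actual oc event wording):

```python
import re

# Match only the stable prefix; the tail of the message varies between runs.
FAILED_PROVISION_RE = re.compile(r"[Ff]ailed to provision volume")

def event_matches(message):
    """True when an oc event message reports a failed volume provisioning."""
    return bool(FAILED_PROVISION_RE.search(message))
```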
The 'test_tcmu_runner_failure_while_creating_and_deleting_pvc' TC may fail
occasionally when one of the following situations happens:
- PVs get deleted very fast, so the TC may fail to get their statuses.
- PVs may not be updated in time due to the slowness of a cluster.
So, to avoid problems caused by the situations mentioned above,
just check all the deleted and created PVCs without filtering them.
Change-Id: Ib24c5fd7c3310daa0e5523f2c6c1fd90bd958e60
This change is required because:
1. Getting block volumes by name prefix is a common step which is
   currently used in two places and will be used in other places too.
2. Hence, add the library function "heketi_blockvolume_list_by_name_prefix"
   to heketi_ops.py.
3. Use the added function and update the code of the classes
   "GlusterStabilityTestSetup" and "TestGlusterBlockStability".
Change-Id: I9e60d58d5c84380104081335745270b3d21ff071
Signed-off-by: vamahaja <vamahaja@redhat.com>
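
A sketch of the kind of filtering such a helper could do, assuming 'heketi-cli blockvolume list' output lines of the shape shown in the comment. The actual heketi_blockvolume_list_by_name_prefix in heketi_ops.py may have a different signature (e.g. taking a heketi client/server argument instead of raw output):

```python
import re

def blockvolume_list_by_name_prefix(list_output, prefix):
    """Return (id, cluster, name) tuples for block volumes whose name starts with prefix."""
    found = []
    for line in list_output.splitlines():
        # assumed line shape: "Id:<id> Cluster:<cluster-id> Name:<volume-name>"
        match = re.match(
            r"Id:(\S+)\s+Cluster:(\S+)\s+Name:(%s\S*)" % re.escape(prefix), line)
        if match:
            found.append(match.groups())
    return found
```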
Add a TC verifying that PVC creation and deletion fail while the
gluster-blockd service is down on one node.
Change-Id: I73332f667526891e0fadf050f53bb33394519566
Create a block volume with hacount 4 when one of the four
nodes is down. Also, create a block volume with hacount 3
and verify that volume creation is successful.
Change-Id: I4bf8b178ac92a71fa5e6b7e9e8220da04c6e872d
Validate the creation of 100 app pods with block
volumes attached to them. Verify iSCSI login and multipath, and
check for volume mismatches across OCP, heketi, and gluster.
Change-Id: If400b8d3fe3d0ba0f22169633f0bb537f3f237e5
Signed-off-by: rgeorge <rgeorge@redhat.com>
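
The cross-source mismatch check reduces to set algebra over the volume names reported by each source. A minimal sketch (how each name list is collected from OCP, heketi, and gluster is left out):

```python
def volume_mismatch(ocp_vols, heketi_vols, gluster_vols):
    """Return volume names missing from at least one source (empty set = consistent)."""
    ocp, heketi, gluster = set(ocp_vols), set(heketi_vols), set(gluster_vols)
    # names seen anywhere, minus names seen everywhere
    return (ocp | heketi | gluster) - (ocp & heketi & gluster)
```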
Create 8 block PVCs and wait for them to get bound. Delete the created
PVCs and create 8 more PVCs simultaneously. Kill the tcmu-runner
service while PVC creation and deletion are in progress. Restart
all the services related to block volumes and verify that all pending
operations complete successfully.
Change-Id: I0cb4cd29b92233a65be93f4b96f1a9a0cb8bed9f
Simulate a situation where a block PVC is attached to a pod
and some I/O is generated on the pod. After the "delete"
command is sent to the pod, the IQNs present on the nodes
where the app pods are running are expected to be logged out
and migrated to a new node where the new pods are running.
The data in the app pod is also expected to remain the same.
Change-Id: Ia3a9e77902a29b942b151ea4874125423520f46f
Add a new TC verifying that PVC creation and deletion fail while the
gluster-blockd and tcmu-runner services are down on one node.
Change-Id: I5acbcfd6c9a227c0cce21c62f5e162caf970aa22
Add a test case to check that all gluster nodes are utilized
for block volume creation.
Change-Id: Id3762d3aff85a628ed972b19fbe15bfa223376c6
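
The utilization check can be sketched as follows: given the node lists backing each created block volume, find the gluster nodes that host none of them (how the per-volume node lists are obtained, e.g. from heketi volume info, is left out):

```python
def unused_nodes(all_nodes, volumes_to_nodes):
    """Return gluster nodes that host no brick of any created block volume."""
    used = set()
    for nodes in volumes_to_nodes.values():
        used.update(nodes)
    return set(all_nodes) - used  # empty set means every node was utilized
```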
Add a new TC verifying that rediscovery of iSCSI paths
happens after an abrupt reboot of the initiator node.
Change-Id: I841e875881c47f8215d48821cd76c0399d43badc
We have test cases which create more than one app DC at once.
So, add functionality to create a bunch of DCs in parallel
and reuse it in one of the test cases.
Change-Id: Id606d02c31a919bbc6d49d59714dd5628c6a835d
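
Creating several DCs in parallel can be sketched with a thread pool; 'create_dc' below stands in for the suite's actual per-DC creation routine and is an assumption:

```python
from concurrent.futures import ThreadPoolExecutor

def create_dcs_in_parallel(create_dc, dc_names, max_workers=4):
    """Invoke the per-DC creation callable for every name concurrently.

    Returns a dict mapping each DC name to its creation result.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order, so zip pairs names with their results
        return dict(zip(dc_names, pool.map(create_dc, dc_names)))
```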
This TC verifies that iSCSI login and logout happen properly on deletion
of the app pod when the initiator and target are on the same node.
Change-Id: Ia812cbbddef4fcab2f3762c930a38c0c8af62417
Change-Id: I5bb2760ff284cdc83424388bfdfc79d5fd112f21
Signed-off-by: vamahaja <vamahaja@redhat.com>
The new TC ensures that the app pod is restarted properly when one of the
target nodes is down.
This patch also includes libraries for the VMware API; they can
perform VM operations such as power on and power off via the
VMware client APIs.
Change-Id: I11ad4dc3318cb0583de5882d8067ed7e30ea9962
This test case verifies that we are able to run I/O on block volumes
while tcmu-runner is down, and that stopping tcmu-runner affects the
gluster-blockd and gluster-block-target services.
Change-Id: I0b4a23f7c2dce909f07a22893f83a6c1d0285091
The new TC verifies that it is possible to run I/O on block
volumes while the gluster-blockd service is down.
Change-Id: Ia1de14d2990f833221f6725e9b0e48d77ef85c10
Move the gluster-block stability TCs to a new module named
'gluster_stability/test_gluster_block_stability.py'.
Change-Id: Iac1f432e438a2815bdc2115ab19e0170e46930c1