| Commit message (Collapse) | Author | Age | Files | Lines |
Change-Id: I3d749c5d131973217d18fc1158236806645e4ab4
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
was not handled by teardown class
Change-Id: I789adbf0909c5edd0a2eb19ed4ccebcb654700fd
Signed-off-by: Anees Patel <anepatel@redhat.com>
split-brain warning per BZ1579758
Change-Id: I674557e153234e0f6af20f12d168b744bda3a3f8
Signed-off-by: Anees Patel <anepatel@redhat.com>
Change-Id: I04ffdedb1ce25ab05239c77b4dd5893ce18b32f7
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I9f33c84be39bdca85909c2ae337bd4482532d061
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
folder
Change-Id: I1fb4497ac915c7a93f223ef4e6946eeb4dcd0e90
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I8d15f482dbde12670975e32af685570a0eaa50b6
Signed-off-by: Anees <anepatel@redhat.com>
Issue:
Creating hardlinks fails since the TC tries to create the
same hardlink twice from 2 different clients.
Change-Id: I1c0d48f53eec00ed2a766b786c551d83ac278946
Change-Id: I6a95e82977f4ac6092716c064597931768023710
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
test_client_side_quorum_auto_local_to_volume_not_cluster
Change-Id: Ibf16517fe062f9335d17c0e6d88ddab44a644c7b
test_client_side_quorum_auto_local_to_volume_not_cluster
Change-Id: I8abef160fb6aecb0f74edec0324a53bb23bb2885
Change-Id: I4560b425aa470da27631eb6401e3775fb90c2330
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I0143a4ffa16fa0c3ea240f5debbdc5519a9e5445
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I6462446cce6c06a7559028eee1a6968af093c959
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: Id32859df069106d6c9913147ecfa8d378dfa8e9d
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
bring_bricks_online method takes a list as parameter, but in the
test cases it was passed as a str.
Change-Id: I07caef7ef6510268856d832221d8b2993d3e9751
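The list-vs-str bug described above can be illustrated in plain Python without any glusto imports: iterating a str yields single characters, so a helper that loops over its `bricks` argument sees many one-character "bricks" instead of one brick path. The helper below is a hypothetical stand-in, not the real `bring_bricks_online` signature.

```python
# Hypothetical stand-in for a helper that loops over its bricks
# argument (the real bring_bricks_online lives in glusto-tests).
def bring_each_brick_online(bricks):
    # Each element of the iterable is treated as one brick identifier.
    return [brick for brick in bricks]

# Correct call site: a list yields two brick identifiers.
print(bring_each_brick_online(["node1:/bricks/b1", "node2:/bricks/b2"]))

# Buggy call site: a bare str is iterated character by character.
print(bring_each_brick_online("node1:/bricks/b1"))
```

This is why the fix is to wrap a single brick in a list at the call site rather than to change the helper.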
Change-Id: Id9face2267b9f702bb2b0b5b3c294b3e4082cdf7
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I46fc2feffe6443af6913785d67bf310838532421
Change-Id: Ibb159d8a1b28ae267ca89800ace1ece9a3382b35
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
In 3.4, the default quorum type is changed to auto. In pre-3.4
releases, it was None.
Change-Id: I4e58ff8cc4727db81bb6b9baadd101687ddb74b0
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
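A test asserting the default can branch on the release it runs against; a minimal sketch, assuming the version is passed in as a plain string (the helper name and version parsing are illustrative, not read from a live cluster):

```python
def expected_default_quorum_type(glusterfs_version):
    # GlusterFS 3.4 changed the default cluster.quorum-type for
    # replica volumes to "auto"; earlier releases defaulted to none.
    major, minor = (int(part) for part in glusterfs_version.split(".")[:2])
    return "auto" if (major, minor) >= (3, 4) else "none"

print(expected_default_quorum_type("3.4"))   # auto
print(expected_default_quorum_type("3.3"))   # none
```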
This test case performs split brain resolution on a file
not in split-brain. This action should fail.
Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
Change-Id: I01b9a41530498e96f6092283372798e61a9ac2b2
No functional change, just make the tests a bit more readable.
It could be moved to a decorator later on, wrapping tests.
Change-Id: I484bb8b46907ee8f33dfcf4c960737a21819cd6a
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
This test creates gfid split-brain of files and uses the source-brick
option in the CLI to resolve them.
Polarion test: RHG3-4402
Change-Id: I4fb3f16bfcdf77afe92c3a6f98f259147fef30c2
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
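The CLI the test drives follows gluster's documented source-brick syntax; a hedged command-builder sketch (volume, brick, and file names are placeholders, and the function is not a glusto-tests API):

```python
def split_brain_source_brick_cmd(volname, source_brick, filepath):
    # gluster's documented syntax:
    #   gluster volume heal <VOLNAME> split-brain source-brick \
    #       <HOSTNAME:BRICKPATH> <FILE>
    return ("gluster volume heal %s split-brain source-brick %s %s"
            % (volname, source_brick, filepath))

print(split_brain_source_brick_cmd("testvol", "server1:/bricks/b0", "/f1"))
```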
Change-Id: I634d11cb582521b03f0bb481172e2f4f68d1c2ce
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Polarion test case #RHG3-4094
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Change-Id: I1f7100ddb6697cfc8749f8cd2c29e14e9bfdb5ce
Polarion test case #RHG3-4094
Change-Id: Id7492b1e0a7a000ece788c7a0cc4ed9dd8743700
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
If there are directories present on only one brick without having
gfid (created from backend), heal them and assign gfids when named
lookup comes on those directories.
Change-Id: I32c27f0b04c8eb36b25899ca9fbe7aef141f13b9
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Change-Id: I525f50a42e29270d9ac445d62e12c7e7e25a7ae3
Signed-off-by: karthik-us <ksubrahm@redhat.com>
This test case checks whether directory with null gfid is getting
the gfids assigned on all the subvols of a dist-rep volume when
lookup comes on that directory from the mount point.
Change-Id: Ie68cd0e8b293e9380532e2ccda3d53659854de9b
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Change-Id: Iaaa78c071bd7ee3ad3ed222957e71aec61f80045
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Change-Id: Icd5c423ad1b2fee770680cc66d9919c930c4780f
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Self heal should heal the files even if the quota object limit
is exceeded on a directory.
Change-Id: Icc63b1794f82aef708832d0b207ded5f13391b85
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Self heal should heal the files even if the quota limit on a
directory is reached.
Change-Id: I336b78eb55cd5c7ec6b3236f95ce9f0cb8423667
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Deletion of a file on the source bricks must be reflected on the sink brick
after bringing it up (conservative merge must NOT happen) when quota is enabled.
Change-Id: I8c3f55ddd1eee9a211674c8759b94aa801f6f174
bricks are down
Change-Id: I1169250706494b1b833d3b7e8a1ee148426e224b
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
options should not be applicable to glfsheal
Change-Id: I019b0299dd7f907446e85f6de0186fb61a3ce1f1
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Description: Test case which checks for gfid self heal
of a file on 1x3 replicated volume
Change-Id: I3bad7c16435bd99fa3f5b812c65970bebdbd18ac
Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
Description: This test case runs split-brain resolution CLIs
on a file in gfid split-brain on 1x2 volume.
1. kill 1 brick
2. create a file at mount point
3. bring back the killed brick
4. kill the other brick
5. create same file at mount point
6. bring back the killed brick
7. try heal from CLI and check if it gets completed
Change-Id: Iddd386741c3c672cda90db46facd7b04feaa2181
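The seven steps above alternate which brick holds the only copy of the file, which is what manufactures the gfid split-brain; a hypothetical step planner mirroring the commit's procedure (brick names, actions, and the final heal command string are illustrative):

```python
def gfid_split_brain_steps(volname, brick_a, brick_b, fname):
    # Ordered actions from the commit message; each string is a
    # placeholder for what a test runner would actually execute.
    return [
        "bring offline %s" % brick_a,
        "create %s from mount" % fname,
        "bring online %s" % brick_a,
        "bring offline %s" % brick_b,
        "create %s from mount" % fname,
        "bring online %s" % brick_b,
        "gluster volume heal %s" % volname,
    ]

for step in gfid_split_brain_steps("testvol", "b0", "b1", "file1"):
    print(step)
```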
may be down
Change-Id: I0515680e2cbe582917f0034461b305a33b75ca94
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I734f85671f17e9a7e9d863aa3a0ef8f632182d48
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I90c51f0e945cfe85e60bc97e1ed3b617a0a7eba5
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: Ie685a2e60c19bc096c54034a6b2f7d4380441f3d
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Adding Test case test_glustershd_on_newly_probed_server
Description:
Test script to verify glustershd process on newly probed server
* check glustershd process - only 1 glustershd process should be running
* Add new node to cluster
* check glustershd process - only 1 glustershd process should be running on
all servers including newly probed server
* stop the volume
* add another node to cluster
* check glustershd process - glustershd process shouldn't be running on
servers including newly probed server
* start the volume
* check glustershd process - only 1 glustershd process should be running
on all servers including newly probed server
Change-Id: I6142000ee8322b7ab27dbcd27e05088d1c8be806
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
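The "only 1 glustershd process" check reduces to counting matching lines in `ps` output; a minimal parser sketch with fabricated sample output (not the real glusto-tests helper):

```python
def count_glustershd(ps_output):
    # Count lines mentioning the self-heal daemon, skipping any
    # grep process that may appear in piped output.
    return sum(1 for line in ps_output.splitlines()
               if "glustershd" in line and "grep" not in line)

# Fabricated sample: one shd process plus one unrelated brick process.
SAMPLE_PS = (
    "root  4521  /usr/sbin/glusterfs -s localhost "
    "--volfile-id gluster/glustershd\n"
    "root  4702  /usr/sbin/glusterfsd -s server1 --volfile-id testvol\n"
)
print(count_glustershd(SAMPLE_PS))
```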
Description:
Test Script to verify the glustershd server vol file
has only entries for replicate volumes
* Create multiple volumes and start all volumes
* Check the glustershd processes - Only 1 glustershd should be listed
* Check the glustershd server vol file - should contain entries only
for the replicate volumes involved
* Add bricks to the replicate volume - it should convert to
distributed-replicate
* Check the glustershd server vol file - newly added bricks
should be present
* Check the glustershd processes - Only 1 glustershd should be listed
Change-Id: Ie110a0312e959e23553417975aa2189ed01be6a4
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
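Verifying that newly added bricks appear in the shd vol file amounts to extracting the brick paths from its client translators; a hedged sketch over a fabricated vol-file snippet (the sample text and helper are illustrative, though `option remote-subvolume` is the real key gluster client volfiles use to name a brick path):

```python
def bricks_in_shd_volfile(volfile_text):
    # Each protocol/client translator names one brick via
    # "option remote-subvolume <brick-path>".
    return [line.split()[-1]
            for line in volfile_text.splitlines()
            if line.strip().startswith("option remote-subvolume")]

# Fabricated two-brick sample for illustration.
SAMPLE_VOLFILE = """\
volume testvol-client-0
    type protocol/client
    option remote-host server1
    option remote-subvolume /bricks/brick0
end-volume
volume testvol-client-1
    type protocol/client
    option remote-host server2
    option remote-subvolume /bricks/brick1
end-volume
"""
print(bricks_in_shd_volfile(SAMPLE_VOLFILE))
```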
Description:
Test Script to verify the Client Side Quorum with fixed
for cross 3 volume
* Disable self heal daemon
* set cluster.quorum-type to fixed
* start I/O ( write and read ) from the mount point - must succeed
* Bring down brick1
* start I/O ( write and read ) - must succeed
* Bring down brick2
* start I/O ( write and read ) - must succeed
* set the cluster.quorum-count to 1
* start I/O ( write and read ) - must succeed
* set the cluster.quorum-count to 2
* start I/O ( write and read ) - read must pass, write will fail
* bring back brick1 online
* start I/O ( write and read ) - must succeed
* Bring back brick2 online
* start I/O ( write and read ) - must succeed
* set cluster.quorum-type to auto
* start I/O ( write and read ) - must succeed
* Bring down brick1 and brick2
* start I/O ( write and read ) - read must pass, write will fail
* set the cluster.quorum-count to 1
* start I/O ( write and read ) - read must pass, write will fail
* set the cluster.quorum-count to 3
* start I/O ( write and read ) - read must pass, write will fail
* set the quorum-type to none
* start I/O ( write and read ) - must succeed
Change-Id: Ic159aee3ca80f6a584a46e2ac7986f4007346968
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
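The quorum toggles in the steps above all map onto `gluster volume set` invocations; a hedged command-builder sketch (the volume name is a placeholder and the helper is not a glusto-tests API, but `cluster.quorum-type` and `cluster.quorum-count` are the real option names):

```python
def volume_set_cmd(volname, option, value):
    # e.g. cluster.quorum-type=fixed, cluster.quorum-count=2
    return "gluster volume set %s %s %s" % (volname, option, value)

print(volume_set_cmd("testvol", "cluster.quorum-type", "fixed"))
print(volume_set_cmd("testvol", "cluster.quorum-count", "2"))
```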
Change-Id: I32fefdab769e5a361e4dcb5f1328b2c8da2e4f1a
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I24e2baddc4f5cdb2c9ae0ab6b9020b2eb9b42a05
Signed-off-by: Karan Sandha <ksandha@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Description:
Test Script which verifies that the existing glustershd should take
care of self healing
* Create and start the Replicate volume
* Check the glustershd processes - note the pids
* Bring down one brick (let's say brick1) without affecting the cluster
* Create 5000 files on the volume
* Bring up brick1 which was killed in the previous steps
* Check the heal info - proactive self healing should start
* Bring down brick1 again
* Wait for 60 sec and bring up brick1
* Check the glustershd processes - pids should be different
* Monitor the heal till it is complete
Change-Id: Ib044ec60214171f136cc4c2f9225b8fe62e6214d
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
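The "pids should be different" check is a set comparison of glustershd pids captured before and after the brick cycle; a minimal sketch with illustrative pid values (the helper is hypothetical, not a glusto-tests API):

```python
def shd_restarted(pids_before, pids_after):
    # True when no glustershd pid survived, i.e. the daemon was
    # restarted everywhere between the two snapshots.
    return set(pids_before).isdisjoint(pids_after)

print(shd_restarted([4521, 4702], [4890, 4991]))  # restarted
print(shd_restarted([4521, 4702], [4521, 4991]))  # one pid survived
```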
( x3)
Change-Id: Ic0aaccdbf6938702ec1dbb44e888e45eb9f21e28
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Description:
Test Script to verify the Client Side Quorum with fixed
for cross 2 volume
* Disable self heal daemon
* set cluster.quorum-type to fixed
* start I/O ( write and read ) from the mount point - must succeed
* Bring down brick1
* start I/O ( write and read ) - must succeed
* set the cluster.quorum-count to 1
* start I/O ( write and read ) - must succeed
* set the cluster.quorum-count to 2
* start I/O ( write and read ) - read must pass, write will fail
* bring back brick1 online
* start I/O ( write and read ) - must succeed
* Bring down brick2
* start I/O ( write and read ) - read must pass, write will fail
* set the cluster.quorum-count to 1
* start I/O ( write and read ) - must succeed
* set cluster.quorum-count back to 2 and cluster.quorum-type to auto
* start I/O ( write and read ) - must succeed
* Bring back brick2 online
* Bring down brick1
* start I/O ( write and read ) - read must pass, write will fail
* set the quorum-type to none
* start I/O ( write and read ) - must succeed
Change-Id: I415aba5db211607476fd7345c8ca6f4d49373402
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>