| Commit message | Author | Age | Files | Lines |
bricks are down
Change-Id: I1169250706494b1b833d3b7e8a1ee148426e224b
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
options should not be applicable to glfsheal
Change-Id: I019b0299dd7f907446e85f6de0186fb61a3ce1f1
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Description: Test case which checks for gfid self heal
of a file on 1x3 replicated volume
Change-Id: I3bad7c16435bd99fa3f5b812c65970bebdbd18ac
Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
Description: This test case runs split-brain resolution CLIs
on a file in gfid split-brain on 1x2 volume.
1. kill 1 brick
2. create a file at mount point
3. bring back the killed brick
4. kill the other brick
5. create same file at mount point
6. bring back the killed brick
7. try heal from CLI and check if it gets completed
Change-Id: Iddd386741c3c672cda90db46facd7b04feaa2181
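The split-brain check in step 7 boils down to parsing the heal-info output. A minimal sketch (my own illustration, not glusto-tests code) of counting the entries reported per brick; the sample output string is hardcoded here and only approximates what `gluster volume heal <vol> info split-brain` prints:

```python
# Illustrative sample of heal-info split-brain output; in the real test
# this text would be captured from the gluster CLI on a server.
SAMPLE_HEAL_INFO = """\
Brick server1:/bricks/brick0
/file1.txt
Status: Connected
Number of entries in split-brain: 1

Brick server2:/bricks/brick1
/file1.txt
Status: Connected
Number of entries in split-brain: 1
"""

def split_brain_entry_count(heal_info_output):
    """Sum the per-brick 'Number of entries in split-brain' counters."""
    total = 0
    for line in heal_info_output.splitlines():
        if line.startswith("Number of entries in split-brain:"):
            total += int(line.rsplit(":", 1)[1])
    return total

print(split_brain_entry_count(SAMPLE_HEAL_INFO))  # 2
```

After the heal in step 7 completes, the same parse should report zero entries.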
may be down
Change-Id: I0515680e2cbe582917f0034461b305a33b75ca94
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I734f85671f17e9a7e9d863aa3a0ef8f632182d48
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I90c51f0e945cfe85e60bc97e1ed3b617a0a7eba5
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: Ie685a2e60c19bc096c54034a6b2f7d4380441f3d
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Adding Test case test_glustershd_on_newly_probed_server
Description:
Test script to verify glustershd process on newly probed server
* check glustershd process - only 1 glustershd process should be running
* Add new node to cluster
* check glustershd process - only 1 glustershd process should be running on
all servers including newly probed server
* stop the volume
* add another node to cluster
* check glustershd process - glustershd process shouldn't be running on
servers including newly probed server
* start the volume
* check glustershd process - only 1 glustershd process should be running
on all servers including newly probed server
Change-Id: I6142000ee8322b7ab27dbcd27e05088d1c8be806
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
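The "only 1 glustershd process" checks above reduce to counting matching lines in process-listing output. A rough sketch (my own, with a made-up `ps` sample; the real test would run `ps` over SSH on every server):

```python
# Hypothetical ps output for one server: one self-heal daemon (glustershd)
# and one brick process (glusterfsd). Sample data for illustration only.
SAMPLE_PS = """\
root  1201  1  0 10:01 ?  00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd
root  1420  1  0 10:02 ?  00:00:03 /usr/sbin/glusterfsd -s server1 --volfile-id testvol.server1.bricks-brick0
"""

def glustershd_count(ps_output):
    """Count processes whose command line mentions glustershd."""
    return sum(
        1
        for line in ps_output.splitlines()
        if "glustershd" in line and "grep" not in line
    )

print(glustershd_count(SAMPLE_PS))  # 1
```

With the volume stopped, the same count is expected to be 0 on every server, including the newly probed ones.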
Description:
Test Script to verify the glustershd server vol file
has only entries for replicate volumes
* Create multiple volumes and start all volumes
* Check the glustershd processes - Only 1 glustershd should be listed
* Check the glustershd server vol file - should contain entries only
for the replicate volumes involved
* Add bricks to the replicate volume - it should convert to
distributed-replicate
* Check the glustershd server vol file - newly added bricks
should be present
* Check the glustershd processes - Only 1 glustershd should be listed
Change-Id: Ie110a0312e959e23553417975aa2189ed01be6a4
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
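Verifying the vol-file entries means scanning the glustershd server vol file (typically under /var/lib/glusterd/glustershd/) for replicate sections. A sketch under assumptions — the volfile snippet and its exact option lines are illustrative, not copied from a real deployment:

```python
# Illustrative glustershd volfile fragment: one replicate section per
# replica set, in the usual "volume ... / type ... / end-volume" layout.
SAMPLE_VOLFILE = """\
volume testvol-replicate-0
    type cluster/replicate
    subvolumes testvol-client-0 testvol-client-1 testvol-client-2
end-volume
"""

def replicate_sections(volfile_text):
    """Return names of volume sections whose type is cluster/replicate."""
    names, current = [], None
    for line in volfile_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("volume "):
            current = stripped.split(None, 1)[1]
        elif stripped == "type cluster/replicate" and current:
            names.append(current)
    return names

print(replicate_sections(SAMPLE_VOLFILE))  # ['testvol-replicate-0']
```

After the add-brick converts the volume to distributed-replicate, the parse should show one replicate section per new replica set.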
Description:
Test Script to verify the Client Side Quorum with fixed
for cross 3 volume
* Disable self heal daemon
* set cluster.quorum-type to fixed
* start I/O (write and read) from the mount point - must succeed
* Bring down brick1
* start I/O (write and read) - must succeed
* Bring down brick2
* start I/O (write and read) - must succeed
* set the cluster.quorum-count to 1
* start I/O (write and read) - must succeed
* set the cluster.quorum-count to 2
* start I/O (write and read) - read must pass, write will fail
* bring back the brick1 online
* start I/O (write and read) - must succeed
* Bring back brick2 online
* start I/O (write and read) - must succeed
* set cluster.quorum-type to auto
* start I/O (write and read) - must succeed
* Bring down brick1 and brick2
* start I/O (write and read) - read must pass, write will fail
* set the cluster.quorum-count to 1
* start I/O (write and read) - read must pass, write will fail
* set the cluster.quorum-count to 3
* start I/O (write and read) - read must pass, write will fail
* set the quorum-type to none
* start I/O (write and read) - must succeed
Change-Id: Ic159aee3ca80f6a584a46e2ac7986f4007346968
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
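The pass/fail expectations above follow from how client-side quorum gates writes. A rough model (my own simplification for illustration, not glusto-tests or AFR code) of when a write is allowed:

```python
def writes_allowed(quorum_type, bricks_up, replica_count,
                   quorum_count=None, first_brick_up=False):
    """Simplified model of AFR client-side quorum for writes.

    'fixed' needs quorum_count bricks up; 'auto' needs a strict majority,
    with the first brick breaking ties in even replica sets; 'none' only
    needs one live brick. Assumption: real AFR behaviour is more nuanced.
    """
    if quorum_type == "none":
        return bricks_up >= 1
    if quorum_type == "fixed":
        return bricks_up >= quorum_count
    if quorum_type == "auto":
        if 2 * bricks_up > replica_count:   # strict majority of replicas up
            return True
        if 2 * bricks_up == replica_count:  # exact half: first brick decides
            return first_brick_up
        return False
    raise ValueError("quorum-type must be none, fixed or auto")

# Mirrors the steps above: replica 3 with brick1 and brick2 down.
print(writes_allowed("fixed", 1, 3, quorum_count=1))  # True
print(writes_allowed("fixed", 1, 3, quorum_count=2))  # False
print(writes_allowed("auto", 1, 3))                   # False
```

The tie-breaking branch is why, with quorum-type auto on a 1x2 volume, the first brick must be up for the mount to stay read-write.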
Change-Id: I32fefdab769e5a361e4dcb5f1328b2c8da2e4f1a
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I24e2baddc4f5cdb2c9ae0ab6b9020b2eb9b42a05
Signed-off-by: Karan Sandha <ksandha@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Description:
Test Script which verifies that the existing glustershd should take
care of self healing
* Create and start the Replicate volume
* Check the glustershd processes - note the pids
* Bring down one brick (say brick1) without affecting the cluster
* Create 5000 files on the volume
* Bring up brick1, which was killed in the previous steps
* Check the heal info - proactive self healing should start
* Bring down brick1 again
* Wait for 60 sec and bring up brick1
* Check the glustershd processes - pids should be different
* Monitor the heal till it is complete
Change-Id: Ib044ec60214171f136cc4c2f9225b8fe62e6214d
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
( x3)
Change-Id: Ic0aaccdbf6938702ec1dbb44e888e45eb9f21e28
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Description:
Test Script to verify the Client Side Quorum with fixed
for cross 2 volume
* Disable self heal daemon
* set cluster.quorum-type to fixed
* start I/O (write and read) from the mount point - must succeed
* Bring down brick1
* start I/O (write and read) - must succeed
* set the cluster.quorum-count to 1
* start I/O (write and read) - must succeed
* set the cluster.quorum-count to 2
* start I/O (write and read) - read must pass, write will fail
* bring back the brick1 online
* start I/O (write and read) - must succeed
* Bring down brick2
* start I/O (write and read) - read must pass, write will fail
* set the cluster.quorum-count to 1
* start I/O (write and read) - must succeed
* set cluster.quorum-count back to 2 and cluster.quorum-type to auto
* start I/O (write and read) - must succeed
* Bring back brick2 online
* Bring down brick1
* start I/O (write and read) - read must pass, write will fail
* set the quorum-type to none
* start I/O (write and read) - must succeed
Change-Id: I415aba5db211607476fd7345c8ca6f4d49373402
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Description:
Test Script to verify the Client Side Quorum with auto option
* check the default value of cluster.quorum-type
* try to set any junk value to cluster.quorum-type
other than {none,auto,fixed}
* check the default value of cluster.quorum-count
* set cluster.quorum-type to fixed and cluster.quorum-count to 1
* start I/O from the mount point
* kill 2 of the brick processes from each replica set
* set cluster.quorum-type to auto
Change-Id: I102373d1a53635563909e4fb80a01d98c24d3355
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Change-Id: I221b49315db8bc02873fc133ff12837954f0c232
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: Ie4a4b323e2b7e57e3896550b6f9b7db28fba03b7
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I84b789f9c0204ca0f0efb40a9a01215902c0ee1d
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
"full" (default)
Change-Id: If916d20b0d7c9ded6fb1fc929d9ff1e7719d9594
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
"diff" (default)
Change-Id: I34a196e8fc764d87e877a082be2b0575bb1b3b40
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
"diff" (heal command)
Change-Id: Id310e0c17a872d8586ad8c7de79f1f68b93edb0a
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: Id0d9e468aaf0061e9ff0f5cc534c06017e97b793
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I1e0e291954533e602a50d3f6c25365bb0b68b926
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I43a5b87c4acfd3df9483ca869d926714325ae1b9
as auto, first brick must be up to have a rw filesystem in a x2 volume
Change-Id: I98b0808070e6d254b1deeb1a3a744d19adccbf03
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I14609030983d4485dbce5a4ffed1e0353e3d1bc7
Change-Id: Iaecdf6ad44677891340713a5c945a4bdc30ce527
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I99da69377658f3c5f47722dbc3edb216995e9fa4
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
global to whole cluster
Change-Id: I9cd8ae1f490bc870540657b4f309197f8cee737e
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: Ibef22a1719fe44aac20024d82fd7f2425945149c
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: If92b6f756f362cb4ae90008c6425b6c6652e3758
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Description:
Test Script with Client Side Quorum with fixed should validate
maximum number of bricks to accept
* set cluster.quorum-type to fixed
* set cluster.quorum-count to a number higher than the
number of replicas in a sub-volume
* the above step should fail
Change-Id: I83952a07d36f5f890f3649a691afad2d0ccf037f
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
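The failure the test expects corresponds to a range check on quorum-count against the replica count. A sketch of that validation (function name and message are mine; gluster performs the equivalent check internally when `volume set` runs):

```python
def validate_quorum_count(quorum_count, replica_count):
    """Reject a quorum-count outside [1, replica_count]."""
    if not 1 <= quorum_count <= replica_count:
        raise ValueError(
            "quorum-count %d out of range [1, %d]"
            % (quorum_count, replica_count)
        )
    return True

print(validate_quorum_count(2, 3))  # True
# validate_quorum_count(4, 3) raises ValueError - i.e. the CLI
# "volume set" command is expected to fail on a 1x3 volume.
```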
Change-Id: I73512dde33207295fa954a3b3949f653f03f23c0
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Change-Id: Ica984b28d2c23e3d0d716d8c0dde6ab6ef69dc8f
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Gave meaningful names to functions
Returning -1 if there is no process running
Replace numbers with words
Rewording the msg "More than 1 or 0 self heal daemon"
Review Comments incorporated
Change-Id: If424a6f78536279c178ee45d62099fd8f63421dd
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>