| Commit message (Collapse) | Author | Age | Files | Lines |
| |
Check that heal info completes successfully while there are pending
heals and healing and I/O are in progress.
Change-Id: I7b00c5b6446d6ec722c1c48a50e5293272df0fdf
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
| |
1. Create a replicated/distributed-replicate volume and mount it
2. Start IO from the clients
3. Bring down a brick from the subvol and validate it is offline
4. Bring back the brick online and wait for heal to complete
5. Once the heal is completed, expand the volume.
6. Trigger rebalance and wait for rebalance to complete
7. Validate IO; there should be no errors during the steps performed
from step 2
8. Check the arequal of the subvol; all bricks in the same subvol
should have the same checksum
Note: This test is only for replicated volume types.
Change-Id: I2286e75cbee4f22a0ed14d6c320a4496dc3c3905
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
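Step 8's checksum comparison can be sketched in plain Python; the brick
paths and checksum values below are hypothetical stand-ins for real
arequal-checksum output collected from each brick:

```python
# Sketch: verify all bricks in a subvol report the same arequal checksum.
# Brick paths and checksum values are hypothetical examples.

def subvol_is_consistent(brick_checksums):
    """Return True if every brick in the subvol has the same checksum."""
    return len(set(brick_checksums.values())) == 1

checksums = {
    "server1:/bricks/brick0": "a3f5c2",
    "server2:/bricks/brick1": "a3f5c2",
    "server3:/bricks/brick2": "a3f5c2",
}
print(subvol_is_consistent(checksums))  # all equal -> True
```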
| |
Test Steps:
1) Create a 1x3 volume and fuse mount the volume
2) On the mount point, create a directory dir1
3) Pkill glusterfsd on node n1 (b2 on node2 and b3 on node3 stay up)
4) touch f{1..10} on the mountpoint
5) The xattrs of b2 and b3 would be blaming b1, as the files were
created while b1 was down
6) Reset the b3 xattrs to NOT blame b1 by using setfattr
7) Now pkill glusterfsd of b2 on node2
8) Restart glusterd on node1 to bring up b1
9) Now brick b1 is online, b2 down, b3 online
10) touch x{1..10} under dir1 itself
11) Again reset the xattrs on node3 of b3 so that it doesn't blame b2,
as done for b1 in step 6
12) Restart glusterd on node2 hosting b2 to bring all bricks online
13) Check heal info, split-brain and arequal for the bricks
Change-Id: Ieea875dd7243c7f8d2c6959aebde220508134d7a
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
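The xattr reset in steps 6 and 11 clears the brick's AFR pending xattr
with setfattr; a minimal sketch of building that command, assuming a
hypothetical volume name, client index and brick path:

```python
# Sketch: build the setfattr command that zeroes an AFR pending xattr so
# one brick stops blaming another. The volume name, client index and
# brick path below are hypothetical examples.

def afr_reset_cmd(volname, client_index, brick_path):
    """Return a setfattr command clearing trusted.afr.<vol>-client-<n>."""
    xattr = "trusted.afr.%s-client-%d" % (volname, client_index)
    # 12 zero bytes: the data/metadata/entry pending counters, all cleared
    zero = "0x" + "00" * 12
    return "setfattr -n %s -v %s %s" % (xattr, zero, brick_path)

print(afr_reset_cmd("testvol", 0, "/bricks/brick3"))
```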
| |
- Remove unnecessary disablement of client side heal options
- Check if client side heal options are disabled by default
- Test data heal by default method
- Explicit data heal by calling self heal command
Change-Id: I3be9001fc1cf124a4cf5a290cee985e166c0b685
Signed-off-by: nchilaka <nchilaka@redhat.com>
| |
For non-tiered volume types, a few test cases collect both
hot_tier_bricks and cold_tier_bricks while bringing bricks offline,
which is not needed.
Removing the tier kwarg in one of the tests.
Collecting only the bricks of the particular volume instead of the
hot and cold tier bricks, as mentioned below.
Removing the section below:
```
bricks_to_bring_offline_dict = (select_bricks_to_bring_offline(
self.mnode, self.volname))
bricks_to_bring_offline = list(filter(None, (
bricks_to_bring_offline_dict['hot_tier_bricks'] +
bricks_to_bring_offline_dict['cold_tier_bricks'] +
bricks_to_bring_offline_dict['volume_bricks'])))
```
Modifying as below for bringing bricks offline.
```
bricks_to_bring_offline = bricks_to_bring_offline_dict['volume_bricks']
```
Change-Id: I4f59343b380ced498516794a8cc7c968390a8459
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
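The change can be illustrated with a stub for the dict returned by
select_bricks_to_bring_offline(); for non-tiered volumes the tier lists
are empty, so both approaches yield the same bricks (brick paths below
are hypothetical):

```python
# Illustration of the change, using a stub for the dict returned by
# select_bricks_to_bring_offline(). For non-tiered volumes the hot/cold
# tier lists are empty, so only 'volume_bricks' matters.

bricks_to_bring_offline_dict = {
    'hot_tier_bricks': [],
    'cold_tier_bricks': [],
    'volume_bricks': ['server1:/bricks/brick0', 'server2:/bricks/brick1'],
}

# Old approach: concatenate all three lists and filter out empties.
old = list(filter(None, (
    bricks_to_bring_offline_dict['hot_tier_bricks'] +
    bricks_to_bring_offline_dict['cold_tier_bricks'] +
    bricks_to_bring_offline_dict['volume_bricks'])))

# New approach: take the volume bricks directly.
new = bricks_to_bring_offline_dict['volume_bricks']

print(old == new)  # True for non-tiered volumes
```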
| |
Problem:
A pattern was observed where test cases that were passing threw
errors in tearDownClass. This was because docleanup ran before
tearDownClass, and when tearDownClass executed it failed as the
setup was already cleaned up.
Solution:
Move the cleanup code from tearDownClass to tearDown, and move the
volume setup code from setUpClass to setUp.
Change-Id: I37c6fde1f592224c114148f0ed7215b2494b4502
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
| |
Failing in CentOS-CI due to this bug
https://bugzilla.redhat.com/show_bug.cgi?id=1768380
Description:
Test Script which verifies that the server side healing must happen
only if the heal daemon is running on the node where source brick
resides.
* Create and start the Replicate volume
* Check the glustershd processes - only 1 glustershd should be listed
* Bring down the bricks without affecting the cluster
* Create files on the volume
* Kill glustershd on the node where the source brick resides
* Bring up the bricks which were killed in the previous step
* Check the heal info - it must show pending heals; heal shouldn't
happen since glustershd is down on the source node
* Issue heal
* Trigger client side heal
* Heal should complete successfully
Change-Id: I1fba01f980a520b607c38d8f3371bcfe086f7783
Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>,
Milind Waykole <milindwaykole96@gmail.com>
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Signed-off-by: Milind Waykole <milindwaykole96@gmail.com>
Signed-off-by: Pranav <prprakas@redhat.com>
| |
Description:
Test Script to verify the glustershd server vol file
has only entries for replicate volumes.
Testcase steps:
1. Create multiple volumes and start all volumes
2. Check the glustershd processes (only 1 glustershd should be listed)
3. Do replace brick on the replicate volume
4. Confirm that the brick is replaced
5. Check the glustershd processes (only 1 glustershd should be listed
and the pid should be different)
6. The glustershd server vol should be updated with the new bricks
Change-Id: I09245c8ff6a2b31a038749643af294aa8b81a51a
Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>,
Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
| |
The sys library was added to all the testcases to fetch
`sys.version_info.major`, i.e. the major version of the Python with
which glusto and glusto-tests are installed, and to run the I/O
script (file_dir_ops.py) with that version of Python. This creates a
problem: older jobs running on older platforms won't run the way
they used to. For example, if the older platform has python2 by
default and we run the tests from a slave which has python3, it
fails, and vice versa.
The problem is introduced by the code below:
```
cmd = ("/usr/bin/env python%d %s create_deep_dirs_with_files "
"--dirname-start-num 10 --dir-depth 1 --dir-length 1 "
"--max-num-of-dirs 1 --num-of-files 5 %s" % (
sys.version_info.major, self.script_upload_path,
self.mounts[0].mountpoint))
```
The solution to this problem is to change `python%d` to `python`,
which lets the code run with whatever version of Python is available
on that client. This enables us to run any version of the framework
on both older and newer platforms.
Change-Id: I7c8200a7578f03c482f0c6a91832b8c0fdb33e77
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
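The before/after command construction can be sketched side by side; the
script path and mount point below are placeholder values:

```python
# Before/after sketch of the command construction. The script path and
# mount point are placeholder values.
import sys

script = "/usr/share/glustolibs/io/scripts/file_dir_ops.py"
mountpoint = "/mnt/testvol"

# Old: pins the I/O script to the slave's Python major version, which
# breaks when the client has a different default Python.
old_cmd = ("/usr/bin/env python%d %s create_deep_dirs_with_files %s"
           % (sys.version_info.major, script, mountpoint))

# New: defers to whatever `python` resolves to on the client.
new_cmd = ("/usr/bin/env python %s create_deep_dirs_with_files %s"
           % (script, mountpoint))

print(new_cmd.startswith("/usr/bin/env python "))  # True
```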
| |
Added steps to reset volume and resolved teardown class
cleanup failures.
Change-Id: I06b0ed8810c9b064fd2ee7c0bfd261928d8c07db
| |
Please refer to the commit message of the below patch:
https://review.gluster.org/#/c/glusto-tests/+/23902/
Change-Id: Icf32bb20b7eaf2eabb07b59be813997a28872565
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
| |
Change-Id: Ic14be81f1cd42c470d2bb5c15505fc1bc168a393
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
| |
Change-Id: Id4df838565ec3f9ad765cf223bb5115e43dac1c5
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
| |
Change-Id: I12b5586bdcef128df64fcd8a0ba80f193395f313
Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Signed-off-by: Milind Waykole <milindwaykole96@gmail.com>
| |
Change-Id: I3d749c5d131973217d18fc1158236806645e4ab4
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
was not handled by teardown class
Change-Id: I789adbf0909c5edd0a2eb19ed4ccebcb654700fd
Signed-off-by: Anees Patel <anepatel@redhat.com>
| |
Change-Id: I04ffdedb1ce25ab05239c77b4dd5893ce18b32f7
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
Change-Id: I9f33c84be39bdca85909c2ae337bd4482532d061
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
folder
Change-Id: I1fb4497ac915c7a93f223ef4e6946eeb4dcd0e90
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
Change-Id: I6a95e82977f4ac6092716c064597931768023710
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
Change-Id: I4560b425aa470da27631eb6401e3775fb90c2330
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
Change-Id: I0143a4ffa16fa0c3ea240f5debbdc5519a9e5445
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
Change-Id: I6462446cce6c06a7559028eee1a6968af093c959
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
Change-Id: Id32859df069106d6c9913147ecfa8d378dfa8e9d
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
Change-Id: Id9face2267b9f702bb2b0b5b3c294b3e4082cdf7
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
Change-Id: I46fc2feffe6443af6913785d67bf310838532421
| |
No functional change, just make the tests a bit more readable.
It could be moved to a decorator later on, wrapping tests.
Change-Id: I484bb8b46907ee8f33dfcf4c960737a21819cd6a
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
| |
Change-Id: I634d11cb582521b03f0bb481172e2f4f68d1c2ce
Signed-off-by: karthik-us <ksubrahm@redhat.com>
| |
Change-Id: I525f50a42e29270d9ac445d62e12c7e7e25a7ae3
Signed-off-by: karthik-us <ksubrahm@redhat.com>
| |
Description:
Test Script to verify the glustershd server vol file
has only entries for replicate volumes
* Create multiple volumes and start all volumes
* Check the glustershd processes - Only 1 glustershd should be listed
* Check the glustershd server vol file - it should contain entries
only for the replicate volumes involved
* Add bricks to the replicate volume - it should convert to
distributed-replicate
* Check the glustershd server vol file - the newly added bricks
should be present
* Check the glustershd processes - only 1 glustershd should be listed
Change-Id: Ie110a0312e959e23553417975aa2189ed01be6a4
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
| |
Change-Id: I32fefdab769e5a361e4dcb5f1328b2c8da2e4f1a
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
Change-Id: I24e2baddc4f5cdb2c9ae0ab6b9020b2eb9b42a05
Signed-off-by: Karan Sandha <ksandha@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
Description:
Test Script which verifies that the existing glustershd should take
care of self healing
* Create and start the Replicate volume
* Check the glustershd processes - note the pids
* Bring down one brick (let's say brick1) without affecting the cluster
* Create 5000 files on the volume
* Bring up brick1, which was killed in the previous step
* Check the heal info - proactive self healing should start
* Bring down brick1 again
* Wait for 60 sec and bring up brick1
* Check the glustershd processes - the pids should be different
* Monitor the heal till it is complete
Change-Id: Ib044ec60214171f136cc4c2f9225b8fe62e6214d
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
| |
Change-Id: I221b49315db8bc02873fc133ff12837954f0c232
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
Change-Id: Ie4a4b323e2b7e57e3896550b6f9b7db28fba03b7
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
Change-Id: I84b789f9c0204ca0f0efb40a9a01215902c0ee1d
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
"full" (default)
Change-Id: If916d20b0d7c9ded6fb1fc929d9ff1e7719d9594
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
"diff" (default)
Change-Id: I34a196e8fc764d87e877a082be2b0575bb1b3b40
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
"diff" (heal command)
Change-Id: Id310e0c17a872d8586ad8c7de79f1f68b93edb0a
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
Change-Id: Id0d9e468aaf0061e9ff0f5cc534c06017e97b793
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
| |
Change-Id: I14609030983d4485dbce5a4ffed1e0353e3d1bc7
| |
Change-Id: Iaecdf6ad44677891340713a5c945a4bdc30ce527
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
|
|
Change-Id: If92b6f756f362cb4ae90008c6425b6c6652e3758
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
|