| Commit message | Author | Age | Files | Lines |
Change-Id: Ibf0391a2f7709fb08326f57a0c4c899e28faf62f
Signed-off-by: Manisha Saini <msaini@redhat.com>
Test: test_metadata_heal_from_shd
Steps:
1. Create, mount and run IO on volume
2. Set `self-heal-daemon` to `off` and bring files into metadata
split brain
3. Set `self-heal-daemon` to `on` and wait for heal completion
4. Validate arequal checksum on backend bricks
Test: test_metadata_heal_from_heal_cmd
Steps:
1. Create, mount and run IO on volume
2. Set `self-heal-daemon` to `off` and bring files into metadata
split brain
3. Set `self-heal-daemon` to `on`, invoke `gluster vol <vol> heal`
4. Validate arequal checksum on backend bricks
Test: test_data_heal_from_shd
Steps:
1. Create, mount and run IO on volume
2. Set `self-heal-daemon` to `off` and bring files into data
split brain
3. Set `self-heal-daemon` to `on` and wait for heal completion
4. Validate arequal checksum on backend bricks
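The "wait for heal completion" step in the tests above is, in practice, a poll loop with a timeout. A minimal generic sketch (the helper name and timeout values are illustrative, not the glusto-tests API):

```python
import time

def wait_for(predicate, timeout=300, interval=5):
    """Poll `predicate` until it returns True or `timeout` seconds pass."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Example: pretend heal finishes on the third status check.
checks = iter([False, False, True])
assert wait_for(lambda: next(checks), timeout=10, interval=0)
```

A real test would pass a predicate that shells out to `gluster volume heal <vol> info` and checks for zero pending entries.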
Change-Id: I24411d964fb6252ae5b621c6569e791b54dcc311
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Change-Id: I626914130554cccf1008ab43158d7063d131b870
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
Change-Id: I3920be66ac84fe700c4d0d6a1d2c1750efb43335
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
Steps:
1. Create a volume and mount it
2. Start gluster compilation
3. Bring down redundant bricks
4. Wait for compilation to complete
5. Bring up bricks
6. Check if mountpoint is accessible
7. Delete glusterfs from the mountpoint and
start gluster compilation again
8. Bring down redundant bricks
9. Wait for compilation to complete
10. Bring up bricks
11. Check if mountpoint is accessible
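"Redundant bricks" here refers to a dispersed (erasure-coded) volume: with a disperse count of d and a redundancy count of r, the mount stays accessible while at most r bricks are down. A toy check of that invariant (a conceptual model, not gluster code; the 4+2 layout is just an example):

```python
def volume_accessible(disperse_count, redundancy_count, bricks_down):
    """A dispersed volume needs disperse_count - redundancy_count
    bricks alive to serve IO."""
    return disperse_count - bricks_down >= disperse_count - redundancy_count

# A 4+2 layout: taking down the 2 redundant bricks is safe...
assert volume_accessible(6, 2, bricks_down=2)
# ...but losing one more brick makes the mount inaccessible.
assert not volume_accessible(6, 2, bricks_down=3)
```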
Change-Id: Ic5a272fba7db9707c4acf776d5a505a31a34b915
Signed-off-by: ubansal <ubansal@redhat.com>
Steps:
1. Create a volume and mount it
2. Create a deep directory tree and a file in each directory
3. Rename the files
4. Check if the brick path contains old files
5. Delete all data
6. Check that .glusterfs/indices/xattrop is empty
7. Check if the brick path is empty
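Step 6's emptiness check boils down to a plain directory listing. A sketch (the path here is a temporary stand-in for `<brick>/.glusterfs/indices/xattrop`; on real bricks the directory may also hold a base `xattrop-*` placeholder entry that a test would have to exclude):

```python
import os
import tempfile

def dir_is_empty(path):
    """True if `path` contains no entries at all."""
    return not os.listdir(path)

# Stand-in for <brick>/.glusterfs/indices/xattrop on a real brick.
brick_index = tempfile.mkdtemp()
assert dir_is_empty(brick_index)

# A leftover index entry would mean heal is still pending.
open(os.path.join(brick_index, "pending-gfid"), "w").close()
assert not dir_is_empty(brick_index)
```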
Change-Id: I04e50ef94379daa344be1ae1d19cf2d66f8f460b
Signed-off-by: ubansal <ubansal@redhat.com>
Steps:
1. Create a volume and mount it
2. Disable heal and cluster-quorum-count
3. Bring down one data and arbiter brick from one
subvol
4. Write IO and validate it
5. Bring up bricks
6. Bring down another data brick and arbiter brick
from the same subvol
7. Write IO and validate it
8. Bring up bricks
9. Check if split-brain is created
10. Write IO -> should fail
11. Enable heal and cluster-quorum-count
12. Write IO -> should fail
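The client-quorum behaviour these steps exercise can be shown with a toy model (a conceptual sketch, not gluster's implementation): with quorum disabled, writes keep succeeding while bricks are down, which is what lets the surviving bricks of the replica diverge into split-brain.

```python
def writes_allowed(bricks_up, quorum_count):
    """Toy model of fixed-count client quorum: a write needs at
    least `quorum_count` bricks of the replica to be up.
    quorum_count=None models quorum disabled."""
    if quorum_count is None:
        return True
    return bricks_up >= quorum_count

# Steps 3-7: quorum disabled, so IO succeeds even with one brick up,
# letting each surviving brick accept conflicting writes.
assert writes_allowed(bricks_up=1, quorum_count=None)
# With quorum enforced, the same degraded write would be blocked.
assert not writes_allowed(bricks_up=1, quorum_count=2)
```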
Change-Id: I229b58c1bcd70dcd87d35dc410e12f51b032b9c4
Signed-off-by: ubansal <ubansal@redhat.com>
Steps:
1. Create a volume and mount it
2. Start writing and reading data on a file
3. Bring down 1 brick
4. Validate reads and writes to the file
5. Bring up the brick and start healing
6. Monitor healing and completion
7. Bring down the 2nd brick
8. Read and write to the same file
9. Bring up the brick and start healing
10. Monitor healing and completion
11. Check for split-brain
Change-Id: Ib03a1ad7ee626337904b084e85eee38750fea141
Signed-off-by: ubansal <ubansal@redhat.com>
Steps:
1. Create a volume and mount it
2. Create a directory, say d1
3. Create deep directories and files in d1
4. Bring down redundant bricks
5. Delete d1
6. Create d1 and the same data again
7. Bring the bricks up
8. Monitor heal
9. Verify split-brain
Change-Id: I778fab6bf6d9f81fca79fe18285073e1f7ccc7e7
Signed-off-by: ubansal <ubansal@redhat.com>
Steps:
- Create a volume and mount it
- Disable metadata, data and entry heal
- Create files and take arequal of mount point
- Bring down redundant bricks
- Append data and create hardlinks
- Bring up bricks
- Check healing and split-brain
- Bring down redundant bricks
- Truncate data
- Check that file and hardlink stats match
- Bring up bricks
Change-Id: I9b26f2fb26d72b71abd63a25ef8d9173f32997d4
Signed-off-by: ubansal <ubansal@redhat.com>
Steps:
- Create a volume and mount it
- Bring bricks offline
- Write 50k files
- Bring bricks online
- Monitor heal completion
- Check for split-brain
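Several of these tests finish by comparing arequal checksums across replicas. Conceptually, arequal reduces a whole tree (names plus contents) to a single checksum; the sketch below is a rough stand-in for that idea, not the real arequal tool:

```python
import hashlib
import os

def tree_checksum(root):
    """Rough stand-in for arequal: hash relative paths and file
    contents of a tree into one hex digest, in a stable order."""
    digest = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()  # make the walk order deterministic
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            digest.update(os.path.relpath(path, root).encode())
            with open(path, "rb") as handle:
                digest.update(handle.read())
    return digest.hexdigest()
```

After a successful heal, every brick of a replica should yield the same digest; a mismatch points at divergent (possibly split-brain) copies.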
Change-Id: I40739effdfa1c1068fa0628467154b9a667161a3
Signed-off-by: ubansal <ubansal@redhat.com>
For the non-tiered volume types, a few test cases were collecting
both hot_tier_bricks and cold_tier_bricks while bringing bricks
offline, even though there is no need to collect hot and cold tier
bricks. This patch removes the tier kwarg in one of the tests and
collects only the bricks of the particular volume, as shown below.
The following section is removed:
```
bricks_to_bring_offline_dict = (select_bricks_to_bring_offline(
self.mnode, self.volname))
bricks_to_bring_offline = list(filter(None, (
bricks_to_bring_offline_dict['hot_tier_bricks'] +
bricks_to_bring_offline_dict['cold_tier_bricks'] +
bricks_to_bring_offline_dict['volume_bricks'])))
```
The code for bringing bricks offline is modified as below:
```
bricks_to_bring_offline = bricks_to_bring_offline_dict['volume_bricks']
```
Change-Id: I4f59343b380ced498516794a8cc7c968390a8459
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Problem:
Testcase test_mount_point_not_go_to_rofs fails
every time in the CI runs with the below traceback:
> ret = wait_for_io_to_complete(self.all_mounts_procs, self.mounts)
tests/functional/arbiter/test_mount_point_while_deleting_files.py:137:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
build/bdist.linux-x86_64/egg/glustolibs/io/utils.py:290: in wait_for_io_to_complete
???
/usr/lib/python2.7/site-packages/glusto/connectible.py:247: in async_communicate
stdout, stderr = p.communicate()
/usr/lib64/python2.7/subprocess.py:800: in communicate
return self._communicate(input)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <subprocess.Popen object at 0x7febb64238d0>, input = None
def _communicate(self, input):
if self.stdin:
# Flush stdio buffer. This might block, if the user has
# been writing to .stdin in an uncontrolled fashion.
> self.stdin.flush()
E ValueError: I/O operation on closed file
/usr/lib64/python2.7/subprocess.py:1396: ValueError
This is because self.io_validation_complete is never
set to True in the testcase.
Fix:
Add code to set self.io_validation_complete to True,
and move code from tearDownClass to tearDown.
Also modify the logic to not add both clients to self.mounts.
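The fix follows a standard pattern: record whether the test body already validated the IO, and have tearDown wait only when it did not. A stripped-down sketch of that pattern (the class and helpers here are dummies, not the real glusto base class):

```python
class IOTrackingTest(object):
    def setUp(self):
        self.all_mounts_procs = []
        self.io_validation_complete = False

    def run_and_validate_io(self):
        # ... start async IO and validate it, then record the fact ...
        self.io_validation_complete = True  # the step that was missing

    def tearDown(self):
        # Wait only if the test body never validated the IO itself;
        # waiting twice on finished procs is what raised
        # "ValueError: I/O operation on closed file".
        if not self.io_validation_complete:
            self.wait_for_io_to_complete(self.all_mounts_procs)

    def wait_for_io_to_complete(self, procs):
        raise AssertionError("would double-wait on finished procs")

t = IOTrackingTest()
t.setUp()
t.run_and_validate_io()
t.tearDown()  # flag is set, so no second wait is attempted
```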
Change-Id: I51ed635e713838ee3054c4d1dd8c6cdc16bbd8bf
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
For the non-tiered volume types, a few test cases were collecting
both hot_tier_bricks and cold_tier_bricks while bringing bricks
offline, even though there is no need to collect hot and cold tier
bricks. This patch removes the hot and cold tiered bricks and
collects only the bricks of the particular volume, as shown below.
The following section is removed:
```
bricks_to_bring_offline_dict = (select_bricks_to_bring_offline(
self.mnode, self.volname))
bricks_to_bring_offline = list(filter(None, (
bricks_to_bring_offline_dict['hot_tier_bricks'] +
bricks_to_bring_offline_dict['cold_tier_bricks'] +
bricks_to_bring_offline_dict['volume_bricks'])))
```
The code for bringing bricks offline is modified as below:
```
bricks_to_bring_offline = bricks_to_bring_offline_dict['volume_bricks']
```
Change-Id: Icb1dc4a79cf311b686d839f2c9390371e42142f7
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Issue:
In Python 3, assertItemsEqual is no longer supported and is replaced
by assertCountEqual (refer [1]). Because of this, a few arbiter tests
are failing.
[1] https://docs.python.org/2/library/unittest.html#unittest.TestCase.assertItemsEqual
Fix:
The replacement assertCountEqual is not supported in Python 2, so the
fix is to replace assertItemsEqual with
assertEqual(sorted(expected), sorted(actual))
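The portable replacement works on both Python 2 and 3 for orderable items; a minimal illustration outside the test framework:

```python
# assertItemsEqual(expected, actual)  -> Python 2 only
# assertCountEqual(expected, actual)  -> Python 3 only
# Portable equivalent used by the fix:
expected = ['brick2', 'brick1', 'brick3']
actual = ['brick1', 'brick3', 'brick2']
assert sorted(expected) == sorted(actual)  # order-insensitive match

# Unlike a plain == on the lists, ordering differences don't matter,
# but element counts still do:
assert sorted(['a', 'a', 'b']) != sorted(['a', 'b'])
```

Note the one caveat: `sorted()` requires the items to be orderable, which holds for the brick-path strings these tests compare.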
Change-Id: Ic1d599fa31f85a8a41598b6c245056a6ff01e000
Signed-off-by: Pranav <prprakas@redhat.com>
Test case:
* Create IO
* Calculate arequal from mount
* kill glusterd process and glustershd process on arbiter nodes
* Delete data from backend from the arbiter nodes
* Start glusterd process and force start the volume
to bring the processes online
* Check if heal is completed
* Check for split-brain
* Calculate arequal checksum and compare it
Change-Id: I41192134530ec42db3398ae97e4f328b77e529d1
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Pranav <prprakas@redhat.com>
Problem:
`g.rpyc_get_connection()` has a limitation where it can't
convert python2 calls to python3 calls. Due to this a large
number of testcases fail when executed from a python2 machine
on a python3-only setup or vice versa, with the below stack trace:
```
E ========= Remote Traceback (1) =========
E Traceback (most recent call last):
E File "/root/tmp.tL8Eqx7d8l/rpyc/core/protocol.py", line 323, in _dispatch_request
E res = self._HANDLERS[handler](self, *args)
E File "/root/tmp.tL8Eqx7d8l/rpyc/core/protocol.py", line 591, in _handle_inspect
E if hasattr(self._local_objects[id_pack], '____conn__'):
E File "/root/tmp.tL8Eqx7d8l/rpyc/lib/colls.py", line 110, in __getitem__
E return self._dict[key][0]
E KeyError: (b'rpyc.core.service.SlaveService', 94282642994712, 140067150858560)
```
Solution:
Write generic code which can run from python2 to python3
and vice versa.
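One generic pattern that avoids rpyc's cross-version object proxying is to ship the remote logic as a self-contained snippet and run it under the remote interpreter, exchanging plain text instead of live objects. A local sketch using subprocess (against remote hosts, a remote-execution helper such as glusto's `g.run` would play this role):

```python
import subprocess
import sys

def run_snippet(code):
    """Run `code` in a fresh interpreter and return its stdout.
    Only text crosses the boundary, so the caller and the executing
    interpreter may be different Python major versions."""
    out = subprocess.check_output([sys.executable, "-c", code])
    return out.decode().strip()

# The snippet is version-agnostic source, not a proxied object.
assert run_snippet("print(len('gluster'))") == "7"
```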
Change-Id: I7783485a784ef4b57f626f77e6012d918fee6032
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to commit message of patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/24140/
Change-Id: Ic0b3b1333ac7b1ae02f701943d49510e6d46c259
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
As distributed-arbiter and arbiter weren't present before patch [1],
arbiter and distributed-arbiter volumes were created by the hack shown
below, where a distributed-replicated or replicated volume's
configuration was modified to create an arbiter volume.
```
@runs_on([['replicated', 'distributed-replicated'],
['glusterfs', 'nfs']])
class TestSelfHeal(GlusterBaseClass):
.................
@classmethod
def setUpClass(cls):
...............
# Overriding the volume type to specifically test the volume
# type Change from distributed-replicated to arbiter
if cls.volume_type == "distributed-replicated":
cls.volume['voltype'] = { 'type': 'distributed-replicated',
'dist_count': 2,
'replica_count': 3,
'arbiter_count': 1,
'transport': 'tcp'}
```
This code is now updated to remove the volume-configuration
override and to just add arbiter or distributed-arbiter in
`@runs_on([],[])`, as shown below:
```
@runs_on([['replicated', 'distributed-arbiter'],
['glusterfs', 'nfs']])
class TestSelfHeal(GlusterBaseClass):
```
Links:
[1] https://github.com/gluster/glusto-tests/commit/08b727842bc66603e3b8d1160ee4b15051b0cd20
Change-Id: I4c44c2f3506bd0183fd991354fb723f8ec235a4b
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I2ba0c81dad41bdac704007bd1780b8a98cb50358
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Testcase steps:
1. Set the volume options
   "metadata-self-heal": "off"
   "entry-self-heal": "off"
   "data-self-heal": "off"
   "self-heal-daemon": "off"
2. Bring down all brick processes from the selected set
3. Create IO (50k files)
4. Get arequal before bringing bricks online
5. Bring bricks online
6. Set the volume option
   "self-heal-daemon": "on"
7. Check for daemons
8. Start healing
9. Check if heal is completed
10. Check for split-brain
11. Get arequal after bringing bricks online and compare with
    arequal before bringing bricks online
12. Add bricks to the volume
13. Do rebalance and wait for it to complete
14. Get arequal after adding bricks and compare with
    arequal after bringing bricks online
Change-Id: I1598c4d6cf98ce99249e85fc377b9db84886f284
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I1eacfd74c730d28e36bb8f7e3a1f574edc3d13c7
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: Id0cdda175865c84bef917211560acee8ea10fe7b
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to the commit message of the below patch:
https://review.gluster.org/#/c/glusto-tests/+/23902/
Change-Id: Icf32bb20b7eaf2eabb07b59be813997a28872565
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to the commit message of the below patch:
https://review.gluster.org/#/c/glusto-tests/+/23902/
Change-Id: I0d2eeb978c6757d6d910ebfe21b07811bf74b80a
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I5a60646b984557ed024cb4b3a8088ce7dfb7622c
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
Change-Id: Ieecf4707ee6cb7b3c58380306bf105e282986b1b
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
Earlier in the testcase we were turning off shd, which is not
correct; we have to turn off only the client-side heal options
mentioned below:
metadata-self-heal
entry-self-heal
data-self-heal
After renaming files we have to turn these options back on
while doing a lookup from the client.
Change-Id: I8c76abb8e79620c412e5991f5d8255b6b2a850e8
Signed-off-by: Milind Waykole <milindwaykole96@gmail.com>
Change-Id: I4d056b94b4ea59beee7eb24e7e5d5f65d7256b4a
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
RHGS 3.5
Change-Id: I500912b5217b675f9fdff4fe1cb518b465de245c
Signed-off-by: Anees Patel <anepatel@redhat.com>
When eager-lock is on and two writes happen in parallel on an FD,
the arbiter becoming the source of heal is observed; hence the IO
pattern is modified. Also, since this is a race condition, the same
remove-brick cycle is executed thrice per BZ 1401969. This patch
also ensures that multiple clients write to different files/dirs,
with no two clients writing to the same file.
Change-Id: If0003afb675bbcf9f6b555b43e9a11e4def5435c
Signed-off-by: Anees Patel <anepatel@redhat.com>
was not handled by teardown class
Change-Id: I789adbf0909c5edd0a2eb19ed4ccebcb654700fd
Signed-off-by: Anees Patel <anepatel@redhat.com>
Change-Id: I04ffdedb1ce25ab05239c77b4dd5893ce18b32f7
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I9f33c84be39bdca85909c2ae337bd4482532d061
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I6a95e82977f4ac6092716c064597931768023710
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: Icd8586f5df4c34a5f0edd4b5a69f864bd2984ade
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I0143a4ffa16fa0c3ea240f5debbdc5519a9e5445
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I6462446cce6c06a7559028eee1a6968af093c959
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: Id32859df069106d6c9913147ecfa8d378dfa8e9d
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: Id9face2267b9f702bb2b0b5b3c294b3e4082cdf7
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
For the current workload the rebalance timeout is not sufficient,
so the timeout is changed to match the workload.
Change-Id: Iad5bf9d828ad12a6867847a02c19e0ebe6b7741e
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Change-Id: Ib0f279fc03bbab23b458220db9ccd55b4b1bdcab
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I21f3d8acce35acb852e36c2747d67bc8ed235e9d
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I46fc2feffe6443af6913785d67bf310838532421
Change-Id: Ie25f33d52730ee5482d656bf5a75f1d983cca78e
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: Icbc522a6e23c4dc76547936d8595723d3eb2d81a
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
No functional change, just make the tests a bit more readable.
It could be moved to a decorator later on, wrapping tests.
Change-Id: I484bb8b46907ee8f33dfcf4c960737a21819cd6a
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
Change-Id: I9a0cb923c7e3ec0146c2b3b9bf0dcafe6ab892e8
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I0360e6590425aea48d7acf2ddb10d9fbfe9fdeef
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I29eefb9ba5bbe46ba79267b85fb8814a14d10b00
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>