| Commit message | Author | Age | Files | Lines |
Steps:
1. Create a replicated/distributed-replicate volume and mount it
2. Set data/metadata/entry-self-heal to off and
data-self-heal-algorithm to diff
3. Create a few files with some data inside a directory
4. Check the arequal of the subvol; all the bricks in the subvol should
have the same checksum
5. Bring down a brick from the subvol and validate it is offline
6. Modify the data of the existing files under the directory
7. Bring the brick back online and wait for heal to complete
8. Check the arequal of the subvol; all the bricks in the same subvol
should have the same checksum
Change-Id: I568a932c6e1db4a9084c01556c5fcca7c8e24a49
Signed-off-by: karthik-us <ksubrahm@redhat.com>
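
The arequal validations in steps 4 and 8 reduce to checking that every brick in the subvol reports the same checksum. A minimal sketch of that check (the checksum values are hypothetical placeholders):

```python
def all_bricks_consistent(checksums):
    """Return True when every brick in the subvol reports the same
    arequal checksum, i.e. the data is identical on all replicas."""
    return len(set(checksums)) == 1

# After heal completes, checksums gathered from each brick should match:
assert all_bricks_consistent(["c0ffee01", "c0ffee01", "c0ffee01"])
# A brick that missed the heal breaks consistency:
assert not all_bricks_consistent(["c0ffee01", "deadbeef", "c0ffee01"])
```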
1. Create a replicated/distributed-replicate volume and mount it
2. Start IO from the clients
3. Bring down a brick from the subvol and validate it is offline
4. Bring back the brick online and wait for heal to complete
5. Once the heal is completed, expand the volume.
6. Trigger rebalance and wait for rebalance to complete
7. Validate the IO: there should be no errors during the steps
performed from step 2
8. Check the arequal of the subvol; all the bricks in the same subvol
should have the same checksum
Note: This test is specifically for replicated volume types.
Change-Id: I2286e75cbee4f22a0ed14d6c320a4496dc3c3905
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
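
Steps 5 and 6 map onto the standard gluster CLI; a sketch of the commands involved, with hypothetical volume and brick names (a replica-3 volume is expanded in multiples of three bricks):

```python
volname = "testvol"                      # hypothetical volume name
new_bricks = ["server4:/bricks/brick1",  # hypothetical new bricks
              "server5:/bricks/brick1",
              "server6:/bricks/brick1"]

# Step 5: expand the volume once heal has completed
expand_cmd = "gluster volume add-brick %s %s" % (volname, " ".join(new_bricks))
# Step 6: trigger rebalance, then poll status until it completes
rebalance_start = "gluster volume rebalance %s start" % volname
rebalance_status = "gluster volume rebalance %s status" % volname
```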
Change-Id: I465fefeae36a5b700009bb1d6a3c6639ffafd6bd
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
Three Scenarios:
- Simulate gfid split brain files under a directory
- Resolve gfid splits using `source-brick`, `bigger-file` and
`latest-mtime` methods
- Validate all the files are healed and data is consistent
Change-Id: I8b143f341c0db2f32086ecb6878cbfe3bdb247ce
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Steps:
- Create and mount a replicated volume and disable quorum,
self-heal daemon
- Create ~10 files from the mount point and simulate data, metadata
split-brain for 2 files each
- Create a dir with some files and simulate entry/gfid split-brain
- Validate the volume successfully recognizes the split-brain
- Validate a lookup on split-brain files fails with an EIO error on the mount
- Validate the `heal info` and `heal info split-brain` commands show only
the files that are in split-brain
- Validate new files and dirs can be created from the mount
Change-Id: I8caeb284c53304a74473815ae5181213c710b085
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
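
The EIO validation above boils down to catching the errno from a failed lookup; a minimal sketch (the helper name is made up here):

```python
import errno
import os

def lookup_fails_with_eio(path):
    """Return True only when a lookup (stat) on `path` fails with EIO,
    which is what a file in split-brain is expected to report."""
    try:
        os.stat(path)
    except OSError as err:
        return err.errno == errno.EIO
    return False
```

A healthy file returns normally and a missing one fails with ENOENT, so both yield False; only a genuine split-brain lookup satisfies the check.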
- `str.rsplit` doesn't accept named args in py2
- Removed named arg to make it compatible with both versions
Change-Id: Iba287ef4c98ebcbafe55f2166c99aef0c20ed9aa
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
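
The incompatibility is easy to demonstrate: py2's `str.rsplit` accepts only positional arguments, while py3 also accepts `sep=`/`maxsplit=` keywords. Passing them positionally works on both (the brick string below is a hypothetical example):

```python
brick = "server1:/bricks/brick0/testvol_brick0"  # hypothetical brick id

# py2-incompatible: brick.rsplit(sep=":", maxsplit=1) -> TypeError on py2
# Portable form, positional on both versions:
node, path = brick.rsplit(":", 1)
```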
Test Steps:
1. Create a distributed-replicated(3X3)/distributed-arbiter(3X(2+1))
and mount it on one client
2. Kill 3 bricks corresponding to the 1st subvol
3. Unmount and remount the volume on the same client
4. Create deep dir from mount point 'dir1/subdir1/deepdir1'
5. Create files under dir1/subdir1/deepdir1; touch <filename>
6. Now bring all sub-vols up by volume start force
7. Validate backend bricks for dir creation, the subvol which is
offline will have no dirs created, whereas other subvols will have
dirs created from step 4
8. Trigger heal from client by '#find . | xargs stat'
9. Verify that the directory entries are created on all back-end bricks
10. Create new dir (dir2) on location dir1/subdir1/deepdir1
11. Trigger rebalance and wait for the completion
12. Check backend bricks for all entries of dirs
Change-Id: I4d8f39e69c84c28ec238ea73935cd7ca0288bffc
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Test Steps:
1) Create 1x3 volume and fuse mount the volume
2) On mount created a dir dir1
3) Pkill glusterfsd on node n1 (b2 on node2 and b3 on node3 stay up)
4) touch f{1..10} on the mountpoint
5) b2 and b3 xattrs would be blaming b1 as files are created while
b1 is down
6) Reset the b3 xattrs to NOT blame b1 by using setfattr
7) Now pkill glusterfsd of b2 on node2
8) Restart glusterd on node1 to bring up b1
9) Now brick b1 is online, b2 down, b3 online
10) touch x{1..10} under dir1 itself
11) Again reset xattr on node3 of b3 so that it doesn't blame b2,
as done for b1 in step 6
12) Do restart glusterd on node2 hosting b2 to bring all bricks online
13) Check for heal info, split-brain and arequal for the bricks
Change-Id: Ieea875dd7243c7f8d2c6959aebde220508134d7a
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
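
The xattr resets in steps 6 and 11 follow AFR's changelog convention (`trusted.afr.<volname>-client-<index>`); a sketch of building the reset command, with hypothetical volume and file names:

```python
def reset_afr_xattr_cmd(volname, client_index, file_path):
    """Build the setfattr command that zeroes the AFR changelog xattr
    through which this brick blames client <client_index>."""
    xattr = "trusted.afr.%s-client-%d" % (volname, client_index)
    value = "0x000000000000000000000000"  # 12 zero bytes: nothing pending
    return "setfattr -n %s -v %s %s" % (xattr, value, file_path)

# Step 6: on b3, stop blaming b1 (client index 0) for file f1
cmd = reset_afr_xattr_cmd("testvol", 0, "/bricks/b3/dir1/f1")
```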
- Validate `heal info` returns before timeout with IO
- Validate `heal info` returns before timeout with IO and brick down
- Validate data heal on file append in AFR, arbiter
- Validate entry heal on file append in AFR, arbiter
Change-Id: I803b931cd82d97b5c20bd23cd5670cb9e6f04176
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
1. Create volume and create files/dirs from mount point
2. With IO in progress execute reset-brick start
3. Now format the disk from back-end, using rm -rf <brick path>
4. Execute reset brick commit and check for the brick is online.
5. Issue volume heal using "gluster vol heal <volname> full"
6. Check arequal for all bricks to verify all backend bricks
including the reset brick have the same data
Change-Id: I06b93d79200decb25f863e7a3f72fc8e8b1c4ab4
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
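
Steps 2-5 map onto the standard `gluster volume reset-brick` and `heal` CLI; a sketch of the commands involved (volume and brick names are hypothetical):

```python
volname = "testvol"                 # hypothetical
brick = "server1:/bricks/brick0"    # brick being reset

# Step 2: take the brick out of service before wiping it
start_cmd = "gluster volume reset-brick %s %s start" % (volname, brick)
# Step 4: bring the (now empty) brick back; source and destination are
# the same brick when resetting in place
commit_cmd = "gluster volume reset-brick %s %s %s commit force" % (
    volname, brick, brick)
# Step 5: full heal to repopulate the wiped brick
heal_cmd = "gluster volume heal %s full" % volname
```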
Steps:
- Create, start and mount an arbiter volume in two clients
- Create two dirs, fill IO in the first dir and take note of the arequal
- Start a continuous IO from second directory
- Convert arbiter to x2 replicated volume (remove brick)
- Convert x2 replicated to x3 replicated volume (add brick)
- Wait for ~5 min for vol file to be updated on all clients
- Enable client side heal options and issue volume heal
- Validate heal completes with no errors and arequal of first dir
matches against initial checksum
Change-Id: I291acf892b72bc8a05e76d0cffde44d517d05f06
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Steps:
- Create and mount a replicated volume
- Kill one of the bricks and write IO from mount point
- Verify that the `gluster volume heal <volname> info healed` and
`gluster volume heal <volname> info heal-failed` commands result in an error
- Validate `gluster volume help` doesn't list `healed` and
`heal-failed` commands
Change-Id: Ie1c3db12cdfbd54914e61f812cbdac382c9c723e
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
- Remove unnecessary disablement of client-side heal options
- Check if client side heal options are disabled by default
- Test data heal by default method
- Explicit data heal by calling self heal command
Change-Id: I3be9001fc1cf124a4cf5a290cee985e166c0b685
Signed-off-by: nchilaka <nchilaka@redhat.com>
For the non-tiered volume types, a few test cases were collecting
both hot_tier_bricks and cold_tier_bricks while bringing bricks
offline, which is not needed for non-tiered volumes.
Removed the tier kwarg in one of the tests.
Dropped the hot and cold tier bricks and collected only the bricks
of the particular volume, as mentioned below.
Removing below section
```
bricks_to_bring_offline_dict = (select_bricks_to_bring_offline(
self.mnode, self.volname))
bricks_to_bring_offline = list(filter(None, (
bricks_to_bring_offline_dict['hot_tier_bricks'] +
bricks_to_bring_offline_dict['cold_tier_bricks'] +
bricks_to_bring_offline_dict['volume_bricks'])))
```
Modifying as below for bringing bricks offline.
```
bricks_to_bring_offline = bricks_to_bring_offline_dict['volume_bricks']
```
Change-Id: I4f59343b380ced498516794a8cc7c968390a8459
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
For the non-tiered volume types, a few test cases were collecting
both hot_tier_bricks and cold_tier_bricks while bringing bricks
offline, which is not needed for non-tiered volumes.
Dropped the hot and cold tier bricks and collected only the bricks
of the particular volume, as mentioned below.
Removing below section
```
bricks_to_bring_offline_dict = (select_bricks_to_bring_offline(
self.mnode, self.volname))
bricks_to_bring_offline = list(filter(None, (
bricks_to_bring_offline_dict['hot_tier_bricks'] +
bricks_to_bring_offline_dict['cold_tier_bricks'] +
bricks_to_bring_offline_dict['volume_bricks'])))
```
Modifying as below for bringing bricks offline.
```
bricks_to_bring_offline = bricks_to_bring_offline_dict['volume_bricks']
```
Change-Id: Icb1dc4a79cf311b686d839f2c9390371e42142f7
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Test Steps:
1. Create a volume and mount it on one client
2. git clone the glusterfs repo on the glusterfs volume
3. Set the performance options to off
4. Repeat step 2 on a different directory
Change-Id: Iaecce7cd14ecf84058c75847a037c6589d3833e9
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
The function "set_volume_options()" is a part of "volume_ops" lib,
but was wrongly imported from "volume_libs" lib earlier. Corrected
the import statement.
Change-Id: I7295684e7a564468ac42bbe1f00643ee150f769d
Signed-off-by: sayaleeraut <saraut@redhat.com>
Problem:
A pattern was observed where testcases
which were passing were throwing errors
in tearDownClass. This was because
docleanup was running before tearDownClass,
and when tearDownClass was executed it
failed as the setup was already cleaned.
Solution:
Change the code to tear down from tearDown
instead of tearDownClass, and move the
setup-volume code from setUpClass to setUp.
Change-Id: I37c6fde1f592224c114148f0ed7215b2494b4502
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Failing in CentOS-CI due to this bug
https://bugzilla.redhat.com/show_bug.cgi?id=1768380
Description:
Test Script which verifies that the server side healing must happen
only if the heal daemon is running on the node where source brick
resides.
* Create and start the Replicate volume
* Check the glustershd processes - Only 1 glustershd should be listed
* Bring down the bricks without affecting the cluster
* Create files on volume
* Kill glustershd on the node where the source brick resides
* Bring up the bricks which were brought down in the previous steps
* Check the heal info - it must show pending heals; heal
shouldn't happen since glustershd is down on the source node
* issue heal
* trigger client side heal
* heal should complete successfully
Change-Id: I1fba01f980a520b607c38d8f3371bcfe086f7783
Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>,
Milind Waykole <milindwaykole96@gmail.com>
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Signed-off-by: Milind Waykole <milindwaykole96@gmail.com>
Signed-off-by: Pranav <prprakas@redhat.com>
Testcase steps:
1.Turn off the self-heal daemon option
2.Create IO
3.Calculate arequal of the bricks and mount point
4.Bring down "brick1" process
5.Change the permissions of the directories and files
6.Change the ownership of the directories and files
7.Change the group of the directories and files
8.Bring back the brick "brick1" process
9.Execute "find . | xargs stat" from the mount point to trigger heal
10.Verify the changes in permissions are not self healed on brick1
11.Verify the changes in permissions on all bricks but brick1
12.Verify the changes in ownership are not self healed on brick1
13.Verify the changes in ownership on all the bricks but brick1
14.Verify the changes in group are not successfully self-healed
on brick1
15.Verify the changes in group on all the bricks but brick1
16.Turn on the option metadata-self-heal
17.Execute "find . | xargs md5sum" from the mount point to trigger heal
18.Wait for heal to complete
19.Verify the changes in permissions are self-healed on brick1
20.Verify the changes in ownership are successfully self-healed
on brick1
21.Verify the changes in group are successfully self-healed on brick1
22.Calculate arequal check on all the bricks and mount point
Change-Id: Ia7fb1b272c3c6bf85093690819b68bd83efefe14
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
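
The permission checks in steps 5 and 10-11 reduce to changing the mode from the mount and comparing `stat` output on each brick's backend copy; a self-contained sketch using a temp file in place of the mount:

```python
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o444)                        # step 5: change permissions
mode = stat.S_IMODE(os.stat(path).st_mode)   # what steps 10/11 compare
# On a healed brick the backend copy would report this same mode;
# on brick1 (before metadata heal) it would still show the old mode.

os.remove(path)
```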
Description:
Test Script to verify the glustershd server vol file
has only entries for replicate volumes.
Testcase steps:
1.Create multiple volumes and start all volumes
2.Check the glustershd processes(Only 1 glustershd
should be listed)
3.Do replace brick on the replicate volume
4.Confirm that the brick is replaced
5.Check the glustershd processes(Only 1 glustershd should be listed
and pid should be different)
6.glustershd server vol should be updated with new bricks
Change-Id: I09245c8ff6a2b31a038749643af294aa8b81a51a
Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>,
Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Testcase steps:
1.Create directory on mount point and write files/dirs
2.Create another set of files (1K files)
3.While creation of files/dirs is in progress, kill one brick
4.Remove the contents of the killed brick (simulating disk replacement)
5.While the IO is still in progress, restart glusterd on the nodes
where we simulated disk replacement to bring the bricks back online
6.Start volume heal
7.Wait for IO's to complete
8.Verify whether the files are self-healed
9.Calculate arequals of the mount point and all the bricks
CentOS-CI failure due to the following bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1807384
Change-Id: I9e9f58a16a7950fd7d6493cbb5c4f5483892851e
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Testcase steps:
1.Create a single brick volume
2.Add some files and directories
3.Get arequal from mountpoint
4.Add bricks such that the volume
becomes a 1x3 replica volume
5.Start heal full
6.Make sure heal is completed
7.Get arequals from all bricks and
compare with arequal from mountpoint
Change-Id: I4ef140b326b3d9edcbd5b1f0b7d9c43f38ccfe66
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Testcase steps:
1. Create a volume and mount it.
2. Create a directory on mount and check whether all the bricks have
the same gfid.
3.Now delete the gfid attr from all but one of the backend bricks,
4. Do lookup from the mount.
5. Check whether all the bricks have the same gfid assigned.
Failing in CentOS-CI due to the following bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1696075
Change-Id: I4eebc247b15c488cfa24599e0afec2fa5671656f
Co-authored-by: Anees Patel <anepatel@redhat.com>
Signed-off-by: Anees Patel <anepatel@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The sys library was added to all the testcases to fetch
`sys.version_info.major`, which gives the version
of python with which glusto and glusto-tests are installed,
and to run the I/O script, i.e. file_dir_ops.py, with that
version of python. But this creates a problem, as older jobs
running on older platforms won't run the way they used to:
if the older platform has python2 by default and
we run the tests from a slave which
has python3, they'll fail, and vice versa.
The problem is introduced by the code below:
```
cmd = ("/usr/bin/env python%d %s create_deep_dirs_with_files "
"--dirname-start-num 10 --dir-depth 1 --dir-length 1 "
"--max-num-of-dirs 1 --num-of-files 5 %s" % (
sys.version_info.major, self.script_upload_path,
self.mounts[0].mountpoint))
```
The solution to this problem is to change `python%d`
to `python`, which enables the code to run with
whatever version of python is available on that client.
This lets us run any version of the framework
on both the older and the latest platforms.
Change-Id: I7c8200a7578f03c482f0c6a91832b8c0fdb33e77
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
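
The fix is a small change in how the command string is built; a sketch with hypothetical script and mount paths:

```python
import sys

script = "/usr/share/glustolibs/io/scripts/file_dir_ops.py"  # hypothetical
mount = "/mnt/testvol"                                       # hypothetical

# Before: pins the client to the controller's python major version
old_cmd = "/usr/bin/env python%d %s create_deep_dirs_with_files %s" % (
    sys.version_info.major, script, mount)

# After: the client resolves whatever python it has on PATH
new_cmd = "/usr/bin/env python %s create_deep_dirs_with_files %s" % (
    script, mount)
```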
Added steps to reset volume and resolved teardown class
cleanup failures.
Change-Id: I06b0ed8810c9b064fd2ee7c0bfd261928d8c07db
Testcase 1: Test entry transaction crash consistency : create
- Create IO
- Calculate arequal before creating snapshot
- Create snapshot
- Modify the data
- Stop the volume
- Restore snapshot
- Start the volume
- Get arequal after restoring snapshot
- Compare arequals
Testcase 2: Test entry transaction crash consistency : delete
- Create IO of 50 files
- Delete 20 files
- Calculate arequal before creating snapshot
- Create snapshot
- Delete 20 files more
- Stop the volume
- Restore snapshot
- Start the volume
- Get arequal after restoring snapshot
- Compare arequals
Testcase 3: Test entry transaction crash consistency : rename
- Create IO of 50 files
- Rename 20 files
- Calculate arequal before creating snapshot
- Create snapshot
- Rename 20 files more
- Stop the volume
- Restore snapshot
- Start the volume
- Get arequal after restoring snapshot
- Compare arequals
Change-Id: I7cb9182f91ae50c47d5ae9b3f8031413b2bbfbbf
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
available space
Testcase:
- note the current available space on the mount
- create 1M file on the mount
- note the current available space on the mountpoint and compare
with space before creation
- remove the file
- note the current available space on the mountpoint and compare
with space before creation
Change-Id: Iff017039d1888d03f067ee2a9f26aff327bd4059
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
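
The "available space" readings in the steps above can be taken with `statvfs`; a minimal sketch:

```python
import os

def available_bytes(path):
    """Space available to unprivileged users on the filesystem backing
    `path` (matches the Available column of `df`)."""
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize

# Take a reading, create/remove the 1M file, then compare fresh readings.
before = available_bytes("/")
```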
Change-Id: Idcc40442869cb3e44873625887409592d9e0710d
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to the commit message of the below patch:
https://review.gluster.org/#/c/glusto-tests/+/23902/
Change-Id: I1df0324dac2da5aad4064cc72ef77dcb5bf67e4f
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to the commit message of the below patch:
https://review.gluster.org/#/c/glusto-tests/+/23902/
Change-Id: Icf32bb20b7eaf2eabb07b59be813997a28872565
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: Ic14be81f1cd42c470d2bb5c15505fc1bc168a393
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
Change-Id: Id4df838565ec3f9ad765cf223bb5115e43dac1c5
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
The dict method "iteritems()" is not supported in py3.
So, replace its usage with the similar method "items()".
Change-Id: I130b7f67f0a2d5da5ed6c3d792f5ff024ba148f4
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
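
The replacement is mechanical: `items()` exists on both versions (a list on py2, a view on py3) and iterates identically. With hypothetical volume options:

```python
volume_options = {"cluster.self-heal-daemon": "off",
                  "cluster.data-self-heal": "off"}

# py2-only: volume_options.iteritems() -> AttributeError on py3
pairs = sorted(volume_options.items())   # portable on py2 and py3
```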
Change-Id: I12b5586bdcef128df64fcd8a0ba80f193395f313
Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Signed-off-by: Milind Waykole <milindwaykole96@gmail.com>
Change-Id: I7f8769defd34d55d8eec720c40ed55e69523f917
Signed-off-by: Anees Patel <anepatel@redhat.com>
Signed-off-by: milindw96 <milindwaykole96@gmail.com>
Change-Id: Ifef2ffe022accf59edcbc949c505f47931b19fe4
Signed-off-by: Anees Patel <anepatel@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The test case 'test_client_side_quorum_with_fixed_validate_max_bricks'
does not have a tearDown part, so the volume options which have been
set inside the test case were not being reset to default.
The library function 'set_volume_options' was also being imported from
the wrong library. This fix corrects the import along with adding the
tearDown steps.
Change-Id: Ic57494e7a7e8a25303b7979f98cc2dfbc9a7d7b6
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
The test case 'test_client_side_quorum_with_fixed_for_cross3' does not
include the tearDown part where the volume options which have been set
inside the test case have to be reset to default.
This fix includes the necessary tearDown steps along with a few
cosmetic changes.
Change-Id: I86187cef4523492ec97707ff93d0eca365293008
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Change-Id: I0992b1b9af4e12f4e20d7a5dc184048de104d89d
Signed-off-by: Anees Patel <anepatel@redhat.com>
Change-Id: I7f7b5cfdee09067d8d96bfcf56ce8a3372ca9368
Signed-off-by: Anees Patel <anepatel@redhat.com>
Change-Id: I683e6ff47120b7db8ee6ae02ed83eba19e6ac4c9
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I2acf835a4cf7301c64c4c8a9423f78672cdf9aa4
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I364054c35f623893700798bedef965fe05f6aabf
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
test_client_side_quorum_with_auto_option_overwrite_fixed
Change-Id: I5e22228eaf8574f2ccb1ae38cb98ec01e6493fdf
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
If quorum is not met, reads/writes fail with
'transport endpoint is not connected'
Change-Id: I219c99fc5b96147c059174daf0383454e1bd2831
Change-Id: I0a2c0ba2e28fc23fe3ff2db57b4ba3c0f08993aa
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I156e80e958d9e4c7aeec3a97bbcb16e8bfa36f30
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Change-Id: I3d749c5d131973217d18fc1158236806645e4ab4
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
was not handled by teardown class
Change-Id: I789adbf0909c5edd0a2eb19ed4ccebcb654700fd
Signed-off-by: Anees Patel <anepatel@redhat.com>