Problem:
The bring_bricks_online() method directly
invokes `service glusterd restart` at
line 212. However, due to the change in
glusterd's behaviour introduced by patches
[1] and [2], if glusterd is restarted more
than 6 times in an hour it goes into a
failed state, which has to be reset using
the `systemctl reset-failed` command. This
causes random failures as shown below:
```
2020-08-25 18:00:51,600 INFO (run) root@rhs-vm24.blr.com (cp): service glusterd restart
2020-08-25 18:00:51,601 DEBUG (_get_ssh_connection) Retrieved connection from cache: root@rhs-vm24.blr.com
2020-08-25 18:00:51,830 INFO (_log_results) RETCODE (root@rhs-vm24.blr.com): 1
2020-08-25 18:00:51,830 INFO (_log_results) STDERR (root@rhs-vm24.blr.com)...
Redirecting to /bin/systemctl restart glusterd.service
Job for glusterd.service failed.
See "systemctl status glusterd.service" and "journalctl -xe" for details.
2020-08-25 18:00:51,830 ERROR (bring_bricks_online) Unable to restart glusterd on node rhs-vm24.blr.com
```
Fix:
Change the code to use restart_glusterd()
from gluster_init.
Links:
[1] https://review.gluster.org/#/c/glusterfs/+/23751/
[2] https://review.gluster.org/#/c/glusterfs/+/23970/
Change-Id: Ibe44463ac1d444f3d2155c9ae11680c9ffd8dab9
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
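A minimal sketch of the fix described above: restart_glusterd() is the gluster_init helper named in the commit, while the wrapper function, its argument, and the return handling around it are illustrative assumptions rather than the exact patch.
```
# Sketch of the fix, not the exact patch: let the library helper restart
# glusterd instead of shelling out to `service glusterd restart`.
from glusto.core import Glusto as g
from glustolibs.gluster.gluster_init import restart_glusterd

def _restart_glusterd(node):
    """Restart glusterd on `node` via the library helper (illustrative wrapper)."""
    if not restart_glusterd(node):
        g.log.error("Unable to restart glusterd on node %s", node)
        return False
    return True
```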
Tier libraries are not used across test cases, and due to the
tier checks in brick_libs.py and volume_libs.py, the performance
of regular test cases (time taken for execution) is degraded.
Another reason to remove the Tier libraries across glusto-tests
is that the functionality is deprecated.
Change-Id: Ie56955800515b2ff5bb3b55debaad0fd88b5ab5e
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Finding the offline brick limit using ceil returns an incorrect
value. For example, for replica count 3, ceil(3/2) returns 2, and
the subsequent method uses this value to bring down 2 out of the
3 available bricks, resulting in IO and many other failures.
Fix:
Change ceil to floor. Also change the '/' operator to '//'
for py2/3 compatibility.
Change-Id: I3ee10647bb037a3efe95d1b04e0864cf61e2499e
Signed-off-by: Pranav <prprakas@redhat.com>
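A quick arithmetic check of the values involved (plain Python, not the library code itself):
```
import math

replica_count = 3

# Old logic: ceil(3 / 2) == 2 on Python 3, so 2 of the 3 replica bricks
# were brought down, which breaks quorum and fails IO.
offline_limit_old = math.ceil(replica_count / 2)

# Fixed logic: floor with integer division gives 1 on both Python 2 and
# Python 3, keeping a majority of bricks online.
offline_limit_new = replica_count // 2

print(offline_limit_old, offline_limit_new)   # 2 1 (on Python 3)
```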
Check if the xattrs of the given bricks are the same.
Change-Id: Ib1ba010bfeafc132123a88a893017f870a989789
Signed-off-by: ubansal <ubansal@redhat.com>
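A minimal sketch of such a comparison, assuming bricks are given as "node:brick_path" strings and using getfattr over g.run; the helper name and return contract are illustrative, not the library function added by this change.
```
from glusto.core import Glusto as g

def xattrs_match_on_bricks(bricks, rel_path):
    """Return True if all given bricks report identical xattrs for rel_path.

    Illustrative only: `bricks` is assumed to be a list of "node:brick_dir"
    strings; only the getfattr invocation mirrors the usual way gluster
    xattrs are inspected on the backend.
    """
    seen = set()
    for brick in bricks:
        node, brick_dir = brick.split(":", 1)
        cmd = "getfattr -d -m . -e hex %s/%s" % (brick_dir, rel_path)
        ret, out, _ = g.run(node, cmd)
        if ret != 0:
            return False
        # Drop the "# file: ..." header so only the attribute lines compare.
        attrs = "\n".join(line for line in out.splitlines()
                          if line and not line.startswith("#"))
        seen.add(attrs)
    return len(seen) == 1
```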
Adding the function is_broken_symlinks_present_on_bricks()
to brick_libs to check whether the backend bricks have
broken symlinks.
The function is added based on reviews on patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/20460/8/tests/functional/bvt/test_verify_volume_sanity.py
Change-Id: I1b512702ab6bc629bcd967ff34ad7ecfddfc1af1
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
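One common way to detect broken symlinks on a brick is `find <brick_dir> -xtype l`; the sketch below is an assumption about how such a check could be driven over g.run, not the actual body of is_broken_symlinks_present_on_bricks().
```
from glusto.core import Glusto as g

def _has_broken_symlinks(node, brick_dir):
    """Return True if `brick_dir` on `node` contains broken symlinks.

    Illustrative only: `find -xtype l` lists symlinks whose target does
    not exist; the real helper lives in brick_libs.
    """
    ret, out, _ = g.run(node, "find %s -xtype l" % brick_dir)
    return ret == 0 and out.strip() != ""
```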
Use the 'list' object type in comparisons instead of 'str',
because 'str' is treated differently in py2 and py3.
Example:
In py2, isinstance(u'foo', str) is False.
In py3, isinstance(u'foo', str) is True.
Change-Id: I7663d42494bf59d74550ff4897379d35cc357db4
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
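A small illustration of the py2/py3 difference and the list-based check that avoids it; the function and variable names are illustrative, not the patched code.
```
def normalize_volumes(volumes):
    """Accept a single volume name or a list of names (illustrative helper).

    On Python 2 a name may arrive as unicode, so isinstance(name, str) is
    False there while True on Python 3. Checking against list, which both
    interpreters treat the same way, keeps the behaviour identical.
    """
    if not isinstance(volumes, list):
        volumes = [volumes]
    return volumes
```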
get_online_bricks_list() used to fail with a KeyError
exception in node-down scenarios. Adding code to catch
the exception and build the brick list from the bricks
reported by `gluster v status`.
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: Ia26208a52e4197050421bc34b9b8cdaf74ac4da6
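A rough sketch of the defensive pattern the commit describes; the dict layout, helper name, and arguments are assumptions about how the parsed `gluster v status` output is keyed, not the exact diff.
```
from glusto.core import Glusto as g

def _online_bricks_from_status(volume_status, volname, all_bricks):
    """Collect online bricks, tolerating nodes missing from the status dict.

    Illustrative only: when a node is down its entry may be missing from
    the parsed status output, so a plain dict lookup raises KeyError; the
    try/except keeps get_online_bricks_list() from failing outright.
    """
    online_bricks = []
    for brick in all_bricks:                      # "node:/brick/path" entries
        node, brick_path = brick.split(":", 1)
        try:
            status = volume_status[volname][node][brick_path]["status"]
        except KeyError:
            g.log.error("Unable to get the status of brick %s", brick)
            continue                              # treat the brick as offline
        if int(status) == 1:
            online_bricks.append(brick)
    return online_bricks
```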
Change-Id: I46fc2feffe6443af6913785d67bf310838532421
Change-Id: I487822efd6c57b8c51b65b4eaf149eb67e96731b
enabled
Change-Id: I0ae0d45050501154e9cd69c0d3dc9e915ae21b3a
Signed-off-by: Akarsha <akrai@redhat.com>
mux is enabled
Change-Id: Ibf77bb30f5ead4b208337d0a9f2d5b42a6875ded
Signed-off-by: Akarsha <akrai@redhat.com>
Change-Id: Id0f72542702732cdecb21a2e0fa07a64ca8891c4
1. Waiting for all bricks to be online
2. Waiting for all self-heal-daemons to be online
3. Waiting for all volume processes to be online
Change-Id: I01a8711838227eb167e69710ecbd3abd0fecb9e6
Signed-off-by: ShwethaHP <spandura@redhat.com>
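The three waits above follow the same poll-until-timeout pattern; a generic sketch of that pattern is shown here for reference (the real helpers in glusto-tests take volume and server arguments instead).
```
import time

def wait_until(check, timeout=300, interval=10):
    """Poll `check()` until it returns True or the timeout expires.

    Illustrative only: shows the wait pattern used for bricks, self-heal
    daemons and volume processes, not the library helpers themselves.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```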
after bringing them online.
2) log all the xml output/error to DEBUG log level.
Change-Id: If6bb758ac728f299292def9d72c0ef166a1569ae
Signed-off-by: ShwethaHP <spandura@redhat.com>
1) self-heal failures: With the recent changes made to gluster for the
bug https://bugzilla.redhat.com/show_bug.cgi?id=1480423, the location
of the brick process pid files changed to /var/run/gluster.
Make the corresponding changes to the glusto-tests libraries:
move away from reading the pid file and instead grep for the
process by brick name.
This fixes the issue.
2) Group options not being set properly: Since we were popping the
'group' option from the 'options' dictionary (after the group options
had been set) in order to set the other volume options, the option was
also getting removed from g.config['gluster']['smb_volume_options'].
Hence perform a deep copy of the dict before modifying it.
Change-Id: I293bf81913857cb0327f30aa1db5aaa9be5a318e
Signed-off-by: ShwethaHP <spandura@redhat.com>
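A minimal sketch of the aliasing problem and the deepcopy fix described in point 2; the dict contents are illustrative, not the actual config values.
```
import copy

# Stand-in for g.config['gluster']['smb_volume_options']; contents illustrative.
config_options = {'group': 'samba', 'performance.readdir-ahead': 'on'}

# Buggy pattern: `options` is the same dict object, so popping 'group'
# here also removes it from the shared config for every later test.
options = config_options
options.pop('group')          # config_options no longer has 'group'!

# Fixed pattern: deep-copy before mutating, leaving the config untouched.
config_options = {'group': 'samba', 'performance.readdir-ahead': 'on'}
options = copy.deepcopy(config_options)
options.pop('group')
assert 'group' in config_options
```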
Change-Id: I5ea8b6c0b58fd19c31fc96cc567c53000cd3841b
Signed-off-by: ShwethaHP <spandura@redhat.com>
1) Test heal with replace-brick while IO is in progress.
2) Test heal when bricks go offline and come back online while IO is in progress.
Change-Id: Id9002c465aec8617217a12fa36846cdc1f61d7a4
Signed-off-by: Shwetha Panduranga <spandura@redhat.com>
Signed-off-by: ShwethaHP <spandura@redhat.com>
- expanding the volume, i.e. test that add-brick is successful on the volume.
Change-Id: I8110eea97cf46e3ccc24156d6c67cae0cbf5a7c1
Signed-off-by: Shwetha Panduranga <spandura@redhat.com>
Change-Id: Ibdd092118d3bb912716c46fd278ef3c680a6e742
Signed-off-by: Nigel Babu <nigelb@redhat.com>
class, heal related helpers, samba helpers, and windows ops helpers
Change-Id: I0ad8fc7548c88e89d2ba6441166b9a38af76cea0
Signed-off-by: Shwetha Panduranga <spandura@redhat.com>