| Commit message | Author | Age | Files | Lines |
Problem:
There are scenarios where multiple files are to be renamed
to hash to a particular subvol. The existing method returns
the same name every time, as the loop always starts from 1.
Fix:
Add an optional argument, existing_names, which contains
names already hashed to the subvol. An additional check
ensures the name found is not already in use.
Change-Id: I453ee290c8462322194cebb42c40e8fbc7c373ed
Signed-off-by: Pranav <prprakas@redhat.com>
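The skip-already-used-names logic described above can be sketched as follows; `find_unused_name` and its candidate naming scheme are illustrative stand-ins, not the library's actual API:

```python
def find_unused_name(hashes_to_subvol, existing_names=None):
    """Return the first candidate name that hashes to the target
    subvol and is not already present in existing_names.

    hashes_to_subvol: predicate saying whether a name lands on the
    desired subvol (hypothetical); existing_names: names already
    returned for that subvol by earlier calls.
    """
    existing_names = existing_names or []
    i = 1
    while True:
        name = "file%d" % i
        if hashes_to_subvol(name) and name not in existing_names:
            return name
        i += 1

# Without exclusions the loop always returns the first match;
# passing the previously returned names forces a fresh one.
print(find_unused_name(lambda n: True))             # file1
print(find_unused_name(lambda n: True, ["file1"]))  # file2
```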
The method takes mnode, volname and throttle-type as parameters.
It sets the rebal-throttle for the volume as per the mentioned
throttle-type.
Change-Id: I9eb14e39f87158c9ae7581636c2cad1333fd573c
Signed-off-by: sayaleeraut <saraut@redhat.com>
As SSL cannot be set after volume start op, moving
set_volume_option prior to volume start.
Change-Id: I14e1dc42deb0c0c28736f03e07cf25f3adb48349
Signed-off-by: Pranav <prprakas@redhat.com>
Finding the offline brick limit using ceil returns incorrect
value. E.g., For replica count 3, ceil(3/2) returns 2, and
the subsequent method uses this value to bring down 2 out of
3 available bricks, resulting in IO and many other failures.
Fix:
Change ceil to floor. Also change the '/' operator to '//'
for py2/3 compatibility
Change-Id: I3ee10647bb037a3efe95d1b04e0864cf61e2499e
Signed-off-by: Pranav <prprakas@redhat.com>
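A minimal illustration of the ceil-vs-floor issue: under Python 3's true division, `ceil(3 / 2)` is 2, and taking 2 of 3 replica bricks offline breaks quorum, while `//` (floor division, identical on Python 2 and 3) keeps a majority online:

```python
import math

replica_count = 3

# ceil(3 / 2) == 2 under true division: bringing down 2 of 3
# replica bricks would break quorum and fail IO.
buggy_limit = math.ceil(replica_count / 2)

# '//' is floor division on both Python 2 and Python 3, so at most
# a minority of bricks is ever brought offline.
safe_limit = replica_count // 2

print(buggy_limit)  # 2
print(safe_limit)   # 1
```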
This method helps in cases of rename scenarios where the new filename
has to be hashed to a specific subvol
Change-Id: Ia36ea8e3d279ddf130f3a8a940dbe1fcb1910974
Signed-off-by: Pranav <prprakas@redhat.com>
This method fetches all entries under a directory in a recursive fashion
Change-Id: I4fc066ccf7a3a4730d568f96d926e46dea7b20a1
Signed-off-by: nchilaka <nchilaka@redhat.com>
Problem:
Currently setup_volume() in volume_libs.py and the classmethod in
gluster_base_class.py both create a volume and start it. There are
tests where only volume_create is required, and if such a test has
to run on all volume types, the contributor has to re-implement all
the validations already present in setup_volume() and in the
setup_volume classmethod of gluster_base_class.
Solution:
Added a parameter "create_only" to the setup_volume() function. It
defaults to False, so unless specified, setup_volume() works as
before. Similarly, added a parameter "only_volume_create" to the
classmethod in gluster_base_class.py, which also defaults to False.
Note: Calling "setup_volume() -> volume_stop" is not the same as
just "volume_create()" in the actual test.
Change-Id: I76cde1b668b3afcac41dd882c2a376cb6fac88a3
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Changed the get_file_stat function to assign the correct key-value
pairs for atime, mtime and ctime respectively.
Previously, all timestamp keys were assigned the atime value.
Change-Id: I471ec341d1a213395a89f6c01315f3d0f2e976af
Signed-off-by: nchilaka <nchilaka@redhat.com>
Problem:
The command 'pkill pidof glusterd' is not right, as it does
not actually get the pid of glusterd.
eg:
cmd = "pkill pidof glusterd"
ret, out, err = g.run("10.20.30.40", cmd, "root")
>>> ret, out, err
(2, '', "pkill: only one pattern can be provided\n
Try `pkill --help' for more information.\n")
Here the command fails.
Solution:
Added backticks, `pidof glusterd`, so the shell substitutes the
proper glusterd pid, and the stale pid is killed after a failed
glusterd stop.
cmd = "pkill `pidof glusterd`"
ret, out, err = g.run("10.20.30.40", cmd, "root")
>>> ret, out, err
(1, '', '')
Note: The ret value is 1, as this was tried on a machine
where glusterd is running. The purpose of the fix is
to get the proper pid.
Change-Id: Iacba3712852b9d16546ced9a4c071c62182fe385
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Problem:
While performing scratch cleanup, observed posix health
checker warnings once glusterd is started, as shown below:
[2020-05-05 12:19:10.633623] M [MSGID: 113075]
[posix-helpers.c:2194:posix_health_check_thread_proc]
0-testvol_distributed-dispersed-posix: health-check
failed, going down
Solution:
In scratch cleanup, once glusterd is stopped
and the runtime socket file for the glusterd daemon is removed,
stale glusterfsd processes remain on a few of the
machines. Added a step to fetch any glusterfsd processes,
kill the stale ones using the kill_process method, and
continue with the existing procedure. Once glusterd is
started, no posix health checker warnings are seen.
Change-Id: Ib3e9492ec029b5c9efd1c07b4badc779375a66d6
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
The get_file_stat function doesn't have access time,
modified time, or change time for a file or
directory. Added respective parameters for getting
the values into the dictionary.
Changed the separator from ':' to '$'; the reason is
to overcome the tuple-unpacking error caused by
timestamps such as:
2020-04-02 19:27:45.962477021
If ':' is used as the separator, this hits a
"ValueError: too many values to unpack" error.
Used '$' as the separator, as it is not used in
filenames in glusto-tests and is not part of
the stat output.
Change-Id: I40b0c1fd08a5175d3730c1cf8478d5ad8df6e8dd
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
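The separator problem can be seen with plain `str.split`; the sample stat line below is illustrative, not the library's exact format string:

```python
# A stat line joined with ':' cannot be split reliably because the
# timestamp field itself contains ':' characters.
line_colon = "filetype:regular file:atime:2020-04-02 19:27:45.962477021"
parts = line_colon.split(":")
print(len(parts))  # 6 pieces instead of the expected 4

# With '$' as the separator each field stays intact, since '$'
# appears neither in the stat output nor in glusto-tests filenames.
line_dollar = "filetype$regular file$atime$2020-04-02 19:27:45.962477021"
key1, val1, key2, val2 = line_dollar.split("$")
print(val2)  # 2020-04-02 19:27:45.962477021
```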
Problem:
Ideally, operations done in ssl_ops.py should
be performed on a gluster cluster even before
peer probing the nodes. This makes the library
useless, as we can't run any library in glusto-tests
without peer probing.
Solution:
Enable SSL on the gluster cluster before parsing it
to run tests present in glusto-tests.
Change-Id: If803179c67d5b3271b70c1578269350444aa3cf6
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The command creation with a specific user had six substitutions,
but only five placeholders.
Change-Id: I2c9f63213f78e5cec9e5bd30cac8d75eb8dbd6ce
Signed-off-by: Pranav <prprakas@redhat.com>
Adding function create_link_file() to create
soft and hard links for an existing file.
Change-Id: I6be313ded1a640beb450425fbd29374df51fbfa3
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
This test verifies remove brick operations on disperse
volume.
Change-Id: If4be3ffc39a8b58e4296d58b288e3843a218c468
Co-authored-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Test Steps:
1. Create a volume, set the volume option
'diagnostics.client-log-level' to DEBUG, and mount the volume on
one client.
2. Create a directory
3. Validate the number of lookups for the directory creation from the
log file.
4. Perform a new lookup of the directory
5. No new lookups should have happened on the directory, validate from
the log file.
6. Bring down one subvol of the volume and repeat step 4, 5
7. Bring down one brick from the online bricks and repeat step 4, 5
8. Start the volume with force and wait for all process to be online.
Change-Id: I162766837fd7e61625238a669c4050c2ec9c8a8b
Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
Problem:
On the latest platforms, the pidof command returns
multiple pids, as shown below:
27190 27078 26854
This is because it was returning glusterd, glusterfsd
and glusterfs processes as well. The problem is that
/usr/sbin/glusterd is a link to glusterfsd.
pidof has a new feature where it searches for
the pattern in /proc/PID/cmdline, /proc/PID/stat and
finally /proc/PID/exe. Hence pidof matches the realpath
of /proc/<pid_of_glusterd>/exe as /usr/sbin/glusterfsd,
resulting in glusterd, glusterfs and glusterfsd pids
being returned in the output.
Fix:
Use pgrep instead of pidof to get glusterfsd
pids, and change the split logic accordingly.
Change-Id: I729e05c3f4cacf7bf826592da965a94a49bb6f33
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
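The "change the split logic accordingly" part follows from the output shapes of the two tools: pidof prints all pids space-separated on one line, while pgrep prints one pid per line. A small sketch (the pid values are made up):

```python
# pidof: all pids on a single space-separated line.
pidof_out = "27190 27078 26854\n"
# pgrep: one pid per line.
pgrep_out = "27190\n27078\n26854\n"

pids_from_pidof = pidof_out.split()       # split on whitespace
pids_from_pgrep = pgrep_out.splitlines()  # split on newlines

print(pids_from_pidof)  # ['27190', '27078', '26854']
print(pids_from_pgrep)  # ['27190', '27078', '26854']
```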
Problem:
On the latest platforms, the pidof command returns
multiple pids, as shown below:
27190 27078 26854
This is because it was returning glusterd, glusterfsd
and glusterfs processes as well. The problem is that
/usr/sbin/glusterd is a link to glusterfsd.
pidof has a new feature where it searches for
the pattern in /proc/PID/cmdline, /proc/PID/stat and
finally /proc/PID/exe. Hence pidof matches the realpath
of /proc/<pid_of_glusterd>/exe as /usr/sbin/glusterfsd,
resulting in glusterd, glusterfs and glusterfsd pids
being returned in the output.
Fix:
Use pgrep instead of pidof to get glusterfsd
pids, and change the split logic accordingly.
Change-Id: Ie215734387989f2d8cb19e4b4f7cddc73d2a5608
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Check if the xattrs of the given bricks are the same
Change-Id: Ib1ba010bfeafc132123a88a893017f870a989789
Signed-off-by: ubansal <ubansal@redhat.com>
Verifies whether the shd daemon is up and running on a particular node.
The method verifies whether the shd pid is present or not on the given
node. If present, as an additional verification, verifies that the
'self-heal daemon' for the node specified is not there in the get volume
status output
Change-Id: I4865dc5c493a72ed7334ea998d0a231f4f8c75c8
Signed-off-by: Pranav <prprakas@redhat.com>
Problem:
item.next() is not supported in Python 3.
Solution:
Add a try/except block to take care of both Python 2
and Python 3.
Change-Id: I4c88804e45eee2a2ace24a982447000027e6ca3c
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
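The try/except fix can be sketched as a small helper (`safe_next` is an illustrative name, not the library's): Python 3 dropped the `.next()` method, while the `next()` builtin works on both versions.

```python
def safe_next(iterator):
    """Advance an iterator on both Python 2 and Python 3.

    Python 2 iterators expose a .next() method; Python 3 removed it,
    so fall back to the next() builtin when it is missing.
    """
    try:
        return iterator.next()   # Python 2 style
    except AttributeError:
        return next(iterator)    # Python 3 (next() also works on 2)

it = iter([10, 20, 30])
print(safe_next(it))  # 10
print(safe_next(it))  # 20
```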
Problem:
The `replace_brick_from_volume()` function doesn't support
brick sharing, which creates a problem: when we
try to perform replace-brick with a large number
of volumes, it is unable to get bricks and hence fails.
Solution:
Added a boolean kwarg multi_vol, and use
form_bricks_for_multivol() or form_bricks_list()
according to the value of multi_vol. The default
value of multi_vol is False, which uses only
form_bricks_list(), as before the changes.
Blocks:
This patch currently blocks the below mentioned patch:
https://review.gluster.org/#/c/glusto-tests/+/19483/
Change-Id: I842a4ebea81e53e694b5b194294f1b941f47d380
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The method executes the 'gluster get-state' command on the specified node
and verifies the glusterd state dump, reads it and returns the content
as a dictionary
Change-Id: I0356ccf740fd97d1930e9f09d6111304b14cd015
Signed-off-by: Pranav <prprakas@redhat.com>
Problem:
`g.rpyc_get_connection()` has a limitation where it can't
convert python2 calls to python3 calls. Due to this, a large
number of testcases fail when executed from a python2 machine
on a python3-only setup, or vice versa, with the below stack trace:
```
E ========= Remote Traceback (1) =========
E Traceback (most recent call last):
E File "/root/tmp.tL8Eqx7d8l/rpyc/core/protocol.py", line 323, in _dispatch_request
E res = self._HANDLERS[handler](self, *args)
E File "/root/tmp.tL8Eqx7d8l/rpyc/core/protocol.py", line 591, in _handle_inspect
E if hasattr(self._local_objects[id_pack], '____conn__'):
E File "/root/tmp.tL8Eqx7d8l/rpyc/lib/colls.py", line 110, in __getitem__
E return self._dict[key][0]
E KeyError: (b'rpyc.core.service.SlaveService', 94282642994712, 140067150858560)
```
Solution:
The solution here is to modify the code to not use
`g.rpyc_get_connection()`. The following changes are done
to accomplish it:
1) Remove code which uses g.rpyc_get_connection() and use generic
logic in functions:
a. do_bricks_exist_in_shd_volfile()
b. get_disk_usage()
c. mount_volume()
d. list_files()
e. append_string_to_file()
2) Create files which can be uploaded and executed on
clients/servers to avoid rpc calls in functions:
a. calculate_hash()
b. validate_files_in_dir()
3) Modify setup.py to push the below files to
`/usr/share/glustolibs/scripts/`:
a. compute_hash.py
b. walk_dir.py
Change-Id: I00a81a88382bf3f8b366753eebdb2999260788ca
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Add docleanup and docleanupclass in the baseclass,
which will call the function fresh_setup_cleanup to
clean the nodes back to a fresh setup when it is
set to true, or whenever the testcase fails.
Change-Id: I951ff59cc3959ede5580348b7f93b57683880a23
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Testcase steps: (file access)
- rename the file so that the hashed and cached are different
- make sure file can be accessed as long as cached is up
Fixes a library issue as well in find_new_hashed()
Change-Id: Id81264848d6470b9fe477b50290f5ecf917ceda3
Co-authored-by: Susant Palai <spalai@redhat.com>
Signed-off-by: Susant Palai <spalai@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Case 1:
1. mkdir srcdir and dstdir (such that srcdir and
dstdir hash to different subvols)
2. Bring down srcdir hashed subvol
3. mv srcdir dstdir (should fail)
Case 2:
1. mkdir srcdir dstdir
2. Bring down srcdir hashed
3. Bring down dstdir hashed
4. mv srcdir dstdir (should fail)
Case 3:
1. mkdir srcdir dstdir
2. Bring down dstdir hashed subvol
3. mv srcdir dstdir (should fail)
Additional library fix details:
Also fixing library function to work with distributed-disperse volume
by removing `if oldhashed._host != brickdir._host:` as the same node
can host multiple bricks of the same volume.
Change-Id: Iaa472d1eb304b547bdec7a8e6b62c1df1a0ce591
Co-authored-by: Susant Palai <spalai@redhat.com>
Signed-off-by: Susant Palai <spalai@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Earlier, brick creation was carried out based on the difference of used
and unused bricks. This is a bottleneck for implementing brick
multiplexing testcases. Moreover, we can't create more than 10 volumes.
With this library, implementing a way to create bricks on top of the
existing servers in a cyclic way, to have an equal number of bricks on
each brick partition on each server.
Added a parameter in the setup_volume function: if the multi_vol flag
is set, it will fetch bricks in a cyclic manner using
form_bricks_for_multi_vol(); otherwise it will fetch them using the
old mechanism.
Added a bulk_volume_creation function to create the multiple volumes
the user has specified.
Change-Id: I2103ec6ce2be4e091e0a96b18220d5e3502284a0
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
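The cyclic selection described above can be sketched with `itertools.cycle`; the function name, brick naming scheme, and layout here are illustrative, not the library's actual API:

```python
from itertools import cycle

def form_bricks_cyclically(servers, brick_roots, num_bricks, volname):
    """Pick bricks round-robin across servers so each brick
    partition ends up with roughly the same number of bricks.

    servers: ordered server list; brick_roots: per-server brick
    partition path (hypothetical shapes for this sketch).
    """
    server_cycle = cycle(servers)
    bricks = []
    for i in range(num_bricks):
        server = next(server_cycle)
        bricks.append("%s:%s/%s_brick%d"
                      % (server, brick_roots[server], volname, i))
    return bricks

bricks = form_bricks_cyclically(
    ["server1", "server2"],
    {"server1": "/bricks/b1", "server2": "/bricks/b1"},
    4, "testvol")
print(bricks)
# ['server1:/bricks/b1/testvol_brick0', 'server2:/bricks/b1/testvol_brick1',
#  'server1:/bricks/b1/testvol_brick2', 'server2:/bricks/b1/testvol_brick3']
```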
Problem:
Due to patch [1], the framework broke and
was failing for all the testcases with the below
backtrace:
```
> mount_dict['server'] = cls.snode
E AttributeError: type object 'VolumeAccessibilityTests_cplex_replicated_glusterf'
has no attribute 'snode'
```
Solution:
This was because mnode_slave was accidentally written as snode. And
cls.geo_rep_info wasn't a safe condition operator, hence changed it
to cls.slaves.
Testcase results with patch:
test_cvt.py::TestGlusterHealSanity_cplex_replicated_glusterfs::test_self_heal_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed-dispersed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterHealSanity_cplex_dispersed_glusterfs::test_self_heal_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed_nfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_replicated_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_distributed-dispersed_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestGlusterReplaceBrickSanity_cplex_distributed-replicated_glusterfs::test_replace_brick_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_distributed_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_dispersed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed-dispersed_glusterfs::test_shrinking_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_dispersed_nfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed_glusterfs::test_shrinking_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_dispersed_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_distributed-replicated_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_dispersed_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_distributed-replicated_glusterfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_replicated_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed-dispersed_nfs::test_shrinking_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed_nfs::test_shrinking_volume_when_io_in_progress PASSED
links:
[1] https://review.gluster.org/#/c/glusto-tests/+/24029/
Change-Id: If7b329e232ab61df9f9d38f5491c58693336dd48
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Adding the code for the following:
1. Adding function setup_master_and_slave_volumes() to geo_rep_libs.
2. Adding variables for master_mounts, slave_mounts, master_volume
and slave_volume to gluster_base_class.py.
3. Adding the classmethod
setup_and_mount_geo_rep_master_and_slave_volumes to
gluster_base_class.py.
Change-Id: Ic8ae1cb1c8b5719d4774996c3e9e978551414b44
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The new function volume_type() will check if the volume under test
is of pure Replicated/Disperse/Arbiter type and return the result
as a string.
The functions run_layout_tests() and validate_files_in_dir() have
been modified to check the Gluster version and volume type in order
to fix the issues caused by DHT pass-through.
Change-Id: Ie7ad259883907c1fdc0b54e6743636fdab793272
Signed-off-by: sayaleeraut <saraut@redhat.com>
The issue earlier was that whenever a TC called the _get_layout()
and _is_complete() methods, it failed on Replicate/Arbiter/Disperse
volume types because of DHT pass-through.
The functions get_layout() and is_complete() have been modified to
check the Gluster version and volume type before running, in
order to fix the issue.
About DHT pass-through : Please refer to-
https://github.com/gluster/glusterfs/issues/405
for the details.
Change-Id: I0b0dc0ac3cbdef070a20854fbc89442fee1da8b6
Signed-off-by: sayaleeraut <saraut@redhat.com>
Instead of calling g.log.error, we were calling g.log.err.
Due to this, instead of throwing the right error message (say,
when doing volume cleanup), it was throwing an ambiguous traceback.
Change-Id: I39887ce08756eaf29df2d99f73cc7795a4d2c065
Earlier, the method get_volume_type() passed when run in the
interpreter. But when the method was called in a TC, it failed at the
condition (Line: 2235) because:
e.g.
The value of brickdir_path is "dhcp47-3.lab.eng.blr.redhat.com:/bricks/
brick2/testvol_replicated_brick2/" and it tries to find that value
in the list ['10.70.46.172:/bricks/brick2/testvol_replicated_brick0',
'10.70.46.195:/bricks/brick2/testvol_replicated_brick1',
'10.70.47.3:/bricks/brick2/testvol_replicated_brick2'] returned by
get_all_bricks(), which will fail.
Now, with the fix, it runs successfully, as it checks whether, for host
dhcp47-3.lab.eng.blr.redhat.com, the brick
/bricks/brick2/testvol_replicated_brick2 is present in the list
brick_paths[], which consists of only the paths and not the IP
addresses of the bricks present on that host.
Change-Id: Ie595faba1e92c559293ddd04f46b85065b23dfc5
Signed-off-by: sayaleeraut <saraut@redhat.com>
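The path-only comparison described above can be sketched as follows; `brick_on_host` and the exact splitting are illustrative, not the library's real implementation:

```python
def brick_on_host(brickdir_path, bricks_list):
    """Check whether a brick path exists in a brick list, comparing
    only the path component so a hostname-vs-IP mismatch between
    the two sides does not break the lookup."""
    # "host:/path" -> keep just "/path"; split on the first ':' only.
    _, path = brickdir_path.split(":", 1)
    brick_paths = [b.split(":", 1)[1].rstrip("/") for b in bricks_list]
    return path.rstrip("/") in brick_paths

bricks = ["10.70.46.172:/bricks/brick2/testvol_replicated_brick0",
          "10.70.47.3:/bricks/brick2/testvol_replicated_brick2"]
print(brick_on_host(
    "dhcp47-3.lab.eng.blr.redhat.com:/bricks/brick2/testvol_replicated_brick2/",
    bricks))  # True, even though the list uses the IP, not the hostname
```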
Add steps to include bring offline bricks to online and
volume reset in case of failure scenarios
Change-Id: I9bdadd8a80ded81cf7cb4e324a18321400bfcc4c
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Earlier, the elements passed in the list for the volume type check
were ('replicate', 'disperse', 'arbiter'), but the volume type
returned by get_volume_type() is in the format 'Replicate',
'Disperse', 'Arbiter', and since string comparison is case
sensitive, these changes make sure the check does not fail.
Change-Id: Ic73ca946cd9c06bfa5b92605dbeba74d6ffa83d9
Signed-off-by: sayaleeraut <saraut@redhat.com>
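The case-sensitivity pitfall is easy to reproduce: membership tests on strings do not ignore case, so either the list must carry the exact capitalised names or one side must be normalised.

```python
vol_type = "Replicate"  # what get_volume_type() returns

# Lowercase list silently fails for a capitalised volume type.
print(vol_type in ("replicate", "disperse", "arbiter"))          # False

# Either match the exact capitalisation ...
print(vol_type in ("Replicate", "Disperse", "Arbiter"))          # True
# ... or normalise before comparing.
print(vol_type.lower() in ("replicate", "disperse", "arbiter"))  # True
```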
The function get_volume_type() will return the type of volume (as
distributed/replicate/disperse/arbiter/distributed-replicated/
distributed-dispersed/distributed-arbiter) under test.
Change-Id: Ib23ae1ad18ef65d0520fe041a5f80211030a034b
Signed-off-by: sayaleeraut <saraut@redhat.com>
The DHT pass-through functionality was introduced in Gluster
6, due to which the TCs were failing for Replicate, Disperse and
Arbiter volume types whenever the function to get hashrange was
called.
With this fix, first the Gluster version and later the volume
type will be checked before calling the function to get the
hashrange. If the Gluster version is greater than or equal to
6, the layout will not be checked for the pure AFR/Arbiter/EC
volumes.
About DHT pass-through option : The distribute xlator now skips
unnecessary checks and operations when the distribute count is one
for a volume, resulting in improved performance. Comes into play
when there is only 1 brick or it is a pure replicate or pure
disperse or pure arbiter volume.
Change-Id: I55634f495a54e3c9909b1e1c716990b9ee9834a3
Signed-off-by: sayaleeraut <saraut@redhat.com>
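The version-and-type gate described above can be sketched as a pure predicate (`should_check_layout` is an illustrative name; the real library wires this check into its hashrange helpers):

```python
def should_check_layout(gluster_version, volume_type):
    """Decide whether a layout/hashrange check makes sense.

    With DHT pass-through (Gluster >= 6), a pure AFR/Arbiter/EC
    volume has a distribute count of one and no layout to verify,
    so the check is skipped.
    """
    pure_types = ("Replicate", "Arbiter", "Disperse")
    if gluster_version >= 6 and volume_type in pure_types:
        return False
    return True

print(should_check_layout(7.0, "Replicate"))   # False: skip the check
print(should_check_layout(7.0, "Distribute"))  # True: layout exists
print(should_check_layout(5.0, "Replicate"))   # True: pre pass-through
```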
Adding function collect_bricks_arequal() to lib_utils.py
to collect arequal-checksum on all the bricks of all the
nodes used to create a volume using the below command:
```
arequal-checksum -p <BrickPath> -i .glusterfs -i .landfill -i .trashcan
```
Usage:
```
>>> all_bricks = get_all_bricks(self.mnode, self.volname)
>>> ret, arequal = collect_bricks_arequal(all_bricks)
>>> ret
True
```
Change-Id: Id42615469be18d84e5691c982369634c436ed0cf
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The existing method does not have an option to use the volume name
to get the list of snapshots. Without the option to provide the
volume name, it is not possible to get the snapshots specific to a
volume.
With this fix, a kwarg 'volname' is added to the
method 'get_snap_list', with which the snapshots of a particular
volume can be listed. This option is necessary in some test cases
where the user needs the list of snapshots specific to a
particular volume.
This fix also corrects a small typo in the description of
a method.
Change-Id: Ib0aeaf417a37142ebe36847e27bcd60683f325e7
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Added a library to clean up the lv created after a snapshot clone,
and made the corresponding modifications to the cleanup.
Change-Id: I71a437bf99eac1170895510475ddb30748771672
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
The method wait_for_peers_to_connect checks whether its arg
'servers' is a string. It should rather check whether the arg is
a list and make the changes accordingly, if required.
Changed the arg 'servers' validation type from 'str' to 'list'.
Change-Id: I74ddb489cd286c1f2531af478f8811759173f01e
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
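The str-vs-list normalisation pattern behind this fix, as a minimal sketch (`normalize_servers` is an illustrative helper, not the library function itself):

```python
def normalize_servers(servers):
    """Accept either a single server string or a list of servers
    and always return a list, so downstream code can iterate
    uniformly."""
    if not isinstance(servers, list):
        return [servers]
    return servers

print(normalize_servers("10.70.46.172"))
# ['10.70.46.172']
print(normalize_servers(["10.70.46.172", "10.70.46.195"]))
# ['10.70.46.172', '10.70.46.195']  (passed through unchanged)
```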
The function will return the gluster version in float for the host
it is called on. It accepts the host IP as a parameter.
Change-Id: Icf48cf41031f0fa06cf3864e9215c5a960bb7c64
Signed-off-by: sayaleeraut <saraut@redhat.com>
Moving waiters from testcases and adding them as functions in the
gluster_init and peer_ops library files.
Change-Id: I5ab1e42a5a0366fadb399789da1c156d8d96ec18
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
This addition is to aid the test cases which validate the existence
of snapshots under the '.snaps' directory. The existing method i.e.
'uss_list_snap' does not help in getting the list of snapshots
and instead just recursively performs 'ls' under the .snaps
directory.
This method will return a list of directory names present under
the '.snaps' directory.
Change-Id: I808131788df975ca243ac5721713492422af0ab8
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Adding function is_broken_symlinks_present_on_bricks()
to brick_libs for checking if backend bricks have
broken symlinks or not.
Function added based on reviews on patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/20460/8/tests/functional/bvt/test_verify_volume_sanity.py
Change-Id: I1b512702ab6bc629bcd967ff34ad7ecfddfc1af1
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Fixing keyerror introduced in the patch:
https://review.gluster.org/#/c/glusto-tests/+/23967/
Change-Id: Ib44678fe46df5090b1586b09b47d3046c4dc6f9b
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Adding params in gluster_base_class.py to store the value
of the root user's password and the non-root user's name, group
and password. sync_method will be mentioned inside the
testcase, as there are testcases which are specific to only
one of the sync types, and it doesn't make sense to add
it to the config file. Instead we'll write testcases as shown
below:
```
class georeptestcase(baseclass):
run_chmod_with_changelog_crawl(sync_type):
# Steps of the testcase
test_chmod_with_changelog_crawl_rsync():
# Calling function with rsync as sync_type.
run_chmod_with_changelog_crawl('rsync')
test_chmod_with_changelog_crawl_tarssh():
# Calling function with tarssh as sync_type.
run_chmod_with_changelog_crawl('tarssh')
```
Change-Id: Ie65f542e76bfbee89ac2914bdcd086e1bd08dfdb
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The library shared_storage_ops.py contains two functions which are
similar and do not each serve a unique purpose. These functions
are "is_shared_volume_mounted" and "is_shared_volume_unmounted".
Here, the function "is_shared_volume_unmounted" needs to be removed,
because any test case can be validated using an assertion on the
function "is_shared_volume_mounted".
The function "disable_shared_storage" had an incorrect description.
This description has been changed with the fix.
There are minor cosmetic changes as well, which have been made to
keep the code lightweight.
Change-Id: I796831a95c205fef49a841eb14f5a15079f9a6b0
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Checking the status of snapshot information
before and after restarting glusterd.
Test case:
1. Create volume
2. Create two snapshots with descriptions
3. Check snapshot status information with snapname,
volume name, and without snap name/volname.
4. Restart glusterd on all nodes
5. Follow step 3 again and validate the snapshots
History of the patch:
The testcase was failing in CentOS CI due to bug [1]; however, this
bug was deferred for a fix. After updating the code with the
workaround mentioned below, it was observed that glusterd restart
was failing due to bug [2]. Now that both bugs are fixed, this
patch is safe to merge.
Workaround:
The only possible workaround for this is to
modify the function get_snap_info_by_volname()
to not use the --xml option and directly run the command, which
will dump the output as a string that can be directly
used in the testcase. This modification in the library function
will not impact any other testcase.
Links:
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1590797
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1782200
Change-Id: I26ac0aaa5f6c8849fd9de41f506d6d13fc55e166
Co-authored-by: srivickynesh <sselvan@redhat.com>
Signed-off-by: srivickynesh <sselvan@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>