| Commit message | Author | Age | Files | Lines |
| |
The get_file_stat function didn't return the access
time, modified time and change time for a file or
directory. Added the respective parameters for get-
ting the values into the dictionary.
Changed the separator from ':' to '$' to avoid a
tuple-unpacking error: timestamps in the stat output,
e.g.
2020-04-02 19:27:45.962477021
contain ':', so using ':' as the separator hits
"ValueError: too many values to unpack".
Used '$' as the separator, as it is neither used in
filenames in the glusto-tests nor part of the stat
output.
Change-Id: I40b0c1fd08a5175d3730c1cf8478d5ad8df6e8dd
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
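The separator problem can be sketched as follows (the field layout here is illustrative, not the library's exact record format):

```python
# Why ':' breaks tuple unpacking: stat timestamps contain ':' themselves,
# so a ':'-separated record has more fields than expected.
record_colon = "root:2020-04-02 19:27:45.962477021"
try:
    user, atime = record_colon.split(":")
except ValueError as err:
    unpack_error = str(err)   # "too many values to unpack ..."

# '$' appears neither in glusto-tests filenames nor in stat output,
# so the split is unambiguous:
record_dollar = "root$2020-04-02 19:27:45.962477021"
user, atime = record_dollar.split("$")
```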
|
| |
Problem:
Ideally, operations in ssl_ops.py should be
performed on a Gluster cluster even before
peer probing the nodes. This made the library
unusable, as no library in glusto-tests can run
without peer probing.
Solution:
Enable SSL on the Gluster cluster before passing it
on to run the tests present in glusto-tests.
Change-Id: If803179c67d5b3271b70c1578269350444aa3cf6
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
| |
The command creation with a specific user performed six
substitutions, but the format string had only five placeholders.
Change-Id: I2c9f63213f78e5cec9e5bd30cac8d75eb8dbd6ce
Signed-off-by: Pranav <prprakas@redhat.com>
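The mismatch fails at format time, as this sketch shows (the command template below is illustrative, not the library's exact string):

```python
# Six arguments against five %-placeholders raises TypeError.
template = "su -l %s -c 'cd %s; %s %s %s'"   # five placeholders
args_six = ("user", "/tmp", "env", "python", "script.py", "extra")
try:
    template % args_six
except TypeError as err:
    msg = str(err)   # "not all arguments converted during string formatting"

cmd = template % args_six[:5]   # five arguments format cleanly
```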
|
| |
Adding function create_link_file() to create
soft and hard links for an existing file.
Change-Id: I6be313ded1a640beb450425fbd29374df51fbfa3
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
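A minimal local sketch of the same idea (the real library helper also takes the node to run on; this analogue only covers the link creation itself):

```python
import os

def create_link_file(src, dst, soft=False):
    """Create a hard link (default) or a soft link dst pointing at src.
    Returns True on success, False on failure."""
    try:
        if soft:
            os.symlink(src, dst)
        else:
            os.link(src, dst)
        return True
    except OSError:
        return False
```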
|
| |
This test verifies remove brick operations on a disperse
volume.
Change-Id: If4be3ffc39a8b58e4296d58b288e3843a218c468
Co-authored-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
|
| |
Test Steps:
1. Create a volume, set the volume option
'diagnostics.client-log-level' to DEBUG and mount the volume on one
client.
2. Create a directory
3. Validate the number of lookups for the directory creation from the
log file.
4. Perform a new lookup of the directory
5. No new lookups should have happened on the directory; validate from
the log file.
6. Bring down one subvol of the volume and repeat steps 4 and 5
7. Bring down one brick from the online bricks and repeat steps 4 and 5
8. Start the volume with force and wait for all processes to be online.
Change-Id: I162766837fd7e61625238a669c4050c2ec9c8a8b
Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
|
| |
Problem:
On latest platforms the pidof command returns
multiple pids as shown below:
27190 27078 26854
This is because it was returning the glusterd,
glusterfsd and glusterfs processes as well. The
problem is that /usr/sbin/glusterd is a link to
glusterfsd. 'pidof' now searches for the pattern
in /proc/PID/cmdline, /proc/PID/stat and finally
/proc/PID/exe. Hence pidof matches the realpath
of /proc/<pid_of_glusterd>/exe as
/usr/sbin/glusterfsd, and the glusterd, glusterfs
and glusterfsd pids are all returned in the output.
Fix:
Use pgrep instead of pidof to get the glusterfsd
pids, and change the split logic accordingly.
Change-Id: I729e05c3f4cacf7bf826592da965a94a49bb6f33
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
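The split-logic change follows from the output shapes of the two commands, sketched here with canned output strings:

```python
# pidof prints every pid on one space-separated line, while pgrep
# prints one pid per line, so the parsing changes with the fix:
pidof_out = "27190 27078 26854"       # glusterd/glusterfs pids mixed in
pgrep_out = "27190\n27078\n26854\n"   # glusterfsd only, one per line

pids_old = pidof_out.strip().split(" ")
pids_new = pgrep_out.strip().splitlines()
```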
|
| |
Problem:
On latest platforms the pidof command returns
multiple pids as shown below:
27190 27078 26854
This is because it was returning the glusterd,
glusterfsd and glusterfs processes as well. The
problem is that /usr/sbin/glusterd is a link to
glusterfsd. 'pidof' now searches for the pattern
in /proc/PID/cmdline, /proc/PID/stat and finally
/proc/PID/exe. Hence pidof matches the realpath
of /proc/<pid_of_glusterd>/exe as
/usr/sbin/glusterfsd, and the glusterd, glusterfs
and glusterfsd pids are all returned in the output.
Fix:
Use pgrep instead of pidof to get the glusterfsd
pids, and change the split logic accordingly.
Change-Id: Ie215734387989f2d8cb19e4b4f7cddc73d2a5608
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
| |
Checks whether the xattrs of the given bricks are the same
Change-Id: Ib1ba010bfeafc132123a88a893017f870a989789
Signed-off-by: ubansal <ubansal@redhat.com>
|
| |
Verifies whether the shd daemon is up and running on a particular node.
The method checks whether the shd pid is present on the given
node. If present, as an additional verification, it verifies that the
'self-heal daemon' for the specified node is not present in the get
volume status output.
Change-Id: I4865dc5c493a72ed7334ea998d0a231f4f8c75c8
Signed-off-by: Pranav <prprakas@redhat.com>
|
| |
Problem:
item.next() is not supported in Python 3.
Solution:
Add a try/except block to handle both Python 2
and Python 3.
Change-Id: I4c88804e45eee2a2ace24a982447000027e6ca3c
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
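The portability pattern the fix describes looks like this:

```python
# next(iterator) is the portable spelling; iterator.next() exists only
# in Python 2, hence the try/except the fix adds:
items = iter([1, 2, 3])
try:
    first = items.next()     # Python 2 only
except AttributeError:
    first = next(items)      # Python 3 (and 2.6+)
```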
|
| |
Problem:
The `replace_brick_from_volume()` function doesn't
support brick sharing, which is a problem when we
try to perform replace brick with a large number
of volumes: it is unable to get bricks and hence fails.
Solution:
Add a boolean kwarg multi_vol and use
form_bricks_for_multivol() or form_bricks_list()
according to its value. The default value of
multi_vol is False, which uses only
form_bricks_list(), as before the changes.
Blocks:
This patch currently blocks the below mentioned patch:
https://review.gluster.org/#/c/glusto-tests/+/19483/
Change-Id: I842a4ebea81e53e694b5b194294f1b941f47d380
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
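The dispatch on the kwarg can be sketched with stub brick pickers (the real helpers live in glustolibs and talk to actual servers; everything below is a stand-in):

```python
def form_bricks_list(volname, servers):
    # stub: old behaviour, always picks from the first server
    return ["%s:/bricks/%s_b0" % (servers[0], volname)]

def form_bricks_for_multivol(volname, servers):
    # stub: spreads volumes across servers so brick partitions are shared
    return ["%s:/bricks/%s_b0" % (servers[len(volname) % len(servers)], volname)]

def pick_replacement_brick(volname, servers, multi_vol=False):
    """Choose the brick-forming helper based on the multi_vol kwarg."""
    picker = form_bricks_for_multivol if multi_vol else form_bricks_list
    return picker(volname, servers)
```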
|
| |
The method executes the 'gluster get-state' command on the specified
node, verifies the glusterd state dump, reads it, and returns the
content as a dictionary.
Change-Id: I0356ccf740fd97d1930e9f09d6111304b14cd015
Signed-off-by: Pranav <prprakas@redhat.com>
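`gluster get-state` writes an INI-like dump, so a dictionary can be built with configparser; the section and key names below are illustrative, not the library's actual parsing code:

```python
import configparser

dump = """[Global]
op-version: 70000

[Peers]
Peer1.hostname: server2
"""

# get-state uses ':' between key and value, so override the delimiters.
parser = configparser.ConfigParser(delimiters=(":",))
parser.read_string(dump)
state = {section: dict(parser.items(section)) for section in parser.sections()}
```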
|
| |
Problem:
`g.rpyc_get_connection()` has a limitation where it can't
convert python2 calls to python3 calls. Due to this a large
number of testcases fail when executed from a python2 machine
on a python3-only setup, or vice versa, with the below stack trace:
```
E ========= Remote Traceback (1) =========
E Traceback (most recent call last):
E File "/root/tmp.tL8Eqx7d8l/rpyc/core/protocol.py", line 323, in _dispatch_request
E res = self._HANDLERS[handler](self, *args)
E File "/root/tmp.tL8Eqx7d8l/rpyc/core/protocol.py", line 591, in _handle_inspect
E if hasattr(self._local_objects[id_pack], '____conn__'):
E File "/root/tmp.tL8Eqx7d8l/rpyc/lib/colls.py", line 110, in __getitem__
E return self._dict[key][0]
E KeyError: (b'rpyc.core.service.SlaveService', 94282642994712, 140067150858560)
```
Solution:
The solution here is to modify the code to not use
`g.rpyc_get_connection()`. The following changes are done
to accomplish it:
1) Remove code which uses g.rpyc_get_connection() and use generic
logic in functions:
   a. do_bricks_exist_in_shd_volfile()
   b. get_disk_usage()
   c. mount_volume()
   d. list_files()
   e. append_string_to_file()
2) Create files which can be uploaded and executed on
clients/servers to avoid rpc calls in functions:
   a. calculate_hash()
   b. validate_files_in_dir()
3) Modify setup.py to push the below files to
`/usr/share/glustolibs/scripts/`:
   a. compute_hash.py
   b. walk_dir.py
Change-Id: I00a81a88382bf3f8b366753eebdb2999260788ca
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
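The general replacement pattern: instead of proxying Python objects over rpyc (which cannot bridge a py2 client and a py3 server), run a self-contained script with the target side's own interpreter and parse its plain-text output. Run locally, the same shape looks like this (the helper one-liner stands in for scripts such as compute_hash.py):

```python
import subprocess
import sys

# Stand-in for a pushed helper script: prints a value, never returns objects.
helper = "import sys; print(len(sys.argv[1]))"

result = subprocess.run([sys.executable, "-c", helper, "/some/dir"],
                        capture_output=True, text=True, check=True)
value = int(result.stdout)   # parse plain text instead of proxying objects
```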
|
| |
Add docleanup and docleanupclass in the baseclass,
which call the function fresh_setup_cleanup to
restore the nodes to a fresh setup whenever the
flag is set to true or a testcase fails.
Change-Id: I951ff59cc3959ede5580348b7f93b57683880a23
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
|
| |
Testcase steps (file access):
- Rename the file so that the hashed and cached subvols are different
- Make sure the file can be accessed as long as the cached subvol is up
Also fixes a library issue in find_new_hashed()
Change-Id: Id81264848d6470b9fe477b50290f5ecf917ceda3
Co-authored-by: Susant Palai <spalai@redhat.com>
Signed-off-by: Susant Palai <spalai@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
| |
Case 1:
1. mkdir srcdir and dstdir (such that srcdir and
dstdir hash to different subvols)
2. Bring down srcdir's hashed subvol
3. mv srcdir dstdir (should fail)
Case 2:
1. mkdir srcdir dstdir
2. Bring down srcdir's hashed subvol
3. Bring down dstdir's hashed subvol
4. mv srcdir dstdir (should fail)
Case 3:
1. mkdir srcdir dstdir
2. Bring down dstdir's hashed subvol
3. mv srcdir dstdir (should fail)
Additional library fix details:
Also fixes the library function to work with distributed-disperse
volumes by removing `if oldhashed._host != brickdir._host:`, as the
same node can host multiple bricks of the same volume.
Change-Id: Iaa472d1eb304b547bdec7a8e6b62c1df1a0ce591
Co-authored-by: Susant Palai <spalai@redhat.com>
Signed-off-by: Susant Palai <spalai@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
| |
Earlier, brick creation was carried out based on the difference of used
and unused bricks. This was a bottleneck for implementing brick
multiplexing testcases; moreover, we couldn't create more than 10
volumes. With this library, bricks are created on top of the existing
servers in a cyclic way, so each brick partition on each server holds
an equal number of bricks.
Added a parameter in the setup_volume function: if the multi_vol flag
is set, it fetches bricks in a cyclic manner (using
form_bricks_for_multi_vol); otherwise it fetches them using the old
mechanism.
Added a bulk_volume_creation function to create the multiple volumes
the user has specified.
Change-Id: I2103ec6ce2be4e091e0a96b18220d5e3502284a0
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
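The cyclic placement can be sketched with itertools (an illustrative stand-in for form_bricks_for_multi_vol; the real helper also tracks bricks already used on each partition):

```python
from itertools import cycle, islice

def form_bricks_cyclic(volname, servers, num_bricks):
    """Pick brick hosts round-robin so each server partition ends up
    with roughly the same number of bricks."""
    hosts = islice(cycle(servers), num_bricks)
    return ["%s:/bricks/brick0/%s_brick%d" % (host, volname, i)
            for i, host in enumerate(hosts)]
```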
|
| |
Problem:
Due to patch [1], the framework broke and
was failing for all the testcases with the below
backtrace:
```
> mount_dict['server'] = cls.snode
E AttributeError: type object 'VolumeAccessibilityTests_cplex_replicated_glusterf'
has no attribute 'snode'
```
Solution:
This was because mnode_slave was accidentally written as snode. Also,
cls.geo_rep_info wasn't a safe condition operator, hence it was
changed to cls.slaves.
Testcase results with patch:
test_cvt.py::TestGlusterHealSanity_cplex_replicated_glusterfs::test_self_heal_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed-dispersed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterHealSanity_cplex_dispersed_glusterfs::test_self_heal_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed_nfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_replicated_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_distributed-dispersed_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestGlusterReplaceBrickSanity_cplex_distributed-replicated_glusterfs::test_replace_brick_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_distributed_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_dispersed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed-dispersed_glusterfs::test_shrinking_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_dispersed_nfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed_glusterfs::test_shrinking_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_dispersed_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_distributed-replicated_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_dispersed_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_distributed-replicated_glusterfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_replicated_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed-dispersed_nfs::test_shrinking_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed_nfs::test_shrinking_volume_when_io_in_progress PASSED
links:
[1] https://review.gluster.org/#/c/glusto-tests/+/24029/
Change-Id: If7b329e232ab61df9f9d38f5491c58693336dd48
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
| |
Adding the code for the following:
1. Adding function setup_master_and_slave_volumes() to geo_rep_libs.
2. Adding variables for master_mounts, slave_mounts, master_volume
and slave_volume to gluster_base_class.py.
3. Adding classmethod
setup_and_mount_geo_rep_master_and_slave_volumes to
gluster_base_class.py.
Change-Id: Ic8ae1cb1c8b5719d4774996c3e9e978551414b44
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
| |
The new function volume_type() checks whether the volume under test
is of pure Replicated/Disperse/Arbiter type and returns the result
as a string.
The functions run_layout_tests() and validate_files_in_dir() have
been modified to check the Gluster version and volume type in order
to fix the issues caused by DHT pass-through.
Change-Id: Ie7ad259883907c1fdc0b54e6743636fdab793272
Signed-off-by: sayaleeraut <saraut@redhat.com>
|
| |
The issue earlier was that whenever a TC called the _get_layout()
and _is_complete() methods, it failed on Replicate/Arbiter/Disperse
volume types because of DHT pass-through.
The functions get_layout() and is_complete() have been modified to
check the Gluster version and volume type before running, in
order to fix the issue.
About DHT pass-through : Please refer to-
https://github.com/gluster/glusterfs/issues/405
for the details.
Change-Id: I0b0dc0ac3cbdef070a20854fbc89442fee1da8b6
Signed-off-by: sayaleeraut <saraut@redhat.com>
|
| |
Instead of calling g.log.error, we were calling g.log.err.
Due to this, instead of the right error message (say, when
doing volume cleanup), an ambiguous traceback was thrown.
Change-Id: I39887ce08756eaf29df2d99f73cc7795a4d2c065
|
| |
Earlier the method get_volume_type() passed when run on the
interpreter, but when it was called in a TC, it failed at the
condition (line 2235), e.g.:
The value of brickdir_path is "dhcp47-3.lab.eng.blr.redhat.com:/bricks/
brick2/testvol_replicated_brick2/" and the method tried to find that
value in the list ['10.70.46.172:/bricks/brick2/testvol_replicated_brick0',
'10.70.46.195:/bricks/brick2/testvol_replicated_brick1',
'10.70.47.3:/bricks/brick2/testvol_replicated_brick2'] returned by
get_all_bricks(), which fails.
Now, with the fix, it runs successfully, as it checks whether, for host
dhcp47-3.lab.eng.blr.redhat.com, the brick
/bricks/brick2/testvol_replicated_brick2 is present in the list
brick_paths[], which consists of only the paths and not the IP
addresses of the bricks present on that host.
Change-Id: Ie595faba1e92c559293ddd04f46b85065b23dfc5
Signed-off-by: sayaleeraut <saraut@redhat.com>
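The failing and fixed comparisons can be shown side by side, using the values from the commit message:

```python
# brickdir_path carries a hostname, but get_all_bricks() returns IPs,
# so naive membership fails; the fix compares only the path component.
brickdir = "dhcp47-3.lab.eng.blr.redhat.com:/bricks/brick2/testvol_replicated_brick2/"
full_bricks = ["10.70.46.172:/bricks/brick2/testvol_replicated_brick0",
               "10.70.46.195:/bricks/brick2/testvol_replicated_brick1",
               "10.70.47.3:/bricks/brick2/testvol_replicated_brick2"]

naive = brickdir.rstrip("/") in full_bricks          # False: hostname vs IP

host, _, path = brickdir.partition(":")
brick_paths = [b.partition(":")[2] for b in full_bricks]
fixed = path.rstrip("/") in brick_paths              # True with the fix
```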
|
| |
Add steps to bring offline bricks back online and to
reset the volume in failure scenarios
Change-Id: I9bdadd8a80ded81cf7cb4e324a18321400bfcc4c
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
|
| |
Earlier the elements passed in the list for the volume type check were
('replicate', 'disperse', 'arbiter'), but the volume type
returned by get_volume_type() is in the format 'Replicate',
'Disperse', 'Arbiter', and membership checks are case sensitive;
these changes make sure the comparison matches.
Change-Id: Ic73ca946cd9c06bfa5b92605dbeba74d6ffa83d9
Signed-off-by: sayaleeraut <saraut@redhat.com>
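The case sensitivity is easy to demonstrate:

```python
# Membership tests are case sensitive, so the capitalised names
# returned by get_volume_type() must be matched exactly:
volume_type = 'Replicate'
wrong_case = volume_type in ('replicate', 'disperse', 'arbiter')   # False
right_case = volume_type in ('Replicate', 'Disperse', 'Arbiter')   # True
```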
|
| |
The function get_volume_type() will return the type of volume (as
distributed/replicate/disperse/arbiter/distributed-replicated/
distributed-dispersed/distributed-arbiter) under test.
Change-Id: Ib23ae1ad18ef65d0520fe041a5f80211030a034b
Signed-off-by: sayaleeraut <saraut@redhat.com>
|
| |
The DHT pass-through functionality was introduced in Gluster
6, due to which the TCs were failing for Replicate, Disperse and
Arbiter volume types whenever the function to get the hashrange was
called.
With this fix, first the Gluster version and then the volume
type is checked before calling the function to get the
hashrange. If the Gluster version is greater than or equal to
6, the layout is not checked for pure AFR/Arbiter/EC
volumes.
About the DHT pass-through option: the distribute xlator now skips
unnecessary checks and operations when the distribute count is one
for a volume, resulting in improved performance. It comes into play
when there is only 1 brick or it is a pure replicate, pure
disperse or pure arbiter volume.
Change-Id: I55634f495a54e3c9909b1e1c716990b9ee9834a3
Signed-off-by: sayaleeraut <saraut@redhat.com>
|
| |
Adding function collect_bricks_arequal() to lib_utils.py
to collect arequal-checksum on all the bricks of all the
nodes used to create a volume using the below command:
```
arequal-checksum -p <BrickPath> -i .glusterfs -i .landfill -i .trashcan
```
Usage:
```
>>> all_bricks = get_all_bricks(self.mnode, self.volname)
>>> ret, arequal = collect_bricks_arequal(all_bricks)
>>> ret
True
```
Change-Id: Id42615469be18d84e5691c982369634c436ed0cf
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
| |
The existing method does not have an option to use the volume name
to get the list of snapshots. Without that option it is not
possible to get the snapshots specific to a volume.
With this fix, a kwarg 'volname' is added to the method
'get_snap_list', with which the snapshots of a particular
volume can be listed. This option is necessary in some test cases
where the user needs the list of snapshots specific to a
particular volume.
This fix also corrects a small typo in the description of
a method.
Change-Id: Ib0aeaf417a37142ebe36847e27bcd60683f325e7
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
|
| |
Added a library to clean up the lv created after snapshot clone, and
made modifications to the cleanup.
Change-Id: I71a437bf99eac1170895510475ddb30748771672
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
|
| |
The method wait_for_peers_to_connect checked whether its
'servers' arg is a string, when it should rather check whether the
arg is a list and convert it if required.
Changed the 'servers' arg validation type from 'str' to 'list'.
Change-Id: I74ddb489cd286c1f2531af478f8811759173f01e
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
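The validation pattern the fix flips can be sketched as a small normaliser (illustrative; the real check lives inside wait_for_peers_to_connect):

```python
def normalize_servers(servers):
    """Accept one hostname or a list of them, always return a list."""
    if not isinstance(servers, list):
        servers = [servers]
    return servers
```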
|
| |
The function returns the Gluster version as a float for the host
it is called on. It accepts the host IP as a parameter.
Change-Id: Icf48cf41031f0fa06cf3864e9215c5a960bb7c64
Signed-off-by: sayaleeraut <saraut@redhat.com>
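An illustrative parse of `gluster --version` output into a float (the real helper first runs the command on the given host; the output line below is assumed):

```python
def parse_gluster_version(first_line):
    """Turn e.g. 'glusterfs 7.2' into the float 7.2 (major.minor only)."""
    number = first_line.split()[1]
    major, minor = number.split(".")[:2]
    return float("%s.%s" % (major, minor))
```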
|
| |
Move waiters from testcases into library functions in gluster_init
and peer_ops.
Change-Id: I5ab1e42a5a0366fadb399789da1c156d8d96ec18
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
|
| |
This addition is to aid the test cases which validate the existence
of snapshots under the '.snaps' directory. The existing method,
'uss_list_snap', does not help in getting the list of snapshots
and instead just recursively performs 'ls' under the .snaps
directory.
This method returns a list of the directory names present under
the '.snaps' directory.
Change-Id: I808131788df975ca243ac5721713492422af0ab8
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
|
| |
Adding function is_broken_symlinks_present_on_bricks()
to brick_libs for checking if backend bricks have
broken symlinks or not.
Function added based on reviews on patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/20460/8/tests/functional/bvt/test_verify_volume_sanity.py
Change-Id: I1b512702ab6bc629bcd967ff34ad7ecfddfc1af1
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
| |
Fixing a KeyError introduced in the patch:
https://review.gluster.org/#/c/glusto-tests/+/23967/
Change-Id: Ib44678fe46df5090b1586b09b47d3046c4dc6f9b
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
| |
Adding params in gluster_base_class.py to store the value
of the root user's password and the non-root user's name, group
and password. sync_method will be mentioned inside the
testcase, as there are testcases which are specific
to only one of the sync types and it doesn't make sense to add
it in the config file. Instead we'll write testcases as shown
below:
```
class georeptestcase(baseclass):

    def run_chmod_with_changelog_crawl(self, sync_type):
        # Steps of the testcase

    def test_chmod_with_changelog_crawl_rsync(self):
        # Call the function with rsync as sync_type.
        self.run_chmod_with_changelog_crawl('rsync')

    def test_chmod_with_changelog_crawl_tarssh(self):
        # Call the function with tarssh as sync_type.
        self.run_chmod_with_changelog_crawl('tarssh')
```
Change-Id: Ie65f542e76bfbee89ac2914bdcd086e1bd08dfdb
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
| |
The library shared_storage_ops.py contains two functions which are
similar and do not serve completely unique purposes:
"is_shared_volume_mounted" and "is_shared_volume_unmounted".
The function "is_shared_volume_unmounted" is removed,
because any test case can be validated using an assertion on
"is_shared_volume_mounted".
The function "disable_shared_storage" had an incorrect description,
which has been corrected with this fix.
There are minor cosmetic changes as well, made to keep
the code lightweight.
Change-Id: I796831a95c205fef49a841eb14f5a15079f9a6b0
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
|
| |
Check the status of snapshot information
before and after restarting glusterd.
Test case:
1. Create a volume
2. Create two snapshots with descriptions
3. Check the snapshot status information with snapname,
with volume name, and without snap name/volname.
4. Restart glusterd on all nodes
5. Follow step 3 again and validate the snapshots
History of the patch:
The testcase was failing in CentOS CI due to bug [1]; however, the
fix for that bug was deferred. After updating the code with the
workaround mentioned below, it was observed that glusterd restart
was failing due to bug [2]. Now that both bugs are fixed, this patch
is safe to merge.
Workaround:
The only possible workaround was to
modify the function get_snap_info_by_volname()
to not use the --xml option and directly run the command, which
dumps the output as a string that can be used directly
in the testcase. This modification in the library function
does not impact any other testcase.
Links:
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1590797
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1782200
Change-Id: I26ac0aaa5f6c8849fd9de41f506d6d13fc55e166
Co-authored-by: srivickynesh <sselvan@redhat.com>
Signed-off-by: srivickynesh <sselvan@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
| |
Use the 'list' object type in comparisons instead of 'str',
because 'str' is treated differently in py2 and py3.
Example:
In py2, isinstance(u'foo', str) is False
In py3, isinstance(u'foo', str) is True
Change-Id: I7663d42494bf59d74550ff4897379d35cc357db4
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
| |
- Use the 'list' object type in comparisons instead of 'str',
  because 'str' is treated differently in py2 and py3. Example:
  # In py2, isinstance(u'foo', str) is False
  # In py3, isinstance(u'foo', str) is True
Change-Id: Ic0a5c1469e9951ee9b2472714004b05e2c5fdc94
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
|
| |
- Adding the following check functions:
1. is_passwordless_ssh_configured() - To check whether passwordless
ssh is configured between the given nodes with a given user.
2. is_group_exists() - To check whether a group is present on the
servers.
3. is_user_exists() - To check whether a given user is present on the
servers.
- Adding functionality to support both sync methods.
- Adding a nonrootpass parameter to georep_prerequisites(), as in
the previous logic the passwords for the non-root and root
users were the same, which might not always be the case.
- Fixing georep_config_get() and georep_config_set() to take a
non-root user as well.
Change-Id: I8a42d48d56690040dd7f78d1fb919029c0d6e61d
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
| |
isinstance() was missing in some places, due to which the
variable type check for strings was failing
in the following functions:
1. set_passwd()
2. group_add()
3. add_user()
Change-Id: Iafe47967f8d6df686c9ecdd6d87dac3c81bdb5db
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
| |
Test case steps:
1. Set cluster.brick-multiplex to enabled.
2. Create and start 2 volumes of type 1x3 and 2x3.
3. Check if cluster.brick-multiplex is enabled.
4. Reset the cluster using "gluster v reset all".
5. Check if cluster.brick-multiplex is disabled.
6. Create a new volume of type 2x3.
7. Set cluster.brick-multiplex to enabled.
8. Stop and start all three volumes.
9. Check whether the pids match and whether more
than one glusterfsd pid is present.
Additional library fix:
Changing the command in check_brick_pid_matches_glusterfsd_pid(),
as the old one won't work when brick-mux is enabled.
From:
cmd = ("ps -eaf | grep glusterfsd | grep %s.%s | grep -v 'grep %s.%s'"
       % (volname, brick_node, volname, brick_node))
To:
cmd = "pidof glusterfsd"
Change-Id: If7bdde13071732b176a0a2289635319571872e47
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
| |
Change-Id: Ifad7d7f8e2e97bf327483b90dbf5a1cb855bc0dd
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
|
| |
Change-Id: I7a76e0c2e491caffd7ba1b648b47c4c6a687c89a
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
|
| |
One of the recent changes [1] broke the volume cleanup logic
located in the base class.
Fix it by handling the cleanup function's results properly:
a '0' return code means 'success', not 'failure'
as that change assumed.
[1] 6cd137615aec29dade5b41975fcbdae06852cf53
Change-Id: I674493369202ceabc6983fae0b3834e3b0708bf1
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
|
| |
Change-Id: Ie408d7972452123b63eb5cc17c61bc319a99e304
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
|
| |
Change-Id: Iae0f6e729c26e466d82c4133439bdd7021485e7f
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
|