Please refer to commit message of patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/24140/
Change-Id: I5319ce497ca3359e0e7dbd9ece481bada1ee2205
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Testcase steps:
1. Create a single-brick volume.
2. Add some files and directories.
3. Get arequal from the mountpoint.
4. Add bricks such that the volume becomes a 1x3 replica volume
   (see the CLI sketch after this list).
5. Start heal full.
6. Make sure heal is completed.
7. Get arequals from all bricks and compare them with the arequal
   from the mountpoint.
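A minimal CLI sketch of steps 4-6 (volume and brick names are hypothetical):
```
gluster volume add-brick testvol replica 3 server2:/bricks/brick1 server3:/bricks/brick1
gluster volume heal testvol full
gluster volume heal testvol info
```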
Change-Id: I4ef140b326b3d9edcbd5b1f0b7d9c43f38ccfe66
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
BZ#1257394 - Provide meaningful errors on peer probe and peer detach
Test steps:
1. Check the current peer status.
2. Detach one of the valid nodes which is already part of the cluster.
3. Stop glusterd on that node.
4. Try to probe the above node back into the cluster, which must fail
   with a transport endpoint error (see the CLI sketch after these steps).
5. Recheck using the hostname; the same result is expected.
6. Start glusterd on that node.
7. Halt/reboot the node.
8. Try to peer probe the halted node, which must fail again.
9. The only error accepted is:
   "peer probe: failed: Probe returned with Transport endpoint is not
   connected"
10. Check peer status and make sure no other nodes are in the peer
    rejected state.
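The detach/probe cycle in steps 2-4 maps to CLI along these lines (hostnames are hypothetical):
```
gluster peer detach server2
# on server2:
systemctl stop glusterd
# back on the probing node:
gluster peer probe server2   # expected: "Probe returned with Transport endpoint is not connected"
gluster peer status
```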
Change-Id: Ic0a083d5cb150275e927723d960e89fe1a5528fb
Signed-off-by: nchilaka <nchilaka@redhat.com>
Add extra time for beaker machines to validate the testcases.
For test_rebalance_spurious.py, added cleanup in teardown because the
fix-layout patch is still not merged.
Change-Id: I7ee8324ff136bbdb74600b730b4b802d86116427
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Problem:
Due to patch [1], the framework broke and
was failing for all the testcases with the below
backtrace:
```
> mount_dict['server'] = cls.snode
E AttributeError: type object 'VolumeAccessibilityTests_cplex_replicated_glusterf'
has no attribute 'snode'
```
Solution:
This was because mnode_slave was accidentally written as snode, and
cls.geo_rep_info wasn't a safe attribute to check in the condition,
hence it was changed to cls.slaves.
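A minimal sketch of the corrected logic, reconstructed from this message (the surrounding base-class code and exact placement are assumptions, not the verbatim patch):
```
if cls.slaves:                                   # was: if cls.geo_rep_info
    mount_dict['server'] = cls.mnode_slave       # was: cls.snode
```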
Testcase results with patch:
test_cvt.py::TestGlusterHealSanity_cplex_replicated_glusterfs::test_self_heal_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed-dispersed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterHealSanity_cplex_dispersed_glusterfs::test_self_heal_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed_nfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_replicated_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_distributed-dispersed_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestGlusterReplaceBrickSanity_cplex_distributed-replicated_glusterfs::test_replace_brick_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_distributed_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_dispersed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed-dispersed_glusterfs::test_shrinking_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_dispersed_nfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed_glusterfs::test_shrinking_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_dispersed_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_distributed-replicated_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_dispersed_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_distributed-replicated_glusterfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_replicated_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed-dispersed_nfs::test_shrinking_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed_nfs::test_shrinking_volume_when_io_in_progress PASSED
links:
[1] https://review.gluster.org/#/c/glusto-tests/+/24029/
Change-Id: If7b329e232ab61df9f9d38f5491c58693336dd48
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Adding the code for the following:
1. Adding the function setup_master_and_slave_volumes() to geo_rep_libs.
2. Adding variables for master_mounts, slave_mounts, master_volume
   and slave_volume to gluster_base_class.py.
3. Adding the class method
   setup_and_mount_geo_rep_master_and_slave_volumes to
   gluster_base_class.py.
Change-Id: Ic8ae1cb1c8b5719d4774996c3e9e978551414b44
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Testcase steps:
1. Create a volume and mount it.
2. Create a directory on the mount and check whether all the bricks have
   the same gfid.
3. Now delete the gfid attr from all but one of the backend bricks
   (see the commands below).
4. Do a lookup from the mount.
5. Check whether all the bricks have the same gfid assigned.
Failing in CentOS-CI due to the following bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1696075
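For reference, the gfid xattr handling in steps 2-3 boils down to commands like the following (brick and directory paths are hypothetical):
```
# Inspect the gfid xattr of the directory on a brick (step 2):
getfattr -n trusted.gfid -e hex /bricks/brick0/testvol_brick0/dir1
# Remove the gfid xattr on all but one brick (step 3):
setfattr -x trusted.gfid /bricks/brick1/testvol_brick1/dir1
```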
Change-Id: I4eebc247b15c488cfa24599e0afec2fa5671656f
Co-authored-by: Anees Patel <anepatel@redhat.com>
Signed-off-by: Anees Patel <anepatel@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The new function volume_type() will check if the volume under test
is of a pure Replicated/Disperse/Arbiter type and return the result
as a string.
The functions run_layout_tests() and validate_files_in_dir() have
been modified to check the Gluster version and volume type in order
to fix the issues caused by DHT pass-through.
Change-Id: Ie7ad259883907c1fdc0b54e6743636fdab793272
Signed-off-by: sayaleeraut <saraut@redhat.com>
The issue earlier was that whenever a TC called the _get_layout()
and _is_complete() methods, it failed on Replicate/Arbiter/Disperse
volume types because of DHT pass-through.
The functions get_layout() and is_complete() have been modified to
check the Gluster version and volume type before running, in order
to fix the issue.
About DHT pass-through: please refer to
https://github.com/gluster/glusterfs/issues/405 for the details.
Change-Id: I0b0dc0ac3cbdef070a20854fbc89442fee1da8b6
Signed-off-by: sayaleeraut <saraut@redhat.com>
Problem:
The current timeout for reboot given in test_heal_full_node_reboot
is about 350 seconds, which works with most hardware configurations.
However, when the reboot is done on slower systems which take time
to come up, this logic fails, due to which this testcase and the
subsequent testcases fail.
Solution:
Change the timeout for reboot from 350 to 700. This wouldn't affect
the testcase's performance on good hardware configurations, as the
timeout is a maximum value and the wait exits as soon as the node
is back up.
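A minimal sketch of such a bounded wait (the helper name and the ping-based check are illustrative, not the framework's actual API):
```
import subprocess
import time

def wait_for_node(host, timeout=700, interval=10):
    """Poll until `host` answers a ping or `timeout` seconds elapse."""
    end = time.time() + timeout
    while time.time() < end:
        if subprocess.call(['ping', '-c', '1', '-W', '2', host],
                           stdout=subprocess.DEVNULL) == 0:
            return True
        time.sleep(interval)
    return False
```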
Change-Id: I60d05236e8b08ba7d0fec29657a93f2ae53404d4
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to commit message of patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/24140/
Change-Id: I25d30f7bdb20f0825709c4c852140e1906870ce7
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to commit message of patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/24140/
Change-Id: Ib357d5690bb28131d788073b80a088647167fe80
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to commit message of patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/24140/
Change-Id: Ic0b3b1333ac7b1ae02f701943d49510e6d46c259
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The sys library was added to all the testcases to fetch
`sys.version_info.major`, which gives the major version of the Python
interpreter that glusto and glusto-tests are installed with, and the
I/O script, i.e. file_dir_ops.py, was then run with that version of
Python. This creates a problem, as older jobs running on older
platforms won't run the way they used to: if the older platform has
python2 by default and the tests are run from a slave which has
python3, they fail, and vice versa.
The problem is introduced by the code below:
```
cmd = ("/usr/bin/env python%d %s create_deep_dirs_with_files "
"--dirname-start-num 10 --dir-depth 1 --dir-length 1 "
"--max-num-of-dirs 1 --num-of-files 5 %s" % (
sys.version_info.major, self.script_upload_path,
self.mounts[0].mountpoint))
```
The solution to this problem is to change `python%d` to `python`,
which lets the code run with whatever version of Python is available
on that client. This enables us to run any version of the framework
on both the older and the latest platforms.
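For illustration, the same command after the change would look roughly like this (a sketch derived from the snippet above; the surrounding test code is unchanged):
```
cmd = ("/usr/bin/env python %s create_deep_dirs_with_files "
       "--dirname-start-num 10 --dir-depth 1 --dir-length 1 "
       "--max-num-of-dirs 1 --num-of-files 5 %s" % (
           self.script_upload_path, self.mounts[0].mountpoint))
```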
Change-Id: I7c8200a7578f03c482f0c6a91832b8c0fdb33e77
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The script sometimes fails at expand volume with an "Already part of
volume" error; this patch fixes it.
Change-Id: I628bbdb268e5a42112f68d9148da6bdb775acd26
Co-authored-by: Prasad Desala <tdesala@redhat.com>,
Milind Waykole <milindwaykole96@gmail.com>
Signed-off-by: Prasad Desala <tdesala@redhat.com>
Signed-off-by: Milind Waykole <milindwaykole96@gmail.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Problem:
The default value of performance.io-cache was ON before Gluster 6.0;
in Gluster 6.0 it was changed to OFF.
Solution:
Adding code to check the Gluster version and then check whether the
option is ON or OFF, as shown below:
```
if get_gluster_version(self.mnode) >= 6.0:
self.assertIn("off", ret['performance.io-cache'],
"io-cache value is not correct")
else:
self.assertIn("on", ret['performance.io-cache'],
"io-cache value is not correct")
```
CentOS-CI failure analysis:
This patch is expected to fail because, on nightly builds,
`gluster --version` returns output as shown below:
```
# gluster --version
glusterfs 20200220.a0e0890
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
```
This output can't be parsed by the get_gluster_version() function,
which is used in this patch to get the Gluster version and check
performance.io-cache's default value accordingly.
Change-Id: I00b652a9d5747cbf3006825bb17b9ca2f69cb9cd
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Moved steps from the teardown class to teardown and removed the
unwanted teardown class. Also rectified the testcase failing at the
wait-for-IO-to-complete step by removing that step, because after IO
validation the sub-process terminates and the wait then fails.
Change-Id: I2eaf05680b817b681aff8b48683fc9dac88896b0
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Change-Id: I0fa6bbacda16fb97d3454a8510a937442b5755a4
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Change-Id: I04f7b7c894d48d0188379028412d9c6b48eac210
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Used the wait-for-peer-to-connect and wait-for-glusterd functions in
the testcases, added fixes to check that files exist, and increased
the timeout value for failure cases.
Change-Id: I9d5692f635ed324ffe7dac9944ec9b8f3b933fd1
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
As distributed-arbiter and arbiter volume types weren't present before
patch [1], arbiter and distributed-arbiter volumes were created by the
hack shown below, where a distributed-replicated or replicated volume's
configuration was modified to create an arbiter volume.
```
@runs_on([['replicated', 'distributed-replicated'],
['glusterfs', 'nfs']])
class TestSelfHeal(GlusterBaseClass):
.................
@classmethod
def setUpClass(cls):
...............
# Overriding the volume type to specifically test the volume
# type Change from distributed-replicated to arbiter
if cls.volume_type == "distributed-replicated":
cls.volume['voltype'] = { 'type': 'distributed-replicated',
'dist_count': 2,
'replica_count': 3,
'arbiter_count': 1,
'transport': 'tcp'}
```
Now this code is to be updated: the code which was used to override
the volume configuration is removed, and arbiter or distributed-arbiter
is simply added to `@runs_on([], [])`, as shown below:
```
@runs_on([['replicated', 'distributed-arbiter'],
['glusterfs', 'nfs']])
class TestSelfHeal(GlusterBaseClass):
```
Links:
[1] https://github.com/gluster/glusto-tests/commit/08b727842bc66603e3b8d1160ee4b15051b0cd20
Change-Id: I4c44c2f3506bd0183fd991354fb723f8ec235a4b
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Instead of calling g.log.error, we were calling g.log.err.
Due to this, instead of logging the right error message when, say,
doing volume cleanup, it was throwing an ambiguous traceback.
Change-Id: I39887ce08756eaf29df2d99f73cc7795a4d2c065
Earlier the method get_volume_type() passed when run on the interpreter,
but when the method was called in a TC, it failed at the condition (Line:
2235) because, for example, the value of brickdir_path is
"dhcp47-3.lab.eng.blr.redhat.com:/bricks/brick2/testvol_replicated_brick2/"
and it tries to find that value in the list
['10.70.46.172:/bricks/brick2/testvol_replicated_brick0',
'10.70.46.195:/bricks/brick2/testvol_replicated_brick1',
'10.70.47.3:/bricks/brick2/testvol_replicated_brick2'] returned by
get_all_bricks(), which will fail.
Now, with the fix, it runs successfully, as it checks whether, for the
host dhcp47-3.lab.eng.blr.redhat.com, the brick
/bricks/brick2/testvol_replicated_brick2 is present in the list
brick_paths[], which consists of only the paths and not the IP addresses
of the bricks present on that host.
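A rough sketch of the corrected comparison (variable names taken from this message; host filtering and the library's exact code are elided/assumed):
```
host, path = brickdir_path.split(':')
# Compare only the path component of each brick, not the "host:path" string.
brick_paths = [b.split(':')[1] for b in get_all_bricks(mnode, volname)]
found = path.rstrip('/') in [p.rstrip('/') for p in brick_paths]
```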
Change-Id: Ie595faba1e92c559293ddd04f46b85065b23dfc5
Signed-off-by: sayaleeraut <saraut@redhat.com>
Change-Id: I2ba0c81dad41bdac704007bd1780b8a98cb50358
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Testcase steps:
1. Set the volume options (see the CLI sketch after this list):
   "metadata-self-heal": "off"
   "entry-self-heal": "off"
   "data-self-heal": "off"
   "self-heal-daemon": "off"
2. Bring down all brick processes from the selected set.
3. Create IO (50k files).
4. Get arequal before bringing bricks online.
5. Bring bricks online.
6. Set the volume option:
   "self-heal-daemon": "on"
7. Check for daemons.
8. Start healing.
9. Check if heal is completed.
10. Check for split-brain.
11. Get arequal after bringing bricks online and compare it with the
    arequal from before bringing bricks online.
12. Add bricks to the volume.
13. Do rebalance and wait for it to complete.
14. Get arequal after adding bricks and compare it with the arequal
    from after bringing bricks online.
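The option changes in steps 1 and 6 and the heal trigger in step 8 map to CLI along these lines (volume name hypothetical; the full cluster.* option names are used here):
```
gluster volume set testvol cluster.metadata-self-heal off
gluster volume set testvol cluster.entry-self-heal off
gluster volume set testvol cluster.data-self-heal off
gluster volume set testvol cluster.self-heal-daemon off
# ... later, re-enable the daemon and trigger the heal:
gluster volume set testvol cluster.self-heal-daemon on
gluster volume heal testvol
```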
Change-Id: I1598c4d6cf98ce99249e85fc377b9db84886f284
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Added steps to reset volume and resolved teardown class
cleanup failures.
Change-Id: I06b0ed8810c9b064fd2ee7c0bfd261928d8c07db
Used library functions to wait for glusterd to start and to wait for
peers to connect, and made modifications in the teardown part,
rectifying statements to use correct values.
Change-Id: I40b4362ae1491acf75681c7623c16c53213bb1b9
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Added the wait_for_io_to_complete function to the testcases, and used
the wait_for_glusterd and wait_for_peer_connect functions.
Change-Id: I4811848aad8cca4198cc93d8e200dfc47ae7ac9b
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Add steps to bring offline bricks back online and to reset the volume
in failure scenarios.
Change-Id: I9bdadd8a80ded81cf7cb4e324a18321400bfcc4c
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Earlier the elements passed in the list for the volume type check were
('replicate', 'disperse', 'arbiter'), but the volume type returned by
get_volume_type() is in the format 'Replicate', 'Disperse', 'Arbiter',
and the membership check is case sensitive; these changes make sure
the check does not fail.
Change-Id: Ic73ca946cd9c06bfa5b92605dbeba74d6ffa83d9
Signed-off-by: sayaleeraut <saraut@redhat.com>
The function get_volume_type() will return the type of volume (as
distributed/replicate/disperse/arbiter/distributed-replicated/
distributed-dispersed/distributed-arbiter) under test.
Change-Id: Ib23ae1ad18ef65d0520fe041a5f80211030a034b
Signed-off-by: sayaleeraut <saraut@redhat.com>
The DHT pass-through functionality was introduced in Gluster 6, due to
which the TCs were failing for Replicate, Disperse and Arbiter volume
types whenever the function to get the hashrange was called.
With this fix, first the Gluster version and later the volume
type will be checked before calling the function to get the
hashrange. If the Gluster version is greater than or equal to
6, the layout will not be checked for the pure AFR/Arbiter/EC
volumes.
About the DHT pass-through option: the distribute xlator now skips
unnecessary checks and operations when the distribute count is one for
a volume, resulting in improved performance. It comes into play when
there is only one brick, or when it is a pure replicate, pure disperse
or pure arbiter volume.
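For context, the hashrange in question comes from the DHT layout xattr on the brick-side directory, which is absent when the distribute xlator is skipped (brick and directory paths below are hypothetical):
```
getfattr -n trusted.glusterfs.dht -e hex /bricks/brick0/testvol_brick0/dir1
```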
Change-Id: I55634f495a54e3c9909b1e1c716990b9ee9834a3
Signed-off-by: sayaleeraut <saraut@redhat.com>
Reboot cases are failing with the current timeout value, therefore
increasing the timeout value in the function.
Change-Id: I262120e87d36b2d5cc7244b37d5f6e051c964f0f
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Change-Id: I1eacfd74c730d28e36bb8f7e3a1f574edc3d13c7
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Testcase 1: Test entry transaction crash consistency: create
- Create IO
- Calculate arequal before creating snapshot
- Create snapshot
- Modify the data
- Stop the volume
- Restore snapshot
- Start the volume
- Get arequal after restoring snapshot
- Compare arequals
Testcase 2: Test entry transaction crash consistency: delete
- Create IO of 50 files
- Delete 20 files
- Calculate arequal before creating snapshot
- Create snapshot
- Delete 20 files more
- Stop the volume
- Restore snapshot
- Start the volume
- Get arequal after restoring snapshot
- Compare arequals
Testcase 3: Test entry transaction crash consistency: rename
- Create IO of 50 files
- Rename 20 files
- Calculate arequal before creating snapshot
- Create snapshot
- Rename 20 files more
- Stop the volume
- Restore snapshot
- Start the volume
- Get arequal after restoring snapshot
- Compare arequals
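The snapshot stop/restore/start sequence shared by all three testcases corresponds to CLI along these lines (snapshot and volume names are hypothetical):
```
gluster snapshot create snap1 testvol no-timestamp
gluster volume stop testvol
gluster snapshot restore snap1
gluster volume start testvol
```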
Change-Id: I7cb9182f91ae50c47d5ae9b3f8031413b2bbfbbf
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Adding function collect_bricks_arequal() to lib_utils.py
to collect arequal-checksum on all the bricks of all the
nodes used to create a volume using the below command:
```
arequal-checksum -p <BrickPath> -i .glusterfs -i .landfill -i .trashcan
```
Usage:
```
>>> all_bricks = get_all_bricks(self.mnode, self.volname)
>>> ret, arequal = collect_bricks_arequal(all_bricks)
>>> ret
True
```
Change-Id: Id42615469be18d84e5691c982369634c436ed0cf
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Verifying the glusterfind functionality with deletion of files.
* Create a volume
* Create a session on the volume
* Create various files from mount point
* Perform glusterfind pre
* Perform glusterfind post
* Check the contents of outfile
* Modify the contents of the files from mount point
* Perform glusterfind pre
* Perform glusterfind post
* Check the contents of outfile
Files modified must be listed
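The glusterfind session and pre/post runs above correspond to CLI along these lines (session name, volume name and outfile path are hypothetical):
```
glusterfind create session1 testvol
glusterfind pre session1 testvol /tmp/outfile.txt
glusterfind post session1 testvol
```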
Change-Id: Ie696e194364b2b86a7ceb5fb6e10066ecc669577
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
There can be a case where due to a failed tearDown of a previous
test case, the snapshot creation might fail because of redundant
snapshot of the same name present.
This fix includes the changed snapshot names along with tearDown
for deleting the snapshots created in the test case
"test_snap_info_glusterd_restart.py".
Changing the names of the test case function to something
relevant to the test case.
Making use of the method "wait_for_glusterd_to_start" in places
where glusterd is restarted.
Change-Id: I6cb50dc84d306194b6bd363daf7ae0ebd6bb12ee
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
There can be a case where due to a failed tearDown of a previous
test case, the snapshot creation might fail. This can happen when
a redundant snapshot with the same name is present.
This fix includes the changed snapshot names along with tearDown
for deleting the snapshots created in the test case
"test_snap_list_after_restart.py".
This fix also includes the method 'wait_for_glusterd_to_start'
where there is a restart of glusterd.
Adding a check for validation of snapshots using snapname.
Change-Id: If8b48a12bd067ad54dba742eeb88444beaf5f153
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
The existing method does not have an option to use the volume name
to get the list of snapshots. Without the option to provide the
volume name, it is not possible to get the snapshots specific to a
volume.
With this fix, there is an addition of a kwarg 'volname' for the
method 'get_snap_list' with which the snapshots of a particular
volume can be listed. This option is necessary in some test cases
where the user needs to get the list of snapshots specific to a
particular volume.
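For reference, the underlying CLI already supports this filtering, which is presumably what the new kwarg maps to (volume name hypothetical):
```
gluster snapshot list            # all snapshots
gluster snapshot list testvol    # snapshots of a particular volume only
```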
This fix also corrects a small typo in the description of a method.
Change-Id: Ib0aeaf417a37142ebe36847e27bcd60683f325e7
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Added a library function to clean up the LV created after snapshot
clone, and made modifications to the cleanup.
Change-Id: I71a437bf99eac1170895510475ddb30748771672
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
The method wait_for_peers_to_connect checks whether its arg 'servers'
is a string. It should rather check whether the arg is a list and make
the change accordingly if required.
Changed the arg 'servers' validation type from 'str' to 'list'.
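A minimal sketch of the described normalisation (the argument name comes from this message; the library's actual code may differ):
```
# Accept either a single server or a list of servers.
if not isinstance(servers, list):
    servers = [servers]
```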
Change-Id: I74ddb489cd286c1f2531af478f8811759173f01e
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Change-Id: Id0cdda175865c84bef917211560acee8ea10fe7b
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The function will return the Gluster version as a float for the host
it is called on. It accepts the host IP as a parameter.
Change-Id: Icf48cf41031f0fa06cf3864e9215c5a960bb7c64
Signed-off-by: sayaleeraut <saraut@redhat.com>
available space
Testcase:
- note the current available space on the mount
- create 1M file on the mount
- note the current available space on the mountpoint and compare
with space before creation
- remove the file
- note the current available space on the mountpoint and compare
with space before creation
Change-Id: Iff017039d1888d03f067ee2a9f26aff327bd4059
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Description: This test creates directories and subdirectories
and their copies at mount point and checks for layout and
directory information at all subvols.
Change-Id: Iabce046e7ce63c5428061bcefb98d06359dac8bd
Co-authored-by: Kartik_Burmee <kburmee@redhat.com>
Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The test case 'test_256_snapshots.py' contains hard-coded commands
which can be run using the existing library functions from the
snap_ops.py library, such as:
* snap_create
* set_snap_config
This patch includes usage of appropriate functions in place of the
hard-coded commands along with various cosmetic changes.
Change-Id: I6da7444d903efd1be6582c8fea037a5c4fddc111
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Summary:
Verify file/directory creation or deletion on volume
doesn't leave behind any broken symlinks on the bricks.
Steps:
1. Create 10 files and directories.
2. Check if broken symlinks are present on the brick path (see the
   command below).
3. Remove the files and directories.
4. Check again if broken symlinks are present on the brick path.
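Broken (dangling) symlinks on a brick can be listed with a command like the following (brick path hypothetical):
```
find /bricks/brick0/testvol_brick0/ -xtype l
```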
Change-Id: If65a5eaa0fe0e1b96f002496744580e0ede5a4af
Co-authored-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Moving waiters from the testcases and adding them as functions in the
gluster_init and peer_ops library files.
Change-Id: I5ab1e42a5a0366fadb399789da1c156d8d96ec18
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
This addition is to aid the test cases which validate the existence
of snapshots under the '.snaps' directory. The existing method i.e.
'uss_list_snap' does not help in getting the list of snapshots
and instead just recursively performs 'ls' under the .snaps
directory.
This method will return a list of directory names present under
the '.snaps' directory.
Change-Id: I808131788df975ca243ac5721713492422af0ab8
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>