Testcase steps:
1.Turn off the option self-heal-daemon
2.Create IO
3.Calculate arequal of the bricks and mount point
4.Bring down "brick1" process
5.Change the permissions of the directories and files
6.Change the ownership of the directories and files
7.Change the group of the directories and files
8.Bring back the brick "brick1" process
9.Execute "find . | xargs stat" from the mount point to trigger heal
10.Verify the changes in permissions are not self healed on brick1
11.Verify the changes in permissions on all bricks but brick1
12.Verify the changes in ownership are not self healed on brick1
13.Verify the changes in ownership on all the bricks but brick1
14.Verify the changes in group are not successfully self-healed
on brick1
15.Verify the changes in group on all the bricks but brick1
16.Turn on the option metadata-self-heal
17.Execute "find . | xargs md5sum" from the mount point to trigger heal
18.Wait for heal to complete
19.Verify the changes in permissions are self-healed on brick1
20.Verify the changes in ownership are successfully self-healed
on brick1
21.Verify the changes in group are successfully self-healed on brick1
22.Calculate arequal check on all the bricks and mount point
Change-Id: Ia7fb1b272c3c6bf85093690819b68bd83efefe14
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Description:
This test case creates files at mount point
and verifies custom attributes across bricks
Testcase steps:
1.Create a gluster volume and start it.
2.Create file and link files.
3.Create a custom xattr for file.
4.Verify that xattr for file is displayed on
mount point and bricks
5.Modify custom xattr value and verify that xattr
for file is displayed on mount point and bricks
6.Verify that custom xattr is not displayed
once you remove it
7.Create a custom xattr for symbolic link.
8.Verify that xattr for symbolic link
is displayed on mount point and sub-volume
9.Modify custom xattr value and verify that
xattr for symbolic link is displayed on
mount point and bricks
10.Verify that custom xattr is not
displayed once you remove it.
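The create/verify/remove cycle in steps 3-6 can be sketched locally with Python's Linux-only xattr API (a hedged illustration; the test itself runs the equivalent setfattr/getfattr commands on the mount and bricks, and some filesystems reject user xattrs, hence the OSError fallback):

```python
import os
import tempfile

def roundtrip_xattr(path, name="user.foo", value=b"bar"):
    """Set, read back and remove a custom xattr; None if unsupported."""
    if not hasattr(os, "setxattr"):  # Linux-only API
        return None
    try:
        os.setxattr(path, name, value)   # step 3: create custom xattr
        got = os.getxattr(path, name)    # step 4: verify it is visible
        os.removexattr(path, name)       # step 6: remove it again
        return got
    except OSError:
        return None  # filesystem without user xattr support

with tempfile.NamedTemporaryFile() as f:
    result = roundtrip_xattr(f.name)
```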
Change-Id: Iff7360273369c77da243f2c09df2e10a0eec27ea
Co-authored-by: Kartik_Burmee <kburmee@redhat.com>
Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Adding function create_link_file() to create
soft and hard links for an existing file.
Change-Id: I6be313ded1a640beb450425fbd29374df51fbfa3
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The test case 'tests/functional/quota/test_limit_usage_deep_dir.py'
fails erratically for disperse volume.
A bug [1] had been raised for the same where it was decided to
remove the disperse volume type from the 'runs_on' of that test.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1672983
Change-Id: Ica8f2af449225d72d1b60c2c86b20e16b80a5a5a
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Description:
Test Script to verify the glustershd server vol file
has only entries for replicate volumes.
Testcase steps:
1.Create multiple volumes and start all volumes
2.Check the glustershd processes(Only 1 glustershd
should be listed)
3.Do replace brick on the replicate volume
4.Confirm that the brick is replaced
5.Check the glustershd processes(Only 1 glustershd should be listed
and pid should be different)
6.glustershd server vol should be updated with new bricks
Change-Id: I09245c8ff6a2b31a038749643af294aa8b81a51a
Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>,
Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
This test verifies remove brick operations on disperse
volume.
Change-Id: If4be3ffc39a8b58e4296d58b288e3843a218c468
Co-authored-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Test case:
* Create IO
* Calculate arequal from mount
* Kill the glusterd and glustershd processes on the arbiter nodes
* Delete data from the backend on the arbiter nodes
* Start glusterd process and force start the volume
to bring the processes online
* Check if heal is completed
* Check for split-brain
* Calculate arequal checksum and compare it
Change-Id: I41192134530ec42db3398ae97e4f328b77e529d1
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Pranav <prprakas@redhat.com>
Test Steps:
1. Create a volume and set the volume option
'diagnostics.client-log-level' to DEBUG mount the volume on one
client.
2. Create a directory
3. Validate the number of lookups for the directory creation from the
log file.
4. Perform a new lookup of the directory
5. No new lookups should have happened on the directory, validate from
the log file.
6. Bring down one subvol of the volume and repeat step 4, 5
7. Bring down one brick from the online bricks and repeat step 4, 5
8. Start the volume with force and wait for all processes to be online.
Change-Id: I162766837fd7e61625238a669c4050c2ec9c8a8b
Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
Problem:
On the latest platforms the pidof command returns
multiple pids as shown below:
27190 27078 26854
This is because it also returns the glusterd, glusterfsd
and glusterfs processes. The underlying problem is that
/usr/sbin/glusterd is a link to glusterfsd.
pidof now searches for the pattern in /proc/PID/cmdline,
/proc/PID/stat and finally /proc/PID/exe, so it matches
the realpath of /proc/<pid_of_glusterd>/exe as
/usr/sbin/glusterfsd, and the glusterd, glusterfs and
glusterfsd pids all end up in the output.
Fix:
Use pgrep instead of pidof to get the glusterfsd
pids, and change the split logic accordingly.
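A minimal sketch of the fix (the helper name and local execution are assumptions; the real library runs the command on a remote node), which also shows why the split logic changes: pgrep prints one pid per line, while pidof prints them space-separated on a single line:

```python
import subprocess

def get_glusterfsd_pids():
    """Return glusterfsd pids via 'pgrep -x', which matches the exact
    process name and so never picks up glusterd or glusterfs."""
    try:
        out = subprocess.run(["pgrep", "-x", "glusterfsd"],
                             capture_output=True, text=True)
    except FileNotFoundError:
        return []          # pgrep not installed
    if out.returncode != 0:
        return []          # no glusterfsd process running
    # one pid per line, unlike pidof's space-separated output
    return [int(pid) for pid in out.stdout.splitlines()]
```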
Change-Id: I729e05c3f4cacf7bf826592da965a94a49bb6f33
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Problem:
On the latest platforms the pidof command returns
multiple pids as shown below:
27190 27078 26854
This is because it also returns the glusterd, glusterfsd
and glusterfs processes. The underlying problem is that
/usr/sbin/glusterd is a link to glusterfsd.
pidof now searches for the pattern in /proc/PID/cmdline,
/proc/PID/stat and finally /proc/PID/exe, so it matches
the realpath of /proc/<pid_of_glusterd>/exe as
/usr/sbin/glusterfsd, and the glusterd, glusterfs and
glusterfsd pids all end up in the output.
Fix:
Use pgrep instead of pidof to get the glusterfsd
pids, and change the split logic accordingly.
Change-Id: Ie215734387989f2d8cb19e4b4f7cddc73d2a5608
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Check if the xattrs of the given bricks are the same
Change-Id: Ib1ba010bfeafc132123a88a893017f870a989789
Signed-off-by: ubansal <ubansal@redhat.com>
Verifies whether the shd daemon is up and running on a particular node.
The method verifies whether the shd pid is present or not on the given
node. If present, as an additional verification, verifies that the
'self-heal daemon' for the node specified is not there in the get volume
status output
Change-Id: I4865dc5c493a72ed7334ea998d0a231f4f8c75c8
Signed-off-by: Pranav <prprakas@redhat.com>
Changing the distribute count to 4 for the volume type
distributed-replicated or distributed-dispersed. Earlier, with
distribute count 2, after remove-brick the dist-rep and dist-disp
volumes were converted to pure replicated or pure dispersed
volumes, which caused a "layout not complete" error because, with
the DHT pass-through feature, the layout is not set on bricks if
the volume type is pure replicated/pure dispersed on gluster
version 6.0.
Adding the distributed-arbiter volume type and code to
override its configuration as well.
Change-Id: Ic7a3404ed49d24f956de33f7bd5ca8ea61297e5b
Signed-off-by: sayaleeraut <saraut@redhat.com>
Test case verifies whether the gluster get-state shows the proper brick status in the output.
The test case checks the brick status when the brick is up and also after killing the brick process.
It also verifies whether the other bricks are up when a particular brick process is killed.
Change-Id: I9801249d25be2817104194bb0a8f6a16271d662a
Signed-off-by: Pranav <prprakas@redhat.com>
Test Steps:
1. Check the existence of '/usr/lib/firewalld/services/glusterfs.xml'
2. Validate the owner of this file as 'glusterfs-server'
3. Validate SELinux label context as 'system_u:object_r:lib_t:s0'
Change-Id: I55bfb3b51a9188e2088459eaf5304b8b73f2834a
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Description:
This test case creates a large file at mount point,
adds an extra brick and initiates rebalance. While
migration is in progress, it stops the rebalance
process and checks whether it has stopped.
Testcase Steps:
1. Create and start a volume.
2. Mount volume on client and create a large file.
3. Add bricks to the volume and check layout
4. Rename the file such that it hashes to a different
subvol.
5. Start rebalance on volume.
6. Stop rebalance on volume.
Change-Id: I7edd37a548467d6624ffe1efa64b0c1b56ff26ed
Co-authored-by: Kartik_Burmee <kburmee@redhat.com>
Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The TC was failing with "AssertionError: ('hash range is not
there %s', False)" even though the bricks were healed and the
directory was created on non-hashed bricks. This was due to the
conflict between the TC and the DHT library changes (added to
fix the issues caused by DHT pass-through functionality). The
code is now modified according to the library changes and hence
the TC works fine.
Change-Id: I501e7db89643822fbc711e631ceacda79e4c4ea4
Signed-off-by: sayaleeraut <saraut@redhat.com>
Problem:
item.next() is not supported in python 3.
Solution:
Add try except block to take care of both python 2
and python 3.
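The compatible pattern looks like this (a minimal sketch; the built-in next() works on both versions, so on python 3 the except branch is the one taken):

```python
items = iter([1, 2, 3])
try:
    first = items.next()   # python 2 iterator method, gone in python 3
except AttributeError:
    first = next(items)    # built-in, works on python 2 and python 3
```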
Change-Id: I4c88804e45eee2a2ace24a982447000027e6ca3c
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Problem:
`replace_brick_from_volume()` doesn't support
brick sharing, which creates a problem: when we
try to perform replace-brick with a large number
of volumes it is unable to get bricks and hence fails.
Solution:
Adding a boolean kwarg multi_vol and using
form_bricks_for_multivol() or form_bricks_list()
according to the value of multi_vol. The default
value of multi_vol is False, which only uses
form_bricks_list() as before the changes.
Blocks:
This patch currently blocks the below mentioned patch:
https://review.gluster.org/#/c/glusto-tests/+/19483/
Change-Id: I842a4ebea81e53e694b5b194294f1b941f47d380
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Problem:
There are two python2 to python3 incompatibilities
present in test_add_brick_when_quorum_not_met.py
and test_add_identical_brick_new_node.py.
In test_add_brick_when_quorum_not_met.py the testcase
fails with the below error:
> for node in range(num_of_nodes_to_bring_down, num_of_servers):
E TypeError: 'float' object cannot be interpreted as an integer
This is because a = 10 / 5 returns a float in python3
but returns an int in python2, as shown below:
Python 2.7.15 (default, Oct 15 2018, 15:26:09)
[GCC 8.2.1 20180801 (Red Hat 8.2.1-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 10/5
>>> type(a)
<type 'int'>
Python 3.7.3 (default, Mar 27 2019, 13:41:07)
[GCC 8.3.1 20190223 (Red Hat 8.3.1-2)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 10/5
>>> type(a)
<class 'float'>
In test_add_identical_brick_new_node.py the testcase
fails with the below error:
> add_bricks.append(string.replace(bricks_list[0],
self.servers[0], self.servers[1]))
E AttributeError: module 'string' has no attribute 'replace'
This is because string.replace() was removed in python3
and is replaced by str.replace().
Solution:
For the first issue we need to change
a = 10/5 to a = 10//5, as it behaves consistently
across both python versions.
For the second issue, adding a try except
block as shown below would suffice:
except AttributeError:
add_bricks.append(str.replace(bricks_list[0],
self.servers[0],
self.servers[1]))
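Both fixes can be seen in a few lines (the brick and server names below are made up for illustration):

```python
# Fix 1: floor division gives an int on both python versions
num_of_nodes_to_bring_down = 10 // 5

# Fix 2: fall back to str.replace when the python2 string-module
# function is unavailable (it was removed in python3)
bricks_list = ["server0:/bricks/brick0"]
servers = ["server0", "server1"]
try:
    import string
    new_brick = string.replace(bricks_list[0], servers[0], servers[1])
except AttributeError:
    new_brick = str.replace(bricks_list[0], servers[0], servers[1])
```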
Change-Id: I9ec325760b279032af3748101bd2bfc58589d57d
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Add a sleep after glusterd restart is run asynchronously
on the servers, to avoid 'another transaction in progress'
failures in the testcase.
Change-Id: I514c24813dc7c102b807a582ae2b0d19069e0d34
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
The method kills the given set of processes running on the specified
node. It takes process ids or process names, uses the kill command
to terminate the processes, and returns the status as a boolean value.
Change-Id: Ic6c316dac6b3496d34614c568115b0fa0f40d07d
Signed-off-by: Pranav <prprakas@redhat.com>
Problem:
`g.rpyc_get_connection()` has a limitation where it can't
convert python2 calls to python3 calls. Due to this a large
number of testcases fail when executed from a python2 machine
on a python3-only setup, or vice versa, with the below stack trace:
```
E ========= Remote Traceback (1) =========
E Traceback (most recent call last):
E File "/root/tmp.tL8Eqx7d8l/rpyc/core/protocol.py", line 323, in _dispatch_request
E res = self._HANDLERS[handler](self, *args)
E File "/root/tmp.tL8Eqx7d8l/rpyc/core/protocol.py", line 591, in _handle_inspect
E if hasattr(self._local_objects[id_pack], '____conn__'):
E File "/root/tmp.tL8Eqx7d8l/rpyc/lib/colls.py", line 110, in __getitem__
E return self._dict[key][0]
E KeyError: (b'rpyc.core.service.SlaveService', 94282642994712, 140067150858560)
```
Solution:
Write generic code which can run on both python2 and
python3.
Change-Id: I7783485a784ef4b57f626f77e6012d918fee6032
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The method executes the 'gluster get-state' command on the specified
node, verifies the glusterd state dump, reads it and returns the
content as a dictionary.
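gluster get-state writes an ini-style file with ":"-separated keys, so the parsing can be sketched with configparser (the sample content below is made up for illustration; note that configparser lower-cases keys by default):

```python
import configparser

sample_state = """[Global]
MYUUID: 4e0d-fake-uuid
op-version: 70000

[Volume1]
name: testvol
status: Started
"""

parser = configparser.ConfigParser(delimiters=(":",))
parser.read_string(sample_state)
# flatten the parsed sections into a plain dict-of-dicts
state = {section: dict(parser.items(section))
         for section in parser.sections()}
```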
Change-Id: I0356ccf740fd97d1930e9f09d6111304b14cd015
Signed-off-by: Pranav <prprakas@redhat.com>
Add wait_for_bricks_to_be_online in teardown after
glusterd is started in the test steps.
Change-Id: Id30a3d870c6ba7c77b0e79604521ec41fe624822
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Steps:
1.Checks replace-brick and data integrity post that
2.Checks replace-brick while IO's are in progress
Change-Id: Idfc801fde50967924696b2e909633b9ca95ac721
Signed-off-by: ubansal <ubansal@redhat.com>
Problem:
Line 135 is missing (), which leads to the below traceback
when the testcase fails:
```
Traceback (most recent call last):
File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
msg = self.format(record)
File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
return fmt.format(record)
File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
record.message = record.getMessage()
File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Logged from file test_volume_start_stop_while_rebalance_in_progress.py, line 135
```
Solution:
Adding the missing parentheses in line 135.
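This class of bug is easy to reproduce in isolation: without parentheses forming a tuple, the extra value reaches the logging call as a surplus argument with no placeholder left for it, and the deferred "%" formatting raises exactly this TypeError (the values below are made up):

```python
ret, err = 1, "rebalance failed"   # made-up values

# Buggy shape: log.error("status: %s" % ret, err) - logging later
# applies the leftover argument to the already-formatted message:
try:
    ("status: %s" % ret) % (err,)
except TypeError as exc:
    caught = str(exc)

# Fixed shape: the parentheses make (ret, err) a single tuple operand
fixed = "status: %s %s" % (ret, err)
```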
Change-Id: I318a5b838f01840afee5d4109645cc7dcd86c8fa
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Problem:
Currently the code supports both service and systemctl
commands, but it fails on the latest platforms with
the below error:
```
service glusterd reload
Redirecting to /bin/systemctl reload glusterd.service
Failed to reload glusterd.service: Job type reload is
not applicable for unit glusterd.service.
```
This is because the latest platforms use systemctl
instead of service to reload daemon processes:
```
systemctl daemon-reload
```
Solution:
The present code doesn't work properly because the
check is specific to only one platform. The fix is
to check for older platforms and run the service
command there, and run the systemctl command on all
other platforms.
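A hedged sketch of that platform check (the helper name and the version cut-off are assumptions for illustration; the real library inspects the installed distribution):

```python
def daemon_reload_cmd(distro_version="8"):
    """Return the command used to reload daemon definitions.
    Assumption: only older platforms (major version < 7 here) still
    use the service command; everything newer uses systemctl."""
    if int(distro_version.split(".")[0]) < 7:
        return "service glusterd reload"
    return "systemctl daemon-reload"
```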
Change-Id: I19b24652b96c4794553d3659eaf0301395929bca
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Problem:
`g.rpyc_get_connection()` has a limitation where it can't
convert python2 calls to python3 calls. Due to this a large
number of testcases fail when executed from a python2 machine
on a python3-only setup, or vice versa, with the below stack trace:
```
E ========= Remote Traceback (1) =========
E Traceback (most recent call last):
E File "/root/tmp.tL8Eqx7d8l/rpyc/core/protocol.py", line 323, in _dispatch_request
E res = self._HANDLERS[handler](self, *args)
E File "/root/tmp.tL8Eqx7d8l/rpyc/core/protocol.py", line 591, in _handle_inspect
E if hasattr(self._local_objects[id_pack], '____conn__'):
E File "/root/tmp.tL8Eqx7d8l/rpyc/lib/colls.py", line 110, in __getitem__
E return self._dict[key][0]
E KeyError: (b'rpyc.core.service.SlaveService', 94282642994712, 140067150858560)
```
Solution:
The solution here is to modify the code to not use
`g.rpyc_get_connection()`. The following changes are done
to accomplish it:
1)Remove code which uses g.rpyc_get_connection() and use generic
logic in functions:
a. do_bricks_exist_in_shd_volfile()
b. get_disk_usage()
c. mount_volume()
d. list_files()
e. append_string_to_file()
2)Create files which can be uploaded and executed on
clients/servers to avoid rpc calls in functions:
a. calculate_hash()
b. validate_files_in_dir()
3)Modify setup.py to push the below files to
`/usr/share/glustolibs/scripts/`:
a.compute_hash.py
b.walk_dir.py
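The upload-and-execute pattern in (2) and (3) avoids rpyc entirely: the script lives at a fixed path on the target and is run through the normal command channel. A stand-in sketch (the executor is faked here so the snippet is self-contained; glusto's real runner executes the command over ssh and its signature may differ):

```python
SCRIPT = "/usr/share/glustolibs/scripts/compute_hash.py"

def run_on_node(node, cmd, executor=None):
    """Run `cmd` on `node`, returning (retcode, stdout, stderr)."""
    if executor is None:
        # stand-in for the real remote runner
        executor = lambda c: (0, "540\n", "")
    return executor(cmd)

ret, out, _ = run_on_node("server1",
                          "/usr/bin/env python %s /mnt/dir" % SCRIPT)
dir_hash = int(out.strip()) if ret == 0 else None
```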
Change-Id: I00a81a88382bf3f8b366753eebdb2999260788ca
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
BZ#1702298 - Custom xattrs are not healed on newly added brick
Test Steps:
1) Create a volume.
2) Mount the volume using FUSE.
3) Create 100 directories on the mount point.
4) Set the xattr on the directories.
5) Add bricks to the volume and trigger rebalance.
6) Wait for rebalance to complete.
7) After rebalance completes, check if all the bricks have healed.
8) Check the xattr for dirs on the newly added bricks.
Change-Id: If83f65ea163ccf16f9024d6b3a867ba7b35773f0
Signed-off-by: sayaleeraut <saraut@redhat.com>
Add docleanup and docleanupclass to the baseclass; they call
the function fresh_setup_cleanup, which cleans up the nodes
to a fresh setup if the flag is set to true or whenever the
testcase fails.
Change-Id: I951ff59cc3959ede5580348b7f93b57683880a23
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Problem:
Testcase test_ec_version was failing with the
below traceback:
Traceback (most recent call last):
File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
msg = self.format(record)
File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
return fmt.format(record)
File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
record.message = record.getMessage()
File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
msg = msg % self.args
TypeError: %d format: a number is required, not str
Logged from file test_ec_version_healing_whenonebrickdown.py, line 233
This was due to a missing 's' in the log message on line 233.
Solution:
Add the missing s in the log message on line 233 as
shown below:
g.log.info('Brick %s is offline successfully', brick_b2_down)
Also renaming the file to clarify what the
testcase does.
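The TypeError is easy to reproduce on its own: '%d' insists on a number, while '%s' accepts anything (the brick name below is made up):

```python
brick_b2_down = "server2:/bricks/brick2"   # made-up brick id

# Buggy: the '%d' conversion rejects a string argument
try:
    "Brick %d is offline successfully" % brick_b2_down
except TypeError as exc:
    caught = str(exc)

# Fixed: '%s' formats any object
fixed = "Brick %s is offline successfully" % brick_b2_down
```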
Change-Id: I626fbe23dfaab0dd6d77c75329664a81a120c638
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Testcase steps: (file access)
- rename the file so that the hashed and cached are different
- make sure file can be accessed as long as cached is up
Fixes a library issue as well in find_new_hashed()
Change-Id: Id81264848d6470b9fe477b50290f5ecf917ceda3
Co-authored-by: Susant Palai <spalai@redhat.com>
Signed-off-by: Susant Palai <spalai@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Case 1:
1.mkdir srcdir and dstdir(such that srcdir and
dstdir hashes to different subvols)
2.Bring down srcdir hashed subvol
3.mv srcdir dstdir (should fail)
Case 2:
1.mkdir srcdir dstdir
2.Bring down srcdir hashed
3.Bring down dstdir hashed
4.mv srcdir dstdir (should fail)
Case 3:
1.mkdir srcdir dstdir
2.Bring down dstdir hashed subvol
3.mv srcdir dstdir (should fail)
Additional library fix details:
Also fixing library function to work with distributed-disperse volume
by removing `if oldhashed._host != brickdir._host:` as the same node
can host multiple bricks of the same volume.
Change-Id: Iaa472d1eb304b547bdec7a8e6b62c1df1a0ce591
Co-authored-by: Susant Palai <spalai@redhat.com>
Signed-off-by: Susant Palai <spalai@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Changes done in this patch include:
1. reduced runtime of test by removing multiple volume configs
2. added extra validation for node already peer detached
3. added test steps to cover peer detach when volume is offline
Change-Id: I80413594e90b59dc63b7f4f52e6e348ddb7a9fa0
Signed-off-by: nchilaka <nchilaka@redhat.com>
Earlier, brick creation was carried out based on the difference of
used and unused bricks. This was a bottleneck for implementing brick
multiplexing testcases; moreover, we couldn't create more than 10
volumes. This library implements a way to create bricks on top of the
existing servers in a cyclic way, so as to have an equal number of
bricks on each brick partition on each server.
Added a parameter to the setup_volume function: if the multi_vol flag
is set, it will fetch bricks in a cyclic manner (using
form_bricks_for_multi_vol); otherwise it will fetch them using the
old mechanism.
Added a bulk_volume_creation function, to create the multiple volumes
the user has specified.
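The cyclic allocation can be sketched as follows (the signature is illustrative only; the real form_bricks_for_multi_vol takes additional glusto arguments such as the mnode and servers_info):

```python
from itertools import cycle

def form_bricks_for_multi_vol(servers, num_bricks, volname):
    """Place bricks on servers round-robin so each server ends up
    with an (almost) equal number of bricks."""
    node = cycle(servers)
    return ["%s:/bricks/%s_brick%d" % (next(node), volname, i)
            for i in range(num_bricks)]

bricks = form_bricks_for_multi_vol(["s1", "s2", "s3"], 6, "vol0")
```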
Change-Id: I2103ec6ce2be4e091e0a96b18220d5e3502284a0
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
In test_snap_delete_original_volume.py, add a sleep after
cloning a volume, before starting the volume. Changed IO
to write to one mount point, as otherwise issues were seen
with validate io. Removed the baseclass cleanup because the
original volume is already cleaned up in the testcase.
Change-Id: I7bf9686384e238e1afe8491013a3058865343eee
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Testcase steps:
1.Create directory on mount point and write files/dirs
2.Create another set of files (1K files)
3.While creation of files/dirs are in progress Kill one brick
4.Remove the contents of the killed brick(simulating disk replacement)
5.When the IO's are still in progress, restart glusterd on the nodes
where we simulated disk replacement to bring back bricks online
6.Start volume heal
7.Wait for IO's to complete
8.Verify whether the files are self-healed
9.Calculate arequals of the mount point and all the bricks
CentOS-CI failure due to the following bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1807384
Change-Id: I9e9f58a16a7950fd7d6493cbb5c4f5483892851e
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to commit message of patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/24140/
Change-Id: I5319ce497ca3359e0e7dbd9ece481bada1ee2205
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Testcase steps:
1.Create a single brick volume
2.Add some files and directories
3.Get arequal from mountpoint
4.Add a brick such that this brick makes
the volume a 1x3 replica volume
5.Start heal full
6.Make sure heal is completed
7.Get arequals from all bricks and
compare with arequal from mountpoint
Change-Id: I4ef140b326b3d9edcbd5b1f0b7d9c43f38ccfe66
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
BZ#1257394 - Provide meaningful errors on peer probe and peer detach
Test Steps:
1 check the current peer status
2 detach one of the valid nodes which is already part of cluster
3 stop glusterd on that node
4 try to peer probe the above node, which must fail with
Transport End point error
5 Recheck the test using hostname, expected to see same result
6 start glusterd on that node
7 halt/reboot the node
8 try to peer probe the halted node, which must fail again.
9 The only error accepted is as below
"peer probe: failed: Probe returned with Transport endpoint is not
connected"
10 Check peer status and make sure no other nodes in peer reject state
Change-Id: Ic0a083d5cb150275e927723d960e89fe1a5528fb
Signed-off-by: nchilaka <nchilaka@redhat.com>
Add extra time for beaker machines to validate
the testcases.
For test_rebalance_spurious.py, added cleanup in
teardown because the fix-layout patch is still
not merged.
Change-Id: I7ee8324ff136bbdb74600b730b4b802d86116427
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Problem:
Due to patch [1], the framework broke and
was failing for all the testcases with the below
backtrace:
```
> mount_dict['server'] = cls.snode
E AttributeError: type object 'VolumeAccessibilityTests_cplex_replicated_glusterf'
has no attribute 'snode'
```
Solution:
This was because mnode_slave was accidentally written as snode.
Also, cls.geo_rep_info wasn't a safe condition operator, hence
it was changed to cls.slaves.
Testcase results with patch:
test_cvt.py::TestGlusterHealSanity_cplex_replicated_glusterfs::test_self_heal_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed-dispersed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterHealSanity_cplex_dispersed_glusterfs::test_self_heal_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed_nfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_replicated_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_distributed-dispersed_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestGlusterReplaceBrickSanity_cplex_distributed-replicated_glusterfs::test_replace_brick_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_distributed_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_dispersed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed-dispersed_glusterfs::test_shrinking_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_dispersed_nfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed_glusterfs::test_shrinking_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_dispersed_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_distributed-replicated_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_dispersed_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_distributed-replicated_glusterfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_replicated_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed-dispersed_nfs::test_shrinking_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed_nfs::test_shrinking_volume_when_io_in_progress PASSED
links:
[1] https://review.gluster.org/#/c/glusto-tests/+/24029/
Change-Id: If7b329e232ab61df9f9d38f5491c58693336dd48
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Adding the code for the following:
1.Adding function setup_master_and_slave_volumes() to geo_rep_libs.
2.Adding variables for master_mounts, slave_mounts, master_volume
and slave_volume to gluster_base_class.py
3.Adding a class method
setup_and_mount_geo_rep_master_and_slave_volumes to
gluster_base_class.py.
Change-Id: Ic8ae1cb1c8b5719d4774996c3e9e978551414b44
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Testcase steps:
1. Create a volume and mount it.
2. Create a directory on mount and check whether all the bricks have
the same gfid.
3. Now delete the gfid attr from all but one backend brick.
4. Do lookup from the mount.
5. Check whether all the bricks have the same gfid assigned.
Failing in CentOS-CI due to the following bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1696075
Change-Id: I4eebc247b15c488cfa24599e0afec2fa5671656f
Co-authored-by: Anees Patel <anepatel@redhat.com>
Signed-off-by: Anees Patel <anepatel@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The new function volume_type() will check if the volume under test
is of pure Replicated/Disperse/Arbiter type and return the result
as a string.
The functions run_layout_tests() and validate_files_in_dir() have
been modified to check the Gluster version and volume type in order
to fix the issues caused by DHT pass-through.
Change-Id: Ie7ad259883907c1fdc0b54e6743636fdab793272
Signed-off-by: sayaleeraut <saraut@redhat.com>
The issue earlier was that whenever a TC called the _get_layout()
and _is_complete() methods, it failed on Replicate/Arbiter/Disperse
volume types because of DHT pass-through.
The functions get_layout() and is_complete() have been modified to
check for the Gluster version and volume type before running, in
order to fix the issue.
About DHT pass-through : Please refer to-
https://github.com/gluster/glusterfs/issues/405
for the details.
Change-Id: I0b0dc0ac3cbdef070a20854fbc89442fee1da8b6
Signed-off-by: sayaleeraut <saraut@redhat.com>
Problem:
The current timeout for reboot given in
test_heal_full_node_reboot is about 350 seconds
which works with most hardware configurations.
However, when the reboot is done on slower systems
which take time to come up, this logic fails, causing
this testcase and the preceding testcases to fail.
Solution:
Change the timeout for reboot from 350 to 700. This
wouldn't affect the testcase's performance on good
hardware configurations, as the timeout is a max
value; if the node comes up before it expires, the
wait exits anyway.
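The max-value behaviour of such a timeout can be sketched as a simple poll loop (the helper below is illustrative, not the library's): raising the cap costs nothing on fast hardware because the loop returns on the first successful check.

```python
import time

def wait_for_node(is_up, timeout=700, interval=5):
    """Poll until is_up() returns True or `timeout` seconds elapse.
    Nodes that are already up exit on the first check."""
    end = time.time() + timeout
    while time.time() < end:
        if is_up():
            return True
        time.sleep(interval)
    return False
```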
Change-Id: I60d05236e8b08ba7d0fec29657a93f2ae53404d4
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to commit message of patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/24140/
Change-Id: I25d30f7bdb20f0825709c4c852140e1906870ce7
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to commit message of patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/24140/
Change-Id: Ib357d5690bb28131d788073b80a088647167fe80
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>