| Commit message | Author | Age | Files | Lines |
| |
- `str.rsplit` doesn't accept named args in py2
- Removed named arg to make it compatible with both versions
Change-Id: Iba287ef4c98ebcbafe55f2166c99aef0c20ed9aa
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
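The incompatibility can be sketched in a few lines (the brick string and helper name below are illustrative, not the library's API):

```python
def split_brick(brick):
    """Split a 'host:/brick/path' string into host and path.

    Passing sep and maxsplit positionally works on both Python 2
    and 3; the keyword form brick.rsplit(sep=":", maxsplit=1)
    raises TypeError on Python 2 ("rsplit() takes no keyword
    arguments").
    """
    return brick.rsplit(":", 1)
```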
| |
Steps:
1. Create a volume and mount it
2. Create deep directory and file in each directory
3. Rename the file
4. Check if brickpath contains old files
5. Delete all data
6. Check .glusterfs/indices/xattrop is empty
7. Check if brickpath is empty
Change-Id: I04e50ef94379daa344be1ae1d19cf2d66f8f460b
Signed-off-by: ubansal <ubansal@redhat.com>
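Step 6 above can be sketched as a small check (helper name and listing format are illustrative; the real test inspects `ls` output of `.glusterfs/indices/xattrop` on each brick):

```python
def xattrop_is_empty(listing):
    """Decide whether pending-heal entries remain, given the `ls`
    output of .glusterfs/indices/xattrop; the 'xattrop-' base file
    itself does not count as a pending entry."""
    entries = [e for e in listing.split() if not e.startswith('xattrop-')]
    return len(entries) == 0
```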
| |
Steps:
1. Create a volume and mount it
2. Disable heal and cluster-quorum-count
3. Bring down one data and arbiter brick from one
subvol
4. Write IO and validate it
5. Bring up bricks
6. Bring down another data brick and arbiter brick
from the same subvol
7. Write IO and validate it
8. Bring up bricks
9. Check if split-brain is created
10. Write IO -> should fail
11. Enable heal and cluster-quorum-count
12. Write IO -> should fail
Change-Id: I229b58c1bcd70dcd87d35dc410e12f51b032b9c4
Signed-off-by: ubansal <ubansal@redhat.com>
| |
Adding a parameter `interval_check` will ease use and help
reduce the waiting time for heal in some scenarios.
By default, `ls -l <brickpath> | grep -ve "xattrop-" | wc -l`
is checked every 2 minutes.
Problem: Suppose 100 files need to be healed; after 2 minutes
only 2-3 files may still need healing, yet with the existing
approach the next check waits the whole 2 minutes even though
the files may have healed within 10 seconds of the previous
check.
Solution: Giving the user an option to choose the interval at
which to check for files that need to be healed reduces the
unnecessary waiting time.
It won't affect existing cases as interval_check defaults
to 120.
Change-Id: Ib288c75549644b6f6c94b5288f1c07cce7933915
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
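The behaviour described above can be sketched as a polling loop (a hypothetical helper mirroring the `interval_check` parameter, not the actual glustolibs function):

```python
import time

def monitor_heal_completion(pending_entries, timeout=600, interval_check=120):
    """Poll until no entries remain to be healed.

    `pending_entries` is a callable returning the number of entries
    still pending (e.g. from counting xattrop index files on each
    brick). Polls every `interval_check` seconds instead of a fixed
    2 minutes. Returns (healed, number_of_checks_performed).
    """
    checks = 0
    deadline = time.time() + timeout
    while time.time() < deadline:
        checks += 1
        if pending_entries() == 0:
            return True, checks
        time.sleep(interval_check)
    return False, checks
```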
| |
Test Steps:
1. Create a distributed-replicated (3X3) / distributed-arbiter
(3X(2+1)) volume and mount it on one client
2. Kill 3 bricks corresponding to the 1st subvol
3. Unmount and remount the volume on the same client
4. Create deep dir from mount point 'dir1/subdir1/deepdir1'
5. Create files under dir1/subdir1/deepdir1; touch <filename>
6. Now bring all sub-vols up by volume start force
7. Validate backend bricks for dir creation, the subvol which is
offline will have no dirs created, whereas other subvols will have
dirs created from step 4
8. Trigger heal from client by '#find . | xargs stat'
9. Verify that the directory entries are created on all back-end bricks
10. Create new dir (dir2) on location dir1/subdir1/deepdir1
11. Trigger rebalance and wait for the completion
12. Check backend bricks for all entries of dirs
Change-Id: I4d8f39e69c84c28ec238ea73935cd7ca0288bffc
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
| |
Problem:
In most of the testcases, redundant logging increases the
completion time of the whole suite.
Solution:
The BVT test suite has 184 g.log.info messages, and more than
half of them are redundant. Removed logs wherever they are
not required.
Added the missing get_super_method for setUp and tearDown
in one testcase, and modified an increment in the test.
Change-Id: I19e4462f2565906710c2be117bc7c16c121ddd32
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
| |
Test Steps:
1) Create 1x3 volume and fuse mount the volume
2) On mount created a dir dir1
3) Pkill glusterfsd on node n1 (b2 on node2 and b3 on node3 up)
4) touch f{1..10} on the mountpoint
5) b2 and b3 xattrs would be blaming b1 as files are created while
b1 is down
6) Reset the b3 xattrs to NOT blame b1 by using setfattr
7) Now pkill glusterfsd of b2 on node2
8) Restart glusterd on node1 to bring up b1
9) Now bricks: b1 online, b2 down, b3 online
10) touch x{1..10} under dir1 itself
11) Again reset the xattrs on node3 for b3 so that it doesn't
blame b2, as done for b1 in step 6
12) Do restart glusterd on node2 hosting b2 to bring all bricks online
13) Check for heal info, split-brain and arequal for the bricks
Change-Id: Ieea875dd7243c7f8d2c6959aebde220508134d7a
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
| |
Add logic to do ls -l before and after.
Add logic to set all log-levels to debug.
Change-Id: I512e3b229fe9e2126f6c596fdc031c00a25fbe0b
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
| |
Description:
Earlier, on running a test script which calls get_volume_type(),
the test script displayed below failure message in glusto logs:
"INFO (get_volume_type) Failed to find brick-path
10.70.47.44:/bricks/brick2/testvol_distributed_brick0// for volume
testvol_distributed"
- even though the brick-path was present in the volume.
Checked by directly calling the function as well:
>>> from glusto.core import Glusto as g
>>> from glustolibs.gluster.volume_libs import get_volume_type
>>> ret = get_volume_type('10.70.47.44:/bricks/brick2/vol1-b1/')
>>> print ret
Distribute
>>> ret = get_volume_type('10.70.47.44:/bricks/brick2/vol1-b1//')
>>> print ret
None
Observed that the issue occurs if an extra "/" is present at the end
of the brickdir_path(str) parameter passed to the function. Hence
have added a check for the same.
Change-Id: I01fe2d05b7f206d7767c83e57e714053358dc42c
Signed-off-by: sayaleeraut <saraut@redhat.com>
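The root cause reduces to path normalization; a minimal sketch of the added check (helper name is illustrative, the actual fix lives inside get_volume_type()):

```python
def normalize_brickdir_path(brickdir_path):
    """Strip trailing '/' characters from a 'host:/brick/path/'
    string so lookups against volume info output match, since
    gluster records brick paths without the trailing slash."""
    host, sep, path = brickdir_path.partition(':')
    if path.endswith('/') and path != '/':
        path = path.rstrip('/')
    return host + sep + path
```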
| |
Steps:
1. Create a volume and mount it
2. Start writing and reading data on a file
3. Bring down 1 brick
4. Validate read and write to file
5. Bring up brick and start healing
6. Monitor healing and completion
7. Bring down 2nd brick
8. Read and write to same file
9. Bring up brick and start healing
10. Monitor healing and completion
11. Check split-brain
Change-Id: Ib03a1ad7ee626337904b084e85eee38750fea141
Signed-off-by: ubansal <ubansal@redhat.com>
| |
- Validate `heal info` returns before timeout with IO
- Validate `heal info` returns before timeout with IO and brick down
- Validate data heal on file append in AFR, arbiter
- Validate entry heal on file append in AFR, arbiter
Change-Id: I803b931cd82d97b5c20bd23cd5670cb9e6f04176
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
| |
Problem:
In most of the testcases, redundant logging increases the
completion time of the whole suite.
Solution:
Currently there are 100+ g.log.info statements in the
authentication suite and half of them are redundant.
Removed the g.log.info statements wherever they are not
required. After the changes around 50 g.log.info statements
remain; the statements were removed not to reduce the
number of lines but to improve the whole suite.
Modified a few line indents as well and added teardown
for the missing files.
Note: Will be submitting changes for each component separately
Change-Id: I63973e115dd5dbbc7fc9462978397e7915181265
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
| |
Cluster authentication between the nodes is done after
setting a password for the user 'hacluster' on all the nodes
Change-Id: Ic8b8838ef9490d2776172467c177d61cb615720f
Signed-off-by: Pranav <prprakas@redhat.com>
| |
Problem:
Patch [1], which was sent for issue #24, causes a large
number of testcases to fail or get stuck in the latest
DHT run.
Solution:
Make changes so that the getfattr command sends back the
output in text wherever needed.
Links:
[1] https://review.gluster.org/#/c/glusto-tests/+/24841/
Change-Id: I6390e38130b0699ceae652dee8c3b2db2ef3f379
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
| |
Problem:
Testcase test_volume_start_stop_while_rebalance_is_in_progress
throws the below traceback when run:
```
Traceback (most recent call last):
File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
msg = self.format(record)
File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
return fmt.format(record)
File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
record.message = record.getMessage()
File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Logged from file test_volume_start_stop_while_rebalance_in_progress.py, line 135
```
This is because g.log.error() was used instead of
self.assertTrue().
Solution:
Changing to self.assertTrue().
Change-Id: If926eb834c0128a4e507da9fdd805916196432cb
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
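The underlying error is ordinary '%'-style formatting with mismatched arguments, which the logging module surfaces at emit time; a minimal reproduction:

```python
def format_ok(msg, args):
    """Return True if msg % args succeeds. Calls such as
    g.log.error(msg, *args) perform exactly this substitution when
    the log record is emitted, so passing extra arguments without
    matching %s placeholders raises "not all arguments converted
    during string formatting"."""
    try:
        msg % args
        return True
    except TypeError:
        return False
```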
| |
Change-Id: I19e7e5e4338fce0d77e42dae716cc5eb5f814a17
Signed-off-by: Manisha Saini <msaini@redhat.com>
| |
1. Create volume and create files/dirs from mount point
2. With IO in progress execute reset-brick start
3. Now format the disk from back-end, using rm -rf <brick path>
4. Execute reset-brick commit and check that the brick is online.
5. Issue volume heal using "gluster vol heal <volname> full"
6. Check arequal for all bricks to verify all backend bricks
including the reset brick have the same data
Change-Id: I06b93d79200decb25f863e7a3f72fc8e8b1c4ab4
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
| |
Test Steps:
1. Create a pure-ec volume (say 1x(4+2))
2. Mount volume on two clients
3. Create some files and dirs from both mnts
4. Add bricks, in this case (4+2), i.e. 6 bricks
5. Create a new dir(common_dir) and in that directory create a distinct
directory(using hostname as dirname) for each client and pump IOs
from the clients(dd)
6. While IOs are in progress replace any of the bricks
7. Check for errors if any collected after step 6
Change-Id: I3125fc5906b5d5e0bc40477e1ed88825f53fa758
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
| |
TCs were failing due to a timeout issue;
increased the rebalance timeout from 900 to 1800
Change-Id: I726217a21ebbde6391660dd3c9dc096cc9ca6bb4
Signed-off-by: ubansal <ubansal@redhat.com>
| |
Change-Id: I813f3e78ad8b0b79940635df6721e34e6bc93f34
Signed-off-by: Manisha Saini <msaini@redhat.com>
| |
Change-Id: I62ff8409a2170becdebdaf8274a1032e63db40ea
Signed-off-by: Manisha Saini <msaini@redhat.com>
| |
Currently the get_fattr() function returns the xattr value in
hex encoding. Adding the option to specify other encoding
types, i.e. text and base64, with hex remaining the default
encoding format.
Reason -
When the xattrs are custom set through mountpoint, it becomes
easier to test if the value is correct by using the "text"
encoding.
Example -
>>> ret = get_fattr(host, fqpath, fattr)
>>> print ret
0x414243
>>> ret = get_fattr(host, fqpath, fattr, encode="text")
>>> print ret
"ABC"
The value "ABC" is easily readable as opposed to 0x414243 when
performing a test.
Change-Id: Ie2377b924816ebab0a2af116d82600e01f03d61f
Signed-off-by: sayaleeraut <saraut@redhat.com>
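A sketch of how the encoding option maps onto the underlying getfattr command (getfattr's `-e` flag accepts text, hex and base64; the helper name is illustrative, not the library's internals):

```python
def build_getfattr_cmd(fqpath, fattr, encode="hex"):
    """Build a getfattr command with a selectable value encoding,
    hex remaining the default as described above."""
    if encode not in ("hex", "text", "base64"):
        raise ValueError("unsupported encoding: %s" % encode)
    return ("getfattr --absolute-names -e '%s' -n '%s' %s"
            % (encode, fattr, fqpath))
```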
| |
Problem:
Some testcases fail with UnicodeDecodeError when
the framework is run using python3. This happens because
the get_fattr() command returns non-unicode output,
which leads the data.decode() used in subprocess.Popen
to fail. This isn't the case in python2 as it doesn't
bother about encoding and dumps whatever the output is
back to the management node.
```
gfid = get_fattr(brick_tuple[0], brick_path + '/' + direc,
'trusted.gfid')
/root/glusto-tests/tests/functional/dht/test_dht_create_dir.py:127:
/usr/local/lib/python3.8/site-packages/glustolibs_gluster-0.22-py3.8.egg/glustolibs/gluster/glusterfile.py:113: in get_fattr
rcode, rout, rerr = g.run(host, command)
/usr/local/lib/python3.8/site-packages/glusto-0.72-py3.8.egg/glusto/connectible.py:132: in run
stdout, stderr = proc.communicate()
/usr/lib64/python3.8/subprocess.py:1024: in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
/usr/lib64/python3.8/subprocess.py:1904: in _communicate
stdout = self._translate_newlines(stdout,
self = <subprocess.Popen object at 0x7f22b4e2f490>, data = b'\xber\t\nO\xebO\xee\xa4\x9c\xc4L\xac\x1cj\xd5',
encoding = 'UTF-8', errors = 'strict'
def _translate_newlines(self, data, encoding, errors):
data = data.decode(encoding, errors)
E UnicodeDecodeError: 'utf-8' codec can't decode byte 0xbe in position 0: invalid start byte
/usr/lib64/python3.8/subprocess.py:901: UnicodeDecodeError
```
Solution:
Change get_fattr() command to return xattr value
in hex to avoid UnicodeDecodeError error from
Popen.
Fixes: https://github.com/gluster/glusto-tests/issues/24
Change-Id: I8c4786c882adf6079404b97eca2c399535db068f
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
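The failure and the hex workaround can be reproduced in isolation (Python 3; the helper is illustrative):

```python
def safe_xattr_repr(raw_bytes):
    """Return an ASCII-safe representation of a raw xattr value.

    Binary values such as trusted.gfid are not valid UTF-8, which
    is what makes Popen's text-mode decode raise UnicodeDecodeError;
    hex output is plain ASCII and always decodes cleanly.
    """
    try:
        return raw_bytes.decode('utf-8')
    except UnicodeDecodeError:
        return '0x' + raw_bytes.hex()
```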
| |
Steps:
1. Create a volume and mount it
2. Create a directory, say d1
3. Create deep directories and files in d1
4. Bring down redundant bricks
5. Delete d1
6. Create d1 and the same data again
7. Bring bricks up
8. Monitor heal
9. Verify split-brain
Change-Id: I778fab6bf6d9f81fca79fe18285073e1f7ccc7e7
Signed-off-by: ubansal <ubansal@redhat.com>
| |
Change-Id: Ic0473b25b989ea193dc79dd96edcf53f7661a056
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
| |
Problem:
There have been a couple of instances where we have received
issues which don't have a clear description,
making it difficult for glusto-tests developers to fix them.
One such example is [1].
Solution:
Adding an ISSUE_TEMPLATE for github issues
so that the reporter of the issue follows
a given format.
Links:
[1] https://github.com/gluster/glusto-tests/issues/19
Fixes: https://github.com/gluster/glusto-tests/issues/29
Change-Id: I0d3265ccc373a919d5d4fc7cc8df283f8306c4a5
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
| |
Problem:
Test script test_alert_time_out currently fails
2 out of 6 times when executed on the same setup.
This is due to the log files not having the 120004 A
alert message. This issue is only observed on a
distributed volume type mounted over the fuse
protocol.
Solution:
There is no permanent solution to this problem,
as even if we increase the sleep to 20 seconds there
is still a chance that it might fail. The optimal
sleep time, where it failed only 5 times in 15
attempts, is 6 seconds. Hence changing the sleep time
to 6 seconds.
Change-Id: I9e9bd41321e24f502d90c3c34edce9113133755e
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
| |
The assertIsNotNone is missing the param.
Change-Id: Iddff9b203672b2edf702ada624bfac1892641712
Signed-off-by: Pranav <prprakas@redhat.com>
| |
Adding code to get the dir tree and dump all
xattrs in hex for Bug 1810901 before remove-brick;
also adding logic to set the log-level to debug.
Change-Id: I9c9c970c4de7d313832f6f189cdca8428a073b1e
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
| |
Integrate the changes made in library to the test
Change-Id: I9bf8c3f1f732132170a96405a4a12839463a2eaa
Signed-off-by: Pranav <prprakas@redhat.com>
| |
Description: Checks that there is no data loss when remove-brick
operation is stopped and then new bricks are added to
the volume.
Steps:
1) Create a volume.
2) Mount the volume using FUSE.
3) Create files and dirs on the mount-point.
4) Calculate the arequal-checksum on the mount-point.
5) Start remove-brick operation on the volume.
6) While migration is in progress, stop the remove-brick
operation.
7) Add-bricks to the volume and trigger rebalance.
8) Wait for rebalance to complete.
9) Calculate the arequal-checksum on the mount-point.
Change-Id: I96a7311f5acd0ae19b17d7b7c7da4d3899cdef77
Signed-off-by: sayaleeraut <saraut@redhat.com>
| |
Steps:
- Create a volume and mount it
- disable metadata,data,entry heal
- Create files and take arequal of mount point
- Bring down redundant bricks
- Append data and create hardlinks
- Bring up bricks
- Check healing and split-brain
- Bring down redundant bricks
- Truncate data
- Check file and hardlink stat match
- Bring up bricks
Change-Id: I9b26f2fb26d72b71abd63a25ef8d9173f32997d4
Signed-off-by: ubansal <ubansal@redhat.com>
| |
Problem:
Testcases test_mount_snap_delete and test_restore_online_vol
were failing in the latest runs with the below traceback
```
Traceback (most recent call last):
File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 1246, in <module>
rc = args.func(args)
File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 374, in create_files
base_file_name, file_types)
File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 341, in _create_files
ret = pool.map(lambda file_tuple: _create_file(*file_tuple), files)
File "/usr/lib64/python3.6/multiprocessing/pool.py", line 266, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/usr/lib64/python3.6/multiprocessing/pool.py", line 644, in get
raise self._value
File "/usr/lib64/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/usr/lib64/python3.6/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 341, in <lambda>
ret = pool.map(lambda file_tuple: _create_file(*file_tuple), files)
File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 270, in _create_file
with open(file_abs_path, "w+") as new_file:
FileExistsError: [Errno 17] File exists: '/mnt/testvol_distributed_glusterfs/file1.txt'
```
This was because I/O logic was trying to create 2 files with the same
name from 2 clients.
Fix:
Modify logic to use counters to create files with different names.
Change-Id: I2896736d28f6bd17435f941088fd634347e3f4fd
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
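The counter-based fix can be sketched as (helper name and name format are illustrative):

```python
def unique_file_names(base_file_name, client_id, start, count):
    """Generate per-client, counter-based file names so that two
    clients writing to the same mount never attempt to create the
    same path, avoiding the FileExistsError above."""
    return ['%s_client%d_%d.txt' % (base_file_name, client_id, i)
            for i in range(start, start + count)]
```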
| |
Steps:
- Create a volume and mount it
- Bring bricks offline
- Write 50k files
- Bring bricks online
- Monitor heal completion
- Check for split-brain
Change-Id: I40739effdfa1c1068fa0628467154b9a667161a3
Signed-off-by: ubansal <ubansal@redhat.com>
| |
Adding code to get the dir tree and dump all
xattrs for Bug 1810901.
Change-Id: Ia59dcd2623e845066e31037c96a64249efa074c2
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
| |
Steps:
- Create, start and mount an arbiter volume in two clients
- Create two dirs, fill IO in first dir and take note of arequal
- Start a continuous IO from second directory
- Convert arbiter to x2 replicated volume (remove brick)
- Convert x2 replicated to x3 replicated volume (add brick)
- Wait for ~5 min for vol file to be updated on all clients
- Enable client side heal options and issue volume heal
- Validate heal completes with no errors and arequal of first dir
matches against initial checksum
Change-Id: I291acf892b72bc8a05e76d0cffde44d517d05f06
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
| |
Steps:
- Create and mount a replicated volume
- Kill one of the bricks and write IO from mount point
- Verify `gluster volume heal <volname> info healed` and `gluster
volume heal <volname> info heal-failed` command results in error
- Validate `gluster volume help` doesn't list `healed` and
`heal-failed` commands
Change-Id: Ie1c3db12cdfbd54914e61f812cbdac382c9c723e
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
| |
Problem:
NFS-Ganesha Tests inherits 'NfsGaneshaClusterSetupClass' whereas
the other tests inherits 'GlusterBaseClass'. This causes a cyclic
dependency when trying to run other modules with Nfs-Ganesha.
Fix:
1. Move the Nfs-Ganesha dependencies to GlusterBaseClass
2. Modify the Nfs-Ganesha tests to inherit from GlusterBaseClass
3. Remove setup_nfs_ganesha method call from existing Ganesha tests
as its invoked by default from GlusterBaseClass.SetUpClass
Change-Id: I1e382fdb2b29585c097dfd0fea0b45edafb6442b
Signed-off-by: Pranav <prprakas@redhat.com>
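The restructuring can be sketched as follows (class and method bodies are hypothetical; only the inheritance shape mirrors the fix, with Ganesha setup hooked into the common base class):

```python
class GlusterBaseClass(object):
    """Common base: Nfs-Ganesha setup now happens here, so Ganesha
    tests no longer need a separate, cyclically-dependent base."""
    enable_nfs_ganesha = False
    events = []  # records setup order for illustration

    @classmethod
    def setUpClass(cls):
        cls.events.append('base-setup')
        if cls.enable_nfs_ganesha:
            cls.setup_nfs_ganesha()

    @classmethod
    def setup_nfs_ganesha(cls):
        # placeholder for cluster setup invoked from the base class
        cls.events.append('ganesha-setup')

class SomeGaneshaTest(GlusterBaseClass):
    # Ganesha tests now inherit GlusterBaseClass and just opt in.
    enable_nfs_ganesha = True
```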
| |
Add method run_linux_untar() to io/utils.py
which downloads linux-5.4.54.tar.xz and
untar it on specific dirs of mount point or
by default on mount point.
Change-Id: Id00ea50b4d7fb7c360150aeaac65baec5612e589
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
| |
Change-Id: I16c5f070d807673662e5ac3583aace06873a9c14
Signed-off-by: Manisha Saini <msaini@redhat.com>
| |
Description:
Sos must be able to capture the required logs in the sosreport,
including gluster logs, without compromising the integrity of
Gluster, e.g. by deleting socket files
Change-Id: Ifec57778ff5d1fc0ceaa3ecf94a9851244076d2b
Signed-off-by: nchilaka <nchilaka@redhat.com>
| |
Problem: Test is failing with the below traceback
when run with python3 as default.
`
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: a bytes-like object is required, not 'str'
`
Solution:
Added ''.encode() which will fix the issue when run
using both python2 and python3.
Added a check for the core file on the client node.
Change-Id: I8f800f5fad97c3b7591db79ea51203e5293a1f69
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
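A minimal sketch of the ''.encode() pattern (the helper is illustrative):

```python
def to_bytes(payload):
    """Byte-oriented APIs (sockets, pipes) raise "a bytes-like
    object is required, not 'str'" on Python 3 when handed text;
    encoding first works on both versions, since on Python 2 str
    is already bytes."""
    if isinstance(payload, bytes):
        return payload
    return payload.encode('utf-8')
```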
| |
- Remove unnecessary disablement of client side heal options
- Check if client side heal options are disabled by default
- Test data heal by default method
- Explicit data heal by calling self heal command
Change-Id: I3be9001fc1cf124a4cf5a290cee985e166c0b685
Signed-off-by: nchilaka <nchilaka@redhat.com>
| |
Description : Check that all directories are read and listed while
rebalance is still in progress.
Steps :
1) Create a volume.
2) Mount the volume using FUSE.
3) Create a dir "master" on mount-point.
4) Create 8000 empty dirs (dir1 to dir8000) inside dir "master".
5) Now inside a few dirs (e.g. dir1 to dir10), create deep dirs
and inside every dir, create 50 files.
6) Collect the number of dirs present on /mnt/<volname>/master
7) Change the rebalance throttle to lazy.
8) Add-brick to the volume (at least 3 replica sets.)
9) Start rebalance using "force" option on the volume.
10) List the directories on dir "master".
Change-Id: I4d04b3e2be93b5c25b5ed70516bb99d99fb1fb8a
Signed-off-by: sayaleeraut <saraut@redhat.com>
| |
Problem:
When run for nightly gluster builds dht cases fails
with below traceback:
```
def get_gluster_version(host):
"""Checks the gluster version on the nodes
Args:
host(str): IP of the host whose gluster version has to be checked.
Returns:
(float): The gluster version value.
"""
command = 'gluster --version'
_, out, _ = g.run(host, command)
g.log.info("The Gluster verion of the cluster under test is %s",
out)
> return float(out.split(' ')[1])
E ValueError: could not convert string to float: '20200719.9334a8d\nRepository
```
This is due to nightly builds returning the below output:
```
$ gluster --version
glusterfs 20200708.cdf01cc
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser General Public License,
version 3 or any later version (LGPLv3 or later), or the GNU General Public License,
version 2 (GPLv2), in all cases as published by the Free Software Foundation.
```
Instead of:
```
$ gluster --version
glusterfs 6.0
```
This is caused because, while building, we use
`VERSION="${GIT_DATE}.${GIT_HASH}"`, which makes the version
API return the new output.
Solution:
Remove the checks and modify the function
to return string instead of float.
Fixes: https://github.com/gluster/glusto-tests/issues/22
Change-Id: I2e889bd0354a1aa75de25aedf8b14eb5ff5ecbe6
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
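The fix reduces to returning the raw version token instead of converting it to float; a sketch (helper name is illustrative):

```python
def parse_gluster_version(version_output):
    """Extract the version token from `gluster --version` output.

    Nightly builds report e.g. 'glusterfs 20200708.cdf01cc', which
    float() cannot parse, so the value is returned as a string.
    """
    return version_output.split()[1]
```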
| |
- `replace` function is used to forgo the version check
- `unicode` is not recognized from builtins in py2
- `replace` seems a better alternative than fixing unicode
Change-Id: Ieb9b5ad283e1a31d65bd8a9715b80f9deb0c05fe
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
| |
Fixes: https://github.com/gluster/glusto-tests/issues/21
Change-Id: I08115a2c11d657cdcb0ab0cc4fe9be697c947a8f
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
| |
- Remove decimal before passing to `head` command
- Breakup sparsefile into chunks to ~half of brick size
- Whole test has to be skipped due to BZ #1339144
Change-Id: I7a9ae25798b442c74248954023dd821c3442f8f9
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
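The first two points can be sketched as (helper and sizes are illustrative):

```python
def sparse_chunks(brick_size_bytes, chunk_count=2):
    """Split a sparse-file size into integer-sized chunk commands,
    since `head -c` rejects a fractional byte count such as
    '512.5'; chunks total roughly half the brick size when the
    input is the brick size divided by the chunk count."""
    chunk = int(brick_size_bytes // chunk_count)  # drop any decimal
    return ['head -c %d /dev/zero' % chunk] * chunk_count
```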
| |
Problem: Creating a third mount obj works for the glusterfs
protocol, but running with nfs/cifs in the future might
face complications and the test might fail.
Solution: Skip the test unless three clients are provided.
Removing redundant logging and minor fixes.
Change-Id: Ie657975a46b6989cb9f057f5cc337333bbf1010d
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
| |
- The translate function is available on `unicode` strings in Python2
Change-Id: I6aa01606acc73b18d889a965f1c01f9a393c2c46
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>