Commit message | Author | Age | Files | Lines
Test to validate gluster peer probe scenarios using IP address,
hostname and FQDN, verifying each against the peer status output,
pool list and cmd_history.log
Change-Id: I77512cfcf62b28e70682405c47014646be71593c
Signed-off-by: Pranav <prprakas@redhat.com>
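The pool-list check described above can be sketched as a small parser: find the probed peer (by IP, hostname or FQDN) in the `gluster pool list` output and confirm it is Connected. The sample output below is illustrative, not captured from a real cluster.

```python
# Illustrative `gluster pool list` output (UUIDs and hostnames made up).
SAMPLE_POOL_LIST = """\
UUID\t\t\t\t\tHostname\tState
d1e2f3a4-0000-1111-2222-333344445555\tlocalhost\tConnected
a1b2c3d4-6666-7777-8888-999900001111\tserver2.example.com\tConnected
"""

def peer_connected(pool_list_output, peer_id):
    """Return True if peer_id (IP/hostname/FQDN) is listed as Connected."""
    for line in pool_list_output.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if len(fields) >= 3 and fields[1] == peer_id:
            return fields[2] == "Connected"
    return False

print(peer_connected(SAMPLE_POOL_LIST, "server2.example.com"))
```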
Steps-
1) Create a volume and start it.
2) Fetch the brick list
3) Remove any brick path
4) Check that the number of bricks online equals
the number of bricks in the volume
Change-Id: I4c3a6692fc88561a47a7d2564901f21dfe0073d4
Signed-off-by: nik-redhat <nladha@redhat.com>
1. Check for the presence of the /var/lib/glusterd/glusterd.info file
2. Get the UUID of the current node
3. Check the value of the UUID returned by executing the command
"gluster system:: uuid get"
4. Check the UUID value shown for the same node by another node
in the cluster: "gluster peer status" on one node
will give the UUID of the other node
Change-Id: I61dfb227e37b87e889577b77283d65eda4b3cd29
Signed-off-by: “Milind” <mwaykole@redhat.com>
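Step 2 boils down to reading the UUID out of glusterd.info; a minimal sketch, assuming the file's usual key=value layout (the sample content here is made up):

```python
def parse_glusterd_uuid(info_text):
    """Extract the node UUID from glusterd.info content (UUID=... line)."""
    for line in info_text.splitlines():
        if line.startswith("UUID="):
            return line.split("=", 1)[1].strip()
    return None

# Illustrative file content, not from a real node.
sample = "UUID=5a137a42-3cf5-4339-8a9d-4a9b3c0e2aef\noperating-version=70200\n"
print(parse_glusterd_uuid(sample))
```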
Reason: The cd will change the working directory to root,
and renames and softlink creations for subsequent files will
fail, as seen in the glusto logs.
Change-Id: I174ac11007dc301ba6ec8ccddaeb919a181b1c30
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Test case:
1. Create a volume and start it.
2. Mount the volume using FUSE.
3. Create multiple level of dirs and files inside every dir.
4. Rename files such that linkto files are created.
5. From the mount point do an rm -rf * and check if all files
are deleted or not from the mount point as well as the backend bricks.
Change-Id: I658f67832715dde7260827cc0a27b005b6df5fe3
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test case:
1. Create a 2 x (4+2) disperse volume and start it.
2. Disable performance.force-readdirp and dht.force-readdirp.
3. Mount the volume on one client and create 8 directories.
4. Do a lookup on the mount using the same mount point,
number of directories should be 8.
5. Mount the volume again on a different client and check
if number of directories is the same or not.
Change-Id: Id94db2bc9200ab2ce4ca2fb604f38ca4525e6ed1
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test case:
1. Create a pure distribute volume with 2 bricks,
start and mount it.
2. Create dir dir0/dir1/dir2 inside which create 1000
files and rename all the files.
3. Start remove-brick operation on the volume.
4. Check remove-brick status till status is completed.
5. When remove-brick status is completed stop it.
6. Go to the brick used for remove-brick and perform a lookup
on the files.
7. Change the linkto xattr value for every file in the brick
used for remove-brick to point to itself.
8. Perform rm -rf * from the mount point.
Change-Id: Ic4a5e0ff93485c9c7d9a768093a52e1d34b78bdf
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test case:
1. Create volume with 5 sub-volumes, start and mount it.
2. Check df -h for available size.
3. Create 2 sparse files, one from /dev/null and one from /dev/zero.
4. Find out the size of the files and compare them through du and ls.
(They shouldn't match.)
5. Check df -h for available size. (It should be less than in step 2.)
6. Remove the files using rm -rf.
CentOS-CI failure analysis:
The testcase fails on CentOS-CI on distributed-disperse
volumes as it requires 30 bricks, which aren't available
on CentOS-CI.
Change-Id: Ie53b2531cf6105117625889d21c6e27ad2c10667
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
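The du-vs-ls mismatch in steps 3-4 comes from sparse allocation: `ls` reports the apparent size (st_size) while `du` reports allocated blocks (st_blocks). A minimal sketch of that difference, creating the sparse file with a seek past the end rather than dd:

```python
import os
import tempfile

# Create a ~100 MB sparse file by seeking past the end and writing one byte.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.seek(100 * 1024 * 1024 - 1)
    f.write(b"\0")          # only this one block is actually allocated
    path = f.name

st = os.stat(path)
apparent = st.st_size            # what `ls -l` reports
allocated = st.st_blocks * 512   # what `du` reports (512-byte units)
print(apparent, allocated)       # on sparse-capable filesystems these differ
os.unlink(path)
```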
Test case:
1. Create a distributed volume, start and mount it
2. Create 1000 dirs and 1000 files under a directory say 'dir1'
3. Set xattr glusterfs.dht.nuke to "test" for dir1
4. Validate dir1 is not seen from the mount point
5. Validate the entry is moved to '/brickpath/.glusterfs/landfill'
and deleted eventually.
Change-Id: I6359ee3c39df4e9e024a1536c95d861966f78ce5
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Extending the existing validation by adding
node restart as a method to bring back
offline bricks, along with the existing volume start
approach.
Change-Id: I1291b7d9b4a3c299859175b4cdcd2952339c48a4
Signed-off-by: Pranav <prprakas@redhat.com>
Test case:
1. Create a 1 brick pure distributed volume.
2. Start the volume and mount it on a client node using FUSE.
3. Create a directory on the mount point.
4. Check trusted.glusterfs.dht xattr on the backend brick.
5. Add brick to the volume using force.
6. Do lookup from the mount point.
7. Check the directory permissions from the backend bricks.
8. Check trusted.glusterfs.dht xattr on the backend bricks.
9. From mount point cd into the directory.
10. Check the directory permissions from backend bricks.
11. Check trusted.glusterfs.dht xattr on the backend bricks.
Change-Id: I1ba2c07560bf4bdbf7de5d3831e5de71173b64a2
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Steps-
1. Create Volume of type distribute
2. Set Quota limit on the root directory
3. Do some IO to reach the Hard limit
4. After IO ends, compute arequal checksum
5. Add bricks to the volume.
6. Start rebalance
7. After rebalance is completed, check arequal checksum
Change-Id: I1cffafbe90dd30013e615c353d6fd7daa5990a86
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Steps-
1. Create and start volume.
2. Create some special files on mount point.
3. Once it is complete, start some IO.
4. Add brick into the volume and start rebalance.
5. All IO should be successful.
Failing on CentOS-CI due to: https://github.com/gluster/glusterfs/issues/1461
Change-Id: If91886afb3f44d5ede09dfc84e966f66c89ff709
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Steps-
1. Create Volume of type distribute
2. Set Quota limit on subdirectory
3. Do some IO to reach the Hard limit
4. After IO ends, compute arequal checksum
5. Add bricks to the volume.
6. Start rebalance
7. After rebalance is completed, check arequal checksum
Change-Id: I0a431ffb5d1c957e8d11817dd8142d9551323a65
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Steps-
1. Create a volume
2. Create directories or files
3. Calculate checksum using arequal
4. Add brick and start rebalance
5. While rebalance is running, rename the files or
directories
6. After rebalance is completed, calculate checksum
7. Compare the Checksum
Change-Id: I59f80b06a23f6b4c406907673d71b254d054461d
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
This test script covers the below scenarios:
1) Sub-volume is down - Directory - Verify extended attribute
creation, display, modification and removal.
2) Directory self-heal - extended custom attribute when sub-volume
is up again.
3) Sub-volume is down - create new directory - Verify extended
attribute creation, display, modification and removal.
4) Newly created directory self-heal - extended custom attribute
when sub-volume is up again.
Change-Id: I35f8772d7758c2e9c02558b46301681d6c0f319b
Signed-off-by: sayaleeraut <saraut@redhat.com>
Steps-
1. Create a distribute volume.
2. Set quota limit on a directory on mount.
3. Do IO to reach the hardlimit on the directory.
4. After IO is completed, remove a brick.
5. Check if quota is validated, i.e. hardlimit exceeded true
after rebalance.
Change-Id: I8408cc31f70019c799df91e1c3faa7dc82ee5519
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Change-Id: I54ecce22f243b10248eea78b52c6b8cf2c7fd338
Signed-off-by: “Milind” <mwaykole@redhat.com>
Steps-
1. Create a Replica volume.
2. Bring down one of the bricks in the replica pair
3. Do some IO and create files on the mount point
4. Add a pair of bricks to the volume
5. Initiate rebalance
6. Bring back the brick which was down
7. After self heal happens, all the files should be present.
Change-Id: I78a42866d585b00c40a2712c4ae8f2ab3552adca
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Currently, there is no validation of whether shared storage is
mounted or not post reboot. Added the validation for the reboot
scenario and made the testcase modular for future updates to the test.
Change-Id: I9d39beb3c6718e648eabe15a409c4b4985736645
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Problem: ValueError: invalid literal for int()
with base 10: 'N/A'
Solution: Wait for 5 sec so that the brick gets the
port
Change-Id: Idf518392ba5584d09e81e76fca6e29037ac43e90
Signed-off-by: “Milind” <mwaykole@redhat.com>
Problem:
--------
Problem 1:
In the latest runs the following testcases fail
with wait timeout mostly on rebalance with an exception
on test_stack_overflow which fails on layout:
1.functional.dht.test_stack_overflow.TestStackOverflow_cplex_dispersed_glusterfs.test_stack_overflow
2.functional.dht.test_rebalance_dir_file_from_multiple_clients.RebalanceValidation_cplex_dispersed_glusterfs.test_expanding_volume_when_io_in_progress
3.functional.dht.test_restart_glusterd_after_rebalance.RebalanceValidation_cplex_dispersed_glusterfs.test_restart_glusterd_after_rebalance
4.functional.dht.test_stop_glusterd_while_rebalance_in_progress.RebalanceValidation_cplex_dispersed_glusterfs.test_stop_glusterd_while_rebalance_in_progress
5.functional.dht.test_rebalance_with_hidden_files.RebalanceValidation_cplex_dispersed_glusterfs.test_rebalance_with_hidden_files
This is mostly observed on disperse volumes, which
is expected, as in most cases disperse volumes take
more time than pure replicated or distributed volumes
due to their design.
Problem 2:
Another issue observed was test_rebalance_with_hidden_files
failing on I/O with the distributed volume type, with the
below stack trace:
Traceback (most recent call last):
File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 1246, in <module>
rc = args.func(args)
File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 374, in create_files
base_file_name, file_types)
File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 341, in _create_files
ret = pool.map(lambda file_tuple: _create_file(*file_tuple), files)
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 250, in map
return self.map_async(func, iterable, chunksize).get()
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 554, in get
raise self._value
IOError: [Errno 17] File exists: '/mnt/testvol_distributed_glusterfs/.1.txt'
Solution:
--------
Problem 1
Increasing or adding timeout so that wait timeouts are not observed.
Problem 2
Adding counter logic to fix the I/O failure.
Change-Id: I917137abdeb2e3844ee666258235f6ccc854ee9f
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
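The "counter logic" fix for Problem 2 can be sketched as follows: instead of workers retrying the same filename (which raises EEXIST, i.e. Errno 17), each create draws a unique, monotonically increasing suffix. The names below are illustrative, not the actual file_dir_ops.py implementation.

```python
import itertools

_counter = itertools.count()

def next_filename(base="testfile"):
    """Generate collision-free filenames across repeated create calls."""
    return "%s.%d.txt" % (base, next(_counter))

# Every call yields a distinct name, so parallel creates cannot
# collide on an already-existing file.
names = [next_filename() for _ in range(5)]
print(names)
```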
This test script verifies below scenarios:
1)Sub-volume is down copy directory
2)Sub-volume is down copy directory - destination dir hash to
up sub-volume
3)Sub-volume is down copy newly created directory - destination
dir hash to up sub-volume
4)Sub-volume is down copy newly created directory - destination
dir hash to down sub-volume
Change-Id: I22b9bf79ef4775b1128477fb858c509a719efb4a
Signed-off-by: sayaleeraut <saraut@redhat.com>
Three Scenarios:
- Simulate gfid split brain files under a directory
- Resolve gfid splits using `source-brick`, `bigger-file` and
`latest-mtime` methods
- Validate all the files are healed and data is consistent
Change-Id: I8b143f341c0db2f32086ecb6878cbfe3bdb247ce
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Test case:
1. Create, start and mount a volume.
2. Create a directory on the mount point and start
linux untar.
3. Create another directory on the mount point and
start rsync of linux untar directory.
4. Add bricks to the volume
5. Trigger rebalance on the volume.
6. Wait for rebalance to complete on volume.
7. Wait for I/O to complete.
8. Validate if checksum of both the untar and rsync is same.
Change-Id: I008c65b1783d581129b4c35f3ff90642fffe29d8
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Steps-
1. Create a volume and mount it
2. Start gluster compilation
3. Bring down redundant bricks
4. Wait for compilation to complete
5. Bring up bricks
6. Check if mountpoint is accessible
7. Delete glusterfs from mountpoint and
start gluster compilation again
8. Bring down redundant bricks
9. Wait for compilation to complete
10. Bring up bricks
11. Check if mountpoint is accessible
Change-Id: Ic5a272fba7db9707c4acf776d5a505a31a34b915
Signed-off-by: ubansal <ubansal@redhat.com>
Problem:
The code fails if we give a hostname in the glusto-tests
config file. This is because there is conversion logic
present in the testcase which converts IP to hostname.
Solution:
Adding code to check if it's an IP, and only then
run the code to convert it.
Change-Id: I3bb1a566d469a4c32161c91fa610da378d46e77e
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
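A minimal sketch of that guard, using the stdlib to detect an IPv4 literal before attempting reverse resolution (function names here are illustrative, not the testcase's actual helpers):

```python
import socket

def is_ipv4(addr):
    """True only for dotted-quad IPv4 literals, False for hostnames."""
    try:
        socket.inet_aton(addr)
        # inet_aton accepts short forms like "127.1"; require full quad.
        return addr.count(".") == 3
    except OSError:
        return False

def to_hostname(addr):
    """Convert only when addr is an IP; hostnames pass through unchanged."""
    return socket.gethostbyaddr(addr)[0] if is_ipv4(addr) else addr

print(is_ipv4("192.168.1.10"), is_ipv4("server1.example.com"))
```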
Adding the following testcases for block, character
and pipe files:
Test case 1:
1. Create distributed volume with 5 sub-volumes,
start and mount it.
2. Create character and block device files.
3. Check filetype of files from mount point.
4. Verify that the files are stored only on the bricks which are
mentioned in the trusted.glusterfs.pathinfo xattr.
5. Verify stat output from mount point and bricks.
Test case 2:
1. Create distributed volume with 5 sub-volumes,
start and mount it.
2. Create character and block device files.
3. Check filetype of files from mount point.
4. Verify that the files are stored on only one brick, which is
mentioned in the trusted.glusterfs.pathinfo xattr.
5. Delete the files.
6. Verify the files are deleted from all the bricks
Test case 3:
1. Create distributed volume with 5 sub-volumes,
start and mount it.
2. Create character and block device files.
3. Check filetype of files from mount point.
4. Set a custom xattr for files.
5. Verify that xattr for files is displayed on mount point and bricks.
6. Modify custom xattr value and verify that xattr for files
is displayed on mount point and bricks.
7. Remove the xattr and verify that custom xattr is not displayed.
8. Verify that mount point and brick shows pathinfo xattr properly.
Test case 4:
1. Create distributed volume with 5 sub-volumes,
start and mount it.
2. Create a pipe file.
3. Check filetype of files from mount point.
4. Verify that the files are stored only on the bricks which are
mentioned in the trusted.glusterfs.pathinfo xattr.
5. Verify stat output from mount point and bricks.
6. Write data to fifo file and read data from fifo file
from the other instance of the same client.
Upstream bug: https://github.com/gluster/glusterfs/issues/1461
Change-Id: I0e72246ba3d6d20a5de95a95d51271337b6b5a57
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Steps:
- Create and mount a replicated volume and disable quorum and the
self-heal daemon
- Create ~10 files from the mount point and simulate data, metadata
split-brain for 2 files each
- Create a dir with some files and simulate entry/gfid split brain
- Validate volume successfully recognizing split-brain
- Validate a lookup on split-brain files fails with EIO error on mount
- Validate `heal info` and `heal info split-brain` command shows only
the files that are in split-brain
- Validate new files and dirs can be created from the mount
Change-Id: I8caeb284c53304a74473815ae5181213c710b085
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Problem:
brickdir.hashrange_contains_hash() returns True
or False. However, test_create_file checks
whether ret == 1 or not.
Fix:
Changing ret == 1 to ret.
Change-Id: I53655794f10fc5d778790bdffbe65563907bef6d
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
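A sketch of the idea (the hashrange helper is re-implemented here for illustration, not copied from glustolibs): the function returns a bool, so the caller should test truthiness; `ret == 1` only happens to pass because `True == 1` in Python, and states the wrong intent.

```python
def hashrange_contains_hash(hashrange, filehash):
    """Illustrative stand-in: is filehash within [start, end]?"""
    start, end = hashrange
    return start <= filehash <= end

ret = hashrange_contains_hash((0, 0x7fffffff), 0x1234abcd)
if ret:                      # the fix: test truthiness directly
    print("hash in range")
# `ret == 1` would also pass here, but only via bool/int equivalence.
```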
- `str.rsplit` doesn't accept named args in py2
- Removed named arg to make it compatible with both versions
Change-Id: Iba287ef4c98ebcbafe55f2166c99aef0c20ed9aa
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
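The portability point above in one line: Python 2's C-implemented `str.rsplit` rejects keyword arguments, so passing `maxsplit` positionally works on both versions (the brick string below is illustrative).

```python
# rsplit(sep, maxsplit=1) raises TypeError on py2; positional works on both.
brick = "server1.example.com:/bricks/brick0/testvol"
host, path = brick.rsplit(":", 1)
print(host, path)
```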
Steps:
1.Create a volume and mount it
2.Create deep directory and file in each directory
3.Rename the file
4.Check if brickpath contains old files
5.Delete all data
6.Check .glusterfs/indices/xattrop is empty
7.Check if brickpath is empty
Change-Id: I04e50ef94379daa344be1ae1d19cf2d66f8f460b
Signed-off-by: ubansal <ubansal@redhat.com>
Steps:
1. Create a volume and mount it
2. Disable heal and cluster-quorum-count
3. Bring down one data and arbiter brick from one
subvol
4. Write IO and validate it
5. Bring up bricks
6. Bring down another data brick and arbiter brick
from the same subvol
7. Write IO and validate it
8. Bring up bricks
9. Check if split-brain is created
10. Write IO -> should fail
11. Enable heal and cluster-quorum-count
12. Write IO -> should fail
Change-Id: I229b58c1bcd70dcd87d35dc410e12f51b032b9c4
Signed-off-by: ubansal <ubansal@redhat.com>
Test Steps:
1. Create a distributed-replicated(3X3)/distributed-arbiter(3X(2+1))
and mount it on one client
2. Kill 3 bricks corresponding to the 1st subvol
3. Unmount and remount the volume on the same client
4. Create deep dir from mount point 'dir1/subdir1/deepdir1'
5. Create files under dir1/subdir1/deepdir1; touch <filename>
6. Now bring all sub-vols up by volume start force
7. Validate backend bricks for dir creation, the subvol which is
offline will have no dirs created, whereas other subvols will have
dirs created from step 4
8. Trigger heal from client by '#find . | xargs stat'
9. Verify that the directory entries are created on all back-end bricks
10. Create new dir (dir2) on location dir1/subdir1/deepdir1
11. Trigger rebalance and wait for the completion
12. Check backend bricks for all entries of dirs
Change-Id: I4d8f39e69c84c28ec238ea73935cd7ca0288bffc
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Problem:
In most of the testcases due to redundant logging,
the performance of the whole suite completion time
is affected.
Solution:
In the BVT test suite there are 184 g.log.info messages;
more than half of them are redundant.
Removed logs wherever they are not required.
Added missing get_super_method for setUp and tearDown
for one testcase, modified increment in the test.
Change-Id: I19e4462f2565906710c2be117bc7c16c121ddd32
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Test Steps:
1) Create 1x3 volume and fuse mount the volume
2) On mount created a dir dir1
3) Pkill glusterfsd on node n1 (b2 on node2 and b3 on node3 up)
4) touch f{1..10} on the mountpoint
5) b2 and b3 xattrs would be blaming b1 as files are created while
b1 is down
6) Reset the b3 xattrs to NOT blame b1 by using setfattr
7) Now pkill glusterfsd of b2 on node2
8) Restart glusterd on node1 to bring up b1
9) Now bricks b1 online , b2 down, b3 online
10) touch x{1..10} under dir1 itself
11) Again reset xattr on node3 of b3 so that it doesn't blame b2,
as done for b1 in step 6
12) Do restart glusterd on node2 hosting b2 to bring all bricks online
13) Check for heal info, split-brain and arequal for the bricks
Change-Id: Ieea875dd7243c7f8d2c6959aebde220508134d7a
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Add logic to do ls -l before and after.
Add logic to set all log-levels to debug.
Change-Id: I512e3b229fe9e2126f6c596fdc031c00a25fbe0b
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Steps:
1.Create a volume and mount it
2.Start writing and reading data on a file
3.Bring down 1 brick
4.Validate read and write to file
5.Bring up brick and start healing
6.Monitor healing and completion
7.Bring down 2nd brick
8.Read and write to same file
9.Bring up brick and start healing
10.Monitor healing and completion
11.Check split-brain
Change-Id: Ib03a1ad7ee626337904b084e85eee38750fea141
Signed-off-by: ubansal <ubansal@redhat.com>
- Validate `heal info` returns before timeout with IO
- Validate `heal info` returns before timeout with IO and brick down
- Validate data heal on file append in AFR, arbiter
- Validate entry heal on file append in AFR, arbiter
Change-Id: I803b931cd82d97b5c20bd23cd5670cb9e6f04176
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Problem:
In most of the testcases due to redundant logging,
the performance of the whole suite completion time
is affected.
Solution:
Currently there are 100+ g.log.info statements in the
authentication suite and half of them are redundant.
Removed the g.log.info statements wherever they are not
required. After the changes around 50 g.log.info
statements remain; the goal was not to reduce the number
of lines but to improve the whole suite.
Modified a few line indents as well and added teardown
for the missing files.
Note: Will be submitting for each components separately
Change-Id: I63973e115dd5dbbc7fc9462978397e7915181265
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Problem:
Patch [1], which was sent for issue #24,
causes a large number of testcases to fail
or get stuck in the latest DHT run.
Solution:
Make changes so that the getfattr command
sends back the output in text wherever needed.
Links:
[1] https://review.gluster.org/#/c/glusto-tests/+/24841/
Change-Id: I6390e38130b0699ceae652dee8c3b2db2ef3f379
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
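The encoding mismatch behind this fix can be illustrated in isolation: getfattr can return a value hex-encoded (`0x...`, the `-e hex` form) or as plain text (`-e text`), and a hex-encoded value must be decoded before any string comparison succeeds. The sample value is made up.

```python
import binascii

def decode_getfattr_value(raw):
    """Decode a getfattr value: handle both 0x-hex and plain-text forms."""
    if raw.startswith("0x"):
        return binascii.unhexlify(raw[2:]).decode()
    return raw

decoded = decode_getfattr_value("0x74657374")   # hex form of "test"
print(decoded)
```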
Problem:
The testcase test_volume_start_stop_while_rebalance_is_in_progress
throws the below traceback when run:
```
Traceback (most recent call last):
File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
msg = self.format(record)
File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
return fmt.format(record)
File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
record.message = record.getMessage()
File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Logged from file test_volume_start_stop_while_rebalance_in_progress.py, line 135
```
This is because g.log.error() was used instead of
self.assertTrue().
Solution:
Changing to self.assertTrue().
Change-Id: If926eb834c0128a4e507da9fdd805916196432cb
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
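The root cause can be reproduced without glusto: the logging module defers formatting until emit time, and a message whose `%` placeholders don't match the passed arguments blows up inside `msg % self.args`, exactly as the traceback shows. A pass/fail condition belongs in an assertion, not a log call.

```python
# What logging does internally: one %s placeholder, two arguments.
msg, args = "rebalance failed on %s", ("testvol", "extra")
try:
    formatted = msg % args
except TypeError as e:
    formatted = "TypeError: %s" % e   # "not all arguments converted..."
print(formatted)
```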
1. Create volume and create files/dirs from mount point
2. With IO in progress execute reset-brick start
3. Now format the disk from the back-end, using rm -rf <brick path>
4. Execute reset-brick commit and check that the brick is online.
5. Issue volume heal using "gluster vol heal <volname> full"
6. Check arequal for all bricks to verify all backend bricks,
including the reset brick, have the same data
Change-Id: I06b93d79200decb25f863e7a3f72fc8e8b1c4ab4
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Test Steps:
1. Create a pure-ec volume (say 1x(4+2))
2. Mount volume on two clients
3. Create some files and dirs from both mnts
4. Add bricks, in this case (4+2), i.e. 6 bricks
5. Create a new dir(common_dir) and in that directory create a distinct
directory(using hostname as dirname) for each client and pump IOs
from the clients(dd)
6. While IOs are in progress replace any of the bricks
7. Check for errors if any collected after step 6
Change-Id: I3125fc5906b5d5e0bc40477e1ed88825f53fa758
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
TCs were failing due to a timeout issue;
increased the rebalance timeout from 900 to 1800 seconds.
Change-Id: I726217a21ebbde6391660dd3c9dc096cc9ca6bb4
Signed-off-by: ubansal <ubansal@redhat.com>
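The wait the timeout governs can be sketched as a generic polling loop: keep checking a status callback until it reports completion or the deadline (here raised from 900 s to 1800 s for rebalance on slow volumes) expires. The fake status callback is purely illustrative.

```python
import time

def wait_for(check, timeout=1800, interval=0.01):
    """Poll check() until it returns True or timeout seconds elapse."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

state = {"n": 0}
def fake_rebalance_done():
    state["n"] += 1
    return state["n"] >= 3        # "completes" on the third poll

done = wait_for(fake_rebalance_done, timeout=5)
print(done)
```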
Change-Id: I813f3e78ad8b0b79940635df6721e34e6bc93f34
Signed-off-by: Manisha Saini <msaini@redhat.com>
Steps:
1.Create a volume and mount it
2.Create a directory say d1
3.Create deep directories and files in d1
4.Bring down redundant bricks
5.Delete d1
6.Create d1 and same data again
7.Bring bricks up
8.Monitor heal
9.Verify split-brain
Change-Id: I778fab6bf6d9f81fca79fe18285073e1f7ccc7e7
Signed-off-by: ubansal <ubansal@redhat.com>
Problem:
The test script test_alert_time_out currently fails
2 out of 6 times when executed on the same setup.
This is due to the log files not having the 120004 A
alert message. This issue is only observed on the
distributed volume type mounted over the fuse
protocol.
Solution:
There is no permanent solution to this problem,
as even if we increase the sleep to 20 seconds there
is still a chance that it might fail. The optimal
sleep time, where it failed only 5 times in 15
attempts, is 6 seconds. Hence changing the sleep time
to 6 seconds.
Change-Id: I9e9bd41321e24f502d90c3c34edce9113133755e
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The assertIsNotNone call is missing its param.
Change-Id: Iddff9b203672b2edf702ada624bfac1892641712
Signed-off-by: Pranav <prprakas@redhat.com>
Adding code to get the dir tree and dump all
xattrs in hex for Bug 1810901 before remove-brick;
also adding logic to set the log-level to debug.
Change-Id: I9c9c970c4de7d313832f6f189cdca8428a073b1e
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
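A minimal sketch of that debug aid: walk a directory tree and dump each file's extended attributes with values hex-encoded, mirroring `getfattr -e hex`. Whether any xattrs exist depends on the filesystem, hence the OSError guard; the function name is illustrative.

```python
import binascii
import os
import tempfile

def dump_tree_xattrs(root):
    """Map each file under root to its xattrs with 0x-hex-encoded values."""
    report = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                attrs = {a: "0x" + binascii.hexlify(os.getxattr(path, a)).decode()
                         for a in os.listxattr(path)}
            except OSError:           # xattrs unsupported on this filesystem
                attrs = {}
            report[path] = attrs
    return report

root = tempfile.mkdtemp()
open(os.path.join(root, "f1"), "w").close()
report = dump_tree_xattrs(root)
print(report)
```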