Commit messages
Test Steps:
1) Set the max-port option in glusterd.vol file to 49200
2) Restart glusterd on one of the nodes
3) Create 50 volumes in a loop
4) Try to start the 50 volumes in a loop
5) Confirm that the 50th volume failed to start
6) Confirm the error message due to which the volume failed to start
7) Set the max-port option in glusterd.vol file back to default value
8) Restart glusterd on the same node
9) Starting the 50th volume should succeed now
Change-Id: I084351db20cc37e3391061b7b313a18896cc90b1
Signed-off-by: nik-redhat <nladha@redhat.com>
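
A rough CLI sketch of the steps above, assuming the default glusterd.vol
path /etc/glusterfs/glusterd.vol and hypothetical volume/brick names:

    # 1-2) cap the brick port range and restart glusterd on one node
    #      (glusterd.vol carries a commented 'option max-port' line by default)
    vi /etc/glusterfs/glusterd.vol            # set: option max-port 49200
    systemctl restart glusterd
    # 3-6) create and start 50 volumes; the last start should fail once
    #      the narrowed port range is exhausted
    for i in $(seq 1 50); do
        gluster volume create testvol_$i server1:/bricks/brick_$i force
        gluster volume start testvol_$i
    done
    # 7-9) restore the default max-port, restart glusterd and retry
    systemctl restart glusterd
    gluster volume start testvol_50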
Change-Id: I7cbc6422a6a6d2946440e51e8d540f47ccc9bf46
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Steps-
1. Create a gluster volume.
2. Kill all gluster related processes.
3. Start glusterd service.
4. Verify that all gluster processes are up.
5. Repeat the above steps 5 times.
Change-Id: If01788ae8bcdd75cdb55261715c34edf83e6f018
Signed-off-by: Rinku Kothiya <rkothiya@redhat.com>
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
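
A sketch of one iteration of the loop above (systemd-managed glusterd assumed):

    # 2) kill all gluster related processes
    pkill glusterd; pkill glusterfsd; pkill glusterfs
    # 3) start the management daemon again
    systemctl start glusterd
    # 4) verify the daemons are back; bricks of started volumes should be online
    pgrep -a glusterd
    pgrep -a glusterfsd
    gluster volume status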
Verifying the glusterfind list command functionality with valid
and invalid values for the required and optional parameters.
* Create a volume
* Create a session on the volume and call glusterfind
list with the following combinations:
- Valid values for optional parameters
- Invalid values for optional parameters
NOTE:
There are no required parameters for glusterfind list command.
Change-Id: I2677f507dad42904b404b5f2daf0e354c37c0cb4
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
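
A sketch of the combinations exercised (session and volume names are hypothetical):

    gluster volume create testvol server1:/bricks/brick1 force
    gluster volume start testvol
    glusterfind create sess1 testvol              # create a session on the volume
    glusterfind list                              # no required parameters
    glusterfind list --volume testvol             # valid value for an optional parameter
    glusterfind list --session sess1              # valid value for an optional parameter
    glusterfind list --volume no-such-volume      # invalid value; expect an error or empty listing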
Fix:
Improved the check for the default quorum options on the volume
to work with both the current and older default values.
Older default value: 51
Current default value: 51 (DEFAULT)
Change-Id: I200b81334e84a7956090bede3e2aa50b9d4cf8e0
Signed-off-by: nik-redhat <nladha@redhat.com>
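
For illustration, the kind of check being fixed; assuming the option in
question is cluster.server-quorum-ratio (the message does not name it),
whose printed default differs across releases:

    gluster volume get testvol cluster.server-quorum-ratio
    # older releases print:  cluster.server-quorum-ratio   51
    # newer releases print:  cluster.server-quorum-ratio   51 (DEFAULT)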
Issue: The cluster options set during a TC aren't reset, causing
them to affect subsequent TC runs.
Fix: Add volume_reset() in the tearDown of the TC to perform a
cleanup of the cluster options.
Change-Id: I00da5837d2a4260b4d414cc3c8083f83d8f6fadd
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
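
The cleanup boils down to the equivalent of the following CLI, which the
volume_reset() wrapper issues (a sketch; using 'all' as the volume name for
cluster-wide options is an assumption about how the teardown is called):

    gluster volume reset testvol      # clear options set on the volume itself
    gluster volume reset all          # clear cluster-wide options set via 'volume set all'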
Test steps:
1) Create a volume and start it.
2) Fetch the max bricks per process value
3) Reset the volume options
4) Fetch the max bricks per process value
5) Compare the value fetched in the last step with the initial value
6) Enable brick-multiplexing in the cluster
7) Fetch the max bricks per process value
8) Compare the value fetched in the last step with the initial value
Change-Id: I20bdefd38271d1e12acf4699b4fe5d0da5463ab3
Signed-off-by: nik-redhat <nladha@redhat.com>
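
The steps above map roughly to this CLI sequence (volume name hypothetical):

    gluster volume get testvol cluster.max-bricks-per-process   # 2) initial value
    gluster volume reset testvol                                 # 3) reset the volume options
    gluster volume get testvol cluster.max-bricks-per-process   # 4-5) should match the initial value
    gluster volume set all cluster.brick-multiplex enable       # 6) enable brick multiplexing
    gluster volume get testvol cluster.max-bricks-per-process   # 7-8) should still match the initial value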
The cluster options, once set, aren't reset, and this would cause
problems for subsequent TCs. Hence, resetting the options at
teardown.
Change-Id: Ifd1df2632a25ca7788a6bb4f765b3f6583ab06d6
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Test case:
1. Create a volume of type distributed-replicated or
distributed-arbiter or distributed-dispersed and start it.
2. Mount the volume to clients and create 2000 directories
and 10 files inside each directory.
3. Wait for I/O to complete on mount point and perform ls
(ls should complete within 10 seconds).
Change-Id: I5c08c185f409b23bd71de875ad1d0236288b0dcc
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
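
A sketch of the I/O and the lookup check (mount path and volume name hypothetical):

    mount -t glusterfs server1:/testvol /mnt/testvol
    for d in $(seq 1 2000); do
        mkdir /mnt/testvol/dir_$d
        for f in $(seq 1 10); do touch /mnt/testvol/dir_$d/file_$f; done
    done
    time ls /mnt/testvol > /dev/null      # expected to complete within 10 seconds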
1. Stop one of the volumes
(i.e.) gluster volume stop <vol-name>
2. Get the status of the volumes with --xml dump
(i.e.) gluster volume status all --xml
XML dump should be consistent
Signed-off-by: Milind <mwaykole@redhat.com>
Change-Id: I3e7af6d1bc45b73ed8302bf3277e3613a6b1100f
Moving the gluster mem_leak test case to the resource_leak
dir
Change-Id: I8189dc9b509a09f793fe8ca2be53e8546babada7
Signed-off-by: Pranav <prprakas@redhat.com>
Modified the command from 'grep epoll_wait' to
"grep -i 'sys_epoll_wait'" to address the changes in the epoll
functionality in newer versions of the Linux kernel.
Details of the changes can be found here:
https://github.com/torvalds/linux/commit/791eb22eef0d077df4ddcf633ee6eac038f0431e
Change-Id: I1671a74e538d20fe5dbf951fca6f8edabe0ead7f
Signed-off-by: nik-redhat <nladha@redhat.com>
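
For illustration, one way such a grep is used to count glusterd threads
sitting in epoll (a sketch; the test's actual helper may differ -- on newer
kernels the stack frame appears as __x64_sys_epoll_wait, which the
'sys_epoll_wait' pattern still matches):

    pid=$(pidof glusterd)
    grep -il 'sys_epoll_wait' /proc/$pid/task/*/stack | wc -l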
1. Create a trusted storage pool by peer probing the nodes
2. Create a distributed-replicated volume
3. Start the volume and fuse mount the volume and start IO
4. Create another replicated volume, start it and then stop it
5. Start rebalance on the first volume.
6. While rebalance is in progress, stop glusterd on one of the
nodes in the trusted storage pool.
7. Get the status of the volumes with --xml dump
Change-Id: I581b7713d7f9bfdd7be00add3244578b84daf94f
Signed-off-by: Milind <mwaykole@redhat.com>
Issue: Glusterd start fails after repeated start and stop (due to
the cap of a maximum of 6 starts of the service within an hour).
Fix: Add a retry option, similar to that of restart_glusterd, so as
to run `systemctl reset-failed glusterd` on the servers.
Change-Id: Ic0378934623dfa6dc5ab265246c746269f6995bc
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
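
The retry essentially issues the following on each server before attempting
another start:

    systemctl reset-failed glusterd    # clear systemd's start-rate-limit / failed state
    systemctl start glusterd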
This test is to verify BZ:1785577 (
https://bugzilla.redhat.com/show_bug.cgi?id=1785577)
To verify that there are no memory leaks when SSL is
enabled.
Change-Id: I1f44de8c65b322ded76961253b8b7a7147aca76a
Signed-off-by: Pranav <prprakas@redhat.com>
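
For context, a hedged sketch of enabling SSL on a volume before running the
leak check (certificate paths are the gluster defaults; the test's exact
setup may differ):

    # certificates placed on all nodes as /etc/ssl/glusterfs.pem,
    # /etc/ssl/glusterfs.key and /etc/ssl/glusterfs.ca
    touch /var/lib/glusterd/secure-access      # TLS on the management path
    gluster volume set testvol client.ssl on
    gluster volume set testvol server.ssl on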
Test Steps:
1) Setup and mount a volume on client.
2) Stop glusterd on a random server.
3) Start IO on mount points
4) Set an option on the volume
5) Start glusterd on the stopped node.
6) Verify all the bricks are online after starting glusterd.
7) Check if the volume info is synced across the cluster.
Change-Id: Ia2982ce4e26f0d690eb2bc7516d463d2a71cce86
Signed-off-by: nik-redhat <nladha@redhat.com>
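
A sketch of steps 2-7 (the option chosen and all names are hypothetical):

    systemctl stop glusterd                                    # on one randomly picked server
    gluster volume set testvol performance.readdir-ahead on    # issued from another node
    systemctl start glusterd                                   # back on the stopped node
    gluster volume status testvol                              # all bricks should be online
    gluster volume info testvol                                # same output, including the new option, on every node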
Test Steps:
1. Start glusterd
2. Check that the ping timeout value in glusterd.vol is 0
3. Create a test script for epoll thread count
4. Source the test script
5. Fetch the pid of glusterd
6. Check that the epoll thread count of glusterd is 1
Change-Id: Ie3bbcb799eb1776004c3db4922d7ee5f5993b100
Signed-off-by: nik-redhat <nladha@redhat.com>
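
A sketch of the checks (default glusterd.vol path assumed; the helper script
for the thread count is not shown in the message):

    grep ping-timeout /etc/glusterfs/glusterd.vol               # expect: option ping-timeout 0
    pid=$(pidof glusterd)                                       # 5) pid of glusterd
    grep -il 'sys_epoll_wait' /proc/$pid/task/*/stack | wc -l   # 6) expected to be 1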
Steps-
1. Create a distributed-replicated volume, start and mount it.
2. Create deep dirs (200) and create some 100 files in the
deepest directory.
3. Expand volume.
4. Start rebalance.
5. Once rebalance is completed, do a lookup on mount and log
the time taken.
Change-Id: I3a55d2670cc6bda7670f97f0cd6208dc9e36a5d6
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Change-Id: I626914130554cccf1008ab43158d7063d131b870
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
Test case:
1. Create a volume, start it and mount it.
2. Create files and dirs on the mount point.
3. Add bricks to the volume.
4. Replace 2 old bricks of the volume.
5. Trigger rebalance fix layout and wait for it to complete.
6. Check layout on all the bricks through trusted.glusterfs.dht.
Change-Id: Ibc8ded6ce2a54b9e4ec8bf0dc82436fcbcc25f56
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
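
A rough CLI sketch (all server and brick paths hypothetical):

    gluster volume add-brick testvol server4:/bricks/brick_new1 server5:/bricks/brick_new2
    gluster volume replace-brick testvol server1:/bricks/brick1 server4:/bricks/brick_rep1 commit force
    gluster volume rebalance testvol fix-layout start
    gluster volume rebalance testvol status                            # wait for fix-layout to complete
    getfattr -n trusted.glusterfs.dht -e hex /bricks/brick_new1/dir1   # layout xattr, per brick and directory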
Test case:
1. Create a volume, start it and mount it.
2. Create a data set on the client node such that all the available
space is used and "No space left on device" error is generated.
3. Set cluster.min-free-disk to 30%.
4. Add bricks to the volume, trigger rebalance and wait for rebalance
to complete.
Change-Id: I69c9d447b4713b107f15b4801f4371c33f5fb2fc
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
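
A sketch of the option and expansion steps (names hypothetical; the data set
that fills the volume is omitted):

    gluster volume set testvol cluster.min-free-disk 30%
    gluster volume add-brick testvol server3:/bricks/brick_new1
    gluster volume rebalance testvol start
    gluster volume rebalance testvol status      # wait until 'completed'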
Scenarios:
----------
Test case 1:
1. Create a volume, start it and mount it using fuse.
2. Create 50 files on the mount point and create 50
hardlinks for the files.
3. After the files and hard links creation is complete,
add bricks to the volume and trigger rebalance on the
volume.
4. Wait for rebalance to complete and check if files are
skipped or not.
5. Trigger rebalance on the volume with force and repeat
step 4.
Test case 2:
1. Create a volume, start it and mount it using fuse.
2. Create 50 files on the mount point and set the sticky bit
on the files.
3. After the files creation and sticky bit addition is
complete, add bricks to the volume and trigger rebalance
on the volume.
4. Wait for rebalance to complete.
5. Check for data corruption by comparing arequal before
and after.
Change-Id: I61bcf14185b0fe31b44e9d2b0a58671f21752633
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
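
For test case 1, a sketch of the hardlink creation and the skipped-file
check (paths hypothetical):

    for i in $(seq 1 50); do
        dd if=/dev/urandom of=/mnt/testvol/file_$i bs=1M count=1
        ln /mnt/testvol/file_$i /mnt/testvol/hardlink_$i
    done
    gluster volume add-brick testvol server3:/bricks/brick_new1
    gluster volume rebalance testvol start
    gluster volume rebalance testvol status          # the 'skipped' column reports skipped files
    gluster volume rebalance testvol start force     # repeat the check with force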
Test case:
1. Create a volume, start it and mount it.
2. Fill a few bricks till min.free.limit is reached.
3. Add brick to the volume.
4. Set cluster.min-free-disk to 30%.
5. Remove bricks from the volume.
(Remove brick should pass without any errors)
6. Check for data loss by comparing arequal before and after.
Change-Id: I0033ec47ab2a2958178ce23c9d164939c9bce2f3
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test case:
1. Create a volume, start it and mount it.
2. Create some data on the volume.
3. Start remove-brick on the volume.
4. When remove-brick is in progress, kill the brick process of a brick
which is being removed.
5. Remove-brick should complete without any failures.
Change-Id: I8b8740d0db82d3345279dee3f0f5f6e17160df47
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test scenarios:
===============
Test case: 1
1. Create a volume, start it and mount it.
2. Create some data on the volume.
3. Run remove-brick start, status and finally commit.
4. Check if there is any data loss or not.
Test case: 2
1. Create a volume, start it and mount it.
2. Create some data on the volume.
3. Run remove-brick with force.
4. Check if bricks are still seen on the volume or not
Change-Id: I2cfd324093c0a835811a682accab8fb0a19551cb
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
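
The two remove-brick lifecycles exercised above, as a CLI sketch (brick path
hypothetical):

    # test case 1: graceful removal, data is migrated before the commit
    gluster volume remove-brick testvol server3:/bricks/brick3 start
    gluster volume remove-brick testvol server3:/bricks/brick3 status    # wait for 'completed'
    gluster volume remove-brick testvol server3:/bricks/brick3 commit
    # test case 2: forced removal, no migration
    gluster volume remove-brick testvol server3:/bricks/brick3 force
    gluster volume info testvol                                          # brick should no longer be listed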
Test case:
1. Enable brickmux on cluster, create a volume, start it and mount it.
2. Start the below I/O from 4 clients:
From client-1 : run script to create folders and files continuously
From client-2 : start linux kernel untar
From client-3 : while true;do find;done
From client-4 : while true;do ls -lRt;done
3. Kill brick process on one of the nodes.
4. Add brick to the volume.
5. Remove bricks from the volume.
6. Validate if I/O was successful or not.
Skip reason:
Test case skipped due to bug 1571317.
Change-Id: I48bdb433230c0b13b0738bbebb5bb71a95357f57
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Check that when there are pending heals and healing and I/O are going
on, heal info completes successfully.
Change-Id: I7b00c5b6446d6ec722c1c48a50e5293272df0fdf
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Test case:
1. Create volume, start it and mount it.
2. Open file datafile on mount point and start copying /etc/passwd
line by line (make sure that the copy is slow).
3. Start remove-brick of the subvol to which datafile is hashed.
4. Once remove-brick is complete compare the checksum of /etc/passwd
and datafile.
Change-Id: I278e819731af03094dcee93963ec1da115297bef
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I080328dfbcde5652f9ab697f8751b87bf96e8245
Signed-off-by: Milind <mwaykole@redhat.com>
Steps-
1. Create a volume and mount it.
2. Set the quorum type to 'server'.
3. Bring some nodes down such that quorum isn't met.
4. Brick status in the node which is up should be offline.
5. Restart glusterd in this node.
6. Brick status in the restarted node should be offline.
Change-Id: If6885133848d77ec803f059f7a056dc3aeba7eb1
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
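
A sketch of the quorum setup and status checks (node and volume names
hypothetical, assuming a 3-node pool):

    gluster volume set testvol cluster.server-quorum-type server
    systemctl stop glusterd            # on node2 and node3, so quorum is lost
    gluster volume status testvol      # on node1: bricks should be reported offline
    systemctl restart glusterd         # on node1
    gluster volume status testvol      # bricks on node1 should still be offline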
Test case:
1. Create a pure distribute volume with 3 bricks.
2. Start it and mount it on client.
3. Fill one disk of the volume till it's full
4. Add brick to volume, start rebalance and wait for it to complete.
5. Check that the arequal checksum before and after add-brick is the same.
6. Check if link files are present on bricks or not.
Change-Id: I4645a3eea33fefe78d48805a3794556b81b189bc
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test Steps:
1) Create and start a volume
2) Run volume info command
3) Run volume status command
4) Run volume stop command
5) Run volume start command
6) Check the default log level of cli.log
Change-Id: I871d83500b2a3876541afa348c49b8ce32169f23
Signed-off-by: nik-redhat <nladha@redhat.com>
Test Steps:
1. Create and start a volume
2. Check the output of '/var/lib/glusterd/options' file
3. Store the value of 'global-option-version'
4. Set server-quorum-ratio to 70%
5. Check the output of '/var/lib/glusterd/options' file
6. Compare the value of 'global-option-version' and check
if the value of 'server-quorum-ratio' is set to 70%
Change-Id: I5af40a1e05eb542e914e5766667c271cbbe126e8
Signed-off-by: nik-redhat <nladha@redhat.com>
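
A sketch of the check (the options file path is the glusterd default):

    grep global-option-version /var/lib/glusterd/options        # 2-3) note the current value
    gluster volume set all cluster.server-quorum-ratio 70%      # 4) set the cluster-wide option
    grep -e global-option-version -e server-quorum-ratio /var/lib/glusterd/options
    # 5-6) global-option-version should have been bumped and server-quorum-ratio should read 70%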
Test Steps:
1. Create and start a volume
2. Disable brick multiplex
3. Set auth.allow option on the volume for the client address on which
the volume is to be mounted
4. Mount the volume on the client and then unmount it.
5. Reset the volume
6. Set auth.reject option on the volume for the client address on which
the volume is to be mounted
7. Mounting the volume should fail
8. Reset the volume and mount it on the client.
9. Repeat steps 3-8 with brick multiplex enabled
Change-Id: I26d88a217c03f1b4732e4bdb9b8467a9cd608bae
Signed-off-by: nik-redhat <nladha@redhat.com>
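
One pass of the auth option steps as a CLI sketch (client address and names
hypothetical):

    gluster volume set testvol auth.allow 192.168.1.10
    mount -t glusterfs server1:/testvol /mnt/testvol && umount /mnt/testvol   # should succeed
    gluster volume reset testvol
    gluster volume set testvol auth.reject 192.168.1.10
    mount -t glusterfs server1:/testvol /mnt/testvol                          # expected to fail
    gluster volume reset testvol
    mount -t glusterfs server1:/testvol /mnt/testvol                          # should succeed again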
1) Create a distributed-replicated volume and start it.
2) Enable the storage.reserve option on the volume
using the command 'gluster volume set <volname> storage.reserve <value>';
let's say, set it to a value of 50.
3) Mount the volume on a client
4) check df -h output of the mount point and backend bricks.
Change-Id: I74f891ce5a92e1a4769ec47c64fc5469b6eb9224
Signed-off-by: “Milind” <mwaykole@redhat.com>
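
The CLI for step 2 and the checks (names hypothetical):

    gluster volume set testvol storage.reserve 50
    mount -t glusterfs server1:/testvol /mnt/testvol
    df -h /mnt/testvol        # the mount should account for the reserved space
    df -h /bricks/brick1      # backend bricks still report their raw size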
Steps:
1. Create a replicated/distributed-replicate volume and mount it
2. Set data/metadata/entry-self-heal to off and
data-self-heal-algorithm to diff
3. Create few files inside a directory with some data
4. Check arequal of the subvol and all the bricks in the subvol should
have same checksum
5. Bring down a brick from the subvol and validate it is offline
6. Modify the data of existing files under the directory
7. Bring back the brick online and wait for heal to complete
8. Check arequal of the subvol and all the bricks in the same subvol
should have same checksum
Change-Id: I568a932c6e1db4a9084c01556c5fcca7c8e24a49
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Changes done in this patch:
1. Adding get_usable_size_per_disk() to lib_utils.py.
2. Removing the redundant code from
dht/test_rename_with_brick_min_free_limit_crossed.py.
Change-Id: I80c1d6124b7f0ce562d8608565f7c46fd8612d0d
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test case:
1. Create a volume, start it and mount it.
2. Create files and dirs on the mount point.
3. Start remove-brick and copy huge file when remove-brick is
in progress.
4. Commit remove-brick and check checksum of original and copied file.
Change-Id: I487ca05114c1f36db666088f06cf5512671ee7d7
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
This test script covers below scenarios:
1) Creation of various file types - regular, block,
character and pipe file
2) Hard link create, validate
3) Symbolic link create, validate
Issue : Fails on CI due to-
https://github.com/gluster/glusterfs/issues/1461
Change-Id: If50b8d697115ae7c23b4d30e0f8946e9fe705ece
Signed-off-by: sayaleeraut <saraut@redhat.com>
1. Create a replicated/distributed-replicate volume and mount it
2. Start IO from the clients
3. Bring down a brick from the subvol and validate it is offline
4. Bring back the brick online and wait for heal to complete
5. Once the heal is completed, expand the volume.
6. Trigger rebalance and wait for rebalance to complete
7. Validate IO, no errors during the steps performed from step 2
8. Check arequal of the subvol and all the bricks in the same subvol
should have the same checksum
Note: This test is clearly for replicated volume types.
Change-Id: I2286e75cbee4f22a0ed14d6c320a4496dc3c3905
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Test Steps:
1) Create a volume and start it.
2) Mount volume on client and start IO.
3) Start profile on the volume.
4) Create another volume.
5) Start profile on the volume.
6) Run volume status in a loop for 100 times in one node.
7) Run profile info for the new volume on one of the other nodes
8) Run profile info for the new volume in a loop for 100 times on
the other node
Change-Id: I1c32a938bf434a88aca033c54618dca88623b9d1
Signed-off-by: nik-redhat <nladha@redhat.com>
1. Check the location of the glusterd socket file (glusterd.socket)
   ls /var/run/ | grep -i glusterd.socket
2. systemctl is-enabled glusterd -> enabled
Change-Id: I6557c27ffb7e91482043741eeac0294e171a0925
Signed-off-by: “Milind” <mwaykole@redhat.com>
Scenarios added:
----------------
Test case:
1. Create a volume, start it and mount it.
2. Start I/O from mount point.
3. Check if there are any memory leaks and OOM killers.
Test case:
1. Create a volume, start it and mount it.
2. Set features.cache-invalidation to ON.
3. Start I/O from mount point.
4. Run gluster volume heal command in a loop
5. Check if there are any memory leaks and OOM killers on servers.
Design change:
--------------
- self.id() is moved into the test class as it was hitting bound
errors in the original logic.
- Logic changed for checking fuse leaks.
- Fixed breakage in methods wherever needed.
Change-Id: Icb600d833d0c08636b6002abb489342ea1f946d7
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Steps-
1. Create and start volume.
2. Check that the quorum options aren't coming up in the vol info.
3. Kill two glusterd processes.
4. There shouldn't be any effect on the glusterfsd processes.
Change-Id: I40e6ab5081e723ae41417f1e5a6ece13c65046b3
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Steps:
1. Create all types of volumes.
2. Mount the volume on two client mounts
3. Prepare the same script to do flock on the two nodes;
while running this script it should not hang
4. Wait till 300 iterations on both the nodes
Change-Id: I53e5c8b3b924ac502e876fb41dee34e9b5a74ff7
Signed-off-by: “Milind” <mwaykole@redhat.com>
Steps-
1. Create a disperse volume and start it.
2. Set the eager lock option
3. Mount the volume and create a file
4. Check the profile info of the volume for inodelk count.
5. Check xattrs of the file for the dirty bit.
6. Reset the eager lock option and check the attributes again.
Change-Id: I0ef1a0e89c1bc202e5df4022c6d98ad0de0c1a68
Signed-off-by: Sheetal <spamecha@redhat.com>
changed from
`self.validate_vol_option('storage.reserve', '1 (DEFAULT)')`
to
`self.validate_vol_option('storage.reserve', '1')`
Change-Id: If75820b4ab3c3b04454e232ea1eccc4ee5f7be0b
Signed-off-by: “Milind” <mwaykole@redhat.com>
Steps-
1. Create a volume and mount it.
2. Set ownership permissions on the mountpoint and validate it.
3. Restart the volume.
4. Validate the permissions set on the mountpoint.
Change-Id: I1bd3f0b5181bc93a7afd8e77ab5244224f2f4fed
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Add test to verify whether the glusterd crash is found while
performing a peer probe with firewall services removed.
Change-Id: If68c3da2ec90135a480a3cb1ffc85a6b46b1f3ef
Signed-off-by: Pranav <prprakas@redhat.com>
Steps:
1. Create a volume and start it.
2. Fetch the brick list
3. Bring any one brick down and unmount the brick
4. Force start the volume and check that not all the bricks are online
5. Remount the removed brick and bring back the brick online
6. Force start the volume and check if all the bricks are online
Change-Id: I464d3fe451cb7c99e5f21835f3f44f0ea112d7d2
Signed-off-by: nik-redhat <nladha@redhat.com>