Test case:
1. Create a volume, start it and mount it
2. Create some files on the mount point
3. Collect arequal checksum on mount point pre-rebalance
4. Do the following 3 times:
   a. Expand the volume
   b. Start rebalance and wait for it to finish
   c. Collect arequal checksum on mount point post-rebalance
      and compare with value from step 3
Change-Id: I8a455ad9baf2edb336258448965b54a403a48ae1
Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
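A minimal sketch of the checksum comparison in steps 3 and 4c, assuming the arequal-checksum utility is installed on the client; the mount path is hypothetical:

    # Sketch: compare arequal checksums taken before and after rebalance.
    # Assumes the arequal-checksum binary exists on the client;
    # /mnt/testvol is a hypothetical mount point.
    import subprocess

    def arequal(mount_point):
        out = subprocess.run(["arequal-checksum", "-p", mount_point],
                             capture_output=True, text=True, check=True)
        return out.stdout

    pre = arequal("/mnt/testvol")
    # ... expand the volume and wait for rebalance to finish ...
    post = arequal("/mnt/testvol")
    assert pre == post, "arequal checksum changed across rebalance"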
Test scenarios added:
1. Test to check entry self heal.
2. Test to check metadata self heal.
3. Test self heal when files are removed and dirs
created with the same name.
Additional libraries added:
1. group_del(): Deletes created groups
2. enable_granular_heal(): Enables granular heal
3. disable_granular_heal(): Disables granular heal
Change-Id: Iffa9a100fddaecae99c384afe3aaeaf13dd37e0d
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
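A minimal sketch of what the granular heal helpers might wrap, assuming they drive the gluster CLI's granular-entry-heal toggle; the volume name is hypothetical:

    # Sketch: toggle granular entry heal through the gluster CLI.
    # "testvol" is a hypothetical volume name.
    import subprocess

    def set_granular_heal(volname, enable=True):
        state = "enable" if enable else "disable"
        subprocess.run(["gluster", "volume", "heal", volname,
                        "granular-entry-heal", state], check=True)

    set_granular_heal("testvol", enable=True)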
Test to verify the df -h output when, for a given volume, bricks are
replaced, the volume is shrunk, and the volume is expanded.
Steps:
- Take the output of df -h.
- Replace any one brick of the volume.
- Wait till heal is completed.
- Repeat steps 1, 2 and 3 for all bricks of all volumes.
- Check if there are any inconsistencies in the output of df -h.
- Remove bricks from the volume and check the output of df -h.
- Add bricks to the volume and check the output of df -h.
The size of the mount points should remain unchanged during a replace
operation, and the sizes should vary according to the shrink or expand
operation performed on the volume.
Change-Id: I323da4938767cad1976463c2aefb6c41f355ac57
Signed-off-by: Pranav <prprakas@redhat.com>
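A minimal sketch of the consistency check, assuming the size column of df's output is compared before and after a replace-brick; the mount path is hypothetical:

    # Sketch: record the size column of `df -h` for a mount and assert
    # it is unchanged after a replace-brick. /mnt/testvol is hypothetical.
    import subprocess

    def mount_size(mount_point):
        out = subprocess.check_output(["df", "-h", mount_point], text=True)
        # second line of df output: filesystem size used avail use% target
        return out.splitlines()[1].split()[1]

    before = mount_size("/mnt/testvol")
    # ... replace one brick and wait for heal to complete ...
    after = mount_size("/mnt/testvol")
    assert before == after, "mount size changed during replace-brick"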
Adding additional checks to verify the glusterd logs
for `Responded to` and `Received ACC` while performing
a glusterd restart.
Replacing reboot with bringing the network interface down to validate
the peer probe scenarios.
Adding a lib to bring down a network interface.
Change-Id: Ifb01d53f67835224d828f531e7df960c6cb0a0ba
Signed-off-by: Pranav <prprakas@redhat.com>
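A minimal sketch of the interface-down lib, assuming it drives the standard ip(8) command; the interface name is hypothetical:

    # Sketch: bring a network interface down/up with ip(8).
    # "eth0" is a hypothetical interface name.
    import subprocess

    def set_interface(iface, up):
        state = "up" if up else "down"
        subprocess.run(["ip", "link", "set", iface, state], check=True)

    set_interface("eth0", up=False)   # peer becomes unreachable
    # ... validate the peer probe scenarios ...
    set_interface("eth0", up=True)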
Test case:
1. Create a volume, start it and mount it to a client.
2. Start I/O on volume.
3. Add brick and trigger rebalance, wait for rebalance to complete.
(The volume which was 1x3 should now be 2x3)
4. Add brick and trigger rebalance, wait for rebalance to complete.
(The volume which was 2x3 should now be 3x3)
5. Remove brick from volume such that it becomes a 2x3.
6. Remove brick from volume such that it becomes a 1x3.
7. Wait for I/O to complete and check for any input/output errors in
both the I/O and rebalance logs.
Additional library fix (see the sketch below):
Adding `return True` at the end of is_layout_complete()
to return True if no issues are found in the layout.
Reference BZ: #1726673
Change-Id: Ifd0360f948b334bfcf341c1015a731274acdb2bf
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
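A sketch of the shape of the is_layout_complete() fix; the existing layout checks are elided and the signature follows glusto-tests:

    # Sketch of the fix: after every layout check has passed without
    # returning False, fall through to an explicit success return.
    def is_layout_complete(mnode, volname, dirpath):
        # ... existing checks that return False on any hole/overlap ...
        return True  # the added line: no issues found in the layout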
Problem:
In test_no_glustershd_with_distribute, we are trying to
set up all volume types at once, which fails on setup
in CI as we don't have sufficient bricks.
Solution:
Enable brick sharing in setup_volume() by setting
multi_vol to True.
Change-Id: I2129e3059fd156138d0a874d6aa6904f3cb0cb9b
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
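A sketch of the change, assuming the setup_volume() signature from glusto-tests; mnode, all_servers_info and volume_config come from the test class config:

    # Sketch: let setup_volume() share bricks between volumes so that
    # CI nodes with few bricks can host all volume types at once.
    from glustolibs.gluster.volume_libs import setup_volume

    ret = setup_volume(mnode, all_servers_info, volume_config,
                       multi_vol=True)
    assert ret, "Failed to set up volume with shared bricks"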
Test Steps:
1. Create, start and mount a volume consisting of 2 subvols on 2 clients
2. Create a dir `dir` and file `dir/file` from client0
3. Take note of the layouts of `brick1`/dir and `brick2`/dir of the volume
4. Validate a successful lookup from only one brick path
5. Re-assign the layouts, i.e., brick1/dir to brick2/dir and vice versa
6. Remove `dir/file` from client0 and recreate the same file from client0
and client1
7. Validate a successful lookup from only one brick path (as the layout
has changed, the file creation path will change)
8. Validate that the checksums match from both the clients
Change-Id: I91ec020ee616a0f60be9eff92e71b12d20a5cadf
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
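A minimal sketch of step 3, assuming the per-brick layout is read from the trusted.glusterfs.dht xattr with getfattr; brick paths are hypothetical:

    # Sketch: read the DHT layout xattr of a directory on each brick.
    # Brick paths are hypothetical.
    import subprocess

    def dht_layout(brick_dir):
        return subprocess.check_output(
            ["getfattr", "-n", "trusted.glusterfs.dht", "-e", "hex",
             "--absolute-names", brick_dir], text=True)

    print(dht_layout("/bricks/brick1/dir"))   # hash range on subvol 1
    print(dht_layout("/bricks/brick2/dir"))   # hash range on subvol 2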
1. On node N1, add "transport.socket.bind-address N1" in
/etc/glusterfs/glusterd.vol
2. Create replicate (1x3) and disperse (4+2) volumes with
names longer than 108 chars
3. Mount both volumes using node N1, where you added the
"transport.socket.bind-address", and start IO (like untar)
4. Perform add-brick of 3 bricks on the replicate volume
5. Start rebalance on the replicate volume
6. Perform add-brick of 6 bricks on the disperse volume
7. Start rebalance on the disperse volume
Change-Id: Ibc57f18b84d21439bbd65a665b31d45b9036ca05
Signed-off-by: Milind <mwaykole@redhat.com>
Test case:
1. Create a volume, start it and mount it
2. Create a 2nd volume, start it and mount it
3. Create files on mount points
4. Collect arequal checksum on mount point pre-rebalance
5. Expand the volumes
6. Start rebalance simultaneously on the 2 volumes
7. Wait for rebalance to complete
8. Collect arequal checksum on mount point post-rebalance and compare
with value from step 4
Change-Id: I6120bb5f96ff5cfc345d2b0f84dd99ca749ffc74
Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
Test case:
1. Detach a peer
2. Create a volume, start it and mount it
3. Start creating a few files on mount point
4. Collect arequal checksum on mount point pre-rebalance
5. Expand the volume
6. Start rebalance
7. While rebalance is in progress, probe a peer and check if the peer
was probed successfully
8. Collect arequal checksum on mount point post-rebalance and compare
with value from step 4
Change-Id: Ifee9f3dcd69e87ba1d5b4b97c29c0a6cb7491e60
Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
Test to verify whether the lock is granted to two different
clients at the same time.
- Take lock from client 1 => Lock is acquired
- Try taking lock from client 2
- Release lock from client 1
- Take lock from client 2
- Again try taking lock from client 1
Also, verifying the behaviour with the eager-lock and other-eager-lock
options set to on and off.
Change-Id: Ie839f893f7a4f9b2c6fc9375cdf9ee8a27fad13b
Signed-off-by: Pranav <prprakas@redhat.com>
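A minimal sketch of the lock contention from one client, using POSIX locks on the fuse mount; the file path is hypothetical:

    # Sketch: client 1 takes an exclusive lock on a file on the mount.
    # Running the same with LOCK_EX | LOCK_NB from client 2 while the
    # lock is held should raise OSError. The file path is hypothetical.
    import fcntl

    fd = open("/mnt/testvol/lockfile", "w")
    fcntl.flock(fd, fcntl.LOCK_EX)    # acquire from client 1
    # ... client 2 now attempts the lock and must fail ...
    fcntl.flock(fd, fcntl.LOCK_UN)    # release; client 2 can acquire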
Test Scenarios:
---------------
1. Test heal of hard links through default heal
2. Test heal of soft links through default heal
CentOS-CI failing due to issue:
https://github.com/gluster/glusterfs/issues/1954
Change-Id: I9fd7695de6271581fed7f38ba41bda8634ee0f28
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Add a test, similar to the existing rebalance-preserves-permission TC,
which runs a double expand before rebalance.
Change-Id: I4a37c383bb8e823c6ca84c1a6e6699b18e80a450
Signed-off-by: Tamar Shacked <tshacked@redhat.com>
Steps-
1. Create a volume.
2. Create a session on the volume.
3. Create various files from mount point.
4. Bring down one of the nodes.
5. Perform glusterfind pre.
6. Perform glusterfind post.
7. Check the contents of outfile.
8. Create more files from mount point.
9. Reboot one of the nodes.
10. Perform glusterfind pre.
11. Perform glusterfind post.
12. Check the contents of outfile.
Change-Id: I5d27bf32f3f028d0e919e8c33ac742d00193b81e
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
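A minimal sketch of the glusterfind cycle in steps 2-7, assuming the standard create/pre/post subcommands; session, volume and outfile names are hypothetical:

    # Sketch of the glusterfind pre/post cycle; the session name,
    # volume name and outfile path are hypothetical.
    import subprocess

    subprocess.run(["glusterfind", "create", "sess1", "testvol"], check=True)
    # ... create files, bring one node down ...
    subprocess.run(["glusterfind", "pre", "sess1", "testvol",
                    "/tmp/outfile.txt"], check=True)
    print(open("/tmp/outfile.txt").read())    # step 7: check the outfile
    subprocess.run(["glusterfind", "post", "sess1", "testvol"], check=True)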
After an entry heal is complete, verify that the atime/mtime/ctime of
the parent directory is the same on all bricks of the replica.
The test is run with features.ctime enabled as well as disabled.
Change-Id: Iefb6a8b50bd31cf5c5aae72e4030239cc0f1a43d
Reference: BZ# 1572163
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
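A minimal sketch of the verification, assuming the directory's timestamps are stat()ed on each brick path (in the real test the bricks live on different servers); brick paths are hypothetical:

    # Sketch: assert that atime/mtime/ctime of a directory agree on all
    # bricks of the replica. Brick paths are hypothetical.
    import os

    def times(path):
        st = os.stat(path)
        return (st.st_atime, st.st_mtime, st.st_ctime)

    bricks = ["/bricks/b1/dir", "/bricks/b2/dir", "/bricks/b3/dir"]
    stamps = [times(b) for b in bricks]
    assert all(s == stamps[0] for s in stamps), \
        "parent dir timestamps differ across bricks"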
1. Create a distributed-replicated volume and start it.
2. Enable the storage.reserve option on the volume using the command
gluster volume set storage.reserve 50
3. Mount the volume on a client.
4. Add some data on the mount point (should be within reserve limits).
5. Now, add-brick and trigger rebalance.
While rebalance is in progress, change the reserve limit to a lower
value, say 30.
6. Stop the rebalance.
7. Reset the storage.reserve value to 50, as in step 2.
8. Trigger rebalance.
9. While rebalance is in progress, change the reserve limit to a higher
value, say 70.
Change-Id: I1b2e449f74bb75392a25af7b7088e7ebb95d2860
Signed-off-by: Milind <mwaykole@redhat.com>
Test case:
1. Create a volume, start it and mount it to a client.
2. Create 10 files on the mount point and set ACLs on the files.
3. Check the ACL values and collect arequal-checksum.
4. Add bricks to the volume and start rebalance.
5. Check the ACL values (they should be the same as in step 3),
then collect arequal-checksum and compare it with the one collected
in step 3.
Additional functions added:
a. set_acl(): Set an ACL rule on a specific file
b. get_acl(): Get all ACL rules set on a file
c. delete_acl(): Delete a specific or all ACL rules
set on a file
Change-Id: Ia420cbcc8daea272cd4a282ae27d24f13b4991fe
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
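A sketch of what the three new helpers might wrap, assuming they drive setfacl/getfacl; the rule format (e.g. "u:alice:rwx") and paths are hypothetical:

    # Sketch of the helpers built on setfacl/getfacl; the rule format
    # and paths are hypothetical.
    import subprocess

    def set_acl(path, rule):
        subprocess.run(["setfacl", "-m", rule, path], check=True)

    def get_acl(path):
        return subprocess.check_output(["getfacl", path], text=True)

    def delete_acl(path, rule=None):
        # -x drops one entry (perms omitted, e.g. "u:alice");
        # -b strips all extended ACL entries
        cmd = ["setfacl", "-x", rule, path] if rule else ["setfacl", "-b", path]
        subprocess.run(cmd, check=True)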
The cluster options are reset after the TC run
so that they don't persist across other
TC runs.
Change-Id: Id55bb64ded09e113cdc0fc512a17857195619e41
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Test Steps:
1) Create a volume and start it.
2) Add an iptables rule to drop outbound glusterd traffic.
3) Check if the rule is added to the iptables list.
4) Execute a few Gluster CLI commands like volume status, peer status.
5) The Gluster CLI commands should fail with a suitable error message.
Change-Id: Ibc5717659e65f0df22ea3cec098bf7d1932bef9d
Signed-off-by: nik-redhat <nladha@redhat.com>
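A minimal sketch of steps 2-3, assuming glusterd's standard management port 24007; the exact rule shape is an assumption:

    # Sketch: drop outbound traffic to glusterd's management port
    # (24007) and verify the rule is listed; clean up afterwards.
    import subprocess

    rule = ["OUTPUT", "-p", "tcp", "--dport", "24007", "-j", "DROP"]
    subprocess.run(["iptables", "-A"] + rule, check=True)
    listing = subprocess.check_output(["iptables", "-L", "OUTPUT", "-n"],
                                      text=True)
    assert "24007" in listing         # step 3: rule is present
    # ... gluster CLI commands should now fail with a suitable error ...
    subprocess.run(["iptables", "-D"] + rule, check=True)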
Test case:
1. Create a volume, start it and mount it.
2. Start creating a few files on the mount point.
3. While file creation is going on, kill one of the bricks
in the replica pair.
4. After file creation is complete, collect arequal checksum
on the mount point.
5. Bring the brick back online by starting the volume with force.
6. Check if all bricks are online and if heal is in progress.
7. Add bricks to the volume and start rebalance.
8. Wait for rebalance and heal to complete on the volume.
9. Collect arequal checksum on the mount point and compare
it with the one taken in step 4.
Change-Id: I2999b81443e8acabdb976401b0a56566a6740a39
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test case:
1. Create a volume, start it and mount it.
2. Create symlinks on the volume such that the files for the symlinks
are outside the volume.
3. Once all the symlinks are created, create a data file using dd:
dd if=/dev/urandom of=FILE bs=1024 count=100
4. Start copying the file's data to all the symlinks.
5. When data is getting copied to all files through the symlinks, add
a brick and start rebalance.
6. Once rebalance is complete, check the md5sum of each file through
its symlink and compare it with the original file.
Change-Id: Icbeaa75f11e7605e13fa4a64594137c8f4ae8aa2
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
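A minimal sketch of the step-6 check, hashing each file through its symlink and comparing against the original; paths are hypothetical:

    # Sketch: compare the md5 of the original file with each copy read
    # through its symlink after rebalance. Paths are hypothetical.
    import hashlib

    def md5(path):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    orig = md5("/mnt/testvol/FILE")
    for link in ("/mnt/testvol/link1", "/mnt/testvol/link2"):
        assert md5(link) == orig, f"{link} diverged from the original"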
Test Steps:
1) Set the max-port option in the glusterd.vol file to 49200
2) Restart glusterd on one of the nodes
3) Create 50 volumes in a loop
4) Try to start the 50 volumes in a loop
5) Confirm that the 50th volume failed to start
6) Confirm the error message due to which the volume failed to start
7) Set the max-port option in the glusterd.vol file back to the default value
8) Restart glusterd on the same node
9) Starting the 50th volume should succeed now
Change-Id: I084351db20cc37e3391061b7b313a18896cc90b1
Signed-off-by: nik-redhat <nladha@redhat.com>
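A minimal sketch of steps 1-2, assuming the option is inserted into the volume management block of glusterd.vol; the in-place edit approach is an assumption:

    # Sketch: insert "option max-port 49200" into the volume management
    # block of glusterd.vol, then restart glusterd.
    import subprocess

    VOLFILE = "/etc/glusterfs/glusterd.vol"
    with open(VOLFILE) as f:
        lines = f.readlines()
    end = next(i for i, l in enumerate(lines) if l.strip() == "end-volume")
    lines.insert(end, "    option max-port 49200\n")
    with open(VOLFILE, "w") as f:
        f.writelines(lines)
    subprocess.run(["systemctl", "restart", "glusterd"], check=True)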
Change-Id: I7cbc6422a6a6d2946440e51e8d540f47ccc9bf46
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Steps-
1. Create a gluster volume.
2. Kill all gluster related processes.
3. Start glusterd service.
4. Verify that all gluster processes are up.
5. Repeat the above steps 5 times.
Change-Id: If01788ae8bcdd75cdb55261715c34edf83e6f018
Signed-off-by: Rinku Kothiya <rkothiya@redhat.com>
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Verifying the glusterfind list command functionality with valid
and invalid values for the optional parameters.
* Create a volume
* Create a session on the volume and call glusterfind
list with the following combinations:
- Valid values for optional parameters
- Invalid values for optional parameters
NOTE:
There are no required parameters for the glusterfind list command.
Change-Id: I2677f507dad42904b404b5f2daf0e354c37c0cb4
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Fix:
Improved the check for the default quorum options on the volume
to work with the present as well as older default values.
Older default value: 51
Current default value: 51 (DEFAULT)
Change-Id: I200b81334e84a7956090bede3e2aa50b9d4cf8e0
Signed-off-by: nik-redhat <nladha@redhat.com>
Issue: The cluster options set during a TC aren't reset,
causing the cluster options to affect subsequent TC runs.
Fix: Adding volume_reset() in the tearDown of the TC to
perform a cleanup of the cluster options.
Change-Id: I00da5837d2a4260b4d414cc3c8083f83d8f6fadd
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Test steps:
1) Create a volume and start it.
2) Fetch the max bricks per process value
3) Reset the volume options
4) Fetch the max bricks per process value
5) Compare the value fetched in last step with the initial value
6) Enable brick-multiplexing in the cluster
7) Fetch the max bricks per process value
8) Compare the value fetched in last step with the initial value
Change-Id: I20bdefd38271d1e12acf4699b4fe5d0da5463ab3
Signed-off-by: nik-redhat <nladha@redhat.com>
The cluster options, once set, aren't reset, and this would cause
problems for subsequent TCs; hence resetting the options at
teardown.
Change-Id: Ifd1df2632a25ca7788a6bb4f765b3f6583ab06d6
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Test case:
1. Create a volume of type distributed-replicated or
distributed-arbiter or distributed-dispersed and start it.
2. Mount the volume to clients and create 2000 directories
and 10 files inside each directory.
3. Wait for I/O to complete on mount point and perform ls
(ls should complete within 10 seconds).
Change-Id: I5c08c185f409b23bd71de875ad1d0236288b0dcc
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
1. Stop one of the volumes
(i.e.) gluster volume stop <vol-name>
2. Get the status of the volumes with --xml dump
(i.e.) gluster volume status all --xml
The XML dump should be consistent
Signed-off-by: Milind <mwaykole@redhat.com>
Change-Id: I3e7af6d1bc45b73ed8302bf3277e3613a6b1100f
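A minimal sketch of the consistency check, parsing the XML dump with the standard library:

    # Sketch: parse `gluster volume status all --xml` and confirm the
    # dump is well-formed and reports the volumes as expected.
    import subprocess
    import xml.etree.ElementTree as ET

    out = subprocess.check_output(
        ["gluster", "volume", "status", "all", "--xml"], text=True)
    root = ET.fromstring(out)         # raises ParseError if the XML is broken
    vols = [v.findtext("volName") for v in root.iter("volume")]
    print("volumes reported:", vols)  # step 2: the dump should be consistent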
Moving the gluster mem_leak test case to the resource_leak
dir.
Change-Id: I8189dc9b509a09f793fe8ca2be53e8546babada7
Signed-off-by: Pranav <prprakas@redhat.com>
Modified the command from 'grep epoll_wait' to
'grep -i sys_epoll_wait' to address the changes in the epoll
functionality in newer versions of Linux.
Details of the changes can be found here:
https://github.com/torvalds/linux/commit/791eb22eef0d077df4ddcf633ee6eac038f0431e
Change-Id: I1671a74e538d20fe5dbf951fca6f8edabe0ead7f
Signed-off-by: nik-redhat <nladha@redhat.com>
1. Create a trusted storage pool by peer probing the node
2. Create a distributed-replicated volume
3. Start the volume and fuse mount the volume and start IO
4. Create another replicated volume and start it and stop it
5. Start rebalance on the volume.
6. While rebalance in progress, stop glusterd on one of the
nodes in the Trusted Storage pool.
7. Get the status of the volumes with --xml dump
Change-Id: I581b7713d7f9bfdd7be00add3244578b84daf94f
Signed-off-by: Milind <mwaykole@redhat.com>
This test verifies BZ:1785577
(https://bugzilla.redhat.com/show_bug.cgi?id=1785577):
that there are no memory leaks when SSL is
enabled.
Change-Id: I1f44de8c65b322ded76961253b8b7a7147aca76a
Signed-off-by: Pranav <prprakas@redhat.com>
Test Steps:
1) Setup and mount a volume on client.
2) Stop glusterd on a random server.
3) Start IO on mount points
4) Set an option on the volume
5) Start glusterd on the stopped node.
6) Verify all the bricks are online after starting glusterd.
7) Check if the volume info is synced across the cluster.
Change-Id: Ia2982ce4e26f0d690eb2bc7516d463d2a71cce86
Signed-off-by: nik-redhat <nladha@redhat.com>
Test Steps:
1. Start glusterd
2. Check that the ping timeout value in glusterd.vol is 0
3. Create a test script for the epoll thread count
4. Source the test script
5. Fetch the pid of glusterd
6. Check that the epoll thread count of glusterd is 1
Change-Id: Ie3bbcb799eb1776004c3db4922d7ee5f5993b100
Signed-off-by: nik-redhat <nladha@redhat.com>
Steps-
1. Create a distributed-replicated volume, start and mount it.
2. Create deep dirs (200) and create some 100 files in the
deepest directory.
3. Expand volume.
4. Start rebalance.
5. Once rebalance is completed, do a lookup on the mount and log
the time taken.
Change-Id: I3a55d2670cc6bda7670f97f0cd6208dc9e36a5d6
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Change-Id: I626914130554cccf1008ab43158d7063d131b870
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
Test case:
1. Create a volume, start it and mount it.
2. Create files and dirs on the mount point.
3. Add bricks to the volume.
4. Replace 2 old bricks of the volume.
5. Trigger rebalance fix-layout and wait for it to complete.
6. Check the layout on all the bricks through trusted.glusterfs.dht.
Change-Id: Ibc8ded6ce2a54b9e4ec8bf0dc82436fcbcc25f56
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test case:
1. Create a volume, start it and mount it.
2. Create a data set on the client node such that all the available
space is used and "No space left on device" error is generated.
3. Set cluster.min-free-disk to 30%.
4. Add bricks to the volume, trigger rebalance and wait for rebalance
to complete.
Change-Id: I69c9d447b4713b107f15b4801f4371c33f5fb2fc
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Scenarios:
----------
Test case 1:
1. Create a volume, start it and mount it using fuse.
2. Create 50 files on the mount point and create 50
hardlinks for the files.
3. After the files and hard links creation is complete,
add bricks to the volume and trigger rebalance on the
volume.
4. Wait for rebalance to complete and check if files are
skipped or not.
5. Trigger rebalance on the volume with force and repeat
step 4.
Test case 2:
1. Create a volume, start it and mount it using fuse.
2. Create 50 files on the mount point and set the sticky bit
on the files.
3. After the files creation and sticky bit addition is
complete, add bricks to the volume and trigger rebalance
on the volume.
4. Wait for rebalance to complete.
5. Check for data corruption by comparing arequal before
and after.
Change-Id: I61bcf14185b0fe31b44e9d2b0a58671f21752633
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test case:
1. Create a volume, start it and mount it.
2. Fill a few bricks till the min-free-disk limit is reached.
3. Add a brick to the volume.
4. Set cluster.min-free-disk to 30%.
5. Remove bricks from the volume.
(Remove-brick should pass without any errors)
6. Check for data loss by comparing arequal before and after.
Change-Id: I0033ec47ab2a2958178ce23c9d164939c9bce2f3
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test case:
1. Create a volume, start it and mount it.
2. Create some data on the volume.
3. Start remove-brick on the volume.
4. While remove-brick is in progress, kill the brick process of a
brick which is being removed.
5. Remove-brick should complete without any failures.
Change-Id: I8b8740d0db82d3345279dee3f0f5f6e17160df47
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test scenarios:
===============
Test case 1:
1. Create a volume, start it and mount it.
2. Create some data on the volume.
3. Run remove-brick start, status and finally commit.
4. Check if there is any data loss or not.
Test case 2:
1. Create a volume, start it and mount it.
2. Create some data on the volume.
3. Run remove-brick with force.
4. Check if the bricks are still seen on the volume or not.
Change-Id: I2cfd324093c0a835811a682accab8fb0a19551cb
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
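A minimal sketch of the start/status/commit cycle from test case 1, assuming the standard remove-brick CLI; volume and brick names are hypothetical and the wait loop is simplified:

    # Sketch of the start -> status -> commit cycle; volume and brick
    # names are hypothetical.
    import subprocess, time

    def rb(action):
        return subprocess.run(
            ["gluster", "--mode=script", "volume", "remove-brick",
             "testvol", "server1:/bricks/brick3", action],
            capture_output=True, text=True)

    rb("start")
    while "completed" not in rb("status").stdout:
        time.sleep(10)                # wait for data migration to finish
    rb("commit")                      # then verify there is no data loss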
Test case:
1. Enable brickmux on cluster, create a volume, start it and mount it.
2. Start the below I/O from 4 clients:
From client-1 : run script to create folders and files continuously
From client-2 : start linux kernel untar
From client-3 : while true;do find;done
From client-4 : while true;do ls -lRt;done
3. Kill brick process on one of the nodes.
4. Add brick to the volume.
5. Remove bricks from the volume.
6. Validate if I/O was successful or not.
Skip reason:
Test case skipped due to bug 1571317.
Change-Id: I48bdb433230c0b13b0738bbebb5bb71a95357f57
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Check that when there are pending heals and healing and I/O are going
on, heal info completes successfully.
Change-Id: I7b00c5b6446d6ec722c1c48a50e5293272df0fdf
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Test case:
1. Create volume, start it and mount it.
2. Open file datafile on the mount point and start copying /etc/passwd
line by line (make sure that the copy is slow).
3. Start remove-brick of the subvol to which datafile is hashed.
4. Once remove-brick is complete, compare the checksums of /etc/passwd
and datafile.
Change-Id: I278e819731af03094dcee93963ec1da115297bef
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I080328dfbcde5652f9ab697f8751b87bf96e8245
Signed-off-by: Milind <mwaykole@redhat.com>
Steps-
1. Create a volume and mount it.
2. Set the quorum type to 'server'.
3. Bring some nodes down such that quorum isn't met.
4. Brick status on the node which is up should be offline.
5. Restart glusterd on this node.
6. Brick status on the restarted node should be offline.
Change-Id: If6885133848d77ec803f059f7a056dc3aeba7eb1
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>