Bug https://bugzilla.redhat.com/show_bug.cgi?id=1690254 has to be fixed
before merging this patch.
Change-Id: I90e669269fafa9d0a064a64883c3e4b88080d25f
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The error message displayed when peer detach is issued
while bricks are present on the node being detached has changed.
Adding logic to handle both the new as well as the old
error message.
Old msg:
peer detach: failed: Brick(s) with the peer <my_server>
exist in cluster
New msg:
peer detach: failed: Peer <my_server> hosts one or more bricks.
If the peer is in not recoverable state then use either
replace-brick or remove-brick command with force to remove
all bricks from the peer and attempt the peer detach again.
Change-Id: I3d8fdac2c33638ecc2a8b5782c68caebbf17cf41
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
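Handling both messages can be sketched as a substring check; the helper name and the exact fragments matched below are illustrative, not the actual glusto-tests code:

```python
# Hypothetical sketch: accept either the old or the new peer-detach error text.
OLD_FRAGMENT = "exist in cluster"
NEW_FRAGMENT = "hosts one or more bricks"

def is_bricks_present_error(stderr):
    """True if stderr matches either known 'peer hosts bricks' message."""
    return OLD_FRAGMENT in stderr or NEW_FRAGMENT in stderr
```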
The error message which is displayed when we do a gluster v get
for options which don't exist has been changed. Adding if-based
logic that checks for the old as well as the new error message.
Old msg:
volume get option: failed: Did you mean auth.allow or ...reject?
New msg:
volume get option: failed: Did you mean ctime.noatime?
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I9496d391a7da9dba64d3426a024c2b1b68455f20
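Since both messages share a common prefix, the if-based check can reduce to one prefix test; this is a hedged sketch, not the actual test code:

```python
# Both the old and the new message begin with the same text, so a prefix
# check covers either suggestion.
UNKNOWN_OPTION_PREFIX = "volume get option: failed: Did you mean "

def is_unknown_option_error(stderr):
    return stderr.startswith(UNKNOWN_OPTION_PREFIX)
```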
Test Case:
1) Create a volume and start it.
2) Mount volume on client and start IO.
3) Start profile info on the volume.
4) Run profile info with different parameters
and check whether all bricks are listed.
5) Stop profile on the volume.
6) Create another volume.
7) Start profile without starting the volume.
Change-Id: I6e8ec9285d48c1c828cd1d20bff6ea8f3de064f7
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
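Step 4 boils down to comparing the brick list against the profile output; a minimal sketch (helper name and output handling assumed, not taken from glusto-tests):

```python
def missing_bricks(profile_output, bricks):
    """Return bricks absent from `gluster volume profile <vol> info` output."""
    return [brick for brick in bricks if brick not in profile_output]
```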
Test Case:
1) Create a volume and start it.
2) Mount volume on client and start IO.
3) Start profile info on the volume.
4) Stop glusterd on one node.
5) Run profile info with different parameters and
check whether all bricks are listed.
6) Stop profile on the volume.
Change-Id: Ie573414816362ebbe30d2c419fd0e348522ceaec
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test case:
1. Create a 1x3 volume with only 3 nodes from the cluster.
2. Mount the volume on the client node using the IP of the fourth node.
3. Write I/O to the volume.
4. Detach node N4 from the cluster.
5. Create a new directory on the mount point.
6. Create a few files using the same command used in step 3.
7. Add three more bricks to make the volume 2x3 using the add-brick command.
8. Do a gluster volume rebalance on the volume.
9. Create more files from the client on the mount point.
10. Check for files on bricks from both replica sets.
11. Create a new directory from the client on the mount point.
12. Check for the directory in both replica sets.
Change-Id: I228b79955dca565a40994919b2903e59cad7d8f5
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
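Steps 10 and 12 can be phrased as a per-replica-set membership check; a hypothetical sketch assuming `brick_files` maps each brick to the set of file names found on it:

```python
# On a replicated volume a file lands on every brick of exactly one
# replica set, so presence means "all bricks of some replica set have it".
def file_on_replica_set(fname, replica_set, brick_files):
    return all(fname in brick_files[brick] for brick in replica_set)

def file_present_in_volume(fname, replica_sets, brick_files):
    return any(file_on_replica_set(fname, rs, brick_files) for rs in replica_sets)
```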
1. Create a volume
2. Set the quorum type to server and ratio to 90
3. Stop glusterd randomly on one of the nodes
4. Start profile on the volume
5. Start glusterd on the node where it is stopped
6. Start profile on the volume
7. Stop profile on the volume where it is started
Change-Id: Ifeb9fddf6f1a14c9df73ed2f0453636d2853e944
Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
Change-Id: Ibf41c11a4e98baeaad658ee10ba8a807318504be
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
In Jenkins this case fails with "peers are not connected"
during volume creation. Now adding a check before creating
the volume to make sure that the peers are in the cluster and
in connected state after peer probe.
Change-Id: I8aa9d2c4d1669475dd8867d42752a31604ff572f
Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
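The "peers connected" check described above is essentially a poll with a timeout; a sketch where `get_peer_states` is a stand-in for parsing `gluster peer status`:

```python
import time

def wait_for_peers_connected(get_peer_states, timeout=60, interval=2):
    """Poll until every peer reports 'Connected', or give up after timeout seconds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        states = get_peer_states()
        if states and all(state == "Connected" for state in states):
            return True
        time.sleep(interval)
    return False
```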
Change-Id: I405843e0093ddb7138ee0a8afbfd4cd2f91e6284
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I252ab0c0f6248b9a5c1d7977146c15876e144b38
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I4a1097fbdebd49555fffcfa5fe609f4070e39182
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
In Jenkins, right after peer probe, the add-brick function
fails with "peer not in cluster". So adding a check that the
peer is connected before proceeding to the next step.
Change-Id: I73bf92819ad44f7a6a14795ab07c45d260cd04eb
Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
Change-Id: I53fb7f4cceae395698568129669dc5f3a9a5e4bb
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I1bfa2fb3ae4ff1fc247b40c73f4fade9a3afeede
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I64309c3b46dc9087eeb3181acba63b981b2ecc6f
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: Ica0771bdee1e96e9d6bb5157fb6c2125a4b419f1
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: Idc379ad7f31274cc63f384d7223bf769bb89ace3
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
glusterd
Change-Id: I6ee034f019a4aa36a83e087f2d9fed007e4fd9d7
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Altered code to check for daemons only on servers where the bricks
for a given volume are present.
Change-Id: I79312f3b09fd5e1b0fdf6db40e29481662e56303
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
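Restricting the daemon check to brick-hosting servers starts from the `server:/path` brick notation; a minimal illustrative helper (not the actual glusto-tests implementation):

```python
def servers_hosting_bricks(bricks):
    """Bricks are 'server:/path' strings; return unique servers, order preserved."""
    seen = []
    for brick in bricks:
        server = brick.split(":")[0]
        if server not in seen:
            seen.append(server)
    return seen
```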
Change-Id: Ibd50170d2c3172d7b98c2174630d31a066762f7c
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I627c78792c6c1ea12c4a023095a4a983f8cee9b0
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I0486abff96ea3ea626ce4d18ac0c24f10ed6a846
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I9e61480ff93d0c16b66eeddacdaef715a0b47d1c
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: Ie44703a9d114b9ecaa5bbce07a98c8a040393f2c
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Earlier the test randomly selected one node from the whole
cluster and stopped glusterd on it, but the volume type used
in the test is pure distribute and the default glusto config
takes 4 bricks.
With the fix, the node is now selected randomly from only the
first 4 nodes used for volume creation, so creation will fail
as expected.
Change-Id: I3cf2fc8281c9747b190e1fe9ef471edbb2c4d2ca
Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
Change-Id: I7494b877dff64e195a3517af0176e2a00fa2a86c
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I7361059b663ea19c08a44c4763881634703d36a4
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: Ic7f61241addec2ad81c4ef05dd3268c013dbb083
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: Iac928311281fac68a5e0c773d87842a35c8d499e
Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
Earlier the quorum ratio was not reset to its default, which
affected the remaining cases.
Change-Id: I5b2e6c23c5b9cd4cfb310534a7ff89a99bf0931b
Signed-off-by: root <root@localhost.localdomain>
Change-Id: Ic60e9da0a97818cba59b5be5048492e54fc0edb3
Signed-off-by: Sri <sselvan@redhat.com>
Change-Id: I3c6a9169f3b6afb7f92aca28f59efa42ca2a6b21
Signed-off-by: Sri <sselvan@redhat.com>
Change-Id: If8d18cd60d1993ce46fa019b659770cf6e7aa6b8
Signed-off-by: Sri <sselvan@redhat.com>
Change-Id: I46fc2feffe6443af6913785d67bf310838532421
It could have failed without anyone noticing.
Added 'xfail' - do we expect the deletion to fail? -
and changed the tests accordingly.
On the way, ensured stdout and stderr are logged in case of such failures.
Change-Id: Ibdf7a43cadb0393707a6c68c19a664453a971eb1
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
No functional change, just make the tests a bit more readable.
It could be moved to a decorator later on, wrapping tests.
Change-Id: I484bb8b46907ee8f33dfcf4c960737a21819cd6a
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
In this test case we are checking whether the peer status
shows the FQDN or not.
Steps followed are:
1. Peer probe to a new node i.e., N1 to N2 using hostname
2. Check peer status in N2, it should have FQDN of N1
3. Check peer status on N1, it should have FQDN of N2
4. Create a distributed volume with single brick on each node.
5. Start volume
6. Peer probe to a new node N3 using IP
7. Add a brick from node3 to the volume, add brick should succeed.
8. Get volume info, it should have correct information
Change-Id: I7f2bb8cecf28e61273ca83d7e3ad502ced979c5c
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
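Steps 2 and 3 need a way to tell an FQDN apart from a short hostname or a bare IP; a rough, illustrative check (not the actual test's logic):

```python
import re

def looks_like_fqdn(hostname):
    """Dotted name that is not a bare IPv4 address (illustrative check only)."""
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", hostname):
        return False
    return "." in hostname.strip(".")
```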
-> Create Volume
-> Create snap for that volume
-> Enable uss
-> Check whether snapd is running or not
-> Probe a new node while snapd is running
Change-Id: Ic28036436dc501ed894f3f99060d0297dd9d3c8a
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
-> Create volume
-> Enable server quorum
-> Set server quorum ratio to 95%
-> Stop glusterd on any one of the nodes
-> Perform remove brick operation
-> Start glusterd
-> Check gluster vol info; bricks should be the same before and after
performing the remove brick operation.
Change-Id: I4b7e97ecc6cf6854ec8ff36d296824e549bf9b97
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
-> Create Volume
-> Remove any one Brick directory
-> Start Volume
-> Check the gluster volume status
Change-Id: I83c25c59607d065f8e411e7befa8f934009a9d64
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
1. Trusted storage pool of 4 nodes
2. Create a distributed-replicated volume with 4 bricks
3. Start the volume
4. Fuse mount the gluster volume on a node outside the trusted pool
5. Create some data files
6. Start remove-brick operation for one replica pair
7. Restart glusterd on all nodes
8. Try to commit the remove-brick operation while rebalance
is in progress, it should fail
Change-Id: I64901078865ef282b86c9b3ff54d065f976b9e84
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
1. Trusted storage pool of 2 nodes
2. Create a distributed volume with 2 bricks
3. Start the volume
4. Stop glusterd on node 2
5. Modify any volume option on node 1
6. Start glusterd on node 2
7. Check volume status; the brick should get a port
Change-Id: I688f954f5f53678290e84df955f5529ededaf78f
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
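Step 7's "brick should get a port" check can be sketched over rows parsed from `gluster volume status`; the tuple shape used here is an assumption, not glusto-tests' actual structure:

```python
def all_bricks_have_ports(status_rows):
    """status_rows: (brick, port, online) tuples parsed from volume status."""
    return all(online and port not in ("N/A", "", None)
               for _brick, port, online in status_rows)
```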
-> Create Volume
-> Fuse mount the volume
-> Perform I/O on fuse mount
-> Add bricks to the volume
-> Perform rebalance on the volume
-> While rebalance is in progress, restart glusterd on all
the nodes in the cluster
Change-Id: I522d7aa55adedc2363bf315f96e51469b6565967
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
1. Trusted storage pool of 3 nodes
2. Create a distributed volume with 3 bricks
3. Start the volume
4. Fuse mount the gluster volume on a node outside the trusted pool
5. Remove a brick from the volume
6. Check remove-brick status
7. Stop the remove-brick process
8. Perform fix-layout on the volume
9. Get the rebalance fix-layout status
10. Create a directory from the mount point
11. Check the trusted.glusterfs.dht extended attribute for the newly
created directory on the removed brick
Change-Id: I055438056a9b5df26599a503dd413225eb6f87f5
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
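Step 11's xattr check can be done over `getfattr -d -m . -e hex <dir>` output; a hedged sketch, with the sample layout value in the test being purely illustrative:

```python
def has_dht_xattr(getfattr_output):
    """Look for the trusted.glusterfs.dht layout xattr in getfattr output."""
    return any(line.startswith("trusted.glusterfs.dht=")
               for line in getfattr_output.splitlines())
```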
-> Create volume
-> Mount volume
-> Write files on the mount point
-> Delete files from the mount point
-> Check all brick logs for any errors
Change-Id: Ic744ad04daa0bdb7adcc672360c9ed03f56004ab
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
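Scanning brick logs for errors can key off the severity letter gluster writes into each log line; a crude substring sketch, not a real log parser:

```python
def error_lines(log_text):
    """Collect lines whose severity field is 'E' (error) in a gluster log."""
    return [line for line in log_text.splitlines() if " E " in line]
```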
1. Trusted storage pool of 2 nodes
2. Create a distributed volume with 2 bricks
3. Start the volume
4. Mount the volume
5. Add some data files on the mount
6. Start rebalance with force
7. Stop glusterd on the 2nd node
8. Check rebalance status; it should not hang
9. Issue a volume-related command
Change-Id: Ie3e809e5fe24590eec070607ee99417d0bea0aa0
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Change-Id: Ica3d1175ee5d2c6a45e7b7d6513885ee2b84d960
Signed-off-by: Bala Konda Reddy Mekala <bmekala@redhat.com>
Steps followed are:
1. Create and start a volume
2. Set cluster.server-quorum-type as server
3. Set cluster.server-quorum-ratio as 95%
4. Bring down glusterd in half of the nodes
5. Confirm that quorum is not met by checking whether the bricks are down.
6. Perform an add brick operation, which should fail.
7. Check whether the added brick is part of the volume.
Change-Id: I93e3676273bbdddad4d4920c46640e60c7875964
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
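Step 5's quorum arithmetic is a straightforward ratio comparison; a minimal sketch:

```python
def quorum_met(active_nodes, total_nodes, ratio_percent=95):
    """Server quorum holds when active/total meets the configured ratio."""
    return active_nodes * 100 >= total_nodes * ratio_percent
```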
-> Set global options and other volume specific options on the volume
-> gluster volume set VOL nfs.rpc-auth-allow 1.1.1.1
-> gluster volume set VOL nfs.addr-namelookup on
-> gluster volume set VOL cluster.server-quorum-type server
-> gluster volume set VOL network.ping-timeout 20
-> gluster volume set VOL nfs.port 2049
-> gluster volume set VOL performance.nfs.write-behind on
-> Peer probe for a new node
Change-Id: Ifc06159b436e74ae4865ffcbe877b84307d517fd
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
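The option/value pairs above can be driven from one table of settings; the helper below just builds the CLI strings and is illustrative, not glusto-tests code:

```python
# The options listed in the commit message, as one settings table.
OPTIONS = {
    "nfs.rpc-auth-allow": "1.1.1.1",
    "nfs.addr-namelookup": "on",
    "cluster.server-quorum-type": "server",
    "network.ping-timeout": "20",
    "nfs.port": "2049",
    "performance.nfs.write-behind": "on",
}

def volume_set_commands(volname, options):
    """Build the `gluster volume set` command line for each option."""
    return ["gluster volume set %s %s %s" % (volname, opt, val)
            for opt, val in sorted(options.items())]
```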