Commit message | Author | Age | Files | Lines
* [Test] Add TC to test multiple volume shrinks and rebalance (HEAD, master) | Barak Sason Rofman | 2021-03-09 | 1 | -0/+87

  Test case:
  1. Modify the distribution count of a volume
  2. Create a volume, start it and mount it
  3. Create some files on the mount point
  4. Collect arequal checksum on mount point pre-rebalance
  5. Do the following 3 times:
  6. Shrink the volume
  7. Collect arequal checksum on mount point post-rebalance and compare with value from step 4

  Change-Id: Ib64575e759617684009c68d8b6bb5f011c553b55
  Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
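For orientation, the shrink step boils down to a remove-brick cycle on the gluster CLI; a minimal sketch, not the test's own code (volume, brick and mount names are placeholders, and the arequal-checksum tool is assumed to be installed on the client):

    # Sketch: shrink by removing a brick, wait for data migration, then compare checksums.
    import subprocess
    import time

    VOLNAME = "testvol"                                  # placeholder volume name
    BRICK = "server1:/bricks/brick3/testvol_brick3"      # placeholder brick to remove
    MOUNT = "/mnt/testvol"                               # placeholder mount point

    def run(cmd):
        return subprocess.run(cmd, shell=True, check=True,
                              capture_output=True, text=True).stdout

    before = run("arequal-checksum -p %s" % MOUNT)

    # A shrink is a remove-brick: data is migrated off the brick, then the removal is committed.
    run("gluster --mode=script volume remove-brick %s %s start" % (VOLNAME, BRICK))
    while "completed" not in run(
            "gluster volume remove-brick %s %s status" % (VOLNAME, BRICK)):
        time.sleep(10)                                   # simplified wait on the status output
    run("gluster --mode=script volume remove-brick %s %s commit" % (VOLNAME, BRICK))

    after = run("arequal-checksum -p %s" % MOUNT)
    assert before == after, "arequal checksum changed after shrinking the volume"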
* [Test] Add TC to rebalance highly nested dir structure | Barak Sason Rofman | 2021-03-09 | 1 | -0/+99

  Test case:
  1. Create a volume, start it and mount it
  2. On mount point, create a large nested dir structure with files in the inner-most dir
  3. Collect arequal checksum on mount point pre-rebalance
  4. Expand the volume
  5. Start rebalance and wait for it to finish
  6. Collect arequal checksum on mount point post-rebalance and compare with value from step 3

  Change-Id: I87f0e8df8c4ca850bdf749583635fc8cd2ba1b86
  Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
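The nested directory structure from step 2 can be produced with a few lines of Python; the depth and file count below are arbitrary illustrations, not the values the test uses:

    # Sketch: build a deep directory chain with files only in the inner-most dir.
    import os

    MOUNT = "/mnt/testvol"      # placeholder mount point
    DEPTH = 100                 # illustrative nesting depth
    NUM_FILES = 50              # illustrative file count

    inner = os.path.join(MOUNT, *("dir%d" % i for i in range(DEPTH)))
    os.makedirs(inner, exist_ok=True)
    for i in range(NUM_FILES):
        with open(os.path.join(inner, "file%d" % i), "wb") as f:
            f.write(os.urandom(1024))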
* [Test] Bring down data bricks in cyclic order and trigger heal | Manisha Saini | 2021-03-08 | 1 | -0/+140

  Change-Id: Ibf0391a2f7709fb08326f57a0c4c899e28faf62f
  Signed-off-by: Manisha Saini <msaini@redhat.com>
* [Test] Verify auth functionality with brick down and heal | Manisha Saini | 2021-03-01 | 1 | -0/+171

  Change-Id: I050525df6cf00fae331e7ddd84661251926e31ca
  Signed-off-by: Manisha Saini <msaini@redhat.com>
* [TestFix] Remove temp code from test case | Pranav | 2021-02-17 | 1 | -33/+1

  Removing temp steps added to verify BZ#1810901

  Change-Id: I7d64ed1c797914b8a5b2f0e45271f01a09d51e98
  Signed-off-by: Pranav <prprakas@redhat.com>
* [Test] Add test to verify permission changes made on mount dir | Pranav | 2021-02-14 | 1 | -0/+134

  Adding a test to verify whether permission changes made on the mount point dir are
  reflected on the brick dirs as well, even when the change is made while a brick was down.
  1. Create a pure distribute volume
  2. Mount it on a client
  3. Check the default permission (should be 755)
  4. Change the permission to 444 and verify
  5. Kill a brick
  6. Change root permission to 755
  7. Verify permission changes on all bricks, except the down brick
  8. Bring back the brick and verify the changes are reflected

  Change-Id: I0a24b94fc5b75706c664f9e2d1363d38b77f9e3a
  Signed-off-by: Pranav <prprakas@redhat.com>
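A bare-bones version of the permission check, using plain os.chmod/os.stat against placeholder brick paths rather than the glusto helpers the test itself would use:

    # Sketch: the mode set from the client should show up on every brick's root dir.
    # MOUNT and BRICK_DIRS are placeholders; the brick checks run on the brick nodes.
    import os
    import stat

    MOUNT = "/mnt/distvol"
    BRICK_DIRS = ["/bricks/brick1/distvol", "/bricks/brick2/distvol"]

    os.chmod(MOUNT, 0o444)                              # step 4: 755 -> 444 from the client
    expected = stat.S_IMODE(os.stat(MOUNT).st_mode)

    for brick in BRICK_DIRS:
        actual = stat.S_IMODE(os.stat(brick).st_mode)
        assert actual == expected, "%s has mode %o, expected %o" % (brick, actual, expected)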
* [Test] Verify metadata and data self heal | Leela Venkaiah G | 2021-02-12 | 1 | -0/+297

  Test: test_metadata_heal_from_shd
  Steps:
  1. Create, mount and run IO on volume
  2. Set `self-heal-daemon` to `off` and bring files into metadata split brain
  3. Set `self-heal-daemon` to `on` and wait for heal completion
  4. Validate arequal checksum on backend bricks

  Test: test_metadata_heal_from_heal_cmd
  Steps:
  1. Create, mount and run IO on volume
  2. Set `self-heal-daemon` to `off` and bring files into metadata split brain
  3. Set `self-heal-daemon` to `on`, invoke `gluster vol <vol> heal`
  4. Validate arequal checksum on backend bricks

  Test: test_data_heal_from_shd
  Steps:
  1. Create, mount and run IO on volume
  2. Set `self-heal-daemon` to `off` and bring files into data split brain
  3. Set `self-heal-daemon` to `on` and wait for heal completion
  4. Validate arequal checksum on backend bricks

  Change-Id: I24411d964fb6252ae5b621c6569e791b54dcc311
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
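All three tests hinge on toggling the self-heal daemon and triggering heal; on the CLI that reduces to something like the following (volume name is a placeholder):

    # Sketch: disable/enable the self-heal daemon and trigger/inspect heal via the CLI.
    import subprocess

    VOLNAME = "testvol"

    def gluster(args):
        return subprocess.run("gluster " + args, shell=True, check=True,
                              capture_output=True, text=True).stdout

    gluster("volume set %s self-heal-daemon off" % VOLNAME)
    # ... run IO and bring files into metadata/data split brain here ...
    gluster("volume set %s self-heal-daemon on" % VOLNAME)
    gluster("volume heal %s" % VOLNAME)                   # explicit trigger (heal_from_heal_cmd case)
    print(gluster("volume heal %s info" % VOLNAME))       # poll this until no entries remain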
* [Test] Test split-brain with hard link and brick down scenario | Manisha Saini | 2021-02-10 | 1 | -0/+175

  Change-Id: Ib58a45522fc57b5a55207d03b297060b29ab27cf
  Signed-off-by: Manisha Saini <msaini@redhat.com>
* [Test]: Add tc to check increase in glusterd memory consumption | nik-redhat | 2021-02-10 | 1 | -0/+207

  Test Steps:
  1) Enable brick-multiplex and set max-bricks-per-process to 3 in the cluster
  2) Get the glusterd memory consumption
  3) Perform create, start, stop, delete operations for 100 volumes
  4) Check glusterd memory consumption, it should not increase by more than 50MB
  5) Repeat steps 3-4 two more times
  6) Check glusterd memory consumption, it should not increase by more than 10MB

  Upstream issue link: https://github.com/gluster/glusterfs/issues/2142

  Change-Id: I54d5e337513671d569267fa23fe78b6d3410e944
  Signed-off-by: nik-redhat <nladha@redhat.com>
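Measuring glusterd's memory consumption (steps 2, 4 and 6) can be done straight from /proc; a sketch, assuming pidof finds a single glusterd process:

    # Sketch: read glusterd's resident memory (VmRSS, reported in kB) from /proc.
    import subprocess

    def glusterd_rss_kb():
        pid = subprocess.check_output(["pidof", "glusterd"], text=True).split()[0]
        with open("/proc/%s/status" % pid) as status:
            for line in status:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
        raise RuntimeError("VmRSS not found for glusterd")

    baseline = glusterd_rss_kb()
    # ... create/start/stop/delete 100 volumes ...
    growth_kb = glusterd_rss_kb() - baseline
    assert growth_kb <= 50 * 1024, "glusterd memory grew by more than 50MB"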
* [Testfix] Fix I/O logic and change lookup command | kshithijiyer | 2021-02-02 | 1 | -6/+17

  Problem:
  1. I/O command has %s instead of %d for an int.
  2. Lookup logic doesn't trigger client heal.

  Fix:
  Changing to %d and adding a cmd which automatically triggers heal.

  Change-Id: Ibc8a1817894ef755b13c3fee21218adce3ed9c77
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to test split brain with node reboot | Manisha Saini | 2021-01-25 | 1 | -0/+149

  Change-Id: Ic5258b83b92f503c1ee50368668bd7e1244ac822
  Signed-off-by: Manisha Saini <msaini@redhat.com>
* [Test] Test Memory leak for arbiter volume on deleting file | Manisha Saini | 2021-01-25 | 1 | -0/+113

  Change-Id: I1a9cefd16ddb376d8c496089d114c92efa1fd1ea
  Signed-off-by: Manisha Saini <msaini@redhat.com>
* [Test] Add test to check default granular entry heal | kshithijiyer | 2021-01-25 | 1 | -0/+235

  Test case:
  1. Create a cluster.
  2. Create a volume, start it and mount it.
  3. Check if cluster.granular-entry-heal is ON by default or not.
  4. Check /var/lib/glusterd/<volname>/info for cluster.granular-entry-heal=on.
  5. Check if option granular-entry-heal is present in the volume graph or not.
  6. Kill one or two bricks of the volume depending on volume type.
  7. Create all types of files on the volume like text files, hidden files, link files, dirs, char device, block device and so on.
  8. Bring back the killed brick by restarting the volume.
  9. Wait for heal to complete.
  10. Check arequal-checksum of all the bricks and see if it's proper or not.

  Reference BZ: #1890506

  Change-Id: Ic264600e8d1e29c78e40ab7f93709a31ba2b883c
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
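Steps 3 and 4 map onto a CLI query plus a grep of the volume's info file; a sketch (volume name is a placeholder, and the info file is assumed to live under /var/lib/glusterd/vols/):

    # Sketch: check the default value of cluster.granular-entry-heal in two places.
    import subprocess

    VOLNAME = "testvol"

    out = subprocess.check_output(
        ["gluster", "volume", "get", VOLNAME, "cluster.granular-entry-heal"], text=True)
    # the value column of the last line is expected to read "on"
    assert out.strip().split()[-1] == "on", "cluster.granular-entry-heal is not on by default"

    with open("/var/lib/glusterd/vols/%s/info" % VOLNAME) as info:
        assert "cluster.granular-entry-heal=on" in info.read()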
* [Test] Add test to verify memory leak when ssl enabled | Pranav | 2021-01-25 | 1 | -0/+231

  Adding tests to verify memory leak when
  1. Management encryption is enabled and gluster v heal info is run for 12 hrs
  2. Management encryption is enabled and brick-mux is enabled

  Change-Id: If6ff76afe87490a135c450cbec99bceb3a6011ae
  Signed-off-by: Pranav <prprakas@redhat.com>
* [LibFix] Add nfs_ganesha lib fix | Pranav | 2021-01-22 | 2 | -9/+72

  1. The volume mount has to be done via VIP for nfs_ganesha.
  2. Add steps to handle ganesha setup and teardown.

  Change-Id: I2e33d30118502b71ca9ca4821ef633ba4bd5fa10
  Signed-off-by: Pranav <prprakas@redhat.com>
* [Testfix] Fix I/O logic and use text for getfattr | kshithijiyer | 2021-01-22 | 1 | -21/+20

  Problem:
  1. The code uses both clients to create files with the same names, causing I/O failures on one set of clients.
  2. The code tries to match the hex value of replica.split-brain-status against a text string.

  Solution:
  Fix the I/O logic to use only one client and run getfattr in text mode for a proper comparison.

  Change-Id: Ia0786a018973a23835cd2fecd57db92aa860ddce
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
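For reference, the corrected lookup is just getfattr with text encoding; a sketch against a placeholder file on the mount:

    # Sketch: request replica.split-brain-status as text ("-e text"), so the value can be
    # compared against the expected status string instead of a hex blob.
    import subprocess

    path = "/mnt/testvol/file1"   # placeholder path on the glusterfs mount
    out = subprocess.check_output(
        ["getfattr", "-n", "replica.split-brain-status", "-e", "text", path], text=True)
    print(out)                    # human-readable split-brain status for the file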
* [Test] Add test to check self heal with expand volume | kshithijiyer | 2021-01-21 | 1 | -0/+221

  Test case:
  1. Create a 2x3 volume.
  2. Mount the volume using FUSE and give 777 permissions to the mount.
  3. Add a new user.
  4. Login as the new user and create 100 files from the new user:
     for i in {1..100};do dd if=/dev/urandom of=$i bs=1024 count=1;done
  5. Kill a brick which is part of the volume.
  6. On the mount, login as root user and create 1000 files:
     for i in {1..1000};do dd if=/dev/urandom of=f$i bs=10M count=1;done
  7. On the mount, login as the new user, and copy existing data to the mount.
  8. Start volume using force.
  9. While heal is in progress, add-brick and start rebalance.
  10. Wait for rebalance and heal to complete.
  11. Check for MSGID: 108008 errors in rebalance logs.

  Reference BZ: #1821599

  Change-Id: I0782d4b6e44782fd612d4f2ced248c3737132855
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [ToolFix] Loop over outer index for server count | Leela Venkaiah G | 2021-01-20 | 1 | -1/+2

  Issue: In a nested loop, the anchor is always picking inner loop index 1.
  Solution: Store the outer loop index and use it while looping over the server list to create the correct anchor.

  Change-Id: Ib3aaed2f0153b567ea9dd5cd8f4ef20ecf604dd8
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Test] Test glusterfind functionality when a brick is down | srijan-sivakumar | 2021-01-20 | 1 | -0/+219

  1. Create a volume
  2. Create a session on the volume
  3. Create various files from mount point
  4. Bring down the brick process on one of the nodes
  5. Perform glusterfind pre
  6. Perform glusterfind post
  7. Check the contents of the outfile

  Change-Id: Iacbefef816350efc0307d46ece2e9720626ab927
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
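The glusterfind flow in steps 2, 5 and 6 looks roughly like this on the CLI (session, volume and outfile names are placeholders):

    # Sketch: create a glusterfind session, then run pre and post around the file creation.
    import subprocess

    SESSION, VOLNAME, OUTFILE = "sess1", "testvol", "/tmp/outfile.txt"

    subprocess.run(["glusterfind", "create", SESSION, VOLNAME], check=True)
    # ... create files from the mount point, bring one brick process down ...
    subprocess.run(["glusterfind", "pre", SESSION, VOLNAME, OUTFILE], check=True)
    with open(OUTFILE) as outfile:
        print(outfile.read())     # the created files should be listed here
    subprocess.run(["glusterfind", "post", SESSION, VOLNAME], check=True)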
* [Test] Add TC to test multiple volume expansions and rebalance | Barak Sason Rofman | 2021-01-20 | 1 | -0/+100

  Test case:
  1. Create a volume, start it and mount it
  2. On mount point, create some files
  3. Collect arequal checksum on mount point pre-rebalance
  4. Do the following 3 times:
  5. Expand the volume
  6. Start rebalance and wait for it to finish
  7. Collect arequal checksum on mount point post-rebalance and compare with value from step 3

  Change-Id: I8a455ad9baf2edb336258448965b54a403a48ae1
  Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
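One expand-and-rebalance iteration, reduced to the CLI (names are placeholders; the completion check is simplified, and on replicated volumes the added bricks must match the replica count):

    # Sketch: add a brick, start rebalance, and wait for it to report completion.
    import subprocess
    import time

    VOLNAME = "testvol"
    NEW_BRICK = "server2:/bricks/brick4/testvol_brick4"

    def gluster(args):
        return subprocess.run("gluster " + args, shell=True, check=True,
                              capture_output=True, text=True).stdout

    gluster("volume add-brick %s %s" % (VOLNAME, NEW_BRICK))
    gluster("volume rebalance %s start" % VOLNAME)
    while "completed" not in gluster("volume rebalance %s status" % VOLNAME):
        time.sleep(10)            # simplified wait; a real test checks every node's status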
* [Test+Lib] Add tests to check self heal | kshithijiyer | 2021-01-20 | 3 | -2/+664

  Test scenarios added:
  1. Test to check entry self heal.
  2. Test to check metadata self heal.
  3. Test self heal when files are removed and dirs created with the same name.

  Additional libraries added:
  1. group_del(): Deletes groups created
  2. enable_granular_heal(): Enables granular heal
  3. disable_granular_heal(): Disables granular heal

  Change-Id: Iffa9a100fddaecae99c384afe3aaeaf13dd37e0d
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to verify df -h output after replace, expand and shrink ops | Pranav | 2021-01-18 | 1 | -0/+171

  Test to verify the df -h output for a given volume when its bricks are replaced,
  when the volume size is shrunk and when the volume size is expanded.

  Steps:
  - Take the output of df -h.
  - Replace any one brick for the volumes.
  - Wait till the heal is completed.
  - Repeat steps 1, 2 and 3 for all bricks of all volumes.
  - Check if there are any inconsistencies in the output of df -h.
  - Remove bricks from the volume and check the output of df -h.
  - Add bricks to the volume and check the output of df -h.

  The size of the mount points should remain unchanged during a replace op, and the
  sizes should vary according to the shrink or expand op performed on the volume.

  Change-Id: I323da4938767cad1976463c2aefb6c41f355ac57
  Signed-off-by: Pranav <prprakas@redhat.com>
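The size comparison itself can be as simple as recording the reported capacity of each mount before and after the brick operations; a sketch with placeholder mount points:

    # Sketch: the total size reported for a mount should not change across a replace-brick,
    # but should change after remove-brick (shrink) or add-brick (expand).
    import shutil

    MOUNTS = ["/mnt/vol1", "/mnt/vol2"]          # placeholder glusterfs mount points

    def reported_sizes():
        return {mount: shutil.disk_usage(mount).total for mount in MOUNTS}

    before = reported_sizes()
    # ... replace a brick and wait for heal to complete ...
    after = reported_sizes()
    assert before == after, "mount sizes changed after a replace-brick"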
* [TestFix+Lib] Add steps to validate glusterd logs | Pranav | 2021-01-18 | 2 | -44/+46

  Adding additional checks to verify the glusterd logs for `Responded to` and
  `Received ACC` while performing a glusterd restart.
  Replacing reboot with bringing the network interface down to validate the peer
  probe scenarios.
  Adding a lib to bring down a network interface.

  Change-Id: Ifb01d53f67835224d828f531e7df960c6cb0a0ba
  Signed-off-by: Pranav <prprakas@redhat.com>
* [Test+Libfix] Add test to add brick followed by remove brick | kshithijiyer | 2021-01-18 | 2 | -0/+172

  Test case:
  1. Create a volume, start it and mount it to a client.
  2. Start I/O on volume.
  3. Add brick and trigger rebalance, wait for rebalance to complete.
     (The volume which was 1x3 should now be 2x3)
  4. Add brick and trigger rebalance, wait for rebalance to complete.
     (The volume which was 2x3 should now be 3x3)
  5. Remove brick from volume such that it becomes a 2x3.
  6. Remove brick from volume such that it becomes a 1x3.
  7. Wait for I/O to complete and check for any input/output errors in both the I/O and rebalance logs.

  Additional library fix:
  Adding `return True` at the end of is_layout_complete() to return True if no issues are found in the layout.

  Reference BZ: #1726673

  Change-Id: Ifd0360f948b334bfcf341c1015a731274acdb2bf
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Testfix] Fix setup bug in test_no_glustershd_with_distribute | kshithijiyer | 2021-01-13 | 1 | -2/+2

  Problem:
  In test_no_glustershd_with_distribute, we are trying to set up all volume types
  at once, which fails on setup in CI as we don't have sufficient bricks.

  Solution:
  Enable brick sharing in setup_volume() by setting multi_vol to True.

  Change-Id: I2129e3059fd156138d0a874d6aa6904f3cb0cb9b
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Test read file from a stale layout | Leela Venkaiah G | 2021-01-08 | 1 | -0/+181

  Test Steps:
  1. Create, start and mount a volume consisting of 2 subvols on 2 clients
  2. Create a dir `dir` and file `dir/file` from client0
  3. Take note of layouts of `brick1`/dir and `brick2`/dir of the volume
  4. Validate a successful lookup from only one brick path
  5. Re-assign layouts, i.e., brick1/dir to brick2/dir and vice-versa
  6. Remove `dir/file` from client0 and recreate the same file from client0 and client1
  7. Validate a successful lookup from only one brick path (as the layout is changed, the file creation path will change)
  8. Validate that the checksum matches from both the clients

  Change-Id: I91ec020ee616a0f60be9eff92e71b12d20a5cadf
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Test] Rebalance should start successfully if name of volume more than 108 chars | “Milind | 2021-01-08 | 1 | -0/+173

  1. On node N1, add "transport.socket.bind-address N1" in /etc/glusterfs/glusterd.vol
  2. Create a replicate (1x3) and a disperse (4+2) volume with names of more than 108 chars
  3. Mount both volumes using node 1, where you added the "transport.socket.bind-address", and start IO (like untar)
  4. Perform add-brick on the replicate volume with 3 bricks
  5. Start rebalance on the replicated volume
  6. Perform add-brick for the disperse volume with 6 bricks
  7. Start rebalance of the disperse volume

  Change-Id: Ibc57f18b84d21439bbd65a665b31d45b9036ca05
  Signed-off-by: “Milind <“mwaykole@redhat.com”>
* [LibFix] Check to ignore peer validation in case of single node server | Arthy Loganathan | 2021-01-06 | 1 | -0/+5

  Change-Id: I5ad55be92b3acaa605e66de246ce7d40bcec6d5b
  Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
* [Test] Add TC to rebalance two volumes simultaneously | Barak Sason Rofman | 2021-01-04 | 1 | -0/+163

  Test case:
  1. Create a volume, start it and mount it
  2. Create a 2nd volume, start it and mount it
  3. Create files on mount points
  4. Collect arequal checksum on mount point pre-rebalance
  5. Expand the volumes
  6. Start rebalance simultaneously on the 2 volumes
  7. Wait for rebalance to complete
  8. Collect arequal checksum on mount point post-rebalance and compare with value from step 4

  Change-Id: I6120bb5f96ff5cfc345d2b0f84dd99ca749ffc74
  Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
* [Test] Add TC to add peer to cluster while rebalance is in progress | Barak Sason Rofman | 2021-01-04 | 1 | -0/+130

  Test case:
  1. Detach a peer
  2. Create a volume, start it and mount it
  3. Start creating a few files on mount point
  4. Collect arequal checksum on mount point pre-rebalance
  5. Expand the volume
  6. Start rebalance
  7. While rebalance is in progress, probe a peer and check if the peer was probed successfully
  8. Collect arequal checksum on mount point post-rebalance and compare with value from step 4

  Change-Id: Ifee9f3dcd69e87ba1d5b4b97c29c0a6cb7491e60
  Signed-off-by: Barak Sason Rofman <bsasonro@redhat.com>
* [LibFix] Add dirname support to form_bricks_list() | “Milind | 2021-01-04 | 1 | -5/+17

  Adding arg dirname as a gluster brick directory

  Change-Id: I1bb69b4d719bad4cbac3a0e6a497fdae386c6004
  Signed-off-by: “Milind <“mwaykole@redhat.com”>
* [Test] Add test to verify lock behaviour from 2 diff clients | Pranav | 2021-01-04 | 2 | -0/+186

  Test to verify whether a lock is granted to two different clients at the same time.

  - Take lock from client 1 => Lock is acquired
  - Try taking lock from client 2
  - Release lock from client 1
  - Take lock from client 2
  - Again try taking lock from client 1

  Also verifying the behaviour with eager-lock and other-eager-lock set to on and off.

  Change-Id: Ie839f893f7a4f9b2c6fc9375cdf9ee8a27fad13b
  Signed-off-by: Pranav <prprakas@redhat.com>
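The lock itself can be exercised from each client with plain fcntl; a sketch with a placeholder path on the mount (non-blocking mode makes the denial on the second client visible immediately):

    # Sketch: try to take an exclusive lock on the same file from both clients.
    import fcntl

    PATH = "/mnt/testvol/lockfile"               # placeholder file on the glusterfs mount

    handle = open(PATH, "w")
    try:
        fcntl.lockf(handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
        print("lock acquired")                   # expected on client 1, and on client 2 after release
    except OSError:
        print("lock is held by the other client")  # expected on client 2 while client 1 holds it
    # release with fcntl.lockf(handle, fcntl.LOCK_UN) or by closing the file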
* [Test] Add tests to check heal with hard and soft links | kshithijiyer | 2021-01-04 | 1 | -0/+405

  Test Scenarios:
  ---------------
  1. Test heal of hard links through default heal
  2. Test heal of soft links through default heal

  CentOS-CI failing due to issue:
  https://github.com/gluster/glusterfs/issues/1954

  Change-Id: I9fd7695de6271581fed7f38ba41bda8634ee0f28
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add double-expand test to rebalance-preserves-permission TC | Tamar Shacked | 2020-12-30 | 1 | -55/+57

  To cover a similar test case, add a test to the rebalance-preserves-permission TC
  which runs a double expand before rebalance.

  Change-Id: I4a37c383bb8e823c6ca84c1a6e6699b18e80a450
  Signed-off-by: Tamar Shacked <tshacked@redhat.com>
* [Test] Test glusterfind when a node is down | srijan-sivakumar | 2020-12-24 | 1 | -0/+280

  Steps:
  1. Create a volume.
  2. Create a session on the volume.
  3. Create various files from mount point.
  4. Bring down one of the nodes.
  5. Perform glusterfind pre.
  6. Perform glusterfind post.
  7. Check the contents of outfile.
  8. Create more files from mountpoint.
  9. Reboot one of the nodes.
  10. Perform glusterfind pre.
  11. Perform glusterfind post.
  12. Check the contents of outfile.

  Change-Id: I5d27bf32f3f028d0e919e8c33ac742d00193b81e
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Check directory time stamps are healed during entry heal | Ravishankar N | 2020-12-23 | 1 | -0/+160

  After an entry heal is complete, verify that the atime/mtime/ctime of the parent
  directory is the same on all bricks of the replica. The test is run with
  features.ctime enabled as well as disabled.

  Change-Id: Iefb6a8b50bd31cf5c5aae72e4030239cc0f1a43d
  Reference: BZ# 1572163
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
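The verification amounts to comparing the parent directory's timestamps on every replica brick; a sketch with placeholder backend paths:

    # Sketch: collect (atime, mtime, ctime) of the same directory on each brick backend
    # and require a single distinct value across the replica.
    import os

    BRICK_DIR_PATHS = ["/bricks/brick1/testvol/dir1",
                       "/bricks/brick2/testvol/dir1",
                       "/bricks/brick3/testvol/dir1"]

    def timestamps(path):
        st = os.stat(path)
        return (st.st_atime, st.st_mtime, st.st_ctime)

    assert len({timestamps(p) for p in BRICK_DIR_PATHS}) == 1, \
        "directory timestamps differ across replica bricks after entry heal"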
* [Test] Change the reserve limits to lower and higher while rebal in-progress | “Milind | 2020-12-21 | 1 | -0/+127

  1) Create a distributed-replicated volume and start it.
  2) Enable the storage.reserve option on the volume using the command below:
     gluster volume set <volname> storage.reserve 50
  3) Mount the volume on a client.
  4) Add some data on the mount point (should be within reserve limits).
  5) Now, add-brick and trigger rebalance. While rebalance is in-progress, change the reserve limit to a lower value, say 30.
  6) Stop the rebalance.
  7) Reset the storage.reserve value to 50 as in step 2.
  8) Trigger rebalance.
  9) While rebalance is in-progress, change the reserve limit to a higher value, say 70.

  Change-Id: I1b2e449f74bb75392a25af7b7088e7ebb95d2860
  Signed-off-by: “Milind <“mwaykole@redhat.com”>
* [LibFix] Optimizing setup_volume api | srijan-sivakumar | 2020-12-18 | 1 | -2/+2

  Currently the setup_volume API calls get_volume_info to check whether the given
  volume already exists. Internally, get_volume_info has to parse the complete xml
  dump received for volume info. Instead, one can invoke get_volume_list, which
  means less effort spent parsing the output, as the volume name is the only thing
  this check inside setup_volume needs.

  Change-Id: I024d42fe471bf26ac85dd3108d6f123cd56a0766
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
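The change amounts to a cheaper existence check; a sketch of the idea (the module path and the exact return value of get_volume_list are assumptions, not a quote of the patch):

    # Sketch only: a membership test against the volume-name list is enough for
    # setup_volume's "does this volume already exist" check.
    from glustolibs.gluster.volume_ops import get_volume_list  # module path assumed

    def volume_exists(mnode, volname):
        # get_volume_list is assumed to return the volume names (or None on failure)
        return volname in (get_volume_list(mnode) or [])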
* [Tool] Tool to generate config file for executing glusto tests | Arthy Loganathan | 2020-12-18 | 5 | -0/+199

  Change-Id: Ie8fc6949b79b6e91c1be210c90a4ef25cfb81754
  Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
* [Test+Lib] Add test to check rebalance impact on acl | kshithijiyer | 2020-12-18 | 2 | -0/+205

  Test case:
  1. Create a volume, start it and mount it to a client.
  2. Create 10 files on the mount point and set acls on the files.
  3. Check the acl value and collect arequal-checksum.
  4. Add bricks to the volume and start rebalance.
  5. Check the value of acl (it should be same as step 3), collect and compare arequal-checksum with the one collected in step 3.

  Additional functions added:
  a. set_acl(): Set acl rule on a specific file
  b. get_acl(): Get all acl rules set on a file
  c. delete_acl(): Delete a specific or all acl rules set on a file

  Change-Id: Ia420cbcc8daea272cd4a282ae27d24f13b4991fe
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
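Underneath, the new helpers boil down to the standard acl tools; a sketch of the check in steps 2-5 with a placeholder path, user and rule:

    # Sketch: set one ACL rule, record the full ACL, and re-check it after rebalance.
    import subprocess

    path = "/mnt/testvol/file1"                  # placeholder file on the mount
    rule = "u:testuser:r--"                      # placeholder user/rule

    subprocess.run(["setfacl", "-m", rule, path], check=True)
    acl_before = subprocess.check_output(["getfacl", "-p", path], text=True)
    # ... add bricks, start rebalance and wait for it to complete ...
    acl_after = subprocess.check_output(["getfacl", "-p", path], text=True)
    assert acl_before == acl_after, "ACL changed across rebalance"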
* [TestFix] Adding cluster options reset in TC | srijan-sivakumar | 2020-12-17 | 1 | -5/+5

  The cluster options are reset post the TC run so that they don't persist through
  the other TC runs.

  Change-Id: Id55bb64ded09e113cdc0fc512a17857195619e41
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test]: Add tc to detect drop of outbound traffic as network failure in glusterd | nik-redhat | 2020-12-17 | 1 | -0/+115

  Test Steps:
  1) Create a volume and start it.
  2) Add an iptables rule to drop outbound glusterd traffic.
  3) Check if the rule is added to the iptables list.
  4) Execute a few gluster CLI commands like volume status, peer status.
  5) The gluster CLI commands should fail with a suitable error message.

  Change-Id: Ibc5717659e65f0df22ea3cec098bf7d1932bef9d
  Signed-off-by: nik-redhat <nladha@redhat.com>
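Step 2 can be modelled with a single iptables rule on glusterd's management port (24007); how the test itself shapes the rule is not shown above, so treat this as an illustration:

    # Sketch: drop outbound traffic towards glusterd's management port, then clean up.
    import subprocess

    RULE = ["OUTPUT", "-p", "tcp", "--dport", "24007", "-j", "DROP"]

    subprocess.run(["iptables", "-I"] + RULE, check=True)                 # add the drop rule
    print(subprocess.check_output(["iptables", "-L", "OUTPUT", "-n"], text=True))
    # ... gluster volume status / peer status are now expected to fail with an error ...
    subprocess.run(["iptables", "-D"] + RULE, check=True)                 # remove the rule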
* [Test] Add test to check rebalance with self heal running | kshithijiyer | 2020-12-17 | 1 | -0/+136

  Test case:
  1. Create a volume, start it and mount it.
  2. Start creating a few files on mount point.
  3. While file creation is going on, kill one of the bricks in the replica pair.
  4. After file creation is complete, collect arequal checksum on mount point.
  5. Bring back the brick online by starting volume with force.
  6. Check if all bricks are online and if heal is in progress.
  7. Add bricks to the volume and start rebalance.
  8. Wait for rebalance and heal to complete on volume.
  9. Collect arequal checksum on mount point and compare it with the one taken in step 4.

  Change-Id: I2999b81443e8acabdb976401b0a56566a6740a39
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to add brick with symlink pointing out | kshithijiyer | 2020-12-17 | 1 | -0/+133

  Test case:
  1. Create a volume, start it and mount it.
  2. Create symlinks on the volume such that the files for the symlinks are outside the volume.
  3. Once all the symlinks are created, create a data file using dd:
     dd if=/dev/urandom of=FILE bs=1024 count=100
  4. Start copying the file's data to all the symlinks.
  5. While data is being copied to all files through the symlinks, add a brick and start rebalance.
  6. Once rebalance is complete, check the md5sum of each file through its symlink and compare it with the original file.

  Change-Id: Icbeaa75f11e7605e13fa4a64594137c8f4ae8aa2
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
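The comparison in step 6 is a straightforward md5 check through each symlink; a sketch with placeholder paths:

    # Sketch: hashing through a symlink follows it to the target file, so the digest
    # can be compared directly against the original dd-created file.
    import hashlib

    ORIGINAL = "/mnt/testvol/FILE"                                   # placeholder dd output file
    SYMLINKS = ["/mnt/testvol/links/link%d" % i for i in range(10)]  # placeholder symlink paths

    def md5sum(path):
        with open(path, "rb") as f:
            return hashlib.md5(f.read()).hexdigest()

    expected = md5sum(ORIGINAL)
    for link in SYMLINKS:
        assert md5sum(link) == expected, "%s differs from the original after rebalance" % link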
* [Test]: Add tc to test reserved port range for gluster | nik-redhat | 2020-12-17 | 1 | -0/+152

  Test Steps:
  1) Set the max-port option in the glusterd.vol file to 49200
  2) Restart glusterd on one of the nodes
  3) Create 50 volumes in a loop
  4) Try to start the 50 volumes in a loop
  5) Confirm that the 50th volume failed to start
  6) Confirm the error message due to which the volume failed to start
  7) Set the max-port option in the glusterd.vol file back to the default value
  8) Restart glusterd on the same node
  9) Starting the 50th volume should succeed now

  Change-Id: I084351db20cc37e3391061b7b313a18896cc90b1
  Signed-off-by: nik-redhat <nladha@redhat.com>
* [Testfix] Fixing minor typos in comment and docstring | kshithijiyer | 2020-12-10 | 2 | -2/+2

  Change-Id: I7cbc6422a6a6d2946440e51e8d540f47ccc9bf46
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Memory crash - stop and start gluster processes multiple times | srijan-sivakumar | 2020-12-09 | 1 | -0/+123

  Steps:
  1. Create a gluster volume.
  2. Kill all gluster related processes.
  3. Start glusterd service.
  4. Verify that all gluster processes are up.
  5. Repeat the above steps 5 times.

  Change-Id: If01788ae8bcdd75cdb55261715c34edf83e6f018
  Signed-off-by: Rinku Kothiya <rkothiya@redhat.com>
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Add test to verify glusterfind list cli | kshithijiyer | 2020-12-09 | 1 | -0/+111

  Verifying the glusterfind list command functionality with valid and invalid
  values for the required and optional parameters.

  * Create a volume
  * Create a session on the volume and call glusterfind list with the following combinations:
    - Valid values for optional parameters
    - Invalid values for optional parameters

  NOTE: There are no required parameters for glusterfind list command.

  Change-Id: I2677f507dad42904b404b5f2daf0e354c37c0cb4
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test Fix]: Assertion of default quorum options | nik-redhat | 2020-12-04 | 1 | -3/+4

  Fix: Improved the check for default quorum options on the volume, to work with
  the present as well as older default values.
  Older default value: 51
  Current default value: 51 (DEFAULT)

  Change-Id: I200b81334e84a7956090bede3e2aa50b9d4cf8e0
  Signed-off-by: nik-redhat <nladha@redhat.com>
* [TestFix] Performing cluster options reset | srijan-sivakumar | 2020-12-04 | 1 | -1/+15

  Issue: The cluster options set during a TC aren't reset, causing the cluster
  options to affect subsequent TC runs.
  Fix: Adding volume_reset() in the tearDown of a TC to perform a cleanup of the
  cluster options.

  Change-Id: I00da5837d2a4260b4d414cc3c8083f83d8f6fadd
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>