Commit message | Author | Age | Files | Lines
...
* [Test] Validates data deletion on EC volumeubansal2020-06-041-0/+270
| | | | | | | | | | | | | | Test Steps: 1. Create a volume, start and mount it 2. Create directories and files 3. Rename and change permissions of files 4. Create hardlinks and softlinks and run different types of I/O 5. Delete all the data 6. Check that no heals are pending 7. Check that all bricks are empty Change-Id: Ic8f5dad1a44de71688a6b0a2fcfb4a25cef435ba Signed-off-by: ubansal <ubansal@redhat.com>
* [Test] Read from hardlink in disperseBala Konda Reddy M2020-06-031-0/+111
| | | | | | | | | | | | Test steps: 1. Create a volume, start and mount it on one client. 2. Enable metadata-cache (md-cache) options on the volume. 3. Touch a file and create a hardlink for it. 4. Read data from the hardlink. 5. Read data from the actual file. Change-Id: Ibf4b8757262707fcfb4d09b4b031ff9dea166570 Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Test] Validate file rename with brick downsayaleeraut2020-06-031-0/+172
| | | | | | | | | | | | | | | | | | | | | | | | | | Description: The TC checks that there is no data loss when a rename is performed with a brick of the volume down. Steps: 1) Create a volume. 2) Mount the volume using FUSE. 3) Create 1000 files on the mount point. 4) Create soft-links for file{1..100} 5) Create hard-links for file{101..200} 6) Check the file count on the mount point. 7) Begin renaming the files, in multiple iterations. 8) Let a few iterations of the rename complete successfully. 9) Then, while the rename is still in progress, kill a brick that is part of the volume. 10) Let the brick stay down for some time, such that a couple of rename iterations complete. 11) Bring the brick back online. 12) Wait for the IO to complete. 13) Check if there is any data loss. 14) Check if all the files are renamed properly. Change-Id: I7b7c4aed7df7f19a10ec8c2577dfec1f1ceeb46c Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Libfix] Change sequence of option set & start opPranav2020-06-031-8/+24
| | | | | | | | As SSL cannot be set after the volume start op, move set_volume_option prior to volume start. Change-Id: I14e1dc42deb0c0c28736f03e07cf25f3adb48349 Signed-off-by: Pranav <prprakas@redhat.com>
* [Libfix] Fix get_bricks_to_bring_offline_from_replicated_volumePranav2020-06-031-2/+2
| | | | | | | | | | | | | | Finding the offline brick limit using ceil returns an incorrect value. E.g., for replica count 3, ceil(3/2) returns 2, and the subsequent method uses this value to bring down 2 out of the 3 available bricks, resulting in IO and many other failures. Fix: Change ceil to floor. Also change the '/' operator to '//' for py2/py3 compatibility Change-Id: I3ee10647bb037a3efe95d1b04e0864cf61e2499e Signed-off-by: Pranav <prprakas@redhat.com>
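A minimal sketch of the arithmetic behind the fix, assuming the offline-brick limit is derived from the replica count:

    from math import ceil

    replica_count = 3
    # Old logic (py3): ceil(3 / 2) == 2, so two of the three replica
    # bricks are brought down and quorum is lost, causing IO failures.
    old_limit = int(ceil(replica_count / 2))   # 2
    # Fixed logic: floor division behaves the same on py2 and py3 and
    # keeps a majority of the bricks online.
    new_limit = replica_count // 2             # 1
    print(old_limit, new_limit)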
* [Lib] Add find_specific_hashed methodPranav2020-06-021-0/+34
| | | | | | | | This method helps in rename scenarios where the new filename has to hash to a specific subvol Change-Id: Ia36ea8e3d279ddf130f3a8a940dbe1fcb1910974 Signed-off-by: Pranav <prprakas@redhat.com>
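A toy illustration of the idea, using md5 in place of the real DHT hash (the names here are hypothetical, not the library implementation):

    import hashlib

    SUBVOL_COUNT = 4

    def toy_hash(name):
        # Stand-in for DHT's hash: map a filename to one of the subvols.
        return int(hashlib.md5(name.encode()).hexdigest(), 16) % SUBVOL_COUNT

    def find_specific_hashed(target_subvol, prefix="renamed-", max_tries=1000):
        # Probe candidate names until one hashes to the desired subvol.
        for i in range(max_tries):
            candidate = "%s%d" % (prefix, i)
            if toy_hash(candidate) == target_subvol:
                return candidate
        return None

    print(find_specific_hashed(2))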
* [Tool] Add tool to fetch sosreportskshithijiyer2020-06-013-0/+268
| | | | | | | | | | | | | | Adding a tool to fetch sosreports from all servers and clients using the glusto-tests config file. This tool is essentially just a tweaked version of the getsos [1] tool which can take a glusto-tests config file, and is relicensed under GPLv3+. Reference: [1] https://github.com/kshithijiyer/getsos Change-Id: Ic1685163154ed4358064397d74d3965097448621 Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Tool] Add tool to verify multiple runs for a given set of testcase(s)Pranav2020-05-292-0/+195
| | | | | | | | | | This tool verifies the stability of a given set of testcase(s) by executing them consecutively a pre-defined number of times. This ensures that the written code is stable and also helps the user identify unexpected failures or errors that may arise while executing it multiple times. It also checks the given code for any pylint/flake8 issues. Change-Id: I731277a448d4fc8d0028f43f51e08d6d9366c19a Signed-off-by: Pranav <prprakas@redhat.com>
* [Test] Check 'storage.reserve' with wrong valuesLeela Venkaiah G2020-05-281-0/+79
| | | | | | | | | | Test Steps: 1) Create and start a distributed-replicated volume. 2) Give different inputs to the storage.reserve volume set options 3) Validate the command behaviour on wrong inputs Change-Id: I4bbad81cbea9b3b9e59a61fcf7f2b70eac19b216 Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [TestFix] Fix assertItemsEqual issue with python3Pranav2020-05-277-26/+36
| | | | | | | | | | | | | | Issue: In python3 assertItemsEqual is no longer supported and is replaced with assertCountEqual (refer [1]). Because of this issue, a few arbiter tests are failing. [1] https://docs.python.org/2/library/unittest.html#unittest.TestCase.assertItemsEqual Fix: The replacement assertCountEqual is not supported in python2. So the fix is to replace assertItemsEqual with assertEqual(sorted(expected), sorted(actual)) Change-Id: Ic1d599fa31f85a8a41598b6c245056a6ff01e000 Signed-off-by: Pranav <prprakas@redhat.com>
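A minimal sketch of the py2/py3-compatible replacement:

    import unittest

    class ExampleTest(unittest.TestCase):
        def test_same_items(self):
            expected = ['brick1', 'brick0']
            actual = ['brick0', 'brick1']
            # assertItemsEqual is py2-only and assertCountEqual is
            # py3-only; sorting first turns a plain assertEqual into an
            # order-insensitive comparison that works on both.
            self.assertEqual(sorted(expected), sorted(actual))

    if __name__ == '__main__':
        unittest.main()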
* [Test] Verify 'gluster get-state' on brick unmountLeela Venkaiah G2020-05-261-0/+126
| | | | | | | | | | | | | | Testcase steps: 1. Form a gluster cluster by peer probing and create a volume 2. Unmount the brick used to create the volume 3. Run 'gluster get-state' and validate the absence of the error 'Failed to get daemon state. Check glusterd log file for more details' 4. Create another volume and start it using different bricks which were not used to create the above volume 5. Run 'gluster get-state' and validate the absence of the above error. Change-Id: Ib629b53c01860355e5bfafef53dcc3233af071e3 Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Test] Add tc for root squash on nfs-ganeshaManisha Saini2020-05-262-0/+182
| | | | | | | | | | | | | | | | | | | | Verification of root-squash functionality with NFS-Ganesha * Create a volume and export it via Ganesha * Mount the volume on clients * Create some files and dirs inside the mount point * Check the owner and group; the owner and group should be root * Set permissions as 777 for the mount point * Enable root-squash on the volume * Create some more files and dirs * Check the owner and group of any new file; the owner and group should be nfsnobody * Edit a file created by the root user; the nfsnobody user should not be allowed to edit the file Change-Id: Ia345c772c84fcfe6ef716b9f1026fca5d399ab2a Signed-off-by: Manisha Saini <msaini@redhat.com>
* [Libfix] Fetch all entries under a directory in recursive fashionnchilaka2020-05-221-5/+12
| | | | | | | This method fetches all entries under a directory in a recursive fashion Change-Id: I4fc066ccf7a3a4730d568f96d926e46dea7b20a1 Signed-off-by: nchilaka <nchilaka@redhat.com>
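A toy local sketch of such a recursive fetch using os.walk (the library version presumably runs the listing on a remote node instead):

    import os

    def get_dir_contents(path):
        # Recursively collect every file and directory under 'path'.
        entries = []
        for root, dirs, files in os.walk(path):
            for name in dirs + files:
                entries.append(os.path.join(root, name))
        return entries

    print(get_dir_contents('/tmp'))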
* [Test] Quota validation on EC volumeubansal2020-05-211-0/+159
| | | | | Change-Id: I3f77dc73044a5bc59a26319c55e8e024e2edf449 Signed-off-by: ubansal <ubansal@redhat.com>
* [TestFix] DHT Tests - Remove NFS protocolsayaleeraut2020-05-211-10/+11
| | | | | | | | | | | | | | | Scenarios: 1 - Rename directory when destination is not present 2 - Rename directory when destination is present The TC was failing at validate_files_in_dir() when the volume was mounted using NFS, because the method uses 'trusted.glusterfs.pathinfo' on the mount, which is a glusterfs-specific xattr. When the volume is mounted using NFS, the xattr cannot be found and hence the check failed. Change-Id: Ic61de773525e717a73178a4694c015276da2a688 Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Testfix] Fix assertIn in test_enabling_brick_muxkshithijiyer2020-05-201-2/+2
| | | | | | | | | The assertIn statement looks for out in warning_message, which fails every time; it should instead look for warning_message in out. Change-Id: I57e0221097c861e251995e5e8456cb19964e7d17 Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
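A minimal illustration of the argument order, with made-up strings; assertIn(member, container) expects the needle first:

    import unittest

    class ExampleTest(unittest.TestCase):
        def test_warning_in_output(self):
            out = "volume set: success. Brick multiplexing enabled"
            warning_message = "Brick multiplexing enabled"
            # Correct: look for the message inside the command output,
            # not the output inside the message.
            self.assertIn(warning_message, out)

    if __name__ == '__main__':
        unittest.main()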
* [Libfix] Add parameter for volume create onlyBala Konda Reddy M2020-05-182-7/+30
| | | | | | | | | | | | | | | | | | | | | | | | | Problem: Currently setup_volume in volume_libs.py and in gluster_base_class.py creates a volume and starts it. There are tests where only volume_create is required, and if such a test has to run on all volume types, any contributor has to re-do all the validations which are already implemented in setup_volume and in the setup_volume classmethod of gluster_base_class. Solution: Added a parameter "create_only" to the setup_volume() function; by default it is False, and unless it is specified setup_volume works as before. Similarly, added a parameter "only_volume_create" to the setup_volume classmethod in gluster_base_class.py, which also defaults to False unless specified. Note: Calling "setup_volume() -> volume_stop" is not the same as just "volume_create()" in the actual test. Change-Id: I76cde1b668b3afcac41dd882c2a376cb6fac88a3 Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
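A hedged usage sketch; the parameter names are taken from this commit message, the surrounding variables (mnode, all_servers_info, volume_config) are the usual glusto-tests fixtures, and the exact signatures may differ:

    from glustolibs.gluster.volume_libs import setup_volume

    # Create the volume but skip the start op; the default (False)
    # keeps the old create-and-start behaviour.
    ret = setup_volume(mnode, all_servers_info, volume_config,
                       create_only=True)

    # Classmethod variant inside a GlusterBaseClass-derived test:
    # cls.setup_volume(only_volume_create=True)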
* [Test] Validate USS after snapshot restoreVinayak Papnoi2020-05-181-0/+239
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | When a snapshot is restored, that snapshot gets removed. USS makes use of a '.snaps' directory in the mount point where all the activated snapshots can be listed. So the restored snapshot should not be listed under the '.snaps' directory regardless of it being activated or deactivated. Steps: * Perform I/O on mounts * Enable USS on the volume * Validate that USS is enabled * Create a snapshot * Activate the snapshot * Perform some more I/O * Create another snapshot * Activate the second snapshot * Restore the volume to the second snapshot * From the mount point, validate under .snaps - the first snapshot should be listed - the second snapshot should not be listed Change-Id: I5630d8aad6b4758d49e8d4f53497073c78a00a6b Co-authored-by: Sunny Kumar <sunkumar@redhat.com> Signed-off-by: Sunny Kumar <sunkumar@redhat.com> Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
* [Test][Bug] Check for consistent timevalues of a created objectnchilaka2020-05-182-0/+130
| | | | | | | | | | | | | | | Test Summary and Steps: This testcase validates that the ctime, mtime and atime of a created object are the same 1. Create a volume and check if features.ctime is disabled by default 2. Enable features.ctime 3. Create a new directory dir1 and check if m|a|ctimes are the same 4. Create a new file file1 and check if m|a|ctimes are the same 5. Again create a new file file2 and check if m|a|ctimes are the same after issuing an immediate lookup Change-Id: I024c11706a0309806c081c957b9305be92936f7f Signed-off-by: nchilaka <nchilaka@redhat.com>
* [Test] Add tc to remove brick using nfs_ganeshakshithijiyer2020-05-181-0/+152
| | | | | | | | | | | | | | Verify remove brick operation while IO is running Steps: 1. Start IO on mount points 2. Perform remove brick operation 3. Validate IOs Change-Id: Ie394f96c9180be57704ca637c8cd725af82323cb Co-authored-by: Jilju Joy <jijoy@redhat.com> Signed-off-by: Jilju Joy <jijoy@redhat.com> Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Testfix] Move test from teardown class to teardownSri Vignesh2020-05-155-66/+29
| | | | | | | Move cases from teardown class to teardown in snapshot Change-Id: I7b33fa2728665fad000a5ad881f6690d40913f22 Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [Libfix] Assign correct atime, ctime, mtime valuesnchilaka2020-05-151-2/+2
| | | | | | | | | Changed the get_file_stat function to assign correct key-value pairs for atime, mtime and ctime respectively. Previously, all timestamp keys were assigned the atime value Change-Id: I471ec341d1a213395a89f6c01315f3d0f2e976af Signed-off-by: nchilaka <nchilaka@redhat.com>
* [Libfix] Fixing the pkill commandBala Konda Reddy M2020-05-151-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Problem: The command 'pkill pidof glusterd' below is not right, as it does not actually get the pid of glusterd. eg: cmd = "pkill pidof glusterd" ret, out ,err = g.run("10.20.30.40", cmd, "root") >>> ret, out, err (2, '', "pkill: only one pattern can be provided\n Try `pkill --help' for more information.\n") Here the command fails. Solution: Added backquotes around `pidof glusterd`, which substitutes the proper glusterd pid so the stale pid can be killed after a glusterd stop failure. cmd = "pkill `pidof glusterd`" ret, out ,err = g.run("10.20.30.40", cmd, "root") >>> ret, out, err (1, '', '') Note: The ret value is 1, as it was tried on a machine where glusterd is running. The purpose of the fix is to get the proper pid. Change-Id: Iacba3712852b9d16546ced9a4c071c62182fe385 Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Libfix] Kill stale bricks in scratch_cleanupBala Konda Reddy M2020-05-151-0/+9
| | | | | | | | | | | | | | | | | | | | | | | | | Problem: While performing scratch cleanup, observed posix health-checker warnings once glusterd is started, as shown below [2020-05-05 12:19:10.633623] M [MSGID: 113075] [posix-helpers.c:2194:posix_health_check_thread_proc] 0-testvol_distributed-dispersed-posix: health-check failed, going down Solution: In scratch cleanup, once glusterd is stopped and the runtime socket file for the glusterd daemon is removed, stale glusterfsd processes are still present on a few of the machines. Adding a step to get any glusterfsd processes, kill the stale ones using the kill_process method, and continue with the existing procedure. Once glusterd is started, no posix health-checker warnings are seen. Change-Id: Ib3e9492ec029b5c9efd1c07b4badc779375a66d6 Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Libfix] Added atime, ctime and mtime for filesBala Konda Reddy M2020-05-141-2/+10
| | | | | | | | | | | | | | | | | | | | | The get_file_stat function doesn't return the access time, modified time and change time for a file or directory. Added the respective parameters for getting these values into the dictionary. Changed the separator from ':' to '$'; the reason is to overcome a tuple-unpacking error, since a timestamp such as 2020-04-02 19:27:45.962477021 itself contains ':'. If ':' is used as the separator, we hit a "ValueError: too many values to unpack" error. Used '$' as the separator, as it is not used in filenames in glusto-tests and is not part of the stat output. Change-Id: I40b0c1fd08a5175d3730c1cf8478d5ad8df6e8dd Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
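A minimal sketch of why ':' breaks as a separator; the line shape below is illustrative, and the point is that the timestamp itself contains colons:

    line = "file1:2020-04-02 19:27:45.962477021"
    try:
        fname, atime = line.split(":")
    except ValueError as err:
        print(err)   # too many values to unpack

    # '$' appears neither in the stat output nor in glusto-tests
    # filenames, so splitting on it is unambiguous:
    fname, atime = "file1$2020-04-02 19:27:45.962477021".split("$")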
* [Test] test rmdir with subvol downPranav2020-05-131-0/+361
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | test case: (rmdir with subvol down) case -1: - create parent - bring down a non-hashed subvolume for directory child - create parent/child - rmdir /mnt/parent will fail with ENOTCONN case -2: - create dir1 and dir2 - bring down the hashed subvol for dir1 - bring down a non-hashed subvol for dir2 - rmdir dir1 should fail with ENOTCONN - rmdir dir2 should fail with ENOTCONN case -3: - create parent - mkdir parent/child - touch parent/child/file - bring down a subvol where the file is not present - rm -rf parent - only the file should be deleted - rm -rf should fail with ENOTCONN case -4: - bring down a non-hashed subvol for parent_dir - mkdir parent - rmdir parent should fail with ENOTCONN Change-Id: I8fbd425729aaf04eabfced315f94167178918e31 Co-authored-by: Susant Palai <spalai@redhat.com> Signed-off-by: Susant Palai <spalai@redhat.com> Signed-off-by: Pranav <prprakas@redhat.com>
* [Test][BUG] Uss and snapshot in EC volumeubansal2020-05-131-0/+328
| | | | | | | | | | | | | | | Steps: 1. Enable uss and create a snapshot, list and delete it 2. Create a snapshot with the same name and list it GitHub issue for the CentOS-CI failure: https://github.com/gluster/glusterfs/issues/1203 Testcase failing due to: https://bugzilla.redhat.com/show_bug.cgi?id=1828820 Change-Id: I829e6b340dfb4963355b445259fcb011b62ba057 Signed-off-by: ubansal <ubansal@redhat.com>
* [Libfix] Remove ssl_ops.py librarykshithijiyer2020-05-121-226/+0
| | | | | | | | | | | | | | | Problem: Ideally, the operations done in ssl_ops.py should be performed on a gluster cluster even before peer probing the nodes. This makes the library useless, as we can't run anything in glusto-tests without peer probing. Solution: Enable SSL on the gluster cluster before passing it to glusto-tests to run the tests. Change-Id: If803179c67d5b3271b70c1578269350444aa3cf6 Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Libfix] Fix georep_config_get methodPranav2020-05-121-1/+1
| | | | | | | | The command creation with a specific user had six substitutions but only five placeholders. Change-Id: I2c9f63213f78e5cec9e5bd30cac8d75eb8dbd6ce Signed-off-by: Pranav <prprakas@redhat.com>
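A minimal illustration of the failure class with a made-up command template (five placeholders fed six values):

    args = ("mastervol", "slavehost", "slavevol", "option", "value", "extra")
    try:
        cmd = "gluster volume geo-replication %s %s::%s config %s %s" % args
    except TypeError as err:
        print(err)   # not all arguments converted during string formatting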
* [BUG][Test] Add tc to check heal with only shd runningPranav2020-05-111-0/+245
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Failing in CentOS-CI due to this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1768380 Description: Test script which verifies that server-side healing must happen only if the heal daemon is running on the node where the source brick resides. * Create and start the Replicate volume * Check the glustershd processes - only 1 glustershd should be listed * Bring down the bricks without affecting the cluster * Create files on the volume * Kill glustershd on the node where the brick is running * Bring up the bricks which were killed in the previous steps * Check the heal info - heal info must show pending heals; healing shouldn't happen since glustershd is down on the source node * Issue heal * Trigger client-side heal * Heal should complete successfully Change-Id: I1fba01f980a520b607c38d8f3371bcfe086f7783 Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>, Milind Waykole <milindwaykole96@gmail.com> Signed-off-by: Vijay Avuthu <vavuthu@redhat.com> Signed-off-by: Milind Waykole <milindwaykole96@gmail.com> Signed-off-by: Pranav <prprakas@redhat.com>
* [TestFix][Quota] Moving test steps from teardown class to teardownSri Vignesh2020-05-1110-120/+86
| | | | | | | Move cases from teardown class to teardown in quota Change-Id: Ia20fe9bef09842f891f0f27ab711a1ef4c9f6f39 Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [TestFix] Reduce the workload as rebalance was failingubansal2020-05-111-5/+5
| | | | | Change-Id: Id94870735b26fbeab2bf448d4f80341c92beb5ba Signed-off-by: ubansal <ubansal@redhat.com>
* [TestFix][DHT] Moving test steps from teardown class to teardownSri Vignesh2020-05-1113-98/+98
| | | | | | | Move cases from teardown class to teardown in dht Change-Id: Id0cf120c6229715521ae19fd4bb00cad553d701f Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [Test] Add tc to check volume metadata-self-healkshithijiyer2020-05-111-0/+606
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Testcase steps: 1. Turn off the self-heal daemon option 2. Create IO 3. Calculate the arequal of the bricks and mount point 4. Bring down the "brick1" process 5. Change the permissions of the directories and files 6. Change the ownership of the directories and files 7. Change the group of the directories and files 8. Bring back the brick "brick1" process 9. Execute "find . | xargs stat" from the mount point to trigger heal 10. Verify the changes in permissions are not self-healed on brick1 11. Verify the changes in permissions on all bricks but brick1 12. Verify the changes in ownership are not self-healed on brick1 13. Verify the changes in ownership on all the bricks but brick1 14. Verify the changes in group are not successfully self-healed on brick1 15. Verify the changes in group on all the bricks but brick1 16. Turn on the option metadata-self-heal 17. Execute "find . | xargs md5sum" from the mount point to trigger heal 18. Wait for heal to complete 19. Verify the changes in permissions are self-healed on brick1 20. Verify the changes in ownership are successfully self-healed on brick1 21. Verify the changes in group are successfully self-healed on brick1 22. Calculate the arequal checksum on all the bricks and mount point Change-Id: Ia7fb1b272c3c6bf85093690819b68bd83efefe14 Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com> Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com> Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add tc to check custom file xattrskshithijiyer2020-05-071-0/+256
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Description: This test case creates files at the mount point and verifies custom attributes across bricks Testcase steps: 1. Create a gluster volume and start it. 2. Create a file and link files. 3. Create a custom xattr for the file. 4. Verify that the xattr for the file is displayed on the mount point and bricks 5. Modify the custom xattr value and verify that the xattr for the file is displayed on the mount point and bricks 6. Verify that the custom xattr is not displayed once you remove it 7. Create a custom xattr for the symbolic link. 8. Verify that the xattr for the symbolic link is displayed on the mount point and sub-volume 9. Modify the custom xattr value and verify that the xattr for the symbolic link is displayed on the mount point and bricks 10. Verify that the custom xattr is not displayed once you remove it. Change-Id: Iff7360273369c77da243f2c09df2e10a0eec27ea Co-authored-by: Kartik_Burmee <kburmee@redhat.com> Signed-off-by: Kartik_Burmee <kburmee@redhat.com> Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Lib] Add create_link_file() to glusterfile.pykshithijiyer2020-05-041-0/+39
| | | | | | | | Adding function create_link_file() to create soft and hard links for an existing file. Change-Id: I6be313ded1a640beb450425fbd29374df51fbfa3 Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
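A toy local sketch of the helper; the library version presumably runs ln on the target node via glusto, while this one uses os directly:

    import os

    def create_link_file(fname, link, soft=False):
        # Create a hard link, or a soft link when soft=True, for an
        # existing file; mirrors 'ln' and 'ln -s'.
        try:
            if soft:
                os.symlink(fname, link)
            else:
                os.link(fname, link)
        except OSError as err:
            print("Failed to create link: %s" % err)
            return False
        return True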
* [testfix][bug] Remove disperse vol type - quotaVinayak Papnoi2020-05-041-2/+1
| | | | | | | | | | | | | The test case 'tests/functional/quota/test_limit_usage_deep_dir.py' fails erratically for disperse volumes. A bug [1] had been raised for this, where it was decided to remove the disperse volume type from the 'runs_on' of the test. [1] https://bugzilla.redhat.com/show_bug.cgi?id=1672983 Change-Id: Ica8f2af449225d72d1b60c2c86b20e16b80a5a5a Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
* [Test] Add tc to check impact of replace brick on shdkshithijiyer2020-04-291-0/+186
| | | | | | | | | | | | | | | | | | | | | | | Description: Test script to verify that the glustershd server vol file has only entries for replicate volumes. Testcase steps: 1. Create multiple volumes and start all volumes 2. Check the glustershd processes (only 1 glustershd should be listed) 3. Do replace-brick on the replicate volume 4. Confirm that the brick is replaced 5. Check the glustershd processes (only 1 glustershd should be listed and the pid should be different) 6. The glustershd server vol should be updated with the new bricks Change-Id: I09245c8ff6a2b31a038749643af294aa8b81a51a Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>, Vitalii Koriakov <vkoriako@redhat.com> Signed-off-by: Vijay Avuthu <vavuthu@redhat.com> Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com> Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] functional/disperse: Verify remove brick operationSri Vignesh2020-04-292-1/+368
| | | | | | | | | | This test verifies remove brick operations on disperse volume. Change-Id: If4be3ffc39a8b58e4296d58b288e3843a218c468 Co-authored-by: Sunil Kumar Acharya <sheggodu@redhat.com> Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com> Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [Test] Heal Full after deleting the files from bricksPranav2020-04-281-0/+202
| | | | | | | | | | | | | | | | | | Test case: * Create IO * Calculate arequal from mount * kill glusterd process and glustershd process on arbiter nodes * Delete data from backend from the arbiter nodes * Start glusterd process and force start the volume to bring the processes online * Check if heal is completed * Check for split-brain * Calculate arequal checksum and compare it Change-Id: I41192134530ec42db3398ae97e4f328b77e529d1 Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com> Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com> Signed-off-by: Pranav <prprakas@redhat.com>
* [Test+Lib] No fresh lookups on directoryBala Konda Reddy M2020-04-282-0/+216
| | | | | | | | | | | | | | | | | | Test Steps: 1. Create a volume, set the volume option 'diagnostics.client-log-level' to DEBUG, and mount the volume on one client. 2. Create a directory 3. Validate the number of lookups for the directory creation from the log file. 4. Perform a new lookup of the directory 5. No new lookups should have happened on the directory; validate from the log file. 6. Bring down one subvol of the volume and repeat steps 4, 5 7. Bring down one brick from the online bricks and repeat steps 4, 5 8. Start the volume with force and wait for all processes to be online. Change-Id: I162766837fd7e61625238a669c4050c2ec9c8a8b Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
* [Libfix] Fix check_brick_pid_matches_glusterfsd_pid() to use pgrepkshithijiyer2020-04-281-2/+2
| | | | | | | | | | | | | | | | | | | | | | | | Problem: On latest platforms the pidof command returns multiple pids, as shown below: 27190 27078 26854 This is because it returns the glusterd, glusterfsd and glusterfs processes as well. The problem is that /usr/sbin/glusterd is a link to glusterfsd. 'pidof' has a new feature whereby it searches for the pattern in /proc/PID/cmdline, /proc/PID/stat and finally /proc/PID/exe. Hence pidof matches the realpath of /proc/<pid_of_glusterd>/exe as /usr/sbin/glusterfsd, and results in glusterd, glusterfs and glusterfsd pids being returned in the output. Fix: Use pgrep instead of pidof to get the glusterfsd pids, and change the split logic accordingly. Change-Id: I729e05c3f4cacf7bf826592da965a94a49bb6f33 Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Libfix] Fix get_brick_processes_count() to use pgrepkshithijiyer2020-04-271-3/+5
| | | | | | | | | | | | | | | | | | | | | | | | Problem: On latest platforms the pidof command returns multiple pids, as shown below: 27190 27078 26854 This is because it returns the glusterd, glusterfsd and glusterfs processes as well. The problem is that /usr/sbin/glusterd is a link to glusterfsd. 'pidof' has a new feature whereby it searches for the pattern in /proc/PID/cmdline, /proc/PID/stat and finally /proc/PID/exe. Hence pidof matches the realpath of /proc/<pid_of_glusterd>/exe as /usr/sbin/glusterfsd, and results in glusterd, glusterfs and glusterfsd pids being returned in the output. Fix: Use pgrep instead of pidof to get the glusterfsd pids, and change the split logic accordingly. Change-Id: Ie215734387989f2d8cb19e4b4f7cddc73d2a5608 Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
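A minimal sketch of the replacement for both libfixes above, assuming glusto's g.run helper and some server node; note that pgrep prints one pid per line, hence the changed split logic:

    from glusto.core import Glusto as g

    # pidof resolves /proc/PID/exe, and /usr/sbin/glusterd is a symlink
    # to glusterfsd, so "pidof glusterfsd" also returns the glusterd pid.
    # pgrep matches on the process name instead:
    ret, out, err = g.run(node, "pgrep glusterfsd")
    pids = out.strip().split('\n') if out.strip() else []
    count = len(pids)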
* [Lib] Function to check xattr of the bricksubansal2020-04-241-1/+41
| | | | | | | Check if the xattrs of the given bricks are the same Change-Id: Ib1ba010bfeafc132123a88a893017f870a989789 Signed-off-by: ubansal <ubansal@redhat.com>
* [Lib] Add is_shd_daemon_running methodPranav2020-04-231-0/+30
| | | | | | | | | | | Verifies whether the shd daemon is up and running on a particular node. The method checks whether the shd pid is present on the given node. If present, as an additional verification, it verifies that the 'self-heal daemon' for the specified node is not there in the get volume status output Change-Id: I4865dc5c493a72ed7334ea998d0a231f4f8c75c8 Signed-off-by: Pranav <prprakas@redhat.com>
* [TestFix] Add code to override the volume configsayaleeraut2020-04-231-38/+73
| | | | | | | | | | | | | | | Changing the distribute count to 4 for the volume type distributed-replicated or distributed-dispersed. Earlier, with distribute count 2, after remove-brick the dist-rep and dist-disp volumes were converted to pure rep or pure dispersed, which caused a "layout not complete" error, since with the DHT pass-through feature the layout is not set on bricks if the volume type is pure replicated/pure dispersed on gluster version 6.0. Adding the distributed-arbiter volume type and code to override its configuration as well. Change-Id: Ic7a3404ed49d24f956de33f7bd5ca8ea61297e5b Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Add tc to check get-state when brick is killedPranav2020-04-231-0/+124
| | | | | | | | | Test case verifies whether the gluster get-state shows the proper brick status in the output. The test case checks the brick status when the brick is up and also after killing the brick process. It also verifies whether the other bricks are up when a particular brick process is killed. Change-Id: I9801249d25be2817104194bb0a8f6a16271d662a Signed-off-by: Pranav <prprakas@redhat.com>
* [Test] Add TC to check SEL context on glusterfs.xml fileLeela Venkaiah G2020-04-221-0/+75
| | | | | | | | | | Test Steps: 1. Check the existence of '/usr/lib/firewalld/services/glusterfs.xml' 2. Validate the owner of this file as 'glusterfs-server' 3. Validate SELinux label context as 'system_u:object_r:lib_t:s0' Change-Id: I55bfb3b51a9188e2088459eaf5304b8b73f2834a Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Test] Add tc to stop rebalance with migration in progresskshithijiyer2020-04-221-0/+167
| | | | | | | | | | | | | | | | | | | | | | Description: This test case creates a large file at the mount point, adds an extra brick and initiates rebalance. While migration is in progress, it stops the rebalance process and checks if it has stopped. Testcase Steps: 1. Create and start a volume. 2. Mount the volume on a client and create a large file. 3. Add bricks to the volume and check the layout 4. Rename the file such that it hashes to a different subvol. 5. Start rebalance on the volume. 6. Stop rebalance on the volume. Change-Id: I7edd37a548467d6624ffe1efa64b0c1b56ff26ed Co-authored-by: Kartik_Burmee <kburmee@redhat.com> Signed-off-by: Kartik_Burmee <kburmee@redhat.com> Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [TestFix] Fix to make TC compatible with DHT libsayaleeraut2020-04-221-38/+52
| | | | | | | | | | | | | The TC was failing with "AssertionError: ('hash range is not there %s', False)" even though the bricks were healed and the directory was created on the non-hashed bricks. This was due to a conflict between the TC and the DHT library changes (added to fix the issues caused by the DHT pass-through functionality). The code is now modified according to the library changes and hence the TC works fine. Change-Id: I501e7db89643822fbc711e631ceacda79e4c4ea4 Signed-off-by: sayaleeraut <saraut@redhat.com>