path: root/tests
* [Libfix] Fix python3 getfattr() issues (kshithijiyer, 2020-08-17; 2 files, -16/+22)
  Problem: Patch [1], sent for issue #24, causes a large number of
  testcases to fail or get stuck in the latest DHT run.
  Solution: Make changes so that the getfattr command sends back its
  output in text wherever needed (a sketch follows below).
  Links: [1] https://review.gluster.org/#/c/glusto-tests/+/24841/
  Change-Id: I6390e38130b0699ceae652dee8c3b2db2ef3f379
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
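  A minimal sketch of a text-encoded getfattr call from a test, assuming
  glusto's g.run helper; the node, path and xattr below are hypothetical
  placeholders, not the library change itself:
  ```
  from glusto.core import Glusto as g

  node = "server1.example.com"        # hypothetical server
  fqpath = "/mnt/testvol/file1.txt"   # hypothetical file on a mount
  # '-e text' makes getfattr return the value as a plain string, which
  # parses cleanly on both python2 and python3.
  ret, out, _ = g.run(node, "getfattr --absolute-names -e text "
                            "-n trusted.glusterfs.pathinfo %s" % fqpath)
  if ret == 0:
      g.log.info("pathinfo (text): %s", out)
  ```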
* [Testfix] Change g.log.error() to self.assertTrue() (kshithijiyer, 2020-08-17; 1 file, -2/+2)
  Problem: Testcase test_volume_start_stop_while_rebalance_is_in_progress
  throws the below traceback when run:
  ```
  Traceback (most recent call last):
    File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
      msg = self.format(record)
    File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
      return fmt.format(record)
    File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
      record.message = record.getMessage()
    File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
      msg = msg % self.args
  TypeError: not all arguments converted during string formatting
  Logged from file test_volume_start_stop_while_rebalance_in_progress.py, line 135
  ```
  This is because g.log.error() was used instead of self.assertTrue().
  Solution: Change to self.assertTrue().
  Change-Id: If926eb834c0128a4e507da9fdd805916196432cb
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
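  A self-contained sketch of the pattern this fix applies: a failed check
  must abort the test through an assertion instead of only logging (names
  below are illustrative):
  ```
  import unittest

  class ExampleTest(unittest.TestCase):
      def test_check_aborts_on_failure(self):
          ret = 0  # stand-in for a command's return code
          # Before: g.log.error("Rebalance failed", ret) only logged, and
          # the stray argument triggered the TypeError shown above.
          # After: a failed check stops the test with a clear message.
          self.assertTrue(ret == 0, "Rebalance failed with ret %s" % ret)

  if __name__ == "__main__":
      unittest.main()
  ```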
* [Test] Reset brick and trigger heal full (Bala Konda Reddy M, 2020-08-13; 1 file, -0/+157)
  1. Create volume and create files/dirs from mount point
  2. With IO in progress, execute reset-brick start
  3. Now format the disk from the back-end, using rm -rf <brick path>
  4. Execute reset-brick commit and check that the brick is online
  5. Issue volume heal using "gluster vol heal <volname> full"
  6. Check arequal for all bricks to verify that all backend bricks,
     including the reset brick, have the same data
  Change-Id: I06b93d79200decb25f863e7a3f72fc8e8b1c4ab4
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
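  A hedged sketch of the reset-brick flow from the steps above, using
  glusto's g.run with hypothetical volume, node and brick names; exact
  CLI syntax may vary across glusterfs versions:
  ```
  from glusto.core import Glusto as g

  mnode, volname = "server1.example.com", "testvol"
  brick = "server2.example.com:/bricks/brick0/testvol_brick0"

  # Step 2: take the brick out for maintenance.
  g.run(mnode, "gluster volume reset-brick %s %s start" % (volname, brick))
  # Step 3: format the brick from the back-end (on the brick's own node).
  g.run(brick.split(":")[0], "rm -rf %s" % brick.split(":")[1])
  # Step 4: bring the same brick back.
  g.run(mnode, "gluster volume reset-brick %s %s %s commit force"
        % (volname, brick, brick))
  # Step 5: trigger a full heal.
  g.run(mnode, "gluster volume heal %s full" % volname)
  ```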
* [Test] Replace brick after add brick on disperse (Bala Konda Reddy M, 2020-08-13; 1 file, -0/+168)
  Test Steps:
  1. Create a pure-ec volume (say 1x(4+2))
  2. Mount volume on two clients
  3. Create some files and dirs from both mounts
  4. Add bricks, in this case (4+2), i.e. 6 bricks
  5. Create a new dir (common_dir) and in that directory create a
     distinct directory (using hostname as dirname) for each client and
     pump IOs from the clients (dd)
  6. While IOs are in progress, replace any of the bricks
  7. Check for errors, if any, collected after step 6
  Change-Id: I3125fc5906b5d5e0bc40477e1ed88825f53fa758
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [TestFix] Increased timeout for rebalance (ubansal, 2020-08-12; 1 file, -1/+1)
  TCs were failing due to a timeout issue; increased the rebalance
  timeout from 900 to 1800 (a sketch follows below).
  Change-Id: I726217a21ebbde6391660dd3c9dc096cc9ca6bb4
  Signed-off-by: ubansal <ubansal@redhat.com>
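  A minimal sketch of the change, assuming glustolibs'
  wait_for_rebalance_to_complete accepts a timeout keyword in seconds
  (node and volume names are placeholders):
  ```
  from glustolibs.gluster.rebalance_ops import wait_for_rebalance_to_complete

  mnode, volname = "server1.example.com", "testvol"  # hypothetical
  # Raised from 900 to 1800 so slow rebalances no longer time out.
  ret = wait_for_rebalance_to_complete(mnode, volname, timeout=1800)
  ```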
* [Test] Rootsquash functionality with multiple clients (Manisha Saini, 2020-08-12; 1 file, -0/+174)
  Change-Id: I813f3e78ad8b0b79940635df6721e34e6bc93f34
  Signed-off-by: Manisha Saini <msaini@redhat.com>
* [Test] Tests gfid self heal on arbiter volume (ubansal, 2020-08-10; 1 file, -0/+206)
  Steps:
  1. Create a volume and mount it
  2. Create a directory, say d1
  3. Create deep directories and files in d1
  4. Bring down redundant bricks
  5. Delete d1
  6. Create d1 and the same data again
  7. Bring bricks up
  8. Monitor heal
  9. Verify split-brain
  Change-Id: I778fab6bf6d9f81fca79fe18285073e1f7ccc7e7
  Signed-off-by: ubansal <ubansal@redhat.com>
* [Testfix] Increase sleep time in test_alert_time_out (kshithijiyer, 2020-08-08; 1 file, -2/+2)
  Problem: Test script test_alert_time_out currently fails 2 out of 6
  times when executed on the same setup. This is due to the log files
  not having the 120004 A alert message. This issue is only observed on
  the distributed volume type mounted over the fuse protocol.
  Solution: There is no permanent solution to this problem, as even if
  we increase the sleep to 20 seconds there is still a chance that it
  might fail. The optimal sleep time, where it only fails 5 times in 15
  attempts, is 6 seconds. Hence changing the sleep time to 6 seconds.
  Change-Id: I9e9bd41321e24f502d90c3c34edce9113133755e
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Testfix] Fix missing param in test_rmdir_subvol_down.py (Pranav, 2020-08-07; 1 file, -1/+2)
  The assertIsNotNone call is missing its param.
  Change-Id: Iddff9b203672b2edf702ada624bfac1892641712
  Signed-off-by: Pranav <prprakas@redhat.com>
* [Testfix] Add logic to log more info (kshithijiyer, 2020-08-06; 1 file, -1/+19)
  Adding code to get the dir tree and dump all xattrs in hex for Bug
  1810901 before remove-brick; also adding logic to set the log level
  to debug.
  Change-Id: I9c9c970c4de7d313832f6f189cdca8428a073b1e
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
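  A hedged sketch of the extra logging, assuming glusto's g.run; the
  node, volume and brick path are hypothetical, and getfattr's
  -R/-d/-m/-e hex flags produce the recursive hex dump:
  ```
  from glusto.core import Glusto as g

  node = "server1.example.com"                 # hypothetical brick node
  brickpath = "/bricks/brick0/testvol_brick0"  # hypothetical brick path

  g.run(node, "ls -laR %s" % brickpath)                     # dir tree
  g.run(node, "getfattr -d -m . -e hex -R %s" % brickpath)  # xattrs, hex
  # Raise client log verbosity while the bug is being chased.
  g.run(node, "gluster volume set testvol diagnostics.client-log-level DEBUG")
  ```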
* [TestFix] Fix test test_root_squash_with_volume_restart (Pranav, 2020-08-06; 1 file, -25/+6)
  Integrate the changes made in the library into the test.
  Change-Id: I9bf8c3f1f732132170a96405a4a12839463a2eaa
  Signed-off-by: Pranav <prprakas@redhat.com>
* [Test] Validate data integrity (sayaleeraut, 2020-08-05; 1 file, -0/+169)
  Description: Checks that there is no data loss when a remove-brick
  operation is stopped and then new bricks are added to the volume.
  Steps:
  1) Create a volume.
  2) Mount the volume using FUSE.
  3) Create files and dirs on the mount-point.
  4) Calculate the arequal-checksum on the mount-point.
  5) Start a remove-brick operation on the volume.
  6) While migration is in progress, stop the remove-brick operation.
  7) Add bricks to the volume and trigger rebalance.
  8) Wait for rebalance to complete.
  9) Calculate the arequal-checksum on the mount-point.
  Change-Id: I96a7311f5acd0ae19b17d7b7c7da4d3899cdef77
  Signed-off-by: sayaleeraut <saraut@redhat.com>
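  A sketch of the arequal-based integrity check around steps 4-9,
  assuming the arequal-checksum tool is installed on the client (host
  and mount-point are placeholders):
  ```
  from glusto.core import Glusto as g

  client, mountpoint = "client1.example.com", "/mnt/testvol"  # hypothetical
  _, before, _ = g.run(client, "arequal-checksum -p %s" % mountpoint)
  # ... stop remove-brick, add bricks, wait for rebalance (steps 5-8) ...
  _, after, _ = g.run(client, "arequal-checksum -p %s" % mountpoint)
  assert before == after, "arequal mismatch: data loss during rebalance"
  ```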
* [Test] Test self heal of data and hardlink on arbiter (ubansal, 2020-07-31; 1 file, -0/+256)
  Steps:
  - Create a volume and mount it
  - Disable metadata, data and entry heal
  - Create files and take arequal of the mount point
  - Bring down redundant bricks
  - Append data and create hardlinks
  - Bring up bricks
  - Check healing and split-brain
  - Bring down redundant bricks
  - Truncate data
  - Check that file and hardlink stat match
  - Bring up bricks
  Change-Id: I9b26f2fb26d72b71abd63a25ef8d9173f32997d4
  Signed-off-by: ubansal <ubansal@redhat.com>
* [Testfix] Fix I/O logic to use distinct file names (kshithijiyer, 2020-07-31; 2 files, -2/+5)
  Problem: Testcases test_mount_snap_delete and test_restore_online_vol
  were failing in the latest runs with the below traceback:
  ```
  Traceback (most recent call last):
    File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 1246, in <module>
      rc = args.func(args)
    File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 374, in create_files
      base_file_name, file_types)
    File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 341, in _create_files
      ret = pool.map(lambda file_tuple: _create_file(*file_tuple), files)
    File "/usr/lib64/python3.6/multiprocessing/pool.py", line 266, in map
      return self._map_async(func, iterable, mapstar, chunksize).get()
    File "/usr/lib64/python3.6/multiprocessing/pool.py", line 644, in get
      raise self._value
    File "/usr/lib64/python3.6/multiprocessing/pool.py", line 119, in worker
      result = (True, func(*args, **kwds))
    File "/usr/lib64/python3.6/multiprocessing/pool.py", line 44, in mapstar
      return list(map(*args))
    File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 341, in <lambda>
      ret = pool.map(lambda file_tuple: _create_file(*file_tuple), files)
    File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 270, in _create_file
      with open(file_abs_path, "w+") as new_file:
  FileExistsError: [Errno 17] File exists: '/mnt/testvol_distributed_glusterfs/file1.txt'
  ```
  This was because the I/O logic was trying to create 2 files with the
  same name from 2 clients.
  Fix: Modify the logic to use counters to create files with different
  names.
  Change-Id: I2896736d28f6bd17435f941088fd634347e3f4fd
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
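  An illustrative stdlib-only sketch of the counter idea: give each
  client a disjoint file-name range so two mounts can never race on the
  same path (the mount paths are hypothetical):
  ```
  import os

  mountpoints = ["/mnt/client1", "/mnt/client2"]  # one per client
  counter = 1
  for mountpoint in mountpoints:
      # This client writes file<counter> .. file<counter+98>, so ranges
      # never overlap and no two clients create the same file name.
      for i in range(counter, counter + 99):
          with open(os.path.join(mountpoint, "file%d.txt" % i), "w+") as f:
              f.write("data")
      counter += 100
  ```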
* [Test] Test self-heal of 50k files (ubansal, 2020-07-31; 1 file, -0/+140)
  Steps:
  - Create a volume and mount it
  - Bring bricks offline
  - Write 50k files
  - Bring bricks online
  - Monitor heal completion
  - Check for split-brain
  Change-Id: I40739effdfa1c1068fa0628467154b9a667161a3
  Signed-off-by: ubansal <ubansal@redhat.com>
* [Testfix] Add logic to collect additional info (kshithijiyer, 2020-07-31; 1 file, -1/+16)
  Adding code to get the dir tree and dump all xattrs for Bug 1810901.
  Change-Id: Ia59dcd2623e845066e31037c96a64249efa074c2
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Convert arb to x3 repl volume with IO (Leela Venkaiah G, 2020-07-30; 1 file, -0/+221)
  Steps:
  - Create, start and mount an arbiter volume in two clients
  - Create two dirs, fill IO in the first dir and take note of arequal
  - Start a continuous IO from the second directory
  - Convert arbiter to x2 replicated volume (remove brick)
  - Convert x2 replicated to x3 replicated volume (add brick)
  - Wait for ~5 min for the vol file to be updated on all clients
  - Enable client-side heal options and issue volume heal
  - Validate heal completes with no errors and arequal of the first dir
    matches the initial checksum
  Change-Id: I291acf892b72bc8a05e76d0cffde44d517d05f06
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
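  A hedged sketch of the two conversions with the gluster CLI via
  g.run; volume, node and brick names are hypothetical:
  ```
  from glusto.core import Glusto as g

  mnode, volname = "server1.example.com", "testvol"
  arbiter = "server3.example.com:/bricks/brick0/arbiter0"
  new_brick = "server4.example.com:/bricks/brick0/data2"

  # Arbiter (replica 3 arbiter 1) -> plain x2: drop the arbiter brick.
  g.run(mnode, "gluster volume remove-brick %s replica 2 %s force"
        % (volname, arbiter))
  # x2 -> x3: add a full data brick.
  g.run(mnode, "gluster volume add-brick %s replica 3 %s"
        % (volname, new_brick))
  ```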
* [Test] Test absence of `healed` and `heal-failed` options (Leela Venkaiah G, 2020-07-30; 1 file, -0/+104)
  Steps:
  - Create and mount a replicated volume
  - Kill one of the bricks and write IO from the mount point
  - Verify that `gluster volume heal <volname> info healed` and
    `gluster volume heal <volname> info heal-failed` result in an error
  - Validate that `gluster volume help` doesn't list the `healed` and
    `heal-failed` commands
  Change-Id: Ie1c3db12cdfbd54914e61f812cbdac382c9c723e
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Libfix] Move NFS Ganesha support to GlusterBaseClass (Pranav, 2020-07-30; 12 files, -239/+42)
  Problem: NFS-Ganesha tests inherit 'NfsGaneshaClusterSetupClass',
  whereas the other tests inherit 'GlusterBaseClass'. This causes a
  cyclic dependency when trying to run other modules with NFS-Ganesha.
  Fix:
  1. Move the NFS-Ganesha dependencies to GlusterBaseClass
  2. Modify the NFS-Ganesha tests to inherit from GlusterBaseClass
  3. Remove the setup_nfs_ganesha method call from existing Ganesha
     tests, as it is invoked by default from GlusterBaseClass.SetUpClass
  Change-Id: I1e382fdb2b29585c097dfd0fea0b45edafb6442b
  Signed-off-by: Pranav <prprakas@redhat.com>
* [Test] Rootsquash functionality with volume restart (Manisha Saini, 2020-07-29; 1 file, -0/+196)
  Change-Id: I16c5f070d807673662e5ac3583aace06873a9c14
  Signed-off-by: Manisha Saini <msaini@redhat.com>
* [Test] Test sosreport interoperability with GlusterFS (nchilaka, 2020-07-29; 1 file, -0/+143)
  Description: sos must be able to capture the required logs in the
  sosreport, including gluster logs, without compromising the integrity
  of Gluster, e.g. by deleting socket files.
  Change-Id: Ifec57778ff5d1fc0ceaa3ecf94a9851244076d2b
  Signed-off-by: nchilaka <nchilaka@redhat.com>
* [Testfix] Fix TypeError for python3 (Bala Konda Reddy M, 2020-07-28; 1 file, -5/+6)
  Problem: Test fails with the below traceback when run with python3 as
  default:
  ```
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
  TypeError: a bytes-like object is required, not 'str'
  ```
  Solution: Added ''.encode(), which fixes the issue when run using
  both python2 and python3. Added a check for a core file on the client
  node.
  Change-Id: I8f800f5fad97c3b7591db79ea51203e5293a1f69
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
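  A minimal sketch of the fix pattern: encoding the payload up front
  satisfies bytes-expecting APIs on Python 3 and is harmless on
  Python 2:
  ```
  payload = "some text"

  # Before (TypeError on python3 where a bytes-like object is required):
  #   write_to_channel(payload)        # hypothetical bytes-taking call
  # After (works under both python2 and python3):
  data = payload.encode()
  assert isinstance(data, bytes)
  ```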
* [TestFix] AFR Data heal by default and explicit self-heal command (nchilaka, 2020-07-27; 1 file, -126/+29)
  - Remove unnecessary disablement of client-side heal options
  - Check if client-side heal options are disabled by default
  - Test data heal by the default method
  - Explicit data heal by calling the self-heal command
  Change-Id: I3be9001fc1cf124a4cf5a290cee985e166c0b685
  Signed-off-by: nchilaka <nchilaka@redhat.com>
* [Test] Validate readdirp with rebalance (sayaleeraut, 2020-07-27; 1 file, -0/+173)
  Description: Check that all directories are read and listed while
  rebalance is still in progress.
  Steps:
  1) Create a volume.
  2) Mount the volume using FUSE.
  3) Create a dir "master" on the mount-point.
  4) Create 8000 empty dirs (dir1 to dir8000) inside dir "master".
  5) Now inside a few dirs (e.g. dir1 to dir10), create deep dirs and
     inside every dir, create 50 files.
  6) Collect the number of dirs present on /mnt/<volname>/master.
  7) Change the rebalance throttle to lazy.
  8) Add bricks to the volume (at least 3 replica sets).
  9) Start rebalance using the "force" option on the volume.
  10) List the directories on dir "master".
  Change-Id: I4d04b3e2be93b5c25b5ed70516bb99d99fb1fb8a
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [TestFix] Replace `translate` with `replace` (Leela Venkaiah G, 2020-07-27; 1 file, -8/+1)
  - The `replace` function is used to forgo the version check
  - `unicode` is not being recognized from builtins in py2
  - `replace` seems a better alternative than fixing `unicode`
  Change-Id: Ieb9b5ad283e1a31d65bd8a9715b80f9deb0c05fe
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
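  A small illustration of why replace() forgoes the version check:
  str.translate() takes different arguments on Python 2 and 3, while
  replace() is identical on both:
  ```
  s = "a-b-c"

  # Python 2: s.translate(None, "-")
  # Python 3: s.translate({ord("-"): None})
  # Portable alternative, no version branch needed:
  print(s.replace("-", ""))  # -> "abc"
  ```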
* [TestFix] Break down sparsefile creation into chunks (tag: pre_glusterfs_6_0) (Leela Venkaiah G, 2020-07-22; 1 file, -17/+43)
  - Remove decimal before passing to the `head` command
  - Break up the sparsefile into chunks of ~half of the brick size
  - The whole test has to be skipped due to BZ #1339144
  Change-Id: I7a9ae25798b442c74248954023dd821c3442f8f9
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Testfix] Skip test unless 3 clients are provided (Bala Konda Reddy M, 2020-07-22; 1 file, -24/+9)
  Problem: Creating a third mount object works for the glusterfs
  protocol, but running with nfs/cifs in the future might face
  complications and the test might fail.
  Solution: Skip the test unless three clients are provided (see the
  sketch below). Removing redundant logging and minor fixes.
  Change-Id: Ie657975a46b6989cb9f057f5cc337333bbf1010d
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
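  A self-contained sketch of the guard using unittest's skipTest; the
  clients list is a hypothetical stand-in for the framework's client
  inventory:
  ```
  import unittest

  class ExampleTest(unittest.TestCase):
      clients = ["client1", "client2"]  # hypothetical inventory

      def setUp(self):
          # Skip instead of improvising a third mount for nfs/cifs runs.
          if len(self.clients) < 3:
              self.skipTest("this test requires at least 3 clients")

      def test_something(self):
          pass

  if __name__ == "__main__":
      unittest.main()
  ```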
* [TestFix] Make test compatible with Python 2 (Leela Venkaiah G, 2020-07-20; 1 file, -8/+16)
  - The translate function is available on `unicode` strings in Python 2
  Change-Id: I6aa01606acc73b18d889a965f1c01f9a393c2c46
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Test] Truncate a file with brick down (Bala Konda Reddy M, 2020-07-20; 1 file, -0/+124)
  Test steps:
  1. Create a volume, start and mount it on a client
  2. Bring down redundant bricks in the subvol
  3. Create a file on the volume using "touch"
  4. Truncate the file using "O_TRUNC"
  5. Bring the brick online
  6. Write data to the file and wait for heal completion
  7. Check for crashes and coredumps
  Change-Id: Ie02a56ab5180f6a88e4499c8cf6e5fe5019e8df1
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
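  A minimal sketch of step 4: truncating through open(2) with O_TRUNC
  rather than the truncate utility (the path is hypothetical):
  ```
  import os

  path = "/mnt/testvol/test_file"  # hypothetical file on the mount
  fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
  os.close(fd)
  assert os.path.getsize(path) == 0  # file is now empty
  ```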
* [Test] mv and ls operations with bricks offline (Bala Konda Reddy M, 2020-07-17; 1 file, -2/+121)
  Test Steps:
  1. Create a volume and mount this volume on 3 clients.
  2. Bring down two bricks offline in each subvol.
  3. On client1: under dir1, create files f{1..10000}, run in background
  4. On client2: under /, touch x{1..1000}
  5. On client3: start creating x{1001..10000}
  6. Bring the offline bricks online (bring up all the bricks which
     were down, 2 in each of the two subvols)
  7. While IO on client1 and client3 is happening, on client2 move all
     the x* files into dir1
  8. Perform lookup from client 3
  Change-Id: Ib72648af783535557e20cea7e64ea68036b23121
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Test] Check data integrity for EC volume (ubansal, 2020-07-17; 1 file, -0/+314)
  Steps:
  1. Create an EC volume and mount it
  2. Run different types of IO's
  3. Take arequal of the mountpoint
  4. Bring down redundant bricks
  5. Take arequal of the mountpoint
  6. Bring down another set of redundant bricks
  7. Take arequal of the mountpoint
  Change-Id: If253cdfe462c6671488e858871ec904fbb2f9ead
  Signed-off-by: ubansal <ubansal@redhat.com>
* [Test] Add tc for NFS-Ganesha ExportID (Manisha Saini, 2020-07-17; 1 file, -0/+53)
  Change-Id: I8ae78b06706bc4818cbd2b00b386f362883cb9d7
  Signed-off-by: Manisha Saini <msaini@redhat.com>
* [Test] Rootsquash functionality test with glusterd restart (Manisha Saini, 2020-07-15; 1 file, -0/+190)
  Verification of rootsquash functionality with glusterd restart:
  * Create some files and dirs inside the mount point
  * Set permissions as 777 for the mount point
  * Enable root-squash on the volume
  * Create some more files and dirs
  * Restart glusterd on all the nodes
  * Try to edit a file created in step 1;
    the nfsnobody user should not be allowed to edit it
  * Try to edit a file created in step 5;
    the nfsnobody user should be allowed to edit it
  Change-Id: Id2208127ce3c3ea2181d64af0e5e114c49f196ba
  Signed-off-by: Manisha Saini <msaini@redhat.com>
* [Test] Validate open file migration (sayaleeraut, 2020-07-14; 1 file, -0/+131)
  Description: Checks that files with an open fd are migrated
  successfully.
  Steps:
  1) Create a volume.
  2) Mount the volume using FUSE.
  3) Create files on the volume mount.
  4) Open fds for the files and keep doing read/write operations on
     them.
  5) While the fds are open, add bricks to the volume and trigger
     rebalance.
  6) Wait for rebalance to complete.
  7) Wait for writes on the open fds to complete.
  8) Check for any data loss during rebalance.
  9) Check whether rebalance has any failures.
  Change-Id: I9345827ae36eb6d2c264d0e0874738211aadc55e
  Signed-off-by: sayaleeraut <saraut@redhat.com>
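  An illustrative sketch of step 4: hold one fd open across add-brick
  and rebalance while writing through it (path and sizes are arbitrary):
  ```
  import os

  path = "/mnt/testvol/datafile"  # hypothetical file on the mount
  fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
  try:
      # The fd stays open while bricks are added and rebalance migrates
      # the file; every write must land without data loss.
      for _ in range(1024):
          os.write(fd, b"x" * 4096)
  finally:
      os.close(fd)
  ```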
* [Test] Validate EIO changes to EDQUOTE errors (Leela Venkaiah G, 2020-07-14; 1 file, -0/+389)
  - Tests to check that EIO changes to EDQUOTE errors on reaching quota
  - Scenarios covered are:
    - Redundant bricks are down in a volume
    - Multiple IOs happening from clients
    - A single IO session from a client
  Change-Id: Ie15244231dae7fe2e61cc6df0d7f35d2231d9bdf
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [TestFix] Fix failures in test_dht_create_dir.py (sayaleeraut, 2020-07-09; 1 file, -80/+44)
  Added the following changes:
  1) The test script consists of 2 test cases, hence changed
     setUpClass(cls) to setUp(self).
  2) Changed the code that checks whether the symlink points to the
     correct location in test_create_link_for_directory(self); earlier
     it failed with "AssertionError: sym link does not point to correct
     location" because the output of the 'stat' command for a symlink
     file varies per platform.
  Change-Id: I43f98a0d60b3ebf30236ff7e702667373a39a0e1
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Add file rename test when dest exists in diff subvol combinations (Pranav, 2020-07-08; 1 file, -0/+919)
  Tests to validate the behaviour of rename cases when the destination
  file exists and is hashed or cached to different subvol combinations.
  Change-Id: I44752a444d9c112d590efd66c48ff095c22fcecd
  Signed-off-by: Pranav <prprakas@redhat.com>
* [TestFix] Remove hot and cold bricks list - Part 2 (Bala Konda Reddy M, 2020-07-08; 4 files, -22/+6)
  For the non-tiered volume types, a few test cases collect both
  hot_tier_bricks and cold_tier_bricks while bringing bricks offline,
  which is not needed. Removing the tier kwarg in one of the tests.
  Removing the hot- and cold-tier bricks and collecting only the bricks
  of the particular volume, as mentioned below.
  Removing the below section:
  ```
  bricks_to_bring_offline_dict = (select_bricks_to_bring_offline(
      self.mnode, self.volname))
  bricks_to_bring_offline = list(filter(None, (
      bricks_to_bring_offline_dict['hot_tier_bricks'] +
      bricks_to_bring_offline_dict['cold_tier_bricks'] +
      bricks_to_bring_offline_dict['volume_bricks'])))
  ```
  Modifying as below for bringing bricks offline:
  ```
  bricks_to_bring_offline = bricks_to_bring_offline_dict['volume_bricks']
  ```
  Change-Id: I4f59343b380ced498516794a8cc7c968390a8459
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Test] IO continuity on brick down in EC Volume (Leela Venkaiah G, 2020-07-08; 1 file, -0/+215)
  Test Steps:
  - Create, start and mount an EC volume in two clients
  - Create multiple files and directories, including all file types, in
    one directory from client 1
  - Take an arequal checksum of the above data
  - Create another folder and pump different fops from client 2
  - Fail and bring up redundant bricks in a cyclic fashion in all of
    the subvols, maintaining a minimum delay between each operation
  - In every cycle create a new dir when a brick is down and wait for
    heal
  - Validate that heal info on the volume errors out instantly when a
    brick is down
  - Validate arequal on bringing the brick offline
  Change-Id: Ied5e0787eef786e5af7ea70191f5521b9d5e34f6
  Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Testfix] Fix test_mount_point_not_go_to_rofs failure (kshithijiyer, 2020-07-06; 1 file, -34/+27)
  Problem: Testcase test_mount_point_not_go_to_rofs fails every time in
  the CI runs with the below traceback:
  ```
  >       ret = wait_for_io_to_complete(self.all_mounts_procs, self.mounts)
  tests/functional/arbiter/test_mount_point_while_deleting_files.py:137:
  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
  build/bdist.linux-x86_64/egg/glustolibs/io/utils.py:290: in wait_for_io_to_complete
      ???
  /usr/lib/python2.7/site-packages/glusto/connectible.py:247: in async_communicate
      stdout, stderr = p.communicate()
  /usr/lib64/python2.7/subprocess.py:800: in communicate
      return self._communicate(input)
  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
  self = <subprocess.Popen object at 0x7febb64238d0>, input = None
      def _communicate(self, input):
          if self.stdin:
              # Flush stdio buffer. This might block, if the user has
              # been writing to .stdin in an uncontrolled fashion.
  >           self.stdin.flush()
  E           ValueError: I/O operation on closed file
  /usr/lib64/python2.7/subprocess.py:1396: ValueError
  ```
  This is because self.io_validation_complete is never set to True in
  the testcase.
  Fix: Add code to set self.io_validation_complete to True, and move
  code from tearDownClass to tearDown. Modify the logic to not add both
  clients to self.mounts.
  Change-Id: I51ed635e713838ee3054c4d1dd8c6cdc16bbd8bf
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
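  A hedged sketch of the tearDown pattern the fix describes, written as
  a test-class fragment and assuming the usual glusto-tests base-class
  helpers:
  ```
  from glustolibs.io.utils import wait_for_io_to_complete

  def tearDown(self):
      # Wait only when the test did not already validate its own I/O;
      # once validation succeeds, record it so teardown never re-waits
      # on procs whose stdin is already closed.
      if not self.io_validation_complete:
          ret = wait_for_io_to_complete(self.all_mounts_procs, self.mounts)
          self.assertTrue(ret, "IO failed on some of the clients")
          self.io_validation_complete = True
      self.get_super_method(self, 'tearDown')()
  ```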
* [TestFix] Increased timeout for some TC's (ubansal, 2020-06-29; 1 file, -3/+5)
  A few TCs were failing due to a timeout issue; increased the timeout
  for those TCs.
  Change-Id: Id62bee81e1cb6b8bb3a712858404c7092142072b
  Signed-off-by: ubansal <ubansal@redhat.com>
* [TestFix] Increasing timeout for heal (ubansal, 2020-06-26; 1 file, -2/+2)
  As heal completion is failing intermittently for disperse volumes,
  increased the timeout for heal.
  Change-Id: I5e7b7c8eb332ada1abc72389fc8ce883e269d226
  Signed-off-by: ubansal <ubansal@redhat.com>
* [TestFix] Test FD IO's on replace-brick in EC volume (ubansal, 2020-06-26; 1 file, -4/+41)
  Change-Id: Ib39894e9f44c41f5539377c5c124ad45a786cbb3
  Signed-off-by: ubansal <ubansal@redhat.com>
* [Test] Add file rename tests when dest file in src hashed/cached subvol (Pranav, 2020-06-25; 1 file, -0/+639)
  Tests to validate the behaviour of different file-rename scenarios
  when the destination file exists initially and is hashed to the
  source file's hashed or cached subvol.
  Change-Id: Iec12d33c459cb966861d2efac2bae85103555cc1
  Signed-off-by: Pranav <prprakas@redhat.com>
* [TestFix] Change method name (sayaleeraut, 2020-06-24; 1 file, -1/+1)
  Changing the method name from test_readdirp_with_rebalance(self) to
  test_access_file_with_stale_linkto_xattr(self).
  Change-Id: I5503e301d65f96e38aa135827d8bc698a0371281
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Check access file with stale linkto xattr (sayaleeraut, 2020-06-24; 1 file, -0/+169)
  Description: The test script verifies that a file with a stale linkto
  xattr can be accessed by a non-root user.
  Steps:
  1) Create a volume and start it.
  2) Mount the volume on the client node using FUSE.
  3) Create a file.
  4) Enable performance.parallel-readdir and performance.readdir-ahead
     on the volume.
  5) Rename the file in order to create a linkto file.
  6) Force the linkto xattr values to become stale by changing the dht
     subvols in the graph.
  7) Log in as a non-root user and access the file.
  Change-Id: I4f275dedd47a851c2c4839f51cf1867638a66667
  Signed-off-by: sayaleeraut <saraut@redhat.com>
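  A minimal sketch of step 4, assuming glustolibs' set_volume_options
  helper (node and volume names are placeholders):
  ```
  from glustolibs.gluster.volume_ops import set_volume_options

  mnode, volname = "server1.example.com", "testvol"  # hypothetical
  options = {"performance.parallel-readdir": "on",
             "performance.readdir-ahead": "on"}
  ret = set_volume_options(mnode, volname, options)
  assert ret, "Failed to set volume options"
  ```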
* [TestFix] Remove tier related kwarg from test (Bala Konda Reddy M, 2020-06-24; 1 file, -1/+1)
  Removing the 'add_to_hot_tier' parameter, as it defaults to False and
  is not needed for the add-brick operation in the test since the
  volume type is not tier.
  Change-Id: I4a697a453e368197dfaf143d344a623d449e2614
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Test] Start/Stop of snapd daemon on the cloned volume (srivickynesh, 2020-06-24; 1 file, -0/+377)
  Test cases in this module test the USS functionality of snapshots:
  snapd on a cloned volume, validating that snapshots are present
  inside the .snaps directory by terminating snapd on the nodes one by
  one and validating that the .snaps directory is still accessible.
  Change-Id: I98d48268e7c5c5952a7f0f544960203d8634b7ac
  Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [Test] Set disperse quorum count to 5 and test EC volume (ubansal, 2020-06-24; 1 file, -0/+309)
  On setting the disperse quorum count to 5, at least 5 bricks should
  be online for successful writes on the volume.
  Steps:
  1. Set disperse quorum count to 5
  2. Write and read IO's
  3. Bring down the 1st brick
  4. Writes and reads successful
  5. Bring down the 2nd brick
  6. Writes should fail and reads successful
  7. Write and read again
  8. Writes should fail and reads successful
  9. Rebalance should fail as quorum is not met
  10. Reset the volume
  11. Write and read IO's and validate them
  12. Bring down redundant bricks
  13. Write and read IO's and validate them
  Change-Id: Ib825783f01a394918c9016808cc62f6530fe8c67
  Signed-off-by: ubansal <ubansal@redhat.com>
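  A hedged sketch of steps 1 and 10 with the gluster CLI, assuming the
  disperse.quorum-count volume option of recent glusterfs releases
  (names are placeholders):
  ```
  from glusto.core import Glusto as g

  mnode, volname = "server1.example.com", "testvol"  # hypothetical
  # Step 1: require at least 5 of the 6 (4+2) bricks for writes.
  g.run(mnode, "gluster volume set %s disperse.quorum-count 5" % volname)
  # Step 10: 'volume reset' clears the option along with other settings.
  g.run(mnode, "gluster volume reset %s" % volname)
  ```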
* [TestFix] Remove hot and cold bricks list on regular volumes (Bala Konda Reddy M, 2020-06-24; 15 files, -60/+18)
  For the non-tiered volume types, a few test cases collect both
  hot_tier_bricks and cold_tier_bricks while bringing bricks offline,
  which is not needed. Removing the hot- and cold-tier bricks and
  collecting only the bricks of the particular volume, as mentioned
  below.
  Removing the below section:
  ```
  bricks_to_bring_offline_dict = (select_bricks_to_bring_offline(
      self.mnode, self.volname))
  bricks_to_bring_offline = list(filter(None, (
      bricks_to_bring_offline_dict['hot_tier_bricks'] +
      bricks_to_bring_offline_dict['cold_tier_bricks'] +
      bricks_to_bring_offline_dict['volume_bricks'])))
  ```
  Modifying as below for bringing bricks offline:
  ```
  bricks_to_bring_offline = bricks_to_bring_offline_dict['volume_bricks']
  ```
  Change-Id: Icb1dc4a79cf311b686d839f2c9390371e42142f7
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>