path: root/tests/functional
Commit message | Author | Age | Files | Lines
...
* Fixing the default quorum type issue (Vijay Avuthu, 2018-08-02, 1 file changed, -3/+3)
  In 3.4 the default quorum type is changed to auto; in pre-3.4 releases it was None.
  Change-Id: I4e58ff8cc4727db81bb6b9baadd101687ddb74b0
  Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
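  A minimal sketch (Python, via the gluster CLI) of checking and setting the quorum option this fix is about; the volume name "testvol" is an assumption, not taken from the test:

      # Hypothetical volume name; illustrates the cluster.quorum-type option.
      import subprocess

      def gluster(*args):
          return subprocess.run(("gluster",) + args, capture_output=True, text=True)

      # Per the commit above, the default changed to "auto" in 3.4 (it was None before).
      print(gluster("volume", "get", "testvol", "cluster.quorum-type").stdout)
      gluster("volume", "set", "testvol", "cluster.quorum-type", "auto")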
* snapshot:: snapshot creation testcase (srivickynesh, 2018-08-02, 1 file changed, -25/+25)
  The purpose of this test is to validate snapshot creation.
  Change-Id: Icacafc48739e6322fb925e117df63462d690251f
  Signed-off-by: srivickynesh <sselvan@redhat.com>
* snapshot:: configuring max-hard and soft-limit of snapshots (srivickynesh, 2018-08-02, 1 file changed, -9/+11)
  This test case verifies the snapshot max-hard-limit and soft-limit and the deletion of snapshots for a volume.
  Change-Id: Iff00475abdaf01b2beac72545774f477e99a13ce
  Signed-off-by: srivickynesh <sselvan@redhat.com>
* Quota: check alert time and message on exceeding soft limit (hari gowtham, 2018-07-31, 1 file changed, -0/+339)
  On a quota-enabled volume, validate that the alert message is printed only after the soft limit is crossed.
  Change-Id: Ia94ae9dd760fed644841df11fe046c184cdd3398
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
  Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
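  A minimal sketch of the quota knobs this test revolves around, assuming a volume "testvol" mounted at /mnt/testvol (all names are illustrative, not from the test):

      # Hypothetical volume and directory names.
      import os, subprocess

      q = ["gluster", "volume", "quota", "testvol"]
      subprocess.run(q + ["enable"], check=True)
      os.makedirs("/mnt/testvol/dir1", exist_ok=True)
      # 100 MB hard limit with a 50% soft limit on /dir1.
      subprocess.run(q + ["limit-usage", "/dir1", "100MB", "50%"], check=True)
      subprocess.run(q + ["soft-timeout", "0"], check=True)  # report usage immediately
      # The alert message should show up in the brick logs only after /dir1
      # crosses the 50 MB soft limit, not before.
      print(subprocess.run(q + ["list", "/dir1"], capture_output=True, text=True).stdout)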
* functional/bvt: verify volume sanity (Akarsha, 2018-07-27, 1 file changed, -0/+106)
  This test verifies that file/directory creation or deletion does not leave behind any stale space.
  Change-Id: I02b16625d64426abef390c5b00473ad8a7b7d84d
  Signed-off-by: Akarsha <akrai@redhat.com>
* cifs: stat-prefetch while io is running (vivek das, 2018-07-27, 2 files changed, -0/+135)
  Run I/O over a CIFS mount and simultaneously toggle the stat-prefetch volume option on and off. The I/O should not fail.
  Change-Id: I9327e599eb3536f3c49a90d468391055ea4c3bf9
  Signed-off-by: vivek das <vdas@redhat.com>
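  A rough sketch of the toggle-while-I/O-is-running idea, assuming a volume "testvol" and a CIFS mount at /mnt/cifs (both hypothetical):

      # Keep writing files while performance.stat-prefetch is flipped on and off.
      import os, subprocess, threading, time

      def writer(path, seconds=30):
          end = time.time() + seconds
          i = 0
          while time.time() < end:
              with open(os.path.join(path, "file_%d" % i), "wb") as f:
                  f.write(os.urandom(4096))
              i += 1

      t = threading.Thread(target=writer, args=("/mnt/cifs",))
      t.start()
      for state in ("off", "on", "off", "on"):
          subprocess.run(["gluster", "volume", "set", "testvol",
                          "performance.stat-prefetch", state], check=True)
          time.sleep(5)
      t.join()  # the I/O loop is expected to complete without errors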
* afr: split-brain resolution on a file not in split-brain (root, 2018-07-24, 1 file changed, -0/+234)
  This test case performs split-brain resolution on a file not in split-brain. This action should fail.
  Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
  Change-Id: I01b9a41530498e96f6092283372798e61a9ac2b2
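  A minimal sketch of the negative check, with made-up volume, brick, and file names; the test expects this resolution attempt to be rejected:

      # Hypothetical names; the file is NOT in split-brain, so resolving it
      # with the source-brick heal command should fail.
      import subprocess

      r = subprocess.run(
          ["gluster", "volume", "heal", "testvol", "split-brain",
           "source-brick", "server1:/bricks/brick1", "/dir1/healthy_file"],
          capture_output=True, text=True)
      print(r.returncode, r.stdout, r.stderr)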
* added an automated dht test case: create directory (Kartik_Burmee, 2018-07-17, 1 file changed, -0/+294)
  Change-Id: I7534850d317993ee0b4c81ec06c1bdaeaf0d7535
  Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
* Tests: ensure volume deletion works when intended. (Yaniv Kaul, 2018-07-17, 2 files changed, -24/+31)
  It could have failed without anyone noticing. Added 'xfail' - whether we expect deletion to fail - and changed the tests accordingly. On the way, ensure stdout and stderr are logged in case of such failures.
  Change-Id: Ibdf7a43cadb0393707a6c68c19a664453a971eb1
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* Shorten all the logs around verify_io_procs (Yaniv Kaul, 2018-07-17, 46 files changed, -730/+702)
  No functional change, just make the tests a bit more readable. It could be moved to a decorator later on, wrapping tests.
  Change-Id: I484bb8b46907ee8f33dfcf4c960737a21819cd6a
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* Test handling data split-brain of files (heal command) (Vitalii Koriakov, 2018-07-11, 1 file changed, -0/+371)
  Change-Id: I9a0cb923c7e3ec0146c2b3b9bf0dcafe6ab892e8
  Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Quota : validate quota on add-brick without rebalance (harigowtham, 2018-07-06, 1 file changed, -0/+166)
  The below operations are performed on a dist-rep volume:
  * Enable quota
  * Set a limit of 1 GB on /
  * Mount the volume
  * Create some random amount of data inside each directory until the quota limit is reached
  * Perform a quota list operation
  * Perform add-brick
  * Try adding files and see if quota is honored
  Change-Id: Idd6a647c65a9e2dc7322bc729b48366cf6a46c85
  Signed-off-by: harigowtham <hgowtham@redhat.com>
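  A sketch of the CLI sequence behind the steps above, with assumed names (volume "testvol", one new replica pair of bricks):

      # Hypothetical volume and brick names.
      import subprocess

      quota = ["gluster", "volume", "quota", "testvol"]
      subprocess.run(quota + ["enable"], check=True)
      subprocess.run(quota + ["limit-usage", "/", "1GB"], check=True)
      # ... fill the mount with data until the limit is reached ...
      print(subprocess.run(quota + ["list"], capture_output=True, text=True).stdout)
      # Add one replica pair (the volume is assumed to be replica 2).
      subprocess.run(["gluster", "volume", "add-brick", "testvol",
                      "server3:/bricks/b3", "server4:/bricks/b4"], check=True)
      # Even without a rebalance, writes beyond 1 GB should still fail with EDQUOT.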
* Quota: test quota list with a directory's deletion and recreation (harigowtham, 2018-07-06, 1 file changed, -0/+149)
  The following steps are followed:
  * Enable quota on the volume
  * Set a quota limit on a non-existing directory
  * Create the directory as above and set the limit
  * List the quota on the volume
  * Delete the directory
  * Recreate the directory
  * List the quota on the volume
  * Check the volume status
  Change-Id: Ieb37b24715f4c4a9689db582489a7d9a965d54ad
  Signed-off-by: harigowtham <hgowtham@redhat.com>
* snapshot: Activate/Deactivate status when glusterd is down (srivickynesh, 2018-07-05, 1 file changed, -0/+204)
  Test cases in this module test the snapshot activation and deactivation status when glusterd is down.
  Change-Id: I9ae97ceba40ce2511b1b731d5e02990f78f424e9
  Signed-off-by: srivickynesh <sselvan@redhat.com>
  Signed-off-by: Vignesh <vignesh@localhost.localdomain>
* Quota: Test case to validate unique soft-limits on directories (Vinayak Papnoi, 2018-07-01, 1 file changed, -0/+176)
  Test quota with respect to the limit-usage and soft-limit quota volume options. Set a unique soft-limit on each directory and create some data such that the soft-limit is exceeded, then validate with the quota list command.
  Change-Id: I80fbb93072a9d8c2fa861c931042ae2fba1fefe1
  Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
* Quota: Test case to validate limit-usage on a symlink (Vinayak Papnoi, 2018-07-01, 1 file changed, -0/+121)
  Quota limit-usage should fail when trying to set the limit on a symlink.
  - Enable quota
  - Set a limit on the volume
  - Create a directory from the mount
  - Create a symlink of the directory from the mount
  - Try to set a quota limit on the symlink
  Change-Id: Ic9979a5c67af74d08527acc0f9a30c1d97291906
  Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
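  A minimal sketch of the expected failure, assuming the volume "testvol" is mounted at /mnt/testvol (directory and symlink names are made up):

      # Setting a quota limit on a symlink path should be rejected.
      import os, subprocess

      os.makedirs("/mnt/testvol/dir1", exist_ok=True)
      os.symlink("/mnt/testvol/dir1", "/mnt/testvol/dir1_link")

      subprocess.run(["gluster", "volume", "quota", "testvol", "enable"], check=True)
      subprocess.run(["gluster", "volume", "quota", "testvol",
                      "limit-usage", "/", "1GB"], check=True)
      r = subprocess.run(["gluster", "volume", "quota", "testvol",
                          "limit-usage", "/dir1_link", "100MB"],
                         capture_output=True, text=True)
      assert r.returncode != 0, "limit-usage on a symlink should fail"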
* afr: Test gfid-split-brain resolution of files (Ravishankar N, 2018-07-01, 1 file changed, -0/+273)
  This test creates gfid split-brain of files and uses the source-brick option in the CLI to resolve them.
  Polarion test: RHG3-4402
  Change-Id: I4fb3f16bfcdf77afe92c3a6f98f259147fef30c2
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* snapshot : validate restore on online volume (Sunny Kumar, 2018-07-01, 1 file changed, -0/+159)
  This test case will validate snap restore on an online volume. When we try to restore an online volume, it should fail.
  Change-Id: If542a6052ee6e9eec7d5e1268f480f047916e66c
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
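  A minimal sketch of the scenario, with hypothetical volume and snapshot names; restore is expected to be rejected while the volume is started:

      import subprocess

      subprocess.run(["gluster", "snapshot", "create", "snap1", "testvol",
                      "no-timestamp"], check=True)
      # The volume is still online, so the restore should fail; it only
      # succeeds after "gluster volume stop testvol".
      r = subprocess.run(["gluster", "--mode=script", "snapshot", "restore", "snap1"],
                         capture_output=True, text=True)
      assert r.returncode != 0, "restore should fail on an online volume"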
* afr: Test metadata split-brain resolution using heal CLI (karthik-us, 2018-07-01, 1 file changed, -0/+282)
  Change-Id: I634d11cb582521b03f0bb481172e2f4f68d1c2ce
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
* Quota : test case with creation of file larger than quota limit (Vinayak Papnoi, 2018-07-01, 1 file changed, -0/+180)
  If a limit is set and a file larger than the limit is created, the file creation will stop when it reaches the limit.
  - Enable quota
  - Create a directory from the mount point
  - Set a limit of 10 MB on the directory
  - Set quota soft-timeout and hard-timeout to 0 seconds
  - Create a file larger than the quota limit, e.g. a 20 MB file
  - Perform a quota list operation to check that all the fields are appropriate, such as hard_limit, available_space, sl_exceeded, hl_exceeded, etc.
  Change-Id: I44d7e23e60589cbe022735e9b4b5f3bfd1b7458e
  Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
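  A sketch of the sequence above using the gluster CLI and dd, with assumed names (volume "testvol" mounted at /mnt/testvol):

      import os, subprocess

      q = ["gluster", "volume", "quota", "testvol"]
      subprocess.run(q + ["enable"], check=True)
      os.makedirs("/mnt/testvol/dir1", exist_ok=True)
      subprocess.run(q + ["limit-usage", "/dir1", "10MB"], check=True)
      subprocess.run(q + ["soft-timeout", "0"], check=True)   # enforce immediately
      subprocess.run(q + ["hard-timeout", "0"], check=True)
      # Try to write 20 MB into the 10 MB-limited directory; the write is
      # expected to be cut off around the hard limit (EDQUOT).
      subprocess.run(["dd", "if=/dev/urandom", "of=/mnt/testvol/dir1/bigfile",
                      "bs=1M", "count=20"])
      print(subprocess.run(q + ["list", "/dir1"], capture_output=True, text=True).stdout)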
* afr: test readlink fop (Ravishankar N, 2018-07-01, 1 file changed, -0/+125)
  Polarion test case #RHG3-4094
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Change-Id: I1f7100ddb6697cfc8749f8cd2c29e14e9bfdb5ce
* afr: test chown, chmod and chgrp (Ravishankar N, 2018-07-01, 1 file changed, -0/+151)
  Polarion test case #RHG3-4094
  Change-Id: Id7492b1e0a7a000ece788c7a0cc4ed9dd8743700
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* glusto/glusterd: Peer status should have FQDN (Sanju Rakonde, 2018-07-01, 1 file changed, -0/+152)
  In this test case we check whether the peer status shows the FQDN. Steps followed:
  1. Peer probe to a new node, i.e. N1 to N2, using the hostname
  2. Check peer status on N2; it should have the FQDN of N1
  3. Check peer status on N1; it should have the FQDN of N2
  4. Create a distributed volume with a single brick on each node
  5. Start the volume
  6. Peer probe to a new node N3 using its IP
  7. Add a brick from node3 to the volume; add-brick should succeed
  8. Get volume info; it should have correct information
  Change-Id: I7f2bb8cecf28e61273ca83d7e3ad502ced979c5c
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
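  A sketch of steps 1-8 run from N1, with hypothetical host names and brick paths (brick paths on the root filesystem would need "force"):

      import subprocess

      subprocess.run(["gluster", "peer", "probe", "node2.example.com"], check=True)
      # Peer status on either node is expected to show the other node's FQDN.
      print(subprocess.run(["gluster", "peer", "status"],
                           capture_output=True, text=True).stdout)
      subprocess.run(["gluster", "volume", "create", "distvol",
                      "node1.example.com:/bricks/b1",
                      "node2.example.com:/bricks/b2"], check=True)
      subprocess.run(["gluster", "volume", "start", "distvol"], check=True)
      subprocess.run(["gluster", "peer", "probe", "10.0.0.3"], check=True)  # N3 by IP
      subprocess.run(["gluster", "volume", "add-brick", "distvol",
                      "10.0.0.3:/bricks/b3"], check=True)
      print(subprocess.run(["gluster", "volume", "info", "distvol"],
                           capture_output=True, text=True).stdout)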
* afr: Heal the directories and assign gfids when doing named lookup (karthik-us, 2018-07-01, 1 file changed, -0/+157)
  If there are directories present on only one brick without having a gfid (created from the backend), heal them and assign gfids when a named lookup comes on those directories.
  Change-Id: I32c27f0b04c8eb36b25899ca9fbe7aef141f13b9
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
* glusto/dht: test case for directory healing (Susant Palai, 2018-07-01, 1 file changed, -0/+139)
  The test case tests directory healing. If a mkdir happens successfully when a subvol is down, the directory should heal and should have a zeroed layout once the subvol is up.
  Change-Id: Ia2f5747a1008112a5dcebda8a953ee3d2de9f75f
  Signed-off-by: Susant Palai <spalai@redhat.com>
* snapshot:: .snaps already present before uss is enabled (srivickynesh, 2018-07-01, 1 file changed, -0/+245)
  Test cases in this module test USS where the .snaps folder is already created, and check the behaviour before enabling uss and after disabling uss.
  Change-Id: I9c6dcca1198b7fed37103cc21f3ecba72bfa20a5
  Signed-off-by: srivickynesh <sselvan@redhat.com>
* Test peer probe while snapd is running. (Rajesh Madaka, 2018-07-01, 1 file changed, -0/+107)
  -> Create volume
  -> Create a snap for that volume
  -> Enable uss
  -> Check whether snapd is running or not
  -> Probe a new node while snapd is running
  Change-Id: Ic28036436dc501ed894f3f99060d0297dd9d3c8a
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
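  A minimal sketch of the setup, with hypothetical volume, snapshot, and peer names:

      import subprocess

      subprocess.run(["gluster", "snapshot", "create", "snap1", "testvol",
                      "no-timestamp"], check=True)
      subprocess.run(["gluster", "volume", "set", "testvol",
                      "features.uss", "enable"], check=True)
      # snapd should now be listed in the volume status output.
      print(subprocess.run(["gluster", "volume", "status", "testvol"],
                           capture_output=True, text=True).stdout)
      # Probe a new node while snapd is running.
      subprocess.run(["gluster", "peer", "probe", "newnode.example.com"], check=True)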
* snapshot: Listing Invalid cases (srivickynesh, 2018-06-30, 1 file changed, -0/+117)
  Test cases in this module test snapshot creation and listing with invalid names and parameters.
  Change-Id: Iab3dd71a83176d850780c0834f95d4e8ac1e8997
  Signed-off-by: srivickynesh <sselvan@redhat.com>
* TC: Induce holes in layout by remove-brick force then lookup (Prasad Desala, 2018-06-30, 1 file changed, -0/+136)
  Objective: When holes are induced in the layout by remove-brick force, a lookup sent on the directory from the client should fix the directory layout without any holes or overlaps.
  Change-Id: If4af10929e8e7d4da93cea80c515e37acf53d34e
  Signed-off-by: Prasad Desala <tdesala@redhat.com>
* Test remove-brick operation when quorum not met (Rajesh Madaka, 2018-06-30, 1 file changed, -0/+181)
  -> Create volume
  -> Enable server quorum
  -> Set server quorum ratio to 95%
  -> Stop glusterd on any one of the nodes
  -> Perform a remove-brick operation
  -> Start glusterd
  -> Check gluster vol info; the bricks should be the same before and after the remove-brick operation
  Change-Id: I4b7e97ecc6cf6854ec8ff36d296824e549bf9b97
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
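  A minimal sketch of the quorum setup and the expected rejection, with assumed names (plain distribute volume "testvol", brick on server2); stopping glusterd on one node is done out of band:

      import subprocess

      subprocess.run(["gluster", "volume", "set", "testvol",
                      "cluster.server-quorum-type", "server"], check=True)
      subprocess.run(["gluster", "volume", "set", "all",
                      "cluster.server-quorum-ratio", "95%"], check=True)
      # With glusterd down on one node the 95% ratio is not met, so the
      # remove-brick should be rejected and "gluster vol info" stays unchanged.
      r = subprocess.run(["gluster", "volume", "remove-brick", "testvol",
                          "server2:/bricks/b2", "start"],
                         capture_output=True, text=True)
      assert r.returncode != 0, "remove-brick should fail when quorum is not met"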
* glusto/dht: test lookup of directories (Susant Palai, 2018-06-29, 1 file changed, -0/+280)
  Case 1:
  - bring down a subvol
  - create a directory so that it does not hash to the down subvol
  - make sure stat is successful on the dir
  Case 2:
  - create a directory
  - bring down its hashed subvol
  - make sure stat is successful on the dir
  Case 3:
  - create a directory
  - bring down an unhashed subvol
  - make sure stat is successful on the dir
  Change-Id: I9cbd2e7f04c885eaa09414d6b49632cf77dd72ec
  Signed-off-by: Susant Palai <spalai@redhat.com>
* Snapshot: Create clone and check for self heal (srivickynesh, 2018-06-29, 1 file changed, -0/+295)
  Create a clone and check the self-heal operation on the cloned volume.
  Change-Id: Icf61f996fcd503a6c0d0bf936900858b715a4742
  Signed-off-by: srivickynesh <sselvan@redhat.com>
* Adding test case for change owner, group, permission for directory (Mohit Agrawal, 2018-06-29, 2 files changed, -0/+414)
  1) Create a dir with some files inside it
  2) Verify the dir exists on all bricks as well as on the mount point
  3) Compare dir stat between the mount point and the brick path
  4) Change the ownership of the directory
  5) Compare dir stat between the mount point and the brick path
  6) Try to change the permission with a different user for the directory
  7) Compare dir stat between the mount point and the brick path
  8) Try to change the permission with a different user for the directory
  9) Change the permission of the directory
  10) Compare dir stat between the mount point and the brick path
  11) Try to change the permission with a different user for the same directory
  Change-Id: I284842be8c7562d4618d4e69e202c4d80945f1c5
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
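  A small sketch of steps 2-5 (compare ownership/permissions between the mount and a brick), with made-up paths and a hypothetical user/group:

      import grp, os, pwd

      mount_dir = "/mnt/testvol/dir1"           # client view
      brick_dir = "/bricks/brick1/dir1"         # backend brick view

      os.makedirs(mount_dir, exist_ok=True)
      uid = pwd.getpwnam("qa_user").pw_uid      # hypothetical user/group
      gid = grp.getgrnam("qa_group").gr_gid
      os.chown(mount_dir, uid, gid)             # change ownership from the mount
      os.chmod(mount_dir, 0o755)                # change permissions

      m, b = os.stat(mount_dir), os.stat(brick_dir)
      # What the client sees must match what is stored on the brick.
      assert (m.st_uid, m.st_gid, m.st_mode) == (b.st_uid, b.st_gid, b.st_mode)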
* TC: Induce holes in layout by remove-brick force then fix-layout (Prasad Desala, 2018-06-29, 1 file changed, -0/+154)
  Objective: When holes are induced in the layout by remove-brick force, a fix-layout start should fix the layout without any holes or overlaps.
  Change-Id: Ie4c47fff11957784044e717c644743263812a0e4
  Signed-off-by: Prasad Desala <tdesala@redhat.com>
* glusto/dht: test case to check mkdir hashed to a down subvolume (Susant Palai, 2018-06-29, 1 file changed, -0/+155)
  The test case verifies that mkdir of a directory hashed to a down subvolume should fail.
  Change-Id: I8465f4869c9283d4339c50cdbd56b0256fa11bb9
  Signed-off-by: Susant Palai <spalai@redhat.com>
* snapshot : validate snapshot delete when glusterd is down (Sunny Kumar, 2018-06-29, 1 file changed, -0/+128)
  This test case validates snapshot delete behaviour when glusterd is down on one node. When we bring the downed node back up, the number of snaps should be consistent after the handshake.
  Change-Id: If7ed6c3f384658faa4eb3de8577e38d3fe55f980
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* Volume start and status when one of the bricks is absent (Rajesh Madaka, 2018-06-28, 1 file changed, -0/+118)
  -> Create volume
  -> Remove any one brick directory
  -> Start the volume
  -> Check the gluster volume status
  Change-Id: I83c25c59607d065f8e411e7befa8f934009a9d64
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* afr: Test data split-brain resolution using heal CLI (karthik-us, 2018-06-28, 1 file changed, -0/+270)
  Change-Id: I525f50a42e29270d9ac445d62e12c7e7e25a7ae3
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
* Snapshot: Delete an existing snapshot scheduled Job (srivickynesh, 2018-06-28, 1 file changed, -0/+238)
  Test cases in this module test the snapshot scheduler for deleting existing scheduled jobs.
  Change-Id: Ibd1a00fb336f279d8c13a89ae513a914977f593d
  Signed-off-by: srivickynesh <sselvan@redhat.com>
* Quota : Test case to validate quota-deem-statfs and quotad (Vinayak Papnoi, 2018-06-27, 1 file changed, -0/+138)
  Test quota with respect to the quota daemon and the features.quota-deem-statfs quota volume option when quota is enabled/disabled on the volume.
  Change-Id: I91a4ced6a5d31fe93c6bb9b0aa842cd5baf38ee0
  Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
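  A minimal sketch of the two knobs involved, assuming a volume named "testvol":

      import subprocess

      # Enabling quota starts the quotad process for the volume.
      subprocess.run(["gluster", "volume", "quota", "testvol", "enable"], check=True)
      # quota-deem-statfs makes df on the mount report sizes based on the quota limits.
      subprocess.run(["gluster", "volume", "set", "testvol",
                      "features.quota-deem-statfs", "on"], check=True)
      print(subprocess.run(["gluster", "volume", "get", "testvol",
                            "features.quota-deem-statfs"],
                           capture_output=True, text=True).stdout)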
* Adding DHT test case: stop glusterd while rebalance in progress (Prasad Desala, 2018-06-27, 1 file changed, -0/+218)
  Rebalance should proceed even if glusterd is down on a node.
  Change-Id: I499e8a4e6b42bd7a8153c1d82c8b329a1933e748
  Signed-off-by: Prasad Desala <tdesala@redhat.com>
* Snapshot: deleting original volume and check clone volume (srivickynesh, 2018-06-27, 1 file changed, -0/+207)
  Test cases in this module test the creation of a clone from a snapshot of a volume, then delete the snapshot and the original volume and validate that the cloned volume is not affected.
  Change-Id: Ic61defe96fed6adeab21b3715878ac8093156645
  Signed-off-by: srivickynesh <sselvan@redhat.com>
* Adding testcase: Exercise rebalance command (Prasad Desala, 2018-06-27, 1 file changed, -0/+376)
  This test case exercises the rebalance commands below:
  1) Rebalance with fix-layout
  2) Rebalance start --> status --> stop
  3) Rebalance with the force option
  Changes:
  - Remove pytest.mark from test cases
  Change-Id: I467de068dabac90018f6241472b2d91d9d9e85a8
  Signed-off-by: Prasad Desala <tdesala@redhat.com>
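  A sketch of the three rebalance flavours listed above, against a hypothetical volume "testvol" (each step assumes the previous rebalance has finished or been stopped):

      import subprocess

      def rebal(*args):
          return subprocess.run(["gluster", "volume", "rebalance", "testvol", *args],
                                capture_output=True, text=True)

      rebal("fix-layout", "start")      # 1) fix-layout only
      rebal("start")                    # 2) start ...
      print(rebal("status").stdout)     #    ... check status ...
      rebal("stop")                     #    ... and stop
      rebal("start", "force")           # 3) full data rebalance with force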
* Test remove brick after restart glusterd (Mohit Agrawal, 2018-06-27, 1 file changed, -0/+174)
  1. Trusted storage pool of 4 nodes
  2. Create a distributed-replicated volume with 4 bricks
  3. Start the volume
  4. Fuse mount the gluster volume on a node outside the trusted storage pool
  5. Create some data files
  6. Start a remove-brick operation for one replica pair
  7. Restart glusterd on all nodes
  8. Try to commit the remove-brick operation while rebalance is in progress; it should fail
  Change-Id: I64901078865ef282b86c9b3ff54d065f976b9e84
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
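  A sketch of steps 6 and 8, with hypothetical volume and brick names (restarting glusterd itself, step 7, is done out of band, e.g. "systemctl restart glusterd" on every node):

      import subprocess

      bricks = ["server3:/bricks/b3", "server4:/bricks/b4"]   # one replica pair
      subprocess.run(["gluster", "volume", "remove-brick", "testvol", *bricks,
                      "start"], check=True)
      # While the remove-brick rebalance is still migrating data, committing
      # the removal is expected to be rejected.
      r = subprocess.run(["gluster", "--mode=script", "volume", "remove-brick",
                          "testvol", *bricks, "commit"],
                         capture_output=True, text=True)
      assert r.returncode != 0, "commit should fail while remove-brick is in progress"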
* snapshot: delete a snap of a volume (srivickynesh, 2018-06-27, 1 file changed, -0/+110)
  Test cases in this module test snapshot deletion by snap name, by volume name, and with the delete-all-snapshots command.
  Change-Id: I1e4e361e58b35744e08e63c48b43d9e8caf2e953
  Signed-off-by: srivickynesh <sselvan@redhat.com>
* Test brick status after stop glusterd and modify volume (Mohit Agrawal, 2018-06-27, 1 file changed, -0/+193)
  1. Trusted storage pool of 2 nodes
  2. Create a distributed volume with 2 bricks
  3. Start the volume
  4. Stop glusterd on node 2
  5. Modify any volume option on node 1
  6. Start glusterd on node 2
  7. Check volume status; the brick should get a port
  Change-Id: I688f954f5f53678290e84df955f5529ededaf78f
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* Test restart glusterd while rebalance is in progress (Rajesh Madaka, 2018-06-27, 1 file changed, -0/+172)
  -> Create volume
  -> Fuse mount the volume
  -> Perform I/O on the fuse mount
  -> Add bricks to the volume
  -> Perform rebalance on the volume
  -> While rebalance is in progress, restart glusterd on all the nodes in the cluster
  Change-Id: I522d7aa55adedc2363bf315f96e51469b6565967
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Adding Testcase: Rebalance_while_remove_brick_in_progress (Prasad Desala, 2018-06-27, 1 file changed, -0/+186)
  While a remove-brick operation is in progress on a volume, glusterd should not allow rebalance on the same volume.
  Change-Id: Ic94754bc12c86a32f2f5fd064129bf6bc038ed6a
  Signed-off-by: Prasad Desala <tdesala@redhat.com>
* Test FuseSubdir with Quota Object functionality (Manisha Saini, 2018-06-26, 1 file changed, -0/+261)
  Change-Id: I80c73f4bb7de90faf482f6c2559d364692067768
  Signed-off-by: Manisha Saini <msaini@redhat.com>
* snapshot:: Delete Snaps and list snaps after restarting glusterd (srivickynesh, 2018-06-26, 1 file changed, -0/+129)
  Test cases in this module test snapshot listing before and after a glusterd restart.
  Change-Id: I7cabe284c52974256e1a48807a5fc1787789583a
  Signed-off-by: srivickynesh <sselvan@redhat.com>