path: root/tests/functional
Commit message | Author | Date | Files | Lines
* snapshot:: .snaps already present before uss is enabled | srivickynesh | 2018-07-01 | 1 | -0/+245
    Test cases in this module test USS when the .snaps folder is already created, and check the behaviour before enabling USS and after disabling USS.
    Change-Id: I9c6dcca1198b7fed37103cc21f3ecba72bfa20a5
    Signed-off-by: srivickynesh <sselvan@redhat.com>
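A rough CLI outline of the scenario above, assuming a placeholder volume "testvol" mounted at /mnt/testvol (the actual test drives these steps through the glusto-tests Python libraries):
    mkdir /mnt/testvol/.snaps                        # user-created .snaps dir while USS is still off
    gluster snapshot create snap1 testvol
    gluster volume set testvol features.uss enable
    ls /mnt/testvol/.snaps                           # behaviour with USS enabled
    gluster volume set testvol features.uss disable
    ls /mnt/testvol/.snaps                           # behaviour after USS is disabled again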
* Test peer probe while snapd is running. | Rajesh Madaka | 2018-07-01 | 1 | -0/+107
    -> Create volume
    -> Create a snap for that volume
    -> Enable USS
    -> Check whether snapd is running
    -> Probe a new node while snapd is running
    Change-Id: Ic28036436dc501ed894f3f99060d0297dd9d3c8a
    Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
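A plain gluster CLI sketch of these steps, with placeholder volume, brick and host names (the test itself is written against the glusto-tests Python libraries):
    gluster volume create testvol server1:/bricks/b1 server2:/bricks/b2
    gluster volume start testvol
    gluster snapshot create snap1 testvol
    gluster volume set testvol features.uss enable
    gluster volume status testvol | grep "Snapshot Daemon"   # confirm snapd is running
    gluster peer probe server3                                # probe a new node while snapd is up
    gluster peer status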
* snapshot: Listing Invalid cases | srivickynesh | 2018-06-30 | 1 | -0/+117
    Test cases in this module test snapshot creation and listing with invalid names and parameters.
    Change-Id: Iab3dd71a83176d850780c0834f95d4e8ac1e8997
    Signed-off-by: srivickynesh <sselvan@redhat.com>
* TC: Induce holes in layout by remove-brick force then lookup | Prasad Desala | 2018-06-30 | 1 | -0/+136
    Objective: When holes are induced in the layout by remove-brick force, a lookup sent on the directory from the client should fix the directory layout without any holes or overlaps.
    Change-Id: If4af10929e8e7d4da93cea80c515e37acf53d34e
    Signed-off-by: Prasad Desala <tdesala@redhat.com>
* Test remove-brick operation when quorum not met | Rajesh Madaka | 2018-06-30 | 1 | -0/+181
    -> Create volume
    -> Enable server quorum
    -> Set server quorum ratio to 95%
    -> Stop glusterd on any one of the nodes
    -> Perform a remove-brick operation
    -> Start glusterd
    -> Check gluster volume info; the bricks should be the same before and after the remove-brick operation.
    Change-Id: I4b7e97ecc6cf6854ec8ff36d296824e549bf9b97
    Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
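An illustrative CLI version of the same flow, using placeholder names; the quorum ratio is a cluster-wide option, so it is set on "all":
    gluster volume set testvol cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 95%
    systemctl stop glusterd                                        # on any one node
    gluster volume remove-brick testvol server2:/bricks/b2 start   # expected to fail while quorum is not met
    systemctl start glusterd                                       # on the stopped node
    gluster volume info testvol                                    # brick list should be unchanged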
* glusto/dht: test lookup of directories | Susant Palai | 2018-06-29 | 1 | -0/+280
    test:
    case -1:
     - bring down a subvol
     - create a directory so that it does not hash to the down subvol
     - make sure stat is successful on the dir
    case -2:
     - create a directory
     - bring down the hashed subvol
     - make sure stat is successful on the dir
    case -3:
     - create a dir
     - bring down the unhashed subvol
     - make sure stat is successful on the dir
    Change-Id: I9cbd2e7f04c885eaa09414d6b49632cf77dd72ec
    Signed-off-by: Susant Palai <spalai@redhat.com>
* Snapshot: Create clone and check for self heal | srivickynesh | 2018-06-29 | 1 | -0/+295
    Create a clone and check the self-heal operation on the cloned volume.
    Change-Id: Icf61f996fcd503a6c0d0bf936900858b715a4742
    Signed-off-by: srivickynesh <sselvan@redhat.com>
* Adding test case for change owner, group, permission for directory | Mohit Agrawal | 2018-06-29 | 2 | -0/+414
    1) Create a dir with some files inside it
    2) Verify the dir exists on all bricks as well as on the mount point
    3) Compare dir stat between the mount point and the brick path
    4) Change the ownership of the directory
    5) Compare dir stats between the mount point and the brick path
    6) Try to change permission of the directory with a different user
    7) Compare dir stat between the mount point and the brick path
    8) Try to change permission of the directory with a different user
    9) Change permission of the directory
    10) Compare dir stat between the mount point and the brick path
    11) Try to change permission of the same directory with a different user
    Change-Id: I284842be8c7562d4618d4e69e202c4d80945f1c5
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
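A minimal shell sketch of the checks above, assuming a FUSE mount at /mnt/testvol, a brick path /bricks/b1, and hypothetical users user1/user2:
    mkdir /mnt/testvol/dir1 && touch /mnt/testvol/dir1/file1
    stat /mnt/testvol/dir1                        # compare with the same dir on each brick
    stat /bricks/b1/dir1
    chown user1 /mnt/testvol/dir1                 # change ownership, then re-compare stats
    su - user2 -c "chmod 777 /mnt/testvol/dir1"   # non-owner: expected to fail
    chmod 755 /mnt/testvol/dir1                   # owner/root: allowed, then re-compare stats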
* TC: Induce holes in layout by remove-brick force then fix-layout | Prasad Desala | 2018-06-29 | 1 | -0/+154
    Objective: When holes are induced in the layout by remove-brick force, a fix-layout start should fix the layout without any holes or overlaps.
    Change-Id: Ie4c47fff11957784044e717c644743263812a0e4
    Signed-off-by: Prasad Desala <tdesala@redhat.com>
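A short CLI sketch of the scenario, with placeholder brick and volume names:
    gluster volume remove-brick testvol server3:/bricks/b3 force   # removes the brick without migration, leaving a hole in the layout
    gluster volume rebalance testvol fix-layout start
    gluster volume rebalance testvol status                        # after completion the layout should have no holes or overlaps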
* glusto/dht: test case to check mkdir hashed to a down subvolume | Susant Palai | 2018-06-29 | 1 | -0/+155
    The test case verifies that mkdir of a directory hashed to a down subvolume fails.
    Change-Id: I8465f4869c9283d4339c50cdbd56b0256fa11bb9
    Signed-off-by: Susant Palai <spalai@redhat.com>
* snapshot : validate snapshot delete when glusterd is down | Sunny Kumar | 2018-06-29 | 1 | -0/+128
    This test case validates snapshot delete behaviour when glusterd is down on one node. When the brought-down node comes back up, the number of snaps should be consistent after the handshake.
    Change-Id: If7ed6c3f384658faa4eb3de8577e38d3fe55f980
    Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* Volume start and status when one of the bricks is absent | Rajesh Madaka | 2018-06-28 | 1 | -0/+118
    -> Create volume
    -> Remove any one brick directory
    -> Start the volume
    -> Check the gluster volume status
    Change-Id: I83c25c59607d065f8e411e7befa8f934009a9d64
    Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* afr: Test data split-brain resolution using heal CLI | karthik-us | 2018-06-28 | 1 | -0/+270
    Change-Id: I525f50a42e29270d9ac445d62e12c7e7e25a7ae3
    Signed-off-by: karthik-us <ksubrahm@redhat.com>
* Snapshot: Delete an existing snapshot scheduled job | srivickynesh | 2018-06-28 | 1 | -0/+238
    Test cases in this module test the snapshot scheduler for deleting existing scheduled jobs.
    Change-Id: Ibd1a00fb336f279d8c13a89ae513a914977f593d
    Signed-off-by: srivickynesh <sselvan@redhat.com>
* Quota : Test case to validate quota-deem-statfs and quotad | Vinayak Papnoi | 2018-06-27 | 1 | -0/+138
    Test quota with respect to the quota daemon and the features.quota-deem-statfs volume option when quota is enabled/disabled on the volume.
    Change-Id: I91a4ced6a5d31fe93c6bb9b0aa842cd5baf38ee0
    Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
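A hand-run approximation with a placeholder volume name and mount path; the test performs the equivalent checks through glusto-tests helpers:
    gluster volume quota testvol enable
    pgrep -af quotad                                       # quotad should be running once quota is enabled
    gluster volume get testvol features.quota-deem-statfs
    gluster volume quota testvol limit-usage / 1GB
    df -h /mnt/testvol                                     # with quota-deem-statfs on, df reflects the quota limit
    gluster volume quota testvol disable                   # quotad should go away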
* Adding DHT test case: stop glusterd while rebalance in progress | Prasad Desala | 2018-06-27 | 1 | -0/+218
    Rebalance should proceed even if glusterd is down on a node.
    Change-Id: I499e8a4e6b42bd7a8153c1d82c8b329a1933e748
    Signed-off-by: Prasad Desala <tdesala@redhat.com>
* Snapshot: deleting original volume and check clone volume | srivickynesh | 2018-06-27 | 1 | -0/+207
    Test cases in this module test creation of a clone from a snapshot of a volume, then delete the snapshot and the original volume and validate that the cloned volume is not affected.
    Change-Id: Ic61defe96fed6adeab21b3715878ac8093156645
    Signed-off-by: srivickynesh <sselvan@redhat.com>
* Adding testcase: Exercise rebalance command | Prasad Desala | 2018-06-27 | 1 | -0/+376
    This testcase exercises the rebalance commands below:
    1) Rebalance with fix-layout
    2) Rebalance start --> status --> stop
    3) Rebalance with force option
    Changes:
    - Remove pytest.mark from test cases
    Change-Id: I467de068dabac90018f6241472b2d91d9d9e85a8
    Signed-off-by: Prasad Desala <tdesala@redhat.com>
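The exercised operations map to the following gluster CLI calls (volume and brick names are placeholders):
    gluster volume add-brick testvol server4:/bricks/b4    # grow the volume so rebalance has work to do
    gluster volume rebalance testvol fix-layout start
    gluster volume rebalance testvol start
    gluster volume rebalance testvol status
    gluster volume rebalance testvol stop
    gluster volume rebalance testvol start force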
* Test remove brick after restart glusterd | Mohit Agrawal | 2018-06-27 | 1 | -0/+174
    1. Trusted storage pool of 4 nodes
    2. Create a distributed-replicated volume with 4 bricks
    3. Start the volume
    4. FUSE mount the gluster volume on a node outside the trusted storage pool
    5. Create some data files
    6. Start a remove-brick operation for one replica pair
    7. Restart glusterd on all nodes
    8. Try to commit the remove-brick operation while rebalance is in progress; it should fail
    Change-Id: I64901078865ef282b86c9b3ff54d065f976b9e84
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* snapshot: delete a snap of a volume | srivickynesh | 2018-06-27 | 1 | -0/+110
    Test cases in this module test snapshot deletion with snapname, with volumename, and with the delete-all-snapshots command.
    Change-Id: I1e4e361e58b35744e08e63c48b43d9e8caf2e953
    Signed-off-by: srivickynesh <sselvan@redhat.com>
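The three deletion modes correspond to these CLI forms (snap and volume names are placeholders):
    gluster snapshot delete snap1              # delete a single snap by name
    gluster snapshot delete volume testvol     # delete all snaps of one volume
    gluster snapshot delete all                # delete every snapshot on the cluster
    gluster snapshot list                      # verify the expected snaps are gone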
* Test brick status after stop glusterd and modify volume | Mohit Agrawal | 2018-06-27 | 1 | -0/+193
    1. Trusted storage pool of 2 nodes
    2. Create a distributed volume with 2 bricks
    3. Start the volume
    4. Stop glusterd on node 2
    5. Modify any of the volume options on node 1
    6. Start glusterd on node 2
    7. Check the volume status; the brick should get a port
    Change-Id: I688f954f5f53678290e84df955f5529ededaf78f
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* Test restart glusterd while rebalance is in progress | Rajesh Madaka | 2018-06-27 | 1 | -0/+172
    -> Create volume
    -> FUSE mount the volume
    -> Perform I/O on the FUSE mount
    -> Add bricks to the volume
    -> Perform rebalance on the volume
    -> While rebalance is in progress, restart glusterd on all the nodes in the cluster
    Change-Id: I522d7aa55adedc2363bf315f96e51469b6565967
    Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Adding Testcase: Rebalance_while_remove_brick_in_progress | Prasad Desala | 2018-06-27 | 1 | -0/+186
    While a remove-brick operation is in progress on a volume, glusterd should not allow rebalance on the same volume.
    Change-Id: Ic94754bc12c86a32f2f5fd064129bf6bc038ed6a
    Signed-off-by: Prasad Desala <tdesala@redhat.com>
* Test FuseSubdir with Quota Object functionality | Manisha Saini | 2018-06-26 | 1 | -0/+261
    Change-Id: I80c73f4bb7de90faf482f6c2559d364692067768
    Signed-off-by: Manisha Saini <msaini@redhat.com>
* snapshot:: Delete Snaps and list snaps after restarting glusterd | srivickynesh | 2018-06-26 | 1 | -0/+129
    Test cases in this module test snapshot listing before and after a glusterd restart.
    Change-Id: I7cabe284c52974256e1a48807a5fc1787789583a
    Signed-off-by: srivickynesh <sselvan@redhat.com>
* Test rebalance spurious failure | Mohit Agrawal | 2018-06-26 | 1 | -0/+174
    1. Trusted storage pool of 3 nodes
    2. Create a distributed volume with 3 bricks
    3. Start the volume
    4. FUSE mount the gluster volume on a node outside the trusted storage pool
    5. Remove a brick from the volume
    6. Check remove-brick status
    7. Stop the remove-brick process
    8. Perform fix-layout on the volume
    9. Get the rebalance fix-layout status
    10. Create a directory from the mount point
    11. Check the trusted.glusterfs.dht extended attribute for the newly created directory on the removed brick
    Change-Id: I055438056a9b5df26599a503dd413225eb6f87f5
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
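Roughly, in CLI terms (placeholder names; the last step reads the layout xattr directly on the brick):
    gluster volume remove-brick testvol server3:/bricks/b3 start
    gluster volume remove-brick testvol server3:/bricks/b3 status
    gluster volume remove-brick testvol server3:/bricks/b3 stop
    gluster volume rebalance testvol fix-layout start
    gluster volume rebalance testvol status
    mkdir /mnt/testvol/newdir
    getfattr -n trusted.glusterfs.dht -e hex /bricks/b3/newdir   # layout xattr on the brick whose removal was stopped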
* Snapshot: Enable USS on volume when snapshots are taken while IO is going on | srivickynesh | 2018-06-26 | 1 | -0/+178
    Test cases in this module test the USS functionality while IO is going on.
    Change-Id: Ie7bf440b02980a0606bf4c4061f5a1628179c128
    Signed-off-by: srivickynesh <sselvan@redhat.com>
* functional/disperse: Verify IO hang during server side heal. | Ashish Pandey | 2018-06-26 | 1 | -0/+139
    When IOs are done with client-side heal disabled, they should not hang.
    RHG3-11098
    Change-Id: I2f180dd1ba2f45ae0f302a730a02b90ae77b99ad
    Signed-off-by: Ashish Pandey <aspandey@redhat.com>
* glusto/dht: Verify create file operation | Susant Palai | 2018-06-26 | 1 | -0/+166
    test case:
    - Verify that the file is created on the hashed subvol alone
    - Verify that the trusted.glusterfs.pathinfo reflects the file location
    - Verify that the file creation fails if the hashed subvol is down
    Change-Id: I951c20f03772a0c5739244ec354f9bbfd6d0ea65
    Signed-off-by: Susant Palai <spalai@redhat.com>
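The pathinfo check can be reproduced by hand with a virtual xattr read on the mount (paths are placeholders):
    touch /mnt/testvol/file1
    getfattr -n trusted.glusterfs.pathinfo -e text /mnt/testvol/file1   # reports the brick (hashed subvol) holding the file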
* Test to verify whether invalid values are accepted in auth options. | Jilju Joy | 2018-06-26 | 1 | -0/+164
    Change-Id: I9caa937f74fefad3b9cf13fc0abd0e6a4b380b96
* Fuse mount point should not go to read-only while deleting files | Vitalii Koriakov | 2018-06-26 | 1 | -0/+253
    Change-Id: I0360e6590425aea48d7acf2ddb10d9fbfe9fdeef
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* No errors should be generated in brick logs after deleting files from mountpoint | Rajesh Madaka | 2018-06-26 | 1 | -0/+179
    -> Create volume
    -> Mount volume
    -> Write files on the mount point
    -> Delete files from the mount point
    -> Check all brick logs for any errors
    Change-Id: Ic744ad04daa0bdb7adcc672360c9ed03f56004ab
    Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Quota : Validate limit on deep directories | harigowtham | 2018-06-26 | 1 | -0/+228
    The below operations are performed on various types of volumes and mounts:
    * Enable quota
    * Create 10 directories one inside the other and set a limit of 1GB on each directory
    * Perform a quota list operation
    * Create some random amount of data inside each directory
    * Perform a quota list operation
    * Remove the quota limit and delete the data
    Change-Id: I2a706eba5c23909e2e6996f485b3f4ead9d5dbca
    Signed-off-by: harigowtham <hgowtham@redhat.com>
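Approximately, for one mount (volume name, mount path and directory depth shown here are placeholders):
    gluster volume quota testvol enable
    mkdir -p /mnt/testvol/d1/d2/d3/d4/d5/d6/d7/d8/d9/d10
    gluster volume quota testvol limit-usage /d1 1GB      # repeated for every nested level
    gluster volume quota testvol limit-usage /d1/d2 1GB
    gluster volume quota testvol list                     # check hard limit, used and available space
    gluster volume quota testvol remove /d1               # drop the limit before deleting the data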
* Testcase to verify authentication feature using a combination of IP and fqdn | Jilju Joy | 2018-06-26 | 1 | -0/+162
    Change-Id: I02d3b6f77276e11417bab6236e74d1be0e6a3b32
* afr: Test gfid assignment on dist-rep volume | karthik-us | 2018-06-25 | 1 | -0/+135
    This test case checks whether a directory with a null gfid gets gfids assigned on all the subvols of a dist-rep volume when a lookup comes on that directory from the mount point.
    Change-Id: Ie68cd0e8b293e9380532e2ccda3d53659854de9b
    Signed-off-by: karthik-us <ksubrahm@redhat.com>
* snapshot : snapshot scheduler behaviour | Sunny Kumar | 2018-06-25 | 1 | -0/+151
    This test case validates snapshot scheduler behaviour when we enable/disable the scheduler.
    Change-Id: Ia6f01a9853aaceb05155bfc92cccba686d320e43
    Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* FuseSubdir Test With replace Brick functionality | Manisha Saini | 2018-06-25 | 1 | -0/+228
    Change-Id: I14807c51fb534e5b729da6de69eb062601e80b42
    Signed-off-by: Manisha Saini <msaini@redhat.com>
* Remove faulty subvolume and add with new subvol | Vitalii Koriakov | 2018-06-25 | 1 | -0/+221
    Change-Id: I29eefb9ba5bbe46ba79267b85fb8814a14d10b00
    Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
* Test rebalance hang after stop glusterd on other node | Mohit Agrawal | 2018-06-25 | 1 | -0/+201
    1. Trusted storage pool of 2 nodes
    2. Create a distributed volume with 2 bricks
    3. Start the volume
    4. Mount the volume
    5. Add some data files on the mount
    6. Start rebalance with force
    7. Stop glusterd on the 2nd node
    8. Check rebalance status; it should not hang
    9. Issue volume-related commands
    Change-Id: Ie3e809e5fe24590eec070607ee99417d0bea0aa0
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* functional/disperse: verify IO hang during client side heal | Sunil Kumar Acharya | 2018-06-25 | 1 | -0/+150
    When IOs are done with server-side heal disabled, they should not hang.
    The ec_check_heal_comp function will fail because of bug 1593224 (client-side heal is not removing the dirty flag for some of the files). While this bug has been raised and is being investigated by dev, this patch is doing its job and testing the target functionality.
    RHG3-11097
    Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
    Signed-off-by: Ashish Pandey <aspandey@redhat.com>
    Change-Id: I841285c9b1a747f5800ec8cdd29a099e5fcc08c5
    Signed-off-by: Ashish Pandey <aspandey@redhat.com>
* snapshot : validate uss when brick is down | Sunny Kumar | 2018-06-25 | 1 | -0/+202
    This test case validates USS behaviour when we enable USS on the volume while a brick is down.
    Change-Id: I9be021135c1f038a0c6949ce2484b47cd8634c1e
    Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* Validate snap info from detached node in the cluster | Bala Konda Reddy Mekala | 2018-06-25 | 1 | -0/+143
    Change-Id: Ica3d1175ee5d2c6a45e7b7d6513885ee2b84d960
    Signed-off-by: Bala Konda Reddy Mekala <bmekala@redhat.com>
* glusto-tests/glusterd: add brick should fail when server quorum is not met | Sanju Rakonde | 2018-06-25 | 1 | -0/+159
    Steps followed are:
    1. Create and start a volume
    2. Set cluster.server-quorum-type as server
    3. Set cluster.server-quorum-ratio as 95%
    4. Bring down glusterd on half of the nodes
    5. Confirm that quorum is not met by checking whether the bricks are down
    6. Perform an add-brick operation, which should fail
    7. Check whether the added brick is part of the volume
    Change-Id: I93e3676273bbdddad4d4920c46640e60c7875964
    Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
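The same flow expressed as plain CLI commands (placeholder volume, brick and host names):
    gluster volume set testvol cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 95%
    systemctl stop glusterd                                # on half of the pool nodes
    gluster volume status testvol                          # bricks should be down once quorum is lost
    gluster volume add-brick testvol server4:/bricks/b4    # expected to fail while quorum is not met
    gluster volume info testvol                            # the new brick must not appear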
* Quota : Limit-Usage on volume as well as sub-directories | Vinayak Papnoi | 2018-06-25 | 1 | -0/+180
    - Enable Quota
    - Set quota limit of 1 GB on /
    - Create 10 directories inside volume
    - Set quota limit of 100 MB on directories
    - Fill data inside the directories till quota limit is reached
    - Validate the size fields using quota list
    Change-Id: I917da8cdf0d78afd6eeee22b6cf6a4d580ac0c9f
    Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
* afr: heal gfids for files created from backend | Ravishankar N | 2018-06-25 | 1 | -0/+178
    Change-Id: Iaaa78c071bd7ee3ad3ed222957e71aec61f80045
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* This TC is to verify that auth allow fails with a blank string for volumes and subdir | ubansal | 2018-06-24 | 1 | -0/+80
    Change-Id: I8c71470a67fef17d54d5fdfbcf0d36eb156c07dd
    Signed-off-by: ubansal <ubansal@redhat.com>
* Testcase for FUSE mount to not allow mounting of rejected client using hostname | ubansal | 2018-06-24 | 1 | -0/+191
    Change-Id: I84c4375c38ef7322e65f113db6c6229620c57214
    Signed-off-by: ubansal <ubansal@redhat.com>
* Test peer probe after setting global options to the volume | Rajesh Madaka | 2018-06-24 | 1 | -0/+136
    -> Set global options and other volume specific options on the volume
    -> gluster volume set VOL nfs.rpc-auth-allow 1.1.1.1
    -> gluster volume set VOL nfs.addr-namelookup on
    -> gluster volume set VOL cluster.server-quorum-type server
    -> gluster volume set VOL network.ping-timeout 20
    -> gluster volume set VOL nfs.port 2049
    -> gluster volume set VOL performance.nfs.write-behind on
    -> Peer probe for a new node
    Change-Id: Ifc06159b436e74ae4865ffcbe877b84307d517fd
    Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* TC for authentication for volume by using auth reject | ubansal | 2018-06-23 | 1 | -0/+169
    Change-Id: I5dd80e8b1ec8a0e3ab7f565c478be368c2e7c73d
    Signed-off-by: ubansal <ubansal@redhat.com>
* Quota : Test case for quota with multiple values of limit-usage | Vinayak Papnoi | 2018-06-23 | 1 | -0/+277
    Test quota limit-usage by setting limits of various values, big, small and decimal, e.g. 1GB, 10GB, 2.5GB, etc., and validate the limits by creating data beyond the hard limits (after the hard limit is reached, data creation should stop).
    Addressed review comments.
    Change-Id: If2801cf13ea22c253b22ecb41fc07f2f1705a6d7
    Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>