path: root/tests/functional/glusterd
...
* Fix for tests/functional/glusterd/test_brick_status_when_quorum_not_met.py
  Author: Sri | Date: 2018-09-20 | Files: 1 | Lines: -1/+7
  Change-Id: I3c6a9169f3b6afb7f92aca28f59efa42ca2a6b21
  Signed-off-by: Sri <sselvan@redhat.com>
* Fix for test_add_brick_when_quorum_not_met.py
  Author: Sri | Date: 2018-09-18 | Files: 1 | Lines: -20/+18
  Change-Id: If8d18cd60d1993ce46fa019b659770cf6e7aa6b8
  Signed-off-by: Sri <sselvan@redhat.com>
* Fix spelling mistake across the codebase
  Author: Nigel Babu | Date: 2018-08-07 | Files: 18 | Lines: -33/+33
  Change-Id: I46fc2feffe6443af6913785d67bf310838532421
* Tests: ensure volume deletion works when intended.
  Author: Yaniv Kaul | Date: 2018-07-17 | Files: 2 | Lines: -24/+31

  It could have failed without anyone noticing. Added 'xfail' - do we expect
  the deletion to fail? - and changed the tests accordingly. On the way,
  ensure stdout and stderr are logged in case of such failures.

  Change-Id: Ibdf7a43cadb0393707a6c68c19a664453a971eb1
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* Shorten all the logs around verify_io_procs
  Author: Yaniv Kaul | Date: 2018-07-17 | Files: 8 | Lines: -27/+25

  No functional change, just make the tests a bit more readable.
  It could be moved to a decorator later on, wrapping tests.

  Change-Id: I484bb8b46907ee8f33dfcf4c960737a21819cd6a
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* glusto/glusterd: Peer status should have FQDN
  Author: Sanju Rakonde | Date: 2018-07-01 | Files: 1 | Lines: -0/+152

  In this test case we check whether peer status shows the FQDN.
  Steps followed are:
  1. Peer probe to a new node, i.e. N1 to N2, using the hostname
  2. Check peer status on N2; it should have the FQDN of N1
  3. Check peer status on N1; it should have the FQDN of N2
  4. Create a distributed volume with a single brick on each node
  5. Start the volume
  6. Peer probe a new node N3 using its IP
  7. Add a brick from node3 to the volume; add-brick should succeed
  8. Get volume info; it should have the correct information

  Change-Id: I7f2bb8cecf28e61273ca83d7e3ad502ced979c5c
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
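  For reference, the probe-by-hostname flow above maps onto the gluster CLI roughly as
  follows (the volume name "testvol", hostnames N1/N2, and the IP for N3 are illustrative
  placeholders, not taken from the commit):

      # on N1: probe the new node by hostname, then verify FQDNs on both sides
      gluster peer probe N2.example.com
      gluster peer status            # run on both N1 and N2; Hostname should show the peer's FQDN
      # distributed volume with a single brick on each node
      gluster volume create testvol N1.example.com:/bricks/b1 N2.example.com:/bricks/b1
      gluster volume start testvol
      gluster peer probe 10.0.0.3    # probe N3 by IP
      gluster volume add-brick testvol 10.0.0.3:/bricks/b1
      gluster volume info testvol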
* Test peer probe while snapd is running.
  Author: Rajesh Madaka | Date: 2018-07-01 | Files: 1 | Lines: -0/+107

  -> Create Volume
  -> Create snap for that volume
  -> Enable uss
  -> Check snapd running or not
  -> Probe a new node while snapd is running

  Change-Id: Ic28036436dc501ed894f3f99060d0297dd9d3c8a
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
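  For reference, a rough CLI sketch of this scenario (volume "testvol", snapshot "snap1"
  and host N3 are placeholder names):

      gluster snapshot create snap1 testvol no-timestamp
      gluster volume set testvol features.uss enable
      gluster volume status testvol      # the Snapshot Daemon (snapd) should be listed and online
      gluster peer probe N3              # probe the new node while snapd is running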
* Test remove-brick operation when quorum not met
  Author: Rajesh Madaka | Date: 2018-06-30 | Files: 1 | Lines: -0/+181

  -> Create volume
  -> Enable server quorum
  -> Set server quorum ratio to 95%
  -> Stop glusterd on any one of the nodes
  -> Perform remove brick operation
  -> Start glusterd
  -> Check gluster vol info; the bricks should be the same before and after
     performing the remove brick operation

  Change-Id: I4b7e97ecc6cf6854ec8ff36d296824e549bf9b97
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
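  For reference, the server-quorum setup above corresponds roughly to the following CLI
  (volume name, host and brick path are placeholders); with quorum lost, the remove-brick
  is expected to be rejected:

      gluster volume set testvol cluster.server-quorum-type server
      gluster volume set all cluster.server-quorum-ratio 95%
      systemctl stop glusterd            # on any one node: quorum is now lost
      gluster volume remove-brick testvol N2:/bricks/b1 start    # expected to fail
      systemctl start glusterd
      gluster volume info testvol        # brick list should be unchanged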
* Volume start and status when one of the bricks is absent
  Author: Rajesh Madaka | Date: 2018-06-28 | Files: 1 | Lines: -0/+118

  -> Create Volume
  -> Remove any one brick directory
  -> Start Volume
  -> Check the gluster volume status

  Change-Id: I83c25c59607d065f8e411e7befa8f934009a9d64
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Test remove brick after restart glusterd
  Author: Mohit Agrawal | Date: 2018-06-27 | Files: 1 | Lines: -0/+174

  1. Trusted storage pool of 4 nodes
  2. Create a distributed-replicated volume with 4 bricks
  3. Start the volume
  4. Fuse mount the gluster volume on a node outside the trusted storage pool
  5. Create some data files
  6. Start the remove-brick operation for one replica pair
  7. Restart glusterd on all nodes
  8. Try to commit the remove-brick operation while rebalance is in progress;
     it should fail

  Change-Id: I64901078865ef282b86c9b3ff54d065f976b9e84
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
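  For reference, a hedged sketch of the remove-brick flow above (volume name, hosts and
  brick paths are placeholders):

      gluster volume remove-brick testvol N3:/bricks/b1 N4:/bricks/b1 start
      systemctl restart glusterd         # on all nodes
      gluster volume remove-brick testvol N3:/bricks/b1 N4:/bricks/b1 status
      # committing while data migration is still in progress is expected to fail
      gluster volume remove-brick testvol N3:/bricks/b1 N4:/bricks/b1 commit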
* Test brick status after stop glusterd and modify volume
  Author: Mohit Agrawal | Date: 2018-06-27 | Files: 1 | Lines: -0/+193

  1. Trusted storage pool of 2 nodes
  2. Create a distributed volume with 2 bricks
  3. Start the volume
  4. Stop glusterd on node 2
  5. Modify any volume option on node 1
  6. Start glusterd on node 2
  7. Check volume status; the brick should get a port

  Change-Id: I688f954f5f53678290e84df955f5529ededaf78f
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* Test restart glusterd while rebalance is in progress
  Author: Rajesh Madaka | Date: 2018-06-27 | Files: 1 | Lines: -0/+172

  -> Create Volume
  -> Fuse mount the volume
  -> Perform I/O on the fuse mount
  -> Add bricks to the volume
  -> Perform rebalance on the volume
  -> While rebalance is in progress, restart glusterd on all the nodes
     in the cluster

  Change-Id: I522d7aa55adedc2363bf315f96e51469b6565967
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Test rebalance spurious failure
  Author: Mohit Agrawal | Date: 2018-06-26 | Files: 1 | Lines: -0/+174

  1. Trusted storage pool of 3 nodes
  2. Create a distributed volume with 3 bricks
  3. Start the volume
  4. Fuse mount the gluster volume on a node outside the trusted storage pool
  5. Remove a brick from the volume
  6. Check remove-brick status
  7. Stop the remove-brick process
  8. Perform fix-layout on the volume
  9. Get the rebalance fix-layout status
  10. Create a directory from the mount point
  11. Check the trusted.glusterfs.dht extended attribute for the newly created
      directory on the removed brick

  Change-Id: I055438056a9b5df26599a503dd413225eb6f87f5
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
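  For reference, steps 5-11 above correspond roughly to the following commands (volume
  name, brick path and mount point are placeholders):

      gluster volume remove-brick testvol N3:/bricks/b1 start
      gluster volume remove-brick testvol N3:/bricks/b1 status
      gluster volume remove-brick testvol N3:/bricks/b1 stop
      gluster volume rebalance testvol fix-layout start
      gluster volume rebalance testvol status
      mkdir /mnt/testvol/newdir                                    # from the client mount
      getfattr -n trusted.glusterfs.dht -e hex /bricks/b1/newdir   # on the removed brick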
* No errors should be generated in brick logs after deleting files from mountpoint
  Author: Rajesh Madaka | Date: 2018-06-26 | Files: 1 | Lines: -0/+179

  -> Create volume
  -> Mount volume
  -> Write files on mount point
  -> Delete files from mount point
  -> Check for any errors logged in all brick logs

  Change-Id: Ic744ad04daa0bdb7adcc672360c9ed03f56004ab
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Test rebalance hang after stopping glusterd on another node
  Author: Mohit Agrawal | Date: 2018-06-25 | Files: 1 | Lines: -0/+201

  1. Trusted storage pool of 2 nodes
  2. Create a distributed volume with 2 bricks
  3. Start the volume
  4. Mount the volume
  5. Add some data files on the mount
  6. Start rebalance with force
  7. Stop glusterd on the 2nd node
  8. Check rebalance status; it should not hang
  9. Issue volume related commands

  Change-Id: Ie3e809e5fe24590eec070607ee99417d0bea0aa0
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* Validate snap info from detached node in the cluster
  Author: Bala Konda Reddy Mekala | Date: 2018-06-25 | Files: 1 | Lines: -0/+143
  Change-Id: Ica3d1175ee5d2c6a45e7b7d6513885ee2b84d960
  Signed-off-by: Bala Konda Reddy Mekala <bmekala@redhat.com>
* glusto-tests/glusterd: add brick should fail, when server quorum is not met
  Author: Sanju Rakonde | Date: 2018-06-25 | Files: 1 | Lines: -0/+159

  Steps followed are:
  1. Create and start a volume
  2. Set cluster.server-quorum-type as server
  3. Set cluster.server-quorum-ratio as 95%
  4. Bring down glusterd in half of the nodes
  5. Confirm that quorum is not met, by checking whether the bricks are down
  6. Perform an add brick operation, which should fail
  7. Check whether the added brick is part of the volume

  Change-Id: I93e3676273bbdddad4d4920c46640e60c7875964
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* Test peer probe after setting global options to the volume
  Author: Rajesh Madaka | Date: 2018-06-24 | Files: 1 | Lines: -0/+136

  -> Set global options and other volume specific options on the volume
  -> gluster volume set VOL nfs.rpc-auth-allow 1.1.1.1
  -> gluster volume set VOL nfs.addr-namelookup on
  -> gluster volume set VOL cluster.server-quorum-type server
  -> gluster volume set VOL network.ping-timeout 20
  -> gluster volume set VOL nfs.port 2049
  -> gluster volume set VOL performance.nfs.write-behind on
  -> Peer probe for a new node

  Change-Id: Ifc06159b436e74ae4865ffcbe877b84307d517fd
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Test read-only option on volumes
  Author: Rajesh Madaka | Date: 2018-06-23 | Files: 1 | Lines: -0/+156

  -> Create volume
  -> Mount the volume
  -> Set 'read-only on' on the volume
  -> Perform some I/O on the mount point
  -> Set 'read-only off' on the volume
  -> Perform some I/O on the mount point

  Change-Id: Iab980b1fd51edd764ef38b329275d72f875bf3c0
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Test replace brick when quorum not met
  Author: Rajesh Madaka | Date: 2018-06-21 | Files: 1 | Lines: -0/+225

  -> Create volume
  -> Set quorum type
  -> Set quorum ratio to 95%
  -> Start the volume
  -> Stop glusterd on one node
  -> Now quorum is in the not-met condition
  -> Check whether all bricks went offline
  -> Perform replace brick operation
  -> Start glusterd on the same node which was stopped
  -> Check whether all bricks are online
  -> Verify in vol info that the old brick is not replaced with the new brick

  Change-Id: Iab84df9449feeaba66ff0df2d0acbddb6b4e7591
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
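  For reference, the replace-brick attempt in the quorum-lost state would look roughly
  like this (volume, hosts and brick paths are placeholders); the replace is expected to
  be rejected:

      gluster volume set testvol cluster.server-quorum-type server
      gluster volume set all cluster.server-quorum-ratio 95%
      systemctl stop glusterd            # on one node: quorum is now lost
      gluster volume status testvol      # bricks on the running nodes should be offline
      gluster volume replace-brick testvol N2:/bricks/b1 N2:/bricks/b1_new commit force
      systemctl start glusterd
      gluster volume info testvol        # the old brick should still be listed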
* glusto-tests/glusterd: setting volume option when one of the nodes is down in the cluster
  Author: Sanju Rakonde | Date: 2018-06-20 | Files: 1 | Lines: -0/+203

  In this test case, we set some volume options while one of the nodes in the
  cluster is down, and after the node comes back up we check whether the volume
  info is synced. We also try to peer probe a new node while bringing down
  glusterd on some node in the cluster; after that node is up, we check whether
  peer status has the correct information.

  Steps followed are:
  1. Create a cluster
  2. Create a 2x3 distributed-replicated volume
  3. Start the volume
  4. From N1 issue 'gluster volume set <vol-name> stat-prefetch on'
  5. At the same time as step 4, bring down glusterd on N2
  6. Start glusterd on N2
  7. Verify volume info is synced
  8. From N1, issue 'gluster peer probe <new-host>'
  9. At the same time as step 8, bring down glusterd on N2
  10. Start glusterd on N2
  11. Check that peer status has correct information across the cluster

  Change-Id: Ib95268a3fe11cfbc5c76aa090658133ecc8a0517
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* No errors should be generated in glusterd.log while detaching node from gluster
  Author: Rajesh Madaka | Date: 2018-06-20 | Files: 1 | Lines: -0/+88

  -> Detach the node from the peer
  -> Check for any error messages related to peer detach in the glusterd log file
  -> No errors should be present in the glusterd log file

  Change-Id: I481df5b15528fb6fd77cd1372110d7d23dd5cdef
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* glusto-tests/glusterd: remove brick should fail, when server quorum is not met
  Author: Sanju Rakonde | Date: 2018-06-19 | Files: 1 | Lines: -0/+140

  Steps followed are:
  1. Create and start a volume
  2. Set cluster.server-quorum-type as server
  3. Set cluster.server-quorum-ratio as 95%
  4. Bring down glusterd in half of the nodes
  5. Confirm that quorum is not met, by checking whether the bricks are down
  6. Perform a remove brick operation, which should fail

  Change-Id: I69525651727ec92dce2f346ad706ab0943490a2d
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusto-tests/glusterd: setting auth.allow with more than 4096 characters
  Author: Sanju Rakonde | Date: 2018-06-19 | Files: 1 | Lines: -0/+102

  In this test case we set the auth.allow option with more than 4096 characters
  and restart glusterd. glusterd should restart successfully.

  Steps followed:
  1. Create and start a volume
  2. Set auth.allow with fewer than 4096 characters
  3. Restart glusterd; it should succeed
  4. Set auth.allow with more than 4096 characters
  5. Restart glusterd; it should succeed
  6. Confirm that glusterd is running on the node that was restarted

  Change-Id: I7a5a8e49a798238bd88e5da54a8f4857c039ca07
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
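  For reference, a minimal sketch of building an auth.allow value longer than 4096
  characters (the address pattern and volume name are illustrative only):

      # comma-separated allow-list of ~400 addresses, well over 4096 characters
      LONG_LIST=$(printf '192.168.1.%d,' $(seq 1 400) | sed 's/,$//')
      gluster volume set testvol auth.allow "$LONG_LIST"
      systemctl restart glusterd
      systemctl status glusterd          # glusterd should come back up cleanly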
* Add identical brick on new node after bringing down a brick on another node
  Author: Mohit Agrawal | Date: 2018-06-19 | Files: 1 | Lines: -0/+126

  1. Create a distributed volume on Node 1
  2. Bring down a brick on Node 1
  3. Peer probe N2 from N1
  4. Add an identical brick on the newly added node
  5. Check volume status

  Change-Id: I17c4769df6e4ec2f11b7d948ca48a006cf301073
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* Test volume status fd while IO in progress
  Author: Rajesh Madaka | Date: 2018-06-19 | Files: 1 | Lines: -0/+152

  -> Create volume
  -> Mount the volume on 2 clients
  -> Run I/O on the mount points
  -> While I/O is in progress, perform 'gluster volume status fd' repeatedly
  -> List all files and dirs

  Change-Id: I2d979dd79fa37ad270057bd87d290c84569c4a3d
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
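  For reference, the status query exercised here is the per-brick open-fd view; a minimal
  sketch (volume name is a placeholder):

      # repeat the fd status query while I/O runs on the client mounts
      while :; do gluster volume status testvol fd; sleep 5; done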
* Create volume using bricks of deleted volume
  Author: Rajesh Madaka | Date: 2018-06-18 | Files: 1 | Lines: -0/+179

  -> Create a distributed-replicated volume
  -> Add 6 bricks to the volume
  -> Mount the volume
  -> Perform some I/O on the mount point
  -> Unmount the volume
  -> Stop and delete the volume
  -> Create another volume using the bricks of the deleted volume

  Change-Id: I263d2f0a359ccb0409dba620363a39d92ea8d2b9
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* glusto-tests/glusterd: gluster volume status with/without xml tag
  Author: Sanju Rakonde | Date: 2018-06-15 | Files: 1 | Lines: -0/+112

  In this test case, we check gluster volume status and gluster volume status
  --xml from a node which is part of the cluster but does not host any bricks
  of the volume.

  Steps followed are:
  1. Create a two node cluster
  2. Create a distributed volume with one brick (assume the brick belongs to N1)
  3. From the node which does not have any bricks, i.e. N2, check gluster v
     status; it should fail saying the volume is not started
  4. From N2, check gluster v status --xml; it should fail because the volume
     is not started yet
  5. Start the volume
  6. From N2, check gluster v status; this should succeed
  7. From N2, check gluster v status --xml; this should succeed

  Change-Id: I1a230b82c0628c66c16f25f89dd4e6d1d0b3f443
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
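  For reference, the two queries exercised here, run from the brick-less node N2 (volume
  name is a placeholder):

      gluster volume status testvol            # before volume start: expected to fail, volume not started
      gluster volume status testvol --xml      # same check, XML output
      gluster volume start testvol             # from any node in the cluster
      gluster volume status testvol --xml      # should now succeed and include the brick hosted on N1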
* glusto-tests/glusterd: check uuid of bricks in vol info --xml
  Author: Sanju Rakonde | Date: 2018-06-13 | Files: 1 | Lines: -0/+105

  In this test case, we check the UUIDs of bricks in the output of gluster
  volume info --xml from a newly probed node.

  Steps followed are:
  1. Create a two node cluster
  2. Create and start a 2x2 volume
  3. From the existing cluster, peer probe a new node
  4. Check gluster volume info --xml from the newly probed node
  5. In the gluster volume info --xml output, the UUIDs of the bricks should be
     non-zero

  Change-Id: I73d07f1b91b5beab26cc87217defb8999fba474e
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
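  For reference, one quick way to eyeball the brick UUIDs from the newly probed node;
  grepping for "uuid" avoids depending on the exact XML element names (volume and host
  names are placeholders):

      gluster peer probe N3
      # on N3:
      gluster volume info testvol --xml | grep -i uuid   # brick/host UUIDs should not be all zeros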
* Verifying task type and status in vol status
  Author: Rajesh Madaka | Date: 2018-06-12 | Files: 1 | Lines: -0/+192

  -> Create Volume
  -> Start rebalance
  -> Check task type in volume status
  -> Check task status string in volume status
  -> Check task type in volume status xml
  -> Check task status string in volume status xml
  -> Start remove brick operation
  -> Check task type in volume status
  -> Check task status string in volume status
  -> Check task type in volume status xml
  -> Check task status string in volume status xml

  Change-Id: I9de53008e19f1965dac21d4b80b9b271bbcf53a1
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
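  For reference, the task information being verified appears in the task section at the
  end of volume status output; a rough sketch (volume, host and brick path are
  placeholders):

      gluster volume rebalance testvol start
      gluster volume status testvol            # task section should list a Rebalance task and its status
      gluster volume status testvol --xml      # same information in the XML task nodes
      gluster volume remove-brick testvol N2:/bricks/b1 start
      gluster volume status testvol            # task type should now be the remove-brick task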
* Verifying max supported op-version and vol info file
  Author: Rajesh Madaka | Date: 2018-06-09 | Files: 1 | Lines: -0/+129

  -> Create Volume
  -> Get the current op-version
  -> Get the max supported op-version
  -> Verify the vol info file exists in all servers
  -> Get the version number from the vol info file
  -> If the current op-version is less than the max op-version, set the current
     op-version to the max op-version
  -> After the vol set operation, verify that the version number in the vol
     info file increased by one
  -> Verify that the current op-version and max op-version are the same

  Change-Id: If56210a406b15861b0a261e29d2e5f45e14301fd
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
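  For reference, a sketch of the op-version queries and the per-volume info file check;
  the info file location is the usual glusterd working directory, and "testvol" is a
  placeholder:

      gluster volume get all cluster.op-version        # current cluster op-version
      gluster volume get all cluster.max-op-version    # highest op-version this build supports
      grep -i version /var/lib/glusterd/vols/testvol/info          # on each server
      gluster volume set all cluster.op-version <max-op-version>   # bump if current < max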
* Test brick status when quorum not met
  Author: Rajesh Madaka | Date: 2018-06-07 | Files: 1 | Lines: -0/+151

  -> Create volume
  -> Enable server quorum on the volume
  -> Stop glusterd on all nodes except the first node
  -> Verify brick status of nodes where glusterd is running with the default
     quorum ratio (51%)
  -> Change cluster.server-quorum-ratio from the default to 95%
  -> Start glusterd on all servers except the last node
  -> Verify the brick status again

  Change-Id: I249574fe6c758e6b8e5bea603f36dcf8698fc1de
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Setting volume option when quorum is not met
  Author: Bala Konda Reddy M | Date: 2018-06-06 | Files: 1 | Lines: -0/+129
  Change-Id: I2a3427cb9165cb2b06a1c72962071e286a65e0a8
  Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
  Signed-off-by: Bala Konda Reddy Mekala <bmekala@redhat.com>
* glusterd test cases: Peer probe from a standalone node to existing gluster
  Author: Gaurav Yadav | Date: 2018-06-01 | Files: 1 | Lines: -0/+240
  Change-Id: I5ffd826bd375956e29ef6f52913fa7dabf8bc7ce
  Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
* Fix replace brick failure and peer probe failure
  Author: Nigel Babu | Date: 2018-05-09 | Files: 2 | Lines: -15/+12

  The replace brick setUp function had a syntax error and a wrong assert.
  The peer probe tearDown method did not work in a situation where the test
  failed, leading to cascading failures in other tests.

  Change-Id: Ia7e0d85bb88c0c9bc6d489b4d03dc7610fd4f129
* Fix assertIsNotNone call in glusterd tests
  Author: Nigel Babu | Date: 2018-05-08 | Files: 1 | Lines: -2/+1
  Change-Id: I774f64e2f355e2ca2f41c7a5c472aeae5adcd3dc
* Replace brick with valid brick path and non-existing brick path
  Author: Bala Konda Reddy M | Date: 2018-05-08 | Files: 1 | Lines: -0/+114
  Change-Id: Ic6cb4e96d8f14558c0f9d4eb5e24cbb507578f4c
  Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
* Glusterd Testcase: Setting lower gluster opversion
  Author: Bala Konda Reddy Mekala | Date: 2018-05-07 | Files: 1 | Lines: -0/+89

  In this test case, setting the lowest gluster op-version and an invalid
  op-version are validated.

  Change-Id: Ie45859228e35b7cb171493dd22e30e2f26b70631
  Signed-off-by: Bala Konda Reddy Mekala <bmekala@redhat.com>
* Glusterd split brain scenario with volume options reset and quorum
  Author: Bala Konda Reddy Mekala | Date: 2018-05-07 | Files: 1 | Lines: -0/+151
  Change-Id: I2802171403490c9de715aa281fefb562e08249fe
  Signed-off-by: Bala Konda Reddy Mekala <bmekala@redhat.com>
* Create cluster with shortnames and create volume using IP and FQDN
  Author: Gaurav Yadav | Date: 2018-05-03 | Files: 1 | Lines: -0/+184

  - Peer probe using short name
  - Create volume using IP
  - Start/stop/get volume info
  - Create volume using FQDN
  - Start/stop/get volume info

  Change-Id: I2d55944035c44e8ee360beb4ce41550338586d15
  Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Test case for verifying checksum of mountpoint before and after changing network ping timeout
  Author: Rajesh Madaka | Date: 2018-04-20 | Files: 1 | Lines: -0/+179
  Change-Id: I8f3636cd899e536d2401a8cd93b98bf66ceea0f7
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* Fix up coding style issues in tests
  Author: Nigel Babu | Date: 2018-03-27 | Files: 14 | Lines: -381/+311
  Change-Id: I14609030983d4485dbce5a4ffed1e0353e3d1bc7
* glusterd test case: check rebalance status from newly probed node
  Author: Sanju Rakonde | Date: 2018-02-27 | Files: 1 | Lines: -0/+162

  In this test case,
  1. Create a volume and mount it
  2. Create some data
  3. Add brick
  4. Start rebalance
  5. Probe a new node
  6. Check rebalance status from the new node

  We should be able to check rebalance status from the newly probed node.

  Change-Id: Ib09b468dcd3e81eb01f873e0491afe5ecf5124cc
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd test cases: validate volume create operation
  Author: Sanju Rakonde | Date: 2018-02-27 | Files: 1 | Lines: -0/+200

  In this test case, volume create operations such as creating a volume with a
  non-existing brick path, an already used brick, an already existing volume
  name, bringing the bricks online with volume start force, creating a volume
  with bricks in another cluster, and creating a volume when one of the brick
  nodes is down are validated.

  Change-Id: I796c8e9023244c592c88116cf3baff52ddade48f
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd test case: validating add brick functionality
  Author: Sanju Rakonde | Date: 2018-02-14 | Files: 1 | Lines: -0/+134

  In this test case, the following are validated:
  1. Adding a single brick to a replicated volume
  2. Adding non-existing bricks to a volume
  3. Adding bricks from a node which is not part of the cluster
  4. Triggering rebalance start after add brick

  Change-Id: I982ff42dcbe6cd0cfbf3653b8cee0b269314db3f
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd test case: validating volume operations
  Author: Sanju Rakonde | Date: 2018-02-14 | Files: 1 | Lines: -0/+148

  In this test case basic volume operations are validated, i.e., starting,
  stopping and deleting a non-existing volume, creating all types of volumes,
  creating a volume using a brick from a node which is not part of the cluster,
  starting an already started volume, stopping a volume twice, deleting a
  volume twice, and validating the volume info and volume list commands. These
  commands internally validate the xml output as well.

  Change-Id: Ibf44d24e678d8bb14aa68bdeff988488b74741c6
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* Concurrent volume set on different volumes simultaneously should succeed
  Author: Gaurav Yadav | Date: 2018-02-14 | Files: 1 | Lines: -0/+107

  It includes:
  - Create 2 volumes
  - Run concurrent set operations on both the volumes
  - Check for errors or whether any core was generated

  Change-Id: I5f735290ff57ec5e9ad8d85fd5d822c739dbbb5c
  Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
* Test cases for performing NFS disable and enable, NFS mount and unmount on all volumes, and different types of quorum settings
  Author: Rajesh Madaka | Date: 2018-02-14 | Files: 1 | Lines: -0/+172

  -> Set nfs.disable off
  -> Mount it with nfs and unmount it
  -> Set nfs.disable enable
  -> Mount it with nfs
  -> Set nfs.disable disable
  -> Enable server quorum
  -> Set the quorum ratio to numbers and percentages; negative numbers should
     fail, negative percentages should fail, fractions should fail, negative
     fractions should fail

  Change-Id: I6c4f022d571378f726b1cdbb7e74fdbc98d7f8cb
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
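  For reference, a rough CLI sketch of the NFS toggle and the quorum-ratio validation
  described above (volume name, host and mount point are placeholders; the invalid ratio
  values are examples of inputs expected to be rejected):

      gluster volume set testvol nfs.disable off
      mount -t nfs -o vers=3 N1:/testvol /mnt/nfs && umount /mnt/nfs
      gluster volume set testvol nfs.disable on
      gluster volume set testvol cluster.server-quorum-type server
      gluster volume set all cluster.server-quorum-ratio 51%    # valid percentage
      gluster volume set all cluster.server-quorum-ratio -10    # negative number, expected to fail
      gluster volume set all cluster.server-quorum-ratio 0.5    # fraction, expected to fail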
* Quorum related messages in logs
  Author: Rajesh Madaka | Date: 2018-02-08 | Files: 1 | Lines: -0/+292

  Desc:
  Create two volumes.
  Set server quorum on both the volumes.
  Set the server quorum ratio to 90%.
  Stop the glusterd service on any one of the nodes; a quorum lost message
  should be recorded with message id 106002 for both the volumes in
  /var/log/messages and /var/log/glusterfs/glusterd.log.
  Start the glusterd service on the same node; a quorum regained message
  should be recorded with message id 106003 for both the volumes in
  /var/log/messages and /var/log/glusterfs/glusterd.log.

  Change-Id: I9ecab59b6131fc9c4c58bb972b3a41f15af1b87c
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
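  For reference, a simple way to confirm the message IDs called out above after toggling
  glusterd (the grep assumes the IDs appear verbatim in the log lines, as the commit
  message states):

      systemctl stop glusterd            # on one node
      grep 106002 /var/log/glusterfs/glusterd.log /var/log/messages
      systemctl start glusterd
      grep 106003 /var/log/glusterfs/glusterd.log /var/log/messages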
* Test case for performing different combinations of gluster volume get functionalities
  Author: Rajesh Madaka | Date: 2018-02-05 | Files: 1 | Lines: -0/+207

  Test steps:
  1. Create a gluster cluster
  2. Get an option from a non-existing volume:
     # gluster volume get <non-existing vol> io-cache
  3. Get all options from a non-existing volume:
     # gluster volume get <non-existing volume> all
  4. Provide an incorrect command syntax to get the options from the volume:
     # gluster volume get <vol-name>
     # gluster volume get
     # gluster volume get io-cache
  5. Create any type of volume in the cluster
  6. Get the value of a non-existing option:
     # gluster volume get <vol-name> temp.key
  7. Get all options set on the volume:
     # gluster volume get <vol-name> all
  8. Get a specific option set on the volume:
     # gluster volume get <vol-name> io-cache
  9. Set an option on the volume:
     # gluster volume set <vol-name> performance.low-prio-threads 14
  10. Get all the options set on the volume and check for low-prio-threads:
      # gluster volume get <vol-name> all | grep -i low-prio-threads
  11. Get all the options set on the volume:
      # gluster volume get <vol-name> all
  12. Check for any cores in "cd /"

  Change-Id: Ifd7697e68d7ecf297d7be75680a5681686c51ca0
  Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>