Adding library function reset_failed_glusterd()
and modifying scratch_cleanup() to use it.
This is needed because of patches [1] and [2]
sent to glusterfs, where changes are made to
glusterd.service.in to not allow glusterd to
restart more than 6 times within an hour.
Links:
[1] https://review.gluster.org/#/c/glusterfs/+/23751/
[2] https://review.gluster.org/#/c/glusterfs/+/23970/
Change-Id: I25f982517420f20f11a610e8a68afc71f3b7f2a9
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
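A minimal sketch of what such a helper could look like, assuming systemd
refuses the restart via its start-rate limit and that glusto's g.run API
is available (the body here is illustrative, not the actual library code):

    from glusto.core import Glusto as g

    def reset_failed_glusterd(servers):
        """Clear systemd's failed/restart state for glusterd on each server."""
        if isinstance(servers, str):
            servers = [servers]
        for server in servers:
            # 'systemctl reset-failed' clears the unit's start-rate counter,
            # so a later 'systemctl start glusterd' is not refused.
            ret, _, _ = g.run(server, "systemctl reset-failed glusterd")
            if ret:
                g.log.error("Failed to reset-failed glusterd on %s", server)
                return False
        return True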
Calculates the arequal checksum for a particular
path on the mountpoint.
Change-Id: I018302e6dbb11a9c11d42fc0381ec4183b3725a0
Signed-off-by: ubansal <ubansal@redhat.com>
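A hedged sketch of such a helper, assuming the arequal-checksum tool used
elsewhere in glusto-tests and glusto's g.run (the function name and error
handling are illustrative):

    from glusto.core import Glusto as g

    def collect_arequal_for_path(host, path):
        """Return arequal-checksum output for 'path' on 'host', or None."""
        ret, out, err = g.run(host, "arequal-checksum -p %s" % path)
        if ret:
            g.log.error("arequal failed on %s:%s: %s", host, path, err)
            return None
        return out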
Tests to validate the behaviour of different file rename
scenarios when the destination file doesn't exist initially.
Change-Id: I3f22d61d9bd2fa5c54930715e2ef976c7d1ba54e
Signed-off-by: Pranav <prprakas@redhat.com>
Opens an FD to a file, waits, and then
writes to the file.
Change-Id: Ib993b646ba45d2b05a5765e02b6b1b7b2869ecd3
Signed-off-by: ubansal <ubansal@redhat.com>
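A minimal sketch of the behaviour being described, with illustrative
names and a hypothetical default delay:

    import os
    import time

    def open_fd_wait_and_write(path, data, delay=60):
        """Open 'path', hold the fd open for 'delay' seconds, then write."""
        fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
        try:
            time.sleep(delay)              # fd stays open across the wait
            os.write(fd, data.encode())
        finally:
            os.close(fd)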
Problem:
In setUpClass, the setup_volume method is used
for volume creation, so volume mount is not done.
But in teardown, unmount_volume_and_cleanup_volume
is used. This causes teardown to fail.
Fix:
Change unmount_volume_and_cleanup_volume
to cleanup_volume.
Change-Id: Ia9bb6bcd36ce9ddb9c200ef18779df47a009d42f
Signed-off-by: Pranav <prprakas@redhat.com>
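A sketch of the corrected teardown, assuming glusto-tests' cleanup_volume
helper and ExecutionError exception (the module paths are assumptions):

    from glustolibs.gluster.exceptions import ExecutionError
    from glustolibs.gluster.volume_libs import cleanup_volume

    def tearDown(self):
        # The volume was never mounted in setUpClass, so only the
        # volume itself has to be cleaned up here.
        if not cleanup_volume(self.mnode, self.volname):
            raise ExecutionError("Failed to cleanup volume")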
Decreased the deep directory creation depth from 15 to 10,
as heal completion within 20 mins for disperse volumes
was intermittently failing.
Also increased the timeouts for bricks to come online
and for healing to complete.
Change-Id: I1c1eef383ca4bf3f7f1f89e00da096bbbf57b9db
Signed-off-by: ubansal <ubansal@redhat.com>
Test Case:
- Create a gluster cluster
- With brick mux set to disable:
  1. Set cluster.max-bricks-per-process to an int and check
     the error message (must fail)
  2. Set cluster.max-bricks-per-process to a string (must fail)
- With brick mux set to enable:
  1. Set cluster.max-bricks-per-process to a string (must fail)
  2. Set cluster.max-bricks-per-process to 0
  3. Set cluster.max-bricks-per-process to 1 and check
     the error message (must fail)
  4. Set cluster.max-bricks-per-process to an int value > 1
Also fixing small issues observed when running all the tests
in the file.
Change-Id: Iad27cd5bbeccc2bd2f0a7e510f881b0ffcb0d3b6
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Problem:
There are scenarios where multiple files are to be renamed
to hash to a particular subvol. The existing method returns
the same name every time, as the loop always starts from 1.
Fix:
Adding an optional argument, existing_names, which contains
names already hashed to the subvol. An additional check is
added to ensure the name found is not already used.
Change-Id: I453ee290c8462322194cebb42c40e8fbc7c373ed
Signed-off-by: Pranav <prprakas@redhat.com>
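A simplified sketch of the fix; the real helper lives in glusto-tests'
DHT libraries, so the function name and the injected hash_of callable
are stand-ins:

    def find_new_name_for_subvol(wanted_subvol, hash_of, existing_names=None):
        """Return a fresh filename whose hash lands on wanted_subvol."""
        existing_names = existing_names or []
        for i in range(1, 5000):
            name = "testfile%d" % i
            if name in existing_names:
                continue   # skip names already handed out for this subvol
            if hash_of(name) == wanted_subvol:
                return name
        return None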
The method takes mnode, volname and throttle-type as parameters
and sets the rebal-throttle for the volume to the given
throttle-type.
Change-Id: I9eb14e39f87158c9ae7581636c2cad1333fd573c
Signed-off-by: sayaleeraut <saraut@redhat.com>
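A hedged sketch of the helper, assuming the cluster.rebal-throttle volume
option and glusto's g.run (the function name is illustrative):

    from glusto.core import Glusto as g

    def set_rebalance_throttle(mnode, volname, throttle_type="normal"):
        """Set cluster.rebal-throttle (lazy|normal|aggressive) on volname."""
        cmd = ("gluster volume set %s cluster.rebal-throttle %s"
               % (volname, throttle_type))
        return g.run(mnode, cmd)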
The volume type distributed-arbiter was mentioned as
"distributed-arbiter: &distrbuted_arbiter" and
"type: distributed_arbiter" under "volume_types:", but it must
be "&distrbuted-arbiter" and "type: distributed-arbiter", in
accordance with the format in gluster_base_class.py.
Change-Id: I4a35a8827f26050328acfeddc3ce930225181e7a
Signed-off-by: sayaleeraut <saraut@redhat.com>
The function set_volume_options() is a part of the volume_ops
lib, but was wrongly imported from the volume_libs lib earlier.
Corrected the import statement.
Change-Id: I7295684e7a564468ac42bbe1f00643ee150f769d
Signed-off-by: sayaleeraut <saraut@redhat.com>
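The corrected import would look like this, assuming the usual glustolibs
package layout:

    # set_volume_options lives in volume_ops, not volume_libs
    from glustolibs.gluster.volume_ops import set_volume_options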
Problem:
A pattern was observed where testcases
which were passing were throwing errors
in tearDownClass. This was because
docleanup was running before tearDownClass,
and when tearDownClass was executed it
failed, as the setup was already cleaned.
Solution:
Move the teardown code from tearDownClass
to tearDown, and move the setup volume code
from setUpClass to setUp.
Change-Id: I37c6fde1f592224c114148f0ed7215b2494b4502
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Description: The TC checks that there is no stack overflow
in readdirp with parallel-readdir enabled.
Steps:
1) Create a volume.
2) Mount the volume using FUSE.
3) Enable performance.parallel-readdir and
performance.readdir-ahead on the volume.
4) Create 10000 files on the mount point.
5) Add-brick to the volume.
6) Perform fix-layout on the volume (not rebalance).
7) From the client node, rename all the files; this will result
in the creation of linkto files on the newly added brick.
8) Do ls -l (lookup) on the mount-point.
Change-Id: I821efea55e3f8981ecd307b93781f2da5790c548
Signed-off-by: sayaleeraut <saraut@redhat.com>
Description:
This testcase tests the new glusterfs.mdata xattr
when features.ctime is enabled.
Change-Id: I3c2dd6537f88b6d9da85aa6819c96e1c236a0d61
Signed-off-by: nchilaka <nchilaka@redhat.com>
Test Case:
- Create a gluster cluster
- Set cluster.brick-multiplex value to a random string (must fail)
- Set cluster.brick-multiplex value to a random int (must fail)
- Set cluster.brick-multiplex value to random
  special characters (must fail)
Change-Id: Ib0233668aad8d72572b1dd9d17a5d0c27c364250
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test Steps:
- create and mount EC volume 4+2
- start append to a file from client
- bring down one of the bricks (say b1)
- wait for ~1 minute and bring down another brick (say b2)
- after ~1 minute bring up the first brick (b1)
- check the xattrs 'ec.size', 'ec.version'
- the xattrs of the online bricks should be the same, as an
  indication to heal
Change-Id: I81a5bad4a91dd891fbbc9d93ae3f76610237789e
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Test Steps:
1. Create a volume, start and mount it
2. Create directories and files
3. Rename files and change permissions of files
4. Create hardlinks and softlinks and run different types of I/O
5. Delete all the data
6. Check no heals are pending
7. Check all bricks are empty
Change-Id: Ic8f5dad1a44de71688a6b0a2fcfb4a25cef435ba
Signed-off-by: ubansal <ubansal@redhat.com>
Test steps:
1. Create volume, start and mount it on one client.
2. Enable metadata-cache(md-cache) options on the volume.
3. Touch a file and create a hardlink for it.
4. Read data from the hardlink.
5. Read data from the actual file.
Change-Id: Ibf4b8757262707fcfb4d09b4b031ff9dea166570
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Description: The TC checks that there is no data loss when
rename is performed with a brick of volume down.
Steps:
1) Create a volume.
2) Mount the volume using FUSE.
3) Create 1000 files on the mount point.
4) Create the soft-link for file{1..100}
5) Create the hard-link for file{101..200}
6) Check for the file count on the mount point.
7) Begin renaming the files, in multiple iterations.
8) Let few iterations of the rename complete successfully.
9) Then while rename is still in progress, kill a brick part of
the volume.
10) Let the brick be down for some time, such that a couple
of rename iterations complete.
11) Bring the brick back online.
12) Wait for the IO to complete.
13) Check if there is any data loss.
14) Check if all the files are renamed properly.
Change-Id: I7b7c4aed7df7f19a10ec8c2577dfec1f1ceeb46c
Signed-off-by: sayaleeraut <saraut@redhat.com>
As SSL cannot be set after volume start op, moving
set_volume_option prior to volume start.
Change-Id: I14e1dc42deb0c0c28736f03e07cf25f3adb48349
Signed-off-by: Pranav <prprakas@redhat.com>
Finding the offline brick limit using ceil returns an incorrect
value. E.g., for replica count 3, ceil(3/2) returns 2, and
the subsequent method uses this value to bring down 2 out of
3 available bricks, resulting in IO and many other failures.
Fix:
Change ceil to floor. Also change the '/' operator to '//'
for py2/3 compatibility
Change-Id: I3ee10647bb037a3efe95d1b04e0864cf61e2499e
Signed-off-by: Pranav <prprakas@redhat.com>
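A minimal sketch of the corrected arithmetic (the helper name is
illustrative):

    def offline_brick_limit(replica_count):
        # Before (buggy): ceil(replica_count / 2) -> 2 for replica 3,
        # which brings down 2 of 3 bricks and breaks the volume.
        # After: floor division; '//' behaves the same on py2 and py3.
        return replica_count // 2   # 3 // 2 == 1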
This method helps in rename scenarios where the new filename
has to hash to a specific subvol.
Change-Id: Ia36ea8e3d279ddf130f3a8a940dbe1fcb1910974
Signed-off-by: Pranav <prprakas@redhat.com>
Adding a tool to fetch sosreports from all servers
and clients using the glusto-tests config file.
This tool is essentially just a tweaked version
of the getsos[1] tool which can take a glusto-tests config
file, and is relicensed under GPLv3+.
Reference:
[1] https://github.com/kshithijiyer/getsos
Change-Id: Ic1685163154ed4358064397d74d3965097448621
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
This tool verifies the stability of a given set of testcase(s) by
executing them consecutively a pre-defined number of times. This
ensures that the written code is stable and also helps the user
identify unexpected failures or errors that may arise while executing
it multiple times. It also checks the given code for any pylint/flake8
issues.
Change-Id: I731277a448d4fc8d0028f43f51e08d6d9366c19a
Signed-off-by: Pranav <prprakas@redhat.com>
Test Steps:
1) Create and start a distributed-replicated volume.
2) Give different inputs to the storage.reserve volume set option
3) Validate the command behaviour on wrong inputs
Change-Id: I4bbad81cbea9b3b9e59a61fcf7f2b70eac19b216
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Issue:
In python3, assertItemsEqual is no longer supported and is replaced
with assertCountEqual (refer [1]). Because of this, a few arbiter
tests are failing.
[1] https://docs.python.org/2/library/unittest.html#unittest.TestCase.assertItemsEqual
Fix:
The replacement assertCountEqual is not supported in python2, so the
fix is to replace assertItemsEqual with
assertEqual(sorted(expected), sorted(actual)).
Change-Id: Ic1d599fa31f85a8a41598b6c245056a6ff01e000
Signed-off-by: Pranav <prprakas@redhat.com>
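A self-contained illustration of the replacement (the values are
made up):

    import unittest

    class CompareExample(unittest.TestCase):
        def test_order_insensitive(self):
            expected = ['brick1', 'brick0']
            actual = ['brick0', 'brick1']
            # py2's assertItemsEqual / py3's assertCountEqual replaced by
            # an order-insensitive check that works on both versions:
            self.assertEqual(sorted(expected), sorted(actual))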
Testcase steps:
1. Form a gluster cluster by peer probing and create a volume
2. Unmount the brick used to create the volume
3. Run 'gluster get-state' and validate absence of error 'Failed to get
daemon state. Check glusterd log file for more details'
4. Create another volume and start it using different bricks which are
not used to create above volume
5. Run 'gluster get-state' and validate the absence of above error.
Change-Id: Ib629b53c01860355e5bfafef53dcc3233af071e3
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Verification of root-squash functionality with NFS-Ganesha
* Create a volume and export it via Ganesha
* Mount the volume on clients
* Create some files and dirs inside mount point
* Check for owner and group
Owner and group should be root
* Set permission as 777 for mount point
* Enable root-squash on volume
* Create some more files and dirs
* Check for owner and group for any file
Owner and group should be nfsnobody
* Edit file created by root user
nfsnobody user should not be allowed to edit file
Change-Id: Ia345c772c84fcfe6ef716b9f1026fca5d399ab2a
Signed-off-by: Manisha Saini <msaini@redhat.com>
This method fetches all entries under a directory in a recursive fashion.
Change-Id: I4fc066ccf7a3a4730d568f96d926e46dea7b20a1
Signed-off-by: nchilaka <nchilaka@redhat.com>
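A hedged sketch of such a helper, assuming glusto's g.run and using find
(the name and return shape are illustrative):

    from glusto.core import Glusto as g

    def get_dir_entries_recursive(host, path):
        """Return every entry under 'path' on 'host', or None on failure."""
        ret, out, _ = g.run(host, "find %s" % path)
        if ret:
            return None
        return out.splitlines()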
Change-Id: I3f77dc73044a5bc59a26319c55e8e024e2edf449
Signed-off-by: ubansal <ubansal@redhat.com>
Scenarios:
1 - Rename directory when destination is not present
2 - Rename directory when destination is present
The TC was failing when the volume was mounted using NFS at
validate_files_in_dir(), because the method uses the
'trusted.glusterfs.pathinfo' xattr on the mount, which is
glusterfs-specific. When the volume is mounted using NFS, the
xattr cannot be found, hence the failure.
Change-Id: Ic61de773525e717a73178a4694c015276da2a688
Signed-off-by: sayaleeraut <saraut@redhat.com>
The assertIn statement looks for out in warning_message,
which fails every time, as it should ideally
look for warning_message in out.
Change-Id: I57e0221097c861e251995e5e8456cb19964e7d17
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
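A small illustration of the corrected argument order (the strings are
made up):

    import unittest

    class AssertInOrder(unittest.TestCase):
        def test_argument_order(self):
            out = "volume set: success. warning: quorum not met"
            warning_message = "warning: quorum not met"
            # assertIn(member, container): the substring comes first,
            # so assertIn(out, warning_message) could never pass.
            self.assertIn(warning_message, out)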
Problem:
Currently, setup_volume in volume_libs.py and gluster_base_class.py
creates a volume and starts it. There are tests where only
volume_create is required, and if such a test has to run on all
volume types, any contributor has to re-implement in their test all
the validations already present in setup_volume and in the
setup_volume classmethod of gluster_base_class.
Solution:
Added a parameter "create_only" to the setup_volume() function. It
defaults to False; unless specified, setup_volume will work as it
did before.
Similarly, added a parameter "only_volume_create" to the classmethod
of setup_volume in gluster_base_class.py; it also defaults to False
unless specified.
Note: Calling "setup_volume() -> volume_stop" is not the same as
just "volume_create()" in the actual test.
Change-Id: I76cde1b668b3afcac41dd882c2a376cb6fac88a3
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
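Usage would look roughly like this; the flag name comes from the commit,
while the surrounding arguments are assumptions about the existing
signature:

    # Default behaviour unchanged: create the volume and start it.
    setup_volume(mnode, all_servers_info, volume_config)

    # New: create the volume but leave it unstarted.
    setup_volume(mnode, all_servers_info, volume_config, create_only=True)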
When a snapshot is restored, that snapshot gets removed.
USS makes use of a '.snaps' directory in the mount point
where all the activated snapshots can be listed.
So the restored snapshot should not be listed under the
'.snaps' directory regardless of it being activated or
deactivated.
Steps:
* Perform I/O on mounts
* Enable USS on volume
* Validate USS is enabled
* Create a snapshot
* Activate the snapshot
* Perform some more I/O
* Create another snapshot
* Activate the second snapshot
* Restore volume to the second snapshot
* From mount point validate under .snaps
- first snapshot should be listed
- second snapshot should not be listed
Change-Id: I5630d8aad6b4758d49e8d4f53497073c78a00a6b
Co-authored-by: Sunny Kumar <sunkumar@redhat.com>
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Test Summary and Steps:
This testcase validates that the ctime, mtime and atime of a
created object are the same.
1. Create a volume and check if features.ctime is disabled by default
2. Enable features.ctime
3. Create a new directory dir1 and check if m|a|ctimes are the same
4. Create a new file file1 and check if m|a|ctimes are the same
5. Again create a new file file2 and check if m|a|ctimes are the same
   after issuing an immediate lookup
Change-Id: I024c11706a0309806c081c957b9305be92936f7f
Signed-off-by: nchilaka <nchilaka@redhat.com>
Verify remove brick operation while IO is running
Steps:
1. Start IO on mount points
2. Perform remove brick operation
3. Validate IOs
Change-Id: Ie394f96c9180be57704ca637c8cd725af82323cb
Co-authored-by: Jilju Joy <jijoy@redhat.com>
Signed-off-by: Jilju Joy <jijoy@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Move cases from teardown class to teardown in snapshot
Change-Id: I7b33fa2728665fad000a5ad881f6690d40913f22
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Changed the get_file_stat function to assign the correct key-value
pairs for atime, mtime and ctime respectively.
Previously, all timestamp keys were assigned the atime value.
Change-Id: I471ec341d1a213395a89f6c01315f3d0f2e976af
Signed-off-by: nchilaka <nchilaka@redhat.com>
Problem:
The command 'pkill pidof glusterd' is not
right, as it is not getting the pid of glusterd.
E.g.:
cmd = "pkill pidof glusterd"
ret, out, err = g.run("10.20.30.40", cmd, "root")
>>> ret, out, err
(2, '', "pkill: only one pattern can be provided\n
Try `pkill --help' for more information.\n")
Here the command is failing.
Solution:
Added backticks around `pidof glusterd`, so the shell
substitutes the proper glusterd pid, and the stale pid is
killed after a failed glusterd stop:
cmd = "pkill `pidof glusterd`"
ret, out, err = g.run("10.20.30.40", cmd, "root")
>>> ret, out, err
(1, '', '')
Note: The ret value is 1, as this was tried on a machine
where glusterd is running. The purpose of the fix is
to get the proper pid.
Change-Id: Iacba3712852b9d16546ced9a4c071c62182fe385
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Problem:
While performing scratch cleanup, posix health-checker
warnings were observed once glusterd was
started, as shown below:
[2020-05-05 12:19:10.633623] M [MSGID: 113075]
[posix-helpers.c:2194:posix_health_check_thread_proc]
0-testvol_distributed-dispersed-posix: health-check
failed, going down
Solution:
In scratch cleanup, once glusterd is stopped
and the runtime socket file for the glusterd daemon is
removed, stale glusterfsd processes remain on a few of the
machines. Adding a step to get the glusterfsd processes,
if any, kill the stale glusterfsd processes using the
kill_process method, and continue with the existing
procedure. Once glusterd is started, the posix
health-checker warnings are no longer seen.
Change-Id: Ib3e9492ec029b5c9efd1c07b4badc779375a66d6
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
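A hedged sketch of the added step; kill_process is the helper named in
the commit, but its module path and exact signature are assumptions here:

    from glusto.core import Glusto as g
    from glustolibs.gluster.lib_utils import kill_process  # assumed path

    servers = ["server1.example.com"]            # illustrative
    for server in servers:
        ret, out, _ = g.run(server, "pidof glusterfsd")
        if ret == 0 and out.strip():
            # Stale brick processes can survive a glusterd stop;
            # kill them before glusterd is started again.
            kill_process(server, process_ids=out.strip().split())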
The get_file_stat function didn't include the access time,
modified time and change time for a file or
directory. Added the respective parameters for
getting the values into the dictionary.
Changed the separator from ':' to '$'; the reason is
to overcome the tuple-unpacking error caused by
timestamps such as the one below, which themselves
contain ':' characters:
2020-04-02 19:27:45.962477021
If ':' is used as the separator, we hit the
"ValueError: too many values to unpack" error.
Used '$' as the separator, as it is not used in
filenames in glusto-tests and is not part of
the stat output.
Change-Id: I40b0c1fd08a5175d3730c1cf8478d5ad8df6e8dd
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
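A runnable illustration of the separator problem and fix, using GNU
stat's human-readable atime/mtime/ctime fields (%x/%y/%z); the file path
is arbitrary:

    import subprocess

    # Timestamps like '2020-04-02 19:27:45.962477021' contain ':',
    # so splitting on ':' over-splits; '$' never appears in the output.
    out = subprocess.check_output(
        ["stat", "-c", "%n$%x$%y$%z", "/etc/hostname"],
        universal_newlines=True).strip()
    name, atime, mtime, ctime = out.split("$")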
test case: (rmdir with subvol down)
case -1:
- create parent
- bring down a non-hashed subvolume for directory child
- create parent/child
- rmdir /mnt/parent will fail with ENOTCONN
case -2:
- create dir1 and dir2
- bring down hashed subvol for dir1
- bring down a non-hashed subvol for dir2
- rmdir dir1 should fail with ENOTCONN
- rmdir dir2 should fail with ENOTCONN
case -3:
- create parent
- mkdir parent/child
- touch parent/child/file
- bring down a subvol where the file is not present
- rm -rf parent
- Only file should be deleted
- rm -rf should fail with ENOTCONN
case -4:
- Bring down a non-hashed subvol for parent_dir
- mkdir parent
- rmdir parent should fail with ENOTCONN
Change-Id: I8fbd425729aaf04eabfced315f94167178918e31
Co-authored-by: Susant Palai <spalai@redhat.com>
Signed-off-by: Susant Palai <spalai@redhat.com>
Signed-off-by: Pranav <prprakas@redhat.com>
Steps:
1. Enable USS and create a snapshot, list and delete it
2. Create a snapshot with the same name and list it
Github Issue for CentOS-CI failure:
https://github.com/gluster/glusterfs/issues/1203
Testcase failing due to:
https://bugzilla.redhat.com/show_bug.cgi?id=1828820
Change-Id: I829e6b340dfb4963355b445259fcb011b62ba057
Signed-off-by: ubansal <ubansal@redhat.com>
Problem:
Ideally, operations done in ssl_ops.py should
be performed on a gluster cluster even before
peer probing the nodes. This makes the library
useless, as we can't run any library in glusto-tests
without peer probing.
Solution:
Enable SSL on the gluster cluster before parsing it
to run tests present in glusto-tests.
Change-Id: If803179c67d5b3271b70c1578269350444aa3cf6
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The command creation with a specific user had six substitutions,
but only five placeholders.
Change-Id: I2c9f63213f78e5cec9e5bd30cac8d75eb8dbd6ce
Signed-off-by: Pranav <prprakas@redhat.com>
Failing in CentOS-CI due to this bug
https://bugzilla.redhat.com/show_bug.cgi?id=1768380
Description:
Test Script which verifies that the server side healing must happen
only if the heal daemon is running on the node where source brick
resides.
* Create and start the Replicate volume
* Check the glustershd processes - Only 1 glustershd should be listed
* Bring down the bricks without affecting the cluster
* Create files on volume
* Kill the glustershd on the node where the bricks are running
* Bring up the bricks which were brought down in the previous steps
* Check the heal info - heal info must show pending heals; heal
  shouldn't happen since glustershd is down on the source node
* Issue heal
* Trigger client-side heal
* Heal should complete successfully
Change-Id: I1fba01f980a520b607c38d8f3371bcfe086f7783
Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>,
Milind Waykole <milindwaykole96@gmail.com>
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Signed-off-by: Milind Waykole <milindwaykole96@gmail.com>
Signed-off-by: Pranav <prprakas@redhat.com>
Move cases from teardown class to teardown in quota
Change-Id: Ia20fe9bef09842f891f0f27ab711a1ef4c9f6f39
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Change-Id: Id94870735b26fbeab2bf448d4f80341c92beb5ba
Signed-off-by: ubansal <ubansal@redhat.com>
Move cases from teardown class to teardown in dht
Change-Id: Id0cf120c6229715521ae19fd4bb00cad553d701f
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Testcase steps:
1. Turn off the self-heal daemon option
2. Create IO
3. Calculate the arequal of the bricks and mount point
4. Bring down the "brick1" process
5. Change the permissions of the directories and files
6. Change the ownership of the directories and files
7. Change the group of the directories and files
8. Bring back the "brick1" process
9. Execute "find . | xargs stat" from the mount point to trigger heal
10. Verify the changes in permissions are not self-healed on brick1
11. Verify the changes in permissions on all bricks but brick1
12. Verify the changes in ownership are not self-healed on brick1
13. Verify the changes in ownership on all the bricks but brick1
14. Verify the changes in group are not successfully self-healed
    on brick1
15. Verify the changes in group on all the bricks but brick1
16. Turn on the option metadata-self-heal
17. Execute "find . | xargs md5sum" from the mount point to trigger heal
18. Wait for heal to complete
19. Verify the changes in permissions are self-healed on brick1
20. Verify the changes in ownership are successfully self-healed
    on brick1
21. Verify the changes in group are successfully self-healed on brick1
22. Calculate arequal check on all the bricks and mount point
Change-Id: Ia7fb1b272c3c6bf85093690819b68bd83efefe14
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>