Tests to validate the behaviour of rename cases when the destination file
exists and is hashed or cached to different subvol combinations.
Change-Id: I44752a444d9c112d590efd66c48ff095c22fcecd
Signed-off-by: Pranav <prprakas@redhat.com>
For non-tiered volume types, a few test cases were collecting both
hot_tier_bricks and cold_tier_bricks while bringing bricks offline,
which is not needed.
Removing the tier kwarg in one of the tests.
Dropping the hot and cold tier bricks and collecting only the bricks
of the particular volume, as shown below.
Removing the below section
```
bricks_to_bring_offline_dict = (select_bricks_to_bring_offline(
self.mnode, self.volname))
bricks_to_bring_offline = list(filter(None, (
bricks_to_bring_offline_dict['hot_tier_bricks'] +
bricks_to_bring_offline_dict['cold_tier_bricks'] +
bricks_to_bring_offline_dict['volume_bricks'])))
```
Modifying as below for bringing bricks offline.
```
bricks_to_bring_offline = bricks_to_bring_offline_dict['volume_bricks']
```
Change-Id: I4f59343b380ced498516794a8cc7c968390a8459
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
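For reference, a minimal sketch of how the simplified selection can be used with the glustolibs brick helpers; the bring_bricks_offline call and its exact signature are assumptions based on how these helpers are used elsewhere in this repo:
```
# Sketch only: helper names/signatures assumed from glustolibs.gluster.brick_libs.
from glustolibs.gluster.brick_libs import (select_bricks_to_bring_offline,
                                           bring_bricks_offline)


def bring_volume_bricks_offline(mnode, volname):
    """Select only the plain volume bricks (no tier buckets) and take them offline."""
    bricks_to_bring_offline_dict = select_bricks_to_bring_offline(mnode, volname)
    # For non-tiered volumes only the 'volume_bricks' entry is relevant.
    bricks_to_bring_offline = bricks_to_bring_offline_dict['volume_bricks']
    if not bring_bricks_offline(volname, bricks_to_bring_offline):
        raise RuntimeError("Failed to bring bricks offline: %s"
                           % bricks_to_bring_offline)
    return bricks_to_bring_offline
```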
Test Steps:
- Create, start and mount an EC volume in two clients
- Create multiple files and directories including all file types on
one directory from client 1
- Take an arequal checksum of the above data
- Create another folder and pump different fops from client 2
- Fail and bring up redundant bricks in a cyclic fashion in all of
the subvols maintaining a minimum delay between each operation
- In every cycle create new dir when brick is down and wait for heal
- Validate that heal info on the volume, when a brick is down, errors out instantly
- Validate arequal on bringing the brick offline
Change-Id: Ied5e0787eef786e5af7ea70191f5521b9d5e34f6
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Problem:
Testcase test_mount_point_not_go_to_rofs fails
every time in the CI runs with the below traceback:
> ret = wait_for_io_to_complete(self.all_mounts_procs, self.mounts)
tests/functional/arbiter/test_mount_point_while_deleting_files.py:137:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
build/bdist.linux-x86_64/egg/glustolibs/io/utils.py:290: in wait_for_io_to_complete
???
/usr/lib/python2.7/site-packages/glusto/connectible.py:247: in async_communicate
stdout, stderr = p.communicate()
/usr/lib64/python2.7/subprocess.py:800: in communicate
return self._communicate(input)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <subprocess.Popen object at 0x7febb64238d0>, input = None
def _communicate(self, input):
if self.stdin:
# Flush stdio buffer. This might block, if the user has
# been writing to .stdin in an uncontrolled fashion.
> self.stdin.flush()
E ValueError: I/O operation on closed file
/usr/lib64/python2.7/subprocess.py:1396: ValueError
This is because self.io_validation_complete is never
set to True in the testcase.
Fix:
Adding code to set self.io_validation_complete to
True and moving code from tearDownClass to
tearDown.
Modifying the logic to not add both clients to self.mounts.
Change-Id: I51ed635e713838ee3054c4d1dd8c6cdc16bbd8bf
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
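A minimal sketch of the intended tearDown pattern, assuming the glustolibs helpers referenced in the traceback above (wait_for_io_to_complete, ExecutionError); the class attributes are illustrative:
```
# Sketch only: attribute names (all_mounts_procs, mounts, io_validation_complete)
# are illustrative and follow the conventions quoted above.
from glusto.core import Glusto as g
from glustolibs.gluster.exceptions import ExecutionError
from glustolibs.io.utils import wait_for_io_to_complete


class TearDownSketch(object):
    def tearDown(self):
        # Wait only if the test body has not already validated I/O, and record
        # completion so the async processes are never reaped twice.
        if not self.io_validation_complete:
            if not wait_for_io_to_complete(self.all_mounts_procs, self.mounts):
                raise ExecutionError("IO failed on some of the clients")
            self.io_validation_complete = True
        g.log.info("Per-test cleanup (moved here from tearDownClass) runs next")
```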
A few TCs were failing due to a timeout issue;
increased the timeout for those TCs.
Change-Id: Id62bee81e1cb6b8bb3a712858404c7092142072b
Signed-off-by: ubansal <ubansal@redhat.com>
As heal completion is failing intermittently for
disperse volumes, increased the timeout for heal.
Change-Id: I5e7b7c8eb332ada1abc72389fc8ce883e269d226
Signed-off-by: ubansal <ubansal@redhat.com>
Change-Id: Ib39894e9f44c41f5539377c5c124ad45a786cbb3
Signed-off-by: ubansal <ubansal@redhat.com>
Tests to validate the behaviour of different file rename
scenarios, when the destination file exists initially and is hashed
to the source file's hashed or cached subvol.
Change-Id: Iec12d33c459cb966861d2efac2bae85103555cc1
Signed-off-by: Pranav <prprakas@redhat.com>
Changing the method name from test_readdirp_with_rebalance(self)
to test_access_file_with_stale_linkto_xattr(self)
Change-Id: I5503e301d65f96e38aa135827d8bc698a0371281
Signed-off-by: sayaleeraut <saraut@redhat.com>
Description: The test script verifies that a file with a stale
linkto xattr can be accessed by a non-root user.
Steps:
1) Create a volume and start it.
2) Mount the volume on client node using FUSE.
3) Create a file.
4) Enable performance.parallel-readdir and
performance.readdir-ahead on the volume.
5) Rename the file in order to create
a linkto file.
6) Force the linkto xattr values to become stale by changing the
dht subvols in the graph.
7) Log in as a non-root user and access the file.
Change-Id: I4f275dedd47a851c2c4839f51cf1867638a66667
Signed-off-by: sayaleeraut <saraut@redhat.com>
Removing 'add_to_hot_tier' parameter as it defaults
to False and it is not needed for the add-brick
operation in the test as the volume type is not tier.
Change-Id: I4a697a453e368197dfaf143d344a623d449e2614
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Test cases in this module test the
USS functionality of snapshots (snapd)
on a cloned volume and validate that snapshots
are present inside the .snaps directory by
terminating snapd on the nodes one by one
and validating that the .snaps directory is still accessible.
Change-Id: I98d48268e7c5c5952a7f0f544960203d8634b7ac
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
On setting the disperse quorum count to 5, at least 5 bricks
should be online for successful writes on the volume.
Steps:
1.Set disperse quorum count to 5
2.Write and read IO's
3.Bring down 1st brick
4.Writes and reads successful
5.Bring down 2nd brick
6.Writes should fail and reads successful
7.Write and read again
8.Writes should fail and reads successful
9.Rebalance should fail as quorum not met
10.Reset volume
11.Write and read IO's and validate them
12.Bring down redundant bricks
13.Write and read IO's and validate them
Change-Id: Ib825783f01a394918c9016808cc62f6530fe8c67
Signed-off-by: ubansal <ubansal@redhat.com>
For non-tiered volume types, a few test cases were collecting both
hot_tier_bricks and cold_tier_bricks while bringing bricks offline,
which is not needed.
Dropping the hot and cold tier bricks and collecting only the bricks
of the particular volume, as shown below.
Removing the below section
```
bricks_to_bring_offline_dict = (select_bricks_to_bring_offline(
self.mnode, self.volname))
bricks_to_bring_offline = list(filter(None, (
bricks_to_bring_offline_dict['hot_tier_bricks'] +
bricks_to_bring_offline_dict['cold_tier_bricks'] +
bricks_to_bring_offline_dict['volume_bricks'])))
```
Modifying as below for bringing bricks offline.
```
bricks_to_bring_offline = bricks_to_bring_offline_dict['volume_bricks']
```
Change-Id: Icb1dc4a79cf311b686d839f2c9390371e42142f7
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Problem:
test_snap_delete_multiple fails on I/O validation consistently across all
automation runs, with the below traceback:
Traceback (most recent call last):
File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 1246, in <module>
rc = args.func(args)
File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 374, in create_files
base_file_name, file_types)
File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 341, in _create_files
ret = pool.map(lambda file_tuple: _create_file(*file_tuple), files)
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 250, in map
return self.map_async(func, iterable, chunksize).get()
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 554, in get
raise self._value
IOError: [Errno 17] File exists: '/mnt/testvol_distributed_glusterfs/testfile42.txt'
Fix:
Change the I/O to use the --base-file-name parameter when running the I/O
scripts.
Change-Id: Ic5a8222f4fafeac4ac9aadc9c4d23327711ed9f0
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
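A rough sketch of the changed I/O invocation; --base-file-name is the parameter named above, while the script path, the file-count flag and the mount attributes are illustrative assumptions:
```
# Sketch only: the script path, file count and mount attributes are illustrative.
from glusto.core import Glusto as g


def start_unique_io(mounts, io_script_path):
    """Kick off async file creation with a distinct base file name per mount."""
    all_mounts_procs = []
    for index, mount_obj in enumerate(mounts):
        cmd = ("/usr/bin/env python %s create_files -f 99 "
               "--base-file-name file_mount_%d %s"
               % (io_script_path, index, mount_obj.mountpoint))
        # A unique base name per mount avoids the 'File exists' collision above.
        all_mounts_procs.append(
            g.run_async(mount_obj.client_system, cmd, user=mount_obj.user))
    return all_mounts_procs
```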
On setting the disperse quorum count to 6, all bricks should
be online for successful writes on the volume.
Steps:
1.Set disperse quorum count to 6
2.Write and read IO's
3.Bring down 1 brick
4.Writes should fail and reads successful
5.Write and read again
6.Writes should fail and reads successful
7.Rebalance should fail as quorum not met
8.Reset volume
9.Write and read IO's and validate them
10.Bring down redundant bricks
11.Write and read IO's and validate them
Change-Id: I93d418fd75d75fa3563d23f52fdd5aed71cfe540
Signed-off-by: ubansal <ubansal@redhat.com>
Test Steps:
1. Create volume and mount the volume on 3 clients, c1(client1),
c2(client2), and, c3(client3)
2. On c1, mkdir /c1/dir
3. On c2, Create 4000 files on mount point i.e. "/"
4. After step 3, Create next 4000 files on c2 on mount point i.e. "/"
5. On c1 Create 10000 files on /dir/
6. On c3 start moving 4000 files created on step 3 from mount point
to /dir/
7. On c3, start ls in a loop for 20 iterations
Note: Used upload scripts in setupclass, as there is one more test
to be added in the same file.
Change-Id: Ibab74433cbec4d6a4f9b494f257b3e517b8fbfbc
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Description:
This script tests Disperse(EC) eagerlock default values
and the performance impact on lookups with eagerlock
and other-eagerlock default values
Change-Id: Ia083d0d00f99a42865fb6f06eda75ecb18ff474f
Signed-off-by: nchilaka <nchilaka@redhat.com>
Testcase Steps:
1.Create an EC volume
2.Set the eager lock option by turning
on disperse.eager-lock by using different inputs:
- Try non boolean values(Must fail)
- Try boolean values
Change-Id: Iec875ce9fb4c8f7c68b012ede98bd94b82d04d7e
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
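A short sketch of the validation, using set_volume_options from glustolibs.gluster.volume_ops; the sample input values are illustrative, not an exhaustive list:
```
# Sketch only: sample values are illustrative.
from glustolibs.gluster.volume_ops import set_volume_options


def check_eager_lock_inputs(mnode, volname):
    # Non-boolean values must be rejected by volume set.
    for bad_value in ('10', 'abc', '#$%'):
        assert not set_volume_options(
            mnode, volname, {'disperse.eager-lock': bad_value}), (
            "Setting disperse.eager-lock to %r unexpectedly succeeded" % bad_value)
    # Boolean values must be accepted.
    for good_value in ('on', 'off'):
        assert set_volume_options(
            mnode, volname, {'disperse.eager-lock': good_value}), (
            "Failed to set disperse.eager-lock to %r" % good_value)
```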
Description: The test script verifies that if a file is picked for
migration and if it is deleted, then the file should be removed
successfully.
Steps :
1) First create a big data file of 10GB.
2) Rename that file, such that after rename a linkto file is created
(we are doing this to make sure that file is picked for migration.)
3) Add bricks to the volume and trigger rebalance using force option.
4) When the file has been picked for migration, delete that file from
the mount point.
5) Check whether the file has been deleted or not on the mount-point
as well as the back-end bricks.
Change-Id: I137512a1d94a89aa811a3a9d61a9fb4002bf26be
Signed-off-by: sayaleeraut <saraut@redhat.com>
Tests to validate the behaviour of different file rename
scenarios, when the destination file exists initially.
Change-Id: I12cd2568540bec198f1c3cf85213e0107c9ddd6b
Signed-off-by: Pranav <prprakas@redhat.com>
Problem:
The testcase test_volume_create_with_glusterd_restarts
consist of a asynchronous loop of glusterd restarts
which fails in the lastest runs due to patch [1]
and [2] added to glusterfs which limits the
glusterd restarts to 6.
Fix:
Add `systemctl reset-failed glusterd` to the
asynchronous loop.
Links:
[1] https://review.gluster.org/#/c/glusterfs/+/23751/
[2] https://review.gluster.org/#/c/glusterfs/+/23970/
Change-Id: Idd52bfeb99c0c43afa45403d71852f5f7b4514fa
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
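A simplified (synchronous) sketch of the restart loop with the reset added; in the actual testcase the loop runs asynchronously, and the restart count and delay here are illustrative:
```
# Sketch only: restart count and delay are illustrative.
import time

from glusto.core import Glusto as g


def restart_glusterd_in_loop(server, restarts=5, delay=5):
    for _ in range(restarts):
        # Clear systemd's start-limit counter so repeated restarts are allowed.
        g.run(server, "systemctl reset-failed glusterd")
        ret, _, err = g.run(server, "systemctl restart glusterd")
        if ret:
            raise RuntimeError("glusterd restart failed on %s: %s" % (server, err))
        time.sleep(delay)
```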
- Test is designed to run on EC volumes only
Change-Id: Ice6a77422695ebabbec6b9cfd910e453e5b2c81a
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Steps:
1.Create a volume and mount it
2.Create a directory, dir1 and run different types of IO's
3.Create a directory, dir2
4.Bring down redundant bricks
5.Write IO's to directory dir2
6.Create a directory, dir3 and run IO's (read, write, append)
7.Bring up bricks
8.Monitor heal
9.Check for data integrity of dir1
Change-Id: I9a7e366084bb46dcfc769b1d98b89b303fc16150
Signed-off-by: ubansal <ubansal@redhat.com>
Test Steps:
1. Create an EC volume
2. Mount the volume using FUSE on two different clients
3. Create ~9 files from one of the clients
4. Create ~9 dirs with ~9 files each from another client
5. Create soft-links, hard-links for file{4..6}, file{7..9}
6. Create soft-links for dir{4..6}
7. Begin renaming the files, in multiple iterations
8. Bring down a brick while renaming the files
9. Bring the brick online after renaming some of the files
10. Wait for renaming of the files
11. Validate no data loss and files are renamed successfully
Change-Id: I6d98c00ff510cb473978377bb44221908555681e
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Test Steps:
1. Create a volume and mount it on one client
2. git clone the glusterfs repo on the glusterfs volume
3. Set the performance options to off
4. Repeat step 2 on a different directory
Change-Id: Iaecce7cd14ecf84058c75847a037c6589d3833e9
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Steps:
1.Open a file descriptor when a brick is down
2.Write to File descriptor when brick has come up and
check if healing is complete
Change-Id: I721cedf4dc6a420f0c153d4232b046f780da201b
Signed-off-by: ubansal <ubansal@redhat.com>
Tests to validate the behaviour of different file rename
scenarios, when the destination file doesn't exist initially.
Change-Id: I3f22d61d9bd2fa5c54930715e2ef976c7d1ba54e
Signed-off-by: Pranav <prprakas@redhat.com>
Problem:
In setUpClass, the setup_volume method is used
for volume creation, so the volume is never mounted.
But in teardown, unmount_volume_and_cleanup_volume
is used. This causes teardown to fail.
Fix:
Change unmount_volume_and_cleanup_volume
to cleanup_volume.
Change-Id: Ia9bb6bcd36ce9ddb9c200ef18779df47a009d42f
Signed-off-by: Pranav <prprakas@redhat.com>
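A minimal sketch of the corrected teardown under that assumption (the volume was created but never mounted in setUpClass):
```
# Sketch only: mnode/volname attributes follow the usual glustolibs conventions.
from glustolibs.gluster.exceptions import ExecutionError
from glustolibs.gluster.volume_libs import cleanup_volume


class CleanupSketch(object):
    def tearDown(self):
        # No unmount step is needed because the volume was never mounted.
        if not cleanup_volume(self.mnode, self.volname):
            raise ExecutionError("Failed to cleanup volume %s" % self.volname)
```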
Decreased the deep directory creation depth from 15 to 10,
as heal completion within 20 mins for disperse
volumes was intermittently failing.
Also increased the timeout for bricks to be online
and healing to complete.
Change-Id: I1c1eef383ca4bf3f7f1f89e00da096bbbf57b9db
Signed-off-by: ubansal <ubansal@redhat.com>
Test Case:
- Create a gluster cluster
- With brick mux set to disable:
1.Set cluster.max-bricks-per-process to int and check
error message(Must fail)
2.Set cluster.max-bricks-per-process to string(Must fail)
- With brick mux set to enable:
1.Set cluster.max-bricks-per-process to string(Must fail)
2.Set cluster.max-bricks-per-process to 0
3.Set cluster.max-bricks-per-process to 1 and check
error message.(Must fail)
4.Set cluster.max-bricks-per-process to int value > 1.
Also fixing small issues observed when running all the tests
in the file.
Change-Id: Iad27cd5bbeccc2bd2f0a7e510f881b0ffcb0d3b6
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The volume distributed-arbiter was mentioned as
"distributed-arbiter: &distrbuted_arbiter" and
"type: distributed_arbiter" under "volume_types:"; but it must
be "&distrbuted-arbiter" and "type: distributed-arbiter", in
accordance with the format in gluster_base_class.py
Change-Id: I4a35a8827f26050328acfeddc3ce930225181e7a
Signed-off-by: sayaleeraut <saraut@redhat.com>
The function "set_volume_options()" is a part of "volume_ops" lib,
but was wrongly imported from "volume_libs" lib earlier. Corrected
the import statement.
Change-Id: I7295684e7a564468ac42bbe1f00643ee150f769d
Signed-off-by: sayaleeraut <saraut@redhat.com>
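The corrected import, with an illustrative usage (the option name and arguments in the comment are examples only):
```
# set_volume_options lives in volume_ops, not volume_libs.
from glustolibs.gluster.volume_ops import set_volume_options

# Example usage (mnode, volname and the option are illustrative):
# set_volume_options(mnode, volname, {'features.uss': 'enable'})
```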
Problem:
A pattern was observed where testcases
which were passing were throwing errors
in teardownclass. This was because
docleanup was running before teardownclass,
and when teardownclass was executed it
failed as the setup was already cleaned.
Solution:
Change the code to use teardown instead of teardownclass
and move the setup volume code from
setupclass to setup.
Change-Id: I37c6fde1f592224c114148f0ed7215b2494b4502
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
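A schematic sketch of the per-test layout described above; the base-class helpers (setup_volume_and_mount_volume, unmount_volume_and_cleanup_volume, get_super_method) follow patterns used across this repo, so treat the exact names as assumptions:
```
# Sketch only: helper names are assumed from the repo's base-class conventions.
from glustolibs.gluster.exceptions import ExecutionError
from glustolibs.gluster.gluster_base_class import GlusterBaseClass


class PerTestSetupSketch(GlusterBaseClass):
    def setUp(self):
        self.get_super_method(self, 'setUp')()
        # Per-test setup: docleanup can no longer race a class-level teardown.
        if not self.setup_volume_and_mount_volume(mounts=self.mounts):
            raise ExecutionError("Failed to setup and mount volume")

    def tearDown(self):
        if not self.unmount_volume_and_cleanup_volume(mounts=self.mounts):
            raise ExecutionError("Failed to unmount and cleanup volume")
        self.get_super_method(self, 'tearDown')()
```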
Description: The TC checks that there is no stack overflow
in readdirp with parallel-readdir enabled.
Steps :
1) Create a volume.
2) Mount the volume using FUSE.
3) Enable performance.parallel-readdir and
performance.readdir-ahead on the volume.
4) Create 10000 files on the mount point.
5) Add-brick to the volume.
6) Perform fix-layout on the volume (not rebalance).
7) From client node, rename all the files, this will result in
creation of linkto files on the newly added brick.
8) Do ls -l (lookup) on the mount-point.
Change-Id: I821efea55e3f8981ecd307b93781f2da5790c548
Signed-off-by: sayaleeraut <saraut@redhat.com>
Description:
This testcase tests the new xattr glusterfs.mdata attributes
when features.ctime is enabled
Change-Id: I3c2dd6537f88b6d9da85aa6819c96e1c236a0d61
Signed-off-by: nchilaka <nchilaka@redhat.com>
Test Case:
- Create a gluster cluster
- Set cluster.brick-multiplex value to random string(Must fail)
- Set cluster.brick-multiplex value to random int(Must fail)
- Set cluster.brick-multiplex value to random
special characters(Must fail)
Change-Id: Ib0233668aad8d72572b1dd9d17a5d0c27c364250
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test Steps:
- create and mount EC volume 4+2
- start append to a file from client
- bring down one of the bricks (say b1)
- wait for ~1 minute and bring down another brick (say b2)
- after ~1 minute bring up the first brick (b1)
- check the xattrs 'ec.size', 'ec.version'
- xattrs of online bricks should be same as an indication to heal
Change-Id: I81a5bad4a91dd891fbbc9d93ae3f76610237789e
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Test Steps:
1. Create a volume, start and mount it
2. Create directories and files
3. Rename, change permissions of files
4. Create hardlinks and softlinks and different types of IO's
5. Delete all the data
6. Check no heals are pending
7. Check all bricks are empty
Change-Id: Ic8f5dad1a44de71688a6b0a2fcfb4a25cef435ba
Signed-off-by: ubansal <ubansal@redhat.com>
Test steps:
1. Create a volume, start it and mount it on one client.
2. Enable metadata-cache(md-cache) options on the volume.
3. Touch a file and create a hardlink for it.
4. Read data from the hardlink.
5. Read data from the actual file.
Change-Id: Ibf4b8757262707fcfb4d09b4b031ff9dea166570
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Description: The TC checks that there is no data loss when
rename is performed with a brick of volume down.
Steps :
1) Create a volume.
2) Mount the volume using FUSE.
3) Create 1000 files on the mount point.
4) Create the soft-link for file{1..100}
5) Create the hard-link for file{101..200}
6) Check for the file count on the mount point.
7) Begin renaming the files, in multiple iterations.
8) Let few iterations of the rename complete successfully.
9) Then while rename is still in progress, kill a brick part of
the volume.
10) Let the brick be down for some time, such that a couple
of rename iterations are completed.
11) Bring the brick back online.
12) Wait for the IO to complete.
13) Check if there is any data loss.
14) Check if all the files are renamed properly.
Change-Id: I7b7c4aed7df7f19a10ec8c2577dfec1f1ceeb46c
Signed-off-by: sayaleeraut <saraut@redhat.com>
Test Steps:
1) Create and start a distributed-replicated volume.
2) Give different inputs to the storage.reserve volume set options
3) Validate the command behaviour on wrong inputs
Change-Id: I4bbad81cbea9b3b9e59a61fcf7f2b70eac19b216
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Issue:
In python3 assertItemsEqual is no longer supported and is replaced with assertCountEqual (Refer [1]).
Because of this issue, a few arbiter tests are failing.
[1] https://docs.python.org/2/library/unittest.html#unittest.TestCase.assertItemsEqual
Fix:
The replacement assertCountEqual is not supported in python2. So the fix is to replace assertItemsEqual
with assertEqual(sorted(expected), sorted(actual))
Change-Id: Ic1d599fa31f85a8a41598b6c245056a6ff01e000
Signed-off-by: Pranav <prprakas@redhat.com>
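A minimal sketch of the py2/py3-compatible comparison described above:
```
# Works on both python2 and python3: compare sorted copies instead of using
# assertItemsEqual/assertCountEqual, which are each missing on one version.
import unittest


class UnorderedCompareSketch(unittest.TestCase):
    def check_same_items(self, expected, actual):
        self.assertEqual(sorted(expected), sorted(actual))
```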
Testcase steps:
1. Form a gluster cluster by peer probing and create a volume
2. Unmount the brick using which the volume is created
3. Run 'gluster get-state' and validate absence of error 'Failed to get
daemon state. Check glusterd log file for more details'
4. Create another volume and start it using different bricks which are
not used to create above volume
5. Run 'gluster get-state' and validate the absence of above error.
Change-Id: Ib629b53c01860355e5bfafef53dcc3233af071e3
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
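A small sketch of the get-state validation step; the error string is the one quoted in the steps, and g.run is the usual glusto remote-execution helper:
```
# Sketch only.
from glusto.core import Glusto as g


def validate_get_state(mnode):
    ret, out, err = g.run(mnode, "gluster get-state")
    assert ret == 0, "gluster get-state failed: %s" % err
    error_msg = ("Failed to get daemon state. Check glusterd log file "
                 "for more details")
    assert error_msg not in out + err, "get-state reported a daemon state error"
```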
Verification of root-squash functionality with NFS-Ganesha
* Create a volume and export it via Ganesha
* Mount the volume on clients
* Create some files and dirs inside mount point
* Check for owner and group
Owner and group should be root
* Set permission as 777 for mount point
* Enable root-squash on volume
* Create some more files and dirs
* Check for owner and group for any file
Owner and group should be nfsnobody
* Edit file created by root user
nfsnobody user should not be allowed to edit file
Change-Id: Ia345c772c84fcfe6ef716b9f1026fca5d399ab2a
Signed-off-by: Manisha Saini <msaini@redhat.com>
Change-Id: I3f77dc73044a5bc59a26319c55e8e024e2edf449
Signed-off-by: ubansal <ubansal@redhat.com>
Scenarios:
1 - Rename directory when destination is not present
2 - Rename directory when destination is present
The TC was failing when the volume was mounted using NFS at
validate_files_in_dir() because the method uses
'trusted.glusterfs.pathinfo' on the mount, which is a glusterfs
specific xattr. When the volume is mounted using NFS, it cannot
find the xattr and hence it failed.
Change-Id: Ic61de773525e717a73178a4694c015276da2a688
Signed-off-by: sayaleeraut <saraut@redhat.com>
The assertIn statement looks for out in warning_message,
which fails every time, as it should ideally
look for warning_message in out.
Change-Id: I57e0221097c861e251995e5e8456cb19964e7d17
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
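A minimal sketch of the corrected assertion order:
```
import unittest


class WarningCheckSketch(unittest.TestCase):
    def check_warning(self, out, warning_message):
        # Wrong (original): self.assertIn(out, warning_message)
        # Right: look for the expected warning inside the command output.
        self.assertIn(warning_message, out)
```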
When a snapshot is restored, that snapshot gets removed.
USS makes use of a '.snaps' directory in the mount point
where all the activated snapshots can be listed.
So the restored snapshot should not be listed under the
'.snaps' directory regardless of it being activated or
deactivated.
Steps:
* Perform I/O on mounts
* Enable USS on volume
* Validate USS is enabled
* Create a snapshot
* Activate the snapshot
* Perform some more I/O
* Create another snapshot
* Activate the second snapshot
* Restore volume to the second snapshot
* From mount point validate under .snaps
- first snapshot should be listed
- second snapshot should not be listed
Change-Id: I5630d8aad6b4758d49e8d4f53497073c78a00a6b
Co-authored-by: Sunny Kumar <sunkumar@redhat.com>
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Test Summary and Steps:
This testcase validates if ctime, mtime and atime of a
created object is same
1. Create a volume and check if features.ctime is disabled by default
2. Enable features.ctime
3. Create a new directory dir1 and check if m|a|ctimes are same
4. Create a new file file1 and check if m|a|ctimes are same
5. Again create a new file file2 and check if m|a|ctimes are same
after issuing an immediate lookup
Change-Id: I024c11706a0309806c081c957b9305be92936f7f
Signed-off-by: nchilaka <nchilaka@redhat.com>
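A rough sketch of the timestamp check run from a client; fetching a|m|ctime via stat over g.run is an illustrative approach, not necessarily how the testcase does it, and the client/path arguments are examples:
```
# Sketch only: client and path are illustrative.
from glusto.core import Glusto as g


def assert_times_match(client, path):
    ret, out, _ = g.run(client, "stat -c '%X %Y %Z' " + path)
    assert ret == 0, "stat failed on %s" % path
    atime, mtime, ctime = out.split()
    assert atime == mtime == ctime, (
        "a|m|ctime differ for %s: %s" % (path, out.strip()))
```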