The distributed-arbiter volume was mentioned as
"distributed-arbiter: &distrbuted_arbiter" and
"type: distributed_arbiter" under "volume_types:", but it must
be "&distrbuted-arbiter" and "type: distributed-arbiter", in
accordance with the format in gluster_base_class.py.
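A minimal sketch of the corrected entry (the surrounding keys and indentation are illustrative; only the hyphenated anchor and type values come from this fix):

```yaml
volume_types:
    distributed-arbiter: &distrbuted-arbiter
        type: distributed-arbiter
```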
Change-Id: I4a35a8827f26050328acfeddc3ce930225181e7a
Signed-off-by: sayaleeraut <saraut@redhat.com>
The function "set_volume_options()" is a part of "volume_ops" lib,
but was wrongly imported from "volume_libs" lib earlier. Corrected
the import statement.
Change-Id: I7295684e7a564468ac42bbe1f00643ee150f769d
Signed-off-by: sayaleeraut <saraut@redhat.com>
Problem:
A pattern was observed where testcases
which were passing were throwing errors
in teardownclass. This was because
docleanup was running before teardownclass,
and when teardownclass was executed it
failed as the setup was already cleaned.
Solution:
Change the code to tear down from teardown
instead of teardownclass, and move the setup
volume code from setupclass to setup.
Change-Id: I37c6fde1f592224c114148f0ed7215b2494b4502
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Description: The TC checks that there is no stack overflow
in readdirp with parallel-readdir enabled.
Steps:
1) Create a volume.
2) Mount the volume using FUSE.
3) Enable performance.parallel-readdir and
performance.readdir-ahead on the volume.
4) Create 10000 files on the mount point.
5) Add-brick to the volume.
6) Perform fix-layout on the volume (not rebalance).
7) From the client node, rename all the files; this will result in
the creation of linkto files on the newly added brick.
8) Do ls -l (lookup) on the mount-point.
Change-Id: I821efea55e3f8981ecd307b93781f2da5790c548
Signed-off-by: sayaleeraut <saraut@redhat.com>
Description:
This testcase tests the new glusterfs.mdata xattr
when features.ctime is enabled
Change-Id: I3c2dd6537f88b6d9da85aa6819c96e1c236a0d61
Signed-off-by: nchilaka <nchilaka@redhat.com>
Test Case:
- Create a gluster cluster
- Set cluster.brick-multiplex value to a random string (must fail)
- Set cluster.brick-multiplex value to a random int (must fail)
- Set cluster.brick-multiplex value to random
special characters (must fail)
Change-Id: Ib0233668aad8d72572b1dd9d17a5d0c27c364250
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test Steps:
- create and mount EC volume 4+2
- start append to a file from client
- bring down one of the bricks (say b1)
- wait for ~a minute and bring down another brick (say b2)
- after ~a minute bring up the first brick (b1)
- check the xattrs 'ec.size', 'ec.version'
- xattrs of the online bricks should be the same, as an indication to heal
Change-Id: I81a5bad4a91dd891fbbc9d93ae3f76610237789e
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Test Steps:
1. Create a volume, start and mount it
2. Create directories and files
3. Rename and change permissions of files
4. Create hardlinks and softlinks and perform different types of IO
5. Delete all the data
6. Check that no heals are pending
7. Check that all bricks are empty
Change-Id: Ic8f5dad1a44de71688a6b0a2fcfb4a25cef435ba
Signed-off-by: ubansal <ubansal@redhat.com>
Test steps:
1. Create volume, start and mount it on one client.
2. Enable metadata-cache(md-cache) options on the volume.
3. Touch a file and create a hardlink for it.
4. Read data from the hardlink.
5. Read data from the actual file.
Change-Id: Ibf4b8757262707fcfb4d09b4b031ff9dea166570
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Description: The TC checks that there is no data loss when
rename is performed with a brick of volume down.
Steps:
1) Create a volume.
2) Mount the volume using FUSE.
3) Create 1000 files on the mount point.
4) Create the soft-link for file{1..100}
5) Create the hard-link for file{101..200}
6) Check for the file count on the mount point.
7) Begin renaming the files, in multiple iterations.
8) Let a few iterations of the rename complete successfully.
9) Then, while rename is still in progress, kill a brick that is
part of the volume.
10) Let the brick be down for some time, such that a couple
of rename iterations are completed.
11) Bring the brick back online.
12) Wait for the IO to complete.
13) Check if there is any data loss.
14) Check if all the files are renamed properly.
Change-Id: I7b7c4aed7df7f19a10ec8c2577dfec1f1ceeb46c
Signed-off-by: sayaleeraut <saraut@redhat.com>
Test Steps:
1) Create and start a distributed-replicated volume.
2) Give different inputs to the storage.reserve volume set options
3) Validate the command behaviour on wrong inputs
Change-Id: I4bbad81cbea9b3b9e59a61fcf7f2b70eac19b216
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Issue:
In python3, assertItemsEqual is no longer supported; it is replaced by assertCountEqual (refer [1]).
Because of this, a few arbiter tests are failing.
[1] https://docs.python.org/2/library/unittest.html#unittest.TestCase.assertItemsEqual
Fix:
The replacement assertCountEqual is not supported in python2, so the fix is to replace assertItemsEqual
with assertEqual(sorted(expected), sorted(actual))
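The portable comparison can be exercised directly (the brick lists below are illustrative):

```python
# assertItemsEqual (py2) and assertCountEqual (py3) both compare
# collections while ignoring order; sorting first and comparing for
# equality gives the same check on either interpreter.
expected = ["server1:/bricks/brick0", "server0:/bricks/brick0"]
actual = ["server0:/bricks/brick0", "server1:/bricks/brick0"]
assert sorted(expected) == sorted(actual)
```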
Change-Id: Ic1d599fa31f85a8a41598b6c245056a6ff01e000
Signed-off-by: Pranav <prprakas@redhat.com>
Testcase steps:
1. Form a gluster cluster by peer probing and create a volume
2. Unmount the brick used to create the volume
3. Run 'gluster get-state' and validate the absence of the error 'Failed to get
daemon state. Check glusterd log file for more details'
4. Create another volume and start it using different bricks which are
not used to create the above volume
5. Run 'gluster get-state' and validate the absence of the above error.
Change-Id: Ib629b53c01860355e5bfafef53dcc3233af071e3
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Verification of root-squash functionality with NFS-Ganesha
* Create a volume and export it via Ganesha
* Mount the volume on clients
* Create some files and dirs inside mount point
* Check for owner and group
Owner and group should be root
* Set permission as 777 for mount point
* Enable root-squash on volume
* Create some more files and dirs
* Check for owner and group for any file
Owner and group should be nfsnobody
* Edit file created by root user
nfsnobody user should not be allowed to edit file
Change-Id: Ia345c772c84fcfe6ef716b9f1026fca5d399ab2a
Signed-off-by: Manisha Saini <msaini@redhat.com>
Change-Id: I3f77dc73044a5bc59a26319c55e8e024e2edf449
Signed-off-by: ubansal <ubansal@redhat.com>
Scenarios:
1 - Rename directory when destination is not present
2 - Rename directory when destination is present
The TC was failing when the volume was mounted using NFS, at
validate_files_in_dir(), because the method uses
'trusted.glusterfs.pathinfo' on the mount, which is a
glusterfs-specific xattr. When the volume is mounted using NFS,
it cannot find the xattr and hence it failed.
Change-Id: Ic61de773525e717a73178a4694c015276da2a688
Signed-off-by: sayaleeraut <saraut@redhat.com>
The assertIn statement looks for out in warning_message,
which fails every time; it should ideally
look for warning_message in out.
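The direction of the membership check can be sketched with illustrative strings (the real out comes from a gluster command):

```python
# assertIn(member, container) checks "member in container", so the
# expected warning text must come first and the command output second.
out = "volume create: success (with a warning about brick paths)"
warning_message = "warning"
assert warning_message in out          # correct: warning text inside output
assert not (out in warning_message)    # the reversed check always fails here
```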
Change-Id: I57e0221097c861e251995e5e8456cb19964e7d17
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
When a snapshot is restored, that snapshot gets removed.
USS makes use of a '.snaps' directory in the mount point
where all the activated snapshots can be listed.
So the restored snapshot should not be listed under the
'.snaps' directory regardless of it being activated or
deactivated.
Steps:
* Perform I/O on mounts
* Enable USS on volume
* Validate USS is enabled
* Create a snapshot
* Activate the snapshot
* Perform some more I/O
* Create another snapshot
* Activate the second snapshot
* Restore volume to the second snapshot
* From mount point validate under .snaps
- first snapshot should be listed
- second snapshot should not be listed
Change-Id: I5630d8aad6b4758d49e8d4f53497073c78a00a6b
Co-authored-by: Sunny Kumar <sunkumar@redhat.com>
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Test Summary and Steps:
This testcase validates that the ctime, mtime and atime of a
created object are the same
1. Create a volume and check if features.ctime is disabled by default
2. Enable features.ctime
3. Create a new directory dir1 and check if m|a|ctimes are the same
4. Create a new file file1 and check if m|a|ctimes are the same
5. Again create a new file file2 and check if m|a|ctimes are the same
after issuing an immediate lookup
Change-Id: I024c11706a0309806c081c957b9305be92936f7f
Signed-off-by: nchilaka <nchilaka@redhat.com>
Verify remove brick operation while IO is running
Steps:
1. Start IO on mount points
2. Perform remove brick operation
3. Validate IOs
Change-Id: Ie394f96c9180be57704ca637c8cd725af82323cb
Co-authored-by: Jilju Joy <jijoy@redhat.com>
Signed-off-by: Jilju Joy <jijoy@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Move cases from teardown class to teardown in snapshot
Change-Id: I7b33fa2728665fad000a5ad881f6690d40913f22
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
test case: (rmdir with subvol down)
case -1:
- create parent
- bring down a non-hashed subvolume for directory child
- create parent/child
- rmdir /mnt/parent will fail with ENOTCONN
case -2:
- create dir1 and dir2
- bring down hashed subvol for dir1
- bring down a non-hashed subvol for dir2
- rmdir dir1 should fail with ENOTCONN
- rmdir dir2 should fail with ENOTCONN
case -3:
- create parent
- mkdir parent/child
- touch parent/child/file
- bring down a subvol where file is not present
- rm -rf parent
- Only file should be deleted
- rm -rf should fail with ENOTCONN
case -4:
- Bring down a non-hashed subvol for parent_dir
- mkdir parent
- rmdir parent should fail with ENOTCONN
Change-Id: I8fbd425729aaf04eabfced315f94167178918e31
Co-authored-by: Susant Palai <spalai@redhat.com>
Signed-off-by: Susant Palai <spalai@redhat.com>
Signed-off-by: Pranav <prprakas@redhat.com>
Steps:
1.Enable uss and create snapshot, list and delete
2.Create a snapshot with the same name and list
Github Issue for CentOS-CI failure:
https://github.com/gluster/glusterfs/issues/1203
Testcase failing due to:
https://bugzilla.redhat.com/show_bug.cgi?id=1828820
Change-Id: I829e6b340dfb4963355b445259fcb011b62ba057
Signed-off-by: ubansal <ubansal@redhat.com>
Failing in CentOS-CI due to this bug
https://bugzilla.redhat.com/show_bug.cgi?id=1768380
Description:
Test Script which verifies that the server side healing must happen
only if the heal daemon is running on the node where source brick
resides.
* Create and start the Replicate volume
* Check the glustershd processes - Only 1 glustershd should be listed
* Bring down the bricks without affecting the cluster
* Create files on volume
* kill the glustershd on the node where the bricks are running
* bring up the bricks which were killed in the previous steps
* check the heal info - heal info must show pending heal info; heal
shouldn't happen since glustershd is down on the source node
* issue heal
* trigger client side heal
* heal should complete successfully
Change-Id: I1fba01f980a520b607c38d8f3371bcfe086f7783
Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>,
Milind Waykole <milindwaykole96@gmail.com>
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Signed-off-by: Milind Waykole <milindwaykole96@gmail.com>
Signed-off-by: Pranav <prprakas@redhat.com>
Move cases from teardown class to teardown in quota
Change-Id: Ia20fe9bef09842f891f0f27ab711a1ef4c9f6f39
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Change-Id: Id94870735b26fbeab2bf448d4f80341c92beb5ba
Signed-off-by: ubansal <ubansal@redhat.com>
Move cases from teardown class to teardown in dht
Change-Id: Id0cf120c6229715521ae19fd4bb00cad553d701f
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Testcase steps:
1.Turn off the self heal daemon option
2.Create IO
3.Calculate arequal of the bricks and mount point
4.Bring down "brick1" process
5.Change the permissions of the directories and files
6.Change the ownership of the directories and files
7.Change the group of the directories and files
8.Bring back the brick "brick1" process
9.Execute "find . | xargs stat" from the mount point to trigger heal
10.Verify the changes in permissions are not self healed on brick1
11.Verify the changes in permissions on all bricks but brick1
12.Verify the changes in ownership are not self healed on brick1
13.Verify the changes in ownership on all the bricks but brick1
14.Verify the changes in group are not successfully self-healed
on brick1
15.Verify the changes in group on all the bricks but brick1
16.Turn on the option metadata-self-heal
17.Execute "find . | xargs md5sum" from the mount point to trigger heal
18.Wait for heal to complete
19.Verify the changes in permissions are self-healed on brick1
20.Verify the changes in ownership are successfully self-healed
on brick1
21.Verify the changes in group are successfully self-healed on brick1
22.Calculate arequal check on all the bricks and mount point
Change-Id: Ia7fb1b272c3c6bf85093690819b68bd83efefe14
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Description:
This test case creates files at mount point
and verifies custom attributes across bricks
Testcase steps:
1.Create a gluster volume and start it.
2.Create file and link files.
3.Create a custom xattr for file.
4.Verify that xattr for file is displayed on
mount point and bricks
5.Modify custom xattr value and verify that xattr
for file is displayed on mount point and bricks
6.Verify that custom xattr is not displayed
once you remove it
7.Create a custom xattr for symbolic link.
8.Verify that xattr for symbolic link
is displayed on mount point and sub-volume
9.Modify custom xattr value and verify that
xattr for symbolic link is displayed on
mount point and bricks
10.Verify that custom xattr is not
displayed once you remove it.
Change-Id: Iff7360273369c77da243f2c09df2e10a0eec27ea
Co-authored-by: Kartik_Burmee <kburmee@redhat.com>
Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The test case 'tests/functional/quota/test_limit_usage_deep_dir.py'
fails erratically for disperse volume.
A bug [1] had been raised for the same where it was decided to
remove the disperse volume type from the 'runs_on' of that test.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1672983
Change-Id: Ica8f2af449225d72d1b60c2c86b20e16b80a5a5a
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Description:
Test Script to verify the glustershd server vol file
has only entries for replicate volumes.
Testcase steps:
1.Create multiple volumes and start all volumes
2.Check the glustershd processes(Only 1 glustershd
should be listed)
3.Do replace brick on the replicate volume
4.Confirm that the brick is replaced
5.Check the glustershd processes(Only 1 glustershd should be listed
and pid should be different)
6.glustershd server vol should be updated with new bricks
Change-Id: I09245c8ff6a2b31a038749643af294aa8b81a51a
Co-authored-by: Vijay Avuthu <vavuthu@redhat.com>,
Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
This test verifies remove brick operations on disperse
volume.
Change-Id: If4be3ffc39a8b58e4296d58b288e3843a218c468
Co-authored-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Test case:
* Create IO
* Calculate arequal from mount
* kill glusterd process and glustershd process on arbiter nodes
* Delete data from backend from the arbiter nodes
* Start glusterd process and force start the volume
to bring the processes online
* Check if heal is completed
* Check for split-brain
* Calculate arequal checksum and compare it
Change-Id: I41192134530ec42db3398ae97e4f328b77e529d1
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Pranav <prprakas@redhat.com>
Test Steps:
1. Create a volume and set the volume option
'diagnostics.client-log-level' to DEBUG and mount the volume on one
client.
2. Create a directory
3. Validate the number of lookups for the directory creation from the
log file.
4. Perform a new lookup of the directory
5. No new lookups should have happened on the directory, validate from
the log file.
6. Bring down one subvol of the volume and repeat step 4, 5
7. Bring down one brick from the online bricks and repeat step 4, 5
8. Start the volume with force and wait for all process to be online.
Change-Id: I162766837fd7e61625238a669c4050c2ec9c8a8b
Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
Changing the distribute count to 4 for the volume types
distributed-replicated and distributed-dispersed. Earlier, with
distribute count 2, after remove-brick the dist-rep & dist-disp
volumes were converted to pure replicated or pure dispersed, which
caused a "layout not complete" error, as with the DHT pass-through
feature the layout is not set on bricks if the volume type is pure
replicated/pure dispersed on gluster version 6.0.
Adding distributed-arbiter volume type and have added code to
override its configuration as well.
Change-Id: Ic7a3404ed49d24f956de33f7bd5ca8ea61297e5b
Signed-off-by: sayaleeraut <saraut@redhat.com>
Test case verifies whether the gluster get-state shows the proper brick status in the output.
The test case checks the brick status when the brick is up and also after killing the brick process.
It also verifies whether the other bricks are up when a particular brick process is killed.
Change-Id: I9801249d25be2817104194bb0a8f6a16271d662a
Signed-off-by: Pranav <prprakas@redhat.com>
Test Steps:
1. Check the existence of '/usr/lib/firewalld/services/glusterfs.xml'
2. Validate the owner of this file as 'glusterfs-server'
3. Validate SELinux label context as 'system_u:object_r:lib_t:s0'
Change-Id: I55bfb3b51a9188e2088459eaf5304b8b73f2834a
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Description:
This test case creates a large file at mount point,
adds extra brick and initiates rebalance. While
migration is in progress, it stops rebalance process
and checks if it has stopped.
Testcase Steps:
1. Create and start a volume.
2. Mount volume on client and create a large file.
3. Add bricks to the volume and check layout
4. Rename the file such that it hashes to a different
subvol.
5. Start rebalance on volume.
6. Stop rebalance on volume.
Change-Id: I7edd37a548467d6624ffe1efa64b0c1b56ff26ed
Co-authored-by: Kartik_Burmee <kburmee@redhat.com>
Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The TC was failing with "AssertionError: ('hash range is not
there %s', False)" even though the bricks were healed and the
directory was created on non-hashed bricks. This was due to the
conflict between the TC and the DHT library changes (added to
fix the issues caused by DHT pass-through functionality). The
code is now modified according to the library changes and hence
the TC works fine.
Change-Id: I501e7db89643822fbc711e631ceacda79e4c4ea4
Signed-off-by: sayaleeraut <saraut@redhat.com>
Problem:
There are two python2 to python3 incompatibilities
present in test_add_brick_when_quorum_not_met.py
and test_add_identical_brick_new_node.py.
In test_add_brick_when_quorum_not_met.py the testcase
fails with the below error:
> for node in range(num_of_nodes_to_bring_down, num_of_servers):
E TypeError: 'float' object cannot be interpreted as an integer
This is because a = 10 / 5 returns a float in python3
but it returns an int in python2 as shown below:
Python 2.7.15 (default, Oct 15 2018, 15:26:09)
[GCC 8.2.1 20180801 (Red Hat 8.2.1-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 10/5
>>> type(a)
<type 'int'>
Python 3.7.3 (default, Mar 27 2019, 13:41:07)
[GCC 8.3.1 20190223 (Red Hat 8.3.1-2)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 10/5
>>> type(a)
<class 'float'>
In test_add_identical_brick_new_node.py the testcase
fails with the below error:
> add_bricks.append(string.replace(bricks_list[0],
self.servers[0], self.servers[1]))
E AttributeError: module 'string' has no attribute 'replace'
This is because the string module's replace was removed in
python3 and is replaced by the str method.
Solution:
For the first issue we would need to change
a = 10/5 to a = 10//5 as // behaves consistently
across both python versions.
For the second issue, adding a try/except
block as shown below would suffice:
except AttributeError:
add_bricks.append(str.replace(bricks_list[0],
self.servers[0],
self.servers[1]))
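Both incompatibilities can be reproduced in isolation (the brick path below is illustrative):

```python
# In Python 3, "/" always returns a float, while "//" keeps the
# integer-division semantics of Python 2 on both interpreters.
count = 10 / 5        # 2.0 on Python 3
portable = 10 // 5    # 2 on both Python 2 and 3
assert isinstance(portable, int)

# string.replace(s, old, new) was a module-level function in Python 2
# only; the str method form works on both interpreters.
brick = "server0:/bricks/brick0"
assert brick.replace("server0", "server1") == "server1:/bricks/brick0"
```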
Change-Id: I9ec325760b279032af3748101bd2bfc58589d57d
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Add a sleep after glusterd restart is run asynchronously
on the servers, to avoid 'another transaction in
progress' failures in the testcase.
Change-Id: I514c24813dc7c102b807a582ae2b0d19069e0d34
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Problem:
`g.rpyc_get_connection()` has a limitation where it can't
convert python2 calls to python3 calls. Due to this, a large
number of testcases fail when executed from a python2 machine
on a python3-only setup or vice versa, with the below stack trace:
```
E ========= Remote Traceback (1) =========
E Traceback (most recent call last):
E File "/root/tmp.tL8Eqx7d8l/rpyc/core/protocol.py", line 323, in _dispatch_request
E res = self._HANDLERS[handler](self, *args)
E File "/root/tmp.tL8Eqx7d8l/rpyc/core/protocol.py", line 591, in _handle_inspect
E if hasattr(self._local_objects[id_pack], '____conn__'):
E File "/root/tmp.tL8Eqx7d8l/rpyc/lib/colls.py", line 110, in __getitem__
E return self._dict[key][0]
E KeyError: (b'rpyc.core.service.SlaveService', 94282642994712, 140067150858560)
```
Solution:
Write generic code which can run from python2 to
python3 and vice versa.
Change-Id: I7783485a784ef4b57f626f77e6012d918fee6032
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Add wait_for_bricks_to_be_online steps in teardown after
glusterd is started in the test steps.
Change-Id: Id30a3d870c6ba7c77b0e79604521ec41fe624822
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Steps:
1.Checks replace-brick and data integrity post that
2.Checks replace-brick while IOs are in progress
Change-Id: Idfc801fde50967924696b2e909633b9ca95ac721
Signed-off-by: ubansal <ubansal@redhat.com>
Problem:
Line 135 is missing (), which leads to the below traceback
when the testcase fails:
```
Traceback (most recent call last):
File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
msg = self.format(record)
File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
return fmt.format(record)
File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
record.message = record.getMessage()
File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Logged from file test_volume_start_stop_while_rebalance_in_progress.py, line 135
```
Solution:
Adding the missing () brackets in line 135.
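The traceback arises because logging defers %-formatting until the record is emitted, so a specifier/argument mismatch only fails at that point (the strings below are illustrative):

```python
# One "%s" specifier but two arguments: the mismatch stays silent until
# the string is actually formatted, mirroring the logging traceback.
msg = "rebalance status on volume %s"
try:
    msg % ("testvol", "extra-arg")
    raised = False
except TypeError as exc:
    raised = True
    assert "not all arguments converted" in str(exc)
assert raised
```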
Change-Id: I318a5b838f01840afee5d4109645cc7dcd86c8fa
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
BZ#1702298 - Custom xattrs are not healed on newly added brick
Test Steps:
1) Create a volume.
2) Mount the volume using FUSE.
3) Create 100 directories on the mount point.
4) Set the xattr on the directories.
5) Add bricks to the volume and trigger rebalance.
6) Wait for rebalance to complete.
7) After rebalance completes, check if all the bricks have healed.
8) Check the xattr for dirs on the newly added bricks.
Change-Id: If83f65ea163ccf16f9024d6b3a867ba7b35773f0
Signed-off-by: sayaleeraut <saraut@redhat.com>
Problem:
Testcase test_ec_version was failing with the
below traceback:
Traceback (most recent call last):
File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
msg = self.format(record)
File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
return fmt.format(record)
File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
record.message = record.getMessage()
File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
msg = msg % self.args
TypeError: %d format: a number is required, not str
Logged from file test_ec_version_healing_whenonebrickdown.py, line 233
This was due to a missing 's' in the log message on line 233.
Solution:
Add the missing s in the log message on line 233 as
shown below:
g.log.info('Brick %s is offline successfully', brick_b2_down)
Also renaming the file for more clarity of what the
testcase does.
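The %d-vs-string failure can be reproduced directly (the brick value below is illustrative):

```python
brick = "node1:/bricks/brick2"   # illustrative brick path

# %d refuses a string argument, which is exactly the TypeError raised
# when logging formatted the record:
try:
    "Brick %d is offline successfully" % brick
    raised = False
except TypeError as exc:
    raised = True
    assert "number is required" in str(exc)
assert raised

# With the missing 's' in place, formatting succeeds:
assert ("Brick %s is offline successfully" % brick) == \
    "Brick node1:/bricks/brick2 is offline successfully"
```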
Change-Id: I626fbe23dfaab0dd6d77c75329664a81a120c638
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Testcase steps: (file access)
- rename the file so that the hashed and cached are different
- make sure file can be accessed as long as cached is up
Fixes a library issue as well in find_new_hashed()
Change-Id: Id81264848d6470b9fe477b50290f5ecf917ceda3
Co-authored-by: Susant Palai <spalai@redhat.com>
Signed-off-by: Susant Palai <spalai@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Case 1:
1.mkdir srcdir and dstdir (such that srcdir and
dstdir hash to different subvols)
2.Bring down srcdir hashed subvol
3.mv srcdir dstdir (should fail)
Case 2:
1.mkdir srcdir dstdir
2.Bring down srcdir hashed
3.Bring down dstdir hashed
4.mv srcdir dstdir (should fail)
Case 3:
1.mkdir srcdir dstdir
2.Bring down dstdir hashed subvol
3.mv srcdir dstdir (should fail)
Additional library fix details:
Also fixing library function to work with distributed-disperse volume
by removing `if oldhashed._host != brickdir._host:` as the same node
can host multiple bricks of the same volume.
Change-Id: Iaa472d1eb304b547bdec7a8e6b62c1df1a0ce591
Co-authored-by: Susant Palai <spalai@redhat.com>
Signed-off-by: Susant Palai <spalai@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Changes done in this patch include:
1. reduced runtime of test by removing multiple volume configs
2. added extra validation for node already peer detached
3. added test steps to cover peer detach when volume is offline
Change-Id: I80413594e90b59dc63b7f4f52e6e348ddb7a9fa0
Signed-off-by: nchilaka <nchilaka@redhat.com>