Commit message
Testcase steps:
1. Create a single-brick volume
2. Add some files and directories
3. Get arequal from mountpoint
4. Add bricks such that the volume becomes a 1x3 replica volume
(see the sketch after this list)
5. Start heal full
6. Make sure heal is completed
7. Get arequals from all bricks and
compare with the arequal from mountpoint
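A minimal sketch of steps 4-6 using the gluster CLI (volume name, hosts, and brick paths are hypothetical; the actual test drives these through glustolibs helpers):
```
import subprocess

def run(cmd):
    # Echo and run a gluster CLI command on the management node.
    print("+", cmd)
    subprocess.check_call(cmd, shell=True)

# Step 4: add two bricks so the single-brick volume becomes a 1x3 replica.
run("gluster volume add-brick testvol replica 3 "
    "server2:/bricks/brick1/testvol server3:/bricks/brick1/testvol")

# Steps 5-6: trigger a full heal, then watch `heal info` until it is clean.
run("gluster volume heal testvol full")
run("gluster volume heal testvol info")
```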
Change-Id: I4ef140b326b3d9edcbd5b1f0b7d9c43f38ccfe66
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
BZ#1257394 - Provide meaningful errors on peer probe and peer detach
Test Steps:
1. Check the current peer status
2. Detach one of the valid nodes which is already part of the cluster
3. Stop glusterd on that node
4. Try to peer probe the above node back into the cluster, which must
fail with a transport endpoint error
5. Recheck the test using the hostname; the same result is expected
6. Start glusterd on that node
7. Halt/reboot the node
8. Try to peer probe the halted node, which must fail again
9. The only error accepted is the one below (see the sketch after
this list):
"peer probe: failed: Probe returned with Transport endpoint is not
connected"
10. Check peer status and make sure no other nodes are in peer
rejected state
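A minimal sketch of the probe-and-verify step (hostname hypothetical; the test uses glusto's remote execution rather than a local subprocess):
```
import subprocess

# Probe a node whose glusterd is stopped; the only acceptable error is
# the transport-endpoint message quoted in step 9 above.
proc = subprocess.run("gluster peer probe server2.example.com",
                      shell=True, capture_output=True, text=True)
assert proc.returncode != 0, "peer probe should have failed"
assert "Transport endpoint is not connected" in proc.stdout + proc.stderr, \
    "unexpected error: %s" % (proc.stdout + proc.stderr)
```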
Change-Id: Ic0a083d5cb150275e927723d960e89fe1a5528fb
Signed-off-by: nchilaka <nchilaka@redhat.com>
Add extra time for beaker machines to validate
the testcases.
For test_rebalance_spurious.py, added cleanup in
teardown because the fix-layout patch is still not
merged.
Change-Id: I7ee8324ff136bbdb74600b730b4b802d86116427
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Testcase steps:
1. Create a volume and mount it.
2. Create a directory on the mount and check whether all the bricks
have the same gfid.
3. Delete the gfid attr from all but one of the backend bricks.
4. Do a lookup from the mount.
5. Check whether all the bricks have the same gfid assigned (the
gfid check is sketched below).
Failing in CentOS-CI due to the following bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1696075
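The gfid comparison can be sketched with getfattr against each brick's backend path (paths and directory name hypothetical):
```
import subprocess

def gfid(brick_path, dirname):
    # Read the raw trusted.gfid xattr bytes from a brick backend path.
    return subprocess.check_output(
        ["getfattr", "-n", "trusted.gfid", "--only-values",
         "%s/%s" % (brick_path, dirname)])

bricks = ["/bricks/brick0/testvol", "/bricks/brick1/testvol"]
gfids = {gfid(b, "dir1") for b in bricks}
assert len(gfids) == 1, "gfid mismatch across bricks"
```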
Change-Id: I4eebc247b15c488cfa24599e0afec2fa5671656f
Co-authored-by: Anees Patel <anepatel@redhat.com>
Signed-off-by: Anees Patel <anepatel@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Problem:
The current timeout for reboot given in
test_heal_full_node_reboot is about 350 seconds,
which works with most hardware configurations.
However, when the reboot is done on slower systems
which take time to come up, this logic fails, causing
this testcase and the testcases that follow it to fail.
Solution:
Change the timeout for reboot from 350 to 700. This
wouldn't affect the testcase's performance on good
hardware configurations, as the timeout is a maximum:
if the node comes up before it expires, the wait
exits early anyway (see the sketch below).
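The wait is a poll against a ceiling, so raising it only matters on slow nodes; a generic sketch of the pattern (names hypothetical, not the test's actual helper):
```
import time

def wait_for_node(is_node_up, timeout=700, interval=10):
    # Poll until the node responds or the max timeout expires;
    # fast nodes return long before the 700s ceiling is reached.
    end = time.time() + timeout
    while time.time() < end:
        if is_node_up():
            return True
        time.sleep(interval)
    return False
```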
Change-Id: I60d05236e8b08ba7d0fec29657a93f2ae53404d4
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to the commit message of patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/24140/
Change-Id: I25d30f7bdb20f0825709c4c852140e1906870ce7
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to the commit message of patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/24140/
Change-Id: Ib357d5690bb28131d788073b80a088647167fe80
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to the commit message of patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/24140/
Change-Id: Ic0b3b1333ac7b1ae02f701943d49510e6d46c259
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The sys library was added to all the testcases to fetch
`sys.version_info.major`, which gives the major version
of the Python that glusto and glusto-tests were installed with,
and to run the I/O script, i.e. file_dir_ops.py, with that
version of Python. This creates a problem: older jobs
running on older platforms won't run the way they used to.
For example, if the older platform has python2 by default and
we run the tests from a slave which
has python3, they'll fail, and vice versa.
The problem is introduced by the code below:
```
cmd = ("/usr/bin/env python%d %s create_deep_dirs_with_files "
"--dirname-start-num 10 --dir-depth 1 --dir-length 1 "
"--max-num-of-dirs 1 --num-of-files 5 %s" % (
sys.version_info.major, self.script_upload_path,
self.mounts[0].mountpoint))
```
The solution to this problem is to change `python%d`
to `python`, which enables the code to run with
whatever version of Python is available on the client.
This lets us run any version of the framework
on both older and newer platforms, as sketched below:
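With the fix applied, the command construction no longer pins the interpreter version (reconstructed from the description above; the exact diff may differ slightly):
```
cmd = ("/usr/bin/env python %s create_deep_dirs_with_files "
       "--dirname-start-num 10 --dir-depth 1 --dir-length 1 "
       "--max-num-of-dirs 1 --num-of-files 5 %s" % (
           self.script_upload_path,
           self.mounts[0].mountpoint))
```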
Change-Id: I7c8200a7578f03c482f0c6a91832b8c0fdb33e77
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The script sometimes fails at expand-volume with an
"Already part of volume" error; this patch fixes that.
Change-Id: I628bbdb268e5a42112f68d9148da6bdb775acd26
Co-authored-by: Prasad Desala <tdesala@redhat.com>,
Milind Waykole <milindwaykole96@gmail.com>
Signed-off-by: Prasad Desala <tdesala@redhat.com>
Signed-off-by: Milind Waykole <milindwaykole96@gmail.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Problem:
The value of performance.io-cache was ON by default
before Gluster 6.0; in Gluster 6.0 the default was changed to OFF.
Solution:
Adding code to check the Gluster version and then check
whether the option is ON or OFF accordingly, as shown below:
```
if get_gluster_version(self.mnode) >= 6.0:
self.assertIn("off", ret['performance.io-cache'],
"io-cache value is not correct")
else:
self.assertIn("on", ret['performance.io-cache'],
"io-cache value is not correct")
```
CentOS-CI failure analysis:
This patch is expected to fail because, when we run `gluster --version`
on nightly builds, the output returned is as shown below:
```
# gluster --version
glusterfs 20200220.a0e0890
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
```
This output can't be parsed by the get_gluster_version() function,
which is used in this patch to get the Gluster version and check
performance.io-cache's default value accordingly (a sketch of the
failure mode follows below).
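A minimal sketch of the failure mode, assuming version parsing along these lines (not the actual get_gluster_version() implementation):
```
def parse_gluster_version(output):
    # Take the second token of the first line, e.g. "glusterfs 6.0" -> 6.0
    token = output.splitlines()[0].split()[1]
    return float(token)

print(parse_gluster_version("glusterfs 6.0"))  # 6.0
try:
    parse_gluster_version("glusterfs 20200220.a0e0890")  # nightly build
except ValueError as err:
    print("unparsable nightly version string:", err)
```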
Change-Id: I00b652a9d5747cbf3006825bb17b9ca2f69cb9cd
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Moved steps from tearDownClass to tearDown, removed the
unwanted tearDownClass, and rectified the testcase
failing at wait-for-io-to-complete by removing that step,
because after validate_io the sub process terminates and
the subsequent wait results in failure.
Change-Id: I2eaf05680b817b681aff8b48683fc9dac88896b0
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Change-Id: I0fa6bbacda16fb97d3454a8510a937442b5755a4
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Change-Id: I04f7b7c894d48d0188379028412d9c6b48eac210
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
and used wait-for-peer-to-connect and
wait-for-glusterd-to-start functions in testcases;
added fixes to check that files exist;
increased timeout values for failure cases
Change-Id: I9d5692f635ed324ffe7dac9944ec9b8f3b933fd1
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
As distributed-arbiter and arbiter weren't present before patch [1],
arbiter and distributed-arbiter volumes were created by the hack shown
below, where a distributed-replicated or replicated volume's
configuration was modified to create an arbiter volume.
```
@runs_on([['replicated', 'distributed-replicated'],
['glusterfs', 'nfs']])
class TestSelfHeal(GlusterBaseClass):
.................
@classmethod
def setUpClass(cls):
...............
# Overriding the volume type to specifically test the volume
# type Change from distributed-replicated to arbiter
if cls.volume_type == "distributed-replicated":
cls.volume['voltype'] = { 'type': 'distributed-replicated',
'dist_count': 2,
'replica_count': 3,
'arbiter_count': 1,
'transport': 'tcp'}
```
This code now needs to be updated: remove the code which
was used to override the volume configuration and
just add arbiter or distributed-arbiter in `@runs_on([], [])`,
as shown below:
```
@runs_on([['replicated', 'distributed-arbiter'],
['glusterfs', 'nfs']])
class TestSelfHeal(GlusterBaseClass):
```
Links:
[1] https://github.com/gluster/glusto-tests/commit/08b727842bc66603e3b8d1160ee4b15051b0cd20
Change-Id: I4c44c2f3506bd0183fd991354fb723f8ec235a4b
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I2ba0c81dad41bdac704007bd1780b8a98cb50358
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Testcase steps:
1. Set the volume options (sketched after this list)
"metadata-self-heal": "off"
"entry-self-heal": "off"
"data-self-heal": "off"
"self-heal-daemon": "off"
2. Bring down all brick processes from the selected set
3. Create IO (50k files)
4. Get arequal before getting bricks online
5. Bring bricks online
6. Set the volume option
"self-heal-daemon": "on"
7. Check for daemons
8. Start healing
9. Check if heal is completed
10. Check for split-brain
11. Get arequal after getting bricks online and compare with
arequal before getting bricks online
12. Add bricks to volume
13. Do rebalance and wait for it to complete
14. Get arequal after adding bricks and compare with
arequal after getting bricks online
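A sketch of step 1 with the gluster CLI (volume name hypothetical; the test itself goes through the glustolibs volume-options helpers):
```
import subprocess

options = {"metadata-self-heal": "off",
           "entry-self-heal": "off",
           "data-self-heal": "off",
           "self-heal-daemon": "off"}
for opt, val in options.items():
    # gluster volume set <VOLNAME> <OPTION> <VALUE>
    subprocess.check_call(["gluster", "volume", "set", "testvol", opt, val])
```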
Change-Id: I1598c4d6cf98ce99249e85fc377b9db84886f284
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Added steps to reset volume and resolved teardown class
cleanup failures.
Change-Id: I06b0ed8810c9b064fd2ee7c0bfd261928d8c07db
used library functions to wait for glusterd to start
and for peers to connect, and made modifications in the teardown
part, rectifying statements to use correct values
Change-Id: I40b4362ae1491acf75681c7623c16c53213bb1b9
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Added the wait_for_io_to_complete function to testcases;
used the wait_for_glusterd function
and the wait_for_peer_connect function
Change-Id: I4811848aad8cca4198cc93d8e200dfc47ae7ac9b
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Change-Id: I1eacfd74c730d28e36bb8f7e3a1f574edc3d13c7
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Testcase 1: Test entry transaction crash consistency : create
- Create IO
- Calculate arequal before creating snapshot
- Create snapshot
- Modify the data
- Stop the volume
- Restore snapshot
- Start the volume
- Get arequal after restoring snapshot
- Compare arequals
Testcase 2: Test entry transaction crash consistency : delete
- Create IO of 50 files
- Delete 20 files
- Calculate arequal before creating snapshot
- Create snapshot
- Delete 20 files more
- Stop the volume
- Restore snapshot
- Start the volume
- Get arequal after restoring snapshot
- Compare arequals
Testcase 3: Test entry transaction crash consistency : rename
- Create IO of 50 files
- Rename 20 files
- Calculate arequal before creating snapshot
- Create snapshot
- Rename 20 files more
- Stop the volume
- Restore snapshot
- Start the volume
- Get arequal after restoring snapshot
- Compare arequals
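All three testcases share the same stop, restore, start core; a minimal CLI sketch (volume and snapshot names hypothetical; --mode=script suppresses the interactive confirmations):
```
import subprocess

def run(cmd):
    print("+", cmd)
    subprocess.check_call(cmd, shell=True)

run("gluster --mode=script snapshot create snap1 testvol")  # before modifying data
# ... modify / delete / rename files on the mount ...
run("gluster --mode=script volume stop testvol")     # volume must be stopped
run("gluster --mode=script snapshot restore snap1")  # before a restore
run("gluster --mode=script volume start testvol")    # then compare arequals
```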
Change-Id: I7cb9182f91ae50c47d5ae9b3f8031413b2bbfbbf
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Verifying the glusterfind functionality with deletion of files.
* Create a volume
* Create a session on the volume
* Create various files from mount point
* Perform glusterfind pre
* Perform glusterfind post
* Check the contents of outfile
* Modify the contents of the files from mount point
* Perform glusterfind pre
* Perform glusterfind post
* Check the contents of outfile
The modified files must be listed in the outfile (the session flow
is sketched below).
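A minimal sketch of the glusterfind session flow (session, volume, and outfile names hypothetical):
```
import subprocess

def run(cmd):
    print("+", cmd)
    subprocess.check_call(cmd, shell=True)

run("glusterfind create sess1 testvol")            # create the session
run("glusterfind pre sess1 testvol /tmp/outfile")  # collect changes so far
with open("/tmp/outfile") as f:
    print(f.read())                                # one entry per changed file
run("glusterfind post sess1 testvol")              # advance the session checkpoint
```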
Change-Id: Ie696e194364b2b86a7ceb5fb6e10066ecc669577
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
There can be a case where, due to a failed tearDown of a previous
test case, the snapshot creation might fail because a redundant
snapshot of the same name is present.
This fix includes the changed snapshot names along with tearDown
for deleting the snapshots created in the test case
"test_snap_info_glusterd_restart.py".
Changing the names of the test case function to something
relevant to the test case.
Making use of the method "wait_for_glusterd_to_start" in places
where glusterd is restarted.
Change-Id: I6cb50dc84d306194b6bd363daf7ae0ebd6bb12ee
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
There can be a case where due to a failed tearDown of a previous
test case, the snapshot creation might fail. This can happen when
a redundant snapshot with the same name is present.
This fix includes the changed snapshot names along with tearDown
for deleting the snapshots created in the test case
"test_snap_list_after_restart.py".
This fix also makes use of the method 'wait_for_glusterd_to_start'
wherever glusterd is restarted.
Adding a check for validation of snapshots using snapname.
Change-Id: If8b48a12bd067ad54dba742eeb88444beaf5f153
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Added a library function to clean up the LVs created after snapshot
clone and made the corresponding modifications to the cleanup.
Change-Id: I71a437bf99eac1170895510475ddb30748771672
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Change-Id: Id0cdda175865c84bef917211560acee8ea10fe7b
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
available space
Testcase:
- note the current available space on the mount
- create 1M file on the mount
- note the current available space on the mountpoint and compare
with space before creation
- remove the file
- note the current available space on the mountpoint and compare
with space before creation
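A minimal sketch of the space check (mountpoint path hypothetical; the test performs this on the glusterfs mount):
```
import os

def available_bytes(path):
    # f_bavail * f_frsize = bytes available to unprivileged users
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize

before = available_bytes("/mnt/testvol")    # hypothetical mountpoint
# ... create a 1M file on the mount, then:
after = available_bytes("/mnt/testvol")
print("consumed:", before - after, "bytes")
```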
Change-Id: Iff017039d1888d03f067ee2a9f26aff327bd4059
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Description: This test creates directories and subdirectories
and their copies at mount point and checks for layout and
directory information at all subvols.
Change-Id: Iabce046e7ce63c5428061bcefb98d06359dac8bd
Co-authored-by: Kartik_Burmee <kburmee@redhat.com>
Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The test case 'test_256_snapshots.py' contains hard-coded commands
which can be run using the existing library functions from the
snap_ops.py library, such as:
* snap_create
* set_snap_config
This patch includes usage of the appropriate functions in place of
the hard-coded commands, along with various cosmetic changes (a
usage sketch follows below).
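A hedged sketch of the replacement; the snap_create signature here is assumed from typical glustolibs usage, not verified against the library:
```
from glustolibs.gluster.snap_ops import snap_create  # assumed helper

mnode, volname = "server1.example.com", "testvol"  # hypothetical
# Before: g.run(mnode, "gluster snapshot create snap1 %s" % volname)
# After: the library call wraps the same CLI invocation.
ret, out, err = snap_create(mnode, volname, "snap1")
assert ret == 0, "snapshot creation failed: %s" % err
```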
Change-Id: I6da7444d903efd1be6582c8fea037a5c4fddc111
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Summary:
Verify that file/directory creation or deletion on the volume
doesn't leave behind any broken symlinks on the bricks.
Steps:
1. Create 10 files and directories.
2. Check if broken symlinks are present on the brick path (see the
sketch after this list).
3. Remove files and directories.
4. Check if broken symlinks are present on the brick path.
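A minimal sketch of the broken-symlink check (brick path hypothetical):
```
import subprocess

def broken_symlinks(brick_path):
    # find -xtype l matches symlinks whose target does not resolve
    out = subprocess.check_output(
        ["find", brick_path, "-xtype", "l"], text=True)
    return out.splitlines()

print(broken_symlinks("/bricks/brick0/testvol"))  # expect an empty list
```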
Change-Id: If65a5eaa0fe0e1b96f002496744580e0ede5a4af
Co-authored-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
library files
Moved the waiters out of the testcases and added them as functions
in the gluster_init and peer_ops libraries (a generic sketch of the
pattern follows below).
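A generic sketch of the waiter pattern these library functions follow (the function name here is illustrative, not the library's API):
```
import subprocess
import time

def wait_for_peers_to_connect(timeout=300, interval=5):
    # Poll `gluster peer status` until no peer reports Disconnected
    # or the timeout expires.
    end = time.time() + timeout
    while time.time() < end:
        out = subprocess.check_output("gluster peer status",
                                      shell=True, text=True)
        if "Disconnected" not in out:
            return True
        time.sleep(interval)
    return False
```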
Change-Id: I5ab1e42a5a0366fadb399789da1c156d8d96ec18
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
'ExecutionError' must not be used inside the test case function.
Replacing 'ExecutionError' raised in test case function with
assert statements.
Changed the name of the snapshots being created to something
specific to the test case to avoid redundancy with the other
test cases.
Segregating the setUp and tearDown functions.
Adding the description for the test case function.
Changing the test case name to something specific to the
component (snapshot).
Fixing the typo errors and various cosmetic changes.
Change-Id: I24f271cdb7a180ecde8efa21185b4a8f807be203
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Adding params in gluster_base_class.py to store the value
of the root user's password and the non-root user's name, group
and password. sync_method will be mentioned inside the
testcase, as there are testcases which are specific to only
one of the sync types and it doesn't make sense to add
it in the config file. Instead, we'll write testcases as shown
below:
```
class georeptestcase(baseclass):

    def run_chmod_with_changelog_crawl(self, sync_type):
        # Steps of the testcase
        pass

    def test_chmod_with_changelog_crawl_rsync(self):
        # Call the shared function with rsync as sync_type.
        self.run_chmod_with_changelog_crawl('rsync')

    def test_chmod_with_changelog_crawl_tarssh(self):
        # Call the shared function with tarssh as sync_type.
        self.run_chmod_with_changelog_crawl('tarssh')
```
Change-Id: Ie65f542e76bfbee89ac2914bdcd086e1bd08dfdb
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: Idcc40442869cb3e44873625887409592d9e0710d
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
When the quota limit is reached, if an add-brick followed by a
rebalance is performed, then the previous quota limits should be
honoured: no new files should be created, because the quota limit
has been reached (the setup is sketched below).
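A minimal sketch of the quota setup with the gluster CLI (volume name, size, and brick paths hypothetical):
```
import subprocess

def run(cmd):
    print("+", cmd)
    subprocess.check_call(cmd, shell=True)

run("gluster volume quota testvol enable")
run("gluster volume quota testvol limit-usage / 10MB")
# ... fill the volume up to the limit, then:
run("gluster volume add-brick testvol "
    "server4:/bricks/brick1/testvol server5:/bricks/brick1/testvol")
run("gluster volume rebalance testvol start")
# Writes beyond the limit must still fail after the rebalance.
```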
Change-Id: I37da02ce292e6a7bc614cf888aac69ea109f84f5
Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
Co-authored-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
Checking the status of snapshot information
before and after restarting glusterd.
Test case:
1. Create volume
2. Create two snapshots with descriptions
3. Check snapshot status information with snapname,
with volume name, and without snapname/volname.
4. Restart glusterd on all nodes
5. Follow step 3 again and validate the snapshots
History of the patch:
The testcase was failing in CentOS-CI due to bug [1]; however, the
fix for this bug was deferred. After updating the code with the
workaround mentioned below, it was observed that glusterd restart
was failing due to bug [2]. Now that both bugs are fixed, this
patch is safe to merge.
Workaround:
The only possible workaround for this is to
modify the function get_snap_info_by_volname()
to not use the --xml option and directly run the command, which
will dump the output as a string that can be used directly
in the testcase (sketched below). This modification in the
library function will not impact any other testcase.
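A hedged sketch of the workaround's shape (not the actual library code; the underlying CLI command is real):
```
import subprocess

def get_snap_info_by_volname(volname):
    # Run without --xml and hand the raw string output to the testcase.
    return subprocess.check_output(
        "gluster snapshot info volume %s" % volname,
        shell=True, text=True)

print(get_snap_info_by_volname("testvol"))  # hypothetical volume name
```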
Links:
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1590797
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1782200
Change-Id: I26ac0aaa5f6c8849fd9de41f506d6d13fc55e166
Co-authored-by: srivickynesh <sselvan@redhat.com>
Signed-off-by: srivickynesh <sselvan@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I33e75fe773ee26a2d205f5ebd29198968bfe6c59
Signed-off-by: ubansal <ubansal@redhat.com>
Change-Id: I36949296607b09e66ce0a56029359481c0b76b8b
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I0c20652d598c198b58871724e354f2fe803c1243
Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
Please refer to the commit message of the below patch:
https://review.gluster.org/#/c/glusto-tests/+/23902/
Change-Id: I1df0324dac2da5aad4064cc72ef77dcb5bf67e4f
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to the commit message of the below patch:
https://review.gluster.org/#/c/glusto-tests/+/23902/
Change-Id: Icf32bb20b7eaf2eabb07b59be813997a28872565
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to the commit message of the below patch:
https://review.gluster.org/#/c/glusto-tests/+/23902/
Change-Id: I0d2eeb978c6757d6d910ebfe21b07811bf74b80a
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Removing script_local_path, as both script_local_path and
cls.script_upload_path hold the same value, which makes
each script slower. This will help decrease the execution
time of the test suite.
PoC:
$cat test.py
a = ("/usr/share/glustolibs/io/scripts/"
"file_dir_ops.py")
b = ("/usr/share/glustolibs/io/scripts/"
"file_dir_ops.py")
$time python test.py
real 0m0.063s
user 0m0.039s
sys 0m0.019s
$cat test.py
a = ("/usr/share/glustolibs/io/scripts/"
"file_dir_ops.py")
$time python test.py
real 0m0.013s
user 0m0.009s
sys 0m0.003s
Code changes needed:
From:
script_local_path = ("/usr/share/glustolibs/io/scripts/"
"file_dir_ops.py")
cls.script_upload_path = ("/usr/share/glustolibs/io/scripts/"
"file_dir_ops.py")
ret = upload_scripts(cls.clients, script_local_path)
To:
cls.script_upload_path = ("/usr/share/glustolibs/io/scripts/"
"file_dir_ops.py")
ret = upload_scripts(cls.clients, cls.script_upload_path)
Change-Id: I7908b3b418bbc929b7cc3ff81e3675310eecdbeb
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I2e85670e50e3dab8727295c34aa6ec4f1326c19d
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
Test case steps:
1. Set cluster.brick-multiplex to enabled.
2. Create and start 2 volumes of type 1x3 and 2x3.
3. Check if cluster.brick-multiplex is enabled.
4. Reset the cluster using "gluster v reset all".
5. Check if cluster.brick-multiplex is disabled.
6. Create a new volume of type 2x3.
7. Set cluster.brick-multiplex to enabled.
8. Stop and start all three volumes.
9. Check if the pids match and check if more
than one pid of glusterfsd is present.
Additional library fix:
Changing the command in check_brick_pid_matches_glusterfsd_pid()
as it won't work when brick-mux is enabled.
From:
cmd = ("ps -eaf | grep glusterfsd |
" "grep %s.%s | grep -v 'grep %s.%s'"
% (volname, brick_node, volname, brick_node))
To:
cmd = "pidof glusterfsd"
Change-Id: If7bdde13071732b176a0a2289635319571872e47
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I1395e14d8d0aa0cc6097e51c64262fb481f36f05
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
Change-Id: Ib414b8496ca65a48bbe42936e32a863c9c1072e4
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
Change-Id: I44fe85519c8fd381064670e54dac8736107b0928
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>