| Commit message | Author | Age | Files | Lines |
mount_type is 'glusterfs'
Change-Id: I00f0b5edfea0e09381d1404a0cfd16396a8fbde9
Signed-off-by: ShwethaHP <spandura@redhat.com>
In this test case,
1. Volume creation on a root brick path, without force and with force, is validated.
2. Deleting a brick manually and then starting the volume with force should not
bring that brick online; this is validated.
3. After clearing all attributes, we should be able to create another volume
with the previously used bricks; this is validated.
Change-Id: I7fbf241c7e0fee276ff5f68b47a7a89e928f367c
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Volume delete operation should fail when one of the brick nodes is
down; this is validated in this test case.
Change-Id: I17649de2837f4aee8b50a5fcd760eb9f7c88f3cd
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Change-Id: Ica984b28d2c23e3d0d716d8c0dde6ab6ef69dc8f
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
command a limited number of times while IO is in progress.
Desc:
Create any type of volume and mount it. Once the
volume is mounted successfully on the client, start running IOs on the
mount point, then run the "gluster volume status volname inode"
command on all cluster nodes randomly.
The "gluster volume status volname inode" command should not
hang while IOs are in progress.
Then check whether the IOs completed successfully on the mount point.
Check that files on the mount point are listed properly.
Change-Id: I48285ecb25235dadc82e30a750ad303b6e45fffd
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Description:
Function to check whether the given nodes are online or not
Change-Id: I92a40ac1a6bcdbdfb845413902dd0a798c68ed5c
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Replace all the time.sleep() instances with
wait_for_volume_process_to_be_online function
Change-Id: Id7e34979f811bd85f7475748406803026741a3a8
Signed-off-by: ShwethaHP <spandura@redhat.com>
are online
Change-Id: I25424dd182c347a0570713ada8d2de611840fef3
Signed-off-by: ShwethaHP <spandura@redhat.com>
Change-Id: Ifc6aa4d106dadf97e1741ec54a3323ea96e33101
Signed-off-by: ShwethaHP <spandura@redhat.com>
This test does not work on NFS mounts due to bug 1473668. Disable this
test until we can find a workaround that makes it actually green.
Change-Id: Icd93cd796be5e8a72e144ba09e66733d6dcf5913
Description:
Bring the self-heal daemon process offline for the nodes
Change-Id: I55301fb86a97147920991aa4455e8e5d80b1c5c3
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
This function parses the output of the 'gluster vol remove-brick status'
command for the given volume and returns the remove-brick status
output in dictionary format.
Change-Id: I91b0bf9221b2645041abc5bbc016e356d0072b0b
Signed-off-by: Prasad Desala <tdesala@redhat.com>
Change-Id: I7a8465a60c8e5d8f84a647ae65dbabcab2184516
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Change-Id: If303d22f52d31e99676a6e97fbe0b9cb7d5a1234
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
reset force
Description: Create a Distribute volume, then enable bitrot and uss on that
volume, and check whether the bitd, scrub and snapd daemons are running.
Then perform a volume reset; after the reset only the snap
daemon is killed, while the bitd and scrub daemons remain running.
Then perform a volume reset with force; after that, all three
daemons (bitd, scrub, snapd) are killed and no longer running.
Below are the steps performed for this test case:
-> Create a Distributed volume
-> Enable BitD, Scrub and Uss on the volume
-> Verify the BitD, Scrub and Uss daemons are running on every node
-> Reset the volume
-> Verify whether the daemons (BitD, Scrub & Uss) are running
-> Enable Uss on the same volume
-> Reset the volume with force
-> Verify whether all the daemons (BitD, Scrub & Uss) are running
Change-Id: I15d71d1434ec84d80293fda2ab6a8d02a3af5fd6
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Change-Id: Ia0f07590ceb8f680a8e750f793f37a63177904dc
Signed-off-by: Ambarish Soman <asoman@redhat.com>
Added is_snapd_running function to uss_ops.py file
Change-Id: Ib1ff0a16550c94604209588bd5221f9ee6e9db92
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Gave meaningful names to functions
Return -1 if there is no process running
Replaced numbers with words
Reworded the msg "More than 1 or 0 self heal daemon"
Review comments incorporated
Change-Id: If424a6f78536279c178ee45d62099fd8f63421dd
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
Change-Id: Id33b1de7c01cd7774d3c4cce3c40ddfe2dc0d884
Signed-off-by: Prasad Desala <tdesala@redhat.com>
This should fix the test_cvt_test_self_heal_when_io_is_in_progress:
self.assertTrue(ret, "Not all the bricks in list:%s are offline",
> bricks_to_bring_offline)
E TypeError: assertTrue() takes at most 3 arguments (4 given)
Change-Id: Ibfee5253020c2f8927c4fd22a992f7cff7509a5d
Signed-off-by: ShwethaHP <spandura@redhat.com>
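The failure comes from `assertTrue` accepting only an expression and a message, not extra format arguments; the fix is to fold the argument into the message string. A minimal sketch (test name and brick value are illustrative, not the real test):

```python
import unittest


class BrickOfflineTest(unittest.TestCase):
    def test_bricks_offline(self):
        ret = True  # stand-in for the library's offline-check result
        bricks_to_bring_offline = ["node1:/bricks/brick0"]
        # Broken: a third positional argument raises
        #   TypeError: assertTrue() takes at most 3 arguments (4 given)
        # self.assertTrue(ret, "Not all the bricks in list:%s are offline",
        #                 bricks_to_bring_offline)
        # Fixed: format the message before passing it.
        self.assertTrue(ret, "Not all the bricks in list:%s are offline"
                        % bricks_to_bring_offline)
```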
is str, i.e., a single node is passed to the function. If it is
a str, then convert it to a list
Change-Id: I1abacf62fdbe1ec56fe85c86d8e2a323a2c3971b
Signed-off-by: ShwethaHP <spandura@redhat.com>
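The str-to-list normalization described above is a common idiom; a minimal sketch (the helper name and behavior here are hypothetical, not the actual glustolibs signature):

```python
def normalize_nodes(nodes):
    """Accept either a single node (str) or a list of nodes.

    If a single node is passed as a string, wrap it in a
    one-element list so the rest of the code can iterate uniformly.
    """
    if isinstance(nodes, str):
        nodes = [nodes]
    # The real library would now probe each node; here we just
    # return the normalized list to demonstrate the conversion.
    return nodes
```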
glustolibs-gluster libs
Change-Id: I44f559dd0477f97278b1444e7a6d292ca58b99dc
Signed-off-by: ShwethaHP <spandura@redhat.com>
volume type configuration is defined in the config file.
Providing an option in the config file to create the volume with the 'force' option.
Change-Id: Ifeac20685f0949f7573257f30f05df6f79ce1dbd
Signed-off-by: ShwethaHP <spandura@redhat.com>
1. Waiting for all bricks to be online
2. Waiting for all self-heal-daemons to be online
3. Waiting for all volume processes to be online
Change-Id: I01a8711838227eb167e69710ecbd3abd0fecb9e6
Signed-off-by: ShwethaHP <spandura@redhat.com>
Changes incorporated as per comments
Change-Id: I9a21e0350400198806644c07474ae6aeeeae6c58
Signed-off-by: Vijay Avuthu <vavuthu@redhat.com>
1. setup_volume
2. mount_volume
3. setup_volume_and_mount
4. cleanup_volume
5. unmount_volume
6. unmount_and_cleanup_volume
These are added as static methods to give the test developer the
flexibility to call the setup/cleanups or any other function
from anywhere in the test class which inherits GlusterBaseClass.
Also, this removes the need for GlusterVolumeBaseClass and
hence the hardcoding of volume creation and mounting
in setUpClass of GlusterVolumeBaseClass.
This will also help in writing new base classes, for example:
Block, which can have class functions specific to block
and inherit all the functions from GlusterBaseClass.
Change-Id: I3f0709af75e5bb242d265d04ada3a747c155211d
Signed-off-by: ShwethaHP <spandura@redhat.com>
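The pattern can be sketched as follows (method names are taken from the commit; the bodies and the derived class are placeholders, not the real library code):

```python
class GlusterBaseClass(object):
    """Sketch: setup/cleanup steps exposed as static methods so any
    inheriting test class can call them from anywhere it likes."""

    @staticmethod
    def setup_volume():
        return "volume created"

    @staticmethod
    def mount_volume():
        return "volume mounted"

    @staticmethod
    def unmount_and_cleanup_volume():
        return "volume unmounted and cleaned up"


class BlockBaseClass(GlusterBaseClass):
    """A new base class (e.g. for block tests) inherits everything
    and decides for itself when setup/cleanup happens, instead of a
    hardcoded setUpClass in a GlusterVolumeBaseClass."""

    @classmethod
    def prepare(cls):
        return [cls.setup_volume(), cls.mount_volume()]
```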
should be setup.py install
Change-Id: I9a0a03b8ff7ea2d4f42fd845ce84c72f72e984e1
Change-Id: I766c9e1f905728618549a7484a70008d91959538
Signed-off-by: ShwethaHP <spandura@redhat.com>
Change-Id: Id1c7fe24e5cd2931556e2fa6c056a7e8a2a75a5c
Signed-off-by: Devyani Kota <devyanikota@gmail.com>
21:00:43 ./glustolibs-gluster/glustolibs/gluster/lib_utils.py:67:5: E722 do not use bare except'
21:00:43 ./glustolibs-gluster/glustolibs/gluster/lib_utils.py:290:5: E722 do not use bare except'
21:00:43 ./glustolibs-io/shared_files/scripts/file_dir_ops.py:308:13: E722 do not use bare except'
21:00:43 ./glustolibs-io/shared_files/scripts/file_dir_ops.py:316:13: E722 do not use bare except'
Change-Id: Ia0babf3d5a10b19c48425e4fcbcb8e79eea5e391
Signed-off-by: ShwethaHP <spandura@redhat.com>
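E722 flags bare `except:` clauses, which also swallow `SystemExit` and `KeyboardInterrupt`; the usual fix is to name the exception classes being caught. An illustrative sketch (function and path are hypothetical, not the flagged library code):

```python
def read_first_line(path):
    """Return the first line of 'path', or None if it cannot be read.

    Before the fix this used a bare 'except:' (flake8 E722), which
    catches *everything*, including KeyboardInterrupt.
    """
    try:
        with open(path) as handle:
            return handle.readline()
    except (IOError, OSError):  # was: bare 'except:'
        return None
```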
Change-Id: Ie8237836a41d39de0de84b1d4d4b49f9af74b237
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
* Dockerfile: fix gdeploy pkgname and add code coverage tools
* README: update command examples
Change-Id: I73617fbbde6aff34fec730601dcc6baec2b921fa
Signed-off-by: Jonathan Holloway <jholloway@redhat.com>
specified
Change-Id: Icb47d923860bbd2c1c70d2f7c23965a5368afa52
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
logging
the output
Change-Id: I6ff7e363871607c2f9d4272be7198150db59af5d
Signed-off-by: ShwethaHP <spandura@redhat.com>
The volume being mounted will not always be a distributed volume. Providing
an option to mount a local volume as well.
Change-Id: Iadbb596fba7e2a5fa4ba3ba53967961a70d00c8c
Signed-off-by: ShwethaHP <spandura@redhat.com>
after bringing them online.
2) Log all the xml output/errors at the DEBUG log level.
Change-Id: If6bb758ac728f299292def9d72c0ef166a1569ae
Signed-off-by: ShwethaHP <spandura@redhat.com>
1) Self-heal failures: With the recent changes made to gluster for the bug
https://bugzilla.redhat.com/show_bug.cgi?id=1480423, the location of the
brick process pid files changed to /var/run/gluster.
Making the corresponding changes to the glusto-tests libraries.
Moving away from referring to the pid file, to grepping for the process
with the brick name. This fixes the issue.
2) Group options not being set properly: Since we were popping the 'group'
option from the 'options' dictionary (after the group options were set) in
order to set the other volume options, the option got removed from
g.config['gluster']['smb_volume_options'] as well.
Hence perform a deep copy of the dict before modifying it.
Change-Id: I293bf81913857cb0327f30aa1db5aaa9be5a318e
Signed-off-by: ShwethaHP <spandura@redhat.com>
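The group-option bug is the classic shared-dict mutation pattern; a minimal sketch (the config keys follow the commit, the values and function names are illustrative):

```python
import copy

# Simulated g.config fragment, as loaded from the yml file.
CONFIG = {"gluster": {"smb_volume_options": {"group": "metadata-cache",
                                             "cache-samba-metadata": "on"}}}


def pop_group_option_buggy(cfg):
    # 'options' is the *same* dict object held inside the config,
    # so pop() silently mutates the global config too.
    options = cfg["gluster"]["smb_volume_options"]
    return options.pop("group"), options


def pop_group_option_fixed(cfg):
    # Deep-copy first, so only the copy loses the 'group' key.
    options = copy.deepcopy(cfg["gluster"]["smb_volume_options"])
    return options.pop("group"), options
```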
Change-Id: I5ea8b6c0b58fd19c31fc96cc567c53000cd3841b
Signed-off-by: ShwethaHP <spandura@redhat.com>
as 'smb share',
'nfs-ganesha export' in the config yml. Reading the configs in the
gluster_base_class and setting them when exporting the volumes as
'smb share' or 'nfs-ganesha export'.
Recommended options when exporting a volume as 'smb share':
group: "metadata-cache"
cache-samba-metadata: "on"
Change-Id: I86a118c7015eaedd849a0f6e8b613605df5b6c32
Signed-off-by: ShwethaHP <spandura@redhat.com>
* PyYAML is already installed via pip from the glusto image, and
gdeploy installs PyYAML as a dependency from rpm. The resulting
conflict fails the container build.
Dockerfile: changed order of install, added glusto and pylint/pep8
defaults.yml: sets glusto defaults in container image
Change-Id: I47eaa1fbe74cc619043d975034083c5766e6acd1
Signed-off-by: Jonathan Holloway <jholloway@redhat.com>
create hard link, read, copy and delete
Change-Id: If81480450bdaecc59896682d6febb8c6c9463aa7
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
gluster_base_class.py: fix error by adding a check to ensure the gluster entry
exists when no gluster object is defined in the config file
setup.py: version up to 0.21
Change-Id: I37001673c03a32571b78bbd32489fc1992333d73
Signed-off-by: Jonathan Holloway <jholloway@redhat.com>
Providing a section in the config file to set volume options that are
applicable to any volume type created. The gluster base class also reads
the volume_options if provided in the config file and sets them on all the
volumes being created. These volume options will be overwritten if any
volume options are specified while defining the volumes under the 'volumes'
section.
Change-Id: I0003312251b4f8b151c9ba5c71d1b6a8884cc85e
Signed-off-by: ShwethaHP <spandura@redhat.com>
Change-Id: I2747c3770925b8d8f05e10fb7da49d105b7130e6
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Change-Id: If16daf6a0633c4ea30f7fb91b919d2ec42d0ff62
Signed-off-by: ShwethaHP <spandura@redhat.com>
Change-Id: I78bf4eda8c350b22d3a5fabb32b5a20f48ab474e
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
Change-Id: Ic066b8ad452b297a2c48e912883536ce3960c0eb
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
IO is in progress and subdir mounts from client and server side
Change-Id: I80b22e6602bbc18652135211ea08710392c04cb6
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
under gluster in the config file. Currently the config file
has the options under the gluster tag. Hence making the
appropriate changes in runs_on to parse the options
correctly.
Change-Id: Iec95d1884b13c349a36c4324b571a1c0f23c930a
Signed-off-by: ShwethaHP <spandura@redhat.com>
* Dockerfile for creating a glusto-tests container.
* README with some basic information for now.
Change-Id: I10d467371b430489a240e979ebc3893f7cc578dd
Signed-off-by: Jonathan Holloway <jholloway@redhat.com>