path: root/extras
Commit log (each entry: commit message; author, age, files changed, lines removed/added)
* firewall/spec: Create glusterfs firewall service if firewalld installed.  (anand, 2015-09-10; 3 files, -1/+20)

    This creates a glusterfs firewall service during installation. The glusterfs service contains all the default ports which need to be opened. During installation glusterfs.xml is copied into the firewalld service directory (/usr/lib/firewalld/services/).

    Notes:
    1. For bricks: the service opens the 512 ports; if a brick is running outside this range (>49664) then the admin needs to open the port for that brick.
    2. By default this service is not enabled in any zone.

    To enable this service (glusterfs) in the firewall:
    1. Get the active zone(s) on the node:
        firewall-cmd --get-active-zones
    2. Attach this service (glusterfs) to the zone(s):
        firewall-cmd --zone=<zone_name> --add-service=glusterfs              (applies at runtime)
        firewall-cmd --permanent --zone=<zone_name> --add-service=glusterfs  (applies permanently)

    Note: firewall-config can also be used, which gives a GUI to configure the firewall.

    Change-Id: Id97fe620c560fd10599511d751aed11a99ba4da5
    BUG: 1253967
    Signed-off-by: anand <anekkunt@redhat.com>
    Reviewed-on: http://review.gluster.org/11989
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
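    For reference, inspecting what the installed service definition opens and attaching it to a zone looks like this (a sketch; "public" is just an example zone name):

        # Show the ports listed in /usr/lib/firewalld/services/glusterfs.xml
        firewall-cmd --info-service=glusterfs

        # Enable the service in an active zone, both for the running firewall and permanently
        firewall-cmd --zone=public --add-service=glusterfs
        firewall-cmd --permanent --zone=public --add-service=glusterfs
        firewall-cmd --reload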
* bash-completion: Swap order of characters in egrep bracket expression  (Paul Stauffer, 2015-09-08; 1 file, -1/+1)

    With glusterfs-cli installed, bash tab completion fails to work and prints an error message:

        $ gluster vol
        grep: Invalid range end
        ^C

    The problem is caused by the ordering of characters within an egrep bracket expression in the "_gluster_completion()" function defined in /etc/bash_completion.d/gluster. The file contains this line:

        egrep -ao --color=never "([A-Za-z0-9_-.]+)|[[:space:]]+|." | \

    And egrep is interpreting the "-" character in that bracket expression as indicating a range is being requested, "_-.". Fortunately, "_" actually comes after ".", this range expression is invalid, and egrep throws the error instead of silently not doing what was intended.

    The fix is simply to swap the positions of "-" and "." in that bracket expression:

        egrep -ao --color=never "([A-Za-z0-9_.-]+)|[[:space:]]+|." | \

    With this change, bash tab completion works as intended.

    Change-Id: Iace2d57a1122b4530987ba6f5f5558b56b094665
    BUG: 1243108
    Signed-off-by: Paul Stauffer <paulds@horde.com>
    Reviewed-on: http://review.gluster.org/11939
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
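    The difference between the two bracket expressions is easy to reproduce outside the completion function (a minimal sketch; the sample string is arbitrary):

        # '-' between '_' and '.' is read as a range, and "_-." is a descending, invalid range,
        # so this fails with "Invalid range end":
        echo "my-vol_1.0" | egrep -ao "([A-Za-z0-9_-.]+)|[[:space:]]+|."

        # With '-' moved to the end of the bracket expression it is a literal character,
        # and the match succeeds:
        echo "my-vol_1.0" | egrep -ao "([A-Za-z0-9_.-]+)|[[:space:]]+|."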
* all: reduce "inline" usage  (Jeff Darcy, 2015-09-01; 2 files, -2/+2)

    There are three kinds of inline functions: plain inline, extern inline, and static inline. All three have been removed from .c files, except those in "contrib" which aren't our problem. Inlines in .h files, which are overwhelmingly "static inline" already, have generally been left alone. Over time we should be able to "lower" these into .c files, but that has to be done in a case-by-case fashion requiring more manual effort. This part was easy to do automatically without (as far as I can tell) any ill effect.

    In the process, several pieces of dead code were flagged by the compiler, and were removed.

    Change-Id: I56a5e614735c9e0a6ee420dab949eac22e25c155
    BUG: 1245331
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-on: http://review.gluster.org/11769
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
* snapshot/scheduler: Check if volume exists before adding/editing schedules  (Avra Sengupta, 2015-08-23; 1 file, -21/+67)

    Before adding or editing a schedule, check whether the volume name provided in the schedule exists in the cluster or not. Added return code VOLUME_DOES_NOT_EXIST (17) for the same.

    Change-Id: Ia3fe3cc1e1568ddd10f9193bbf40a098f0fe990a
    BUG: 1213349
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/11830
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
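    A shell sketch of the kind of existence check involved (the scheduler itself is a Python script; the volume name is illustrative):

        # "gluster volume info <vol>" exits non-zero when the volume does not exist
        if ! gluster volume info "demovol" > /dev/null 2>&1; then
            echo "volume demovol does not exist in the cluster"
            exit 17   # VOLUME_DOES_NOT_EXIST, the return code added by this change
        fi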
* snapshot/scheduler: Output correction of initialisation  (Avra Sengupta, 2015-08-23; 1 file, -2/+2)

    Change-Id: I4a6e00805da7b254b8b08e7bb142960fb6c64923
    BUG: 1218164
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/11924
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* rpm: include required directory for glusterfind  (Niels de Vos, 2015-08-19; 2 files, -65/+1)

    The directory was marked as %ghost, which causes the following installation failure:

        Error unpacking rpm package glusterfs-server-3.8dev-0.446.git45e13fe.el7.centos.x86_64
        error: unpacking of archive failed on file /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py;5581f20e: cpio: open

    Also, *all* Python files should be part of the RPM package. This includes generated .pyc and .pyo files.

    BUG: 1225465
    Change-Id: Iee74905b101912c4a845257742c470c3fe42ce2a
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/11298
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* build: scripts are not installed with +x bit  (Kaleb S. KEITHLEY, 2015-07-31; 3 files, -3/+3)

    Scripts listed in Makefile.am as foo_DATA should be foo_SCRIPTS to be installed +x.

    Change-Id: Ib9b98efcea968c03b574726bdc0d4f76cdfd1dc1
    BUG: 1225018
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/11806
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
* extra: "enable-shared-storage" key should create shared-storageGaurav Kumar Garg2015-07-131-1/+1
| | | | | | | | | | | | | | | Currently while creating shared storage it accept only "cluster.enable-shared-storage" key. It should also accept "enable-shared-storage" key. Change-Id: I4c68782f4b7927ec8cd725e411b0b9db17d9c48d BUG: 1238224 Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com> Reviewed-on: http://review.gluster.org/11491 Tested-by: NetBSD Build System <jenkins@build.gluster.org> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-by: Avra Sengupta <asengupt@redhat.com>
* NFS-Ganesha : Add-node does not copy "exports" directory correctly  (Meghana M, 2015-07-10; 1 file, -6/+6)

    Add-node logic has to copy the "exports" directory into the new node in the same path. There was an error in copying to the correct path. Fixing it.

    Change-Id: I539d1d525cc5614594b76f2cff1ac93a926712cf
    BUG: 1241895
    Signed-off-by: Meghana M <mmadhusu@redhat.com>
    Reviewed-on: http://review.gluster.org/11618
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* NFS-Ganesha : Export fails on RHEL 7.1  (Meghana M, 2015-07-10; 3 files, -8/+93)

    We grep for the CONFFILE parameter in the "/etc/sysconfig/ganesha" file to find out the path of the ganesha config file. In RHEL 7.1, this parameter does not exist in the file and we can't find out the ganesha config file. Export fails invariably due to this. Changing this pattern to a more generic one and defaulting it to "/etc/ganesha/ganesha.conf".

    Change-Id: I4ac97b1b5ee4f5a7e448a351b7c6270385dffe61
    BUG: 1241480
    Signed-off-by: Meghana M <mmadhusu@redhat.com>
    Reviewed-on: http://review.gluster.org/11594
    Reviewed-by: soumya k <skoduri@redhat.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
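    A sketch of this kind of lookup-with-fallback (not the literal patch; the extraction pattern is illustrative):

        # Take CONFFILE from /etc/sysconfig/ganesha if present, else fall back to the default path
        CONFFILE=$(sed -n 's/^CONFFILE=//p' /etc/sysconfig/ganesha 2>/dev/null | tr -d '"')
        GANESHA_CONF=${CONFFILE:-/etc/ganesha/ganesha.conf}
        echo "using ganesha config: $GANESHA_CONF"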
* common-ha: ganesha-ha.sh status tries to read ganesha-ha.conf  (Kaleb S. KEITHLEY, 2015-07-09; 1 file, -20/+20)

    status doesn't need to read the config.

    Change-Id: Id02252abe52820dbc263f4a880bde72a23b121bd
    BUG: 1241133
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/11581
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: soumya k <skoduri@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* snapshot/scheduler: Use /var/run/gluster/shared_storage/snaps/tmp_file  (Avra Sengupta, 2015-07-08; 1 file, -6/+6)

    for writing data into a tmp file and then making an atomic rename to the required filename. The reason for using this location is that it adheres to the selinux policies.

    Also moving the update of the current_scheduler file under the lock, so as to avoid multiple writes.

    Change-Id: I61e62b5daf6f1bce2319f64f7b1dfb8b93726077
    BUG: 1239269
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/11535
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
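    The write-then-rename idea, sketched in shell (the scheduler does this in Python; the target file name here is illustrative):

        SNAP_DIR=/var/run/gluster/shared_storage/snaps
        TMP=$(mktemp "$SNAP_DIR/tmp_file.XXXXXX")
        printf '%s\n' "new scheduler state" > "$TMP"
        # rename(2) within the same filesystem replaces the target atomically,
        # so readers never observe a half-written file
        mv -f "$TMP" "$SNAP_DIR/current_scheduler"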
* glusterd/shared_storage: Use /var/lib/glusterd/ss_brick as shared storage's brick  (Avra Sengupta, 2015-07-06; 1 file, -1/+1)

    The brick path we use to create shared storage is /var/run/gluster/ss_brick. The problem with using this brick path is that /var/run/gluster is a tmpfs and all the brick/shared storage data will be wiped off when the node restarts. Hence using /var/lib/glusterd/ss_brick as the brick path for the shared storage volume, as this brick and the shared storage volume are internally created by us (albeit on the user's request), and contain only internal state data and no user data.

    Change-Id: I808d1aa3e204a5d2022086d23bdbfdd44a2cfb1c
    BUG: 1218573
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/11533
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* NFS-Ganesha : Unexport fails after S31ganesha-start script is run  (Meghana M, 2015-07-02; 2 files, -21/+28)

    The dbus-send script extracts the export ID from the export config file. It expects the export ID to be written in a particular format. The post-phase hook-script created the export file in a different format, and dbus-send never gets the correct export ID because of this. Fixing the issue by replacing the write_conf function in the S31ganesha-start hook-script.

    Also, the NFS-Ganesha service stops when the dbus signal is sent more than once on the same export. Consecutive start/stop operations create problems. Fixing all the issues at once.

    Change-Id: Ibf639eb3556b1e51dd8dcb0c784a6a7f07badb97
    BUG: 1238054
    Signed-off-by: Meghana M <mmadhusu@redhat.com>
    Reviewed-on: http://review.gluster.org/11477
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* common-ha: Fix '/var/lib/nfs/statd/state' path creation  (Soumya Koduri, 2015-06-30; 1 file, -3/+5)

    '/var/lib/nfs/statd/state', which contains the NSM state number, should be a file instead of a directory.

    Change-Id: Id008b4f4dd810fe6d6b4d2599cbc0b488010384b
    BUG: 1237174
    Signed-off-by: Soumya Koduri <skoduri@redhat.com>
    Reviewed-on: http://review.gluster.org/11468
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* NFS-Ganesha : Exporting volume fails  (Meghana M, 2015-06-29; 2 files, -2/+2)

    Due to a recent fix, dbus-send.sh looked for a .export_added file even before it was created. This resulted in the ganesha.enable option failing consistently. Fixing it.

    Change-Id: I26a68578551b6e38e49a9997e6f6f983fd668971
    BUG: 1236561
    Signed-off-by: Meghana M <mmadhusu@redhat.com>
    Reviewed-on: http://review.gluster.org/11456
    Reviewed-by: soumya k <skoduri@redhat.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* NFS-Ganesha : Automatically export volume after volume restart  (Meghana M, 2015-06-26; 1 file, -1/+1)

    The export file was not getting created in the correct path. Fixing the path in this patch.

    Change-Id: If624266e1a934514868affb712514881d10239dc
    BUG: 1231738
    Signed-off-by: Meghana M <mmadhusu@redhat.com>
    Reviewed-on: http://review.gluster.org/11432
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* common-ha: fix delete-node  (Kaleb S. KEITHLEY, 2015-06-26; 1 file, -44/+55)

    N.B. delete-node is designed to be "disruptive": surgically delete a node from the config and stop nfs-ganesha on that node.

    Finish the implementation and fix a few minor issues.

    Change-Id: I964bb72a76ee635b5fc484ec5b541e69eeececcd
    BUG: 1234474
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/11353
    Reviewed-by: soumya k <skoduri@redhat.com>
    Reviewed-by: Meghana M <mmadhusu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* geo-rep: Fix ssh issue in geo-rep  (Kotresh HR, 2015-06-25; 1 file, -0/+7)

    In a geo-rep mountbroker setup, workers fail with 'Permission Denied' even though the public keys are shared to all the slave nodes. The issue is that the selinux context is not being set for .ssh and .ssh/authorized_keys. Doing restorecon on these entries to set the default selinux security context fixes the issue.

    Change-Id: I75e16d22f7a168de6c13b0c7571a7ab75761ae0d
    BUG: 1235359
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/11383
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-by: darshan n <dnarayan@redhat.com>
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
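    What such a restorecon call looks like (a sketch; the mountbroker user's home directory is a made-up example):

        # Reset the SELinux security context of the ssh files to the policy default
        restorecon -v /home/geouser/.ssh
        restorecon -v /home/geouser/.ssh/authorized_keys

        # Or recurse over the whole .ssh directory in one go
        restorecon -Rv /home/geouser/.ssh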
* NFS-Ganesha : Implement refresh-config  (Meghana M, 2015-06-24; 2 files, -15/+89)

    It is important that we give an automatic way of refreshing the config when the user has changed the export file manually. Without this, the user will be forced to restart the server. Implementing refresh_config by utilizing two other scripts that are already in place. Making a few changes to make sure that "--help" doesn't throw unnecessary error messages.

    Change-Id: I6559b89e858526717168ba286e1ff7d9977097c6
    BUG: 1233624
    Signed-off-by: Meghana M <mmadhusu@redhat.com>
    Reviewed-on: http://review.gluster.org/11331
    Reviewed-by: soumya k <skoduri@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* gluster/shared_storage: Add/Remove shared storage from /etc/fstab during enable/disable  (Avra Sengupta, 2015-06-22; 1 file, -0/+5)

    While creating/deleting the shared storage volume, add/remove the shared storage entry from /etc/fstab, so as to ensure availability of the shared storage even after a node reboot.

    Change-Id: Ib9edc8fd02c74a677062ca53ffd10be997b056c6
    BUG: 1231876
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/11272
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
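    The entry added on enable would be roughly of this shape (a sketch; the host name is a placeholder, and the volume name and mount point follow the shared-storage commits further down this log):

        # Append a glusterfs mount for the shared storage meta volume
        echo "node1.example.com:/gluster_shared_storage /var/run/gluster/shared_storage glusterfs defaults 0 0" >> /etc/fstab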
* common-ha : Fixing add node operation  (Meghana M, 2015-06-19; 1 file, -26/+38)

    Resource create for the added node referenced a variable new_node that was never passed. This led to a wrong schema type in the cib file, and hence the added node always ended up in failed state. Also, resources were wrongly created twice and led to more errors. I have fixed the variable name and deleted the repetitive invocation of the recreate-resource function.

    The new node has to be added to the existing ganesha-ha config file for correct behaviour during subsequent add-node operations. This edited file has to be copied to all the other cluster nodes. I have added a fix for this as well.

    Change-Id: Ie55138e2657d22298d89db1c08f2e17930686bd6
    BUG: 1233246
    Signed-off-by: Meghana M <mmadhusu@redhat.com>
    Reviewed-on: http://review.gluster.org/11316
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: soumya k <skoduri@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* common-ha: cluster setup issues on RHEL7  (Kaleb S. KEITHLEY, 2015-06-19; 3 files, -14/+35)

    * use --name on RHEL7 (later versions of pcs drop --name); we guessed wrong and did not get the version that dropped use of the --name option
    * more robust config file param parsing for n/v with ""s in the value, after not sourcing the config file
    * pid file fix. RHEL6 init.d adds -p /var/run/ganesha.nfsd.pid to cmdline options; RHEL7 systemd does not, so it defaults to /var/run/ganesha.pid.

    Change-Id: I575aa13c98f05523cca10c55f2c387200bad3f93
    BUG: 1229948
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/11257
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: soumya k <skoduri@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Meghana M <mmadhusu@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* common-ha: cluster HA setup sometimes fails  (Kaleb S. KEITHLEY, 2015-06-18; 1 file, -4/+22)

    The "s in the VIP_foo="x.x.x.x" lines are problematic now that the config file isn't sourced.

    Revised to also handle names containing '-', e.g. host-11, and FQNs, e.g. host-11.lab.gluster.org.

    Change-Id: I1a52afbf398a024cdff851d0c415d8363f699c90
    BUG: 1232001
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/11281
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Meghana M <mmadhusu@redhat.com>
    Reviewed-by: soumya k <skoduri@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
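    A sketch of quote-tolerant parsing for such lines (illustrative only; the actual parsing in ganesha-ha.sh may differ):

        # Read a VIP_host-11.lab.gluster.org="10.70.1.1" style line without sourcing the file,
        # stripping any surrounding quotes from the value
        name="host-11.lab.gluster.org"
        vip=$(grep "^VIP_${name}=" /etc/ganesha/ganesha-ha.conf | cut -d '=' -f2- | tr -d '"')
        echo "VIP for ${name}: ${vip}"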
* common-ha: cluster HA setup sometimes fails  (Kaleb S. KEITHLEY, 2015-06-16; 1 file, -4/+9)

    The "s in the VIP_foo="x.x.x.x" lines are problematic now that the config file isn't sourced. (A short term work-around is to simply eliminate them.)

    Change-Id: I65f375f2d3b8453adb45dc3dbbc7d3fb07cf85d0
    BUG: 1232001
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/11237
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Meghana M <mmadhusu@redhat.com>
    Reviewed-by: soumya k <skoduri@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* NFS-Ganesha: Automatically export vol that was exported before vol restart  (Meghana M, 2015-06-16; 4 files, -292/+116)

    Consider a volume that is exported via NFS-Ganesha. Stopping this volume will automatically unexport the volume. Starting this volume should automatically export it. Although the logic was already there, there was a bug in it. Fixing the same by introducing a hook script.

    Also, with the new CLI options, the hook script S31ganesha-set.sh is no longer required. Hence, removing the same.

    Adding a comment to tell the user that one of the CLI commands will take a few minutes to complete.

    Change-Id: Ibff769ca04fef0c2a129c83fe31fc9c869350e8d
    BUG: 1231738
    Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
    Reviewed-on: http://review.gluster.org/11247
    Reviewed-by: soumya k <skoduri@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* common-ha : Clean up cib state completely  (Meghana Madhusudhan, 2015-06-15; 1 file, -23/+1)

    Clean up cluster state on all the machines during tear down.

    Change-Id: If9ca65b6ca8790ac97311f33359e28558e90c557
    BUG: 1228415
    Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
    Reviewed-on: http://review.gluster.org/11231
    Reviewed-by: soumya k <skoduri@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* tools/glusterfind: Cleanup glusterfind dir after a volume delete  (Aravinda VK, 2015-06-12; 2 files, -1/+65)

    If the `glusterfind delete` command was not run before volume delete, stale session directories exist in the /var/lib/glusterd/glusterfind directory. These sessions also show up in `glusterfind list`.

    When a volume is deleted, a post hook will be run which cleans up the stale session directories.

    BUG: 1225465
    Change-Id: I54c46c30313e92c1bb4cb07918ed2029b375462c
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/10944
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
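    A sketch of what such a delete post-hook has to do (the per-session/per-volume directory layout here is an assumption, not taken verbatim from the patch):

        # Remove any glusterfind session data recorded for the deleted volume
        VOLNAME="demovol"
        for session in /var/lib/glusterd/glusterfind/*/; do
            rm -rf "${session}${VOLNAME}"
        done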
* common-ha: handle long node names and node names with '-' and '.' in themKaleb S. KEITHLEY2015-06-112-12/+8
| | | | | | | | | | | | | | | sourcing the /etc/ganesha/ganesha-ha.conf file seemed like a simple and elegant solution for reading config params, but bash variable names do not allow '-' and '.' in them. Also fix incorrect path in shared volume mount Change-Id: I40140e5da0903221efd316de94dce40229263e15 BUG: 1225572 Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com> Reviewed-on: http://review.gluster.org/11035 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Niels de Vos <ndevos@redhat.com>
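    The root cause is easy to demonstrate (not from the patch; the host name and address are made up):

        # A per-node assignment whose key contains '-' or '.' is not a valid bash identifier,
        # so sourcing a file containing this line reports an error instead of defining a variable:
        VIP_host-11.lab.gluster.org="10.70.1.1"
        # bash: VIP_host-11.lab.gluster.org=10.70.1.1: command not found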
* common-ha : Clean up persistent cib state  (meghana, 2015-06-11; 1 file, -1/+2)

    Pacemaker saves old configurations in the directory "/var/lib/pacemaker/cib". It's good to clean up this directory during teardown so that old data doesn't show up the next time.

    Change-Id: If0f413ba2da599dd6672b51e60e1d35e674d576b
    BUG: 1228415
    Signed-off-by: meghana <mmadhusu@redhat.com>
    Reviewed-on: http://review.gluster.org/11093
    Reviewed-by: soumya k <skoduri@redhat.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* snapshot/scheduler: Reload /etc/cron.d/glusterfs_snap_cron_tasks when shared storage is available  (Avra Sengupta, 2015-06-11; 1 file, -0/+33)

    If shared storage is not accessible, create a flag in /var/run/gluster/, so that when /etc/cron.d/glusterfs_snap_cron_tasks is available again, the flag will tell us to reload /etc/cron.d/glusterfs_snap_cron_tasks.

    Change-Id: I41b19f57ff0b8f7e0b820eaf592b0fdedb0a5d86
    BUG: 1218573
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/11139
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* snapshot/scheduler: Check if GCRON_TASKS exists before accessing its mtime  (Avra Sengupta, 2015-06-10; 1 file, -0/+4)

    Change-Id: I873c83d21620527b20d7de428d11582c5499d1af
    BUG: 1228613
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/11138
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* snapshot/scheduler: Handle OSError in os callbacks  (Avra Sengupta, 2015-06-09; 1 file, -8/+8)

    Handle OSError, and not IOError, in os module callbacks.

    Change-Id: I2b5bfb629bacbd2d2e410d96034b4e2c11c4931e
    BUG: 1218060
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/11087
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/shared_storage: Provide a volume set option to create and mount the shared storage  (Avra Sengupta, 2015-06-04; 2 files, -1/+125)

    Introducing a global volume set option (cluster.enable-shared-storage) which helps create and set up the shared storage meta volume.

        gluster volume set all cluster.enable-shared-storage enable

    On enabling this option, the system analyzes the number of peers in the cluster which are currently connected, and chooses three such peers (including the node the command is issued from). From these peers a volume (gluster_shared_storage) is created. Depending on the number of peers available, the volume is either a replica 3 volume (if there are 3 connected peers) or a replica 2 volume (if there are 2 connected peers). "/var/run/gluster/ss_brick" serves as the brick path on each node for the shared storage volume. We also mount the shared storage at "/var/run/gluster/shared_storage" on all the nodes in the cluster as part of enabling this option. If there is only one node in the cluster, or only one node is up, then the command will fail.

    Once the volume is created and mounted, the maintenance of the volume, like adding bricks, removing bricks, etc., is expected to be the onus of the user.

    On disabling the option, we provide the user a warning, and on affirmation from the user we stop the shared storage volume and unmount it from all the nodes in the cluster.

        gluster volume set all cluster.enable-shared-storage disable

    Change-Id: Idd92d67b93f444244f99ede9f634ef18d2945dbc
    BUG: 1222013
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/10793
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
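    Enabling the option and verifying the result might look like this (a usage sketch; only the volume set commands come from the change itself, the rest are just common ways to check the outcome):

        # Turn the meta volume on cluster-wide, then confirm it exists and is mounted
        gluster volume set all cluster.enable-shared-storage enable
        gluster volume info gluster_shared_storage
        mount | grep shared_storage

        # Disabling stops and unmounts it again (after a warning and confirmation)
        gluster volume set all cluster.enable-shared-storage disable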
* snapshot/scheduler: Modified main() function to take arguments.  (n Darshan, 2015-06-04; 1 file, -3/+3)

    Modified the main function to take script arguments, so that this script can be used as a module by other programs.

    Change-Id: I902f0bc7ddfbf0d335cc087f51b1a7af4b7157fc
    BUG: 1220670
    Signed-off-by: n Darshan <dnarayan@redhat.com>
    Reviewed-on: http://review.gluster.org/10760
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* build: fix compiling on older distributions  (Niels de Vos, 2015-06-03; 3 files, -3/+5)

    data-tiering is disabled on RHEL-5 because it depends on a too new SQLite version. This change also prevents installing some of the files that are used by geo-replication, which is also not available on RHEL-5; geo-replication depends on a too recent version of Python.

    Due to an older version of OpenSSL, some of the newer functions can not be used. A fallback to previous functions is done. Unfortunately RHEL-5 does not seem to have TLSv1.2 support, so only older versions can be used.

    Change-Id: I672264a673f5432358d2e83b17e2a34efd9fd913
    BUG: 1222317
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/10803
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* snapshot/scheduler: Return proper error code in case of failure  (Avra Sengupta, 2015-06-02; 1 file, -61/+81)

    ENUM                              RETCODE  ERROR
    ----------------------------------------------------------------------
    INTERNAL_ERROR                    2        Internal Error
    SHARED_STORAGE_DIR_DOESNT_EXIST   3        Shared Storage Dir does not exist
    SHARED_STORAGE_NOT_MOUNTED        4        Shared storage is not mounted
    ANOTHER_TRANSACTION_IN_PROGRESS   5        Another transaction is in progress
    INIT_FAILED                       6        Initialisation failed
    SCHEDULING_ALREADY_DISABLED       7        Scheduler is already disabled
    SCHEDULING_ALREADY_ENABLED        8        Scheduler is already enabled
    NODE_NOT_INITIALISED              9        Node not initialised
    ANOTHER_SCHEDULER_ACTIVE          10       Another scheduler is active
    JOB_ALREADY_EXISTS                11       Job already exists
    JOB_NOT_FOUND                     12       Job not found
    INVALID_JOBNAME                   13       Jobname is invalid
    INVALID_VOLNAME                   14       Volname is invalid
    INVALID_SCHEDULE                  15       Schedule is invalid
    INVALID_ARG                       16       Argument is invalid

    Change-Id: Ia1da166659099f4c951fcdb4d755529e41167b80
    BUG: 1218055
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/11005
    Reviewed-by: Aravinda VK <avishwan@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
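    A caller can branch on these exit codes, for example (a sketch; the command name follows the usage quoted later in this log, 0-on-success is assumed, and only a few codes are handled):

        snapshot_scheduler.py enable
        rc=$?
        case $rc in
            0) echo "snapshot scheduler enabled" ;;
            4) echo "shared storage is not mounted" ;;
            8) echo "scheduler is already enabled" ;;
            *) echo "enable failed with return code $rc" ;;
        esac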
* common-ha: fix race between setting grace and virt IP fail-over  (Kaleb S. KEITHLEY, 2015-06-01; 1 file, -10/+14)

    Also send stderr output of `pcs resource {create,delete} $node-dead_ip-1` to /dev/null to avoid flooding the logs.

    Change-Id: I29d526429cc4d7521971cd5e2e69bfb64bfc5ca9
    BUG: 1219485
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/10646
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: soumya k <skoduri@redhat.com>
    Reviewed-by: Meghana M <mmadhusu@redhat.com>
* common-ha: handle long node names and node names with '-' and '.' in themKaleb S. KEITHLEY2015-05-302-35/+59
| | | | | | | | | | | | | | sourcing the /etc/ganesha/ganesha-ha.conf file seemed like a simple and elegant solution for reading config params, but bash variable names do not allow '-' and '.' in them. Change-Id: I0d2e6cb21017472b1e0f764335cf28946cca95f0 BUG: 1225572 Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com> Reviewed-on: http://review.gluster.org/10952 Tested-by: NetBSD Build System Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Niels de Vos <ndevos@redhat.com>
* scripts: Added script stop-all-gluster-processes.sh in rpm  (Aravinda VK, 2015-05-29; 1 file, -2/+3)

    This script was not included as part of the rpm. Fixed now.

    BUG: 1204641
    Change-Id: I5e559b187253cc2f4f8ea7cf8ec56a32802e5ab2
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/10931
    Reviewed-by: Kotresh HR <khiremat@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* snapshot/scheduler: Do not enable scheduler if another scheduler is running  (Avra Sengupta, 2015-05-28; 1 file, -2/+85)

    Check if another snapshot scheduler is running before enabling the scheduler.

    Also introducing a hidden option, disable_force. "snapshot_scheduler.py disable_force" will disable the cli snapshot scheduler from any node, even though the node has not been initialised for the scheduler, as long as the shared storage is mounted.

    This option is hidden, because we don't want to encourage users to use all commands from nodes that are not initialised.

    Change-Id: I7ad941fbbab834225a36e740c61f8e740813e7c8
    BUG: 1219442
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/10641
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Tested-by: NetBSD Build System
    Reviewed-by: Kaushal M <kaushal@redhat.com>
* NFS-Ganesha : Disable ACL by default  (Meghana Madhusudhan, 2015-05-14; 1 file, -0/+1)

    ACLs need to be disabled by default. To enable ACLs, the user has to change the export file manually, set Disable_ACL=False and run ganesha-ha.sh --refresh-config. Changing the default export file to accommodate these changes.

    Change-Id: If3fe0f237344c594a43ad6fc5d351bd391ae5256
    BUG: 1221131
    Signed-off-by: Meghana M <mmadhusu@redhat.com>
    Reviewed-on: http://review.gluster.org/10769
    Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
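    Re-enabling ACLs for one export could then go along these lines (a sketch; the export file path is hypothetical and the sed is only one way to flip the setting):

        # Flip Disable_ACL in the volume's export block, then push the change to the running servers
        sed -i 's/Disable_ACL *= *.*/Disable_ACL = False;/' /etc/ganesha/exports/export.demovol.conf
        ganesha-ha.sh --refresh-config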
* NFS-Ganesha : Improved sed expression in cleanup  (Meghana Madhusudhan, 2015-05-07; 1 file, -1/+1)

    The global config file should not be emptied when tear down is called. Improving the sed expression to delete the ".conf" files only.

    Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
    Change-Id: Ida3d303f629cb512c02dadacce1ec7e5f07db018
    BUG: 1218854
    Reviewed-on: http://review.gluster.org/10630
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: soumya k <skoduri@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* NFS-Ganesha: Add-node and delete-node changes in HA implementation  (Meghana Madhusudhan, 2015-05-07; 1 file, -4/+60)

    NFS-Ganesha related config files have to be copied over to the new node and the NFS-Ganesha service has to be started there. Similarly, the NFS-Ganesha service has to be stopped when a node is deleted from the HA cluster.

    Change-Id: Ia38e72cac86713fe23b7d1b829a256637a9ca796
    BUG: 1212816
    Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
    Reviewed-on: http://review.gluster.org/10596
    Reviewed-by: soumya k <skoduri@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* NFS-Ganesha : Do not empty global config file when cluster is torn  (Meghana Madhusudhan, 2015-05-06; 1 file, -1/+1)

    The global config file will need new blocks to support client lock recovery. The current cleanup function empties the entire file. Deleting only "include" lines in the config file.

    Change-Id: I21f09e30a738d2ba01861ce480ecf906667d887b
    BUG: 1218854
    Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
    Reviewed-on: http://review.gluster.org/10592
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: soumya k <skoduri@redhat.com>
    Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* Cache-invalidation : set to on/off depending on ganesha.enable value  (Meghana Madhusudhan, 2015-05-05; 1 file, -0/+3)

    Multi-head NFS-Ganesha servers need upcall (cache-invalidation) support to notify them in case of any changes to the files in the backend. Hence, the upcall xlator option "features.cache-invalidation" needs to be enabled when ganesha.enable is set to 'on'. Similarly, this feature needs to be disabled when ganesha.enable is set to 'off'.

    Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
    Change-Id: Ifdd1d50e48a2bd2a388f73c0b9e318c6092ac190
    BUG: 1213752
    Reviewed-on: http://review.gluster.org/10581
    Reviewed-by: soumya k <skoduri@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
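    In CLI terms, the coupling described above amounts to something like this (illustrative only; the second command is the effect the change produces implicitly, not something the user has to run):

        gluster volume set demovol ganesha.enable on
        # ...now implicitly also results in:
        gluster volume set demovol features.cache-invalidation on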
* snapshot/scheduler: Use os.path.realpath() for path validation  (Avra Sengupta, 2015-05-05; 1 file, -1/+1)

    In order to accommodate systems where /var/run is a symlink to /run, we are using os.path.realpath() for path validations.

    Change-Id: I4eae536867ec6c88f92c762b92f5c1966b622bde
    BUG: 1216931
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/10464
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* snapshot/scheduler: Use shutil.move instead of os.rename()  (Avra Sengupta, 2015-04-30; 1 file, -2/+3)

    os.rename is a wrapper on top of the rename function, which fails with "Invalid cross-device link" if /tmp is a tmpfs. Hence using shutil.move.

    Change-Id: Ia026d2a810b725ccd398db895e612c53bc6a2f95
    BUG: 1214574
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/10347
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Aravinda VK <avishwan@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* common-ha: delete-node implementation  (Kaleb S. KEITHLEY, 2015-04-29; 3 files, -108/+211)

    Omnibus patch consisting of:
    + completed implementation of delete-node (BZ 1212823)
    + teardown leaves /var/lib/nfs symlink (BZ 1210712)
    + setup copy config, teardown clean /etc/cluster (BZ 1212823)

    Setup for copy config, teardown clean /etc/cluster:
    1. On one (primary) node in the cluster, run:
       `ssh-keygen -f /var/lib/glusterd/nfs/secret.pem`
       Press Enter twice to avoid a passphrase.
    2. Deploy the pubkey to ~root/.ssh/authorized_keys on _all_ nodes, run:
       `ssh-copy-id -i /var/lib/glusterd/nfs/secret.pem.pub root@$node`
    3. Copy the keys to _all_ nodes in the cluster, run:
       `scp /var/lib/glusterd/nfs/secret.* $node:/var/lib/glusterd/nfs/`

    N.B. this allows setup, teardown, etc., to be run on any node.

    Change-Id: I9fcd3a57073ead24cd2d0ef0ee7a67c524f3d4b0
    BUG: 1213933
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/10234
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: soumya k <skoduri@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* Fix case mistake for MKDIR_P in Makefiles  (Emmanuel Dreyfus, 2015-04-20; 3 files, -5/+5)

    Some makefiles used $(mkdir_p) instead of the correctly defined $(MKDIR_P). The former is substituted as an empty string, leading to possible failures depending on the user's shell tolerance. NetBSD's /bin/sh seems to choke more easily than Linux's /bin/bash, but even if the latter does not fail, it does not create the intended directories anyway.

    BUG: 1129939
    Change-Id: I8caed4000f3c91cb3a685453848fb854793945ed
    Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
    Reviewed-on: http://review.gluster.org/10276
    Tested-by: NetBSD Build System
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>