path: root/extras
Commit message | Author | Age | Files | Lines
* gcron: create the lockfile if it is missing | Niels de Vos | 2018-06-25 | 1 | -1/+1
    The lockfile for the job may not exist yet. If that is the case, it should be
    created the first time it is accessed.
    Change-Id: I4da2b3ecdb79cc63ed82cc7bfa026c8f08d4d043 Fixes: bz#1559829 Signed-off-by: Niels de Vos <ndevos@redhat.com>
    (cherry picked from commit 7005b1a336e483ec150c2f924a618dcfe197db0b)
* gcron: catch OSError as well as IOError | Niels de Vos | 2018-06-18 | 1 | -2/+2
    In case os.open() fails because the file does not exist, an OSError is raised.
    To prevent the script from aborting uncleanly, catch the OSError in addition
    to the IOError.
    Change-Id: I48e5b23e17d63639cc33db51b4229249a9887880 Fixes: bz#1559829 Signed-off-by: Niels de Vos <ndevos@redhat.com>
    (cherry picked from commit 26b52694feb04c98e6c9436bcd4e23e1687f0237)
* geo-rep/scheduler: Fix crash | Kotresh HR | 2018-06-11 | 1 | -35/+35
    Fix a crash where session_name is referenced before assignment. This is a
    corner case where the geo-rep session exists but the status output does not
    show any rows, which can happen when glusterd is down or the system is in an
    inconsistent state w.r.t. glusterd.
    Backport of:
    > Patch: https://review.gluster.org/19991/
    > Change-Id: Iec1557e01b35068041b4b3c1aacee2bfa0e05873
    > Signed-off-by: Kotresh HR <khiremat@redhat.com>
    (cherry picked from commit 829f32c61c364323bab494cf9dab880aad4be463)
    fixes: bz#1577871 Change-Id: Iec1557e01b35068041b4b3c1aacee2bfa0e05873 Signed-off-by: Kotresh HR <khiremat@redhat.com>
* extras/hooks: Fix S10selinux-label-brick.sh hook script | Milan Zink | 2018-04-06 | 1 | -28/+29
    * the script was failing due to a syntax error
    * shellcheck issues fixed
    * improved performance: semanage & restorecon are run on the unique brick path
    Upstream reference:
    > Change-Id: I58b357d9fd37586004a2a518f7a5d1c5c9ddd7e3
    > BUG: 1533342
    > Signed-off-by: Milan Zink <zeten30@gmail.com>
    Change-Id: I58b357d9fd37586004a2a518f7a5d1c5c9ddd7e3 BUG: 1546627 Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
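    For context, a minimal shell sketch of the labelling steps such a hook performs; the
    brick path is hypothetical, the real script derives it from the hook arguments:

        #!/bin/sh
        # Hypothetical brick path; the actual hook parses it from the volume's brick info.
        BRICKPATH=/bricks/brick1/testvol

        # Register an SELinux file context for the brick and apply it.
        semanage fcontext --add -t glusterd_brick_t "${BRICKPATH}(/.*)?"
        restorecon -R "${BRICKPATH}"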
* glusterd : introduce timer in mgmt_v3_lock | Gaurav Yadav | 2017-11-06 | 1 | -0/+1
    Problem:
    In a multi-node environment, if two op-sm transactions are initiated on one of
    the receiver nodes at the same time, glusterd may end up holding a stale lock.
    Solution:
    During mgmt_v3_lock a registration is made with gf_timer_call_after, which
    releases the lock after a certain period of time.
    > mainline patch: https://review.gluster.org/#/c/18437/
    Change-Id: I16cc2e5186a2e8a5e35eca2468b031811e093843 BUG: 1503239 Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
* geo-rep/scheduler: Add validation for session existence | Kotresh HR | 2017-10-12 | 1 | -0/+4
    Added validation to check for session existence so that a proper error
    message is given.
    > Change-Id: I13c5f6ef29c1395cff092a14e1bd2c197a39f058
    > BUG: 1499159
    > Signed-off-by: Kotresh HR <khiremat@redhat.com>
    (cherry picked from commit 938addeb7ec634e431c2c8c0a768a2a9ed056c0d)
    Change-Id: I13c5f6ef29c1395cff092a14e1bd2c197a39f058 BUG: 1499392 Signed-off-by: Kotresh HR <khiremat@redhat.com>
* geo-rep/scheduler: Fix '--no-color' help msg | Kotresh HR | 2017-10-06 | 1 | -1/+1
    > Change-Id: I0f51558083e0b11a6563b8a2ef62ec07fe2a9ca9
    > BUG: 1495436
    > Signed-off-by: Kotresh HR <khiremat@redhat.com>
    (cherry picked from commit 8855ebcfecde2a21e0a9ba725e9738708e03904a)
    Change-Id: I0f51558083e0b11a6563b8a2ef62ec07fe2a9ca9 BUG: 1496238 Signed-off-by: Kotresh HR <khiremat@redhat.com>
* gluster-block: strict-o-direct should be on | Pranith Kumar K | 2017-08-29 | 1 | -0/+1
    Backport of https://review.gluster.org/18120
    tcmu-runner is not going to open the block with O_SYNC anymore, so writes have
    a chance of getting cached in write-behind. When that happens, there is a
    chance that on failover some data could be stuck in the cache and be lost.
    > BUG: 1485962
    > Change-Id: If9835d914821dfc4ff432dc96775677a55d2918f
    > Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    BUG: 1486122 Change-Id: If9835d914821dfc4ff432dc96775677a55d2918f Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: https://review.gluster.org/18126; Reviewed-by/Tested-by: Krutika Dhananjay <kdhananj@redhat.com>; Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>; Smoke / CentOS-regression: Gluster Build System
* glusterd: Gluster should keep PID file in correct location | Gaurav Kumar Garg | 2017-08-12 | 2 | -6/+14
    Currently Gluster keeps the process pid information of all daemons and brick
    processes in the Gluster configuration file directory (i.e. /var/lib/glusterd/*).
    These pid files should be separate from the configuration files: deletion of
    the configuration file directory might result in serious problems. Also,
    /var/run/gluster is the default placeholder directory for pid files. With this
    fix, Gluster keeps the pid information of all processes in /var/run/gluster/*.
    > BUG: 1258561
    > Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
    > Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
    > Reviewed-on: https://review.gluster.org/13580
    > Tested-by: MOHIT AGRAWAL <moagrawa@redhat.com>
    > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    > cherry pick from commit 220d406ad13d840e950eef001a2b36f87570058d
    BUG: 1480459 Change-Id: Idb09e3fccb6a7355fbac1df31082637c8d7ab5b4 Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    Reviewed-on: https://review.gluster.org/18023; Reviewed-by: Atin Mukherjee <amukherj@redhat.com>; Smoke / CentOS-regression: Gluster Build System
* scripts: invalid test(1) in extras/S32gluster_enable_shared_storage.sh | Kaleb S. KEITHLEY | 2017-08-02 | 1 | -1/+1
    The test(1) man page says -eq is for INTEGER compares and = is for string
    compares. Also note the comment that -a and -o are ambiguous and to use
    'test && test' or 'test || test' instead. This bug has existed since 2015!
    (yikes) Found while testing localtime logging and running glusterd in the
    foreground.
    Change-Id: Ia544f7295e247b981504d085ebc4c533ab60ba84 BUG: 1476819 Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: https://review.gluster.org/17926; Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>; Smoke / CentOS-regression: Gluster Build System
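    To illustrate the distinction from test(1), with hypothetical variable names
    rather than the actual hook script:

        #!/bin/sh
        option="enable"     # a string value
        count=3             # an integer value

        # String comparison uses '='; using -eq here would be invalid test(1) usage.
        if [ "$option" = "enable" ]; then
            echo "option is enabled"
        fi

        # Integer comparison uses -eq.
        if [ "$count" -eq 3 ]; then
            echo "count is three"
        fi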
* extras: Turn eager-lock off for gluster-block | Pranith Kumar K | 2017-08-02 | 1 | -1/+1
    With the current implementation of eager-lock, FINODELK takes so much time
    that the cassandra workload times out and errors out. AFR eager-locking needs
    to be changed, similar to EC eager-locking, to make things work as expected.
    In the interim it is better to turn it off.
    This is how the profile looks if eager-lock is turned on:
      0.35     628.26 us    64.00 us   129882.00 us   42278  FXATTROP
     17.45   16500.54 us   212.00 us   375829.00 us   79568  WRITE
     81.76  209862.12 us    15.00 us  1992486.00 us   29318  FINODELK
    This is how the profile looks if eager-lock is turned off:
      1.87     283.71 us    65.00 us   298970.00 us   68346  FXATTROP
      6.33     199.04 us    13.00 us   373428.00 us  330524  FINODELK
     10.37    3151.47 us    53.00 us  1528484.00 us   34172  FSYNC
     81.31    5110.45 us   270.00 us  1519722.00 us  165244  WRITE
    > BUG: 1477404
    > Change-Id: I98026b1ecf30002ddac01be76f375c2e8c0b7838
    > Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    BUG: 1477405 Change-Id: I98026b1ecf30002ddac01be76f375c2e8c0b7838 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: https://review.gluster.org/17955; Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>, Prasanna Kumar Kalever <pkalever@redhat.com>; CentOS-regression / Smoke: Gluster Build System
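    As a rough illustration of what the change amounts to; the commit itself only
    edits the packaged gluster-block group file (entry cluster.eager-lock=off), and
    the volume name below is hypothetical:

        # Equivalent per-volume setting, applied manually:
        gluster volume set blockvol cluster.eager-lock off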
* extras: Disable remote-dio in gluster-block profile | Pranith Kumar K | 2017-07-31 | 1 | -1/+1
    The gluster-block file is opened with O_DIRECT, but because the block profile
    has remote-dio enabled this leads to high fsync latency, which causes failures
    in cassandra. Disabling remote-dio fixed the issue, so change remote-dio to
    disabled in the gluster-block profile.
    > BUG: 1474190
    > Change-Id: Ifd845ea9cbdcc08dd6073faca6082682af376ca3
    > Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    > Reviewed-on: https://review.gluster.org/17856
    > Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>, Jeff Darcy <jeff@pl.atyp.us>, Niels de Vos <ndevos@redhat.com>
    > (cherry picked from commit 0b3fec6924cad5c9f38941550ab4106972efa5cc)
    BUG: 1476653 Change-Id: Ifd845ea9cbdcc08dd6073faca6082682af376ca3 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: https://review.gluster.org/17918; Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>, Niels de Vos <ndevos@redhat.com>, Shyamsundar Ranganathan <srangana@redhat.com>; CentOS-regression / Smoke: Gluster Build System
* group-gluster-block: Set default shard-block-size to 64MB | Pranith Kumar K | 2017-07-31 | 1 | -0/+1
    With a 4MB shard size, I/O slows down more because of the extra
    inodelk/xattrops in replicate, so increase it to 64MB, which gave better
    performance than 4MB. To simulate writes on a preallocated VM image,
    fallocate the file and then do dd with notrunc:
      do "fallocate -l 1GB"
      then "dd if=/dev/zero of=file-1GB bs=1MB count=1024 conv=notrunc"
    These are the results on my laptop for dd:
    With 4MB:
      1.84    1357.37 us   19.00 us    12431.00 us   1188  FINODELK
      2.45     255.08 us   58.00 us     4038.00 us   8428  WRITE
     95.69   78967.76 us   30.00 us 20324240.00 us   1063  FXATTROP
    With 64MB:
      0.13      59.36 us   15.00 us      814.00 us    657  FINODELK
      6.02     225.53 us   69.00 us     6556.00 us   8205  WRITE
     93.82  103015.12 us   32.00 us 13046368.00 us    280  FXATTROP
    > BUG: 1475605
    > Change-Id: I4ed5441409df639e38c731ba0d140fe92902f25f
    > Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    > Reviewed-on: https://review.gluster.org/17887
    > Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
    > (cherry picked from commit abfbc3eb821e144ddbfdc5d7da401557b52beaf1)
    BUG: 1476654 Change-Id: I4ed5441409df639e38c731ba0d140fe92902f25f Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: https://review.gluster.org/17919; Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>, Niels de Vos <ndevos@redhat.com>, Shyamsundar Ranganathan <srangana@redhat.com>; CentOS-regression / Smoke: Gluster Build System
* extras: Change Makefile generation in generate_xlator.py | Pranith Kumar K | 2017-07-18 | 1 | -2/+3
    Makefile generation should include the default LD_FLAGS and also include the
    rpc-related paths in the include path.
    Change-Id: I45e1c97b96f08bbfe4663384f4873726febef9f6 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: https://review.gluster.org/17811; Reviewed-by: Niels de Vos <ndevos@redhat.com>; Smoke / CentOS-regression: Gluster Build System
* systemd/glusterfssharedstorage : remove dependency for var-run-gluster-shared_storage | Jiffin Tony Thottan | 2017-07-17 | 2 | -22/+22
    Currently the script used by glusterfssharedstorage has a dependency on
    var-run-gluster-shared_storage, but that unit will be present only if the node
    has rebooted. Also, in a reboot scenario there is a chance that this service
    can be executed before var-run-gluster-shared_storage is created; in that case
    glusterfssharedstorage would succeed even without mounting the shared storage.
    Also, the type of glusterfssharedstorage is changed to "forking" so that it
    stays active (instead of dead) after a successful start.
    Change-Id: I1c02cc64946e534d845aa7ec7b72644bbe4d26f9 BUG: 1452527 Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
    Reviewed-on: https://review.gluster.org/17658; Reviewed-by: soumya k <skoduri@redhat.com>, Kaleb KEITHLEY <kkeithle@redhat.com>; Smoke / CentOS-regression: Gluster Build System
* extras: Enable stat-prefetch in virt profile | Krutika Dhananjay | 2017-07-10 | 1 | -1/+0
    In the internal testing that was done, stat-prefetch did help reduce the
    number of stats coming from qemu hitting the disk, and thereby improved
    performance.
    Change-Id: Icf1ce62ecf4e96b97e1946a77b30434157a7786a BUG: 1468191 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: https://review.gluster.org/17713; Reviewed-by: Amar Tumballi <amarts@redhat.com>, Pranith Kumar Karampuri <pkarampu@redhat.com>; CentOS-regression / Smoke: Gluster Build System
* Scripts to identify quota accounting issues | Sanoj Unnikrishnan | 2017-06-30 | 3 | -0/+98
    The patch contains two scripts:
    log_accounting.sh does a du -h on the FS hierarchy and a quota list on the
    hierarchy, and interleaves the two outputs. We can then identify which
    directory(s) in the FS caused the accounting to go bad and investigate what
    fops happened on those directories. We can also limit the set of directories
    on which we need to set the dirty xattr to correct accounting.
    xattr_analysis.py reads all the xattrs of a brick and dumps them in a
    human-readable form to ease debugging.
    Change-Id: I2155561d10c08dc3ab9e8b09dbd258f0592b4d33 BUG: 1466188 Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
    Reviewed-on: https://review.gluster.org/17649; Reviewed-by: Raghavendra G <rgowdapp@redhat.com>; CentOS-regression / Smoke: Gluster Build System
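    A minimal sketch of the kind of comparison log_accounting.sh automates; the
    volume name, mount point and directory below are hypothetical:

        #!/bin/sh
        VOL=myvol            # hypothetical volume name
        MNT=/mnt/myvol       # hypothetical mount point
        DIR=projects         # hypothetical directory to check

        # Usage as the filesystem sees it...
        du -sh "$MNT/$DIR"
        # ...next to what quota accounting reports for the same path.
        gluster volume quota "$VOL" list "/$DIR"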
* scripts/shared_storage : systemd helper scripts to mount shared storage post reboot | Hendrik Visage | 2017-06-20 | 4 | -4/+57
    Reported-by: Hendrik Visage <hvjunk@gmail.com>
    Change-Id: Ibcff56b00f45c8af54c1ae04974267c2180f5f63 BUG: 1452527 Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
    Reviewed-on: https://review.gluster.org/17339; Reviewed-by: Niels de Vos <ndevos@redhat.com>, Kaleb KEITHLEY <kkeithle@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
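    For illustration, a minimal sketch of what such a helper has to do, assuming the
    shared storage volume is named gluster_shared_storage and mounts under
    /var/run/gluster/shared_storage; the actual scripts in this commit may differ:

        #!/bin/sh
        # Mount the shared storage volume if it is not already mounted.
        MNT=/var/run/gluster/shared_storage
        if ! grep -qs "$MNT" /proc/mounts; then
            mount -t glusterfs "$(hostname)":/gluster_shared_storage "$MNT"
        fi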
* glusterd.service (systemd), shows "Active failed" when gd in stop state | Gaurav Yadav | 2017-06-19 | 1 | -0/+1
    While doing cleanup-and-exit, glusterd handles the signal SIGTERM, which is a
    clean exit, but systemd treats the resulting non-zero exit status as a
    failure. With this fix the directive "SuccessExitStatus" has been added to
    glusterd.service, which takes care of stopping the service properly.
    Signed-off-by: Gaurav Yadav <gyadav@redhat.com> Change-Id: Ie5216722632a245f787fd69bfbbf8d0f0068bccb BUG: 1462200
    Reviewed-on: https://review.gluster.org/17559; Tested-by: Gaurav Yadav <gyadav@redhat.com>; Reviewed-by: Prashanth Pai <ppai@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* nl-cache: add group volume set option for ease of use | Poornima G | 2017-06-12 | 2 | -1/+8
    Change-Id: Id03643a9598da53051a01ca09e1d2a62bc195ab6 Signed-off-by: Poornima G <pgurusid@redhat.com>
    Reviewed-on: https://review.gluster.org/17495; Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* extras/hookscripts: non-portable shell syntax | Kaleb S. KEITHLEY | 2017-06-02 | 1 | -2/+2
    use of "function" is not portable to other shells
    Reported-by: Patrick Matthäi <pmatthaei@debian.org>
    Change-Id: I13a0482b387cc3b7a7a57df424e673850603da37 BUG: 1457812 Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: https://review.gluster.org/17443; Reviewed-by: Niels de Vos <ndevos@redhat.com>; NetBSD-regression / CentOS-regression / Smoke: Gluster Build System
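    To illustrate the portability issue with a generic example (not the actual hook
    code; the function name and lock file are made up):

        #!/bin/sh
        # Non-portable: the 'function' keyword is a bash/ksh extension.
        # function cleanup {
        #     rm -f /tmp/example.lock
        # }

        # Portable POSIX sh form:
        cleanup() {
            rm -f /tmp/example.lock
        }
        cleanup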
* features/shard: Handle offset in appending writes | Pranith Kumar K | 2017-05-27 | 1 | -1/+0
    When a file is opened with append, all writes are appended at the end of the
    file irrespective of the offset given in the write syscall. This needs to be
    considered in the shard size update function and also when choosing which
    shard to write to. At the moment shard piggybacks on queuing from the
    write-behind xlator for ordering of the operations, so if write-behind is
    disabled and two parallel appending writes arrive, both of which can increase
    the file size beyond the shard size, the file will be corrupted.
    BUG: 1455301 Change-Id: I9007e6a39098ab0b5d5386367bd07eb5f89cb09e Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: https://review.gluster.org/17387; Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* extras: Provide group set for gluster-block workloads | Pranith Kumar K | 2017-05-12 | 2 | -1/+20
    For gluster-block workloads I/O is always done with O_DIRECT, so it doesn't
    benefit from any of the perf xlators; disable all of them to save on memory:
      performance.quick-read=off
      performance.read-ahead=off
      performance.io-cache=off
      performance.stat-prefetch=off
      performance.write-behind=off
      performance.open-behind=off
      performance.readdir-ahead=off
    We want the I/O on the file to be with O_DIRECT:
      network.remote-dio=enable
    Options that are proven to give good performance with VM workloads, which are
    very similar to gluster-block:
      cluster.eager-lock=enable
      cluster.quorum-type=auto
      cluster.data-self-heal-algorithm=full
      cluster.locking-scheme=granular
      cluster.shd-max-threads=8
      cluster.shd-wait-qlength=10000
      features.shard=on
    It is better to turn off things we are not using:
      user.cifs=off
    It is better to have allow-insecure on so that ports > 1024 used by
    tcmu-runner are allowed:
      server.allow-insecure=on
    Change-Id: I9a21c824fa42242f02b57569feedd03d9b6f9439 BUG: 1450010 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: https://review.gluster.org/17254; Reviewed-by: Niels de Vos <ndevos@redhat.com>, Jeff Darcy <jeff@pl.atyp.us>; Smoke / CentOS-regression / NetBSD-regression: Gluster Build System
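    A short usage sketch, assuming the group file ships as
    /var/lib/glusterd/groups/gluster-block and using a hypothetical volume name:

        # Apply the whole gluster-block option set in one step
        # instead of issuing one 'volume set' per option.
        gluster volume set blockvol group gluster-block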
* extras/hook-scripts: SELinux brick file context management scripts | Brian Foster | 2017-05-01 | 7 | -1/+138
    The SELinux policy for gluster defines the glusterd_brick_t type to support
    server-side SELinux (e.g., server-side labels). Add convenience hook scripts
    that users/packagers can install to ensure that new bricks are labeled
    correctly. The volume create hook script adds a new SELinux file context for
    each brick path and runs a restorecon to label the brick. The volume delete
    hook removes the per-brick SELinux file context.
    Change-Id: I5f102db5382d813c4d822ff74e873a7a669b41db BUG: 1047975
    Signed-off-by: Brian Foster <bfoster@redhat.com>, Niels de Vos <ndevos@redhat.com>, Jiffin Tony Thottan <jthottan@redhat.com>
    Reviewed-on: https://review.gluster.org/6630; Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* build: conditionally build legacy gNFS server and associated sub-packaging | Kaleb S. KEITHLEY | 2017-04-28 | 1 | -2/+2
    Plus some additional logic in glusterd to ensure gnfs (glusterfs) daemons are
    never started if the server/nfs xlator is not installed. As a service, nfs is
    still initialized. The glusterfs-gnfs RPM may be installed or uninstalled
    independently of anything else, including on a system where gluster is
    actively running, so the existence of the xlator is always tested before
    trying to start gnfs.
    Change-Id: I56743ad1cb36a84917226d7d26cb9d015d441e66 BUG: 1326219 Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: https://review.gluster.org/16958; Smoke / CentOS-regression / NetBSD-regression: Gluster Build System
* extras/hooks: Use double quotes while using [..] | Anoop C S | 2017-04-27 | 1 | -1/+1
    This avoids the following warning when the first operand is null:
      [: =: unary operator expected
    Change-Id: I5439d8f60a6d9e30e6ba04c16c3de2096a87c38f BUG: 1446126 Signed-off-by: Anoop C S <anoopcs@redhat.com>
    Reviewed-on: https://review.gluster.org/17127; Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
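    To illustrate the failure mode with a hypothetical variable (not the actual hook
    script):

        #!/bin/sh
        opt=""   # empty value coming from a volume option

        # Unquoted: expands to '[ = enable ]' and fails with
        # "[: =: unary operator expected" when $opt is empty.
        # if [ $opt = "enable" ]; then ...

        # Quoted: expands to '[ "" = "enable" ]', which is a valid test.
        if [ "$opt" = "enable" ]; then
            echo "enabled"
        else
            echo "not enabled"
        fi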
* packaging: /var/run/gluster owner gluster:gluster(0775) for statedumps | Kaleb S. KEITHLEY | 2017-04-27 | 1 | -1/+1
    gfapi has the ability to take statedumps. However, if the application using
    gfapi isn't running with root privs, the statedump file can't be written to
    the default location, i.e. /var/run/gluster.
    Change-Id: I97d8919ef8b8cd4775e1a206f939a2bf0046786d BUG: 1445569 Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: https://review.gluster.org/17122; Reviewed-by: Niels de Vos <ndevos@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* packaging: (ganesha) remove glusterfs-ganesha subpackage and related files | Kaleb S. KEITHLEY | 2017-03-21 | 10 | -353/+2
    Indiana Jones and the Temple of Ganesha HA, part two.
    Remove the glusterfs-ganesha subpackage, superseded by storhaug.
    Change-Id: I42a1fc59159add108d77080b9b130696216aa76d BUG: 1418417 Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: https://review.gluster.org/16506; Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* build/packaging: Debian and Ubuntu don't have /usr/libexec | Kaleb S. KEITHLEY | 2017-03-13 | 1 | -1/+1
    GLUSTERFS_LIBEXECDIR is effectively hard-coded to /usr/libexec/glusterfs in
    configure(.ac), but Debian-based distributions don't have a /usr/libexec/
    directory. This issue is partially mitigated by the use of $libexecdir in
    some of the Makefile.am files, but even so, the incorrectly defined
    GLUSTERFS_LIBEXECDIR results in things such as gsyncd, glusterfind, eventsd,
    etc. trying to invoke other scripts and programs from a location that doesn't
    exist. And once we correctly define GLUSTERFS_LIBEXECDIR, we might as well
    use it appropriately.
    Change-Id: If5219cadc51ae316f7ba2e2831d739235c77902d BUG: 1430841 Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: https://review.gluster.org/16880; Reviewed-by: Milind Changire <mchangir@redhat.com>, Joe Julian <me@joejulian.name>, Shyamsundar Ranganathan <srangana@redhat.com>; Smoke / CentOS-regression / NetBSD-regression: Gluster Build System
* extras/devel-tools: script to resolve bt addresses | Milind Changire | 2017-03-07 | 1 | -0/+94
    Problem:
    STACK_WINDs in a gluster backtrace dumped in a log file are undecipherable,
    with only the hex addresses of the locations leaving us without a clue.
    Solution:
    This utility uses the undeciphered lines in the backtrace and the associated
    debuginfo rpm to generate the function name and the file and line number
    associated with the stack frame. Passing "none" as the debuginfo rpm name
    makes the script assume that you want to resolve against a source install and
    not a debuginfo rpm. You need to copy the unresolved lines from the backtrace
    into a file and pass the name of this file to the utility as the input file.
    Change-Id: I4d8bc1ae205af37688d03298de49654018bdba9d BUG: 1426891 Signed-off-by: Milind Changire <mchangir@redhat.com>
    Reviewed-on: https://review.gluster.org/16763; Reviewed-by: Vijay Bellur <vbellur@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* build: fixes to build 3.9.0rc2 on Debian | Kaleb S. KEITHLEY | 2017-02-26 | 2 | -1/+92
    Add glustereventsd-Debian(.in) and the associated Makefile(.am) and
    configure(.ac) changes. Add UUIDLIBS to fdl's librecon.
    Change-Id: Ibff821691023704978140eaaff2c6532b74c50fa BUG: 1389127 Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: https://review.gluster.org/15737; Reviewed-by: Joe Julian <me@joejulian.name>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* snapshot/scheduler: Set sebool cron_system_cronjob_use_shares to on | Avra Sengupta | 2017-02-22 | 1 | -0/+90
    From RHEL 7.1 onwards, the user has to manually set the selinux boolean
    'cron_system_cronjob_use_shares' to on, if selinux is enabled, for the
    snapshot scheduler to work. With this fix, we automate that bit in the init
    step of the snapshot scheduler.
    Change-Id: I5c1d23c14133c64770e84a77999ce647526f6711 BUG: 1395643 Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: https://review.gluster.org/15857; Reviewed-by: Aravinda VK <avishwan@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
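    The manual step being automated looks roughly like this (a sketch; the
    scheduler's init code may check and set the boolean differently):

        # Check whether the boolean is already on.
        getsebool cron_system_cronjob_use_shares
        # Turn it on persistently (-P) so system cron jobs may use gluster shares.
        setsebool -P cron_system_cronjob_use_shares on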
* contributors: map more email addresses and domain names | Niels de Vos | 2017-02-20 | 2 | -1/+19
    While checking the statistics for the upcoming release, I noticed some new
    names, email addresses and domains. Adding the ones for which the mapping is
    obvious or for which people replied to my request for clarification.
    Steps to get the more up-to-date statistics (once merged):
      $ git checkout master
      $ ./extras/who-wrote-glusterfs/who-wrote-glusterfs.sh v3.9dev..origin/release-3.10
      ...
    Change-Id: I4ab85fdbdb53d09a70a659555b8341cf9376167c Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: https://review.gluster.org/16688; Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>, Vijay Bellur <vbellur@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* extras: Add more options to group virt | Krutika Dhananjay | 2017-02-17 | 1 | -2/+7
    Apart from some of the option configurations already listed in the
    group-virt.example file, we also recommend that users set certain other
    options, added by this patch, for the VM use case. This also helps
    Gluster-oVirt users configure virt options for new volumes at the click of a
    button, as opposed to setting them manually through the volume-set command.
    Change-Id: I8524e8d8a06bbbb0b9247571706e786410013b41 BUG: 1418900 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: https://review.gluster.org/16577; Reviewed-by: Sahina Bose <sabose@redhat.com>, Niels de Vos <ndevos@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* extras/rebalance.py: Fix statvfs for FreeBSD in python | Xavier Hernandez | 2017-02-07 | 1 | -1/+9
    FreeBSD doesn't return the block size in f_bsize as Linux does; it returns the
    optimal I/O size, so we need to account for this to avoid invalid results. On
    FreeBSD we take f_frsize as the block size.
    Change-Id: I72083d8ae183548439de874c77f1d60d9c2d14a7 BUG: 1356076 Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
    Reviewed-on: https://review.gluster.org/16498; Reviewed-by: Jeff Darcy <jdarcy@redhat.com>; CentOS-regression / NetBSD-regression / Smoke: Gluster Build System
* extras: Provide group set for md-cache and invalidation options | Poornima G | 2017-02-03 | 2 | -1/+9
    To enable the integration of md-cache and the invalidation features we need to
    set 3 volume options in a specific order. In order to ease this for the user,
    provide a group volume set option.
    Usage: gluster vol set <VOLNAME> group metadata-cache
    Change-Id: I9bf0fd4217aa2a1c7ffbdc93e879b10f87addeac BUG: 1418249 Signed-off-by: Poornima G <pgurusid@redhat.com>
    Reviewed-on: https://review.gluster.org/16503; Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>, Atin Mukherjee <amukherj@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* extras: glusterfs-mode.el has incorrect FSF address | Kaleb S. KEITHLEY | 2017-02-02 | 1 | -112/+113
    Found by rpmlint on the OpenSuSE Build Service. Also convert DOS crlf to Unix
    lf, likewise found by SuSE rpmlint.
    Change-Id: I0329e6682333ead21ca1b76a3b00cb863c2af51b Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: https://review.gluster.org/16500; Reviewed-by: Vijay Bellur <vbellur@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* storhaug HA: first step, remove resource agents and setup script | Kaleb S. KEITHLEY | 2017-01-17 | 7 | -1768/+3
    The resource agents and setup script(s) are now in storhaug. This is a phased
    switch-over to storhaug: ultimately all components here should be (re)moved to
    the storhaug project and its packages, but for now some will linger here.
    Change-Id: Ied3956972b14b14d8a76e22c583b1fe25869f8e7 BUG: 1410843 Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/16349; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* common-ha: add node create new node dirs in shared storage | Kaleb S. KEITHLEY | 2016-12-22 | 1 | -10/+10
    When adding a node to the ganesha HA cluster, create the directory tree in
    shared storage for the added node and create sets of symlinks to match what
    is/was created for the other nodes. I.e. in a four-node cluster the new node
    needs a set of links to the four existing nodes:
      /run/gluster/shared/nfs-ganesha/$new/nfs/{ganesha,statd}/$e1 -> e1
      /run/gluster/shared/nfs-ganesha/$new/nfs/{ganesha,statd}/$e2 -> e2
      /run/gluster/shared/nfs-ganesha/$new/nfs/{ganesha,statd}/$e3 -> e3
      /run/gluster/shared/nfs-ganesha/$new/nfs/{ganesha,statd}/$e4 -> e4
    and all the existing nodes need links added for the new node:
      /run/gluster/shared/nfs-ganesha/$e1/nfs/{ganesha,statd}/$new -> new
      /run/gluster/shared/nfs-ganesha/$e2/nfs/{ganesha,statd}/$new -> new
      /run/gluster/shared/nfs-ganesha/$e3/nfs/{ganesha,statd}/$new -> new
      /run/gluster/shared/nfs-ganesha/$e5/nfs/{ganesha,statd}/$new -> new
    Likewise when deleting, remove the dir and symlinks.
    original change http://review.gluster.org/16036
    BUG: 1400613 Change-Id: I52839046745728d06ab5a07f38081c032093bff6 Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/16216; Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>, soumya k <skoduri@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
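    A minimal sketch of the link-creation half of that work; the node names are
    hypothetical and the exact link targets follow the layout created by the
    ganesha HA setup script, which may differ from this simplification:

        #!/bin/sh
        BASE=/run/gluster/shared/nfs-ganesha
        NEW=node5                              # hypothetical new node
        EXISTING="node1 node2 node3 node4"     # hypothetical existing nodes

        for svc in ganesha statd; do
            mkdir -p "$BASE/$NEW/nfs/$svc"
            for node in $EXISTING; do
                # the new node gets a link per existing node ...
                ln -sf "$BASE/$node/nfs/$svc" "$BASE/$NEW/nfs/$svc/$node"
                # ... and every existing node gets a link for the new node
                ln -sf "$BASE/$NEW/nfs/$svc" "$BASE/$node/nfs/$svc/$NEW"
            done
        done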
* common-ha: Correct the VIP assigned to the new node added | Soumya Koduri | 2016-12-21 | 1 | -4/+4
    There is a regression introduced with patch #16115: an incorrect VIP gets
    assigned to the new node being added to the cluster. This patch fixes the
    same.
    Change-Id: I468c7d16bf7e4efa04692db83b1c5ee58fbb7d5f BUG: 1406410 Signed-off-by: Soumya Koduri <skoduri@redhat.com>
    Reviewed-on: http://review.gluster.org/16213; Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>, jiffin tony Thottan <jthottan@redhat.com>; Smoke / CentOS-regression / NetBSD-regression: Gluster Build System
* ganesha/scripts : Prevent removal of entries in ganesha.conf during deletion of a node | Jiffin Tony Thottan | 2016-12-21 | 1 | -1/+1
    Change-Id: Ia6c653eeb9bef7ff4107757f845218c2316db2e4 BUG: 1406249 Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
    Reviewed-on: http://review.gluster.org/16209; Reviewed-by: soumya k <skoduri@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* common-ha: add node create new node dirs in shared storage | Kaleb S. KEITHLEY | 2016-12-16 | 1 | -1/+94
    When adding a node to the ganesha HA cluster, create the directory tree in
    shared storage for the added node and create sets of symlinks to match what
    is/was created for the other nodes. I.e. in a four-node cluster the new node
    needs a set of links to the four existing nodes:
      /run/gluster/shared/nfs-ganesha/$new/nfs/{ganesha,statd}/$e1 -> e1
      /run/gluster/shared/nfs-ganesha/$new/nfs/{ganesha,statd}/$e2 -> e2
      /run/gluster/shared/nfs-ganesha/$new/nfs/{ganesha,statd}/$e3 -> e3
      /run/gluster/shared/nfs-ganesha/$new/nfs/{ganesha,statd}/$e4 -> e4
    and all the existing nodes need links added for the new node:
      /run/gluster/shared/nfs-ganesha/$e1/nfs/{ganesha,statd}/$new -> new
      /run/gluster/shared/nfs-ganesha/$e2/nfs/{ganesha,statd}/$new -> new
      /run/gluster/shared/nfs-ganesha/$e3/nfs/{ganesha,statd}/$new -> new
      /run/gluster/shared/nfs-ganesha/$e5/nfs/{ganesha,statd}/$new -> new
    Likewise when deleting, remove the dir and symlinks.
    Change-Id: Id2f78f70946f29c3503e1e6db141b66cb431e0ea BUG: 1400613 Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/16036; Reviewed-by: soumya k <skoduri@redhat.com>, jiffin tony Thottan <jthottan@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* common-ha: explicitly set udpu transport for corosync | Kaleb S. KEITHLEY | 2016-12-15 | 1 | -1/+1
    On RHEL7 corosync uses udpu (udp unicast) by default. On RHEL6 the default is
    (now) udp multicast. In network environments that don't support udp multicast
    this causes ever-growing lists of [TOTEM ] Retransmit errors. Always
    specifying --transport udpu is thus a no-op on RHEL7. Using the same transport
    on both RHEL6 and RHEL7 may (or may not) give similar behavior and
    performance; it's hard to say. It remains a mystery why things have always
    worked on RHEL6 prior to now; further investigation is required to uncover why
    this is the case.
    Change-Id: I4d0de97fe4425c47f249beaaf51aeca3e91731fa BUG: 1404410 Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/16122; Reviewed-by: soumya k <skoduri@redhat.com>; CentOS-regression / Smoke / NetBSD-regression: Gluster Build System
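    For reference, forcing the transport at cluster-setup time looks roughly like
    this with the pcs 0.9 CLI used on RHEL 6/7; the cluster and node names are
    hypothetical:

        # Create the pacemaker/corosync cluster with unicast UDP selected explicitly.
        pcs cluster setup --name ganesha-ha node1 node2 node3 node4 --transport udpu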
* common-ha: Create portblock RA as part of add/delete-node | Soumya Koduri | 2016-12-12 | 1 | -8/+61
    When a node is added to or deleted from an existing nfs-ganesha cluster, we
    need to create or clean up the portblock RA as well. This patch addresses
    that. We also need to adjust the quorum policy with the increase/decrease in
    the number of nodes in the cluster.
    Change-Id: I31a896715b9b7fc931009723d1570bf7aa4da9b6 BUG: 1403130 Signed-off-by: Soumya Koduri <skoduri@redhat.com>
    Reviewed-on: http://review.gluster.org/16089; Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>, Kaleb KEITHLEY <kkeithle@redhat.com>; NetBSD-regression / Smoke / CentOS-regression: Gluster Build System
* ganesha/scripts : find export id for already exported volume in S31ganesha-start.sh | Jiffin Tony Thottan | 2016-12-08 | 1 | -0/+3
    Change-Id: Iada90ed215966d3f526fa20aa5359b67f25a6944 BUG: 1401822 Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
    Reviewed-on: http://review.gluster.org/16037; Reviewed-by: soumya k <skoduri@redhat.com>, Kaleb KEITHLEY <kkeithle@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* glusterd/ganesha : handle volume reset properly for ganesha options | Jiffin Tony Thottan | 2016-12-06 | 2 | -52/+1
    "gluster volume reset" should first unexport the volume and then delete the
    export configuration file. Also, the reset option is not applicable to
    ganesha.enable if the volume value is "all". This patch also renames
    create_export_config to manage_export_config.
    Change-Id: Ie81a49e7d3e39a88bca9fbae5002bfda5cab34af BUG: 1397795 Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
    Reviewed-on: http://review.gluster.org/15914; Reviewed-by: soumya k <skoduri@redhat.com>, Kaleb KEITHLEY <kkeithle@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* extras: Include shard and full-data-heal in virt group | Krutika Dhananjay | 2016-12-04 | 1 | -0/+2
    Change-Id: Iea66cb017bd1ab62da9cd65895fa65fc6896108b BUG: 1375431 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/15995; Reviewed-by: Atin Mukherjee <amukherj@redhat.com>, Vijay Bellur <vbellur@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* eventsapi: JSON output and different error codes | Aravinda VK | 2016-12-02 | 2 | -4/+23
    JSON output is added to all commands; use `--json` to get JSON output. The
    following error codes are added to differentiate between errors; any other
    unknown error will have return code 1:
      ERROR_SAME_CONFIG = 2
      ERROR_ALL_NODES_STATUS_NOT_OK = 3
      ERROR_PARTIAL_SUCCESS = 4
      ERROR_WEBHOOK_ALREADY_EXISTS = 5
      ERROR_WEBHOOK_NOT_EXISTS = 6
      ERROR_INVALID_CONFIG = 7
      ERROR_WEBHOOK_SYNC_FAILED = 8
      ERROR_CONFIG_SYNC_FAILED = 9
    Also hid the `node-` commands in the help message.
    BUG: 1357753 Change-Id: I962b5435c8a448b4573059da0eae42f3f93cc97e Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/15867; Reviewed-by: Prashanth Pai <ppai@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
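    A usage sketch, assuming the gluster-eventsapi CLI is installed and this version
    provides the status and webhook-add subcommands (the webhook URL is made up):

        # Machine-readable output for scripting.
        gluster-eventsapi status --json

        # The exit status maps to the error classes listed above,
        # e.g. 5 if the webhook being added already exists.
        gluster-eventsapi webhook-add http://example.com/listener
        echo "exit code: $?"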
* ganesha/scripts : avoid incrementing Export Id value for already exported volumes | Jiffin Tony Thottan | 2016-12-01 | 1 | -32/+10
    Currently a volume is unexported when it stops and re-exported during volume
    start using the hook script, and the export id value is incremented for each
    re-export. Since the hook script is called from every node in parallel, this
    may lead to inconsistent export id values.
    Change-Id: Ib9f19a3172b2ade29a3b4edc908b3267c68c0b20 BUG: 1399186 Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
    Reviewed-on: http://review.gluster.org/15948; Reviewed-by: soumya k <skoduri@redhat.com>, Kaleb KEITHLEY <kkeithle@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System
* common-ha: IPaddr RA is not stopped when pacemaker quorum is lost | Kaleb S. KEITHLEY | 2016-12-01 | 1 | -4/+6
    Ken Gaillot writes: The other is pacemaker's no-quorum-policy cluster
    property. The default (which has not changed) is "stop" (stop all resources).
    Other values are "ignore" (act as if quorum was not lost), "freeze" (continue
    running existing resources but don't recover resources from unseen nodes) or
    "suicide" (shut down).
    But on my four-node cluster:
      % pcs property show no-quorum-policy
      Cluster Properties:
      %
    i.e. it shows nothing. But:
      % pcs property list --all
      Cluster Properties:
      ...
      no-quorum-policy: stop
      ...
      %
    seems to think it knows about it. And then:
      % pcs property set no-quorum-policy=stop
      % pcs property show no-quorum-policy
      Cluster Properties:
      no-quorum-policy: stop
      %
    which looks rather inconsistent. So we will try explicitly setting it to
    "stop" when there are three or more nodes.
    Change-Id: I47fc7ee84fcd6ad52ccb776913511978a8d517b4 BUG: 1400237 Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/15981; Reviewed-by: soumya k <skoduri@redhat.com>; Smoke / NetBSD-regression / CentOS-regression: Gluster Build System