path: root/xlators/mgmt/glusterd/src/glusterd-utils.c
Commit message | Author | Age | Files | Lines
* quota: add version to quota xattrs (vmallika, 2015-11-02; 1 file, -0/+8)

  If quota is disabled and the clean-up process terminates without
  completely cleaning up the quota xattrs, the stale xattrs can mess up
  the accounting when quota is enabled again. A version number is now
  suffixed to all quota xattrs. This version number is specific to the
  marker xlator, i.e. when quota xattrs are requested by quotad/client,
  marker removes the version suffix from the key before sending the
  response.

  Change-Id: I1ca2c11460645edba0f6b68db70d476d8d26e1eb
  BUG: 1272411
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/12386
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
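A minimal sketch of the versioned-key scheme this message describes; the key format, buffer size, and variable names here are illustrative rather than taken from the patch:

    /* Build a quota xattr key carrying the marker-specific version
     * suffix; marker strips the suffix before replying to
     * quotad/clients, so the rest of the stack sees the plain key. */
    char key[256] = {0};
    int  version  = 1;   /* bumped on each quota disable/enable cycle */

    snprintf (key, sizeof (key),
              "trusted.glusterfs.quota.size.%d", version);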
* afr/glusterd: Fix naming issue in tier related changes (Mohammed Rafi KC, 2015-10-30; 1 file, -4/+6)

  Change some of the function names added recently as part of the
  tiering changes.

  Change-Id: I238831128ee00cdf83f8a80be937d3528d133099
  BUG: 1275489
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12431
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* core: use syscall wrappers instead of direct syscalls -- glusterd (Kaleb S. KEITHLEY, 2015-10-28; 1 file, -28/+28)

  Various xlators and other components invoke system calls directly
  instead of using the libglusterfs/syscall.[ch] wrappers. Where a
  wrapper is not used, a comment in the source should explain why.

  Change-Id: I28bf2a5f7730b35914e7ab57fed91e1966b30073
  BUG: 1267967
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/12379
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
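The conversion is mechanical; a before/after sketch (sys_stat is the wrapper from libglusterfs/syscall.h, the surrounding variables are illustrative):

    struct stat st  = {0};
    int         ret = -1;

    /* before: direct syscall */
    ret = stat (brickinfo->path, &st);

    /* after: libglusterfs wrapper from syscall.h */
    ret = sys_stat (brickinfo->path, &st);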
* tier/shd: make shd commands compatible with tiering (Mohammed Rafi KC, 2015-10-12; 1 file, -4/+14)

  Tiering volfiles may contain afr and disperse together, or multiple
  times, depending on the configuration, and the information for those
  configurations is stored in tier_info. So most of the volgen code
  generation needs to be changed to be compatible with it.

  Change-Id: I563d1ca6f281f59090ebd470b7fda1cc4b1b7e1d
  BUG: 1261276
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12135
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* tier/shd: create shd volfile for tiering (Mohammed Rafi KC, 2015-10-11; 1 file, -2/+96)

  Currently the shd graph is started only for replicate or disperse
  volumes. In the case of tiering, the volume type is tier, so we need
  to start shd if either the cold or the hot tier is compatible with an
  shd volume.

  Change-Id: Ic689746ac7d2fc6a9eccdabd8518dc9139829de2
  BUG: 1261276
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/11962
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* all: reduce "inline" usage (Jeff Darcy, 2015-09-01; 1 file, -2/+2)

  There are three kinds of inline functions: plain inline, extern
  inline, and static inline. All three have been removed from .c files,
  except those in "contrib" which aren't our problem. Inlines in .h
  files, which are overwhelmingly "static inline" already, have
  generally been left alone. Over time we should be able to "lower"
  these into .c files, but that has to be done in a case-by-case fashion
  requiring more manual effort. This part was easy to do automatically
  without (as far as I can tell) any ill effect.

  In the process, several pieces of dead code were flagged by the
  compiler, and were removed.

  Change-Id: I56a5e614735c9e0a6ee420dab949eac22e25c155
  BUG: 1245331
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: http://review.gluster.org/11769
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
* glusterd: Don't allow remove brick start/commit if glusterd is down on the host of the brick (Atin Mukherjee, 2015-08-26; 1 file, -1/+0)

  remove brick stage blindly starts the remove brick operation even if
  the glusterd instance of the node hosting the brick is down.
  Operationally it's incorrect, and this could result in an inconsistent
  rebalance status across the nodes, as the originator of this command
  will always keep the rebalance status at 'DEFRAG_NOT_STARTED', while
  the glusterd instances on the other nodes, once they come up, will
  trigger rebalance and set the status to completed when the rebalance
  finishes.

  This patch fixes two things (check 1 is sketched below):
  1. Add a validation in remove brick to check whether all the peers
     hosting the bricks to be removed are up.
  2. Don't copy volinfo->rebal.dict from the stale volinfo during
     restore, as this might end up in an inconsistent node_state.info
     file, resulting in volume status command failure.

  Change-Id: Ia4a76865c05037d49eec5e3bbfaf68c1567f1f81
  BUG: 1245045
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/11726
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
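A hedged sketch of check 1: refuse the stage if any remote peer hosting a brick to be removed is disconnected. The helpers and fields follow glusterd as best recalled and should be treated as illustrative; the real check walks only the bricks named in the remove-brick request:

    cds_list_for_each_entry (brickinfo, &volinfo->bricks, brick_list) {
            if (!gf_uuid_compare (brickinfo->uuid, MY_UUID))
                    continue;       /* local brick, nothing to check */

            rcu_read_lock ();
            peerinfo = glusterd_peerinfo_find_by_uuid (brickinfo->uuid);
            if (!peerinfo || !peerinfo->connected) {
                    rcu_read_unlock ();
                    ret = -1;       /* host of this brick is down */
                    goto out;
            }
            rcu_read_unlock ();
    }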
* tiering/glusterd: start tier daemon during volume start (Mohammed Rafi KC, 2015-08-17; 1 file, -1/+68)

  The tier daemon should always run with a tier volume. Previously, if
  the volume was stopped and started again, the tier daemon had to be
  started manually; with this patch, the tier process is triggered
  automatically along with volume start.

  A snapshot-restored volume will not have node_state_info, so we need
  to create and store it dynamically.

  Change-Id: I659387c914bec7a1b6929ee5cb61f7b406402075
  BUG: 1238593
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/11525
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* quota : volume-reset shouldn't remove quota-deem-statfs (Manikandan Selvaganesh, 2015-08-07; 1 file, -0/+22)

  Volume-reset shouldn't remove quota-deem-statfs, unless explicitly
  specified, when quota is enabled.

  1) glusterd_op_stage_reset_volume ()
     'gluster volume set/reset <VOLNAME> features.quota/
     features.inode-quota' should not be allowed as it is deprecated.
     Setting and resetting quota/inode-quota features should be allowed
     only through 'gluster volume quota <VOLNAME> enable/disable'.
  2) glusterd_enable_default_options ()
     Option 'features.quota-deem-statfs' should not be turned off with
     'gluster volume reset <VOLNAME>', since quota features can be
     set/reset only with 'gluster volume quota <VOLNAME> enable/disable'.
     But 'gluster volume set features.quota-deem-statfs' can be turned
     on/off when quota is enabled.

  Change-Id: Ib5aa00a4d8c82819c08dfc23e2a86f43ebc436c4
  BUG: 1250582
  Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
  Reviewed-on: http://review.gluster.org/11839
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: Do not log failure if glusterd_get_txn_opinfo fails in gluster volume status (Atin Mukherjee, 2015-08-02; 1 file, -3/+4)

  The first RPC call of gluster volume status fetches the list of volume
  names from GlusterD, and since no volume name is set in the dictionary
  at that point, glusterd_get_txn_opinfo fails, producing a failure log
  that is annoying to the user, considering this command is triggered
  frequently. The fix is to have the callers log it depending on the
  need.

  Change-Id: Ib60a56725208182175513c505c61bcb28148b2d0
  BUG: 1238936
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/11520
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
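A sketch of the caller-side pattern this implies: the lookup itself stays quiet and each caller picks the log level. glusterd_get_txn_opinfo and uuid_utoa are real glusterd/libglusterfs helpers; the surrounding fragment is illustrative:

    ret = glusterd_get_txn_opinfo (txn_id, &txn_op_info);
    if (ret)
            /* expected for the volume-list RPC, so debug level only */
            gf_msg_debug (this->name, 0,
                          "Unable to get txn opinfo for txn %s",
                          uuid_utoa (*txn_id));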
* glusterd: initialize the daemon services on demand (Atin Mukherjee, 2015-07-27; 1 file, -20/+5)

  As of now all the daemon services are initialized in the glusterD init
  path. Since the socket file path of a per-node daemon demands the uuid
  of the node, the MY_UUID macro is invoked as part of the
  initialization.

  The above flow breaks use cases where a gluster image is built from a
  template such as a Dockerfile, Vagrantfile, or any kind of
  virtualization environment: instances brought up from that image would
  share the same node UUID, resulting in peer probe failure.

  The solution is to lazily initialize the services on demand.

  Change-Id: If7caa533026c83e98c7c7678bded67085d0bbc1e
  BUG: 1238135
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/11488
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
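A hedged sketch of the on-demand pattern, using the shd service as an example (glusterd_shdsvc_init is the per-service init of glusterd's svc framework; the 'inited' flag is illustrative):

    /* first use of the per-node service, e.g. inside its manager,
     * instead of unconditionally in glusterd init */
    if (!svc->inited) {
            ret = glusterd_shdsvc_init (svc);  /* MY_UUID is valid now */
            if (ret)
                    goto out;
            svc->inited = _gf_true;
    }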
* dict: dict_set_bin() should never free the pointer on error (Niels de Vos, 2015-07-24; 1 file, -1/+3)

  dict_set_bin() handles the pointer passed to it inconsistently.
  Depending on the errors that can occur, the pointer passed to the dict
  may or may not be free'd; there is no guarantee. It is cleaner to have
  the caller free the pointer it allocated when dict_set_bin() returns
  an error. When dict_set_bin() returns success, the given pointer will
  be free'd when dict_unref() calls data_destroy().

  Many callers of dict_set_bin() already take care of freeing the
  pointer on error. The ones that did not are corrected with this change
  too.

  Change-Id: I39a4f7ebc0cae6d403baba99307d7ce408f25966
  BUG: 1242280
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/11638
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
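A sketch of the calling convention this standardizes: on failure the caller keeps ownership of the buffer, on success the dict frees it at dict_unref() time. dict_set_bin, GF_CALLOC and GF_FREE are the real libglusterfs APIs; the key and sizes are illustrative:

    void *buf = GF_CALLOC (1, 16, gf_common_mt_char);
    if (!buf)
            goto out;

    ret = dict_set_bin (dict, "volume-id", buf, 16);
    if (ret) {
            GF_FREE (buf);  /* dict_set_bin no longer frees on error */
            buf = NULL;
            goto out;
    }
    /* success: the dict now owns 'buf' */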
* glusterd: Pass NULL in glusterd_svc_manager in glusterd_restart_bricks (Gaurav Kumar Garg, 2015-07-14; 1 file, -1/+1)

  On restarting glusterd, the quota daemon is not started when more than
  one volume is configured and quota is enabled only on the 2nd volume.
  This is because while restarting, glusterd restarts all the bricks,
  and during brick restart it starts the respective daemons by passing
  the volinfo of the first volume. Passing a volinfo to
  glusterd_svc_manager makes the daemon managers act on that volume's
  configuration, which is incorrect for per-node daemons. The fix is to
  pass a NULL volinfo while restarting bricks.

  Change-Id: I2602002a8ba7762fc1eb08123e79fbcf568ecab4
  BUG: 1242875
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/11658
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: use a real host name (instead of numeric) when we have one (Jeff Darcy, 2015-07-10; 1 file, -0/+6)

  Change-Id: Ie9cc201204d3d613e3e585cab066a07283db902c
  BUG: 1241274
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: http://review.gluster.org/11587
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd/snapd: Stop snapd daemon when glusterd is restarting (Avra Sengupta, 2015-07-08; 1 file, -0/+6)

  Stop the snapd daemon when glusterd comes back up, if uss is disabled
  or the volume is stopped.

  Change-Id: I4313ecaff19de30f3e9ea76881994509402ed5b0
  BUG: 1240952
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/11575
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* glusterd: Get the local txn_info based on trans_id in op_sm call backs. (anand, 2015-07-06; 1 file, -13/+16)

  Issue: when two or more transactions run concurrently in op_sm, the
  global op_info might get corrupted.

  Fix: Get the local txn_info based on trans_id instead of using the
  global txn_info for commands (rebalance, profile) which use op_sm in
  the originator.

  TODO: Handle errors properly in the call backs and completely remove
  the global op_info from op_sm.

  Change-Id: I9d61388acc125841ddc77e2bd560cb7f17ae0a5a
  BUG: 1229139
  Signed-off-by: anand <anekkunt@redhat.com>
  Reviewed-on: http://review.gluster.org/11120
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Porting left out log messages to new framework (Nandaja Varma, 2015-06-26; 1 file, -76/+113)

  Change-Id: I70d40ae3b5f49a21e1b93f82885cd58fa2723647
  BUG: 1235538
  Signed-off-by: Nandaja Varma <nandaja.varma@gmail.com>
  Reviewed-on: http://review.gluster.org/11388
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: use mkdir_p for creating rundir (Atin Mukherjee, 2015-06-24; 1 file, -7/+7)

  snapdsvc initialization is now dependent on all subsequent directory
  and volfile creations, as per commit 2b9efc9. However, this does not
  hold true for all cases: while importing a volume we need to start the
  service before the store creation. To avoid this dependency, use
  mkdir_p instead of mkdir to create rundir.

  Change-Id: Ib251043398c40f1b76378e3bc6d0c36c1fe4cca3
  BUG: 1234819
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/11364
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
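A hedged sketch of the change (mkdir_p is the real libglusterfs helper; the path variable and error handling are illustrative):

    /* mkdir_p creates every missing path component, so ordering
     * against other directory/volfile creation no longer matters */
    ret = mkdir_p (rundir, 0777, _gf_true);
    if (ret) {
            gf_log (this->name, GF_LOG_ERROR,
                    "Unable to create rundir %s", rundir);
            goto out;
    }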
* glusterd/uss/snapshot: Initialise snapdsvc after volfiles are created (Avra Sengupta, 2015-06-22; 1 file, -8/+13)

  The snapd svc should be initialised only after all relevant volfiles
  and directories are created.

  Change-Id: I96770cfc0b350599cd60ff74f5ecec08145c3105
  BUG: 1231197
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/11227
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* glusterd/tier: configure tier daemon during volume restart (Mohammed Rafi KC, 2015-06-19; 1 file, -9/+36)

  A rebalance daemon runs on every tier volume to promote/demote files.
  When the volume or glusterd is restarted, we need to configure the
  daemon.

  Change-Id: Ib565240a70edea2ec8bc1601c52b40c0783491d3
  BUG: 1225330
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  Reviewed-on: http://review.gluster.org/10933
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* glusterd: Removing sync lock and unlock inside rcu read critical section (anand, 2015-06-19; 1 file, -24/+4)

  Issue: Glusterd was crashing during peer probe.

  RCA: In glusterd we use the big lock, implemented on top of the
  synctask framework, for thread synchronization, and rcu locks for data
  consistency. The synctask framework swaps threads when no worker-pool
  thread is available; because of this, the rcu lock and rcu unlock
  could happen in different threads (urcu-bp will not allow this),
  resulting in a glusterd crash.

  Fix: Remove the sync lock and unlock inside the rcu read critical
  section, which was left out by the
  http://review.gluster.org/#/c/10285/ patch.

  Change-Id: Id358dfcc797335bcd3b491c3129017b2caa826eb
  BUG: 1232693
  Signed-off-by: anand <anekkunt@redhat.com>
  Reviewed-on: http://review.gluster.org/11276
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
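A hedged sketch of the corrected shape: keep the rcu read-side section free of anything that can yield the synctask and hence migrate threads. glusterd_peerinfo_find and the urcu rcu_read_lock/unlock pair are real APIs; the fragment is illustrative:

    rcu_read_lock ();
    peerinfo = glusterd_peerinfo_find (uuid, hostname);
    /* ... read-only use of peerinfo here; no synclock_lock()/
     * synclock_unlock(), no syncop calls, nothing that can swap
     * the running thread while the read section is open ... */
    rcu_read_unlock ();
    /* thread-migrating work happens only outside the section */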
* utils/glusterd: Porting to new logging framework (Nandaja Varma, 2015-06-11; 1 file, -294/+550)

  Change-Id: Iacb30eb675693bcdb3ef9de8e7f43892c5d79f6d
  BUG: 1194640
  Signed-off-by: Nandaja Varma <nandaja.varma@gmail.com>
  Reviewed-on: http://review.gluster.org/9967
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Poornima G <pgurusid@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: coverity fix for string overflow (Sakshi Bansal, 2015-06-10; 1 file, -2/+3)

  Coverity CID: 1222523
  Coverity CID: 1210990
  Coverity CID: 1229877
  Coverity CID: 1229876
  Coverity CID: 1124855

  Change-Id: Iba615724909216f923074cb4585940b919d02166
  BUG: 789278
  Signed-off-by: Sakshi Bansal <sabansal@redhat.com>
  Reviewed-on: http://review.gluster.org/9555
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/shared_storage: Provide a volume set option to create and mount the shared storage (Avra Sengupta, 2015-06-04; 1 file, -0/+31)

  Introducing a global volume set option (cluster.enable-shared-storage)
  which helps create and set up the shared storage meta volume.

  gluster volume set all cluster.enable-shared-storage enable

  On enabling this option, the system analyzes the number of peers in
  the cluster which are currently connected, and chooses three such
  peers (including the node the command is issued from). From these
  peers a volume (gluster_shared_storage) is created. Depending on the
  number of peers available, the volume is either a replica 3 volume (if
  there are 3 connected peers) or a replica 2 volume (if there are 2
  connected peers). "/var/run/gluster/ss_brick" serves as the brick path
  on each node for the shared storage volume. We also mount the shared
  storage at "/var/run/gluster/shared_storage" on all the nodes in the
  cluster as part of enabling this option. If there is only one node in
  the cluster, or if only one node is up, the command will fail.

  Once the volume is created and mounted, the maintenance of the volume,
  like adding bricks, removing bricks etc., is expected to be the onus
  of the user.

  On disabling the option, we give the user a warning, and on
  affirmation from the user we stop the shared storage volume and
  unmount it from all the nodes in the cluster.

  gluster volume set all cluster.enable-shared-storage disable

  Change-Id: Idd92d67b93f444244f99ede9f634ef18d2945dbc
  BUG: 1222013
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/10793
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: do not show pid of brick in volume status if brick is down. (Gaurav Kumar Garg, 2015-06-03; 1 file, -2/+4)

  glusterd currently shows the pid of a brick in volume status even
  after the brick has gone down. It should not show a pid for a brick
  that is down.

  Change-Id: I077100d96de381695b338382808bd8c37bf625c7
  BUG: 1223772
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10877
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* build: do not #include "config.h" in each file (Niels de Vos, 2015-05-29; 1 file, -4/+0)

  Instead of including config.h in each file, have it included from the
  compiler command line (-include option). When a .c file tested for a
  certain #define and config.h was not included, incorrect assumptions
  were made. With this change, that can not happen again.

  BUG: 1222319
  Change-Id: I4f9097b8740b81ecfe8b218d52ca50361f74cb64
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/10808
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* quota: quota.conf backward compatibility fix (vmallika, 2015-05-28; 1 file, -2/+3)

  In release-3.7 the format of quota.conf changed. This creates backward
  compatibility issues during upgrade:
  1) There can be an issue when peers sync between a 3.6 node and a 3.7
     node.
  2) If the user sets/removes a limit, the file format will differ
     between the 3.6 node and the 3.7 node.

  This patch fixes the issue:
  1) restrict the user from executing the quota enable, limit-usage and
     remove commands;
  2) write quota.conf in the older format if the op-version is less than
     3.6.

  Change-Id: Ib76f5a0a85394642159607a105cacda743e7d26b
  BUG: 1223739
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/10889
  Tested-by: NetBSD Build System
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
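A hedged sketch of fix 2: choose the on-disk layout from the cluster op-version. GD_OP_VERSION_3_6_0 follows glusterd's op-version constant scheme; both store helpers are hypothetical names standing in for the two code paths:

    if (conf->op_version < GD_OP_VERSION_3_6_0)
            /* legacy layout: bare 16-byte gfids, readable by 3.6 peers */
            ret = store_quota_conf_old_fmt (volinfo);   /* illustrative */
    else
            /* new layout: gfid records carrying a limit-type byte */
            ret = store_quota_conf_new_fmt (volinfo);   /* illustrative */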
* glusterd: function to create duplicate of volinfo should copy subvol_count (Mohammed Rafi KC, 2015-05-28; 1 file, -0/+1)

  When we create a duplicate volinfo from an existing volinfo, the
  subvol_count variable is not copied to the new volinfo.

  Change-Id: I943aa7fdf1a2ca5bf57522cb2402b6b3165501ac
  BUG: 1215002
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10761
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
* tiering: Correct errors in cli and glusterd (Mohammed Rafi KC, 2015-05-28; 1 file, -2/+1)

  Problem 1: volume info shows "Cold Bricks" instead of the tier type,
  e.g.:

    Volume Name: patchy2
    Type: Tier
    Volume ID: 28c25b8d-b8a1-45dc-b4b7-cbd0b344f58f
    Status: Started
    Number of Bricks: 3
    Transport-type: tcp
    Hot Tier :
    Hot Tier Type : Distribute
    Number of Bricks: 1
    Brick1: 10.70.1.35:/home/brick43
    Cold Bricks:
    Cold Tier Type : Distribute
    Number of Bricks: 2
    Brick2: 10.70.1.35:/home/brick19
    Brick3: 10.70.1.35:/home/brick16
    Options Reconfigured:

  Problem 2: Detach-tier sends the enums of Rebalance. detach-tier has
  its own enums to send with the detach-tier command; using those enums
  is more appropriate.

  Problem 3: The hot_brick count is set wrongly while copying the
  dictionary for the response.

  Change-Id: Icc054a999a679456881bc70511470d32ff8a86e4
  BUG: 1211264
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10768
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
* glusterd/tiering: Exchange tier info during glusterd handshake (Mohammed Rafi KC, 2015-05-28; 1 file, -0/+154)

  Change-Id: Ibc2f8eeb32d3e5dfd6945ca8a6d5f0f80a78ebac
  BUG: 1211264
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10449
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
* glusterd: fix double-free of rebalance process' rpc object (Krishnan Parthasarathi, 2015-05-26; 1 file, -3/+54)

  Change-Id: I0c79c4de47a160b1ecf3a8994eedc02e3f5002a9
  BUG: 1223338
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/10872
  Tested-by: NetBSD Build System
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
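The message is terse, so as a hedged illustration of the general double-free fix pattern (not necessarily the exact code of this patch): detach the stored pointer before dropping the reference, so a second cleanup path finds NULL instead of a freed object. rpc_clnt_unref is the real rpc-lib call; the defrag fields are glusterd's as best recalled:

    rpc_clnt_t *rpc = NULL;

    /* swap the pointer out while holding the owner's lock ... */
    rpc = volinfo->rebal.defrag->rpc;
    volinfo->rebal.defrag->rpc = NULL;

    /* ... then unref exactly once; a racing cleanup now sees NULL */
    if (rpc)
            rpc_clnt_unref (rpc);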
* cli/tiering: display hot tier, and cold tier separately (Mohammed Rafi KC, 2015-05-09; 1 file, -0/+31)

  cli commands display the brick information without a way to
  distinguish hot tier and cold tier. This patch changes all the
  cli-related output, without changing the corresponding xml output.

  This patch changes the following things:

  >> gluster volume info
  Volume Name: patchy
  Type: Tier
  Volume ID: 7745d367-811a-4fe9-a500-d04e7afa94bf
  Status: Created
  Number of Bricks: 3 x 2 = 6
  Transport-type: tcp
  Hot Bricks:
  Brick1: hostname:/home/brick21
  Brick2: hostname:/home/brick20
  Cold Bricks:
  Brick3: hostname:/home/brick19
  Brick4: hostname:/home/brick16
  Brick5: hostname:/home/brick17
  Brick6: hostname:/home/brick18

  >> gluster volume status
  Status of volume: patchy
  Gluster process                  TCP Port  RDMA Port  Online  Pid
  ------------------------------------------------------------------------------
  Hot Bricks:
  Brick hostname:/home/brick21     49152     0          Y       4690
  Brick hostname:/home/brick20     49153     0          Y       4707
  Cold Bricks:
  Brick hostname:/home/brick19     49154     0          Y       4724
  Brick hostname:/home/brick16     49155     0          Y       4741
  Brick hostname:/home/brick17     49156     0          Y       4758
  Brick hostname:/home/brick18     49157     0          Y       4775
  NFS Server on localhost          2049      0          Y       4793

  Task Status of Volume patchy
  ------------------------------------------------------------------------------
  There are no active volume tasks

  >> gluster volume status patchy detail
  Status of volume: patchy
  Hot Bricks:
  ------------------------------------------------------------------------------
  Brick            : Brick hostname:/home/brick21
  TCP Port         : 49162
  RDMA Port        : 0
  Online           : Y
  Pid              : 22677
  File System      : ext4
  Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
  Mount Options    : rw,seclabel,relatime,data=ordered
  Inode Size       : 256
  Disk Space Free  : 127.3GB
  Total Disk Space : 165.4GB
  Inode Count      : 11026432
  Free Inodes      : 10998043
  ------------------------------------------------------------------------------
  Brick            : Brick hostname:/home/brick20
  TCP Port         : 49161
  RDMA Port        : 0
  Online           : Y
  Pid              : 22660
  File System      : ext4
  Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
  Mount Options    : rw,seclabel,relatime,data=ordered
  Inode Size       : 256
  Disk Space Free  : 127.3GB
  Total Disk Space : 165.4GB
  Inode Count      : 11026432
  Free Inodes      : 10998043
  Cold Bricks:
  ------------------------------------------------------------------------------
  Brick            : Brick hostname:/home/brick19
  TCP Port         : 49157
  RDMA Port        : 0
  Online           : Y
  Pid              : 22501
  File System      : ext4
  Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
  Mount Options    : rw,seclabel,relatime,data=ordered
  Inode Size       : 256
  Disk Space Free  : 127.3GB
  Total Disk Space : 165.4GB
  Inode Count      : 11026432
  Free Inodes      : 10998043
  ------------------------------------------------------------------------------
  Brick            : Brick hostname:/home/brick16
  TCP Port         : 49158
  RDMA Port        : 0
  Online           : Y
  Pid              : 22518
  File System      : ext4
  Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
  Mount Options    : rw,seclabel,relatime,data=ordered
  Inode Size       : 256
  Disk Space Free  : 127.3GB
  Total Disk Space : 165.4GB
  Inode Count      : 11026432
  Free Inodes      : 10998043
  ------------------------------------------------------------------------------
  Brick            : Brick hostname:/home/brick17
  TCP Port         : 49159
  RDMA Port        : 0
  Online           : Y
  Pid              : 22535
  File System      : ext4
  Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
  Mount Options    : rw,seclabel,relatime,data=ordered
  Inode Size       : 256
  Disk Space Free  : 127.3GB
  Total Disk Space : 165.4GB
  Inode Count      : 11026432
  Free Inodes      : 10998043
  ------------------------------------------------------------------------------
  Brick            : Brick hostname:/home/brick18
  TCP Port         : 49160
  RDMA Port        : 0
  Online           : Y
  Pid              : 22552
  File System      : ext4
  Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
  Mount Options    : rw,seclabel,relatime,data=ordered
  Inode Size       : 256
  Disk Space Free  : 127.3GB
  Total Disk Space : 165.4GB
  Inode Count      : 11026432
  Free Inodes      : 10998043

  Change-Id: I7d584eb8782129c12876cce2ba8ffba6c0a620bd
  BUG: 1206546
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10328
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* geo-rep: Update Not Started to Created in code and doc (Aravinda VK, 2015-05-09; 1 file, -1/+1)

  The "Not Started" status is now "Created"; the "Not Started" string
  has been replaced in code and doc.

  Change-Id: If7d606c2cc8156e41291e7eebe9d0da4ad7ac28d
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  BUG: 1219937
  Reviewed-on: http://review.gluster.org/10698
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
* glusterd: add counter support for tiered volumes (Dan Lambright, 2015-05-08; 1 file, -0/+55)

  This fix adds support to view the number of promoted or demoted files
  from the cli. The mechanism is isomorphic to checking the status of
  volumes being rebalanced:

  gluster volume rebalance <vol> tier status

  Change-Id: I1b11ca27355ceec36c488967c23531202030e205
  BUG: 1213063
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/10292
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* tiering: Do not allow some operations on tiered volume (Mohammed Rafi KC, 2015-05-07; 1 file, -0/+65)

  Some operations like add-brick, remove-brick, rebalance and
  replace-brick are not supported on a tiered volume, but there was no
  code-level check for this. This patch adds those checks.

  Change-Id: I12689f4e902cf0cceaf6f7f29c71057305024977
  BUG: 1205624
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10349
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
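A hedged sketch of the kind of guard added (GF_CLUSTER_TYPE_TIER is the real volume-type enum; the error string and placement in the stage path are illustrative):

    if (volinfo->type == GF_CLUSTER_TYPE_TIER) {
            snprintf (errstr, sizeof (errstr),
                      "Operation not supported on tiered volume %s",
                      volinfo->volname);
            ret = -1;
            goto out;   /* fail the stage before anything starts */
    }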
* glusterd: remove replace brick with data migration support from cli/glusterd (Gaurav Kumar Garg, 2015-05-07; 1 file, -138/+0)

  The replace-brick operation with data migration support has been
  deprecated in gluster. With this fix the replace-brick command
  supports only one command:

  gluster volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}

  Change-Id: Ib81d49e5d8e7eaa4ccb5830cfec2bc081191b43b
  BUG: 1094119
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10101
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* quota/marker: turn off inode quotas by default (vmallika, 2015-05-06; 1 file, -0/+7)

  Inode quota is a new feature implemented in glusterfs-3.7. If quota is
  enabled in an older version and the cluster is upgraded to a new
  version, we can hit a setxattr spike during self-heal of inode quotas.
  So, when quota is enabled, inode quotas are turned off with an xlator
  option.

  With this patch, we still account for inode quotas, but only when a
  write operation is performed on a particular file. Users will be able
  to query inode quotas once the inode-quota xlator option is enabled.

  Change-Id: I52fb28bf7024989ce7bb08ac63a303bf3ec1ec9a
  BUG: 1209430
  Signed-off-by: vmallika <vmallika@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/10152
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Enable readdir-ahead by default on new volumes (anand, 2015-05-04; 1 file, -1/+17)

  With gluster-3.7, 'performance.readdir-ahead' will be enabled by
  default on new volumes when the cluster op-version supports it.

  Change-Id: I44e76a69e7d1c11e6dfad72c941caf887bb810ee
  BUG: 1216187
  Signed-off-by: anand <anekkunt@redhat.com>
  Reviewed-on: http://review.gluster.org/10433
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* quota: support for inode quota in quota.conf (vmallika, 2015-05-04; 1 file, -79/+39)

  Currently when a quota limit is set, the corresponding gfid is written
  to quota.conf. This patch supports storing inode-quota limits in
  quota.conf, and stores an additional byte for each gfid to
  differentiate between usage quota limits and inode quota limits.

  Change-Id: I444d7399407594edd280e640681679a784d4c46a
  BUG: 1202244
  Signed-off-by: vmallika <vmallika@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/10069
  Tested-by: NetBSD Build System
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
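A hedged sketch of the record layout described above, 16 bytes of gfid plus one type byte (sys_write is the real syscall wrapper; the type values are illustrative placeholders, not the constants from the patch):

    char record[17] = {0};

    memcpy (record, gfid, 16);
    record[16] = limit_type;   /* e.g. 1 = usage limit, 2 = inode limit */

    ret = sys_write (fd, record, sizeof (record));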
* glusterd: gluster volume status should show status of bitrot and scrubber daemon (Gaurav Kumar Garg, 2015-05-04; 1 file, -0/+8)

  The command gluster volume status <VOLNAME> should show the status of
  the bitrot and scrubber daemons and their pid information.

  Along with displaying bitrot and scrubber daemon information in the
  gluster volume status command, there should be commands to show their
  individual status separately. Those commands are the following:

  command to show only bitd daemon information:
  gluster volume status <VOLNAME> bitd

  command to show only scrubber daemon information:
  gluster volume status <VOLNAME> scrub

  Change-Id: Id86aae1156c8c599347c98e2a538f294d37376e4
  BUG: 1209752
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10175
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Kaushal M <kaushal@redhat.com>
* glusterd: do not pass volinfo in glusterd_svc_manager function (Gaurav Kumar Garg, 2015-04-30; 1 file, -20/+2)

  On restart, glusterd first starts all the bricks present in the volume
  and then starts all the services. While starting the services it may
  pass volinfo as NULL, which causes an assert failure in the
  glusterd_bitdsvc_manager function and a glusterd crash.

  Change-Id: Ia14cf5022da88516cdd576eb2d1e0e7b17a3782b
  BUG: 1207029
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10241
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* libglusterfs: Implementation of sync lock as recursive lock to avoid crash. (anand, 2015-04-28; 1 file, -1/+3)

  Problem: In glusterd we use the big lock, implemented on top of the
  synctask framework, for thread synchronization, and rcu locks for data
  consistency. The synctask framework swaps threads when no worker-pool
  thread is available; because of this, the rcu lock and rcu unlock
  could happen in different threads (urcu-bp will not allow this),
  resulting in a glusterd crash.

  Fix: To avoid releasing the sync lock (big lock) in the middle of an
  rcu critical section, implement the sync lock as a recursive lock.

  More details:
  link: http://www.spinics.net/lists/gluster-devel/msg14632.html

  Change-Id: I2b56c1caf3f0470f219b1adcaf62cce29cdc6b88
  BUG: 1211640
  Signed-off-by: anand <anekkunt@redhat.com>
  Reviewed-on: http://review.gluster.org/10285
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
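The visible effect on callers is small: the big lock is initialized with a recursive attribute, so re-entry by the owning task bumps a depth counter instead of deadlocking or forcing an unlock inside an rcu section. A sketch as best recalled (treat the exact attribute spelling as an assumption):

    /* glusterd init: the big lock becomes a recursive sync lock */
    synclock_init (&priv->big_lock, SYNC_LOCK_RECURSIVE);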
* arbiter: load arbiter xlator on every 3rd brick of a replica 3 AFR subvol (Ravishankar N, 2015-04-27; 1 file, -0/+1)

  The logic for adding the 'glusterd_brickinfo->group' member and using
  it to find the brick position has been taken from
  http://review.gluster.org/#/c/9919. Thanks to Jeff Darcy for that.

  This patch is a part of the arbiter logic implementation for 3-way
  AFR, details of which can be found at
  http://review.gluster.org/#/c/9656/

  Change-Id: Idbfe4f29ee8e098e0102def8f38b32314316b188
  BUG: 1199985
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/10257
  Tested-by: NetBSD Build System
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
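A hedged sketch of the placement rule in the subject: within each replica-3 set, only the last (3rd) brick gets the arbiter xlator. The modulo arithmetic is the idea; the field names are illustrative rather than the patch's:

    /* brick_index counts bricks in volfile order */
    if (volinfo->arbiter_count &&
        (brick_index % volinfo->replica_count) ==
         volinfo->replica_count - 1) {
            /* load the arbiter xlator into this brick's graph */
    }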
* glusterd: incorrect rsp.op_ret handling in volume get in cli (Atin Mukherjee, 2015-04-26; 1 file, -1/+0)

  Change-Id: Iabe99c06166578fc90121e7cfdca4a6a3f5328ae
  BUG: 1211132
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/10229
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: support for tier volumes 'detach start' and 'detach commit' (Dan Lambright, 2015-04-22; 1 file, -1/+11)

  These commands work in a manner analogous to rebalancing when removing
  a brick. The existing migration daemon detects "detach start" and
  switches to moving data off the hot tier. While in this state all
  lookups are directed to the cold tier.

  gluster v detach-tier <vol> start
  gluster v detach-tier <vol> commit

  The status and stop cli commands shall be submitted separately.

  Change-Id: I24fda5cc3ba74f5fb8aa9a3234ad51f18b80a8a0
  BUG: 1205540
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Signed-off-by: root <root@localhost.localdomain>
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/10108
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Tested-by: NetBSD Build System
* glusterd: Replace transaction peers lists (Kaushal M, 2015-04-13; 1 file, -19/+0)

  Transaction peer lists were used in GlusterD to track peers belonging
  to a transaction. This was needed to prevent newly added peers from
  performing partial transactions, which could be incorrect. It was
  accomplished by creating a separate transaction peers list at the
  beginning of every transaction. A transaction peers list referenced
  the peerinfo data structures of the peers which were present at the
  beginning of the transaction.

  RCU protection of peerinfos referenced by the transaction peers list
  is a hard problem and difficult to do correctly. To have proper RCU
  protection of peerinfos, the transaction peers lists have been
  replaced by an alternative method to identify peers that belong to a
  transaction. The alternative method is to use the global peers list
  along with generation numbers to identify peers that should belong to
  a transaction.

  This change introduces a global peer-list generation number, and a
  generation number for each peerinfo object. Whenever a peerinfo object
  is created, the global generation number is bumped, and the peerinfo's
  generation number is set to the bumped global generation.

  With the above changes, the algorithm to identify peers belonging to a
  transaction with RCU protection is as follows (sketched in code after
  this entry):
  - At the beginning of a transaction, the current global generation
    number is saved
  - To identify if a peer belongs to the transaction:
    - Start an RCU read critical section
    - For each peer in the global peers list:
      - If the peer's generation number is not greater than the saved
        generation number, continue with the action on the peer
    - End the RCU read critical section

  The above algorithm guarantees that:
  - The peer list is not modified while a transaction is iterating
    through it
  - The transaction actions are only done on peers that were present
    when the transaction started

  But, as a transaction could iterate over the peers list multiple
  times, the algorithm cannot guarantee that the same set of peers will
  be selected every time. A peer could get deleted between two
  iterations of the list within a transaction. This problem existed with
  the transaction peers lists as well, but unlike before it will now not
  lead to invalid memory access and potential crashes. This problem will
  be addressed separately.

  This change was developed on the git branch at [1]. This commit is a
  combination of the following commits on the development branch:
    52ded5b Add timespec_cmp
    44aedd8 Add create timestamp to peerinfo
    7bcbea5 Fix some silly mistakes
    13e3241 Add start time to opinfo
    17a6727 Use timestamp comparisions to identify xaction peers instead
            of a xaction peer list
    3be05b6 Correct check for peerinfo age
    70d5b58 Use read-critical sections for peer list iteration
    ba4dbca Use peerinfo timestamp checks in op-sm instead of xaction
            peer list
    d63f811 Add more peer status checks when iterating peers list in
            glusterd-syncop
    1998a2a Timestamp based peer list traversal of mgmtv3 xactions
    f3c1a42 Remove transaction peer lists
    b8b08ee Remove unused labels
    32e5f5b Remove 'npeers' usage
    a075fb7 Remove 'npeers' from mgmt-v3 framework
    12c9df2 Use generation number instead of timestamps.
    9723021 Remove timespec_cmp
    80ae2c6 Remove timespec.h include
    a9479b0 Address review comments on 10147/4

  [1]: https://github.com/kshlm/glusterfs/tree/urcu

  Change-Id: I9be1033525c0a89276f5b5d83dc2eb061918b97f
  BUG: 1205186
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/10147
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
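A hedged sketch of the selection loop the algorithm above describes. The rcu_read_lock/unlock pair and cds_list_for_each_entry_rcu are real urcu APIs used by glusterd; the field names are as best recalled and should be treated as illustrative:

    /* saved from the global counter when the transaction started */
    uint32_t txn_generation = opinfo.txn_generation;

    rcu_read_lock ();
    cds_list_for_each_entry_rcu (peerinfo, &priv->peers, uuid_list) {
            if (peerinfo->generation > txn_generation)
                    continue;   /* peer joined after the txn began */
            if (!peerinfo->connected)
                    continue;
            /* ... perform the transaction action on this peer ... */
    }
    rcu_read_unlock ();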
* glusterd: bitd daemon should not start on the node which doesn't have any brick (Gaurav Kumar Garg, 2015-04-10; 1 file, -2/+2)

  If a user enables bitrot from node1, which has a brick, then glusterd
  starts the bitd daemon on node1 as well as on node2, which does not
  have any brick (node1 and node2 being part of the same cluster). With
  this fix glusterd will not start the bitd daemon on nodes that don't
  have a brick.

  Change-Id: Ic1c68d204221d369d89d628487cdd5957964792e
  BUG: 1207029
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10071
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Venky Shankar <vshankar@redhat.com>
* snapshot: Fetching wrong peer data during probe (Mohammed Rafi KC, 2015-04-10; 1 file, -5/+6)

  When adding a new friend to the cluster, the snap volfiles are
  populated with wrong information for reconfigured options: for snap
  volumes, the reconfigured data is filled in from the regular volume's
  data, because the wrong dictionary key is used here.

  Change-Id: I659ebdc48c33419a2b825f26ce1f174abc8ea7dd
  BUG: 1204636
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/9969
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/snapshot : While snapshot restore, compute quota checksum. (Sachin Pandit, 2015-04-10; 1 file, -0/+1)

  Problem: During snapshot restore we copy the quota conf file anyway;
  after that we need to compute its checksum. If we don't, there might
  be a checksum mismatch during the glusterd handshake.

  Solution: Compute a checksum file for the quota conf file if it is
  present.

  Change-Id: Ic4a6567c6ede9923443abf4ca59380679be88094
  BUG: 1202436
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/9901
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Change glusterd_list_add_order to add entry in ascending order (Mohammed Rafi KC, 2015-04-08; 1 file, -2/+2)

  Change-Id: I0f82b1b5ad37e06135e9af33a4b5342ddde3ca94
  BUG: 1207132
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10046
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
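The subject is all the message gives us; a hedged sketch of an ascending-order insert of the shape glusterd_list_add_order implies (cds_list_for_each and cds_list_add_tail are real urcu list helpers; the signature and comparator are illustrative):

    void
    list_add_order (struct cds_list_head *new, struct cds_list_head *head,
                    int (*compare) (struct cds_list_head *,
                                    struct cds_list_head *))
    {
            struct cds_list_head *pos = NULL;

            cds_list_for_each (pos, head) {
                    if (compare (new, pos) <= 0)
                            break;  /* first entry not smaller than 'new' */
            }
            /* adding at 'pos's tail places 'new' just before it,
             * keeping the list in ascending order; if no such entry
             * exists, pos == head and 'new' is appended at the end */
            cds_list_add_tail (new, pos);
    }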