path: root/xlators/mgmt/glusterd/src/glusterd-utils.h
* glusterd/cli: cli to get local state representation from glusterd (Samikshan Bairagya, 2016-08-26; 1 file changed, +26/-0)

  Currently there is no CLI that can be used to get the local state
  representation of the cluster as maintained in glusterd in a readable
  as well as parseable format. The CLI added has the following usage:

      # gluster get-state [daemon] [odir <path/to/output/dir>] [file <filename>]

  This dumps data points that reflect the local state representation of
  the cluster as maintained in glusterd (no other daemons are supported
  as of now) to a file inside the specified output directory. The
  default output directory and filename are /var/run/gluster and
  glusterd_state_<timestamp> respectively. The option for specifying
  the daemon name leaves room to add support for other daemons in the
  future.

  The following data points are captured as of now to represent the
  state from the local glusterd's point of view:

  * Peer:
    - Primary hostname
    - uuid
    - state
    - connection status
    - List of hostnames

  * Volumes:
    - name, id, transport type, status
    - counts: bricks, snap, subvol, stripe, arbiter, disperse,
      redundancy
    - snapd status
    - quorum status
    - tiering related information
    - rebalance status
    - replace bricks status
    - snapshots

  * Bricks:
    - Path, hostname (this info is shown for all bricks)
    - port, rdma port, status, mount options, filesystem type and
      signed-in status for bricks running locally

  * Services:
    - name, online status for initialised services

  * Others:
    - Base port, last allocated port
    - op-version
    - MYUUID

  Change-Id: I4a45cc5407ab92d8afdbbd2098ece851f7e3d618
  BUG: 1353156
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: http://review.gluster.org/14873
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
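  As a minimal illustration of the dump flow described above (not the
  glusterd implementation; the section and key names below are
  assumptions based on the data points listed in this message), a
  self-contained C sketch writing a timestamped state file under the
  default output directory:

      #include <stdio.h>
      #include <time.h>

      int main(void)
      {
          char path[256];

          /* default location: /var/run/gluster/glusterd_state_<timestamp> */
          snprintf(path, sizeof(path),
                   "/var/run/gluster/glusterd_state_%ld", (long) time(NULL));

          FILE *fp = fopen(path, "w");
          if (!fp)
              return 1;
          fprintf(fp, "[Global]\n");              /* section name assumed */
          fprintf(fp, "MYUUID: %s\n", "<uuid>");  /* placeholder value */
          fprintf(fp, "op-version: %d\n", 30800); /* illustrative number */
          fclose(fp);
          printf("state written to %s\n", path);
          return 0;
      }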
* glusterd: Fix gsyncd upgrade issue (Kotresh HR, 2016-07-13; 1 file changed, +3/-1)

  Problem: gluster upgrade is not generating new volfiles.

  Cause: During upgrade, "glusterd --xlator-option *.upgrade=on -N" is
  run to generate new volfiles. It is run after the 'glusterfs' rpm
  installation. The above command fails during upgrade if
  geo-replication is installed, because on glusterd start the 'gsyncd'
  binary is called to configure geo-replication related stuff. Since
  the 'glusterfs' rpm is installed prior to the 'geo-rep' rpm, the
  'gsyncd' binary used during the glusterd upgrade command is of the
  old version, and hence it fails before generating new volfiles.

  Solution: Don't call the geo-replication configuration during
  upgrade/downgrade. Geo-replication configuration happens during the
  start of glusterd after the upgrade.

  Change-Id: Id58ea44ead9f69982f86fb68dc5b9ee3f6cd11a1
  BUG: 1355628
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/14898
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
* glusterd: compare uuid instead of hostname address resolution (Atin Mukherjee, 2016-07-05; 1 file changed, +2/-3)

  In glusterd_get_brickinfo () the brick's hostname is
  address-resolved. This adds unnecessary latency since it uses calls
  like getaddrinfo (). Instead, given that the local brick's uuid is
  already known, a comparison of MY_UUID and brickinfo->uuid is much
  more lightweight than the previous approach.

  On scale testing, where a cluster hosting ~400 volumes spans 4 nodes,
  if a node goes for a reboot, a few of the bricks don't come up. After
  a few days of analysis it was found that glusterd_pmap_signin () was
  taking a significant amount of latency, and further code walkthrough
  revealed this unnecessary address resolution. Applying this fix
  solves the issue, and now all the brick processes come up on a node
  reboot.

  Change-Id: I299b8660ce0da6f3f739354f5c637bc356d82133
  BUG: 1352279
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/14849
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
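  To see why this is cheaper, here is a hedged, self-contained C sketch
  of the comparison (using libuuid directly rather than glusterd's
  internal helpers; brick_is_local and the stand-in for MY_UUID are
  illustrative names): a 16-byte uuid comparison involves no
  getaddrinfo() and no resolver round trip.

      #include <stdio.h>
      #include <string.h>
      #include <uuid/uuid.h>           /* link with -luuid */

      static uuid_t my_uuid;           /* stands in for glusterd's MY_UUID */

      /* Returns 1 if the brick belongs to this node. */
      static int brick_is_local(const uuid_t brick_uuid)
      {
          /* uuid_compare() is a plain 16-byte comparison: no DNS,
           * no getaddrinfo() latency during pmap signin. */
          return uuid_compare(my_uuid, brick_uuid) == 0;
      }

      int main(void)
      {
          uuid_t brick;

          uuid_generate(my_uuid);
          memcpy(brick, my_uuid, sizeof(uuid_t));
          printf("local? %d\n", brick_is_local(brick));
          return 0;
      }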
* cli/glusterd: add/remove brick fixes for arbiter volumes (Ravishankar N, 2016-05-19; 1 file changed, +6/-0)

  1. Provide a command to convert replica 2 volumes to arbiter volumes.
     Existing self-heal logic will automatically heal the file
     hierarchy into the arbiter brick; the progress of the heal can be
     monitored using the heal info command.

     Syntax: gluster volume add-brick <VOLNAME> replica 3 arbiter 1
             <HOST:arbiter-brick-path>

  2. Add checks when removing bricks from arbiter volumes:
     - When converting from an arbiter to a replica 2 volume, allow
       only the arbiter brick to be removed.
     - When converting from an arbiter to a plain distribute volume,
       allow only if the arbiter is one of the bricks that is removed.

  3. Some clean-up:
     - Use GD_MSG_DICT_GET_SUCCESS instead of GD_MSG_DICT_GET_FAILED to
       log messages that are not failures.
     - Remove the unused variable `brick_list`.
     - Move 'brickinfo->group' related functions to glusterd-utils.

  Change-Id: Ic87b8c7e4d7d3ab03f93e7b9f372b314d80947ce
  BUG: 1318289
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/14126
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: copy real_path from older brickinfo during brick import (Atin Mukherjee, 2016-05-18; 1 file changed, +2/-1)

  In glusterd_import_new_brick () new_brickinfo->real_path will not be
  populated the first time, and hence if the underlying file system is
  bad for the same brick, import will fail, resulting in inconsistent
  configuration data. The fix is to populate real_path from the old
  brickinfo object.

  Also, there were many cases where we were unnecessarily calling
  realpath(), and that may cause a failure. For example, if a remove
  brick is executed with a brick whose underlying file system has
  crashed, remove-brick fails since the realpath() call fails. We don't
  need to call realpath() here, as the value is of no use. Hence
  passing construct_realpath as _gf_false in
  glusterd_volume_brickinfo_get_by_brick () is a must in such cases.

  Change-Id: I7ec93871dc9e616f5d565ad5e540b2f1cacaf9dc
  BUG: 1335531
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/14306
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* quota/glusterd: enhance quota enable and disable process (vmallika, 2016-04-29; 1 file changed, +4/-0)

  Previously the quota crawl was done from a single mount point, which
  is a very slow process if a huge number of files exist in the volume.

  This RFE now spawns a crawl process for each brick in the volume, and
  files are looked up in parallel, independently for each brick. This
  improves the speed of the crawling process for the entire file
  system.

  This patch also fixes the problems below:

  * Previously, the mount directory was created under '/tmp'. If
    someone tries to clean up the '/tmp' directory, there is a real
    danger of losing volume data. So create the mount point under
    /var/run/gluster/tmp instead.

  * Previously, the file-system crawl was performed from all the nodes,
    which is a redundant operation and degrades performance. This
    problem is fixed with this patch.

  Change-Id: Icabedeb44182139ace9c8106793803122388cab8
  BUG: 1290766
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/12952
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
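  A minimal sketch of the per-brick parallelism described above,
  assuming a fixed brick list (the paths and the crawl body are
  placeholders, not the quota implementation): one crawler process is
  spawned per brick so the bricks are scanned independently.

      #include <stdio.h>
      #include <sys/types.h>
      #include <sys/wait.h>
      #include <unistd.h>

      static const char *bricks[] = { "/bricks/b1", "/bricks/b2",
                                      "/bricks/b3" };

      int main(void)
      {
          /* one crawler per brick, all running in parallel */
          for (int i = 0; i < 3; i++) {
              pid_t pid = fork();
              if (pid == 0) {
                  /* a real crawler would walk this brick and trigger
                   * quota accounting lookups; we only report the job */
                  printf("crawling %s (pid %d)\n", bricks[i], (int) getpid());
                  _exit(0);
              }
          }
          while (wait(NULL) > 0)  /* reap all crawlers */
              ;
          return 0;
      }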
* glusterd: populate brickinfo->real_path conditionally (Atin Mukherjee, 2016-04-11; 1 file changed, +5/-2)

  glusterd_brickinfo_new_from_brick () is called from multiple places,
  and one of them is glusterd_brick_rpc_notify, where it is very well
  possible that an underlying brick's file system has crashed and a
  disconnect event has been received. In this case glusterd tries to
  build the brickinfo from the brickid in the RPC request; however,
  this fails as glusterd_brickinfo_new_from_brick () fails in
  realpath.

  The fix is to skip populating real_path if it is a disconnect event.

  Change-Id: I9d9149c64a9cf2247abb731f219c1b1eef037960
  BUG: 1325841
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/13965
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
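  A small self-contained C sketch of why the conditional matters
  (function and flag names are illustrative, not glusterd's):
  realpath() returns NULL once the brick's filesystem is gone, so a
  caller handling a disconnect must be able to skip resolution and fall
  back to the stored path.

      #include <stdio.h>
      #include <stdlib.h>
      #include <limits.h>

      static int fill_real_path(const char *brick_path, char *out,
                                int construct_realpath)
      {
          if (!construct_realpath) {        /* e.g. handling a disconnect */
              snprintf(out, PATH_MAX, "%s", brick_path);
              return 0;
          }
          if (!realpath(brick_path, out)) { /* crashed FS: NULL + errno */
              perror("realpath");
              return -1;
          }
          return 0;
      }

      int main(void)
      {
          char resolved[PATH_MAX];

          if (fill_real_path("/bricks/dead-fs/brick1", resolved, 1) != 0)
              fill_real_path("/bricks/dead-fs/brick1", resolved, 0);
          printf("using path: %s\n", resolved);
          return 0;
      }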
* glusterd / afr : Enable auto heal when replica count increases (Anuradha Talur, 2016-03-21; 1 file changed, +10/-0)

  In replicate volumes, when a brick is added to a replicate group, a
  heal to the new brick should be triggered. Also, the new brick should
  not be considered as a source for healing till it is up to date.
  Previously, extended attributes had to be set manually on the bricks
  for this to happen. This is part 1 of automating this process.

  Change-Id: I29958448618372bfde23bf1dac5dd23dba1ad98f
  BUG: 1276203
  Signed-off-by: Anuradha Talur <atalur@redhat.com>
  Reviewed-on: http://review.gluster.org/12451
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
* Tier: "tier start force" command implementation (hari gowtham, 2015-12-22; 1 file changed, +1/-1)

  The start command doesn't restart the tier daemon if the daemon is
  already running on one node. Hence, to bring up the tier daemon on
  the nodes where it is down, the force command is implemented: it
  skips the check for a running tierd.

  Change-Id: I0037d3e5ecfe56637d0da201a97903c435d26436
  BUG: 1292112
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/12983
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* glusterd: Disallow peer with existing volumes to be probed in cluster (Atin Mukherjee, 2015-12-06; 1 file changed, +3/-0)

  As of now we allow a peer to get added to the trusted storage pool
  even if it has a volume configured. This is definitely not a
  supported configuration and can lead to issues, as we never claim to
  support merging clusters. A single node running a standalone volume
  can be considered a cluster.

  Change-Id: Id0cf42d6e5f20d6bfdb7ee19d860eee67c7c45be
  BUG: 1287992
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/12864
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd/tier: Reset to reconfigured values after detach-tier (Mohammed Rafi KC, 2015-11-26; 1 file changed, +3/-0)

  After a detach-tier commit, we have to reset the options reconfigured
  for the tier volume.

  Change-Id: Iae0210259720d6ac14ccc0cc339dc9f54a0c4571
  BUG: 1285046
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12736
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* glusterd: cli command implementation for bitrot scrub status (Gaurav Kumar Garg, 2015-11-19; 1 file changed, +5/-0)

  The CLI command for bitrot scrub status will be:

      gluster volume bitrot <volname> scrub status

  The above command shows the statistics of the bitrot scrubber. Upon
  execution it shows some common scrubber tunable values of volume
  <VOLNAME>, followed by the scrubber statistics of the individual
  nodes. Sample output for a single node:

      Volume name : <VOLNAME>
      State of scrub: Active
      Scrub frequency: biweekly
      Bitrot error log location: /var/log/glusterfs/bitd.log
      Scrubber error log location: /var/log/glusterfs/scrub.log
      =========================================================
      Node name:
      Number of Scrubbed files:
      Number of Unsigned files:
      Last completed scrub time:
      Duration of last scrub:
      Error count:
      =========================================================

  This is just infrastructure. The list of bad files, last scrub time,
  and error count values will be taken care of by the
  http://review.gluster.org/#/c/12503/ and
  http://review.gluster.org/#/c/12654/ patches.

  Change-Id: I3ed3c7057c9d0c894233f4079a7f185d90c202d1
  BUG: 1207627
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10231
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* mgmt/gluster: Handle tier brick volgen (Pranith Kumar K, 2015-11-19; 1 file changed, +4/-0)

  The index xlator watches only some xattrs, based on the type of
  volume, i.e. disperse/afr. When the volume becomes tiered, index does
  not add these options to the volfile, leading to no maintenance of
  indices and thus no proactive self-heals. With this fix, we write
  brick volfiles considering the type of volume they belong to.

  BUG: 1250803
  Change-Id: Ibe8f2d4ad5cb350306ab7ca0753e0f9a40b96a26
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/12595
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* tier/shd: inline warning when compiled with gcc v.5 (Mohammed Rafi KC, 2015-10-13; 1 file changed, +1/-1)

  Change-Id: I487a26263d6e940eed364a831e99f9b8390bc96a
  BUG: 1226881
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12342
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Anoop C S <anoopcs@redhat.com>
  Tested-by: Anoop C S <anoopcs@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* tier/shd: create shd volfile for tiering (Mohammed Rafi KC, 2015-10-11; 1 file changed, +7/-0)

  Currently the shd graph will only start if it is a replicate or
  disperse volume. But in the case of tiering, the volume type will be
  tier. So we need to start shd if either the cold or the hot tier is
  compatible with an shd volume.

  Change-Id: Ic689746ac7d2fc6a9eccdabd8518dc9139829de2
  BUG: 1261276
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/11962
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* tiering/glusterd: start tier daemon during volume start (Mohammed Rafi KC, 2015-08-17; 1 file changed, +4/-0)

  The tier daemon should always run with a tier volume. If the volume
  is stopped and started again, we would need to manually start the
  tier daemon; instead, this patch automatically triggers the tier
  process along with volume start.

  A snapshot-restored volume will not have node_state_info, so we need
  to create and store it dynamically.

  Change-Id: I659387c914bec7a1b6929ee5cb61f7b406402075
  BUG: 1238593
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/11525
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Get the local txn_info based on trans_id in op_sm callbacks (anand, 2015-07-06; 1 file changed, +0/-4)

  Issue: when two or more transactions are running concurrently in
  op_sm, the global op_info might get corrupted.

  Fix: Get the local txn_info based on trans_id, instead of using the
  global txn_info, for commands (rebalance, profile) which use op_sm in
  the originator.

  TODO: Handle errors properly in callbacks and completely remove the
  global op_info from op_sm.

  Change-Id: I9d61388acc125841ddc77e2bd560cb7f17ae0a5a
  BUG: 1229139
  Signed-off-by: anand <anekkunt@redhat.com>
  Reviewed-on: http://review.gluster.org/11120
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/tier: configure tier daemon during volume restart (Mohammed Rafi KC, 2015-06-19; 1 file changed, +7/-0)

  A rebalance daemon runs on every tier volume for promoting/demoting
  files. When the volume/glusterd is restarted, we need to configure
  the daemon.

  Change-Id: Ib565240a70edea2ec8bc1601c52b40c0783491d3
  BUG: 1225330
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  Reviewed-on: http://review.gluster.org/10933
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* snapshot: Fix finding brick mount path logic (Avra Sengupta, 2015-06-05; 1 file changed, +2/-2)

  Previously, while finding the brick mount paths of a snap volume's
  bricks, we were taking brick order into consideration. This logic
  fails when a brick is removed or a tier is added. Hence the logic is
  modified to look for the first occurrence of the word "brick" in the
  brick path. From there we iterate till we find a '/'. The string till
  the first '/' after we encounter the word brick is the brick mount
  path.

  Change-Id: Ic85983c4e975e701cdfd4e13f8e276ac391a3e49
  BUG: 1227646
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/11060
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
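  The parsing rule is simple enough to sketch directly; this
  self-contained C fragment (illustrative, not the snapshot code) keeps
  everything up to the first '/' that follows the first occurrence of
  "brick":

      #include <stdio.h>
      #include <string.h>

      static int brick_mount_path(const char *brick_path, char *out,
                                  size_t len)
      {
          const char *b = strstr(brick_path, "brick");
          if (!b)
              return -1;
          const char *slash = strchr(b, '/');
          size_t n = slash ? (size_t) (slash - brick_path)
                           : strlen(brick_path);
          if (n >= len)
              return -1;
          memcpy(out, brick_path, n);
          out[n] = '\0';
          return 0;
      }

      int main(void)
      {
          char mnt[256];

          if (brick_mount_path("/run/gluster/snaps/s1/brick2/b", mnt,
                               sizeof(mnt)) == 0)
              printf("mount path: %s\n", mnt); /* .../s1/brick2 */
          return 0;
      }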
* glusterd/shared_storage: Provide a volume set option to create and mount the shared storage (Avra Sengupta, 2015-06-04; 1 file changed, +4/-0)

  Introducing a global volume set option (cluster.enable-shared-storage)
  which helps create and set up the shared storage meta volume:

      gluster volume set all cluster.enable-shared-storage enable

  On enabling this option, the system analyzes the number of peers in
  the cluster which are currently connected, and chooses three such
  peers (including the node the command is issued from). From these
  peers a volume (gluster_shared_storage) is created. Depending on the
  number of peers available, the volume is either a replica 3 volume
  (if there are 3 connected peers) or a replica 2 volume (if there are
  2 connected peers). "/var/run/gluster/ss_brick" serves as the brick
  path on each node for the shared storage volume. We also mount the
  shared storage at "/var/run/gluster/shared_storage" on all the nodes
  in the cluster as part of enabling this option. If there is only one
  node in the cluster, or if only one node is up, the command will
  fail.

  Once the volume is created and mounted, maintenance of the volume,
  like adding bricks, removing bricks etc., is expected to be the onus
  of the user.

  On disabling the option, we provide the user a warning, and on
  affirmation from the user we stop the shared storage volume and
  unmount it from all the nodes in the cluster:

      gluster volume set all cluster.enable-shared-storage disable

  Change-Id: Idd92d67b93f444244f99ede9f634ef18d2945dbc
  BUG: 1222013
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/10793
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* build: do not #include "config.h" in each file (Niels de Vos, 2015-05-29; 1 file changed, +0/-5)

  Instead of including config.h in each file, have config.h included
  from the compiler command line (the -include option). When a .c file
  tested for a certain #define and config.h was not included, incorrect
  assumptions were made. With this change, that can not happen again.

  BUG: 1222319
  Change-Id: I4f9097b8740b81ecfe8b218d52ca50361f74cb64
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/10808
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
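  A tiny C example of the hazard this commit removes; with config.h
  injected on the command line (gcc -include config.h file.c), the
  #ifdef below can no longer be silently false because of a forgotten
  #include (the macro name is an assumption, not a real gluster
  option):

      #include <stdio.h>

      int main(void)
      {
      #ifdef HAVE_SOME_FEATURE   /* would come from config.h */
          printf("feature compiled in\n");
      #else
          printf("feature missing, or config.h was never included!\n");
      #endif
          return 0;
      }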
* glusterd: fix double-free of rebalance process' rpc object (Krishnan Parthasarathi, 2015-05-26; 1 file changed, +8/-0)

  Change-Id: I0c79c4de47a160b1ecf3a8994eedc02e3f5002a9
  BUG: 1223338
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/10872
  Tested-by: NetBSD Build System
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* tiering: Do not allow some operations on tiered volume (Mohammed Rafi KC, 2015-05-07; 1 file changed, +4/-0)

  Some operations like add-brick, remove-brick, rebalance and
  replace-brick are not supported on a tiered volume, but there was no
  code-level check for this. This patch adds those checks.

  Change-Id: I12689f4e902cf0cceaf6f7f29c71057305024977
  BUG: 1205624
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10349
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: remove replace brick with data migration support from cli/glusterd (Gaurav Kumar Garg, 2015-05-07; 1 file changed, +0/-10)

  The replace-brick operation with data migration support has been
  deprecated from gluster. With this fix the replace-brick command will
  support only one form:

      gluster volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}

  Change-Id: Ib81d49e5d8e7eaa4ccb5830cfec2bc081191b43b
  BUG: 1094119
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10101
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* quota/marker: turn off inode quotas by default (vmallika, 2015-05-06; 1 file changed, +3/-0)

  Inode quota is a new feature implemented in glusterfs-3.7. If quota
  is enabled in an older version and glusterfs is upgraded to the new
  version, we can hit a setxattr spike during self-heal of inode
  quotas. So, when quota is enabled, turn off inode quotas with an
  xlator option.

  With this patch, we still account for inode quotas, but only when a
  write operation is performed on a particular file. The user will be
  able to query inode quotas once the inode-quota xlator option is
  enabled.

  Change-Id: I52fb28bf7024989ce7bb08ac63a303bf3ec1ec9a
  BUG: 1209430
  Signed-off-by: vmallika <vmallika@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/10152
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* quota: support for inode quota in quota.conf (vmallika, 2015-05-04; 1 file changed, +0/-6)

  Currently, when a quota limit is set, the corresponding gfid is set
  in quota.conf. This patch supports storing inode-quota limits in
  quota.conf, and also stores an additional byte for each gfid to
  differentiate between a usage quota limit and an inode quota limit.

  Change-Id: I444d7399407594edd280e640681679a784d4c46a
  BUG: 1202244
  Signed-off-by: vmallika <vmallika@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/10069
  Tested-by: NetBSD Build System
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: support for tier volumes 'detach start' and 'detach commit' (Dan Lambright, 2015-04-22; 1 file changed, +3/-0)

  These commands work in a manner analogous to rebalancing when
  removing a brick. The existing migration daemon detects "detach
  start" and switches to moving data off the hot tier. While in this
  state all lookups are directed to the cold tier.

      gluster v detach-tier <vol> start
      gluster v detach-tier <vol> commit

  The status and stop cli commands shall be submitted separately.

  Change-Id: I24fda5cc3ba74f5fb8aa9a3234ad51f18b80a8a0
  BUG: 1205540
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Signed-off-by: root <root@localhost.localdomain>
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/10108
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Tested-by: NetBSD Build System
* glusterd: Replace transaction peers lists (Kaushal M, 2015-04-13; 1 file changed, +0/-14)

  Transaction peer lists were used in GlusterD to identify peers
  belonging to a transaction. This was needed to prevent newly added
  peers performing partial transactions, which could be incorrect. This
  was accomplished by creating a separate transaction peers list at the
  beginning of every transaction. A transaction peers list referenced
  the peerinfo data structures of the peers which were present at the
  beginning of the transaction.

  RCU protection of peerinfos referenced by the transaction peers list
  is a hard problem and difficult to do correctly. To have proper RCU
  protection of peerinfos, the transaction peers lists have been
  replaced by an alternative method to identify peers that belong to a
  transaction. The alternative method is to use the global peers list
  along with generation numbers to identify peers that should belong to
  a transaction.

  This change introduces a global peer list generation number, and a
  generation number for each peerinfo object. Whenever a peerinfo
  object is created, the global generation number is bumped, and the
  peerinfo's generation number is set to the bumped global generation.

  With the above changes, the algorithm to identify peers belonging to
  a transaction with RCU protection is as follows:

  - At the beginning of a transaction, the current global generation
    number is saved.
  - To identify if a peer belongs to the transaction:
    - Start an RCU read critical section.
    - For each peer in the global peers list:
      - If the peer's generation number is not greater than the saved
        generation number, continue with the action on the peer.
    - End the RCU read critical section.

  The above algorithm guarantees that:

  - The peers list is not modified while a transaction is iterating
    through it.
  - The transaction actions are only done on peers that were present
    when the transaction started.

  But, as a transaction could iterate over the peers list multiple
  times, the algorithm cannot guarantee that the same set of peers will
  be selected every time. A peer could get deleted between two
  iterations of the list within a transaction. This problem existed
  with the transaction peers lists as well, but unlike before it will
  no longer lead to invalid memory access and potential crashes. This
  problem will be addressed separately.

  This change was developed on the git branch at [1]. This commit is a
  combination of the following commits on the development branch:

      52ded5b Add timespec_cmp
      44aedd8 Add create timestamp to peerinfo
      7bcbea5 Fix some silly mistakes
      13e3241 Add start time to opinfo
      17a6727 Use timestamp comparisions to identify xaction peers
              instead of a xaction peer list
      3be05b6 Correct check for peerinfo age
      70d5b58 Use read-critical sections for peer list iteration
      ba4dbca Use peerinfo timestamp checks in op-sm instead of xaction
              peer list
      d63f811 Add more peer status checks when iterating peers list in
              glusterd-syncop
      1998a2a Timestamp based peer list traversal of mgmtv3 xactions
      f3c1a42 Remove transaction peer lists
      b8b08ee Remove unused labels
      32e5f5b Remove 'npeers' usage
      a075fb7 Remove 'npeers' from mgmt-v3 framework
      12c9df2 Use generation number instead of timestamps.
      9723021 Remove timespec_cmp
      80ae2c6 Remove timespec.h include
      a9479b0 Address review comments on 10147/4

  [1]: https://github.com/kshlm/glusterfs/tree/urcu

  Change-Id: I9be1033525c0a89276f5b5d83dc2eb061918b97f
  BUG: 1205186
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/10147
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
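  The generation-number scheme is easy to model; the following
  self-contained C sketch (illustrative names, RCU omitted) shows how a
  peer probed after a transaction starts is skipped by the generation
  check:

      #include <stdint.h>
      #include <stdio.h>

      struct peer {
          uint32_t     generation;
          const char  *name;
          struct peer *next;
      };

      static uint32_t global_generation;  /* bumped on every peer add */

      static struct peer *add_peer(struct peer *head, const char *name,
                                   struct peer *storage)
      {
          storage->generation = ++global_generation;
          storage->name = name;
          storage->next = head;
          return storage;
      }

      int main(void)
      {
          struct peer p1, p2, p3, *head = NULL;

          head = add_peer(head, "peer1", &p1);
          head = add_peer(head, "peer2", &p2);

          uint32_t txn_generation = global_generation; /* xaction begins */

          head = add_peer(head, "peer3", &p3);  /* probed mid-xaction */

          for (struct peer *p = head; p; p = p->next)
              if (p->generation <= txn_generation)  /* peer3 is skipped */
                  printf("acting on %s\n", p->name);
          return 0;
      }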
* build: make contrib/uuid dependency optional (Niels de Vos, 2015-04-10; 1 file changed, +1/-1)

  On Linux systems we should use the libuuid from the distribution and
  not bundle and statically link the contrib/uuid/ bits.

  libglusterfs/src/compat-uuid.h has been introduced and should become
  an abstraction layer for different UUID APIs. Non-Linux operating
  systems should implement their compatibility layer there. Once all
  operating systems have an implementation in compat-uuid.h, we can
  remove contrib/uuid/ from the repository completely.

  Change-Id: I345e5357644be2521685e00358bb8c83c4ea0577
  BUG: 1206587
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/10129
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: group server-quorum related code together (Krishnan Parthasarathi, 2015-04-01; 1 file changed, +0/-34)

  The server-quorum implementation was spread across many files. This
  patch brings it all together into a single file, namely
  glusterd-server-quorum.c. All exported functions are available via
  glusterd-server-quorum.h.

  Change-Id: I8fd77114b5bc6b05127cb8a6a641e0295f0be7bb
  BUG: 1205592
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/9492
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: clean up global xaction_peer occurrences (Atin Mukherjee, 2015-03-26; 1 file changed, +0/-3)

  With http://review.gluster.org/#/c/9972/ there is no more need to
  maintain xaction_peers in glusterd_conf_t. This patch cleans up the
  code for all occurrences of xaction_peers.

  Change-Id: I4fbf2df0fa9b8a8751029be36be7f76f6464cc76
  BUG: 1204727
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/9980
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Maintain local xaction_peer list for op-sm (Atin Mukherjee, 2015-03-26; 1 file changed, +13/-0)

  http://review.gluster.org/9269 addresses maintaining local
  xaction_peers in the syncop and mgmt_v3 frameworks. This patch
  maintains a local xaction_peers list for the op-sm framework as well.

  Change-Id: Idd8484463fed196b3b18c2df7f550a3302c6e138
  BUG: 1204727
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/9972
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Prevent possible deadlock due to glusterd_quorum_count (Kaushal M, 2015-03-25; 1 file changed, +2/-1)

  Also rename the macro glusterd_quorum_count to GLUSTERD_QUORUM_COUNT
  so that it stands out as a macro.

  This change was developed on the git branch at [1]. This commit is a
  combination of the following commits on the development branch:

      0fbd7ba Prevent possible deadlock due to glusterd_quorum_count
      5da3062 Rename glusterd_quorum_count to GLUSTERD_QUORUM_COUNT
      b3aa3c4 Enclose GLUSTERD_QUORUM_COUNT definition in a do-while
              block

  [1]: https://github.com/kshlm/glusterfs/tree/urcu

  Change-Id: Ic4b3949f303d72ce53e0139f62b83b8d13fb4e47
  BUG: 1205186
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/9978
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
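  For readers unfamiliar with the change in commit b3aa3c4 above, this
  self-contained C sketch (the macro here is illustrative, not
  glusterd's) shows why a multi-statement macro gets wrapped in
  do { ... } while (0): the expansion then behaves as a single
  statement, even in an unbraced if/else.

      #include <stdio.h>

      #define COUNT_ACTIVE(list, n, out)      \
          do {                                \
              (out) = 0;                      \
              for (int i = 0; i < (n); i++)   \
                  if ((list)[i])              \
                      (out)++;                \
          } while (0)

      int main(void)
      {
          int peers[] = { 1, 0, 1, 1 };
          int quorum_count;

          if (peers[0])
              COUNT_ACTIVE(peers, 4, quorum_count); /* one statement */
          else
              quorum_count = 0;  /* the else still binds correctly */

          printf("quorum count: %d\n", quorum_count);
          return 0;
      }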
* cli/glusterd: cli command implementation for bitrot features (Gaurav Kumar Garg, 2015-03-18; 1 file changed, +4/-0)

  CLI command for bitrot features:

      volume bitrot <volname> enable|disable

  The above command will enable/disable the bitrot feature for a
  particular volume.

  BUG: 1170075
  Change-Id: Ie84002ef7f479a285688fdae99c7afa3e91b8b99
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Signed-off-by: Anand nekkunti <anekkunt@redhat.com>
  Signed-off-by: Dominic P Geevarghese <dgeevarg@redhat.com>
  Reviewed-on: http://review.gluster.org/9866
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Replace libglusterfs lists with liburcu lists (Kaushal M, 2015-03-03; 1 file changed, +10/-6)

  This patch replaces usage of the libglusterfs lists data structures
  and API in glusterd with the lists data structures and API from
  liburcu. The liburcu data structures and APIs are a drop-in
  replacement for libglusterfs lists. All usages have been changed to
  keep the code consistent and free from confusion.

  NOTE: glusterd_conf_t->xprt_list still uses the libglusterfs data
  structures and API, as it holds rpc_transport_t objects, which is not
  a part of glusterd and is not being changed in this patch.

  This change was developed on the git branch at [1]. This commit is a
  combination of the following commits on the development branch:

      6dac576 Replace libglusterfs lists with liburcu lists
      a51b5ab Fix compilation issues
      d98a06f Fix merge issues
      a5d918e Remove merge remnant
      1cca113 More style cleanup
      1917be3 Address review comments on 9624/1
      8d10f13 Use cds_lists for glusterd_svc_t
      524ad5d Add rculist header in glusterd-conn-helper.c
      646f294 glusterd: add list_add_order API honouring rcu

  [1]: https://github.com/kshlm/glusterfs/tree/urcu

  Change-Id: Ic613c5b6e496a677b9d3de15fc042a0492109fb0
  BUG: 1191030
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/9624
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
* glusterd: nfs,shd,quotad,snapd daemons refactoring (Atin Mukherjee, 2015-02-20; 1 file changed, +14/-79)

  This patch ports nfs, shd, quotad & snapd with the approach suggested
  in
  http://www.gluster.org/pipermail/gluster-devel/2014-December/043180.html

  Change-Id: I4ea5b38793f87fc85cc9d2cf873727351dedffd2
  BUG: 1191486
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/9428
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
* glusterd: quorum validation in glusterd syncop framework (Gaurav Kumar Garg, 2015-01-20; 1 file changed, +4/-0)

  Previously glusterd was not performing quorum validation in the
  syncop framework. So when there was a loss of quorum, a few
  operations (e.g. add-brick, remove-brick, volume set) which are based
  on the syncop framework passed successfully without doing the quorum
  validation check.

  With this change it will do quorum validation in the syncop framework
  and will block all operations (except volume set <quorum options> and
  "volume reset all" commands) when there is a loss of quorum.

  Change-Id: I4c2ef16728d55c98a228bb86795023d9c1f4e9fb
  BUG: 1177132
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/9349
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* mgmt/glusterd: Implement Volume heal enable/disable (Pranith Kumar K, 2015-01-20; 1 file changed, +3/-0)

  For volumes with replicate or disperse xlators, the self-heal daemon
  should do the healing. This patch provides enable/disable
  functionality for the xlators to be part of the self-heal daemon.
  Replicate already had this functionality with 'gluster volume set
  cluster.self-heal-daemon on/off', but this patch makes it uniform for
  both types of volumes. Internally it still does 'volume set' based on
  the volume type.

  Change-Id: Ie0f3799b74c2afef9ac658ef3d50dce3e8072b29
  BUG: 1177601
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/9358
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* features/changelog: Cleanup .processing and .current directory (Aravinda VK, 2015-01-18; 1 file changed, +0/-3)

  On changelog_register, clean up .processing, .history/.processing,
  .current and .history/.current from the working directory.

  Moved glusterd_recursive_rmdir and glusterd_for_each_entry to a
  common place (libglusterfs) and renamed them recursive_rmdir and
  GF_FOR_EACH_ENTRY_IN_DIR respectively.

  BUG: 1162057
  Change-Id: I1f98468a344cead039026762a805437b2f9e507b
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/9082
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Venky Shankar <vshankar@redhat.com>
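  A simplified, self-contained C sketch in the spirit of the relocated
  recursive_rmdir() helper (no xlator logging, symlinks treated as
  plain files; not the libglusterfs implementation):

      #include <stdio.h>
      #include <string.h>
      #include <dirent.h>
      #include <unistd.h>
      #include <sys/stat.h>
      #include <limits.h>

      static int recursive_rmdir_sketch(const char *path)
      {
          DIR *dir = opendir(path);
          if (!dir)
              return -1;

          struct dirent *e;
          while ((e = readdir(dir)) != NULL) {
              if (!strcmp(e->d_name, ".") || !strcmp(e->d_name, ".."))
                  continue;

              char child[PATH_MAX];
              snprintf(child, sizeof(child), "%s/%s", path, e->d_name);

              struct stat st;
              if (lstat(child, &st) == 0 && S_ISDIR(st.st_mode))
                  recursive_rmdir_sketch(child); /* descend into subdir */
              else
                  unlink(child);                 /* file or symlink */
          }
          closedir(dir);
          return rmdir(path);                    /* now empty */
      }

      int main(void)
      {
          return recursive_rmdir_sketch("/tmp/changelog-scratch");
      }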
* glusterd: glusterd socket files should reside in /var/run/gluster (Atin Mukherjee, 2015-01-12; 1 file changed, +1/-1)

  glusterfs socket files should not reside outside of the gluster
  folder.

  Change-Id: I5d7b43b11c8c78a32df8aaf38917b80e4e33c9d0
  BUG: 1180972
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/9423
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Refactor glusterd-utils.c (Avra Sengupta, 2015-01-08; 1 file changed, +20/-200)

  Refactor glusterd-utils.c to create glusterd-snapshot-utils.c,
  consisting of all snapshot utility functions.

  Change-Id: Id9823a2aec9b115f9c040c9940f288d4fe753d9b
  BUG: 1176770
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/9391
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: cluster quorum count check correction (Atin Mukherjee, 2015-01-06; 1 file changed, +9/-4)

  Due to the recent change introduced by commit
  da9deb54df91dedc51ebe165f3a0be646455cb5b, the cluster quorum count
  calculation now depends on whether the peer list is all peers, the
  global transaction peer list, or the local transaction peer list.

  Change-Id: I9f63af9a0cb3cfd6369b050247d0ef3ac93d760f
  BUG: 1173414
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/9350
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd/uss: Create rebalance volfile (Avra Sengupta, 2014-11-30; 1 file changed, +4/-0)

  Create a new rebalance volfile which will not contain snap-view
  client translators, irrespective of the status of USS. This volfile
  will be created and regenerated every time the fuse volfile is
  generated, and will be consumed by the rebalance process.

  Change-Id: I514a8e88d06c0b8fb6949c3a3e6dc4dbe55e38af
  BUG: 1164711
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/9190
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* rdma: Wrong volfile fetch on fuse mounting tcp,rdma volume via rdma (Anoop C S, 2014-11-18; 1 file changed, +2/-2)

  As of now, for both tcp-only volumes and rdma-only volumes, volfile
  names are in the format <volname>-fuse.vol. This patch changes the
  client volfile naming as shown below:

  * TCP mounts always use <volname>-fuse.vol
  * RDMA mounts always use <volname>.rdma-fuse.vol

  Following the above naming convention, for tcp,rdma volumes both
  volfiles will be present under /var/lib/glusterd/vols/<volname>/,
  such that an rdma-only volume can be mounted as

      mount -t glusterfs -o transport=rdma <server/ip>:/<volname> <mount-point>
  OR
      mount -t glusterfs <server/ip>:/<volname>.rdma <mount-point>

  The above command format can also be used to fuse mount a tcp,rdma
  volume via the rdma transport. Previously, when we tried to fuse
  mount a tcp,rdma volume with transport-type rdma, it silently mounted
  via tcp. This change also makes sure that the correct volfile is
  fetched based on the transport-type specified from the client side.

  BUG: 1131502
  Change-Id: I34da4b01ac813b69494a43188f51145457412923
  Signed-off-by: Anoop C S <achiraya@redhat.com>
  Reviewed-on: http://review.gluster.org/8498
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Tested-by: Raghavendra G <rgowdapp@redhat.com>
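  The naming scheme above reduces to a few lines; an illustrative C
  sketch (not glusterd's volfile code; the function name is made up):

      #include <stdio.h>
      #include <string.h>

      static void client_volfile_name(const char *volname,
                                      const char *transport,
                                      char *out, size_t len)
      {
          if (strcmp(transport, "rdma") == 0)
              snprintf(out, len, "%s.rdma-fuse.vol", volname);
          else
              snprintf(out, len, "%s-fuse.vol", volname);
      }

      int main(void)
      {
          char name[256];

          client_volfile_name("testvol", "rdma", name, sizeof(name));
          printf("/var/lib/glusterd/vols/testvol/%s\n", name);
          return 0;
      }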
* glusterd/snapshot: Don't append nouuid mount option for snapshot brick (vmallika, 2014-11-13; 1 file changed, +3/-0)

  Don't append the nouuid mount option for a snapshot brick if the
  original brick already has this option.

  Change-Id: I2841d2ac371a3e9505f6061f35d1d447946c0bae
  BUG: 1133456
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/8526
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: op state machine shouldn't use global peer list (Atin Mukherjee, 2014-10-28; 1 file changed, +3/-0)

  Problem: the op state machine was relying on the global peer list
  while sending lock/stage/commit/unlock rpc requests to the peers in
  the cluster. Trusting the global peer list structure is dangerous, as
  this structure gets modified if any peer modification command is
  attempted in the cluster while a transaction is going through the
  state machine. A typical case of this problem: when rebalance is in
  progress and a peer probe is executed, the rebalance op-sm and the
  peer probe may run into a race, making the peerinfo structure go for
  a toss.

  Solution: Use a local copy of the peer list (xaction_peers) in the
  glusterd op-sm.

  Change-Id: I1ff7118dc6a9a72633e2e87b7ab7bae1796595e0
  BUG: 1152890
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/8932
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Authenticate management handshake requests (Kaushal M, 2014-09-23; 1 file changed, +3/-0)

  Management handshake requests, which are used to validate the
  op-version supported by the peers, are now only allowed if:
  - the glusterd doesn't have any other peer, or
  - the request was sent by another peer.

  This prevents the op-version of a peer being changed because of a
  connection attempt by an invalid peer.

  Change-Id: I248c386ed5ec4f8360e7b5e7f9ab74b7e8a7fc65
  BUG: 1109741
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/8126
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
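  A hedged C sketch of the policy (the peer storage and function names
  are made up): accept the handshake on a peer-less node, or from an
  address that is already a known peer, and reject everything else.

      #include <stdio.h>
      #include <string.h>

      static const char *peers[] = { "10.0.0.2", "10.0.0.3" };
      static int npeers = 2;

      static int handshake_allowed(const char *remote_addr)
      {
          if (npeers == 0)
              return 1;                   /* fresh node: allow probe */
          for (int i = 0; i < npeers; i++)
              if (strcmp(peers[i], remote_addr) == 0)
                  return 1;               /* known peer: allow */
          return 0;                       /* stranger: reject */
      }

      int main(void)
      {
          printf("10.0.0.3 -> %d\n", handshake_allowed("10.0.0.3"));
          printf("10.9.9.9 -> %d\n", handshake_allowed("10.9.9.9"));
          return 0;
      }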
* glusterd: Prevent rebalance starting with old clients (Kaushal M, 2014-09-03; 1 file changed, +3/-0)

  Glusterd will prevent rebalance from starting when clients older than
  glusterfs-v3.6.0 are connected to a volume. This is needed as running
  rebalance with old clients connected could lead to data loss in some
  cases. The DHT xlator on newer clients (>= 3.6.0) has been fixed to
  prevent the data loss issues.

  Change-Id: If58640236382a2fc13f73f6b43777f01713859f7
  BUG: 1136201
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/8583
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* cli/glusterd: Support of volume get for a specific volume option (Atin Mukherjee, 2014-08-26; 1 file changed, +9/-0)

  This patch introduces a cli command to display a specific volume
  option / all volume options of a specific volume, with the following
  usage:

      Usage: volume get <VOLNAME> <key|all>

  Change-Id: Ic88edb33c5509d7a37cd5ade6341e45e3cdbf59d
  BUG: 983317
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/8305
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* geo-rep/glusterd: API to check active geo-rep session for the volume (Kotresh H R, 2014-08-21; 1 file changed, +3/-0)

  Requirement: Snapshot needs an API to fail the CLI if any geo-rep
  session is active for that volume.

  Solution: A function "gd_vol_is_geo_rep_active" is provided to check
  if any geo-rep session is active for that volume. An in-memory dict
  called 'gsync_running_slaves' is maintained in the 'volinfo'
  structure to keep track of active geo-rep sessions for the volume.
  The key 'slavenode::slavevol' with value 'running' is added to the
  dict whenever geo-rep is started/resumed, and the same is removed if
  it is stopped/paused. So the 'count' in the dict is used to decide
  whether geo-rep is active or not for that volume.

  Also added "this->name" to gf_log in the routines this patch touches.

  Change-Id: I2b5de7dd686541c6b89c0fd0f7a4dbc92eecfac5
  BUG: 1129008
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/8459
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
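  A minimal, self-contained C sketch of the described tracking (not the
  glusterd dict API; the table here is a stand-in): entries are keyed
  by "slavenode::slavevol", and the entry count alone decides whether
  any geo-rep session is active.

      #include <stdio.h>
      #include <string.h>

      #define MAX_SESSIONS 16

      static char sessions[MAX_SESSIONS][128];
      static int  session_count;

      static void session_started(const char *slavenode,
                                  const char *slavevol)
      {
          if (session_count < MAX_SESSIONS)
              snprintf(sessions[session_count++], sizeof(sessions[0]),
                       "%s::%s", slavenode, slavevol);
      }

      static int vol_is_geo_rep_active(void)
      {
          return session_count > 0; /* count answers the question */
      }

      int main(void)
      {
          printf("active before: %d\n", vol_is_geo_rep_active());
          session_started("slavehost", "slavevol1");
          printf("active after:  %d\n", vol_is_geo_rep_active());
          return 0;
      }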