path: root/xlators/mgmt/glusterd/src/glusterd-op-sm.c
* glusterd: Maintain per transaction op-info object  (anand, 2015-12-10; 1 file, -0/+136)

  Issue: op-sm transactions used a mix of accesses to the global op-info
  and per-transaction op-info objects, so the op-info object could be
  corrupted, resulting in an incorrect response being passed back to the CLI.

  Fix: Use a per-transaction op-info object.

  Change-Id: Ice023bace3e137dfd8e7b13bd5b53545a79a203f
  BUG: 1287027
  Signed-off-by: anand <anekkunt@redhat.com>
  Reviewed-on: http://review.gluster.org/12836
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
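  A minimal sketch of the per-transaction pattern this patch moves to,
  assuming the glusterd_set_txn_opinfo/glusterd_get_txn_opinfo helpers in
  glusterd-op-sm.c, where txn_id is the transaction's uuid_t * generated
  when the transaction starts (error handling elided):

      glusterd_op_info_t txn_op_info = {0,};
      int                ret         = -1;

      /* store the op-info keyed by this transaction's ID ... */
      ret = glusterd_set_txn_opinfo (txn_id, &txn_op_info);

      /* ... and in a later state-machine step fetch that same object,
       * instead of reading the shared global opinfo */
      ret = glusterd_get_txn_opinfo (txn_id, &txn_op_info);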
* glusterd: add pending_node only if hxlator_count is valid  (Ravishankar N, 2015-12-09; 1 file, -13/+11)

  Fixes a regression introduced by commit
  0ef62933649392051e73fe01c028e41baddec489. See the BZ for the bug
  description.

  Problem: To perform GLUSTERD_BRICK_XLATOR_OP, the rpc requires the
  number of xlators (n) the op needs to be performed on; the xlator names
  are populated in a dictionary with xl-0, xl-1 ... xl-n-1 as keys. When
  "volume heal full" is executed, for each replica group the glustershd
  on the local node may or may not be selected by glusterd to perform the
  heal. The XLATOR_OP rpc should be sent by glusterd to the shd running
  on the same node only when the glustershd on that node is selected at
  least once. This bug occurs when glusterd sends the rpc to the local
  glustershd even when it is not selected for any of the replica groups.

  Fix: Don't send the rpc to the local glustershd when it is not selected
  even once.

  Change-Id: I2c8217a8f00f6ad5d0c6a67fa56e476457803e08
  BUG: 1287503
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/12843
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
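  A hedged sketch of the dictionary layout the XLATOR_OP rpc expects, as
  described above (variable and key names illustrative except for the
  xl-<i> keys):

      char key[64] = {0,};
      int  i       = 0;
      int  ret     = -1;

      for (i = 0; i < hxlator_count; i++) {
              snprintf (key, sizeof (key), "xl-%d", i);
              ret = dict_set_dynstr_with_alloc (dict, key, xl_name);
              if (ret)
                      goto out;
      }
      /* total number of xlators the op applies to */
      ret = dict_set_int32 (dict, "count", hxlator_count);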
* glusterd: Change volume start into v3 framework  (Mohammed Rafi KC, 2015-11-25; 1 file, -1/+1)

  As part of volume start, if the volume is of tier type then the tiering
  daemon also needs to be started, and all the bricks must be started
  before the tier daemon. By moving volume start into the v3 framework,
  the tier start can be done in the post-validate phase.

  Change-Id: If921067f4739e6b9a3239fc5717696eaf382c22a
  BUG: 1284372
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12718
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: cli command implementation for bitrot scrub status  (Gaurav Kumar Garg, 2015-11-19; 1 file, -0/+93)

  The CLI command for bitrot scrub status is:

      gluster volume bitrot <VOLNAME> scrub status

  The command shows the statistics of the bitrot scrubber. On execution
  it prints some common scrubber tunable values of volume <VOLNAME>,
  followed by the scrubber statistics of the individual nodes. Sample
  output for a single node:

      Volume name : <VOLNAME>
      State of scrub: Active
      Scrub frequency: biweekly
      Bitrot error log location: /var/log/glusterfs/bitd.log
      Scrubber error log location: /var/log/glusterfs/scrub.log
      =========================================================
      Node name:
      Number of Scrubbed files:
      Number of Unsigned files:
      Last completed scrub time:
      Duration of last scrub:
      Error count:
      =========================================================

  This is just the infrastructure; the list of bad files, last scrub time
  and error count values will be taken care of by
  http://review.gluster.org/#/c/12503/ and
  http://review.gluster.org/#/c/12654/.

  Change-Id: I3ed3c7057c9d0c894233f4079a7f185d90c202d1
  BUG: 1207627
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10231
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* afr/glusterd: Fix naming issue in tier related changes  (Mohammed Rafi KC, 2015-10-30; 1 file, -2/+4)

  Changing some of the function names added recently as part of the
  tiering changes.

  Change-Id: I238831128ee00cdf83f8a80be937d3528d133099
  BUG: 1275489
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12431
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* core: use syscall wrappers instead of direct syscalls -- glusterd  (Kaleb S. KEITHLEY, 2015-10-28; 1 file, -3/+3)

  Various xlators and other components are invoking system calls directly
  instead of using the libglusterfs/syscall.[ch] wrappers. If a system
  call wrapper is not used, there should be a comment in the source
  explaining why.

  Change-Id: I28bf2a5f7730b35914e7ab57fed91e1966b30073
  BUG: 1267967
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/12379
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
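  A small sketch of the convention, assuming the sys_* wrappers declared
  in libglusterfs/src/syscall.h:

      #include "syscall.h"

      fd  = sys_open (path, O_RDONLY, 0);   /* instead of open(2)  */
      ret = sys_lstat (path, &stbuf);       /* instead of lstat(2) */
      sys_close (fd);                       /* instead of close(2) */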
* glusterd: call glusterd_store_volinfo in bump up op-version  (Atin Mukherjee, 2015-10-27; 1 file, -0/+7)

  After an upgrade, the op-version is expected to be updated through
  gluster volume set. If the new version introduces any feature that
  changes the volinfo structure, failing to store the default values of
  the new options would result in cksum issues.

  Change-Id: I57b4667f3403839811735bf66bef29e5200a9241
  BUG: 1262805
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/12171
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
* glusterd: disabling enable-shared-storage option should not delete volume  (Gaurav Kumar Garg, 2015-10-13; 1 file, -5/+23)

  Previously, if a volume named "glusterd_shared_storage" existed and a
  user disabled the enable-shared-storage option, gluster would delete
  the "glusterd_shared_storage" volume. With this fix gluster performs
  the appropriate validation of the enable-shared-storage option and
  will not delete a volume named "glusterd_shared_storage" if it is a
  user-created volume.

  Change-Id: I2bd92f938fb3de6ef496a934933bdcea9f251491
  BUG: 1266818
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/12232
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* tier/shd: make shd commands compatible with tiering  (Mohammed Rafi KC, 2015-10-12; 1 file, -95/+148)

  Tiering volfiles may contain afr and disperse together, or multiple
  times, depending on the configuration, and the information for those
  configurations is stored in tier_info. So most of the volgen code
  generation needs to change to be compatible with it.

  Change-Id: I563d1ca6f281f59090ebd470b7fda1cc4b1b7e1d
  BUG: 1261276
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12135
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* Tiering: change in status for remove brick and rebalance  (hari gowtham, 2015-09-21; 1 file, -12/+22)

  When we trigger a detach-tier start on a tiered volume, the task in
  volume status showed up as "remove brick" instead of "Detach tier":

      Status of volume: vol1
      Gluster process                             TCP Port  RDMA Port  Online  Pid
      ------------------------------------------------------------------------------
      Hot Bricks:
      Brick 10.70.42.171:/data/gluster/hbr1       49154     0          Y       25098
      Cold Bricks:
      Brick 10.70.42.171:/data/gluster/p1         49152     0          Y       25101
      Brick 10.70.42.171:/data/gluster/p2         49153     0          Y       25112
      NFS Server on localhost                     N/A       N/A        N       N/A

      Task Status of Volume vol1
      ------------------------------------------------------------------------------
      Task   : Tier migrate
      ID     : e11d5a3d-b1ae-4c3f-8f95-b28993c60939
      Status : in progress

      Status of volume: vol1
      Gluster process                             TCP Port  RDMA Port  Online  Pid
      ------------------------------------------------------------------------------
      Hot Bricks:
      Brick 10.70.42.171:/data/gluster/hbr1       49154     0          Y       25098
      Cold Bricks:
      Brick 10.70.42.171:/data/gluster/p1         49152     0          Y       25101
      Brick 10.70.42.171:/data/gluster/p2         49153     0          Y       25112
      NFS Server on localhost                     N/A       N/A        N       N/A

      Task Status of Volume vol1
      ------------------------------------------------------------------------------
      Task   : Detach tier
      ID     : 76d700b1-5bbd-43ed-95fd-1640b2b4af31
      Status : completed

  Change-Id: I4bd3b340d4e700e8afed00e1478b8a8b54dfe2e2
  BUG: 1261837
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/12149
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* glusterd: volume status backward compatibility  (Hari Gowtham, 2015-09-07; 1 file, -0/+14)

  The volume status message of 3.7 does not display all the bricks in a
  mixed cluster (3.6 and 3.7): it displays the bricks on 3.7 and misses
  the bricks on 3.6, due to the key difference for ports.

      Status of volume: vol1
      Gluster process                             TCP Port  RDMA Port  Online  Pid
      ------------------------------------------------------------------------------
      Brick 10.70.42.171:/data/gluster/tier/cbr2  49153     0          Y       13494
      Brick 10.70.42.203:/data/gluster/tier/cbr2  49154     0          Y       27686
      NFS Server on localhost                     N/A       N/A        N       N/A
      NFS Server on dhcp42-203.lab.eng.blr.redhat
      .com                                        N/A       N/A        N       N/A

      Task Status of Volume vol1
      ------------------------------------------------------------------------------
      There are no active volume tasks

  Change-Id: Icf0dc01a3d21d0889c43e2868c646a0c7e07ff25
  BUG: 1255694
  Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/11986
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd : Display status of Self Heal Daemon for disperse volume  (Ashish Pandey, 2015-08-25; 1 file, -16/+22)

  Problem: The status of the Self Heal Daemon is not displayed in
  "gluster volume status". As disperse volumes are self-heal compatible,
  show the status of the self-heal daemon in the gluster volume status
  command.

  Change-Id: I83d3e6a2fd122b171f15cfd76ce8e6b6e00f92e2
  BUG: 1217311
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
  Reviewed-on: http://review.gluster.org/10764
  Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* quota : volume-reset shouldn't remove quota-deem-statfs  (Manikandan Selvaganesh, 2015-08-07; 1 file, -0/+31)

  Volume-reset shouldn't remove quota-deem-statfs, unless explicitly
  specified, when quota is enabled.

  1) glusterd_op_stage_reset_volume():
     'gluster volume set/reset <VOLNAME> features.quota/
     features.inode-quota' should not be allowed, as it is deprecated.
     Setting and resetting the quota/inode-quota features should be
     allowed only through 'gluster volume quota <VOLNAME> enable/disable'.

  2) glusterd_enable_default_options():
     The option 'features.quota-deem-statfs' should not be turned off
     with 'gluster volume reset <VOLNAME>', since quota features can be
     set/reset only with 'gluster volume quota <VOLNAME> enable/disable'.
     But 'gluster volume set features.quota-deem-statfs' can be turned
     on/off when quota is enabled.

  Change-Id: Ib5aa00a4d8c82819c08dfc23e2a86f43ebc436c4
  BUG: 1250582
  Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
  Reviewed-on: http://review.gluster.org/11839
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: Stop/restart/notify to daemons(svcs) during reset/set on a volume  (anand, 2015-08-06; 1 file, -4/+11)

  Problem: Reset/set commands were not working properly. The reset
  command returned success but did not send a notification to the svcs
  if the corresponding graph was modified.

  Fix: Whenever a reset/set command is issued, generate the temp graph,
  compare it with the original graph, and take the following actions:
  1) If both graphs are identical, nothing needs to be done with the svcs.
  2) If there are changes in the graph topology, restart/stop the service
     by calling the svc manager.
  3) If there are changes in options, send a notify signal by calling
     glusterd_fetchspec_notify.

  Change-Id: I852c4602eafed1ae6e6a02424814fe3a83e3d4c7
  BUG: 1209329
  Signed-off-by: anand <anekkunt@redhat.com>
  Reviewed-on: http://review.gluster.org/10850
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: Do not log failure if glusterd_get_txn_opinfo fails in gluster volume status  (Atin Mukherjee, 2015-08-02; 1 file, -12/+11)

  The first RPC call of gluster volume status fetches the list of volume
  names from GlusterD, and since no volume name is set in the dictionary
  at that point, glusterd_get_txn_opinfo fails, resulting in a failure
  log that is annoying to the user, considering this command is triggered
  frequently.

  The fix is to have the callers log it depending on the need.

  Change-Id: Ib60a56725208182175513c505c61bcb28148b2d0
  BUG: 1238936
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/11520
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
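  A sketch of the caller-side pattern this implies (the lookup itself
  stays quiet; here the caller logs a miss only at debug level):

      ret = glusterd_get_txn_opinfo (txn_id, &txn_op_info);
      if (ret)
              gf_msg_debug (this->name, 0, "Unable to get transaction "
                            "opinfo for txn %s", uuid_utoa (*txn_id));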
* dict: dict_set_bin() should never free the pointer on error  (Niels de Vos, 2015-07-24; 1 file, -0/+2)

  dict_set_bin() handles the pointer it is passed inconsistently.
  Depending on the errors that can occur, the pointer passed to the dict
  may be free'd, but there is no guarantee. It is cleaner to have the
  caller that allocated the pointer free it when dict_set_bin() returns
  an error. When dict_set_bin() returns success, the given pointer will
  be free'd when dict_unref() calls data_destroy().

  Many callers of dict_set_bin() already take care of freeing the pointer
  on error. The ones that did not are corrected with this change too.

  Change-Id: I39a4f7ebc0cae6d403baba99307d7ce408f25966
  BUG: 1242280
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/11638
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
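  A minimal sketch of the caller-side ownership rule this change
  standardizes on (allocation shown with the GF_CALLOC/GF_FREE helpers):

      void *data = GF_CALLOC (1, size, gf_common_mt_char);

      ret = dict_set_bin (dict, "key", data, size);
      if (ret) {
              GF_FREE (data);  /* on error the caller still owns it */
              data = NULL;
      }
      /* on success the dict owns the pointer; it is released when
       * dict_unref() triggers data_destroy() */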
* glusterd/shared_storage: Use /var/lib/glusterd/ss_brick as shared storage's brick  (Avra Sengupta, 2015-07-06; 1 file, -5/+4)

  The brick path we used to create the shared storage was
  /var/run/gluster/ss_brick. The problem with this brick path is that
  /var/run/gluster is a tmpfs, so all the brick/shared-storage data is
  wiped off when the node restarts. Hence we now use
  /var/lib/glusterd/ss_brick as the brick path for the shared storage
  volume, as this brick and the shared storage volume are internally
  created by us (albeit on the user's request) and contain only internal
  state data and no user data.

  Change-Id: I808d1aa3e204a5d2022086d23bdbfdd44a2cfb1c
  BUG: 1218573
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/11533
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* glusterd: Get the local txn_info based on trans_id in op_sm call backs.  (anand, 2015-07-06; 1 file, -3/+0)

  Issue: When two or more transactions are running concurrently in op_sm,
  the global op_info might get corrupted.

  Fix: Get the local txn_info based on trans_id instead of using the
  global txn_info for commands (rebalance, profile) which use op_sm in
  the originator.

  TODO: Handle errors properly in the callbacks and completely remove the
  global op_info from op_sm.

  Change-Id: I9d61388acc125841ddc77e2bd560cb7f17ae0a5a
  BUG: 1229139
  Signed-off-by: anand <anekkunt@redhat.com>
  Reviewed-on: http://review.gluster.org/11120
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Porting left out log messages to new framework  (Nandaja Varma, 2015-06-26; 1 file, -27/+53)

  Change-Id: I70d40ae3b5f49a21e1b93f82885cd58fa2723647
  BUG: 1235538
  Signed-off-by: Nandaja Varma <nandaja.varma@gmail.com>
  Reviewed-on: http://review.gluster.org/11388
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd/ afr : set afr pending xattrs on replace brick  (Anuradha, 2015-06-25; 1 file, -1/+1)

  This patch is part one of the changes to prevent data loss in a
  replicate volume when doing a replace-brick commit force operation.

  Problem: After doing replace-brick commit force, there is a chance that
  self-heal happens from the replaced (sink) brick rather than the source
  brick, leading to data loss.

  Solution: During the commit phase of replace-brick, after the old brick
  is brought down, create a temporary mount and perform a setfattr
  operation (on a virtual xattr) indicating to AFR that the replaced
  brick should be marked as a sink.

  As part of this change, the replace-brick command is moved to the
  mgmt_v3 framework rather than the op-state-machine framework.

  Many thanks to Krishnan Parthasarathi for helping me out on this.

  Change-Id: If0d51b5b3cef5b34d5672d46ea12eaa9d35fd894
  BUG: 1207829
  Signed-off-by: Anuradha <atalur@redhat.com>
  Reviewed-on: http://review.gluster.org/10076
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* bitrot/glusterd: gluster volume set command for bitrot should not be supported  (Gaurav Kumar Garg, 2015-06-16; 1 file, -0/+53)

  Currently "gluster volume set <VOLNAME> bitrot" succeeds, but the
  gluster volume set command for bitrot is not supported. Gluster should
  only accept the "gluster volume bitrot <VOLNAME> *" commands.

  Change-Id: I5ff4b79f202ad018c76188f19d6311aad0d7c166
  BUG: 1229134
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/11118
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Atin Mukherjee <amukherj@redhat.com>
* sm/glusterd: Porting messages to new logging framework  (Nandaja Varma, 2015-06-12; 1 file, -257/+454)

  Change-Id: I391d1ac6a7b312461187c2e8c6f14d09a0238950
  BUG: 1194640
  Signed-off-by: Nandaja Varma <nandaja.varma@gmail.com>
  Reviewed-on: http://review.gluster.org/9927
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/shared_storage: Provide a volume set option to create and mount the shared storage  (Avra Sengupta, 2015-06-04; 1 file, -6/+190)

  Introducing a global volume set option (cluster.enable-shared-storage)
  which helps create and set up the shared storage meta volume.

      gluster volume set all cluster.enable-shared-storage enable

  On enabling this option, the system analyzes the number of peers in the
  cluster which are currently connected, and chooses three such peers
  (including the node the command is issued from). From these peers a
  volume (gluster_shared_storage) is created. Depending on the number of
  peers available, the volume is either a replica 3 volume (if there are
  3 connected peers) or a replica 2 volume (if there are 2 connected
  peers). "/var/run/gluster/ss_brick" serves as the brick path on each
  node for the shared storage volume. We also mount the shared storage at
  "/var/run/gluster/shared_storage" on all the nodes in the cluster as
  part of enabling this option. If there is only one node in the cluster,
  or only one node is up, the command will fail.

  Once the volume is created and mounted, maintenance of the volume, like
  adding bricks, removing bricks, etc., is left to the user.

  On disabling the option, we give the user a warning, and on affirmation
  from the user we stop the shared storage volume and unmount it from all
  the nodes in the cluster.

      gluster volume set all cluster.enable-shared-storage disable

  Change-Id: Idd92d67b93f444244f99ede9f634ef18d2945dbc
  BUG: 1222013
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/10793
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* bitrot/glusterd: scrub option should be disabled once bitrot option is reset  (Gaurav Kumar Garg, 2015-06-03; 1 file, -0/+7)

  Scrubber options should be removed from the dictionary when the user
  resets the bitrot option.

  Change-Id: Ic7e390cf88b9b749f0ada8bbd4632f4cc0c4aff9
  BUG: 1220713
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10936
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
* build: do not #include "config.h" in each file  (Niels de Vos, 2015-05-29; 1 file, -4/+0)

  Instead of including config.h in each file, have it included from the
  compiler command line (the -include option). When a .c file tested for
  a certain #define and config.h was not included, incorrect assumptions
  were made. With this change, that can not happen again.

  BUG: 1222319
  Change-Id: I4f9097b8740b81ecfe8b218d52ca50361f74cb64
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/10808
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* glusterd/snapshot: Return correct errno in events of failure - PATCH 1  (Avra Sengupta, 2015-05-28; 1 file, -2/+3)

      RETCODE    ERROR
      -------------------------------------------
      30800      Internal Error
      30801      Another Transaction In Progress

  Change-Id: Ica7fd2e513b2c28717b6df73cfb2667725dbf057
  BUG: 1212413
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/10313
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* tiering/nfs: duplication of nodes in client graph  (Mohammed Rafi KC, 2015-05-28; 1 file, -1/+1)

  When creating client volfiles, an xlator named tier-dht is loaded for
  each volume. Services like nfs serve one or more volumes, so for each
  volume in the graph a tier-dht xlator was created, and the graph parser
  failed because of the redundant node names in the graph. With this
  change tier-dht is renamed volname-tier-dht.

  Change-Id: I3c9b9c23ddcb853773a8a02be7fd8a5d09a7f972
  BUG: 1222840
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10820
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* dht: make lookup-unhashed=auto do something actually useful  (Jeff Darcy, 2015-05-10; 1 file, -31/+74)

  The key concept here is to determine whether a directory is "clean" by
  comparing its last-known-good topology to the current one for the
  volume. These are stored as "commit hashes" on the directory and the
  volume root respectively. The volume's commit hash changes whenever a
  brick is added or removed and a fix-layout is done. A directory's
  commit hash changes only when a full rebalance (not just fix-layout)
  is done on it.

  If all bricks are present and have a directory commit hash that matches
  the volume commit hash, then we can assume that every file is in its
  "proper" place. Therefore, if we look for a file in that proper place
  and don't find it, we can assume it's not on any other subvolume and
  *safely* skip the global (broadcast to all) lookup.

  Change-Id: Id6ce4593ba1f7daffa74cfab591cb45960629ae3
  BUG: 1219637
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Signed-off-by: Shyam <srangana@redhat.com>
  Reviewed-on: http://review.gluster.org/7702
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
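  A hedged sketch of the resulting decision (names illustrative, not the
  actual dht symbols):

      if (all_subvols_up && dir_commit_hash == vol_commit_hash) {
              /* layout is known-good: a miss at the hashed subvolume
               * is authoritative, so skip the broadcast lookup */
              lookup_everywhere = _gf_false;
      }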
* cli/tiering: display hot tier, and cold tier separately  (Mohammed Rafi KC, 2015-05-09; 1 file, -0/+11)

  CLI commands display the brick information without a way to distinguish
  the hot tier from the cold tier. This patch changes all the CLI-related
  output, without changing the corresponding xml output. It changes the
  following things:

      >> gluster volume info

      Volume Name: patchy
      Type: Tier
      Volume ID: 7745d367-811a-4fe9-a500-d04e7afa94bf
      Status: Created
      Number of Bricks: 3 x 2 = 6
      Transport-type: tcp
      Hot Bricks:
      Brick1: hostname:/home/brick21
      Brick2: hostname:/home/brick20
      Cold Bricks:
      Brick3: hostname:/home/brick19
      Brick4: hostname:/home/brick16
      Brick5: hostname:/home/brick17
      Brick6: hostname:/home/brick18

      >> gluster volume status

      Status of volume: patchy
      Gluster process                             TCP Port  RDMA Port  Online  Pid
      ------------------------------------------------------------------------------
      Hot Bricks:
      Brick hostname:/home/brick21                49152     0          Y       4690
      Brick hostname:/home/brick20                49153     0          Y       4707
      Cold Bricks:
      Brick hostname:/home/brick19                49154     0          Y       4724
      Brick hostname:/home/brick16                49155     0          Y       4741
      Brick hostname:/home/brick17                49156     0          Y       4758
      Brick hostname:/home/brick18                49157     0          Y       4775
      NFS Server on localhost                     2049      0          Y       4793

      Task Status of Volume patchy
      ------------------------------------------------------------------------------
      There are no active volume tasks

      >> gluster volume status patchy detail

      Status of volume: patchy
      Hot Bricks:
      ------------------------------------------------------------------------------
      Brick            : Brick hostname:/home/brick21
      TCP Port         : 49162
      RDMA Port        : 0
      Online           : Y
      Pid              : 22677
      File System      : ext4
      Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
      Mount Options    : rw,seclabel,relatime,data=ordered
      Inode Size       : 256
      Disk Space Free  : 127.3GB
      Total Disk Space : 165.4GB
      Inode Count      : 11026432
      Free Inodes      : 10998043
      ------------------------------------------------------------------------------
      Brick            : Brick hostname:/home/brick20
      TCP Port         : 49161
      RDMA Port        : 0
      Online           : Y
      Pid              : 22660
      File System      : ext4
      Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
      Mount Options    : rw,seclabel,relatime,data=ordered
      Inode Size       : 256
      Disk Space Free  : 127.3GB
      Total Disk Space : 165.4GB
      Inode Count      : 11026432
      Free Inodes      : 10998043
      Cold Bricks:
      ------------------------------------------------------------------------------
      Brick            : Brick hostname:/home/brick19
      TCP Port         : 49157
      RDMA Port        : 0
      Online           : Y
      Pid              : 22501
      File System      : ext4
      Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
      Mount Options    : rw,seclabel,relatime,data=ordered
      Inode Size       : 256
      Disk Space Free  : 127.3GB
      Total Disk Space : 165.4GB
      Inode Count      : 11026432
      Free Inodes      : 10998043
      ------------------------------------------------------------------------------
      Brick            : Brick hostname:/home/brick16
      TCP Port         : 49158
      RDMA Port        : 0
      Online           : Y
      Pid              : 22518
      File System      : ext4
      Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
      Mount Options    : rw,seclabel,relatime,data=ordered
      Inode Size       : 256
      Disk Space Free  : 127.3GB
      Total Disk Space : 165.4GB
      Inode Count      : 11026432
      Free Inodes      : 10998043
      ------------------------------------------------------------------------------
      Brick            : Brick hostname:/home/brick17
      TCP Port         : 49159
      RDMA Port        : 0
      Online           : Y
      Pid              : 22535
      File System      : ext4
      Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
      Mount Options    : rw,seclabel,relatime,data=ordered
      Inode Size       : 256
      Disk Space Free  : 127.3GB
      Total Disk Space : 165.4GB
      Inode Count      : 11026432
      Free Inodes      : 10998043
      ------------------------------------------------------------------------------
      Brick            : Brick hostname:/home/brick18
      TCP Port         : 49160
      RDMA Port        : 0
      Online           : Y
      Pid              : 22552
      File System      : ext4
      Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
      Mount Options    : rw,seclabel,relatime,data=ordered
      Inode Size       : 256
      Disk Space Free  : 127.3GB
      Total Disk Space : 165.4GB
      Inode Count      : 11026432
      Free Inodes      : 10998043

  Change-Id: I7d584eb8782129c12876cce2ba8ffba6c0a620bd
  BUG: 1206546
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10328
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: remove replace brick with data migration support from cli/glusterd  (Gaurav Kumar Garg, 2015-05-07; 1 file, -149/+10)

  The replace-brick operation with data migration support has been
  deprecated in gluster. With this fix the replace-brick command supports
  only one command:

      gluster volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}

  Change-Id: Ib81d49e5d8e7eaa4ccb5830cfec2bc081191b43b
  BUG: 1094119
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10101
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* quota/marker: turn off inode quotas by default  (vmallika, 2015-05-06; 1 file, -14/+40)

  Inode quota is a new feature implemented in glusterfs-3.7. If quota is
  enabled on an older version and the cluster is upgraded to a new
  version, we can hit a setxattr spike during self-heal of inode quotas.
  So when quota is enabled, turn off inode-quotas with an xlator option.

  With this patch we still account for inode quotas, but only when a
  write operation is performed on a particular file. Users will be able
  to query inode quotas once the inode-quota xlator option is enabled.

  Change-Id: I52fb28bf7024989ce7bb08ac63a303bf3ec1ec9a
  BUG: 1209430
  Signed-off-by: vmallika <vmallika@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/10152
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* NFS-Ganesha : Locking global options file  (Meghana Madhusudhan, 2015-05-06; 1 file, -0/+26)

  The global option "gluster features.ganesha enable" writes into the
  global 'option' file. The snapshot feature also writes into the same
  file. To handle concurrent multiple transactions correctly, a new lock
  has to be introduced on this file. Every operation using this file
  needs to contend for the new lock type.

  Change-Id: Ia8a324d2a466717b39f2700599edd9f345b939a9
  BUG: 1200254
  Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
  Reviewed-on: http://review.gluster.org/10130
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: soumya k <skoduri@redhat.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* glusterd: gluster volume status should show status of bitrot and scrubber daemon  (Gaurav Kumar Garg, 2015-05-04; 1 file, -0/+98)

  The command gluster volume status <VOLNAME> should show the status of
  the bitrot and scrubber daemons and their pid information.

  Along with displaying bitrot and scrubber daemon information in the
  gluster volume status command, there should be commands to show their
  individual status separately:

      gluster volume status <VOLNAME> bitd    # only bitd daemon information
      gluster volume status <VOLNAME> scrub   # only scrubber daemon information

  Change-Id: Id86aae1156c8c599347c98e2a538f294d37376e4
  BUG: 1209752
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10175
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Kaushal M <kaushal@redhat.com>
* NFS-Ganesha: Handling CLI commands when NFS-Ganesha keys are set  (Meghana Madhusudhan, 2015-04-30; 1 file, -0/+23)

  When ganesha.enable is set to on and features.ganesha is enabled, there
  are a few behaviour changes that should be seen in other volume
  operations:

  1. ganesha.enable can be set to 'on' only when features.ganesha is set
     to 'enable'.
  2. When a gluster volume is started, and if the ganesha.enable key was
     set to 'on', the volume should automatically be exported via
     NFS-Ganesha.
  3. When ganesha.enable is set to 'on' and a volume is stopped, that
     volume should be unexported via NFS-Ganesha.
  4. gluster vol reset <volname>: if ganesha.enable was set to on, then
     unexport the volume via NFS-Ganesha.
  5. gluster vol reset all: if features.ganesha is set to enable, as part
     of reset all, set it to disable. This translates to tearing down the
     cluster.

  All the above problems are fixed by checking the global key and value;
  depending on the value, specific functions are called. Also, functions
  related to global commands are moved to cli-cmd-global.c.

  The commit phase of features.ganesha enable/disable runs the
  ganesha-ha.sh setup/teardown respectively. Before the script begins, it
  is important that the NFS-Ganesha service starts on all the HA nodes.
  Having the service start commands in the commit phase could lead to
  problems, so the prerequisite service start commands are moved to the
  'stage' phase.

  Change-Id: I5a256f94f8e1310ddcd5369f329b7168b2a24c47
  BUG: 1200265
  Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
  Reviewed-on: http://review.gluster.org/10283
  Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* libglusterfs: Implementation of sync lock as recursive lock to avoid crash.  (anand, 2015-04-28; 1 file, -1/+1)

  Problem: In glusterd we use the big lock, implemented on top of the
  synctask framework, for thread synchronization, and RCU locks for data
  consistency. The synctask framework swaps threads if no worker pool
  threads are available; because of this, rcu_read_lock and
  rcu_read_unlock could happen on different threads (which urcu-bp does
  not allow), resulting in a glusterd crash.

  Fix: To avoid releasing the sync lock (big lock) in the middle of an
  RCU critical section, implement the sync lock as a recursive lock.

  More details: http://www.spinics.net/lists/gluster-devel/msg14632.html

  Change-Id: I2b56c1caf3f0470f219b1adcaf62cce29cdc6b88
  BUG: 1211640
  Signed-off-by: anand <anekkunt@redhat.com>
  Reviewed-on: http://review.gluster.org/10285
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
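  A sketch of the resulting usage, assuming the SYNC_LOCK_RECURSIVE
  attribute this patch adds to synclock_init():

      synclock_init (&priv->big_lock, SYNC_LOCK_RECURSIVE);

      synclock_lock (&priv->big_lock);
      synclock_lock (&priv->big_lock);   /* re-entry by the same task is
                                          * now legal, not a deadlock */
      synclock_unlock (&priv->big_lock);
      synclock_unlock (&priv->big_lock);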
* features/trash : fixing trash dir option  (Jiffin Tony Thottan, 2015-04-22; 1 file, -3/+14)

  Previously the problem was caused by a buffer overflow of a variable
  used in the code. This patch fixes it.

  Change-Id: I3df5e06044470022f9475d93d33447db35384da2
  BUG: 1132465
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
  Reviewed-on: http://review.gluster.org/10215
  Tested-by: NetBSD Build System
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anoop C S <achiraya@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: support for tier volumes 'detach start' and 'detach commit'  (Dan Lambright, 2015-04-22; 1 file, -3/+17)

  These commands work in a manner analogous to rebalancing when removing
  a brick. The existing migration daemon detects "detach start" and
  switches to moving data off the hot tier. While in this state all
  lookups are directed to the cold tier.

      gluster v detach-tier <vol> start
      gluster v detach-tier <vol> commit

  The status and stop cli commands shall be submitted separately.

  Change-Id: I24fda5cc3ba74f5fb8aa9a3234ad51f18b80a8a0
  BUG: 1205540
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Signed-off-by: root <root@localhost.localdomain>
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/10108
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Tested-by: NetBSD Build System
* glusterd: Replace transaction peers lists  (Kaushal M, 2015-04-13; 1 file, -20/+47)

  Transaction peers lists were used in GlusterD to identify peers
  belonging to a transaction. This was needed to prevent newly added
  peers performing partial transactions, which could be incorrect. It was
  accomplished by creating a separate transaction peers list at the
  beginning of every transaction. A transaction peers list referenced the
  peerinfo data structures of the peers which were present at the
  beginning of the transaction.

  RCU protection of peerinfos referenced by a transaction peers list is a
  hard problem and difficult to do correctly. To have proper RCU
  protection of peerinfos, the transaction peers lists have been replaced
  by an alternative method to identify peers that belong to a
  transaction: use the global peers list along with generation numbers.
  This change introduces a global peer-list generation number, and a
  generation number for each peerinfo object. Whenever a peerinfo object
  is created, the global generation number is bumped, and the peerinfo's
  generation number is set to the bumped global generation.

  With the above changes, the algorithm to identify peers belonging to a
  transaction with RCU protection is as follows:
  - At the beginning of a transaction, the current global generation
    number is saved.
  - To identify if a peer belongs to the transaction:
    - Start an RCU read critical section.
    - For each peer in the global peers list: if the peer's generation
      number is not greater than the saved generation number, continue
      with the action on the peer.
    - End the RCU read critical section.

  The above algorithm guarantees that:
  - the peers list is not modified when a transaction is iterating
    through it;
  - the transaction actions are only done on peers that were present when
    the transaction started.

  But, as a transaction could iterate over the peers list multiple times,
  the algorithm cannot guarantee that the same set of peers will be
  selected every time: a peer could get deleted between two iterations of
  the list within a transaction. This problem existed with the
  transaction peers lists as well, but unlike before it will no longer
  lead to invalid memory access and potential crashes. This problem will
  be addressed separately.

  This change was developed on the git branch at [1]. This commit is a
  combination of the following commits on the development branch:

      52ded5b Add timespec_cmp
      44aedd8 Add create timestamp to peerinfo
      7bcbea5 Fix some silly mistakes
      13e3241 Add start time to opinfo
      17a6727 Use timestamp comparisions to identify xaction peers
              instead of a xaction peer list
      3be05b6 Correct check for peerinfo age
      70d5b58 Use read-critical sections for peer list iteration
      ba4dbca Use peerinfo timestamp checks in op-sm instead of xaction
              peer list
      d63f811 Add more peer status checks when iterating peers list in
              glusterd-syncop
      1998a2a Timestamp based peer list traversal of mgmtv3 xactions
      f3c1a42 Remove transaction peer lists
      b8b08ee Remove unused labels
      32e5f5b Remove 'npeers' usage
      a075fb7 Remove 'npeers' from mgmt-v3 framework
      12c9df2 Use generation number instead of timestamps.
      9723021 Remove timespec_cmp
      80ae2c6 Remove timespec.h include
      a9479b0 Address review comments on 10147/4

  [1]: https://github.com/kshlm/glusterfs/tree/urcu

  Change-Id: I9be1033525c0a89276f5b5d83dc2eb061918b97f
  BUG: 1205186
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/10147
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
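  A sketch of the iteration pattern the algorithm above describes,
  assuming the urcu list primitives glusterd uses and the peerinfo
  generation field added by this patch:

      rcu_read_lock ();
      cds_list_for_each_entry_rcu (peerinfo, &conf->peers, uuid_list) {
              if (peerinfo->generation > txn_generation)
                      continue;   /* peer joined after the txn began */
              /* ... perform the transaction action on this peer ... */
      }
      rcu_read_unlock ();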
* build: make contrib/uuid dependency optional  (Niels de Vos, 2015-04-10; 1 file, -1/+1)

  On Linux systems we should use the libuuid from the distribution and
  not bundle and statically link the contrib/uuid/ bits.
  libglusterfs/src/compat-uuid.h has been introduced and should become an
  abstraction layer for different UUID APIs. Non-Linux operating systems
  should implement their compatibility layer there. Once all operating
  systems have an implementation in compat-uuid.h, we can remove
  contrib/uuid/ from the repository completely.

  Change-Id: I345e5357644be2521685e00358bb8c83c4ea0577
  BUG: 1206587
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/10129
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Avoid conflict between contrib/uuid and system uuid  (Emmanuel Dreyfus, 2015-04-04; 1 file, -29/+29)

  glusterfs relies on the Linux uuid implementation, whose API is
  incompatible with most other systems' uuid. As a result, libglusterfs
  has to embed contrib/uuid, which is the Linux implementation, on
  non-Linux systems. This implementation is incompatible with the
  system's built-in one, but the symbols have the same names.

  Usually this is not a problem: when we link with -lglusterfs, libc's
  symbols are trumped. However there is a problem when a program not
  linked with -lglusterfs dlopen()s a glusterfs component. In such a
  case, libc's uuid implementation is already loaded in the calling
  program, and it will be used instead of libglusterfs's implementation,
  causing crashes.

  A possible workaround is to pre-load libglusterfs in the calling
  program (using LD_PRELOAD on NetBSD for instance), but such a mechanism
  is neither portable nor flexible. A much better approach is to rename
  libglusterfs's uuid_* functions to gf_uuid_* to avoid any possible
  conflict. This is what this change does.

  BUG: 1206587
  Change-Id: I9ccd3e13afed1c7fc18508e92c7beb0f5d49f31a
  Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
  Reviewed-on: http://review.gluster.org/10017
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
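  A small sketch of the renamed wrappers (same semantics as the old
  uuid_* calls):

      uuid_t src = {0,};
      uuid_t dst = {0,};

      gf_uuid_generate (src);            /* was uuid_generate() */
      gf_uuid_copy (dst, src);           /* was uuid_copy()     */
      if (!gf_uuid_compare (dst, src))   /* was uuid_compare()  */
              gf_uuid_clear (dst);       /* was uuid_clear()    */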
* Xlators : Fixed typos  (Manikandan Selvaganesh, 2015-04-02; 1 file, -1/+1)

  Change-Id: I948f85cb369206ee8ce8b8cd5e48cae9adb971c9
  BUG: 1075417
  Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
  Reviewed-on: http://review.gluster.org/9529
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
* glusterd: group server-quorum related code together  (Krishnan Parthasarathi, 2015-04-01; 1 file, -0/+1)

  The server-quorum implementation was spread across many files. This
  patch brings it all together into a single file, namely
  glusterd-server-quorum.c. All exported functions are available via
  glusterd-server-quorum.h.

  Change-Id: I8fd77114b5bc6b05127cb8a6a641e0295f0be7bb
  BUG: 1205592
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/9492
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Maintain local xaction_peer list for op-sm  (Atin Mukherjee, 2015-03-26; 1 file, -10/+12)

  http://review.gluster.org/9269 addresses maintaining local
  xaction_peers in the syncop and mgmt_v3 frameworks. This patch
  maintains a local xaction_peers list for the op-sm framework as well.

  Change-Id: Idd8484463fed196b3b18c2df7f550a3302c6e138
  BUG: 1204727
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/9972
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: CLI commands to create and manage tiered volumes.  (Dan Lambright, 2015-03-19; 1 file, -1/+9)

  A tiered volume is a normal volume with some number of new bricks
  representing "hot" storage. The "hot" bricks can be attached or
  detached dynamically to a normal volume. When this happens, a new graph
  is constructed. The root of the new graph is an instance of the tier
  translator. One subvolume of the tier translator leads to the old
  volume, and another leads to the new hot bricks.

      attach-tier <VOLNAME> [<replica> <COUNT>] <NEW-BRICK> ... [force]
      volume detach-tier <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force>
      gluster volume rebalance <volume> tier start
      gluster volume rebalance <volume> tier stop
      gluster volume rebalance <volume> tier status

  The "tier start" CLI command starts a server-side daemon. The daemon
  initiates file-level migration based on caching policies. The daemon's
  status can be monitored and stopped. Note that development of the "tier
  status" command is incomplete; it will be added in a subsequent patch.

  When the "hot" storage is detached, the tier translator is removed from
  the graph and the tiered volume reverts to its original state as
  described in the volume's info file.

  For more background and design see the feature page [1].

  [1] http://www.gluster.org/community/documentation/index.php/Features/data-classification

  Change-Id: Ic8042ce37327b850b9e199236e5be3dae95d2472
  BUG: 1194753
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/9753
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Vijay Bellur <vbellur@redhat.com>
* cli/glusterd: cli command implementation for bitrot features  (Gaurav Kumar Garg, 2015-03-18; 1 file, -0/+10)

  CLI command for bitrot features:

      volume bitrot <volname> enable|disable

  The above command will enable/disable the bitrot feature for the given
  volume.

  BUG: 1170075
  Change-Id: Ie84002ef7f479a285688fdae99c7afa3e91b8b99
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Signed-off-by: Anand nekkunti <anekkunt@redhat.com>
  Signed-off-by: Dominic P Geevarghese <dgeevarg@redhat.com>
  Reviewed-on: http://review.gluster.org/9866
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* CLI : Global option for NFS-Ganesha  (Meghana Madhusudhan, 2015-03-18; 1 file, -3/+18)

  A new global CLI option has been introduced for NFS-Ganesha:
  gluster features.ganesha enable/disable. This option is persistent and
  shall be inherited by new volumes created after it is set.

      gluster features.ganesha enable

  carries out the following functions:
  1. Disables gluster-nfs across the cluster.
  2. Starts the NFS-Ganesha server on a subset of nodes and exports '/'.
  3. Creates the HA cluster for NFS-Ganesha.
  4. Writes the option into the global config file.

      gluster features.ganesha disable

  1. Stops the NFS-Ganesha server.
  2. Tears down the HA cluster for NFS-Ganesha.

  With this change the older volume set options with keys
  "nfs-ganesha.host" and "nfs-ganesha.enable" are no longer supported.
  This commit only has the CLI-related changes; another patch will be
  submitted to support this feature entirely.

  Change-Id: Ie4b66a16c23b33b795738654b9a68f8e2c34efe3
  BUG: 1188184
  Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
  Reviewed-on: http://review.gluster.org/9538
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
* Features/trash : Combined patches for trash translator  (Anoop C S, 2015-03-16; 1 file, -0/+78)

  This is the combined patch set for supporting the trash feature:
  http://www.gluster.org/community/documentation/index.php/Features/Trash

  The current patch includes the following features:
  * volume set options for enabling trash globally and exclusively for
    internal operations like self-heal and re-balance
  * volume set options for setting the eliminate path, trash directory
    path and maximum trashable file size
  * test script for checking the functionality of the feature
  * brief documentation on different aspects of the trash feature

  Change-Id: Ic7486982dcd6e295d1eba0f4d5ee6d33bf1b4cb3
  BUG: 1132465
  Signed-off-by: Anoop C S <achiraya@redhat.com>
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
  Reviewed-on: http://review.gluster.org/8312
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Protect the peer list and peerinfos with RCU.  (Kaushal M, 2015-03-16; 1 file, -16/+16)

  The peer list and the peerinfo objects are now protected using RCU.
  Design patterns described in Paul McKenney's RCU dissertation [1]
  (sections 5 and 6) have been used to convert existing non-RCU protected
  code to RCU protected code. Currently we are only targeting a guarantee
  of the existence of the peerinfo objects, i.e., we are only looking to
  protect deletes, not all updaters. We chose this, as protecting all
  updates is a much more complex task.

  The steps used to accomplish this are:

  1. Remove all long-lived direct references to peerinfo objects (apart
     from the peerinfo list). This includes references in
     glusterd_peerctx_t (RPC), glusterd_friend_sm_event_t (friend state
     machine) and others. This way no one holds a reference to a deleted
     peerinfo object.
  2. Replace the direct references with indirect references, i.e., use
     the peer uuid and peer hostname as indirect references to the
     peerinfo object. Any reader or updater now uses the indirect
     references to get to the actual peerinfo object, using
     glusterd_peerinfo_find. Cases where a peerinfo cannot be found are
     handled gracefully.
  3. Readers get and use a peerinfo object only within an RCU read
     critical section. This prevents the object from being deleted/freed
     while in actual use.
  4. The deletion of a peerinfo object is done in an ordered manner
     (glusterd_peerinfo_destroy). The object is first removed from the
     peerinfo list using an atomic list remove, but the list head is not
     reset, to allow existing list readers to complete correctly. We wait
     for readers to complete before resetting the list head. This removes
     the object from the list completely. After this no new readers can
     get a reference to the object, and it can be freed.

  This change was developed on the git branch at [2]. This commit is a
  combination of the following commits on the development branch:

      d7999b9 Protect the glusterd_conf_t->peers_list with RCU.
      0da85c4 Synchronize before INITing peerinfo list head after
              removing from list.
      32ec28a Add missing rcu_read_unlock
      8fed0b8 Correctly exit read critical section once peer is found.
      63db857 Free peerctx only on rpc destruction
      56eff26 Cleanup style issues
      e5f38b0 Indirection for events and friend_sm
      3c84ac4 In __glusterd_probe_cbk goto unlock only if peer already
              exists
      141d855 Address review comments on 9695/1
      aaeefed Protection during peer updates
      6eda33d Revert "Synchronize before INITing peerinfo list head
              after removing from list."
      f69db96 Remove unneeded line
      b43d2ec Address review comments on 9695/4
      7781921 Address review comments on 9695/5
      eb6467b Add some missing semi-colons
      328a47f Remove synchronize_rcu from
              glusterd_friend_sm_transition_state
      186e429 Run part of glusterd_friend_remove in critical section
      55c0a2e Fix gluster (peer status/ pool list) with no peers
      93f8dcf Use call_rcu to free peerinfo
      c36178c Introduce composite struct, gd_rcu_head

  [1]: http://www.rdrop.com/~paulmck/RCU/RCUdissertation.2004.07.14e1.pdf
  [2]: https://github.com/kshlm/glusterfs/tree/urcu

  Change-Id: Ic1480e59c86d41d25a6a3d159aa3e11fbb3cbc7b
  BUG: 1191030
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/9695
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
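  A sketch of the reader pattern from step 3 above, using
  glusterd_peerinfo_find for the indirect lookup:

      rcu_read_lock ();
      peerinfo = glusterd_peerinfo_find (uuid, hostname);
      if (peerinfo == NULL) {
              rcu_read_unlock ();
              goto out;   /* peer no longer exists; handle gracefully */
      }
      /* ... read peerinfo fields only inside the critical section ... */
      rcu_read_unlock ();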
* mgmt/glusterd: Changes required for disperse volume heal commands  (Pranith Kumar K, 2015-03-10; 1 file, -102/+131)

  - Include xattrop64-watchlist for the index xlator for disperse
    volumes.
  - Change the existing functions to also consider disperse volumes when
    sending commands to the disperse xlators in the self-heal daemon.

  Change-Id: Iae75a5d3dd5642454a2ebf5840feba35780d8adb
  BUG: 1177601
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/9793
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* cluster/ec: Add self-heal-daemon command handlers  (Pranith Kumar K, 2015-03-09; 1 file, -8/+8)

  This patch introduces the changes required in the ec xlator to handle
  index/full heal.

  Index healer threads: The ec xlator starts an index healer thread per
  local brick. This thread wakes up every minute to check if there are
  any files to be healed, based on the indices kept in the index
  directory. Whenever a child_up event comes, the index healer thread
  also wakes up, crawls the indices and triggers heals. When the
  self-heal daemon is disabled on a particular volume, the healer thread
  waits until it is enabled again to perform heals.

  Full healer threads: The ec xlator starts a full healer thread for the
  local subvolume provided by glusterd to perform a full crawl on the
  directory hierarchy and perform heals. Once the crawl completes, the
  thread exits if no more full heals are issued.

  The xl-op prefix GF_AFR_OP was changed to GF_SHD_OP to make it more
  generic.

  Change-Id: Idf9b2735d779a6253717be064173dfde6f8f824b
  BUG: 1177601
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/9787
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>