path: root/xlators/mgmt
...
* geo-rep: Fix acl mounting in mountbroker setup (Kotresh HR, 2015-05-28; 1 file changed, -0/+1)

  Add acl option to geo-rep mount specification template
  (georep_mnt_desc_template) for mountbroker setup.

  Change-Id: I5b93ebb81bd308fc343c3b9e21c36c78acedcbaa
  BUG: 1223741
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/10876
  Tested-by: NetBSD Build System
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
* glusterd/snapshot: Return correct errno in events of failure - PATCH 1 (Avra Sengupta, 2015-05-28; 9 files changed, -26/+77)

      RETCODE    ERROR
      -------------------------------------------
      30800      Internal Error
      30801      Another Transaction In Progress

  Change-Id: Ica7fd2e513b2c28717b6df73cfb2667725dbf057
  BUG: 1212413
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/10313
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* quota: quota.conf backward compatibility fix (vmallika, 2015-05-28; 4 files changed, -13/+163)

  In release-3.7 the format of quota.conf has changed, which causes backward
  compatibility issues during upgrade:
  1) Peer sync between a 3.6 node and a 3.7 node can go wrong.
  2) If the user sets or removes a limit, the file ends up in different
     formats on the 3.6 node and the 3.7 node.

  This patch fixes the issue by:
  1) Restricting the user from executing the quota enable, limit-usage and
     remove commands.
  2) Writing quota.conf in the older format if the op-version is less than 3.6.

  Change-Id: Ib76f5a0a85394642159607a105cacda743e7d26b
  BUG: 1223739
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/10889
  Tested-by: NetBSD Build System
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: function to create duplicate of volinfo should copy subvol_count (Mohammed Rafi KC, 2015-05-28; 1 file changed, -0/+1)

  When we create a duplicate volinfo from an existing volinfo, the variable
  subvol_count is not copied to the new volinfo.

  Change-Id: I943aa7fdf1a2ca5bf57522cb2402b6b3165501ac
  BUG: 1215002
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10761
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
* tiering/rebalance: Use separate pid/socket file for tiering (Mohammed Rafi KC, 2015-05-28; 2 files changed, -4/+16)

  When the promotion/demotion daemon starts, it uses the same pidfile as
  rebalance. This patch introduces a separate pidfile for it.

  Change-Id: Ic484c53f51e00ae6b2d697748a9600b14829e23b
  BUG: 1221970
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10792
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
* tiering/nfs: duplication of nodes in client graph (Mohammed Rafi KC, 2015-05-28; 2 files changed, -3/+3)

  When creating client volfiles, the tier-dht xlator is loaded for each
  volume. Services like NFS serve one or more volumes, so a tier-dht xlator
  is created for every volume in the graph, and the graph parser then fails
  because of the duplicate node name. With this change tier-dht is renamed
  to volname-tier-dht.

  Change-Id: I3c9b9c23ddcb853773a8a02be7fd8a5d09a7f972
  BUG: 1222840
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10820
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* tier: Do not allow detach-tier commands on a non-tiered volume (Mohammed Rafi KC, 2015-05-28; 2 files changed, -1/+19)

  Change-Id: Ic92d25db68e40ef4a4388ef42affd1b3ee5a7ec6
  BUG: 1221270
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10773
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
* tiering: Correct errors in cli and glusterd (Mohammed Rafi KC, 2015-05-28; 1 file changed, -2/+1)

  Problem 1: volume info shows "Cold Bricks" instead of the tier type, e.g.:

      Volume Name: patchy2
      Type: Tier
      Volume ID: 28c25b8d-b8a1-45dc-b4b7-cbd0b344f58f
      Status: Started
      Number of Bricks: 3
      Transport-type: tcp
      Hot Tier :
      Hot Tier Type : Distribute
      Number of Bricks: 1
      Brick1: 10.70.1.35:/home/brick43
      Cold Bricks:
      Cold Tier Type : Distribute
      Number of Bricks: 2
      Brick2: 10.70.1.35:/home/brick19
      Brick3: 10.70.1.35:/home/brick16
      Options Reconfigured:

  Problem 2: Detach-tier sends the rebalance enums. detach-tier has its own
  enum to send with the detach-tier command; using that enum is more
  appropriate.

  Problem 3: The hot_brick count is set wrongly while copying the dictionary
  for the response.

  Change-Id: Icc054a999a679456881bc70511470d32ff8a86e4
  BUG: 1211264
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10768
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
* glusterd/tiering: Exchange tier info during glusterd handshake (Mohammed Rafi KC, 2015-05-28; 1 file changed, -0/+154)

  Change-Id: Ibc2f8eeb32d3e5dfd6945ca8a6d5f0f80a78ebac
  BUG: 1211264
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10449
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
* features/bitrot: reimplement scrubbing frequency (Venky Shankar, 2015-05-28; 1 file changed, -1/+17)

  This patch reimplements the existing scrub-frequency mechanism used to
  schedule scrubber runs. The existing mechanism uses periodic sleeps
  (waking up periodically on minimum granularity) and performs a number of
  tracking checks based on counters and sleep times. This patch does away
  with all the nifty counters and uses a timer-wheel to schedule scrub runs.
  Scheduling changes are performed by merely calculating the new expiry time
  and calling mod_timer() [mod_timer_pending() in some cases], making the
  code more debuggable and easier to follow.

  This also introduces an "hourly" scrubbing tunable as an aid for testing
  scrubbing during the development/testing cycle. One could also implement
  on-demand scrubbing with ease: by invoking mod_timer() with an expiry of
  one (1) second, thereby scheduling a scrub run the very next second.

  Change-Id: I6c7c5f0c6c9f886bf574d88c04cde14b76e60a8b
  BUG: 1224596
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-on: http://review.gluster.org/10893
  Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
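  For illustration, a minimal sketch of the expiry-based scheduling this
  commit describes; the enum values and function name here are assumptions
  for the sketch, not the actual bitrot scrubber symbols:

      #include <time.h>

      enum scrub_freq { SCRUB_HOURLY, SCRUB_DAILY, SCRUB_WEEKLY, SCRUB_MONTHLY };

      /* Map the scrub-frequency tunable to the next absolute expiry time. */
      static time_t
      scrub_next_expiry(enum scrub_freq freq, time_t now)
      {
          switch (freq) {
          case SCRUB_HOURLY: return now + 60 * 60;
          case SCRUB_DAILY:  return now + 24 * 60 * 60;
          case SCRUB_WEEKLY: return now + 7 * 24 * 60 * 60;
          default:           return now + 30 * 24 * 60 * 60; /* monthly */
          }
      }

  On a frequency change the scrubber would recompute this expiry and re-arm
  its single pending timer (the mod_timer()-style call mentioned above)
  rather than adjusting sleep counters.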
* glusterd: fix repeated "connection to nfssvc failed" msgs (Krishnan Parthasarathi, 2015-05-28; 2 files changed, -13/+8)

  ... and disable the reconnect timer on rpc_clnt_disconnect.

  Root Cause
  ----------
  The gluster-NFS service wouldn't be started if there are no started
  volumes that have the nfs service enabled for them. Before this fix we
  would initiate a connect even when the gluster-NFS service wasn't
  (re)started. Compounding that, glusterd_conn_disconnect doesn't disable
  the reconnect timer, so it is possible that the reconnect timer was in
  execution when the timer event was attempted to be removed.

  Change-Id: Iadcb5cff9eafefa95eaf3a1a9413eeb682d3aaac
  BUG: 1222378
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/10830
  Tested-by: NetBSD Build System
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Kaushal M <kaushal@redhat.com>
* glusterd: allocate peerid to store in frame->cookie (Atin Mukherjee, 2015-05-28; 3 files changed, -42/+94)

  Commit a1de3b05 was taking the peerid from the stack, storing it in
  frame->cookie and referring to it in the subsequent callback. The
  existence of this variable is not guaranteed in the cbk since it is not
  dynamically allocated. The fix is to dynamically manage the peerid in the
  frame cookie.

  This patch also fixes a problem in gd_sync_task_begin () where unlock is
  not triggered if the cluster is running with an op-version lower than 3.6,
  resulting in commands failing with "another transaction is in progress".

  Change-Id: I0d22cf663df53ef3769585703944577461061312
  BUG: 1223213
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/10842
  Tested-by: NetBSD Build System
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Fix conf->generation to stop new peers participating in a transaction while the transaction is in progress (Avra Sengupta, 2015-05-27; 1 file changed, -0/+3)

  Every peer gets a generation number during its inception. This generation
  number is used to identify the peer throughout its lifetime. The number is
  assigned based on the current generation number of the system, which is
  incremented with every peer that is added.

  The problem arises when we add a peer and begin a transaction before the
  peer gets an rpc_connect. In such a case the peer is considered in the
  transaction but doesn't participate in it because it isn't connected yet.
  The moment it gets the rpc notification and is connected, it starts
  participating in the transaction and all hell breaks loose.

  To resolve this, we assign the peerinfo a new generation number every time
  it is connected, so that this number is greater than the generation number
  the transaction is acting upon; even though the peer is connected, it will
  not participate in the transaction. We also assign the new generation
  number of the peer to the peerctx, so that the framework that searches for
  peerinfos based on the generation number still functions in the same
  manner.

  Removing ./tests/basic/volume-snapshot-clone.t from bad-tests. Also
  removed the duplicate entry of ./tests/bugs/snapshot/bug-1112559.t from
  bad-tests; the original entry was removed in
  http://review.gluster.org/10840

  Change-Id: Ie25e3ecf59b19535b9cded7449e944221fac97a0
  BUG: 1224290
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/10895
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Kaushal M <kaushal@redhat.com>
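  For illustration, a minimal sketch of the generation check this commit
  describes; the structure and function names are assumptions, not the
  actual glusterd symbols:

      #include <stdbool.h>
      #include <stdint.h>

      struct peer { uint32_t generation; bool connected; };
      struct txn  { uint32_t generation_snapshot; /* generation when the txn began */ };

      /* On every RPC connect the peer is stamped with a fresh, higher generation. */
      static void
      peer_on_connect(struct peer *p, uint32_t *global_generation)
      {
          p->generation = ++(*global_generation);
          p->connected = true;
      }

      /* A peer joins an in-flight transaction only if it was connected (and
       * therefore numbered) before the transaction took its snapshot. */
      static bool
      peer_participates(const struct peer *p, const struct txn *t)
      {
          return p->connected && p->generation <= t->generation_snapshot;
      }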
* glusterd: fix double-free of rebalance process' rpc object (Krishnan Parthasarathi, 2015-05-26; 5 files changed, -8/+72)

  Change-Id: I0c79c4de47a160b1ecf3a8994eedc02e3f5002a9
  BUG: 1223338
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/10872
  Tested-by: NetBSD Build System
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: populate errstr if gd_syncop_submit_request fails (Atin Mukherjee, 2015-05-20; 1 file changed, -0/+3)

  Change-Id: Ie4a3edef5d553fc07de53b46f9485c46a4305245
  BUG: 1219732
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/10659
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* quota/glusterd: on a read call the number of bytes read should equal the buffer length (Gaurav Kumar Garg, 2015-05-14; 1 file changed, -2/+1)

  glusterd crashes when the user tries to set limit-usage on quota, because
  the read call asks for more bytes than the buffer length.

  Change-Id: Ie507eb68ebc0d0daa1012baef1bf724e202e3baa
  BUG: 1221025
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10766
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
* dht: make lookup-unhashed=auto do something actually useful (Jeff Darcy, 2015-05-10; 4 files changed, -31/+87)

  The key concept here is to determine whether a directory is "clean" by
  comparing its last-known-good topology to the current one for the volume.
  These are stored as "commit hashes" on the directory and the volume root
  respectively. The volume's commit hash changes whenever a brick is added
  or removed, and a fix-layout is done. A directory's commit hash changes
  only when a full rebalance (not just fix-layout) is done on it.

  If all bricks are present and have a directory commit hash that matches
  the volume commit hash, then we can assume that every file is in its
  "proper" place. Therefore, if we look for a file in that proper place and
  don't find it, we can assume it's not on any other subvolume and *safely*
  skip the global (broadcast to all) lookup.

  Change-Id: Id6ce4593ba1f7daffa74cfab591cb45960629ae3
  BUG: 1219637
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Signed-off-by: Shyam <srangana@redhat.com>
  Reviewed-on: http://review.gluster.org/7702
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
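  For illustration, a minimal sketch of the "clean directory" decision this
  commit describes; the names are assumptions, not the actual dht symbols:

      #include <stdbool.h>
      #include <stdint.h>

      struct dir_layout {
          uint32_t commit_hash;   /* stored on the directory at rebalance time */
          int      subvols_total; /* subvolumes the layout spans */
          int      subvols_up;    /* subvolumes currently reachable */
      };

      static bool
      can_skip_broadcast_lookup(const struct dir_layout *dir, uint32_t vol_commit_hash)
      {
          /* Any missing brick means a file could be hiding there. */
          if (dir->subvols_up != dir->subvols_total)
              return false;

          /* A stale hash means the topology changed (add/remove-brick plus
           * fix-layout) without a full rebalance of this directory. */
          if (dir->commit_hash != vol_commit_hash)
              return false;

          /* Directory is "clean": a miss on the hashed subvolume is authoritative. */
          return true;
      }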
* cli/tiering: volume info should display details about tier (Mohammed Rafi KC, 2015-05-10; 6 files changed, -3/+104)

      >> gluster volume info patchy

      Volume Name: patchy
      Type: Tier
      Volume ID: 8bf1a1ca-6417-484f-821f-18973a7502a8
      Status: Created
      Number of Bricks: 8
      Transport-type: tcp
      Hot Tier :
      Hot Tier Type : Replicate
      Number of Bricks: 1 x 2 = 2
      Brick1: hostname:/home/brick30
      Brick2: hostname:/home/brick31
      Cold Bricks:
      Cold Tier Type : Disperse
      Number of Bricks: 1 x (4 + 2) = 6
      Brick3: hostname:/home/brick20
      Brick4: hostname:/home/brick21
      Brick5: hostname:/home/brick23
      Brick6: hostname:/home/brick24
      Brick7: hostname:/home/brick25
      Brick8: hostname:/home/brick26

  Change-Id: I7b9025af81263ebecd641b4b6897b20db8b67195
  BUG: 1212400
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10339
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli/tiering: display hot tier and cold tier separately (Mohammed Rafi KC, 2015-05-09; 3 files changed, -0/+47)

  The cli commands display the brick information without a way to
  distinguish the hot tier and the cold tier. This patch changes all the
  cli-related output, without changing the corresponding xml output.

  This patch changes the following:

      >> gluster volume info

      Volume Name: patchy
      Type: Tier
      Volume ID: 7745d367-811a-4fe9-a500-d04e7afa94bf
      Status: Created
      Number of Bricks: 3 x 2 = 6
      Transport-type: tcp
      Hot Bricks:
      Brick1: hostname:/home/brick21
      Brick2: hostname:/home/brick20
      Cold Bricks:
      Brick3: hostname:/home/brick19
      Brick4: hostname:/home/brick16
      Brick5: hostname:/home/brick17
      Brick6: hostname:/home/brick18

      >> gluster volume status

      Status of volume: patchy
      Gluster process                TCP Port  RDMA Port  Online  Pid
      ------------------------------------------------------------------------------
      Hot Bricks:
      Brick hostname:/home/brick21   49152     0          Y       4690
      Brick hostname:/home/brick20   49153     0          Y       4707
      Cold Bricks:
      Brick hostname:/home/brick19   49154     0          Y       4724
      Brick hostname:/home/brick16   49155     0          Y       4741
      Brick hostname:/home/brick17   49156     0          Y       4758
      Brick hostname:/home/brick18   49157     0          Y       4775
      NFS Server on localhost        2049      0          Y       4793

      Task Status of Volume patchy
      ------------------------------------------------------------------------------
      There are no active volume tasks

      >> gluster volume status patchy detail

      Status of volume: patchy
      Hot Bricks:
      ------------------------------------------------------------------------------
      Brick                : Brick hostname:/home/brick21
      TCP Port             : 49162
      RDMA Port            : 0
      Online               : Y
      Pid                  : 22677
      File System          : ext4
      Device               : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
      Mount Options        : rw,seclabel,relatime,data=ordered
      Inode Size           : 256
      Disk Space Free      : 127.3GB
      Total Disk Space     : 165.4GB
      Inode Count          : 11026432
      Free Inodes          : 10998043
      ------------------------------------------------------------------------------
      Brick                : Brick hostname:/home/brick20
      TCP Port             : 49161
      RDMA Port            : 0
      Online               : Y
      Pid                  : 22660
      File System          : ext4
      Device               : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
      Mount Options        : rw,seclabel,relatime,data=ordered
      Inode Size           : 256
      Disk Space Free      : 127.3GB
      Total Disk Space     : 165.4GB
      Inode Count          : 11026432
      Free Inodes          : 10998043
      Cold Bricks:
      ------------------------------------------------------------------------------
      Brick                : Brick hostname:/home/brick19
      TCP Port             : 49157
      RDMA Port            : 0
      Online               : Y
      Pid                  : 22501
      File System          : ext4
      Device               : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
      Mount Options        : rw,seclabel,relatime,data=ordered
      Inode Size           : 256
      Disk Space Free      : 127.3GB
      Total Disk Space     : 165.4GB
      Inode Count          : 11026432
      Free Inodes          : 10998043
      ------------------------------------------------------------------------------
      Brick                : Brick hostname:/home/brick16
      TCP Port             : 49158
      RDMA Port            : 0
      Online               : Y
      Pid                  : 22518
      File System          : ext4
      Device               : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
      Mount Options        : rw,seclabel,relatime,data=ordered
      Inode Size           : 256
      Disk Space Free      : 127.3GB
      Total Disk Space     : 165.4GB
      Inode Count          : 11026432
      Free Inodes          : 10998043
      ------------------------------------------------------------------------------
      Brick                : Brick hostname:/home/brick17
      TCP Port             : 49159
      RDMA Port            : 0
      Online               : Y
      Pid                  : 22535
      File System          : ext4
      Device               : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
      Mount Options        : rw,seclabel,relatime,data=ordered
      Inode Size           : 256
      Disk Space Free      : 127.3GB
      Total Disk Space     : 165.4GB
      Inode Count          : 11026432
      Free Inodes          : 10998043
      ------------------------------------------------------------------------------
      Brick                : Brick hostname:/home/brick18
      TCP Port             : 49160
      RDMA Port            : 0
      Online               : Y
      Pid                  : 22552
      File System          : ext4
      Device               : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
      Mount Options        : rw,seclabel,relatime,data=ordered
      Inode Size           : 256
      Disk Space Free      : 127.3GB
      Total Disk Space     : 165.4GB
      Inode Count          : 11026432
      Free Inodes          : 10998043

  Change-Id: I7d584eb8782129c12876cce2ba8ffba6c0a620bd
  BUG: 1206546
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10328
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* bitrot: Volfile generation should not proceed if node doesn't have any brick (Gaurav Kumar Garg, 2015-05-09; 2 files changed, -104/+105)

  glusterd crashes when bitrot is enabled on a distributed volume from a
  node which doesn't host a brick. While generating the volfile, glusterd
  should check the number of bricks on that node. If the node doesn't have
  any brick, then graph generation for bitrot and scrubber should not
  proceed further.

  Change-Id: I2158113e20e93738cde2a22fd73f0ae6b22aae9e
  BUG: 1219784
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10664
  Tested-by: NetBSD Build System
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* geo-rep: Update Not Started to Created in code and doc (Aravinda VK, 2015-05-09; 1 file changed, -1/+1)

  The "Not Started" status is now "Created"; replaced the "Not Started"
  string in code and doc.

  Change-Id: If7d606c2cc8156e41291e7eebe9d0da4ad7ac28d
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  BUG: 1219937
  Reviewed-on: http://review.gluster.org/10698
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
* glusterd/tiering: cksum mismatch for tiered volume (Mohammed Rafi KC, 2015-05-09; 1 file changed, -2/+2)

  Once we updated the volinfo from the originator node, the hot type was
  overwritten with the volume type. The same dictionary was then sent to the
  peer node to perform the commit of attach-tier, which again caused the hot
  type to be replaced with the volume type, eventually ending up in a cksum
  mismatch.

  Change-Id: I402dceb4d672d0b3a7b91a92f52c1057050dbedc
  BUG: 1215660
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Conflicts:
      xlators/mgmt/glusterd/src/glusterd-brick-ops.c
  Reviewed-on: http://review.gluster.org/10406
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Invoking daemon reconfigure functions correctly (anand, 2015-05-09; 2 files changed, -7/+13)

  Problem: the scrub and bitd reconfigure functions were not invoked if
  quota is not enabled.

  Reason: in glusterd_svcs_reconfigure, if quota is not enabled, the
  function returns in the middle without calling the bitd and scrub
  reconfigure functions.

  Fix: if quota is not enabled on the volume, skip the quota reconfigure and
  continue with the other daemon reconfigures.

  This patch also addresses the state-dump issue for bitd and quotad (logs
  the scrub and bitd info into the state dump).

  Change-Id: I39ea004b70c95543c08496245be595b3ea044a29
  BUG: 1219355
  Signed-off-by: anand <anekkunt@redhat.com>
  Reviewed-on: http://review.gluster.org/10622
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: NetBSD Build System
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* cli/tiering: Enhance cli output for tiering (Mohammed Rafi KC, 2015-05-08; 2 files changed, -1/+10)

  Fix for handling cli output for attach-tier and detach-tier.

  Change-Id: I4d17f4b09612754fe1b8cec6c2e14927029b9678
  BUG: 1211562
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10284
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* geo-rep: Fix corrupt gsyncd output (Aravinda VK, 2015-05-08; 1 file changed, -2/+2)

  When gsyncd fails with a Python traceback, glusterd fails to parse the
  gsyncd output and shows an error.

  BUG: 1219937
  Change-Id: Ic32fd897c49a5325294a6588351b539c6e124338
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/10694
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* tiering: Remove unwanted check for tier type volume (Mohammed Rafi KC, 2015-05-08; 2 files changed, -2/+3)

  Change-Id: I2def61ebf348558e5f6a138265e3329d9a5407a3
  BUG: 1216898
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10494
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
* glusterd: add counter support for tiered volumes (Dan Lambright, 2015-05-08; 2 files changed, -0/+57)

  This fix adds support to view the number of promoted or demoted files from
  the cli. The mechanism is isomorphic to checking the status of volumes
  being rebalanced:

      gluster volume rebalance <vol> tier status

  Change-Id: I1b11ca27355ceec36c488967c23531202030e205
  BUG: 1213063
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/10292
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* tiering: Do not allow some operations on tiered volume (Mohammed Rafi KC, 2015-05-07; 5 files changed, -0/+111)

  Some operations like add-brick, remove-brick, rebalance and replace-brick
  are not supported on a tiered volume, but there was no code-level check
  for this. This patch adds that check.

  Change-Id: I12689f4e902cf0cceaf6f7f29c71057305024977
  BUG: 1205624
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10349
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* snapshot: Handshake with glusterd is not proper (Mohammed Rafi KC, 2015-05-07; 1 file changed, -20/+127)

  If a snap is activated or deactivated while a node is down, the data is
  not retrieved properly during the glusterd handshake. With this patch, a
  version check is made when glusterd starts running. If there is a mismatch
  in version, the peers will exchange the healed data.

  Change-Id: I8bd2a347723db2194d3fa73295878b4dd2e9be5d
  BUG: 1122377
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/9664
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* dht/rebalance: Throttle rebalance (Susant Palai, 2015-05-07; 1 file changed, -0/+36)

  The throttle value is "normal" by default. For throttling down, a thread
  is put to sleep; for throttling up, gf_defrag_process_dir wakes up the
  sleeping threads.

  Change-Id: I74d530e3effd6e60e6eec81ccc8ff65789fa9c13
  Signed-off-by: Susant Palai <spalai@redhat.com>
  Reviewed-on: http://review.gluster.org/10526
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Tested-by: Raghavendra G <rgowdapp@redhat.com>
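  For illustration, a minimal sketch of the sleep/wake pattern this commit
  describes; the names are assumptions, not the actual gf_defrag structures:

      #include <pthread.h>

      struct throttle {
          pthread_mutex_t lock;
          pthread_cond_t  cond;
          int             allowed_threads; /* raised/lowered by the throttle setting */
      };

      /* A migrator thread checks in here before picking up the next file;
       * threads whose index exceeds the allowed count are put to sleep. */
      static void
      migrator_throttle_point(struct throttle *t, int my_index)
      {
          pthread_mutex_lock(&t->lock);
          while (my_index >= t->allowed_threads)
              pthread_cond_wait(&t->cond, &t->lock);
          pthread_mutex_unlock(&t->lock);
      }

      /* Throttling up: raise the limit and wake the sleeping migrators. */
      static void
      throttle_set(struct throttle *t, int new_limit)
      {
          pthread_mutex_lock(&t->lock);
          t->allowed_threads = new_limit;
          pthread_cond_broadcast(&t->cond);
          pthread_mutex_unlock(&t->lock);
      }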
* bitrot/glusterd: Bitrot scrub pause/resume should give proper error (Gaurav Kumar Garg, 2015-05-07; 1 file changed, -6/+40)

  The bitrot scrubber pause/resume command should give a proper error
  message if the scrubber is already paused/resumed and the user tries to
  perform the same operation on the volume again.

  Change-Id: I01ad69c80f03b177535a4e5f1c95ab7709a804b0
  BUG: 1210684
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10209
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Use generation number to find peerinfo in RPC notifications (Kaushal M, 2015-05-07; 5 files changed, -8/+53)

  The generation number for each peerinfo object is unique. It can be used
  to find the exact peerinfo object, which is required for peer RPC
  notifications. Using hostname and uuid matching to find peerinfos can
  return incorrect peerinfos in certain cases, like multi-network peer
  probe, which could cause updates to happen on incorrect peerinfos.

  Change-Id: Ia0aada8214fd6d43381e5afd282e08d53a277251
  BUG: 1215018
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/10495
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: remove replace-brick with data migration support from cli/glusterd (Gaurav Kumar Garg, 2015-05-07; 10 files changed, -1830/+94)

  The replace-brick operation with data migration support has been
  deprecated in gluster. With this fix the replace-brick command supports
  only one form:

      gluster volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}

  Change-Id: Ib81d49e5d8e7eaa4ccb5830cfec2bc081191b43b
  BUG: 1094119
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10101
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* quota/marker: turn off inode quotas by default (vmallika, 2015-05-06; 6 files changed, -29/+150)

  Inode quota is a new feature implemented in glusterfs-3.7. If quota is
  enabled in an older version and the cluster is upgraded to the new
  version, we can hit a setxattr spike during self-heal of inode quotas. So,
  when quota is enabled, inode quotas are turned off with an xlator option.

  With this patch we still account for inode quotas, but only when a write
  operation is performed on a particular file. Users will be able to query
  inode quotas once the inode-quota xlator option is enabled.

  Change-Id: I52fb28bf7024989ce7bb08ac63a303bf3ec1ec9a
  BUG: 1209430
  Signed-off-by: vmallika <vmallika@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/10152
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* NFS-Ganesha: Locking the global options file (Meghana Madhusudhan, 2015-05-06; 4 files changed, -17/+84)

  The global option "gluster features.ganesha enable" writes into the global
  'option' file. The snapshot feature also writes into the same file. To
  handle concurrent multiple transactions correctly, a new lock has to be
  introduced on this file. Every operation using this file needs to contend
  for the new lock type.

  Change-Id: Ia8a324d2a466717b39f2700599edd9f345b939a9
  BUG: 1200254
  Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
  Reviewed-on: http://review.gluster.org/10130
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: soumya k <skoduri@redhat.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* Cache-invalidation: set to on/off depending on ganesha.enable value (Meghana Madhusudhan, 2015-05-05; 1 file changed, -1/+12)

  Multi-head NFS-Ganesha servers need upcall (cache-invalidation) support to
  notify them of any changes to files in the backend. Hence the upcall
  xlator option "features.cache-invalidation" needs to be enabled when
  ganesha.enable is set to 'on', and disabled when ganesha.enable is set to
  'off'.

  Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
  Change-Id: Ifdd1d50e48a2bd2a388f73c0b9e318c6092ac190
  BUG: 1213752
  Reviewed-on: http://review.gluster.org/10581
  Reviewed-by: soumya k <skoduri@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* glusterd/tiering: disable tiering if cluster runs older gluster versions (Dan Lambright, 2015-05-05; 1 file changed, -1/+43)

  Do not allow attach-tier or detach-tier to work if any of the nodes is
  running gluster code older than 3.7.

  Change-Id: Id9af8f4057f6fad9cb703ec7645bc01eccb11fc1
  BUG: 1218287
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/10531
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: Coverity fix (Gaurav Kumar Garg, 2015-05-05; 2 files changed, -4/+21)

  CID 1293504: calling xlator_set_option without checking the return value.
  CID 1293502: dereferencing a pointer (xl) that might be null when calling
  xlator_set_option.
  CID 1293500: the value assigned to ret from dict_get_int32(dict, "type",
  &type) is overwritten before it can be used.

  Change-Id: I5314fb399480df70bd77bc374e3b573f2efd5710
  BUG: 1093692
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10201
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* geo-rep: Status Enhancements (Aravinda VK, 2015-05-05; 1 file changed, -271/+208)

  Discussion in gluster-devel:
  http://www.gluster.org/pipermail/gluster-devel/2015-April/044301.html

  MASTER NODE                - Master Volume Node
  MASTER VOL                 - Master Volume name
  MASTER BRICK               - Master Volume Brick
  SLAVE USER                 - Slave User to which Geo-rep session is established
  SLAVE                      - <SLAVE_NODE>::<SLAVE_VOL> used in Geo-rep Create command
  SLAVE NODE                 - Slave Node to which Master worker is connected
  STATUS                     - Worker Status (Created, Initializing, Active, Passive, Faulty, Paused, Stopped)
  CRAWL STATUS               - Crawl type (Hybrid Crawl, History Crawl, Changelog Crawl)
  LAST_SYNCED                - Last Synced Time (Local Time in CLI output and UTC in XML output)
  ENTRY                      - Number of entry operations pending (resets on worker restart)
  DATA                       - Number of data operations pending (resets on worker restart)
  META                       - Number of meta operations pending (resets on worker restart)
  FAILURES                   - Number of failures
  CHECKPOINT TIME            - Checkpoint set time (Local Time in CLI output and UTC in XML output)
  CHECKPOINT COMPLETED       - Yes/No or N/A
  CHECKPOINT COMPLETION TIME - Checkpoint completed time (Local Time in CLI output and UTC in XML output)

  XML output:

      <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
      <cliOutput>
        <geoRep>
          <volume>
            <name>
            <sessions>
              <session>
                <session_slave>
                <pair>
                  <master_node>
                  <master_brick>
                  <slave_user>
                  <slave/>
                  <slave_node>
                  <status>
                  <crawl_status>
                  <entry>
                  <data>
                  <meta>
                  <failures>
                  <checkpoint_completed>
                  <master_node_uuid>
                  <last_synced>
                  <checkpoint_time>
                  <checkpoint_completion_time>

  BUG: 1212410
  Change-Id: I944a6c3c67f1e6d6baf9670b474233bec8f61ea3
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/10121
  Tested-by: NetBSD Build System
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* afr: add arbitration support (Ravishankar N, 2015-05-05; 1 file changed, -15/+1)

  Add logic in afr to work in conjunction with the arbiter xlator when a
  replica 3 arbiter volume is created. More specifically, this patch:

  * Enables full locks for afr data transactions for such volumes.
  * Removes the upfront marking of pending xattrs at the time of pre-op and
    defers it to post-op. (This is an arbiter-independent change and is made
    for all afr transactions.)
  * After the pre-op stage, checks if we can proceed with the fop stage
    without ending up in split-brain by examining the changelog xattrs.
  * Unwinds the fop with failure if only one source was available at the
    time of pre-op and the fop happened to fail on that particular source
    brick.
  * Skips data self-heal if the arbiter brick is the only source available.
  * Adds the arbiter-count option to the shd graph.

  This patch is a part of the arbiter logic implementation for 3-way AFR,
  details of which can be found at http://review.gluster.org/#/c/9656/

  Change-Id: I9603db9d04de5626eb2f4d8d959ef5b46113561d
  BUG: 1199985
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/10258
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* mgmt/glusterd: Porting messages to new logging framework (Nandaja Varma, 2015-05-04; 5 files changed, -150/+258)

  Change-Id: I25f3536446798ea1cffd6b5dfbb3d2398766fcf3
  BUG: 1194640
  Signed-off-by: Nandaja Varma <nandaja.varma@gmail.com>
  Reviewed-on: http://review.gluster.org/9808
  Tested-by: NetBSD Build System
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
* feature/changelog: Capture path for deletes (Kotresh HR, 2015-05-04; 1 file changed, -1/+6)

  PROBLEM:
  There is no way to get the path of a deleted file from its gfid in the
  changelog, since the file is already deleted.

  SOLUTION:
  Do a recursive readlink on the parent gfid in the backend .glusterfs path
  to get the complete path in the I/O call path in the changelog translator,
  and capture it in the callback. The captured path is relative to the brick
  root. The field separator used is '\0', e.g.:

      ......\0<pgfid>/bname\0<relative-path>\0<next-record>

  ADDITIONAL REQUIRED CHANGES:

  1. The changelog translator option "changelog.capture-del-path" is
     introduced to enable or disable capturing of the deleted entry path,
     e.g.:

         gluster vol set <vol-name> changelog.capture-del-path on/off

     If capture-del-path is disabled, '\0' is captured instead of the
     relative path, e.g.:

         ......\0<pgfid>/bname\0\0\0<next-record>

  2. The minor number in the changelog version is bumped up from v1.1 to
     v1.2.

  3. If the recursive readlink fails for some reason, '\0' is captured in
     place of <relative-path>, e.g. (same as when capture-del-path is
     disabled):

         ......\0<pgfid>/bname\0\0\0<next-record>

  4. If the bname argument passed to "resolve_pargfid_to_path" is NULL and
     pargfid is ROOT, "." is returned. This is not the case with changelog,
     where bname is always passed. This is applicable to other consumers of
     the "resolve_pargfid_to_path" routine.

  NOTE: Changelog parsers should consider the above new changes and parse
  accordingly.

  Change-Id: I040ed429b5aa7d391033fc6a540edbf07fc37827
  BUG: 1214561
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/10288
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: NetBSD Build System
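  For illustration, a sketch of the recursive-readlink resolution this
  commit describes, walking the .glusterfs directory handles on the brick.
  The function name mirrors the routine mentioned above, but this body is an
  assumption for illustration, not the actual changelog helper:

      #include <limits.h>
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      #define ROOT_GFID "00000000-0000-0000-0000-000000000001"

      /* Resolve a directory gfid to a path relative to the brick root by
       * following .glusterfs/<aa>/<bb>/<gfid> symlinks, which point to
       * ../../<xx>/<yy>/<parent-gfid>/<basename>. */
      static int
      resolve_pargfid_to_path(const char *brick, const char *pargfid,
                              char *out, size_t outlen)
      {
          char path[PATH_MAX] = "";
          char cur[64];

          snprintf(cur, sizeof(cur), "%s", pargfid);

          while (strcmp(cur, ROOT_GFID) != 0) {
              char handle[PATH_MAX], target[PATH_MAX], tmp[PATH_MAX];
              ssize_t n;

              snprintf(handle, sizeof(handle), "%s/.glusterfs/%.2s/%.2s/%s",
                       brick, cur, cur + 2, cur);
              n = readlink(handle, target, sizeof(target) - 1);
              if (n < 0)
                  return -1;            /* caller records '\0' on failure */
              target[n] = '\0';

              /* target: ../../xx/yy/<parent-gfid>/<basename> */
              char *base = strrchr(target, '/');
              if (!base)
                  return -1;
              *base++ = '\0';
              char *parent = strrchr(target, '/');
              parent = parent ? parent + 1 : target;

              /* Prepend "<basename>/" to the path built so far. */
              snprintf(tmp, sizeof(tmp), "%s%s%s", base, path[0] ? "/" : "", path);
              snprintf(path, sizeof(path), "%s", tmp);
              snprintf(cur, sizeof(cur), "%s", parent);
          }

          snprintf(out, outlen, "%s", path[0] ? path : ".");
          return 0;
      }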
* glusterd: Enable readdir-ahead by default on new volumes (anand, 2015-05-04; 1 file changed, -1/+17)

  With gluster-3.7, 'performance.readdir-ahead' will be enabled by default
  on new volumes when the cluster op-version supports it.

  Change-Id: I44e76a69e7d1c11e6dfad72c941caf887bb810ee
  BUG: 1216187
  Signed-off-by: anand <anekkunt@redhat.com>
  Reviewed-on: http://review.gluster.org/10433
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* quota: support for inode quota in quota.conf (vmallika, 2015-05-04; 3 files changed, -111/+180)

  Currently, when a quota limit is set, the corresponding gfid is written to
  quota.conf. This patch adds support for storing inode-quota limits in
  quota.conf and stores an additional byte for each gfid to differentiate
  between a usage quota limit and an inode quota limit.

  Change-Id: I444d7399407594edd280e640681679a784d4c46a
  BUG: 1202244
  Signed-off-by: vmallika <vmallika@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/10069
  Tested-by: NetBSD Build System
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
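  For illustration, a sketch of the two record layouts implied above: the
  older quota.conf stores bare 16-byte gfids, while the newer format appends
  one type byte per gfid. The constants and their values are assumptions,
  not the actual glusterd definitions:

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      #define GFID_SIZE 16

      enum quota_conf_type {
          QUOTA_LIMIT_USAGE   = 1,  /* byte-count limit  (assumed value) */
          QUOTA_LIMIT_OBJECTS = 2,  /* inode-count limit (assumed value) */
      };

      /* Read one entry; record_size is 16 for the old format, 17 for the new. */
      static int
      read_quota_conf_entry(FILE *fp, size_t record_size,
                            unsigned char gfid[GFID_SIZE], uint8_t *type)
      {
          unsigned char buf[GFID_SIZE + 1];

          if (record_size < GFID_SIZE || record_size > sizeof(buf))
              return -1;
          if (fread(buf, record_size, 1, fp) != 1)
              return -1;

          memcpy(gfid, buf, GFID_SIZE);
          *type = (record_size > GFID_SIZE) ? buf[GFID_SIZE] : QUOTA_LIMIT_USAGE;
          return 0;
      }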
* glusterd: gluster volume status should show status of bitrot and scrubber daemon (Gaurav Kumar Garg, 2015-05-04; 4 files changed, -0/+128)

  The command "gluster volume status <VOLNAME>" should show the status of
  the bitrot and scrubber daemons and their pid information.

  Along with displaying bitrot and scrubber daemon information in the
  gluster volume status command, there should be commands to show their
  individual status separately:

  Command to show only bitd daemon information:
      gluster volume status <VOLNAME> bitd

  Command to show only scrubber daemon information:
      gluster volume status <VOLNAME> scrub

  Change-Id: Id86aae1156c8c599347c98e2a538f294d37376e4
  BUG: 1209752
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10175
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Kaushal M <kaushal@redhat.com>
* features/bitrot: Per volume bitrot translator (Gaurav Kumar Garg, 2015-05-03; 5 files changed, -154/+329)

  Currently, whatever bitrot/scrubber tunable value the user sets for one
  volume is applied to all other volumes as well. Each volume should act on
  its own bitrot/scrubber tunable values. To handle the bitrot/scrubber
  tunables independently for each volume, the bitrot and scrubber
  translators should run separately for each volume.

  Change-Id: I1d9379508afe6cfd2f78e3ebf29c829c362d84a9
  BUG: 1170075
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10352
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* ctr/libgfdb: Performance enhancer for unlink/create/rename/link fops (Joseph Fernandes, 2015-05-01; 2 files changed, -1/+10)

  1) A ctr_link_consistency option is provided for the ctr xlator so that
     the user can choose to switch it on or off.

     For link consistency we do a double update, i.e. we mark the link
     during the wind and update/delete the link during the unwind. This has
     a performance hit. We give a choice here whether link consistency needs
     to be spot-on or not using the link_consistency flag; when it is off
     there is only one link update.

  2) For delete, the wind-time recording is moved to the unwind path.

     Special performance case: updating the wind time in the unwind for
     delete is done there because in the wind path we will not know whether
     it is the last link or not. For the last link there is no use in
     updating any wind or unwind time!

  Change-Id: I209472fb816f939db4a868b97ba053b028f17ea6
  BUG: 1217786
  Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  Reviewed-on: http://review.gluster.org/10170
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd/quota/tiering: Fixing volgen of quotad (Joseph Fernandes, 2015-05-01; 1 file changed, -4/+10)

  quotad's graph generation was happening wrongly for tiered volumes. The
  missing check has been inserted.

  Change-Id: I5554bc5280b0fbaec750e9008fdd930ad53a774f
  BUG: 1214219
  Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  Reviewed-on: http://review.gluster.org/10474
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
* NFS-Ganesha: Handling CLI commands when NFS-Ganesha keys are set (Meghana Madhusudhan, 2015-04-30; 6 files changed, -68/+238)

  When ganesha.enable is set to on and features.ganesha is enabled, a few
  behaviour changes should be seen in other volume operations:

  1. ganesha.enable can be set to 'on' only when features.ganesha is set to
     'enable'.
  2. When a gluster volume is started and ganesha.enable was set to 'on',
     the volume should automatically be exported via NFS-Ganesha.
  3. When ganesha.enable is set to 'on' and a volume is stopped, that volume
     should be unexported via NFS-Ganesha.
  4. gluster vol reset <volname>: if ganesha.enable was set to 'on', the
     volume is unexported via NFS-Ganesha.
  5. gluster vol reset all: if features.ganesha is set to 'enable', it is
     set to 'disable' as part of reset all. This translates to tearing down
     the cluster.

  All of the above are handled by checking the global key and value and,
  depending on the value, calling the specific functions. Functions related
  to global commands are also moved to cli-cmd-global.c.

  The commit phase of features.ganesha enable/disable runs the ganesha-ha.sh
  setup/teardown respectively. Before the script begins, it is important
  that the NFS-Ganesha service starts on all the HA nodes. Having the
  service-start commands in the commit phase could lead to problems, so the
  prerequisite service-start commands are moved to the 'stage' phase.

  Change-Id: I5a256f94f8e1310ddcd5369f329b7168b2a24c47
  BUG: 1200265
  Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
  Reviewed-on: http://review.gluster.org/10283
  Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* features/bitrot: Use global timer wheel (Venky Shankar, 2015-04-30; 1 file changed, -1/+17)

  Change-Id: I761927ea263b4144b851881f25791fda5b794f59
  BUG: 1170075
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-on: http://review.gluster.org/10381
  Tested-by: NetBSD Build System
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>