path: root/xlators/mgmt/glusterd/src/glusterd-volume-ops.c
Commit history for this file, newest first. Each entry lists the commit message, author, date, and diffstat (files changed, -deleted/+added lines).
* core: run many bricks within one glusterfsd process (Jeff Darcy, 2017-02-01, 1 file, -2/+3)
  This patch adds support for multiple brick translator stacks running in a single brick server process. This reduces our per-brick memory usage by approximately 3x, and our appetite for TCP ports even more. It also creates potential to avoid process/thread thrashing, and to improve QoS by scheduling more carefully across the bricks, but realizing that potential will require further work.

  Multiplexing is controlled by the "cluster.brick-multiplex" global option. By default it's off, and bricks are started in separate processes as before. If multiplexing is enabled, then *compatible* bricks (mostly those with the same transport options) will be started in the same process.

  Backport of:
  > Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
  > BUG: 1385758
  > Reviewed-on: https://review.gluster.org/14763

  Change-Id: I4bce9080f6c93d50171823298fdf920258317ee8
  BUG: 1418091
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: https://review.gluster.org/16496
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
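  Usage sketch: the commit message above names only the "cluster.brick-multiplex" global option; assuming the standard syntax for setting a global option, multiplexing would be toggled with something like:
    gluster volume set all cluster.brick-multiplex on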
* tier : Tier as a service (hari gowtham, 2017-01-16, 1 file, -3/+10)
  tierd is implemented by separating it from the rebalance process. The commands affected:
  1) Attach tier will trigger this process instead of the old one.
  2) tier start and tier start force will also trigger this process.
  3) volume status [tier] will show the tier daemon as a process instead of a task, and normal tier status and tier detach status work.
  4) tier stop implemented.
  5) detach tier implemented separately along with new detach tier status.
  6) volume tier volname status will work using the changes.
  7) volume set works.

  This patch has separated the tier translator from the legacy DHT rebalance code. It now sends the RPCs from the CLI to glusterd separate to the DHT rebalance code. The daemon is now a service, similar to the snapshot daemon, and can be viewed using the volume status command. The code for the validation and commit phase are the same as the earlier tier validation code in DHT rebalance. The “brickop” phase has been changed so that the status command can use this framework.

  The service management framework is now used. DHT rebalance does not use this framework.

  This service framework takes care of:
  *) spawning the daemon, killing it and other such processes.
  *) volume set options, which are written on the volfile.
  *) restart and reconfigure functions. Restart is to restart the daemon at two points: 1) after gluster goes down and comes up, and 2) to stop detach tier.
  *) reconfigure is used to make immediate volfile changes. By doing this, we don’t restart the daemon. It has the code to rewrite the volfile for topological changes too (which comes into place during add and remove brick).

  With this patch the log, pid, and volfile are separated and put into respective directories.

  Change-Id: I3681d0d66894714b55aa02ca2a30ac000362a399
  BUG: 1313838
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/13365
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: hari gowtham <hari.gowtham005@gmail.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: Handle volinfo->refcnt properly during volume start command (Avra Sengupta, 2016-12-12, 1 file, -10/+10)
  While running the volume start command, the refcnt of the volume is incremented. At the end of the command, the refcnt should also be decremented. This is currently not the case. This patch makes sure the refcnt is also decremented at the end of the volume start command.

  Change-Id: I017b5039be5948df41dde6bc89d2955d5d18971f
  BUG: 1403780
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/16108
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd : coverity fix for string overflow (Muthu-vigneshwaran, 2016-12-06, 1 file, -1/+1)
  CID: 1357872, 1357873, 1351695

  BUG: 789278
  Change-Id: I2ee01a6054326f35de621ee7a1f2afd09c5738fe
  Signed-off-by: Muthu-vigneshwaran <mvignesh@redhat.com>
  Reviewed-on: http://review.gluster.org/15989
  Tested-by: Muthu Vigneshwaran
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Manikandan Selvaganesh <manikandancs333@gmail.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* cluster/afr: CLI for granular entry heal enablement/disablement (Krutika Dhananjay, 2016-11-28, 1 file, -15/+46)
  When there are already existing non-granular indices created that are yet to be healed, if the granular-entry-heal option is toggled from 'off' to 'on', AFR self-heal, whenever it kicks in, will try to look for granular indices in 'entry-changes'. Because of the absence of name indices, granular entry healing logic will fail to heal these directories, and worse yet unset pending extended attributes with the assumption that there are no entries that need heal.

  To get around this, a new CLI is introduced which will invoke the glfsheal program to figure out whether, at the time an attempt is made to enable granular entry heal, there are pending heals on the volume OR there are one or more bricks that are down. If either of them is true, the command will be failed with the appropriate error.

  New CLI: gluster volume heal <VOL> granular-entry-heal {enable,disable}

  Change-Id: I1f4fe8162813b9068e198965d94169fee4adc099
  BUG: 1370410
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/15747
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: enable default configurations post upgrade to >= 3.9.0 versions (Atin Mukherjee, 2016-10-16, 1 file, -8/+17)
  From 3.8.0 onwards, volume options like nfs.disable and transport.address-family have a default configuration value. If a volume was created before an upgrade to 3.8.0 or higher, the default options are not set post upgrade. This patch takes care of putting the default values in the op-version bump-up workflow. However, these changes will only reflect from 3.9.0 onwards.

  Change-Id: I9a8d848cd08d87ddcb80dbeac27eaae097d9cbeb
  BUG: 1379223
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/15568
  Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: soumya k <skoduri@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd : Introduce reset brick (Anuradha Talur, 2016-08-29, 1 file, -1/+1)
  The command basically allows replace brick with src and dst bricks as same.

  Usage:
  gluster v reset-brick <volname> <hostname:brick-path> start
  This command kills the brick to be reset. Once this command is run, admin can do other manual operations that they need to do, like configuring some options for the brick. Once this is done, resetting the brick can be continued with the following options.

  gluster v reset-brick <vname> <hostname:brick> <hostname:brick> commit {force}
  Does the job of resetting the brick. 'force' option should be used when the brick already contains volinfo id.

  Problem: On doing a disk-replacement of a brick in a replicate volume the following 2 scenarios may occur:
  a) there is a chance that reads are served from this replaced-disk brick, which leads to empty reads.
  b) potential data loss if next writes succeed only on replaced brick, and heal is done to other bricks from this one.

  Solution: After disk-replacement, make sure that the reset-brick command is run for that brick so that pending markers are set for the brick and it is not chosen as source for reads and heal. But, as of now replace-brick for the same brick-path is not allowed. In order to fix the above mentioned problem, same brick-path replace-brick is needed.

  With this patch reset-brick commit {force} will be allowed even when source and destination <hostname:brickpath> are identical as long as 1) destination brick is not alive and 2) source and destination brick have the same brick uuid and path. Also, the destination brick after replace-brick will use the same port as the source brick.

  Change-Id: I440b9e892ffb781ea4b8563688c3f85c7a7c89de
  BUG: 1266876
  Signed-off-by: Anuradha Talur <atalur@redhat.com>
  Reviewed-on: http://review.gluster.org/12250
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Ashish Pandey <aspandey@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* glusterd: fix unused variable warnings/errors (Kaleb S. KEITHLEY, 2016-08-29, 1 file, -12/+0)
  http://review.gluster.org/14085 fixes a/the "leak" - via the generated rpc/xdr headers - of pragmas that mask these warnings. However 14085 won't pass the smoke test until all the warnings are fixed.

  Change-Id: Ibe060202efb1f175a4348ff0927a7b44303d96e6
  BUG: 1369124
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/15283
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
* Revert "glusterd-ganesha : copy ganesha export configuration files during ↵Jiffin Tony Thottan2016-08-261-1/+1
| | | | | | | | | | | | | | | | | | | | | reboot" This reverts commit f71e2fa49af185779b9f43e146effd122d4e9da0. Reason: As part of sync up node reboot this patch copies ganesha export conf file from a source node. This change is no more require if the export files are available in shared storage. Change-Id: Id9c1ae78377bbd7d5d80aa1c14f534e30feaae97 BUG: 1355956 Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com> Reviewed-on: http://review.gluster.org/14907 Reviewed-by: soumya k <skoduri@redhat.com> Smoke: Gluster Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* glusterd/cli: coverity fixes (Atin Mukherjee, 2016-06-28, 1 file, -1/+1)
  A downstream coverity run has revealed few of the following coverity defects. Since the downstream code is a clone of a specific upstream branch the defects hold true for the upstream as well.

  Defect type: NEGATIVE_RETURNS
  xlators/mgmt/glusterd/src/glusterd-rpc-ops.c:641: negative_returns: "op_errno" is passed to a parameter that cannot be negative.

  Defect type: BUFFER_SIZE_WARNING
  xlators/mgmt/glusterd/src/glusterd-volume-ops.c:2124: buffer_size_warning: Calling strncpy with a maximum size argument of 261 bytes on destination array "volinfo->volname" of size 261 bytes might leave the destination string unterminated.

  Defect type: BUFFER_SIZE_WARNING
  xlators/mgmt/glusterd/src/glusterd-volgen.c:4888: buffer_size_warning: Calling strncpy with a maximum size argument of 261 bytes on destination array "volinfo->volname" of size 261 bytes might leave the destination string unterminated.

  Defect type: STRING_OVERFLOW
  xlators/mgmt/glusterd/src/glusterd-volgen.c:3449: string_overflow: You might overrun the 256 byte destination string "tmp_volname" by writing 261 bytes from "volinfo->volname".

  Defect type: BUFFER_SIZE_WARNING
  xlators/mgmt/glusterd/src/glusterd-utils.c:3392: buffer_size_warning: Calling strncpy with a maximum size argument of 261 bytes on destination array "new_volinfo->volname" of size 261 bytes might leave the destination string unterminated.

  Defect type: NO_EFFECT
  xlators/mgmt/glusterd/src/glusterd-utils.c:7359: remediation: Was "rebal->rebalance_id" formerly declared as a pointer?

  Defect type: USE_AFTER_FREE
  xlators/mgmt/glusterd/src/glusterd-utils.c:7115: pass_freed_arg: Passing freed pointer "volinfo" as an argument to "glusterd_friend_contains_vol_bricks".

  Defect type: DEADCODE
  cli/src/cli-cmd-parser.c:1767: dead_error_begin: Execution cannot reach this statement: "ret = -1;".

  Change-Id: Ie941bdf31923e2f39618dd94bfae16fdb3ad65f1
  BUG: 789278
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/14818
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* glusterd: fail volume delete if one of the nodes is down (Atin Mukherjee, 2016-06-10, 1 file, -0/+6)
  Deleting a volume on a cluster where one of the nodes in the cluster is down is buggy, since once that node comes back the resync of the same volume will happen. Till we bring in the soft delete feature tracked in http://review.gluster.org/12963, this is a safeguard to block the volume deletion.

  Change-Id: I9c13869c4a7e7a947f88842c6dc6f231c0eeda6c
  BUG: 1344407
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/14681
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
* glusterd: copy real_path from older brickinfo during brick import (Atin Mukherjee, 2016-05-18, 1 file, -2/+2)
  In glusterd_import_new_brick () new_brickinfo->real_path will not be populated for the first time, and hence if the underlying file system is bad for the same brick, import will fail, resulting in inconsistent configuration data. Fix is to populate real_path from the old brickinfo object.

  Also there were many cases where we were unnecessarily calling realpath() and that may result in failure. For example, if a remove-brick is executed with a brick whose underlying file system has crashed, remove-brick fails since the realpath() call fails. We do not need to call realpath() here as the value is of no use. Hence passing construct_realpath as _gf_false in glusterd_volume_brickinfo_get_by_brick () is a must in such cases.

  Change-Id: I7ec93871dc9e616f5d565ad5e540b2f1cacaf9dc
  BUG: 1335531
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/14306
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd-ganesha : copy ganesha export configuration files during reboot (Jiffin Tony Thottan, 2016-05-05, 1 file, -1/+1)
  glusterd creates the export conf file for ganesha using a hook script during volume start and ganesha_manage_export() for the volume set command. But this routine is not added in the glusterd restart scenario.

  Consider the following case: in a three node cluster a volume got exported via ganesha while one of the nodes is offline (glusterd is not running). When the node comes back online, that volume is not exported on that node due to the above mentioned issue.

  Also I have removed unused variables from glusterd_handle_ganesha_op().

  For this patch to work, the pcs cluster should be running on that node.

  Change-Id: I5b2312c2f3cef962b1f795b9f16c8f0a27f08ee5
  BUG: 1330097
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
  Reviewed-on: http://review.gluster.org/14063
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: soumya k <skoduri@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: default value of nfs.disable, change from false to true (Kaleb S KEITHLEY, 2016-04-27, 1 file, -1/+1)
  Next step in the eventual deprecation of the glusterfs nfs server in favor of ganesha.nfsd. Also replace several open-coded strings with constants.

  Change-Id: If52f5e880191a14fd38e69b70a32b0300dd93a50
  BUG: 1092414
  Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/13738
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
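  Usage sketch: this commit only changes the built-in default of the nfs.disable option; assuming the standard per-volume set syntax, gluster-nfs can still be re-enabled for a single volume with something like:
    gluster volume set <VOLNAME> nfs.disable off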
* glusterd: populate brickinfo->real_path conditionally (Atin Mukherjee, 2016-04-11, 1 file, -2/+4)
  glusterd_brickinfo_new_from_brick () is called from multiple places and one of them is glusterd_brick_rpc_notify, where it's very well possible that an underlying brick's file system has crashed and a disconnect event has been received. In this case glusterd tries to build the brickinfo from the brickid in the RPC request; however the same fails as glusterd_brickinfo_new_from_brick () fails from realpath.

  Fix is to skip populating real_path if it's a disconnect event.

  Change-Id: I9d9149c64a9cf2247abb731f219c1b1eef037960
  BUG: 1325841
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/13965
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* glusterd: clean dead initializations (Prasanna Kumar Kalever, 2016-04-04, 1 file, -1/+0)
  This patch cleans unused variable initializations as well as their declarations, which are nowhere used in the code.

  Change-Id: I784165fc6e91297758079699dd9583d5203b7793
  BUG: 1253831
  Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
  Reviewed-on: http://review.gluster.org/11929
  Tested-by: Prasanna Kumar Kalever <pkalever@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* afr: add mtime based split-brain resolution to CLI (Ravishankar N, 2016-03-29, 1 file, -0/+1)
  Extended the CLI to include support for split-brain resolution based on mtime. The command syntax is:

  $ gluster volume heal <VOLNAME> split-brain latest-mtime <FILE>

  where <FILE> can be either the full file name as seen from the root of the volume (or) the gfid-string representation of the file.

  Change-Id: I7a16f72ff1a4495aa69f43f22758a9404e958b4f
  BUG: 1321322
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/13828
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
* glusterd: volume should not start when server quorum is not met (Gaurav Kumar Garg, 2016-02-22, 1 file, -0/+14)
  Currently, when server quorum is not met, executing the # gluster volume start [force] command still starts the volume. With this patch, if server-side quorum is not met, starting the volume is prevented.

  Change-Id: I39734b2dcf8e90c3c68bf2762d8350aecc82cc38
  BUG: 1308402
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/13442
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
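  For context, a sketch of how server-side quorum is typically configured; these option names are not part of the commit message above and are assumptions about the usual glusterd quorum settings:
    gluster volume set <VOLNAME> cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51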
* glusterd: Bug fixes for IPv6 support (Nithin D, 2016-02-20, 1 file, -0/+42)
  Problem: Glusterd not working using ipv6 transport. The idea is that, with proper glusterd.vol configuration:
  1. glusterd needs to listen on the default port (24007) as an IPv6 TCP listener.
  2. Volume creation/deletion/mounting/add-bricks/delete-bricks/peer-probe needs to work using ipv6 addresses.
  3. Bricks need to listen on ipv6 addresses.
  All the above functionality is needed to say that glusterd supports ipv6 transport, and this is broken.

  Fix: When the "option transport.address-family inet6" option is present in the glusterd.vol file, it is made sure that glusterd creates listeners using ipv6 sockets only, and the same information is saved inside the brick volume files used by the glusterfsd brick processes when they are starting.

  Tests Run:
  Regression tests using ./run-tests.sh
  IPv4: Ran manually till tests/basic/rpm.t.
  IPv6: (Need to add the above mentioned config and also add an entry for "hostname ::1" in /etc/hosts.) Started failing at ./tests/basic/glusterd/arbiter-volume-probe.t and ran successfully till here.
  Unit tests using IPv6: peer probe, add-bricks, remove-bricks, create volume, replace-bricks, start volume, stop volume, delete volume.

  Change-Id: Iebc96e6cce748b5924ce5da17b0114600ec70a6e
  BUG: 1117886
  Signed-off-by: Nithin D <nithind1988@yahoo.in>
  Reviewed-on: http://review.gluster.org/11988
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
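  A minimal glusterd.vol sketch showing where the quoted option would sit; only the transport.address-family line comes from the commit message above, the rest is an assumed, typical management-volume skeleton:
    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket
        option transport.address-family inet6
    end-volume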
* tier/glusterd: Check before starting tier daemon during volume start (Mohammed Rafi KC, 2015-12-08, 1 file, -4/+6)
  We start the tier daemon when a volume is started without looking into the previous status. The problem with that is that if detach-tier has been started, a subsequent volume start force actually starts the tier daemon. This also fixes a problem where the tier daemon is not starting after detach stop.

  Change-Id: I15b56a711e12f0e24f5ab123561258bd448621f7
  BUG: 1286974
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12833
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd/tier: Reset to reconfigured values after detach-tier (Mohammed Rafi KC, 2015-11-26, 1 file, -20/+1)
  After detach-tier commit, we have to reset the options reconfigured for the tier volume.

  Change-Id: Iae0210259720d6ac14ccc0cc339dc9f54a0c4571
  BUG: 1285046
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12736
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* glusterd: Change volume start into v3 framework (Mohammed Rafi KC, 2015-11-25, 1 file, -9/+34)
  As part of volume start, if the volume is of tier type then we need to start the tiering daemon also. But before starting the tier daemon all the bricks should be started. So by changing volume start into the v3 framework, we can do tier start in the post-validate phase.

  Change-Id: If921067f4739e6b9a3239fc5717696eaf382c22a
  BUG: 1284372
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12718
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* tests: spurious failure fix for bug-948686.t (Atin Mukherjee, 2015-11-17, 1 file, -0/+22)
  Ensured import volume and volume start don't race in volinfo by refcounting volinfo.

  Change-Id: I7467eccaba9a00fd63ba0121d8157df24d1c00a6
  BUG: 1258714
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/12329
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* afr/glusterd: Fix naming issue in tier related changes (Mohammed Rafi KC, 2015-10-30, 1 file, -4/+4)
  Changing some of the function names added recently as part of the tiering changes.

  Change-Id: I238831128ee00cdf83f8a80be937d3528d133099
  BUG: 1275489
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12431
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* core: use syscall wrappers instead of direct syscalls -- glusterd (Kaleb S. KEITHLEY, 2015-10-28, 1 file, -1/+1)
  Various xlators and other components are invoking system calls directly instead of using the libglusterfs/syscall.[ch] wrappers. If not using the system call wrappers there should be a comment in the source explaining why the wrapper isn't used.

  Change-Id: I28bf2a5f7730b35914e7ab57fed91e1966b30073
  BUG: 1267967
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/12379
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* tier/shd: make shd commands compatible with tiering (Mohammed Rafi KC, 2015-10-12, 1 file, -12/+78)
  Tiering volfiles may contain afr and disperse together, or multiple times, based on configuration. And the information for those configurations is stored in tier_info. So most of the volgen code generation needs to be changed to make it compatible with that.

  Change-Id: I563d1ca6f281f59090ebd470b7fda1cc4b1b7e1d
  BUG: 1261276
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12135
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* tiering/glusterd: start tier daemon during volume start (Mohammed Rafi KC, 2015-08-17, 1 file, -0/+4)
  The tier daemon should always run with a tier volume. If the volume is stopped and started again, we need to manually start the tier daemon; instead, this patch automatically triggers the tier process along with volume start.

  A snapshot-restored volume will not have node_state_info, so we need to create and store it dynamically.

  Change-Id: I659387c914bec7a1b6929ee5cb61f7b406402075
  BUG: 1238593
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/11525
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* Set nfs.disable to "on" when global NFS-Ganesha key is enabled (Meghana M, 2015-08-10, 1 file, -0/+24)
  "nfs.disable" gets set to "on" for all the existing volumes when the command "gluster nfs-ganesha enable" is executed. When a new volume is created, it gets exported via Gluster-NFS on the nodes outside the NFS-Ganesha cluster. To fix this, the "nfs.disable" key is set to "on" before starting the volume, whenever the global option is set to "enable".

  Change-Id: I7ce58928c36eadb8c122cded5bdcea271a0a4ffa
  BUG: 1251857
  Signed-off-by: Meghana M <mmadhusu@redhat.com>
  Reviewed-on: http://review.gluster.org/11871
  Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* glusterd: initialize the daemon services on demand (Atin Mukherjee, 2015-07-27, 1 file, -6/+0)
  As of now all the daemon services are initialized in the glusterd init path. Since the socket file path of a per-node daemon demands the uuid of the node, the MY_UUID macro is invoked as part of the initialization.

  The above flow breaks the use cases where a gluster image is built following a template, which could be a Dockerfile, Vagrantfile, or any kind of virtualization environment. This means bringing up instances of this image would have the same UUID for each node, resulting in peer probe failure.

  Solution is to lazily initialize the services on demand.

  Change-Id: If7caa533026c83e98c7c7678bded67085d0bbc1e
  BUG: 1238135
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/11488
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Correction in Error message for disperse volume create (Ashish Pandey, 2015-07-06, 1 file, -2/+7)
  Problem: If all the bricks are on the same server and a "disperse" volume is created without using "force", it throws a failure message mentioning "replicate" as the volume type.

  Solution: Adding a failure message for disperse volume too.

  Change-Id: I9e466b1fe9dae8cf556903b1a2c4f0b270159841
  BUG: 1232183
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
  Reviewed-on: http://review.gluster.org/11250
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* glusterd: Porting left out log messages to new framework (Nandaja Varma, 2015-06-26, 1 file, -11/+24)
  Change-Id: I70d40ae3b5f49a21e1b93f82885cd58fa2723647
  BUG: 1235538
  Signed-off-by: Nandaja Varma <nandaja.varma@gmail.com>
  Reviewed-on: http://review.gluster.org/11388
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd/uss/snapshot: Initialise snapdsvc after volfiles are created (Avra Sengupta, 2015-06-22, 1 file, -5/+7)
  snapd svc should be initialised only after all relevant volfiles and directories are created.

  Change-Id: I96770cfc0b350599cd60ff74f5ecec08145c3105
  BUG: 1231197
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/11227
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* glusterd/tier: configure tier daemon during volume restart (Mohammed Rafi KC, 2015-06-19, 1 file, -0/+4)
  The rebalance daemon will be running on every tier volume for promoting/demoting the files. When the volume/glusterd is restarted, we need to configure the daemon.

  Change-Id: Ib565240a70edea2ec8bc1601c52b40c0783491d3
  BUG: 1225330
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  Reviewed-on: http://review.gluster.org/10933
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* NFS-Ganesha: Automatically export vol that was exported before vol restart (Meghana M, 2015-06-16, 1 file, -12/+2)
  Consider a volume that is exported via NFS-Ganesha. Stopping this volume will automatically unexport the volume. Starting this volume should automatically export it. Although the logic was already there, there was a bug in it. Fixing the same by introducing a hook script.

  Also, with the new CLI options, the hook script S31ganesha-set.sh is no longer required. Hence, removing the same. Adding a comment to tell the user that one of the CLI commands will take a few minutes to complete.

  Change-Id: Ibff769ca04fef0c2a129c83fe31fc9c869350e8d
  BUG: 1231738
  Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
  Reviewed-on: http://review.gluster.org/11247
  Reviewed-by: soumya k <skoduri@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* ops/glusterd: Porting messages to new logging framework (Nandaja Varma, 2015-06-16, 1 file, -148/+274)
  Change-Id: Iafeb07aabc1781d98f51c6c2627bf3bbdf493153
  BUG: 1194640
  Signed-off-by: Nandaja Varma <nandaja.varma@gmail.com>
  Reviewed-on: http://review.gluster.org/9905
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* build: do not #include "config.h" in each file (Niels de Vos, 2015-05-29, 1 file, -5/+0)
  Instead of including config.h in each file, have the additional config.h included from the compiler command line (-include option). When a .c file tests for a certain #define and config.h was not included, incorrect assumptions were made. With this change, it can not happen again.

  BUG: 1222319
  Change-Id: I4f9097b8740b81ecfe8b218d52ca50361f74cb64
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/10808
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* glusterd: remove replace brick with data migration support from cli/glusterd (Gaurav Kumar Garg, 2015-05-07, 1 file, -18/+0)
  The replace-brick operation with data migration support has been deprecated from gluster.

  With this fix the replace-brick command will support only one command:
  gluster volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}

  Change-Id: Ib81d49e5d8e7eaa4ccb5830cfec2bc081191b43b
  BUG: 1094119
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10101
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* NFS-Ganesha: Handling CLI commands when NFS-Ganesha keys are set (Meghana Madhusudhan, 2015-04-30, 1 file, -1/+20)
  When ganesha.enable is set to on and features.ganesha is enabled, there are a few behaviour changes that should be seen in other volume operations.
  1. ganesha.enable can be set to 'on' only when features.ganesha is set to 'enable'.
  2. When the gluster vol is started, and if the ganesha.enable key was set to 'on', it should automatically export the volume via NFS-Ganesha.
  3. When ganesha.enable is set to 'on', and a volume is stopped, that volume should be unexported via NFS-Ganesha.
  4. gluster vol reset <volname>: If ganesha.enable was set to on, then unexport the volume via NFS-Ganesha.
  5. gluster vol reset all: If features.ganesha is set to enable, as part of reset all, set it to disable. This translates to teardown cluster.

  All the above problems are fixed by checking the global key and value; depending on the value, specific functions are called. And also, functions related to global commands are moved to cli-cmd-global.c.

  Commit phase of features.ganesha enable/disable runs the ganesha-ha.sh setup/teardown respectively. Before the script begins, it is important that the NFS-Ganesha service starts on all the HA nodes. Having the start service commands in the commit phase could lead to problems. Moving the pre-requisite service start commands to the 'stage' phase.

  Change-Id: I5a256f94f8e1310ddcd5369f329b7168b2a24c47
  BUG: 1200265
  Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
  Reviewed-on: http://review.gluster.org/10283
  Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
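  For reference, a sketch of how the two keys discussed above are driven from the CLI; the per-volume line assumes the standard volume-set syntax, while "gluster nfs-ganesha enable" is the global command referenced elsewhere in this log:
    gluster nfs-ganesha enable
    gluster volume set <VOLNAME> ganesha.enable on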
* arbiter: load arbiter xlator on every 3rd brick of a replica 3 AFR subvol (Ravishankar N, 2015-04-27, 1 file, -0/+4)
  Logic for adding the 'glusterd_brickinfo->group' member and using it to find the brick position has been taken from http://review.gluster.org/#/c/9919. Thanks to Jeff Darcy for that. This patch is a part of the arbiter logic implementation for 3-way AFR, details of which can be found at http://review.gluster.org/#/c/9656/

  Change-Id: Idbfe4f29ee8e098e0102def8f38b32314316b188
  BUG: 1199985
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/10257
  Tested-by: NetBSD Build System
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* Avoid conflict between contrib/uuid and system uuid (Emmanuel Dreyfus, 2015-04-04, 1 file, -11/+11)
  glusterfs relies on the Linux uuid implementation, whose API is incompatible with most other systems' uuid. As a result, libglusterfs has to embed contrib/uuid, which is the Linux implementation, on non-Linux systems. This implementation is incompatible with the system's built-in, but the symbols have the same names.

  Usually this is not a problem because when we link with -lglusterfs, libc's symbols are trumped. However there is a problem when a program not linked with -lglusterfs will dlopen() a glusterfs component. In such a case, libc's uuid implementation is already loaded in the calling program, and it will be used instead of libglusterfs's implementation, causing crashes.

  A possible workaround is to pre-load libglusterfs in the calling program (using LD_PRELOAD on NetBSD for instance), but such a mechanism is not portable, nor is it flexible. A much better approach is to rename libglusterfs's uuid_* functions to gf_uuid_* to avoid any possible conflict. This is what this change attempts.

  BUG: 1206587
  Change-Id: I9ccd3e13afed1c7fc18508e92c7beb0f5d49f31a
  Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
  Reviewed-on: http://review.gluster.org/10017
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
* mgmt/glusterd: Changes required for disperse volume heal commands (Pranith Kumar K, 2015-03-10, 1 file, -45/+84)
  - Include xattrop64-watchlist for index xlator for disperse volumes.
  - Change the functions that exist to consider disperse volumes also for sending commands to disperse xls in self-heal-daemon.

  Change-Id: Iae75a5d3dd5642454a2ebf5840feba35780d8adb
  BUG: 1177601
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/9793
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* cluster/ec: Add self-heal-daemon command handlers (Pranith Kumar K, 2015-03-09, 1 file, -13/+13)
  This patch introduces the changes required in ec xlator to handle index/full heal.

  Index healer threads: The ec xlator starts an index healer thread per local brick. This thread keeps waking up every minute to check if there are any files to be healed based on the indices kept in the index directory. Whenever a child_up event comes, this index healer thread also wakes up, crawls the indices and triggers heal. When self-heal-daemon is disabled on this particular volume then the healer thread keeps waiting until it is enabled again to perform heals.

  Full healer threads: The ec xlator starts a full healer thread for the local subvolume provided by glusterd to perform a full crawl on the directory hierarchy to perform heals. Once the crawl completes the thread exits if no more full heals are issued.

  Changed xl-op prefix GF_AFR_OP to GF_SHD_OP to make it more generic.

  Change-Id: Idf9b2735d779a6253717be064173dfde6f8f824b
  BUG: 1177601
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/9787
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Replace libglusterfs lists with liburcu lists (Kaushal M, 2015-03-03, 1 file, -19/+22)
  This patch replaces usage of the libglusterfs lists data structures and API in glusterd with the lists data structures and API from liburcu. The liburcu data structures and APIs are a drop-in replacement for libglusterfs lists. All usages have been changed to keep the code consistent, and free from confusion.

  NOTE: glusterd_conf_t->xprt_list still uses the libglusterfs data structures and API, as it holds rpc_transport_t objects, which is not a part of glusterd and is not being changed in this patch.

  This change was developed on the git branch at [1]. This commit is a combination of the following commits on the development branch:
  6dac576 Replace libglusterfs lists with liburcu lists
  a51b5ab Fix compilation issues
  d98a06f Fix merge issues
  a5d918e Remove merge remnant
  1cca113 More style cleanup
  1917be3 Address review comments on 9624/1
  8d10f13 Use cds_lists for glusterd_svc_t
  524ad5d Add rculist header in glusterd-conn-helper.c
  646f294 glusterd: add list_add_order API honouring rcu

  [1]: https://github.com/kshlm/glusterfs/tree/urcu

  Change-Id: Ic613c5b6e496a677b9d3de15fc042a0492109fb0
  BUG: 1191030
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/9624
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
* glusterd: nfs,shd,quotad,snapd daemons refactoring (Atin Mukherjee, 2015-02-20, 1 file, -9/+29)
  This patch ports nfs, shd, quotad & snapd with the approach suggested in http://www.gluster.org/pipermail/gluster-devel/2014-December/043180.html

  Change-Id: I4ea5b38793f87fc85cc9d2cf873727351dedffd2
  BUG: 1191486
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/9428
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
* glusterd/geo-rep: Allow replace/remove brick if geo-rep is stopped. (Kotresh HR, 2015-02-16, 1 file, -29/+4)
  Replace brick: If geo-replication was configured on a volume, replace brick used to fail. This patch allows replace brick to go through if all geo-rep sessions corresponding to that volume are stopped.

  Remove brick: There was no check for geo-replication for remove brick. Enforce 'remove brick commit' to fail if the geo-rep session corresponding to the volume is running. Allow 'remove brick commit' only if all of the geo-rep sessions corresponding to that volume are stopped.

  Code is re-organized for better readability.

  Change-Id: I02282c2764d8b81e319489c977847e6e437511a4
  BUG: 1179638
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/9402
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-by: ajeet jha <ajha@redhat.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Fix strtok_r parsing. (Raghavendra Talur, 2015-01-21, 1 file, -3/+4)
  Found a bug where a replica 2 volume creation prompts saying the bricks are in the same hosts even when they are in different hosts.

  Change-Id: Ie55addae55c55e32ad2b5339530ab71f0e3711ab
  BUG: 1091935
  Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
  Reviewed-on: http://review.gluster.org/9373
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* mgmt/glusterd: Implement Volume heal enable/disable (Pranith Kumar K, 2015-01-20, 1 file, -1/+69)
  For volumes with replicate and disperse xlators, the self-heal daemon should do healing. This patch provides enable/disable functionality for the xlators to be part of self-heal-daemon. Replicate already had this functionality with 'gluster volume set cluster.self-heal-daemon on/off'. But this patch makes it uniform for both types of volumes. Internally it still does 'volume set' based on the volume type.

  Change-Id: Ie0f3799b74c2afef9ac658ef3d50dce3e8072b29
  BUG: 1177601
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/9358
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
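  Usage sketch of the uniform command the subject refers to; the exact syntax below is an assumption based on the "Volume heal enable/disable" wording, not quoted from the commit message:
    gluster volume heal <VOLNAME> enable
    gluster volume heal <VOLNAME> disable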
* glusterd: Refactor glusterd-utils.c (Avra Sengupta, 2015-01-08, 1 file, -0/+1)
  Refactor glusterd-utils.c to create glusterd-snapshot-utils.c consisting of all snapshot utility functions.

  Change-Id: Id9823a2aec9b115f9c040c9940f288d4fe753d9b
  BUG: 1176770
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/9391
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* rdma: mount fails for nfs protocol in rdma volumes (Jiffin Tony Thottan, 2014-11-19, 1 file, -3/+0)
  When we mount an rdma-only volume or a tcp,rdma volume using newly peer-probed IPs (nfs-server on new nodes) through the nfs protocol, the mount fails for the rdma-only volume, and the mount happens with the help of the tcp protocol in the case of tcp,rdma volumes. That is, newly added servers will always get the transport type as "socket". This is because nfs_transport_type is exported correctly but imported wrongly.

  This can be verified by the following:
  * Create an rdma-only volume or a tcp,rdma volume.
  * Add a new server into the trusted pool.
  * Check the client transport type specified in the nfs-server volgraph. It will always be tcp (socket type) instead of rdma.
  * And also for the rdma-only volume, in the nfs log we can see a 'connection refused' message for every reconnect between the nfs server and glusterfsd.

  BUG: 1157381
  Change-Id: I6bd4979e31adfc72af92c1da06a332557b6289e2
  Author: Jiffin Tony Thottan <jthottan@redhat.com>
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
  Reviewed-on: http://review.gluster.org/8975
  Reviewed-by: Meghana M <mmadhusu@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Tested-by: Niels de Vos <ndevos@redhat.com>
* rdma: client connection establishment takes more time (Mohammed Rafi KC, 2014-11-18, 1 file, -7/+15)
  For an rdma-only type volume, client connection establishment with the server takes more than three seconds. This is because a tcp,rdma type volume will have 2 ports, one for tcp and one for rdma; the tcp port is stored with the brickname and the rdma port is stored as "brickname.rdma" during pmap_signin. During the handshake, when trying to get the brick port for rdma clients, since we are not aware of the server transport type, we will append '.rdma' to the brick name. So for a tcp,rdma volume there will be an entry with '.rdma', but it will fail for an rdma-only type volume. So we will try again, this time without appending '.rdma', using a flag variable need_different_port, and it will succeed, but the reconnection happens only after 3 seconds.

  In this patch, for rdma-only type volumes we will append '.rdma' during the pmap_signin. So during the handshake we will get the correct port on the first try itself. Since we don't need to retry, we can remove the need_different_port flag variable.

  Change-Id: Ie8e3a7f532d4104829dbe995e99b35e95571466c
  BUG: 1153569
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/8934
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Tested-by: Raghavendra G <rgowdapp@redhat.com>