path: root/xlators/mgmt/glusterd/src/glusterd-handler.c
Commit message (tag) | Author | Date | Files | Lines changed
* cli: Proper xml output for "gluster peer status" [v3.3.1qa2] (Kaushal M, 2012-08-30, 1 file, -0/+5 lines)
    Change-Id: I90952ba2ea606552cf4ad67dd296a440f90592d6
    BUG: 847760
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/3870
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Fixed incorrect assumptions in rpcsvc actors of glusterd (Krishnan Parthasarathi, 2012-08-30, 1 file, -17/+24 lines)
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/3864
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

    Conflicts:
        xlators/mgmt/glusterd/src/glusterd-handler.c

    Change-Id: Iabfcb401de9d658e32433aa1e8c87b329cbd2cf7
    BUG: 851109
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/3876
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* rpc: Reduce frame-timeout for glusterd connections (Kaushal M, 2012-08-17, 1 file, -4/+18 lines)
    Reduce frame-timeout for glusterd connections from 30mins to 10 mins.
    30mins is too long when compared to cli timeout of 2mins. Changing to
    10mins reduces the disparity between cli and glusterd.

    Also, fix glusterfs_submit_reply() so that a reply is sent even if
    serialize failed.

    BUG: 843003
    Change-Id: Ie8d5ec16fbbb54318a5935a47065e66fd3338b87
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.com/3812
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* calls to dict_allocate_and_serialize() are not 64-bit clean (Kaleb S. KEITHLEY, 2012-08-03, 1 file, -10/+8 lines)
    All calls to dict_allocate_and_serialize() pass the address of a 32-bit
    type, but must cast it to the 64-bit pointer type (size_t *). This happens
    to work on LE machines, but even if it's apparently benign, it's still a
    bug. On BE machines it is not benign.

    GF_PROTOCOL_DICT_SERIALIZE() hacks around it by creating a size_t temp
    var, but that's, well, a hack, IMO when you consider that all the callers
    are actually passing &<u_int>; the param should just be a u_int * and
    eliminate the buggy casts and the temp var in the macro.

    Nobody apparently uses the Fedora/EPEL PPC RPMs, but they might. People
    are trying to build gluster.org bits on SPARC and tripping over this.

    Change-Id: I92ea139f9e3e91ddbbb32a51b96fa582a9515626
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    BUG: 838928
    Reviewed-on: http://review.gluster.com/3643
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
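A minimal, self-contained C sketch (not the gluster sources; the helper name and its out-parameter are hypothetical) of the failure mode described above, where a callee writes a full size_t through a pointer that really addresses a 32-bit variable:

    #include <stdio.h>
    #include <stdint.h>

    /* Stand-in for a serialize helper that reports a length through a
     * size_t out-parameter; not the real dict_allocate_and_serialize(). */
    static void
    report_length (size_t *len)
    {
            *len = 42;   /* writes sizeof (size_t) bytes, 8 on LP64 hosts */
    }

    int
    main (void)
    {
            uint32_t buflen = 0;                 /* caller reserved only 4 bytes */
            report_length ((size_t *)&buflen);   /* the kind of cast being removed */
            /* Undefined behaviour either way, but observably: on little-endian
             * the low-order 4 bytes land in buflen and the value "looks" right;
             * on big-endian buflen receives the high-order half, i.e. 0. */
            printf ("%u\n", buflen);
            return 0;
    }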
* glusterd: On-wire changes required for probe/detach errstr (Kaushal M, 2012-05-23, 1 file, -0/+15 lines)
    This patch changes the on-wire structures for 'peer probe' and 'peer
    detach' to include op_errstr. These changes have been backported from
    patches feb99ca (glusterd, cli: Enable errstr for peer probe) and 3213a4e
    (glusterd,cli: Enable errstr for peer detach). The remaining changes will
    be included later on.

    Change-Id: I6e8e917f5ad928b80862d301c364cd4df56bb4c0
    BUG: 816840
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.com/3387
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
* core: coverity issues fixed (Amar Tumballi, 2012-04-23, 1 file, -1/+3 lines)
    this is not a complete set of issues getting fixed. Will address other
    issues in another patch.

    Change-Id: Ib01c7b11b205078cc4d0b3f11610751e32d14b69
    Signed-off-by: Amar Tumballi <amarts@redhat.com>
    BUG: 789278
    Reviewed-on: http://review.gluster.com/3145
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* glusterd: fix bugs of syncop for operations (Amar Tumballi, 2012-03-22, 1 file, -165/+0 lines)
    * free the stack created for synctask
    * use different key than 'operation' in dictionary, as that's being used
      already by other glusterd operations
    * send proper frame to 'rpc_clnt_submit()' API, as it gets used internally
    * also make sure to destroy the above frame in all _cbk()
    * move everything specific to synctask into one file, so it is easy to
      maintain

    Change-Id: Ia1a4414fffec6f9e51700295947eea4a2104a8c2
    Signed-off-by: Amar Tumballi <amarts@redhat.com>
    BUG: 805802
    Reviewed-on: http://review.gluster.com/3000
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
* glusterd: bring in feature to use syncop for mgmt ops (Amar Tumballi, 2012-03-21, 1 file, -0/+167 lines)
    * new syncop routines added to mgmt program
    * one should not use 'glusterd_op_begin()'; instead one can use the
      synctask framework, 'glusterd_op_begin_synctask()'
    * currently used for the below operations: 'volume start',
      'volume rebalance', 'volume quota', 'volume replace-brick' and
      'volume add-brick'

    Change-Id: I0bee76d06790d5c5bb5db15d443b44af0e21f1c0
    BUG: 762935
    Signed-off-by: Amar Tumballi <amar@gluster.com>
    Reviewed-on: http://review.gluster.com/479
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cli, glusterd, nfs: "volume status|profile|top" for nfs servers (Kaushal M, 2012-03-14, 1 file, -5/+10 lines)
    Enables usage of volume monitoring operations "volume status", "volume
    top" and "volume profile" for nfs servers. These operations can be
    performed on nfs-servers by passing "nfs" as an option in cli. The output
    is similar to the normal brick outputs for these commands.

    The new syntaxes for the changed commands are as below,
        #gluster volume profile <VOLNAME> {start|info|stop} [nfs]
        #gluster volume top <VOLNAME> {[open|read|write|opendir|readdir [nfs]]
            |[read-perf|write-perf [nfs|{bs <size> count <count>}]]}
            [brick <brick>] [list-cnt <count>]
        #gluster volume status [all | <VOLNAME> [nfs|<BRICK>]]
            [detail|clients|mem|inode|fd|callpool]

    Change-Id: Ia6eb50c60aecacf9b413d3ea993f4cdd90ec0e07
    BUG: 795267
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.com/2820
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
* glusterd: minor log fix in handle probe query (Amar Tumballi, 2012-03-12, 1 file, -1/+1 lines)
    now prints the destination hostname instead of self.

    Change-Id: If73158c36780d597a67ec9185d99083764966c04
    Signed-off-by: Amar Tumballi <amarts@redhat.com>
    BUG: 802265
    Reviewed-on: http://review.gluster.com/2920
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vijay@gluster.com>
* glusterd: Peer(s) mustn't send updates about members not yet in cluster (Krishnan Parthasarathi, 2012-03-12, 1 file, -12/+21 lines)
    Also, peerinfo is added to the peers list synchronously with the request
    triggering it. This ensures that at most one request sees that the peer
    (in question) is not in the peers list.

    Earlier, 'concurrent' handle_friend_update requests would see that a
    particular peer is not in the peers list yet, as the addition of the
    'peer' into the list happened asynchronously, on the 'connect' event of
    the 'peer'.

    Change-Id: I6f017fb43079862fbe5ae7db8f9f4e4fefaa091d
    BUG: 801731
    Signed-off-by: Krishnan Parthasarathi <kp@gluster.com>
    Reviewed-on: http://review.gluster.com/2918
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Reviewed-by: Vijay Bellur <vijay@gluster.com>
* fops/removexattr: prevent users from removing glusterfs xattrs (Rajesh Amaravathi, 2012-03-05, 1 file, -1/+1 lines)
    * Each xlator prevents the user from removing xlator-specific xattrs like
      trusted.gfid by handling it in the respective removexattr functions.
    * For xlators which did not define removexattr and fremovexattr, the
      functions have been implemented with appropriate checks.

        xlator     | fops added
        -----------+------------------------------
        1. stripe  | removexattr and fremovexattr
        2. quota   | removexattr and fremovexattr

    Change-Id: I98e22109717978134378bc75b2eca83fefb2abba
    BUG: 783525
    Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
    Reviewed-on: http://review.gluster.com/2836
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* libglusterfs: fix GF_FREE (Csaba Henk, 2012-02-27, 1 file, -1/+1 lines)
    Argument-taking macros should be possible to use with the same syntax as
    that of C functions. In particular (assuming FOO is a single-argument
    macro),

        FOO(bar)

    should break, and

        if (cond)
                FOO(bar);
        else
                baz();

    should compile.

    Change-Id: If852c128a7317dc0dda1c669be7c6af40501e48d
    BUG: 762061
    Signed-off-by: Csaba Henk <csaba@redhat.com>
    Reviewed-on: http://review.gluster.com/2816
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
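A minimal sketch of the macro-hygiene rule stated above, using a hypothetical FOO rather than the actual GF_FREE definition; the do { ... } while (0) form is one common way to satisfy both requirements:

    #include <stdlib.h>

    /* Brace-block variant: FOO_BLOCK(bar) without ';' already compiles as a
     * compound statement, and "if (cond) FOO_BLOCK(bar); else baz();" fails
     * with "else without a previous if".  This is the behaviour being fixed. */
    #define FOO_BLOCK(ptr) { free (ptr); (ptr) = NULL; }

    /* Function-like variant: FOO(bar) alone is a syntax error until the
     * caller supplies ';' (so it "breaks", as required), and
     * "if (cond) FOO(bar); else baz();" compiles, exactly like a call. */
    #define FOO(ptr) do { free (ptr); (ptr) = NULL; } while (0)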
* mempool: adjustments in pool sizes (Amar Tumballi, 2012-02-22, 1 file, -2/+2 lines)
    * while creating 'rpc_clnt', the caller knows what would be the ideal
      load on it, so an extra argument to set some pool sizes
    * while creating 'rpcsvc', the caller knows what would be the ideal load
      of it, so an extra argument to set request pool size
    * cli memory footprint is reduced

    Change-Id: Ie245216525b450e3373ef55b654b4cd30741347f
    Signed-off-by: Amar Tumballi <amarts@redhat.com>
    BUG: 765336
    Reviewed-on: http://review.gluster.com/2784
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vijay@gluster.com>
* cluster/afr: Add commands to see self-heald ops (Pranith Kumar K, 2012-02-20, 1 file, -0/+1 lines)
    Change-Id: Id92d3276e65a6c0fe61ab328b58b3954ae116c74
    BUG: 763820
    Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
    Reviewed-on: http://review.gluster.com/2775
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vijay@gluster.com>
* cluster/dht: Rebalance will be a new glusterfs process (shishirng, 2012-02-19, 1 file, -0/+6 lines)
    Rebalance will not use any maintenance clients. It is replaced by syncops,
    with the volfile. Brickop (communication between glusterd <-> glusterfs
    process) is used for status and stop commands.

    Depth-first traversal of dir is maintained, but data is migrated as and
    when encountered:

        fix-layout (dir)
        do
            Complete migrate-data of dir
            fix-layout (subdir)
        done

    Rebalance state is saved in the vol file, for restart-ability. A
    disconnect event and pidfile state determine the defrag-status.

    Signed-off-by: shishirng <shishirng@gluster.com>
    Change-Id: Iec6c80c84bbb2142d840242c28db3d5f5be94d01
    BUG: 763844
    Reviewed-on: http://review.gluster.com/2540
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
* cli, glusterd: Added support for clear-locks command. (Krishnan Parthasarathi, 2012-02-17, 1 file, -0/+68 lines)
    Change-Id: I8e7cd51d6e3dd968cced1ec4115b6811f2ab5c1b
    BUG: 789858
    Signed-off-by: Krishnan Parthasarathi <kp@gluster.com>
    Reviewed-on: http://review.gluster.com/2552
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vijay@gluster.com>
* cli: Enable output in XML (Kaushal M, 2012-02-15, 1 file, -0/+59 lines)
    This patch enables gluster cli to output data in xml format. XML output
    can be obtained by passing "--xml" as an argument.

    A new "volume list" command, which lists the volumes present in a cluster,
    has been added. This can be used for obtaining a quick list of volumes.

    Several commands, including "volume top", "volume profile", "volume
    status", "volume info" and "volume list", have custom XML output routines.
    Other commands use either one of the 2 generic output routines,
    cli_xml_output_str() & cli_xml_output_dict().

    NOTE: When using "all" for "volume status" and "volume info" the XML
    output will have multiple roots.

    Change-Id: I6117baa02ec06fda116177dbd401f66521263ac6
    BUG: 790713
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.com/2753
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Reviewed-by: Vijay Bellur <vijay@gluster.com>
* glusterd: Initialised op_sm/friend_sm before cluster restore. (Krishnan Parthasarathi, 2012-02-07, 1 file, -53/+31 lines)
    Cleaned up peerinfo/rpc association.

    Change-Id: I11bcaa3ea1f2b86c6b4e235873a60bb5bf76a892
    BUG: 786006
    Signed-off-by: Krishnan Parthasarathi <kp@gluster.com>
    Reviewed-on: http://review.gluster.com/2725
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* rpc: extend actors with flag signing if privilege is required (Csaba Henk, 2012-01-21, 1 file, -36/+36 lines)
    Currently we allow the following RPC messages for unprivileged users:
    GLUSTER_CLI_GETWD, GLUSTER_CLI_MOUNT, GLUSTER_CLI_UMOUNT

    Change-Id: I05414f3ca7cbe47de45c5e5cfba1537efc774e6c
    BUG: 781256
    Signed-off-by: Csaba Henk <csaba@gluster.com>
    Reviewed-on: http://review.gluster.com/2641
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@gluster.com>
* cli: trivial changes in cli-rpc-ops.c (Rajesh Amaravathi, 2012-01-19, 1 file, -6/+2 lines)
    * the get_volume_cbk has been cleaned up (w.r.t whitespace) and a memset
      used where appropriate. Some other functions have been affected (in a
      good way) by the whitespace-dealing commands in emacs.

    Change-Id: Iba473290e87747f8bb06d335db06c872a241d7cd
    BUG: 781333
    Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
    Reviewed-on: http://review.gluster.com/2653
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-by: Vijay Bellur <vijay@gluster.com>
* glusterd: Fixed crash in peer probe found using efence (Krishnan Parthasarathi, 2012-01-19, 1 file, -10/+18 lines)
    Change-Id: Ie09d1e4eb9a8d338f8e5cf6360b398b196141a81
    BUG: 782718
    Signed-off-by: Krishnan Parthasarathi <kp@gluster.com>
    Reviewed-on: http://review.gluster.com/2655
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pranithk@gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* cli,glusterd: Display volume UUID in the output of 'volume info' (Vijay Bellur, 2012-01-14, 1 file, -0/+10 lines)
    Cleaned up some leaks along the way.

    Change-Id: Ibc76c539eee935c0630f9580d0d914814b1a6fe1
    BUG: 781445
    Signed-off-by: Vijay Bellur <vijay@gluster.com>
    Reviewed-on: http://review.gluster.com/2643
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@gluster.com>
* cli: volume status enhancement (Rajesh Amaravathi, 2012-01-12, 1 file, -10/+18 lines)
    * Support "gluster volume status (all)" option to display all volumes'
      status.
    * On option "detail" appended to "gluster volume status *", amount of
      storage free, total storage, and backend filesystem details like inode
      size, inode count, free inodes, fs type, device name of each brick is
      displayed.
    * One can also obtain [detailed] status of only one brick.
    * Format of the enhanced volume status command is:
        "gluster volume status [all|<vol>] [<brick>] [detail]"
    * Some generic functions have been added to common-utils:
        skipword
        get_nth_word
      These functions enable parsing and fetching of words in a sentence.
        glusterd_get_brick_root (in glusterd)
      These are self explanatory.

    Change-Id: I6f40c1e19810f8504cd3b1786207364053d82e28
    BUG: 765464
    Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
    Reviewed-on: http://review.gluster.com/777
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amar@gluster.com>
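An illustrative sketch (not the common-utils implementation added by this change) of what a get_nth_word()-style helper does: return a newly allocated copy of the n-th whitespace-separated word of a line:

    #define _POSIX_C_SOURCE 200809L   /* for strndup() */
    #include <ctype.h>
    #include <stddef.h>
    #include <string.h>

    /* Illustrative only: n is 1-based; returns NULL if the line has fewer
     * than n words; the caller frees the result. */
    static char *
    nth_word_sketch (const char *line, int n)
    {
            const char *start = line;
            const char *end   = line;

            while (n-- > 0) {
                    start = end;
                    while (*start && isspace ((unsigned char)*start))
                            start++;                /* skip separating blanks */
                    if (!*start)
                            return NULL;            /* ran out of words */
                    end = start;
                    while (*end && !isspace ((unsigned char)*end))
                            end++;                  /* walk over the word */
            }
            return strndup (start, (size_t)(end - start));
    }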
* cli: validate the volume set command properly (Raghavendra Bhat, 2011-12-19, 1 file, -1/+1 lines)
    For the volume set command, if the key and the value of the option are not
    given after the volume name, then gracefully exit by showing the proper
    usage of volume set, instead of sending the request to glusterd, which
    makes glusterd crash.

    Signed-off-by: Raghavendra Bhat <raghavendrabhat@gluster.com>
    Change-Id: I2f0d189a55663c7f47dddff35d4dc68fae16b755
    BUG: 767591
    Signed-off-by: Raghavendra Bhat <raghavendrabhat@gluster.com>
    Reviewed-on: http://review.gluster.com/797
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* glusterd: Fail 'requests' from non-peers. (Krishnan Parthasarathi, 2011-12-19, 1 file, -0/+35 lines)
    glusterd should not honour "volume op requests" from peers who are not
    part of the cluster.

    Change-Id: I6cb6d630a9da02ab060650f21edb46db8deb70e8
    BUG: 767559
    Signed-off-by: Krishnan Parthasarathi <kp@gluster.com>
    Reviewed-on: http://review.gluster.com/787
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-by: Vijay Bellur <vijay@gluster.com>
* mgmt/glusterd: more cleanup (Amar Tumballi, 2011-11-16, 1 file, -1/+1 lines)
    * 'GD_OP_RENAME_VOLUME' removed
    * unused handler function for rebalance is removed.

    Change-Id: Id081cb02a6445c09347bacc0fdf9cd600ff94e5d
    BUG: 3158
    Reviewed-on: http://review.gluster.com/734
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vijay@gluster.com>
* mgmt/glusterd: delete volume to have dictionary as context (Amar Tumballi, 2011-11-16, 1 file, -30/+11 lines)
    Earlier, only DELETE_VOLUME had the volume name as context, whereas all
    other ops used a dictionary.

    Change-Id: I5bfcc458bff3295374eb4f0b0a31f6134745debd
    BUG: 3158
    Reviewed-on: http://review.gluster.com/718
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
* glusterd/cli: rpc cleanup (Amar Tumballi, 2011-11-16, 1 file, -54/+20 lines)
    no more backward compatibility between glusterd <-> glusterd

    Change-Id: Ibfcca1c7e315a90b2639c4cba8da19b11875051a
    BUG: 3158
    Reviewed-on: http://review.gluster.com/610
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Shishir Gowda <shishirng@gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
    Reviewed-by: Vijay Bellur <vijay@gluster.com>
* XDR: cli-glusterd xdr consolidation (shishir gowda, 2011-11-16, 1 file, -104/+124 lines)
    By using only 1 xdr struct for request and 1 xdr struct for response, we
    will be able to scale better and also be able to parse the output better.

    For request, use:
        gf1_cli_req - contains dict
    For response, use:
        gf1_cli_rsp - contains op_ret, op_errno, op_errstr, dict

    Change-Id: I94b034e1d8fa82dfd0cf96e7602d4039bc43fef3
    BUG: 3720
    Reviewed-on: http://review.gluster.com/662
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amar@gluster.com>
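For orientation, roughly the C structures rpcgen emits for the two XDR types described above; this is an approximation built only from the fields named in the message, not the generated headers:

    /* request: carries only a serialized dict */
    struct gf1_cli_req {
            struct {
                    u_int  dict_len;
                    char  *dict_val;
            } dict;
    };

    /* response: op_ret, op_errno, op_errstr plus a serialized dict */
    struct gf1_cli_rsp {
            int    op_ret;
            int    op_errno;
            char  *op_errstr;
            struct {
                    u_int  dict_len;
                    char  *dict_val;
            } dict;
    };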
* glusterd: Removing delayed moving of op sm for stop vol/remove brick op. (Krishnan Parthasarathi, 2011-10-14, 1 file, -2/+0 lines)
    Earlier we waited for brick disconnect or 5s, whichever is smaller, before
    we move the op sm from brick op stage to commit stage. This involves a
    race where both the above mentioned events can happen 'concurrently' and
    result in a double free.

    Change-Id: I8b1524afded84c20d55e29cfe2579ca872d2ac26
    BUG: 3700
    Reviewed-on: http://review.gluster.com/575
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amar@gluster.com>
* glusterd: cleanup unneeded volumes after peer detach (Kaushal M, 2011-10-01, 1 file, -2/+9 lines)
    Performs cleanup on the detached peer and in the cluster after a peer
    detach, and also adds a new check before starting detach.

    Cleanup:
    On the detached peer, cleanup removes the entries of those volumes on the
    peer that do not have all their bricks on it. This prevents these stale
    volumes from being added to a new cluster when the peer is attached to
    one. In the cluster, all those volumes which have all their bricks on the
    detached peer are removed.

    Checks:
    Checks if all the peers in the cluster are online and connected, except
    the peer being detached, before starting detach. Using force will bypass
    this check and do the detach.

    Change-Id: I4fef9ea3cc72ce8c4ce0a82b4ee8a1663a502061
    BUG: 1926
    Reviewed-on: http://review.gluster.com/431
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
* glusterd, cli: adds 'force' for 'peer detach' (Kaushal M, 2011-10-01, 1 file, -3/+4 lines)
    Adds a 'force' option to 'peer detach' to forcefully detach a peer from a
    cluster, even when the cluster contains volumes with bricks on the peer.

    Change-Id: I134df51c16a07345c8869b318141d427b572eba5
    BUG: 3549
    Reviewed-on: http://review.gluster.com/429
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
* cli: new volume statedump command (Kaushal M, 2011-09-27, 1 file, -2/+2 lines)
    Changes:
    1. Add a new 'volume statedump' command, that performs statedumps of all
       the bricks in the volume and saves them in a specified location.
    2. Add new server option 'server.statedump-path'.
    3. Remove multiple function definitions in glusterd.h

    Statedump Information:
    The 'volume statedump' command performs statedumps on all the bricks in a
    given volume. The syntax of the command is,
        gluster volume statedump <VOLNAME> [type]......
    Types include,
        * all
        * mem
        * iobuf
        * callpool
        * priv
        * fd
        * inode
    Defaults to 'all' when no type is specified.

    The statedump files are created by default in the /tmp directory of the
    server on which the bricks are present. This path can be changed by
    setting the 'server.statedump-path' option. The statedump files will be
    named as,
        <brick-name>.<pid of brick process>.dump

    Change-Id: I01c0e1a8aad490da818e086d89f292bd2ed06fd4
    BUG: 1964
    Reviewed-on: http://review.gluster.com/321
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amar@gluster.com>
* glusterd: Should not cleanup peerinfo too soon. (Krishnan Parthasarathi, 2011-09-27, 1 file, -0/+1 lines)
    friend_remove_cbk cleans up peerinfo and then unrefs the associated
    rpc_clnt obj. When the cbk is run inside call_bail or saved_frames_unwind,
    we might end up destroying the rpc_clnt and the associated
    saved_frames_pool while we are still using saved_frames to iterate through
    the frames.

    Change-Id: Idf7768478a6d07a87c7faeac5b70e13bcacd2641
    BUG: 3511
    Reviewed-on: http://review.gluster.com/510
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amar@gluster.com>
    Reviewed-by: Vijay Bellur <vijay@gluster.com>
* glusterd: cleanup of volinfo '*_count' definitions [v3.3.0qa11] (Amar Tumballi, 2011-09-23, 1 file, -2/+7 lines)
    Earlier, sub_count had a different meaning depending on the volume type.
    Now, for replica and stripe count, one can directly access 'replica_count'
    or 'stripe_count' to get the corresponding value from the volume info.
    'sub_count' is preserved as is for backward compatibility.

    There is a new variable 'dist_leaf_count' to get info about how many
    bricks are present in one distribute sub volume.

    Change-Id: I5ea1c8f9ae08f584cca63b91ba69035c7e4350ca
    BUG: 3158
    Reviewed-on: http://review.gluster.com/435
    Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
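A hedged illustration (not the exact glusterd code) of the relationship described above, under the assumption that a distribute subvolume's brick count is the stripe and replica widths multiplied together, each treated as 1 when the volume does not use that feature:

    /* Illustrative only: how a dist_leaf_count-style value relates to the
     * per-volume stripe_count and replica_count mentioned above. */
    static int
    dist_leaf_count_sketch (int stripe_count, int replica_count)
    {
            if (stripe_count < 1)
                    stripe_count = 1;    /* volume does not stripe */
            if (replica_count < 1)
                    replica_count = 1;   /* volume does not replicate */
            return stripe_count * replica_count;
    }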
* glusterd: Implemented cmd to trigger self-heal on a replicate volume. [v3.3.0qa10] (Krishnan Parthasarathi, 2011-09-22, 1 file, -0/+37 lines)
    This cmd is used in the context of proactive self-heal for replicated
    volumes. The user invokes the following cmd when (s)he suspects that
    self-heal needs to be done on a particular volume:
        gluster volume heal <VOLNAME>

    Change-Id: I3954353b53488c28b70406e261808239b44997f3
    BUG: 3602
    Reviewed-on: http://review.gluster.com/454
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vijay@gluster.com>
* glusterd: profile cmd incorrectly reports all bricks down. (Krishnan Parthasarathi, 2011-09-15, 1 file, -1/+1 lines)
    If there are no bricks of a volume running 'local' to glusterd where the
    'profile info' command is issued, glusterd incorrectly reports that all
    bricks of the volume are down.

    Change-Id: Idd703c991f0bcf59b76b9ef8f4ad8cd71960a55b
    BUG: 3553
    Reviewed-on: http://review.gluster.com/430
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vijay@gluster.com>
* glusterd / cli: mount-broker service (Csaba Henk, 2011-09-12, 1 file, -0/+149 lines)
    Mountbroker is configured in the glusterd volfile through a DSL which is
    restricted enough to be able to appear in the role of the value of a
    volfile knob.

    Basically the DSL describes set-theoretical requirements against the
    option set which is sent by the cli (in the hope of getting a mount with
    these options). If the requirements are met and the volume id and the uid
    who is to "own" the mount can be unambiguously deduced from the given
    request, glusterd does the mount with the given parameters.

    The use case of geo-replication is sugared by means of volume options
    which then generate a complete mount-broker option set.

    Demo:
    - add the following options to your glusterd volfile:
        option mountbroker-root /tmp/mbr
        option mountbroker.fool EQL(volfile-id=pop*|user-map-root=*|volfile-server=localhost)&MEET(user-map-root=john|user-map-root=jane)
    - before starting glusterd, create /tmp/mbr owned by root with mode 0755
    - with cli, do
        $ gluster system:: mount fool volfile-id=pop33 user-map-root=jane volfile-server=localhost
    - on successful completion (volume pop33 exists and is started, jane is a
      valid username), the mount path will be echoed to you
    - you can get rid of the mount by
        $ gluster system:: umount <mount-path>

    Change-Id: I629cf64add0a45500d05becc3316f67cdb5b42ff
    BUG: 3482
    Reviewed-on: http://review.gluster.com/128
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vijay@gluster.com>
* glusterd: fail the 'peer probe' if the first connect attempt fails (Amar Tumballi, 2011-09-07, 1 file, -4/+48 lines)
    So the 'gluster peer probe' command doesn't hang till timeout (120s);
    instead it will send the proper error msg to the client.

    Change-Id: I398fa16d526f869f1d27eeb57aeb7ee4451fbecd
    BUG: 1852
    Reviewed-on: http://review.gluster.com/342
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@gluster.com>
* modify to the way we used XDR definitions files (.x files) (Amar Tumballi, 2011-09-07, 1 file, -37/+23 lines)
    Earlier:
    step 1: copy the existing <xdr>.x files to /tmp
    step 2: generate '.[ch]' files using 'rpcgen <xdr>.x'
    step 3: check the diff against the existing files, add only your part of
            changes back to the original file (ignore other changes).
    step 4: there is another file to write wrapper functions to convert
            structures to/from XDR buffers, update it with your new structure.
    step 5: use these wrapper functions in the newly written procedures.
    step 6: commit :-|

    Now:
    step 1: update (mostly adding only) the <xdr>.x file
    step 2: run '<path-to-src>/extras/generate-xdr-files.sh <xdr>.x' command
    step 3: implement rpc procedure to handle the request/response.
    step 4: commit :-)

    Change-Id: I219f9159fc980438c86e847c6b030be96e595ea2
    BUG: 3488
    Reviewed-on: http://review.gluster.com/341
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@gluster.com>
* mgmt/glusterd: code re-structuring (Amar Tumballi, 2011-09-05, 1 file, -1323/+0 lines)
    Created new files per operation (or group of operations).

    Change-Id: Iccb2a6a0cd9661bf940118344b2f7f723e23ab8b
    BUG: 3158
    Reviewed-on: http://review.gluster.com/281
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vijay@gluster.com>
* glusterd: Removed local cli lock (Krishnan Parthasarathi, 2011-09-04, 1 file, -282/+90 lines)
    This change contains:
    - removal of the local cli lock used to serialize cli ops to a glusterd.
    - glusterd's state-machine can handle competing 'lockers' with guaranteed
      progress.
    - flush cluster lock on 'owner' disconnecting and as 'owner', send unlock
      to all on first peer disconnect.

    Change-Id: I25961436b0790b4196f2b3438b105c37279399ad
    BUG: 3320
    Reviewed-on: http://review.gluster.com/123
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vijay@gluster.com>
* mgmt/glusterd, cli: Introduce gluster volume status <volname> (Vijay Bellur, 2011-08-19, 1 file, -1/+60 lines)
    Change-Id: Iea835b9e448e736016da2e44e3c9bfff93f2fa78
    BUG: 3439
    Reviewed-on: http://review.gluster.com/259
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@gluster.com>
* Change Copyright current year (Pranith Kumar K, 2011-08-10, 1 file, -1/+1 lines)
    Change-Id: I2d10f2be44f518f496427f257988f1858e888084
    BUG: 3348
    Reviewed-on: http://review.gluster.com/200
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@gluster.com>
* LICENSE: s/GNU Affero General Public/GNU General Public/ (Pranith Kumar K, 2011-08-06, 1 file, -3/+3 lines)
    Change-Id: I3914467611e573cccee0d22df93920cf1b2eb79f
    BUG: 3348
    Reviewed-on: http://review.gluster.com/182
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@gluster.com>
* Variable IOBUF: Use variable iobuf for cli/glusterd/glusterfsd(mgmt) (shishir gowda, 2011-07-31, 1 file, -19/+38 lines)
    By using variable iobufs, xfer data size is no longer limited to 128K
    (default). This helps in scaling.

    Change-Id: Iab453db9223d887306d150cd6fe0b1eae9c422cc
    BUG: 2472
    Reviewed-on: http://review.gluster.com/13
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amar@gluster.com>
    Reviewed-by: Anand Avati <avati@gluster.com>
* mgmt/glusterd: allow add brick in pure replicate setup (Raghavendra Bhat, 2011-07-17, 1 file, -2/+5 lines)
    Currently, in a pure replicate volume with a replica count of 4 and 4
    bricks, if 3 bricks are removed then add-brick works only if 3 bricks are
    provided; providing fewer bricks fails. This patch fixes that scenario.

    Signed-off-by: Raghavendra Bhat <raghavendrabhat@gluster.com>
    Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
    Signed-off-by: Anand Avati <avati@gluster.com>
    BUG: 1574 ([glusterfs-3.1.0qa18]: add-brick to a replicate volume fails if a brick is removed)
    URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=1574
* mgmt/Glusterd: Implementation volume set help/help-xml (Kaushik BV, 2011-07-12, 1 file, -6/+11 lines)
    Signed-off-by: Kaushik BV <kaushikbv@gluster.com>
    Signed-off-by: Anand Avati <avati@gluster.com>
    BUG: 2041 (volume set help option)
    URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2041
* gluster volume info: fix the output of 'stripe-replicated' volumes (Amar Tumballi, 2011-06-23, 1 file, -0/+5 lines)
    also fix the glusterd-store to preserve the required information

    Signed-off-by: Amar Tumballi <amar@gluster.com>
    Signed-off-by: Anand Avati <avati@gluster.com>
    BUG: 3040 (need a way to create volumes with 'stripe+replicate' setup..)
    URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=3040