path: root/xlators/mgmt/glusterd/src/glusterd-utils.h
Commit log for this file, most recent first. Each subject line ends with (Author, Date, files changed, lines -/+).
* glusterd/snapshot: Recording the snapshots missed in each brick. (Avra Sengupta, 2014-04-02, 1 file, -2/+15)
  Persisting missed snapshot info on disk as well as in memory, in the
  following format:

  -------------NODE-UUID--------------:---------SNAP-UUID--------------=BRICKNUM:-------BRICKPATH--------:OPERATION:STATUS
  927cb5fe-63da-48f5-82f6-e6a09ddc81c4:a17b4fe42c5a45f7a916438643edaa13= 3 :/brick/brick-dirs/brick3: 1 : 1
  927cb5fe-63da-48f5-82f6-e6a09ddc81c4:a17b4fe42c5a45f7a916438643edaa13= 3 :/brick/brick-dirs/brick3: 3 : 1
  927cb5fe-63da-48f5-82f6-e6a09ddc81c4:83a3cc05453b46b2a7eda4c9a9208638= 3 :/brick/brick-dirs/brick3: 1 : 1

  This data is stored on disk at /var/lib/glusterd/snaps/missed_snaps_list.
  In memory, it is maintained as a list of glusterd_missed_snap_info in
  conf; the key for this list is the first two fields, i.e.
  NODE-UUID:SNAP-UUID. For every NODE-UUID:SNAP-UUID there can be
  multiple operations missed on multiple bricks, so every node of
  glusterd_missed_snap_info carries a list of glusterd_snap_op_t.

  This list is maintained and updated during snapshot create, delete,
  and restore operations, which are the only operations that, if missed,
  are recorded here.

  During snapshot create, if a node or a brick is down, we don't receive
  its mount-point info; the snap_status of such bricks is marked as -1
  and their brick details are added to this list.

  During snapshot delete, we check from the originator node whether any
  other nodes holding bricks of the said snap are down; those are added
  to the list. Likewise, if a node is up but the snapshot was pending
  for a snap brick (snap_status of -1), we add that brick too. When a
  subsequent delete entry is processed for an already existing create
  entry, we just mark the create entry's status as done (2) and don't
  add the delete entry to the list.

  During snapshot restore, the same checks as for delete are performed
  from the originator node, and the matching bricks are added to the
  list.

  Change-Id: I22578d14f81a54e13f6832966b70cd4cfdfd5b44
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/7208
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Tested-by: Rajesh Joseph <rjoseph@redhat.com>
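  As a sketch, the in-memory layout described above might look like the
  following; the field and type names here are illustrative assumptions
  (the authoritative definitions are those this change adds to
  glusterd-utils.h):

      #include <stdint.h>

      /* Minimal stand-in for glusterd's doubly-linked list linkage. */
      struct list_head {
              struct list_head *next;
              struct list_head *prev;
      };

      typedef struct glusterd_snap_op_ {
              int32_t           brick_num;     /* BRICKNUM field          */
              char             *brick_path;    /* BRICKPATH field         */
              int32_t           op;            /* OPERATION missed        */
              int32_t           status;        /* 1 = pending, 2 = done   */
              struct list_head  snap_ops_list; /* linkage in per-key list */
      } glusterd_snap_op_t;

      typedef struct glusterd_missed_snap_ {
              char             *node_uuid;    /* key part 1: NODE-UUID    */
              char             *snap_uuid;    /* key part 2: SNAP-UUID    */
              struct list_head  missed_snaps; /* linkage in conf's list   */
              struct list_head  snap_ops;     /* glusterd_snap_op_t nodes */
      } glusterd_missed_snap_info;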
* glusterd/snapshot: populate the snapshot volume list in order when glusterd is restarted (Vijaikumar M, 2014-03-25, 1 file, -0/+3)
  Snapshot objects are already stored in creation order; do the same for
  snapshot volumes, so the list is rebuilt in that order when glusterd
  restarts.

  Change-Id: Iea9594632e52d069f167cc8a840c78d0f7109947
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/7307
  Reviewed-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Tested-by: Rajesh Joseph <rjoseph@redhat.com>
* glusterd/snapshot: Fix brick status (Vijaikumar M, 2014-03-13, 1 file, -0/+4)
  Change-Id: I04ff2ddf5c644dde2051b8a692d287e87ba59942
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/7240
  Reviewed-by: Sachin Pandit <spandit@redhat.com>
  Tested-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Tested-by: Avra Sengupta <asengupt@redhat.com>
* glusterd/snapshot: Modified restore backend (Rajesh Joseph, 2014-03-10, 1 file, -1/+8)
  Instead of creating the volume store files first, we now construct the
  in-memory volinfo first and then generate the backend store files.
  This gives a lot of flexibility in the restore operation. This patch
  also fixes the read-only issue with restored snaps.

  Change-Id: I51032228a5212fc3b90dc6e93f3539af3eb36074
  BUG: 1064688
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-on: http://review.gluster.org/7209
  Reviewed-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
* glusterd/snapshot: Snapshot create and delete changes (Vijaikumar M, 2014-03-06, 1 file, -19/+6)
  With the snap-driven approach, snapshot create takes the snap name
  first, followed by the volumes to be associated with it; snapshot
  delete takes only the snap name. Corresponding changes have been made
  in glusterd.

  CLI changes for the same can be found here:
  http://review.gluster.org/#/c/6947/

  Change-Id: I8bd8f471da5b728165da5f331faad3dde3486823
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/7123
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Tested-by: Rajesh Joseph <rjoseph@redhat.com>
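  Illustratively, the snap-driven syntax then looks like this (the snap
  and volume names are hypothetical):

      gluster snapshot create snap1 vol1
      gluster snapshot delete snap1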
* glusterd/snapshot: store location for snap driven changes (Vijaikumar M, 2014-03-03, 1 file, -3/+9)
  Currently snapshot volfiles are stored at:
  <workdir>/vols/<volname>/snaps/<snapvol>

  With the snap-driven approach we need to store the volfiles at:
  <workdir>/snaps/<snapname>/<snapvol>

  Change-Id: I8efdd5db29833b2b06b64a900cbb4c9b9a5d36b6
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/7006
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Tested-by: Rajesh Joseph <rjoseph@redhat.com>
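  A small, self-contained sketch of building the new location; the
  helper name is hypothetical (glusterd's actual store code does this
  internally):

      #include <stdio.h>

      /* Hypothetical helper: <workdir>/snaps/<snapname>/<snapvol> */
      static void
      snap_volfile_dir (char *buf, size_t len, const char *workdir,
                        const char *snapname, const char *snapvol)
      {
              snprintf (buf, len, "%s/snaps/%s/%s", workdir, snapname,
                        snapvol);
      }

      int
      main (void)
      {
              char path[4096];

              snap_volfile_dir (path, sizeof (path), "/var/lib/glusterd",
                                "snap1", "snapvol1");
              printf ("%s\n", path); /* /var/lib/glusterd/snaps/snap1/snapvol1 */
              return 0;
      }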
* mgmt/glusterd: Having a separate list for snapshots. (Sachin Pandit, 2014-01-15, 1 file, -0/+3)
  Creating a separate list for the snaps taken, as cluttering the volume
  list with snaps is not clean.

  Change-Id: Ida4a183e95e8694b85ebb5a680d06b7d29a460a0
  BUG: 1040947
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
* cli/glusterd: implement the snap and cg delete functionalities (Raghavendra Bhat, 2013-12-12, 1 file, -0/+9)
  Change-Id: Icdb66c89acdd043d0d6368c48ce2e01b1a40966f
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
* glusterd/snap: Fix for socket connect failures on brick start (Avra Sengupta, 2013-12-10, 1 file, -1/+1)
  Change-Id: I9f53bd4d83794c69c54e4a03f59425a1ca6a4ac3
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
* glusterd/Jarvis: Added aggr rsp dict in mgmt framework (Avra Sengupta, 2013-11-15, 1 file, -0/+2)
  Also fixes snapshot config output.

  Change-Id: Ia50d94492009cf73dbb99ba20117b9fa4c41048a
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
* mgmt/glusterd: handle snap volume store and actual volume store separately (Raghavendra Bhat, 2013-11-15, 1 file, -1/+2)
  Change-Id: I8b88fe94d0f9ee1089cafdda037abcf2f7a180ca
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
* mgmt/glusterd: snapshot create command (Raghavendra Bhat, 2013-11-15, 1 file, -1/+18)
  This is still a work in progress. As of now, these things are done:
  * Take the snapshot of the backend brick
  * Create the new volume for the snapshot
  * Create the brick and the client volfiles
  * Store the snapshot related info in /var/lib/glusterd
  * Create the snap object representing the snapshot

  TODO: Start the brick processes for the snapshot

  Change-Id: I26fbb0f8e5cf004d4c1dbca51819bab1cd1bac15
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
* glusterd/utils: Get brick mount's device name (shishir gowda, 2013-11-15, 1 file, -0/+6)
  Change-Id: I03ff9e8094e7e36b28b521380949c7e9044c2e4e
  Signed-off-by: shishir gowda <sgowda@redhat.com>
* mgmt/glusterd: Introduce snapshot infrastructure (shishir gowda, 2013-11-15, 1 file, -1/+1)
  APIs for creating, adding, finding, and removing snapshots and
  consistency groups are provided.

  Change-Id: Ic28da69a075b062aefdf14754c68259ca58bd427
  Signed-off-by: shishir gowda <sgowda@redhat.com>
* gNFS: RFE for NFS connection behavior (Santosh Kumar Pradhan, 2013-11-14, 1 file, -0/+6)
  Implement reconfigure() for the NFS xlator so that volume set/reset
  won't restart the NFS server process. A few options, e.g.
  nfs.mem-factor and nfs.port, cannot be reconfigured dynamically and
  still need NFS to be restarted.

  Change-Id: Ic586fd55b7933c0a3175708d8c41ed0475d74a1c
  BUG: 1027409
  Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
  Reviewed-on: http://review.gluster.org/6236
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
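  As a sketch of what an xlator reconfigure() entry point can look like;
  the option key and state field below are illustrative assumptions, not
  necessarily what this patch handles:

      /* Sketch only; the real code lives in xlators/nfs/server. */
      int
      reconfigure (xlator_t *this, dict_t *options)
      {
              struct nfs_state *nfs = this->private;
              int               ret = -1;

              /* Options safe to change at runtime are re-read here... */
              GF_OPTION_RECONF ("nfs.enable-ino32", nfs->enable_ino32,
                                options, bool, out);

              /* ...while nfs.mem-factor, nfs.port etc. are deliberately
               * not handled, so changing them still needs a restart. */
              ret = 0;
      out:
              return ret;
      }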
* mgmt/glusterd: Relax extended attribute checks for volume create and add-brick force. (Vijay Bellur, 2013-10-17, 1 file, -1/+1)
  The expectation with force is that the user is aware of the
  consequences of the sanity checks not being triggered.

  Change-Id: I79dfeed16a23829a7217cef33ab83f9f0ffae336
  Signed-off-by: Vijay Bellur <vbellur@redhat.com>
  BUG: 1007509
  Reviewed-on: http://review.gluster.org/5746
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* cli,glusterd: Implement 'volume status tasks' (Krutika Dhananjay, 2013-10-08, 1 file, -0/+2)
  oVirt's Gluster Integration needs an inexpensive command that can be
  executed every 10 seconds to monitor async tasks and their parameters,
  for all volumes.

  The solution involves adding a 'tasks' sub-command to 'volume status'
  to fetch only the async task IDs, type and other relevant parameters.
  Only the originator glusterd participates in this command, as all the
  information needed is available on all the nodes. This is to make the
  command suitable for being executed every 10 seconds.

  Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
  BUG: 1012346
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/6006
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
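  An illustrative invocation of the new sub-command (the volume name is
  hypothetical):

      gluster volume status vol1 tasks

  which would list only the task IDs, types and related parameters of
  the async tasks running on vol1.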
* glusterd: Allowing root@hostname::slave georep sessions to be created. (Venky Shankar, 2013-09-04, 1 file, -1/+1)
  non-root@hostname::slave-vol geo-rep sessions are not supported. Only
  hostname and root@hostname sessions are supported, and are treated as
  the same.

  Change-Id: I87551e1bd4ff4e0e6520c34eb3d944587cc65476
  BUG: 998933
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-on: http://review.gluster.org/5659
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: initiating gsyncd restart during add-brick (Avra Sengupta, 2013-07-31, 1 file, -0/+16)
  During add-brick, when a new brick is added on a node that was already
  part of the existing volume and gsyncd was already running there, all
  gsyncd processes running on that node for that particular master (and
  any slave sessions) will be restarted.

  If a new brick is added on a new node, then after adding the brick the
  user has to perform the following steps:

  1. gluster system:: execute gsec_create
  2. gluster volume geo-replication <master-vol> <slave-vol> create push-pem force
  3. gluster volume geo-replication <master-vol> <slave-vol> start force

  Change-Id: I4b9633e176c80e4a7cf33f42ebfa47ab8fc283f1
  BUG: 989532
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/5416
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/cli changes for distributed geo-rep (Avra Sengupta, 2013-07-26, 1 file, -1/+19)
  Commands:
  gluster system:: execute gsec_create
  gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
  gluster volume geo-rep <master> <slave-url> start [force]
  gluster volume geo-rep <master> <slave-url> stop [force]
  gluster volume geo-rep <master> <slave-url> delete
  gluster volume geo-rep <master> <slave-url> config
  gluster volume geo-rep <master> <slave-url> status

  The geo-replication is distributed: the session will be created, and
  gsyncd will be spawned, on all relevant nodes instead of only one.

  geo-rep: collecting status-detail related data. Added a persistent
  store for saving information about TotalFilesSynced, TotalSyncTime and
  TotalBytesSynced.

  Changes in the status information in the socket:
  Existing (Ex): FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;
  New (Ex): FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;

  Persistent details are stored in
  /var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status

  Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
  BUG: 847839
  Original Author: Avra Sengupta <asengupt@redhat.com>
  Original Author: Venky Shankar <vshankar@redhat.com>
  Original Author: Aravinda VK <avishwan@redhat.com>
  Original Author: Amar Tumballi <amarts@redhat.com>
  Original Author: Csaba Henk <csaba@redhat.com>
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/5132
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/common-utils: move hostname helper functions to common-utils (Krishnan Parthasarathi, 2013-07-04, 1 file, -3/+0)
  Change-Id: If47e209cb61ea0eb74ee2d6ef9e9342b2d6ee13a
  BUG: 980838
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/5261
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: More checks before starting rebalance/remove-brick (Kaushal M, 2013-07-02, 1 file, -0/+4)
  Check if a previous remove-brick operation has been committed before
  starting a new rebalance/remove-brick task.

  Change-Id: I553e5ba64a6a352ca91032ab1a17997051a4494e
  BUG: 963541
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/5019
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd: Disable transport before cleaning up rpc object (Krishnan Parthasarathi, 2013-06-18, 1 file, -0/+3)
  Problem:
  The rpc_transport object, which is part of rpc_clnt, is destroyed
  prematurely. This is because the rpc_transport object is ref'd by both
  the socket layer and the rpc layer. These refs, until the
  synctask'izing of operations, were unref'd sequentially in the epoll
  thread. With more threads at play, the sequential-unref guarantee is
  off.

  Fix:
  Shutting down the transport before proceeding with the cleanup of the
  rpc_clnt object serialises the unrefs on the rpc_transport object,
  thereby eliminating the race.

  Also, we no longer store the address of the brickinfo in the brick's
  rpc notify function, to avoid the possibility of referring to a freed
  brickinfo; instead we use a string-based id to 'reach' the
  corresponding brickinfo. A sketch of the teardown order follows this
  entry.

  Change-Id: If2739e2eeaee1e8b071ab2b6754b7ea0f81cfceb
  BUG: 962619
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/5000
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
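  A sketch of the teardown order described above, assuming
  rpc_clnt_disable()/rpc_clnt_unref() as the shutdown and cleanup
  primitives (the wrapper name is hypothetical):

      /* Disconnect the transport first so the remaining unrefs on the
       * rpc_transport object are serialised, then drop the clnt ref. */
      static void
      brick_rpc_teardown (struct rpc_clnt *rpc)
      {
              rpc_clnt_disable (rpc); /* shuts down the transport       */
              rpc_clnt_unref (rpc);   /* now safe: no racing socket I/O */
      }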
* mgmt/glusterd: Make sure peerinfo->uuid_str is assigned (Pranith Kumar K, 2013-05-31, 1 file, -0/+2)
  Change-Id: I9e2743ab61c8baee92a1dfd376ec4bb145776176
  BUG: 963524
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/5016
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd: Log hostname of the peer where there is cksum/version mismatch (Krutika Dhananjay, 2013-05-02, 1 file, -1/+1)
  Change-Id: I08065aaa3c140d4b02af4ca38f5f4d00d7f0c2bb
  BUG: 958739
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/4937
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Introduce volume op-versions (Kaushal M, 2013-04-26, 1 file, -0/+3)
  Each volume is now associated with two op-versions:
  * op_version - the op-version of the highest op-versioned feature
    enabled
  * client_op_version - the op-version of the highest op-versioned
    feature enabled which affects the clients only

  These two op-versions are generated dynamically and kept updated
  during runtime; see the sketch after this entry. Glusterd now uses the
  respective volume's client op-version during getspec requests.

  To achieve the above, a new boolean field, client_option, is
  introduced in the vme table; it tells whether the option is a
  client-side option.

  Change-Id: I12c83b1dd29ab506026efd50d448cebbcee53c27
  BUG: 907311
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/4584
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
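  A self-contained sketch of the dynamic recomputation described above;
  the struct and function names are hypothetical stand-ins for the vme
  table handling in glusterd:

      #include <stdbool.h>

      struct vme_entry {
              const char *key;
              int         op_version;
              bool        client_option; /* the new field introduced here */
      };

      /* Recompute both per-volume op-versions from enabled options. */
      static void
      update_op_versions (const struct vme_entry *enabled, int n,
                          int *op_version, int *client_op_version)
      {
              *op_version = *client_op_version = 1;
              for (int i = 0; i < n; i++) {
                      if (enabled[i].op_version > *op_version)
                              *op_version = enabled[i].op_version;
                      if (enabled[i].client_option &&
                          enabled[i].op_version > *client_op_version)
                              *client_op_version = enabled[i].op_version;
              }
      }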
* glusterd: big lock - a coarse-grained locking to prevent races (Krishnan Parthasarathi, 2013-04-12, 1 file, -0/+2)
  There are primarily three lists in the glusterd process that are
  accessed concurrently: priv->volumes, priv->peers and
  volinfo->bricks_list.

  Big-lock approach
  -----------------
  WHAT IS IT?
  The big lock is a coarse-grained lock which protects all three lists,
  mentioned above, from racy access.

  HOW DOES IT WORK?
  At any given point in time, glusterd's thread(s) are in execution
  _iff_ there is a preceding, inbound network event. (The sigwaiter
  thread and timer thread are exceptions.) A network event is an
  external trigger to glusterd, via the epoll thread, in the form of
  POLLIN and POLLERR. As long as we take the big lock at all such entry
  points and yield it when we are done, we are guaranteed that all the
  network events accessing the global lists are serialised.

  This amounts to holding the big lock at:
  - all the handlers of all the actors in glusterd (POLLIN)
  - all the cbks in glusterd (POLLIN)
  - rpc_notify (DISCONNECT event), if we access/modify one of the three
    lists (POLLERR)

  In the case of synctask'ized volume operations, we must remember that
  if we held the big lock for the entire duration of the handler, we
  could block other non-synctask rpc actors from executing; for example,
  volume-start would block in PMAP SIGNIN if done incorrectly. To
  prevent this, we yield the big lock when we yield the synctask, and
  reacquire it when the synctask wakes up. The sketch after this entry
  shows the basic pattern.

  Change-Id: Ib929f9905b55fb6c3fc27fefb497a26dba058e4f
  BUG: 948686
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/4784
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
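  At each entry point the pattern amounts to roughly the following
  sketch, assuming priv->big_lock is a synclock_t guarded with
  synclock_lock/synclock_unlock; the handler and helper names are
  hypothetical:

      /* Sketch: hold the big lock for the duration of the handler. */
      int
      glusterd_handle_some_event (rpcsvc_request_t *req)
      {
              glusterd_conf_t *priv = THIS->private;
              int              ret  = -1;

              synclock_lock (&priv->big_lock);   /* serialise list access */
              ret = handle_event (req);          /* may touch volumes,
                                                    peers, bricks_list    */
              synclock_unlock (&priv->big_lock); /* let the next event in */

              return ret;
      }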
* glusterd: changes in 'volume create' behaviour (Krutika Dhananjay, 2013-04-09, 1 file, -2/+7)
  This patch incorporates all the changes suggested on the behaviour of
  'volume create' in http://review.gluster.org/#change,4214 (comment
  #14, to be precise).

  Change-Id: Iaac524a59738b177415595b18aa8a136090d3d25
  BUG: 948729
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/4740
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Added the validation function for subvols-per-directory (Avra Sengupta, 2013-02-28, 1 file, -0/+3)
  Change-Id: Ie2259023b9001311a2032792639c3093054f6750
  BUG: 896431
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/4552
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd: allow multiple instances of glusterd on one machine (Jeff Darcy, 2013-02-21, 1 file, -0/+3)
  This is needed to support automated testing of cluster-communication
  features such as probing and quorum. In order to use this, you need to
  do the following preparatory steps:

  * Copy /var/lib/glusterd to another directory for each virtual host
  * Ensure that each virtual host has a different UUID in its
    glusterd.info

  Now you can start each copy of glusterd with the following
  xlator-options:

  * management.transport.socket.bind-address=$ip_address
  * management.working-directory=$unique_working_directory

  You can use 127.x.y.z addresses for binding without needing to assign
  them to interfaces explicitly. Note that you must use addresses, not
  names, because of some stuff in the socket code that's not worth
  fixing just for this usage; after that you can use names in /etc/hosts
  instead.

  At this point you can issue CLI commands to a specific glusterd using
  the --remote-host option; see the example after this entry. So far
  probe, volume create/start/stop, mount, and basic I/O all seem to work
  as expected with multiple instances.

  Change-Id: I1beabb44cff8763d2774bc208b2ffcda27c1a550
  BUG: 913555
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: http://review.gluster.org/4556
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
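  Putting the steps together, a second instance might be started like
  this (the address and directory are examples, not prescribed values):

      cp -a /var/lib/glusterd /var/lib/glusterd-2
      # edit /var/lib/glusterd-2/glusterd.info to give it a unique UUID
      glusterd --xlator-option management.transport.socket.bind-address=127.0.0.2 \
               --xlator-option management.working-directory=/var/lib/glusterd-2
      gluster --remote-host=127.0.0.2 peer status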
* glusterd: Made volume-quota use synctask framework. (Avra Sengupta, 2013-02-16, 1 file, -1/+1)
  Change-Id: I4c275253144ed3ac11a701a56dd1116c002471ba
  BUG: 852147
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/4495
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Made volume clear-locks use synctask framework. (Avra Sengupta, 2013-02-08, 1 file, -0/+2)
  Change-Id: Ia1fe3d0500d999c1f95b43c9e53947834e39d680
  BUG: 852147
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/4490
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Moved node rsp functions to glusterd-utils.c (Krishnan Parthasarathi, 2013-02-03, 1 file, -0/+15)
  Change-Id: Ib4c4794563a5a694fab16f17c642f788399462f6
  BUG: 852147
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/4295
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: make 'glusterd_is_local_addr' return bool (JulesWang, 2013-01-26, 1 file, -1/+1)
  Change-Id: Id3bd0bfc4802c166f7a32b0cc6a726aeb5617b5d
  BUG: 890618
  Signed-off-by: JulesWang <w.jq0722@gmail.com>
  Reviewed-on: http://review.gluster.org/4427
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd, cli: Task ids for async tasks (Kaushal M, 2012-12-19, 1 file, -0/+6)
  This patch introduces task-ids for async tasks like rebalance,
  remove-brick and replace-brick. An id is generated for each task when
  it is started and displayed to the user in cli output. The status of
  running tasks is also included in the output of "volume status" along
  with the id, so that a user can easily track the progress of an async
  task. Also:
  * added tests for this feature to the regression test suite.
  * added a python script for creating files, 'create-files.py',
    courtesy Vijaykumar Koppad (vkoppad@redhat.com), to the test suite.

  This patch reverts the revert commit
  698deb33d731df6de84da8ae8ee4045e1543a168.

  BUG: 857330
  Change-Id: Id43d7cb629a38f47f733fbc18cb4c5f2f0327c7a
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/4294
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
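  Generating such an id amounts to the following self-contained sketch
  using libuuid (the real code sits in glusterd's op machinery):

      #include <stdio.h>
      #include <uuid/uuid.h>

      int
      main (void)
      {
              uuid_t id;
              char   str[37];         /* 36 chars + NUL */

              uuid_generate (id);     /* one id per started async task  */
              uuid_unparse (id, str); /* string form shown in cli output */
              printf ("Task ID: %s\n", str);
              return 0;
      }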
* Revert "glusterd, cli: Task id's for async tasks"Anand Avati2012-12-041-6/+0
| | | | | | | | | | | This reverts commit ed15521d4e5af2b52b78fd33711e7562f5273bc6 Strangely, the test scripts are "silently" passing for failures too. Reverting patch for now. Change-Id: I802ec1634c7863dc373cc7dc4a47bd4baa72764e Reviewed-on: http://review.gluster.org/4267 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd, cli: Task ids for async tasks (Kaushal M, 2012-12-04, 1 file, -0/+6)
  This patch introduces task-ids for async tasks like rebalance,
  remove-brick and replace-brick. An id is generated for each task when
  it is started and displayed to the user in cli output. The status of
  running tasks is also included in the output of "volume status" along
  with the id, so that a user can easily track the progress of an async
  task. Also:
  * added tests for this feature to the regression test suite.
  * added a python script for creating files, 'create-files.py',
    courtesy Vijaykumar Koppad (vkoppad@redhat.com), to the test suite.

  Change-Id: Ib0c0d12e0d6c8f72ace48d303d7ff3102157e876
  BUG: 857330
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/3942
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* mgmt/glusterd: Consider nodesvc to be running after online (Pranith Kumar K, 2012-11-29, 1 file, -2/+2)
  The definition of "online" here is that the RPC_CLNT_CONNECT event has
  arrived for the nfs/self-heal-daemon process.

  For automated tests, the script sometimes needs to wait until the
  self-heal daemon comes online so that the relevant commands can be
  executed. Before this change, gluster volume status printed whether
  the self-heal daemon was running or not based on the lock availability
  on the pidfile. But there is a small window where the lock on the pid
  file is present and the process is still not online, so the commands
  that depended on this kept failing in the test script.

  Change-Id: I0e44e18b08d7b653d34fa170c1f187d91c888cd9
  BUG: 858212
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/4236
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* geo-rep / gsyncd,glusterd: do not hardcode socket path (Csaba Henk, 2012-11-28, 1 file, -0/+2)
  ... in the gsyncd python code. Instead, use the configuration
  mechanism to set it suitably from glusterd.

  Change-Id: I9fe2088b14d28588d1e64fe892740cc5755b8365
  BUG: 868877
  Signed-off-by: Csaba Henk <csaba@redhat.com>
  Reviewed-on: http://review.gluster.org/4143
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* mgmt/glusterd: Implementation of server-side quorum (Pranith Kumar K, 2012-11-23, 1 file, -5/+26)
  Feature page:
  http://www.gluster.org/community/documentation/index.php/Features/Server-quorum

  Change-Id: I747b222519e71022462343d2c1bcd3626e1f9c86
  BUG: 839595
  Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
  Reviewed-on: http://review.gluster.org/3811
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: 'volume set' changes for op-version support (Kaushal M, 2012-10-31, 1 file, -0/+6)
  An op-version check is performed for the given keys during stage. The
  commit phase moves the cluster op-version to the required version if
  needed; see the sketch after this entry.

  Change-Id: Id5c387094dbec723df736b2ecdc49ff93c179e0e
  BUG: 814534
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/3780
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
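  A self-contained sketch of the staging and commit rules described
  above (all names are hypothetical stand-ins for the glusterd code):

      #include <stdbool.h>

      /* Stage: reject a key whose op-version exceeds what the cluster
       * can ever support. */
      static bool
      stage_allows_key (int key_op_version, int cluster_max_supported)
      {
              return key_op_version <= cluster_max_supported;
      }

      /* Commit: raise the cluster op-version to the key's op-version
       * if it is currently lower. */
      static void
      commit_bump (int key_op_version, int *cluster_op_version)
      {
              if (key_op_version > *cluster_op_version)
                      *cluster_op_version = key_op_version;
      }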
* glusterd: volume-start, add-brick and remove-brick to use synctask framework (Krishnan Parthasarathi, 2012-10-11, 1 file, -4/+6)
  - Added volume-id validation to the glusterd-syncop code.
  - All daemons are restarted using synctasks in init().
  - glusterd_brick_start has wait/nowait variants to support volume
    commands using the synctask framework and those that aren't.

  Change-Id: Ieec26fe1ea7e5faac88cc7798d93e4cc2b399d34
  BUG: 862834
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/3969
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Moved peer rsp handling functions to glusterd-utils (Krishnan Parthasarathi, 2012-10-11, 1 file, -0/+15)
  - Moved inner functions used in conjunction with synctask 'out' of
    their enclosing functions.

  Change-Id: I7fbfd9881ea58645c4295a9fa7163ddd15a45d2f
  BUG: 862834
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/4066
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: glusterd_brick_stop should be race free wrt pmap (Krishnan Parthasarathi, 2012-10-10, 1 file, -2/+4)
  This is important for the effort to make glusterd use the synctask
  framework.

  Change-Id: I0affb10a342df99df8daccfd6eef8fa6dd63928c
  BUG: 862834
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/4057
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Fix to log command status at the appropriate time (Krutika Dhananjay, 2012-09-20, 1 file, -0/+5)
  PROBLEM:
  In the existing implementation, the success/failure of execution of a
  command is decided (and logged) in glusterd handler functions.
  Strictly speaking, the logging mechanism must take into account what
  course the command takes within the state machine before concluding
  whether it succeeded or failed.

  FIX:
  This patch attempts to fix the above issue for vol commands. The
  format of the log message is as follows:

  for failure: <command string> : FAILED : <cause of failure>
  for success: <command string> : SUCCESS

  APPROACH (in a nutshell):
  * The command string is packed into a dict at the cli and sent to
    glusterd.
  * glusterd logs the command status just before doing a "submit_reply",
    which is called (either directly or indirectly via a call to
    glusterd_op_cli_send_response) at 2 places for every vol command:
    i.  in handler functions, and
    ii. in glusterd_op_txn_complete

  In short, the failure of a command in the handler implies the command
  has indeed failed. However, its success in the handler does NOT
  necessarily mean the command succeeded/will succeed.

  Change-Id: I5a8a2ddc318ef2dc2a9699f704a6bcd2f0ab0277
  BUG: 823081
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/3948
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
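  For example (the commands and failure cause below are illustrative
  instantiations of the format above, not captured output):

      volume create vol1 host1:/bricks/b1 : SUCCESS
      volume stop vol2 : FAILED : Volume vol2 does not exist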
* All: License message change (Varun Shastry, 2012-09-13, 1 file, -7/+6)
  License message changed for server-side, dual license GPLv2 and
  LGPLv3+.

  Change-Id: Ia9e53061b9d2df3b3ef3bc9778dceff77db46a09
  BUG: 852318
  Signed-off-by: Varun Shastry <vshastry@redhat.com>
  Reviewed-on: http://review.gluster.org/3940
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* All: License message change (Varun Shastry, 2012-08-28, 1 file, -14/+5)
  The license message is changed to:

  Copyright (c) 2008-2012 Red Hat, Inc. <http://www.redhat.com>
  This file is part of GlusterFS.

  This file is licensed to you under your choice of the GNU Lesser
  General Public License, version 3 or any later version (LGPLv3 or
  later), or the GNU General Public License, version 2 (GPLv2), in all
  cases as published by the Free Software Foundation.

  Change-Id: I07d2b63ed5fbbbd1884f1e74f2dd56013d15b0f4
  BUG: 852318
  Signed-off-by: Varun Shastry <vshastry@redhat.com>
  Reviewed-on: http://review.gluster.org/3858
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Made volume set help/help-xml a non-cluster operation. (Krishnan Parthasarathi, 2012-08-02, 1 file, -0/+2)
  - Retained apparently redundant checks in the stage and commit phases
    of set volume for the help options, for backward compatibility.

  Change-Id: Iaefe3805d6b5eeeced2e7e4870830edf3e61dc87
  BUG: 844696
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.com/3761
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
* glusterd: Refactored brickinfo APIs (Krishnan Parthasarathi, 2012-07-19, 1 file, -11/+5)
  This patch modifies the existing brickinfo function signatures and/or
  names so that each does one thing right and is called by an
  'appropriate' name.

  - Decoupled brickinfo_get and is_brickpath_available
  - Removed a dead comment about realpath(3) in canonicalize_path
  - Renamed glusterd_brickinfo_from_brick to
    glusterd_brickinfo_new_from_brick, to make the name reflect that an
    allocation is happening

  Change-Id: I29daba6d431ca799d43c927b9dfbaeda327e83e8
  BUG: 764890
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.com/3668
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
* glusterd: cut out a standalone socket path calculation routine (Csaba Henk, 2012-05-31, 1 file, -0/+3)
  Change-Id: If5f196c9154ea59e37b83d3e4cad445fee6e9d45
  BUG: 826512
  Signed-off-by: Csaba Henk <csaba@redhat.com>
  Reviewed-on: http://review.gluster.com/3490
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pranithk@gluster.com>