path: root/xlators/mgmt/glusterd/src/glusterd-syncop.c
Commit message | Author | Age | Files | Lines
...
* glusterd: group server-quorum related code together (Krishnan Parthasarathi, 2015-04-01, 1 file changed, -0/+1)
| | | | | | | | | | | | | | | Server-quorum implementation was spread in many files. This patch brings them all together into a single file, namely glusterd-server-quorum.c. All exported functions are available via glusterd-server-quorum.h Change-Id: I8fd77114b5bc6b05127cb8a6a641e0295f0be7bb BUG: 1205592 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/9492 Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Maintain local xaction_peer list for op-sm (Atin Mukherjee, 2015-03-26, 1 file changed, -20/+1)
| | | | | | | | | | | | | | | http://review.gluster.org/9269 addresses maintaining local xaction_peers in syncop and mgmt_v3 framework. This patch is to maintain local xaction_peers list for op-sm framework as well. Change-Id: Idd8484463fed196b3b18c2df7f550a3302c6e138 BUG: 1204727 Signed-off-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-on: http://review.gluster.org/9972 Reviewed-by: Anand Nekkunti <anekkunt@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Do not use global opinfo in syncop (Atin Mukherjee, 2015-03-23, 1 file changed, -2/+0)
| | | | | | | | | | | | | | | | Global opinfo should not be referred by syncop framework as it uses local txn_opinfo for every transaction. There is one place in the codebase where the global opinfo is set with the local txn_opinfo which can lead to an incorrect opinfo for an on-going op-sm transaction which refers to the same global opinfo. Change-Id: Ida63a8871b8d03fe646146eddfd3f2473f1b1d7c BUG: 1202745 Signed-off-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-on: http://review.gluster.org/9908 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Nekkunti <anekkunt@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* features/quota: Introducing inode quota (vmallika, 2015-03-18, 1 file changed, -1/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ========================================================================== Inode quota ========================================================================== = Currently, the only way to retrieve the number of files/objects in a = = directory or volume is to do a crawl of the entire directory/volume. = = This is expensive and is not scalable. = = = = The proposed mechanism will provide an easier alternative to determine = = the count of files/objects in a directory or volume. = = = = The new mechanism proposes to store count of objects/files as part of = = an extended attribute of a directory. Each directory's extended = = attribute value will indicate the number of files/objects present = = in a tree with the directory being considered as the root of the tree. = = = = The count value can be accessed by performing a getxattr(). = = Cluster translators like afr, dht and stripe will perform aggregation = = of count values from various bricks when getxattr() happens on the key = = associated with file/object count. = A new interface is introduced: ------------------------------ limit-objects : limit the number of inodes at directory level list-objects : list the directories where the limit is set remove-objects : remove the limit from the directory ========================================================================== CLI COMMAND: gluster volume quota <volname> limit-objects <path> <number> [<percent>] * <number> is a hard-limit for number of objects limitation for path "<path>" If hard-limit is exceeded, creation of file/directory is no longer permitted. * <percent> is a soft-limit for number of objects creation for path "<path>" If soft-limit is exceeded, a warning is issued for each creation. CLI COMMAND: gluster volume quota <volname> remove-objects [path] ========================================================================== CLI COMMAND: gluster volume quota <volname> list-objects [path] ... Sample output: ------------------ Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit exceeded? ------------------------------------------------------------------------ -------------------------------------- /dir 10 80% 10 0 Yes Yes ========================================================================== [root@snapshot-28 dir]# ls a b file11 file12 file13 file14 file15 file16 file17 [root@snapshot-28 dir]# touch a1 touch: cannot touch `a1': Disk quota exceeded * Nine files are created in directory "dir" and directory is included in * the count too. Hence the limit "10" is reached and further file creation fails ========================================================================== Note: We have also done some re-factoring in cli for volume name validation. New function cli_validate_volname is created ========================================================================== Change-Id: I1823497de4f790a2a20ebb1770293472ea33ee2b BUG: 1190108 Signed-off-by: Sachin Pandit <spandit@redhat.com> Signed-off-by: vmallika <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/9769 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
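The commit above exposes the per-directory object count through a getxattr() on the directory. A minimal sketch of reading such a count from a client mount follows; the xattr key name and the value encoding used here are assumptions made for illustration, not necessarily what the feature actually registers.

    #include <stdio.h>
    #include <sys/xattr.h>

    int
    main (int argc, char *argv[])
    {
            /* Hypothetical key: the real object-count xattr key may differ. */
            const char *key = "trusted.glusterfs.quota.object-count";
            char        buf[64] = {0};
            ssize_t     len;

            if (argc < 2) {
                    fprintf (stderr, "usage: %s <directory>\n", argv[0]);
                    return 1;
            }

            /* Cluster translators (afr/dht/stripe) aggregate this value
             * across bricks when the getxattr reaches them. */
            len = getxattr (argv[1], key, buf, sizeof (buf) - 1);
            if (len < 0) {
                    perror ("getxattr");
                    return 1;
            }

            /* Assumes a printable value; the on-wire encoding is not
             * specified in the commit message above. */
            printf ("objects under %s: %s\n", argv[1], buf);
            return 0;
    }
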
* glusterd: Protect the peer list and peerinfos with RCU. (Kaushal M, 2015-03-16, 1 file changed, -3/+11)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The peer list and the peerinfo objects are now protected using RCU. Design patterns described in the Paul McKenney's RCU dissertation [1] (sections 5 and 6) have been used to convert existing non-RCU protected code to RCU protected code. Currently, we are only targetting guaranteeing the existence of the peerinfo objects, ie., we are only looking to protect deletes, not all updaters. We chose this, as protecting all updates is a much more complex task. The steps used to accomplish this are, 1. Remove all long lived direct references to peerinfo objects (apart from the peerinfo list). This includes references in glusterd_peerctx_t (RPC), glusterd_friend_sm_event_t (friend state machine) and others. This way no one has a reference to deleted peerinfo object. 2. Replace the direct references with indirect references, ie., use peer uuid and peer hostname as indirect references to the peerinfo object. Any reader or updater now uses the indirect references to get to the actual peerinfo object, using glusterd_peerinfo_find. Cases where a peerinfo cannot be found are handled gracefully. 3. The readers get and use the peerinfo object only within a RCU read critical section. This prevents the object from being deleted/freed when in actual use. 4. The deletion of a peerinfo object is done in a ordered manner (glusterd_peerinfo_destroy). The object is first removed from the peerinfo list using an atomic list remove, but the list head is not reset to allow existing list readers to complete correctly. We wait for readers to complete, before resetting the list head. This removes the object from the list completely. After this no new readers can get a reference to the object, and it can be freed. This change was developed on the git branch at [2]. This commit is a combination of the following commits on the development branch. d7999b9 Protect the glusterd_conf_t->peers_list with RCU. 0da85c4 Synchronize before INITing peerinfo list head after removing from list. 32ec28a Add missing rcu_read_unlock 8fed0b8 Correctly exit read critical section once peer is found. 63db857 Free peerctx only on rpc destruction 56eff26 Cleanup style issues e5f38b0 Indirection for events and friend_sm 3c84ac4 In __glusterd_probe_cbk goto unlock only if peer already exists 141d855 Address review comments on 9695/1 aaeefed Protection during peer updates 6eda33d Revert "Synchronize before INITing peerinfo list head after removing from list." 
f69db96 Remove unneeded line b43d2ec Address review comments on 9695/4 7781921 Address review comments on 9695/5 eb6467b Add some missing semi-colons 328a47f Remove synchronize_rcu from glusterd_friend_sm_transition_state 186e429 Run part of glusterd_friend_remove in critical section 55c0a2e Fix gluster (peer status/ pool list) with no peers 93f8dcf Use call_rcu to free peerinfo c36178c Introduce composite struct, gd_rcu_head [1]: http://www.rdrop.com/~paulmck/RCU/RCUdissertation.2004.07.14e1.pdf [2]: https://github.com/kshlm/glusterfs/tree/urcu Change-Id: Ic1480e59c86d41d25a6a3d159aa3e11fbb3cbc7b BUG: 1191030 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/9695 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Anand Nekkunti <anekkunt@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
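The read-side pattern described in step 3 of the commit above can be sketched roughly as follows with the liburcu primitives (urcu-bp flavour assumed, so no explicit thread registration is shown); the peerinfo type and the lookup helper are stand-ins, not the actual glusterd code.

    #include <urcu.h>               /* liburcu read-side primitives */

    /* Illustrative stand-in for glusterd's indirect peerinfo lookup. */
    struct peerinfo;
    struct peerinfo *find_peer_by_uuid (const unsigned char *uuid);

    int
    use_peer (const unsigned char *uuid)
    {
            struct peerinfo *peer = NULL;
            int              ret  = -1;

            rcu_read_lock ();               /* enter read critical section */
            peer = find_peer_by_uuid (uuid);
            if (peer) {
                    /* Safe to dereference: a concurrent deleter must wait
                     * for this critical section before freeing the object. */
                    ret = 0;
            }
            rcu_read_unlock ();             /* object may now be reclaimed */

            return ret;
    }
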
* mgmt/glusterd: Changes required for disperse volume heal commands (Pranith Kumar K, 2015-03-10, 1 file changed, -4/+3)
| | | | | | | | | | | | | - Include xattrop64-watchlist for index xlator for disperse volumes. - Change the functions that exist to consider disperse volumes also for sending commands to disperse xls in self-heal-daemon. Change-Id: Iae75a5d3dd5642454a2ebf5840feba35780d8adb BUG: 1177601 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/9793 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Replace libglusterfs lists with liburcu lists (Kaushal M, 2015-03-03, 1 file changed, -25/+28)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch replaces usage of the libglusterfs lists data structures and API in glusterd with the lists data structures and API from liburcu. The liburcu data structes and APIs are a drop-in replacement for libglusterfs lists. All usages have been changed to keep the code consistent, and free from confusion. NOTE: glusterd_conf_t->xprt_list still uses the libglusterfs data structures and API, as it holds rpc_transport_t objects, which is not a part of glusterd and is not being changed in this patch. This change was developed on the git branch at [1]. This commit is a combination of the following commits on the development branch. 6dac576 Replace libglusterfs lists with liburcu lists a51b5ab Fix compilation issues d98a06f Fix merge issues a5d918e Remove merge remnant 1cca113 More style cleanup 1917be3 Address review comments on 9624/1 8d10f13 Use cds_lists for glusterd_svc_t 524ad5d Add rculist header in glusterd-conn-helper.c 646f294 glusterd: add list_add_order API honouring rcu [1]: https://github.com/kshlm/glusterfs/tree/urcu Change-Id: Ic613c5b6e496a677b9d3de15fc042a0492109fb0 BUG: 1191030 Signed-off-by: Kaushal M <kaushal@redhat.com> Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/9624 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com> Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
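The drop-in nature of the replacement described above can be illustrated like this; the struct and field names are invented for the example, only the cds_list_* calls correspond to the liburcu API.

    #include <urcu/list.h>          /* cds_list_*: liburcu's list API */

    struct peer {
            struct cds_list_head list;      /* was: struct list_head */
            int                  id;
    };

    static CDS_LIST_HEAD (peers);           /* was: static LIST_HEAD (peers) */

    static void
    add_peer (struct peer *p)
    {
            cds_list_add_tail (&p->list, &peers);   /* was: list_add_tail */
    }

    static struct peer *
    find_peer (int id)
    {
            struct peer *p = NULL;

            /* was: list_for_each_entry */
            cds_list_for_each_entry (p, &peers, list)
                    if (p->id == id)
                            return p;
            return NULL;
    }
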
* glusterd: quorum validation in glusterd syncop framework (Gaurav Kumar Garg, 2015-01-20, 1 file changed, -0/+11)
Previously glusterd did not perform quorum validation in the syncop framework. So when quorum was lost, a few operations based on the syncop framework (for example add-brick, remove-brick, volume set) still passed without any quorum validation check. With this change quorum is validated in the syncop framework, and all operations (except volume set <quorum options> and "volume reset all" commands) are blocked when quorum is lost. Change-Id: I4c2ef16728d55c98a228bb86795023d9c1f4e9fb BUG: 1177132 Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com> Reviewed-on: http://review.gluster.org/9349 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Added iov error checking in rpc callbacks. (Anand, 2015-01-20, 1 file changed, -2/+43)
Problem: glusterd was crashing with SIGABRT if the rpc connection failed, in debug builds. Reason: iov was being passed to assert() before the rpc status was checked in the rpc callback function (rpc invokes the callback with the rpc status set to -1 and iov set to NULL if the connection has failed). Fix: check the rpc status first, verify iov only after that, and add proper error messages. Change-Id: I35c05c438444d0454aadac4e45524565a7be68a8 BUG: 1181543 Signed-off-by: Anand <anekkunt@redhat.com> Reviewed-on: http://review.gluster.org/9449 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
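A condensed sketch of the callback ordering this fix describes, assuming the usual glusterfs headers and helpers (gf_log, GF_ASSERT, struct rpc_req) are available; the function name is illustrative.

    /* Sketch of a syncop RPC callback: check rpc_status before touching iov. */
    int
    gd_syncop_sample_cbk (struct rpc_req *req, struct iovec *iov,
                          int count, void *myframe)
    {
            int ret = -1;

            /* On a failed/broken connection the RPC layer invokes the callback
             * with rpc_status == -1 and iov == NULL, so asserting on iov first
             * would abort the process in debug builds. */
            if (-1 == req->rpc_status) {
                    gf_log (THIS->name, GF_LOG_ERROR,
                            "RPC failed, not decoding the response");
                    goto out;
            }

            GF_ASSERT (iov);        /* only now is iov guaranteed to be valid */
            /* ... decode iov and collate the result ... */
            ret = 0;
    out:
            return ret;
    }
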
* glusterd: use list_for_each_entry_safe for cleanup (Avra Sengupta, 2015-01-08, 1 file changed, -3/+4)
| | | | | | | | | | | | | | Use list_for_each_entry_safe() instead of list_for_each_entry() for cleanup of local xaction_peers list. Change-Id: I6d70c04dfb90cbbcd8d9fc4155b8e5e7d7612460 BUG: 1173414 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/9416 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
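The reason the _safe variant is needed for cleanup, in a small fragment; the list head and member names are illustrative, only the list macros correspond to the libglusterfs API.

    /* Cleanup of a local peers list: the _safe variant caches the next
     * pointer in 'tmp', so the current entry can be unlinked (or freed)
     * inside the loop body without breaking the iteration. */
    list_for_each_entry_safe (peerinfo, tmp, &xaction_peers, op_peers_list) {
            list_del_init (&peerinfo->op_peers_list);   /* would corrupt a
                                                           plain
                                                           list_for_each_entry */
    }
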
* glusterd: Refactor glusterd-utils.c (Avra Sengupta, 2015-01-08, 1 file changed, -0/+1)
| | | | | | | | | | | | | | | | Refactor glusterd-utils.c to create glusterd-snapshot-utils.c consisting of all snapshot utility functions. Change-Id: Id9823a2aec9b115f9c040c9940f288d4fe753d9b BUG: 1176770 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/9391 Reviewed-by: Kaushal M <kaushal@redhat.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Maintain per transaction xaction_peers list in syncop & mgmt_v3 (Atin Mukherjee, 2014-12-22, 1 file changed, -56/+114)
In the current implementation the xaction_peers list is maintained in a global variable (glusterd_priv_t) for syncop/mgmt_v3. This means consistency and atomicity of the peerinfo list across transactions is not guaranteed when multiple syncop/mgmt_v3 transactions are going through. We ran into this problem in mgmt_v3-locks.t, which was failing spuriously: two volume set operations (on two different volumes) were going through simultaneously and both transactions were manipulating the same xaction_peers structure, which led to a corrupted list. Because of this, in some cases the unlock request to a peer was never triggered and we ended up with stale locks. The solution is to maintain a per-transaction local xaction_peers list for every syncop. Please note I've identified this problem in the op-sm area as well and a separate patch will be attempted to fix it. Finally, thanks to Krishnan Parthasarathi and Kaushal M for their constant help in getting to the root cause. Change-Id: Ib1eaac9e5c8fc319f4e7f8d2ad965bc1357a7c63 BUG: 1173414 Signed-off-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-on: http://review.gluster.org/9269 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: print the peer name instead of a null UUID in an rpc failure message (Atin Mukherjee, 2014-10-09, 1 file changed, -55/+58)
| | | | | | | | | | | | | This patch improves the failure message by printing the correct peer name instead of a blank uuid in case of rpc connection is lost/broken. Change-Id: Ia232792051f23896883b239982cb48130e3ce60e BUG: 1146902 Signed-off-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-on: http://review.gluster.org/8597 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* gluster: Fix the recursive goto outs in the source code. (Avra Sengupta, 2014-07-21, 1 file changed, -1/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Added a script check_goto.pl, that when run from the source code root, will scan all .c files to match the following pattern: label: if (condition) goto label; On finding such a pattern the script will print the file name and the line number. There are certain cases where the above recursive pattern is intended. Hence adding those labels to ignore-labels. Thanks Vijaikumar Mallikarjuna for the perl script. Also fixed all such existing errors Change-Id: I1b821d0a8c296f16e40faff20bd029bdc880c2e9 BUG: 1119256 Signed-off-by: Vijaikumar Mallikarjuna <vmallika@redhat.com> Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/8307 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
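The anti-pattern the check_goto.pl script flags looks like this in C; a label whose body can jump back to the same label never makes progress. The fragment below is illustrative only.

    /* The pattern "label: if (condition) goto label;" */
            ret = do_something ();
    out:
            if (ret)
                    goto out;       /* jumps back to its own label: if ret is
                                       still non-zero this never terminates */
            return ret;
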
* glusterd: Improvements to peer identification (Kaushal M, 2014-07-15, 1 file changed, -4/+6)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch improves the peer identification mechanism in glusterd and lays down the framework for further improvements, including better multi network support in glusterd. This patch mainly does two things, 1. Extend the peerinfo object to store a list of addresses instead of a single hostname as it does now. This also includes changes to make the peer update behaviour of 'peer probe' to add to the list. 2. Improve glusterd_friend_find_by_hostname() to perform better matching of hostnames. glusterd_friend_find_by_hostname() now does and initial quick string compare against all the peer addresses known to glusterd, after which it tries a more thorough search using address resolution and matching the struc sockaddr's. The above two changes together improve the peer identification situation in glusterd a lot. More information regarding the problem this patch attempts to resolve and the approach chosen can be found at http://www.gluster.org/community/documentation/index.php/Features/Better_peer_identification This commit is a squashed commit of the following changes, the development branch of which can be viewed at, https://github.com/kshlm/glusterfs/tree/better-peer-identification or, https://forge.gluster.org/~kshlm/glusterfs-core/kshlms-glusterfs/commits/better-peer-identification commit 198f86e60fab74faf082eaa02657a4d8f60b92f0 Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 15 14:34:06 2014 +0530 Update gluster.8 commit 35d597f3a6b3248373e727f7b7e889c92554d56c Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 15 09:01:01 2014 +0530 Address review comments https://review.gluster.org/#/c/8238/3 commit 47b5331e17304477322bd2daed5bbed503c34ca1 Merge: c71b12c 78128af Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 15 08:41:39 2014 +0530 Merge branch 'master' into better-peer-identification commit c71b12c164330e8d19d1df4734ab34ef9a8caad2 Merge: 57bc9de 0f5719a Author: Kaushal M <kaushal@redhat.com> Date: Thu Jul 10 19:50:19 2014 +0530 Merge branch 'master' into better-peer-identification commit 57bc9de9e4f49ff2b1620df9906cda50a3527a25 Author: Kaushal M <kaushal@redhat.com> Date: Thu Jul 10 19:49:08 2014 +0530 More fixes to review comments commit 5482cc363a687a9e246a0780ec88acd53e218501 Author: Kaushal M <kaushal@redhat.com> Date: Thu Jul 10 18:36:40 2014 +0530 Code refactoring in peer-utils based on review comments https://review.gluster.org/#/c/8238/2/xlators/mgmt/glusterd/src/glusterd-peer-utils.c commit 89b22c34757178f64d5fbaffa31e6302f841c060 Author: Kaushal M <kaushal@redhat.com> Date: Thu Jul 10 12:30:00 2014 +0530 Hostnames in peer status commit 63ebf9485cf50d736cf640238a1ab241671fcaf1 Merge: c8c8fdd f5f9721 Author: Kaushal M <kaushal@redhat.com> Date: Thu Jul 10 12:06:33 2014 +0530 Merge remote-tracking branch 
'origin/master' into better-peer-identification commit c8c8fdd2104b5b6b8a1af739b1dd952b74e6dd66 Author: Kaushal M <kaushal@redhat.com> Date: Wed Jul 9 18:35:27 2014 +0530 Hostnames in xml output commit 732a92a0167ad7b1d70edbc35ebd8307c2766ae1 Author: Kaushal M <kaushal@redhat.com> Date: Wed Jul 9 15:12:10 2014 +0530 Add hostnames to cli rsp dict during list-friends commit fcf43e3e317508f0c225024738a988a4af8e9205 Merge: c0e2624 72d96e2 Author: Kaushal M <kaushal@redhat.com> Date: Wed Jul 9 12:53:03 2014 +0530 Merge branch 'master' into better-peer-identification commit c0e262416728a3c536a8347a216e471eb2251535 Author: Kaushal M <kaushal@redhat.com> Date: Mon Jul 7 16:11:19 2014 +0530 Use list_for_each_entry_safe when cleaning peer hostnames commit 6132e60224eb592f3657e535a12a3e72c772da42 Author: Kaushal M <kaushal@redhat.com> Date: Mon Jul 7 15:52:19 2014 +0530 Fix crash in gd_add_friend_to_dict commit 88ffa9a508fd5aac0b2a76e6e76487ce0cab786a Author: Kaushal M <kaushal@redhat.com> Date: Mon Jul 7 13:19:44 2014 +0530 gd_peerinfo_destroy -> glusterd_peerinfo_destroy commit 4b36930a715b1e13cd1a77d136ef1cf78a06d574 Author: Kaushal M <kaushal@redhat.com> Date: Mon Jul 7 12:50:12 2014 +0530 More refactoring commit ee559b081d608c6501c10ae22166f26eeb65690e Author: Kaushal M <kaushal@redhat.com> Date: Mon Jul 7 12:14:40 2014 +0530 Major refactoring of code based on review comments at https://review.gluster.org/#/c/8238/1/xlators/mgmt/glusterd/src/glusterd-peer-utils.h commit e96dbc7bbb05fad2a9c424de41a394b8023fe48d Merge: 2613d1d 83c09b7 Author: Kaushal M <kaushal@redhat.com> Date: Mon Jul 7 09:47:05 2014 +0530 Merge remote-tracking branch 'origin/master' into better-peer-identification commit 2613d1daebff0c56812de821c06ed4c16bb9d447 Merge: b242cf6 9a50211 Author: Kaushal M <kaushal@redhat.com> Date: Fri Jul 4 15:28:57 2014 +0530 Merge remote-tracking branch 'origin/master' into better-peer-identification commit b242cf66d95dd3dd5e3975aa430baa6bd74b8a29 Author: Kaushal M <kaushal@redhat.com> Date: Fri Jul 4 15:08:18 2014 +0530 Fix a silly mistake, if (ctx->req) => if (ctx->req == NULL) commit c835ed26433830ceed57289143f596cf60421558 Author: Kaushal M <kaushal@redhat.com> Date: Fri Jul 4 14:58:23 2014 +0530 Fix reverse probe. commit 9ede17f9329b854b02e8ad159f173244789fd08c Author: Kaushal M <kaushal@redhat.com> Date: Fri Jul 4 13:31:32 2014 +0530 Fix friend import for existing peers commit 891bf74c7350064dfb008d1b7294bcec28d680fd Author: Kaushal M <kaushal@redhat.com> Date: Fri Jul 4 13:08:36 2014 +0530 Set first hostname in peerinfo->hostnames to peerinfo->hostname commit 9421d6a217381a7427a7d84f369280883ca4297a Author: Kaushal M <kaushal@redhat.com> Date: Fri Jul 4 12:21:40 2014 +0530 Fix gf_asprintf return val check in glusterd_store_peer_write commit defac978c1d94011ce8195e311839b9ffce057e7 Author: Kaushal M <kaushal@redhat.com> Date: Fri Jul 4 11:16:13 2014 +0530 Fix store_retrieve_peers to correctly cleanup. commit 00a799f5de1121b0cb7421da8285f9407063e1bd Author: Kaushal M <kaushal@redhat.com> Date: Fri Jul 4 10:52:11 2014 +0530 Update address list in glusterd_probe_cbk only when needed. 
commit 7a628e8a9c562d85709c69cfa13fb1774c521b75 Merge: d191985 dc46d5e Author: Kaushal M <kaushal@redhat.com> Date: Fri Jul 4 09:24:12 2014 +0530 Merge remote-tracking branch 'origin/master' into better-peer-identification commit d1919858e6639d2b54d716a61f662d9752ec5ff1 Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 1 18:59:49 2014 +0530 gf_compare_addrinfo -> gf_compare_sockaddr commit 31d8ef730d408f8d9ba8f504fa648f7dcd59da87 Merge: 93bbede 86ee233 Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 1 18:16:13 2014 +0530 Merge remote-tracking branch 'origin/master' into better-peer-identification commit 93bbedeac5181e29f59b2acd08f638146812ec41 Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 1 18:15:16 2014 +0530 Improve glusterd_friend_find_by_hostname glusterd_friend_find_by_hostname will now do an initial quick search for the peerinfo performing string comparisions on the given host string. It follows it with a more thorough match, by resolving the addresses and comparing addrinfos instead of strings. commit 2542cdbc45aa9cfcaf1f174686158d5565cdd07b Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 1 17:21:10 2014 +0530 New utility gf_compare_addrinfo commit 338676e8389a44bd91136eebd110197429c2566c Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 1 14:55:56 2014 +0530 Use gd_peer_has_address instead of strcmp commit 28d45be51f594328741c44455bd80ac9d64ca501 Merge: 728266e 991dd5e Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 1 14:54:40 2014 +0530 Merge branch 'master' into better-peer-identification commit 728266eb16d5f5a4bf36266044425ae164337f99 Merge: 7d9b87b 2417de9 Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 1 09:55:13 2014 +0530 Merge remote-tracking branch 'origin/master' into better-peer-identification commit 7d9b87b84955ec17daeaf88a3e7462914039430f Merge: b890625 e02275c Author: Kaushal M <kshlmster@gmail.com> Date: Tue Jul 1 08:41:40 2014 +0530 Merge pull request #4 from vpshastry/better-peer-identification Better peer identification commit e02275c52fb83c72ad082c098fd3e432c2b9c526 Merge: 75ee90d b890625 Author: Varun Shastry <vshastry@redhat.com> Date: Mon Jun 30 16:44:29 2014 +0530 Merge branch 'better-peer-identification' of https://github.com/kshlm/glusterfs into better-peer-identification-kaushal-github commit 75ee90d2f272e49b94d24c9ca4571e89a83055ff Author: Varun Shastry <vshastry@redhat.com> Date: Mon Jun 30 15:36:10 2014 +0530 glusterd: add to the list if the probed uuid pre-exists Signed-off-by: Varun Shastry <vshastry@redhat.com> commit b890625d8164c660695daef3285c67979eef723e Merge: 04c5d60 187a7a9 Author: Kaushal M <kaushal@redhat.com> Date: Mon Jun 30 11:44:13 2014 +0530 Merge remote-tracking branch 'origin/master' into better-peer-identification commit 04c5d60cb938c8d94b214689580b40abb1b0ffcd Merge: 3a5bfa1 e01edb6 Author: Kaushal M <kshlmster@gmail.com> Date: Sat Jun 28 19:23:33 2014 +0530 Merge pull request #3 from vpshastry/better-peer-identification glusterd: search through the list of hostnames in the peerinfo commit 0c64f3346a977f9165ac55a84a1e03c40a7573a7 Merge: e01edb6 3a5bfa1 Author: Varun Shastry <vshastry@redhat.com> Date: Sat Jun 28 10:43:29 2014 +0530 Merge branch 'better-peer-identification' of https://github.com/kshlm/glusterfs into better-peer-identification-kaushal-github commit e01edb63153a1008db70b8fa76ae5b535e099326 Author: Varun Shastry <vshastry@redhat.com> Date: Fri Jun 27 12:29:36 2014 +0530 glusterd: search through the list of hostnames in the peerinfo Signed-off-by: Varun Shastry <vshastry@redhat.com> 
commit 3a5bfa15855e660db2bfde644727371dd2d618cc Merge: cda6d31 371ea35 Author: Kaushal M <kshlmster@gmail.com> Date: Fri Jun 27 11:31:17 2014 +0530 Merge pull request #1 from vpshastry/better-peer-identification glusterd: Add hostname to list instead of replaceing upon update commit 371ea354f198b4182382d5403c5960c0b2add6b6 Author: Varun Shastry <vshastry@redhat.com> Date: Fri Jun 27 11:24:54 2014 +0530 glusterd: Add hostname to list instead of replaceing upon update Signed-off-by: Varun Shastry <vshastry@redhat.com> commit cda6d3152886623ecbf46baf0048ebe0119b30b6 Author: Kaushal M <kaushal@redhat.com> Date: Thu Jun 26 19:52:52 2014 +0530 Import address lists commit 6649b54aa0440130c08e827e0a1d1bbfb840eca9 Author: Kaushal M <kaushal@redhat.com> Date: Thu Jun 26 19:15:37 2014 +0530 Implement export address list commit 55990034eead92bc9b936240029e460a4bf152d5 Author: Kaushal M <kaushal@redhat.com> Date: Thu Jun 26 18:11:59 2014 +0530 Use first address in list to when setting up the peer RPC. commit a35fde8d19b9988eb04c652fb3a5e4f84d90ad00 Author: Kaushal M <kaushal@redhat.com> Date: Thu Jun 26 18:03:04 2014 +0530 Properly free addresses on glusterd_peer_destroy commit 1988081db09ac9205f3dc7268cef8be267f3ce8b Author: Kaushal M <kaushal@redhat.com> Date: Thu Jun 26 17:52:35 2014 +0530 Restore peerinfo with address list implemented. commit 66f524d5749a12f4910dd6b06c9d91f37e1d831e Author: Kaushal M <kaushal@redhat.com> Date: Mon Jun 23 13:02:23 2014 +0530 Move out all peer related utilities from glusterd-utils to glusterd-peer-utils commit 14a2a326a4dff11b55490dca2a14f39320931340 Author: Kaushal M <kaushal@redhat.com> Date: Tue May 27 12:16:41 2014 +0530 Compilation fix commit c59cd351d0a102d0d5f3ea9001fd33c4edcb262f Author: Kaushal M <kaushal@redhat.com> Date: Mon May 5 12:51:11 2014 +0530 Add store support for hostname list commit b70325f0beb884ad12645ef40185f0bf6cedd741 Author: Kaushal M <kaushal@redhat.com> Date: Fri May 2 15:58:07 2014 +0530 Add a hostnames list to glusterd_peerinfo_t glusterd_peerinfo_new will now init this list and add the given hostname as the lists first member. Signed-off-by: Kaushal M <kaushal@redhat.com> Signed-off-by: Varun Shastry <vshastry@redhat.com> Change-Id: Ief3c5d6d6f16571ee2fab0a45e638b9d6506a06e BUG: 1119547 Reviewed-on: http://review.gluster.org/8238 Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
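A self-contained approximation of the "thorough" match described in the commit above (resolve both hostnames and compare the resulting socket addresses); this is a simplification of what gf_compare_sockaddr is used for, restricted to IPv4 for brevity.

    #include <stdbool.h>
    #include <netdb.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    /* Resolve two hostnames and report whether any of their addresses match.
     * Simplified sketch: IPv4 only, no detailed error reporting. */
    static bool
    hosts_resolve_to_same_addr (const char *host1, const char *host2)
    {
            struct addrinfo *res1 = NULL, *res2 = NULL, *p = NULL, *q = NULL;
            bool             match = false;

            if (getaddrinfo (host1, NULL, NULL, &res1) != 0)
                    return false;
            if (getaddrinfo (host2, NULL, NULL, &res2) != 0) {
                    freeaddrinfo (res1);
                    return false;
            }

            for (p = res1; p && !match; p = p->ai_next) {
                    for (q = res2; q && !match; q = q->ai_next) {
                            if (p->ai_family != q->ai_family ||
                                p->ai_family != AF_INET)
                                    continue;
                            struct sockaddr_in *a = (struct sockaddr_in *)p->ai_addr;
                            struct sockaddr_in *b = (struct sockaddr_in *)q->ai_addr;
                            if (a->sin_addr.s_addr == b->sin_addr.s_addr)
                                    match = true;
                    }
            }

            freeaddrinfo (res1);
            freeaddrinfo (res2);
            return match;
    }
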
* glusterd/snapshot: Fix for snap create preval for remote peer err msg (Joseph Fernandes, 2014-07-02, 1 file changed, -56/+1)
| | | | | | | | | | | | | Fix for the snap create prevalidation error collation when remote peer failed. Change-Id: If9563580eae4d9bc4d4d795f0b434f2c85b94007 BUG: 1101993 Signed-off-by: Joseph Fernandes <josferna@redhat.com> Reviewed-on: http://review.gluster.org/7899 Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* Core: Fix issues reported by Cppcheck (Lalatendu Mohanty, 2014-06-12, 1 file changed, -1/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Fixed in this patch: [glusterfs/extras/geo-rep/gsync-sync-gfid.c:105]: (error) Resource leak: fp [glusterfs/libglusterfs/src/xlator.c:651]: (error) Uninitialized variable: gfid [glusterfs/libglusterfs/src/xlator.c:652]: (error) Uninitialized variable: gfid [glusterfs/xlators/cluster/ha/src/ha.c:2699]: (error) Possible null pointer dereference: priv [glusterfs/xlators/features/changelog/src/changelog.c:1464]: (error) Possible null pointer dereference: priv [glusterfs/xlators/mgmt/glusterd/src/glusterd-mgmt-handler.c:865]: (error) Possible null pointer dereference: ctx [glusterfs/xlators/mgmt/glusterd/src/glusterd-mgmt-handler.c:194]: (error) Possible null pointer dereference: ctx [glusterfs/xlators/mgmt/glusterd/src/glusterd-syncop.c:1408]: (error) Possible null pointer dereference: this [glusterfs/xlators/mgmt/glusterd/src/glusterd-utils.c:7002]: (error) Possible null pointer dereference: path_tokens Fixed in 3.4 and 3.5 branch (http://review.gluster.org/#/c/7583/ , http://review.gluster.org/#/c/7605/ will be backported in a separate patch) [glusterfs/xlators/mount/fuse/src/fuse-bridge.c:4688]: (error) Uninitialized variable: finh [glusterfs/xlators/mount/fuse/src/fuse-bridge.c:3081]: (error) Possible null pointer dereference: state [glusterfs/xlators/cluster/dht/src/dht-rebalance.c:1719]: (error) Possible null pointer dereference: ctx [glusterfs/xlators/cluster/stripe/src/stripe.c:4940]: (error) Possible null pointer dereference: local [glusterfs/xlators/mgmt/glusterd/src/glusterd-replace-brick.c:915]: (error) Resource leak: file [glusterfs/xlators/mgmt/glusterd/src/glusterd-replace-brick.c:999]: (error) Resource leak: file [glusterfs/xlators/mgmt/glusterd/src/glusterd-sm.c:248]: (error) Possible null pointer dereference: new_ev_ctx [glusterfs/xlators/mgmt/glusterd/src/glusterd-utils.c:5297]: (error) Possible null pointer dereference: this [glusterfs/xlators/mgmt/glusterd/src/glusterd-utils.c:6273]: (error) Possible null pointer dereference: this [glusterfs/xlators/performance/quick-read/src/quick-read.c:586]: (error) Possible null pointer dereference: iobuf [glusterfs/xlators/nfs/server/src/nfs-common.c:89]: (error) Dangerous usage of 'volname' (strncpy doesn't always null-terminate it). False positives [glusterfs/geo-replication/src/gsyncd.c:99]: (error) Memory leak: str [glusterfs/geo-replication/src/gsyncd.c:395]: (error) Memory leak: argv [glusterfs/xlators/nfs/server/src/nlm4.c:1199]: (error) Possible null pointer dereference: fde [glusterfs/xlators/mgmt/glusterd/src/glusterd-geo-rep.c:1659]: (error) Possible null pointer dereference: command [glusterfs/xlators/mgmt/glusterd/src/glusterd-utils.c:7001]: (error) Possible null pointer dereference: path_tokens Insignificant/Don't care [glusterfs/contrib/uuid/gen_uuid.c:369]: (warning) %ld in format string (no. 2) requires 'long *' but the argument type is 'unsigned long *'. [glusterfs/contrib/uuid/gen_uuid.c:369]: (warning) %ld in format string (no. 3) requires 'long *' but the argument type is 'unsigned long *'. [glusterfs/extras/test/test-ffop.c:27]: (error) Buffer overrun possible for long command line arguments. 
[glusterfs/xlators/cluster/afr/src/afr-self-heal-common.c:138]: (error) Possible null pointer dereference: __ptr [glusterfs/xlators/cluster/afr/src/afr-self-heal-common.c:140]: (error) Possible null pointer dereference: __ptr [glusterfs/xlators/cluster/afr/src/afr-self-heal-common.c:331]: (error) Possible null pointer dereference: __ptr Change-Id: I7696ed1a2a9553b79f9714e10210a8d563a5abd8 BUG: 1091677 Signed-off-by: Lalatendu Mohanty <lmohanty@redhat.com> Reviewed-on: http://review.gluster.org/7693 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Better op-version values and ranges (Kaushal M, 2014-06-09, 1 file changed, -5/+5)
| | | | | | | | | | | | | | | | | | | | | | | | | Till now, the op-version was an incrementing integer that was incremented by 1 for every Y release (when using the X.Y.Z release numbering). This is not flexible enough to handle backports of features into Z releases. Going forward, from the upcoming 3.6.0 release, the op-versions will be multi-digit integer values composed of the version numbers, instead of a simple incrementing integer. An X.Y.Z release will have XYZ as its op-version. Y and Z will always be 2 digits wide and will be padded with 0 if required. This way of bumping op-versions allows for gaps in between the subsequent Y releases. These gaps will allow backporting features from new Y releases into old Z releases. Change-Id: I463f82902d997ec07e76dae58ac935f33e6393c2 BUG: 1104997 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/7963 Reviewed-by: Niels de Vos <ndevos@redhat.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
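The numbering scheme described above works out to a simple encoding; the macro below is illustrative, not the one defined in the glusterfs headers.

    /* Illustrative encoding: 3.6.0 -> 30600, 3.7.12 -> 30712.
     * The gaps between consecutive Y releases (30600 .. 30699) leave room
     * for features backported into later Z releases. */
    #define OP_VERSION(X, Y, Z)  ((X) * 10000 + (Y) * 100 + (Z))
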
* mgmt/glusterd: Perform Pending quorum actions after Op (Pranith Kumar K, 2014-05-08, 1 file changed, -0/+7)
| | | | | | | | | Change-Id: I2bb67b5fb4a6f6dac892ef3206e7a79706018a6e BUG: 959986 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/4955 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Fetch brick mount_dirs during brick create. (Avra Sengupta, 2014-05-06, 1 file changed, -6/+32)
| | | | | | | | | | | | | | | | | Fetch the mount directory path for a brick, during volume create, add-brick, and replace-brick. When a snap-create is missed, use this mount directory information to create the brick path for the missed snap brick. Change-Id: Iad3eec96a32cf340f26bdf3f28e2f529e4b77e31 BUG: 1061685 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/7550 Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Add a barrier brick-op (Kaushal M, 2014-04-29, 1 file changed, -1/+2)
| | | | | | | | | | | | | | | | This patch introduces a new 'barrier' brick-op which will be used to activate/deactivate the barriering on the bricks. This includes barriering in the barrier xlator and in the changelog xlator. All the required code has been including a bricks select function, a payload builder and a brick-op handler. Change-Id: I91d9d77f691c2e89823f7dc4e84900ec40dc4dd2 BUG: 1060002 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/6943 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* gluster: GlusterFS Volume Snapshot Feature (Avra Sengupta, 2014-04-11, 1 file changed, -51/+60)
| | | | | | | | | | | | | | | | | | | | | | | | | This is the initial patch for the Snapshot feature. Current patch includes following features: * Snapshot create * Snapshot delete * Snapshot restore * Snapshot list * Snapshot info * Snapshot status * Snapshot config Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9 BUG: 1061685 Signed-off-by: shishir gowda <sgowda@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Signed-off-by: Vijaikumar M <vmallika@redhat.com> Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com> Signed-off-by: Rajesh Joseph <rjoseph@redhat.com> Signed-off-by: Joseph Fernandes <josferna@redhat.com> Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/7128 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/Vol-Locks: Moving globals into glusterd priv and code refactoring (Avra Sengupta, 2014-02-14, 1 file changed, -31/+54)
| | | | | | | | | | | | | | | | | | | | | | Moved globals(vol_lock and txn_opinfo dicts and global_txn_id) into glusterd priv Moved glusterd_op_send_cli_response() out of gd_unlock_op_phase as gd_unlock_op_phase and glusterd_clear_txn_opinfo should only be called if the txn id has been successfully generated. The cli resp should be sent irrespective of that. Changed log levels from ERROR to WARNING for some volume lock logs where the logs are expected and is not an error Added logs for better transparency of transaction ids. Change-Id: Ifac9b23aa9f1648c9ae252cfd3ac50bb2ed46728 BUG: 1011470 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/6976 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Volume locks and transaction specific opinfos (Avra Sengupta, 2014-02-10, 1 file changed, -54/+423)
With this patch we are replacing the existing cluster-wide lock taken on glusterds across the cluster with volume locks, which are also taken on glusterds across the cluster but are volume specific. With volume locks we are able to perform more than one gluster operation at the same time, as long as the operations are being performed on different volumes. We maintain a global list of volume-locks (using a dict for a list) where the key is the volume name and the value is the uuid of the originator glusterd. These locks are held and released per volume transaction. In order to achieve multiple gluster operations occurring at the same time, we also separate opinfos in the op-state-machine as part of this patch. To do so, we generate a unique transaction-id (uuid) per gluster transaction. An opinfo is then associated with this transaction id, which is used throughout the transaction. We maintain a run-time global list (using a dict) of transaction-ids and their respective opinfos to achieve this. Upstream Feature Page: http://www.gluster.org/community/documentation/index.php/Features/glusterd-volume-locks Change-Id: Iaad505a854bac8de8f83beec0357eb6cde3f7ea8 BUG: 1011470 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/5994 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Reset opinfo.op ONLY if lock succeeded (Krutika Dhananjay, 2014-02-04, 1 file changed, -2/+4)
| | | | | | | | | | Change-Id: I0244a7f61a826b32f4c2dfe51e246f2593a38211 BUG: 1060434 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/6885 Reviewed-by: Kaushal M <kaushal@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd: Aggregate tasks status in 'volume status [tasks]' (Kaushal M, 2013-12-04, 1 file changed, -6/+1)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Previously, glusterd used to just send back the local status of a task in a 'volume status [tasks]' command. As the rebalance operation is distributed and asynchronus, this meant that different peers could give different status values for a rebalance or remove-brick task. With this patch, all the peers will send back the tasks status as a part of the 'volume status' commit op, and the origin peer will aggregate these to arrive at a final status for the task. The aggregation is only done for rebalance or remove-brick tasks. The replace-brick task will have the same status on all the peers (see comment in glusterd_volume_status_aggregate_tasks_status() for more information) and need not be aggregated. The rebalance process has 5 states, NOT_STARTED - rebalance process has not been started on this node STARTED - rebalance process has been started and is still running STOPPED - rebalance process was stopped by a 'rebalance/remove-brick stop' command COMPLETED - rebalance process completed successfully FAILED - rebalance process failed to complete successfully The aggregation is done using the following precedence, STARTED > FAILED > STOPPED > COMPLETED > NOT_STARTED The new changes make the 'volume status tasks' command a distributed command as we need to get the task status from all peers. The following tests were performed, - Start a remove-brick task and do a status command on a peer which doesn't have the brick being removed. The remove-brick status was given correctly as 'in progress' and 'completed', instead of 'not started' - Start a rebalance task, run the status command. The status moved to 'completed' only after rebalance completed on all nodes. Also, change the CLI xml output code for rebalance status to use the same algorithm for status aggregation. Change-Id: Ifd4aff705aa51609a612d5a9194acc73e10a82c0 BUG: 1027094 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/6230 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
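Because the precedence above is a strict ordering, the aggregation reduces to taking a maximum over the per-peer states. The enum values and helper below are invented for illustration; the real logic lives in glusterd_volume_status_aggregate_tasks_status().

    /* Rebalance/remove-brick task states, ordered so that the aggregation
     * precedence STARTED > FAILED > STOPPED > COMPLETED > NOT_STARTED
     * corresponds to increasing enum values. */
    enum task_status {
            TASK_NOT_STARTED = 0,
            TASK_COMPLETED,
            TASK_STOPPED,
            TASK_FAILED,
            TASK_STARTED,
    };

    /* Higher enum value wins, so the cluster-wide status is simply the
     * maximum of the statuses reported by the individual peers. */
    static enum task_status
    aggregate_task_status (const enum task_status *peer_status, int npeers)
    {
            enum task_status result = TASK_NOT_STARTED;
            int              i;

            for (i = 0; i < npeers; i++)
                    if (peer_status[i] > result)
                            result = peer_status[i];

            return result;
    }
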
* cli/glusterd: Changes to quota command Quota feature (Raghavendra G, 2013-11-26, 1 file changed, -22/+61)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | re-work. Following are the cli commands that are new/re-worked: ====================================================== volume quota <VOLNAME> {enable|disable|list [<path> ...]|remove <path>| default-soft-limit <percent>} | volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} | volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>} volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]] [detail|clients|mem|inode|fd|callpool] volume statedump <VOLNAME> [nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history] glusterd changes: ================= * Quota limits are now set as extended attributes by glusterd from the aux mount created by the cli. * The gfids of the directories on which quota limits are set for a given volume are stored in /var/lib/glusterd/vols/<volname>/quota.conf file in binary format, and whose cksum and version is stored in /var/lib/glusterd/vols/<volname>/quota.cksum. Original-author: Krutika Dhananjay <kdhananj@redhat.com> Original-author: Krishnan Parthasarathi <kparthas@redhat.com> BUG: 969461 Change-Id: If32bba36c67f9c2a30417af9c6389045b2b7c13b Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Signed-off-by: Raghavendra G <rgowdapp@redhat.com> Reviewed-on: http://review.gluster.org/6003 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* cli,glusterd: Implement 'volume status tasks' (Krutika Dhananjay, 2013-10-08, 1 file changed, -2/+9)
| | | | | | | | | | | | | | | | | | | oVirt's Gluster Integration needs an inexpensive command that can be executed every 10 seconds to monitor async tasks and their parameters, for all volumes. The solution involves adding a 'tasks' sub-command to 'volume status' to fetch only the async task IDs, type and other relevant parameters. Only the originator glusterd participates in this command as all the information needed is available on all the nodes. This is to make the command suitable for being executed every 10 seconds. Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1 BUG: 1012346 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/6006 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Adding transaction checks for cluster unlock. (Avra Sengupta, 2013-09-20, 1 file changed, -3/+12)
While a gluster command holding the lock is in execution, any other gluster command which tries to run will fail to acquire the lock. As a result command #2 will follow the cleanup code path, which also includes unlocking the held locks. As both commands are run from the same node, command #2 will end up releasing the locks held by command #1 even before command #1 reaches completion. Now we call the unlock routine in that code path only if the cluster has been locked during the same transaction. Signed-off-by: Avra Sengupta <asengupt@redhat.com> Change-Id: I7b7aa4d4c7e565e982b75b8ed1e550fca528c834 BUG: 1008172 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/5937 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd/cli changes for distributed geo-rep (Avra Sengupta, 2013-07-26, 1 file changed, -27/+138)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Commands: gluster system:: execute gsec_create gluster volume geo-rep <master> <slave-url> create [push-pem] [force] gluster volume geo-rep <master> <slave-url> start [force] gluster volume geo-rep <master> <slave-url> stop [force] gluster volume geo-rep <master> <slave-url> delete gluster volume geo-rep <master> <slave-url> config gluster volume geo-rep <master> <slave-url> status The geo-replication is distributed. The session will be created, and gsyncd will be spawned on all relevant nodes, instead of only one node. geo-rep: Collecting status detail related data Added persistent store for saving information about TotalFilesSynced, TotalSyncTime, TotalBytesSynced Changes in the status information in socket: Existing(Ex): FilesSynced=2;BytesSynced=2507;Uptime=00:26:01; New(Ex): FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978; TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640; Persistent details stored in /var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111 BUG: 847839 Original Author: Avra Sengupta <asengupt@redhat.com> Original Author: Venky Shankar <vshankar@redhat.com> Original Author: Aravinda VK <avishwan@redhat.com> Original Author: Amar Tumballi <amarts@redhat.com> Original Author: Csaba Henk <csaba@redhat.com> Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/5132 Reviewed-by: Vijay Bellur <vbellur@redhat.com> Tested-by: Vijay Bellur <vbellur@redhat.com>
* glusterd-syncop: Fix unlocking and collating errors (Kaushal M, 2013-06-04, 1 file changed, -27/+62)
| | | | | | | | | | | | | * Only those peers which were locked need to be unlocked. * Fix location of collating errors in callbacks. The callback functions could miss collating errors if there was an rpc error. Change-Id: Ie27c2f1ec197da4f5077a4d6e032127954ce87cd BUG: 948686 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/5087 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd: Set op_errstr to error string received from peer (Krutika Dhananjay, 2013-05-16, 1 file changed, -0/+4)
| | | | | | | | | | | | ... in case of volume op failure on remote host Change-Id: I7177dc02369dffa82f217496559532d18b7c7c7a BUG: 963628 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/5018 Reviewed-by: Amar Tumballi <amarts@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Perform NULL check on rsp.op_errstr before using it (Krutika Dhananjay, 2013-05-13, 1 file changed, -1/+1)
| | | | | | | | | | Change-Id: Id18b215a91cf016964ea98d2f414293b82167d24 BUG: 962362 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/4992 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd: Fix uuid to hostname conversion for 'volume status' (Kaushal M, 2013-05-12, 1 file changed, -3/+4)
| | | | | | | | | | Change-Id: I46c41c29c2d11652f6d8ccd5637be0ac9774fc1d BUG: 927648 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/4873 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Give up big lock before performing any RPC (Krishnan Parthasarathi, 2013-05-07, 1 file changed, -1/+13)
| | | | | | | | | Change-Id: Ib0a772dc1cb9afc8adccd8f7092f480d2b525ea0 BUG: 960580 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/4964 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* Revert "glusterd: Fix spurious wakeups in glusterd syncops"Krishnan Parthasarathi2013-05-041-27/+8
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | This reverts commit efa154bb0a4cac34d5a9610ec25d38eebe495f22. -- Following is Avati's analysis (edited) from gerrit -- The claim of the patch (being reverted) is that it in some cases cbkfn is missed. This is wrong analysis. cbk_fn is _always_ called. The patch treats ret > 0 as a "missed cbk". ret > 0 only means socket submission was not complete, and is queued to submit asynchronously when POLLOUT is raised. This is sufficient to guarantee that cbkfn is going to be called (either the socket errors or submission succeeds and reply eventually arrives). This commit also removes spurious barrier_wake(s). call backs are guaranteed to be called even if the transport is disconnected. This means, a 'wake' would be called if rpc_clnt_submit is called. Also, we count both successful and failed operations in a particular batch of operations for the synctask_barrier_wait. So, calling synctask_barrier_wake on failure of rpc_clnt_submit (say, due to network failure) would result in a spurious wake. Change-Id: I7d508c2a54b74a65b82f097742206bc777afc53a BUG: 948686 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/4922 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* synctask: implement barriers around yield, not the other way (Anand Avati, 2013-05-04, 1 file changed, -0/+2)
| | | | | | | | | | | | | | | | | | | | In the current implementation, barriers are in the core of the syncprocessors. Wake()s are treated as syncbarrier wake. This is however delicate, as spurious wake()s of the synctask can mess up the accounting of the barrier and waking it prematurely. The fix is to keep yield() and wake() as the basic primitives, and implement barriers as an object impelemented on top of these primitives. This way, only an explicit barrier_wake() gets counted towards the barrier accounting, and spurious wakes will be truly safe. Change-Id: I8087f0f446113e5b2d0853431c0354335ccda076 BUG: 948686 Signed-off-by: Anand Avati <avati@redhat.com> Reviewed-on: http://review.gluster.org/4921 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
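The shape of the change can be sketched as a barrier object that only counts explicit barrier wakes, layered on top of generic yield/wake primitives. The primitives below are stand-ins for the synctask ones and locking is omitted, so this is a conceptual sketch rather than the libglusterfs implementation.

    /* Conceptual barrier built on explicit wakes only, so that unrelated
     * (spurious) wakeups of the waiting task cannot advance the count. */
    struct barrier {
            int waitfor;    /* number of barrier_wake() calls to wait for */
            int count;      /* barrier_wake() calls seen so far */
    };

    /* Stand-ins for the synctask primitives: suspend and resume a task. */
    void task_yield (void);
    void task_wake (void);

    static void
    barrier_wake (struct barrier *b)
    {
            b->count++;             /* only explicit barrier wakes are counted */
            task_wake ();
    }

    static void
    barrier_wait (struct barrier *b, int waitfor)
    {
            b->waitfor = waitfor;
            while (b->count < b->waitfor)
                    task_yield ();  /* a spurious wake just re-checks the count */
    }
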
* glusterd: Syncop callbacks should take big lock too (Krishnan Parthasarathi, 2013-05-02, 1 file changed, -13/+48)
| | | | | | | | | | Change-Id: I5ae71ab98f9a336dc9bbf0e7b2ec50a6ed42b0f5 BUG: 948686 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/4938 Reviewed-by: Amar Tumballi <amarts@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: volume-sync needs to work with rejected peers (Krishnan Parthasarathi, 2013-04-22, 1 file changed, -3/+5)
| | | | | | | | | Change-Id: I970a51d3f62bcf414eb9552a68d1068430b93216 BUG: 950048 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/4815 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: big lock - a coarse-grained locking to prevent races (Krishnan Parthasarathi, 2013-04-12, 1 file changed, -4/+20)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | There are primarily three lists that are part of glusterd process, that are concurrently accessed. Namely, priv->volumes, priv->peers and volinfo->bricks_list. Big-lock approach ----------------- WHAT IS IT? Big lock is a coarse-grained lock which protects all three lists, mentioned above, from racy access. HOW DOES IT WORK? At any given point in time, glusterd's thread(s) are in execution _iff_ there is a preceding, inbound network event. Of course, the sigwaiter thread and timer thread are exceptions. A network event is an external trigger to glusterd, via the epoll thread, in the form of POLLIN and POLLERR. As long as we take the big-lock at all such entry points and yield it when we are done, we are guaranteed that all the network events, accessing the global lists, are serialised. This amounts to holding the big lock at - all the handlers of all the actors in glusterd. (POLLIN) - all the cbks in glusterd. (POLLIN) - rpc_notify (DISCONNECT event), if we access/modify one of the three lists. (POLLERR) In the case of synctask'ized volume operations, we must remember that, if we held the big lock for the entire duration of the handler, we may block other non-synctask rpc actors from executing. For eg, volume-start would block in PMAP SIGNIN, if done incorrectly. To prevent this, we need to yield the big lock, when we yield the synctask, and reacquire on waking up of the synctask. Change-Id: Ib929f9905b55fb6c3fc27fefb497a26dba058e4f BUG: 948686 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/4784 Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
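The "yield the big lock around blocking calls" rule from the last paragraph above, sketched with libglusterfs-style synclock calls and assuming the glusterd headers; the handler and the blocking RPC helper are invented for illustration and error handling is omitted.

    /* Illustrative synctask handler body: hold the big lock while touching the
     * shared lists, but drop it across the blocking RPC so other actors
     * (e.g. PMAP SIGNIN) are not blocked in the meantime. */
    int send_rpc_and_wait (void *req);   /* hypothetical blocking RPC helper */

    static int
    sample_op_handler (glusterd_conf_t *conf, void *req)
    {
            int ret = -1;

            synclock_lock (&conf->big_lock);    /* protect peers/volumes lists */
            /* ... validate the request against priv->volumes / priv->peers ... */

            synclock_unlock (&conf->big_lock);  /* give up the lock before RPC */
            ret = send_rpc_and_wait (req);      /* synctask yields here */
            synclock_lock (&conf->big_lock);    /* reacquire after waking up */

            /* ... commit the result to the shared state ... */
            synclock_unlock (&conf->big_lock);
            return ret;
    }
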
* glusterd: Fixed spurious wakeups in glusterd syncops (Krishnan Parthasarathi, 2013-04-12, 1 file changed, -12/+21)
| | | | | | | | | | | | | glusterd syncops perform a barrier_wake whenever rpc_clnt_submit returned -1. This is based on the wrong assumption that the cbkfn wasn't called. This would result in one more wakeup than there ought to be. Change-Id: I591e67c267f0e26d1145bf8fb5feeb2c13a751a1 BUG: 948686 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/4802 Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd: Fixed volume-sync in synctask codepath. (Krishnan Parthasarathi, 2013-03-10, 1 file changed, -9/+13)
| | | | | | | | | | Change-Id: I2911d3ac80825310f84c5ba6bd7890e65e1ee219 BUG: 865700 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/4624 Reviewed-by: Amar Tumballi <amarts@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Increasing throughput of synctask based mgmt ops. (Krishnan Parthasarathi, 2013-02-26, 1 file changed, -325/+374)
| | | | | | | | | | Change-Id: Ibd963f78707b157fc4c9729aa87206cfd5ecfe81 BUG: 913662 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/4570 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Amar Tumballi <amarts@redhat.com> Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* glusterd: Made gd_synctask_begin less 'monolithic' in terms of LOC. (Krishnan Parthasarathi, 2013-02-19, 1 file changed, -147/+235)
| | | | | | | | | Change-Id: I2dcaea56c2ca2c2c42c046ab7d2a39d586307868 BUG: 852147 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/4507 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Made volume-heal use synctask framework. (Avra Sengupta, 2013-02-16, 1 file changed, -7/+8)
| | | | | | | | | | Change-Id: Ic6659335f18a3befcf9b8b3ca067883a2c889d03 BUG: 852147 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/4493 Reviewed-by: Kaushal M <kaushal@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Made volume-quota use synctask framework. (Avra Sengupta, 2013-02-16, 1 file changed, -1/+2)
| | | | | | | | | Change-Id: I4c275253144ed3ac11a701a56dd1116c002471ba BUG: 852147 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/4495 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Made volume clear-locks use synctask framework. (Avra Sengupta, 2013-02-08, 1 file changed, -1/+9)
| | | | | | | | | Change-Id: Ia1fe3d0500d999c1f95b43c9e53947834e39d680 BUG: 852147 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/4490 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: do dict unref after sending reply to cli (Krutika Dhananjay, 2013-02-03, 1 file changed, -3/+1)
| | | | | | | | | | | | | | | | | This patch channelizes dict unrefs of dictionaries created from the cli req during volume ops to one common function - glusterd_to_cli() - which is guaranteed to be called irrespective of whether the command succeeds or fails. This patch also removes extra unrefs at a few places. Change-Id: Ic8ba7166387b5dfd1f5ae860539e1b7093a94662 BUG: 861044 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/4003 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Amar Tumballi <amarts@redhat.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Made volume-status use synctask framework (Krishnan Parthasarathi, 2013-02-03, 1 file changed, -0/+2)
| | | | | | | | | Change-Id: Id4062799104e5831467ced65a43bfe377b6163f4 BUG: 852147 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/4297 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Added syncop version of BRICK_OP (Krishnan Parthasarathi, 2013-02-03, 1 file changed, -28/+242)
| | | | | | | | | | | | - Made rsp dict available to all glusterd's STAGE/BRICK/COMMIT OP. Change-Id: I5d825d0670d0f1aa8a0603f2307b3600ff6ccfe4 BUG: 852147 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/4296 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Amar Tumballi <amarts@redhat.com> Reviewed-by: Anand Avati <avati@redhat.com>