path: root/xlators/mgmt/glusterd/src/glusterd-utils.c
Commit message | Author | Age | Files | Lines
...
* quota/cli: improve cli error message when setting limit on invalid path (vmallika, 2015-03-30; 1 file, -1/+1)

    Change-Id: I5976777adf770d42aa33ebbe3833fb14c1ff658e
    BUG: 1206535
    Signed-off-by: vmallika <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/10026
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: Use txn_opinfo instead of global op_info in glusterd_volume_heal_use_rsp_dict (Atin Mukherjee, 2015-03-29; 1 file, -5/+20)

    Due to http://review.gluster.org/#/c/9908/ the global opinfo is no
    longer a valid placeholder for keeping transaction information for a
    syncop task. glusterd_volume_heal_use_rsp_dict() was still referring
    to the global op_info, due to which the function always asserted on
    the op code.

    Change-Id: I1d416fe4edb40962fe7a0f6ecf541602debac56e
    BUG: 1206655
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/10034
    Reviewed-by: Emmanuel Dreyfus <manu@netbsd.org>
    Tested-by: Emmanuel Dreyfus <manu@netbsd.org>
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* glusterd: clean up global xaction_peer occurrences (Atin Mukherjee, 2015-03-26; 1 file, -22/+0)

    With http://review.gluster.org/#/c/9972/ there is no longer any need
    to maintain xaction_peers in glusterd_conf_t. This patch cleans up
    the code for all occurrences of xaction_peers.

    Change-Id: I4fbf2df0fa9b8a8751029be36be7f76f6464cc76
    BUG: 1204727
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/9980
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* glusterd: Maintain local xaction_peer list for op-sm (Atin Mukherjee, 2015-03-26; 1 file, -0/+19)

    http://review.gluster.org/9269 addresses maintaining local
    xaction_peers in the syncop and mgmt_v3 frameworks. This patch
    maintains a local xaction_peers list for the op-sm framework as well.

    Change-Id: Idd8484463fed196b3b18c2df7f550a3302c6e138
    BUG: 1204727
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/9972
    Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* glusterd: Prevent possible deadlock due to glusterd_quorum_count (Kaushal M, 2015-03-25; 1 file, -5/+6)

    Also rename the macro glusterd_quorum_count to GLUSTERD_QUORUM_COUNT
    so that it stands out as a macro.

    This change was developed on the git branch at [1]. This commit is a
    combination of the following commits on the development branch.

      0fbd7ba Prevent possible deadlock due to glusterd_quorum_count
      5da3062 Rename glusterd_quorum_count to GLUSTERD_QUORUM_COUNT
      b3aa3c4 Enclose GLUSTERD_QUORUM_COUNT definition in a do-while block

    [1]: https://github.com/kshlm/glusterfs/tree/urcu

    Change-Id: Ic4b3949f303d72ce53e0139f62b83b8d13fb4e47
    BUG: 1205186
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/9978
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

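    A self-contained sketch of why a do-while block matters for a
    multi-statement macro (names here are illustrative, not glusterd's
    actual GLUSTERD_QUORUM_COUNT):

        #include <stdio.h>

        /* The do-while(0) wrapper makes the multi-statement body behave
         * as a single statement, so it stays safe as the unbraced body
         * of an if/else. */
        #define COUNT_UP_PEERS(peers, n, count)                 \
                do {                                            \
                        for (int _i = 0; _i < (n); _i++)        \
                                if ((peers)[_i])                \
                                        (count)++;              \
                } while (0)

        int
        main(void)
        {
                int peers[] = {1, 0, 1, 1};
                int count = 0;

                if (count == 0)         /* safe even without braces */
                        COUNT_UP_PEERS(peers, 4, count);

                printf("peers up: %d\n", count);
                return 0;
        }
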
* mgmt/glusterd: generate volfile for BitD (Venky Shankar, 2015-03-24; 1 file, -0/+18)

    * Implement the skeleton of the bit-rot xlator.

    Original-Author: Raghavendra Bhat <raghavendra@redhat.com>
    Signed-off-by: Venky Shankar <vshankar@redhat.com>
    Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
    Signed-off-by: Anand Nekkunti <anekkunt@redhat.com>
    Change-Id: If33218bdc694f5f09cb7b8097c4fdb74d7a23b2d
    BUG: 1170075
    Reviewed-on: http://review.gluster.org/9710
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: CLI commands to create and manage tiered volumes. (Dan Lambright, 2015-03-19; 1 file, -0/+1)

    A tiered volume is a normal volume with some number of new bricks
    representing "hot" storage. The "hot" bricks can be attached to or
    detached from a normal volume dynamically. When this happens, a new
    graph is constructed. The root of the new graph is an instance of the
    tier translator. One subvolume of the tier translator leads to the
    old volume, and another leads to the new hot bricks.

      attach-tier <VOLNAME> [<replica> <COUNT>] <NEW-BRICK> ... [force]
      volume detach-tier <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force>
      gluster volume rebalance <volume> tier start
      gluster volume rebalance <volume> tier stop
      gluster volume rebalance <volume> tier status

    The "tier start" CLI command starts a server-side daemon. The daemon
    initiates file-level migration based on caching policies. The
    daemon's status can be monitored and stopped. Note that development
    of the "tier status" command is incomplete; it will be added in a
    subsequent patch.

    When the "hot" storage is detached, the tier translator is removed
    from the graph and the tiered volume reverts to its original state as
    described in the volume's info file.

    For more background and design see the feature page [1].

    [1] http://www.gluster.org/community/documentation/index.php/Features/data-classification

    Change-Id: Ic8042ce37327b850b9e199236e5be3dae95d2472
    BUG: 1194753
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-on: http://review.gluster.org/9753
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Vijay Bellur <vbellur@redhat.com>

* cli/glusterd: cli command implementation for bitrot features (Gaurav Kumar Garg, 2015-03-18; 1 file, -0/+6)

    CLI command for bitrot features:

      volume bitrot <volname> enable|disable

    The above command will enable/disable the bitrot feature for a
    particular volume.

    BUG: 1170075
    Change-Id: Ie84002ef7f479a285688fdae99c7afa3e91b8b99
    Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
    Signed-off-by: Anand nekkunti <anekkunt@redhat.com>
    Signed-off-by: Dominic P Geevarghese <dgeevarg@redhat.com>
    Reviewed-on: http://review.gluster.org/9866
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* features/quota : Introducing inode quota (vmallika, 2015-03-18; 1 file, -2/+6)

    Inode quota
    -----------
    Currently, the only way to retrieve the number of files/objects in a
    directory or volume is to do a crawl of the entire directory/volume.
    This is expensive and is not scalable.

    The proposed mechanism provides an easier alternative to determine
    the count of files/objects in a directory or volume. It stores the
    count of objects/files as part of an extended attribute of a
    directory. Each directory's extended attribute value indicates the
    number of files/objects present in a tree with the directory being
    considered as the root of the tree.

    The count value can be accessed by performing a getxattr(). Cluster
    translators like afr, dht and stripe perform aggregation of count
    values from the various bricks when getxattr() happens on the key
    associated with the file/object count.

    A new interface is introduced:

      limit-objects  : limit the number of inodes at directory level
      list-objects   : list the directories where the limit is set
      remove-objects : remove the limit from the directory

    CLI COMMAND:
      gluster volume quota <volname> limit-objects <path> <number> [<percent>]

      * <number> is a hard limit on the number of objects for path
        "<path>". If the hard limit is exceeded, creation of files or
        directories is no longer permitted.
      * <percent> is a soft limit on the number of objects created under
        path "<path>". If the soft limit is exceeded, a warning is issued
        for each creation.

    CLI COMMAND:
      gluster volume quota <volname> remove-objects [path]

    CLI COMMAND:
      gluster volume quota <volname> list-objects [path] ...

      Sample output:
        Path   Hard-limit   Soft-limit   Used   Available   Soft-limit exceeded?   Hard-limit exceeded?
        /dir   10           80%          10     0           Yes                    Yes

      [root@snapshot-28 dir]# ls
      a  b  file11  file12  file13  file14  file15  file16  file17
      [root@snapshot-28 dir]# touch a1
      touch: cannot touch `a1': Disk quota exceeded

      * Nine files are created in directory "dir", and the directory is
        included in the count too. Hence the limit "10" is reached and
        further file creation fails.

    Note: We have also done some re-factoring in the CLI for volume name
    validation. A new function cli_validate_volname is created.

    Change-Id: I1823497de4f790a2a20ebb1770293472ea33ee2b
    BUG: 1190108
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Signed-off-by: vmallika <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/9769
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

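    A rough sketch of the getxattr() access path described above, for a
    client on a Linux mount; the mount path and the exact quota xattr key
    are assumptions, not taken from this commit:

        #include <stdio.h>
        #include <sys/xattr.h>

        int
        main(void)
        {
                char buf[256];
                /* The key name below is hypothetical; the commit only
                 * says the count is exposed through an extended
                 * attribute of the directory. */
                ssize_t n = getxattr("/mnt/testvol/dir",
                                     "trusted.glusterfs.quota.object-count",
                                     buf, sizeof(buf));
                if (n < 0) {
                        perror("getxattr");
                        return 1;
                }
                printf("got %zd bytes of aggregated count data\n", n);
                return 0;
        }
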
* glusterd: Protect the peer list and peerinfos with RCU. (Kaushal M, 2015-03-16; 1 file, -2/+8)

    The peer list and the peerinfo objects are now protected using RCU.
    Design patterns described in Paul McKenney's RCU dissertation [1]
    (sections 5 and 6) have been used to convert existing non-RCU
    protected code to RCU protected code.

    Currently, we are only targetting guaranteeing the existence of the
    peerinfo objects, ie., we are only looking to protect deletes, not
    all updaters. We chose this, as protecting all updates is a much more
    complex task.

    The steps used to accomplish this are:

    1. Remove all long-lived direct references to peerinfo objects (apart
       from the peerinfo list). This includes references in
       glusterd_peerctx_t (RPC), glusterd_friend_sm_event_t (friend state
       machine) and others. This way no one holds a reference to a
       deleted peerinfo object.
    2. Replace the direct references with indirect references, ie., use
       the peer uuid and peer hostname as indirect references to the
       peerinfo object. Any reader or updater now uses the indirect
       references to get to the actual peerinfo object, using
       glusterd_peerinfo_find. Cases where a peerinfo cannot be found are
       handled gracefully.
    3. Readers get and use the peerinfo object only within an RCU read
       critical section. This prevents the object from being
       deleted/freed while in actual use.
    4. The deletion of a peerinfo object is done in an ordered manner
       (glusterd_peerinfo_destroy). The object is first removed from the
       peerinfo list using an atomic list remove, but the list head is
       not reset, to allow existing list readers to complete correctly.
       We wait for readers to complete before resetting the list head;
       this removes the object from the list completely. After this, no
       new readers can get a reference to the object, and it can be
       freed.

    This change was developed on the git branch at [2]. This commit is a
    combination of the following commits on the development branch.

      d7999b9 Protect the glusterd_conf_t->peers_list with RCU.
      0da85c4 Synchronize before INITing peerinfo list head after removing from list.
      32ec28a Add missing rcu_read_unlock
      8fed0b8 Correctly exit read critical section once peer is found.
      63db857 Free peerctx only on rpc destruction
      56eff26 Cleanup style issues
      e5f38b0 Indirection for events and friend_sm
      3c84ac4 In __glusterd_probe_cbk goto unlock only if peer already exists
      141d855 Address review comments on 9695/1
      aaeefed Protection during peer updates
      6eda33d Revert "Synchronize before INITing peerinfo list head after removing from list."
      f69db96 Remove unneeded line
      b43d2ec Address review comments on 9695/4
      7781921 Address review comments on 9695/5
      eb6467b Add some missing semi-colons
      328a47f Remove synchronize_rcu from glusterd_friend_sm_transition_state
      186e429 Run part of glusterd_friend_remove in critical section
      55c0a2e Fix gluster (peer status/ pool list) with no peers
      93f8dcf Use call_rcu to free peerinfo
      c36178c Introduce composite struct, gd_rcu_head

    [1]: http://www.rdrop.com/~paulmck/RCU/RCUdissertation.2004.07.14e1.pdf
    [2]: https://github.com/kshlm/glusterfs/tree/urcu

    Change-Id: Ic1480e59c86d41d25a6a3d159aa3e11fbb3cbc7b
    BUG: 1191030
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/9695
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

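    A minimal sketch of the reader pattern in step 3, using liburcu; the
    struct layout and list names are simplified stand-ins for
    glusterd_peerinfo_t and the peers list, not the real definitions:

        #include <string.h>
        #include <urcu.h>          /* liburcu: rcu_read_lock()/unlock() */
        #include <urcu/rculist.h>  /* cds_list_for_each_entry_rcu() */

        struct peer {              /* stand-in for glusterd_peerinfo_t */
                char hostname[256];
                struct cds_list_head list;
        };

        extern struct cds_list_head peers;  /* RCU-protected list head */

        int
        peer_exists(const char *hostname)
        {
                struct peer *p = NULL;
                int found = 0;

                rcu_read_lock();   /* peerinfo cannot be freed in here */
                cds_list_for_each_entry_rcu(p, &peers, list) {
                        if (strcmp(p->hostname, hostname) == 0) {
                                found = 1;
                                break;  /* still inside read section */
                        }
                }
                rcu_read_unlock(); /* reclamation may now proceed */
                return found;
        }
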
* cluster/ec: Add self-heal-daemon command handlers (Pranith Kumar K, 2015-03-09; 1 file, -1/+1)

    This patch introduces the changes required in the ec xlator to handle
    index/full heal.

    Index healer threads:
    The ec xlator starts one index healer thread per local brick. This
    thread wakes up every minute to check if there are any files to be
    healed, based on the indices kept in the index directory. Whenever a
    child_up event comes, this index healer thread also wakes up, crawls
    the indices and triggers heal. When self-heal-daemon is disabled on a
    particular volume, the healer thread waits until it is enabled again
    to perform heals.

    Full healer threads:
    The ec xlator starts a full healer thread for the local subvolume
    provided by glusterd, to perform a full crawl on the directory
    hierarchy and heal what it finds. Once the crawl completes, the
    thread exits if no more full heals are issued.

    Changed the xl-op prefix GF_AFR_OP to GF_SHD_OP to make it more
    generic.

    Change-Id: Idf9b2735d779a6253717be064173dfde6f8f824b
    BUG: 1177601
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/9787
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: Start quotad before spawning bricks during glusterd restart (Avra Sengupta, 2015-03-04; 1 file, -4/+8)

    Change-Id: I66edc1b98b70a494e069df95a6f347634c8f862d
    BUG: 1198076
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/9791
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* glusterd: Replace libglusterfs lists with liburcu lists (Kaushal M, 2015-03-03; 1 file, -86/+116)

    This patch replaces usage of the libglusterfs lists data structures
    and API in glusterd with the lists data structures and API from
    liburcu. The liburcu data structures and APIs are a drop-in
    replacement for libglusterfs lists. All usages have been changed to
    keep the code consistent, and free from confusion.

    NOTE: glusterd_conf_t->xprt_list still uses the libglusterfs data
    structures and API, as it holds rpc_transport_t objects, which is not
    a part of glusterd and is not being changed in this patch.

    This change was developed on the git branch at [1]. This commit is a
    combination of the following commits on the development branch.

      6dac576 Replace libglusterfs lists with liburcu lists
      a51b5ab Fix compilation issues
      d98a06f Fix merge issues
      a5d918e Remove merge remnant
      1cca113 More style cleanup
      1917be3 Address review comments on 9624/1
      8d10f13 Use cds_lists for glusterd_svc_t
      524ad5d Add rculist header in glusterd-conn-helper.c
      646f294 glusterd: add list_add_order API honouring rcu

    [1]: https://github.com/kshlm/glusterfs/tree/urcu

    Change-Id: Ic613c5b6e496a677b9d3de15fc042a0492109fb0
    BUG: 1191030
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/9624
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
    Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>

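    A small self-contained sketch of the drop-in nature of the
    replacement: the liburcu list API mirrors the libglusterfs one with a
    cds_ prefix. The struct here is illustrative, not glusterd's:

        #include <stdio.h>
        #include <urcu/list.h>  /* CDS_LIST_HEAD, cds_list_* */

        struct brick {          /* illustrative, not glusterd_brickinfo_t */
                const char *path;
                struct cds_list_head list;
        };

        int
        main(void)
        {
                CDS_LIST_HEAD(bricks);                  /* was INIT_LIST_HEAD */
                struct brick b1 = {"/export/b1"};
                struct brick b2 = {"/export/b2"};
                struct brick *b = NULL;

                cds_list_add_tail(&b1.list, &bricks);   /* was list_add_tail */
                cds_list_add_tail(&b2.list, &bricks);

                cds_list_for_each_entry(b, &bricks, list)
                        printf("%s\n", b->path);        /* was list_for_each_entry */
                return 0;
        }
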
* glusterd: nfs,shd,quotad,snapd daemons refactoring (Atin Mukherjee, 2015-02-20; 1 file, -719/+112)

    This patch ports nfs, shd, quotad & snapd with the approach suggested
    in
    http://www.gluster.org/pipermail/gluster-devel/2014-December/043180.html

    Change-Id: I4ea5b38793f87fc85cc9d2cf873727351dedffd2
    BUG: 1191486
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/9428
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>

* cli: volume status for tcp,rdma type volume display only tcp port (Mohammed Rafi KC, 2015-02-18; 1 file, -1/+15)

    For tcp,rdma type volumes, there will be two ports, one for tcp and
    one for rdma. But the volume status command only displays the tcp
    port. With this change, an extra column is added for the rdma port,
    and the existing port column becomes the tcp port.

    Eg:

    > gluster volume status patchy

    For a tcp,rdma type volume:
      Status of volume: patchy
      Gluster process                TCP Port  RDMA Port  Online  Pid
      ------------------------------------------------------------------------------
      Brick brickname                49152     49153      Y       14158

    For an rdma type volume:
      Status of volume: patchy
      Gluster process                TCP Port  RDMA Port  Online  Pid
      ------------------------------------------------------------------------------
      Brick brickname                0         49153      Y       14158

    For a tcp type volume:
      Status of volume: patchy
      Gluster process                TCP Port  RDMA Port  Online  Pid
      ------------------------------------------------------------------------------
      Brick brickname                49152     0          Y       14158

    > gluster volume status patchy detail

      Status of volume: xcube2
      ------------------------------------------------------------------------------
      Brick            : Brick brickname
      TCP Port         : 49152
      RDMA Port        : 49153
      Online           : Y
      Pid              : 14158
      File System      : ext4
      Device           : /dev/mapper/luks-2099dd4a-0050-4cae-ad7b-c6a0498c4e88
      Mount Options    : rw,seclabel,relatime,data=ordered
      Inode Size       : 256
      Disk Space Free  : 31.1GB
      Total Disk Space : 47.9GB
      Inode Count      : 3203072
      Free Inodes      : 2926789

    > gluster volume status xcube --xml

      <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
      <cliOutput>
        <opRet>0</opRet>
        <opErrno>0</opErrno>
        <opErrstr>(null)</opErrstr>
        <volStatus>
          <volumes>
            <volume>
              <volName>xcube</volName>
              <nodeCount>2</nodeCount>
              <node>
                <hostname>hostname</hostname>
                <path>/home/brick1</path>
                <peerid>2d7bcb95-3d26-4d4f-b3c6-e2ee01b71662</peerid>
                <status>1</status>
                <port>49152</port>
                <ports>
                  <tcp>49152</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>5657</pid>
              </node>
              <node>
                <hostname>NFS Server</hostname>
                <path>localhost</path>
                <peerid>2d7bcb95-3d26-4d4f-b3c6-e2ee01b71662</peerid>
                <status>1</status>
                <port>2049</port>
                <ports>
                  <tcp>2049</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>5665</pid>
              </node>
              <tasks/>
            </volume>
          </volumes>
        </volStatus>
      </cliOutput>

    Change-Id: I81aab226edbd400d29cd3f510af4f344dd99ba51
    BUG: 1164079
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/9191
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>

* glusterd/geo-rep: Allow replace/remove brick if geo-rep is stopped. (Kotresh HR, 2015-02-16; 1 file, -0/+1)

    Replace brick: If geo-replication was configured on a volume, replace
    brick used to fail. This patch allows replace brick to go through if
    all geo-rep sessions corresponding to that volume are stopped.

    Remove brick: There was no geo-replication check for remove brick.
    Enforce 'remove brick commit' to fail if a geo-rep session
    corresponding to the volume is running. Allow 'remove brick commit'
    only if all of the geo-rep sessions corresponding to that volume are
    stopped.

    Code is re-organized for better readability.

    Change-Id: I02282c2764d8b81e319489c977847e6e437511a4
    BUG: 1179638
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/9402
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-by: ajeet jha <ajha@redhat.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* glusterd: Fix spurious volume delete failure (Emmanuel Dreyfus, 2015-01-21; 1 file, -1/+6)

    If a volume uses quota, the volume delete operation should unmount
    the auxiliary quota mount using glusterd_remove_auxiliary_mount().
    This may fail with EBADF if the mount is already gone. In that
    situation, ignore the error so that volume delete succeeds.

    This fixes a spurious failure on NetBSD in tests/basic/quota.t 74-75

    BUG: 1129939
    Change-Id: I69325f71fc2c8af254db46f696c8669a4e6bd7e4
    Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
    Reviewed-on: http://review.gluster.org/9468
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* glusterd: quorum validation in glusterd syncop framework (GauravKumarGarg, 2015-01-20; 1 file, -0/+101)

    Previously glusterd was not performing quorum validation in the
    syncop framework. So when there was a loss of quorum, a few
    operations (e.g. add-brick, remove-brick, volume set) which are based
    on the syncop framework passed successfully without the quorum
    validation check. With this change, quorum validation is done in the
    syncop framework, and all operations (except volume set
    <quorum options> and "volume reset all" commands) are blocked when
    there is a loss of quorum.

    Change-Id: I4c2ef16728d55c98a228bb86795023d9c1f4e9fb
    BUG: 1177132
    Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
    Reviewed-on: http://review.gluster.org/9349
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* mgmt/glusterd: Implement Volume heal enable/disable (Pranith Kumar K, 2015-01-20; 1 file, -4/+17)

    For volumes with replicate or disperse xlators, the self-heal daemon
    should do the healing. This patch provides enable/disable
    functionality for the xlators to be part of self-heal-daemon.
    Replicate already had this functionality with 'gluster volume set
    cluster.self-heal-daemon on/off'. This patch makes it uniform for
    both types of volumes. Internally it still does a 'volume set' based
    on the volume type.

    Change-Id: Ie0f3799b74c2afef9ac658ef3d50dce3e8072b29
    BUG: 1177601
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/9358
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* features/changelog: Cleanup .processing and .current directory (Aravinda VK, 2015-01-18; 1 file, -72/+0)

    On changelog_register, clean up .processing, .history/.processing,
    .current and .history/.current from the working directory.

    Moved glusterd_recursive_rmdir and glusterd_for_each_entry to a
    common place (libglusterfs) and renamed them recursive_rmdir and
    GF_FOR_EACH_ENTRY_IN_DIR respectively.

    BUG: 1162057
    Change-Id: I1f98468a344cead039026762a805437b2f9e507b
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/9082
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
    Tested-by: Venky Shankar <vshankar@redhat.com>

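    For context, a sketch of what a recursive directory removal amounts
    to; the real recursive_rmdir() in libglusterfs walks entries with its
    own GF_FOR_EACH_ENTRY_IN_DIR loop rather than nftw(), so this only
    mirrors the effect:

        #define _XOPEN_SOURCE 500   /* for nftw() */
        #include <ftw.h>
        #include <stdio.h>

        static int
        rm_one(const char *path, const struct stat *sb, int type,
               struct FTW *ftwbuf)
        {
                (void)sb; (void)type; (void)ftwbuf;
                return remove(path);  /* unlinks files, rmdirs empty dirs */
        }

        int
        recursive_rmdir_like(const char *dir)
        {
                /* FTW_DEPTH: post-order, children before their parent */
                return nftw(dir, rm_one, 16, FTW_DEPTH | FTW_PHYS);
        }
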
* glusterd: Refactor glusterd-utils.c (Avra Sengupta, 2015-01-08; 1 file, -3677/+4)

    Refactor glusterd-utils.c to create glusterd-snapshot-utils.c,
    consisting of all snapshot utility functions.

    Change-Id: Id9823a2aec9b115f9c040c9940f288d4fe753d9b
    BUG: 1176770
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/9391
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* glusterd: cluster quorum count check correction (Atin Mukherjee, 2015-01-06; 1 file, -18/+30)

    Due to the recent change introduced by commit
    da9deb54df91dedc51ebe165f3a0be646455cb5b, the cluster quorum count
    calculation now depends on whether the peer list is the set of all
    peers, the global transaction peer list, or the local transaction
    peer list.

    Change-Id: I9f63af9a0cb3cfd6369b050247d0ef3ac93d760f
    BUG: 1173414
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/9350
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>

* glusterd: check_volume_exists should query in-memory representation (Krishnan Parthasarathi, 2014-12-28; 1 file, -19/+2)

    ... instead of consulting the on-disk data directory. There is no
    reason why the on-disk representation is more accurate than the
    in-memory one. In fact, it is the other way around when a node is
    reconciling volume/cluster configuration with the rest of the
    cluster.

    Change-Id: I786823efdf1d0f6b9e6fcdb72d51e5227c399ce1
    BUG: 1176770
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/9292
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: Copy brick port no. if brick is running (Avra Sengupta, 2014-12-19; 1 file, -5/+18)

    Instead of relying on brickinfo->status, check if the brick process
    is running before copying the brick port number.

    Change-Id: I246465fa4cf4911da63a1c26bbb51cc4ed4630ac
    BUG: 1175700
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/9297
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

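    A sketch of the core of such a liveness check; glusterd's actual
    helper reads the brick's pidfile first, so treat this as the idea
    only:

        #include <signal.h>
        #include <stdbool.h>
        #include <sys/types.h>

        /* kill() with signal 0 performs no signalling; it only reports
         * whether the process exists and we may signal it (running as
         * root, as glusterd does, EPERM is not a concern). */
        static bool
        pid_is_running(pid_t pid)
        {
                return pid > 0 && kill(pid, 0) == 0;
        }
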
* mgmt/glusterd: do not restart nfs server when snapshot is deactivated (Raghavendra Bhat, 2014-12-18; 1 file, -0/+3)

    Change-Id: Ie5eaa2beb4446640b22873f91e17da90d1cd8fad
    BUG: 1174625
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-on: http://review.gluster.org/9280
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>

* mgmt/glusterd: Out of bounds access to fs_info struct (Petr Medonos, 2014-12-01; 1 file, -1/+1)

    Change-Id: Ifa0d4ac17f9da94660a7b7f567a0f07b5cec7aec
    BUG: 1164775
    Signed-off-by: Petr Medonos <petr.medonos@etnetera.cz>
    Reviewed-on: http://review.gluster.org/9138
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>

* glusterd/uss: Create rebalance volfile. (Avra Sengupta, 2014-11-30; 1 file, -0/+13)

    Create a new rebalance volfile, which will not contain snap-view
    client translators, irrespective of the status of USS.

    This volfile will be created and regenerated every time the
    fuse-volfile is generated, and will be consumed by the rebalance
    process.

    Change-Id: I514a8e88d06c0b8fb6949c3a3e6dc4dbe55e38af
    BUG: 1164711
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/9190
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* glusterd/uss: if snapd is not running, return success from glusterd_handle_snapd_option (Atin Mukherjee, 2014-11-30; 1 file, -0/+3)

    glusterd_handle_snapd_option was returning failure if snapd is not
    running, because of which gluster commands were failing.

    Change-Id: I22286f4ecf28b57dfb6fb8ceb52ca8bdc66aec5d
    BUG: 1168803
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/9206
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* core: fix Ubuntu code audit (cppcheck) results (Kaleb S. KEITHLEY, 2014-11-25; 1 file, -6/+8)

    See also http://review.gluster.org/#/c/7693/, BZ 1091677

    AFAICT these are false positives:
      [geo-replication/src/gsyncd.c:100]: (error) Memory leak: str
      [geo-replication/src/gsyncd.c:403]: (error) Memory leak: argv
      [xlators/nfs/server/src/nlm4.c:1201]: (error) Possible null pointer dereference: fde
      [xlators/cluster/afr/src/afr-self-heal-common.c:138]: (error) Possible null pointer dereference: __ptr
      [xlators/cluster/afr/src/afr-self-heal-common.c:140]: (error) Possible null pointer dereference: __ptr
      [xlators/cluster/afr/src/afr-self-heal-common.c:331]: (error) Possible null pointer dereference: __ptr

    Test program:
      [extras/test/test-ffop.c:27]: (error) Buffer overrun possible for long command line arguments.
      [tests/basic/fops-sanity.c:55]: (error) Buffer overrun possible for long command line arguments.

    The remainder are fixed with this change-set:
      [cli/src/cli-rpc-ops.c:8883]: (error) Possible null pointer dereference: local
      [cli/src/cli-rpc-ops.c:8886]: (error) Possible null pointer dereference: local
      [contrib/uuid/gen_uuid.c:369]: (warning) %ld in format string (no. 2) requires 'long *' but the argument type is 'unsigned long *'.
      [contrib/uuid/gen_uuid.c:369]: (warning) %ld in format string (no. 3) requires 'long *' but the argument type is 'unsigned long *'.
      [xlators/cluster/dht/src/dht-rebalance.c:1734]: (error) Possible null pointer dereference: ctx
      [xlators/cluster/stripe/src/stripe.c:4940]: (error) Possible null pointer dereference: local
      [xlators/mgmt/glusterd/src/glusterd-geo-rep.c:1718]: (error) Possible null pointer dereference: command
      [xlators/mgmt/glusterd/src/glusterd-replace-brick.c:942]: (error) Resource leak: file
      [xlators/mgmt/glusterd/src/glusterd-replace-brick.c:1026]: (error) Resource leak: file
      [xlators/mgmt/glusterd/src/glusterd-sm.c:249]: (error) Possible null pointer dereference: new_ev_ctx
      [xlators/mgmt/glusterd/src/glusterd-snapshot.c:6917]: (error) Possible null pointer dereference: volinfo
      [xlators/mgmt/glusterd/src/glusterd-utils.c:4517]: (error) Possible null pointer dereference: this
      [xlators/mgmt/glusterd/src/glusterd-utils.c:6662]: (error) Possible null pointer dereference: this
      [xlators/mgmt/glusterd/src/glusterd-utils.c:7708]: (error) Possible null pointer dereference: this
      [xlators/mount/fuse/src/fuse-bridge.c:4687]: (error) Uninitialized variable: finh
      [xlators/mount/fuse/src/fuse-bridge.c:3080]: (error) Possible null pointer dereference: state
      [xlators/nfs/server/src/nfs-common.c:89]: (error) Dangerous usage of 'volname' (strncpy doesn't always null-terminate it).
      [xlators/performance/quick-read/src/quick-read.c:586]: (error) Possible null pointer dereference: iobuf

    Rerunning cppcheck after fixing the above:

    As before, test program:
      [extras/test/test-ffop.c:27]: (error) Buffer overrun possible for long command line arguments.
      [tests/basic/fops-sanity.c:55]: (error) Buffer overrun possible for long command line arguments.

    As before, false positive:
      [geo-replication/src/gsyncd.c:100]: (error) Memory leak: str
      [geo-replication/src/gsyncd.c:403]: (error) Memory leak: argv
      [xlators/nfs/server/src/nlm4.c:1201]: (error) Possible null pointer dereference: fde
      [xlators/cluster/afr/src/afr-self-heal-common.c:138]: (error) Possible null pointer dereference: __ptr
      [xlators/cluster/afr/src/afr-self-heal-common.c:140]: (error) Possible null pointer dereference: __ptr
      [xlators/cluster/afr/src/afr-self-heal-common.c:331]: (error) Possible null pointer dereference: __ptr

    False positive after fix:
      [xlators/performance/quick-read/src/quick-read.c:584]: (error) Possible null pointer dereference: iobuf

    Change-Id: I20e0e3ac1d600b2f2120b8d8536cd6d9e17023e8
    BUG: 1109180
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/8064
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* rdma: Client volfile name change for supporting rdma (Anoop C S, 2014-11-19; 1 file, -2/+2)

    For rdma-only volumes, daemons like snapd, glustershd etc. make use
    of the tcp transport for their operations. This patch introduces
    support for rdma by default for those daemons in rdma-only volumes.
    In order to accommodate this change, we rename the tcp client volfile
    labels from <volname>-fuse.vol to <volname>.tcp-fuse.vol

    Change-Id: Id9727b97d00e62a4a1556b9c0c56653d45c8fe1d
    BUG: 1164079
    Signed-off-by: Anoop C S <achiraya@redhat.com>
    Reviewed-on: http://review.gluster.org/9146
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>

* rdma: mount fails for nfs protocol in rdma volumes (Jiffin Tony Thottan, 2014-11-19; 1 file, -1/+0)

    When we mount an rdma-only volume or a tcp,rdma volume using a newly
    peer-probed IP (nfs-server on new nodes) through the nfs protocol,
    the mount fails for rdma-only volumes, and for tcp,rdma volumes the
    mount happens with the help of the tcp protocol. That is, newly added
    servers always get the transport type "socket". This is because
    nfs_transport_type is exported correctly but imported wrongly.

    This can be verified by the following:
    * Create an rdma-only volume or a tcp,rdma volume.
    * Add a new server into the trusted pool.
    * Check the client transport type specified in the nfs-server
      volgraph. It will always be tcp (socket type) instead of rdma.
    * Also, for rdma-only volumes, the nfs log shows a 'connection
      refused' message for every reconnect between the nfs server and
      glusterfsd.

    BUG: 1157381
    Change-Id: I6bd4979e31adfc72af92c1da06a332557b6289e2
    Author: Jiffin Tony Thottan <jthottan@redhat.com>
    Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
    Reviewed-on: http://review.gluster.org/8975
    Reviewed-by: Meghana M <mmadhusu@redhat.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Tested-by: Niels de Vos <ndevos@redhat.com>

* rdma: Wrong volfile fetch on fuse mounting tcp,rdma volume via rdma (Anoop C S, 2014-11-18; 1 file, -18/+36)

    As of now, for both tcp-only volumes and rdma-only volumes, volfile
    names are in the format <volname>-fuse.vol. This patch changes the
    client volfile naming as shown below.

    * TCP mounts always use <volname>-fuse.vol
    * RDMA mounts always use <volname>.rdma-fuse.vol

    Following the above naming convention, for tcp,rdma volumes both
    volfiles will be present under /var/lib/glusterd/vols/<volname>/ so
    that an rdma-only volume can be mounted as

      mount -t glusterfs -o transport=rdma <server/ip>:/<volname> <mount-point>
    OR
      mount -t glusterfs <server/ip>:/<volname>.rdma <mount-point>

    The above command format can also be used to fuse mount a tcp,rdma
    volume via rdma transport.

    Previously, when we tried to fuse mount a tcp,rdma volume with
    transport-type rdma, it silently mounted via tcp. This change also
    makes sure that the correct volfile is fetched based on the
    transport-type specified from the client side.

    BUG: 1131502
    Change-Id: I34da4b01ac813b69494a43188f51145457412923
    Signed-off-by: Anoop C S <achiraya@redhat.com>
    Reviewed-on: http://review.gluster.org/8498
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>

* rdma: client connection establishment takes more time (Mohammed Rafi KC, 2014-11-18; 1 file, -0/+6)

    For rdma-only volumes, client connection establishment with the
    server takes more than three seconds. A tcp,rdma type volume has two
    ports, one for tcp and one for rdma; the tcp port is stored with the
    brick name, and the rdma port is stored as "brickname.rdma" during
    pmap_signin. During the handshake, when trying to get the brick port
    for rdma clients, since we are not aware of the server transport
    type, we append '.rdma' to the brick name. For a tcp,rdma volume
    there will be an entry with '.rdma', but for an rdma-only volume the
    lookup fails. We then retry, this time without appending '.rdma',
    using a flag variable need_different_port, and the lookup succeeds,
    but the reconnection happens only after 3 seconds.

    With this patch, for rdma-only volumes we append '.rdma' during the
    pmap_signin itself. So during the handshake we get the correct port
    on the first try. Since we no longer need to retry, the
    need_different_port flag variable is removed.

    Change-Id: Ie8e3a7f532d4104829dbe995e99b35e95571466c
    BUG: 1153569
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/8934
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>

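    A sketch of the naming scheme described above; the helper name is
    hypothetical, but the '.rdma' suffix convention is as the commit
    states:

        #include <stdio.h>

        /* Hypothetical helper: rdma-only bricks now sign in to the
         * portmapper as "<brickpath>.rdma", so the client's first lookup
         * with the appended suffix succeeds without a retry. */
        static void
        pmap_signin_name(char *out, size_t len, const char *brickpath,
                         int rdma_only)
        {
                snprintf(out, len, rdma_only ? "%s.rdma" : "%s", brickpath);
        }

        int
        main(void)
        {
                char name[256];
                pmap_signin_name(name, sizeof(name), "/export/brick1", 1);
                printf("%s\n", name);   /* prints: /export/brick1.rdma */
                return 0;
        }
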
* USS : Kill snapd during glusterd restart if USS is disabled (Sachin Pandit, 2014-11-17; 1 file, -5/+25)

    Problem: When glusterd is down on one of the nodes, and USS is
    disabled during that time, snapd will still be running on the node
    where glusterd was down.

    Solution: During restart of glusterd, check if USS is disabled; if
    so, issue a kill for snapd.

    NOTE: The test case which I wrote in my previous patchset is facing
    some spurious failures, hence I thought of removing that test case.
    I'll add the test case once the issue is resolved.

    Change-Id: I2870ebb4b257d863cdfc319e8485b19e932576e9
    BUG: 1161015
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/9062
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* mgmt/glusterd: Validate the options of uss (vmallika, 2014-11-14; 1 file, -6/+11)

    Change-Id: Id13dc4cd3f5246446a9dfeabc9caa52f91477524
    BUG: 1111554
    Signed-off-by: Varun Shastry <vshastry@redhat.com>
    Signed-off-by: vmallika <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/8133
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* glusterd/snapshot: Don't append nouuid mount option for snapshot brick if original brick already has this option (vmallika, 2014-11-13; 1 file, -1/+34)

    Change-Id: I2841d2ac371a3e9505f6061f35d1d447946c0bae
    BUG: 1133456
    Signed-off-by: vmallika <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/8526
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* uss/gluster: Move all uss related logs into subfolder (vmallika, 2014-11-12; 1 file, -4/+12)

    For USS we have one snapd log per volume, and one log per snapshot of
    the volume. For example, if there are 4 volumes having 256 snaps each
    and USS is enabled, the total number of logs under /var/log/glusterfs
    for USS would be 1028:

      Total logs = 4 (snapd per volume) + 4 (volumes) * 256 (snaps) = 1028

    Hence, it makes sense to move them into a sub-folder structure like
      /var/log/glusterfs/snaps/<vol-name>/<snapd + snaps logs>

    Change-Id: I29262e6458c3906916923cd67d1145d6ae10bec3
    BUG: 1160534
    Signed-off-by: vmallika <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/9050
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* glusterd: op state machine shouldn't use global peer list (Atin Mukherjee, 2014-10-28; 1 file, -0/+22)

    Problem: The op state machine was relying on the global peer list
    while sending lock/stage/unlock/commit rpc requests to the peers in
    the cluster. Trusting the global peer list structure is dangerous, as
    this structure gets modified if any peer-modification command is
    attempted in the cluster while a transaction is going through the
    state machine. A typical case where this problem shows up is when
    rebalance is in progress and a peer probe is executed: the rebalance
    op-sm and the peer probe may race, making the peerinfo structure go
    for a toss.

    Solution: Use a local copy of the peer list (xaction_peers) in the
    glusterd op-sm.

    Change-Id: I1ff7118dc6a9a72633e2e87b7ab7bae1796595e0
    BUG: 1152890
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/8932
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>

* glusterd: really get the inode size for a brick (Niels de Vos, 2014-10-27; 1 file, -12/+17)

    The device to get the inode size from was not being passed to the
    tool (tune2fs, xfs_info or the like) that is called. This is probably
    just an oversight. While correcting this, clean up some bits of the
    function too.

    Change-Id: Ida45852cba061631fb304bc7dd5286df1a808010
    BUG: 1130462
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/8492
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* glusterd: make bricks respect 'transport.socket.bind-address' (Niels de Vos, 2014-10-08; 1 file, -0/+8)

    When GlusterD starts the brick processes, these will listen on all
    interfaces. When the 'transport.socket.bind-address' option is set in
    glusterd.vol, the brick processes should only listen on the specified
    hostname or IP-address.

    Change-Id: I8e7d1f294904081137c23f3446261329d0d13bba
    BUG: 1149863
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/8910
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* glusterd: pass the bind-address to starting services (Niels de Vos, 2014-10-07; 1 file, -3/+15)

    When the transport.socket.bind-address option is set to a hostname or
    ip-address, the services started by GlusterD fail to connect to the
    management daemon. GlusterD always forces the services to connect to
    the "localhost" hostname, even if it is not listening on that
    address.

    GlusterD should take the transport.socket.bind-address option into
    consideration, and pass that to the glusterfs-clients with the -s or
    --volfile commandline parameter.

    Note that this is not a change that removes all hard-coded
    dependencies on "localhost". This change merely makes it possible to
    start required services when the transport.socket.bind-address option
    is set.

    Change-Id: I36a0ed6c69342e6327adc258fea023929055d7f2
    BUG: 1149863
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/8908
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

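    A sketch of the substitution this performs; glusterd actually builds
    the argv through its runner framework, and the address below is an
    example value, not from the commit:

        #include <stdio.h>

        int
        main(void)
        {
                /* example value read from transport.socket.bind-address */
                const char *bind_addr = "192.0.2.10";

                char cmd[512];
                snprintf(cmd, sizeof(cmd),
                         "/usr/sbin/glusterfs -s %s --volfile-id gluster/nfs",
                         bind_addr ? bind_addr : "localhost");
                printf("%s\n", cmd);  /* -s now carries the bind-address */
                return 0;
        }
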
* Do not hardcode umount(8) path, emulate lazy umount (Emmanuel Dreyfus, 2014-10-03; 1 file, -12/+2)

    1) Use a system-dependent macro for the umount(8) location instead of
       relying on $PATH to find it, for security and portability's sake.

    2) Introduce gf_umount_lazy() to replace umount -l (-l for lazy)
       invocations, which are only supported on Linux; on Linux the
       behavior is unchanged. On other systems, we fork an external
       process (umountd) that will take care of periodically attempting
       to unmount, and optionally rmdir.

    BUG: 1129939
    Change-Id: Ia91167c0652f8ddab85136324b08f87c5ac1e51d
    Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
    Reviewed-on: http://review.gluster.org/8649
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Csaba Henk <csaba@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

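    A sketch of the retry loop a umountd-style helper could run on
    platforms without Linux's lazy unmount; gf_umount_lazy()'s actual
    interface may differ from this:

        #include <errno.h>
        #include <sys/mount.h>  /* umount(2); some BSDs use unmount(2) */
        #include <unistd.h>

        static int
        umount_with_retries(const char *path, int attempts,
                            unsigned interval_sec)
        {
                while (attempts-- > 0) {
                        if (umount(path) == 0)
                                return 0;       /* unmounted */
                        if (errno != EBUSY)
                                return -1;      /* hard error: give up */
                        sleep(interval_sec);    /* busy: retry later */
                }
                return -1;
        }
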
* glusterd: Authenticate management handshake requests (Kaushal M, 2014-09-23; 1 file, -0/+14)

    Management handshake requests, which are used to validate the
    op-version supported by the peers, are now only allowed if,
    - the glusterd doesn't have any other peer, or
    - the request was sent by another peer.

    This prevents the op-version of a peer being changed because of a
    connection attempt by an invalid peer.

    Change-Id: I248c386ed5ec4f8360e7b5e7f9ab74b7e8a7fc65
    BUG: 1109741
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/8126
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

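    The admission rule reduces to a simple predicate, sketched here with
    illustrative names:

        #include <stdbool.h>

        /* Allow a management handshake only if we have no peers yet
         * (fresh glusterd) or the requester is already a known peer. */
        static bool
        handshake_allowed(int peer_count, bool requester_is_peer)
        {
                return peer_count == 0 || requester_is_peer;
        }
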
* glusterd: Prevent rebalance starting with old clients (Kaushal M, 2014-09-03; 1 file, -0/+39)

    Glusterd will prevent rebalance from starting when clients older than
    glusterfs-v3.6.0 are connected to a volume. This is needed as running
    rebalance with old clients connected could lead to data loss in some
    cases. The DHT xlator on newer clients (>= 3.6.0) has been fixed to
    prevent the data loss issues.

    Change-Id: If58640236382a2fc13f73f6b43777f01713859f7
    BUG: 1136201
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/8583
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

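    A sketch of the guard's shape, assuming op-versions are encoded as
    integers (3.6.0 as 30600, matching GlusterFS's usual scheme); struct
    and field names are illustrative, not glusterd's:

        #include <stdbool.h>

        #define OP_VERSION_3_6_0 30600  /* assumed encoding of 3.6.0 */

        struct connected_client {       /* illustrative */
                int op_version;
        };

        static bool
        rebalance_allowed(const struct connected_client *clients, int n)
        {
                for (int i = 0; i < n; i++)
                        if (clients[i].op_version < OP_VERSION_3_6_0)
                                return false;  /* old client: block */
                return true;
        }
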
* cli/glusterd: Support of volume get for a specific volume option (Atin Mukherjee, 2014-08-26; 1 file, -0/+278)

    This patch introduces a cli command to display a specific volume
    option/all volume options of a specific volume with the following
    usage:

      Usage: volume get <VOLNAME> <key|all>

    Change-Id: Ic88edb33c5509d7a37cd5ade6341e45e3cdbf59d
    BUG: 983317
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/8305
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>

* Fix glustershd detection on volume restart (Emmanuel Dreyfus, 2014-08-25; 1 file, -0/+14)

    On NetBSD and FreeBSD, doing a 'gluster volume start $volume force'
    causes the NFS server, quotad, snapd and glustershd to go undetected
    by glusterd once the volume has restarted. 'gluster volume status'
    shows these processes as 'N' in the online column, while they have
    been launched successfully.

    This happens because glusterd attempts to connect to its child
    processes just between the time the child does an unlink() on the
    socket in __socket_server_bind() and the time it calls bind() and
    listen(). A different scheduling policy may explain why the problem
    does not happen on Linux, but it may pop up some day, since we make
    no guaranteed assumptions here.

    This patch works around the problem by introducing a boolean
    transport.socket.ignore-enoent option, set by nfs and glustershd,
    which prevents ENOENT from being fatal and causes glusterd to retry
    and succeed later. The behavior of other clients is unaffected.

    BUG: 1129939
    Change-Id: Ifdc4d45b2513743ed42ee235a5c61a086321644c
    Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
    Reviewed-on: http://review.gluster.org/8403
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* geo-rep/glusterd: API to check active geo-rep session for the volume (Kotresh H R, 2014-08-21; 1 file, -19/+73)

    Requirement: Snapshot needs an API to fail the CLI if any geo-rep
    session is active for that volume.

    Solution: A function "gd_vol_is_geo_rep_active" is provided to check
    if any geo-rep session is active for that volume. An in-memory dict
    called 'gsync_running_slaves' is maintained in the 'volinfo'
    structure to keep track of active geo-rep sessions for the volume.
    The key 'slavenode::slavevol' with value 'running' is added to the
    dict whenever geo-rep is started/resumed, and the same is removed if
    stopped/paused. So the 'count' in the dict is used to decide whether
    geo-rep is active or not for that volume.

    Also added "this->name" in gf_log in routines which this patch
    touches.

    Change-Id: I2b5de7dd686541c6b89c0fd0f7a4dbc92eecfac5
    BUG: 1129008
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/8459
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* NetBSD /dev/fuse detection (Emmanuel Dreyfus, 2014-08-13; 1 file, -0/+4)

    NetBSD's FUSE being a pure userland implementation, there is no
    /dev/fuse to open. Test for /dev/puffs (the kernel fs-in-userland
    subsystem supporting FUSE) instead.

    BUG: 764655
    Change-Id: Ia65e95c246dc31ea2839cf64d7c851430828542e
    Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
    Reviewed-on: http://review.gluster.org/8478
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

* feature/geo-rep: Keep marker.tstamp's mtime unchangeable during snapshot. (Kotresh H R, 2014-08-04; 1 file, -0/+69)

    Problem: Geo-replication does a full xsync crawl after snapshot
    restoration of slave and master. It does not do a history crawl.

    Analysis: Marker creates the 'marker.tstamp' file when geo-rep is
    started for the first time. The virtual extended attribute
    'trusted.glusterfs.volume-mark' is maintained, and whenever it is
    queried on the gluster mount point, marker fills it on the fly and
    returns the combination of uuid, ctime of marker.tstamp and others.
    So the ctime of marker.tstamp, in other words the 'volume-mark',
    marks the geo-rep start time when the session is freshly created.
    From the above, after the first filesystem crawl (xsync) is done
    during the first geo-rep start, stime should always be greater than
    the 'volume-mark'. So whenever stime is less than the volume-mark,
    geo-rep does a full filesystem crawl (xsync).

    Root Cause: When a snapshot is restored, the marker.tstamp file is
    freshly created, losing the timestamps it was originally created
    with.

    Solution:
    1. Depend on mtime instead of ctime.
    2. Restore the mtime and atime of marker.tstamp when a snapshot is
       created and restored.

    Change-Id: I4891b112f4aedc50cfae402832c50c5145807d7a
    BUG: 1125918
    Signed-off-by: Kotresh H R <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/8401
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

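    A sketch of restoring the saved atime/mtime onto marker.tstamp, as in
    solution step 2; the function name and error handling are
    illustrative:

        #include <fcntl.h>      /* AT_FDCWD */
        #include <sys/stat.h>

        /* Restore atime and mtime captured (e.g. via stat()) before the
         * snapshot operation recreated marker.tstamp. */
        int
        restore_tstamp_times(const char *tstamp_path,
                             const struct stat *saved)
        {
                struct timespec times[2] = { saved->st_atim,
                                             saved->st_mtim };
                return utimensat(AT_FDCWD, tstamp_path, times, 0);
        }
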
* glusterd/snapshot: Inherit the mount options of the original brick when creating snapshots (Vijaikumar M, 2014-08-03; 1 file, -15/+103)

    When creating a snapshot, an LVM volume is created at the backend and
    is mounted under /var/run/gluster/snaps/... However, this mount does
    not inherit the mount options of the original brick acting as the
    parent for the snap. If the snap is restored, this could lead to
    performance degradations, functional limitations, or in extreme
    scenarios even potential data loss.

    Change-Id: I67d70fd83430d83dacc5380c6c928e27fb9c9e1b
    BUG: 1125180
    Signed-off-by: Vijaikumar M <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/8394
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>