path: root/xlators/mgmt/glusterd/src/glusterd-snapshot-utils.h
Commit message | Author | Age | Files | Lines
* snapshot : copying nfs-ganesha export file | Jiffin Tony Thottan | 2015-10-30 | 1 | -0/+7
While taking a snapshot, the export file used by the volume should be copied to the snap directory, so that when the snapshot is restored the volume can retain all of its configuration for exporting via nfs-ganesha. The export file is stored at "/etc/ganesha/export" in the format "export.<volname>.conf".

The fix handles the given cases in the following manner:

case a: nfs-ganesha (global) is ON during snapshot and restore.
  i.)  The volume was exported during snapshot. When we restore the snapshot, the volume should be exported again with the old configuration file.
  ii.) The volume was unexported during snapshot. When we restore the snapshot, the volume should be unexported again.

case b: nfs-ganesha is ON during snapshot and OFF during restore.
  The volume was exported during snapshot. When we restore the snapshot, the conf file is copied to the corresponding location, and if nfs-ganesha is enabled again the volume will be exported.

For clones, the export conf file is created in /etc/ganesha/export and the clone is then exported via ganesha.

Change-Id: Ideecda15bd4db58e991cf6c8de7bb93f3db6cd20
BUG: 1257709
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: http://review.gluster.org/12034
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
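A minimal sketch of the copy step described above; the snap_dir argument and the copy_export_conf() helper name are assumptions for illustration, not glusterd's actual code:

    /* Sketch: copy /etc/ganesha/export/export.<volname>.conf into a
     * snap directory so a later restore can put it back. The snap_dir
     * argument and the helper name are illustrative, not glusterd's
     * actual API. */
    #include <stdio.h>
    #include <limits.h>

    static int
    copy_export_conf (const char *volname, const char *snap_dir)
    {
        char src[PATH_MAX], dst[PATH_MAX], buf[4096];
        size_t n;
        FILE *in, *out;

        snprintf (src, sizeof (src),
                  "/etc/ganesha/export/export.%s.conf", volname);
        snprintf (dst, sizeof (dst), "%s/export.%s.conf",
                  snap_dir, volname);

        in = fopen (src, "rb");
        if (!in)
            return 0;  /* volume was not exported; nothing to copy */
        out = fopen (dst, "wb");
        if (!out) {
            fclose (in);
            return -1;
        }
        while ((n = fread (buf, 1, sizeof (buf), in)) > 0)
            fwrite (buf, 1, n, out);
        fclose (in);
        fclose (out);
        return 0;
    }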
* snapshot: cleanup snaps during unprobe | Mohammed Rafi KC | 2015-08-26 | 1 | -0/+5
When doing an unprobe, a volume that does not contain any brick of the particular node will be deleted, so the snaps associated with that volume should also be deleted.

Change-Id: I9f3d23bd11b254ebf7d7722cc1e12455d6b024ff
BUG: 1203185
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/9930
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* glusterd/snapshot: Return correct errno in events of failure - PATCH 2 | Avra Sengupta | 2015-06-02 | 1 | -5/+1
ENUM          RETCODE   ERROR
-------------------------------------------------------------
EG_INTRNL     30800     Internal Error
EG_OPNOTSUP   30801     Gluster Op Not Supported
EG_ANOTRANS   30802     Another Transaction in Progress
EG_BRCKDWN    30803     One or more brick is down
EG_NODEDWN    30804     One or more node is down
EG_HRDLMT     30805     Hard Limit is reached
EG_NOVOL      30806     Volume does not exist
EG_NOSNAP     30807     Snap does not exist
EG_RBALRUN    30808     Rebalance is running
EG_VOLRUN     30809     Volume is running
EG_VOLSTP     30810     Volume is not running
EG_VOLEXST    30811     Volume exists
EG_SNAPEXST   30812     Snapshot exists
EG_ISSNAP     30813     Volume is a snap volume
EG_GEOREPRUN  30814     Geo-Replication is running
EG_NOTTHINP   30815     Bricks are not thinly provisioned

Change-Id: I49a170cdfd77df11fe677e09f4e063d99b159275
BUG: 1212413
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/10588
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
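As a rough sketch, the retcodes above map directly onto a C enum; the tag gf_snap_retcode is hypothetical, not necessarily the name used in the tree:

    /* Sketch: the failure retcodes above as a C enum. The enum tag
     * gf_snap_retcode is illustrative, not glusterd's own name. */
    enum gf_snap_retcode {
        EG_INTRNL    = 30800, /* Internal Error */
        EG_OPNOTSUP  = 30801, /* Gluster Op Not Supported */
        EG_ANOTRANS  = 30802, /* Another Transaction in Progress */
        EG_BRCKDWN   = 30803, /* One or more brick is down */
        EG_NODEDWN   = 30804, /* One or more node is down */
        EG_HRDLMT    = 30805, /* Hard Limit is reached */
        EG_NOVOL     = 30806, /* Volume does not exist */
        EG_NOSNAP    = 30807, /* Snap does not exist */
        EG_RBALRUN   = 30808, /* Rebalance is running */
        EG_VOLRUN    = 30809, /* Volume is running */
        EG_VOLSTP    = 30810, /* Volume is not running */
        EG_VOLEXST   = 30811, /* Volume exists */
        EG_SNAPEXST  = 30812, /* Snapshot exists */
        EG_ISSNAP    = 30813, /* Volume is a snap volume */
        EG_GEOREPRUN = 30814, /* Geo-Replication is running */
        EG_NOTTHINP  = 30815, /* Bricks are not thinly provisioned */
    };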
* glusterd/snapshot : While snapshot restore, compute quota checksum. | Sachin Pandit | 2015-04-10 | 1 | -1/+2
Problem: During snapshot restore we copy the quota conf file anyway; after that we need to compute its checksum. If we do not, there may be a checksum mismatch during the glusterd handshake.

Solution: Compute a checksum file for the quota conf file if it is present.

Change-Id: Ic4a6567c6ede9923443abf4ca59380679be88094
BUG: 1202436
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9901
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
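A minimal sketch of the "compute and store a checksum" step, assuming a simple illustrative hash and hypothetical file paths; glusterd's real checksum routine differs:

    /* Sketch: compute a simple checksum over the quota conf file and
     * store it in a companion file, mirroring the "compute after copy"
     * step above. The hash and the paths are illustrative only. */
    #include <stdio.h>
    #include <stdint.h>

    static int
    write_quota_cksum (const char *conf_path, const char *cksum_path)
    {
        FILE *fp = fopen (conf_path, "rb");
        uint32_t cksum = 0;
        int c;

        if (!fp)
            return 0;  /* no quota conf present; nothing to do */
        while ((c = fgetc (fp)) != EOF)
            cksum = cksum * 31 + (uint32_t) c;
        fclose (fp);

        fp = fopen (cksum_path, "w");
        if (!fp)
            return -1;
        fprintf (fp, "cksum=%u\n", cksum);
        fclose (fp);
        return 0;
    }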
* glusterd: compute quorum on peers in cluster | Krishnan Parthasarathi | 2015-04-02 | 1 | -4/+2
... and not on peers participating in an ongoing transaction.

Change-Id: I6bdb80fd3bf3e7593fdf37e45a441d4a490469b8
BUG: 1205592
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/9493
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd: Protect the peer list and peerinfos with RCU. | Kaushal M | 2015-03-16 | 1 | -2/+2
The peer list and the peerinfo objects are now protected using RCU. Design patterns described in Paul McKenney's RCU dissertation [1] (sections 5 and 6) have been used to convert existing non-RCU protected code to RCU protected code.

Currently, we are only targeting guaranteeing the existence of the peerinfo objects, i.e., we are only looking to protect deletes, not all updaters. We chose this, as protecting all updates is a much more complex task.

The steps used to accomplish this are:

1. Remove all long-lived direct references to peerinfo objects (apart from the peerinfo list). This includes references in glusterd_peerctx_t (RPC), glusterd_friend_sm_event_t (friend state machine) and others. This way no one has a reference to a deleted peerinfo object.

2. Replace the direct references with indirect references, i.e., use the peer uuid and peer hostname as indirect references to the peerinfo object. Any reader or updater now uses the indirect references to get to the actual peerinfo object, using glusterd_peerinfo_find. Cases where a peerinfo cannot be found are handled gracefully.

3. The readers get and use the peerinfo object only within an RCU read critical section. This prevents the object from being deleted/freed while in actual use (see the sketch after this message).

4. The deletion of a peerinfo object is done in an ordered manner (glusterd_peerinfo_destroy). The object is first removed from the peerinfo list using an atomic list remove, but the list head is not reset, to allow existing list readers to complete correctly. We wait for readers to complete before resetting the list head. This removes the object from the list completely. After this, no new readers can get a reference to the object, and it can be freed.

This change was developed on the git branch at [2]. This commit is a combination of the following commits on the development branch.

  d7999b9 Protect the glusterd_conf_t->peers_list with RCU.
  0da85c4 Synchronize before INITing peerinfo list head after removing from list.
  32ec28a Add missing rcu_read_unlock
  8fed0b8 Correctly exit read critical section once peer is found.
  63db857 Free peerctx only on rpc destruction
  56eff26 Cleanup style issues
  e5f38b0 Indirection for events and friend_sm
  3c84ac4 In __glusterd_probe_cbk goto unlock only if peer already exists
  141d855 Address review comments on 9695/1
  aaeefed Protection during peer updates
  6eda33d Revert "Synchronize before INITing peerinfo list head after removing from list."
  f69db96 Remove unneeded line
  b43d2ec Address review comments on 9695/4
  7781921 Address review comments on 9695/5
  eb6467b Add some missing semi-colons
  328a47f Remove synchronize_rcu from glusterd_friend_sm_transition_state
  186e429 Run part of glusterd_friend_remove in critical section
  55c0a2e Fix gluster (peer status/ pool list) with no peers
  93f8dcf Use call_rcu to free peerinfo
  c36178c Introduce composite struct, gd_rcu_head

[1]: http://www.rdrop.com/~paulmck/RCU/RCUdissertation.2004.07.14e1.pdf
[2]: https://github.com/kshlm/glusterfs/tree/urcu

Change-Id: Ic1480e59c86d41d25a6a3d159aa3e11fbb3cbc7b
BUG: 1191030
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/9695
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
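A minimal liburcu sketch of the read-side pattern from step 3: take the read lock, look the peer up by an indirect reference (step 2), and use it only inside the critical section. struct peer, the peers list, and peer_is_known() are illustrative stand-ins, not glusterd's actual peerinfo types:

    /* Sketch of the RCU read-side pattern using liburcu. Callers must
     * have registered with rcu_register_thread(). struct peer stands
     * in for glusterd's peerinfo types. */
    #include <urcu.h>           /* rcu_read_lock, rcu_read_unlock */
    #include <urcu/rculist.h>   /* cds_list_for_each_entry_rcu */
    #include <string.h>

    struct peer {
        char                 uuid[37];
        struct cds_list_head list;
    };

    static CDS_LIST_HEAD (peers);  /* RCU-protected peer list */

    int
    peer_is_known (const char *uuid)
    {
        struct peer *p;
        int found = 0;

        /* The object may only be dereferenced inside the read-side
         * critical section; a concurrent delete waits for readers. */
        rcu_read_lock ();
        cds_list_for_each_entry_rcu (p, &peers, list) {
            if (strcmp (p->uuid, uuid) == 0) {
                found = 1;
                break;
            }
        }
        rcu_read_unlock ();  /* p must not be used past this point */
        return found;
    }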
* glusterd: Replace libglusterfs lists with liburcu lists | Kaushal M | 2015-03-03 | 1 | -4/+4
This patch replaces usage of the libglusterfs lists data structures and API in glusterd with the lists data structures and API from liburcu. The liburcu data structures and APIs are a drop-in replacement for libglusterfs lists. All usages have been changed to keep the code consistent and free from confusion.

NOTE: glusterd_conf_t->xprt_list still uses the libglusterfs data structures and API, as it holds rpc_transport_t objects, which is not a part of glusterd and is not being changed in this patch.

This change was developed on the git branch at [1]. This commit is a combination of the following commits on the development branch.

  6dac576 Replace libglusterfs lists with liburcu lists
  a51b5ab Fix compilation issues
  d98a06f Fix merge issues
  a5d918e Remove merge remnant
  1cca113 More style cleanup
  1917be3 Address review comments on 9624/1
  8d10f13 Use cds_lists for glusterd_svc_t
  524ad5d Add rculist header in glusterd-conn-helper.c
  646f294 glusterd: add list_add_order API honouring rcu

[1]: https://github.com/kshlm/glusterfs/tree/urcu

Change-Id: Ic613c5b6e496a677b9d3de15fc042a0492109fb0
BUG: 1191030
Signed-off-by: Kaushal M <kaushal@redhat.com>
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9624
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
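A small sketch of the drop-in nature of the replacement; struct volume and the "patchy" name are illustrative stand-ins, not glusterd's own structures:

    /* Sketch: liburcu's cds_list_* API mirrors the libglusterfs list
     * API one-for-one, as the inline "was ..." comments note. */
    #include <urcu/list.h>
    #include <stdio.h>

    struct volume {
        const char           *name;
        struct cds_list_head  vol_list;
    };

    int
    main (void)
    {
        CDS_LIST_HEAD (volumes);  /* was INIT_LIST_HEAD */
        struct volume vol = { .name = "patchy" };
        struct volume *v;

        cds_list_add_tail (&vol.vol_list, &volumes);  /* was list_add_tail */
        /* was list_for_each_entry */
        cds_list_for_each_entry (v, &volumes, vol_list)
            printf ("volume: %s\n", v->name);
        return 0;
    }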
* glusterd: nfs,shd,quotad,snapd daemons refactoring | Atin Mukherjee | 2015-02-20 | 1 | -41/+0
This patch ports nfs, shd, quotad & snapd with the approach suggested in http://www.gluster.org/pipermail/gluster-devel/2014-December/043180.html

Change-Id: I4ea5b38793f87fc85cc9d2cf873727351dedffd2
BUG: 1191486
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9428
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
* glusterd/snap: Fix restore cleanup | Avra Sengupta | 2015-01-26 | 1 | -1/+1
If restore commit is successful on the originator and a few nodes, but fails on some other node, restore cleanup should reinstate the volume and the snapshot in question to the state they were in before the command was run.

Change-Id: I7bb0becc7f052f55bc818018bc84770944e76c80
BUG: 1181418
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/9441
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Refactor glusterd-utils.c | Avra Sengupta | 2015-01-08 | 1 | -0/+200
Refactor glusterd-utils.c to create glusterd-snapshot-utils.c, consisting of all snapshot utility functions.

Change-Id: Id9823a2aec9b115f9c040c9940f288d4fe753d9b
BUG: 1176770
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/9391
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>