path: root/xlators/mgmt/glusterd/src/glusterd-snapshot.c
* glusterd/snapshot: Fail the snapshot create operation if geo-rep is running
  Sachin Pandit | 2014-08-21 | 1 file changed, -1/+11

  As one of the recommendations for taking a snapshot is not to have an active
  geo-replication session, it is better to display an error saying that a
  session is active when the snapshot create command is issued.

  Change-Id: I94593dbd2659610e033ca316176dda1ac8dc5ce6
  BUG: 1129038
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8461
  Reviewed-by: Rajesh Joseph, Atin Mukherjee, Vijaikumar Mallikarjuna, Krishnan Parthasarathi
  Tested-by: Gluster Build System, Krishnan Parthasarathi

* glusterd/snapshot: Inherit the mount options of the original brick when creating snapshots
  Vijaikumar M | 2014-08-03 | 1 file changed, -87/+117

  When creating a snapshot, an LVM volume is created at the backend and is
  mounted under /var/run/gluster/snaps/... However, this mount does not inherit
  the mount options of the original brick acting as the parent for the snap.
  If the snap is restored, this could lead to performance degradation,
  functional limitations, or in extreme scenarios even potential data loss.

  Change-Id: I67d70fd83430d83dacc5380c6c928e27fb9c9e1b
  BUG: 1125180
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/8394
  Reviewed-by: Krishnan Parthasarathi
  Tested-by: Krishnan Parthasarathi

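A minimal standalone sketch of the idea described above: look up the mount options recorded for the original brick and reuse them when mounting the snapshot device. This is not the glusterd implementation; the paths, the device name, and the use of /proc/mounts are illustrative assumptions.

    /* Sketch only: reuse the original brick's mount options for the snap mount. */
    #include <mntent.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Return a copy of the mount options recorded for mnt_pt, or NULL. */
    static char *lookup_mnt_opts(const char *mnt_pt)
    {
            FILE *fp = setmntent("/proc/mounts", "r");
            struct mntent *e;
            char *opts = NULL;

            if (!fp)
                    return NULL;
            while ((e = getmntent(fp)) != NULL) {
                    if (strcmp(e->mnt_dir, mnt_pt) == 0) {
                            opts = strdup(e->mnt_opts);
                            break;
                    }
            }
            endmntent(fp);
            return opts;
    }

    int main(void)
    {
            /* Hypothetical brick and snapshot paths, for illustration only. */
            char *opts = lookup_mnt_opts("/bricks/brick1");
            char cmd[512];

            snprintf(cmd, sizeof(cmd),
                     "mount -o %s /dev/vg0/snap_lv /var/run/gluster/snaps/snap1",
                     opts ? opts : "defaults");
            printf("%s\n", cmd);  /* a real implementation would run this command */
            free(opts);
            return 0;
    }
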
* glusterd: Error msg for snapshot status for non-existent snaps should be aligned with error messages of info and list
  Vijaikumar M | 2014-07-30 | 1 file changed, -35/+42

  When a snapshot operation like status, info, or list is performed on a
  non-existent snapshot, status reports 'Snap not found' while list and info
  report 'Snapshot does not exist'. Use a consistent error message in all
  places.

  Change-Id: I7b241217dba62fda844481731a6858e4ecb12897
  BUG: 1119641
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/8309
  Reviewed-by: Krishnan Parthasarathi
  Tested-by: Krishnan Parthasarathi

* glusterd/snapshot: Proper err msg for snapshot create command
  Rajesh Joseph | 2014-07-30 | 1 file changed, -0/+77

  Problem: The snapshot command fails if one or more bricks are not thinly
  provisioned, but the error message is a generic one that confuses the user.

  Fix: Provide the correct error message in case of failure.

  Change-Id: Iad247f966423a8f73ef6da57cab7ed6cddc05861
  BUG: 1123646
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-on: http://review.gluster.org/8377
  Reviewed-by: Krishnan Parthasarathi
  Tested-by: Krishnan Parthasarathi

* feature/snapshot: Interface to delete all snapshots belonging to a system as well as to a particular volume
  Sachin Pandit | 2014-07-25 | 1 file changed, -25/+186

  Problem: With the current design we can only delete a single snapshot, and
  deleting a volume which contains snapshots is not allowed. Because of that,
  the user may be forced to delete all the snapshots manually before being
  allowed to delete a volume.

  Solution: The following interface lets the user delete all the snapshots of
  a system or of a particular volume.

  Syntax: gluster snapshot delete all
          (deletes all the snapshots present in the system)

  Syntax: gluster snapshot delete volume <volname>
          (deletes all the snapshots present in the specified volume)

  Sample output:

  Case 1: Deleting a single snapshot.
      [root@snapshot-24 glusterfs]# gluster snapshot delete snap1
      Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
      snapshot delete: snap1: snap removed successfully

  Case 2: Deleting all the snapshots in a volume.
      [root@snapshot-24 glusterfs]# gluster snapshot delete volume vol1
      Volume (vol1) contains 9 snapshot(s). Do you still want to continue and delete them? (y/n) y
      snapshot delete: snap2: snap removed successfully
      snapshot delete: snap3: snap removed successfully
      snapshot delete: snap4: snap removed successfully
      snapshot delete: snap5: snap removed successfully
      .
      .
      .

  Case 3: Deleting all the snapshots in the system.
      [root@snapshot-24 glusterfs]# gluster snapshot delete all
      System contains 4 snapshot(s). Do you still want to continue and delete them? (y/n) y
      snapshot delete: snap7: snap removed successfully
      snapshot delete: snap8: snap removed successfully
      snapshot delete: snap9: snap removed successfully
      snapshot delete: snap10: snap removed successfully

  Change-Id: Ifec8e128ab2011cbbba208376b9c92cfbe7d8d71
  BUG: 1112613
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8162
  Reviewed-by: Atin Mukherjee, Avra Sengupta, Raghavendra Bhat, Kaushal M
  Tested-by: Gluster Build System

* glusterd/snapshot: Print correct error message on cli for snapshot operations performed on a cluster with op-version less than 30600
  Vijaikumar M | 2014-07-24 | 1 file changed, -0/+10

  Currently the CLI reports 'Another transaction is in progress. Please try
  again after sometime' when a snapshot operation is performed on a cluster
  with op-version less than 30600. Print the correct error message in this
  case.

  Change-Id: I5f144428d928393c3796bde96ce6e3a40fca8141
  BUG: 1122816
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/8371
  Reviewed-by: Avra Sengupta, Sachin Pandit, Atin Mukherjee, Kaushal M
  Tested-by: Gluster Build System

* cli/snapshot: Don't display the snapshot hard-limit, soft-limit and auto-delete value in gluster volume info
  Sachin Pandit | 2014-07-24 | 1 file changed, -140/+145

  Problem: Even though snap-max-hard-limit, snap-max-soft-limit and
  auto-delete values were not set explicitly, they were shown in the output
  of gluster volume info.

  Solution: Check whether the value is already present in the dictionary
  (that is, it has been set); if it is not present, fall back to the default
  value.

  NOTE: This patch does not solve the problem where values set globally are
  displayed in gluster volume info.

  Change-Id: I61445b3d2a12eb68c38a19bea53b9051ad028050
  BUG: 1113476
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8191
  Reviewed-by: Atin Mukherjee, Avra Sengupta, Raghavendra Bhat, Kaushal M
  Tested-by: Gluster Build System

* glusterd: Fix for resource leak coverity bug 1223045.
  Raghavendra Talur | 2014-07-14 | 1 file changed, -0/+1

  Change-Id: I96261e7f5cd7b5550d3100750c80190dd932a8ab
  BUG: 789278
  Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
  Reviewed-on: http://review.gluster.org/8252
  Reviewed-by: Avra Sengupta, Rajesh Joseph
  Tested-by: Gluster Build System

* glusterd/snapshot: Update fstype for local bricks only
  Avra Sengupta | 2014-07-14 | 1 file changed, -11/+18

  While creating a snapshot, update fstype for local bricks only and not for
  bricks hosted on other nodes. Also return 0 when no cleanup is required in
  post-validation, so that a post-validation failure is not logged every time
  a pre-validation failure happens.

  Change-Id: I6364e33cfd9528e0a988ee48f3443239ee884336
  BUG: 1111060
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/8272
  Reviewed-by: Krishnan Parthasarathi
  Tested-by: Krishnan Parthasarathi

* cli/snapshot: provide --xml support for all snapshot commands
  Rajesh Joseph | 2014-07-12 | 1 file changed, -5/+81

  The --xml option can now be used with all snapshot commands; it formats the
  CLI output as XML.

  Change-Id: Ifc0ac31d2a9f91e136e87f3b51a629df7dba94e8
  BUG: 1096610
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-on: http://review.gluster.org/7663
  Reviewed-by: Sachin Pandit, Vijaikumar Mallikarjuna, Vijay Bellur
  Tested-by: Gluster Build System

* glusterd/snapshot: Change file-system uuid to file-system label
  Rajesh Joseph | 2014-07-03 | 1 file changed, -83/+57

  Problem: In XFS, changing the file-system UUID with xfs_admin causes too
  much delay on a large file-system. The time taken by the xfs_admin tool to
  change the UUID is directly proportional to the size of the file system.

  Cause: In XFS the file-system UUID is stored in the file-system
  superblocks, so changing the UUID requires all the superblocks to be
  changed.

  Fix: Instead of using the file-system UUID, use the file-system label.

  Change-Id: Ifb4c668fb29cfc1c89d9b221abc8d09dc09589ec
  BUG: 1115107
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-on: http://review.gluster.org/8215
  Reviewed-by: Atin Mukherjee, Avra Sengupta, Vijay Bellur
  Tested-by: Vijay Bellur

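A rough standalone sketch of the approach this commit describes: set an XFS label on the snapshot brick's device rather than rewriting its UUID. The device path and label value are made-up examples; glusterd's actual invocation differs.

    /* Sketch only: label the snapshot device with xfs_admin -L. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
            const char *device = "/dev/vg0/snap_lv";  /* hypothetical device */
            char label[13];                           /* XFS labels are at most 12 chars */
            char cmd[256];

            snprintf(label, sizeof(label), "%.12s", "snap1_brick1");
            snprintf(cmd, sizeof(cmd), "xfs_admin -L %s %s", label, device);
            /* Per the commit above, relabeling avoids the per-superblock UUID
             * rewrite, so it does not scale with file-system size. */
            return system(cmd) == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
    }
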
* porting: Port for FreeBSD rebased from Mike Ma's efforts
  Harshavardhana | 2014-07-02 | 1 file changed, -3/+8

  - Provides a working Gluster Management Daemon and CLI
  - Provides a working GlusterFS server and GlusterNFS server
  - Provides a working GlusterFS client
  - execinfo port from FreeBSD is moved into ./contrib/libexecinfo for ease of
    portability on NetBSD (FreeBSD 10 and OSX provide execinfo natively)
  - More portability cleanups for Darwin, FreeBSD and NetBSD
  - Provides a new rc script for FreeBSD

  Change-Id: I8dff336f97479ca5a7f9b8c6b730051c0f8ac46f
  BUG: 1111774
  Original-Author: Mike Ma <mikemandarine@gmail.com>
  Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
  Reviewed-on: http://review.gluster.org/8141
  Reviewed-by: Kaleb KEITHLEY
  Tested-by: Gluster Build System

* glusterd/snapshot: Correct the mount path check
  Avra Sengupta | 2014-06-30 | 1 file changed, -1/+1

  Before removing an LVM volume, we check whether it is mounted on the brick
  path; if not, we remove only the brick path. Correct this check to support
  restore cases, where the volname is not the non-hyphenated UUID but the
  original volume's name.

  Change-Id: If158f4651d36efa2f94523458faf826230e9c76a
  BUG: 1113975
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/8192
  Reviewed-by: Vijaikumar Mallikarjuna, Atin Mukherjee, Sachin Pandit, Krishnan Parthasarathi
  Tested-by: Justin Clift, Krishnan Parthasarathi

* glusterd/snapshot: verify availability of lvm commands
  Harshavardhana | 2014-06-28 | 1 file changed, -1/+47

  On non-Linux platforms we need to verify the run-time availability of the
  LVM-specific commands and fail accordingly with a message.

  Change-Id: Ie1e3870648f01ee129e390e2240c66e0c6249b90
  BUG: 1061685
  Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
  Reviewed-on: http://review.gluster.org/8165
  Reviewed-by: Raghavendra Bhat, Sachin Pandit
  Tested-by: Gluster Build System

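A standalone sketch of such a run-time check: probe for the LVM binaries the snapshot code depends on and fail early with a message. The command paths and the exact list are illustrative assumptions, not the actual glusterd check.

    /* Sketch only: verify that required LVM commands are executable. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            const char *cmds[] = { "/sbin/lvcreate", "/sbin/lvremove",
                                   "/sbin/lvs", NULL };  /* illustrative list */
            int i, ret = 0;

            for (i = 0; cmds[i]; i++) {
                    if (access(cmds[i], X_OK) != 0) {
                            fprintf(stderr,
                                    "LVM command %s not found or not executable; "
                                    "snapshot is not supported on this node\n",
                                    cmds[i]);
                            ret = 1;
                    }
            }
            return ret;
    }
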
* glusterd/snapshot: delete temporary folder after snapshot create
  Rajesh Joseph | 2014-06-23 | 1 file changed, -1/+9

  Snapshot create creates temporary folders in /tmp with the name
  xfsmountXXXXXX. They should be cleaned up after snapshot create.

  Change-Id: Idd0c480c1eee7f0fdeba92ae427510faac0f5234
  BUG: 1111614
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-on: http://review.gluster.org/8138
  Reviewed-by: Niels de Vos, Pranith Kumar Karampuri, Krishnan Parthasarathi
  Tested-by: Gluster Build System, Krishnan Parthasarathi

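A minimal sketch of the cleanup this commit adds, using the xfsmountXXXXXX template named in the message. The surrounding mount and probe steps are elided; this is illustrative, not glusterd code.

    /* Sketch only: remove the temporary mount directory once it is no longer needed. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
            char tmpdir[] = "/tmp/xfsmountXXXXXX";   /* template from the commit message */

            if (mkdtemp(tmpdir) == NULL) {
                    perror("mkdtemp");
                    return EXIT_FAILURE;
            }
            /* ... mount the snapshot device here, inspect it, unmount it ... */

            if (rmdir(tmpdir) != 0)                  /* the missing cleanup step */
                    perror("rmdir");
            return EXIT_SUCCESS;
    }
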
* glusterd/snapshot: cli error message corrected
  Rajesh Joseph | 2014-06-23 | 1 file changed, -1/+27

  snapshot delete used to give an invalid error message on failure.

  Change-Id: I65d6edf8004c9a1bb91f28fa987b2d1629134013
  BUG: 1111603
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-on: http://review.gluster.org/8137
  Reviewed-by: Atin Mukherjee, Sachin Pandit, Krishnan Parthasarathi
  Tested-by: Gluster Build System, Krishnan Parthasarathi

* glusterd/snapshot: Updating "global-option-version" in /var/lib/glusterd/options
  Avra Sengupta | 2014-06-18 | 1 file changed, -5/+14

  When the auto-delete option is set, we should update the
  "global-option-version" in /var/lib/glusterd/options to the next version.

  Change-Id: Ic561f33531a27cb8cca01d25632511205927268a
  BUG: 1104642
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/8099
  Reviewed-by: Atin Mukherjee, Sachin Pandit, Kaushal M
  Tested-by: Gluster Build System

* glusterd/snapshot: override postvalidate for few snapshot commands.
  Sachin Pandit | 2014-06-16 | 1 file changed, -0/+4

  snapshot info, list, config display and status do not require any operations
  to be performed during the postvalidate stage, so it is better to override
  postvalidate for these commands; otherwise a warning saying "postvalidation
  failed" appears in the log.

  Change-Id: I14d64f7bf9adee8821067dd74d5027215d7b02f8
  BUG: 1106406
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8014
  Reviewed-by: Rajesh Joseph, Atin Mukherjee, Raghavendra Bhat, Kaushal M
  Tested-by: Gluster Build System

* Get snapshot info dynamically via new rpc and infra for snapview-server to refresh snaplist
  Anand Subramanian | 2014-06-15 | 1 file changed, -0/+96

  Change-Id: I4bb312a53d88f6f4955e69a3ef2b4955ec17f26d
  BUG: 1105439
  Signed-off-by: Anand Subramanian <anands@redhat.com>
  Reviewed-on: http://review.gluster.org/8001
  Reviewed-by: Vijay Bellur
  Tested-by: Gluster Build System

* glusterd/snapshot: Update fstype and fsuuid
  Avra Sengupta | 2014-06-13 | 1 file changed, -14/+14

  Update fstype and fsuuid before creating the missed snapshot.

  Change-Id: Ie9af0065fab288bd1c1a26396428f0cee6335f97
  BUG: 1109142
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/8062
  Reviewed-by: Rajesh Joseph, Krishnan Parthasarathi
  Tested-by: Justin Clift, Krishnan Parthasarathi

* glusterd/snapshot: Fix snap delete cleanup
  Avra Sengupta | 2014-06-13 | 1 file changed, -19/+70

  During commit, the first thing that should happen is marking the snaps as
  GD_SNAP_STATUS_DECOMMISSION, so that if the node goes down the marked snap
  can be cleaned up when the node comes back. Also, during snap handshake
  while accepting a snapshot from a peer, we should check whether the snap in
  question is decommissioned; in that case we should not accept the peer's
  data.

  Change-Id: Ib4e38d1b6bf49411928623fbc9f72f2b37b72086
  BUG: 1104714
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/7996
  Reviewed-by: Rajesh Joseph, Krishnan Parthasarathi
  Tested-by: Gluster Build System, Krishnan Parthasarathi

* glusterd/snapshot: Fix for snapshot restore failure
  Rajesh Joseph | 2014-06-13 | 1 file changed, -9/+9

  Problem: If restore fails due to a quorum failure, then the subsequent
  restore fails.

  Cause: The volume store is backed up during the prevalidate phase and is
  reverted or cleaned up based on failure or success of the commit phase. In
  the case of a quorum failure we were not reverting the backup.

  Fix: Take the backup only after all the validation, including the quorum
  check, is done.

  Change-Id: I55d57f6ee85fac04a0e6cbd0291a402601c725d6
  BUG: 1109024
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-on: http://review.gluster.org/8056
  Reviewed-by: Sachin Pandit, Krishnan Parthasarathi
  Tested-by: Gluster Build System, Krishnan Parthasarathi

* glusterd/snapshot: Provide enable/disable option for snapshot auto-delete feature
  Sachin Pandit | 2014-06-13 | 1 file changed, -16/+118

  This patch provides an interface to enable or disable the auto-delete
  feature.

  Syntax: gluster snapshot config auto-delete <enable/disable>

  DETAILS:
  1) When the auto-delete feature is disabled and the soft-limit is reached,
     the user is warned about exceeding the soft-limit along with the
     successful snapshot creation message (the oldest snapshot is not
     deleted). Upon reaching the hard-limit, further snapshot creation is not
     allowed.

     Example:
     Case 1: Upon reaching soft-limit
         Snapshot create: snap successfully created.
         Warning: soft-limit of volume (vol) is reached. Snapshot creation
         is not possible once hard-limit is reached.

     Case 2: Upon reaching hard-limit
         Snapshot create: snap creation failed.
         Error: hard-limit of volume (vol) is reached, hence it is not
         possible to take further snapshots. Please delete few snapshots
         of the volume (vol) before taking another snapshot.

  2) When the auto-delete feature is enabled, as soon as the soft-limit is
     reached the oldest snapshot is deleted for every successful snapshot
     creation (same as the existing behaviour). This ensures that the number
     of snapshots created never exceeds snap-max-hard-limit.

  Change-Id: Ie3ca64bbd2c763371f541cd2e378314e73b695b4
  BUG: 1105415
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8017
  Reviewed-by: Rajesh Joseph, Krishnan Parthasarathi, Raghavendra Bhat, Vijay Bellur
  Tested-by: Justin Clift, Krishnan Parthasarathi

* glusterd/snapshot: Update file-system uuid during snap creation
  Rajesh Joseph | 2014-06-12 | 1 file changed, -0/+212

  After the brick snapshot, the file-system UUIDs of the origin brick and the
  snapshot brick are identical. If the user mounts the backend bricks by
  file-system UUID, this results in unexpected behaviour.

  Fix: After taking the LVM snapshot, create a new UUID for the snapshot
  brick.

  Change-Id: I339c90abd72dd392de195b674ea22217e63dfd48
  BUG: 1105484
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-on: http://review.gluster.org/8002
  Reviewed-by: Atin Mukherjee, Krishnan Parthasarathi
  Tested-by: Gluster Build System, Krishnan Parthasarathi

* glusterd/snapshot: Store the global snapshot config limit in options.
  Sachin Pandit | 2014-06-11 | 1 file changed, -43/+175

  Problem: Initially we saved the global config limit in glusterd.info. The
  problem with that approach is that glusterd.info is local to a particular
  glusterd and hence is not synced during the handshake of glusterds.

  Solution: Store the global snapshot config in options, which is synced
  during the handshake.

  Change-Id: I4c688bb4052a57df28aadba8581b14e2ddb510ef
  BUG: 1104642
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/7971
  Reviewed-by: Rajesh Joseph, Atin Mukherjee, Krishnan Parthasarathi
  Tested-by: Gluster Build System, Krishnan Parthasarathi

* glusterd/snapshot: Remove mnt pt, while removing lvm
  Avra Sengupta | 2014-06-10 | 1 file changed, -8/+66

  During a snapshot delete/restore, remove the mount point directory created
  at /var/run/gluster/snaps along with the LVM mounted on it. Also removed
  glusterd_recreate_vol_brick_mounts and renamed
  glusterd_store_recreate_brick_mounts to glusterd_recreate_vol_brick_mounts,
  in order to reduce redundant multiple calls.

  Change-Id: Ie87907ebbf8ef790fba07ef8581d2eb8d55658d0
  BUG: 1101463
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/7889
  Reviewed-by: Rajesh Joseph, Vijaikumar Mallikarjuna, Krishnan Parthasarathi
  Tested-by: Gluster Build System, Krishnan Parthasarathi

* glusterd/snapshot: Don't update the snap volume's op_version
  Avra Sengupta | 2014-06-10 | 1 file changed, -1/+3

  After the features.barrier key is removed from the snap_vol dict, its
  op_version should also be updated, so that it does not inherit the
  op_version the original volume had while the key was set. Also, in
  glusterd_volinfo_dup(), which duplicates volinfo, the volume op_version
  should not be updated; the original volume's op_version should be used
  instead.

  Change-Id: Ib2250700f649e6c906b14aeccee5e78f71d69780
  BUG: 1104944
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/7986
  Reviewed-by: Rajesh Joseph, Atin Mukherjee, Krishnan Parthasarathi
  Tested-by: Gluster Build System, Krishnan Parthasarathi

* mgmt/glusterd: delete only the oldest snapshot as part of autodelete
  Raghavendra Bhat | 2014-06-09 | 1 file changed, -27/+21

  This is how autodelete now behaves:

  1) Upon creation, when the max-soft-limit is exceeded, the oldest snapshot
     is deleted. I.e. if the hard limit is 100 and the default soft limit is
     90, then whenever the snap count reaches 90, every new create results in
     the deletion of the oldest snapshot.

  2) If the snap limit (hard) is reconfigured and brought down below the
     current snap count, then no snapshot is deleted and the next snapshot
     create fails, as the current snap count is more than the hard limit.

  3) When the soft limit is brought down: say the max limit is 100, the
     default soft limit is 90, and the current snap count is 90. Now the soft
     limit is changed to 70 (70%). In this case the snapshot count remains 90
     (i.e. the extra 20 snapshots above the soft limit are not deleted), and
     for every new create the oldest snapshot is deleted.

  Change-Id: I1f3b4b9eb093649f8628ebf09a15bd2570ddf340
  BUG: 1103665
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-on: http://review.gluster.org/7951
  Reviewed-by: Rajesh Joseph, Kaushal M
  Tested-by: Gluster Build System

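A toy sketch of the auto-delete selection described above: when the snapshot count exceeds the soft limit, the snapshot with the oldest creation time is picked for deletion. The data and limits below are made up for illustration.

    /* Sketch only: pick the oldest snapshot once the soft limit is exceeded. */
    #include <stdio.h>
    #include <time.h>

    struct snap { const char *name; time_t created; };

    int main(void)
    {
            struct snap snaps[] = { { "snap1", 100 }, { "snap2", 50 }, { "snap3", 200 } };
            int count = 3, soft_limit = 2, i, oldest = 0;

            if (count > soft_limit) {
                    for (i = 1; i < count; i++)
                            if (snaps[i].created < snaps[oldest].created)
                                    oldest = i;
                    printf("auto-deleting oldest snapshot %s\n", snaps[oldest].name);
            }
            return 0;
    }
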
* glusterd: Handle rpc_connect failure in the event handler
  Vijaikumar M | 2014-06-05 | 1 file changed, -4/+0

  Currently rpc_connect calls the notification function on failure in the same
  thread; the glusterd notification holds the big_lock, and hence the big_lock
  is released before rpc_connect. In snapshot creation, releasing the big-lock
  before completing the operation can cause problems like deadlock or memory
  corruption. Bricks are started as part of the snapshot create operation;
  brick_start releases the big_lock when doing brick_connect, and this might
  cause a glusterd crash. There is a similar issue in bug 1088355.

  The solution is to let the event handler handle the failure rather than
  doing it in rpc_connect.

  Change-Id: I088d44092ce845a07516c1d67abd02b220e08b38
  BUG: 1101507
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/7843
  Reviewed-by: Krishnan Parthasarathi, Jeff Darcy, Raghavendra G
  Tested-by: Gluster Build System, Raghavendra G

* glusterd/snapshot: Remove the barrier key from snap volinfo before generating brick volfile.
  Sachin Pandit | 2014-06-04 | 1 file changed, -0/+9

  Problem: If I/O is in progress during snapshot creation, the barrier value
  is enabled. Hence, after snapshot create and in turn snapshot restore, the
  barrier value remains set to enable, and further I/O on the mount point
  fails.

  Solution: Remove the barrier key from the newly created snap volinfo before
  generating the brick volfiles.

  Change-Id: I180b3adfbb364159fd353b2d0fb630e004099aa5
  BUG: 1098487
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/7892
  Reviewed-by: Krishnan Parthasarathi
  Tested-by: Krishnan Parthasarathi

* glusterd/status: First fetch the snapcount and then send the rpc call for individual snapshots for snapshot status
  Sachin Pandit | 2014-06-03 | 1 file changed, -162/+26

  Problem: Initially all the calculation was done on the glusterd side; once
  all the information related to a snap was fetched, it was aggregated into
  one dictionary and sent back to the CLI. The problem with this approach was
  that when the number of snapshots is very high, the CLI times out.

  Solution: First fetch the snapcount and snapnames from glusterd, then make
  individual calls using the snapnames fetched. This resolves the timeout
  problem.

  Change-Id: I32609b3898ed227c804dd4d8ee4516f081240756
  BUG: 1087676
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/7456
  Reviewed-by: Avra Sengupta, Rajesh Joseph, Krishnan Parthasarathi
  Tested-by: Gluster Build System, Krishnan Parthasarathi

* glusterd/snapshot: Use external umount for un-mounting snapshot
  Vijaikumar M | 2014-05-29 | 1 file changed, -17/+15

  umount2 does not clean up the /etc/mtab entry after unmounting, so use the
  external umount command for the unmount operation.

  Change-Id: I1a91a700433e2bf15dd21e6fcdd9da54441048d1
  BUG: 1098084
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/7775
  Reviewed-by: Rajesh Joseph, Avra Sengupta, Krishnan Parthasarathi
  Tested-by: Gluster Build System, Krishnan Parthasarathi

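A minimal sketch of the substitution this commit describes: shelling out to umount(8), which keeps /etc/mtab in sync, instead of calling umount2(2). The mount point is a hypothetical example and system() stands in for however the command is actually run.

    /* Sketch only: unmount via the external umount binary. */
    #include <stdio.h>
    #include <stdlib.h>

    static int umount_external(const char *mnt_pt)
    {
            char cmd[512];

            snprintf(cmd, sizeof(cmd), "umount %s", mnt_pt);
            return system(cmd);   /* stand-in for the real command runner */
    }

    int main(void)
    {
            /* Hypothetical snapshot brick mount point. */
            return umount_external("/var/run/gluster/snaps/snap1/brick1") == 0
                           ? EXIT_SUCCESS : EXIT_FAILURE;
    }
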
* glusterd/snapshot: brick_start shouldn't be done from child thread
  Vijaikumar M | 2014-05-22 | 1 file changed, -8/+19

  When creating a volume snapshot, the back-end operation of taking an LVM
  snapshot and starting the brick is executed in parallel for each brick
  using the synctask framework. brick_start was releasing the big_lock during
  brick_connect and taking the lock again. This can cause a deadlock in a
  race condition where the main thread waits for one of the synctask threads
  to finish while the synctask thread waits for the big_lock.

  The solution is not to start the brick from the synctask.

  Change-Id: Iaaf0be3070fb71e63c2de8fc2938d2b77d40057d
  BUG: 1100218
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/7842
  Reviewed-by: Atin Mukherjee, Krishnan Parthasarathi
  Tested-by: Gluster Build System, Krishnan Parthasarathi

* NetBSD build fixes
  Emmanuel Dreyfus | 2014-05-17 | 1 file changed, -2/+1

  - Shell scripts: == is specific to bash and ksh; use = instead.
  - Shell scripts: use sh instead of bash if bash functionality is not used.
  - Shell scripts: ${var/search/replace} is specific to bash.
  - sed: the -i option is specific to GNU sed.
  - Makefiles: $< outside of generic rules only works in GNU make.
  - xdrproc_t() is not universally defined as variadic; do not specify the
    third argument if it is not used.
  - NetBSD FUSE specific: only include <perfuse.h> in FUSE client code; it
    harms in other locations.
  - configure: search for gettext() in libintl, as NetBSD stores it there.
  - Like MacOS X, NetBSD has unmount(2) and not umount(2) (un vs u).

  Some other build issues previously included in this change were removed:
  - __THROW macro, addressed in http://review.gluster.com/#/c/7757/
  - getmntent() compat shared with MacOS X, in http://review.gluster.com/#/c/7722/

  This patchset also adds warning fixes for mount_glusterfs.

  Change-Id: I2f1faf8ff96362d3e2baf237b943df619011f1f4
  BUG: 764655
  Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
  Reviewed-on: http://review.gluster.org/7783
  Reviewed-by: Harshavardhana
  Tested-by: Gluster Build System

* contrib: Cross platform fixes after recent commits
  Harshavardhana | 2014-05-17 | 1 file changed, -3/+3

  - Provide a getmntent_r() version which behaves as re-entrant, with some
    caveats, specific to NetBSD/OSX.
  - Fix some apparent warning issues; always use PRI* format specifications
    and avoid %ld, which is not portable.

  Change-Id: Ib3d1a73b426e38b436b356355b97db0104a1a4a5
  BUG: 1089172
  Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
  Reviewed-on: http://review.gluster.org/7722
  Reviewed-by: Emmanuel Dreyfus, Anand Avati
  Tested-by: Gluster Build System

* mgmt/glusterd: delete oldest snapshot upon exceeding soft-limit
  Raghavendra Bhat | 2014-05-08 | 1 file changed, -0/+101

  Change-Id: I2d6ebae3ced1910f2dee43eeb9fc430e9f31073f
  BUG: 1061685
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-on: http://review.gluster.org/7587
  Reviewed-by: Rajesh Joseph, Krishnan Parthasarathi
  Tested-by: Krishnan Parthasarathi

* glusterd/snapshot: volume gets deleted if restore fails
  Rajesh Joseph | 2014-05-08 | 1 file changed, -0/+8

  If the restore command fails in the pre-validate phase, then the main volume
  gets deleted.

  Fix: Perform cleanup only when pre-validate passes.

  Change-Id: I7128c8582c3dd166a5683babb7e136ad0b56f0ac
  BUG: 1061685
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-on: http://review.gluster.org/7665
  Reviewed-by: Krishnan Parthasarathi
  Tested-by: Gluster Build System, Krishnan Parthasarathi

* glusterd/snapshot: Don't release big_lock before completing snapshot creation
  Vijaikumar M | 2014-05-08 | 1 file changed, -1/+1

  Releasing the big-lock can cause problems like deadlock or memory
  corruption. The same happened with bug 1091926, where glusterd on node-2
  entered the commit phase and released the big-lock; the originator node
  received a timeout for the commit phase and triggered a post-validate
  cleanup on node-2. Node-2 then continued to work with objects that were
  already cleaned up, which resulted in a crash.

  The solution is not to release the big-lock in the commit phase of snapshot
  creation.

  Change-Id: I571194fdb0b0ecc91bd13f2a9fc92fe4338d14dc
  BUG: 1091926
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/7579
  Reviewed-by: Rajesh Joseph, Krishnan Parthasarathi
  Tested-by: Gluster Build System, Krishnan Parthasarathi

* glusterd/snapshot: Execute lvm snapshots in parallel
  Vijaikumar M | 2014-05-08 | 1 file changed, -49/+213

  The back-end LVM snapshot is executed in parallel as a synctask. This helps
  in gaining performance when there are more bricks on a node. This patch
  also removes unwanted logs printed during snapshot cleanup.

  Change-Id: I3174cb4547ebb670eca37a98eb9d75ecb0672a90
  BUG: 1061685
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/7461
  Reviewed-by: Rajesh Joseph, Krishnan Parthasarathi
  Tested-by: Gluster Build System, Krishnan Parthasarathi

* glusterd/snapshot: Add brick-count suffix for the LVM snapshot
  Vijaikumar M | 2014-05-08 | 1 file changed, -258/+205

  When more than one brick is created from the same LVM volume group, there
  is a conflict with the LVM snapshot name we use. The solution is to add a
  brick-count suffix to the LVM snapshot name.

  Change-Id: I7258e69fe0b50e86b81c66ab1db523ab3c7cbae0
  BUG: 1091934
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/7581
  Reviewed-by: Rajesh Joseph, Krishnan Parthasarathi
  Tested-by: Gluster Build System, Krishnan Parthasarathi

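An illustrative sketch of the naming scheme described above: append a per-brick count to the snapshot volume's identifier so that two bricks carved from the same volume group get distinct LVM snapshot names. The identifier and command line are made-up examples, not glusterd's exact format.

    /* Sketch only: build distinct LVM snapshot names per brick. */
    #include <stdio.h>

    int main(void)
    {
            const char *snap_vol_id = "a1b2c3d4e5f6";   /* hypothetical snap volume id */
            char snap_lv_name[256];
            int brick_count;

            for (brick_count = 1; brick_count <= 2; brick_count++) {
                    snprintf(snap_lv_name, sizeof(snap_lv_name), "%s_%d",
                             snap_vol_id, brick_count);
                    printf("lvcreate -s ... --name %s\n", snap_lv_name);
            }
            return 0;
    }
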
* mgmt/glusterd: quorum check before taking the snapshot
  Raghavendra Bhat | 2014-05-07 | 1 file changed, -15/+159

  Without the force option: the check fails if the glusterds are not in
  quorum. If the glusterds are in quorum, then the volume quorum (i.e. the
  quorum of the bricks) is checked; the volume quorum fails even if one of
  the bricks is down.

  With the force option: even if the glusterds are not in quorum and some
  bricks are down, the quorum check of the volume (i.e. the bricks) is done,
  and if the volume quorum is met, the snapshot is taken.

  Change-Id: I06971e45d5cf09880032ef038bfe011e6c244268
  BUG: 1061685
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-on: http://review.gluster.org/7463
  Reviewed-by: Rajesh Joseph, Kaushal M
  Tested-by: Gluster Build System

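A standalone sketch of the two-level policy described above, with made-up counts. The real glusterd quorum rules are driven by volume options; a simple majority is used here purely for illustration.

    /* Sketch only: glusterd quorum, then brick quorum, before snapshotting. */
    #include <stdbool.h>
    #include <stdio.h>

    static bool majority(int up, int total)
    {
            return up > total / 2;
    }

    int main(void)
    {
            int glusterds_up = 3, glusterds_total = 3;   /* made-up counts */
            int bricks_up = 5, bricks_total = 6;
            bool force = false;    /* models "gluster snapshot create ... force" */

            if (!force) {
                    if (!majority(glusterds_up, glusterds_total)) {
                            fprintf(stderr, "glusterds not in quorum\n");
                            return 1;
                    }
                    if (bricks_up != bricks_total) {     /* even one brick down fails */
                            fprintf(stderr, "volume quorum failed: brick(s) down\n");
                            return 1;
                    }
            } else if (!majority(bricks_up, bricks_total)) {
                    fprintf(stderr, "volume quorum not met even with force\n");
                    return 1;
            }
            printf("quorum met, snapshot can proceed\n");
            return 0;
    }
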
* glusterd/snapshot: Perform missed snap creates
  Avra Sengupta | 2014-05-06 | 1 file changed, -62/+222

  When a brick is started and the glusterfsd process requests the volfile,
  the brick_name is sent in the req dict. In glusterd, after fetching the
  spec, the brick_name is looked up in the missed_snap_list, and any missing
  snap creates on the same brick are performed. After this, glusterd responds
  with the specfile.

  Also collate brick data from the nodes hosting the bricks during restore;
  in case the data is absent, the local node's data is used. This is needed
  to ensure that, during a restore, we collect the information created when a
  missed snap create is performed.

  Change-Id: I47cefdeba96f2702be810965734cf0fac61d3d2d
  BUG: 1061685
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/7551
  Reviewed-by: Santosh Pradhan, Rajesh Joseph, Krishnan Parthasarathi
  Tested-by: Gluster Build System, Krishnan Parthasarathi

* glusterd: Fetch brick mount_dirs during brick create.
  Avra Sengupta | 2014-05-06 | 1 file changed, -81/+34

  Fetch the mount directory path for a brick during volume create, add-brick,
  and replace-brick. When a snap-create is missed, use this mount directory
  information to create the brick path for the missed snap brick.

  Change-Id: Iad3eec96a32cf340f26bdf3f28e2f529e4b77e31
  BUG: 1061685
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/7550
  Reviewed-by: Rajesh Joseph, Krishnan Parthasarathi
  Tested-by: Gluster Build System, Krishnan Parthasarathi

* glusterd/snapshot: umount2 on OSX/NetBSD is unmount
  Harshavardhana | 2014-05-03 | 1 file changed, -9/+21

  Change-Id: I8de4d47bb2a54b915243ea029cce2585fba34876
  BUG: 1089172
  Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
  Reviewed-on: http://review.gluster.org/7651
  Reviewed-by: Justin Clift, Anand Avati
  Tested-by: Justin Clift

* mgmt/glusterd: handle postvalidate carefully when prevalidate fails
  Raghavendra Bhat | 2014-05-03 | 1 file changed, -22/+43

  Also changed the order of peer retrieval and snapshot retrieval upon
  glusterd start, so that the snapshot bricks can be properly resolved while
  cleaning up the snapshots.

  Change-Id: I120704e4412a9cadb8d90a9b7969f2b4a1196bc5
  BUG: 1061685
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-on: http://review.gluster.org/7494
  Reviewed-by: Kaushal M, Vijay Bellur
  Tested-by: Gluster Build System

* glusterd/snapshot: Restore cleanup
  Rajesh Joseph | 2014-05-02 | 1 file changed, -28/+425

  If restore fails for some reason, we should revert the restore operation.
  To do so, we take a backup of the vols folder before doing a restore, and
  if the restore fails we revert the changes made.

  Change-Id: I97f72aec3a34fc122bf137beb336e94db3a04dff
  BUG: 1061685
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-on: http://review.gluster.org/7548
  Reviewed-by: Santosh Pradhan, Vijay Bellur
  Tested-by: Gluster Build System

* glusterd/snapshot: Activation and De-activation of snapshot
  Joseph Fernandes | 2014-05-02 | 1 file changed, -40/+293

  Previously, snapshots were activated on creation by default and there was
  no option to activate or deactivate them on demand. This change allows the
  user to activate and deactivate snapshots on demand. The CLI goes as
  follows:

  1) Activate the snap using
     "gluster snapshot activate <snapname> [force]"
  2) Deactivate the snap using
     "gluster snapshot deactivate <snapname>"

  Note: Even now the snapshot is activated during creation.

  Change-Id: I0946d800780f26c63fa1fcaf29aabc900140448f
  BUG: 1061685
  Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  Reviewed-on: http://review.gluster.org/7476
  Reviewed-by: Vijaikumar Mallikarjuna, Vijay Bellur
  Tested-by: Gluster Build System

* glusterd/snapshot: Copy the quota config and cksum file before taking a snapshot
  Sachin Pandit | 2014-05-01 | 1 file changed, -2/+17

  The quota config and cksum file need to be copied before taking a snapshot,
  so that when the snapshot is restored these files are copied back to their
  original place and the restored snap volume can make use of them. Before
  taking a snapshot the quota files are copied to
  /var/lib/glusterd/snaps/<snapname>/quota/.

  Change-Id: Id175f28d4ee47be64d7491c6aae81a1794928490
  BUG: 1061685
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/7527
  Reviewed-by: Vijaikumar Mallikarjuna, Raghavendra G, Rajesh Joseph, Vijay Bellur
  Tested-by: Gluster Build System

* glusterd/snapshot: Copy geo-rep status and config files before taking a snapshot.
  Sachin Pandit | 2014-05-01 | 1 file changed, -0/+212

  The geo-rep status and config files need to be copied before taking a
  snapshot. The idea is that when the snapshot is restored, these config and
  status files are placed back in the geo-replication folder so that
  geo-replication can start in the same state it was in when the snapshot was
  taken.

  Details: Before a snapshot is taken, copy the status and config files
  present in /var/lib/glusterd/geo-replication/. The files copied are
  gsyncd.conf and the status files of each session belonging to the volume
  whose snapshot is about to be taken.

  Change-Id: I0234ecd846883350c59777c2505290729de0ce05
  BUG: 1061685
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/7495
  Reviewed-by: Kotresh HR, Vijaikumar Mallikarjuna, Rajesh Joseph, Vijay Bellur
  Tested-by: Gluster Build System

* glusterd/snapshot: Barrier code integration with snapshot codebase.
  Sachin Pandit | 2014-05-01 | 1 file changed, -0/+39

  Now that the new barrier translator is in place, make use of it during the
  snapshot phase: during snapshot create (pre-commit) the barrier feature is
  enabled, and after the commit it is disabled.

  Change-Id: I94212b1c06b0d9b12255ee98313e2d8549b34b17
  BUG: 1061685
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/7561
  Reviewed-by: Kaushal M, Atin Mukherjee, Vijay Bellur
  Tested-by: Gluster Build System

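A sketch of the sequence this commit describes: enable the barrier option before the snapshot commit and disable it afterwards. The helper functions below are hypothetical stand-ins, not glusterd APIs; only the features.barrier key comes from the commit messages above.

    /* Sketch only: barrier on, commit snapshot, barrier off. */
    #include <stdio.h>

    /* Hypothetical stand-in for setting a volume option. */
    static int set_volume_option(const char *vol, const char *key, const char *val)
    {
            printf("volume set %s %s %s\n", vol, key, val);
            return 0;
    }

    /* Hypothetical stand-in for the snapshot commit phase. */
    static int take_snapshot(const char *vol)
    {
            printf("taking snapshot of %s\n", vol);
            return 0;
    }

    int main(void)
    {
            const char *vol = "vol1";                                /* example volume */
            int ret;

            set_volume_option(vol, "features.barrier", "enable");    /* pre-commit */
            ret = take_snapshot(vol);                                /* commit */
            set_volume_option(vol, "features.barrier", "disable");   /* post-commit */
            return ret;
    }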