path: root/xlators/mgmt/glusterd/src/glusterd-utils.h
Commit log (newest first); each entry shows the commit message, author, date, and files/lines changed.
* glusterd: Prevent rebalance starting with old clients (Kaushal M, 2014-09-03; 1 file changed, -0/+3)

  Glusterd will prevent rebalance from starting when clients older than glusterfs-v3.6.0 are connected to a volume. This is needed as running rebalance with old clients connected could lead to data loss in some cases. The DHT xlator on newer clients (>= 3.6.0) has been fixed to prevent the data loss issues.

  Change-Id: If58640236382a2fc13f73f6b43777f01713859f7 BUG: 1136201 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/8583 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
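  The commit message doesn't show the check itself; a minimal sketch of such a gate could look like the following. The transport/peerinfo field names and the op-version constant are assumptions for illustration, not the exact patch.

      /* Sketch: refuse rebalance while any connected client of the volume
       * reports an op-version older than 3.6.0. Field names (xprt_list,
       * peerinfo.max_op_version) are assumed for illustration. */
      static int
      check_client_op_versions (glusterd_conf_t *conf, char *volname)
      {
              rpc_transport_t *xprt = NULL;

              list_for_each_entry (xprt, &conf->xprt_list, list) {
                      if (strcmp (volname, xprt->peerinfo.volname) != 0)
                              continue;
                      if (xprt->peerinfo.max_op_version < GD_OP_VERSION_3_6_0)
                              return -1;   /* old client: block rebalance */
              }

              return 0;   /* all connected clients are new enough */
      }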
* cli/glusterd: Support of volume get for a specific volume option (Atin Mukherjee, 2014-08-26; 1 file changed, -0/+9)

  This patch introduces a cli command to display a specific volume option/all volume options of a specific volume with the following usage:

  Usage: volume get <VOLNAME> <key|all>

  Change-Id: Ic88edb33c5509d7a37cd5ade6341e45e3cdbf59d BUG: 983317 Signed-off-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-on: http://review.gluster.org/8305 Reviewed-by: Kaushal M <kaushal@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
* geo-rep/glusterd: API to check active geo-rep session for the volume (Kotresh H R, 2014-08-21; 1 file changed, -0/+3)

  Requirement: Snapshot needs an API to fail the CLI if any geo-rep session is active for that volume.

  Solution: A function "gd_vol_is_geo_rep_active" is provided to check if any geo-rep session is active for that volume. An in-memory dict called 'gsync_running_slaves' is maintained in the 'volinfo' structure to keep track of active geo-rep sessions for the volume. The key 'slavenode::slavevol' with value 'running' is added to the dict whenever geo-rep is started/resumed, and the same is removed if it is stopped/paused. So the 'count' in the dict is used to decide whether geo-rep is active or not for that volume.

  Also added "this->name" to the gf_log calls in the routines this patch touches.

  Change-Id: I2b5de7dd686541c6b89c0fd0f7a4dbc92eecfac5 BUG: 1129008 Signed-off-by: Kotresh HR <khiremat@redhat.com> Reviewed-on: http://review.gluster.org/8459 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
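  A minimal sketch of the described check, using the dict field named in the message (the actual implementation may differ in detail):

      /* Sketch: geo-rep is considered active for the volume iff its
       * gsync_running_slaves dict has any entries. */
      gf_boolean_t
      gd_vol_is_geo_rep_active (glusterd_volinfo_t *volinfo)
      {
              if (volinfo->gsync_running_slaves &&
                  volinfo->gsync_running_slaves->count > 0)
                      return _gf_true;

              return _gf_false;
      }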
* glusterd/snapshot: Inherit the mount options of an original brick when creating snapshots (Vijaikumar M, 2014-08-03; 1 file changed, -5/+3)

  When creating a snapshot, an LVM snapshot is created at the backend and is mounted under /var/run/gluster/snaps/... However, this mount does not inherit the mount options of the original brick acting as the parent for the snap. If the snap is restored, this could lead to performance degradations, functional limitations, or in extreme scenarios even potential data loss.

  Change-Id: I67d70fd83430d83dacc5380c6c928e27fb9c9e1b BUG: 1125180 Signed-off-by: Vijaikumar M <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/8394 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* cli/snapshot: Don't display the snapshot hard-limit, soft-limit and auto-delete value in gluster volume info (Sachin Pandit, 2014-07-24; 1 file changed, -0/+3)

  Problem: Even though the snap-max-hard-limit, snap-max-soft-limit and auto-delete values were not set explicitly, they were shown in the output of gluster volume info.

  Solution: Check if the value is already present in the dictionary (that means it is set); if the value is not present then consider the default value.

  NOTE: This patch doesn't solve the problem where values which are set globally are displayed in gluster volume info.

  Change-Id: I61445b3d2a12eb68c38a19bea53b9051ad028050 BUG: 1113476 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8191 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Avra Sengupta <asengupt@redhat.com> Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Improvements to peer identification (Kaushal M, 2014-07-15; 1 file changed, -43/+1)

  This patch improves the peer identification mechanism in glusterd and lays down the framework for further improvements, including better multi-network support in glusterd.

  This patch mainly does two things:
  1. Extend the peerinfo object to store a list of addresses instead of a single hostname as it does now. This also includes changes so that the peer update behaviour of 'peer probe' adds to the list.
  2. Improve glusterd_friend_find_by_hostname() to perform better matching of hostnames. glusterd_friend_find_by_hostname() now does an initial quick string compare against all the peer addresses known to glusterd, after which it tries a more thorough search using address resolution and matching the struct sockaddrs.

  The above two changes together improve the peer identification situation in glusterd a lot. More information regarding the problem this patch attempts to resolve and the approach chosen can be found at http://www.gluster.org/community/documentation/index.php/Features/Better_peer_identification

  This commit is a squashed commit of the following changes, the development branch of which can be viewed at https://github.com/kshlm/glusterfs/tree/better-peer-identification or https://forge.gluster.org/~kshlm/glusterfs-core/kshlms-glusterfs/commits/better-peer-identification

  Squashed commits (author Kaushal M <kaushal@redhat.com> unless noted):
  commit 198f86e60fab74faf082eaa02657a4d8f60b92f0 (2014-07-15): Update gluster.8
  commit 35d597f3a6b3248373e727f7b7e889c92554d56c (2014-07-15): Address review comments https://review.gluster.org/#/c/8238/3
  commit 47b5331e17304477322bd2daed5bbed503c34ca1 (merge of c71b12c and 78128af, 2014-07-15): Merge branch 'master' into better-peer-identification
  commit c71b12c164330e8d19d1df4734ab34ef9a8caad2 (merge of 57bc9de and 0f5719a, 2014-07-10): Merge branch 'master' into better-peer-identification
  commit 57bc9de9e4f49ff2b1620df9906cda50a3527a25 (2014-07-10): More fixes to review comments
  commit 5482cc363a687a9e246a0780ec88acd53e218501 (2014-07-10): Code refactoring in peer-utils based on review comments https://review.gluster.org/#/c/8238/2/xlators/mgmt/glusterd/src/glusterd-peer-utils.c
  commit 89b22c34757178f64d5fbaffa31e6302f841c060 (2014-07-10): Hostnames in peer status
  commit 63ebf9485cf50d736cf640238a1ab241671fcaf1 (merge of c8c8fdd and f5f9721, 2014-07-10): Merge remote-tracking branch 'origin/master' into better-peer-identification
  commit c8c8fdd2104b5b6b8a1af739b1dd952b74e6dd66 (2014-07-09): Hostnames in xml output
  commit 732a92a0167ad7b1d70edbc35ebd8307c2766ae1 (2014-07-09): Add hostnames to cli rsp dict during list-friends
  commit fcf43e3e317508f0c225024738a988a4af8e9205 (merge of c0e2624 and 72d96e2, 2014-07-09): Merge branch 'master' into better-peer-identification
  commit c0e262416728a3c536a8347a216e471eb2251535 (2014-07-07): Use list_for_each_entry_safe when cleaning peer hostnames
  commit 6132e60224eb592f3657e535a12a3e72c772da42 (2014-07-07): Fix crash in gd_add_friend_to_dict
  commit 88ffa9a508fd5aac0b2a76e6e76487ce0cab786a (2014-07-07): gd_peerinfo_destroy -> glusterd_peerinfo_destroy
  commit 4b36930a715b1e13cd1a77d136ef1cf78a06d574 (2014-07-07): More refactoring
  commit ee559b081d608c6501c10ae22166f26eeb65690e (2014-07-07): Major refactoring of code based on review comments at https://review.gluster.org/#/c/8238/1/xlators/mgmt/glusterd/src/glusterd-peer-utils.h
  commit e96dbc7bbb05fad2a9c424de41a394b8023fe48d (merge of 2613d1d and 83c09b7, 2014-07-07): Merge remote-tracking branch 'origin/master' into better-peer-identification
  commit 2613d1daebff0c56812de821c06ed4c16bb9d447 (merge of b242cf6 and 9a50211, 2014-07-04): Merge remote-tracking branch 'origin/master' into better-peer-identification
  commit b242cf66d95dd3dd5e3975aa430baa6bd74b8a29 (2014-07-04): Fix a silly mistake, if (ctx->req) => if (ctx->req == NULL)
  commit c835ed26433830ceed57289143f596cf60421558 (2014-07-04): Fix reverse probe.
  commit 9ede17f9329b854b02e8ad159f173244789fd08c (2014-07-04): Fix friend import for existing peers
  commit 891bf74c7350064dfb008d1b7294bcec28d680fd (2014-07-04): Set first hostname in peerinfo->hostnames to peerinfo->hostname
  commit 9421d6a217381a7427a7d84f369280883ca4297a (2014-07-04): Fix gf_asprintf return val check in glusterd_store_peer_write
  commit defac978c1d94011ce8195e311839b9ffce057e7 (2014-07-04): Fix store_retrieve_peers to correctly cleanup.
  commit 00a799f5de1121b0cb7421da8285f9407063e1bd (2014-07-04): Update address list in glusterd_probe_cbk only when needed.
  commit 7a628e8a9c562d85709c69cfa13fb1774c521b75 (merge of d191985 and dc46d5e, 2014-07-04): Merge remote-tracking branch 'origin/master' into better-peer-identification
  commit d1919858e6639d2b54d716a61f662d9752ec5ff1 (2014-07-01): gf_compare_addrinfo -> gf_compare_sockaddr
  commit 31d8ef730d408f8d9ba8f504fa648f7dcd59da87 (merge of 93bbede and 86ee233, 2014-07-01): Merge remote-tracking branch 'origin/master' into better-peer-identification
  commit 93bbedeac5181e29f59b2acd08f638146812ec41 (2014-07-01): Improve glusterd_friend_find_by_hostname — it will now do an initial quick search for the peerinfo performing string comparisons on the given host string, followed by a more thorough match by resolving the addresses and comparing addrinfos instead of strings
  commit 2542cdbc45aa9cfcaf1f174686158d5565cdd07b (2014-07-01): New utility gf_compare_addrinfo
  commit 338676e8389a44bd91136eebd110197429c2566c (2014-07-01): Use gd_peer_has_address instead of strcmp
  commit 28d45be51f594328741c44455bd80ac9d64ca501 (merge of 728266e and 991dd5e, 2014-07-01): Merge branch 'master' into better-peer-identification
  commit 728266eb16d5f5a4bf36266044425ae164337f99 (merge of 7d9b87b and 2417de9, 2014-07-01): Merge remote-tracking branch 'origin/master' into better-peer-identification
  commit 7d9b87b84955ec17daeaf88a3e7462914039430f (merge of b890625 and e02275c; Kaushal M <kshlmster@gmail.com>, 2014-07-01): Merge pull request #4 from vpshastry/better-peer-identification
  commit e02275c52fb83c72ad082c098fd3e432c2b9c526 (merge of 75ee90d and b890625; Varun Shastry <vshastry@redhat.com>, 2014-06-30): Merge branch 'better-peer-identification' of https://github.com/kshlm/glusterfs into better-peer-identification-kaushal-github
  commit 75ee90d2f272e49b94d24c9ca4571e89a83055ff (Varun Shastry, 2014-06-30): glusterd: add to the list if the probed uuid pre-exists
  commit b890625d8164c660695daef3285c67979eef723e (merge of 04c5d60 and 187a7a9, 2014-06-30): Merge remote-tracking branch 'origin/master' into better-peer-identification
  commit 04c5d60cb938c8d94b214689580b40abb1b0ffcd (merge of 3a5bfa1 and e01edb6; Kaushal M <kshlmster@gmail.com>, 2014-06-28): Merge pull request #3 from vpshastry/better-peer-identification — glusterd: search through the list of hostnames in the peerinfo
  commit 0c64f3346a977f9165ac55a84a1e03c40a7573a7 (merge of e01edb6 and 3a5bfa1; Varun Shastry, 2014-06-28): Merge branch 'better-peer-identification' of https://github.com/kshlm/glusterfs into better-peer-identification-kaushal-github
  commit e01edb63153a1008db70b8fa76ae5b535e099326 (Varun Shastry, 2014-06-27): glusterd: search through the list of hostnames in the peerinfo
  commit 3a5bfa15855e660db2bfde644727371dd2d618cc (merge of cda6d31 and 371ea35; Kaushal M <kshlmster@gmail.com>, 2014-06-27): Merge pull request #1 from vpshastry/better-peer-identification — glusterd: Add hostname to list instead of replaceing upon update
  commit 371ea354f198b4182382d5403c5960c0b2add6b6 (Varun Shastry, 2014-06-27): glusterd: Add hostname to list instead of replaceing upon update
  commit cda6d3152886623ecbf46baf0048ebe0119b30b6 (2014-06-26): Import address lists
  commit 6649b54aa0440130c08e827e0a1d1bbfb840eca9 (2014-06-26): Implement export address list
  commit 55990034eead92bc9b936240029e460a4bf152d5 (2014-06-26): Use first address in list to when setting up the peer RPC.
  commit a35fde8d19b9988eb04c652fb3a5e4f84d90ad00 (2014-06-26): Properly free addresses on glusterd_peer_destroy
  commit 1988081db09ac9205f3dc7268cef8be267f3ce8b (2014-06-26): Restore peerinfo with address list implemented.
  commit 66f524d5749a12f4910dd6b06c9d91f37e1d831e (2014-06-23): Move out all peer related utilities from glusterd-utils to glusterd-peer-utils
  commit 14a2a326a4dff11b55490dca2a14f39320931340 (2014-05-27): Compilation fix
  commit c59cd351d0a102d0d5f3ea9001fd33c4edcb262f (2014-05-05): Add store support for hostname list
  commit b70325f0beb884ad12645ef40185f0bf6cedd741 (2014-05-02): Add a hostnames list to glusterd_peerinfo_t — glusterd_peerinfo_new will now init this list and add the given hostname as the list's first member

  Signed-off-by: Kaushal M <kaushal@redhat.com> Signed-off-by: Varun Shastry <vshastry@redhat.com> Change-Id: Ief3c5d6d6f16571ee2fab0a45e638b9d6506a06e BUG: 1119547 Reviewed-on: http://review.gluster.org/8238 Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
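  For illustration, a simplified version of the sockaddr comparison such a thorough match relies on (the real helper also handles corner cases such as IPv6 scope ids; needs <sys/socket.h> and <netinet/in.h>, glusterfs types assumed):

      gf_boolean_t
      gf_compare_sockaddr (const struct sockaddr *addr1,
                           const struct sockaddr *addr2)
      {
              if (addr1->sa_family != addr2->sa_family)
                      return _gf_false;

              if (addr1->sa_family == AF_INET) {
                      const struct sockaddr_in *a = (const struct sockaddr_in *)addr1;
                      const struct sockaddr_in *b = (const struct sockaddr_in *)addr2;

                      if (a->sin_addr.s_addr == b->sin_addr.s_addr)
                              return _gf_true;
              } else if (addr1->sa_family == AF_INET6) {
                      const struct sockaddr_in6 *a = (const struct sockaddr_in6 *)addr1;
                      const struct sockaddr_in6 *b = (const struct sockaddr_in6 *)addr2;

                      if (memcmp (&a->sin6_addr, &b->sin6_addr,
                                  sizeof (a->sin6_addr)) == 0)
                              return _gf_true;
              }

              return _gf_false;
      }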
* glusterd/snapshot: fixing glusterd quorum during snap operation (Joseph Fernandes, 2014-07-04; 1 file changed, -2/+11)

  During a snapshot operation, glusterd quorum will be checked only on the transaction peers, which are selected at the beginning of the operation, and not on the entire peer list, which is susceptible to change by any peer-attach operation.

  Change-Id: I089e3262cb45bc1ea4a3cef48408a9039d3fbdb9 BUG: 1114403 Signed-off-by: Joseph Fernandes <josferna@redhat.com> Reviewed-on: http://review.gluster.org/8200 Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-by: Kaushal M <kaushal@redhat.com> Tested-by: Kaushal M <kaushal@redhat.com>
* glusterd/snapshot: Change file-system uuid to file-system label (Rajesh Joseph, 2014-07-03; 1 file changed, -2/+3)

  Problem: In XFS, changing the file-system UUID with xfs_admin causes too much delay on a large file-system. The time taken by the xfs_admin tool to change the UUID is directly proportional to the size of the file system.

  Cause: In XFS the file-system UUID is stored in the file-system superblocks. Therefore, to change the UUID, all the superblocks need to be changed.

  Fix: Instead of using the file-system UUID, use the file-system label.

  Change-Id: Ifb4c668fb29cfc1c89d9b221abc8d09dc09589ec BUG: 1115107 Signed-off-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-on: http://review.gluster.org/8215 Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Avra Sengupta <asengupt@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com> Tested-by: Vijay Bellur <vbellur@redhat.com>
* mgmt/glusterd: display snapd status as part of volume status (Raghavendra Bhat, 2014-06-30; 1 file changed, -0/+4)

  * Made changes to save the port used by snapd in the info file for the volume, i.e. <glusterd-working-directory>/vols/<volname>/info

  This is how the gluster volume status of a volume for which the uss feature is enabled would look:

  [root@tatooine ~]# gluster volume status vol
  Status of volume: vol
  Gluster process                             Port    Online  Pid
  ------------------------------------------------------------------------------
  Brick tatooine:/export1/vol                 49155   Y       5041
  Snapshot Daemon on localhost                49156   Y       5080
  NFS Server on localhost                     2049    Y       5087

  Task Status of Volume vol
  ------------------------------------------------------------------------------
  There are no active volume tasks

  Change-Id: I8f3e5d7d764a728497c2a5279a07486317bd7c6d BUG: 1111041 Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-on: http://review.gluster.org/8114 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Use blocking quotad start only on quota enable (Kaushal M, 2014-06-16; 1 file changed, -0/+6)

  Having quotad always started using the blocking runner variant is problematic. In some cases where quotad was started from the epoll thread, it led to a deadlock which left glusterd unresponsive.

  This patch makes the default quotad_start function use the non-blocking runner_nowait variant. The blocking start is used only when quotad is started on the quota enable command.

  Change-Id: Ib27042748d69ea28e68badcfaddf61589aae4eba BUG: 1109872 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/8082 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
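  A sketch of the two start modes using the runner API's blocking and non-blocking variants (the helper name and argument list here are assumptions):

      /* Sketch: block the caller only on the quota enable path; the
       * default path must not block the calling (often epoll) thread. */
      static int
      quotad_start (glusterd_conf_t *priv, gf_boolean_t wait)
      {
              runner_t runner = {0,};

              runinit (&runner);
              runner_add_args (&runner, SBIN_DIR "/glusterfs",
                               "-s", "localhost",
                               "--volfile-id", "gluster/quotad", NULL);

              if (wait)
                      return runner_run (&runner);        /* quota enable only */
              else
                      return runner_run_nowait (&runner); /* default path */
      }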
* Get snapshot info dynamically via new rpc and infra for snapview-server to refresh snaplist (Anand Subramanian, 2014-06-15; 1 file changed, -0/+7)

  BUG: 1105439 Change-Id: I4bb312a53d88f6f4955e69a3ef2b4955ec17f26d Signed-off-by: Anand Subramanian <anands@redhat.com> Reviewed-on: http://review.gluster.org/8001 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/snapshot: Update fstype and fsuuid (Avra Sengupta, 2014-06-13; 1 file changed, -1/+9)

  Update fstype and fsuuid before creating a missed snapshot.

  Change-Id: Ie9af0065fab288bd1c1a26396428f0cee6335f97 BUG: 1109142 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/8062 Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Tested-by: Justin Clift <justin@gluster.org> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/snapshot: Fix snap delete cleanup (Avra Sengupta, 2014-06-13; 1 file changed, -0/+5)

  During commit, the first thing that should happen is that the snaps should be marked GD_SNAP_STATUS_DECOMMISSION, so that if the node goes down, the marked snap can be cleaned up when the node is coming back.

  Also, during snap handshake, while accepting a snapshot from a peer, we should check if the snap in question is decommissioned. In that case we shouldn't be accepting the peer data.

  Change-Id: Ib4e38d1b6bf49411928623fbc9f72f2b37b72086 BUG: 1104714 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/7996 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/snapshot: Provide enable/disable option for snapshot auto-delete feature (Sachin Pandit, 2014-06-13; 1 file changed, -0/+4)

  This patch provides an interface to enable or disable the auto-delete feature.

  Syntax: gluster snapshot config auto-delete <enable/disable>

  DETAILS:
  1) When the auto-delete feature is disabled, if the soft-limit is reached the user is given a warning about exceeding the soft-limit along with a successful snapshot-creation message (the oldest snapshot is not deleted). Upon reaching the hard-limit, further snapshot creation is not allowed.

     Example:
     ------------------------------------------------------------------
     Case 1: Upon reaching soft-limit

     Snapshot create: snap successfully created.
     Warning: soft-limit of volume (vol) is reached. Snapshot creation
     is not possible once hard-limit is reached.
     ------------------------------------------------------------------
     Case 2: Upon reaching hard-limit

     Snapshot create: snap creation failed.
     Error: hard-limit of volume (vol) is reached, hence it is not
     possible to take further snapshots. Please delete a few snapshots
     of the volume (vol) before taking another snapshot.
     ------------------------------------------------------------------

  2) When the auto-delete feature is enabled, as soon as the soft-limit is reached the oldest snapshot is deleted for every successful snapshot creation (same as the existing method). With this it is made sure that the number of snapshots created is not more than snap-max-hard-limit.

  Change-Id: Ie3ca64bbd2c763371f541cd2e378314e73b695b4 BUG: 1105415 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8017 Tested-by: Justin Clift <justin@gluster.org> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/geo-rep: Create the conf file path correctly (Avra Sengupta, 2014-06-12; 1 file changed, -2/+2)

  In the case of mount broker, the conf file path needs to be created correctly, and then the status file fetched.

  Change-Id: Iaa1b04ee46f10961a7056e834170d68282c36efa BUG: 1104649 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/7977 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kotresh HR <khiremat@redhat.com> Reviewed-by: Venky Shankar <vshankar@redhat.com> Tested-by: Venky Shankar <vshankar@redhat.com>
* glusterd/snapshot: Store the global snapshot config limit in options (Sachin Pandit, 2014-06-11; 1 file changed, -0/+3)

  Problem: Initially we used to save the global config limit in glusterd.info. The problem with that approach was that glusterd.info is local to a particular glusterd and hence is not synced during the handshake of glusterds.

  Solution: Store the global snapshot config in options, which is synced during handshake.

  Change-Id: I4c688bb4052a57df28aadba8581b14e2ddb510ef BUG: 1104642 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/7971 Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* feature/geo-rep: Fix to retain pause state of gsyncd on restart (Kotresh H R, 2014-06-05; 1 file changed, -1/+1)

  A new gsyncd option '--pause-on-start' is introduced. When a node reboots, if the status is paused, gsyncd is started with this option. After gsyncd spawns the worker and agent, the worker sends SIGSTOP to the negative pid of the monitor to enter pause mode.

  Change-Id: I5aad82c9a9fc8c243f384940b77d25e26e520d6d BUG: 1101410 Signed-off-by: Kotresh H R <khiremat@redhat.com> Reviewed-on: http://review.gluster.org/7885 Reviewed-by: Aravinda VK <avishwan@redhat.com> Reviewed-by: Venky Shankar <vshankar@redhat.com> Tested-by: Venky Shankar <vshankar@redhat.com>
* glusterd: Changes to provide interface for USS (Varun Shastry, 2014-06-03; 1 file changed, -1/+53)

  The changes which consist of the translators for USS (User Serviceable Snapshots) are submitted as a separate patch. The current patch provides the CLI access to the feature.

  Change-Id: I6b98a42fcfa82f0870d8048fe0bb53141565e9c6 BUG: 1094815 Signed-off-by: Varun Shastry <vshastry@redhat.com> Reviewed-on: http://review.gluster.org/7705 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/snapshot: Use external umount for un-mounting snapshot (Vijaikumar M, 2014-05-29; 1 file changed, -0/+3)

  umount2 doesn't clean up the /etc/mtab entry after unmounting, so use the external umount command for the un-mount operation.

  Change-Id: I1a91a700433e2bf15dd21e6fcdd9da54441048d1 BUG: 1098084 Signed-off-by: Vijaikumar M <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/7775 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Avra Sengupta <asengupt@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
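  A sketch of the replacement using the runner API (the flags and variable names here are assumptions):

      /* Sketch: shell out to umount(8) instead of calling umount2(2),
       * so the /etc/mtab entry is removed along with the mount. */
      runner_t runner = {0,};

      runinit (&runner);
      runner_add_args (&runner, "umount", "-f", brick_mount_path, NULL);
      ret = runner_run (&runner);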
* glusterd/snapshot: Invoke restore cleanup for missed restores (Avra Sengupta, 2014-05-27; 1 file changed, -0/+5)

  While performing missed restores, invoke restore cleanup to clean up the snap file (from which the vol was restored) as well as the old volinfo, and, if the old volume is a restored volume, its lvm too.

  Change-Id: Ifa5700c69f49fa0e22e0060a039c2e5c0b02b585 BUG: 1100324 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/7848 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* gsyncd / geo-rep: Partial support for Non-root geo-replication (Venky Shankar, 2014-05-14; 1 file changed, -3/+5)

  This patch enables geo-replication to be run as an unprivileged user. As of now this is just partial support, but it is very close to full functionality.

  Current limitation:
  * Geo-replication executes Gluster CLI commands on the slave via SSH. On a non-root setup, the Gluster CLI would run as an unprivileged user, failing to execute the command. As a workaround (for testing), setuid(2) the Gluster CLI executable or use the glusterd option to accept commands from an unprivileged CLI process. The commands in question are "system::" commands (for key management) and remote volume info fetching. Remote volume info fetching has been modified to use the --remote-host gluster cli option rather than ssh and remote cli execution.

  Change-Id: Ica89e2ba9b7f48fd6e1c876c477d7822dc693617 BUG: 1077452 Signed-off-by: Venky Shankar <vshankar@redhat.com> Reviewed-on: http://review.gluster.org/7658 Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd: Allow setting volume options by default based on op-version (Kaushal M, 2014-05-14; 1 file changed, -0/+4)

  A new function, glusterd_enable_default_options, is introduced, which will set some volume options on a volume based on op-version. This function is called near the end of volume create and will allow some options to be enabled based on op-version on newly created volumes. It will also be called during volume reset, to reset the options to their default values if they have changed.

  Change-Id: I91057d9e42409b17a884728b43ae3721328d4831 BUG: 1096616 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/7734 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
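  An illustrative sketch of such an op-version-gated default; the option name and the op-version constant below are placeholders, not the real list of defaults:

      /* Sketch: enable an option by default on newly created (or reset)
       * volumes, but only when the cluster op-version permits it. */
      int
      glusterd_enable_default_options (glusterd_volinfo_t *volinfo, char *option)
      {
              int              ret  = 0;
              glusterd_conf_t *conf = THIS->private;

              if (conf->op_version >= GD_OP_VERSION_3_6_0) {
                      ret = dict_set_dynstr_with_alloc (volinfo->dict,
                                                        "performance.example-option",
                                                        "on");
              }

              return ret;
      }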
* glusterd/snapshot: Quorum check should not be made if we perform the snapshot status command (Sachin Pandit, 2014-05-13; 1 file changed, -0/+4)

  Problem: The snapshot status command used to fail as it hit the quorum-check path.

  Solution: The condition check where snapname is fetched based on the presence of snap_volume is moved inside the create switch case. Also, the chunk of code which does the actual quorum check is moved to a new function to make the code more readable.

  Change-Id: Idda2d7c576cdfab3a7d087bfa74bfa616372c20e BUG: 1096700 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/7737 Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: On gaining quorum spawn_daemons in new thread (Kaushal M, 2014-05-12; 1 file changed, -0/+3)

  During startup, if a glusterd has peers, it waits till quorum is obtained to spawn bricks and other services. If peers are not present, the daemons are started during glusterd's startup itself.

  The spawning of daemons as a quorum action was done without using a separate thread, unlike the spawn on startup. Since quotad was launched using the blocking runner_run api, this led to the thread being blocked. The calling thread is almost always the epoll thread, and this leads to a deadlock. The runner_run call blocks the epoll thread waiting for quotad to start; as a result glusterd cannot serve any requests. But the startup of quotad is itself blocked, as it cannot fetch the volfile from glusterd.

  The fix for this is to launch the spawn-daemons task in a separate thread. This will free up the epoll thread and prevent the above deadlock from happening.

  Change-Id: Ife47b3591223cdfdfb2b4ea8dcd73e63f18e8749 BUG: 1095585 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/7703 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
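  A sketch of the fix under these assumptions (the helper names are illustrative; needs <pthread.h>):

      /* Sketch: run the quorum action in its own thread so the epoll
       * thread stays free to serve quotad's volfile fetch. */
      static void *
      spawn_daemons (void *opaque)
      {
              glusterd_conf_t *conf = opaque;

              glusterd_restart_bricks (conf);   /* starts bricks and daemons */
              return NULL;
      }

      /* ... in the quorum-gained path: */
      pthread_t th;
      ret = pthread_create (&th, NULL, spawn_daemons, conf);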
* glusterd/snapshot: Execute lvm snapshots in parallel (Vijaikumar M, 2014-05-08; 1 file changed, -1/+2)

  The back-end LVM snapshot is executed in parallel as a syncop task. This helps gain performance when there are more bricks in a node. This patch also removes unwanted logs printed in snapshot cleanup.

  Change-Id: I3174cb4547ebb670eca37a98eb9d75ecb0672a90 BUG: 1061685 Signed-off-by: Vijaikumar M <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/7461 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/snapshot: Add brick-count suffix for the LVM snapshot (Vijaikumar M, 2014-05-08; 1 file changed, -7/+8)

  When there is more than one brick created from the same LVM volume group, there will be a conflict with the LVM snapshot name we use. The solution is to add a brick-count suffix to the LVM snapshot name.

  Change-Id: I7258e69fe0b50e86b81c66ab1db523ab3c7cbae0 BUG: 1091934 Signed-off-by: Vijaikumar M <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/7581 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
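  A sketch of the naming scheme; the exact format string is an assumption:

      /* Sketch: make the LV snapshot name unique when several bricks
       * share one volume group, by suffixing the brick count. */
      snprintf (snap_lv_name, sizeof (snap_lv_name), "%s_%d",
                uuid_utoa (snap_vol->volume_id), brick_count);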
* mgmt/glusterd: quorum check before taking the snapshot (Raghavendra Bhat, 2014-05-07; 1 file changed, -0/+20)

  Without the force option: the quorum check fails if the glusterds are not in quorum. If the glusterds are in quorum, then volume quorum (i.e. quorum of the bricks) is checked. Volume quorum fails even if one of the bricks is down.

  With the force option: even though the glusterds are not in quorum, and some bricks are down, the quorum check of the volume (i.e. bricks) is done, and if the volume quorum is met, the snapshot is taken.

  Change-Id: I06971e45d5cf09880032ef038bfe011e6c244268 BUG: 1061685 Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-on: http://review.gluster.org/7463 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd/snapshot: Perform missed snap creates (Avra Sengupta, 2014-05-06; 1 file changed, -7/+16)

  When a brick is started and the glusterfsd process requests the volfile, the brick_name is sent in the req dict. In glusterd, after fetching the spec, the brick_name is looked up in the missed_snap_list, and any missed snap creates on the same brick are performed. After this, glusterd responds back with the specfile.

  Also collate brick data from the nodes hosting the bricks during restore. In case the data is absent, the local node's data is used. This is needed to ensure that during a restore we collect the information created when a missed snap create is performed.

  Change-Id: I47cefdeba96f2702be810965734cf0fac61d3d2d BUG: 1061685 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/7551 Reviewed-by: Santosh Pradhan <spradhan@redhat.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Fetch brick mount_dirs during brick create (Avra Sengupta, 2014-05-06; 1 file changed, -0/+7)

  Fetch the mount directory path for a brick during volume create, add-brick, and replace-brick. When a snap-create is missed, use this mount directory information to create the brick path for the missed snap brick.

  Change-Id: Iad3eec96a32cf340f26bdf3f28e2f529e4b77e31 BUG: 1061685 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/7550 Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/snapshot: Restore cleanup (Rajesh Joseph, 2014-05-02; 1 file changed, -0/+3)

  If restore fails for some reason, we should revert the restore operation. To do so, we take a backup of the vols folder before doing a restore, and if the restore fails we revert the changes done.

  Change-Id: I97f72aec3a34fc122bf137beb336e94db3a04dff BUG: 1061685 Signed-off-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-on: http://review.gluster.org/7548 Reviewed-by: Santosh Pradhan <spradhan@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/snapshot: Copy the quota config and cksum file before taking a snapshot (Sachin Pandit, 2014-05-01; 1 file changed, -0/+4)

  The quota config and cksum file need to be copied before taking a snapshot, so that when a snapshot is restored these files are copied back to the original place, and the restored snap volume can make use of these quota files.

  Before taking a snapshot, the quota files are copied to /var/lib/glusterd/snaps/<snapname>/quota/

  Change-Id: Id175f28d4ee47be64d7491c6aae81a1794928490 BUG: 1061685 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/7527 Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/snapshot: Copy geo-rep status and config files before taking a snapshot (Sachin Pandit, 2014-05-01; 1 file changed, -0/+13)

  The geo-rep status and config files need to be copied before taking a snapshot. The idea here is that when the snapshot is restored, these config and status files are placed back in the geo-replication folder, so that geo-replication can start with the same state it was in when the snapshot was taken.

  Details: Before a snapshot is taken, copy the status and config files present in /var/lib/glusterd/geo-replication/. The files copied are gsyncd.conf and the status files of each session belonging to the volume whose snapshot is about to be taken.

  Change-Id: I0234ecd846883350c59777c2505290729de0ce05 BUG: 1061685 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/7495 Reviewed-by: Kotresh HR <khiremat@redhat.com> Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/geo-rep: Looks for state_file and pid-file in gsyncd_template.conf (Avra Sengupta, 2014-05-01; 1 file changed, -1/+6)

  If entries like state_file or pid-file are missing in gsyncd.conf, or if gsyncd.conf itself is missing, glusterd looks for the missing configs in gsyncd_template.conf. Status will display "Config Corrupted" as long as the entry is missing in the config file.

  A missing state-file entry in both the config and the template will not allow starting a geo-rep session. However, stop force will successfully stop an already running session, even if the state-file entries are missing in both the config file and the template, as long as either of them has a pid-file entry.

  If the pid-file entry is missing in the gsyncd.conf file, starting a geo-rep session will not be allowed. If the pid-file entry is missing in an already started session, then stop force will fetch it from the config template and stop the session. If the pid-file entry is missing in both the config and the template, stop force will fail with an appropriate error stating that the pid-file entry is missing.

  Change-Id: I81d7cbc4af085d82895bbef46ca732555aa5365d BUG: 1059092 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/6856 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/snapshot: Compare and update snapshots during peer handshake (Avra Sengupta, 2014-04-28; 1 file changed, -1/+21)

  During a peer handshake, after the volumes have synced and the list of missed snapshots has synced, the node will perform the pending deletes and restores on this list. At this point, the current snapshot list in the node will be updated, and hence in case of conflicts arising during snapshot handshake, the peer hosting the bricks will be given precedence.

  Likewise, if there is a conflict and both peers are in the same state, i.e. either both are hosting bricks or both are not hosting bricks, then a decision can't be taken and a peer-reject will happen.

  glusterd_compare_and_update_snap() implements the following algorithm to perform the above task:
  Step 1: Start.
  Step 2: Check if the peer is missing a delete on the said snap. If yes, goto step 6.
  Step 3: Check if there is a conflict between the peer's data and the local snap. If no, goto step 5.
  Step 4: As there is a conflict, check if both the peer and the local node are hosting bricks. Based on the results perform the following:

          Peer Hosts Bricks    Local Node Hosts Bricks    Action
          Yes                  Yes                        Goto Step 7
          No                   No                         Goto Step 7
          Yes                  No                         Goto Step 8
          No                   Yes                        Goto Step 6

  Step 5: Check if the local node is missing the peer's data. If yes, goto step 9.
  Step 6: It's a no-op. Goto step 10.
  Step 7: Peer reject. Goto step 10.
  Step 8: Delete the local node's data.
  Step 9: Accept the peer's data.
  Step 10: Stop.

  Change-Id: I79be0f0f5f2a4f5c72277a4e77c2be732af432e1 BUG: 1061685 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/7525 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
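  The conflict table in step 4 condenses to a simple check; a sketch with assumed variable and helper names, not the actual function body:

      /* Sketch of steps 3-8: resolve a conflicting snap entry by
       * looking at who hosts the bricks. */
      if (!conflict) {
              /* Step 5: accept peer data if we are missing it */
      } else if (peer_hosts_bricks == local_hosts_bricks) {
              ret = -1;                 /* Step 7: both yes or both no:
                                         * undecidable, peer reject */
      } else if (peer_hosts_bricks) {
              delete_local_snap_data (); /* Step 8: local data is stale */
      } else {
              /* Step 6: no-op, the local node hosts the bricks */
      }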
* glusterd: Rename the export dictionary as peer_data (Avra Sengupta, 2014-04-28; 1 file changed, -5/+6)

  During a glusterd handshake, a dictionary is passed among the peers which contains info of volumes, global opts, and now also info of snaps and the list of missed snaps. As it now contains more than just volume-specific data, the dict is renamed in the code-base from "vols" to "peer_data".

  Change-Id: Ib457172789ddd0d8978b08bceab0988c48e9eea7 BUG: 1061685 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/7524 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/snapshot: Recreate the mount dirs and mount the lvm snapshots on node reboot (Avra Sengupta, 2014-04-28; 1 file changed, -0/+3)

  The lvm snapshots of the bricks are mounted at /var/run/gluster/snaps/ or /run/gluster/snaps. These paths, being on a tmpfs, are removed on reboot. So when glusterd starts, we need to recreate these paths, activate the respective logical volumes (lvm snapshots of the bricks), and mount these logical volumes at their respective paths.

  Change-Id: Ic5ef61e79a25d9830df717c592391965fe09db62 BUG: 1061685 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/7452 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/snapshot: Perform missed snap deletes and restores (Avra Sengupta, 2014-04-28; 1 file changed, -0/+5)

  Replacing is_volume_restored (gf_boolean_t) with restored_from_snap (uuid_t) in glusterd_volinfo_t. Also moved gd_restore_snap_volume from glusterd-volgen.c to glusterd-snapshot.c.

  Change-Id: Ic615a1658cfaffa98d4590506ac82f20bf709ad6 BUG: 1089906 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/7455 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/snapshot: Adding snap_vol_id and snap_uuid to missed_snap_list (Avra Sengupta, 2014-04-27; 1 file changed, -1/+2)

  Persisting missing snapshot info on disk as well as in memory in the following format:

  -------------NODE-UUID--------------:--------------SNAP-UUID-------------=---------SNAP-VOL-ID------------:BRICKNUM:-------BRICKPATH--------:OPERATION:STATUS
  927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=a17b4fe42c5a45f7a916438643edaa13: 3 :/brick/brick-dirs/brick3: 1 : 1
  927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=a17b4fe42c5a45f7a916438643edaa13: 3 :/brick/brick-dirs/brick3: 3 : 1
  927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=83a3cc05453b46b2a7eda4c9a9208638: 3 :/brick/brick-dirs/brick3: 1 : 1

  This data will be stored on disk at /var/lib/glusterd/snaps/missed_snaps_list. In memory we maintain the data as a list of glusterd_missed_snap_info in conf; the keys for this list are the first two fields, i.e. NODE-UUID:SNAP-UUID. For every NODE-UUID:SNAP-UUID there can be multiple operations missed on multiple bricks, so we maintain a list of glusterd_snap_op_t for every node of glusterd_missed_snap_info.

  This list is maintained or updated during the snapshot create, delete, and restore operations, which are the only operations that, if missed, are recorded in this list.

  During snapshot create, if a node is down, or a brick is down, we don't receive their mount point infos. The snap_status of such bricks is marked as -1, and their brick details are added to this list.

  During snapshot delete, we check from the originator node if any other nodes holding bricks of the said snap are down. Those are also added to the list. Also, if a node is up but the snapshot was pending for a snap brick and its snap_status is -1, we add that to the list too. When a subsequent delete entry is processed for an already existing create entry, we just mark the create entry's status as done (2), and don't add the delete entry to the list.

  During snapshot restore, we check from the originator node if any other nodes holding bricks of the said snap are down. Those are also added to the list. Also, if a node is up but the snapshot was pending for a snap brick and its snap_status is -1, we add that to the list too. Like delete, when a subsequent restore entry is processed for an already existing create entry, we just mark the create entry's status as done (2), and don't add the restore entry to the list.

  Change-Id: I54f63e28d3c40555d0f84528f38227103171f594 BUG: 1061685 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/7454 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
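  The records above suggest roughly the following in-memory shapes; these are assumed for illustration, and the real definitions in glusterd's headers may differ in detail:

      /* One missed-snap record per NODE-UUID:SNAP-UUID key ... */
      typedef struct glusterd_missed_snap_ {
              char             *node_uuid;
              char             *snap_uuid;
              struct list_head  missed_snaps; /* hangs off conf's missed list */
              struct list_head  snap_ops;     /* list of glusterd_snap_op_t  */
      } glusterd_missed_snap_info;

      /* ... each carrying one entry per missed operation on a brick. */
      typedef struct glusterd_snap_op_ {
              int               brick_num;
              char             *brick_path;
              int               op;           /* e.g. 1 = create, 3 = restore */
              int               status;       /* 1 = pending, 2 = done        */
              struct list_head  snap_ops_list;
      } glusterd_snap_op_t;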
* glusterd/snapshot-handshake: Perform handshake of missed_snaps_list (Avra Sengupta, 2014-04-25; 1 file changed, -0/+6)

  In a handshake, create a union of the missed_snap_lists of the two peers. If an entry is present, it's a no-op. If an entry is pending and the peer entry is done, mark our own entry as done. If an entry is done and the peer entry is pending, it's a no-op. If it's a new entry, add it.

  Change-Id: Idbfa49cc34871631ba8c7c56d915666311024887 BUG: 1061685 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/7453 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* build: MacOSX Porting fixes (Harshavardhana, 2014-04-24; 1 file changed, -3/+5)

  git@forge.gluster.org:~schafdog/glusterfs-core/osx-glusterfs

  Working functionality on MacOSX:
  - GlusterD (management daemon)
  - GlusterCLI (management cli)
  - GlusterFS FUSE (using OSXFUSE)
  - GlusterNFS (without NLM - issues with rpc.statd)

  Change-Id: I20193d3f8904388e47344e523b3787dbeab044ac BUG: 1089172 Signed-off-by: Harshavardhana <harsha@harshavardhana.net> Signed-off-by: Dennis Schafroth <dennis@schafroth.com> Tested-by: Harshavardhana <harsha@harshavardhana.net> Tested-by: Dennis Schafroth <dennis@schafroth.com> Reviewed-on: http://review.gluster.org/7503 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* cli,glusterd: Improve detach check validation (Kaushal M, 2014-04-11; 1 file changed, -1/+1)

  This patch improves the validation for the 'peer detach' command. A validation is added to the peer-detach code flow that checks whether any volumes exist with some bricks on the peer being detached (even force goes through this validation). This patch also guarantees that peer detach doesn't fail for a volume with all its bricks on the peer being detached, when there are no other bricks on this peer.

  The following steps need to be followed for removing a downed and unrecoverable peer:
  * If a replacement system is available:
    - add it to the cluster
    - use replace-brick to migrate bricks of the downed peer to the new peer (since data cannot be recovered anyway, use the 'replace-brick commit force' command)
  * If no replacement system is available:
    - remove bricks of the downed peer using 'remove-brick'

  Change-Id: Ie85ac5b66e87bec365fdedd8352b645bb25e1c33 BUG: 983590 Signed-off-by: Kaushal M <kaushal@redhat.com> Signed-off-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-on: http://review.gluster.org/5325 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* gluster: GlusterFS Volume Snapshot Feature (Avra Sengupta, 2014-04-11; 1 file changed, -1/+58)

  This is the initial patch for the Snapshot feature. The current patch includes the following features:
  * Snapshot create
  * Snapshot delete
  * Snapshot restore
  * Snapshot list
  * Snapshot info
  * Snapshot status
  * Snapshot config

  Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9 BUG: 1061685 Signed-off-by: shishir gowda <sgowda@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Signed-off-by: Vijaikumar M <vmallika@redhat.com> Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com> Signed-off-by: Rajesh Joseph <rjoseph@redhat.com> Signed-off-by: Joseph Fernandes <josferna@redhat.com> Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/7128 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: op-version check for brickops (Ravishankar N, 2014-03-24; 1 file changed, -0/+3)

  The cluster op-version must be at least 4 for add/remove brick to proceed. This change is required for the new afr-changelog xattr changes that will be done for glusterFS 3.6 (http://review.gluster.org/#/c/7155/).

  In add-brick, the check is done only when the replica count is increased, because only that affects the AFR xattrs. In remove-brick, the check is unconditional, failing which there would be inconsistencies in the client xlator names amongst the volfiles of different peers.

  Change-Id: If981da2f33899aed585ab70bb11c09a093c9d8e6 BUG: 1066778 Signed-off-by: Ravishankar N <ravishankar@redhat.com> Reviewed-on: http://review.gluster.org/7122 Reviewed-by: Kaushal M <kaushal@redhat.com> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: persistent client xlator / afr changelog names (Ravishankar N, 2014-03-24; 1 file changed, -0/+7)

  - Add a unique brick-id field to glusterd_brickinfo_t.
  - Persist the id to the brickinfo file.
  - Use the brick-id as the client xlator name during vol create, add-brick and replace-brick operations (a sketch of the assignment follows below).
  - For older volumes, generate the id in-memory during glusterd restore but defer writing it to the brickinfo file until the next volume set operation.
  - Send and receive the brick-ids during peer probe.

  Feature page: www.gluster.org/community/documentation/index.php/Features/persistent-AFR-changelog-xattributes
  Related patch: http://review.gluster.org/#/c/7122

  Change-Id: Ib7f1570004e33f4144476410eec2b84df4e41448 BUG: 1066778 Signed-off-by: Ravishankar N <ravishankar@redhat.com> Reviewed-on: http://review.gluster.org/7155 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Reviewed-by: Kaushal M <kaushal@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
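  A sketch of the brick-id assignment; the "<volname>-client-<N>" format mirrors the client xlator names generated for the volfiles, while the buffer and field names are assumptions:

      /* Sketch: assign a stable brick-id at volume create time so the
       * client xlator name survives restarts and brick reordering. */
      snprintf (brickinfo->brick_id, sizeof (brickinfo->brick_id),
                "%s-client-%d", volinfo->volname, brick_count);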
* glusterd: Volume locks and transaction specific opinfos (Avra Sengupta, 2014-02-10; 1 file changed, -1/+4)

  With this patch we are replacing the existing cluster-wide lock taken on glusterds across the cluster with volume locks, which are also taken on glusterds across the cluster but are volume-specific. With the volume locks we are able to perform more than one gluster operation at the same time, as long as the operations are being performed on different volumes.

  We maintain a global list of volume-locks (using a dict for a list) where the key is the volume name and the value saves the uuid of the originator glusterd. These locks are held and released per volume transaction.

  In order to achieve multiple gluster operations occurring at the same time, we also separate opinfos in the op-state-machine as a part of this patch. To do so, we generate a unique transaction-id (uuid) per gluster transaction. An opinfo is then associated with this transaction id, which is used throughout the transaction. We maintain a run-time global list (using a dict) of transaction-ids and their respective opinfos to achieve this.

  Upstream feature page: http://www.gluster.org/community/documentation/index.php/Features/glusterd-volume-locks

  Change-Id: Iaad505a854bac8de8f83beec0357eb6cde3f7ea8 BUG: 1011470 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/5994 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: add volinfo to the list data structure in an order (Vijaykumar M, 2014-02-08; 1 file changed, -0/+3)

  Currently volinfo is added at the end of the list while creating a volume. On glusterd restart, readdir will not provide an ordered list, and the data is populated in the same order as readdir. The solution is to insert the volinfo into the list in sorted order.

  Change-Id: I1716ac6abbd7dd301a7125425fc413c6833f7a48 BUG: 1039912 Signed-off-by: Vijaykumar M <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/6472 Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
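  A sketch of ordered insertion using the kernel-style list API glusterd uses (function and member names are assumptions):

      /* Sketch: place the new volinfo before the first existing entry
       * that sorts after it, keeping the volume list sorted by name. */
      static void
      volinfo_list_add_ordered (glusterd_conf_t *conf,
                                glusterd_volinfo_t *volinfo)
      {
              glusterd_volinfo_t *entry = NULL;

              list_for_each_entry (entry, &conf->volumes, vol_list) {
                      if (strcmp (volinfo->volname, entry->volname) < 0) {
                              /* list_add_tail on a node inserts before it */
                              list_add_tail (&volinfo->vol_list,
                                             &entry->vol_list);
                              return;
                      }
              }

              list_add_tail (&volinfo->vol_list, &conf->volumes);
      }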
* glusterd: make volinfo a refcnt'ed object (Krishnan Parthasarathi, 2013-12-20; 1 file changed, -0/+6)

  Add glusterd_volinfo_remove(..), which removes @volinfo from the list of volumes in the cluster and performs an unref on @volinfo.

  Change-Id: I5f546ca58f61bc334ab1bab4c51c4a21e1f66161 BUG: 1038051 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/6521 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
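  A sketch of the described semantics, assuming a matching unref helper exists:

      /* Sketch: unlink the volinfo from the cluster's volume list and
       * drop one reference; the object is freed when the count hits 0. */
      int
      glusterd_volinfo_remove (glusterd_volinfo_t *volinfo)
      {
              if (!volinfo)
                      return -1;

              list_del_init (&volinfo->vol_list);
              glusterd_volinfo_unref (volinfo);
              return 0;
      }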
* rpc,glusterd: Use rpc_clnt notifyfn to cleanup mydata (Kaushal M, 2013-12-16; 1 file changed, -0/+3)

  rpc:
  - On a RPC_TRANSPORT_CLEANUP event, rpc_clnt_notify calls the registered notifyfn with a RPC_CLNT_DESTROY event. The notifyfn should properly clean up the saved mydata on this event.
  - Break the reconnect chain when an rpc client is disabled. This will prevent new disconnect events which can lead to crashes.

  glusterd:
  - Added support for RPC_CLNT_DESTROY in glusterd_brick_rpc_notify.
  - Use a common glusterd_rpc_clnt_unref() function throughout glusterd in place of rpc_clnt_unref(). This function correctly gives up the big-lock before performing the unref.

  Change-Id: I93230441c5089039643fc9f5632477ef1b695348 BUG: 962619 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/5512 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
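  A sketch of such an unref helper that drops the big-lock around the actual unref (the lock field name is assumed):

      /* Sketch: release the big-lock while unref'ing, so notify paths
       * triggered by the unref cannot self-deadlock on it. */
      struct rpc_clnt *
      glusterd_rpc_clnt_unref (glusterd_conf_t *priv, struct rpc_clnt *rpc)
      {
              struct rpc_clnt *ret = NULL;

              if (rpc) {
                      synclock_unlock (&priv->big_lock);
                      ret = rpc_clnt_unref (rpc);
                      synclock_lock (&priv->big_lock);
              }

              return ret;
      }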
* glusterd/geo-rep: more glusterd and cli fixes for geo-rep (Ajeet Jha, 2013-12-12; 1 file changed, -0/+11)

  -> Handle option validation cases in the reset case.
  -> Create a valid conf path when glusterd restarts.
  -> Read the gsyncd worker thread status and display it.
  -> Display status-detail per worker.
  -> Fetch checkpoint info in geo-rep status.
  -> use-tarssh value validation added.

  misc: miscellaneous geo-rep fixes based on cluster, logrotate etc.
  -> cluster/dht: fix 'stime' getxattr getting overwritten.
  -> cluster/afr: return max of 'stime' values in subvol.
  -> geo-rep-logrotate: Send SIGHUP to geo-rep auxiliary.
  -> cluster/dht: fix convoluted logic while aggregating.
  -> cluster/*: fix 'stime' min/max fetch logic.

  Change-Id: I811acea0bbd6194797a3e55d89295d1ea021ac85 BUG: 1036552 Signed-off-by: Ajeet Jha <ajha@redhat.com> Reviewed-on: http://review.gluster.org/6405 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Amar Tumballi <amarts@gmail.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Improve rebalance handling during volume sync (Kaushal M, 2013-12-10; 1 file changed, -0/+3)

  Glusterd will now correctly copy existing rebalance information when a volinfo is updated during volume sync. If the existing rebalance information was stale, then any existing rebalance process will be terminated. A new rebalance process will be started only if there is no existing rebalance process. The rebalance process will not be started if the existing rebalance session had completed, failed or been stopped.

  Change-Id: I68c5984267c188734da76770ba557662d4ea3ee0 BUG: 1036464 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/6334 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>