path: root/xlators/mgmt
Commit message (Author, Date; Files changed, Lines -/+)
* glusterd: Fix management encryption issues with GlusterD (Kaushal M, 2015-07-13; 2 files, -8/+18) [tag: v3.6.4]

    Backport of commit 01b82c6 from master.

    Management encryption was enabled incorrectly in GlusterD, leading to
    cluster deadlocks. This has been fixed with this commit. The fix is in
    two parts:

    1. Correctly enable encryption for the TCP listener in GlusterD and
       re-enable own-threads for encrypted connections. Without this,
       GlusterD could try to establish blocking SSL connects in the epoll
       thread, e.g. when handling friend updates, which could lead to
       cluster deadlocks.

    2. Explicitly enable encryption for outgoing peer connections. Without
       this, SSL socket events for outgoing connections were handled in the
       epoll thread. Some events, like disconnects during peer detach,
       could lead to connection attempts happening in the epoll thread,
       leading to deadlocks again.

    Change-Id: I438c2b43f7b1965c0e04d95c000144118d36272c
    BUG: 1241785
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/11612
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
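    A minimal sketch of the pattern the first part describes, flagging a
    transport's options dict before the transport is created; the exact
    option keys here are illustrative, not taken from the patch:

        #include "dict.h"   /* libglusterfs dict API */

        /* Enable TLS plus a dedicated polling thread for one transport,
         * so blocking SSL_connect() calls never run in the shared epoll
         * thread. Option keys are illustrative. */
        static int
        enable_mgmt_encryption (dict_t *options)
        {
                int ret = -1;

                ret = dict_set_str (options, "transport.socket.ssl-enabled", "on");
                if (ret)
                        goto out;

                /* A blocking SSL handshake in the epoll thread can deadlock
                 * the cluster under concurrent friend updates; own-thread
                 * moves this connection's I/O onto its own thread. */
                ret = dict_set_str (options, "transport.socket.own-thread", "on");
        out:
                return ret;
        }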
* glusterd: use a real host name (instead of numeric) when we have one (Jeff Darcy, 2015-07-12; 1 file, -0/+6)

    Change-Id: Ie9cc201204d3d613e3e585cab066a07283db902c
    BUG: 1241275
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-on: http://review.gluster.org/11569
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
* glusterd: Store peerinfo after updating hostnames (Kaushal M, 2015-07-10; 1 file, -0/+4)

    Backport of https://review.gluster.org/11365

    Change-Id: I1d36ac63de810061d60edb28b6f591ae45d5cd3a
    BUG: 1234846
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/11395
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* core: fix Ubuntu code audit (cppcheck) results (Kaleb S. KEITHLEY, 2015-06-10; 5 files, -16/+42)

    See also http://review.gluster.org/#/c/8064/ (BZ 1109180) and
    http://review.gluster.org/#/c/7693/ (BZ 1091677).

    AFAICT these are false positives:
      [geo-replication/src/gsyncd.c:100]: (error) Memory leak: str
      [geo-replication/src/gsyncd.c:403]: (error) Memory leak: argv
      [xlators/nfs/server/src/nlm4.c:1201]: (error) Possible null pointer dereference: fde
      [xlators/cluster/afr/src/afr-self-heal-common.c:138]: (error) Possible null pointer dereference: __ptr
      [xlators/cluster/afr/src/afr-self-heal-common.c:140]: (error) Possible null pointer dereference: __ptr
      [xlators/cluster/afr/src/afr-self-heal-common.c:331]: (error) Possible null pointer dereference: __ptr

    Test programs:
      [extras/test/test-ffop.c:27]: (error) Buffer overrun possible for long command line arguments.
      [tests/basic/fops-sanity.c:55]: (error) Buffer overrun possible for long command line arguments.

    The remainder are fixed with this change-set:
      [cli/src/cli-rpc-ops.c:8883]: (error) Possible null pointer dereference: local
      [cli/src/cli-rpc-ops.c:8886]: (error) Possible null pointer dereference: local
      [contrib/uuid/gen_uuid.c:369]: (warning) %ld in format string (no. 2) requires 'long *' but the argument type is 'unsigned long *'.
      [contrib/uuid/gen_uuid.c:369]: (warning) %ld in format string (no. 3) requires 'long *' but the argument type is 'unsigned long *'.
      [xlators/cluster/dht/src/dht-rebalance.c:1734]: (error) Possible null pointer dereference: ctx
      [xlators/cluster/stripe/src/stripe.c:4940]: (error) Possible null pointer dereference: local
      [xlators/mgmt/glusterd/src/glusterd-geo-rep.c:1718]: (error) Possible null pointer dereference: command
      [xlators/mgmt/glusterd/src/glusterd-replace-brick.c:942]: (error) Resource leak: file
      [xlators/mgmt/glusterd/src/glusterd-replace-brick.c:1026]: (error) Resource leak: file
      [xlators/mgmt/glusterd/src/glusterd-sm.c:249]: (error) Possible null pointer dereference: new_ev_ctx
      [xlators/mgmt/glusterd/src/glusterd-snapshot.c:6917]: (error) Possible null pointer dereference: volinfo
      [xlators/mgmt/glusterd/src/glusterd-utils.c:4517]: (error) Possible null pointer dereference: this
      [xlators/mgmt/glusterd/src/glusterd-utils.c:6662]: (error) Possible null pointer dereference: this
      [xlators/mgmt/glusterd/src/glusterd-utils.c:7708]: (error) Possible null pointer dereference: this
      [xlators/mount/fuse/src/fuse-bridge.c:4687]: (error) Uninitialized variable: finh
      [xlators/mount/fuse/src/fuse-bridge.c:3080]: (error) Possible null pointer dereference: state
      [xlators/nfs/server/src/nfs-common.c:89]: (error) Dangerous usage of 'volname' (strncpy doesn't always null-terminate it).
      [xlators/performance/quick-read/src/quick-read.c:586]: (error) Possible null pointer dereference: iobuf

    Rerunning cppcheck after fixing the above:

    As before, test programs:
      [extras/test/test-ffop.c:27]: (error) Buffer overrun possible for long command line arguments.
      [tests/basic/fops-sanity.c:55]: (error) Buffer overrun possible for long command line arguments.

    As before, false positives:
      [geo-replication/src/gsyncd.c:100]: (error) Memory leak: str
      [geo-replication/src/gsyncd.c:403]: (error) Memory leak: argv
      [xlators/nfs/server/src/nlm4.c:1201]: (error) Possible null pointer dereference: fde
      [xlators/cluster/afr/src/afr-self-heal-common.c:138]: (error) Possible null pointer dereference: __ptr
      [xlators/cluster/afr/src/afr-self-heal-common.c:140]: (error) Possible null pointer dereference: __ptr
      [xlators/cluster/afr/src/afr-self-heal-common.c:331]: (error) Possible null pointer dereference: __ptr

    False positive after fix:
      [xlators/performance/quick-read/src/quick-read.c:584]: (error) Possible null pointer dereference: iobuf

    Change-Id: Ia5a256281156dd1df2fa900caea7694d9d0a7077
    BUG: 1122290
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/8351
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
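    Most of the fixed findings are of the "possible null pointer
    dereference" class; the usual fix is to guard the pointer before its
    first use. A self-contained sketch of that pattern (the struct and
    function here are hypothetical, not from the patch):

        #include <stddef.h>
        #include <stdio.h>

        struct local { int op_ret; };   /* hypothetical call-local state */

        static int
        sample_op (struct local *local)
        {
                int ret = -1;

                /* The guard cppcheck wants: bail out instead of
                 * dereferencing NULL further down. */
                if (local == NULL) {
                        fprintf (stderr, "invalid argument: local\n");
                        goto out;
                }

                local->op_ret = 0;      /* safe to dereference now */
                ret = 0;
        out:
                return ret;
        }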
* glusterd: Maintain local xaction_peer list for op-sm (Atin Mukherjee, 2015-05-28; 7 files, -38/+64)

    http://review.gluster.org/9269 addresses maintaining local
    xaction_peers in the syncop and mgmt_v3 frameworks. This patch
    maintains a local xaction_peers list for the op-sm framework as well.

    Backport of http://review.gluster.org/#/c/9972/

    Change-Id: Idd8484463fed196b3b18c2df7f550a3302c6e138
    BUG: 1206429
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/9972
    Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/10023
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* Fix case mistake for MKDIR_P in Makefiles (Emmanuel Dreyfus, 2015-05-19; 1 file, -1/+1)

    Some makefiles used $(mkdir_p) instead of the correctly defined
    $(MKDIR_P). The former is substituted as an empty string, leading to
    possible failures depending on the shell's tolerance: NetBSD's
    /bin/sh seems to choke more easily than Linux's /bin/bash, but even
    when the latter does not fail, it does not create the intended
    directories anyway.

    Backport of I8caed4000f3c91cb3a685453848fb854793945ed

    BUG: 1138897
    Change-Id: I48a24231ba90aa3e84bf433b63ae5cb50307f042
    Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
    Reviewed-on: http://review.gluster.org/10278
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* cluster/afr: Make read child match check in afr optional (Krutika Dhananjay, 2015-03-25; 1 file, -0/+6)

    Backport of: http://review.gluster.org/9917

    This check, introduced by commit
    c57c455347a72ebf0085add49ff59aae26c7a70d, causes a drop in readdirp
    performance, so this patch makes the behavior configurable.

    Change-Id: I4a19813cfc786504340264a5a5533a0c43a1d4a4
    BUG: 1202673
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/9929
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* glusterd: don't start gluster-nfs when NFS is disabled (Krishnan Parthasarathi, 2015-03-25; 1 file, -1/+27)

    Backport of http://review.gluster.org/9835

    Change-Id: Iff9c8e8d2233048f3e5c9ee8b5af38ba10193cb9
    BUG: 1199936
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/9843
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* glusterd/quota: remove quota-deem-statfs and quota-timeout values when quota is disabled (Sachin Pandit, 2015-03-13; 1 file, -1/+3)

    Problem: If quota is disabled, all the options associated with quota
    are removed, except quota-deem-statfs and quota-timeout. When
    "gluster volume info" is issued, the user sees that quota is disabled
    while quota-deem-statfs and quota-timeout values still exist.

    Solution: remove the quota-deem-statfs and quota-timeout options when
    quota is disabled.

    NOTE: If features.quota-deem-statfs is turned on, it takes quota
    limits into consideration while estimating fs size.

    Change-Id: I8cca6a8f47d2355799228643aedc8fc03896cfad
    BUG: 1200258
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8924
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/9845
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* glusterd: release cluster-wide locks in op-sm during failures (Atin Mukherjee, 2015-03-03; 4 files, -69/+183)

    glusterd's op-sm infrastructure has some loopholes in handling error
    cases in the locking/unlocking phases, which end up leaving stale
    locks that block further transactions.

    This patch still doesn't handle all possible unlocking error cases,
    as the framework has neither a retry mechanism nor a lock timeout.
    For example, if unlocking fails on one of the peers, the cluster-wide
    lock is not released and no further transaction can be made until the
    originator node, or the node where unlocking failed, is restarted.

    Following test cases were executed (with the help of gdb) after
    applying this patch:

    * RPC times out in lock cbk
    * Decoding of RPC response in lock cbk fails
    * RPC response is received from unknown peer in lock cbk
    * Setting peerinfo in dictionary fails while sending lock request
      for first peer in the list
    * Setting peerinfo in dictionary fails while sending lock request
      for other peers
    * Lock RPC could not be sent for peers

    For all the above test cases the success criterion is not to have any
    stale locks.

    Patch link: http://review.gluster.org/9012

    Change-Id: Ia1550341c31005c7850ee1b2697161c9ca04b01a
    BUG: 1179136
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/9012
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/9393
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* glusterd: Maintain per transaction xaction_peers list in syncop & mgmt_v3 (Atin Mukherjee, 2015-02-26; 9 files, -166/+275)

    In the current implementation the xaction_peers list is maintained in
    a global variable (glusterd_priv_t) for syncop/mgmt_v3. This means
    consistency and atomicity of the peerinfo list across transactions is
    not guaranteed when multiple syncop/mgmt_v3 transactions are in
    flight.

    We had hit this problem in mgmt_v3-locks.t, which was failing
    spuriously: two volume set operations (on two different volumes) were
    going through simultaneously, and both transactions were manipulating
    the same xaction_peers structure, which led to a corrupted list.
    Because of this, in some cases the unlock request to a peer was never
    triggered and we ended up with stale locks.

    The solution is to maintain a per-transaction local xaction_peers
    list for every syncop.

    Please note I've identified this problem in the op-sm area as well; a
    separate patch will be attempted to fix it.

    Finally, thanks to Krishnan Parthasarathi and Kaushal M for your
    constant help in getting to the root cause.

    Backport URLs:
    http://review.gluster.org/#/c/9269/
    http://review.gluster.org/#/c/9422/
    http://review.gluster.org/#/c/9350/

    Change-Id: Ib1eaac9e5c8fc319f4e7f8d2ad965bc1357a7c63
    BUG: 1176756
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/9269
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/9328
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Tested-by: Raghavendra Bhat <raghavendra@redhat.com>
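    A self-contained sketch of the shape of the fix: each transaction
    snapshots the peers it works with into its own local list instead of
    mutating one shared global list (types and names here are
    hypothetical, not the glusterd ones):

        #include <stdlib.h>
        #include <string.h>

        struct peer    { char name[64]; struct peer *next; };
        struct xaction { struct peer *xaction_peers; /* per-transaction copy */ };

        /* Copy the cluster peer list into one transaction's private
         * list, so a concurrent transaction cannot corrupt it
         * mid-flight. On failure the caller frees the partial list. */
        static int
        xaction_snapshot_peers (struct xaction *txn, const struct peer *global)
        {
                for (; global != NULL; global = global->next) {
                        struct peer *copy = calloc (1, sizeof (*copy));
                        if (copy == NULL)
                                return -1;
                        strncpy (copy->name, global->name, sizeof (copy->name) - 1);
                        copy->next = txn->xaction_peers;
                        txn->xaction_peers = copy;
                }
                return 0;
        }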
* glusterd: Fix spurious volume delete failure (Emmanuel Dreyfus, 2015-02-11; 1 file, -1/+6) [tag: v3.6.3beta1]

    If a volume uses quota, the volume delete operation should unmount
    the auxiliary quota mount using glusterd_remove_auxiliary_mount().
    This may fail with EBADF if the mount is already gone. In that
    situation, ignore the error so that volume delete succeeds.

    This fixes a spurious failure on NetBSD in tests/basic/quota.t 74-75.

    Backport of I69325f71fc2c8af254db46f696c8669a4e6bd7e4

    BUG: 1138897
    Change-Id: If0d382d44a956bb9fd8c41299f82affdf2ee0618
    Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
    Reviewed-on: http://review.gluster.org/9484
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
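    A minimal sketch of the tolerated-failure pattern described above
    (the unmount helper is hypothetical; the errno handling is the
    point):

        #include <errno.h>

        extern int do_unmount (const char *mountpoint);  /* hypothetical wrapper */

        static int
        remove_auxiliary_mount_tolerant (const char *mountpoint)
        {
                int ret = do_unmount (mountpoint);

                /* The auxiliary quota mount may already be gone by the
                 * time the volume is deleted: don't fail the delete. */
                if (ret == -1 && errno == EBADF)
                        ret = 0;
                return ret;
        }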
* mgmt/glusterd: do not restart nfs server when snapshot is deactivated (Raghavendra Bhat, 2015-01-09; 1 file, -0/+3) [tag: v3.6.2beta2]

    > Change-Id: Ie5eaa2beb4446640b22873f91e17da90d1cd8fad
    > BUG: 1174625
    > Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
    > Reviewed-on: http://review.gluster.org/9280
    > Tested-by: Gluster Build System <jenkins@build.gluster.com>
    > Reviewed-by: Kaushal M <kaushal@redhat.com>
    > Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>

    Change-Id: Ic5049f919fb444b45b3372a3b486183ed46d60f8
    BUG: 1180404
    Reviewed-on: http://review.gluster.org/9425
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* features/snapview-client: handle readdir requests differently for samba (Raghavendra Bhat, 2015-01-08; 1 file, -0/+9)

    * For samba export, the entry point is also added to the readdir
      response.

    Change-Id: I825c017e0f16db1f1890bb56e086f36e6558a1c2
    BUG: 1175742
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-on: http://review.gluster.org/9218
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/9344
* glusterd/uss: Create rebalance volfile (Avra Sengupta, 2015-01-08; 5 files, -21/+117)

    Create a new rebalance volfile which will not contain snap-view
    client translators, irrespective of the status of USS. This volfile
    will be created and regenerated every time the fuse volfile is
    generated, and will be consumed by the rebalance process.

    Change-Id: I514a8e88d06c0b8fb6949c3a3e6dc4dbe55e38af
    BUG: 1175758
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/9190
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/9339
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* glusterd/uss: if snapd is not running, return success from glusterd_handle_snapd_option (Atin Mukherjee, 2015-01-08; 1 file, -0/+3)

    glusterd_handle_snapd_option was returning failure when snapd was not
    running, because of which gluster commands were failing.

    Change-Id: I22286f4ecf28b57dfb6fb8ceb52ca8bdc66aec5d
    BUG: 1175765
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/9206
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/9311
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* rdma: Client volfile name change for supporting rdma (Anoop C S, 2015-01-06; 2 files, -4/+22)

    Backport of http://review.gluster.org/9146

    For rdma-only volumes, daemons like snapd, glustershd etc. make use
    of the tcp transport for their operations. This patch introduces
    support for rdma by default for those daemons in rdma-only volumes.
    In order to accommodate this change, we rename the tcp client volfile
    labels from <volname>-fuse.vol to <volname>.tcp-fuse.vol.

    Change-Id: Id5e5db0680a07fa6b6d003bad45748464cd7658e
    BUG: 1166515
    Signed-off-by: Anoop C S <achiraya@redhat.com>
    Reviewed-on: http://review.gluster.org/9146
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/9183
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* rdma: Wrong volfile fetch on fuse mounting tcp,rdma volume via rdma (Anoop C S, 2015-01-06; 4 files, -82/+128)

    Backport of http://review.gluster.org/8498

    As of now, for both tcp-only and rdma-only volumes, volfile names are
    in the format <volname>-fuse.vol. This patch changes the client
    volfile namings as shown below:

    * TCP mounts always use <volname>-fuse.vol
    * RDMA mounts always use <volname>.rdma-fuse.vol

    Following the above naming convention, for tcp,rdma volumes both
    volfiles will be present under /var/lib/glusterd/vols/<volname>/,
    such that an rdma-only volume can be mounted as

      mount -t glusterfs -o transport=rdma <server/ip>:/<volname> <mount-point>
    OR
      mount -t glusterfs <server/ip>:/<volname>.rdma <mount-point>

    The above command format can also be used to fuse mount a tcp,rdma
    volume via the rdma transport. Previously, when we tried to fuse
    mount a tcp,rdma volume with transport-type rdma, it silently mounted
    via tcp. This change also makes sure the correct volfile is fetched
    based on the transport-type specified from the client side.

    Change-Id: Id8b74c1c3e1e7fd323463061f8b13dd623fa6876
    BUG: 1166515
    Signed-off-by: Anoop C S <achiraya@redhat.com>
    Reviewed-on: http://review.gluster.org/8498
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/9182
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* rdma: client connection establishment takes more time (Mohammed Rafi KC, 2015-01-06; 3 files, -16/+39)

    Backport of http://review.gluster.org/8934

    For rdma-only volumes, client connection establishment with the
    server takes more than three seconds. A tcp,rdma volume has two
    ports, one for tcp and one for rdma; during pmap_signin the tcp port
    is stored under the brick name and the rdma port as "brickname.rdma".

    During the handshake, when trying to get the brick port for rdma
    clients, we are not aware of the server transport type, so we append
    '.rdma' to the brick name. For a tcp,rdma volume there will be an
    entry with '.rdma', but for an rdma-only volume the lookup fails. We
    then retry, this time without appending '.rdma', using a flag
    variable need_different_port, and it succeeds, but the reconnection
    happens only after 3 seconds.

    With this patch, rdma-only volumes also append '.rdma' during
    pmap_signin, so the handshake gets the correct port on the first try.
    Since we no longer need to retry, the need_different_port flag
    variable can be removed.

    Change-Id: I82a8a27f0e65a2e287f321e5e8292d86c6baf5b4
    BUG: 1166515
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/8934
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/9177
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
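    A minimal sketch of the naming convention involved (illustrative
    helper, not from the patch):

        #include <stdio.h>

        /* Build the portmap registration name for a brick: the rdma
         * listener signs in as "<brickname>.rdma", so the client's
         * first port query succeeds without the 3-second retry. */
        static void
        pmap_name_for_transport (const char *brickname, int is_rdma,
                                 char *out, size_t len)
        {
                if (is_rdma)
                        snprintf (out, len, "%s.rdma", brickname);
                else
                        snprintf (out, len, "%s", brickname);
        }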
* rdma: rdma fuse mount hangs for tcp,rdma volumes if brick is down (Mohammed Rafi KC, 2015-01-06; 1 file, -6/+14)

    Backport of http://review.gluster.org/8762

    When we try to mount a tcp,rdma volume as rdma transport using the
    FUSE protocol, the mount will hang if the brick is down. When we kill
    the brick process, the glusterfsd process receives the signal and
    calls pmap_signout only for the tcp listener port. For tcp,rdma there
    are two ports, and the rdma port is never signed out. So the mount
    process tries to connect to a port which is not open and keeps trying
    to connect.

    This patch calls pmap_signout for the rdma port as well, so when the
    mount tries to get the brick port, it will fail.

    Change-Id: I73f90d7340afa3b0b1278924206f1488e4094a62
    BUG: 1166515
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/8762
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/9176
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* rdma: mount fails for nfs protocol in rdma volumes (Jiffin Tony Thottan, 2015-01-06; 6 files, -19/+21)

    When we mount an rdma-only volume or a tcp,rdma volume through the
    nfs protocol using a newly peer-probed node's IP (nfs-server on the
    new node), the mount fails for the rdma-only volume, and for tcp,rdma
    volumes the mount falls back to the tcp protocol. That is, newly
    added servers always get the transport type "socket". This is because
    nfs_transport_type is exported correctly but imported wrongly.

    This can be verified as follows:
    * Create an rdma-only volume or a tcp,rdma volume.
    * Add a new server into the trusted pool.
    * Check the client transport type specified in the nfs-server
      volgraph: it will always be tcp (socket type) instead of rdma.
    * Also, for an rdma-only volume, the nfs log shows a 'connection
      refused' message for every reconnect between the nfs server and
      glusterfsd.

    Backport of http://review.gluster.org/8975
    (cherry picked from commit f380e2029d608f97e3ba9a728605e1d798b09e8d)

    > BUG: 1157381
    > Change-Id: I6bd4979e31adfc72af92c1da06a332557b6289e2
    > Author: Jiffin Tony Thottan <jthottan@redhat.com>
    > Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
    > Reviewed-on: http://review.gluster.org/8975
    > Reviewed-by: Meghana M <mmadhusu@redhat.com>
    > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    > Reviewed-by: Niels de Vos <ndevos@redhat.com>
    > Tested-by: Niels de Vos <ndevos@redhat.com>

    Change-Id: I328c17b07e877fe3b29ca832bf6f2291cea16bbe
    BUG: 1166505
    Reviewed-on: http://review.gluster.org/9172
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: soumya k <skoduri@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* USS: Kill snapd during glusterd restart if USS is disabled (Sachin Pandit, 2014-12-24; 1 file, -5/+25)

    Problem: When glusterd is down on one of the nodes and USS is
    disabled during that time, snapd will still be running on the node
    where glusterd was down.

    Solution: During restart of glusterd, check if USS is disabled; if
    so, issue a kill for snapd.

    NOTE: The test case which I wrote in my previous patchset is facing
    some spurious failures, hence I thought of removing that test case.
    I'll add the test case once the issue is resolved.

    Change-Id: I2870ebb4b257d863cdfc319e8485b19e932576e9
    BUG: 1175735
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/9062
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/9307
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* mgmt/glusterd: Validate the options of uss (vmallika, 2014-12-24; 2 files, -7/+15)

    Change-Id: Id13dc4cd3f5246446a9dfeabc9caa52f91477524
    BUG: 1175755
    Signed-off-by: Varun Shastry <vshastry@redhat.com>
    Signed-off-by: vmallika <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/8133
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/9304
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* glusterd/snapshot: Don't append nouuid mount option for snapshot brick if original brick already has this option (Sachin Pandit, 2014-12-24; 2 files, -1/+36)

    Change-Id: I2841d2ac371a3e9505f6061f35d1d447946c0bae
    BUG: 1175732
    Signed-off-by: vmallika <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/8526
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/9303
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* glusterd/snapshot: Check if LVM device path exists before delete (Avra Sengupta, 2014-12-22; 1 file, -46/+59)

    Check if the LV is present before deleting it. Where the LV is absent
    (already deleted?), there is no need to fail the snap delete
    operation.

    Also check if the LV is mounted before trying umount. If it isn't,
    only remove the LV.

    Change-Id: I0f5b2674797299d8748c6fac5b091f0caba65ca4
    BUG: 1175754
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/8954
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/9299
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
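    A minimal sketch of the existence check described above (device path
    and helper are illustrative):

        #include <sys/stat.h>
        #include <errno.h>

        /* Return 0 when the LV device node is already gone, so the
         * caller skips lvremove instead of failing the snap delete. */
        static int
        snap_lv_needs_remove (const char *device_path)
        {
                struct stat st;

                if (stat (device_path, &st) == -1 && errno == ENOENT)
                        return 0;   /* already deleted: nothing to do */
                return 1;
        }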
* uss/gluster: Move all uss related logs into subfolder (vmallika, 2014-12-18; 1 file, -4/+12)

    For USS we have one snapd log per volume, plus one log per snap of
    that volume. For example, if there are 4 volumes having 256 snaps
    each and USS is enabled, the total number of USS logs under
    /var/log/glusterfs would be 1028:

      total logs = 4 (snapd per volume) + 4 (volumes) * 256 (snaps) = 1028

    Hence, it makes sense to move them into a sub-folder structure like
    /var/log/glusterfs/snaps/<vol-name>/<snapd + snaps logs>.

    Change-Id: I29262e6458c3906916923cd67d1145d6ae10bec3
    BUG: 1175728
    Signed-off-by: vmallika <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/9050
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/9298
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* USS: Display only the activated snapshots (Sachin Pandit, 2014-12-18; 1 file, -0/+6)

    Instead of displaying all the snapshots in the uss world, it is
    better if we display only the activated snapshots.

    Change-Id: I70d3ec212b62ec15956ae3e826bc4201d8dedd17
    BUG: 1170548
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8958
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/9242
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* glusterd/snapshot: Snapshot should be deactivated when it is created (vmallika, 2014-12-18; 2 files, -134/+203)

    By default a snapshot should be deactivated when it is created, and
    this should be a configurable option. The behaviour can be configured
    with the command below:

      gluster snapshot config activate-on-create <enable|disable>

    Change-Id: I1911595c32beed43bb2fca4bf99f0d264b422513
    BUG: 1170921
    Signed-off-by: vmallika <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/8985
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/9241
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* uss/gluster: Fix typo error in the description for USS under "gluster volume set help" (vmallika, 2014-12-18; 1 file, -1/+1)

    "gluster volume set help" for uss shows "User Servicable Snapshots"
    whereas it should be "User Serviceable Snapshots".

    > Change-Id: I3cc8b3ea2cb6d209e1a12678eb7d0e68f4160d99
    > BUG: 1160236
    > Signed-off-by: vmallika <vmallika@redhat.com>
    > Reviewed-on: http://review.gluster.org/9041
    > Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    > Tested-by: Gluster Build System <jenkins@build.gluster.com>
    > Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    > Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    > Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
    > Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>

    Change-Id: Id2de0e353d3307023da9239f6dee8b59e8eb0d8f
    BUG: 1175645
    Reviewed-on: http://review.gluster.org/9295
    Reviewed-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Tested-by: Raghavendra Bhat <raghavendra@redhat.com>
* mgmt/glusterd: Out of bounds access to fs_info struct (Petr Medonos, 2014-12-12; 1 file, -1/+1)

    Change-Id: Ibc75713d35c9cbafd493c8cf6b5294eaf29f05d4
    BUG: 1163920
    Signed-off-by: Petr Medonos <petr.medonos@etnetera.cz>
    Reviewed-on: http://review.gluster.org/9126
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/geo-rep: Fix glusterd crash in non-originator slave node (Kotresh HR, 2014-11-12; 1 file, -0/+1)

    Problem: glusterd crashes in the non-originator slave node during
    geo-rep create push-pem.

    Cause: In glusterd_op_copy_file, the value of the key
    "common_pem_contents" is freed explicitly even after dict_set
    succeeds, when it is already taken care of by dict_free.

    Solution: Free only in failure cases, before dict_set.

    BUG: 1159210
    Change-Id: I726f923915fc24de6588469c27f2cc996c20c59d
    Reviewed-on: http://review.gluster.org/9018/
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/9026
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
    Tested-by: Venky Shankar <vshankar@redhat.com>
* geo-rep/glusterd: Enable changelog and marker during geo-rep create (Kotresh HR, 2014-11-12; 1 file, -10/+10)

    PROBLEM: Geo-rep misses a few files to sync when I/O happened during
    geo-rep start.

    ANALYSIS: To use the available changelogs to handle deletes/renames,
    an 'xsync upper limit' was introduced, which limits the xsync crawl
    to the changelog register time. But there is a small time interval
    between the changelog register time and the time changelog is
    actually enabled. If there is I/O during this interval, it will not
    be synced through xsync, as it is beyond the changelog register time,
    nor through changelog, as changelog is not actually enabled yet.

    SOLUTION: Enable changelog and marker during geo-rep create instead
    of geo-rep start, so that entries are captured in the changelog and
    the above-mentioned interval is nullified.

    BUG: 1159205
    Change-Id: If5203eb1cfcbde3999f97a5f1a6a1af4875ac358
    Reviewed-on: http://review.gluster.org/8650
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/9023
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
    Tested-by: Venky Shankar <vshankar@redhat.com>
* glusterd/geo-rep: Fix race in updating status file (Kotresh HR, 2014-11-12; 1 file, -18/+26)

    When geo-rep is in the paused state and a node in the cluster is
    rebooted, the geo-rep status goes to "faulty (Paused)" and no worker
    processes are started on that node yet. In this state, when geo-rep
    is resumed, there is a race between glusterd and gsyncd in updating
    the status file, because geo-rep is resumed first and the status is
    updated afterwards. glusterd tries to update the status to the
    previous state, while gsyncd on restart tries to update it to
    "Initializing...(Paused)", as it was paused previously. If gsyncd
    wins, the state is always paused even though the process is not
    actually paused.

    The solution is for glusterd to update the status file first and then
    resume.

    BUG: 1159195
    Change-Id: I4c06f42226db98f5a3c49b90f31ecf6cf2b6d0cb
    Reviewed-on: http://review.gluster.org/8911
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/9021
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
    Tested-by: Venky Shankar <vshankar@redhat.com>
* glusterd: make bricks respect 'transport.socket.bind-address' (Niels de Vos, 2014-10-21; 1 file, -0/+8)

    When GlusterD starts the brick processes, these will listen on all
    interfaces. When the 'transport.socket.bind-address' option is set in
    glusterd.vol, the brick processes should only listen on the specified
    hostname or IP-address.

    Cherry picked from commit 430b874c4f1a171c106a9e1e6507e14e79805a1d:

    > Change-Id: I8e7d1f294904081137c23f3446261329d0d13bba
    > BUG: 1149863
    > Signed-off-by: Niels de Vos <ndevos@redhat.com>
    > Reviewed-on: http://review.gluster.org/8910
    > Tested-by: Gluster Build System <jenkins@build.gluster.com>
    > Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    > Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

    Change-Id: I8e7d1f294904081137c23f3446261329d0d13bba
    BUG: 1151745
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/8951
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
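    For reference, the option in question lives in glusterd.vol; a
    sketch of a configuration this fix applies to (the address is a
    placeholder, and the surrounding options are shown only for
    context):

        volume management
            type mgmt/glusterd
            option working-directory /var/lib/glusterd
            option transport-type socket
            option transport.socket.bind-address 192.0.2.10
        end-volume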
* glusterd: pass the bind-address to starting services (Niels de Vos, 2014-10-21; 1 file, -4/+16)

    When the transport.socket.bind-address option is set to a hostname or
    ip-address, the services started by GlusterD fail to connect to the
    management daemon. GlusterD always forces the services to connect to
    the "localhost" hostname, even if it is not listening on that
    address.

    GlusterD should take the transport.socket.bind-address option into
    consideration and pass it to the glusterfs clients with the -s or
    --volfile-server commandline parameter.

    Note that this is not a change that removes all hard-coded
    dependencies on "localhost". This change merely makes it possible to
    start the required services when the transport.socket.bind-address
    option is set.

    Cherry picked from commit 283fa797f4bf98130b42c36972305b8cb6e5aaaf:

    > Change-Id: I36a0ed6c69342e6327adc258fea023929055d7f2
    > BUG: 1149863
    > Signed-off-by: Niels de Vos <ndevos@redhat.com>
    > Reviewed-on: http://review.gluster.org/8908
    > Tested-by: Gluster Build System <jenkins@build.gluster.com>
    > Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    > Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    > Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

    Change-Id: I36a0ed6c69342e6327adc258fea023929055d7f2
    BUG: 1151745
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/8950
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Do not hardcode umount(8) path, emulate lazy umount (Emmanuel Dreyfus, 2014-10-03; 5 files, -74/+26)

    1) Use a system-dependent macro for the umount(8) location instead of
       relying on $PATH to find it, for security and portability.

    2) Introduce gf_umount_lazy() to replace umount -l invocations (-l
       for lazy), which is only supported on Linux. On Linux the behavior
       is unchanged; on other systems, we fork an external process
       (umountd) that takes care of periodically attempting to unmount,
       and optionally rmdir.

    Backport of Ia91167c0652f8ddab85136324b08f87c5ac1edd51d

    BUG: 1138897
    Change-Id: I9d82c87e85af0dee79f2de39bc697c486b7103c8
    Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
    Reviewed-on: http://review.gluster.org/8863
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Csaba Henk <csaba@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
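    A minimal sketch of the umountd idea under the stated assumptions
    (the platform umount wrapper is hypothetical; Linux has umount(2),
    the BSDs unmount(2)):

        #include <errno.h>
        #include <unistd.h>

        extern int platform_umount (const char *path);  /* hypothetical wrapper */

        /* Emulate "umount -l": keep retrying while the mount is busy,
         * then optionally remove the mount point directory. */
        static int
        umountd_loop (const char *path, int rmdir_after)
        {
                while (platform_umount (path) != 0) {
                        if (errno != EBUSY)
                                return -1;   /* real failure: give up */
                        sleep (1);           /* busy: retry later */
                }
                if (rmdir_after)
                        (void) rmdir (path);
                return 0;
        }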
* glusterd/quota: Heal pgfid xattr on existing data when quota is enabled (vmallika, 2014-09-30; 1 file, -3/+3)

    This is a backport of http://review.gluster.org/#/c/8878/

    The pgfid extended attributes are used to construct the ancestry path
    (from the file to the volume root) for nameless lookups on files. As
    NFS relies on nameless lookups heavily, quota enforcement through NFS
    would be inconsistent if quota were to be enabled on a volume with
    existing data.

    The solution is to heal the pgfid extended attributes as part of the
    lookup performed by the quota-crawl process. In a posix lookup, check
    for the pgfid xattr, and if it is missing, set it.

    BUG: 1147953
    Change-Id: I707d91a056e07452bfd1e070af5eddaa752a84ac
    Signed-off-by: vmallika <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/8890
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
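    A sketch of the heal step, assuming a "trusted.pgfid.<parent-gfid>"
    key format with a hard-link count as the value; treat both details
    as illustrative rather than the posix xlator's exact scheme:

        #include <sys/xattr.h>
        #include <errno.h>
        #include <stdio.h>
        #include <stdint.h>

        /* During lookup: if the pgfid xattr linking this file to its
         * parent GFID is missing (pre-quota data), create it. */
        static int
        heal_pgfid_xattr (const char *path, const char *parent_gfid_str)
        {
                char     key[256];
                uint32_t nlink = 1;

                snprintf (key, sizeof (key), "trusted.pgfid.%s",
                          parent_gfid_str);

                if (lgetxattr (path, key, NULL, 0) == -1 && errno == ENODATA)
                        return lsetxattr (path, key, &nlink,
                                          sizeof (nlink), 0);
                return 0;
        }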
* glusterd: Improve debugging experience for glusterd locks (Krishnan Parthasarathi, 2014-09-26; 1 file, -8/+10)

    Today, when glusterd's internal locking mechanism fails with an
    invalid type, or when another competing lock is being held, the log
    message doesn't directly provide enough information about which
    command saw this (first). The following snippet shows how a failure
    would look in the log file; this greatly assists debugging:

    [2014-09-03 04:57:58.549418] E [glusterd-locks.c:520:glusterd_mgmt_v3_lock]
    (-->/usr/local/lib/glusterfs/3.7dev/xlator/mgmt/glusterd.so(__glusterd_handle_create_volume+0x801) [0x7f30b071e651]
    (-->/usr/local/lib/glusterfs/3.7dev/xlator/mgmt/glusterd.so(glusterd_op_begin_synctask+0x2c) [0x7f30b072e19c]
    (-->/usr/local/lib/glusterfs/3.7dev/xlator/mgmt/glusterd.so(gd_sync_task_begin+0x55d) [0x7f30b072de6d])))
    0-management: Invalid entity. Cannot perform locking operation on vol types

    Change-Id: I0595f49d60e620e8b065f3506bdb147ccee383a7
    BUG: 1145093
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/8842
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* OSX/FreeBSD: Regression fix (Harshavardhana, 2014-09-24; 1 file, -4/+16)

    Introduced in "1f6e992f1aaa676be5bd47d17e58f1171825cf43"

    Change-Id: Id684e2f082def7d01ef3c258ea6598da6205591f
    BUG: 1117822
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-on: http://review.gluster.org/8840
    Reviewed-by: Justin Clift <justin@gluster.org>
    Tested-by: Justin Clift <justin@gluster.org>
* Do not hardcode setfattr(1) path (Emmanuel Dreyfus, 2014-09-24; 1 file, -1/+13)

    Turn the setfattr(1) absolute path into an OS-dependent macro. Let a
    compiler option override it to fit custom installations if needed.

    Backport of I8f469c5741a85b6e8d8f6299a9540b3d64611d2f

    BUG: 1138897
    Change-Id: I279752f2ec5db1abc25830cb9a23290cc401d517
    Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
    Reviewed-on: http://review.gluster.org/8828
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Add last successful glusterd lock backtrace (Krishnan Parthasarathi, 2014-09-24; 1 file, -0/+14)

    Also, moved the backtrace fetching logic to a separate function, and
    modified it to work under memory pressure conditions.

    Change-Id: Ie38bea425a085770f41831314aeda95595177ece
    BUG: 1145093
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/8794
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
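    A sketch of one way to fetch a backtrace that keeps working under
    memory pressure (the idea, not necessarily the patch's exact code):
    backtrace_symbols() allocates with malloc(), which may fail, whereas
    backtrace_symbols_fd() writes straight to a file descriptor:

        #include <execinfo.h>
        #include <unistd.h>

        #define BT_DEPTH 16

        static void
        log_backtrace_nomalloc (int fd)
        {
                void *frames[BT_DEPTH];
                int   n = backtrace (frames, BT_DEPTH);

                /* No heap allocation: symbols are resolved and written
                 * directly to the given descriptor (e.g. the log file). */
                backtrace_symbols_fd (frames, n, fd);
        }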
* glusterd: Authenticate management handshake requests (Kaushal M, 2014-09-24; 3 files, -0/+67)

    Backport of 371bb42 "glusterd: Authenticate management handshake
    requests" from master.

    Management handshake requests, which are used to validate the
    op-version supported by the peers, are now only allowed if
    - the glusterd doesn't have any other peer, or
    - the request was sent by another peer.

    This prevents the op-version of a peer being changed because of a
    connection attempt by an invalid peer.

    BUG: 1144978
    Change-Id: I5a909dad37e9873efe8b75dad41b7af71ce91c3d
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/8819
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Set the rlimit for Open FDs to a higher value (Vijaikumar M, 2014-09-23; 1 file, -0/+18)

    The default 'open FD' limit is 1024. As the number of volumes/bricks
    increases, brick-to-glusterd socket FDs also increase in glusterd and
    run out of the limit. The solution is to set the 'open FD' limit to a
    higher value in glusterd.

    Change-Id: Iaa60b2155df2fa5a0759e054bdebffbc09f63ec1
    BUG: 1145095
    Signed-off-by: Vijaikumar M <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/8578
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8807
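    A minimal sketch of the mechanism in plain POSIX; the target value
    is an assumption, not taken from the patch:

        #include <sys/resource.h>

        /* Raise the soft RLIMIT_NOFILE toward 'want' (capped at the
         * hard limit) so brick-to-glusterd sockets don't exhaust the
         * default 1024 descriptors. */
        static int
        raise_open_fd_limit (rlim_t want)
        {
                struct rlimit lim;

                if (getrlimit (RLIMIT_NOFILE, &lim) == -1)
                        return -1;

                if (lim.rlim_cur < want) {
                        lim.rlim_cur = (want < lim.rlim_max) ? want : lim.rlim_max;
                        if (setrlimit (RLIMIT_NOFILE, &lim) == -1)
                                return -1;
                }
                return 0;
        }

    For instance, calling raise_open_fd_limit (65536) early in daemon
    startup would cover a large number of bricks.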
* glusterd/snapshot: Fail the snapshot create operation if geo-rep is running (Sachin Pandit, 2014-09-23; 1 file, -1/+11)

    As one of the recommendations for taking a snapshot is not to have an
    active geo-replication session, it is better to display an error
    saying the session is active when the snapshot create command is
    issued.

    Change-Id: I94593dbd2659610e033ca316176dda1ac8dc5ce6
    BUG: 1145091
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8461
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8804
* glusterd/snapshot: Inherit the mount options of an original brick when creating snapshots (Vijaikumar M, 2014-09-23; 7 files, -122/+250)

    When creating a snapshot, an LV is created at the backend and is
    mounted under /var/run/gluster/snaps/... However, this mount does not
    inherit the mount options of the original brick acting as the parent
    for the snap. If the snap is restored, this could lead to performance
    degradations, functional limitations, or in extreme scenarios even
    potential data loss.

    Change-Id: I67d70fd83430d83dacc5380c6c928e27fb9c9e1b
    BUG: 1145088
    Signed-off-by: Vijaikumar M <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/8394
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8802
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Error msg for snapshot status for non-existing snaps should be aligned with error messages of info and list (Vijaikumar M, 2014-09-23; 1 file, -35/+42)

    When a snapshot operation like status, info, or list is performed on
    a non-existing snapshot:
    - for status, the error message displayed is 'Snap not found';
    - for list and info, the error message displayed is 'Snapshot does
      not exist'.

    Use a consistent error message in all these places.

    Change-Id: I7b241217dba62fda844481731a6858e4ecb12897
    BUG: 1145087
    Signed-off-by: Vijaikumar M <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/8309
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8801
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd/snapshot: Proper err msg for snapshot create command (Rajesh Joseph, 2014-09-23; 1 file, -0/+77)

    Problem: The snapshot command fails if one or more bricks are not
    thinly provisioned, but the error message is a generic one, which is
    confusing to the user.

    Fix: Provide a correct error message in case of failure.

    Change-Id: Iad247f966423a8f73ef6da57cab7ed6cddc05861
    BUG: 1145086
    Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-on: http://review.gluster.org/8377
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8800
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* feature/snapshot: Interface to delete all snapshots belonging to a system as well as to a particular volume (Sachin Pandit, 2014-09-23; 1 file, -25/+186)

    Problem: With the current design we can only delete a single
    snapshot, and deletion of a volume which contains snapshots is not
    allowed. Because of that, the user might be forced to delete all the
    snapshots manually before being allowed to delete a volume.

    Solution: Following is the interface with which the user can delete
    all the snapshots of a system or those belonging to a particular
    volume.

    Syntax: gluster snapshot delete all
      (deletes all the snapshots present in a system)

    Syntax: gluster snapshot delete volume <volname>
      (deletes all the snapshots present in the specified volume)

    ========================================================================
    Sample output:

    Case 1: Deleting a single snapshot.

    [root@snapshot-24 glusterfs]# gluster snapshot delete snap1
    Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
    snapshot delete: snap1: snap removed successfully
    -----------------------------------------------------------------
    Case 2: Deleting all the snapshots in a volume.

    [root@snapshot-24 glusterfs]# gluster snapshot delete volume vol1
    Volume (vol1) contains 9 snapshot(s). Do you still want to continue and delete them? (y/n) y
    snapshot delete: snap2: snap removed successfully
    snapshot delete: snap3: snap removed successfully
    snapshot delete: snap4: snap removed successfully
    snapshot delete: snap5: snap removed successfully
    .
    .
    .
    -----------------------------------------------------------------
    Case 3: Deleting all the snapshots in a system.

    [root@snapshot-24 glusterfs]# gluster snapshot delete all
    System contains 4 snapshot(s). Do you still want to continue and delete them? (y/n) y
    snapshot delete: snap7: snap removed successfully
    snapshot delete: snap8: snap removed successfully
    snapshot delete: snap9: snap removed successfully
    snapshot delete: snap10: snap removed successfully
    ========================================================================

    Change-Id: Ifec8e128ab2011cbbba208376b9c92cfbe7d8d71
    BUG: 1145083
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8162
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8798
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
* glusterd/snapshot: Print correct error message on cli for snapshot operation performed on a cluster with op-version less than 30600 (Vijaikumar M, 2014-09-23; 1 file, -0/+10)

    Currently the cli reports 'Another transaction is in progress Please
    try again after sometime' when a snapshot operation is performed on a
    cluster with op-version less than 30600. We need to print the correct
    error message in this case.

    Change-Id: I5f144428d928393c3796bde96ce6e3a40fca8141
    BUG: 1145068
    Signed-off-by: Vijaikumar M <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/8371
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8796
* cli/snapshot: gluster volume info should not show the options which are not set explicitly (Sachin Pandit, 2014-09-23; 4 files, -257/+222)

    Problem: Even though snap-max-hard-limit, snap-max-soft-limit and
    auto-delete values were not set explicitly, they were shown in the
    output of gluster volume info.

    Solution: Check if the value is already present in the dictionary
    (which means it was set); if the value is not present, consider the
    default value.

    NOTE: This patch doesn't solve the problem where values which are set
    globally are displayed in gluster volume info.

    Change-Id: I61445b3d2a12eb68c38a19bea53b9051ad028050
    BUG: 1145020
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8191
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8793
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>