path: root/xlators
Commit message (author, date; files changed, lines -deleted/+added)
* cluster/dht: Fix anomaly check (shishir gowda, 2013-09-23; 1 file, -3/+11 lines)
    We were wrongly detecting holes/overlaps for already-accounted errors. Additionally, sort should also handle zeroed-out layouts.
    Change-Id: Ic3d13e1d735b914f9acc01fe919bc90656baea48 BUG: 1003851 Signed-off-by: shishir gowda <sgowda@redhat.com> Reviewed-on: http://review.gluster.org/5762 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Amar Tumballi <amarts@redhat.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd/cli: Status detail cli parse check and vol geo status crash fix (Avra Sengupta, 2013-09-21; 1 file, -7/+12 lines)
    Change-Id: I1841864273fc4242de15fbfcf76fd5de40269f28 BUG: 1006249 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/5889 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Amar Tumballi <amarts@redhat.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Adding transaction checks for cluster unlock. (Avra Sengupta, 2013-09-20; 1 file, -3/+12 lines)
    While a gluster command holding the lock is in execution, any other gluster command which tries to run will fail to acquire the lock. As a result command#2 will follow the cleanup code flow, which also includes unlocking the held locks. As both commands are run from the same node, command#2 ends up releasing the locks held by command#1 even before command#1 reaches completion. Now we call the unlock routine in that code path only if the cluster has been locked during the same transaction.
    Signed-off-by: Avra Sengupta <asengupt@redhat.com> Change-Id: I7b7aa4d4c7e565e982b75b8ed1e550fca528c834 BUG: 1008172 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/5937 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-by: Anand Avati <avati@redhat.com>
* gNFS: NFS daemon is limiting IOs to 64KB (Santosh Kumar Pradhan, 2013-09-20; 4 files, -15/+61 lines)
    Problem: The Gluster NFS server hard-codes the max rsize/wsize to 64KB, which is too small for NFS running over a 10GE NIC. The existing options nfs.read-size and nfs.write-size are not working as expected.
    Fix: Make the options nfs.read-size (for rsize) and nfs.write-size (for wsize) work to tune the NFS I/O size. The value range is 4KB (min) - 64KB (default) - 1MB (max).
    NB: Credit to "Richard Wareing" for catching it.
    Change-Id: I2754ecb0975692304308be8bcf496c713355f1c8 BUG: 1009223 Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com> Reviewed-on: http://review.gluster.org/5964 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com> Reviewed-by: Anand Avati <avati@redhat.com>
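    To illustrate the value range described above, here is a minimal sketch of the clamping such an option implies. The helper name and shape are hypothetical, not taken from the gNFS code; only the nfs.read-size/nfs.write-size range (4KB min, 64KB default, 1MB max) comes from the commit message.

        #include <stdint.h>

        #define GF_NFS_IO_SIZE_MIN  (4 * 1024)      /* 4KB  minimum */
        #define GF_NFS_IO_SIZE_DEF  (64 * 1024)     /* 64KB default */
        #define GF_NFS_IO_SIZE_MAX  (1024 * 1024)   /* 1MB  maximum */

        /* Hypothetical helper: bring a configured rsize/wsize into range. */
        static uint32_t
        nfs_clamp_io_size (uint32_t requested)
        {
            if (requested == 0)
                return GF_NFS_IO_SIZE_DEF;
            if (requested < GF_NFS_IO_SIZE_MIN)
                return GF_NFS_IO_SIZE_MIN;
            if (requested > GF_NFS_IO_SIZE_MAX)
                return GF_NFS_IO_SIZE_MAX;
            return requested;
        }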
* distribute: Rebalance should provide even disk space distribution (Harshavardhana, 2013-09-19; 1 file, -15/+30 lines)
    The earlier disk space check did not avoid migration when the destination had less available space. The scenario we need to avoid is this: during rebalance `migrate-data`, the destination subvol experiences a `reduction` in 'blocks' of free space, while at the same time the source subvol gains that many 'blocks' of free space. A valid check is necessary here to avoid an erroneous move to a destination where space is scarce. This patch provides a proper fix by subtracting the necessary file blocks from the destination and adding those blocks to the source.
    Change-Id: I9c7840716a4256ef614ffc0fbfd9f2b456ac28c8 BUG: 982919 Signed-off-by: Harshavardhana <harsha@harshavardhana.net> Reviewed-on: http://review.gluster.org/5961 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Shishir Gowda <sgowda@redhat.com> Reviewed-by: Anand Avati <avati@redhat.com>
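    A rough sketch of the adjusted comparison the message describes, with hypothetical names (the real dht code works on statfs results); the point is that the file's own blocks are moved from the destination's free count to the source's before comparing.

        #include <stdint.h>

        /* Hypothetical helper: decide whether migrating a file of 'file_blocks'
         * still leaves the destination with at least as much free space as the
         * source, *after* the move. */
        static int
        dht_migration_keeps_balance (uint64_t src_free_blocks,
                                     uint64_t dst_free_blocks,
                                     uint64_t file_blocks)
        {
            uint64_t dst_after = (dst_free_blocks > file_blocks)
                                 ? (dst_free_blocks - file_blocks) : 0;
            uint64_t src_after = src_free_blocks + file_blocks;

            return (dst_after >= src_after);
        }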
* NFS: Coverity Fix. (meghana, 2013-09-19; 11 files, -158/+195 lines)
    NFS defects reported by the Coverity run are fixed.
    Change-Id: Ib66847e8e66fb4a06b312c80814f9eafb032eba2 BUG: 996390 Signed-off-by: meghana <mmadhusu@redhat.com> Reviewed-on: http://review.gluster.org/5660 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Varun Shastry <vshastry@redhat.com> Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* cli/glusterd: improve rebalance fix-layout status reporting (Ravishankar N, 2013-09-19; 4 files, -2/+103 lines)
    Problem: Currently the CLI rebalance status command output does not indicate the 'type' of rebalance, i.e. whether a full rebalance or only a fix-layout was carried out.
    Fix: After the rebalance status of all peers is received by the originator glusterd, alter it to reflect the type of rebalance before passing it on to the CLI process.
    Change-Id: I1940ffda0d36e25e5b33c84a0ea210394cc9e1d3 BUG: 1004744 Signed-off-by: Ravishankar N <ravishankar@redhat.com> Reviewed-on: http://review.gluster.org/5826 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* cluster/afr: Have common inode-write-fop cbk (Pranith Kumar K, 2013-09-18; 4 files, -672/+581 lines)
    Change-Id: Ia7b324b86d6a7051d187106d7a060155e77defc5 BUG: 910217 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/5238 Reviewed-by: Ravishankar N <ravishankar@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Don't reset rebalance status on add-brick (Kaushal M, 2013-09-18; 1 file, -9/+0 lines)
    The rebalance status was being reset to 'Not started' when add-brick was performed. This led to odd cases where 'rebalance status' on a volume would show the status as 'not started' but would also include rebalance statistics. It also affected the display of asynchronous task status in the 'volume status' command. Not resetting the status prevents the above issues. Since we use the running/not-running state of the rebalance process as the check when performing other operations, we can safely leave the rebalance stats collected on an add-brick.
    Change-Id: I4c69d9c789d081c6de7e7a81dd0d4eba2e83ec17 BUG: 1006247 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/5895 Reviewed-by: Vijay Bellur <vbellur@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
* Revert "cluster/distribute: Rebalance should also verify free inodes"Anand Avati2013-09-171-22/+5
| | | | | | | | | | | | | | | This reverts commit 215fea41a96479312a5ab8783c13b30ab9fe00fa Realized soon after merging, that this patch is actually useless. Checking for inode utilization on dst relative to src is of no use because dst is already storing linkfile and inode is already consumed no matter what. We might as well perform the rebalance and save on an inode on the src node. Change-Id: I7b8b8d35d9063a710e1bee1c7c6c7739fcc45ad4 Reviewed-on: http://review.gluster.org/5960 Reviewed-by: Harshavardhana <harsha@harshavardhana.net> Tested-by: Anand Avati <avati@redhat.com>
* cluster/distribute: Rebalance should also verify free inodes (Harshavardhana, 2013-09-17; 1 file, -5/+22 lines)
    Currently during `MIGRATE_DATA` we never verify the total inode usage of the new and old bricks. Such a check exists for disk space usage but is also needed for inodes during data migration; it leads to a more uniform file distribution upon rebalance.
    The patch provides:
    - a check that dst_inodes < src_inodes, with a friendly `warning` message to indicate we have to ignore such an attempt
    - a rename of __dht_check_free_space() --> __dht_check_free_space_and_inodes()
    Change-Id: I7bc4dd8b507883f0fb836300e99f0bb083493f5f BUG: 982919 Signed-off-by: Harshavardhana <harsha@harshavardhana.net> Reviewed-on: http://review.gluster.org/5948 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* fuse-bridge: enable --fopen-keep-cache based on FUSE_AUTO_INVAL_DATA. (Anand Avati, 2013-09-17; 2 files, -3/+42 lines)
    If the kernel supports FUSE_AUTO_INVAL_DATA then it is safe(r) to turn on --fopen-keep-cache mode by default. Users report significant perf improvement from enabling the mode.
    Change-Id: Icf9df4b7b43950d7e25302d9c2a1a7d14571a9a9 BUG: 990744 Signed-off-by: Anand Avati <avati@redhat.com> Reviewed-on: http://review.gluster.org/5770 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Brian Foster <bfoster@redhat.com>
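    A sketch of the decision, under the assumption that the negotiated kernel capability flags are available at init time; the function and parameter names are illustrative, and only the FUSE_AUTO_INVAL_DATA bit is from the commit (its value matching fuse_kernel.h).

        #ifndef FUSE_AUTO_INVAL_DATA
        #define FUSE_AUTO_INVAL_DATA (1 << 12)   /* as defined in fuse_kernel.h */
        #endif

        /* Illustrative helper: default fopen-keep-cache to on only when the
         * kernel can auto-invalidate stale page-cache data. An explicit mount
         * option (0 or 1) always wins; -1 means "not set by the user". */
        static int
        fuse_default_keep_cache (unsigned int kernel_flags, int user_setting)
        {
            if (user_setting != -1)
                return user_setting;

            return (kernel_flags & FUSE_AUTO_INVAL_DATA) ? 1 : 0;
        }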
* qemu-block: support readdirp (Brian Foster, 2013-09-17; 1 file, -0/+58 lines)
    Support the readdirp fop in qemu-block to ensure that image files are handled correctly when readdirp is enabled. E.g., without readdirp support, incorrect stat data for formatted files can be reported back (and cached) via the client.
    BUG: 986775 Change-Id: Ibc4bd0b42548953ebe30832f3d853bb68095f0ac Signed-off-by: Brian Foster <bfoster@redhat.com> Reviewed-on: http://review.gluster.org/5946 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Blocking invalid values for geo-rep config options (Avra Sengupta, 2013-09-17; 2 files, -9/+64 lines)
    Change-Id: Ia9ee44763a9c2798b26d3225bf03a974d7ece21f BUG: 998962 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/5666 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Amar Tumballi <amarts@redhat.com> Reviewed-by: Venky Shankar <vshankar@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* mount/fuse: Implement forget in cbks for fuse. (Vijay Bellur, 2013-09-16; 1 file, -0/+8 lines)
    With the introduction of inode_ctx_set in fuse as part of 2991503d014, the forget cbk gets called for the fuse xlator. Though nothing needs to be done in forget_cbk, excessive log messages of the following kind are observed:
    [2013-09-16 06:09:50.758063] W [defaults.c:1331:default_forget] (-->/usr/local/lib/glusterfs/3git/xlator/mount/fuse.so(+0xa1f2) [0x7f51432781f2] (-->/usr/local/lib/libglusterfs.so.0(inode_unref+0x3c) [0x7f5144e5816c] (-->/usr/local/lib/libglusterfs.so.0(+0x2d061) [0x7f5144e58061]))) 0-fuse: xlator does not implement forget_cbk
    This patch prevents such log messages from being seen.
    Signed-off-by: Vijay Bellur <vbellur@redhat.com> BUG: 979910 Change-Id: Ie5874138f46822b10ff4213bd1134d78330ec460 Reviewed-on: http://review.gluster.org/5932 Reviewed-by: Anand Avati <avati@redhat.com> Tested-by: Anand Avati <avati@redhat.com>
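    The shape of such a fix is small; a minimal sketch, assuming the usual xlator cbks table and forget-callback signature (the function name is illustrative):

        #include "xlator.h"

        /* No per-inode context cleanup is needed here; the callback exists only
         * so that default_forget() no longer logs a warning for every inode. */
        static int32_t
        fuse_forget_cbk (xlator_t *this, inode_t *inode)
        {
            return 0;
        }

        struct xlator_cbks cbks = {
            .forget = fuse_forget_cbk,
            /* ... other callbacks ... */
        };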
* cli,glusterd: Task parameters in xml output (Kaushal M, 2013-09-13; 3 files, -6/+165 lines)
    This patch introduces task parameters for the asynchronous tasks shown in volume status. The parameters are only given for xml output. The parameters currently shown are:
    - source and destination bricks for replace-brick tasks
      <tasks>
        <task>
          <type>Replace brick</type>
          <id>3d1a1005-9d2e-4ae0-bd62-577bc1d333a3</id>
          <status>1</status>
          <params>
            <srcBrick>archm:/export/test4</srcBrick>
            <dstBrick>archm:/export/test-replace1</dstBrick>
          </params>
        </task>
      </tasks>
    - list of bricks being removed for remove-brick tasks
      <tasks>
        <task>
          <type>Remove brick</type>
          <id>901c20ca-0da2-41de-8669-5f0caca6b846</id>
          <status>1</status>
          <params>
            <brick>archm:/export/test2</brick>
            <brick>archm:/export/test3</brick>
          </params>
        </task>
      </tasks>
    The changes for non-xml output will be done in a subsequent patch.
    Change-Id: I322afe2f83ed8adeddb99f7962c25911204dc204 BUG: 916577 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/5771 Reviewed-by: Vijay Bellur <vbellur@redhat.com> Tested-by: Vijay Bellur <vbellur@redhat.com>
* qemu-block: fix building from distribution tarball when glib2-devel is installed (Niels de Vos, 2013-09-12; 1 file, -0/+8 lines)
    Building RPMs from a 'make dist' tarball fails when qemu-block is enabled. Enabling is done automatically when the glib2 development files are available (detected by ./configure). Manual building with:
    $ ./autogen.sh && ./configure && make dist && rpmbuild -ta *.tar.gz
    Building in mock works fine: glib2-devel is not installed by default, so the qemu-block xlator gets disabled. This change also adds glib2-devel to the BuildRequires in the glusterfs.spec file, causing the qemu-block xlator to be built by default and included in the glusterfs RPM.
    Change-Id: Ibb73628772586d9e07bbfde7a8ff2fc973489086 BUG: 986775 Signed-off-by: Niels de Vos <ndevos@redhat.com> Reviewed-on: http://review.gluster.org/5896 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* mgmt/glusterd: Update sub_count on remove brick (Vijay Bellur, 2013-09-11; 1 file, -0/+1 lines)
    Change-Id: I7c17de39da03c6b2764790581e097936da406695 BUG: 1002556 Signed-off-by: Vijay Bellur <vbellur@redhat.com> Reviewed-on: http://review.gluster.org/5893 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-by: Anand Avati <avati@redhat.com>
* rpc/rpc-lib: rpcsvc should reply when rpc_err is set (Pranith Kumar K, 2013-09-11; 1 file, -176/+179 lines)
    Problem: When requests are received on a connection before setvolume is done, creating a frame from the requests fails because there is no association of the transport with the conn (i.e. xl_private); xl_private is set only on set_volume. In such cases an error response is not sent from the server xlator for that request, because of which operations on the mount point hang.
    Fix: Set the actor return value to RPCSVC_ACTOR_ERROR so that a response is sent even in these cases.
    Change-Id: I74d7bc6849fde6c734008d67c1f4bc9d9f7a84f9 BUG: 1006367 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/5892 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Allow bumping down a peer's op-version during probe (Kaushal M, 2013-09-10; 1 file, -15/+9 lines)
    Earlier, a peer running a higher op-version couldn't be probed into a cluster running at a lower op-version. This created issues when trying to expand an upgraded cluster. This patch changes that behaviour: the cluster no longer rejects a peer being probed if its op-version is higher than the cluster op-version. The peer will reduce its op-version if it doesn't have any volumes. If the peer contains volumes and needs to reduce its op-version, it fails the handshake and the probe fails.
    Change-Id: I12c6c873922799e1557b7184e956baea643d0dea BUG: 1005038 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/5715 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Added missing MY_UUID conversion in gsync staging (Avra Sengupta, 2013-09-10; 1 file, -0/+2 lines)
    Change-Id: Ia4bf607e044d50636fb0a599a2ff91059f5b17aa BUG: 1006177 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/5887 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cluster/dht: assign layout onto missing directories too (Anand Avati, 2013-09-09; 1 file, -4/+28 lines)
    The current self-healing algorithm ignores missing directories when assigning a new layout. When lookup() is racing against mkdir(), or when self-healing a half-done mkdir(), the layout assignment split must happen based on the final number of directories, not the currently existing number of directories (because we finish mkdir() of missing directories before hash layout assignment). Without this fix, concurrent mkdir() and lookup() will step on each other's feet, create a messed-up layout on disk, and end up with different in-memory layouts. Once two clients have different in-memory layouts, creation of a subdirectory will not arbitrate on the same hashed subvolume and will result in a GFID mismatch of the sub-directory.
    Change-Id: Ia47acad67c265060405984c822b4d37512b9dbb3 BUG: 907072 Signed-off-by: Anand Avati <avati@redhat.com> Reviewed-on: http://review.gluster.org/5849 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Amar Tumballi <amarts@redhat.com> Reviewed-by: Peter Portante <pportant@redhat.com> Tested-by: Peter Portante <pportant@redhat.com>
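    As a sketch of the layout-split idea (hypothetical names; real dht layouts also carry per-subvolume weights and error state): the 32-bit hash space is divided by the number of subvolumes the directory will exist on after the missing-directory mkdirs, not by the count that happen to exist at lookup time.

        #include <stdint.h>

        /* Evenly split the hash space across 'subvol_count' subvolumes. */
        static void
        layout_split_even (uint32_t *start, uint32_t *stop, int subvol_count)
        {
            uint32_t chunk = 0xffffffff / subvol_count;
            int      i;

            for (i = 0; i < subvol_count; i++) {
                start[i] = i * chunk;
                stop[i]  = (i == subvol_count - 1)
                           ? 0xffffffff : (start[i] + chunk - 1);
            }
        }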
* glusterfsd: Round robin DNS should not be relied upon with config service availability for clients (Harshavardhana, 2013-09-06; 1 file, -73/+78 lines)
    The backup volfile server option as it stands is slow and prone to errors with the mount script, and so is its combination with RRDNS. In theory clients should instead use all the available nodes in the 'trusted pool' by default (right now we don't have a mechanism in place for this). Nevertheless, this patch provides a way to pass a list of volfile servers on the command line, as shown below:
    -----------------------------------------------------------------
    $ glusterfs -s server1 .. -s serverN --volfile-id=<volname> \
      <mount_point>
    -----------------------------------------------------------------
    OR
    -----------------------------------------------------------------
    $ mount -t glusterfs -obackup-volfile-servers=<server2>: \
      <server3>:...:<serverN> <server1>:/<volname> <mount_point>
    -----------------------------------------------------------------
    Here ':' is used as a separator for mount script parsing. These servers are remembered and tried in turn for fetching the volfile until the list is exhausted. This ensures that clients get 'volume' configs in a consistent manner, avoiding the need to poll through RRDNS.
    Change-Id: If808bb8a52e6034c61574cdae3ac4e7e83513a40 BUG: 986429 Signed-off-by: Harshavardhana <harsha@harshavardhana.net> Reviewed-on: http://review.gluster.org/5400 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* gfid-access: Error logs for ga_newfile_parse_args (Avra Sengupta, 2013-09-05; 1 file, -11/+50 lines)
    Change-Id: I7aab98a70793bee936272f0b795f4c22c3733dd2 BUG: 1001055 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/5705 Tested-by: Gluster Build System <jenkins@build.gluster.com> Tested-by: Venky Shankar <vshankar@redhat.com> Reviewed-by: Amar Tumballi <amarts@redhat.com>
* gsyncd / geo-rep: "disjoint" cascading geo-replication sessionsVenky Shankar2013-09-041-0/+9
| | | | | | | | | | | | | | | | | | | | | | | | Slave's xtime is now stored on the master itself (and that too only on the root), which implies it cannot be propogated to the cascaded slave. Thus the intermediate master now makes use of it's own volume information to propogate volume-mark and xtime. On starting Geo-Replication "geo-replication.ignore-pid-check" marker option is enabled, which is an override for the client-pid check in marker. This options triggers marker update only for geo-replication auxillary mount (client-pid == -1). Since gsyncd not does setxattr() directly on the bricks, this option won't trigger a chain of spurious metadata updates that would need to be processed by gsyncd. Change-Id: If50c5ef275dfb6b4ff4fd35be2565587e2fdf3e1 BUG: 996371 Original Author: Venky Shankar <vshankar@redhat.com> Signed-off-by: Venky Shankar <vshankar@redhat.com> Reviewed-on: http://review.gluster.org/5592 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Avra Sengupta <asengupt@redhat.com> Tested-by: Avra Sengupta <asengupt@redhat.com> Reviewed-by: Anand Avati <avati@redhat.com>
* features/marker: force xtime updates (configurable) for client-pid = -1 (Venky Shankar, 2013-09-04; 4 files, -6/+43 lines)
    This is required by geo-replication, which does an auxiliary mount with client-pid -1 (which gets special treatment at specific places in GlusterFS), to trigger xtime updates on the intermediate master in a cascading setup. Marker too had a check to "not" mark updates for geo-replication's auxiliary mounts. With the new geo-replication design, xtimes are not set by the master on the slave for all entities; due to this, cascading setups were broken. This patch introduces the "geo-replication.ignore-pid-check" option as an "override" for the client-pid check for gsyncd's client-pid. When this option is enabled, marker starts "marking" even if the updates are from the special client. Geo-replication, on detecting that it is an intermediate master, enables this option.
    Change-Id: I9f7140edd12fef5480595ee0f93f35b94cdb8345 BUG: 996371 Signed-off-by: Venky Shankar <vshankar@redhat.com> Reviewed-on: http://review.gluster.org/5591 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Avra Sengupta <asengupt@redhat.com> Tested-by: Avra Sengupta <asengupt@redhat.com> Reviewed-by: Anand Avati <avati@redhat.com>
* gsyncd / geo-rep: Introduce basic crawl instrumentation (Venky Shankar, 2013-09-04; 1 file, -2/+2 lines)
    This patch extends the persistent instrumentation work done by Aravinda (@avishwa) by introducing a handful of instrumentation variables for the crawl. These variables are "pulled up" by glusterd on a geo-replication status CLI command and look something like below:
    "Uptime=00:21:10;FilesSyned=2982;FilesPending=0;BytesPending=0;DeletesPending=0;"
    "FilesPending", "BytesPending" and "DeletesPending" are short-lived variables that are non-zero when a changelog is being processed (i.e. when an active sync is ongoing). After a changelog is successfully processed, "FilesPending" is summed up into "FilesSynced". The three short-lived variables are then reset to zero and the data is persisted. Additionally, this patch also reverts some of the changes made for BZ #986929 (those were not needed).
    Change-Id: I948f1a0884ca71bc5e5bcfdc017d16c8c54fc30b BUG: 990420 Signed-off-by: Venky Shankar <vshankar@redhat.com> Reviewed-on: http://review.gluster.org/5441 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Added op-version checks to geo-rep commands. (Venky Shankar, 2013-09-04; 1 file, -0/+43 lines)
    Added op-version checks to all geo-rep commands. The minimum op-version should be 2.
    Change-Id: I942d897404e11e4d53123409731ba5cd252668fe BUG: 847839 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Signed-off-by: Venky Shankar <vshankar@redhat.com> Reviewed-on: http://review.gluster.org/5732 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Amar Tumballi <amarts@redhat.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd/gverify: Check for passwordless ssh in gverify. (Venky Shankar, 2013-09-04; 2 files, -79/+77 lines)
    Change-Id: I8c2d398114ad4534bcc052f9a5be8bbb2e7e2582 BUG: 999531 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Signed-off-by: Venky Shankar <vshankar@redhat.com> Reviewed-on: http://review.gluster.org/5677 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Allowing root@hostname::slave georep sessions to be created. (Venky Shankar, 2013-09-04; 3 files, -27/+110 lines)
    non-root@hostname::slave-vol geo-rep sessions are not supported. Only hostname and root@hostname sessions are supported, and they are treated as the same.
    Change-Id: I87551e1bd4ff4e0e6520c34eb3d944587cc65476 BUG: 998933 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Signed-off-by: Venky Shankar <vshankar@redhat.com> Reviewed-on: http://review.gluster.org/5659 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd/gverify.sh: Stops session being created with invalid slave details (Venky Shankar, 2013-09-04; 1 file, -8/+68 lines)
    create force will fail with a proper message if the IP is not reachable, or if it is unable to fetch slave details.
    Change-Id: I44a3ba777b37702ffd0e48e9cb46c51e293327d4 BUG: 988314 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Signed-off-by: Venky Shankar <vshankar@redhat.com> Reviewed-on: http://review.gluster.org/5516 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Saving geo-rep session details in a more specific path (Venky Shankar, 2013-09-04; 3 files, -117/+142 lines)
    Session details are now saved in the /var/lib/glusterd/geo-replication/<mastervol>_<slaveip>_<slavevol> repo, to distinguish between two master-slave sessions where the slave name is the same across two different clusters.
    Change-Id: I57c93f55cc9bd4fe2bffe579028aaf5e4335b223 BUG: 991501 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Signed-off-by: Venky Shankar <vshankar@redhat.com> Reviewed-on: http://review.gluster.org/5488 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* performance/readdir-ahead: introduce directory read-ahead translator (Brian Foster, 2013-09-04; 7 files, -1/+658 lines)
    This is a translator to improve the performance of typical, sequential directory reads (i.e., ls). readdir-ahead begins preloading the contents of a directory on open and serves readdir requests from the preloaded content. readdir-ahead is currently implemented to only handle the single-threaded directory read case.
    readdir-ahead is currently disabled by default. It can be enabled with the following command:
    gluster volume set <volname> readdir-ahead on
    The following are results of a getdents test on a single brick volume.
    Test info:
    - Single VM, gluster client/server.
    - Volume mounted with native client using --gid-timeout=2.
    - getdents on single directory with 100k 0-byte files.
    Test results:
    - !readdir-ahead: read 3120080 bytes from offset 0; 3 MiB, 4348 ops, 0:00:07.00 (416.590 KiB/sec and 594.4737 ops/sec)
    - readdir-ahead: read 3120080 bytes from offset 0; 3 MiB, 4348 ops, 0:00:03.00 (820.116 KiB/sec and 1170.3043 ops/sec)
    BUG: 980517 Change-Id: Ieceb9e1eb47d1d5b5af8da2bf03839537364653f Signed-off-by: Brian Foster <bfoster@redhat.com> Reviewed-on: http://review.gluster.org/4519 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* mgmt/glusterd: Regenerate client volfiles during upgrade (shishir gowda, 2013-09-03; 3 files, -7/+30 lines)
    Change-Id: I1442bc1d115a9c6ecf139a0ca9da74d07e0fe928 BUG: 1003855 Signed-off-by: shishir gowda <sgowda@redhat.com> Reviewed-on: http://review.gluster.org/5764 Reviewed-by: Amar Tumballi <amarts@redhat.com> Reviewed-by: Kaushal M <kaushal@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* cluster/afr: Set size based source only when sizes are unequal (Pranith Kumar K, 2013-09-03; 1 file, -0/+13 lines)
    Change-Id: I18583f14edf1011401be15744371e2a6b79d75cc BUG: 1003842 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/5763 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* afr: make NOP truncate/ftruncate efficient (Anand Avati, 2013-09-03; 2 files, -0/+17 lines)
    If truncate/ftruncate is called with the offset equal to the current size of the file, then skip the durability fsync and unwind quickly.
    Change-Id: I0baec68d96c6d4d8217d33bd9738f7ed0d1b40c5 BUG: 958118 Signed-off-by: Anand Avati <avati@redhat.com> Reviewed-on: http://review.gluster.org/5737 Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
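    A minimal sketch of the short-circuit, with illustrative names (the real check sits in afr's truncate/ftruncate path):

        #include <stdint.h>

        /* If the requested offset equals the current file size, the (f)truncate
         * changes no data, so the durability fsync can be skipped and the call
         * unwound immediately. */
        static int
        afr_truncate_is_nop (uint64_t cur_size, uint64_t req_offset)
        {
            return (cur_size == req_offset);
        }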
* features/qemu-block: support for QCOW2 and QED formats (Anand Avati, 2013-09-03; 16 files, -1/+2781 lines)
    This patch adds support for internal snapshots using QCOW2 and a general framework for external snapshots (next patch) with QCOW2 and QED. For internal snapshots, the file must be "initialized" or "formatted" into QCOW2 format, with a specified file size. Snapshots can be created, deleted, and applied ("goto"). E.g.:
    // Format and Initialize
    sh# setfattr -n trusted.glusterfs.block-format -v qcow2:10GB /mnt/imgfile
    sh# ls -l /mnt/imgfile
    -rw-r--r-- 1 root root 10G Jul 18 21:20 imgfile
    // Create a snapshot
    sh# setfattr -n trusted.glusterfs.block-snapshot-create -v name1 imgfile
    // Apply a snapshot
    sh# setfattr -n trusted.glusterfs.block-snapshot-goto -v name1 imgfile
    Change-Id: If993e057a9455967ba3fa9dcabb7f74b8b2cf4c3 BUG: 986775 Signed-off-by: Anand Avati <avati@redhat.com> Reviewed-on: http://review.gluster.org/5367 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Brian Foster <bfoster@redhat.com>
* features/locks: Improves debuggability of inode/entry locks. (Anuradha Talur, 2013-09-03; 7 files, -41/+126 lines)
    Prints, in the statedump, the information about the mount that performed the inode/entry lk. For the entrylks that are granted after a blocked state, the blocked time is not printed. A patch for that will be sent later.
    Change-Id: Ib0c1ed21fa9328b435f96b590dd343f59814a08d BUG: 915629 Signed-off-by: Anuradha Talur <atalur@redhat.com> Reviewed-on: http://review.gluster.org/5712 Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cluster/afr: Improvement in logging of self heal completion status (Venkatesh Somyajulu, 2013-08-29; 6 files, -30/+338 lines)
    Additional information for sources and sinks is added.
    Change-Id: I1704956ff86ac3ae36744efe7499c1d1c43faeaf BUG: 968301 Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com> Reviewed-on: http://review.gluster.org/5638 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* cluster/afr: Reset attempted count before attempting blocking lock (Pranith Kumar K, 2013-08-29; 1 file, -0/+3 lines)
    Problem: internal_lock->lk_attempted_count keeps track of the number of blocking locks attempted; lk_expected_count keeps track of the number of locks expected. Here is the sequence of steps that leads to the illusion that a full file lock has been achieved, even without attempting any lock.
    Two mounts are doing dd on the same file. Both of them witness a brick going down and coming back up again, and both of them issue self-heals.
    1) Both mount-1 and mount-2 attempt full file locks in the self-heal domain. Let's say mount-1 got the lock and mount-2 attempts a blocking lock.
    2) mount-1 attempts a full file lock in the data domain. It goes into blocking mode because some other writes are in progress. Eventually it gets the lock. But this leaves lk_attempted_count at 2, and it is not reset. It completes syncing the data.
    3) mount-1, before unlocking the final small-range lock, attempts a full file lock in the data domain to figure out the source/sink. This is put into blocking mode again because some other writes are in progress. But this time, seeing the stale value of lk_attempted_count being equal to lk_expected_count, the blocking-lock phase thinks it completed locking without acquiring a single lock :-O.
    4) mount-1 reads xattrs without any lock, but since it does not modify the xattrs, no harm is done by this phase. It tries to do unlocks, and the unlocks fail because the locks were never taken in the data domain. mount-1 also unlocks the self-heal domain locks. Our beloved mount-2 now gets the chance to cause horror :-(.
    5) mount-2 gets the full-range blocking lock in the self-heal domain. Please note that this sets lk_attempted_count to 2.
    6) mount-2 attempts a full-range lock in the data domain; since there are still writes ongoing, it switches to blocking mode. But since lk_attempted_count is 2, which is the same as lk_expected_count, the blocking-lock phase thinks it actually got the full-range locks even though not a single lock request went out on the wire.
    7) mount-2 reads the change-log xattrs, which give the number of operations in progress (let's call this 'X'). It does the syncing and at the end of the sync decrements the changelog by 'X'. But since that 'X' was introduced by 'X' transactions that are in progress, they also decrement the changelog by 'X'. Effectively, for 'X' operations, 'X' pre-ops are done but 2 times 'X' post-ops are done, resulting in negative changelog numbers.
    Fix: Reset lk_attempted_count and the inode locks array that is used to remember which locks were granted.
    Change-Id: Ic0a79cd16f32392ea7c790511343c73592bbe6bd BUG: 1002698 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/5736 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
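    The fix itself is tiny; a sketch of its shape, treating the array name and the child_count parameter as hypothetical (only lk_attempted_count/lk_expected_count are named in the message):

        #include <string.h>
        #include "afr.h"   /* for afr_internal_lock_t (assumed) */

        /* Before starting a fresh round of blocking locks, clear the counter and
         * the per-subvolume "lock granted" bookkeeping left over from the earlier
         * non-blocking attempt, so a stale lk_attempted_count can never satisfy
         * lk_expected_count without any lock actually being taken. */
        static void
        afr_reset_blocking_lock_state (afr_internal_lock_t *int_lock,
                                       int child_count)
        {
            int_lock->lk_attempted_count = 0;
            memset (int_lock->locked_nodes, 0,
                    sizeof (*int_lock->locked_nodes) * child_count);
        }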
* nfs: prevent NFS server crash when upgrading from 3.2.x server (Anand Avati, 2013-08-29; 1 file, -0/+5 lines)
    After an upgrade the NFS3 filehandle size changed (became smaller), but when doing a live upgrade the client would send the old handle (expecting ESTALE and a fresh lookup). When reading the old handle, we were reading it into a structure limited to the size of the new handle, while we should have been reading it into a buffer as big as the NFS3 spec permits the handle size to be. The actor functions declare the structure on the stack, so the overflow results in stack corruption.
    Change-Id: Ie930875ac9db46b43d1cb8ad1e6d89cdaeded7ca BUG: 1002385 Signed-off-by: Anand Avati <avati@redhat.com> Reviewed-on: http://review.gluster.org/5730 Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Niels de Vos <ndevos@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
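    A sketch of the principle, assuming nothing about the actual gNFS structures: decode into a buffer sized to the protocol maximum (NFS3_FHSIZE, 64 bytes per RFC 1813) and reject anything longer, so an old, larger handle can never run past a stack variable.

        #include <string.h>

        #define NFS3_FHSIZE 64     /* maximum NFSv3 file handle size (RFC 1813) */

        struct decoded_fh {
            unsigned char data[NFS3_FHSIZE];
            unsigned int  len;
        };

        /* Copy an on-the-wire handle safely; returns -1 for over-long handles. */
        static int
        fh_decode (struct decoded_fh *out, const unsigned char *wire,
                   unsigned int wire_len)
        {
            if (wire_len > NFS3_FHSIZE)
                return -1;
            memcpy (out->data, wire, wire_len);
            out->len = wire_len;
            return 0;
        }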
* cluster/afr: unlock before aborting transaction (Anand Avati, 2013-08-29; 1 file, -0/+2 lines)
    Else this results in a missing frame, causing a hang.
    Change-Id: Ib5f3dc6a3999449faa2853cee2944af2fb065a20 BUG: 1002399 Signed-off-by: Anand Avati <avati@redhat.com> Reviewed-on: http://review.gluster.org/5731 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cluster/stripe: enable coalesce mode by default (Brian Foster, 2013-08-28; 1 file, -4/+4 lines)
    It has been available for a while now and is probably the sane default due to the more efficient layout and performance benefit.
    BUG: 1001207 Change-Id: I6275f9741866c0afd6e685f8dc5867a86485fd20 Signed-off-by: Brian Foster <bfoster@redhat.com> Reviewed-on: http://review.gluster.org/5624 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* cluster/afr: Add special handling for failure postops (Pranith Kumar K, 2013-08-28; 4 files, -26/+71 lines)
    The idea is to not leave the file in a FOOL-FOOL scenario when the data transaction failed with EDQUOT on all the bricks, to avoid adding unnecessary self-heal load to the system. For directory transactions, don't leave a pending changelog in case the failures are seen on all the subvolumes.
    Change-Id: I38a5561d1d581a78347a76a4a509514e4a0c3fb7 BUG: 969461 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/5709 Reviewed-by: Anand Avati <avati@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
* stripe: remove unused param, handle mem alloc failure (Kaleb S. KEITHLEY, 2013-08-28; 1 file, -2/+2 lines)
    Change-Id: I9c27b1edab111031ca8eea9cc49480ea01e39089 BUG: 1002207 Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com> Reviewed-on: http://review.gluster.org/5716 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* synctask: minor enhancements (Anand Avati, 2013-08-28; 1 file, -4/+1 lines)
    - Enhance syncenv_new() to accept scaling parameters of syncproc. Previously the scaling parameters were hardcoded and decided at compile time.
    - New API synctask_create(), which returns the created synctask. This is similar to synctask_new(), which only returned the status of whether a synctask could be created or not. A NULL cbk in synctask_create() means the task is "joinable": until synctask_join() is called on such a synctask, the task is not reaped and its resources are not destroyed. The task is in a zombie state after synctask_fn returns and before synctask_join() is called.
    Change-Id: I368ec9037de9510d2ba951f0aad86aaf18d9a6b6 BUG: 986775 Signed-off-by: Anand Avati <avati@redhat.com> Reviewed-on: http://review.gluster.org/5365 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Brian Foster <bfoster@redhat.com>
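    A usage sketch for the joinable case, under the assumption that synctask_create() takes the same arguments as synctask_new() but returns the task pointer (the task function here is a placeholder):

        #include "syncop.h"

        static int
        my_task_fn (void *opaque)
        {
            /* work that runs inside the synctask */
            return 0;
        }

        static void
        run_joinable_task (struct syncenv *env, call_frame_t *frame, void *opaque)
        {
            struct synctask *task = NULL;

            /* NULL cbk => the task is "joinable" and stays around (zombie)
             * until synctask_join() reaps it. */
            task = synctask_create (env, my_task_fn, NULL, frame, opaque);
            if (!task)
                return;

            /* ... other work can happen here while the task runs ... */

            synctask_join (task);   /* wait for my_task_fn and free the task */
        }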
* cluster/afr: Don't delay post op in cases of failures (Pranith Kumar K, 2013-08-28; 3 files, -11/+36 lines)
    Change-Id: Ib0c3af6babc61dc3ed45252582876e2f243d6446 BUG: 958118 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/5635 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-by: Anand Avati <avati@redhat.com>
* nfs: persistent caching of connected NFS-clients (Niels de Vos, 2013-08-28; 6 files, -57/+456 lines)
    Introduce /var/lib/glusterfs/nfs/rmtab to contain a list of NFS-clients which have a volume mounted. The volume option 'nfs.mount-rmtab' can be set to an alternative filename. When the file is located on shared storage, multiple gNFS servers can use the same file to present a single NFS-server. This cache is read when a system administrator calls 'showmount -a' and updated when an NFS-client calls MNT or UMNT from the MOUNT protocol.
    Usage:
    - create a volume for storing the shared rmtab file
    - mount the volume on all storage servers, at the same location
    - make sure that the volume is mounted at boot (add to /etc/fstab)
    - place the rmtab file on the volume:
      # gluster volume set <VOLUME> nfs.mount-rmtab <MOUNTPOINT>/<FILENAME>
    - any subsequent mount requests will add an entry to this file
    - 'showmount -a' requests will return the NFS-clients using the cluster
    Note: The NFS-server does not currently support reconfigure(). When a configuration option is set/changed, the NFS-server glusterfs process gets restarted. This causes the active NFS-clients to be forgotten (the entries are saved in the old rmtab, but we do not have a reference to that file any more, so we can't re-add them). Therefore a re-mount by the NFS-clients is needed before they get listed in the rmtab again.
    Change-Id: I58f47135d60ad112849d647bea4e1129683dd2b3 BUG: 904065 Signed-off-by: Niels de Vos <ndevos@redhat.com> Reviewed-on: http://review.gluster.org/4430 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Harshavardhana <harsha@harshavardhana.net> Tested-by: Harshavardhana <harsha@harshavardhana.net> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* mount/fuse: perform lookup() on inodes linked through readdirplus (Anand Avati, 2013-08-23; 3 files, -8/+66 lines)
    Some xlators still require the lookup() fop to be sent for proper working. This patch remembers inodes which have been linked through readdirplus and makes the resolver send lookups on them.
    Change-Id: Ibe8a04a659539d90dfc794521b51bf2bda017a0b BUG: 979910 Signed-off-by: Anand Avati <avati@redhat.com> Reviewed-on: http://review.gluster.org/5267 Reviewed-by: Amar Tumballi <amarts@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
* core: remove GLUSTERFS_CREATE_MODE_KEY usage (Amar Tumballi, 2013-08-23; 3 files, -35/+5 lines)
    Change-Id: I23b8cb7223b91a55af1cd4214f61bbe0e87351f6 BUG: 952029 Signed-off-by: Amar Tumballi <amarts@redhat.com> Reviewed-on: http://review.gluster.org/5683 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>