author     shishir gowda <shishirng@gluster.com>    2010-07-16 08:37:29 +0000
committer  Anand V. Avati <avati@dev.gluster.com>   2010-07-19 05:10:40 -0700
commit     ce0e4c3d6eb710f29cb02166fdfa9e6afb013df6 (patch)
tree       8ba494f214b2e72d2954840da8db17e030a8b81d
parent     56c182ca23a7552dfa4c19667f82ca1313fb9e55 (diff)

return ENOENT instead of ESTALE for links in client for stripe

Instead of returning ESTALE for link and symlink, return ENOENT, as these
entries only exist on the FIRSTCHILD and any lookup on the other bricks fails.

Signed-off-by: shishir gowda <shishirng@gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>

BUG: 1152 (ln on symlink returns NFS STALE error even on success)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=1152
-rw-r--r--  xlators/protocol/client/src/client3_1-fops.c  3
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/xlators/protocol/client/src/client3_1-fops.c b/xlators/protocol/client/src/client3_1-fops.c
index 4c222adbce4..3734c90955f 100644
--- a/xlators/protocol/client/src/client3_1-fops.c
+++ b/xlators/protocol/client/src/client3_1-fops.c
@@ -1949,6 +1949,7 @@ client3_1_lookup_cbk (struct rpc_req *req, struct iovec *iov, int count,
         }
 out:
+        rsp.op_errno = op_errno;
         frame->local = NULL;
         STACK_UNWIND_STRICT (lookup, frame, rsp.op_ret, rsp.op_errno, inode,
                              &stbuf, xattr, &postparent);
@@ -2961,6 +2962,7 @@ client3_1_link (call_frame_t *frame, xlator_t *this,
                         "RENAME %"PRId64"/%s (%s): failed to get remote inode "
                         "number for source parent", args->oldloc->parent->ino,
                         args->oldloc->name, args->oldloc->path);
+                op_errno = ENOENT;
                 goto unwind;
         }
@@ -4558,6 +4560,7 @@ client3_1_setattr (call_frame_t *frame, xlator_t *this,
                         "STAT %"PRId64" (%s): "
                         "failed to get remote inode number",
                         args->loc->inode->ino, args->loc->path);
+                op_errno = ENOENT;
                 goto unwind;
         }
         req.path = (char *)args->loc->path;
/gluster.readthedocs.io/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Intergration/)
describes how to configure and use NFS-Ganesha. Users who prefer to use the
gnfs server (NFSv3 only) can enable the service per volume with the following
command:

```bash
# gluster volume set <VOLUME> nfs.disable false
```

Existing volumes that have gnfs enabled will remain enabled unless explicitly
disabled. You cannot run both gnfs and NFS-Ganesha servers on the same host.

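Conversely, the service can be disabled explicitly per volume. A minimal sketch,
assuming a running Gluster cluster where `<VOLUME>` is a placeholder for your
volume name:

```bash
# gluster volume set <VOLUME> nfs.disable true
# gluster volume status <VOLUME> nfs
```

The second command reports whether a gnfs server process is still exporting the
volume after the change.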
The plan is to phase gnfs out of Gluster over the next several releases,
starting with documenting it as officially deprecated, then no longer compiling
and packaging the components, and ultimately removing the component sources from
the source tree.

### SEEK
*Notes for users:*
All modern filesystems support SEEK_DATA and SEEK_HOLE with the lseek()
system call. This improves performance when reading sparse files. GlusterFS now
supports the SEEK operation as well. Linux kernel 4.5 comes with an improved
FUSE module where lseek() can be used. QEMU can now detect holes in VM images
when using the Gluster block driver.

*Limitations:*
The deprecated stripe functionality has not been extended with SEEK. SEEK for
sharding has not been implemented yet, and is expected to follow later in a 3.8
update (bug 1301647). NFS-Ganesha will support SEEK over NFSv4 in the near
future, possibly with the upcoming nfs-ganesha 2.4.
### Tiering aware Geo-replication
*Notes for users:*
Tiering moves files between hot and cold tier bricks. Geo-replication syncs
files from bricks in the master volume to the slave volume. With this, users can
configure a geo-replication session on a volume that uses tiering.

*Limitations:*
Configuring a geo-replication session on a tiering-based volume is the same as
before, but a few steps need to be followed before attaching or detaching a
tier:
Before attaching a tier to a volume with an existing geo-replication session,
the session needs to be stopped. Please find detailed steps here:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Data_Tiering-Attach_Volumes.html#idp11442496
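The stop/start cycle around attaching the tier might look like the following
sketch, where `<MASTER>`, `<SLAVEHOST>` and `<SLAVEVOL>` are placeholders:

```bash
# gluster volume geo-replication <MASTER> <SLAVEHOST>::<SLAVEVOL> stop
... attach the tier ...
# gluster volume geo-replication <MASTER> <SLAVEHOST>::<SLAVEVOL> start
```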
While detaching a tier from a tiering-based volume with an existing
geo-replication session, a checkpoint of the session needs to be done. Please
find detailed steps here:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Data_Tiering-Detach_Tier.html#idp32905264
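Setting a checkpoint and watching for its completion before the detach might
look like this sketch (placeholders as above):

```bash
# gluster volume geo-replication <MASTER> <SLAVEHOST>::<SLAVEVOL> config checkpoint now
# gluster volume geo-replication <MASTER> <SLAVEHOST>::<SLAVEVOL> status detail
```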
### Automagic unsplit-brain by [ctime|mtime|size|majority] for AFR
*Notes for users:*
A new volume option called `cluster.favorite-child-policy` has been introduced.
It can be used to automatically resolve split-brains in replica volumes without
having to use the gluster CLI or the `fuse-mount-setfattr-based` methods to
manually select a source. Healing happens automatically based on the policy that
this option is set to. See `gluster volume set help | grep
cluster.favorite-child-policy -A3` for the policies that you can set. The
default value is `none`, i.e. this feature is not enabled by default.
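For example, to let the copy with the latest modification time win when a
split-brain is detected (a sketch; `<VOLUME>` is a placeholder):

```bash
# gluster volume set <VOLUME> cluster.favorite-child-policy mtime
```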

*Limitations:*
`cluster.favorite-child-policy` applies to