author     Dustin Black <dblack@redhat.com>    2016-06-02 13:00:18 -0400
committer  Niels de Vos <ndevos@redhat.com>    2016-06-07 18:38:43 -0700
commit     5c8e4d6195ce0c7aef7aa0aff66ee8bccf30b013 (patch)
tree       37b7a4f519bfdc55d4804567bd584d4df873c0c6 /doc
parent     7d3cb8f729297137e5d84b28cce7507a9ca48f12 (diff)
Added release notes and cleaned up formatting
- Mandatory lock support for Multiprotocol environment
- Gluster/NFS disabled by default
- Automagic unsplit-brain by [ctime|mtime|size|majority] for AFR
- glusterfs-coreutils packaged for Fedora and CentOS Storage SIG
- WORM, Retention and Compliance

Change-Id: I17c0ac9e3c44205f657bd4445daa0c23b0fe1a82
BUG: 1317278
Signed-off-by: Dustin Black <dblack@redhat.com>
Reviewed-on: http://review.gluster.org/14616
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Diffstat (limited to 'doc')
-rw-r--r--  doc/release-notes/3.8.0.md  144
1 file changed, 126 insertions(+), 18 deletions(-)
diff --git a/doc/release-notes/3.8.0.md b/doc/release-notes/3.8.0.md
index 58059fe5e33..3c0d71b8952 100644
--- a/doc/release-notes/3.8.0.md
+++ b/doc/release-notes/3.8.0.md
@@ -31,36 +31,144 @@ Building directly from the git repository has not changed.
### FIXME: insert more useful release notes here
+### Mandatory lock support for Multiprotocol environment
+*Notes for users:*
+With this release, GlusterFS is now capable of performing file operations
+based on core mandatory-locking concepts. Apart from Linux kernel-style
+semantics, GlusterFS volumes can now be configured in a special mode where all
+traditional fcntl locks are treated as mandatory, so that the presence of
+locks is detected before every data-modifying file operation acting on a
+particular byte range. This helps applications operate on more accurate data
+during concurrent access to various byte ranges within a file. Please refer to
+the Administration Guide for more details:
+
+http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Mandatory%20Locks/
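+
+A minimal sketch (assuming the `locks.mandatory-locking` volume option and
+values described in the guide above; treat the exact names as an assumption):
+
+```bash
+# Treat all traditional fcntl locks on this volume as mandatory:
+gluster volume set <VOLUME> locks.mandatory-locking forced
+
+# A stop/start of the volume may be needed for the change to take effect:
+gluster volume stop <VOLUME>
+gluster volume start <VOLUME>
+```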
+
+### Gluster/NFS disabled by default
+*Notes for users:*
+The legacy Gluster NFS server (a.k.a. gnfs) is now disabled by default when new
+volumes are created. Users are encouraged to use NFS-Ganesha with FSAL_GLUSTER
+instead of gnfs. NFS-Ganesha is a full-featured server that is actively
+developed and maintained. It supports NFSv3, NFSv4, and NFSv4.1. The
+documentation
+(http://gluster.readthedocs.io/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Intergration/)
+describes how to configure and use NFS-Ganesha. Users that prefer to use the
+gnfs server (NFSv3 only) can enable the service per volume with the following
+command:
+
+```bash
+# gluster volume set <VOLUME> nfs.disable false
+```
+
+Existing volumes that have gnfs enabled will remain enabled unless explicitly
+disabled. You cannot run both gnfs and NFS-Ganesha servers on the same host.
+
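+For example, an existing volume can be moved off gnfs by explicitly disabling
+it:
+
+```bash
+# Explicitly disable the legacy gnfs server on an existing volume:
+gluster volume set <VOLUME> nfs.disable true
+```
+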
+The plan is to phase gnfs out of Gluster over the next several releases,
+starting with documenting it as officially deprecated, then not compiling and
+packaging the components, and ultimately removing the component sources from the
+source tree.
+
### SEEK
-*Notes for users:* All modern filesystems support SEEK_DATA and SEEK_HOLE with
-the lseek() systemcall. This improves performance when reading sparse files.
-GlusterFS now supports the SEEK operation as well. Linux kernel 4.5 comes with
-an improved FUSE module where lseek() can be used. QEMU can now detect holes in
-VM images when using the Gluster-block driver.
+*Notes for users:*
+All modern filesystems support SEEK_DATA and SEEK_HOLE with the lseek() system
+call. This improves performance when reading sparse files. GlusterFS now
+supports the SEEK operation as well. Linux kernel 4.5 comes with an improved
+FUSE module where lseek() can be used. QEMU can now detect holes in VM images
+when using the Gluster-block driver.
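+
+As an illustration (the gluster URI and image name here are hypothetical),
+hole detection can be observed with `qemu-img map`, which reports which ranges
+of an image stored on a volume contain data:
+
+```bash
+# Map the allocated (data) and sparse (hole) ranges of a VM image
+# accessed through QEMU's gluster block driver:
+qemu-img map gluster://server1/myvolume/vm.img
+```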
-*Limitations:* The deprecated stripe functionality has not been extended with
-SEEK. SEEK for sharding has not been implemented yet, and is expected to follow
-later in a 3.8 update (bug 1301647). NFS-Ganesha will support SEEK over NFSv4 in
-the near future, posisbly with the upcoming nfs-ganesha 2.4.
+*Limitations:*
+The deprecated stripe functionality has not been extended with SEEK. SEEK for
+sharding has not been implemented yet, and is expected to follow later in a 3.8
+update (bug 1301647). NFS-Ganesha will support SEEK over NFSv4 in the near
+future, possibly with the upcoming nfs-ganesha 2.4.
-#### Tiering aware Geo-replication
+### Tiering aware Geo-replication
*Notes for users:*
-Tiering moves files between hot/cold tier bricks.
-Geo-replication syncs files from bricks in Master volume to Slave volume.
-With this, Users can configure geo-replication session in a Tiering based volume.
+Tiering moves files between hot and cold tier bricks. Geo-replication syncs
+files from bricks in the Master volume to the Slave volume. With this release,
+users can configure a geo-replication session on a tiered volume.
*Limitations:*
Configuring geo-replication session in Tiering based volume is same as earlier.
-But, before attaching / detaching tier few steps needs to be followd:
+But before attaching or detaching a tier, a few steps need to be followed:
-Before attaching a tier to a volume with existing geo-replication session, the session needs to be stopped.
-Please find detailed steps here:
+Before attaching a tier to a volume with an existing geo-replication session,
+the session needs to be stopped. Please find detailed steps here:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Data_Tiering-Attach_Volumes.html#idp11442496
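+
+A minimal sketch of the sequence (volume and host names are hypothetical; the
+attach-tier step itself is described in the guide linked above):
+
+```bash
+# Stop the geo-replication session before attaching the tier:
+gluster volume geo-replication mastervol slavehost::slavevol stop
+
+# ... attach the hot tier as per the linked guide ...
+
+# Restart the session once the tier has been attached:
+gluster volume geo-replication mastervol slavehost::slavevol start
+```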
-While detaching a tier from a Tiering based volume with existing geo-replication session, checkpoint of session needs to be done.
-Please find detailed steps here:
+When detaching a tier from a tiered volume with an existing geo-replication
+session, a checkpoint of the session needs to be taken. Please find detailed
+steps here:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Data_Tiering-Detach_Tier.html#idp32905264
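+
+A minimal sketch (names are hypothetical; see the linked guide for the full
+procedure):
+
+```bash
+# Set a checkpoint so completion of pending syncs can be verified:
+gluster volume geo-replication mastervol slavehost::slavevol config checkpoint now
+
+# Watch the session until the checkpoint shows as completed:
+gluster volume geo-replication mastervol slavehost::slavevol status detail
+```
+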
+### Automagic unsplit-brain by [ctime|mtime|size|majority] for AFR
+*Notes for users:*
+A new volume option, `cluster.favorite-child-policy`, has been introduced. It
+can be used to automatically resolve split-brains in replica volumes without
+having to use the gluster CLI or the fuse-mount/setfattr-based methods to
+manually select a source. Healing happens automatically based on the policy
+that this option is set to. See `gluster volume set help | grep
+cluster.favorite-child-policy -A3` for the various policies that you can set.
+The default value is `none`, i.e. this feature is not enabled by default.
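+
+For example, to resolve split-brains automatically in favour of the copy with
+the latest modification time:
+
+```bash
+# Use the replica with the most recent mtime as the healing source:
+gluster volume set <VOLUME> cluster.favorite-child-policy mtime
+```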
+
+*Limitations:*
+`cluster.favorite-child-policy` applies to all files in the volume. It is
+assumed that if this option is enabled with a particular policy, you do not
+want to examine the split-brain files on a per-file basis and use the
+appropriate gluster split-brain resolution CLI to resolve them individually
+with different policies.
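+
+The per-file resolution CLI remains available for that case; for example (the
+file path is hypothetical, relative to the volume root):
+
+```bash
+# Resolve a single split-brain file by keeping the bigger copy:
+gluster volume heal <VOLUME> split-brain bigger-file /dir/file.txt
+```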
+
+### glusterfs-coreutils packaged for Fedora and CentOS Storage SIG
+*Notes for users:*
+This is a set of core utilities designed to act on GlusterFS volumes using the
+native C API, similar to the standard Linux coreutils such as cp, ls, and mv.
+These utilities can be used to access volumes directly, without first mounting
+them via some protocol. Please refer to the Admin Guide for more details:
+
+http://gluster.readthedocs.org/en/latest/Administrator%20Guide/GlusterFS%20Coreutils/
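+
+As a usage sketch (assuming the gf-prefixed utilities shipped by
+glusterfs-coreutils and their glfs:// URI form; names are illustrative):
+
+```bash
+# List a volume's contents over the native API, with no mount involved:
+gfls glfs://server1/myvolume/
+
+# Copy a local file onto the volume:
+gfcp /tmp/data.txt glfs://server1/myvolume/data.txt
+```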
+
+### WORM, Retention and Compliance
+*Notes for users:*
+This feature brings a WORM-based compliance/archiving solution to GlusterFS.
+It adds a file-level WORM/Retention feature to the existing implementation of
+the WORM translator, which works at the volume level. Users can switch between
+the volume-level WORM and the file-level WORM/Retention features. The feature
+works only if the "read-only" and "worm" options on the volume are "off" and
+the "worm-file-level" option is "on". A file can be in any of these three
+states:
+
+1. Normal: normal operations can be performed on the file
+2. WORM-Retained: the file is immutable and undeletable
+3. WORM: the file is immutable but deletable
+
+Four volume set options have been added (see the example below):
+1. `features.worm-file-level`: enables the file-level WORM/Retention feature.
+   It is "off" by default.
+2. `features.retention-mode`: takes two values:
+   1. `relax`: allows users to increase or decrease the retention period of a
+      WORM/Retained file (it cannot be decreased below the modification time
+      of the file)
+   2. `enterprise`: allows users only to increase the retention period of a
+      WORM/Retained file
+3. `features.auto-commit-period`: the time period at/after which the
+   auto-commit feature looks for dormant files for state transition. The
+   default value is 180 seconds.
+4. `features.default-retention-period`: the time period for which a file
+   remains undeletable. This value is also used to find dormant files: files
+   that have not been modified for this long will qualify for state
+   transition. The default value is 120 seconds.
+
+Users can perform the transition manually with `chmod -w <filename>` or an
+equivalent command; otherwise the lazy auto-commit takes place when I/O is
+triggered, using the timeouts for untouched files. The next I/O operation
+(link, unlink, rename, truncate) will cause the transition.
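+
+A minimal sketch of enabling the feature and manually transitioning a file
+(the mount path is hypothetical; option names are those listed above):
+
+```bash
+# Enable file-level WORM/Retention on the volume:
+gluster volume set <VOLUME> features.worm-file-level on
+
+# Optionally choose the retention mode and periods (180s/120s are the
+# defaults listed above):
+gluster volume set <VOLUME> features.retention-mode relax
+gluster volume set <VOLUME> features.auto-commit-period 180
+gluster volume set <VOLUME> features.default-retention-period 120
+
+# Manually transition a file to WORM-Retained from a client mount:
+chmod -w /mnt/glustervol/archive/report.txt
+```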
+
+*Limitations:*
+1. No data validation of read-only data, i.e., integration with BitRot is not
+   done
+2. Internal operations like tiering, rebalancing, and self-healing will fail
+   on WORMed files
+3. No control over ctime
+
## Bugs addressed