author     Pranith Kumar K <pkarampu@redhat.com>          2016-11-14 12:15:26 +0530
committer  Pranith Kumar Karampuri <pkarampu@redhat.com>  2016-11-15 06:45:21 -0800
commit     96770bc3c2d44a20cd9fe0b2ca02fac28ae750a1 (patch, tag: v3.9.0)
tree       b70da2c3c61868bd35d0c3e9a38cd66f50818e57
parent     82b29e0de686b7bd9f36ddf49376fc07f0c42125 (diff)

    doc: finalize release-notes for 3.9.0

    Added the BZs from last RC till now. Added Release notes for the major
    features as well.

    BUG: 1350744
    Change-Id: Iee1dae0d5776a3e3695bfb087591964f66802dc9
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/15841
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
 doc/release-notes/3.9.0.md | 305
 1 file changed, 297 insertions(+), 8 deletions(-)
diff --git a/doc/release-notes/3.9.0.md b/doc/release-notes/3.9.0.md
index 675760d279b..ca4020d4af8 100644
--- a/doc/release-notes/3.9.0.md
+++ b/doc/release-notes/3.9.0.md
@@ -1,17 +1,258 @@
-# Work in progress release notes for Gluster 3.9.0 (RC1)
+# Release notes for Gluster 3.9.0
-These are the current release notes for Release Condidate 1. Follow up changes
-will add more user friendly notes and instructions.
+This is a major release that includes a huge number of changes. Many
+improvements contribute to better support of Gluster with containers and
+running your storage on the same server as your hypervisors. Lots of work has
+been done to integrate with other projects that are part of the Open Source
+storage ecosystem.
-The release-notes are being worked on by maintainers and the developers of the
-different features. Assistance of others is welcome! Contributions can be done
-in [this etherpad](https://public.pad.fsfe.org/p/glusterfs-3.9-release-notes).
+The most notable features and changes are documented on this page. A full list
+of bugs that have been addressed is included further below.
-(FIXME: insert useful release notes here)
+## Major changes and features
+
+### Introducing reset-brick command
+*Notes for users:*
+The reset-brick command provides support to reformat/replace the disk(s)
+represented by a brick within a volume. This is helpful, for example, when a disk goes bad.
+
+Start reset process -
+```bash
+gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH start
+```
+The above command kills the respective brick process. Now the brick can be reformatted.
+
+To restart the brick after modifying configuration -
+```bash
+gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH HOSTNAME:BRICKPATH commit
+```
+If the brick was killed in order to replace it with the same brick path, restart it with the following command -
+```bash
+gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH HOSTNAME:BRICKPATH commit force
+```
+
+*Limitations:*
+1. Resetting a brick kills the brick process in question. During this
+period the brick will not be available for I/O.
+2. Replacing a brick with this command will work only if both the brick paths
+are the same and belong to the same volume.
+
+### Get node level status of a cluster
+
+*Notes for users:*
+The get-state command provides node-level status of a trusted storage pool from
+the point of view of glusterd in a parseable format. External applications can
+invoke the command on all nodes of the cluster, and parse and collate the data
+obtained from all these nodes to get a complete picture of the state of the
+cluster.
+
+```bash
+# gluster get-state <glusterd> [odir <path/to/output/dir>] [file <filename>]
+```
+This would dump data points that reflect the local state representation of the
+cluster as maintained in glusterd (no other daemons are supported as of now)
+to a file inside the specified output directory. The default output directory
+and filename are /var/run/gluster and glusterd_state_<timestamp>, respectively.
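+
+For example, a hypothetical invocation that writes the state dump to a custom
+location instead of the defaults:
+```bash
+# gluster get-state glusterd odir /tmp/gluster file state_dump
+```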
+
+Following are the sections in the output:
+1. `Global`: UUID and op-version of glusterd
+2. `Global options`: Displays cluster specific options that have been set
+explicitly through the volume set command.
+3. `Peers`: Displays the peer node information, including hostname and
+connection status
+4. `Volumes`: Displays the list of volumes created on this node along with
+detailed information on each volume.
+5. `Services`: Displays the list of the services configured on this node along
+with their corresponding statuses.
+
+*Limitations:*
+1. This only supports glusterd.
+2. Does not provide the complete cluster state. Data must be collated from all
+nodes by an external application to get the complete cluster state.
+
+### Multi threaded self-heal for Disperse volumes
+*Notes for users:*
+Users now have the ability to configure multi-threaded self-heal in disperse volumes using the following commands:
+```bash
+# Control the number of parallel heals in SHD (default is 1):
+gluster volume set <volname> disperse.shd-max-threads [1-64]
+# Control the number of heals that can wait in SHD (default is 1024):
+gluster volume set <volname> disperse.shd-wait-qlength [1-65536]
+```
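+
+For example, to allow four parallel heals on a hypothetical volume named
+`ec-vol`:
+```bash
+# gluster volume set ec-vol disperse.shd-max-threads 4
+```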
+
+### Hardware extension acceleration in Disperse volumes
+*Notes for users:*
+If the client hardware has special instructions which can be used in erasure code calculations, they will be used automatically. At the moment this support is added for the CPU extensions: `x64`, `sse`, and `avx`.
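+
+Assuming the `disperse.cpu-extensions` volume option is available, the
+automatic detection can also be overridden explicitly:
+```bash
+# gluster volume set <volname> disperse.cpu-extensions avx
+```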
+
+### Lock revocation feature
+*Notes for users:*
+1. Motivation: Prevents cluster instability caused by misbehaving clients that make bricks OOM due to inode/entry lock pile-ups.
+2. Adds an option to strip clients of entry/inode locks after N seconds
+3. Adds an option to clear ALL locks should the revocation threshold get hit
+4. Adds an option to clear all or granted locks should the max-blocked threshold get hit (can be used in combination with revocation-clear-all).
+5. Adds logging to indicate revocation event & reason
+6. Options are:
+```bash
+# gluster volume set <volname> features.locks-revocation-secs <integer; 0 to disable>
+# gluster volume set <volname> features.locks-revocation-clear-all [on/off]
+# gluster volume set <volname> features.locks-revocation-max-blocked <integer>
+```
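+
+For example, to revoke locks held longer than 60 seconds and clear all locks
+on the affected inode when that threshold is hit (volume name is
+illustrative):
+```bash
+# gluster volume set myvol features.locks-revocation-secs 60
+# gluster volume set myvol features.locks-revocation-clear-all on
+```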
+
+### On demand scrubbing for Bitrot Detection
+*Notes for users:* With the 'ondemand' scrub option, you don't need to wait for the scrub-frequency
+to expire. As the option name itself says, the scrubber can be initiated on demand to detect
+corruption. If the scrubber is already running, this option is a no-op.
+```bash
+# gluster volume bitrot <volume-name> scrub ondemand
+```
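+
+Note that bitrot detection must already be enabled on the volume for the
+scrubber to run; for example (volume name is illustrative):
+```bash
+# gluster volume bitrot myvol enable
+# gluster volume bitrot myvol scrub ondemand
+```
+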
+### Improvements in Gluster NFS-Ganesha integration
+*Notes for users:*
+With this release, the major change is that all the ganesha-related configuration files are stored on the shared storage volume mount point, instead of keeping a separate local copy in the '/etc/ganesha' folder on each node.
+
+For new users, before enabling nfs-ganesha:
+
+1. Create a directory named *nfs-ganesha* in the shared storage mount point (*/var/run/gluster/shared_storage/*).
+
+2. Create *ganesha.conf* & *ganesha-ha.conf* in that directory with the required details filled in.
+
+For existing users, do the following before starting the nfs-ganesha service:
+
+1. Copy all the contents of the */etc/ganesha* directory (including the *.export_added* file) to */var/run/gluster/shared_storage/nfs-ganesha* from any of the ganesha nodes
+
+2. On each node in the ganesha cluster, create a symlink at */etc/ganesha/ganesha.conf* pointing to */var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf*
+
+3. Change the path for each export entry in the *ganesha.conf* file
+
+```sh
+# Example: if a volume "test" was exported, ganesha.conf will have this export entry:
+%include "/etc/ganesha/exports/export.test.conf"
+# Change that line to:
+%include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.test.conf"
+```
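+
+A minimal sketch of the copy and symlink steps above for existing users (run
+the copy once from any ganesha node; create the symlink on every node):
+```sh
+mkdir -p /var/run/gluster/shared_storage/nfs-ganesha
+cp -a /etc/ganesha/. /var/run/gluster/shared_storage/nfs-ganesha/
+ln -sf /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf /etc/ganesha/ganesha.conf
+```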
+
+In addition, the following changes have been made:
+* The entry "HA_VOL_SERVER=" in *ganesha-ha.conf* is no longer required.
+* A new resource-agent called portblock (available in >= *resource-agents-3.9.5* package) is added to the cluster configuration to speed up the nfs-client connections post IP failover or failback. This may be noticed while looking at the cluster configuration status using the command *pcs status*.
+
+### Availability of python bindings to libgfapi
+
+The official python bindings for the GlusterFS libgfapi C library interface are
+mostly API complete. The complete API reference and documentation can be
+found at [libgfapi-python.rtfd.io](http://libgfapi-python.rtfd.io/)
+
+The python bindings have been packaged and made available on
+[PyPI](https://pypi.python.org/pypi/gfapi/).
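+
+For example, the bindings can be installed from PyPI with pip:
+```bash
+# pip install gfapi
+```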
+
+### Small file improvements in Gluster with md-cache (Experimental)
+*Notes for users:*
+With this release, the metadata cache on the client side is integrated with the
+cache-invalidation feature so that clients can cache longer without
+compromising consistency. By enabling the metadata cache and cache
+invalidation features and extending the cache timeout to 600s, we have seen
+performance improvements in metadata operations like creates, ls/stat, chmod,
+rename, and delete. The performance improvement is significant in SMB access of
+gluster volumes, but as a cascading effect the improvement is also seen on
+FUSE/Native access and NFS access.
+
+Use the options below, in the order mentioned, to enable the features:
+```bash
+ # gluster volume set <volname> features.cache-invalidation on
+ # gluster volume set <volname> features.cache-invalidation-timeout 600
+ # gluster volume set <volname> performance.stat-prefetch on
+ # gluster volume set <volname> performance.cache-invalidation on
+ # gluster volume set <volname> performance.cache-samba-metadata on # Only for SMB access
+ # gluster volume set <volname> performance.md-cache-timeout 600
+```
+
+### Real time Cluster notifications using Events APIs
+Imagine a Gluster monitoring system that displays the list of volumes and
+their state. To show the realtime status, the monitoring app needs to query
+Gluster at a regular interval to check volume status, new volumes, etc. If
+the polling interval is 5 seconds, the monitoring app has to run the gluster
+volume info command ~17000 times a day!
+
+With the Gluster 3.9 release, Gluster provides close-to-realtime
+notifications and alerts for Gluster cluster state changes. Webhooks can be
+registered to listen to Events emitted by Gluster. More details about this
+new feature are available here:
+
+http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Events%20APIs
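+
+A short sketch, assuming the `gluster-eventsapi` CLI shipped with the
+glusterfs-events package (the webhook URL is illustrative):
+```bash
+# gluster-eventsapi webhook-add http://monitor.example.com/listen
+# gluster-eventsapi status
+```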
+
+### Geo-replication improvements
+#### Documentation improvements:
+
+The upstream documentation has been rewritten to reflect the latest version of
+Geo-replication, and stale/duplicate documentation has been removed. We are
+still working on adding Troubleshooting and Cluster expand/shrink notes.
+The latest version of the documentation is available here:
+http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Geo%20Replication
+
+#### Geo-replication Events are available for Events API consumers:
+Events APIs is a new Gluster feature available with the 3.9 release;
+most of the events from Geo-replication have been added to eventsapi.
+
+Read more about the Events APIs and Geo-replication events here:
+http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Events%20APIs
+
+#### New simplified command to setup Non root Geo-replication
+
+Non root Geo-replication setup was not easy, requiring multiple manual
+steps. These steps have now been simplified. Read more about
+the new steps in the Admin guide:
+
+http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Geo%20Replication/#slave-user-setup
+
+#### New command to generate SSH keys(Alternative command to `gsec_create`)
+
+The `gluster system:: execute gsec_create` command generates ssh keys on
+every Master cluster node and copies them to the initiating node. This command
+silently ignores errors if any node in the cluster is down, and it will not
+collect SSH keys from that node. When the Geo-rep create push-pem command
+is issued, it copies public keys only from those nodes which were up
+during gsec_create. This causes Geo-rep to go Faulty when that
+master node tries to make a connection to the slave nodes. With the new
+command, the output shows if any Master node was down while generating ssh
+keys. Read more about `gluster-georep-sshkey` here:
+
+http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Geo%20Replication/#setting-up-the-environment-for-geo-replication
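+
+A sketch of the new tool's usage, assuming its `generate` subcommand (run
+from any master node):
+```bash
+# gluster-georep-sshkey generate
+```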
+
+#### Logging improvements
+
+New logs have been added; from the logs we can now clearly understand what is
+going on. Note: this feature may change the logging format of existing log
+messages, so please update any parsers used to parse Geo-rep logs.
+
+Patch: http://review.gluster.org/15710
+
+#### New configuration option available: changelog-log-level
+
+All the changelog related log messages are logged in
+`/var/log/glusterfs/geo-replication/<SESSION>/*.changes.log` on Master
+nodes. The log level was hard coded as `TRACE` for Changelog logs. A new
+configuration option is provided to modify the changelog log level; it
+defaults to `INFO`.
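+
+For example, assuming a session between mastervol and slavehost::slavevol
+(names illustrative), the level can be changed with:
+```bash
+# gluster volume geo-replication mastervol slavehost::slavevol config changelog-log-level DEBUG
+```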
+
+## Behavior changes
+- [#1221623](https://bugzilla.redhat.com/1221623): Earlier, the ports GlusterD
+  allocated for daemons like brick processes, quotad, shd, et al. were
+  persistent through the volume's life cycle, so every restart of the
+  process(es) or a node reboot would try to use the same ports which were
+  allocated the first time. From release-3.9 onwards, GlusterD will try to
+  allocate a fresh port once a daemon is restarted or the node is rebooted.
+- [#1348944](https://bugzilla.redhat.com/1348944): With the 3.9 release, the
+  default log file for glusterd has been renamed to glusterd.log from
+  etc-glusterfs-glusterd.vol.log.
+
+## Known Issues
+- [#1387878](https://bugzilla.redhat.com/1387878): add-brick on a vm-store
+  configuration which has sharding enabled leads to vm corruption. To work
+ around this issue, one can scale up by creating more volumes until this issue
+ is fixed.
## Bugs addressed
-A total of 510 (FIXME) patches has been sent, addressing 375 (FIXME) bugs:
+A total of 571 patches have been sent, addressing 422 bugs:
- [#762184](https://bugzilla.redhat.com/762184): Support mandatory locking in glusterfs
- [#789278](https://bugzilla.redhat.com/789278): Issues reported by Coverity static analysis tool
@@ -237,6 +478,7 @@ A total of 510 (FIXME) patches has been sent, addressing 375 (FIXME) bugs:
- [#1350371](https://bugzilla.redhat.com/1350371): ganesha/glusterd : remove 'HA_VOL_SERVER' from ganesha-ha.conf
- [#1350383](https://bugzilla.redhat.com/1350383): distaf: Modified distaf gluster config file
- [#1350427](https://bugzilla.redhat.com/1350427): distaf: Modified tier_attach() to get bricks path for attaching tier from the available bricks in server
+- [#1350744](https://bugzilla.redhat.com/1350744): GlusterFS 3.9.0 tracker
- [#1350793](https://bugzilla.redhat.com/1350793): build: remove absolute paths from glusterfs spec file
- [#1350867](https://bugzilla.redhat.com/1350867): RFE: FEATURE: Lock revocation for features/locks xlator
- [#1351021](https://bugzilla.redhat.com/1351021): [DHT]: Rebalance info for remove brick operation is not showing after glusterd restart
@@ -319,6 +561,7 @@ A total of 510 (FIXME) patches has been sent, addressing 375 (FIXME) bugs:
- [#1364026](https://bugzilla.redhat.com/1364026): glfs_fini() crashes with SIGSEGV
- [#1364420](https://bugzilla.redhat.com/1364420): [RFE] History Crawl performance improvement
- [#1364449](https://bugzilla.redhat.com/1364449): posix: honour fsync flags in posix_do_zerofill
+- [#1364529](https://bugzilla.redhat.com/1364529): api: revert glfs_ipc_xd intended for 4.0
- [#1365455](https://bugzilla.redhat.com/1365455): [AFR]: Files not available in the mount point after converting Distributed volume type to Replicated one.
- [#1365489](https://bugzilla.redhat.com/1365489): glfs_truncate missing
- [#1365506](https://bugzilla.redhat.com/1365506): gfapi: use const qualifier for glfs_*timens()
@@ -388,3 +631,49 @@ A total of 510 (FIXME) patches has been sent, addressing 375 (FIXME) bugs:
- [#1376477](https://bugzilla.redhat.com/1376477): [RFE] DHT Events
- [#1376874](https://bugzilla.redhat.com/1376874): RFE : move ganesha related configuration into shared storage
- [#1377288](https://bugzilla.redhat.com/1377288): The GlusterFS Callback RPC-calls always use RPC/XID 42
+- [#1377386](https://bugzilla.redhat.com/1377386): glusterd experiencing repeated connect/disconnect messages when shd is down
+- [#1377570](https://bugzilla.redhat.com/1377570): EC: Set/unset dirty flag for all the update operations
+- [#1378814](https://bugzilla.redhat.com/1378814): Files not being opened with o_direct flag during random read operation (Glusterfs 3.8.2)
+- [#1378948](https://bugzilla.redhat.com/1378948): removal of file from nfs mount crashes ganesha server
+- [#1379028](https://bugzilla.redhat.com/1379028): Modifications to AFR Events
+- [#1379287](https://bugzilla.redhat.com/1379287): warning messages seen in glusterd logs for each 'gluster volume status' command
+- [#1379528](https://bugzilla.redhat.com/1379528): Poor smallfile read performance on Arbiter volume compared to Replica 3 volume
+- [#1379707](https://bugzilla.redhat.com/1379707): gfapi: Fix fd ref leaks
+- [#1379996](https://bugzilla.redhat.com/1379996): Volume restart couldn't re-export the volume exported via ganesha.
+- [#1380252](https://bugzilla.redhat.com/1380252): glusterd fails to start without installing glusterfs-events package
+- [#1383591](https://bugzilla.redhat.com/1383591): glfs_realpath() should not return malloc()'d allocated memory
+- [#1383692](https://bugzilla.redhat.com/1383692): GlusterFS fails to build on old Linux distros with linux/oom.h missing
+- [#1383913](https://bugzilla.redhat.com/1383913): spurious heal info as pending heal entries never end on an EC volume while IOs are going on
+- [#1385224](https://bugzilla.redhat.com/1385224): arbiter volume write performance is bad with sharding
+- [#1385236](https://bugzilla.redhat.com/1385236): invalid argument warning messages seen in fuse client logs 2016-09-30 06:34:58.938667] W [dict.c:418ict_set] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x58722) 0-dict: !this || !value for key=link-count [Invalid argument]
+- [#1385451](https://bugzilla.redhat.com/1385451): "nfs.disable: on" is not showing in Vol info by default for the 3.7.x volumes after updating to 3.9.0
+- [#1386072](https://bugzilla.redhat.com/1386072): Spurious permission denied problems observed
+- [#1386178](https://bugzilla.redhat.com/1386178): eventsapi/georep: Events are not available for Checkpoint and Status Change
+- [#1386338](https://bugzilla.redhat.com/1386338): pmap_signin event fails to update brickinfo->signed_in flag
+- [#1387099](https://bugzilla.redhat.com/1387099): Boolean attributes are published as string
+- [#1387492](https://bugzilla.redhat.com/1387492): Error and warning message getting while removing glusterfs-events package
+- [#1387502](https://bugzilla.redhat.com/1387502): Incorrect volume type in the "glusterd_state" file generated using CLI "gluster get-state"
+- [#1387564](https://bugzilla.redhat.com/1387564): [Eventing]: UUID is showing zeros in the event message for the peer probe operation.
+- [#1387894](https://bugzilla.redhat.com/1387894): Regression caused by enabling client-io-threads by default
+- [#1387960](https://bugzilla.redhat.com/1387960): Sequential volume start&stop is failing with SSL enabled setup.
+- [#1387964](https://bugzilla.redhat.com/1387964): [Eventing]: 'gluster vol bitrot <volname> scrub ondemand' does not produce an event
+- [#1387975](https://bugzilla.redhat.com/1387975): Continuous warning messages getting when one of the cluster node is down on SSL setup.
+- [#1387981](https://bugzilla.redhat.com/1387981): [Eventing]: 'gluster volume tier <volname> start force' does not generate a TIER_START event
+- [#1387984](https://bugzilla.redhat.com/1387984): Add a test script for compound fops changes in AFR
+- [#1387990](https://bugzilla.redhat.com/1387990): [RFE] Geo-replication Logging Improvements
+- [#1388150](https://bugzilla.redhat.com/1388150): geo-replica slave node goes faulty for non-root user session due to fail to locate gluster binary
+- [#1388323](https://bugzilla.redhat.com/1388323): fuse mount point not accessible
+- [#1388350](https://bugzilla.redhat.com/1388350): Memory Leaks in snapshot code path
+- [#1388470](https://bugzilla.redhat.com/1388470): throw warning to show that older tier commands are depricated and will be removed.
+- [#1388563](https://bugzilla.redhat.com/1388563): [Eventing]: 'VOLUME_REBALANCE' event messages have an incorrect volume name
+- [#1388579](https://bugzilla.redhat.com/1388579): crypt: changes needed for openssl-1.1 (coming in Fedora 26)
+- [#1388731](https://bugzilla.redhat.com/1388731): [GSS]glusterfind pre session hangs indefinitely in RHGS 3.1.3
+- [#1388912](https://bugzilla.redhat.com/1388912): glusterfs can't self heal character dev file for invalid dev_t parameters
+- [#1389675](https://bugzilla.redhat.com/1389675): Experimental translators and 4.0 features need to be disabled for release-3.9
+- [#1389742](https://bugzilla.redhat.com/1389742): build: incorrect Requires: for portblock resource agent
+- [#1390837](https://bugzilla.redhat.com/1390837): write-behind: flush stuck by former failed write
+- [#1391448](https://bugzilla.redhat.com/1391448): md-cache: Invalidate cache entry in case of OPEN with O_TRUNC
+- [#1392286](https://bugzilla.redhat.com/1392286): gfapi clients crash while using async calls due to double fd_unref
+- [#1392718](https://bugzilla.redhat.com/1392718): Quota version not changing in the quota.conf after upgrading to 3.7.1 from 3.6.1
+- [#1392844](https://bugzilla.redhat.com/1392844): Hosted Engine VM paused post replace-brick operation
+- [#1392869](https://bugzilla.redhat.com/1392869): The FUSE client log is filling up with posix_acl_default and posix_acl_access messages