 doc/release-notes/4.0.0.md | 442 ++++++++++++++++++++++++++++---------
 1 file changed, 287 insertions(+), 155 deletions(-)
diff --git a/doc/release-notes/4.0.0.md b/doc/release-notes/4.0.0.md
index 61eb6d805ec..caac4c25691 100644
--- a/doc/release-notes/4.0.0.md
+++ b/doc/release-notes/4.0.0.md
@@ -1,17 +1,22 @@
# Release notes for Gluster 4.0.0
-** DRAFT DRAFT ! DRAFT !!!**
-
-**Header for the release - TBD**
+The Gluster community celebrates 13 years of development with this latest
+release, Gluster 4.0. Including improved integration with containers, an
+enhanced user experience, and a next-generation management framework, the
+4.0 release solidifies Gluster as the storage choice for scale-out
+distributed file systems and cloud-native developers.
The most notable features and changes are documented on this page. A full list
of bugs that have been addressed is included further below.
+- [Announcements](#announcements)
- [Major changes and features](#major-changes-and-features)
- [Major issues](#major-issues)
- [Bugs addressed in the release](#bugs-addressed)
-Further, as 3.13 is a short term maintenance release, features included
+## Announcements
+
+1. As 3.13 is a short term maintenance release, features included
in that release are available in 4.0.0 as well, and could be of interest to
users upgrading to 4.0.0 from releases older than 3.13. The 3.13 [release notes](http://docs.gluster.org/en/latest/release-notes/)
capture the list of features that were introduced with 3.13.
@@ -19,6 +24,19 @@ captures the list of features that were introduced with 3.13.
**NOTE:** As 3.13 was a short term maintenance release, it will reach end of
life (EOL) with the release of 4.0.0. ([reference](https://www.gluster.org/release-schedule/))
+2. Releases that receive maintenance updates after the 4.0 release are 3.10,
+3.12, and 4.0 ([reference](https://www.gluster.org/release-schedule/))
+
+3. With this release, the CentOS Storage SIG will not build server packages for
+CentOS 6. Server packages will be available for CentOS 7 only. For ease of
+migration, client packages will continue to be published and maintained for CentOS 6.
+
+**NOTE**: This change was announced [here](http://lists.gluster.org/pipermail/gluster-users/2018-January/033212.html)
+
+4. TBD: Release version changes
+
+5. TBD: Glusterd2 as the mainstream management choice from the next release
+
## Major changes and features
Features are categorized into the following sections,
@@ -34,6 +52,7 @@ Features are categorized into the following sections,
#### 1. GlusterD2
**Notes for users:**
+TBD
- Need GD2 team to fill in enough links and information here covering,
- What it is
- Install and configuration steps
@@ -47,11 +66,11 @@ Features are categorized into the following sections,
The lack of live monitoring support on top of GlusterFS to date was a
limiting factor for many users (and in many cases for developers too).
-[Statedump](docs.gluster.org/en/latest/Troubleshooting/statedump/) did some work of helping during debugging, but was heavy for
+[Statedump](https://docs.gluster.org/en/latest/Troubleshooting/statedump/) is useful for debugging, but is heavy for
live monitoring.
Further, the existence of `debug/io-stats` translator was not known to many and
-`gluster volume profile` was not recommended as it took a hit on performance.
+`gluster volume profile` was not recommended as it impacted performance.
With this release, glusterfs's core infrastructure gains mechanisms to
expose internal information, avoiding the heavyweight nature of prior
@@ -59,9 +78,8 @@ monitoring mechanisms.
#### 1. Metrics collection across every FOP in every xlator
**Notes for users:**
-Gluster, with this release, has in-built latency measures in the xlator
-abstraction, thus enabling capture of metrics and usage patterns across
-workloads.
+Gluster now has in-built latency measures in the xlator abstraction, thus
+enabling capture of metrics and usage patterns across workloads.
These measures are currently enabled by default.
@@ -69,12 +87,12 @@ These measures are currently enabled by default.
This feature is auto-enabled and cannot be disabled.
Providing means to disable the same in future releases also may not be made
-available, as the data generated is deemed critical to understand and
-troubleshoot gluster usage patterns.
+available, as the data generated is deemed critical to understand, tune, and
+troubleshoot gluster.
#### 2. Monitoring support
**Notes for users:**
-Currently, only project which consumes the metrics and provides basic
+Currently, the only project which consumes metrics and provides basic
monitoring is [glustermetrics](https://github.com/amarts/glustermetrics), which provides a good idea on how to
utilize the metrics dumped from the processes.
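As a rough sketch of how such a consumer obtains the data (the SIGUSR2 trigger and the `/var/run/gluster/metrics` dump location are assumptions drawn from the glustermetrics project, and may differ on your installation):

```
# Assumed workflow: a gluster process dumps its metrics on SIGUSR2;
# the dump directory /var/run/gluster/metrics is an assumption and
# may differ on your system.
pid=$(pgrep -f glusterfsd | head -n1)   # pick one brick process
kill -USR2 "$pid"                       # ask it to dump metrics
ls -l /var/run/gluster/metrics/         # inspect the dumped files
```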
@@ -95,68 +113,50 @@ deemed to have a good copy, instead of all, thus improving the efficiency of the
operation.
#### 2. Allow md-cache to serve nameless lookup from cache
-With this enhancement, some more lookups will be served from the cache. The workloads
-that will perform better with this change are:
-- Directory listing in FUSE mount with ACL enabled, will be faster.
-- NFS workloads also will be benifited.
-
**Notes for users:**
-Issue: https://github.com/gluster/glusterfs/issues/232
-There is no specific option to enable this feature, it the default behaviour
-when metadata caching is enabled. To enable metadata caching please refet to
-Performance Tuning section in Admin Guide.
-
-**Limitations:**
-None
-
-**Known Issues:**
-None
+The md-cache translator is enhanced to cache nameless lookups (typically seen
+with NFS workloads). This helps speed up overall operations on the volume by
+reducing the number of lookups done over the network. Typical workloads that
+will benefit from this enhancement are,
+- NFS based access
+- Directory listing with FUSE, when ACLs are enabled
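There is no separate toggle for this behaviour; it applies whenever metadata caching is enabled. A minimal sketch of enabling metadata caching (the `metadata-cache` group profile ships with recent gluster releases; `myvol` is a placeholder volume name):

```
# Enable the metadata-cache group profile on a volume ("myvol" is a
# placeholder). The group profile turns on the md-cache related options
# (e.g. performance.md-cache-timeout) in one step.
gluster volume set myvol group metadata-cache
```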
#### 3. md-cache: Allow runtime addition of xattrs to the list of xattrs that md-cache caches
- Md-cache caches the stat and xattrs on the client side. In all the previous releases, the
-list of xattrs that can be cached in the client side was hardcoded in the libraries, user could
-only enable or disable the prelisted xattrs cache. With this feature, we can add new xattrs to
-the list of xattrs that can be cached on the client side.
-
**Notes for users:**
-Issue: https://github.com/gluster/glusterfs/issues/297
-To add xattrs to the cache list, execute the following command:
- ```
- # gluster volume set <volname> xattr-cache-list "comma separated xattr list"
- ```
-The option like "cache-samba-metadata" "cache-swift-metadata" which also adds some xattrs
-to the cache list, still works. The new option "xattr-cache-list" appends to the list
-generated by the old options. Refer to the Performance Tuning section in the Admin Guide
-for further information.
+md-cache already cached gluster-specific extended attributes of files and
+directories. It has now been enhanced to cache user-provided extended
+attributes (xattrs) as well.
-**Limitations:**
-Currently, setting this option overwrites the previous value set for this option.
-The append to the existing list of xattr is currently not available
+To add specific xattrs to the cache list, use the following command:
+```
+# gluster volume set <volname> xattr-cache-list "<xattr-name>,<xattr-name>,..."
+```
+Existing options, such as "cache-samba-metadata" and "cache-swift-metadata",
+continue to function. The new option "xattr-cache-list" appends to the list
+generated by the existing options.
-**Known Issues:**
-None
+**Limitations:**
+Setting this option overwrites the previous value set for this option.
+Appending to the existing list of xattrs is not supported with this release.
#### 4. Cache last stripe of an EC volume while write is going on
**Notes for users:**
-- https://github.com/gluster/glusterfs/issues/256
-- Release notes:
- - Needs option documentation and also use-case details, as to when to enable this
-
-**Limitations:**
+The disperse translator now has an option to retain a writethrough cache of the
+last write stripe. This helps improve small sequential append IO patterns by
+reducing the need to read a partial stripe for appending operations.
-
-**Known Issues:**
+To enable this, use:
+```
+# gluster volume set <volname> disperse.stripe-cache <N>
+```
+Where `<N>` is the number of stripes to cache.
#### 5. tie-breaker logic for blocking inodelks/entrylk in SHD
**Notes for users:**
-- https://github.com/gluster/glusterfs/issues/354
-- Release notes:
- - Internal feature to enable faster(?) self-heal, mention it as such
-
-**Limitations:**
-
-
-**Known Issues:**
+Self-heal daemon locking has been enhanced to identify situations where a
+self-heal daemon is actively working on an inode. This enables other self-heal
+daemons to proceed with other entries in the queue, rather than waiting on a
+particular entry, thus preventing starvation among self-heal threads.
#### 6. Independent eager-lock options for file and directory accesses
**Notes for users:**
@@ -172,102 +172,99 @@ for these users while still keeping best performance for file accesses.
### Geo-replication
#### 1. JSON output for Geo-rep status and config for Glusterd2 integration
**Notes for users:**
-- https://github.com/gluster/glusterfs/issues/361
-- Added option --json to the gsyncd.py script
- - Are end users going to use the status command here?
- - Documentation update?
-- Release notes:
- - If there is no end user impact, no specific notes are needed for the same
-
-**Limitations:**
+gsyncd config and status-get commands can now return data in JSON format.
-
-**Known Issues:**
+Use `--json` with these commands to get the output in JSON.
#### 2. Enhance Geo-replication to use Volinfo from Config file
**Notes for users:**
-- https://github.com/gluster/glusterfs/issues/396
-- Documentation needs to be updated with the changed conf file section
-- Release notes:
- - Need documentation of explanation of the process in the notes, and also when to use the same
-
-**Limitations:**
+The Geo-replication config file is enhanced to accept additional fields that
+contain the required master and slave volume information. Use the option
+`--use-gconf-volinfo` with the monitor command for it to pick these up
+from the configuration file.
-
-**Known Issues:**
+Config is enhanced with the following fields,
+```
+[vars]
+ master-bricks=NODEID:HOSTNAME:PATH,..
+ slave-bricks=NODEID:HOSTNAME,..
+ master-volume-id=
+ slave-volume-id=
+ master-replica-count=
+ master-disperse_count=
+```
+Note: Existing Geo-replication is not affected, since this is activated only
+when the option `--use-gconf-volinfo` is passed while spawning `gsyncd monitor`
#### 3. Geo-replication: Improve gverify.sh logs
**Notes for users:**
-- https://github.com/gluster/glusterfs/issues/395
-- Release notes:
- - Needs documentation around new log file locations and possibly strings to check for mount failures etc.
-
-**Limitations:**
-
-
-**Known Issues:**
+gverify log file names and locations are changed as follows,
+1. Slave log file is changed from `<logdir>/geo-replication-slaves/slave.log`
+ to, `<logdir>/geo-replication/gverify-slavemnt.log`
+2. Master log file is separated from the slave log file under,
+ `<logdir>/geo-replication/gverify-mastermnt.log`
#### 4. Geo-rep: Cleanup stale (unusable) XSYNC changelogs.
**Notes for users:**
-- https://github.com/gluster/glusterfs/issues/376
-- Release notes:
- - A note on change in behavior maybe needed
-
-**Limitations:**
-
-
-**Known Issues:**
+Stale xsync changelogs were not cleaned up, causing them to accumulate on the
+system. This change cleans up the stale xsync changelogs when geo-replication
+restarts from a faulty state.
#### 5. Improve gsyncd configuration and arguments handling
**Notes for users:**
- https://github.com/gluster/glusterfs/issues/73
- Release notes:
- Needs user facing documentation for newer options and such
- - There seems to be code improvement as well in the patches, so that may not be needed in the release notes
+ - There seems to be code improvement as well in the patches,
+ so that may not be needed in the release notes
**Limitations:**
-
**Known Issues:**
### Standalone
#### 1. Ability to force permissions while creating files/directories on a volume
**Notes for users:**
-- https://github.com/gluster/glusterfs/issues/301
-- Adds options,
- - "create-mask" and "create-directory-mask"
- - "force-create-mode" and "force-create-directory"
- - End user documentation is required
-- Release notes:
- - Need a summary of what the option would enable and how to set the same
- - If relevant user documentation is added, we maybe able to point to the same as well
+Options have been added to the posix translator to override the default umask
+values with which files and directories are created. This is particularly useful
+when content is shared by applications based on group IDs (GIDs). As the default
+mode bits prevent such sharing, and supersede ACLs in this regard, these options
+are provided to control this behavior.
-**Limitations:**
+Command usage is as follows:
+```
+# gluster volume set <volume name> storage.<option-name> <value>
+```
+The valid `<value>` ranges from 0000 to 0777.
+Valid values for `<option-name>` are:
+ - create-mask
+ - create-directory-mask
+ - force-create-mode
+ - force-create-directory
-**Known Issues:**
+Options "create-mask" and "create-directory-mask" are added to remove the
+mode bits set on a file or directory when it is created. The default value of
+these options is 0777. Options "force-create-mode" and "force-create-directory"
+set the default permission for a file or directory irrespective of the client's
+umask. The default value of these options is 0000.
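As an illustration of how such mask and force options can compose, here is a sketch assuming semantics analogous to Samba's create mask / force create mode (the requested mode is first ANDed with the mask, then ORed with the force mode; this interpretation is an assumption, not stated in the release notes):

```
# Hypothetical illustration of mask composition (assumed Samba-style
# semantics: AND with the mask, then OR with the force mode).
requested=0666          # mode requested by the client (octal)
create_mask=0775        # bits allowed to remain set
force_create_mode=0440  # bits always set, regardless of client umask
effective=$(( (requested & create_mask) | force_create_mode ))
printf '%04o\n' "$effective"   # -> 0664
```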
#### 2. Replace MD5 usage to enable FIPS support
**Notes for users:**
-- https://github.com/gluster/glusterfs/issues/230
-
Previously, if gluster was run on a FIPS enabled system, it used to crash
because MD5 is not FIPS compliant and gluster consumes MD5 checksum in
-various places like self-heal and geo-rep. This has been fixed by
+various places like self-heal and geo-replication. This has been fixed by
replacing MD5 with SHA256 which is FIPS compliant.
- However, in order for AFR self-heal to work correctly during rolling upgrade
+However, in order for AFR self-heal to work correctly during rolling upgrade
to 4.0, we have tied this to a volume option called `fips-mode-rchecksum`.
-i.e. `gluster volume set <VOLNAME> fips-mode-rchecksum on` has to be performed
-for the posix_rchecksum() FOP (which is called by self-heal logic) to use SHA256.
-If it is 'off', it continues to use MD5 checksum, allowing hassle free upgrade.
+`gluster volume set <VOLNAME> fips-mode-rchecksum on` has to be performed post
+upgrade to change the default from MD5 to SHA256. After this, gluster processes
+will run cleanly on a FIPS enabled system.
-Once glusterfs 3.x is EOL'ed, we could make the 'fips-mode-rchecksum'
-option a no-op and let posix_rchecksum use SHA256 unconditionally.
-
-In summary, if you want to be FIPS compliant for now, ensure all nodes are on
-4.0 and then set this volume option.
+NOTE: Once glusterfs 3.x is EOL'ed, this option will be made a no-op and
+SHA256 will be used unconditionally.
#### 3. Dentry fop serializer xlator on brick stack
**Notes for users:**
@@ -303,26 +300,30 @@ The default is always enabled, as in the older releases.
#### 5. Add option in POSIX to limit hardlinks per inode
**Notes for users:**
-- https://github.com/gluster/glusterfs/issues/370
-- Release notes:
- - Use github description
-
-**Limitations:**
+Added an option to POSIX that limits the number of hard links that can be
+created against an inode (file). This helps when there needs to be a different
+hardlink limit than what the local FS provides for the bricks.
+The option to control this behavior is,
+```
+# gluster volume set <volname> storage.max-hardlinks <N>
+```
+Where `<N>` is 0-0xFFFFFFFF. If the local file system that the brick is using
+has a lower limit than this setting, that lower limit will be honoured.
-**Known Issues:**
+The default is set to 100. Setting this to 0 turns the limit off and leaves it
+to the local file system defaults, and setting it to 1 turns off hard links.
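For context, the limit applies to the regular POSIX hard link count of an inode. A quick local illustration of that count, on a plain local filesystem (GNU coreutils `stat` assumed; not a gluster command):

```
# Each ln increments the inode's link count, which is what
# storage.max-hardlinks caps on a brick.
tmp=$(mktemp -d)
touch "$tmp/file"
ln "$tmp/file" "$tmp/link1"
ln "$tmp/file" "$tmp/link2"
stat -c %h "$tmp/file"   # -> 3 (GNU stat)
rm -r "$tmp"
```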
#### 6. Enhancements for directory listing in readdirp
**Notes for users:**
-- https://github.com/gluster/glusterfs/issues/239
-- Potentially has some behavioral change in the way rebalance works, needs some documentation efforts there
-- Release notes:
- - Note the changes in rebalance behavior, any performance gains?
-
-**Limitations:**
+Prior to this release, rebalance performed a fix-layout on a directory before
+healing its subdirectories. If there were a lot of subdirs, it could take a
+while before all subdirs were created on the newly added bricks. This led to
+some missed directory listings.
-
-**Known Issues:**
+This is changed with this release to process child directories before their
+parents, thereby changing the way rebalance acts (files within sub-directories
+are migrated first) and also resolving the directory listing issue.
### Developer related
#### 1. xlators should not provide init(), fini() and others directly, but have class_methods
@@ -342,17 +343,21 @@ The older mechanism is still supported, but not preferred.
#### 2. Framework for distributed testing
**Notes for developers:**
-- https://github.com/gluster/glusterfs/issues/374
-- Developer README already present
-- Release notes:
- - Can be handled by the release team
+A new framework for running the regression tests for Gluster is added. The
+[README](https://github.com/gluster/glusterfs/blob/release-4.0/extras/distributed-testing/README) has details on how to use the same.
#### 3. New API for acquiring mandatory locks
**Notes for developers:**
-- https://github.com/gluster/glusterfs/issues/393
-- Code comment for API exists, and release notes can point to that
-- Release notes:
- - Can be handled by the release team
+The current API for byte-range locks, `glfs_posix_lock`, doesn't allow
+applications to specify whether a lock is of the advisory or mandatory type.
+This change introduces an extended byte-range lock API with an
+additional argument that sets the byte-range lock mode to one of
+advisory (the default) or mandatory.
+
+Refer to the [header](https://github.com/gluster/glusterfs/blob/release-4.0/api/src/glfs.h#L777) for details on how to use this API.
+
+A sample test program can be found [here](https://github.com/gluster/glusterfs/blob/release-4.0/tests/basic/gfapi/mandatory-lock-optimal.c) that also helps in understanding the
+usage of this API.
#### 4. New on-wire protocol (XDR) needed to support iattx and cleaner dictionary structure
**Notes for developers:**
@@ -372,20 +377,14 @@ An example of better encoding dictionary values for wire transfers can be seen
[Here](https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/rpc-for-glusterfs.new-versions.md) is some additional information on Gluster RPC programs for the inquisitive.
-#### 5. Leases support on GlusterFS
+#### 5. The protocol xlators should prevent sending binary values in a dict over the networks
**Notes for developers:**
-- https://github.com/gluster/glusterfs/issues/350
-- Developer documentation is available in the header file
-- Release notes:
- - Can be handled by the release team
-
-#### 6. The protocol xlators should prevent sending binary values in a dict over the networks
-**Notes for developers:**
-- https://github.com/gluster/glusterfs/issues/203
-- Release notes:
- - Point to developer documentation if any
+Dict data over the wire in Gluster was previously sent in binary form. This has
+been changed with this release, as the on-wire protocol is also new, to send
+XDR encoded dict values across. In the future, any new dict type also needs to
+handle the required XDR encoding of the same.
-#### 7. Translator to handle 'global' options
+#### 6. Translator to handle 'global' options
**Notes for developers:**
GlusterFS process has around 50 command line arguments to itself. While many of
the options are initial settings, many others can change its value in volume
@@ -400,10 +399,143 @@ the same to the global option translator. An example is provided [here](https://
## Major issues
-**TBD**
+**None**
## Bugs addressed
Bugs addressed since release-3.13.0 are listed below.
-**TBD**
+- [#827334](https://bugzilla.redhat.com/827334): gfid is not there in the fsetattr and rchecksum requests being sent from protocol client
+- [#1336889](https://bugzilla.redhat.com/1336889): Gluster's XDR does not conform to RFC spec
+- [#1369028](https://bugzilla.redhat.com/1369028): rpc: Change the way client uuid is built
+- [#1370116](https://bugzilla.redhat.com/1370116): Tests : Adding a test to check for inode leak
+- [#1428060](https://bugzilla.redhat.com/1428060): write-behind: Allow trickling-writes to be configurable, fix usage of page_size and window_size
+- [#1430305](https://bugzilla.redhat.com/1430305): Fix memory leak in rebalance
+- [#1431955](https://bugzilla.redhat.com/1431955): [Disperse] Implement open fd heal for disperse volume
+- [#1440659](https://bugzilla.redhat.com/1440659): Add events to notify disk getting fill
+- [#1443145](https://bugzilla.redhat.com/1443145): Free runtime allocated resources upon graph switch or glfs_fini()
+- [#1446381](https://bugzilla.redhat.com/1446381): detach start does not kill the tierd
+- [#1467250](https://bugzilla.redhat.com/1467250): Accessing a file when source brick is down results in that FOP being hung
+- [#1467614](https://bugzilla.redhat.com/1467614): Gluster read/write performance improvements on NVMe backend
+- [#1469487](https://bugzilla.redhat.com/1469487): sys_xxx() functions should guard against bad return values from fs
+- [#1471031](https://bugzilla.redhat.com/1471031): dht_(f)xattrop does not implement migration checks
+- [#1471753](https://bugzilla.redhat.com/1471753): [disperse] Keep stripe in in-memory cache for the non aligned write
+- [#1474768](https://bugzilla.redhat.com/1474768): The output of the "gluster help" command is difficult to read
+- [#1479528](https://bugzilla.redhat.com/1479528): Rebalance estimate(ETA) shows wrong details(as intial message of 10min wait reappears) when still in progress
+- [#1480491](https://bugzilla.redhat.com/1480491): tests: Enable geo-rep test cases
+- [#1482064](https://bugzilla.redhat.com/1482064): Bringing down data bricks in cyclic order results in arbiter brick becoming the source for heal.
+- [#1488103](https://bugzilla.redhat.com/1488103): Rebalance fails on NetBSD because fallocate is not implemented
+- [#1492625](https://bugzilla.redhat.com/1492625): Directory listings on fuse mount are very slow due to small number of getdents() entries
+- [#1496335](https://bugzilla.redhat.com/1496335): Extreme Load from self-heal
+- [#1498966](https://bugzilla.redhat.com/1498966): Test case ./tests/bugs/bug-1371806_1.t is failing
+- [#1499566](https://bugzilla.redhat.com/1499566): [Geo-rep]: Directory renames are not synced in hybrid crawl
+- [#1501054](https://bugzilla.redhat.com/1501054): Structured logging support for Gluster logs
+- [#1501132](https://bugzilla.redhat.com/1501132): posix health check should validate time taken between write timestamp and read timestamp cycle
+- [#1502610](https://bugzilla.redhat.com/1502610): disperse eager-lock degrades performance for file create workloads
+- [#1503227](https://bugzilla.redhat.com/1503227): [RFE] Changelog option in a gluster volume disables with no warning if geo-rep is configured
+- [#1505660](https://bugzilla.redhat.com/1505660): [QUOTA] man page of gluster should be updated to list quota commands
+- [#1506104](https://bugzilla.redhat.com/1506104): gluster volume splitbrain info needs to display output of each brick in a stream fashion instead of buffering and dumping at the end
+- [#1506140](https://bugzilla.redhat.com/1506140): Add quorum checks in post-op
+- [#1506197](https://bugzilla.redhat.com/1506197): [Parallel-Readdir]Warning messages in client log saying 'parallel-readdir' is not recognized.
+- [#1508898](https://bugzilla.redhat.com/1508898): Add new configuration option to manage deletion of Worm files
+- [#1508947](https://bugzilla.redhat.com/1508947): glusterfs: Include path in pkgconfig file is wrong
+- [#1509189](https://bugzilla.redhat.com/1509189): timer: Possible race condition between gf_timer_* routines
+- [#1509254](https://bugzilla.redhat.com/1509254): snapshot remove does not cleans lvm for deactivated snaps
+- [#1509340](https://bugzilla.redhat.com/1509340): glusterd does not write pidfile correctly when forking
+- [#1509412](https://bugzilla.redhat.com/1509412): Change default versions of certain features to 3.13 from 4.0
+- [#1509644](https://bugzilla.redhat.com/1509644): rpc: make actor search parallel
+- [#1509647](https://bugzilla.redhat.com/1509647): rpc: optimize fop program lookup
+- [#1509845](https://bugzilla.redhat.com/1509845): In distribute volume after glusterd restart, brick goes offline
+- [#1510324](https://bugzilla.redhat.com/1510324): Master branch is broken because of the conflicts
+- [#1510397](https://bugzilla.redhat.com/1510397): Compiler atomic built-ins are not correctly detected
+- [#1510401](https://bugzilla.redhat.com/1510401): fstat returns ENOENT/ESTALE
+- [#1510415](https://bugzilla.redhat.com/1510415): spurious failure of tests/bugs/glusterd/bug-1345727-bricks-stop-on-no-quorum-validation.t
+- [#1510874](https://bugzilla.redhat.com/1510874): print-backtrace.sh failing with cpio version 2.11 or older
+- [#1510940](https://bugzilla.redhat.com/1510940): The number of bytes of the quota specified in version 3.7 or later is incorrect
+- [#1511310](https://bugzilla.redhat.com/1511310): Test bug-1483058-replace-brick-quorum-validation.t fails inconsistently
+- [#1511339](https://bugzilla.redhat.com/1511339): In Replica volume 2*2 when quorum is set, after glusterd restart nfs server is coming up instead of self-heal daemon
+- [#1512437](https://bugzilla.redhat.com/1512437): parallel-readdir = TRUE prevents directories listing
+- [#1512451](https://bugzilla.redhat.com/1512451): Not able to create snapshot
+- [#1512455](https://bugzilla.redhat.com/1512455): glustereventsd hardcodes working-directory
+- [#1512483](https://bugzilla.redhat.com/1512483): Not all files synced using geo-replication
+- [#1513692](https://bugzilla.redhat.com/1513692): io-stats appends now instead of overwriting which floods filesystem with logs
+- [#1513928](https://bugzilla.redhat.com/1513928): call stack group list leaks
+- [#1514329](https://bugzilla.redhat.com/1514329): bug-1247563.t is failing on master
+- [#1515161](https://bugzilla.redhat.com/1515161): Memory leak in locks xlator
+- [#1515163](https://bugzilla.redhat.com/1515163): centos regression fails for tests/bugs/replicate/bug-1292379.t
+- [#1515266](https://bugzilla.redhat.com/1515266): Prevent ec from continue processing heal operations after PARENT_DOWN
+- [#1516206](https://bugzilla.redhat.com/1516206): EC DISCARD doesn't punch hole properly
+- [#1517068](https://bugzilla.redhat.com/1517068): Unable to change the Slave configurations
+- [#1517554](https://bugzilla.redhat.com/1517554): help for volume profile is not in man page
+- [#1517633](https://bugzilla.redhat.com/1517633): Geo-rep: access-mount config is not working
+- [#1517904](https://bugzilla.redhat.com/1517904): tests/bugs/core/multiplex-limit-issue-151.t fails sometimes in upstream master
+- [#1517961](https://bugzilla.redhat.com/1517961): Failure of some regression tests on Centos7 (passes on centos6)
+- [#1518508](https://bugzilla.redhat.com/1518508): Change GD_OP_VERSION to 3_13_0 from 3_12_0 for RFE https://bugzilla.redhat.com/show_bug.cgi?id=1464350
+- [#1518582](https://bugzilla.redhat.com/1518582): Reduce lock contention on fdtable lookup
+- [#1519598](https://bugzilla.redhat.com/1519598): Reduce lock contention on protocol client manipulating fd
+- [#1520245](https://bugzilla.redhat.com/1520245): High mem/cpu usage, brick processes not starting and ssl encryption issues while testing scaling with multiplexing (500-800 vols)
+- [#1520758](https://bugzilla.redhat.com/1520758): [Disperse] Add stripe in cache even if file/data does not exist
+- [#1520974](https://bugzilla.redhat.com/1520974): Compiler warning in dht-common.c because of a switch statement on a boolean
+- [#1521013](https://bugzilla.redhat.com/1521013): rfc.sh should allow custom remote names for ORIGIN
+- [#1521014](https://bugzilla.redhat.com/1521014): quota_unlink_cbk crashes when loc.inode is null
+- [#1521116](https://bugzilla.redhat.com/1521116): Absorb all test fixes from 3.8-fb branch into master
+- [#1521213](https://bugzilla.redhat.com/1521213): crash when gifs_set_logging is called concurrently
+- [#1522651](https://bugzilla.redhat.com/1522651): rdma transport may access an obsolete item in gf_rdma_device_t->all_mr, and causes glusterfsd/glusterfs process crash.
+- [#1522662](https://bugzilla.redhat.com/1522662): Store allocated objects in the mem_acct
+- [#1522775](https://bugzilla.redhat.com/1522775): glusterd consuming high memory
+- [#1522847](https://bugzilla.redhat.com/1522847): gNFS Bug Fixes
+- [#1522950](https://bugzilla.redhat.com/1522950): io-threads is unnecessarily calling accurate time calls on every FOP
+- [#1522968](https://bugzilla.redhat.com/1522968): glusterd bug fixes
+- [#1523295](https://bugzilla.redhat.com/1523295): md-cache should have an option to cache STATFS calls
+- [#1523353](https://bugzilla.redhat.com/1523353): io-stats bugs and features
+- [#1524252](https://bugzilla.redhat.com/1524252): quick-read: Discard cache for fallocate, zerofill and discard ops
+- [#1524365](https://bugzilla.redhat.com/1524365): feature/bitrot: remove internal xattrs from lookup cbk
+- [#1524816](https://bugzilla.redhat.com/1524816): heketi was not removing the LVs associated with Bricks removed when Gluster Volumes were deleted
+- [#1526402](https://bugzilla.redhat.com/1526402): glusterd crashes when 'gluster volume set help' is executed
+- [#1526780](https://bugzilla.redhat.com/1526780): ./run-tests-in-vagrant.sh fails because of disabled Gluster/NFS
+- [#1528558](https://bugzilla.redhat.com/1528558): /usr/sbin/glusterfs crashing on Red Hat OpenShift Container Platform node
+- [#1528975](https://bugzilla.redhat.com/1528975): Fedora 28 (Rawhide) renamed the pyxattr package to python2-pyxattr
+- [#1529440](https://bugzilla.redhat.com/1529440): Files are not rebalanced if destination brick(available size) is of smaller size than source brick(available size)
+- [#1529463](https://bugzilla.redhat.com/1529463): JWT support without external dependency
+- [#1529480](https://bugzilla.redhat.com/1529480): Improve geo-replication logging
+- [#1529488](https://bugzilla.redhat.com/1529488): entries not getting cleared post healing of softlinks (stale entries showing up in heal info)
+- [#1529515](https://bugzilla.redhat.com/1529515): AFR: 3-way-replication: gluster volume set cluster.quorum-count should validate max no. of brick count to accept
+- [#1529883](https://bugzilla.redhat.com/1529883): glusterfind is extremely slow if there are lots of changes
+- [#1530281](https://bugzilla.redhat.com/1530281): glustershd fails to start on a volume force start after a brick is down
+- [#1530910](https://bugzilla.redhat.com/1530910): Use after free in cli_cmd_volume_create_cbk
+- [#1531149](https://bugzilla.redhat.com/1531149): memory leak: get-state leaking memory in small amounts
+- [#1531987](https://bugzilla.redhat.com/1531987): increment of a boolean expression warning
+- [#1532238](https://bugzilla.redhat.com/1532238): Failed to access volume via Samba with undefined symbol from socket.so
+- [#1532591](https://bugzilla.redhat.com/1532591): Tests: Geo-rep tests are failing in few regression machines
+- [#1533594](https://bugzilla.redhat.com/1533594): EC test fails when brick mux is enabled
+- [#1533736](https://bugzilla.redhat.com/1533736): posix_statfs returns incorrect f_bfree values if brick is full.
+- [#1533804](https://bugzilla.redhat.com/1533804): readdir-ahead: change of cache-size should be atomic
+- [#1533815](https://bugzilla.redhat.com/1533815): Mark ./tests/basic/ec/heal-info.t as bad
+- [#1534602](https://bugzilla.redhat.com/1534602): FUSE reverse notificatons are not written to fuse dump
+- [#1535438](https://bugzilla.redhat.com/1535438): Take full lock on files in 3 way replication
+- [#1535772](https://bugzilla.redhat.com/1535772): Random GlusterFSD process dies during rebalance
+- [#1536913](https://bugzilla.redhat.com/1536913): tests/bugs/cli/bug-822830.t fails on Centos 7 and locally
+- [#1538723](https://bugzilla.redhat.com/1538723): build: glibc has removed legacy rpc headers and rpcgen in Fedora28, use libtirpc
+- [#1539657](https://bugzilla.redhat.com/1539657): Georeplication tests intermittently fail
+- [#1539701](https://bugzilla.redhat.com/1539701): gsyncd is running gluster command to get config file path is not required
+- [#1539842](https://bugzilla.redhat.com/1539842): GlusterFS 4.0.0 tracker
+- [#1540438](https://bugzilla.redhat.com/1540438): Remove lock recovery logic from client and server protocol translators
+- [#1540554](https://bugzilla.redhat.com/1540554): Optimize glusterd_import_friend_volume code path
+- [#1540882](https://bugzilla.redhat.com/1540882): Do lock conflict check correctly for wait-list
+- [#1541117](https://bugzilla.redhat.com/1541117): sdfs: crashes if the features is enabled
+- [#1541277](https://bugzilla.redhat.com/1541277): dht_layout_t leak in dht_populate_inode_for_dentry
+- [#1541880](https://bugzilla.redhat.com/1541880): Volume wrong size
+- [#1541928](https://bugzilla.redhat.com/1541928): A down brick is incorrectly considered to be online and makes the volume to be started without any brick available
+- [#1542380](https://bugzilla.redhat.com/1542380): Changes to self-heal logic w.r.t. detecting of split-brains
+- [#1542382](https://bugzilla.redhat.com/1542382): Add quorum checks in post-op
+- [#1542829](https://bugzilla.redhat.com/1542829): Too many log messages about dictionary and options
+- [#1543487](https://bugzilla.redhat.com/1543487): dht_lookup_unlink_of_false_linkto_cbk fails with "Permission denied"
+- [#1543706](https://bugzilla.redhat.com/1543706): glusterd fails to attach brick during restart of the node
+- [#1543711](https://bugzilla.redhat.com/1543711): glustershd/glusterd is not using right port when connecting to glusterfsd process
+- [#1544366](https://bugzilla.redhat.com/1544366): Rolling upgrade to 4.0 is broken
+- [#1544638](https://bugzilla.redhat.com/1544638): 3.8 -> 3.10 rolling upgrade fails (same for 3.12 or 3.13) on Ubuntu 14
+- [#1545724](https://bugzilla.redhat.com/1545724): libgfrpc does not export IPv6 RPC methods even with --with-ipv6-default
+- [#1547635](https://bugzilla.redhat.com/1547635): add option to bulld rpm without server
+- [#1547842](https://bugzilla.redhat.com/1547842): Typo error in __dht_check_free_space function log message
+- [#1548264](https://bugzilla.redhat.com/1548264): [Rebalance] "Migrate file failed: <filepath>: failed to get xattr [No data available]" warnings in rebalance logs
+- [#1548271](https://bugzilla.redhat.com/1548271): DHT calls dht_lookup_everywhere for 1xn volumes