author    ShyamsundarR <srangana@redhat.com>    2017-12-01 19:04:39 -0500
committer ShyamsundarR <srangana@redhat.com>    2017-12-01 19:04:54 -0500
commit    aee30521e006a21b8f8d9a65ffe40d87125ccc17 (patch)
tree      6ba098d35c80aa294725f68e93b9e6bbaaa7815e
parent    3a591d1207e4bdc86e39d805dba0e8d7a85d7e75 (diff)

doc: Added final release notes for 3.13.0 (tag: v3.13.0)

Change-Id: I1f2d42f6c24ba55414888bcaf938a905e34ecb4d
BUG: 1510012
Signed-off-by: ShyamsundarR <srangana@redhat.com>
 doc/release-notes/3.13.0.md | 326
 1 file changed, 242 insertions(+), 84 deletions(-)
diff --git a/doc/release-notes/3.13.0.md b/doc/release-notes/3.13.0.md
index ddaa66f..687f69d 100644
--- a/doc/release-notes/3.13.0.md
+++ b/doc/release-notes/3.13.0.md
@@ -1,7 +1,7 @@
# Release notes for Gluster 3.13.0
-This is a major Gluster release that includes, a range of usability related
-features, GFAPI developer enhancements to Gluster, among other bug fixes.
+This is a major release that includes a range of features enhancing usability,
+enhancements to GFAPI for developers, and a set of bug fixes.
The most notable features and changes are documented on this page. A full list
of bugs that have been addressed is included further below.
@@ -11,9 +11,8 @@ of bugs that have been addressed is included further below.
### Addition of summary option to the heal info CLI
**Notes for users:**
-Gluster heal info CLI has been enhanced with a 'summary' option, that helps in
-displaying the statistics of entries pending heal, in split-brain and,
-currently being healed, per brick.
+The Gluster heal info CLI now has a 'summary' option that displays, per brick,
+the counts of entries pending heal, in split-brain, and currently being healed.
Usage:
```
@@ -37,25 +36,17 @@ Number of entries in split-brain: 1
Number of entries possibly healing: 0
```
-This option supports xml format output, when the CLI is provided with the
---xml option.
+Passing the --xml option to the CLI produces this output in XML format.
NOTE: Summary information is gathered in the same fashion as detailed
information, so the command takes just as long to complete; it is not
faster.
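
For example, the summary can be requested in XML form as follows
(`<volname>` is a placeholder for an actual volume name):
```
# gluster volume heal <volname> info summary --xml
```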
-**Limitations:**
-
-None
-
-**Known Issues:**
-None
-
### Addition of checks for allowing lookups in AFR and removal of 'cluster.quorum-reads' volume option.
- ** Notes for users:**
+**Notes for users:**
-Traditionally, AFR has never failed lookup unless there is a gfid mismatch.
+Previously, AFR never failed lookups unless there was a gfid mismatch.
This behavior is being changed with this release, as a part of
fixing [Bug#1515572](https://bugzilla.redhat.com/show_bug.cgi?id=1515572).
@@ -81,102 +72,102 @@ Further reference: [mailing list discussions on topic](http://lists.gluster.org/
**Notes for users:**
-If an user wants to have a finer control on the number of ports to be exposed
-for glusterd to allocate for its daemons, one can define the upper limit of
-max-port value in glusterd.vol. The default max-port value is set to 65535 and
-currently the entry of this configuration is commented out. If an user wants to
-configure this value, please set the desired value and uncomment this option and
-restart the glusterd service.
-
-**Limitations:**
-
-
-**Known Issues:**
+The glusterd configuration now provides an option to control the number of
+ports that gluster daemons can use on a node.
+The option is named "max-port" and can be set per-node in the glusterd.vol file
+to the desired maximum.
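
A sketch of the corresponding glusterd.vol entry, using a hypothetical cap of
60999 (the remaining lines of the file are elided):
```
volume management
    type mgmt/glusterd
    option max-port 60999
end-volume
```

After editing the file, restart the glusterd service for the change to take
effect.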
### Prevention of other processes accessing the mounted brick snapshots
**Notes for users:**
+Snapshots of gluster bricks are now mounted only when the snapshot is active,
+or when the snapshot is being restored. Prior to this, snapshots of gluster
+volumes were mounted by default across the entire life-cycle of the snapshot.
+This behavior is transparent to users and is managed by the gluster
+processes.
-**Limitations:**
-
-
-**Known Issues:**
-
-
-### Thin client feature
+### Enabling thin client
**Notes for users:**
+The Gluster client stack encompasses the cluster translators (like distribution
+and replication or disperse), in addition to the usual caching translators on
+the client stack. In certain cases this makes the client footprint larger than
+sustainable and also incurs frequent client updates.
+The thin client feature moves the clustering translators (like distribute and
+the translators below it) and a few caching translators to a managed protocol
+endpoint (called gfproxy) on the gluster server nodes, thus thinning the client
+stack.
-**Limitations:**
+Usage:
+```
+# gluster volume set <volname> config.gfproxyd enable
+```
+The above enables the gfproxy protocol service on the server nodes. To mount a
+client that interacts with this endpoint, use the --thin-client mount option.
-**Known Issues:**
+Example:
+```
+# glusterfs --thin-client --volfile-id=<volname> --volfile-server=<host> <mountpoint>
+```
+**Limitations:**
+This feature is a technical preview in the 3.13.0 release, and will be improved
+in the upcoming releases.
-### Ability to reserve backend storage space
+### Ability to reserve back-end storage space
**Notes for users:**
+The POSIX translator is enhanced with an option that reserves disk space on
+the bricks. This reserved space is not used by client mounts, thus preventing
+disk full scenarios; disk or cluster expansion is more tedious to achieve once
+the back-end bricks are completely full.
+When a brick's free space is less than or equal to the reserved space, mount
+points using that brick receive ENOSPC errors.
-**Limitations:**
-
-
-**Known Issues:**
+The option takes a numeric percentage value and reserves up to that percentage
+of disk space. The default value is 1 (%) of the brick size; setting it to
+0 (%) disables the feature.
+Usage:
+```
+# gluster volume set <volname> storage.reserve <number>
+```
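
A worked example, reserving a hypothetical 5% of each brick (`<volname>` is a
placeholder):
```
# gluster volume set <volname> storage.reserve 5
```

Setting the value back to 0 removes the reservation.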
### List all the connected clients for a brick and also exported bricks/snapshots from each brick process
**Notes for users:**
+The Gluster CLI is enhanced with an option to list all clients connected to a
+volume (or to all volumes), as well as the list of exported bricks and
+snapshots for the volume.
+Usage:
+```
+# gluster volume status <volname/all> client-list
+```
-**Limitations:**
-
-
-**Known Issues:**
-
-
-### Improved write performance with Disperse xlator, by introducing parallel writes to file
+### Improved write performance with Disperse xlator
**Notes for users:**
+The disperse translator has been enhanced to support parallel writes, which
+improves the performance of write operations on disperse volumes.
-
-**Limitations:**
-
-
-**Known Issues:**
-
+This feature is enabled by default and can be toggled using the boolean
+option 'disperse.parallel-writes'.
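
The toggle follows the usual volume-set pattern (`<volname>` is a placeholder):
```
# gluster volume set <volname> disperse.parallel-writes off
# gluster volume set <volname> disperse.parallel-writes on
```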
### Disperse xlator now supports discard operations
**Notes for users:**
-This feature enables users to punch hole in files
-created on disperse volumes.
+This feature enables users to punch holes in files created on disperse volumes.
Usage:
-
+```
# fallocate -p -o <offset> -l <len> <file_name>
-
-**Limitations:**
-
-None.
-
-**Known Issues:**
-
-None.
-
-### Addressed several compilation warnings with gcc 7.x
-
-**Notes for users:**
-
-
-**Limitations:**
-
-
-**Known Issues:**
-
+```
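
The hole-punch behavior can be observed on any local file as well; a minimal
sketch (the path /tmp/punch-demo is hypothetical, and the on-disk savings
depend on filesystem support for punching holes):

```shell
# Create a 4 MiB file of zeroes, then punch a 1 MiB hole at offset 0.
dd if=/dev/zero of=/tmp/punch-demo bs=1M count=4 2>/dev/null
fallocate -p -o 0 -l 1M /tmp/punch-demo
ls -l /tmp/punch-demo    # apparent size is still 4 MiB
du -k /tmp/punch-demo    # on-disk usage shrinks where holes are supported
rm -f /tmp/punch-demo
```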
### Included details about memory pools in statedumps
@@ -205,7 +196,7 @@ have to register callback function. This routine shall be invoked
asynchronously, in gluster thread context, in case of any upcalls sent by the
backend server.
-```sh
+```
int glfs_upcall_register (struct glfs *fs, uint32_t event_list,
glfs_upcall_cbk cbk, void *data);
int glfs_upcall_unregister (struct glfs *fs, uint32_t event_list);
@@ -239,13 +230,11 @@ standardized way that is transparent to the `libgfapi` applications.
### Provided a new xlator to delay fops, to aid slow brick response simulation and debugging
**Notes for developers:**
+Like the error-gen translator, a new translator that introduces delays for FOPs
+has been added to the code base. This can help diagnose issues around slow(er)
+client responses and enables better qualification of the translator stacks.
-
-**Limitations:**
-
-
-**Known Issues:**
-
+For usage refer to this [test case](https://github.com/gluster/glusterfs/blob/v3.13.0rc0/tests/features/delay-gen.t).
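
Based on the linked test case, usage looks roughly like the following; treat
the option names and values as a sketch rather than authoritative
documentation (`<volname>` is a placeholder):
```
# gluster volume set <volname> delay-gen posix
# gluster volume set <volname> delay-gen.delay-duration 100000
# gluster volume set <volname> delay-gen.delay-percentage 10
# gluster volume set <volname> delay-gen.enable WRITE
```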
## Major issues
1. Expanding a gluster volume that is sharded may cause file corruption
@@ -261,4 +250,173 @@ standardized way that is transparent to the `libgfapi` applications.
Bugs addressed since release-3.12.0 are listed below.
-**To Be Done**
+- [#1248393](https://bugzilla.redhat.com/1248393): DHT: readdirp fails to read some directories.
+- [#1258561](https://bugzilla.redhat.com/1258561): Gluster puts PID files in wrong place
+- [#1261463](https://bugzilla.redhat.com/1261463): AFR : [RFE] Improvements needed in "gluster volume heal info" commands
+- [#1294051](https://bugzilla.redhat.com/1294051): Though files are in split-brain able to perform writes to the file
+- [#1328994](https://bugzilla.redhat.com/1328994): When a feature fails needing a higher opversion, the message should state what version it needs.
+- [#1335251](https://bugzilla.redhat.com/1335251): mgmt/glusterd: clang compile warnings in glusterd-snapshot.c
+- [#1350406](https://bugzilla.redhat.com/1350406): [storage/posix] - posix_do_futimes function not implemented
+- [#1365683](https://bugzilla.redhat.com/1365683): Fix crash bug when mnt3_resolve_subdir_cbk fails
+- [#1371806](https://bugzilla.redhat.com/1371806): DHT :- inconsistent 'custom extended attributes',uid and gid, Access permission (for directories) if User set/modifies it after bringing one or more sub-volume down
+- [#1376326](https://bugzilla.redhat.com/1376326): separating attach tier and add brick
+- [#1388509](https://bugzilla.redhat.com/1388509): gluster volume heal info "healed" and "heal-failed" showing wrong information
+- [#1395492](https://bugzilla.redhat.com/1395492): trace/error-gen be turned on together while use 'volume set' command to set one of them
+- [#1396327](https://bugzilla.redhat.com/1396327): gluster core dump due to assert failed GF_ASSERT (brick_index < wordcount);
+- [#1406898](https://bugzilla.redhat.com/1406898): Need build time option to default to IPv6
+- [#1428063](https://bugzilla.redhat.com/1428063): gfproxy: Introduce new server-side daemon called GFProxy
+- [#1432046](https://bugzilla.redhat.com/1432046): symlinks trigger faulty geo-replication state (rsnapshot usecase)
+- [#1443145](https://bugzilla.redhat.com/1443145): Free runtime allocated resources upon graph switch or glfs_fini()
+- [#1445663](https://bugzilla.redhat.com/1445663): Improve performance with xattrop update.
+- [#1451434](https://bugzilla.redhat.com/1451434): Use a bitmap to store local node info instead of conf->local_nodeuuids[i].uuids
+- [#1454590](https://bugzilla.redhat.com/1454590): run.c demo mode broken
+- [#1457985](https://bugzilla.redhat.com/1457985): Rebalance estimate time sometimes shows negative values
+- [#1460514](https://bugzilla.redhat.com/1460514): [Ganesha] : Ganesha crashes while cluster enters failover/failback mode
+- [#1461018](https://bugzilla.redhat.com/1461018): Implement DISCARD FOP for EC
+- [#1462969](https://bugzilla.redhat.com/1462969): Peer-file parsing is too fragile
+- [#1467209](https://bugzilla.redhat.com/1467209): [Scale] : Rebalance ETA shows the initial estimate to be ~140 days,finishes within 18 hours though.
+- [#1467614](https://bugzilla.redhat.com/1467614): Gluster read/write performance improvements on NVMe backend
+- [#1468291](https://bugzilla.redhat.com/1468291): NFS Sub directory is getting mounted on solaris 10 even when the permission is restricted in nfs.export-dir volume option
+- [#1471366](https://bugzilla.redhat.com/1471366): Posix xlator needs to reserve disk space to prevent the brick from getting full.
+- [#1472267](https://bugzilla.redhat.com/1472267): glusterd fails to start
+- [#1472609](https://bugzilla.redhat.com/1472609): Root path xattr does not heal correctly in certain cases when volume is in stopped state
+- [#1472758](https://bugzilla.redhat.com/1472758): Running sysbench on vm disk from plain distribute gluster volume causes disk corruption
+- [#1472961](https://bugzilla.redhat.com/1472961): [GNFS+EC] lock is being granted to 2 different client for the same data range at a time after performing lock acquire/release from the clients1
+- [#1473026](https://bugzilla.redhat.com/1473026): replace-brick failure leaves glusterd in inconsistent state
+- [#1473636](https://bugzilla.redhat.com/1473636): Launch metadata heal in discover code path.
+- [#1474180](https://bugzilla.redhat.com/1474180): [Scale] : Client logs flooded with "inode context is NULL" error messages
+- [#1474190](https://bugzilla.redhat.com/1474190): cassandra fails on gluster-block with both replicate and ec volumes
+- [#1474309](https://bugzilla.redhat.com/1474309): Disperse: Coverity issue
+- [#1474318](https://bugzilla.redhat.com/1474318): dht remove-brick status does not indicate failures files not migrated because of a lack of space
+- [#1474639](https://bugzilla.redhat.com/1474639): [Scale] : Rebalance Logs are bulky.
+- [#1475255](https://bugzilla.redhat.com/1475255): [Geo-rep]: Geo-rep hangs in changelog mode
+- [#1475282](https://bugzilla.redhat.com/1475282): [Remove-brick] Few files are getting migrated eventhough the bricks crossed cluster.min-free-disk value
+- [#1475300](https://bugzilla.redhat.com/1475300): implementation of fallocate call in read-only xlator
+- [#1475308](https://bugzilla.redhat.com/1475308): [geo-rep]: few of the self healed hardlinks on master did not sync to slave
+- [#1475605](https://bugzilla.redhat.com/1475605): gluster-block default shard-size should be 64MB
+- [#1475632](https://bugzilla.redhat.com/1475632): Brick Multiplexing: Brick process crashed at changetimerecorder(ctr) translator when restarting volumes
+- [#1476205](https://bugzilla.redhat.com/1476205): [EC]: md5sum mismatches every time for a file from the fuse client on EC volume
+- [#1476295](https://bugzilla.redhat.com/1476295): md-cache uses incorrect xattr keynames for GF_POSIX_ACL keys
+- [#1476324](https://bugzilla.redhat.com/1476324): md-cache: xattr values should not be checked with string functions
+- [#1476410](https://bugzilla.redhat.com/1476410): glusterd: code lacks clarity of logic in glusterd_get_quorum_cluster_counts()
+- [#1476665](https://bugzilla.redhat.com/1476665): [Perf] : Large file sequential reads are off target by ~38% on FUSE/Ganesha
+- [#1476668](https://bugzilla.redhat.com/1476668): [Disperse] : Improve heal info command to handle obvious cases
+- [#1476719](https://bugzilla.redhat.com/1476719): glusterd: flow in glusterd_validate_quorum() could be streamlined
+- [#1476785](https://bugzilla.redhat.com/1476785): scripts: invalid test in S32gluster_enable_shared_storage.sh
+- [#1476861](https://bugzilla.redhat.com/1476861): packaging: /var/lib/glusterd/options should be %config(noreplace)
+- [#1476957](https://bugzilla.redhat.com/1476957): peer-parsing.t fails on NetBSD
+- [#1477169](https://bugzilla.redhat.com/1477169): AFR entry self heal removes a directory's .glusterfs symlink.
+- [#1477404](https://bugzilla.redhat.com/1477404): eager-lock should be off for cassandra to work at the moment
+- [#1477488](https://bugzilla.redhat.com/1477488): Permission denied errors when appending files after readdir
+- [#1478297](https://bugzilla.redhat.com/1478297): Add NULL gfid checks before creating file
+- [#1478710](https://bugzilla.redhat.com/1478710): when gluster pod is restarted, bricks from the restarted pod fails to connect to fuse, self-heal etc
+- [#1479030](https://bugzilla.redhat.com/1479030): nfs process crashed in "nfs3_getattr"
+- [#1480099](https://bugzilla.redhat.com/1480099): More useful error - replace 'not optimal'
+- [#1480445](https://bugzilla.redhat.com/1480445): Log entry of files skipped/failed during rebalance operation
+- [#1480525](https://bugzilla.redhat.com/1480525): Make choose-local configurable through `volume-set` command
+- [#1480591](https://bugzilla.redhat.com/1480591): [Scale] : I/O errors on multiple gNFS mounts with "Stale file handle" during rebalance of an erasure coded volume.
+- [#1481199](https://bugzilla.redhat.com/1481199): mempool: run-time crash when built with --disable-mempool
+- [#1481600](https://bugzilla.redhat.com/1481600): rpc: client_t and related objects leaked due to incorrect ref counts
+- [#1482023](https://bugzilla.redhat.com/1482023): snpashots issues with other processes accessing the mounted brick snapshots
+- [#1482344](https://bugzilla.redhat.com/1482344): Negative Test: glusterd crashes for some of the volume options if set at cluster level
+- [#1482906](https://bugzilla.redhat.com/1482906): /var/lib/glusterd/peers File had a blank line, Stopped Glusterd from starting
+- [#1482923](https://bugzilla.redhat.com/1482923): afr: check op_ret value in __afr_selfheal_name_impunge
+- [#1483058](https://bugzilla.redhat.com/1483058): [quorum]: Replace brick is happened when Quorum not met.
+- [#1483995](https://bugzilla.redhat.com/1483995): packaging: use rdma-core(-devel) instead of ibverbs, rdmacm; disable rdma on armv7hl
+- [#1484215](https://bugzilla.redhat.com/1484215): Add Deepshika has CI Peer
+- [#1484225](https://bugzilla.redhat.com/1484225): [rpc]: EPOLLERR - disconnecting now messages every 3 secs after completing rebalance
+- [#1484246](https://bugzilla.redhat.com/1484246): [PATCH] incorrect xattr list handling on FreeBSD
+- [#1484490](https://bugzilla.redhat.com/1484490): File-level WORM allows mv over read-only files
+- [#1484709](https://bugzilla.redhat.com/1484709): [geo-rep+qr]: Crashes observed at slave from qr_lookup_sbk during rename/hardlink/rebalance cases
+- [#1484722](https://bugzilla.redhat.com/1484722): return ENOSYS for 'non readable' FOPs
+- [#1485962](https://bugzilla.redhat.com/1485962): gluster-block profile needs to have strict-o-direct
+- [#1486134](https://bugzilla.redhat.com/1486134): glusterfsd (brick) process crashed
+- [#1487644](https://bugzilla.redhat.com/1487644): Fix reference to readthedocs.io in source code and elsewhere
+- [#1487830](https://bugzilla.redhat.com/1487830): scripts: mount.glusterfs contains non-portable bashisms
+- [#1487840](https://bugzilla.redhat.com/1487840): glusterd: spelling errors reported by Debian maintainer
+- [#1488354](https://bugzilla.redhat.com/1488354): gluster-blockd process crashed and core generated
+- [#1488399](https://bugzilla.redhat.com/1488399): Crash in dht_check_and_open_fd_on_subvol_task()
+- [#1488546](https://bugzilla.redhat.com/1488546): [RHHI] cannot boot vms created from template when disk format = qcow2
+- [#1488808](https://bugzilla.redhat.com/1488808): Warning on FreeBSD regarding -Wformat-extra-args
+- [#1488829](https://bugzilla.redhat.com/1488829): Fix unused variable when TCP_USER_TIMEOUT is undefined
+- [#1488840](https://bugzilla.redhat.com/1488840): Fix guard define on nl-cache
+- [#1488906](https://bugzilla.redhat.com/1488906): Fix clagn/gcc warning for umountd
+- [#1488909](https://bugzilla.redhat.com/1488909): Fix the type of 'len' in posix.c, clang is showing a warning
+- [#1488913](https://bugzilla.redhat.com/1488913): Sub-directory mount details are incorrect in /proc/mounts
+- [#1489432](https://bugzilla.redhat.com/1489432): disallow replace brick operation on plain distribute volume
+- [#1489823](https://bugzilla.redhat.com/1489823): set the shard-block-size to 64MB in virt profile
+- [#1490642](https://bugzilla.redhat.com/1490642): glusterfs client crash when removing directories
+- [#1490897](https://bugzilla.redhat.com/1490897): GlusterD returns a bad memory pointer in glusterd_get_args_from_dict()
+- [#1491025](https://bugzilla.redhat.com/1491025): rpc: TLSv1_2_method() is deprecated in OpenSSL-1.1
+- [#1491670](https://bugzilla.redhat.com/1491670): [afr] split-brain observed on T files post hardlink and rename in x3 volume
+- [#1492109](https://bugzilla.redhat.com/1492109): Provide brick list as part of VOLUME_CREATE event.
+- [#1492542](https://bugzilla.redhat.com/1492542): Gluster v status client-list prints wrong output for multiplexed volume.
+- [#1492849](https://bugzilla.redhat.com/1492849): xlator/tier: flood of -Wformat-truncation warnings with gcc-7.
+- [#1492851](https://bugzilla.redhat.com/1492851): xlator/bitrot: flood of -Wformat-truncation warnings with gcc-7.
+- [#1492968](https://bugzilla.redhat.com/1492968): CLIENT_CONNECT event is not notified by eventsapi
+- [#1492996](https://bugzilla.redhat.com/1492996): Readdirp is considerably slower than readdir on acl clients
+- [#1493133](https://bugzilla.redhat.com/1493133): GlusterFS failed to build while running `make`
+- [#1493415](https://bugzilla.redhat.com/1493415): self-heal daemon stuck
+- [#1493539](https://bugzilla.redhat.com/1493539): AFR_SUBVOL_UP and AFR_SUBVOLS_DOWN events not working
+- [#1493893](https://bugzilla.redhat.com/1493893): gluster volume asks for confirmation for disperse volume even with force
+- [#1493967](https://bugzilla.redhat.com/1493967): glusterd ends up with multiple uuids for the same node
+- [#1495384](https://bugzilla.redhat.com/1495384): Gluster 3.12.1 Packages require manual systemctl daemon reload after install
+- [#1495436](https://bugzilla.redhat.com/1495436): [geo-rep]: Scheduler help needs correction for description of --no-color
+- [#1496363](https://bugzilla.redhat.com/1496363): Add generated HMAC token in header for webhook calls
+- [#1496379](https://bugzilla.redhat.com/1496379): glusterfs process consume huge memory on both server and client node
+- [#1496675](https://bugzilla.redhat.com/1496675): Verify pool pointer before destroying it
+- [#1498570](https://bugzilla.redhat.com/1498570): client-io-threads option not working for replicated volumes
+- [#1499004](https://bugzilla.redhat.com/1499004): [Glusterd] Volume operations fail on a (tiered) volume because of a stale lock held by one of the nodes
+- [#1499159](https://bugzilla.redhat.com/1499159): [geo-rep]: Improve the output message to reflect the real failure with schedule_georep script
+- [#1499180](https://bugzilla.redhat.com/1499180): [geo-rep]: Observed "Operation not supported" error with traceback on slave log
+- [#1499391](https://bugzilla.redhat.com/1499391): [geo-rep]: Worker crashes with OSError: [Errno 61] No data available
+- [#1499393](https://bugzilla.redhat.com/1499393): [geo-rep] master worker crash with interrupted system call
+- [#1499509](https://bugzilla.redhat.com/1499509): Brick Multiplexing: Gluster volume start force complains with command "Error : Request timed out" when there are multiple volumes
+- [#1499641](https://bugzilla.redhat.com/1499641): gfapi: API needed to set lk_owner
+- [#1499663](https://bugzilla.redhat.com/1499663): Mark test case ./tests/bugs/bug-1371806_1.t as a bad test case.
+- [#1499933](https://bugzilla.redhat.com/1499933): md-cache: Add additional samba and macOS specific EAs to mdcache
+- [#1500269](https://bugzilla.redhat.com/1500269): opening a file that is destination of rename results in ENOENT errors
+- [#1500284](https://bugzilla.redhat.com/1500284): [geo-rep]: Status shows ACTIVE for most workers in EC before it becomes the PASSIVE
+- [#1500346](https://bugzilla.redhat.com/1500346): [geo-rep]: Incorrect last sync "0" during hystory crawl after upgrade/stop-start
+- [#1500433](https://bugzilla.redhat.com/1500433): [geo-rep]: RSYNC throwing internal errors
+- [#1500649](https://bugzilla.redhat.com/1500649): Shellcheck errors in hook scripts
+- [#1501235](https://bugzilla.redhat.com/1501235): [SNAPSHOT] Unable to mount a snapshot on client
+- [#1501317](https://bugzilla.redhat.com/1501317): glusterfs fails to build twice in a row
+- [#1501390](https://bugzilla.redhat.com/1501390): Intermittent failure in tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t on NetBSD
+- [#1502253](https://bugzilla.redhat.com/1502253): snapshot_scheduler crashes when SELinux is absent on the system
+- [#1503246](https://bugzilla.redhat.com/1503246): clean up port map on brick disconnect
+- [#1503394](https://bugzilla.redhat.com/1503394): Mishandling null check at send_brick_req of glusterfsd/src/gf_attach.c
+- [#1503424](https://bugzilla.redhat.com/1503424): server.allow-insecure should be visible in "gluster volume set help"
+- [#1503510](https://bugzilla.redhat.com/1503510): [BitRot] man page of gluster needs to be updated for scrub-frequency
+- [#1503519](https://bugzilla.redhat.com/1503519): default timeout of 5min not honored for analyzing split-brain files post setfattr replica.split-brain-heal-finalize
+- [#1503983](https://bugzilla.redhat.com/1503983): Wrong usage of getopt shell command in hook-scripts
+- [#1505253](https://bugzilla.redhat.com/1505253): Update .t test files to use the new tier commands
+- [#1505323](https://bugzilla.redhat.com/1505323): When sub-dir is mounted on Fuse client,adding bricks to the same volume unmounts the subdir from fuse client
+- [#1505325](https://bugzilla.redhat.com/1505325): Potential use of NULL `this` variable before it gets initialized
+- [#1505527](https://bugzilla.redhat.com/1505527): Posix compliance rename test fails on fuse subdir mount
+- [#1505663](https://bugzilla.redhat.com/1505663): [GSS] gluster volume status command is missing in man page
+- [#1505807](https://bugzilla.redhat.com/1505807): files are appendable on file-based worm volume
+- [#1506083](https://bugzilla.redhat.com/1506083): Ignore disk space reserve check for internal FOPS
+- [#1506513](https://bugzilla.redhat.com/1506513): stale brick processes getting created and volume status shows brick as down(pkill glusterfsd glusterfs ,glusterd restart)
+- [#1506589](https://bugzilla.redhat.com/1506589): Brick port mismatch
+- [#1506903](https://bugzilla.redhat.com/1506903): Event webhook should work with HTTPS urls
+- [#1507466](https://bugzilla.redhat.com/1507466): reset-brick commit force failed with glusterd_volume_brickinfo_get Returning -1
+- [#1508898](https://bugzilla.redhat.com/1508898): Add new configuration option to manage deletion of Worm files
+- [#1509789](https://bugzilla.redhat.com/1509789): The output of the "gluster help" command is difficult to read
+- [#1510012](https://bugzilla.redhat.com/1510012): GlusterFS 3.13.0 tracker
+- [#1510019](https://bugzilla.redhat.com/1510019): Change default versions of certain features to 3.13 from 4.0
+- [#1510022](https://bugzilla.redhat.com/1510022): Revert experimental and 4.0 features to prepare for 3.13 release
+- [#1511274](https://bugzilla.redhat.com/1511274): Rebalance estimate(ETA) shows wrong details(as intial message of 10min wait reappears) when still in progress
+- [#1511293](https://bugzilla.redhat.com/1511293): In distribute volume after glusterd restart, brick goes offline
+- [#1511768](https://bugzilla.redhat.com/1511768): In Replica volume 2*2 when quorum is set, after glusterd restart nfs server is coming up instead of self-heal daemon
+- [#1512435](https://bugzilla.redhat.com/1512435): Test bug-1483058-replace-brick-quorum-validation.t fails inconsistently
+- [#1512460](https://bugzilla.redhat.com/1512460): disperse eager-lock degrades performance for file create workloads
+- [#1513259](https://bugzilla.redhat.com/1513259): NetBSD port
+- [#1514419](https://bugzilla.redhat.com/1514419): gluster volume splitbrain info needs to display output of each brick in a stream fashion instead of buffering and dumping at the end
+- [#1515045](https://bugzilla.redhat.com/1515045): bug-1247563.t is failing on master
+- [#1515572](https://bugzilla.redhat.com/1515572): Accessing a file when source brick is down results in that FOP being hung
+- [#1516313](https://bugzilla.redhat.com/1516313): Bringing down data bricks in cyclic order results in arbiter brick becoming the source for heal.
+- [#1517692](https://bugzilla.redhat.com/1517692): Memory leak in locks xlator
+- [#1518257](https://bugzilla.redhat.com/1518257): EC DISCARD doesn't punch hole properly
+- [#1518512](https://bugzilla.redhat.com/1518512): Change GD_OP_VERSION to 3_13_0 from 3_12_0 for RFE https://bugzilla.redhat.com/show_bug.cgi?id=1464350
+- [#1518744](https://bugzilla.redhat.com/1518744): Add release notes about DISCARD on EC volume