Diffstat (limited to 'doc/release-notes')
-rw-r--r--  doc/release-notes/3.12.0.md  437
-rw-r--r--  doc/release-notes/3.12.1.md   34
-rw-r--r--  doc/release-notes/3.12.2.md   64
-rw-r--r--  doc/release-notes/3.12.3.md   53
-rw-r--r--  doc/release-notes/3.12.4.md   30
-rw-r--r--  doc/release-notes/3.12.5.md   27
-rw-r--r--  doc/release-notes/3.12.6.md   32
-rw-r--r--  doc/release-notes/3.12.7.md   14
-rw-r--r--  doc/release-notes/3.12.8.md   15
9 files changed, 0 insertions, 706 deletions
diff --git a/doc/release-notes/3.12.0.md b/doc/release-notes/3.12.0.md
deleted file mode 100644
index da5deb1826d..00000000000
--- a/doc/release-notes/3.12.0.md
+++ /dev/null
@@ -1,437 +0,0 @@
-# Release notes for Gluster 3.12.0
-
-This is a major Gluster release that includes: the ability to mount
-sub-directories using the Gluster native protocol (FUSE), further brick
-multiplexing enhancements that help scale to larger brick counts per node,
-enhancements to the gluster get-state CLI that give a better view of the
-participation and roles of the various bricks and nodes in the cluster, the
-ability to resolve GFID split-brain using the existing CLI, and easier GFID to
-real-path mapping, which simplifies diagnosing and correcting reported GFID
-issues (healing, among other uses where the GFID is the only available
-identifier for a file), along with other changes and fixes.
-
-The most notable features and changes are documented on this page. A full list
-of bugs that have been addressed is included further below.
-
-Further, as the 3.11 release was a short-term maintenance release, features
-introduced in that release are available in 3.12 as well, and may be of
-interest to users upgrading to 3.12 from releases older than 3.11. The 3.11 [release notes](https://gluster.readthedocs.io/en/latest/release-notes/)
-capture the list of features that were introduced with 3.11.
-
-## Major changes and features
-
-### Ability to mount sub-directories using the Gluster FUSE protocol
-**Notes for users:**
-
-With this release, it is possible to define sub-directories to be mounted by
-specific clients, granting each such client access to only that portion of
-the volume.
-
-Until now, Gluster FUSE mounts exposed the entire volume to the client. This
-feature helps share a volume among multiple consumers while restricting each
-to the sub-directory of choice.
-
-The option controlling sub-directory allow/deny rules can be set as follows:
-```
-# gluster volume set <volname> auth.allow "/subdir1(192.168.1.*),/(192.168.10.*),/subdir2(192.168.8.*)"
-```
-
-How to mount from the client:
-```
-# mount -t glusterfs <hostname>:/<volname>/<subdir> /<mount_point>
-```
-Or,
-```
-# mount -t glusterfs <hostname>:/<volname> -osubdir_mount=<subdir> /<mount_point>
-```
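-
-A sub-directory mount can also be made persistent across reboots. A minimal
-`/etc/fstab` sketch, using the same placeholders as the commands above (the
-`_netdev` option is an assumption here, commonly added so the mount waits for
-networking):
-```
-<hostname>:/<volname>/<subdir> /<mount_point> glusterfs defaults,_netdev 0 0
-```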
-
-**Limitations:**
-
-- There are no throttling or QoS support for this feature. The feature will
-just provide the namespace isolation for the different clients.
-
-**Known Issues:**
-
-- Once the 'auth.allow' option contains on the order of a thousand or more
-sub-directory entries, reconnect/authentication performance will be impacted.
-
-### GFID to path conversion is enabled by default
-**Notes for users:**
-
-Prior to this feature, the on-disk data carried pointers from a GFID back to
-its filename only when quota was enabled. As a result, if there was a need to
-locate the path for a given GFID, quota had to be enabled.
-
-This feature makes that on-disk data present in all cases, not just when
-quota is enabled. Further, the manner of storing this information on disk as
-extended attributes has been improved.
-
-The internal on-disk xattr now stored to reference the filename and parent
-of a GFID is `trusted.gfid2path.<xxhash>`.
-
-This feature is enabled by default with this release.
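-
-As an illustration, the new xattr can be inspected directly on a brick
-backend with `getfattr` (the path below is hypothetical, and since the xattr
-lives in the `trusted.` namespace, root access on the brick node is
-required):
-```
-# getfattr -m 'trusted.gfid2path' -d -e text /bricks/brick1/dir/file
-```
-Each `trusted.gfid2path.<xxhash>` entry is expected to encode the parent
-directory's GFID and the basename of one (hard) link to the file.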
-
-**Limitations:**
-
-None
-
-**Known Issues:**
-
-None
-
-### Various enhancements have been made to the output of get-state CLI command
-**Notes for users:**
-
-The command `gluster get-state` has been enhanced to output more information,
-as below:
-- Arbiter bricks are marked more clearly in a volume that has the feature
-enabled
-- All volume options (both set and defaults) can be included in the get-state
-output
-- Rebalance time estimates, for an ongoing rebalance, are captured in the
-get-state output
-- If geo-replication is configured, get-state now captures the session
-details as well
-
-**Limitations:**
-
-None
-
-**Known Issues:**
-
-None
-
-### Provided an option to set a limit on the number of bricks multiplexed in a process
-**Notes for users:**
-
-This release includes a global option, to be switched on only if brick
-multiplexing is enabled for the cluster. It allows the user to control the
-number of bricks that are multiplexed into a single process on a node. Once a
-process reaches the limit set by this option, additional processes are
-spawned for the subsequent bricks.
-
-Usage:
-```
-# gluster volume set all cluster.max-bricks-per-process <value>
-```
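-
-For example (the value is illustrative), multiplexing could be enabled for
-the cluster and then capped at 250 bricks per process:
-```
-# gluster volume set all cluster.brick-multiplex on
-# gluster volume set all cluster.max-bricks-per-process 250
-```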
-
-### Provided an option to use localtime timestamps in log entries
-**Limitations:**
-
-Gluster defaults to UTC timestamps. glusterd, glusterfsd, and server-side
-glusterfs daemons will use UTC until one of the following happens:
-1. the command line option is processed,
-2. the gluster config (/var/lib/glusterd/options) is loaded, or
-3. the admin manually sets localtime-logging (cluster.localtime-logging, e.g.
-`# gluster volume set all cluster.localtime-logging enable`).
-
-There is no mount option to make the FUSE client enable localtime logging.
-
-There is no option in gfapi to enable localtime logging.
-
-### Enhanced the option to export statfs data for bricks sharing the same backend filesystem
-
-**Notes for users:**
-In the past, the 'storage/posix' xlator had an option named
-`export-statfs-size` which, when set to 'no', exported zero as the value for
-a few fields in `struct statvfs`. From a user perspective, these fields are
-typically reflected in the output of the `df` command.
-
-When backend bricks are shared between multiple brick processes, the values
-of these fields are now corrected to reflect
-`field_value / number-of-bricks-at-node`. This enables better usage reporting
-and also improves file placement decisions in the distribute translator when
-used with the option `min-free-disk`.
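-
-As a worked example (the numbers are hypothetical), the reported per-brick
-size is simply the backend size divided by the brick count on the node:
-```
-# 1000 GiB backend filesystem shared by 4 brick processes on one node
-backend_gib=1000
-bricks_on_node=4
-echo "$(( backend_gib / bricks_on_node )) GiB reported per brick"   # 250 GiB
-```
-Aggregated volume capacity thus matches the real backend size instead of
-being counted once per brick.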
-
-### Provided a means to resolve GFID split-brain using the gluster CLI
-**Notes for users:**
-
-The existing CLI commands to heal files under split-brain did not handle
-cases where there was a GFID mismatch between the files. With this
-enhancement, the same CLI commands can now also address GFID split-brain
-situations, based on the choices provided.
-
-The CLI options that are enhanced to help with this situation are:
-```
-volume heal <VOLNAME> split-brain {bigger-file <FILE> |
- latest-mtime <FILE> |
- source-brick <HOSTNAME:BRICKNAME> [<FILE>]}
-```
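-
-For example (volume and file names are illustrative), a GFID split-brain on
-/dir/file could be resolved by keeping the copy with the latest mtime:
-```
-# gluster volume heal myvol split-brain latest-mtime /dir/file
-```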
-
-**Limitations:**
-
-None
-
-**Known Issues:**
-
-None
-
-### Developer related: Added a 'site.h' for more vendor/company specific defaults
-**Notes for developers:**
-
-**NOTE**: Also relevant for users building from sources and needing different
-defaults for some options
-
-Most people consume Gluster in one of two ways:
-* From packages provided by their OS/distribution vendor
-* By building themselves from source
-
-For the first group it doesn't matter whether configuration is done in a
-configure script, via command-line options to that configure script, or in a
-header file. All of these end up as edits to some file under the packager's
-control, which is then run through their tools and process (e.g. rpmbuild) to
-create the packages that users will install.
-
-For the second group, convenience matters. Such users might not even have a
-script wrapped around the configure process, and editing one line in a header
-file is a lot easier than editing several in the configure script. This also
-prevents a messy profusion of configure options, dozens of which might need to
-be added to support a single such user's preferences. This comes back around as
-greater simplicity for packagers as well. This patch defines site.h as the
-header file for options and parameters that someone building the code for
-themselves might want to tweak.
-
-The project ships one version to reflect the developers' guess at the best
-defaults for most users, and sophisticated users with unusual needs can
-override many options at once just by maintaining their own version of that
-file. Further guidelines for how to determine whether an option should go in
-configure.ac or site.h are explained within site.h itself.
-
-### Developer related: Added xxhash library to libglusterfs for required use
-**Notes for developers:**
-
-The function gf_xxh64_wrapper has been added to libglusterfs for consumption
-by interested developers.
-
-The code can be found [here](https://github.com/gluster/glusterfs/blob/v3.12.0alpha1/libglusterfs/src/common-utils.h#L835)
-
-### Developer related: glfs_ipc API in libgfapi is removed as a public interface
-**Notes for users:**
-
-The glfs_ipc API was maintained as a public API in the GFAPI libraries. It
-has been removed as a public interface from this release onwards.
-
-Any application written directly against gfapi as a means of interfacing with
-Gluster, and using this API, will need to be modified to adapt to this
-change.
-
-**NOTE:** As of this release there are no known public consumers of this
-API
-
-## Major issues
-1. Expanding a gluster volume that is sharded may cause file corruption
-   - Sharded volumes are typically used for VM images; if such volumes are
-     expanded or possibly contracted (i.e. add/remove bricks and rebalance),
-     there are reports of VM images getting corrupted.
-   - The last known cause of corruption (Bug #1465123) has a fix in this
-     release. As further testing is still in progress, the issue is retained
-     as a major issue.
-   - The status of this bug can be tracked here: #1465123
-
-## Bugs addressed
-
-Bugs addressed since release-3.11.0 are listed below.
-
-- [#1047975](https://bugzilla.redhat.com/1047975): glusterfs/extras: add a convenience script to label (selinux) gluster bricks
-- [#1254002](https://bugzilla.redhat.com/1254002): [RFE] Have named pthreads for easier debugging
-- [#1318100](https://bugzilla.redhat.com/1318100): RFE : SELinux translator to support setting SELinux contexts on files in a glusterfs volume
-- [#1318895](https://bugzilla.redhat.com/1318895): Heal info shows incorrect status
-- [#1326219](https://bugzilla.redhat.com/1326219): Make Gluster/NFS an optional component
-- [#1356453](https://bugzilla.redhat.com/1356453): DHT: slow readdirp performance
-- [#1366817](https://bugzilla.redhat.com/1366817): AFR returns the node uuid of the same node for every file in the replica
-- [#1381970](https://bugzilla.redhat.com/1381970): GlusterFS Daemon stops working after a longer runtime and higher file workload due to design flaws?
-- [#1400924](https://bugzilla.redhat.com/1400924): [RFE] Rsync flags for performance improvements
-- [#1402406](https://bugzilla.redhat.com/1402406): Client stale file handle error in dht-linkfile.c under SPEC SFS 2014 VDA workload
-- [#1414242](https://bugzilla.redhat.com/1414242): [whql][virtio-block+glusterfs]"Disk Stress" and "Disk Verification" job always failed on win7-32/win2012/win2k8R2 guest
-- [#1421938](https://bugzilla.redhat.com/1421938): systemic testing: seeing lot of ping time outs which would lead to splitbrains
-- [#1424817](https://bugzilla.redhat.com/1424817): Fix wrong operators, found by coverty
-- [#1428061](https://bugzilla.redhat.com/1428061): Halo Replication feature for AFR translator
-- [#1428673](https://bugzilla.redhat.com/1428673): possible repeatedly recursive healing of same file with background heal not happening when IO is going on
-- [#1430608](https://bugzilla.redhat.com/1430608): [RFE] Pass slave volume in geo-rep as read-only
-- [#1431908](https://bugzilla.redhat.com/1431908): Enabling parallel-readdir causes dht linkto files to be visible on the mount,
-- [#1433906](https://bugzilla.redhat.com/1433906): quota: limit-usage command failed with error " Failed to start aux mount"
-- [#1437748](https://bugzilla.redhat.com/1437748): Spacing issue in fix-layout status output
-- [#1438966](https://bugzilla.redhat.com/1438966): Multiple bricks WILL crash after TCP port probing
-- [#1439068](https://bugzilla.redhat.com/1439068): Segmentation fault when creating a qcow2 with qemu-img
-- [#1442569](https://bugzilla.redhat.com/1442569): Implement Negative lookup cache feature to improve create performance
-- [#1442788](https://bugzilla.redhat.com/1442788): Cleanup timer wheel in glfs_fini()
-- [#1442950](https://bugzilla.redhat.com/1442950): RFE: Enhance handleops readdirplus operation to return handles along with dirents
-- [#1444596](https://bugzilla.redhat.com/1444596): [Brick Multiplexing] : Bricks for multiple volumes going down after glusterd restart and not coming back up after volume start force
-- [#1445609](https://bugzilla.redhat.com/1445609): [perf-xlators/write-behind] write-behind-window-size could be set greater than its allowed MAX value 1073741824
-- [#1446172](https://bugzilla.redhat.com/1446172): Brick Multiplexing :- resetting a brick bring down other bricks with same PID
-- [#1446362](https://bugzilla.redhat.com/1446362): cli xml status of detach tier broken
-- [#1446412](https://bugzilla.redhat.com/1446412): error-gen don't need to convert error string to int in every fop
-- [#1446516](https://bugzilla.redhat.com/1446516): [Parallel Readdir] : Mounts fail when performance.parallel-readdir is set to "off"
-- [#1447116](https://bugzilla.redhat.com/1447116): gfapi exports non-existing glfs_upcall_inode_get_event symbol
-- [#1447266](https://bugzilla.redhat.com/1447266): [snapshot cifs]ls on .snaps directory is throwing input/output error over cifs mount
-- [#1447389](https://bugzilla.redhat.com/1447389): Brick Multiplexing: seeing Input/Output Error for .trashcan
-- [#1447609](https://bugzilla.redhat.com/1447609): server: fd should be refed before put into fdtable
-- [#1447630](https://bugzilla.redhat.com/1447630): Don't allow rebalance/fix-layout operation on sharding enabled volumes till dht+sharding bugs are fixed
-- [#1447826](https://bugzilla.redhat.com/1447826): potential endless loop in function glusterfs_graph_validate_options
-- [#1447828](https://bugzilla.redhat.com/1447828): Should use dict_set_uint64 to set fd->pid when dump fd's info to dict
-- [#1447953](https://bugzilla.redhat.com/1447953): Remove inadvertently merged IPv6 code
-- [#1447960](https://bugzilla.redhat.com/1447960): [Tiering]: High and low watermark values when set to the same level, is allowed
-- [#1447966](https://bugzilla.redhat.com/1447966): 'make cscope' fails on a clean tree due to missing generated XDR files
-- [#1448150](https://bugzilla.redhat.com/1448150): USS: stale snap entries are seen when activation/deactivation performed during one of the glusterd's unavailability
-- [#1448265](https://bugzilla.redhat.com/1448265): use common function iov_length to instead of duplicate code
-- [#1448293](https://bugzilla.redhat.com/1448293): Implement FALLOCATE FOP for EC
-- [#1448299](https://bugzilla.redhat.com/1448299): Mismatch in checksum of the image file after copying to a new image file
-- [#1448364](https://bugzilla.redhat.com/1448364): limited throughput with disperse volume over small number of bricks
-- [#1448640](https://bugzilla.redhat.com/1448640): Seeing error "Failed to get the total number of files. Unable to estimate time to complete rebalance" in rebalance logs
-- [#1448692](https://bugzilla.redhat.com/1448692): use GF_ATOMIC to generate callid
-- [#1448804](https://bugzilla.redhat.com/1448804): afr: include quorum type and count when dumping afr priv
-- [#1448914](https://bugzilla.redhat.com/1448914): [geo-rep]: extended attributes are not synced if the entry and extended attributes are done within changelog roleover/or entry sync
-- [#1449008](https://bugzilla.redhat.com/1449008): remove useless options from glusterd's volume set table
-- [#1449232](https://bugzilla.redhat.com/1449232): race condition between client_ctx_get and client_ctx_set
-- [#1449329](https://bugzilla.redhat.com/1449329): When either killing or restarting a brick with performance.stat-prefetch on, stat sometimes returns a bad st_size value.
-- [#1449348](https://bugzilla.redhat.com/1449348): disperse seek does not correctly handle the end of file
-- [#1449495](https://bugzilla.redhat.com/1449495): glfsheal: crashed(segfault) with disperse volume in RDMA
-- [#1449610](https://bugzilla.redhat.com/1449610): [New] - Replacing an arbiter brick while I/O happens causes vm pause
-- [#1450010](https://bugzilla.redhat.com/1450010): [gluster-block]:Need a volume group profile option for gluster-block volume to add necessary options to be added.
-- [#1450559](https://bugzilla.redhat.com/1450559): Error 0-socket.management: socket_poller XX.XX.XX.XX:YYY failed (Input/output error) during any volume operation
-- [#1450630](https://bugzilla.redhat.com/1450630): [brick multiplexing] detach a brick if posix health check thread complaints about underlying brick
-- [#1450730](https://bugzilla.redhat.com/1450730): Add tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t to bad tests
-- [#1450975](https://bugzilla.redhat.com/1450975): Fix on demand file migration from client
-- [#1451083](https://bugzilla.redhat.com/1451083): crash in dht_rmdir_do
-- [#1451162](https://bugzilla.redhat.com/1451162): dht: Make throttle option "normal" value uniform across dht_init and dht_reconfigure
-- [#1451248](https://bugzilla.redhat.com/1451248): Brick Multiplexing: On reboot of a node Brick multiplexing feature lost on that node as multiple brick processes get spawned
-- [#1451588](https://bugzilla.redhat.com/1451588): [geo-rep + nl]: Multiple crashes observed on slave with "nlc_lookup_cbk"
-- [#1451724](https://bugzilla.redhat.com/1451724): glusterfind pre crashes with "UnicodeDecodeError: 'utf8' codec can't decode" error when the `--no-encode` is used
-- [#1452006](https://bugzilla.redhat.com/1452006): tierd listens to a port.
-- [#1452084](https://bugzilla.redhat.com/1452084): [Ganesha] : Stale linkto files after unsuccessfuly hardlinks
-- [#1452102](https://bugzilla.redhat.com/1452102): [DHt] : segfault in dht_selfheal_dir_setattr while running regressions
-- [#1452378](https://bugzilla.redhat.com/1452378): Cleanup unnecessary logs in fix_quorum_options
-- [#1452527](https://bugzilla.redhat.com/1452527): Shared volume doesn't get mounted on few nodes after rebooting all nodes in cluster.
-- [#1452956](https://bugzilla.redhat.com/1452956): glusterd on a node crashed after running volume profile command
-- [#1453151](https://bugzilla.redhat.com/1453151): [RFE] glusterfind: add --end-time and --field-separator options
-- [#1453977](https://bugzilla.redhat.com/1453977): Brick Multiplexing: Deleting brick directories of the base volume must gracefully detach from glusterfsd without impacting other volumes IO(currently seeing transport end point error)
-- [#1454317](https://bugzilla.redhat.com/1454317): [Bitrot]: Brick process crash observed while trying to recover a bad file in disperse volume
-- [#1454375](https://bugzilla.redhat.com/1454375): ignore incorrect uuid validation in gd_validate_mgmt_hndsk_req
-- [#1454418](https://bugzilla.redhat.com/1454418): Glusterd segmentation fault in ' _Unwind_Backtrace' while running peer probe
-- [#1454701](https://bugzilla.redhat.com/1454701): DHT: Pass errno as an argument to gf_msg
-- [#1454865](https://bugzilla.redhat.com/1454865): [Brick Multiplexing] heal info shows the status of the bricks as "Transport endpoint is not connected" though bricks are up
-- [#1454872](https://bugzilla.redhat.com/1454872): [Geo-rep]: Make changelog batch size configurable
-- [#1455049](https://bugzilla.redhat.com/1455049): [GNFS+EC] Unable to release the lock when the other client tries to acquire the lock on the same file
-- [#1455104](https://bugzilla.redhat.com/1455104): dht: dht self heal fails with no hashed subvol error
-- [#1455179](https://bugzilla.redhat.com/1455179): [Geo-rep]: Log time taken to sync entry ops, metadata ops and data ops for each batch
-- [#1455301](https://bugzilla.redhat.com/1455301): gluster-block is not working as expected when shard is enabled
-- [#1455559](https://bugzilla.redhat.com/1455559): [Geo-rep]: METADATA errors are seen even though everything is in sync
-- [#1455831](https://bugzilla.redhat.com/1455831): libglusterfs: updates old comment for 'arena_size'
-- [#1456361](https://bugzilla.redhat.com/1456361): DHT : for many operation directory/file path is '(null)' in brick log
-- [#1456385](https://bugzilla.redhat.com/1456385): glusterfs client crash on io-cache.so(__ioc_page_wakeup+0x44)
-- [#1456405](https://bugzilla.redhat.com/1456405): Brick Multiplexing:dmesg shows request_sock_TCP: Possible SYN flooding on port 49152 and memory related backtraces
-- [#1456582](https://bugzilla.redhat.com/1456582): "split-brain observed [Input/output error]" error messages in samba logs during parallel rm -rf
-- [#1456653](https://bugzilla.redhat.com/1456653): nlc_lookup_cbk floods logs
-- [#1456898](https://bugzilla.redhat.com/1456898): Regression test for add-brick failing with brick multiplexing enabled
-- [#1457202](https://bugzilla.redhat.com/1457202): Use of force with volume start, creates brick directory even it is not present
-- [#1457808](https://bugzilla.redhat.com/1457808): all: spelling errors (debian package maintainer)
-- [#1457812](https://bugzilla.redhat.com/1457812): extras/hook-scripts: non-portable shell syntax (debian package maintainer)
-- [#1457981](https://bugzilla.redhat.com/1457981): client fails to connect to the brick due to an incorrect port reported back by glusterd
-- [#1457985](https://bugzilla.redhat.com/1457985): Rebalance estimate time sometimes shows negative values
-- [#1458127](https://bugzilla.redhat.com/1458127): Upcall missing invalidations
-- [#1458193](https://bugzilla.redhat.com/1458193): Implement seek() fop in trace translator
-- [#1458197](https://bugzilla.redhat.com/1458197): io-stats usability/performance statistics enhancements
-- [#1458539](https://bugzilla.redhat.com/1458539): [Negative Lookup]: negative lookup features doesn't seem to work on restart of volume
-- [#1458582](https://bugzilla.redhat.com/1458582): add all as volume option in gluster volume get usage
-- [#1458768](https://bugzilla.redhat.com/1458768): [Perf] 35% drop in small file creates on smbv3 on *2
-- [#1459402](https://bugzilla.redhat.com/1459402): brick process crashes while running bug-1432542-mpx-restart-crash.t in a loop
-- [#1459530](https://bugzilla.redhat.com/1459530): [RFE] Need a way to resolve gfid split brains
-- [#1459620](https://bugzilla.redhat.com/1459620): [geo-rep]: Worker crashed with TypeError: expected string or buffer
-- [#1459781](https://bugzilla.redhat.com/1459781): Brick Multiplexing:Even clean Deleting of the brick directories of base volume is resulting in posix health check errors(just as we see in ungraceful delete methods)
-- [#1459971](https://bugzilla.redhat.com/1459971): posix-acl: Whitelist virtual ACL xattrs
-- [#1460225](https://bugzilla.redhat.com/1460225): Not cleaning up stale socket file is resulting in spamming glusterd logs with warnings of "got disconnect from stale rpc"
-- [#1460514](https://bugzilla.redhat.com/1460514): [Ganesha] : Ganesha crashes while cluster enters failover/failback mode
-- [#1460585](https://bugzilla.redhat.com/1460585): Revert CLI restrictions on running rebalance in VM store use case
-- [#1460638](https://bugzilla.redhat.com/1460638): ec-data-heal.t fails with brick mux enabled
-- [#1460659](https://bugzilla.redhat.com/1460659): Avoid one extra call of l(get|list)xattr system call after use buffer in posix_getxattr
-- [#1461129](https://bugzilla.redhat.com/1461129): malformed cluster.server-quorum-ratio setting can lead to split brain
-- [#1461648](https://bugzilla.redhat.com/1461648): Update GlusterFS README
-- [#1461655](https://bugzilla.redhat.com/1461655): glusterd crashes when statedump is taken
-- [#1461792](https://bugzilla.redhat.com/1461792): lk fop succeeds even when lock is not acquired on at least quorum number of bricks
-- [#1461845](https://bugzilla.redhat.com/1461845): [Bitrot]: Inconsistency seen with 'scrub ondemand' - fails to trigger scrub
-- [#1462200](https://bugzilla.redhat.com/1462200): glusterd status showing failed when it's stopped in RHEL7
-- [#1462241](https://bugzilla.redhat.com/1462241): glusterfind: syntax error due to uninitialized variable 'end'
-- [#1462790](https://bugzilla.redhat.com/1462790): with AFR now making both nodes to return UUID for a file will result in georep consuming more resources
-- [#1463178](https://bugzilla.redhat.com/1463178): [Ganesha]Bricks got crashed while running posix compliance test suit on V4 mount
-- [#1463365](https://bugzilla.redhat.com/1463365): Changes for Maintainers 2.0
-- [#1463648](https://bugzilla.redhat.com/1463648): Use GF_XATTR_LIST_NODE_UUIDS_KEY to figure out local subvols
-- [#1464072](https://bugzilla.redhat.com/1464072): cns-brick-multiplexing: brick process fails to restart after gluster pod failure
-- [#1464091](https://bugzilla.redhat.com/1464091): Regression: Heal info takes longer time when a brick is down
-- [#1464110](https://bugzilla.redhat.com/1464110): [Scale] : Rebalance ETA (towards the end) may be inaccurate,even on a moderately large data set.
-- [#1464327](https://bugzilla.redhat.com/1464327): glusterfs client crashes when reading large directory
-- [#1464359](https://bugzilla.redhat.com/1464359): selfheal deamon cpu consumption not reducing when IOs are going on and all redundant bricks are brought down one after another
-- [#1465024](https://bugzilla.redhat.com/1465024): glusterfind: DELETE path needs to be unquoted before further processing
-- [#1465075](https://bugzilla.redhat.com/1465075): Fd based fops fail with EBADF on file migration
-- [#1465214](https://bugzilla.redhat.com/1465214): build failed with GF_DISABLE_MEMPOOL
-- [#1465559](https://bugzilla.redhat.com/1465559): multiple brick processes seen on gluster(fs)d restart in brick multiplexing
-- [#1466037](https://bugzilla.redhat.com/1466037): Fuse mount crashed with continuous dd on a file and reading the file in parallel
-- [#1466110](https://bugzilla.redhat.com/1466110): dht_rename_lock_cbk crashes in upstream regression test
-- [#1466188](https://bugzilla.redhat.com/1466188): Add scripts to analyze quota xattr in backend and identify accounting issues
-- [#1466785](https://bugzilla.redhat.com/1466785): assorted typos and spelling mistakes from Debian lintian
-- [#1467209](https://bugzilla.redhat.com/1467209): [Scale] : Rebalance ETA shows the initial estimate to be ~140 days,finishes within 18 hours though.
-- [#1467277](https://bugzilla.redhat.com/1467277): [GSS] [RFE] add documentation on --xml and --mode=script options to gluster interactive help and man pages
-- [#1467313](https://bugzilla.redhat.com/1467313): cthon04 can cause segfault in gNFS/NLM
-- [#1467513](https://bugzilla.redhat.com/1467513): CIFS:[USS]: .snaps is not accessible from the CIFS client after volume stop/start
-- [#1467718](https://bugzilla.redhat.com/1467718): [Geo-rep]: entry failed to sync to slave with ENOENT errror
-- [#1467841](https://bugzilla.redhat.com/1467841): gluster volume status --xml fails when there are 100 volumes
-- [#1467986](https://bugzilla.redhat.com/1467986): possible memory leak in glusterfsd with multiplexing
-- [#1468191](https://bugzilla.redhat.com/1468191): Enable stat-prefetch in group virt
-- [#1468261](https://bugzilla.redhat.com/1468261): Regression: non-disruptive(in-service) upgrade on EC volume fails
-- [#1468279](https://bugzilla.redhat.com/1468279): metadata heal not happening despite having an active sink
-- [#1468291](https://bugzilla.redhat.com/1468291): NFS Sub directory is getting mounted on solaris 10 even when the permission is restricted in nfs.export-dir volume option
-- [#1468432](https://bugzilla.redhat.com/1468432): tests: fix stats-dump.t failure
-- [#1468433](https://bugzilla.redhat.com/1468433): rpc: include current second in timed out frame cleanup on client
-- [#1468863](https://bugzilla.redhat.com/1468863): Assert in mem_pools_fini during libgfapi-fini-hang.t on NetBSD
-- [#1469029](https://bugzilla.redhat.com/1469029): Rebalance hangs on remove-brick if the target volume changes
-- [#1469179](https://bugzilla.redhat.com/1469179): invoke checkpatch.pl with strict
-- [#1469964](https://bugzilla.redhat.com/1469964): cluster/dht: Fix hardlink migration failures
-- [#1470170](https://bugzilla.redhat.com/1470170): mem-pool: mem_pool_fini() doesn't release entire memory allocated
-- [#1470220](https://bugzilla.redhat.com/1470220): glusterfs process leaking memory when error occurs
-- [#1470489](https://bugzilla.redhat.com/1470489): bulk removexattr shouldn't allow removal of trusted.gfid/trusted.glusterfs.volume-id
-- [#1470533](https://bugzilla.redhat.com/1470533): Brick Mux Setup: brick processes(glusterfsd) crash after a restart of volume which was preceded with some actions
-- [#1470768](https://bugzilla.redhat.com/1470768): file /usr/lib64/glusterfs/3.12dev/xlator is not owned by any package
-- [#1471790](https://bugzilla.redhat.com/1471790): [Brick Multiplexing] : cluster.brick-multiplex has no description.
-- [#1472094](https://bugzilla.redhat.com/1472094): Test script failing with brick multiplexing enabled
-- [#1472250](https://bugzilla.redhat.com/1472250): Remove fop_enum_to_string, get_fop_int usage in libglusterfs
-- [#1472417](https://bugzilla.redhat.com/1472417): No clear method to multiplex all bricks to one process(glusterfsd) with cluster.max-bricks-per-process option
-- [#1472949](https://bugzilla.redhat.com/1472949): [distribute] crashes seen upon rmdirs
-- [#1475181](https://bugzilla.redhat.com/1475181): dht remove-brick status does not indicate failures files not migrated because of a lack of space
-- [#1475192](https://bugzilla.redhat.com/1475192): [Scale] : Rebalance ETA shows the initial estimate to be ~140 days,finishes within 18 hours though.
-- [#1475258](https://bugzilla.redhat.com/1475258): [Geo-rep]: Geo-rep hangs in changelog mode
-- [#1475399](https://bugzilla.redhat.com/1475399): Rebalance estimate time sometimes shows negative values
-- [#1475635](https://bugzilla.redhat.com/1475635): [Scale] : Client logs flooded with "inode context is NULL" error messages
-- [#1475641](https://bugzilla.redhat.com/1475641): gluster core dump due to assert failed GF_ASSERT (brick_index < wordcount);
-- [#1475662](https://bugzilla.redhat.com/1475662): [Scale] : Rebalance Logs are bulky.
-- [#1476109](https://bugzilla.redhat.com/1476109): Brick Multiplexing: Brick process crashed at changetimerecorder(ctr) translator when restarting volumes
-- [#1476208](https://bugzilla.redhat.com/1476208): [geo-rep]: few of the self healed hardlinks on master did not sync to slave
-- [#1476653](https://bugzilla.redhat.com/1476653): cassandra fails on gluster-block with both replicate and ec volumes
-- [#1476654](https://bugzilla.redhat.com/1476654): gluster-block default shard-size should be 64MB
-- [#1476819](https://bugzilla.redhat.com/1476819): scripts: invalid test in S32gluster_enable_shared_storage.sh
-- [#1476863](https://bugzilla.redhat.com/1476863): packaging: /var/lib/glusterd/options should be %config(noreplace)
-- [#1476868](https://bugzilla.redhat.com/1476868): [EC]: md5sum mismatches every time for a file from the fuse client on EC volume
-- [#1477152](https://bugzilla.redhat.com/1477152): [Remove-brick] Few files are getting migrated eventhough the bricks crossed cluster.min-free-disk value
-- [#1477190](https://bugzilla.redhat.com/1477190): [GNFS] GNFS got crashed while mounting volume on solaris client
-- [#1477381](https://bugzilla.redhat.com/1477381): Revert experimental and 4.0 features to prepare for 3.12 release
-- [#1477405](https://bugzilla.redhat.com/1477405): eager-lock should be off for cassandra to work at the moment
-- [#1477994](https://bugzilla.redhat.com/1477994): [Ganesha] : Ganesha crashes while cluster enters failover/failback mode
-- [#1478276](https://bugzilla.redhat.com/1478276): separating attach tier and add brick
-- [#1479118](https://bugzilla.redhat.com/1479118): AFR entry self heal removes a directory's .glusterfs symlink.
-- [#1479263](https://bugzilla.redhat.com/1479263): nfs process crashed in "nfs3svc_getattr"
-- [#1479303](https://bugzilla.redhat.com/1479303): [Perf] : Large file sequential reads are off target by ~38% on FUSE/Ganesha
-- [#1479474](https://bugzilla.redhat.com/1479474): Add NULL gfid checks before creating file
-- [#1479655](https://bugzilla.redhat.com/1479655): Permission denied errors when appending files after readdir
-- [#1479662](https://bugzilla.redhat.com/1479662): when gluster pod is restarted, bricks from the restarted pod fails to connect to fuse, self-heal etc
-- [#1479717](https://bugzilla.redhat.com/1479717): Running sysbench on vm disk from plain distribute gluster volume causes disk corruption
-- [#1480448](https://bugzilla.redhat.com/1480448): More useful error - replace 'not optimal'
-- [#1480459](https://bugzilla.redhat.com/1480459): Gluster puts PID files in wrong place
-- [#1481931](https://bugzilla.redhat.com/1481931): [Scale] : I/O errors on multiple gNFS mounts with "Stale file handle" during rebalance of an erasure coded volume.
-- [#1482804](https://bugzilla.redhat.com/1482804): Negative Test: glusterd crashes for some of the volume options if set at cluster level
-- [#1482835](https://bugzilla.redhat.com/1482835): glusterd fails to start
-- [#1483402](https://bugzilla.redhat.com/1483402): DHT: readdirp fails to read some directories.
-- [#1483996](https://bugzilla.redhat.com/1483996): packaging: use rdma-core(-devel) instead of ibverbs, rdmacm; disable rdma on armv7hl
-- [#1484440](https://bugzilla.redhat.com/1484440): packaging: /run and /var/run; prefer /run
-- [#1484885](https://bugzilla.redhat.com/1484885): [rpc]: EPOLLERR - disconnecting now messages every 3 secs after completing rebalance
-- [#1486107](https://bugzilla.redhat.com/1486107): /var/lib/glusterd/peers File had a blank line, Stopped Glusterd from starting
-- [#1486110](https://bugzilla.redhat.com/1486110): [quorum]: Replace brick is happened when Quorum not met.
-- [#1486120](https://bugzilla.redhat.com/1486120): symlinks trigger faulty geo-replication state (rsnapshot usecase)
-- [#1486122](https://bugzilla.redhat.com/1486122): gluster-block profile needs to have strict-o-direct
diff --git a/doc/release-notes/3.12.1.md b/doc/release-notes/3.12.1.md
deleted file mode 100644
index 04ce830a221..00000000000
--- a/doc/release-notes/3.12.1.md
+++ /dev/null
@@ -1,34 +0,0 @@
-# Release notes for Gluster 3.12.1
-
-This is a bugfix release. The release notes for [3.12.0](3.12.0.md) and
-[3.12.1](3.12.1.md) contain a listing of all the new features that were
-added and bugs fixed in the GlusterFS 3.12 stable release.
-
-## Major changes, features and limitations addressed in this release
- No major changes
-
-## Major issues
-1. Expanding a gluster volume that is sharded may cause file corruption
-  - Sharded volumes are typically used for VM images; if such volumes are
-    expanded or possibly contracted (i.e. add/remove bricks and rebalance),
-    there are reports of VM images getting corrupted.
-  - The last known cause of corruption (Bug #1465123) has a fix in this
-    release. As further testing is still in progress, the issue is retained as
-    a major issue.
-  - Status of this bug can be tracked here, #1465123
-
-## Bugs addressed
-
- A total of 12 patches have been merged, addressing 11 bugs
-
-- [#1486538](https://bugzilla.redhat.com/1486538): [geo-rep+qr]: Crashes observed at slave from qr_lookup_sbk during rename/hardlink/rebalance
-- [#1486557](https://bugzilla.redhat.com/1486557): Log entry of files skipped/failed during rebalance operation
-- [#1487033](https://bugzilla.redhat.com/1487033): rpc: client_t and related objects leaked due to incorrect ref counts
-- [#1487319](https://bugzilla.redhat.com/1487319): afr: check op_ret value in __afr_selfheal_name_impunge
-- [#1488119](https://bugzilla.redhat.com/1488119): scripts: mount.glusterfs contains non-portable bashisms
-- [#1488168](https://bugzilla.redhat.com/1488168): Launch metadata heal in discover code path.
-- [#1488387](https://bugzilla.redhat.com/1488387): gluster-blockd process crashed and core generated
-- [#1488718](https://bugzilla.redhat.com/1488718): [RHHI] cannot boot vms created from template when disk format = qcow2
-- [#1489260](https://bugzilla.redhat.com/1489260): Crash in dht_check_and_open_fd_on_subvol_task()
-- [#1489296](https://bugzilla.redhat.com/1489296): glusterfsd (brick) process crashed
-- [#1489511](https://bugzilla.redhat.com/1489511): return ENOSYS for 'non readable' FOPs
diff --git a/doc/release-notes/3.12.2.md b/doc/release-notes/3.12.2.md
deleted file mode 100644
index 3f0ab9abc17..00000000000
--- a/doc/release-notes/3.12.2.md
+++ /dev/null
@@ -1,64 +0,0 @@
-# Release notes for Gluster 3.12.2
-
-This is a bugfix release. The release notes for [3.12.0](3.12.0.md), [3.12.1](3.12.1.md),
-[3.12.2](3.12.2.md) contain a listing of all the new features that were added and bugs
-fixed in the GlusterFS 3.12 stable release.
-
-## Major changes, features and limitations addressed in this release
-1. In a pure distribute volume there is no source from which to heal a
-   replaced brick, so a replace brick operation would cause a loss of the
-   data that was present on the replaced brick. The CLI has been enhanced to
-   prevent a user from inadvertently using replace brick on a pure distribute
-   volume. It is advised to use add/remove brick to migrate data off an
-   existing brick in a pure distribute volume.
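-
-For illustration, the advised add/remove brick migration could look like the
-following sketch (the volume name and brick paths here are hypothetical):
-
-```shell
-# Add a new brick that will receive the migrated data
-gluster volume add-brick distvol server2:/bricks/new1
-# Start draining the old brick, and poll status until migration completes
-gluster volume remove-brick distvol server1:/bricks/old1 start
-gluster volume remove-brick distvol server1:/bricks/old1 status
-# Remove the old brick once all files have been migrated
-gluster volume remove-brick distvol server1:/bricks/old1 commit
-```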
-
-## Major issues
-1. Expanding a gluster volume that is sharded may cause file corruption
-  - Sharded volumes are typically used for VM images; if such volumes are
-    expanded or possibly contracted (i.e. add/remove bricks and rebalance),
-    there are reports of VM images getting corrupted.
-  - The fix for the last known cause of corruption (#1465123) is still pending
-    and is not part of this release.
-
-2. Gluster volume restarts fail if the sub-directory export feature is in use.
-   Status of this issue can be tracked here, #1501315
-
-3. Mounting a gluster snapshot will fail when attempting a FUSE based mount of
-   the snapshot. For now, it is recommended to access snapshots only via the
-   ".snaps" directory on a mounted gluster volume.
-   Status of this issue can be tracked here, #1501378
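-
-As a sketch of the recommended ".snaps" access path (the volume, snapshot, and
-mount point names here are hypothetical):
-
-```shell
-# Enable the User Serviceable Snapshots (USS) feature on the volume
-gluster volume set myvol features.uss enable
-# Activate an existing snapshot so it becomes visible under .snaps
-gluster snapshot activate snap1
-# Browse the snapshot content from a FUSE mount of the volume
-ls /mnt/myvol/.snaps/snap1/
-```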
-
-## Bugs addressed
-
- A total of 31 patches have been merged, addressing 28 bugs
-
-
-- [#1490493](https://bugzilla.redhat.com/1490493): Sub-directory mount details are incorrect in /proc/mounts
-- [#1491178](https://bugzilla.redhat.com/1491178): GlusterD returns a bad memory pointer in glusterd_get_args_from_dict()
-- [#1491292](https://bugzilla.redhat.com/1491292): Provide brick list as part of VOLUME_CREATE event.
-- [#1491690](https://bugzilla.redhat.com/1491690): rpc: TLSv1_2_method() is deprecated in OpenSSL-1.1
-- [#1492026](https://bugzilla.redhat.com/1492026): set the shard-block-size to 64MB in virt profile
-- [#1492061](https://bugzilla.redhat.com/1492061): CLIENT_CONNECT event not being raised
-- [#1492066](https://bugzilla.redhat.com/1492066): AFR_SUBVOL_UP and AFR_SUBVOLS_DOWN events not working
-- [#1493975](https://bugzilla.redhat.com/1493975): disallow replace brick operation on plain distribute volume
-- [#1494523](https://bugzilla.redhat.com/1494523): Spelling errors in 3.12.1
-- [#1495162](https://bugzilla.redhat.com/1495162): glusterd ends up with multiple uuids for the same node
-- [#1495397](https://bugzilla.redhat.com/1495397): Make event-history feature configurable and have it disabled by default
-- [#1495858](https://bugzilla.redhat.com/1495858): gluster volume create asks for confirmation for replica-2 volume even with force
-- [#1496238](https://bugzilla.redhat.com/1496238): [geo-rep]: Scheduler help needs correction for description of --no-color
-- [#1496317](https://bugzilla.redhat.com/1496317): [afr] split-brain observed on T files post hardlink and rename in x3 volume
-- [#1496326](https://bugzilla.redhat.com/1496326): [GNFS+EC] lock is being granted to 2 different client for the same data range at a time after performing lock acquire/release from the clients1
-- [#1497084](https://bugzilla.redhat.com/1497084): glusterfs process consume huge memory on both server and client node
-- [#1499123](https://bugzilla.redhat.com/1499123): Readdirp is considerably slower than readdir on acl clients
-- [#1499150](https://bugzilla.redhat.com/1499150): Improve performance with xattrop update.
-- [#1499158](https://bugzilla.redhat.com/1499158): client-io-threads option not working for replicated volumes
-- [#1499202](https://bugzilla.redhat.com/1499202): self-heal daemon stuck
-- [#1499392](https://bugzilla.redhat.com/1499392): [geo-rep]: Improve the output message to reflect the real failure with schedule_georep script
-- [#1500396](https://bugzilla.redhat.com/1500396): [geo-rep]: Observed "Operation not supported" error with traceback on slave log
-- [#1500472](https://bugzilla.redhat.com/1500472): Use a bitmap to store local node info instead of conf->local_nodeuuids[i].uuids
-- [#1500662](https://bugzilla.redhat.com/1500662): gluster volume heal info "healed" and "heal-failed" showing wrong information
-- [#1500835](https://bugzilla.redhat.com/1500835): [geo-rep]: Status shows ACTIVE for most workers in EC before it becomes the PASSIVE
-- [#1500841](https://bugzilla.redhat.com/1500841): [geo-rep]: Worker crashes with OSError: [Errno 61] No data available
-- [#1500845](https://bugzilla.redhat.com/1500845): [geo-rep] master worker crash with interrupted system call
-- [#1500853](https://bugzilla.redhat.com/1500853): [geo-rep]: Incorrect last sync "0" during hystory crawl after upgrade/stop-start
-- [#1501022](https://bugzilla.redhat.com/1501022): Make choose-local configurable through `volume-set` command
-- [#1501154](https://bugzilla.redhat.com/1501154): Brick Multiplexing: Gluster volume start force complains with command "Error : Request timed out" when there are multiple volumes
diff --git a/doc/release-notes/3.12.3.md b/doc/release-notes/3.12.3.md
deleted file mode 100644
index 97d98e5cceb..00000000000
--- a/doc/release-notes/3.12.3.md
+++ /dev/null
@@ -1,53 +0,0 @@
-# Release notes for Gluster 3.12.3
-
-This is a bugfix release. The release notes for [3.12.0](3.12.0.md), [3.12.1](3.12.1.md),
-[3.12.2](3.12.2.md), [3.12.3](3.12.3.md) contain a listing of all the new features that
-were added and bugs fixed in the GlusterFS 3.12 stable release.
-
-## Major changes, features and limitations addressed in this release
-1. Two regressions related to sub-directory mounts have been fixed
-  - gluster volume restart failure (#1501315)
-  - mounting a gluster snapshot via FUSE (#1501378)
-
-2. Improvements to the "help" command within the gluster CLI (#1509786)
-
-3. Introduction of a new API, glfs_fd_set_lkowner(), to set the lock owner
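-
-A minimal gfapi sketch of the new call (the volume, host, and file names are
-hypothetical, and error handling is omitted for brevity):
-
-```c
-/* Compile against gfapi, e.g.: gcc lkowner.c -lgfapi */
-#include <glfs.h>
-#include <fcntl.h>
-#include <string.h>
-
-int main(void)
-{
-    glfs_t *fs = glfs_new("testvol");
-    glfs_set_volfile_server(fs, "tcp", "server1", 24007);
-    glfs_init(fs);
-
-    glfs_fd_t *fd = glfs_open(fs, "/data/file1", O_RDWR);
-
-    /* Tag the fd with an application-chosen lock owner, so that locks
-       taken on it are tracked against this owner. */
-    char owner[] = "app-lk-owner-1";
-    glfs_fd_set_lkowner(fd, owner, strlen(owner));
-
-    struct flock lk = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
-    glfs_posix_lock(fd, F_SETLK, &lk);
-
-    glfs_close(fd);
-    glfs_fini(fs);
-    return 0;
-}
-```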
-
-
-## Major issues
-1. Expanding a gluster volume that is sharded may cause file corruption
-  - Sharded volumes are typically used for VM images; if such volumes are
-    expanded or possibly contracted (i.e. add/remove bricks and rebalance),
-    there are reports of VM images getting corrupted.
-  - The fix for the last known cause of corruption (#1465123) is still pending
-    and is not part of this release.
-
-## Bugs addressed
-
- A total of 25 patches have been merged, addressing 25 bugs
-
-- [#1484489](https://bugzilla.redhat.com/1484489): File-level WORM allows mv over read-only files
-- [#1494527](https://bugzilla.redhat.com/1494527): glusterfs fails to build twice in a row
-- [#1499889](https://bugzilla.redhat.com/1499889): md-cache uses incorrect xattr keynames for GF_POSIX_ACL keys
-- [#1499892](https://bugzilla.redhat.com/1499892): md-cache: xattr values should not be checked with string functions
-- [#1501238](https://bugzilla.redhat.com/1501238): [SNAPSHOT] Unable to mount a snapshot on client
-- [#1501315](https://bugzilla.redhat.com/1501315): Gluster Volume restart fail after exporting fuse sub-dir
-- [#1501864](https://bugzilla.redhat.com/1501864): Add generated HMAC token in header for webhook calls
-- [#1501956](https://bugzilla.redhat.com/1501956): gfapi: API needed to set lk_owner
-- [#1502104](https://bugzilla.redhat.com/1502104): [geo-rep]: RSYNC throwing internal errors
-- [#1503239](https://bugzilla.redhat.com/1503239): [Glusterd] Volume operations fail on a (tiered) volume because of a stale lock held by one of the nodes
-- [#1505221](https://bugzilla.redhat.com/1505221): glusterfs client crash when removing directories
-- [#1505323](https://bugzilla.redhat.com/1505323): When sub-dir is mounted on Fuse client,adding bricks to the same volume unmounts the subdir from fuse client
-- [#1505370](https://bugzilla.redhat.com/1505370): Mishandling null check at send_brick_req of glusterfsd/src/gf_attach.c
-- [#1505373](https://bugzilla.redhat.com/1505373): server.allow-insecure should be visible in "gluster volume set help"
-- [#1505527](https://bugzilla.redhat.com/1505527): Posix compliance rename test fails on fuse subdir mount
-- [#1505846](https://bugzilla.redhat.com/1505846): [GSS] gluster volume status command is missing in man page
-- [#1505856](https://bugzilla.redhat.com/1505856): Potential use of NULL `this` variable before it gets initialized
-- [#1507747](https://bugzilla.redhat.com/1507747): clean up port map on brick disconnect
-- [#1507748](https://bugzilla.redhat.com/1507748): Brick port mismatch
-- [#1507877](https://bugzilla.redhat.com/1507877): reset-brick commit force failed with glusterd_volume_brickinfo_get Returning -1
-- [#1508283](https://bugzilla.redhat.com/1508283): stale brick processes getting created and volume status shows brick as down(pkill glusterfsd glusterfs ,glusterd restart)
-- [#1509200](https://bugzilla.redhat.com/1509200): Event webhook should work with HTTPS urls
-- [#1509786](https://bugzilla.redhat.com/1509786): The output of the "gluster help" command is difficult to read
-- [#1511271](https://bugzilla.redhat.com/1511271): Rebalance estimate(ETA) shows wrong details(as intial message of 10min wait reappears) when still in progress
-- [#1511301](https://bugzilla.redhat.com/1511301): In distribute volume after glusterd restart, brick goes offline
diff --git a/doc/release-notes/3.12.4.md b/doc/release-notes/3.12.4.md
deleted file mode 100644
index d8157b47d3a..00000000000
--- a/doc/release-notes/3.12.4.md
+++ /dev/null
@@ -1,30 +0,0 @@
-# Release notes for Gluster 3.12.4
-
-This is a bugfix release. The release notes for [3.12.0](3.12.0.md), [3.12.1](3.12.1.md),
-[3.12.2](3.12.2.md), [3.12.3](3.12.3.md), [3.12.4](3.12.4.md) contain a listing of all
-the new features that were added and bugs fixed in the GlusterFS 3.12 stable release.
-
-## Major issues
-1. Expanding a gluster volume that is sharded may cause file corruption
-  - Sharded volumes are typically used for VM images; if such volumes are
-    expanded or possibly contracted (i.e. add/remove bricks and rebalance),
-    there are reports of VM images getting corrupted.
-  - The fix for the last known cause of corruption (#1465123) is still pending
-    and is not part of this release.
-
-## Bugs addressed
-
- A total of 13 patches have been merged, addressing 12 bugs
-
-- [#1478411](https://bugzilla.redhat.com/1478411): Directory listings on fuse mount are very slow due to small number of getdents() entries
-- [#1511782](https://bugzilla.redhat.com/1511782): In Replica volume 2*2 when quorum is set, after glusterd restart nfs server is coming up instead of self-heal daemon
-- [#1512432](https://bugzilla.redhat.com/1512432): Test bug-1483058-replace-brick-quorum-validation.t fails inconsistently
-- [#1513258](https://bugzilla.redhat.com/1513258): NetBSD port
-- [#1514380](https://bugzilla.redhat.com/1514380): default timeout of 5min not honored for analyzing split-brain files post setfattr replica.split-brain-heal-finalize
-- [#1514420](https://bugzilla.redhat.com/1514420): gluster volume splitbrain info needs to display output of each brick in a stream fashion instead of buffering and dumping at the end
-- [#1515042](https://bugzilla.redhat.com/1515042): bug-1247563.t is failing on master
-- [#1516691](https://bugzilla.redhat.com/1516691): Rebalance fails on NetBSD because fallocate is not implemented
-- [#1517689](https://bugzilla.redhat.com/1517689): Memory leak in locks xlator
-- [#1518061](https://bugzilla.redhat.com/1518061): Remove 'summary' option from 'gluster vol heal..' CLI
-- [#1523048](https://bugzilla.redhat.com/1523048): glusterd consuming high memory
-- [#1523455](https://bugzilla.redhat.com/1523455): Store allocated objects in the mem_acct
diff --git a/doc/release-notes/3.12.5.md b/doc/release-notes/3.12.5.md
deleted file mode 100644
index 050c3e5f4ca..00000000000
--- a/doc/release-notes/3.12.5.md
+++ /dev/null
@@ -1,27 +0,0 @@
-# Release notes for Gluster 3.12.5
-
-This is a bugfix release. The release notes for [3.12.0](3.12.0.md), [3.12.1](3.12.1.md),
-[3.12.2](3.12.2.md), [3.12.3](3.12.3.md), [3.12.4](3.12.4.md), [3.12.5](3.12.5.md) contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.12 stable release.
-
-## Major issues
-1. Expanding a gluster volume that is sharded may cause file corruption
-  - Sharded volumes are typically used for VM images; if such volumes are
-    expanded or possibly contracted (i.e. add/remove bricks and rebalance),
-    there are reports of VM images getting corrupted.
-  - The fix for the last known cause of corruption (#1465123) is still pending
-    and is not part of this release.
-
-## Bugs addressed
-
- A total of 12 patches have been merged, addressing 11 bugs
-- [#1489043](https://bugzilla.redhat.com/1489043): The number of bytes of the quota specified in version 3.7 or later is incorrect
-- [#1511301](https://bugzilla.redhat.com/1511301): In distribute volume after glusterd restart, brick goes offline
-- [#1525850](https://bugzilla.redhat.com/1525850): rdma transport may access an obsolete item in gf_rdma_device_t->all_mr, and causes glusterfsd/glusterfs process crash.
-- [#1527276](https://bugzilla.redhat.com/1527276): feature/bitrot: remove internal xattrs from lookup cbk
-- [#1529085](https://bugzilla.redhat.com/1529085): fstat returns ENOENT/ESTALE
-- [#1529088](https://bugzilla.redhat.com/1529088): opening a file that is destination of rename results in ENOENT errors
-- [#1529095](https://bugzilla.redhat.com/1529095): /usr/sbin/glusterfs crashing on Red Hat OpenShift Container Platform node
-- [#1529539](https://bugzilla.redhat.com/1529539): JWT support without external dependency
-- [#1530448](https://bugzilla.redhat.com/1530448): glustershd fails to start on a volume force start after a brick is down
-- [#1530455](https://bugzilla.redhat.com/1530455): Files are not rebalanced if destination brick(available size) is of smaller size than source brick(available size)
-- [#1531372](https://bugzilla.redhat.com/1531372): Use after free in cli_cmd_volume_create_cbk
diff --git a/doc/release-notes/3.12.6.md b/doc/release-notes/3.12.6.md
deleted file mode 100644
index 4a2e18dfebf..00000000000
--- a/doc/release-notes/3.12.6.md
+++ /dev/null
@@ -1,32 +0,0 @@
-# Release notes for Gluster 3.12.6
-
-This is a bugfix release. The release notes for [3.12.0](3.12.0.md), [3.12.1](3.12.1.md), [3.12.2](3.12.2.md), [3.12.3](3.12.3.md), [3.12.4](3.12.4.md), [3.12.5](3.12.5.md), [3.12.6](3.12.6.md) contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.12 stable release.
-
-## Major issues
-1. Expanding a gluster volume that is sharded may cause file corruption
-  - Sharded volumes are typically used for VM images; if such volumes are
-    expanded or possibly contracted (i.e. add/remove bricks and rebalance),
-    there are reports of VM images getting corrupted.
-  - The fix for the last known cause of corruption (#1465123) is still pending
-    and is not part of this release.
-
-## Bugs addressed
-
- A total of 16 patches have been merged, addressing 16 bugs
-- [#1510342](https://bugzilla.redhat.com/1510342): Not all files synced using geo-replication
-- [#1533269](https://bugzilla.redhat.com/1533269): Random GlusterFSD process dies during rebalance
-- [#1534847](https://bugzilla.redhat.com/1534847): entries not getting cleared post healing of softlinks (stale entries showing up in heal info)
-- [#1536334](https://bugzilla.redhat.com/1536334): [Disperse] Implement open fd heal for disperse volume
-- [#1537346](https://bugzilla.redhat.com/1537346): glustershd/glusterd is not using right port when connecting to glusterfsd process
-- [#1539516](https://bugzilla.redhat.com/1539516): DHT log messages: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
-- [#1540224](https://bugzilla.redhat.com/1540224): dht_(f)xattrop does not implement migration checks
-- [#1541267](https://bugzilla.redhat.com/1541267): dht_layout_t leak in dht_populate_inode_for_dentry
-- [#1541930](https://bugzilla.redhat.com/1541930): A down brick is incorrectly considered to be online and makes the volume to be started without any brick available
-- [#1542054](https://bugzilla.redhat.com/1542054): tests/bugs/cli/bug-1169302.t fails spuriously
-- [#1542475](https://bugzilla.redhat.com/1542475): Random failures in tests/bugs/nfs/bug-974972.t
-- [#1542601](https://bugzilla.redhat.com/1542601): The used space in the volume increases when the volume is expanded
-- [#1542615](https://bugzilla.redhat.com/1542615): tests/bugs/core/multiplex-limit-issue-151.t fails sometimes in upstream master
-- [#1542826](https://bugzilla.redhat.com/1542826): Mark tests/bugs/posix/bug-990028.t bad on release-3.12
-- [#1542934](https://bugzilla.redhat.com/1542934): Seeing timer errors in the rebalance logs
-- [#1543016](https://bugzilla.redhat.com/1543016): dht_lookup_unlink_of_false_linkto_cbk fails with "Permission denied"
-- [#1544637](https://bugzilla.redhat.com/1544637): 3.8 -> 3.10 rolling upgrade fails (same for 3.12 or 3.13) on Ubuntu 14
diff --git a/doc/release-notes/3.12.7.md b/doc/release-notes/3.12.7.md
deleted file mode 100644
index 0b3310d38cf..00000000000
--- a/doc/release-notes/3.12.7.md
+++ /dev/null
@@ -1,14 +0,0 @@
-# Release notes for Gluster 3.12.7
-
-This is a bugfix release. The release notes for [3.12.0](3.12.0.md), [3.12.1](3.12.1.md), [3.12.2](3.12.2.md), [3.12.3](3.12.3.md), [3.12.4](3.12.4.md), [3.12.5](3.12.5.md), [3.12.6](3.12.6.md), [3.12.7](3.12.7.md) contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.12 stable release.
-## Bugs addressed
-
- A total of 8 patches have been merged, addressing 8 bugs
-
-- [#1517260](https://bugzilla.redhat.com/1517260): Volume wrong size
-- [#1543709](https://bugzilla.redhat.com/1543709): Optimize glusterd_import_friend_volume code path
-- [#1544635](https://bugzilla.redhat.com/1544635): Though files are in split-brain able to perform writes to the file
-- [#1547841](https://bugzilla.redhat.com/1547841): Typo error in __dht_check_free_space function log message
-- [#1548078](https://bugzilla.redhat.com/1548078): [Rebalance] "Migrate file failed: <filepath>: failed to get xattr [No data available]" warnings in rebalance logs
-- [#1548270](https://bugzilla.redhat.com/1548270): DHT calls dht_lookup_everywhere for 1xn volumes
-- [#1549505](https://bugzilla.redhat.com/1549505): Backport patch to reduce duplicate code in server-rpc-fops.c
diff --git a/doc/release-notes/3.12.8.md b/doc/release-notes/3.12.8.md
deleted file mode 100644
index 3156bdbaff6..00000000000
--- a/doc/release-notes/3.12.8.md
+++ /dev/null
@@ -1,15 +0,0 @@
-# Release notes for Gluster 3.12.8
-
-This is a bugfix release. The release notes for [3.12.0](3.12.0.md), [3.12.1](3.12.1.md), [3.12.2](3.12.2.md), [3.12.3](3.12.3.md), [3.12.4](3.12.4.md), [3.12.5](3.12.5.md), [3.12.6](3.12.6.md), [3.12.7](3.12.7.md), [3.12.8](3.12.8.md) contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.12 stable release.
-## Bugs addressed
-
- A total of 9 patches have been merged, addressing 9 bugs
-- [#1543708](https://bugzilla.redhat.com/1543708): glusterd fails to attach brick during restart of the node
-- [#1546627](https://bugzilla.redhat.com/1546627): Syntactical errors in hook scripts for managing SELinux context on bricks
-- [#1549473](https://bugzilla.redhat.com/1549473): possible memleak in glusterfsd process with brick multiplexing on
-- [#1555161](https://bugzilla.redhat.com/1555161): [Rebalance] ENOSPC errors on few files in rebalance logs
-- [#1555201](https://bugzilla.redhat.com/1555201): After a replace brick command, self-heal takes some time to start healing files on disperse volumes
-- [#1558352](https://bugzilla.redhat.com/1558352): [EC] Read performance of EC volume exported over gNFS is significantly lower than write performance
-- [#1561731](https://bugzilla.redhat.com/1561731): Rebalance failures on a dispersed volume with lookup-optimize enabled
-- [#1562723](https://bugzilla.redhat.com/1562723): SHD is not healing entries in halo replication
-- [#1565590](https://bugzilla.redhat.com/1565590): timer: Possible race condition between gf_timer_* routines