author    Niels de Vos <ndevos@redhat.com>  2016-06-14 12:54:42 +0200
committer Niels de Vos <ndevos@redhat.com>  2016-06-14 08:17:07 -0700
commit    c6d9e23b5400a34694da908b0bff768bf13fc6b9 (patch)
tree      89057c5d44e703337d280eda78623ad8d60c7e4c
parent    232482637171142b3a9da19bf91403fc63b19b10 (diff)
doc: finalize release-notes for 3.8.0 (tag: v3.8.0)
Addressed some formatting, added summary and updated the list of bugs.

BUG: 1317278
Change-Id: I6b048498cf035b98d996833debd1b0628ceee77b
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/14730
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
-rw-r--r--  doc/release-notes/3.8.0.md  283
1 file changed, 178 insertions(+), 105 deletions(-)
diff --git a/doc/release-notes/3.8.0.md b/doc/release-notes/3.8.0.md
index 0834a985799..6c555aa5f57 100644
--- a/doc/release-notes/3.8.0.md
+++ b/doc/release-notes/3.8.0.md
@@ -1,11 +1,15 @@
-# Work in progress release notes for Gluster 3.8.0 (RC2)
+# Release notes for Gluster 3.8.0
-These are the current release notes for Release Candidate 2. Follow up changes
-will add more user friendly notes and instructions.
+This is a major release that includes a huge number of changes. Many
+improvements contribute to better support of Gluster with containers and
+to running your storage on the same servers as your hypervisors. A lot of
+work has been done to integrate with other projects that are part of the
+Open Source storage ecosystem.
-The release-notes are being worked on by maintainers and the developers of the
-different features. Assistance of others is welcome! Contributions can be done
-in [this etherpad](https://public.pad.fsfe.org/p/glusterfs-3.8-release-notes).
+The most notable features and changes are documented on this page. A full list
+of bugs that have been addressed is included further below.
+
+## Major changes and features
### Changes to building from the release tarball
@@ -28,40 +32,37 @@ parameters.
Building directly from the git repository has not changed.
-
### Mandatory lock support for Multiprotocol environment
*Notes for users:*
With this release GlusterFS is now capable of performing file operations based
on core mandatory locking concepts. Apart from Linux kernel style semantics,
GlusterFS volumes can now be configured in a special mode where all traditional
fcntl locks are treated as mandatory so as to detect the presence of locks before
-every data modifying file operations acting on a particluar byte range. This
+every data-modifying file operation acting on a particular byte range. This
will help applications to operate on more accurate data during concurrent access
-of various byte ranges within a file. Please refer Administration Guide for more
-details.
-
-http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Mandatory%20Locks/
+of various byte ranges within a file. Please refer to the [Administration
+Guide](http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Mandatory%20Locks/)
+for more details.
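
As an illustrative sketch, switching a volume to this special mode could look like the following; the `locks.mandatory-locking` option name and its values are taken from the linked Administration Guide, so treat them as an assumption and verify against your version:

```bash
# "forced" treats all traditional fcntl locks as mandatory; "optimal"
# combines POSIX and mandatory-locking semantics (assumed values).
gluster volume set <VOLUME> locks.mandatory-locking forced
```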
### Gluster/NFS disabled by default
*Notes for users:*
-The legacy Gluster NFS server (a.k.a. gnfs) is now disabled by default when new
+The legacy Gluster NFS server (a.k.a. gNFS) is now disabled by default when new
volumes are created. Users are encouraged to use NFS-Ganesha with FSAL_GLUSTER
-instead of gnfs. NFS-Ganesha is a full feature server that is being actively
-developed and maintained. It supports NFSv3, NFSv4, and NFSv4.1. The
-documentation
-(http://gluster.readthedocs.io/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Intergration/)
+instead of gNFS. NFS-Ganesha is a full-featured server that is being actively
+developed and maintained. It supports NFSv3, NFSv4, and NFSv4.1. The
+[documentation](http://gluster.readthedocs.io/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Intergration/)
describes how to configure and use NFS-Ganesha. Users that prefer to use the
-gnfs server (NFSv3 only) can enable the service per volume with the following
+gNFS server (NFSv3 only) can enable the service per volume with the following
command:
```bash
# gluster volume set <VOLUME> nfs.disable false
```
-Existing volumes that have gnfs enabled will remain enabled unless explicitly
-disabled. You cannot run both gnfs and NFS-Ganesha servers on the same host.
+Existing volumes that have gNFS enabled will remain enabled unless explicitly
+disabled. You cannot run both gNFS and NFS-Ganesha servers on the same host.
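
To check where an existing volume stands, a quick hedged example (`gluster volume get` is available in recent releases):

```bash
# Show the current value of nfs.disable for a volume:
gluster volume get <VOLUME> nfs.disable
# Explicitly disable gNFS before moving the volume to NFS-Ganesha:
gluster volume set <VOLUME> nfs.disable true
```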
-The plan is to phase gnfs out of Gluster over the next several releases,
+The plan is to phase gNFS out of Gluster over the next several releases,
starting with documenting it as officially deprecated, then not compiling and
packaging the components, and ultimately removing the component sources from the
source tree.
@@ -77,8 +78,9 @@ when using the Gluster-block driver.
*Limitations:*
The deprecated stripe functionality has not been extended with SEEK. SEEK for
sharding has not been implemented yet, and is expected to follow later in a 3.8
-update (bug 1301647). NFS-Ganesha will support SEEK over NFSv4 in the near
-future, posisbly with the upcoming nfs-ganesha 2.4.
+update ([bug 1301647](https://bugzilla.redhat.com/1301647)). NFS-Ganesha will
+support SEEK over NFSv4 in the near future, possibly with the upcoming
+NFS-Ganesha 2.4.
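
As a rough illustration of what SEEK support enables (paths are hypothetical), a sparse file on a FUSE mount can be copied without reading its holes, since GNU coreutils `cp` uses `lseek(2)` with `SEEK_DATA`/`SEEK_HOLE` where available:

```bash
truncate -s 1G /mnt/glustervol/sparse.img    # 1 GiB apparent size, no data blocks
cp --sparse=always /mnt/glustervol/sparse.img /mnt/glustervol/copy.img
du -h /mnt/glustervol/sparse.img /mnt/glustervol/copy.img   # both stay tiny
```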
### Compound operations
*Notes for users:*
@@ -110,16 +112,17 @@ geo-replication session in a Tiering based volume.
*Limitations:*
Configuring a geo-replication session in a Tiering-based volume is the same as earlier.
-But, before attaching/detaching tier, a few steps needs to be followd:
+But before attaching/detaching a tier, a few steps need to be followed:
Before attaching a tier to a volume with an existing geo-replication session,
-the session needs to be stopped. Please find detailed steps here:
-https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Data_Tiering-Attach_Volumes.html#idp11442496
+the session needs to be stopped. Please find detailed steps in the chapter
+called [Attaching a Tier to a Geo-replicated
+Volume](https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Data_Tiering-Attach_Volumes.html#idp11442496).
While detaching a tier from a Tiering based volume with existing geo-replication
-session, checkpoint of session needs to be done. Please find detailed steps
-here:
-https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Data_Tiering-Detach_Tier.html#idp32905264
+session, a checkpoint of the session needs to be done. Please find detailed steps in
+the chapter called [Detaching a Tier of a Geo-replicated
+Volume](https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Data_Tiering-Detach_Tier.html#idp32905264).
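
In outline, the sequence looks roughly like the sketch below; the exact command forms are hedged, so follow the linked chapters for the authoritative steps:

```bash
# Stop the geo-replication session before attaching a tier:
gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> stop
gluster volume tier <MASTERVOL> attach replica 2 <HOT-BRICK1> <HOT-BRICK2>
gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> start

# Before detaching the tier, create a checkpoint and wait for it to complete:
gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> config checkpoint now
```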
### Enhance Quota enable/disable in glusterd
*Notes for users:*
@@ -134,7 +137,7 @@ time once enable or disable quota command is issued.
A new volume option has been introduced called `cluster.favorite-child-policy`.
It can be used to automatically resolve split-brains in replica volumes without
having to use the gluster CLI or the `fuse-mount-setfattr-based` methods to
-manually select a source. The healing automcatically happens based on various
+manually select a source. The healing automatically happens based on various
policies that this option takes. See `gluster volume set help|grep
cluster.favorite-child-policy -A3` for the various policies that you can set.
The default value is 'none', i.e. this feature is not enabled by default.
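
For example, to let self-heal always pick the bigger file as the source (`size` is one of the policy values shown by the `volume set help` output referenced above):

```bash
# Resolve future split-brains automatically in favour of the larger file:
gluster volume set <VOLNAME> cluster.favorite-child-policy size
# Setting it back to 'none' restores manual split-brain resolution:
gluster volume set <VOLNAME> cluster.favorite-child-policy none
```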
@@ -142,7 +145,7 @@ The default value is 'none' , i.e. this feature is not enabled by default.
*Limitations:*
`cluster.favorite-child-policy` applies to all files of the volume. It is
assumed that if this option is enabled with a particular policy, you don't care
-to examine the split-brain files on a per file basis and use the approrpiate
+to examine the split-brain files on a per file basis and use the appropriate
gluster split-brain resolution CLI to resolve them individually with different
policies.
@@ -151,9 +154,9 @@ policies.
These are a set of coreutils designed to act on GlusterFS volumes using its native
C API, similar to standard Linux coreutils like cp, ls, mv etc. Anyone can easily
make use of these utilities to directly access volumes without mounting them
-via some protocol. Please refer Admin guide for more details
-
-http://gluster.readthedocs.org/en/latest/Administrator%20Guide/GlusterFS%20Coreutils/
+via some protocol. Please refer to the [Admin
+Guide](http://gluster.readthedocs.org/en/latest/Administrator%20Guide/GlusterFS%20Coreutils/)
+for more details.
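
A hypothetical session is sketched below; the `gf*` command names and the `glfs://` URI form follow the upstream glusterfs-coreutils project, and the host/volume names are placeholders:

```bash
gfls glfs://server1/myvol/                       # list the volume root
gfcp /etc/hosts glfs://server1/myvol/hosts.copy  # copy a local file in
gfcat glfs://server1/myvol/hosts.copy            # read it back
```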
### WORM, Retention and Compliance
*Notes for users:*
@@ -170,6 +173,7 @@ in any of these three states:
3. WORM: Where file will be immutable but deletable
Added four volume set options:
+
1. `features.worm-file-level`: It enables the file level WORM/Retention feature.
It is "off" by default
2. `features.retention-mode`: Takes two values
@@ -191,7 +195,7 @@ equivalent command or the lazy auto-commit will take place when I/O triggered
using timeouts for untouched files. The next I/O (link, unlink, rename, truncate)
will cause the transition
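
Putting the options together, a hedged example of enabling file-level WORM with the relaxed retention mode (option and value names as listed above):

```bash
gluster volume set <VOLNAME> features.worm-file-level on
gluster volume set <VOLNAME> features.retention-mode relax
```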
-Limitations:
+*Limitations:*
1. No data validation of read-only data, i.e. integration with bitrot is not done
2. Internal operations like tiering, rebalancing, self-healing will fail on
WORMed files
@@ -223,41 +227,71 @@ gluster volume set <vol-name> granular-entry-heal on
```
*Limitations:*
-1) The feature is not backward compatible. So please enable the option only after you have upgraded all your clients and servers to 3.8 and op-version is 30800
-2) Make sure the volume is stopped and there is no pending heal before you enable the feature.
+1. The feature is not backward compatible. So please enable the option only
+   after you have upgraded all your clients and servers to 3.8 and the
+   op-version is 30800.
+2. Make sure the volume is stopped and there is no pending heal before you
+ enable the feature.
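
Put together, a cautious enablement sequence might look like this (assuming all clients and servers already run 3.8 and the op-version is 30800):

```bash
gluster volume heal <VOLNAME> info        # verify there are no pending heals
gluster volume stop <VOLNAME>
gluster volume set <VOLNAME> granular-entry-heal on
gluster volume start <VOLNAME>
```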
### Gdeploy packaged for Fedora and EPEL
*Notes for users:*
-With gdeploy, deployment and configuration is a lot easier, it abstracts the complexities of learning and writing YAML files. And reusing the gdeploy configuration files with slight modification is lot easier than editing the YAML files, and debugging the errors.
+With gdeploy, deployment and configuration is a lot easier, as it abstracts
+the complexities of learning and writing YAML files. Reusing gdeploy
+configuration files with slight modifications is a lot easier than editing
+the YAML files and debugging the errors.
Setting up a GlusterFS volume involves quite a few tasks, like:
+
1. Setting up PV, VG, LV (thinpools if necessary).
2. Peer probing the nodes.
-3. CLI to create volume (which can get lengthy and error prone as the number of nodes increase).
+3. CLI to create the volume (which can get lengthy and error-prone as the number of nodes increases).
-gdeploy helps in simplifying the above tasks and adds many more useful features like installing packages, handling volumes remotely, setting volume options while creating the volume so on...
+gdeploy helps in simplifying the above tasks and adds many more useful features,
+like installing packages, handling volumes remotely, setting volume options
+while creating the volume, and so on.
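
As an illustrative sketch, a minimal configuration plus invocation could look like the following; the section names, keys, hostnames, and device path are assumptions, so consult the gdeploy documentation for the real schema:

```bash
cat > gluster.conf <<'EOF'
[hosts]
server1.example.com
server2.example.com

[backend-setup]
devices=/dev/vdb

[volume]
action=create
volname=demovol
replica=yes
replica_count=2
force=yes
EOF
gdeploy -c gluster.conf
```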
*Limitations:*
-We cannot have periodic status checks or similar health monitoring of the Gluster setup using gdeploy.
-So it does not keep track of the previous deployments you have made. You need to give every detail that gdeploy would require at each stage of deployment as it does not keep any state.
+We cannot have periodic status checks or similar health monitoring of the
+Gluster setup using gdeploy.
+
+gdeploy also does not keep track of the previous deployments you have made. You
+need to give every detail that gdeploy would require at each stage of
+deployment, as it does not keep any state.
### Glusterfind and Bareos Integration
*Notes for users:*
-This is a first integration of Gluster with a Backup & Recovery Application. The integration enabler is a Bareos plugin for Glusterfs and a Gluster python utility called glusterfind. The integration provides facility to backup and restore from and to Glusterfs volumes via the libgfapi library which interacts directly with the Glusterfs server and not via a Glusterfs mount point.
-During the backup operation, the glusterfind utility helps to speed up full file listing by parallelly running on bricks' back-end instead of using the more expensive READDIR file operation needed when listing at a mount point. For incremental changes, the glusterfind utility picks up changed files from file system changelogs instead of crawling the entire file system scavenging for the files' modification time.
+This is the first integration of Gluster with a Backup & Recovery application.
+The integration enabler is a [Bareos](http://bareos.org/) plugin for Glusterfs
+and a Gluster python utility called glusterfind. The integration provides the
+facility to back up and restore from and to Glusterfs volumes via the libgfapi
+library, which interacts directly with the Glusterfs server and not via a
+Glusterfs mount point.
+
+During the backup operation, the glusterfind utility helps to speed up full
+file listing by running in parallel on the bricks' back-end instead of using the
+more expensive READDIR file operation needed when listing at a mount point. For
+incremental changes, the glusterfind utility picks up changed files from file
+system changelogs instead of crawling the entire file system scavenging for the
+files' modification time.
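
A typical backup-side workflow with glusterfind might look like the sketch below; the session name and output file are placeholders:

```bash
glusterfind create bk-session <VOLNAME>            # start a change-tracking session
glusterfind pre bk-session <VOLNAME> changes.txt   # list files changed since the last run
# ... hand changes.txt to the backup application (e.g. the Bareos plugin) ...
glusterfind post bk-session <VOLNAME>              # mark this run as completed
```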
*Limitations:*
-Since bareos intrerfaces with Glusterfs via the libgfapi library and needs to execute the glusterfind tool, bareos needs to be running on one of the Gluster cluster nodes to make the most of it.
+Since Bareos interfaces with Glusterfs via the libgfapi library and needs to
+execute the glusterfind tool, Bareos needs to be running on one of the Gluster
+cluster nodes to make the most of it.
### Heketi
*Notes for users:*
-Heketi provides a RESTful management interface which can be used to manage the life cycle of GlusterFS volumes. With Heketi, cloud services like OpenStack Manila, Kubernetes, and OpenShift can dynamically provision GlusterFS volumes with any of the supported durability types
+[Heketi](https://github.com/heketi/heketi/wiki) provides a RESTful management
+interface which can be used to manage the life cycle of GlusterFS volumes. With
+Heketi, cloud services like OpenStack Manila, Kubernetes, and OpenShift can
+dynamically provision GlusterFS volumes with any of the supported durability
+types.
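
For instance, with the heketi-cli client (flag names hedged; see the Heketi wiki linked above for the REST equivalents):

```bash
# Create a 100 GiB replica-3 volume through Heketi:
heketi-cli volume create --size=100 --replica=3
# Expand and, eventually, delete it by its volume id:
heketi-cli volume expand --volume=<VOLUME-ID> --expand-size=50
heketi-cli volume delete <VOLUME-ID>
```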
*Limitations:*
Currently, Heketi only provides volume create, delete, and expand commands.
## Bugs addressed
-A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
+A total of 1782 patches have been sent, addressing 1228 bugs:
- [#789278](https://bugzilla.redhat.com/789278): Issues reported by Coverity static analysis tool
- [#1004332](https://bugzilla.redhat.com/1004332): Setting of any option using volume set fails when the clients are in older version.
@@ -268,9 +302,9 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1094119](https://bugzilla.redhat.com/1094119): Remove replace-brick with data migration support from gluster cli
- [#1109180](https://bugzilla.redhat.com/1109180): Issues reported by Cppcheck static analysis tool
- [#1110262](https://bugzilla.redhat.com/1110262): suid,sgid,sticky bit on directories not preserved when doing add-brick
-- [#1114847](https://bugzilla.redhat.com/1114847): glusterd logs are filled with "readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)"
+- [#1114847](https://bugzilla.redhat.com/1114847): glusterd logs are filled with "readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)"
- [#1117886](https://bugzilla.redhat.com/1117886): Gluster not resolving hosts with IPv6 only lookups
-- [#1122377](https://bugzilla.redhat.com/1122377): [SNAPSHOT]: activate and deactivate doesn't do a handshake when a glusterd comes back
+- [#1122377](https://bugzilla.redhat.com/1122377): [SNAPSHOT]: activate and deactivate doesn't do a handshake when a glusterd comes back
- [#1122395](https://bugzilla.redhat.com/1122395): man or info page of gluster needs to be updated with self-heal commands.
- [#1129939](https://bugzilla.redhat.com/1129939): NetBSD port
- [#1131275](https://bugzilla.redhat.com/1131275): I currently have no idea what rfc.sh is doing during at any specific moment
@@ -287,7 +321,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1168819](https://bugzilla.redhat.com/1168819): [USS]: Need defined rules for snapshot-directory, setting to a/b works but in linux a/b is b is subdirectory of a
- [#1169317](https://bugzilla.redhat.com/1169317): rmtab file is a bottleneck when lot of clients are accessing a volume through NFS
- [#1170075](https://bugzilla.redhat.com/1170075): [RFE] : BitRot detection in glusterfs
-- [#1171703](https://bugzilla.redhat.com/1171703): AFR+SNAPSHOT: File with hard link have different inode number in USS
+- [#1171703](https://bugzilla.redhat.com/1171703): AFR+SNAPSHOT: File with hard link have different inode number in USS
- [#1171954](https://bugzilla.redhat.com/1171954): [RFE] Rebalance Performance Improvements
- [#1174765](https://bugzilla.redhat.com/1174765): Hook scripts are not installed after make install
- [#1176062](https://bugzilla.redhat.com/1176062): Force replace-brick lead to the persistent write(use dd) return Input/output error
@@ -349,7 +383,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1207829](https://bugzilla.redhat.com/1207829): Incomplete self-heal and split-brain on directories found when self-healing files/dirs on a replaced disk
- [#1207979](https://bugzilla.redhat.com/1207979): BitRot :- In case of NFS mount, Object Versioning and file signing is not working as expected
- [#1208131](https://bugzilla.redhat.com/1208131): BitRot :- Tunable (scrub-throttle, scrub-frequency, pause/resume) for scrub functionality don't have any impact on scrubber
-- [#1208470](https://bugzilla.redhat.com/1208470): [Dist-geo-rep] after snapshot in geo-rep setup, empty changelogs are generated in the snapped brick.
+- [#1208470](https://bugzilla.redhat.com/1208470): [Dist-geo-rep] after snapshot in geo-rep setup, empty changelogs are generated in the snapped brick.
- [#1208482](https://bugzilla.redhat.com/1208482): pthread cond and mutex variables of fs struct has to be destroyed conditionally.
- [#1209104](https://bugzilla.redhat.com/1209104): Do not let an inode evict during split-brain resolution process.
- [#1209138](https://bugzilla.redhat.com/1209138): [Backup]: Packages to be installed for glusterfind api to work
@@ -387,13 +421,13 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1211848](https://bugzilla.redhat.com/1211848): Gluster namespace and module should be part of glusterfs-libs rpm
- [#1211900](https://bugzilla.redhat.com/1211900): package glupy as a subpackage under gluster namespace.
- [#1211913](https://bugzilla.redhat.com/1211913): nfs : racy condition in export/netgroup feature
-- [#1211962](https://bugzilla.redhat.com/1211962): Disperse volume: Input/output errors on nfs and fuse mounts during delete operation
+- [#1211962](https://bugzilla.redhat.com/1211962): Disperse volume: Input/output errors on nfs and fuse mounts during delete operation
- [#1212037](https://bugzilla.redhat.com/1212037): Data Tiering:Old copy of file still remaining on EC(disperse) layer, when edited after attaching tier(new copy is moved to hot tier)
- [#1212063](https://bugzilla.redhat.com/1212063): [Geo-replication] cli crashed and core dump was observed while running gluster volume geo-replication vol0 status command
- [#1212110](https://bugzilla.redhat.com/1212110): bricks process crash
- [#1212253](https://bugzilla.redhat.com/1212253): cli should return error with inode quota cmds on cluster with op_version less than 3.7
- [#1212385](https://bugzilla.redhat.com/1212385): Disable rpc throttling for glusterfs protocol
-- [#1212398](https://bugzilla.redhat.com/1212398): [New] - Distribute replicate volume type is shown as Distribute Stripe in the output of gluster volume info <volname> --xml
+- [#1212398](https://bugzilla.redhat.com/1212398): [New] - Distribute replicate volume type is shown as Distribute Stripe in the output of gluster volume info <volname> --xml
- [#1212400](https://bugzilla.redhat.com/1212400): Attach tier failing and messing up vol info
- [#1212410](https://bugzilla.redhat.com/1212410): dist-geo-rep : all the bricks of a node shows faulty in status if slave node to which atleast one of the brick connected goes down.
- [#1212413](https://bugzilla.redhat.com/1212413): [RFE] Return proper error codes in case of snapshot failure
@@ -404,11 +438,11 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1213066](https://bugzilla.redhat.com/1213066): Failure in tests/performance/open-behind.t
- [#1213125](https://bugzilla.redhat.com/1213125): Bricks fail to start with tiering related logs on the brick
- [#1213295](https://bugzilla.redhat.com/1213295): Glusterd crashed after updating to 3.8 nightly build
-- [#1213349](https://bugzilla.redhat.com/1213349): [Snapshot] Scheduler should check vol-name exists or not before adding scheduled jobs
+- [#1213349](https://bugzilla.redhat.com/1213349): [Snapshot] Scheduler should check vol-name exists or not before adding scheduled jobs
- [#1213358](https://bugzilla.redhat.com/1213358): Implement directory heal for ec
- [#1213364](https://bugzilla.redhat.com/1213364): [RFE] Quota: Make "quota-deem-statfs" option ON, by default, when quota is enabled.
- [#1213542](https://bugzilla.redhat.com/1213542): Symlink heal leaks 'linkname' memory
-- [#1213752](https://bugzilla.redhat.com/1213752): nfs-ganesha: Multi-head nfs need Upcall Cache invalidation support
+- [#1213752](https://bugzilla.redhat.com/1213752): nfs-ganesha: Multi-head nfs need Upcall Cache invalidation support
- [#1213773](https://bugzilla.redhat.com/1213773): upcall: polling is done for a invalid file
- [#1213933](https://bugzilla.redhat.com/1213933): common-ha: delete-node implementation
- [#1214048](https://bugzilla.redhat.com/1214048): IO touched a file undergoing migration fails for tiered volumes
@@ -421,7 +455,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1215018](https://bugzilla.redhat.com/1215018): [New] - gluster peer status goes to disconnected state.
- [#1215117](https://bugzilla.redhat.com/1215117): Disperse volume: rebalance and quotad crashed
- [#1215122](https://bugzilla.redhat.com/1215122): Data Tiering: attaching a tier with non supported replica count crashes glusterd on local host
-- [#1215161](https://bugzilla.redhat.com/1215161): rpc: Memory corruption because rpcsvc_register_notify interprets opaque mydata argument as xlator pointer
+- [#1215161](https://bugzilla.redhat.com/1215161): rpc: Memory corruption because rpcsvc_register_notify interprets opaque mydata argument as xlator pointer
- [#1215187](https://bugzilla.redhat.com/1215187): timeout/expiry of group-cache should be set to 300 seconds
- [#1215265](https://bugzilla.redhat.com/1215265): Fixes for data self-heal in ec
- [#1215486](https://bugzilla.redhat.com/1215486): configure: automake defaults to Unix V7 tar, w/ max filename length=99 chars
@@ -453,13 +487,13 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1218625](https://bugzilla.redhat.com/1218625): glfs.h:46:21: fatal error: sys/acl.h: No such file or directory
- [#1218638](https://bugzilla.redhat.com/1218638): tiering documentation
- [#1218717](https://bugzilla.redhat.com/1218717): Files migrated should stay on a tier for a full cycle
-- [#1218854](https://bugzilla.redhat.com/1218854): Clean up should not empty the contents of the global config file
+- [#1218854](https://bugzilla.redhat.com/1218854): Clean up should not empty the contents of the global config file
- [#1218951](https://bugzilla.redhat.com/1218951): Spurious failures in fop-sanity.t
-- [#1218960](https://bugzilla.redhat.com/1218960): Rebalance Status output lists an extra colon " : " after volume rebalance: <vol_name>: success:
+- [#1218960](https://bugzilla.redhat.com/1218960): Rebalance Status output lists an extra colon " : " after volume rebalance: <vol_name>: success:
- [#1219032](https://bugzilla.redhat.com/1219032): cli: While attaching tier cli sholud always ask question whether you really want to attach a tier or not.
- [#1219355](https://bugzilla.redhat.com/1219355): glusterd:Scrub and bitd reconfigure functions were not calling if quota is not enabled.
- [#1219442](https://bugzilla.redhat.com/1219442): [Snapshot] Do not run scheduler if ovirt scheduler is running
-- [#1219479](https://bugzilla.redhat.com/1219479): [Dist-geo-rep] after snapshot in geo-rep setup, empty changelogs are generated in the snapped brick.
+- [#1219479](https://bugzilla.redhat.com/1219479): [Dist-geo-rep] after snapshot in geo-rep setup, empty changelogs are generated in the snapped brick.
- [#1219485](https://bugzilla.redhat.com/1219485): nfs-ganesha: Discrepancies with lock states recovery during migration
- [#1219637](https://bugzilla.redhat.com/1219637): Gluster small-file creates do not scale with brick count
- [#1219732](https://bugzilla.redhat.com/1219732): brick-op failure for glusterd command should log error message in cmd_history.log
@@ -495,13 +529,13 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1221737](https://bugzilla.redhat.com/1221737): Multi-threaded SHD support
- [#1221889](https://bugzilla.redhat.com/1221889): Log EEXIST errors in DEBUG level in fops MKNOD and MKDIR
- [#1221914](https://bugzilla.redhat.com/1221914): Implement MKNOD fop in bit-rot.
-- [#1221938](https://bugzilla.redhat.com/1221938): SIGNING FAILURE Error messages are poping up in the bitd log
+- [#1221938](https://bugzilla.redhat.com/1221938): SIGNING FAILURE Error messages are poping up in the bitd log
- [#1221970](https://bugzilla.redhat.com/1221970): tiering: use sperate log/socket/pid file for tiering
- [#1222013](https://bugzilla.redhat.com/1222013): Simplify creation and set-up of meta-volume (shared storage)
- [#1222088](https://bugzilla.redhat.com/1222088): Data Tiering:3.7.0:data loss:detach-tier not flushing data to cold-tier
- [#1222092](https://bugzilla.redhat.com/1222092): rebalance failed after attaching the tier to the volume.
- [#1222126](https://bugzilla.redhat.com/1222126): DHT: lookup-unhashed feature breaks runtime compatibility with older client versions
-- [#1222238](https://bugzilla.redhat.com/1222238): features/changelog: buffer overrun in changelog-helpers
+- [#1222238](https://bugzilla.redhat.com/1222238): features/changelog: buffer overrun in changelog-helpers
- [#1222317](https://bugzilla.redhat.com/1222317): Building packages on RHEL-5 based distributions fail
- [#1222319](https://bugzilla.redhat.com/1222319): Remove all occurrences of #include "config.h"
- [#1222378](https://bugzilla.redhat.com/1222378): GlusterD fills the logs when the NFS-server is disabled
@@ -518,7 +552,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1223432](https://bugzilla.redhat.com/1223432): Update gluster op version to 30701
- [#1223625](https://bugzilla.redhat.com/1223625): rebalance : output of rebalance status should show ' run time ' in proper format (day,hour:min:sec)
- [#1223642](https://bugzilla.redhat.com/1223642): [geo-rep]: With tarssh the file is created at slave but it doesnt get sync
-- [#1223739](https://bugzilla.redhat.com/1223739): Quota: Do not allow set/unset of quota limit in heterogeneous cluster
+- [#1223739](https://bugzilla.redhat.com/1223739): Quota: Do not allow set/unset of quota limit in heterogeneous cluster
- [#1223741](https://bugzilla.redhat.com/1223741): non-root geo-replication session goes to faulty state, when the session is started
- [#1223759](https://bugzilla.redhat.com/1223759): Sharding - Fix posix compliance test failures.
- [#1223772](https://bugzilla.redhat.com/1223772): Though brick demon is not running, gluster vol status command shows the pid
@@ -541,7 +575,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1225491](https://bugzilla.redhat.com/1225491): [AFR-V2] - afr_final_errno() should treat op_ret > 0 also as success
- [#1225542](https://bugzilla.redhat.com/1225542): [geo-rep]: snapshot creation timesout even if geo-replication is in pause/stop/delete state
- [#1225564](https://bugzilla.redhat.com/1225564): [Backup]: RFE - Glusterfind CLI commands need to respond based on volume's start/stop state
-- [#1225566](https://bugzilla.redhat.com/1225566): [geo-rep]: Traceback "ValueError: filedescriptor out of range in select()" observed while creating huge set of data on master
+- [#1225566](https://bugzilla.redhat.com/1225566): [geo-rep]: Traceback "ValueError: filedescriptor out of range in select()" observed while creating huge set of data on master
- [#1225571](https://bugzilla.redhat.com/1225571): [geo-rep]: client-rpc-fops.c:172:client3_3_symlink_cbk can be handled better/or ignore these messages in the slave cluster log
- [#1225572](https://bugzilla.redhat.com/1225572): nfs-ganesha: Getting issues for nfs-ganesha on new nodes of glusterfs,error is /etc/ganesha/ganesha-ha.conf: line 11: VIP_<hostname with fqdn>=<ip>: command not found
- [#1225716](https://bugzilla.redhat.com/1225716): tests : remove brick command execution displays success even after, one of the bricks down.
@@ -589,10 +623,10 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1228952](https://bugzilla.redhat.com/1228952): Disperse volume : glusterfs crashed
- [#1229127](https://bugzilla.redhat.com/1229127): afr: Correction to self-heal-daemon documentation
- [#1229134](https://bugzilla.redhat.com/1229134): [Bitrot] Gluster v set <volname> bitrot enable command succeeds , which is not supported to enable bitrot
-- [#1229139](https://bugzilla.redhat.com/1229139): glusterd: glusterd crashing if you run re-balance and vol status command parallely.
+- [#1229139](https://bugzilla.redhat.com/1229139): glusterd: glusterd crashing if you run re-balance and vol status command parallely.
- [#1229172](https://bugzilla.redhat.com/1229172): [AFR-V2] - Fix shd coredump from tests/bugs/glusterd/bug-948686.t
- [#1229297](https://bugzilla.redhat.com/1229297): [Quota] : Inode quota spurious failure
-- [#1229609](https://bugzilla.redhat.com/1229609): Quota: " E [quota.c:1197:quota_check_limit] 0-ecvol-quota: Failed to check quota size limit" in brick logs
+- [#1229609](https://bugzilla.redhat.com/1229609): Quota: " E [quota.c:1197:quota_check_limit] 0-ecvol-quota: Failed to check quota size limit" in brick logs
- [#1229639](https://bugzilla.redhat.com/1229639): build: fix gitclean target
- [#1229658](https://bugzilla.redhat.com/1229658): STACK_RESET may crash with concurrent statedump requests to a glusterfs process
- [#1229825](https://bugzilla.redhat.com/1229825): Add regression test for cluster lock in a heterogeneous cluster
@@ -610,7 +644,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1231197](https://bugzilla.redhat.com/1231197): Snapshot daemon failed to run on newly created dist-rep volume with uss enabled
- [#1231205](https://bugzilla.redhat.com/1231205): [geo-rep]: rsync should be made dependent package for geo-replication
- [#1231257](https://bugzilla.redhat.com/1231257): nfs-ganesha: trying to bring up nfs-ganesha on three node shows error although pcs status and ganesha process on all three nodes
-- [#1231264](https://bugzilla.redhat.com/1231264): DHT : for many operation directory/file path is '(null)' in brick log
+- [#1231264](https://bugzilla.redhat.com/1231264): DHT : for many operation directory/file path is '(null)' in brick log
- [#1231268](https://bugzilla.redhat.com/1231268): Fix invalid logic in tier.t
- [#1231425](https://bugzilla.redhat.com/1231425): use after free bug in dht
- [#1231437](https://bugzilla.redhat.com/1231437): Rebalance is failing in test cluster framework.
@@ -626,7 +660,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1232183](https://bugzilla.redhat.com/1232183): cli correction: if tried to create multiple bricks on same server shows replicate volume instead of disperse volume
- [#1232238](https://bugzilla.redhat.com/1232238): [RHEV-RHGS] After self-heal operation, VM Image file loses the sparseness property
- [#1232304](https://bugzilla.redhat.com/1232304): libglusterfs: delete duplicate code in libglusterfs/src/dict.c
-- [#1232378](https://bugzilla.redhat.com/1232378): [remove-brick]: Creation of file from NFS writes to the decommissioned subvolume and subsequent lookup from fuse creates a link
+- [#1232378](https://bugzilla.redhat.com/1232378): [remove-brick]: Creation of file from NFS writes to the decommissioned subvolume and subsequent lookup from fuse creates a link
- [#1232391](https://bugzilla.redhat.com/1232391): Sharding - Use (f)xattrop (as opposed to (f)setxattr) to update shard size and block count
- [#1232430](https://bugzilla.redhat.com/1232430): [SNAPSHOT] : Snapshot delete fails with error - Snap might not be in an usable state
- [#1232572](https://bugzilla.redhat.com/1232572): quota: quota list displays double the size of previous value, post heal completion.
@@ -674,7 +708,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1236128](https://bugzilla.redhat.com/1236128): Quota list is not working on tiered volume.
- [#1236212](https://bugzilla.redhat.com/1236212): Migration does not work when EC is used as a tiered volume.
- [#1236270](https://bugzilla.redhat.com/1236270): [Backup]: File movement across directories does not get captured in the output file in a X3 volume
-- [#1236512](https://bugzilla.redhat.com/1236512): DHT + rebalance :- file permission got changed (sticky bit and setgid is set) after file migration failure
+- [#1236512](https://bugzilla.redhat.com/1236512): DHT + rebalance :- file permission got changed (sticky bit and setgid is set) after file migration failure
- [#1236561](https://bugzilla.redhat.com/1236561): Ganesha volume export failed
- [#1236945](https://bugzilla.redhat.com/1236945): glusterfsd crashed while rebalance and self-heal were in progress
- [#1237000](https://bugzilla.redhat.com/1237000): Add a test case for verifying NO empty changelog created
@@ -719,7 +753,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1240949](https://bugzilla.redhat.com/1240949): quota: In enforcer, caching parents in ctx during build ancestry is not working
- [#1240952](https://bugzilla.redhat.com/1240952): [USS]: snapd process is not killed once the glusterd comes back
- [#1240970](https://bugzilla.redhat.com/1240970): [Data Tiering]: HOT Files get demoted from hot tier
-- [#1240991](https://bugzilla.redhat.com/1240991): Quota: After rename operation , gluster v quota <volname> list-objects command give incorrect no. of files in output
+- [#1240991](https://bugzilla.redhat.com/1240991): Quota: After rename operation , gluster v quota <volname> list-objects command give incorrect no. of files in output
- [#1241054](https://bugzilla.redhat.com/1241054): Data Tiering: Rename of file is not heating up the file
- [#1241071](https://bugzilla.redhat.com/1241071): Spurious failure of ./tests/bugs/snapshot/bug-1109889.t
- [#1241104](https://bugzilla.redhat.com/1241104): Handle negative fcntl flock->l_len values
@@ -756,7 +790,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1243785](https://bugzilla.redhat.com/1243785): [Backup]: Password of the peer nodes prompted whenever a glusterfind session is deleted.
- [#1243798](https://bugzilla.redhat.com/1243798): quota/marker: dir count in inode quota is not atomic
- [#1243805](https://bugzilla.redhat.com/1243805): Gluster-nfs : unnecessary logging message in nfs.log for export feature
-- [#1243806](https://bugzilla.redhat.com/1243806): logging: Revert usage of global xlator for log buffer
+- [#1243806](https://bugzilla.redhat.com/1243806): logging: Revert usage of global xlator for log buffer
- [#1243812](https://bugzilla.redhat.com/1243812): [Backup]: Crash observed when keyboard interrupt is encountered in the middle of any glusterfind command
- [#1243838](https://bugzilla.redhat.com/1243838): [Backup]: Glusterfind list shows the session as corrupted on the peer node
- [#1243890](https://bugzilla.redhat.com/1243890): huge mem leak in posix xattrop
@@ -769,7 +803,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1245065](https://bugzilla.redhat.com/1245065): "rm -rf *" from multiple mount points fails to remove directories on all the subvolumes
- [#1245142](https://bugzilla.redhat.com/1245142): DHT-rebalance: Rebalance hangs on distribute volume when glusterd is stopped on peer node
- [#1245276](https://bugzilla.redhat.com/1245276): ec returns EIO error in cases where a more specific error could be returned
-- [#1245331](https://bugzilla.redhat.com/1245331): volume start command is failing when glusterfs compiled with debug enabled
+- [#1245331](https://bugzilla.redhat.com/1245331): volume start command is failing when glusterfs compiled with debug enabled
- [#1245380](https://bugzilla.redhat.com/1245380): [RFE] Render all mounts of a volume defunct upon access revocation
- [#1245425](https://bugzilla.redhat.com/1245425): IFS is not set back after used as "[" in log_newer function of include.rc
- [#1245544](https://bugzilla.redhat.com/1245544): quota/marker: errors in log file 'Failed to get metadata for'
@@ -783,7 +817,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1246082](https://bugzilla.redhat.com/1246082): sharding - Populate the aggregated ia_size and ia_blocks before unwinding (f)setattr to upper layers
- [#1246229](https://bugzilla.redhat.com/1246229): tier_lookup_heal.t contains incorrect file_on_fast_tier function
- [#1246275](https://bugzilla.redhat.com/1246275): POSIX ACLs as used by a FUSE mount can not use more than 32 groups
-- [#1246432](https://bugzilla.redhat.com/1246432): ./tests/basic/volume-snapshot.t spurious fail causing glusterd crash.
+- [#1246432](https://bugzilla.redhat.com/1246432): ./tests/basic/volume-snapshot.t spurious fail causing glusterd crash.
- [#1246736](https://bugzilla.redhat.com/1246736): client3_3_removexattr_cbk floods the logs with "No data available" messages
- [#1246794](https://bugzilla.redhat.com/1246794): GF_LOG_NONE logs always
- [#1247108](https://bugzilla.redhat.com/1247108): sharding - OS installation on vm image hangs on a sharded volume
@@ -827,11 +861,11 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1251857](https://bugzilla.redhat.com/1251857): nfs-ganesha: new volume creation tries to bring up glusterfs-nfs even when nfs-ganesha is already on
- [#1251980](https://bugzilla.redhat.com/1251980): dist-geo-rep: geo-rep status shows Active/Passive even when all the gsync processes in a node are killed
- [#1252121](https://bugzilla.redhat.com/1252121): tier.t contains pattern matching error in check_counters function
-- [#1252244](https://bugzilla.redhat.com/1252244): DHT : If Directory creation is in progress and rename of that Directory comes from another mount point then after both operation few files are not accessible and not listed on mount and more than one Directory have same gfid
+- [#1252244](https://bugzilla.redhat.com/1252244): DHT : If Directory creation is in progress and rename of that Directory comes from another mount point then after both operation few files are not accessible and not listed on mount and more than one Directory have same gfid
- [#1252263](https://bugzilla.redhat.com/1252263): Sharding - Send inode forgets on _all_ shards if/when the protocol layer (FUSE/Gfapi) at the top sends a forget on the actual file
- [#1252374](https://bugzilla.redhat.com/1252374): tests: no cleanup on receiving external signals INT, TERM and HUP
- [#1252410](https://bugzilla.redhat.com/1252410): libgfapi : adding follow flag to glfs_h_lookupat()
-- [#1252448](https://bugzilla.redhat.com/1252448): Probing a new node, which is part of another cluster, should throw proper error message in logs and CLI
+- [#1252448](https://bugzilla.redhat.com/1252448): Probing a new node, which is part of another cluster, should throw proper error message in logs and CLI
- [#1252586](https://bugzilla.redhat.com/1252586): Legacy files pre-existing tier attach must be promoted
- [#1252695](https://bugzilla.redhat.com/1252695): posix : pending - porting log messages to a new framework
- [#1252696](https://bugzilla.redhat.com/1252696): After resetting diagnostics.client-log-level, still Debug messages are logging in scrubber log
@@ -855,6 +889,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1254494](https://bugzilla.redhat.com/1254494): nfs-ganesha: refresh-config stdout output does not make sense
- [#1254850](https://bugzilla.redhat.com/1254850): Fix build on Mac OS X, glfs_h_lookupat symbol version
- [#1254863](https://bugzilla.redhat.com/1254863): non-default symver macros are incorrect
+- [#1254934](https://bugzilla.redhat.com/1254934): Misleading error messages on brick logs while creating directory (mkdir) on fuse mount
- [#1255310](https://bugzilla.redhat.com/1255310): Snapshot: When soft limit is reached, auto-delete is enable, create snapshot doesn't logs anything in log files
- [#1255386](https://bugzilla.redhat.com/1255386): snapd/quota/nfs daemon's runs on the node, even after that node was detached from trusted storage pool
- [#1255599](https://bugzilla.redhat.com/1255599): Remove unwanted tests from volume-snapshot.t
@@ -865,7 +900,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1256580](https://bugzilla.redhat.com/1256580): sharding - VM image size as seen from the mount keeps growing beyond configured size on a sharded volume
- [#1256588](https://bugzilla.redhat.com/1256588): arbiter-statfs.t fails spuriously in NetBSD regression
- [#1257076](https://bugzilla.redhat.com/1257076): DHT-rebalance: rebalance status shows failed when replica pair bricks are brought down in distrep volume while re-name of files going on
-- [#1257110](https://bugzilla.redhat.com/1257110): Logging : unnecessary log message "REMOVEXATTR No data available " when files are written to glusterfs mount
+- [#1257110](https://bugzilla.redhat.com/1257110): Logging : unnecessary log message "REMOVEXATTR No data available " when files are written to glusterfs mount
- [#1257149](https://bugzilla.redhat.com/1257149): Provide more meaningful errors on peer probe and peer detach
- [#1257533](https://bugzilla.redhat.com/1257533): snapshot delete all command fails with --xml option.
- [#1257694](https://bugzilla.redhat.com/1257694): quota: removexattr on /d/backends/patchy/.glusterfs/79/99/799929ec-f546-4bbf-8549-801b79623262 (for trusted.glusterfs.quota.add7e3f8-833b-48ec-8a03-f7cd09925468.contri) [No such file or directory]
@@ -908,7 +943,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1261819](https://bugzilla.redhat.com/1261819): Data Tiering: Disallow attach tier on a volume where any rebalance process is in progress to avoid deadlock(like remove brick commit pending etc)
- [#1261837](https://bugzilla.redhat.com/1261837): Data Tiering:Volume task status showing as remove brick when detach tier is trigger
- [#1261841](https://bugzilla.redhat.com/1261841): [HC] Implement fallocate, discard and zerofill with sharding
-- [#1261862](https://bugzilla.redhat.com/1261862): Data Tiering: detach-tier start force command not available on a tier volume(unlike which is possible in force remove-brick)
+- [#1261862](https://bugzilla.redhat.com/1261862): Data Tiering: detach-tier start force command not available on a tier volume(unlike which is possible in force remove-brick)
- [#1261927](https://bugzilla.redhat.com/1261927): Minor improvements and code cleanup for rpc
- [#1262345](https://bugzilla.redhat.com/1262345): `getfattr -n replica.split-brain-status <file>' command hung on the mount
- [#1262438](https://bugzilla.redhat.com/1262438): Error not propagated correctly if selfheal layout lock fails
@@ -984,7 +1019,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1275502](https://bugzilla.redhat.com/1275502): [Tier]: Typo in the output while setting the wrong value of low/hi watermark
- [#1275524](https://bugzilla.redhat.com/1275524): Data Tiering:heat counters not getting reset and also internal ops seem to be heating the files
- [#1275616](https://bugzilla.redhat.com/1275616): snap-max-hard-limit for snapshots always shows as 256 in info file.
-- [#1275966](https://bugzilla.redhat.com/1275966): RFE : Exporting multiple subdirectory entries for gluster volume using cli
+- [#1275966](https://bugzilla.redhat.com/1275966): RFE : Exporting multiple subdirectory entries for gluster volume using cli
- [#1276018](https://bugzilla.redhat.com/1276018): Wrong value of snap-max-hard-limit observed in 'gluster volume info'.
- [#1276023](https://bugzilla.redhat.com/1276023): Clone creation should not be successful when the node participating in volume goes down.
- [#1276028](https://bugzilla.redhat.com/1276028): [RFE] Geo-replication support for Volumes running in docker containers
@@ -993,7 +1028,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1276203](https://bugzilla.redhat.com/1276203): add-brick on a replicate volume could lead to data-loss
- [#1276243](https://bugzilla.redhat.com/1276243): gluster-nfs : Server crashed due to an invalid reference
- [#1276386](https://bugzilla.redhat.com/1276386): vol replace-brick fails when transport.socket.bind-address is set in glusterd
-- [#1276423](https://bugzilla.redhat.com/1276423): glusterd: probing a new node(>=3.6) from 3.5 cluster is moving the peer to rejected state
+- [#1276423](https://bugzilla.redhat.com/1276423): glusterd: probing a new node(>=3.6) from 3.5 cluster is moving the peer to rejected state
- [#1276562](https://bugzilla.redhat.com/1276562): Data Tiering:tiering deamon crashes when trying to heat the file
- [#1276643](https://bugzilla.redhat.com/1276643): Upgrading a subset of cluster to 3.7.5 leads to issues with glusterd commands
- [#1276675](https://bugzilla.redhat.com/1276675): Arbiter volume becomes replica volume in some cases
@@ -1017,7 +1052,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1279166](https://bugzilla.redhat.com/1279166): Data Tiering:Metadata changes to a file should not heat/promote the file
- [#1279297](https://bugzilla.redhat.com/1279297): Remove bug-1275616.t from bad tests list
- [#1279327](https://bugzilla.redhat.com/1279327): [Snapshot]: Clone creation fails on tiered volume with pre-validation failed message
-- [#1279376](https://bugzilla.redhat.com/1279376): Data Tiering:Rename of cold file to a hot file causing split brain and showing two copies of files in mount point
+- [#1279376](https://bugzilla.redhat.com/1279376): Data Tiering:Rename of cold file to a hot file causing split brain and showing two copies of files in mount point
- [#1279484](https://bugzilla.redhat.com/1279484): glusterfsd to support volfile-server-transport type "unix"
- [#1279637](https://bugzilla.redhat.com/1279637): Data Tiering:Regression:Detach tier commit is passing when detach tier is in progress
- [#1279705](https://bugzilla.redhat.com/1279705): AFR: 3-way-replication: Transport point not connected error messaged not displayed when one of the replica pair is down
@@ -1027,13 +1062,13 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1279921](https://bugzilla.redhat.com/1279921): volume info of %s obtained from %s: ambiguous uuid - Starting geo-rep session
- [#1280428](https://bugzilla.redhat.com/1280428): fops-during-migration-pause.t spurious failure
- [#1281230](https://bugzilla.redhat.com/1281230): dht must avoid fresh lookups when a single replica pair goes offline
-- [#1281265](https://bugzilla.redhat.com/1281265): DHT :- log is full of ' Found anomalies in /<DIR> (gfid = 00000000-0000-0000-0000-000000000000)' - for each Directory which was self healed
+- [#1281265](https://bugzilla.redhat.com/1281265): DHT :- log is full of ' Found anomalies in /<DIR> (gfid = 00000000-0000-0000-0000-000000000000)' - for each Directory which was self healed
- [#1281598](https://bugzilla.redhat.com/1281598): Data Tiering: "ls" count taking link files and promote/demote files into consideration both on fuse and nfs mount
- [#1281892](https://bugzilla.redhat.com/1281892): packaging: gfind_missing_files are not in geo-rep %if ... %endif conditional
- [#1282076](https://bugzilla.redhat.com/1282076): cache mode must be the default mode for tiered volumes
- [#1282322](https://bugzilla.redhat.com/1282322): [GlusterD]: Volume start fails post add-brick on a volume which is not started
- [#1282331](https://bugzilla.redhat.com/1282331): Geo-replication is logging in Localtime
-- [#1282390](https://bugzilla.redhat.com/1282390): Data Tiering:delete command rm -rf not deleting files the linkto file(hashed) which are under migration and possible spit-brain observed and possible disk wastage
+- [#1282390](https://bugzilla.redhat.com/1282390): Data Tiering:delete command rm -rf not deleting files the linkto file(hashed) which are under migration and possible spit-brain observed and possible disk wastage
- [#1282461](https://bugzilla.redhat.com/1282461): [upgrade] Error messages seen in glusterd logs, while upgrading from RHGS 2.1.6 to RHGS 3.1
- [#1282673](https://bugzilla.redhat.com/1282673): ./tests/basic/tier/record-metadata-heat.t is failing upstream
- [#1282751](https://bugzilla.redhat.com/1282751): Large system file distribution is broken
@@ -1041,10 +1076,10 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1282915](https://bugzilla.redhat.com/1282915): glusterfs does not register with rpcbind on restart
- [#1283032](https://bugzilla.redhat.com/1283032): While file is self healing append to the file hangs
- [#1283103](https://bugzilla.redhat.com/1283103): Setting security.* xattrs fails
-- [#1283178](https://bugzilla.redhat.com/1283178): [GlusterD]: Incorrect peer status showing if volume restart done before entire cluster update.
+- [#1283178](https://bugzilla.redhat.com/1283178): [GlusterD]: Incorrect peer status showing if volume restart done before entire cluster update.
- [#1283211](https://bugzilla.redhat.com/1283211): check_host_list() should be more robust
- [#1283485](https://bugzilla.redhat.com/1283485): Warning messages seen in glusterd logs in executing gluster volume set help
-- [#1283488](https://bugzilla.redhat.com/1283488): [Tier]: Space is missed b/w the words in the detach tier stop error message
+- [#1283488](https://bugzilla.redhat.com/1283488): [Tier]: Space is missed b/w the words in the detach tier stop error message
- [#1283567](https://bugzilla.redhat.com/1283567): qupta/marker: backward compatibility with quota xattr vesrioning
- [#1283948](https://bugzilla.redhat.com/1283948): glupy default CFLAGS conflict with our CFLAGS when --enable-debug is used
- [#1283983](https://bugzilla.redhat.com/1283983): nfs-ganesha: Upcall sent on null gfid
@@ -1060,14 +1095,14 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1285152](https://bugzilla.redhat.com/1285152): store afr pending xattrs as a volume option
- [#1285173](https://bugzilla.redhat.com/1285173): Create doesn't remember flags it is opened with
- [#1285230](https://bugzilla.redhat.com/1285230): Data Tiering:File create terminates with "Input/output error" as split brain is observed
-- [#1285241](https://bugzilla.redhat.com/1285241): Corrupted objects list does not get cleared even after all the files in the volume are deleted and count increases as old + new count
+- [#1285241](https://bugzilla.redhat.com/1285241): Corrupted objects list does not get cleared even after all the files in the volume are deleted and count increases as old + new count
- [#1285288](https://bugzilla.redhat.com/1285288): Better indication of arbiter brick presence in a volume.
- [#1285483](https://bugzilla.redhat.com/1285483): legacy_many_files.t fails upstream
- [#1285488](https://bugzilla.redhat.com/1285488): [geo-rep]: Recommended Shared volume use on geo-replication is broken
- [#1285616](https://bugzilla.redhat.com/1285616): Brick crashes because of race in bit-rot init
- [#1285634](https://bugzilla.redhat.com/1285634): Self-heal triggered every couple of seconds and a 3-node 1-arbiter setup
- [#1285660](https://bugzilla.redhat.com/1285660): sharding - reads fail on sharded volume while running iozone
-- [#1285663](https://bugzilla.redhat.com/1285663): tiering: Seeing error messages E "/usr/lib64/glusterfs/3.7.5/xlator/features/changetimerecorder.so(ctr_lookup+0x54f) [0x7f6c435c116f] ) 0-ctr: invalid argument: loc->name [Invalid argument] after attach tier
+- [#1285663](https://bugzilla.redhat.com/1285663): tiering: Seeing error messages E "/usr/lib64/glusterfs/3.7.5/xlator/features/changetimerecorder.so(ctr_lookup+0x54f) [0x7f6c435c116f] ) 0-ctr: invalid argument: loc->name [Invalid argument] after attach tier
- [#1285968](https://bugzilla.redhat.com/1285968): cli/geo-rep : remove unused code
- [#1285989](https://bugzilla.redhat.com/1285989): bitrot: bitrot scrub status command should display the correct value of total number of scrubbed, unsigned file
- [#1286017](https://bugzilla.redhat.com/1286017): We need to skip data self-heal for arbiter bricks
@@ -1077,10 +1112,10 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1286656](https://bugzilla.redhat.com/1286656): Data Tiering:Read heat not getting calculated and read operations not heating the file with counter enabled
- [#1286735](https://bugzilla.redhat.com/1286735): RFE: add setup and teardown for fuse tests
- [#1286910](https://bugzilla.redhat.com/1286910): Tier: ec xattrs are set on a newly created file present in the non-ec hot tier
-- [#1286959](https://bugzilla.redhat.com/1286959): [GlusterD]: After log rotate of cmd_history.log file, the next executed gluster commands are not present in the cmd_history.log file.
+- [#1286959](https://bugzilla.redhat.com/1286959): [GlusterD]: After log rotate of cmd_history.log file, the next executed gluster commands are not present in the cmd_history.log file.
- [#1286974](https://bugzilla.redhat.com/1286974): Without detach tier commit, status changes back to tier migration
- [#1286988](https://bugzilla.redhat.com/1286988): bitrot: gluster man page and gluster cli usage does not mention the new scrub status cmd
-- [#1287027](https://bugzilla.redhat.com/1287027): glusterd: cli is showing command success for rebalance commands(command which uses op_sm framework) even though staging is failed in follower node.
+- [#1287027](https://bugzilla.redhat.com/1287027): glusterd: cli is showing command success for rebalance commands(command which uses op_sm framework) even though staging is failed in follower node.
- [#1287455](https://bugzilla.redhat.com/1287455): glusterd: all the daemon's of existing volume stopping upon peer detach
- [#1287503](https://bugzilla.redhat.com/1287503): Full heal of volume fails on some nodes "Commit failed on X", and glustershd logs "Couldn't get xlator xl-0"
- [#1287517](https://bugzilla.redhat.com/1287517): Memory leak in glusterd
@@ -1091,7 +1126,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1287842](https://bugzilla.redhat.com/1287842): Few snapshot creation fails with pre-validation failed message on tiered volume.
- [#1287872](https://bugzilla.redhat.com/1287872): add bug-924726.t to ignore list in regression
- [#1287992](https://bugzilla.redhat.com/1287992): [GlusterD]Probing a node having standalone volume, should not happen
-- [#1287996](https://bugzilla.redhat.com/1287996): [Quota]: Peer status is in "Rejected" state with Quota enabled volume
+- [#1287996](https://bugzilla.redhat.com/1287996): [Quota]: Peer status is in "Rejected" state with Quota enabled volume
- [#1288019](https://bugzilla.redhat.com/1288019): Possible memory leak in the tiered daemon
- [#1288059](https://bugzilla.redhat.com/1288059): glusterd: disable ping timer b/w glusterd and make epoll thread count default 1
- [#1288474](https://bugzilla.redhat.com/1288474): tiering: quota list command is not working after attach or detach
@@ -1109,7 +1144,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1289845](https://bugzilla.redhat.com/1289845): spurious failure of bug-1279376-rename-demoted-file.t
- [#1289859](https://bugzilla.redhat.com/1289859): Symlinks Rename fails in Symlink not exists in Slave
- [#1289869](https://bugzilla.redhat.com/1289869): Compile is broken in gluster master
-- [#1289916](https://bugzilla.redhat.com/1289916): Client will not get notified about changes to volume if node used while mounting goes down
+- [#1289916](https://bugzilla.redhat.com/1289916): Client will not get notified about changes to volume if node used while mounting goes down
- [#1289935](https://bugzilla.redhat.com/1289935): Glusterfind hook script failing if /var/lib/glusterd/glusterfind dir was absent
- [#1290125](https://bugzilla.redhat.com/1290125): tests/basic/afr/arbiter-statfs.t fails most of the times on NetBSD
- [#1290151](https://bugzilla.redhat.com/1290151): hook script for CTDB should not change Samba config
@@ -1117,13 +1152,13 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1290421](https://bugzilla.redhat.com/1290421): changelog: CHANGELOG rename error is logged on every changelog rollover
- [#1290604](https://bugzilla.redhat.com/1290604): S30Samba scripts do not work on systemd systems
- [#1290677](https://bugzilla.redhat.com/1290677): tiering: T files getting created , even after disk quota exceeds
-- [#1290734](https://bugzilla.redhat.com/1290734): [GlusterD]: GlusterD log is filled with error messages - " Failed to aggregate response from node/brick"
+- [#1290734](https://bugzilla.redhat.com/1290734): [GlusterD]: GlusterD log is filled with error messages - " Failed to aggregate response from node/brick"
- [#1290766](https://bugzilla.redhat.com/1290766): [RFE] quota: enhance quota enable and disable process
- [#1290865](https://bugzilla.redhat.com/1290865): nfs-ganesha server do not enter grace period during failover/failback
- [#1290965](https://bugzilla.redhat.com/1290965): [Tiering] + [DHT] - Detach tier fails to migrate the files when there are corrupted objects in hot tier.
- [#1290975](https://bugzilla.redhat.com/1290975): File is not demoted after self heal (split-brain)
- [#1291212](https://bugzilla.redhat.com/1291212): Regular files are listed as 'T' files on nfs mount
-- [#1291259](https://bugzilla.redhat.com/1291259): Upcall/Cache-Invalidation: Use parent stbuf while updating parent entry
+- [#1291259](https://bugzilla.redhat.com/1291259): Upcall/Cache-Invalidation: Use parent stbuf while updating parent entry
- [#1291537](https://bugzilla.redhat.com/1291537): [RFE] Provide mechanism to spin up reproducible test environment for all developers
- [#1291566](https://bugzilla.redhat.com/1291566): first file created after hot tier full fails to create, but later ends up as a stale erroneous file (file with ???????????)
- [#1291603](https://bugzilla.redhat.com/1291603): [tiering]: read/write freq-threshold allows negative values
@@ -1142,7 +1177,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1293227](https://bugzilla.redhat.com/1293227): Minor improvements and code cleanup for locks translator
- [#1293256](https://bugzilla.redhat.com/1293256): [Tier]: "Bad file descriptor" on removal of symlink only on tiered volume
- [#1293293](https://bugzilla.redhat.com/1293293): afr: warn if pending xattrs missing during init()
-- [#1293414](https://bugzilla.redhat.com/1293414): [GlusterD]: Peer detach happening with a node which is hosting volume bricks
+- [#1293414](https://bugzilla.redhat.com/1293414): [GlusterD]: Peer detach happening with a node which is hosting volume bricks
- [#1293523](https://bugzilla.redhat.com/1293523): tier-snapshot.t runs too slowly on RHEL6
- [#1293558](https://bugzilla.redhat.com/1293558): gluster cli crashed while performing 'gluster vol bitrot <vol_name> scrub status'
- [#1293601](https://bugzilla.redhat.com/1293601): quota: handle quota xattr removal when quota is enabled again
@@ -1184,7 +1219,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1297195](https://bugzilla.redhat.com/1297195): no-mtab (-n) mount option ignore next mount option
- [#1297311](https://bugzilla.redhat.com/1297311): Attach tier : Creates fail with invalid argument errors
- [#1297373](https://bugzilla.redhat.com/1297373): [write-behind] : Write/Append to a full volume causes fuse client to crash
-- [#1297638](https://bugzilla.redhat.com/1297638): gluster vol get volname user.metadata-text" Command fails with "volume get option: failed: Did you mean cluster.metadata-self-heal?"
+- [#1297638](https://bugzilla.redhat.com/1297638): gluster vol get volname user.metadata-text" Command fails with "volume get option: failed: Did you mean cluster.metadata-self-heal?"
- [#1297695](https://bugzilla.redhat.com/1297695): heal info reporting slow when IO is in progress on the volume
- [#1297740](https://bugzilla.redhat.com/1297740): tests/bugs/quota/bug-1049323.t fails in fedora
- [#1297750](https://bugzilla.redhat.com/1297750): volume info xml does not show arbiter details
@@ -1220,7 +1255,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1302772](https://bugzilla.redhat.com/1302772): promotions not balanced across hot tier sub-volumes
- [#1302948](https://bugzilla.redhat.com/1302948): tar complains: <fileName>: file changed as we read it
- [#1303028](https://bugzilla.redhat.com/1303028): Tiering status and rebalance status stops getting updated
-- [#1303269](https://bugzilla.redhat.com/1303269): After GlusterD restart, Remove-brick commit happening even though data migration not completed.
+- [#1303269](https://bugzilla.redhat.com/1303269): After GlusterD restart, Remove-brick commit happening even though data migration not completed.
- [#1303501](https://bugzilla.redhat.com/1303501): access-control : spurious error log message on every setxattr call
- [#1303828](https://bugzilla.redhat.com/1303828): [USS]: If .snaps already exists, ls -la lists it even after enabling USS
- [#1303829](https://bugzilla.redhat.com/1303829): [feat] Compound translator
@@ -1244,12 +1279,12 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1306852](https://bugzilla.redhat.com/1306852): Tiering threads can starve each other
- [#1306897](https://bugzilla.redhat.com/1306897): Remove split-brain-healing.t from bad tests
- [#1307208](https://bugzilla.redhat.com/1307208): dht: NULL layouts referenced while the I/O is going on tiered volume
-- [#1308402](https://bugzilla.redhat.com/1308402): Newly created volume start, starting the bricks when server quorum not met
+- [#1308402](https://bugzilla.redhat.com/1308402): Newly created volume start, starting the bricks when server quorum not met
- [#1308900](https://bugzilla.redhat.com/1308900): build: fix build break
-- [#1308961](https://bugzilla.redhat.com/1308961): [New] - quarantine folder becomes empty and bitrot status does not list any files which are corrupted
+- [#1308961](https://bugzilla.redhat.com/1308961): [New] - quarantine folder becomes empty and bitrot status does not list any files which are corrupted
- [#1309238](https://bugzilla.redhat.com/1309238): Issues with refresh-config when the ".export_added" has different values on different nodes
- [#1309342](https://bugzilla.redhat.com/1309342): Wrong permissions set on previous copy of truncated files inside trash directory
-- [#1309462](https://bugzilla.redhat.com/1309462): Upgrade from 3.7.6 to 3.7.8 causes massive drop in write performance. Fresh install of 3.7.8 also has low write performance
+- [#1309462](https://bugzilla.redhat.com/1309462): Upgrade from 3.7.6 to 3.7.8 causes massive drop in write performance. Fresh install of 3.7.8 also has low write performance
- [#1309659](https://bugzilla.redhat.com/1309659): [tiering]: Performing a gluster vol reset, turns off 'features.ctr-enabled' on a tiered volume
- [#1309999](https://bugzilla.redhat.com/1309999): Data Tiering:Don't allow a detach-tier commit if detach-tier start has failed to complete
- [#1310080](https://bugzilla.redhat.com/1310080): [RFE]Add --no-encode option to the `glusterfind pre` command
@@ -1281,10 +1316,10 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1313901](https://bugzilla.redhat.com/1313901): glusterd: does not start
- [#1314150](https://bugzilla.redhat.com/1314150): Choose self-heal source as local subvolume if possible
- [#1314204](https://bugzilla.redhat.com/1314204): nfs-ganesha setup fails on fedora
-- [#1314291](https://bugzilla.redhat.com/1314291): tier: GCC throws Unused variable warning for conf in tier_link_cbk function
+- [#1314291](https://bugzilla.redhat.com/1314291): tier: GCC throws Unused variable warning for conf in tier_link_cbk function
- [#1314549](https://bugzilla.redhat.com/1314549): remove replace-brick-self-heal.t from bad tests
- [#1314649](https://bugzilla.redhat.com/1314649): disperse: Provide an option to enable/disable eager lock
-- [#1315024](https://bugzilla.redhat.com/1315024): glusterfs-libs postun scriptlet fail /sbin/ldconfig: relative path `1' used to build cache
+- [#1315024](https://bugzilla.redhat.com/1315024): glusterfs-libs postun scriptlet fail /sbin/ldconfig: relative path '1' used to build cache
- [#1315168](https://bugzilla.redhat.com/1315168): Fd based fops should not be logging ENOENT/ESTALE
- [#1315186](https://bugzilla.redhat.com/1315186): setting lower op-version should throw failure message
- [#1315465](https://bugzilla.redhat.com/1315465): glusterfs brick process crashed
@@ -1292,7 +1327,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1315601](https://bugzilla.redhat.com/1315601): Geo-replication CPU usage is 100%
- [#1315659](https://bugzilla.redhat.com/1315659): [Tier]: Following volume restart, tierd shows failure at status on some nodes
- [#1315666](https://bugzilla.redhat.com/1315666): Data Tiering:tier volume status shows as in-progress on all nodes of a cluster even if the node is not part of volume
-- [#1316327](https://bugzilla.redhat.com/1316327): Upgrade from 3.7.6 to 3.7.8 causes massive drop in write performance. Fresh install of 3.7.8 also has low write performance
+- [#1316327](https://bugzilla.redhat.com/1316327): Upgrade from 3.7.6 to 3.7.8 causes massive drop in write performance. Fresh install of 3.7.8 also has low write performance
- [#1316437](https://bugzilla.redhat.com/1316437): snapd doesn't come up automatically after node reboot.
- [#1316462](https://bugzilla.redhat.com/1316462): Fix possible failure in tests/basic/afr/arbiter.t
- [#1316499](https://bugzilla.redhat.com/1316499): volume set on user.* domain trims all white spaces in the value
@@ -1313,7 +1348,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1319374](https://bugzilla.redhat.com/1319374): smbd crashes while accessing multiple volume shares via same client
- [#1319581](https://bugzilla.redhat.com/1319581): Marker: Lot of dict_get errors in brick log!!
- [#1319706](https://bugzilla.redhat.com/1319706): Add a script that converts the gfid-string of a directory into absolute path name w.r.t the brick path.
-- [#1319717](https://bugzilla.redhat.com/1319717): glusterfind pre test projects_media2 /tmp/123 rh-storage2 - pre failed: Traceback ...
+- [#1319717](https://bugzilla.redhat.com/1319717): glusterfind pre test projects_media2 /tmp/123 rh-storage2 - pre failed: Traceback ...
- [#1319992](https://bugzilla.redhat.com/1319992): RFE: Lease support for gluster
- [#1320101](https://bugzilla.redhat.com/1320101): Client log gets flooded by default with fop stats under DEBUG level
- [#1320388](https://bugzilla.redhat.com/1320388): [GSS]-gluster v heal volname info does not work with enabled ssl/tls
@@ -1353,6 +1388,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1326308](https://bugzilla.redhat.com/1326308): WORM/Retention Feature
- [#1326410](https://bugzilla.redhat.com/1326410): /var/lib/glusterd/$few-directories not owned by any package, causing it to remain after glusterfs-server is uninstalled
- [#1326627](https://bugzilla.redhat.com/1326627): nfs-ganesha crashes with segfault error while doing refresh config on volume.
+- [#1327174](https://bugzilla.redhat.com/1327174): op-version for 3.8 features should be set to GD_OP_VERSION_3_8_0
- [#1327507](https://bugzilla.redhat.com/1327507): [DHT-Rebalance]: with few brick process down, rebalance process isn't killed even after stopping rebalance process
- [#1327553](https://bugzilla.redhat.com/1327553): [geo-rep]: geo status shows $MASTER Nodes always with hostname even if volume is configured with IP
- [#1327976](https://bugzilla.redhat.com/1327976): [RFE] Provide vagrant developer setup for GlusterFS
@@ -1374,7 +1410,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1330476](https://bugzilla.redhat.com/1330476): libgfapi:Setting need_lookup on wrong list
- [#1330481](https://bugzilla.redhat.com/1330481): glusterd restart is failing if volume brick is down due to underlying FS crash.
- [#1330567](https://bugzilla.redhat.com/1330567): SAMBA+TIER : File size is not getting updated when created on windows samba share mount
-- [#1330583](https://bugzilla.redhat.com/1330583): glusterfs-libs postun ldconfig: relative path '1' used to build cache
+- [#1330583](https://bugzilla.redhat.com/1330583): glusterfs-libs postun ldconfig: relative path `1' used to build cache
- [#1330616](https://bugzilla.redhat.com/1330616): Minor improvements and code cleanup for libglusterfs
- [#1330974](https://bugzilla.redhat.com/1330974): Swap order of GF_EVENT_SOME_CHILD_DOWN enum to match the release3.-7 branch
- [#1331042](https://bugzilla.redhat.com/1331042): glusterfsd: return actual exit status on mount process
@@ -1421,6 +1457,7 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1336152](https://bugzilla.redhat.com/1336152): [Tiering]: Files remain in hot tier even after detach tier completes
- [#1336198](https://bugzilla.redhat.com/1336198): failover is not working with latest builds.
- [#1336268](https://bugzilla.redhat.com/1336268): features/index: clang compile warnings in index.c
+- [#1336285](https://bugzilla.redhat.com/1336285): Worker dies with [Errno 5] Input/output error upon creation of entries at slave
- [#1336472](https://bugzilla.redhat.com/1336472): [Tiering]: The message 'Max cycle time reached..exiting migration' incorrectly displayed as an 'error' in the logs
- [#1336704](https://bugzilla.redhat.com/1336704): [geo-rep]: Multiple geo-rep session to the same slave is allowed for different users
- [#1336794](https://bugzilla.redhat.com/1336794): assorted typos and spelling mistakes from Debian lintian
@@ -1428,23 +1465,59 @@ A total of 1685 (FIXME) patches has been sent, addressing 1154 (FIXME) bugs:
- [#1336801](https://bugzilla.redhat.com/1336801): ganesha exported volumes doesn't get synced up on shutdown node when it comes up.
- [#1336854](https://bugzilla.redhat.com/1336854): scripts: bash-isms in scripts
- [#1336947](https://bugzilla.redhat.com/1336947): [NFS-Ganesha] : stonith-enabled option not set with new versions of cman,pacemaker,corosync and pcs
-- [#1337114](https://bugzilla.redhat.com/1337114): Modified volume options are not syncing once glusterd comes up.
+- [#1337114](https://bugzilla.redhat.com/1337114): Modified volume options are not syncing once glusterd comes up.
- [#1337127](https://bugzilla.redhat.com/1337127): rpc: change client insecure port ceiling from 65535 to 49151
- [#1337130](https://bugzilla.redhat.com/1337130): Revert "glusterd/afr: store afr pending xattrs as a volume option" patch on 3.8 branch
- [#1337387](https://bugzilla.redhat.com/1337387): Add arbiter brick hotplug
-- [#1337394](https://bugzilla.redhat.com/1337394): DHT : few Files are not accessible and not listed on mount + more than one Directory have same gfid + (sometimes) attributes has ?? in ls output after renaming Directories from multiple client at same time
+- [#1337394](https://bugzilla.redhat.com/1337394): DHT : few Files are not accessible and not listed on mount + more than one Directory have same gfid + (sometimes) attributes has ?? in ls output after renaming Directories from multiple client at same time
- [#1337596](https://bugzilla.redhat.com/1337596): Mounting a volume over NFS with a subdir followed by a / returns "Invalid argument"
- [#1337638](https://bugzilla.redhat.com/1337638): Leases: Fix lease failures in certain scenarios
- [#1337652](https://bugzilla.redhat.com/1337652): log flooded with Could not map name=xxxx to a UUID when config'd with long hostnames
- [#1337780](https://bugzilla.redhat.com/1337780): tests/bugs/write-behind/1279730.t fails spuriously
- [#1337795](https://bugzilla.redhat.com/1337795): tests/basic/afr/tarissue.t fails regression
+- [#1337805](https://bugzilla.redhat.com/1337805): Mandatory locks are not migrated during lock migration
- [#1337822](https://bugzilla.redhat.com/1337822): one of vm goes to paused state when network goes down and comes up back
- [#1337839](https://bugzilla.redhat.com/1337839): Files present in the .shard folder even after deleting all the vms from the UI
- [#1337870](https://bugzilla.redhat.com/1337870): Some of VMs go to paused state when there is concurrent I/O on vms
- [#1337908](https://bugzilla.redhat.com/1337908): SAMBA+TIER : Wrong message display.On detach tier success the message reflects Tier command failed.
- [#1338051](https://bugzilla.redhat.com/1338051): ENOTCONN error during parallel rmdir
+- [#1338501](https://bugzilla.redhat.com/1338501): implement meta-lock/unlock for lock migration
+- [#1338669](https://bugzilla.redhat.com/1338669): AFR : fuse,nfs mount hangs when directories with same names are created and deleted continuously
- [#1338968](https://bugzilla.redhat.com/1338968): common-ha: ganesha.nfsd not put into NFS-GRACE after fail-back
- [#1339137](https://bugzilla.redhat.com/1339137): fuse: In fuse_first_lookup(), dict is not un-referenced in case create_frame returns an empty pointer.
- [#1339192](https://bugzilla.redhat.com/1339192): Missing autotools helper config.* files
- [#1339228](https://bugzilla.redhat.com/1339228): gfapi: set mem_acct for the variables created for upcall
-
+- [#1339436](https://bugzilla.redhat.com/1339436): Full heal of a sub-directory does not clean up name-indices when granular-entry-heal is enabled.
+- [#1339610](https://bugzilla.redhat.com/1339610): glusterfs-libs postun ldconfig: relative path `1' used to build cache
+- [#1339639](https://bugzilla.redhat.com/1339639): RFE : Feature: Automagic unsplit-brain policies for AFR
+- [#1340487](https://bugzilla.redhat.com/1340487): copy-export-ganesha.sh does not have a correct shebang
+- [#1340935](https://bugzilla.redhat.com/1340935): Automount fails because /sbin/mount.glusterfs does not accept the -s option
+- [#1340991](https://bugzilla.redhat.com/1340991): [granular entry sh] - Add more tests
+- [#1341069](https://bugzilla.redhat.com/1341069): [geo-rep]: Monitor crashed with [Errno 3] No such process
+- [#1341108](https://bugzilla.redhat.com/1341108): [geo-rep]: If the session is renamed, geo-rep configuration are not retained
+- [#1341295](https://bugzilla.redhat.com/1341295): build: RHEL7 unpackaged files /var/lib/glusterd/hooks/.../S57glusterfind-delete-post.{pyc,pyo}
+- [#1341477](https://bugzilla.redhat.com/1341477): ERROR and Warning message on writing a file from mount point "null gfid for path (null)" repeated 3 times between"
+- [#1341556](https://bugzilla.redhat.com/1341556): [features/worm] Unwind FOPs with op_errno and add gf_worm prefix to functions
+- [#1341697](https://bugzilla.redhat.com/1341697): Add ability to set oom_score_adj for glusterfs process
+- [#1341770](https://bugzilla.redhat.com/1341770): After setting up ganesha on RHEL 6, nodes remain in stopped state and grace related failures observed in pcs status
+- [#1341944](https://bugzilla.redhat.com/1341944): [geo-rep]: Snapshot creation having geo-rep session is broken
+- [#1342083](https://bugzilla.redhat.com/1342083): changelog: changelog_rollover breaks when number of fds opened is more than 1024
+- [#1342178](https://bugzilla.redhat.com/1342178): Directory creation (mkdir) fails when the remove brick is initiated for replicated volumes accessing via nfs-ganesha
+- [#1342275](https://bugzilla.redhat.com/1342275): [PATCH] Small typo fixes
+- [#1342350](https://bugzilla.redhat.com/1342350): Volume set option not present to enable leases
+- [#1342372](https://bugzilla.redhat.com/1342372): [quota+snapshot]: Directories are inaccessible from activated snapshot, when the snapshot was created during directory creation
+- [#1342387](https://bugzilla.redhat.com/1342387): Log parameters such as the gfid, fd address, offset and length of the reads upon failure for easier debugging
+- [#1342452](https://bugzilla.redhat.com/1342452): upgrade path when slave volume uuid used in geo-rep session
+- [#1342620](https://bugzilla.redhat.com/1342620): libglusterfs: race conditions and illegal mem access in timer
+- [#1342634](https://bugzilla.redhat.com/1342634): [georep]: Stopping volume fails if it has geo-rep session (Even in stopped state)
+- [#1342954](https://bugzilla.redhat.com/1342954): self heal daemon killed due to oom kills on a dist-disperse volume using nfs ganesha
+- [#1343287](https://bugzilla.redhat.com/1343287): Enabling glusternfs with nfs.rpc-auth-allow to many hosts failed
+- [#1343368](https://bugzilla.redhat.com/1343368): Input / Output error when chmoding files on NFS mount point
+- [#1344421](https://bugzilla.redhat.com/1344421): fd leak in disperse
+- [#1344559](https://bugzilla.redhat.com/1344559): conservative merge happening on a x3 volume for a deleted file
+- [#1344594](https://bugzilla.redhat.com/1344594): [disperse] mkdir after rebalance gives Input/Output Error
+- [#1344607](https://bugzilla.redhat.com/1344607): [geo-rep]: Add-Brick use case: create push-pem force on existing geo-rep fails
+- [#1344631](https://bugzilla.redhat.com/1344631): fail delete volume operation if one of the glusterd instance is down in cluster
+- [#1345713](https://bugzilla.redhat.com/1345713): [features/worm] - write FOP should pass for the normal files
+- [#1345977](https://bugzilla.redhat.com/1345977): api: revert glfs_ipc_xd intended for 4.0
+- [#1346222](https://bugzilla.redhat.com/1346222): Add graph for decompounder xlator