author Niels de Vos <>	2016-05-16 13:17:08 +0200
committer Niels de Vos <>	2016-05-16 13:19:51 +0200
commit 74ab2d6f10f6494785c5eeb62169317904aabc57 (patch)
parent ef002764c8ec05d1ca7b658533250a588d0b8609 (diff)
doc: add work-in-progress release-notes for 3.8.0 (v3.8rc1)
Initial version of the release-notes. Follow-up patches should extend this and
make it more useful and user friendly.

BUG: 1317278
Change-Id: Ida961034e898fca6d99b5b95044be612a59c898f
Signed-off-by: Niels de Vos <>
1 file changed, 1168 insertions, 0 deletions
diff --git a/doc/release-notes/ b/doc/release-notes/
new file mode 100644
index 00000000000..e6a38f94546
--- /dev/null
+++ b/doc/release-notes/
@@ -0,0 +1,1168 @@
+# Work in progress release notes for Gluster 3.8.0 (RC1)
+These are the current release notes for Release Candidate 1. Follow-up changes
+will add more user-friendly notes and instructions.
+The release notes are being worked on by the maintainers and the developers of
+the different features. Assistance from others is welcome! Contributions can be made
+in [this etherpad](
+(FIXME: insert useful release notes here)
+## Bugs addressed
+A total of 1685 (FIXME) patches have been sent, addressing 1154 (FIXME) bugs:
+- [#789278]( Issues reported by Coverity static analysis tool
+- [#1004332]( Setting of any option using volume set fails when the clients are in older version.
+- [#1054694]( A replicated volume takes too much to come online when one server is down
+- [#1075611]( [FEAT] log: enhance gluster log format with message ID and standardize errno reporting
+- [#1092414]( Disable NFS by default
+- [#1093692]( Resource/Memory leak issues reported by Coverity.
+- [#1094119]( Remove replace-brick with data migration support from gluster cli
+- [#1109180]( Issues reported by Cppcheck static analysis tool
+- [#1110262]( suid,sgid,sticky bit on directories not preserved when doing add-brick
+- [#1114847]( glusterd logs are filled with "readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)"
+- [#1117886]( Gluster not resolving hosts with IPv6 only lookups
+- [#1122377]( [SNAPSHOT]: activate and deactivate doesn't do a handshake when a glusterd comes back
+- [#1122395]( man or info page of gluster needs to be updated with self-heal commands.
+- [#1129939]( NetBSD port
+- [#1131275]( I currently have no idea what is doing during at any specific moment
+- [#1132465]( [FEAT] Trash translator
+- [#1141379]( Geo-Replication - Fails to handle file renaming correctly between master and slave
+- [#1142423]( [DHT-REBALANCE]-DataLoss: The data appended to a file during its migration will be lost once the migration is done
+- [#1143880]( [FEAT] Exports and Netgroups Authentication for Gluster NFS mount
+- [#1158654]( [FEAT] Journal Based Replication (JBR - formerly NSR)
+- [#1162905]( hardcoded gsyncd path causes geo-replication to fail on non-redhat systems
+- [#1163416]( [USS]: From NFS, unable to go to .snaps directory (error: No such file or directory)
+- [#1163543]( Fix regression test spurious failures
+- [#1165041]( Different client can not execute "for((i=0;i<1000;i++));do ls -al;done" in a same directory at the sametime
+- [#1166862]( rmtab file is a bottleneck when lot of clients are accessing a volume through NFS
+- [#1168819]( [USS]: Need defined rules for snapshot-directory, setting to a/b works but in linux a/b is b is subdirectory of a
+- [#1169317]( rmtab file is a bottleneck when lot of clients are accessing a volume through NFS
+- [#1170075]( [RFE] : BitRot detection in glusterfs
+- [#1171703]( AFR+SNAPSHOT: File with hard link have different inode number in USS
+- [#1171954]( [RFE] Rebalance Performance Improvements
+- [#1174765]( Hook scripts are not installed after make install
+- [#1176062]( Force replace-brick lead to the persistent write(use dd) return Input/output error
+- [#1176837]( [USS] : statfs call fails on USS.
+- [#1178619]( Statfs is hung because of frame loss in quota
+- [#1180545]( Incomplete conservative merge for split-brained directories
+- [#1188145]( Disperse volume: I/O error on client when USS is turned on
+- [#1188242]( Disperse volume: client crashed while running iozone
+- [#1189363]( ignore_deletes option is not something you can configure
+- [#1189473]( [RFE] While creating a snapshot the timestamp has to be appended to the snapshot name.
+- [#1193388]( Disperse volume: Failed to update version and size (error 2) seen during delete operations
+- [#1193636]( [DHT:REBALANCE]: xattrs set on the file during rebalance migration will be lost after migration is over
+- [#1194640]( Tracker bug for Logging framework expansion.
+- [#1194753]( Storage tier feature
+- [#1195947]( Reduce the contents of dependencies from glusterfs-api
+- [#1196027]( Fix memory leak while using scandir
+- [#1198849]( Minor improvements and cleanup for the build system
+- [#1199894]( RFE: Clone of a snapshot
+- [#1199985]( [RFE] arbiter for 3 way replication
+- [#1200082]( [FEAT] - Sharding xlator
+- [#1200254]( NFS-Ganesha : Locking of global option file used by NFS-Ganesha.
+- [#1200262]( Upcall framework support along with cache_invalidation usecase handled
+- [#1200265]( NFS-Ganesha: Handling GlusterFS CLI commands when NFS-Ganesha related commands are executed and other additional checks
+- [#1200267]( Upcall: Cleanup the expired upcall entries
+- [#1200271]( Upcall: xlator options for Upcall xlator
+- [#1200364]( longevity: Incorrect log level messages in posix_istat and posix_lookup
+- [#1200704]( rdma: properly handle memory registration during network interruption
+- [#1201284]( tools/glusterfind: Use Changelogs more effectively for GFID to Path conversion
+- [#1201289]( tools/glusterfind: Support Partial Find feature
+- [#1202244]( [Quota] : To have a separate quota.conf file for inode quota.
+- [#1202274]( Minor improvements and code cleanup for libgfapi
+- [#1202649]( [georep]: Transition from xsync to changelog doesn't happen once the brick is brought online
+- [#1202758]( Disperse volume: brick logs are getting filled with "anonymous fd creation failed" messages
+- [#1203089]( Disperse volume: misleading unsuccessful message with heal and heal full
+- [#1203185]( Detached node list stale snaps
+- [#1204641]( [geo-rep] fails to stop all gluster processes
+- [#1204651]( libgfapi : Anonymous fd support in gfapi
+- [#1205037]( [SNAPSHOT]: "man gluster" needs modification for few snapshot commands
+- [#1205186]( RCU changes wrt peers to be done for GlusterFS-3.7.0
+- [#1205540]( Data Tiering:3.7.0:data loss:detach-tier not flushing data to cold-tier
+- [#1205545]( Effect of Trash translator over CTR translator
+- [#1205596]( [SNAPSHOT]: Output message when a snapshot create is issued when multiple bricks are down needs to be improved
+- [#1205624]( Data Tiering:rebalance fails on a tiered volume
+- [#1206461]( sparse file self heal fail under xfs version 2 with speculative preallocation feature on
+- [#1206539]( Tracker bug for GlusterFS documentation Improvement.
+- [#1206546]( [RFE] Data Tiering:Need a way from CLI to identify hot and cold tier bricks easily
+- [#1206587]( Replace contrib/uuid by a libglusterfs wrapper that uses the uuid implementation from the OS
+- [#1207020]( BitRot :- CPU/disk throttling during signature calculation
+- [#1207028]( [Backup]: User must be warned while running the 'glusterfind pre' command twice without running the post command
+- [#1207029]( BitRot :- If peer in cluster doesn't have brick then its should not start bitd on that node and should not create partial volume file
+- [#1207115]( geo-rep: add debug logs to master for slave ENTRY operation failures
+- [#1207134]( BitRot :- bitd is not signing Objects if more than 3 bricks are present on same node
+- [#1207532]( BitRot :- gluster volume help gives insufficient and ambiguous information for bitrot
+- [#1207603]( Persist file size and block count of sharded files in the form of xattrs
+- [#1207615]( sharding - Implement remaining fops
+- [#1207627]( BitRot :- Data scrubbing status is not available
+- [#1207712]( Input/Output error with disperse volume when geo-replication is started
+- [#1207735]( Disperse volume: Huge memory leak of glusterfsd process
+- [#1207829]( Incomplete self-heal and split-brain on directories found when self-healing files/dirs on a replaced disk
+- [#1207979]( BitRot :- In case of NFS mount, Object Versioning and file signing is not working as expected
+- [#1208131]( BitRot :- Tunable (scrub-throttle, scrub-frequency, pause/resume) for scrub functionality don't have any impact on scrubber
+- [#1208470]( [Dist-geo-rep] after snapshot in geo-rep setup, empty changelogs are generated in the snapped brick.
+- [#1208482]( pthread cond and mutex variables of fs struct has to be destroyed conditionally.
+- [#1209104]( Do not let an inode evict during split-brain resolution process.
+- [#1209138]( [Backup]: Packages to be installed for glusterfind api to work
+- [#1209298]( NFS interoperability problem: Gluster Striped-Replicated can't read on vmware esxi 5.x NFS client
+- [#1209329]( glusterd services are not handled properly when re configuring services
+- [#1209430]( quota/marker: turn off inode quotas by default
+- [#1209461]( BVT: glusterd crashed and dumped during upgrade (on rhel7.1 server)
+- [#1209735]( FSAL_GLUSTER : symlinks are not working properly if acl is enabled
+- [#1209752]( BitRot :- info about bitd and scrubber daemon is not shown in volume status
+- [#1209818]( BitRot :- volume info should not show 'features.scrub: resume' if scrub process is resumed
+- [#1209843]( [Backup]: Crash observed when multiple sessions were created for the same volume
+- [#1209869]( xdata in FOPs should always be valid and never junk
+- [#1210344]( Have a fixed name for common meta-volume for nfs, snapshot and geo-rep and mount it at a fixed mount location
+- [#1210562]( Dist-geo-rep: Too many "remote operation failed: No such file or directory" warning messages in auxiliary mount log on slave while executing "rm -rf"
+- [#1210684]( BitRot :- scrub pause/resume should give proper error message if scrubber is already paused/resumed and Admin tries to perform same operation
+- [#1210687]( BitRot :- If scrubber finds bad file then it should log as a 'ALERT' in log not 'Warning'
+- [#1210689]( BitRot :- Files marked as 'Bad' should not be accessible from mount
+- [#1210934]( qcow2 image creation using qemu-img hits segmentation fault
+- [#1210965]( Geo-replication very slow, not able to sync all the files to slave
+- [#1211037]( [dist-geo-rep]:Directory not empty and Stale file handle errors in geo-rep logs during deletes from master in history/changelog crawl
+- [#1211123]( ls command failed with on while mounting ec volume.
+- [#1211132]( 'volume get' invoked on a non-existing key fails with zero as a return value
+- [#1211220]( quota: ENOTCONN periodically seen in logs when setting hard/soft timeout during I/O.
+- [#1211221]( Any operation that relies on fd->flags may not work on anonymous fds
+- [#1211264]( Data Tiering: glusterd(management) communication issues seen on tiering setup
+- [#1211327]( Changelog: Changelog should be treated as discontinuous only on changelog enable/disable
+- [#1211562]( Data Tiering:UI:changes required to CLI responses for attach and detach tier
+- [#1211570]( Data Tiering:UI:when a user looks for detach-tier help, instead command seems to be getting executed
+- [#1211576]( Gluster CLI crashes when volume create command is incomplete
+- [#1211594]( status.brick memory allocation failure.
+- [#1211640]( glusterd crash when snapshot create was in progress on different volumes at the same time - job edited to create snapshots at the given time
+- [#1211749]( glusterd crashes when brick option validation fails
+- [#1211808]( quota: inode quota not healing after upgrade
+- [#1211836]( glusterfs-api.pc versioning breaks QEMU
+- [#1211848]( Gluster namespace and module should be part of glusterfs-libs rpm
+- [#1211900]( package glupy as a subpackage under gluster namespace.
+- [#1211913]( nfs : racy condition in export/netgroup feature
+- [#1211962]( Disperse volume: Input/output errors on nfs and fuse mounts during delete operation
+- [#1212037]( Data Tiering:Old copy of file still remaining on EC(disperse) layer, when edited after attaching tier(new copy is moved to hot tier)
+- [#1212063]( [Geo-replication] cli crashed and core dump was observed while running gluster volume geo-replication vol0 status command
+- [#1212110]( bricks process crash
+- [#1212253]( cli should return error with inode quota cmds on cluster with op_version less than 3.7
+- [#1212385]( Disable rpc throttling for glusterfs protocol
+- [#1212398]( [New] - Distribute replicate volume type is shown as Distribute Stripe in the output of gluster volume info <volname> --xml
+- [#1212400]( Attach tier failing and messing up vol info
+- [#1212410]( dist-geo-rep : all the bricks of a node shows faulty in status if slave node to which atleast one of the brick connected goes down.
+- [#1212413]( [RFE] Return proper error codes in case of snapshot failure
+- [#1212437]( probing and detaching a peer generated a CRITICAL error - "Could not find peer" in glusterd logs
+- [#1212660]( Crashes in logging code
+- [#1212816]( NFS-Ganesha : Add-node and delete-node should start/stop NFS-Ganesha service
+- [#1213063]( The tiering feature requires counters.
+- [#1213066]( Failure in tests/performance/open-behind.t
+- [#1213125]( Bricks fail to start with tiering related logs on the brick
+- [#1213295]( Glusterd crashed after updating to 3.8 nightly build
+- [#1213349]( [Snapshot] Scheduler should check vol-name exists or not before adding scheduled jobs
+- [#1213358]( Implement directory heal for ec
+- [#1213364]( [RFE] Quota: Make "quota-deem-statfs" option ON, by default, when quota is enabled.
+- [#1213542]( Symlink heal leaks 'linkname' memory
+- [#1213752]( nfs-ganesha: Multi-head nfs need Upcall Cache invalidation support
+- [#1213773]( upcall: polling is done for a invalid file
+- [#1213933]( common-ha: delete-node implementation
+- [#1214048]( IO touched a file undergoing migration fails for tiered volumes
+- [#1214219]( Data Tiering:Enabling quota command fails with "quota command failed : Commit failed on localhost"
+- [#1214222]( Directories are missing on the mount point after attaching tier to distribute replicate volume.
+- [#1214289]( I/O failure on attaching tier
+- [#1214561]( [Backup]: To capture path for deletes in changelog file
+- [#1214574]( Snapshot-scheduling helper script errors out while running " init"
+- [#1215002]( glusterd crashed on the node when tried to detach a tier after restoring data from the snapshot.
+- [#1215018]( [New] - gluster peer status goes to disconnected state.
+- [#1215117]( Disperse volume: rebalance and quotad crashed
+- [#1215122]( Data Tiering: attaching a tier with non supported replica count crashes glusterd on local host
+- [#1215161]( rpc: Memory corruption because rpcsvc_register_notify interprets opaque mydata argument as xlator pointer
+- [#1215187]( timeout/expiry of group-cache should be set to 300 seconds
+- [#1215265]( Fixes for data self-heal in ec
+- [#1215486]( configure: automake defaults to Unix V7 tar, w/ max filename length=99 chars
+- [#1215550]( glusterfsd crashed after directory was removed from the mount point, while self-heal and rebalance were running on the volume
+- [#1215571]( Data Tiering: add tiering set options to volume set help (cluster.tier-demote-frequency and cluster.tier-promote-frequency)
+- [#1215592]( Crash in dht_getxattr_cbk
+- [#1215660]( tiering: cksum mismatch for tiered volume.
+- [#1215896]( Typos in the messages logged by the CTR translator
+- [#1216067]( Autogenerated files delivered in tarball
+- [#1216187]( readdir-ahead needs to be enabled by default for new volumes on gluster-3.7
+- [#1216898]( Data Tiering: Volume inconsistency errors getting logged when attaching uneven(odd) number of hot bricks in hot tier(pure distribute tier layer) to a dist-rep volume
+- [#1216931]( [Snapshot] Snapshot scheduler show status disable even when it is enabled
+- [#1216960]( data tiering: do not allow tiering related volume set options on a regular volume
+- [#1217311]( Disperse volume: gluster volume status doesn't show shd status
+- [#1217701]( ec test spurious failures
+- [#1217766]( Spurious failures in tests/bugs/distribute/bug-1122443.t
+- [#1217786]( Data Tiering : Adding performance to unlink/link/rename in CTR Xlator
+- [#1217788]( spurious failure bug-908146.t
+- [#1217937]( DHT/Tiering/Rebalancer: The Client PID set by tiering migration is getting reset by dht migration
+- [#1217949]( Null check before freeing dir_dfmeta and tmp_container
+- [#1218055]( "Snap_scheduler disable" should have different return codes for different failures.
+- [#1218060]( [SNAPSHOT]: Initializing snap_scheduler from all nodes at the same time should give proper error message
+- [#1218120]( Regression failures in tests/bugs/snapshot/bug-1162498.t
+- [#1218164]( [SNAPSHOT] : Correction required in output message after initialising snap_scheduler
+- [#1218287]( Use tiering only if all nodes are capable of it at proper version
+- [#1218304]( Intermittent failure of basic/afr/data-self-heal.t
+- [#1218552]( Rsync Hang and Georep fails to Sync files
+- [#1218573]( [Snapshot] Scheduled job is not processed when one of the node of shared storage volume is down
+- [#1218625]( glfs.h:46:21: fatal error: sys/acl.h: No such file or directory
+- [#1218638]( tiering documentation
+- [#1218717]( Files migrated should stay on a tier for a full cycle
+- [#1218854]( Clean up should not empty the contents of the global config file
+- [#1218951]( Spurious failures in fop-sanity.t
+- [#1218960]( Rebalance Status output lists an extra colon " : " after volume rebalance: <vol_name>: success:
+- [#1219032]( cli: While attaching tier cli sholud always ask question whether you really want to attach a tier or not.
+- [#1219355]( glusterd:Scrub and bitd reconfigure functions were not calling if quota is not enabled.
+- [#1219442]( [Snapshot] Do not run scheduler if ovirt scheduler is running
+- [#1219479]( [Dist-geo-rep] after snapshot in geo-rep setup, empty changelogs are generated in the snapped brick.
+- [#1219485]( nfs-ganesha: Discrepancies with lock states recovery during migration
+- [#1219637]( Gluster small-file creates do not scale with brick count
+- [#1219732]( brick-op failure for glusterd command should log error message in cmd_history.log
+- [#1219738]( Regression failures in tests/bugs/snapshot/bug-1112559.t
+- [#1219784]( bitrot: glusterd is crashing when user enable bitrot on the volume
+- [#1219816]( Spurious failure in tests/bugs/replicate/bug-976800.t
+- [#1219846]( Data Tiering: glusterd(management) communication issues seen on tiering setup
+- [#1219894]( [georep]: Creating geo-rep session kills all the brick process
+- [#1219937]( Running status second time shows no active sessions
+- [#1219954]( The python-gluster RPM should be 'noarch'
+- [#1220016]( bitrot testcases fail spuriously
+- [#1220058]( Disable known bad tests
+- [#1220173]( SEEK_HOLE support (optimization)
+- [#1220329]( DHT Rebalance : Misleading log messages for linkfiles
+- [#1220332]( dHT rebalance: Dict_copy log messages when running rebalance on a dist-rep volume
+- [#1220348]( Client hung up on listing the files on a particular directory
+- [#1220381]( unable to start the volume with the latest beta1 rpms
+- [#1220670]( snap_scheduler script must be usable as python module.
+- [#1220713]( Scrubber should be disabled once bitrot is reset
+- [#1221008]( libgfapi: Segfault seen when glfs_*() methods are invoked with invalid glfd
+- [#1221025]( Glusterd crashes after enabling quota limit on a distrep volume.
+- [#1221095]( Fix nfs/mount3.c build warnings reported in Koji
+- [#1221104]( Sharding - Skip update of block count and size for directories in readdirp callback
+- [#1221128]( `gluster volume heal <vol-name> split-brain' tries to heal even with insufficient arguments
+- [#1221131]( NFS-Ganesha: ACL should not be enabled by default
+- [#1221145]( ctdb's ping_pong lock tester fails with input/output error on disperse volume mounted with glusterfs
+- [#1221270]( Do not allow detach-tier commands on a non tiered volume
+- [#1221481]( `ls' on a directory which has files with mismatching gfid's does not list anything
+- [#1221490]( fuse: check return value of setuid
+- [#1221544]( [Backup]: Unable to create a glusterfind session
+- [#1221577]( glusterfsd crashed on a quota enabled volume where snapshots were scheduled
+- [#1221696]( rebalance failing on one of the node
+- [#1221737]( Multi-threaded SHD support
+- [#1221889]( Log EEXIST errors in DEBUG level in fops MKNOD and MKDIR
+- [#1221914]( Implement MKNOD fop in bit-rot.
+- [#1221938]( SIGNING FAILURE Error messages are poping up in the bitd log
+- [#1221970]( tiering: use separate log/socket/pid file for tiering
+- [#1222013]( Simplify creation and set-up of meta-volume (shared storage)
+- [#1222088]( Data Tiering:3.7.0:data loss:detach-tier not flushing data to cold-tier
+- [#1222092]( rebalance failed after attaching the tier to the volume.
+- [#1222126]( DHT: lookup-unhashed feature breaks runtime compatibility with older client versions
+- [#1222238]( features/changelog: buffer overrun in changelog-helpers
+- [#1222317]( Building packages on RHEL-5 based distributions fail
+- [#1222319]( Remove all occurrences of #include "config.h"
+- [#1222378]( GlusterD fills the logs when the NFS-server is disabled
+- [#1222379]( Fix infinite looping in shard_readdir(p) on '/'
+- [#1222769]( libglusterfs: fix uninitialized argument value
+- [#1222840]( I/O's hanging on tiered volumes (NFS)
+- [#1222898]( geo-replication: fix memory leak in gsyncd
+- [#1223185]( [SELinux] [BVT]: Selinux throws AVC errors while running DHT automation on Rhel6.6
+- [#1223213]( gluster volume status fails with locking failed error message
+- [#1223280]( [geo-rep]: worker died with "ESTALE" when performed rm -rf on a directory from mount of master volume
+- [#1223338]( glusterd could crash in remove-brick-status when local remove-brick process has just completed
+- [#1223378]( gfid-access: Remove dead increment (dead store)
+- [#1223385]( packaging: .pc files included in -api-devel should be in -devel
+- [#1223432]( Update gluster op version to 30701
+- [#1223625]( rebalance : output of rebalance status should show ' run time ' in proper format (day,hour:min:sec)
+- [#1223642]( [geo-rep]: With tarssh the file is created at slave but it doesnt get sync
+- [#1223739]( Quota: Do not allow set/unset of quota limit in heterogeneous cluster
+- [#1223741]( non-root geo-replication session goes to faulty state, when the session is started
+- [#1223759]( Sharding - Fix posix compliance test failures.
+- [#1223772]( Though brick demon is not running, gluster vol status command shows the pid
+- [#1223798]( Quota: spurious failures with quota testcases
+- [#1223889]( readdirp return 64bits inodes even if enable-ino32 is set
+- [#1223937]( Outdated autotools helper config.* files
+- [#1224016]( NFS: IOZone tests hang, disconnects and hung tasks seen in logs.
+- [#1224098]( [geo-rep]: Even after successful sync, the DATA counter did not reset to 0
+- [#1224290]( peers connected in the middle of a transaction are participating in the transaction
+- [#1224596]( [RFE] Provide hourly scrubbing option
+- [#1224600]( [RFE] Move signing trigger mechanism to [f]setxattr()
+- [#1224611]( Skip zero byte files when triggering signing
+- [#1224857]( DHT - rebalance - when any brick/sub-vol is down and rebalance is not performing any action(fixing lay-out or migrating data) it should not say 'Starting rebalance on volume <vol-name> has been successful' .
+- [#1225018]( Scripts/Binaries are not installed with +x bit
+- [#1225323]( Glusterfs client crash during fd migration after graph switch
+- [#1225328]( afr: unrecognized option in re-balance volfile
+- [#1225330]( tiering: tier daemon not restarting during volume/glusterd restart
+- [#1225424]( [Backup]: Misleading error message when glusterfind delete is given with non-existent volume
+- [#1225465]( [Backup]: Glusterfind session entry persists even after volume is deleted
+- [#1225491]( [AFR-V2] - afr_final_errno() should treat op_ret > 0 also as success
+- [#1225542]( [geo-rep]: snapshot creation timesout even if geo-replication is in pause/stop/delete state
+- [#1225564]( [Backup]: RFE - Glusterfind CLI commands need to respond based on volume's start/stop state
+- [#1225566]( [geo-rep]: Traceback "ValueError: filedescriptor out of range in select()" observed while creating huge set of data on master
+- [#1225571]( [geo-rep]: client-rpc-fops.c:172:client3_3_symlink_cbk can be handled better/or ignore these messages in the slave cluster log
+- [#1225572]( nfs-ganesha: Getting issues for nfs-ganesha on new nodes of glusterfs,error is /etc/ganesha/ganesha-ha.conf: line 11: VIP_<hostname with fqdn>=<ip>: command not found
+- [#1225716]( tests : remove brick command execution displays success even after, one of the bricks down.
+- [#1225793]( Spurious failure in tests/bugs/disperse/bug-1161621.t
+- [#1226005]( should not spawn another migration daemon on graph switch
+- [#1226223]( Mount broker user add command removes existing volume for a mountbroker user when second volume is attached to same user
+- [#1226253]( gluster volume heal info crashes
+- [#1226276]( Volume heal info not reporting files in split brain and core dumping, after upgrading to 3.7.0
+- [#1226279]( GF_CONTENT_KEY should not be handled unless we are sure no other operations are in progress
+- [#1226307]( Volume start fails when glusterfs is source compiled with GCC v5.1.1
+- [#1226367]( bug-973073.t fails spuriously
+- [#1226384]( build: xlators/mgmt/glusterd/src/glusterd-errno.h is not in dist tarball
+- [#1226507]( Honour afr self-heal volume set options from clients
+- [#1226551]( libglusterfs: Copy _all_ members of gf_dirent_t in entry_copy()
+- [#1226714]( auth_cache_entry structure barely gets cached
+- [#1226717]( racy condition in nfs/auth-cache feature
+- [#1226829]( gf_store_save_value fails to check for errors, leading to emptying files in /var/lib/glusterd/
+- [#1226881]( tiering:compiler warning with gcc v5.1.1
+- [#1226902]( bitrot: scrubber is crashing while user set any scrubber tunable value.
+- [#1227204]( glusterfsd: bricks crash while executing ls on nfs-ganesha vers=3
+- [#1227449]( Fix deadlock in timer-wheel del_timer() API
+- [#1227583]( [Virt-RHGS] Creating a image on gluster volume using qemu-img + gfapi throws error messages related to rpc_transport
+- [#1227590]( bug-857330/xml.t fails spuriously
+- [#1227624]( tests/geo-rep: Existing geo-rep regression test suite is time consuming.
+- [#1227646]( Glusterd fails to start after volume restore, tier attach and node reboot
+- [#1227654]( linux untar hanged after the bricks are up in a 8+4 config
+- [#1227667]( Minor improvements and code cleanup for protocol server/client
+- [#1227803]( tiering: tier status shows as " progressing " but there is no rebalance daemon running
+- [#1227884]( Update gluster op version to 30702
+- [#1227894]( Increment op-version requirement for lookup-optimize configuration option
+- [#1227904]( Memory leak in marker xlator
+- [#1227996]( Objects are not signed upon truncate()
+- [#1228093]( Glusterd crash
+- [#1228111]( [Backup]: Crash observed when glusterfind pre is run after deleting a directory containing files
+- [#1228112]( tiering:glusterd crashed when trying to detach-tier commit force on a non-tiered volume.
+- [#1228157]( Provide and use a common way to do reference counting of (internal) structures
+- [#1228415]( Not able to export volume using nfs-ganesha
+- [#1228492]( [geo-rep]: RENAME are not synced to slave when quota is enabled.
+- [#1228613]( [Snapshot] Python crashes with trace back notification when shared storage is unmount from Storage Node
+- [#1228635]( Do not invoke glfs_fini for glfs-heal processes.
+- [#1228680]( bitrot: (rfe) object signing wait time value should be tunable.
+- [#1228696]( geo-rep: throws error if slave_host entry is not added to know_hosts file
+- [#1228731]( nfs-ganesha: rmdir logs "remote operation failed: Stale file handle" even though the operation is successful
+- [#1228818]( Add documentation for lookup-optimize configuration option in DHT
+- [#1228952]( Disperse volume : glusterfs crashed
+- [#1229127]( afr: Correction to self-heal-daemon documentation
+- [#1229134]( [Bitrot] Gluster v set <volname> bitrot enable command succeeds , which is not supported to enable bitrot
+- [#1229139]( glusterd: glusterd crashing if you run re-balance and vol status command parallely.
+- [#1229172]( [AFR-V2] - Fix shd coredump from tests/bugs/glusterd/bug-948686.t
+- [#1229297]( [Quota] : Inode quota spurious failure
+- [#1229609]( Quota: " E [quota.c:1197:quota_check_limit] 0-ecvol-quota: Failed to check quota size limit" in brick logs
+- [#1229639]( build: fix gitclean target
+- [#1229658]( STACK_RESET may crash with concurrent statedump requests to a glusterfs process
+- [#1229825]( Add regression test for cluster lock in a heterogeneous cluster
+- [#1229860]( context of access control translator should be updated properly for GF_POSIX_ACL_*_KEY xattrs
+- [#1229948]( cluster setup not working with RHEL7 and derivatives
+- [#1230007]( [Backup]: 'New' as well as 'Modify' entry getting recorded for a newly created hardlink
+- [#1230015]( [Backup]: Glusterfind pre fails with htime xattr updation error resulting in historical changelogs not available
+- [#1230017]( [Backup]: 'Glusterfind list' should display an appropriate output when there are no active sessions
+- [#1230090]( [geo-rep]: use_meta_volume config option should be validated for its values
+- [#1230111]( [Backup]: Glusterfind delete does not delete the session related information present in $GLUSTERD_WORKDIR
+- [#1230121]( [glusterd] glusterd crashed while trying to remove a bricks - one selected from each replica set - after shrinking nX3 to nX2 to nX1
+- [#1230127]( [Backup]: Chown/chgrp for a directory does not get recorded as a MODIFY entry in the outfile
+- [#1230647]( Disperse volume : client crashed while running IO
+- [#1231132]( Detect and send ENOTSUP if upcall feature is not enabled
+- [#1231197]( Snapshot daemon failed to run on newly created dist-rep volume with uss enabled
+- [#1231205]( [geo-rep]: rsync should be made dependent package for geo-replication
+- [#1231257]( nfs-ganesha: trying to bring up nfs-ganesha on three node shows error although pcs status and ganesha process on all three nodes
+- [#1231264]( DHT : for many operation directory/file path is '(null)' in brick log
+- [#1231268]( Fix invalid logic in tier.t
+- [#1231425]( use after free bug in dht
+- [#1231437]( Rebalance is failing in test cluster framework.
+- [#1231617]( Scrubber crash upon pause
+- [#1231619]( BitRot :- Handle brick re-connection sanely in bitd/scrub process
+- [#1231620]( scrub frequency and throttle change information need to be present in Scrubber log
+- [#1231738]( nfs-ganesha: volume is not in list of exports in case of volume stop followed by volume start
+- [#1231789]( Not able to create snapshots for geo-replicated volumes when session is created with root user
+- [#1231876]( Snapshot: When Cluster.enable-shared-storage is enable, shared storage should get mount after Node reboot
+- [#1232001]( nfs-ganesha: 8 node pcs cluster setup fails
+- [#1232165]( NFS Authentication Performance Issue
+- [#1232172]( Disperse volume : 'ls -ltrh' doesn't list correct size of the files every time
+- [#1232183]( cli correction: if tried to create multiple bricks on same server shows replicate volume instead of disperse volume
+- [#1232238]( [RHEV-RHGS] After self-heal operation, VM Image file loses the sparseness property
+- [#1232304]( libglusterfs: delete duplicate code in libglusterfs/src/dict.c
+- [#1232378]( [remove-brick]: Creation of file from NFS writes to the decommissioned subvolume and subsequent lookup from fuse creates a link
+- [#1232391]( Sharding - Use (f)xattrop (as opposed to (f)setxattr) to update shard size and block count
+- [#1232430]( [SNAPSHOT] : Snapshot delete fails with error - Snap might not be in an usable state
+- [#1232572]( quota: quota list displays double the size of previous value, post heal completion.
+- [#1232658]( Change default values of allow-insecure and bind-insecure
+- [#1232666]( [geo-rep]: Segmentation faults are observed on all the master nodes
+- [#1232678]( Disperse volume : data corruption with appending writes in 8+4 config
+- [#1232686]( quorum calculation might go for toss for a concurrent peer probe command
+- [#1232693]( glusterd crashed when testing heal full on replaced disks
+- [#1232729]( [Backup]: Glusterfind session(s) created before starting the volume results in 'changelog not available' error, eventually
+- [#1232912]( [geo-rep]: worker died with "ESTALE" when performed rm -rf on a directory from mount of master volume
+- [#1233018]( tests: Add the command being 'TEST'ed in all gluster logs
+- [#1233139]( Null pointer dereference in dht_migrate_complete_check_task
+- [#1233151]( rm command fails with "Transport end point not connected" during add brick
+- [#1233162]( [Quota] The root of the volume on which the quota is set shows the volume size more than actual volume size, when checked with "df" command.
+- [#1233246]( nfs-ganesha: add node fails to add a new node to the cluster
+- [#1233258]( Possible double execution of the state machine for fops that start other subfops
+- [#1233411]( [geo-rep]: UnboundLocalError: local variable 'fd' referenced before assignment
+- [#1233544]( gluster v set help needs to be updated for cluster.enable-shared-storage option
+- [#1233617]( Introduce an ATOMIC_WRITE flag in posix writev
+- [#1233624]( nfs-ganesha: --refresh-config not working
+- [#1234286]( changelog: directory renames not getting recorded
+- [#1234474]( nfs-ganesha:delete node throws error and pcs status also notifies about failures, in fact I/O also doesn't resume post grace period
+- [#1234694]( [geo-rep]: Setting meta volume config to false when meta volume is stopped/deleted leads geo-rep to faulty
+- [#1234819]( glusterd: glusterd crashes while importing a USS enabled volume which is already started
+- [#1234842]( GlusterD does not store updated peerinfo objects.
+- [#1234882]( [geo-rep]: Feature fan-out fails with the use of meta volume config
+- [#1235007]( Allow only lookup and delete operation on file that is in split-brain
+- [#1235195]( quota: marker accounting miscalculated when renaming a file while a write is in progress
+- [#1235216]( tar on a glusterfs mount displays "file changed as we read it" even though the file was not changed
+- [#1235231]( unix domain sockets on Gluster/NFS are created as fifo/pipe
+- [#1235246]( Missing xattr for files after heal process
+- [#1235269]( Data Tiering: Files not getting promoted once demoted
+- [#1235292]( [geo-rep]: needs modification in gluster path to support mount broker functionality
+- [#1235359]( [geo-rep]: Mountbroker setup goes to Faulty with ssh 'Permission Denied' Errors
+- [#1235538]( Porting the left out gf_log messages to the new logging API
+- [#1235542]( Upcall: Directory or file creation should send cache invalidation requests to parent directories
+- [#1235582]( snapd crashed due to stack overflow
+- [#1235751]( peer probe results in Peer Rejected(Connected)
+- [#1235921]( POSIX: brick logs filled with _gf_log_callingfn due to this==NULL in dict_get
+- [#1235927]( memory corruption in the way we maintain migration information in inodes.
+- [#1235989]( Do null check before dict_ref
+- [#1236009]( do an explicit lookup on the inodes linked in readdirp
+- [#1236032]( Tiering: unlink failed with error "Invalid argument"
+- [#1236065]( Disperse volume: FUSE I/O error after self healing the failed disk files
+- [#1236128]( Quota list is not working on tiered volume.
+- [#1236212]( Migration does not work when EC is used as a tiered volume.
+- [#1236270]( [Backup]: File movement across directories does not get captured in the output file in a X3 volume
+- [#1236512]( DHT + rebalance :- file permission got changed (sticky bit and setgid is set) after file migration failure
+- [#1236561]( Ganesha volume export failed
+- [#1236945]( glusterfsd crashed while rebalance and self-heal were in progress
+- [#1237000]( Add a test case for verifying NO empty changelog created
+- [#1237174]( Incorrect state created in '/var/lib/nfs/statd'
+- [#1237381]( Throttle background heals in disperse volumes
+- [#1238054]( Consecutive volume start/stop operations when ganesha.enable is on, leads to errors
+- [#1238063]( libgfchangelog: Example programs are not working.
+- [#1238072]( protocol/server doesn't reconfigure auth.ssl-allow options
+- [#1238135]( Initialize daemons on demand
+- [#1238188]( Not able to recover the corrupted file on Replica volume
+- [#1238224]( setting enable-shared-storage without mentioning the domain, doesn't enable shared storage
+- [#1238508]( Renamed Files are missing after self-heal
+- [#1238593]( tiering/snapshot: Tier daemon failed to start during volume start after restoring into a tiered volume from a non-tiered volume.
+- [#1238661]( When bind-insecure is enabled, bricks may not be able to bind to port assigned by Glusterd
+- [#1238747]( Crash in Quota enforcer
+- [#1238788]( Fix build on Mac OS X, header guard macros
+- [#1238791]( Fix build on Mac OS X, gfapi symbol versions
+- [#1238793]( Fix build on Mac OS X, timerwheel spinlock
+- [#1238796]( Fix build on Mac OS X, configure(.ac)
+- [#1238798]( Fix build on Mac OS X, ACLs
+- [#1238936]( 'unable to get transaction op-info' error seen in glusterd log while executing gluster volume status command
+- [#1238952]( gf_msg_callingfn does not log the callers of the function in which it is called
+- [#1239037]( disperse: Wrong values for "cluster.heal-timeout" could be assigned using CLI
+- [#1239044]( [geo-rep]: killing brick from replica pair makes geo-rep session faulty with Traceback "ChangelogException"
+- [#1239269]( [Scheduler]: Unable to create Snapshots on RHEL-7.1 using Scheduler
+- [#1240161]( glusterfsd crashed after volume start force
+- [#1240184]( snap-view:mount crash if debug mode is enabled
+- [#1240210]( Metadata self-heal is not handling failures while heal properly
+- [#1240218]( Scrubber log should mark file corrupted message as Alert not as information
+- [#1240219]( Scrubber log should mark file corrupted message as Alert not as information
+- [#1240229]( Unable to pause georep session if one of the nodes in cluster is not part of master volume.
+- [#1240244]( Unable to examine file in metadata split-brain after setting `replica.split-brain-choice' attribute to a particular replica
+- [#1240254]( quota+afr: quotad crash "afr_local_init (local=0x0, priv=0x7fddd0372220, op_errno=0x7fddce1434dc) at afr-common.c:4112"
+- [#1240284]( Disperse volume: NFS crashed
+- [#1240564]( Gluster commands timeout on SSL enabled system, after adding new node to trusted storage pool
+- [#1240577]( Data Tiering: Database locks observed on tiered volumes on continous writes to a file
+- [#1240581]( quota/marker: marker code cleanup
+- [#1240598]( quota/marker: lk_owner is null while acquiring inodelk in rename operation
+- [#1240621]( tiering: Tier daemon stopped prior to graph switch.
+- [#1240654]( quota: allowed to set soft-limit percentage beyond 100%
+- [#1240916]( glfs_loc_link: Update loc.inode with the existing inode in case it already exists
+- [#1240949]( quota: In enforcer, caching parents in ctx during build ancestry is not working
+- [#1240952]( [USS]: snapd process is not killed once the glusterd comes back
+- [#1240970]( [Data Tiering]: HOT Files get demoted from hot tier
+- [#1240991]( Quota: After rename operation, gluster v quota <volname> list-objects command gives incorrect no. of files in output
+- [#1241054]( Data Tiering: Rename of file is not heating up the file
+- [#1241071]( Spurious failure of ./tests/bugs/snapshot/bug-1109889.t
+- [#1241104]( Handle negative fcntl flock->l_len values
+- [#1241133]( nfs-ganesha: execution of script throws a error for a file
+- [#1241153]( quota: marker accounting can get miscalculated after upgrade to 3.7
+- [#1241274]( Peer not recognized after IP address change
+- [#1241379]( Reduce 'CTR disabled' brick log message from ERROR to INFO/DEBUG
+- [#1241480]( ganesha volume export fails in rhel7.1
+- [#1241788]( syncop:Include iatt to 'syncop_link' args
+- [#1241882]( GlusterD cannot restart after being probed into a cluster.
+- [#1241895]( nfs-ganesha: add-node logic does not copy the "/etc/ganesha/exports" directory to the correct path on the newly added node
+- [#1242030]( nfs-ganesha: bricks crash while executing acl related operation for named group/user
+- [#1242041]( nfs-ganesha : Multiple setting of nfs4_acl on a same file will cause brick crash
+- [#1242254]( fops fail with EIO on nfs mount after add-brick and rebalance
+- [#1242280]( Handle all errors equal in dict_set_bin()
+- [#1242317]( [RFE] Improve I/O latency during signing
+- [#1242333]( rdma : pending - porting log messages to a new framework
+- [#1242421]( Enable multi-threaded epoll for glusterd process
+- [#1242504]( [Data Tiering]: Frequency Counters of un-selected file in the DB won't get cleared after a promotion/demotion cycle
+- [#1242570]( GlusterD crashes when management encryption is enabled
+- [#1242609]( replacing an offline brick fails with "replace-brick" command
+- [#1242742]( Gluster peer probe with negative num
+- [#1242809]( Performance: Impact of Bitrot on I/O Performance
+- [#1242819]( Quota list on a volume hangs after glusterd restart on a node.
+- [#1242875]( Quota: Quota Daemon doesn't start after node reboot
+- [#1242892]( SMB: share entry from smb.conf is not removed after setting user.cifs and user.smb to disable.
+- [#1242894]( [RFE] 'gluster volume help' output could be sorted alphabetically
+- [#1243108]( bash tab completion fails with "grep: Invalid range end"
+- [#1243187]( Disperse volume : client glusterfs crashed while running IO
+- [#1243382]( EC volume: Replace bricks is not healing version of root directory
+- [#1243391]( fail the fops if inode context get fails
+- [#1243753]( Gluster cli logs invalid argument error on every gluster command execution
+- [#1243774]( glusterd crashed when a client which doesn't support SSL tries to mount a SSL enabled gluster volume
+- [#1243785]( [Backup]: Password of the peer nodes prompted whenever a glusterfind session is deleted.
+- [#1243798]( quota/marker: dir count in inode quota is not atomic
+- [#1243805]( Gluster-nfs : unnecessary logging message in nfs.log for export feature
+- [#1243806]( logging: Revert usage of global xlator for log buffer
+- [#1243812]( [Backup]: Crash observed when keyboard interrupt is encountered in the middle of any glusterfind command
+- [#1243838]( [Backup]: Glusterfind list shows the session as corrupted on the peer node
+- [#1243890]( huge mem leak in posix xattrop
+- [#1243946]( RFE: posix: xattrop 'GF_XATTROP_ADD_DEF_ARRAY' implementation
+- [#1244109]( quota: brick crashes when create and remove performed in parallel
+- [#1244144]( [Backup]: Glusterfind pre attribute '--output-prefix' not working as expected in case of DELETEs
+- [#1244165]( [RHEV-RHGS] App VMs paused due to IO error caused by split-brain, after initiating remove-brick operation
+- [#1244613]( using fop's dict for resolving causes problems
+- [#1245045]( Data Loss:Remove brick commit passing when remove-brick process has not even started(due to killing glusterd)
+- [#1245065]( "rm -rf *" from multiple mount points fails to remove directories on all the subvolumes
+- [#1245142]( DHT-rebalance: Rebalance hangs on distribute volume when glusterd is stopped on peer node
+- [#1245276]( ec returns EIO error in cases where a more specific error could be returned
+- [#1245331]( volume start command is failing when glusterfs compiled with debug enabled
+- [#1245380]( [RFE] Render all mounts of a volume defunct upon access revocation
+- [#1245425]( IFS is not set back after used as "[" in log_newer function of include.rc
+- [#1245544]( quota/marker: errors in log file 'Failed to get metadata for'
+- [#1245547]( sharding - Fix unlink of sparse files
+- [#1245558]( gluster vol quota dist-vol list is not displaying quota information.
+- [#1245689]( ec sequentializes all reads, limiting read throughput
+- [#1245895]( gluster snapshot status --xml gives back unexpected non xml output
+- [#1245935]( Data Tiering: Change the error message when a detach-tier status is issued on a non-tier volume
+- [#1245981]( forgotten inodes are not being signed
+- [#1246052]( Deceiving log messages like "Failing STAT on gfid : split-brain observed. [Input/output error]" reported
+- [#1246082]( sharding - Populate the aggregated ia_size and ia_blocks before unwinding (f)setattr to upper layers
+- [#1246229]( tier_lookup_heal.t contains incorrect file_on_fast_tier function
+- [#1246275]( POSIX ACLs as used by a FUSE mount can not use more than 32 groups
+- [#1246432]( ./tests/basic/volume-snapshot.t spurious fail causing glusterd crash.
+- [#1246736]( client3_3_removexattr_cbk floods the logs with "No data available" messages
+- [#1246794]( GF_LOG_NONE logs always
+- [#1247108]( sharding - OS installation on vm image hangs on a sharded volume
+- [#1247152]( SSL improvements: ECDH, DH, CRL, and accessible options
+- [#1247529]( [geo-rep]: rename followed by deletes causes ESTALE
+- [#1247536]( Dist-geo-rep : checkpoint doesn't reach even though all the files have been synced through hybrid crawl.
+- [#1247563]( ACL created on a dht.linkto file on a files that skipped rebalance
+- [#1247603]( glusterfs : fix double free possibility in the code
+- [#1247765]( Glusterfsd crashes because of thread-unsafe code in gf_authenticate
+- [#1247930]( rpc: check for unprivileged port should start at 1024 and not beyond 1024
+- [#1248298]( [upgrade] After upgrade from 3.5 to 3.6 onwards version, bumping up op-version failed
+- [#1248306]( tiering: rename fails with "Device or resource busy" error message
+- [#1248415]( rebalance stuck at 0 byte when auth.allow is set
+- [#1248521]( quota : display the size equivalent to the soft limit percentage in gluster v quota <volname> list* command
+- [#1248669]( all: Make all the xlator fops static to avoid incorrect symbol resolution
+- [#1248887]( AFR: Make [f]xattrop metadata transaction
+- [#1249391]( Fix build on Mac OS X, booleans
+- [#1249499]( Make ping-timeout option configurable at a volume-level
+- [#1250009]( Dist-geo-rep: Too many "remote operation failed: No such file or directory" warning messages in auxiliary mount log on slave while executing "rm -rf"
+- [#1250170]( Write performance from a Windows client on 3-way replicated volume decreases substantially when one brick in the replica set is brought down
+- [#1250297]( [New] - glusterfs dead when user creates a rdma volume
+- [#1250387]( [RFE] changes needed in snapshot info command's xml output.
+- [#1250441]( Sharding - Excessive logging of messages of the kind 'Failed to get trusted.glusterfs.shard.file-size for bf292f5b-6dd6-45a8-b03c-aaf5bb973c50'
+- [#1250582]( Quota: volume-reset shouldn't remove quota-deem-statfs, unless explicitly specified, when quota is enabled.
+- [#1250601]( nfs-ganesha: remove the entry of the deleted node
+- [#1250628]( nfs-ganesha: --status is actually same as "pcs status"
+- [#1250797]( rpc: Address issues with transport object reference and leak
+- [#1250803]( Perf: Metadata operation(ls -l) performance regression.
+- [#1250828]( Tiering: segfault when trying to rename a file
+- [#1250855]( sharding - Renames on non-sharded files failing with ENOMEM
+- [#1251042]( while re-configuring the scrubber frequency, scheduling is not happening based on current time
+- [#1251121]( Unable to demote files in tiered volumes when cold tier is EC.
+- [#1251346]( statfs giving incorrect values for AFR arbiter volumes
+- [#1251446]( Disperse volume: fuse mount hung after self healing
+- [#1251449]( posix_make_ancestryfromgfid doesn't set op_errno
+- [#1251454]( marker: set loc.parent if NULL
+- [#1251592]( Fix the tests infra
+- [#1251674]( Add known failures to bad_tests list
+- [#1251821]( /usr/lib/glusterfs/ganesha/ is distro specific
+- [#1251824]( Sharding - Individual shards' ownership differs from that of the original file
+- [#1251857]( nfs-ganesha: new volume creation tries to bring up glusterfs-nfs even when nfs-ganesha is already on
+- [#1251980]( dist-geo-rep: geo-rep status shows Active/Passive even when all the gsync processes in a node are killed
+- [#1252121]( tier.t contains pattern matching error in check_counters function
+- [#1252244]( DHT : If directory creation is in progress and a rename of that directory comes from another mount point, then after both operations a few files are not accessible and not listed on mount, and more than one directory has the same gfid
+- [#1252263]( Sharding - Send inode forgets on _all_ shards if/when the protocol layer (FUSE/Gfapi) at the top sends a forget on the actual file
+- [#1252374]( tests: no cleanup on receiving external signals INT, TERM and HUP
+- [#1252410]( libgfapi : adding follow flag to glfs_h_lookupat()
+- [#1252448]( Probing a new node, which is part of another cluster, should throw proper error message in logs and CLI
+- [#1252586]( Legacy files pre-existing tier attach must be promoted
+- [#1252695]( posix : pending - porting log messages to a new framework
+- [#1252696]( After resetting diagnostics.client-log-level, Debug messages are still logged in scrubber log
+- [#1252737]( xml output for volume status on tiered volume
+- [#1252807]( libgfapi : pending - porting log messages to a new framework
+- [#1252808]( protocol server : Pending - porting log messages to a new framework
+- [#1252825]( Though scrubber settings are changed on one volume, log shows scrubber information of all volumes
+- [#1252836]( libglusterfs: Pending - Porting log messages to new framework
+- [#1253149]( performance translators: Pending - porting logging messages to new logging framework
+- [#1253309]( AFR: gluster v restart force or brick process restart doesn't heal the files
+- [#1253828]( glusterd: remove unused large memory/buffer allocations
+- [#1253831]( glusterd: clean dead initializations
+- [#1253967]( glusterfs doesn't include firewalld rules
+- [#1253970]( garbage files created in /var/run/gluster
+- [#1254121]( Start self-heal and display correct heal info after replace brick
+- [#1254127]( Spurious failure blocking NetBSD regression runs
+- [#1254146]( quota: number of warning messages in nfs.log for a single file itself
+- [#1254167]( `gluster volume heal <vol-name> split-brain' changes required for entry-split-brain
+- [#1254428]( Data Tiering : Writes to a file being promoted/demoted are missing once the file migration is complete
+- [#1254451]( Data Tiering : Some tier xlator_fops translate to the default fops
+- [#1254494]( nfs-ganesha: refresh-config stdout output does not make sense
+- [#1254850]( Fix build on Mac OS X, glfs_h_lookupat symbol version
+- [#1254863]( non-default symver macros are incorrect
+- [#1255310]( Snapshot: When soft limit is reached, auto-delete is enable, create snapshot doesn't logs anything in log files
+- [#1255386]( snapd/quota/nfs daemons run on the node, even after that node was detached from trusted storage pool
+- [#1255599]( Remove unwanted tests from volume-snapshot.t
+- [#1255693]( Tiering status command is very cumbersome.
+- [#1255694]( glusterd: volume status backward compatibility
+- [#1256243]( remove-brick: avoid mknod op falling on decommissioned brick even after fix-layout has happened on parent directory
+- [#1256352]( gluster-nfs : contents of export file is not updated correctly in its context
+- [#1256580]( sharding - VM image size as seen from the mount keeps growing beyond configured size on a sharded volume
+- [#1256588]( arbiter-statfs.t fails spuriously in NetBSD regression
+- [#1257076]( DHT-rebalance: rebalance status shows failed when replica pair bricks are brought down in distrep volume while re-name of files going on
+- [#1257110]( Logging : unnecessary log message "REMOVEXATTR No data available " when files are written to glusterfs mount
+- [#1257149]( Provide more meaningful errors on peer probe and peer detach
+- [#1257533]( snapshot delete all command fails with --xml option.
+- [#1257694]( quota: removexattr on /d/backends/patchy/.glusterfs/79/99/799929ec-f546-4bbf-8549-801b79623262 (for trusted.glusterfs.quota.add7e3f8-833b-48ec-8a03-f7cd09925468.contri) [No such file or directory]
+- [#1257709]( Copy NFS-Ganesha export files as part of volume snapshot creation
+- [#1257792]( bug-1238706-daemons-stop-on-peer-cleanup.t fails occasionally
+- [#1257847]( Dist-geo-rep: Geo-replication doesn't work with NetBSD
+- [#1257911]( add policy mechanism for promotion and demotion
+- [#1258196]( gNFSd: NFS mount fails with "Remote I/O error"
+- [#1258311]( trace xlator: Print write size also in trace_writev logs
+- [#1258334]( Sharding - Unlink of VM images can sometimes fail with EINVAL
+- [#1258714]( bug-948686.t fails spuriously
+- [#1258766]( quota test 'quota-nfs.t' fails spuriously
+- [#1258801]( Change order of marking AFR post op
+- [#1258883]( build: compile error on RHEL5
+- [#1258905]( Sharding - read/write performance improvements for VM workload
+- [#1258975]( packaging: gluster-server install failure due to %ghost of hooks/.../delete
+- [#1259225]( Add node of nfs-ganesha not working on rhel7.1
+- [#1259298]( Tier xattr name is misleading (trusted.tier-gfid)
+- [#1259572]( client is sending io to arbiter with replica 2
+- [#1259651]( sharding - Fix reads on zero-byte shards representing holes in the file
+- [#1260051]( DHT: Few files are missing after remove-brick operation
+- [#1260147]( fuse client crashed during i/o
+- [#1260185]( Data Tiering:Regression:Commit of detach tier passes directly without even issuing a detach tier start
+- [#1260545]( Quota+Rebalance : While rebalance is in progress , quota list shows 'Used Space' more than the Hard Limit set
+- [#1260561]( transport and port should be optional arguments for glfs_set_volfile_server
+- [#1260611]( snapshot: from nfs-ganesha mount no content seen in .snaps/<snapshot-name> directory
+- [#1260637]( sharding - Do not expose internal sharding xattrs to the application.
+- [#1260730]( Database locking due to write contention between CTR sql connection and tier migrator sql connection
+- [#1260848]( Disperse volume: df -h on a nfs mount throws Invalid argument error
+- [#1260918]( [BACKUP]: If more than 1 node in cluster are not added in known_host, glusterfind create command hangs
+- [#1261260]( [RFE]: Have reads be performed on same bricks for a given file
+- [#1261276]( Tier/shd: Tracker bug for tier and shd compatibility
+- [#1261399]( [HC] Fuse mount crashes, when client-quorum is not met
+- [#1261404]( No quota API to get real hard-limit value.
+- [#1261444]( cli : volume start will create/overwrite ganesha export file
+- [#1261482]( glusterd_copy_file can cause file corruption
+- [#1261741]( Tier: glusterd crash when trying to detach , when hot tier is having exactly one brick and cold tier is of replica type
+- [#1261757]( Tiering/glusterd: volume status failed after detach tier start
+- [#1261773]( features.sharding is not available in 'gluster volume set help'
+- [#1261819]( Data Tiering: Disallow attach tier on a volume where any rebalance process is in progress to avoid deadlock (like remove-brick commit pending, etc.)
+- [#1261837]( Data Tiering:Volume task status showing as remove brick when detach tier is trigger
+- [#1261841]( [HC] Implement fallocate, discard and zerofill with sharding
+- [#1261862]( Data Tiering: detach-tier start force command not available on a tier volume (unlike remove-brick, where force is possible)
+- [#1261927]( Minor improvements and code cleanup for rpc
+- [#1262345]( `getfattr -n replica.split-brain-status <file>' command hung on the mount
+- [#1262438]( Error not propagated correctly if selfheal layout lock fails
+- [#1262805]( [upgrade] After upgrade from 3.5 to 3.6, probing a new 3.6 node is moving the peer to rejected state
+- [#1262881]( nfs-ganesha: refresh-config stdout output includes dbus messages "method return sender=:1.61 -> dest=:1.65 reply_serial=2"
+- [#1263056]( libgfapi: brick process crashes if attr KEY length > 255 for glfs_lgetxattr(...)
+- [#1263087]( RHEL7/systemd : can't have server in debug mode anymore
+- [#1263100]( Data Tiering: Tiering related information is not displayed in gluster volume status xml output
+- [#1263177]( Data Tiering: Change error message as detach-tier error message throws as "remove-brick"
+- [#1263204]( Data Tiering:Setting only promote frequency and no demote frequency causes crash
+- [#1263224]( 'gluster v tier/attach-tier/detach-tier help' command shows the usage, and then throws 'Tier command failed' error message
+- [#1263549]( I/O failure on attaching tier
+- [#1263726]( Data Tiering: Detach tier status shows number of failures even when all files are migrated successfully
+- [#1265148]( Dist-geo-rep: Support geo-replication to work with sharding
+- [#1265470]( AFR : "gluster volume heal <volume_name> info" doesn't report the fqdn of storage nodes.
+- [#1265479]( AFR: cluster options like data-self-heal, metadata-self-heal and entry-self-heal should not be allowed to set, if volume is not distribute-replicate volume
+- [#1265516]( sharding - Add more logs in failure code paths + port existing messages to the msg-id framework
+- [#1265522]( Geo-Replication fails on uppercase hostnames
+- [#1265531]( Message ids in quota-messages.h should start from 120000 as opposed to 110000
+- [#1265677]( Have a way to disable readdirp on dht from glusterd volume set command
+- [#1265893]( Perf: Getting bad performance while doing ls
+- [#1266476]( RFE : Feature: Periodic FOP statistics dumps for v3.6.x/v3.7.x
+- [#1266818]( Disabling enable-shared-storage deletes the volume with the name - "gluster_shared_storage"
+- [#1266834]( AFR : fuse,nfs mount hangs when directories with same names are created and deleted continuously
+- [#1266875]( geo-replication: [RFE] Geo-replication + Tiering
+- [#1266877]( Possible memory leak during rebalance with large quantity of files
+- [#1266883]( protocol/server: do not define the number of inodes in terms of memory units
+- [#1267539]( Data Tiering:CLI crashes with segmentation fault when user tries "gluster v tier" command
+- [#1267812]( Data Tiering:Promotions and demotions fail after quota hard limits are hit for a tier volume
+- [#1267950]( need a way to pause/stop tiering to take snapshot
+- [#1267967]( core: use syscall wrappers instead of making direct syscalls
+- [#1268755]( Data Tiering:Throw a warning when user issues a detach-tier commit command
+- [#1268790]( Add bug-1221481-allow-fops-on-dir-split-brain.t to bad test
+- [#1268796]( Test tests/bugs/shard/bug-1245547.t failing consistently when run with patch
+- [#1268810]( gluster v status --xml for a replicated hot tier volume
+- [#1268822]( tier/cli: number of bricks remains the same in v info --xml
+- [#1269375]( rm -rf on /run/gluster/vol/<directory name>/ is not showing quota output header for other quota limit applied directories
+- [#1269461]( Feature: Entry self-heal performance enhancements using more granular changelogs
+- [#1269470]( Self-heal daemon crashes when bricks go down at the time of data heal
+- [#1269696]( Glusterfsd crashes on pmap signin failure
+- [#1269754]( Core:Blocker:Segmentation fault when using fallocate command on a gluster volume
+- [#1270328]( Rare transport endpoint not connected error in tier.t tests.
+- [#1270668]( Index entries are not being purged in case of file does not exist
+- [#1270694]( Introduce priv dump in shard xlator for better debugging
+- [#1271148]( Tier: Do not promote/demote files on which POSIX locks are held
+- [#1271150]( libglusterfs : glusterd was not restarting after setting key=value length beyond PATH_MAX (4096) character
+- [#1271310]( RFE : Feature: Tunable FOP sampling for v3.6.x/v3.7.x
+- [#1271325]( RFE: use code generation for repetitive stuff
+- [#1271358]( ECVOL: glustershd log grows quickly and fills up the root volume
+- [#1271907]( Improvement in install & package header files
+- [#1272006]( tools/glusterfind: add query command to list files without session
+- [#1272207]( Data Tiering:Filenames with spaces are not getting migrated at all
+- [#1272319]( Tier : Move common functions into tier.rc
+- [#1272339]( Creating an already deleted snapshot-clone deletes the corresponding snapshot.
+- [#1272362]( Fix in afr transaction code
+- [#1272411]( quota: set quota version for files/directories
+- [#1272460]( Disk usage mismatching after self-heal
+- [#1272557]( [Tier]: man page of gluster should be updated to list tier commands
+- [#1272949]( I/O failure on attaching tier on nfs client
+- [#1272986]( [sharding+geo-rep]: On existing slave mount, reading files fails to show sharded file content
+- [#1273043]( Data Tiering:Lot of Promotions/Demotions failed error messages
+- [#1273215]( Data Tiering:Promotions fail when brick of EC (disperse) cold layer are down
+- [#1273315]( fuse: Avoid redundant lookup on "." and ".." as part of every readdirp
+- [#1273372]( Data Tiering:getting failed to fsync on germany-hot-dht (Structure needs cleaning) warning
+- [#1273387]( FUSE clients in a container environment hang and do not recover post losing connections to all bricks
+- [#1273726]( Fully support data-tiering in 3.7.x, remove out of 'experimental' status
+- [#1274626]( Remove selinux mount option from "man mount.glusterfs"
+- [#1274629]( Data Tiering:error "[2015-10-14 18:15:09.270483] E [MSGID: 122037] [ec-common.c:1502:ec_update_size_version_done] 0-tiervolume-disperse-1: Failed to update version and size [Input/output error]"
+- [#1274847]( CTR should be enabled on attach tier, disabled otherwise.
+- [#1275247]( I/O hangs while self-heal is in progress on files
+- [#1275383]( Data Tiering:Getting lookup failed on files in hot tier, when volume is restarted
+- [#1275489]( Enhance the naming used for bugs for better name space
+- [#1275502]( [Tier]: Typo in the output while setting the wrong value of low/hi watermark
+- [#1275524]( Data Tiering:heat counters not getting reset and also internal ops seem to be heating the files
+- [#1275616]( snap-max-hard-limit for snapshots always shows as 256 in info file.
+- [#1275966]( RFE : Exporting multiple subdirectory entries for gluster volume using cli
+- [#1276018]( Wrong value of snap-max-hard-limit observed in 'gluster volume info'.
+- [#1276023]( Clone creation should not be successful when the node participating in volume goes down.
+- [#1276028]( [RFE] Geo-replication support for Volumes running in docker containers
+- [#1276031]( Assertion failure while moving files between directories on a dispersed volume
+- [#1276141]( Data Tiering: Tiering daemon is seeing each part of a file in a Disperse cold volume as a different file
+- [#1276203]( add-brick on a replicate volume could lead to data-loss
+- [#1276243]( gluster-nfs : Server crashed due to an invalid reference
+- [#1276386]( vol replace-brick fails when transport.socket.bind-address is set in glusterd
+- [#1276423]( glusterd: probing a new node(>=3.6) from 3.5 cluster is moving the peer to rejected state
+- [#1276562]( Data Tiering: tiering daemon crashes when trying to heat the file
+- [#1276643]( Upgrading a subset of cluster to 3.7.5 leads to issues with glusterd commands
+- [#1276675]( Arbiter volume becomes replica volume in some cases
+- [#1276839]( Geo-replication doesn't deal properly with sparse files
+- [#1276989]( ec-readdir.t is failing consistently
+- [#1277024]( BSD Smoke fails with _IOS_SAMP_DIR undeclared
+- [#1277076]( Monitor should restart the worker process when Changelog agent dies
+- [#1277081]( [New] - Message displayed after attach tier is misleading
+- [#1277105]( vol quota enable fails when transport.socket.bind-address is set in glusterd
+- [#1277352]( [Tier]: restarting volume reports "insert/update failure" in cold brick logs
+- [#1277481]( Upgrading to 3.7.5-5 has changed volume to distributed disperse
+- [#1277533]( doesn't return correct return status
+- [#1277716]( fix lookup-unhashed for tiered volumes.
+- [#1277997]( vol heal info fails when transport.socket.bind-address is set in glusterd
+- [#1278326]( [New] - Files in a tiered volume gets promoted when bitd signs them
+- [#1278418]( Spurious failure in bug-1275616.t
+- [#1278476]( move mount-nfs-auth.t to failed tests lists
+- [#1278689]( quota/marker: quota accounting goes wrong with renaming file when IO in progress
+- [#1278709]( Tests/tiering: Correct typo in bug-1214222-directories_miising_after_attach_tier.t in bad_tests
+- [#1278927]( [New] - Message shown in gluster vol tier <volname> status output is incorrect.
+- [#1279166]( Data Tiering:Metadata changes to a file should not heat/promote the file
+- [#1279297]( Remove bug-1275616.t from bad tests list
+- [#1279327]( [Snapshot]: Clone creation fails on tiered volume with pre-validation failed message
+- [#1279376]( Data Tiering:Rename of cold file to a hot file causing split brain and showing two copies of files in mount point
+- [#1279484]( glusterfsd to support volfile-server-transport type "unix"
+- [#1279637]( Data Tiering:Regression:Detach tier commit is passing when detach tier is in progress
+- [#1279705]( AFR: 3-way-replication: Transport point not connected error message not displayed when one of the replica pair is down
+- [#1279730]( guest paused due to IO error from gluster based storage doesn't resume automatically or manually
+- [#1279739]( libgfapi to support set_volfile-server-transport type "unix"
+- [#1279836]( Fails to build twice in a row
+- [#1279921]( volume info of %s obtained from %s: ambiguous uuid - Starting geo-rep session
+- [#1280428]( fops-during-migration-pause.t spurious failure
+- [#1281230]( dht must avoid fresh lookups when a single replica pair goes offline
+- [#1281265]( DHT :- log is full of ' Found anomalies in /<DIR> (gfid = 00000000-0000-0000-0000-000000000000)' - for each Directory which was self healed
+- [#1281598]( Data Tiering: "ls" count taking link files and promote/demote files into consideration both on fuse and nfs mount
+- [#1281892]( packaging: gfind_missing_files are not in geo-rep %if ... %endif conditional
+- [#1282076]( cache mode must be the default mode for tiered volumes
+- [#1282322]( [GlusterD]: Volume start fails post add-brick on a volume which is not started
+- [#1282331]( Geo-replication is logging in Localtime
+- [#1282390]( Data Tiering:delete command rm -rf not deleting the linkto file(hashed) for files which are under migration, and possible split-brain observed and possible disk wastage
+- [#1282461]( [upgrade] Error messages seen in glusterd logs, while upgrading from RHGS 2.1.6 to RHGS 3.1
+- [#1282673]( ./tests/basic/tier/record-metadata-heat.t is failing upstream
+- [#1282751]( Large system file distribution is broken
+- [#1282761]( EC: File healing promotes it to hot tier
+- [#1282915]( glusterfs does not register with rpcbind on restart
+- [#1283032]( While file is self healing append to the file hangs
+- [#1283103]( Setting security.* xattrs fails
+- [#1283178]( [GlusterD]: Incorrect peer status showing if volume restart done before entire cluster update.
+- [#1283211]( check_host_list() should be more robust
+- [#1283485]( Warning messages seen in glusterd logs in executing gluster volume set help
+- [#1283488]( [Tier]: Space is missed b/w the words in the detach tier stop error message
+- [#1283567]( quota/marker: backward compatibility with quota xattr versioning
+- [#1283948]( glupy default CFLAGS conflict with our CFLAGS when --enable-debug is used
+- [#1283983]( nfs-ganesha: Upcall sent on null gfid
+- [#1284090]( sometimes files are not getting demoted from hot tier to cold tier
+- [#1284357]( Data Tiering: Change the error message when a detach-tier status is issued on a non-tier volume
+- [#1284365]( Sharding - Extending writes filling incorrect final size in postbuf
+- [#1284372]( [Tier]: Stopping and Starting tier volume triggers fixing layout which fails on local host
+- [#1284419]( Resource leak in marker
+- [#1284752]( quota cli: enhance quota list command to list usage even if the limit is not set
+- [#1284789]( Snapshot creation after attach-tier causes glusterd crash
+- [#1284823]( fops-during-migration.t fails if hot and cold tiers are dist-rep
+- [#1285046]( AFR self-heal-daemon option is still set on volume though tier is detached
+- [#1285152]( store afr pending xattrs as a volume option
+- [#1285173]( Create doesn't remember flags it is opened with
+- [#1285230]( Data Tiering:File create terminates with "Input/output error" as split brain is observed
+- [#1285241]( Corrupted objects list does not get cleared even after all the files in the volume are deleted and count increases as old + new count
+- [#1285288]( Better indication of arbiter brick presence in a volume.
+- [#1285483]( legacy_many_files.t fails upstream
+- [#1285488]( [geo-rep]: Recommended Shared volume use on geo-replication is broken
+- [#1285616]( Brick crashes because of race in bit-rot init
+- [#1285634]( Self-heal triggered every couple of seconds in a 3-node 1-arbiter setup
+- [#1285660]( sharding - reads fail on sharded volume while running iozone
+- [#1285663]( tiering: Seeing error messages E "/usr/lib64/glusterfs/3.7.5/xlator/features/ [0x7f6c435c116f] ) 0-ctr: invalid argument: loc->name [Invalid argument] after attach tier
+- [#1285968]( cli/geo-rep : remove unused code
+- [#1285989]( bitrot: bitrot scrub status command should display the correct value of total number of scrubbed, unsigned files
+- [#1286017]( We need to skip data self-heal for arbiter bricks
+- [#1286029]( Data Tiering:File create terminates with "Input/output error" as split brain is observed
+- [#1286279]( tools/glusterfind: add --full option to query command
+- [#1286346]( Data Tiering:Don't allow or reset the frequency threshold values to zero when record counter features.record-counter is turned off
+- [#1286656]( Data Tiering:Read heat not getting calculated and read operations not heating the file with counter enabled
+- [#1286735]( RFE: add setup and teardown for fuse tests
+- [#1286910]( Tier: ec xattrs are set on a newly created file present in the non-ec hot tier
+- [#1286959]( [GlusterD]: After log rotate of cmd_history.log file, the next executed gluster commands are not present in the cmd_history.log file.
+- [#1286974]( Without detach tier commit, status changes back to tier migration
+- [#1286988]( bitrot: gluster man page and gluster cli usage does not mention the new scrub status cmd
+- [#1287027]( glusterd: cli is showing command success for rebalance commands (commands which use the op_sm framework) even though staging failed on a follower node.
+- [#1287455]( glusterd: all the daemon's of existing volume stopping upon peer detach
+- [#1287503]( Full heal of volume fails on some nodes "Commit failed on X", and glustershd logs "Couldn't get xlator xl-0"
+- [#1287517]( Memory leak in glusterd
+- [#1287519]( [geo-rep+tiering]: symlinks are not getting synced to slave on tiered master setup
+- [#1287539]( xattrs on directories are unavailable on distributed replicated volume after adding new bricks
+- [#1287723]( Handle Rsync/Tar errors effectively
+- [#1287763]( glusterfs does not allow passing standard SElinux mount options to fuse
+- [#1287842]( Few snapshot creation fails with pre-validation failed message on tiered volume.
+- [#1287872]( add bug-924726.t to ignore list in regression
+- [#1287992]( [GlusterD] Probing a node having standalone volume, should not happen
+- [#1287996]( [Quota]: Peer status is in "Rejected" state with Quota enabled volume
+- [#1288019]( Possible memory leak in the tiered daemon
+- [#1288059]( glusterd: disable ping timer b/w glusterd and make epoll thread count default 1
+- [#1288474]( tiering: quota list command is not working after attach or detach
+- [#1288517]( Data Tiering: new set of gluster v tier commands not working as expected
+- [#1288857]( Use after free bug in notify_kernel_loop in fuse-bridge code
+- [#1288995]( [tiering]: Tier daemon crashed on two of eight nodes and lot of "demotion failed" seen in the system
+- [#1289068]( libgfapi: Errno incorrectly set to EINVAL even on success
+- [#1289258]( core: use syscall wrappers instead of making direct syscalls; pread, pwrite
+- [#1289428]( Test ./tests/bugs/fuse/bug-924726.t fails
+- [#1289447]( Sharding - Iozone on sharded volume fails on NFS
+- [#1289578]( [Tier]: Failed to open "demotequeryfile-master-tier-dht" errors logged on the node having only cold bricks
+- [#1289584]( brick_up_status in tests/volume.rc is not correct
+- [#1289602]( After detach-tier start writes still go to hot tier
+- [#1289840]( Sharding: Remove dependency on performance.strict-write-ordering
+- [#1289845]( spurious failure of bug-1279376-rename-demoted-file.t
+- [#1289859]( Symlinks Rename fails in Symlink not exists in Slave
+- [#1289869]( Compile is broken in gluster master
+- [#1289916]( Client will not get notified about changes to volume if node used while mounting goes down
+- [#1289935]( Glusterfind hook script failing if /var/lib/glusterd/glusterfind dir was absent
+- [#1290125]( tests/basic/afr/arbiter-statfs.t fails most of the times on NetBSD
+- [#1290151]( hook script for CTDB should not change Samba config
+- [#1290270]( Several intermittent regression failures
+- [#1290421]( changelog: CHANGELOG rename error is logged on every changelog rollover
+- [#1290604]( S30Samba scripts do not work on systemd systems
+- [#1290677]( tiering: T files getting created , even after disk quota exceeds
+- [#1290734]( [GlusterD]: GlusterD log is filled with error messages - " Failed to aggregate response from node/brick"
+- [#1290766]( [RFE] quota: enhance quota enable and disable process
+- [#1290865]( nfs-ganesha server do not enter grace period during failover/failback
+- [#1290965]( [Tiering] + [DHT] - Detach tier fails to migrate the files when there are corrupted objects in hot tier.
+- [#1290975]( File is not demoted after self heal (split-brain)
+- [#1291212]( Regular files are listed as 'T' files on nfs mount
+- [#1291259]( Upcall/Cache-Invalidation: Use parent stbuf while updating parent entry
+- [#1291537]( [RFE] Provide mechanism to spin up reproducible test environment for all developers
+- [#1291566]( first file created after hot tier full fails to create, but later ends up as a stale erroneous file (file with ???????????)
+- [#1291603]( [tiering]: read/write freq-threshold allows negative values
+- [#1291701]( Renames/deletes failed with "No such file or directory" when few of the bricks from the hot tier went offline
+- [#1292067]( Data Tiering:Watermark:File continuously trying to demote itself but failing " [dht-rebalance.c:608:__dht_rebalance_create_dst_file] 0-wmrk-tier-dht: chown failed for //AP.BH.avi on wmrk-cold-dht (No such file or directory)"
+- [#1292084]( [georep+tiering]: Geo-replication sync is broken if cold tier is EC
+- [#1292112]( [Tier]: start tier daemon using rebal tier start doesn't start tierd if it has failed on any single node
+- [#1292379]( md5sum of files mismatch after the self-heal is complete on the file
+- [#1292463]( [geo-rep]: ChangelogException: [Errno 22] Invalid argument observed upon rebooting the ACTIVE master node
+- [#1292671]( [tiering]: cluster.tier-max-files option in tiering is not honored
+- [#1292749]( Friend update floods can render the cluster incapable of handling other commands
+- [#1292954]( all: fix various errors/warnings reported by cppcheck
+- [#1293034]( Creation of files on hot tier volume taking very long time
+- [#1293133]( all: fix clang compile warnings
+- [#1293223]( Disperse: Disperse volume (cold vol) crashes while writing files on tier volume
+- [#1293227]( Minor improvements and code cleanup for locks translator
+- [#1293256]( [Tier]: "Bad file descriptor" on removal of symlink only on tiered volume
+- [#1293293]( afr: warn if pending xattrs missing during init()
+- [#1293414]( [GlusterD]: Peer detach happening with a node which is hosting volume bricks
+- [#1293523]( tier-snapshot.t runs too slowly on RHEL6
+- [#1293558]( gluster cli crashed while performing 'gluster vol bitrot <vol_name> scrub status'
+- [#1293601]( quota: handle quota xattr removal when quota is enabled again
+- [#1293932]( [Tiering]: When files are heated continuously, promotions are too aggressive that it promotes files way beyond high water mark
+- [#1293950]( Gluster manpage doesn't show georeplication options
+- [#1293963]( [Tier]: can not delete symlinks from client using rm
+- [#1294051]( Though files are in split-brain able to perform writes to the file
+- [#1294053]( Excessive logging in mount when bricks of the replica are down
+- [#1294209]( use %global per Fedora packaging guidelines
+- [#1294223]( uses deprecated find -perm +xxx syntax
+- [#1294446]( Ganesha hook script executes showmount and causes a hang
+- [#1294448]( [tiering]: Incorrect display of 'gluster v tier help'
+- [#1294479]( quota: limit xattr not healed for a sub-directory on a newly added bricks
+- [#1294497]( gluster volume status xml output of tiered volume has all the common services tagged under <coldBricks>
+- [#1294588]( Dist-geo-rep : geo-rep worker crashed while init with [Errno 34] Numerical result out of range.
+- [#1294600]( [Tier]: Killing glusterfs tier process doesn't reflect as failed/faulty in tier status
+- [#1294637]( [tiering]: Tiering isn't started after attaching hot tier and hence no promotion/demotion
+- [#1294743]( Lot of Inode not found messages in glfsheal log file
+- [#1294786]( Good files does not promoted in a tiered volume when bitrot is enabled
+- [#1294794]( "Transport endpoint not connected" in heal info though hot tier bricks are up
+- [#1294809]( mount options no longer valid: noexec, nosuid, noatime
+- [#1294826]( Speed up regression tests
+- [#1295107]( Fix mem leaks related to gfapi applications
+- [#1295504]( S29CTDBsetup hook script contains outdated documentation comments
+- [#1295505]( S29CTDB hook scripts contain comment references to downstream products and versions
+- [#1295520]( Manual mount command in S29CTDBsetup script lacks options (_netdev ...)
+- [#1295702]( Fix spurious failure in bug-1221481-allow-fops-on-dir-split-brain.t
+- [#1295704]( RFE: Provide a mechanism to disable some tests in regression
+- [#1295763]( Unable to modify quota hard limit on tier volume after disk limit got exceeded
+- [#1295784]( dht: misleading indentation, gcc-6
+- [#1296174]( geo-rep: hard-link rename issue on changelog replay
+- [#1296206]( Geo-Replication Session goes "FAULTY" when application logs rolled on master
+- [#1296399]( Stale stat information for corrupted objects (replicated volume)
+- [#1296496]( [georep+disperse]: Geo-Rep session went to faulty with errors "[Errno 5] Input/output error"
+- [#1296611]( Rebalance crashed after detach tier.
+- [#1296818]( Move away from gf_log completely to gf_msg
+- [#1296992]( Stricter dependencies for glusterfs-server
+- [#1297172]( Client self-heals block the FOP that triggered the heals
+- [#1297195]( no-mtab (-n) mount option ignore next mount option
+- [#1297311]( Attach tier : Creates fail with invalid argument errors
+- [#1297373]( [write-behind] : Write/Append to a full volume causes fuse client to crash
+- [#1297638]( "gluster vol get volname user.metadata-text" command fails with "volume get option: failed: Did you mean cluster.metadata-self-heal?"
+- [#1297695]( heal info reporting slow when IO is in progress on the volume
+- [#1297740]( tests/bugs/quota/bug-1049323.t fails in fedora
+- [#1297750]( volume info xml does not show arbiter details
+- [#1297897]( RFE: "heal" commands output should have a fixed fields
+- [#1298111]( Fix sparse-file-self-heal.t and remove from bad tests
+- [#1298439]( GlusterD restart, starting the bricks when server quorum not met
+- [#1298498]( glusterfs crash during load testing
+- [#1298520]( tests : Modifying tests for crypt xlator
+- [#1299410]( [Fuse: ] crash while --attribute-timeout and --entry-timeout are set to 0
+- [#1299497]( Quota Aux mount crashed
+- [#1299710]( Glusterd: Creation of volume is failing if one of the brick is down on the server
+- [#1299819]( Snapshot creation fails on a tiered volume
+- [#1300152]( Rebalance process crashed during cleanup_and_exit
+- [#1300253]( Test open-behind.t failing fairly often on NetBSD
+- [#1300412]( Data Tiering:Change the default tiering values to optimize tiering settings
+- [#1300564]( I/O failure during a graph change followed by an option change.
+- [#1300596]( 'gluster volume get' returns 0 value for server-quorum-ratio
+- [#1300929]( Lot of assertion failures are seen in nfs logs with disperse volume
+- [#1300956]( [RFE] Schedule Geo-replication
+- [#1300979]( [Snapshot]: Snapshot restore gets stuck in post validation.
+- [#1301032]( [georep+tiering]: Hardlink sync is broken if master volume is tiered
+- [#1301227]( Tiering should break out of iterating query file once cycle time completes.
+- [#1301352]( Point users of glusterfs-hadoop to the upstream project
+- [#1301473]( [Tiering]: Values of watermarks, min free disk etc will be miscalculated with quota set on root directory of gluster volume
+- [#1302200]( Unable to get the client statedump, as /var/run/gluster directory is not available by default
+- [#1302201]( Scrubber crash (list corruption)
+- [#1302205]( Improve error message for unsupported clients
+- [#1302234]( [SNAPSHOT]: Decrease the VHD_SIZE in snapshot.rc
+- [#1302257]( [tiering]: Quota object limits not adhered to, in a tiered volume
+- [#1302291]( Self heal command gives error "Launching heal operation to perform index self heal on volume vol0 has been unsuccessful"
+- [#1302307]( Vim commands from a non-root user fails to execute on fuse mount with trash feature enabled
+- [#1302554]( Able to create files when quota limit is set to 0
+- [#1302772]( promotions not balanced across hot tier sub-volumes
+- [#1302948]( tar complains: <fileName>: file changed as we read it
+- [#1303028]( Tiering status and rebalance status stops getting updated
+- [#1303269]( After GlusterD restart, Remove-brick commit happening even though data migration not completed.
+- [#1303501]( access-control : spurious error log message on every setxattr call
+- [#1303828]( [USS]: If .snaps already exists, ls -la lists it even after enabling USS
+- [#1303829]( [feat] Compound translator
+- [#1303895]( promotions not happening when space is created on previously full hot tier
+- [#1303945]( Memory leak in dht
+- [#1303995]( SMB: SMB crashes with AIO enabled on reads + vers=3.0
+- [#1304301]( self-heald.t spurious failure
+- [#1304348]( Allow GlusterFS to build with URCU 0.6
+- [#1304686]( Start self-heal and display correct heal info after replace brick
+- [#1304966]( DHT: Take blocking locks while renaming files
+- [#1304970]( [quota]: Incorrect disk usage shown on a tiered volume
+- [#1304988]( DHT: Rebalance hang while migrating the files of disperse volume
+- [#1305277]( [Tier]: End up with multiple entries of same file on client after rename of a file which had hardlinks
+- [#1305839]( Wrong interpretation of disk size in script
+- [#1306193]( cd to .snaps fails with "transport endpoint not connected" after force start of the volume.
+- [#1306199]( gluster volume heal info takes extra 2 seconds
+- [#1306220]( quota: xattr trusted.glusterfs.quota.limit-objects not healed on a root of newly added brick
+- [#1306264]( glfs_lseek returns incorrect offset for SEEK_SET and SEEK_CUR flags
+- [#1306560]( Accessing program list in build_prog_details () should be lock protected
+- [#1306807]( use mutex on single core machines
+- [#1306852]( Tiering threads can starve each other
+- [#1306897]( Remove split-brain-healing.t from bad tests
+- [#1307208]( dht: NULL layouts referenced while the I/O is going on tiered volume
+- [#1308402]( Newly created volume start, starting the bricks when server quorum not met
+- [#1308900]( build: fix build break
+- [#1308961]( [New] - quarantine folder becomes empty and bitrot status does not list any files which are corrupted
+- [#1309238]( Issues with refresh-config when the ".export_added" has different values on different nodes
+- [#1309342]( Wrong permissions set on previous copy of truncated files inside trash directory
+- [#1309462]( Upgrade from 3.7.6 to 3.7.8 causes massive drop in write performance. Fresh install of 3.7.8 also has low write performance
+- [#1309659]( [tiering]: Performing a gluster vol reset, turns off 'features.ctr-enabled' on a tiered volume
+- [#1309999]( Data Tiering:Don't allow a detach-tier commit if detach-tier start has failed to complete
+- [#1310080]( [RFE]Add --no-encode option to the `glusterfind pre` command
+- [#1310171]( Incorrect file size on mount if stat is served from the arbiter brick.
+- [#1310437]( rsyslog can't be completely removed due to dependency in libglusterfs
+- [#1310620]( gfapi : listxattr is broken for handle ops.
+- [#1310677]( glusterd crashed when probing a node with firewall enabled on only one node
+- [#1310755]( glusterd: coverity warning in glusterd-snapshot-utils.c copy_nfs_ganesha_file()
+- [#1311124]( Implement inode_forget_cbk() similar fops in gfapi
+- [#1311146]( glfs_dup() functionality is broken
+- [#1311178]( Tier: Actual files are not demoted and keep on trying to demoted deleted files
+- [#1311874]( Peer probe from a reinstalled node should fail
+- [#1312036]( tests: upstream test infra broken
+- [#1312226]( Readdirp op_ret is modified by client xlator in case of xdata_rsp presence
+- [#1312346]( nfs: fix lock variable type
+- [#1312354]( changelog: fix typecasting of function
+- [#1312816]( gfid-reset of a directory in distributed replicate volume doesn't set gfid on 2nd till last subvolumes
+- [#1312845]( Protocol server/client handshake gap
+- [#1312897]( glusterfs-server %post script is not quiet, prints "success" during installation
+- [#1313135]( RFE: Need type of gfid in index_readdir
+- [#1313206]( Encrypted rpc clients do not reconnect sometimes
+- [#1313228]( promotions and demotions not happening after attach tier due to fix layout taking very long time (3 days)
+- [#1313293]( [HC] glusterfs mount crashed
+- [#1313300]( quota: reduce latency for testcase ./tests/bugs/quota/bug-1293601.t
+- [#1313303]( [geo-rep]: Session goes to faulty with Errno 13: Permission denied
+- [#1313495]( migrate files based on file size
+- [#1313628]( Brick ports get changed after GlusterD restart
+- [#1313775]( ec-read-policy.t can report a false-failure
+- [#1313901]( glusterd: does not start
+- [#1314150]( Choose self-heal source as local subvolume if possible
+- [#1314204]( nfs-ganesha setup fails on fedora
+- [#1314291]( tier: GCC throws Unused variable warning for conf in tier_link_cbk function
+- [#1314549]( remove replace-brick-self-heal.t from bad tests
+- [#1314649]( disperse: Provide an option to enable/disable eager lock
+- [#1315024]( glusterfs-libs postun scriptlet fail /sbin/ldconfig: relative path `1' used to build cache
+- [#1315168]( Fd based fops should not be logging ENOENT/ESTALE
+- [#1315186]( setting lower op-version should throw failure message
+- [#1315465]( glusterfs brick process crashed
+- [#1315560]( ./tests/basic/tier/tier-file-create.t dumping core fairly often on build machines in Linux
+- [#1315601]( Geo-replication CPU usage is 100%
+- [#1315659]( [Tier]: Following volume restart, tierd shows failure at status on some nodes
+- [#1315666]( Data Tiering:tier volume status shows as in-progress on all nodes of a cluster even if the node is not part of volume
+- [#1316327]( Upgrade from 3.7.6 to 3.7.8 causes massive drop in write performance. Fresh install of 3.7.8 also has low write performance
+- [#1316437]( snapd doesn't come up automatically after node reboot.
+- [#1316462]( Fix possible failure in tests/basic/afr/arbiter.t
+- [#1316499]( volume set on user.* domain trims all white spaces in the value
+- [#1316819]( Errors seen in cli.log, while executing the command 'gluster snapshot info --xml'
+- [#1316848]( Peers goes to rejected state after reboot of one node when quota is enabled on cloned volume.
+- [#1317361]( Do not succeed mkdir without gfid-req
+- [#1317424]( nfs-ganesha server do not enter grace period during failover/failback
+- [#1317785]( Cache swift xattrs
+- [#1317902]( Different epoch values for each of NFS-Ganesha heads
+- [#1317948]( inode ref leaks with
+- [#1318107]( Typo in log message for posix_mkdir log
+- [#1318158]( Client's App is having issues retrieving files from share 1002976973
+- [#1318544]( Glusterd crashed during volume status of snapd daemon
+- [#1318546]( Glusterd crashed just after a peer probe command failed.
+- [#1318751]( cluster/afr: Fix partial heals in 3-way replication
+- [#1318757]( trash xlator : trash_unlink_mkdir_cbk() enters an infinite loop which results in a segfault
+- [#1319374]( smbd crashes while accessing multiple volume shares via same client
+- [#1319581]( Marker: Lot of dict_get errors in brick log!!
+- [#1319706]( Add a script that converts the gfid-string of a directory into absolute path name w.r.t the brick path.
+- [#1319717]( glusterfind pre test projects_media2 /tmp/123 rh-storage2 - pre failed: Traceback ...
+- [#1319992]( RFE: Lease support for gluster
+- [#1320101]( Client log gets flooded by default with fop stats under DEBUG level
+- [#1320388]( [GSS]-gluster v heal volname info does not work with enabled ssl/tls
+- [#1320458]( Peer information is not propagated to all the nodes in the cluster, when the peer is probed with its second interface FQDN/IP
+- [#1320489]( glfs-mgmt: fix connecting to multiple volfile transports
+- [#1320716]( RFE Sort volume quota <volume> list output alphabetically by path
+- [#1320818]( Over some time Files which were accessible become inaccessible (music files)
+- [#1321322]( afr: add mtime based split-brain resolution to CLI
+- [#1321554]( assert failure happens when parallel rm -rf is issued on nfs mounts
+- [#1321762]( glusterd: response not aligned
+- [#1321872]( el6 - Installing glusterfs-ganesha-3.7.9-1.el6rhs.x86_64 fails with dependency on /usr/bin/dbus-send
+- [#1321955]( Self-heal and manual heal not healing some file
+- [#1322214]( [HC] Add disk in a Hyper-converged environment fails when glusterfs is running in directIO mode
+- [#1322237]( glusterd pmap scan wastes time scanning for not relevant ports
+- [#1322253]( gluster volume heal info shows conservative merge entries as in split-brain
+- [#1322262]( Glusterd crashes when a message is passed through rpc which is not available
+- [#1322320]( build: git ignore files generated by fdl xlator
+- [#1322323]( fdl: fix make clean
+- [#1322489]( marker: account goes bad with rm -rf
+- [#1322772]( glusterd: glusterd didn't come up after node reboot, error "realpath() failed for brick /run/gluster/snaps/130949baac8843cda443cf8a6441157f/brick3/b3. The underlying file system may be in bad state [No such file or directory]"
+- [#1322801]( nfs-ganesha installation : no pacemaker package installed for RHEL 6.7
+- [#1322805]( [scale] Brick process does not start after node reboot
+- [#1322825]( IO-stats, client profile is overwritten when it is on the same node as bricks
+- [#1322850]( Healing queue rarely empty
+- [#1323040]( Inconsistent directory structure on dht subvols caused by parent layouts going stale during entry create operations because of fix-layout
+- [#1323287]( TIER : Attach tier fails
+- [#1323360]( quota/cli: quota list with path not working when limit is not set
+- [#1323486]( quota: check inode limits only when new file/dir is created and not with write FOP
+- [#1323659]( rpc: assign port only if it is unreserved
+- [#1324004]( arbiter volume write performance is bad.
+- [#1324439]( SAMBA+TIER : Wrong message displayed. On detach tier success the message reflects Tier command failed.
+- [#1324509]( Continuous nfs_grace_monitor log messages observed in /var/log/messages
+- [#1325683]( the wrong variable was being checked for gf_strdup
+- [#1325822]( Too many log messages showing inode ctx is NULL for 00000000-0000-0000-0000-000000000000
+- [#1325841]( Volume stop is failing when one of brick is down due to underlying filesystem crash
+- [#1326085]( [rfe]posix-locks: Lock migration
+- [#1326308]( WORM/Retention Feature
+- [#1326410]( /var/lib/glusterd/$few-directories not owned by any package, causing it to remain after glusterfs-server is uninstalled
+- [#1326627]( nfs-ganesha crashes with segfault error while doing refresh config on volume.
+- [#1327507]( [DHT-Rebalance]: with few brick process down, rebalance process isn't killed even after stopping rebalance process
+- [#1327553]( [geo-rep]: geo status shows $MASTER Nodes always with hostname even if volume is configured with IP
+- [#1327976]( [RFE] Provide vagrant developer setup for GlusterFS
+- [#1328010]( snapshot-clone: clone volume doesn't start after node reboot
+- [#1328043]( [FEAT] Renaming NSR to JBR
+- [#1328399]( [geo-rep]: doesn't touch the mount in every iteration
+- [#1328502]( Move FOP enumerations and other network protocol bits to XDR generated headers
+- [#1328696]( quota : fix null dereference issue
+- [#1329129]( runner: extract and return actual exit status of child
+- [#1329501]( self-heal does fsyncs even after setting ensure-durability off
+- [#1329503]( [tiering]: during detach tier operation, Input/output error is seen with new file writes on NFS mount
+- [#1329773]( Inode leaks found in data-self-heal
+- [#1329870]( Lots of [global.glusterfs - usage-type (null) memusage] are seen in statedump
+- [#1330052]( [RFE] We need more debug info from stack wind and unwind calls
+- [#1330225]( gluster is not using pthread_equal to compare thread
+- [#1330248]( glusterd: SSL certificate depth volume option is incorrect
+- [#1330346]( distaflibs: structure directory tree to follow setuptools namespace packages format
+- [#1330353]( [Tiering]: promotion of files may not be balanced on distributed hot tier when promoting files with size as that of max.mb
+- [#1330476]( libgfapi:Setting need_lookup on wrong list
+- [#1330481]( glusterd restart is failing if volume brick is down due to underlying FS crash.
+- [#1330567]( SAMBA+TIER : File size is not getting updated when created on windows samba share mount
+- [#1330583]( glusterfs-libs postun ldconfig: relative path `1' used to build cache
+- [#1330616]( Minor improvements and code cleanup for libglusterfs
+- [#1330974]( Swap order of GF_EVENT_SOME_CHILD_DOWN enum to match the release-3.7 branch
+- [#1331042]( glusterfsd: return actual exit status on mount process
+- [#1331253]( glusterd: fix max pmap alloc to GF_PORT_MAX
+- [#1331289]( glusterd memory overcommit
+- [#1331658]( [geo-rep]: doesn't work when invoked using cron
+- [#1332020]( multiple regression failures for tests/basic/quota-ancestry-building.t
+- [#1332021]( multiple failures for testcase: tests/basic/inode-quota-enforcing.t
+- [#1332022]( multiple failures for testcase: tests/bugs/disperse/bug-1304988.t
+- [#1332045]( multiple failures for testcase: tests/basic/quota.t
+- [#1332162]( Support mandatory locking in glusterfs
+- [#1332370]( DHT: Once remove brick start failed in between Remove brick commit should not be allowed
+- [#1332396]( posix: Set correct d_type for readdirp() calls
+- [#1332414]( protocol/server: address double free's
+- [#1332788]( Wrong op-version for mandatory-locks volume set option
+- [#1332789]( quota: client gets IO error instead of disk quota exceed when the limit is exceeded
+- [#1332845]( Disperse volume fails on high load and logs show some assertion failures
+- [#1332864](https://bugzilla.redhat.com/1332864) glusterd + bitrot : Creating clone of snapshot. error "xlator.c:148:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/3.7.9/xlator/features/bitrot.so: cannot open shared object file:"
+- [#1333243]( [AFR]: "volume heal info" command is failing during in-service upgrade to latest.
+- [#1333244]( Fix excessive logging due to NULL dict in dht
+- [#1333266](https://bugzilla.redhat.com/1333266) SMB: running I/O on a cifs mount while doing a graph switch causes the cifs mount to hang.
+- [#1333711]( [scale] Brick process does not start after node reboot
+- [#1333803](https://bugzilla.redhat.com/1333803) Detach tier fired before the background fixlayout is complete may result in failure
+- [#1333900]( /var/lib/glusterd/$few-directories not owned by any package, causing it to remain after glusterfs-server is uninstalled
+- [#1334074]( No xml output on gluster volume heal info command with --xml
+- [#1334268]( GlusterFS 3.8 fails to build in the CentOS Community Build System
+- [#1334287]( Under high read load, sometimes the message "XDR decoding failed" appears in the logs and read fails
+- [#1334443]( SAMBA-VSS : Permission denied issue while restoring the directory from windows client 1 when files are deleted from windows client 2
+- [#1334699](https://bugzilla.redhat.com/1334699) readdir-ahead does not fetch xattrs that md-cache needs in its internal calls
+- [#1334994]( Fix the message ids in Client
+- [#1335017]( set errno in case of inode_link failures
+- [#1335282]( Wrong constant used in length based comparison for XATTR_SECURITY_PREFIX
+- [#1335283]( Self Heal fails on a replica3 volume with 'disk quota exceeded'
+- [#1335285]( tar complains: <fileName>: file changed as we read it
+- [#1335433]( Self heal shows different information for the same volume from each node
+- [#1335726](https://bugzilla.redhat.com/1335726) stop all gluster processes should also include glusterfs mount process
+- [#1335730]( mount/fuse: Logging improvements
+- [#1335822]( Revert "features/shard: Make o-direct writes work with sharding:"