From a69b00ae946db909bcbf43c9a80bfcb875019fb5 Mon Sep 17 00:00:00 2001
From: Shyam
Date: Thu, 2 Feb 2017 21:00:42 -0500
Subject: doc: Updated release notes

Included missing release notes for features that are part of this release.
Included bugs fixed in this release since release-3.9

Change-Id: I573aade31ffb0809985ecc4e5e224d2439b60856
BUG: 1417735
Signed-off-by: Shyam
Reviewed-on: https://review.gluster.org/16535
CentOS-regression: Gluster Build System
NetBSD-regression: NetBSD Build System
Smoke: Gluster Build System
---
 doc/release-notes/3.10.0.md | 320 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 317 insertions(+), 3 deletions(-)

diff --git a/doc/release-notes/3.10.0.md b/doc/release-notes/3.10.0.md
index 9c3b667834b..ed36a9ed7c3 100644
--- a/doc/release-notes/3.10.0.md
+++ b/doc/release-notes/3.10.0.md
@@ -86,6 +86,12 @@ number and affect the calculated estimates.
 
 ### Separation of tier as its own service
 *Notes for users:*
+This change moves the management of the tier daemon into the gluster service
+framework, thereby improving its stability and manageability.
+
+There is no change to any of the tier commands or user-facing interfaces and
+operations.
 
 *Limitations:*
 
@@ -106,6 +112,12 @@ glusterd still tries to invoke ganesha-ha.sh to setup and teardown HA.
 
 ### Statedump support for gfapi based applications
 *Notes for users:*
+gfapi-based applications can now dump state information for easier
+troubleshooting of issues (see the example after the bug list below).
+
+The backport of this feature to 3.10, post the release branching, is not yet
+done; it will possibly appear in the next beta or RC build, or be part of the
+next release.
 
 *Limitations:*
 
@@ -127,6 +139,15 @@ manually delete the directory from the mount point.
 
 ### Implemented parallel readdirp with distribute xlator
 *Notes for users:*
+Note: this section is a work in progress.
+This feature improves directory enumeration performance in large clusters by
+performing readdir-ahead in parallel.
+
+To test the benefits of this feature, set the following two volume options:
+```bash
+# gluster volume set <volname> performance.readdir-ahead on
+# gluster volume set <volname> performance.parallel-readdir on
+```
 
 *Limitations:*
 
@@ -134,6 +155,20 @@ manually delete the directory from the mount point.
 
 ### md-cache can optionally -ve cache security.ima xattr
 *Notes for users:*
+Note: this section is a work in progress.
+From kernel version 3.X onwards, creating a file results in a removexattr call
+on the security.ima xattr. However, this xattr is not set on the file unless
+the IMA feature is active. With this change, the removexattr call returns
+ENODATA if the xattr is not found in the cache.
+
+The end benefit is faster create operations where IMA is not enabled.
+
+To cache this xattr, use:
+```bash
+# gluster volume set <volname> performance.cache-ima-xattrs on
+```
+
+The above option is on by default.
 
 *Limitations:*
 
@@ -141,6 +176,285 @@ manually delete the directory from the mount point.
 
 ## Bugs addressed
 
-A total of XXX patches has been sent, addressing YYY bugs:
-
-- TODO
+Bugs addressed since release-3.9 are listed below.
+ +- [#789278](https://bugzilla.redhat.com/789278): Issues reported by Coverity static analysis tool +- [#1198849](https://bugzilla.redhat.com/1198849): Minor improvements and cleanup for the build system +- [#1211863](https://bugzilla.redhat.com/1211863): RFE: Support in md-cache to use upcall notifications to invalidate its cache +- [#1231224](https://bugzilla.redhat.com/1231224): Misleading error messages on brick logs while creating directory (mkdir) on fuse mount +- [#1234054](https://bugzilla.redhat.com/1234054): `gluster volume heal split-brain' does not heal if data/metadata/entry self-heal options are turned off +- [#1289922](https://bugzilla.redhat.com/1289922): Implement SIMD support on EC +- [#1290304](https://bugzilla.redhat.com/1290304): [RFE]Reducing number of network round trips +- [#1297182](https://bugzilla.redhat.com/1297182): Mounting with "-o noatime" or "-o noexec" causes "nosuid,nodev" to be set as well +- [#1313838](https://bugzilla.redhat.com/1313838): Tiering as separate process and in v status moving tier task to tier process +- [#1316873](https://bugzilla.redhat.com/1316873): EC: Set/unset dirty flag for all the update operations +- [#1325531](https://bugzilla.redhat.com/1325531): Statedump: Add per xlator ref counting for inode +- [#1325792](https://bugzilla.redhat.com/1325792): "gluster vol heal test statistics heal-count replica" seems doesn't work +- [#1330604](https://bugzilla.redhat.com/1330604): out-of-tree builds generate XDR headers and source files in the original directory +- [#1336371](https://bugzilla.redhat.com/1336371): Sequential volume start&stop is failing with SSL enabled setup. +- [#1341948](https://bugzilla.redhat.com/1341948): DHT: Rebalance- Misleading log messages from __dht_check_free_space function +- [#1344714](https://bugzilla.redhat.com/1344714): removal of file from nfs mount crashs ganesha server +- [#1349385](https://bugzilla.redhat.com/1349385): [FEAT]jbr: Add rollbacking of failed fops +- [#1355956](https://bugzilla.redhat.com/1355956): RFE : move ganesha related configuration into shared storage +- [#1356076](https://bugzilla.redhat.com/1356076): DHT doesn't evenly balance files on FreeBSD with ZFS +- [#1356960](https://bugzilla.redhat.com/1356960): OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume +- [#1357753](https://bugzilla.redhat.com/1357753): JSON output for all Events CLI commands +- [#1357754](https://bugzilla.redhat.com/1357754): Delayed Events if any one Webhook is slow +- [#1358296](https://bugzilla.redhat.com/1358296): tier: breaking down the monolith processing function tier_migrate_using_query_file() +- [#1359612](https://bugzilla.redhat.com/1359612): [RFE] Geo-replication Logging Improvements +- [#1360670](https://bugzilla.redhat.com/1360670): Add output option `--xml` to man page of gluster +- [#1363595](https://bugzilla.redhat.com/1363595): Node remains in stopped state in pcs status with "/usr/lib/ocf/resource.d/heartbeat/ganesha_mon: line 137: [: too many arguments ]" messages in logs. 
+- [#1363965](https://bugzilla.redhat.com/1363965): geo-replication *changes.log does not respect the log-level configured +- [#1364420](https://bugzilla.redhat.com/1364420): [RFE] History Crawl performance improvement +- [#1365395](https://bugzilla.redhat.com/1365395): Support for rc.d and init for Service management +- [#1365740](https://bugzilla.redhat.com/1365740): dht: Update stbuf from servers having layout +- [#1365791](https://bugzilla.redhat.com/1365791): Geo-rep worker Faulty with OSError: [Errno 21] Is a directory +- [#1365822](https://bugzilla.redhat.com/1365822): [RFE] cli command to get max supported cluster.op-version +- [#1366494](https://bugzilla.redhat.com/1366494): Rebalance is not considering the brick sizes while fixing the layout +- [#1366495](https://bugzilla.redhat.com/1366495): 1 mkdir generates tons of log messages from dht xlator +- [#1366648](https://bugzilla.redhat.com/1366648): [GSS] A hot tier brick becomes full, causing the entire volume to have issues and returns stale file handle and input/output error. +- [#1366815](https://bugzilla.redhat.com/1366815): spurious heal info as pending heal entries never end on an EC volume while IOs are going on +- [#1368012](https://bugzilla.redhat.com/1368012): gluster fails to propagate permissions on the root of a gluster export when adding bricks +- [#1368138](https://bugzilla.redhat.com/1368138): Crash of glusterd when using long username with geo-replication +- [#1368312](https://bugzilla.redhat.com/1368312): Value of `replica.split-brain-status' attribute of a directory in metadata split-brain in a dist-rep volume reads that it is not in split-brain +- [#1368336](https://bugzilla.redhat.com/1368336): [RFE] Tier Events +- [#1369077](https://bugzilla.redhat.com/1369077): The directories get renamed when data bricks are offline in 4*(2+1) volume +- [#1369124](https://bugzilla.redhat.com/1369124): fix unused variable warnings from out-of-tree builds generate XDR headers and source files i... 
+- [#1369397](https://bugzilla.redhat.com/1369397): segment fault in changelog_cleanup_dispatchers +- [#1369403](https://bugzilla.redhat.com/1369403): [RFE]: events from protocol server +- [#1369523](https://bugzilla.redhat.com/1369523): worm: variable reten_mode is invalid to be free by mem_put in fini() +- [#1370410](https://bugzilla.redhat.com/1370410): [granular entry sh] - Provide a CLI to enable/disable the feature that checks that there are no heals pending before allowing the operation +- [#1370567](https://bugzilla.redhat.com/1370567): [RFE] Provide snapshot events for the new eventing framework +- [#1370931](https://bugzilla.redhat.com/1370931): glfs_realpath() should not return malloc()'d allocated memory +- [#1371353](https://bugzilla.redhat.com/1371353): posix: Integrate important events with events framework +- [#1371470](https://bugzilla.redhat.com/1371470): disperse: Integrate important events with events framework +- [#1371485](https://bugzilla.redhat.com/1371485): [RFE]: AFR events +- [#1371539](https://bugzilla.redhat.com/1371539): Quota version not changing in the quota.conf after upgrading to 3.7.1 from 3.6.1 +- [#1371540](https://bugzilla.redhat.com/1371540): Spurious regression in tests/basic/gfapi/bug1291259.t +- [#1371874](https://bugzilla.redhat.com/1371874): [RFE] DHT Events +- [#1372193](https://bugzilla.redhat.com/1372193): [geo-rep]: AttributeError: 'Popen' object has no attribute 'elines' +- [#1372211](https://bugzilla.redhat.com/1372211): write-behind: flush stuck by former failed write +- [#1372356](https://bugzilla.redhat.com/1372356): glusterd experiencing repeated connect/disconnect messages when shd is down +- [#1372553](https://bugzilla.redhat.com/1372553): "gluster vol status all clients --xml" doesn't generate xml if there is a failure in between +- [#1372584](https://bugzilla.redhat.com/1372584): Fix the test case http://review.gluster.org/#/c/15385/ +- [#1373072](https://bugzilla.redhat.com/1373072): Event pushed even if Answer is No in the Volume Stop and Delete prompt +- [#1373373](https://bugzilla.redhat.com/1373373): Worker crashes with EINVAL errors +- [#1373520](https://bugzilla.redhat.com/1373520): [Bitrot]: Recovery fails of a corrupted hardlink (and the corresponding parent file) in a disperse volume +- [#1373741](https://bugzilla.redhat.com/1373741): [geo-replication]: geo-rep Status is not showing bricks from one of the nodes +- [#1374093](https://bugzilla.redhat.com/1374093): glusterfs: create a directory with 0464 mode return EIO error +- [#1374286](https://bugzilla.redhat.com/1374286): [geo-rep]: defunct tar process while using tar+ssh sync +- [#1374584](https://bugzilla.redhat.com/1374584): Detach tier commit is allowed when detach tier start goes into failed state +- [#1374587](https://bugzilla.redhat.com/1374587): gf_event python fails with ImportError +- [#1374993](https://bugzilla.redhat.com/1374993): bug-963541.t spurious failure +- [#1375181](https://bugzilla.redhat.com/1375181): /var/tmp/rpm-tmp.KPCugR: line 2: /bin/systemctl: No such file or directory +- [#1375431](https://bugzilla.redhat.com/1375431): [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt +- [#1375526](https://bugzilla.redhat.com/1375526): Kill rpc.statd on Linux machines +- [#1375532](https://bugzilla.redhat.com/1375532): Rpm installation fails with conflicts error for eventsconfig.json file +- [#1376671](https://bugzilla.redhat.com/1376671): Rebalance fails to start if a brick is down +- 
[#1376693](https://bugzilla.redhat.com/1376693): RFE: Provide a prompt when enabling gluster-NFS +- [#1377097](https://bugzilla.redhat.com/1377097): The GlusterFS Callback RPC-calls always use RPC/XID 42 +- [#1377341](https://bugzilla.redhat.com/1377341): out-of-tree builds generate XDR headers and source files in the original directory +- [#1377427](https://bugzilla.redhat.com/1377427): incorrect fuse dumping for WRITE +- [#1377556](https://bugzilla.redhat.com/1377556): Files not being opened with o_direct flag during random read operation (Glusterfs 3.8.2) +- [#1377584](https://bugzilla.redhat.com/1377584): memory leak problems are found in daemon:glusterd, server:glusterfsd and client:glusterfs +- [#1377607](https://bugzilla.redhat.com/1377607): Volume restart couldn't re-export the volume exported via ganesha. +- [#1377864](https://bugzilla.redhat.com/1377864): Creation of files on hot tier volume taking very long time +- [#1378057](https://bugzilla.redhat.com/1378057): glusterd fails to start without installing glusterfs-events package +- [#1378072](https://bugzilla.redhat.com/1378072): Modifications to AFR Events +- [#1378305](https://bugzilla.redhat.com/1378305): DHT: remove unused structure members +- [#1378436](https://bugzilla.redhat.com/1378436): build: python-ctypes no longer exists in Fedora Rawhide +- [#1378492](https://bugzilla.redhat.com/1378492): warning messages seen in glusterd logs for each 'gluster volume status' command +- [#1378684](https://bugzilla.redhat.com/1378684): Poor smallfile read performance on Arbiter volume compared to Replica 3 volume +- [#1378778](https://bugzilla.redhat.com/1378778): Add a test script for compound fops changes in AFR +- [#1378842](https://bugzilla.redhat.com/1378842): [RFE] 'gluster volume get' should implement the way to retrieve volume options using the volume name 'all' +- [#1379223](https://bugzilla.redhat.com/1379223): "nfs.disable: on" is not showing in Vol info by default for the 3.7.x volumes after updating to 3.9.0 +- [#1379285](https://bugzilla.redhat.com/1379285): gfapi: Fix fd ref leaks +- [#1379328](https://bugzilla.redhat.com/1379328): Boolean attributes are published as string +- [#1379330](https://bugzilla.redhat.com/1379330): eventsapi/georep: Events are not available for Checkpoint and Status Change +- [#1379511](https://bugzilla.redhat.com/1379511): Fix spurious failures in open-behind.t +- [#1379655](https://bugzilla.redhat.com/1379655): Recording (ffmpeg) processes on FUSE get hung +- [#1379720](https://bugzilla.redhat.com/1379720): errors appear in brick and nfs logs and getting stale files on NFS clients +- [#1379769](https://bugzilla.redhat.com/1379769): GlusterFS fails to build on old Linux distros with linux/oom.h missing +- [#1380249](https://bugzilla.redhat.com/1380249): Huge memory usage of FUSE client +- [#1380275](https://bugzilla.redhat.com/1380275): client ID should logged when SSL connection fails +- [#1381115](https://bugzilla.redhat.com/1381115): Polling failure errors getting when volume is started&stopped with SSL enabled setup. 
+- [#1381421](https://bugzilla.redhat.com/1381421): afr fix shd log message error +- [#1381830](https://bugzilla.redhat.com/1381830): Regression caused by enabling client-io-threads by default +- [#1382236](https://bugzilla.redhat.com/1382236): glusterfind pre session hangs indefinitely +- [#1382258](https://bugzilla.redhat.com/1382258): RFE: Support to update NFS-Ganesha export options dynamically +- [#1382266](https://bugzilla.redhat.com/1382266): md-cache: Invalidate cache entry in case of OPEN with O_TRUNC +- [#1384142](https://bugzilla.redhat.com/1384142): crypt: changes needed for openssl-1.1 (coming in Fedora 26) +- [#1384297](https://bugzilla.redhat.com/1384297): glusterfs can't self heal character dev file for invalid dev_t parameters +- [#1384906](https://bugzilla.redhat.com/1384906): arbiter volume write performance is bad with sharding +- [#1385104](https://bugzilla.redhat.com/1385104): invalid argument warning messages seen in fuse client logs 2016-09-30 06:34:58.938667] W [dict.c:418:dict_set] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x58722) 0-dict: !this || !value for key=link-count [Invalid argument] +- [#1385575](https://bugzilla.redhat.com/1385575): pmap_signin event fails to update brickinfo->signed_in flag +- [#1385593](https://bugzilla.redhat.com/1385593): Fix some spelling mistakes in comments and log messages +- [#1385839](https://bugzilla.redhat.com/1385839): Incorrect volume type in the "glusterd_state" file generated using CLI "gluster get-state" +- [#1386088](https://bugzilla.redhat.com/1386088): Memory Leaks in snapshot code path +- [#1386097](https://bugzilla.redhat.com/1386097): 4 of 8 bricks (2 dht subvols) crashed on systemic setup +- [#1386123](https://bugzilla.redhat.com/1386123): geo-replica slave node goes faulty for non-root user session due to fail to locate gluster binary +- [#1386141](https://bugzilla.redhat.com/1386141): Error and warning message getting while removing glusterfs-events package +- [#1386188](https://bugzilla.redhat.com/1386188): Asynchronous Unsplit-brain still causes Input/Output Error on system calls +- [#1386200](https://bugzilla.redhat.com/1386200): Log all published events +- [#1386247](https://bugzilla.redhat.com/1386247): [Eventing]: 'gluster volume tier start force' does not generate a TIER_START event +- [#1386450](https://bugzilla.redhat.com/1386450): Continuous warning messages getting when one of the cluster node is down on SSL setup. +- [#1386516](https://bugzilla.redhat.com/1386516): [Eventing]: UUID is showing zeros in the event message for the peer probe operation.
+- [#1386626](https://bugzilla.redhat.com/1386626): fuse mount point not accessible +- [#1386766](https://bugzilla.redhat.com/1386766): trashcan max file limit cannot go beyond 1GB +- [#1387160](https://bugzilla.redhat.com/1387160): clone creation with older names in a system fails +- [#1387207](https://bugzilla.redhat.com/1387207): [Eventing]: Random VOLUME_SET events seen when no operation is done on the gluster cluster +- [#1387241](https://bugzilla.redhat.com/1387241): Pass proper permission to acl_permit() in posix_acl_open() +- [#1387652](https://bugzilla.redhat.com/1387652): [Eventing]: BRICK_DISCONNECTED events seen when a tier volume is stopped +- [#1387864](https://bugzilla.redhat.com/1387864): [Eventing]: 'gluster vol bitrot scrub ondemand' does not produce an event +- [#1388010](https://bugzilla.redhat.com/1388010): [Eventing]: 'VOLUME_REBALANCE' event messages have an incorrect volume name +- [#1388062](https://bugzilla.redhat.com/1388062): throw warning to show that older tier commands are depricated and will be removed. +- [#1388292](https://bugzilla.redhat.com/1388292): performance.read-ahead on results in processes on client stuck in IO wait +- [#1388348](https://bugzilla.redhat.com/1388348): glusterd: Display proper error message and fail the command if S32gluster_enable_shared_storage.sh hook script is not present during gluster volume set all cluster.enable-shared-storage command +- [#1388401](https://bugzilla.redhat.com/1388401): Labelled geo-rep checkpoints hide geo-replication status +- [#1388861](https://bugzilla.redhat.com/1388861): build: python on Debian-based dists use .../lib/python2.7/dist-packages instead of .../site-packages +- [#1388862](https://bugzilla.redhat.com/1388862): [Eventing]: Events not seen when command is triggered from one of the peer nodes +- [#1388877](https://bugzilla.redhat.com/1388877): Continuous errors getting in the mount log when the volume mount server glusterd is down. 
+- [#1389293](https://bugzilla.redhat.com/1389293): build: incorrect Requires: for portblock resource agent +- [#1389481](https://bugzilla.redhat.com/1389481): glusterfind fails to list files from tiered volume +- [#1389697](https://bugzilla.redhat.com/1389697): Remove-brick status output is showing status of fix-layout instead of original remove-brick status output +- [#1389746](https://bugzilla.redhat.com/1389746): Refresh config fails while exporting subdirectories within a volume +- [#1390050](https://bugzilla.redhat.com/1390050): Elasticsearch get CorruptIndexException errors when running with GlusterFS persistent storage +- [#1391086](https://bugzilla.redhat.com/1391086): gfapi clients crash while using async calls due to double fd_unref +- [#1391387](https://bugzilla.redhat.com/1391387): The FUSE client log is filling up with posix_acl_default and posix_acl_access messages +- [#1392167](https://bugzilla.redhat.com/1392167): SMB[md-cache Private Build]:Error messages in brick logs related to upcall_cache_invalidate gf_uuid_is_null +- [#1392445](https://bugzilla.redhat.com/1392445): Hosted Engine VM paused post replace-brick operation +- [#1392713](https://bugzilla.redhat.com/1392713): inconsistent file permissions b/w write permission and sticky bits(---------T ) displayed when IOs are going on with md-cache enabled (and within the invalidation cycle) +- [#1392772](https://bugzilla.redhat.com/1392772): [setxattr_cbk] "Permission denied" warning messages are seen in logs while running pjd-fstest suite +- [#1392865](https://bugzilla.redhat.com/1392865): Better logging when reporting failures of the kind " Failing MKNOD as quorum is not met" +- [#1393259](https://bugzilla.redhat.com/1393259): stat of file is hung with possible deadlock +- [#1393678](https://bugzilla.redhat.com/1393678): Worker restarts on log-rsync-performance config update +- [#1394131](https://bugzilla.redhat.com/1394131): [md-cache]: All bricks crashed while performing symlink and rename from client at the same time +- [#1394224](https://bugzilla.redhat.com/1394224): "nfs-grace-monitor" timed out messages observed +- [#1394548](https://bugzilla.redhat.com/1394548): Make debugging EACCES errors easier to debug +- [#1394719](https://bugzilla.redhat.com/1394719): libgfapi core dumps +- [#1394881](https://bugzilla.redhat.com/1394881): Failed to enable nfs-ganesha after disabling nfs-ganesha cluster +- [#1395261](https://bugzilla.redhat.com/1395261): Seeing error messages [snapview-client.c:283:gf_svc_lookup_cbk] and [dht-helper.c:1666:dht_inode_ctx_time_update] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x5d75c) +- [#1395648](https://bugzilla.redhat.com/1395648): ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes +- [#1395660](https://bugzilla.redhat.com/1395660): Checkpoint completed event missing master node detail +- [#1395687](https://bugzilla.redhat.com/1395687): Client side IObuff leaks at a high pace consumes complete client memory and hence making gluster volume inaccessible +- [#1395993](https://bugzilla.redhat.com/1395993): heal info --xml when bricks are down in a systemic environment is not displaying anything even after more than 30minutes +- [#1396038](https://bugzilla.redhat.com/1396038): refresh-config fails and crashes ganesha when mdcache is enabled on the volume.
+- [#1396048](https://bugzilla.redhat.com/1396048): A hard link is lost during rebalance+lookup +- [#1396062](https://bugzilla.redhat.com/1396062): [geo-rep]: Worker crashes seen while renaming directories in loop +- [#1396081](https://bugzilla.redhat.com/1396081): Wrong value in Last Synced column during Hybrid Crawl +- [#1396364](https://bugzilla.redhat.com/1396364): Scheduler : Scheduler should not depend on glusterfs-events package +- [#1396793](https://bugzilla.redhat.com/1396793): [Ganesha] : Ganesha crashes intermittently during nfs-ganesha restarts. +- [#1396807](https://bugzilla.redhat.com/1396807): capture volume tunables in get-state dump +- [#1396952](https://bugzilla.redhat.com/1396952): I/O errors on FUSE mount point when reading and writing from 2 clients +- [#1397052](https://bugzilla.redhat.com/1397052): OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed. +- [#1397177](https://bugzilla.redhat.com/1397177): memory leak when using libgfapi +- [#1397419](https://bugzilla.redhat.com/1397419): glusterfs_ctx_defaults_init is re-initializing ctx->locks +- [#1397424](https://bugzilla.redhat.com/1397424): PEER_REJECT, EVENT_BRICKPATH_RESOLVE_FAILED, EVENT_COMPARE_FRIEND_VOLUME_FAILED are not seen +- [#1397754](https://bugzilla.redhat.com/1397754): [SAMBA-CIFS] : IO hungs in cifs mount while graph switch on & off +- [#1397795](https://bugzilla.redhat.com/1397795): NFS-Ganesha:Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services +- [#1398076](https://bugzilla.redhat.com/1398076): SEEK_HOLE/ SEEK_DATA doesn't return the correct offset +- [#1398226](https://bugzilla.redhat.com/1398226): With compound fops on, client process crashes when a replica is brought down while IO is in progress +- [#1398566](https://bugzilla.redhat.com/1398566): self-heal info command hangs after triggering self-heal +- [#1399031](https://bugzilla.redhat.com/1399031): build: add systemd dependency to glusterfs sub-package +- [#1399072](https://bugzilla.redhat.com/1399072): [Disperse] healing should not start if only data bricks are UP +- [#1399134](https://bugzilla.redhat.com/1399134): GlusterFS client crashes during remove-brick operation +- [#1399154](https://bugzilla.redhat.com/1399154): After ganesha node reboot/shutdown, portblock process goes to FAILED state +- [#1399186](https://bugzilla.redhat.com/1399186): [GANESHA] Export ID changed during volume start and stop with message "lookup_export failed with Export id not found" in ganesha.log +- [#1399578](https://bugzilla.redhat.com/1399578): [compound FOPs]: Memory leak while doing FOPs with brick down +- [#1399592](https://bugzilla.redhat.com/1399592): Memory leak when self healing daemon queue is full +- [#1399780](https://bugzilla.redhat.com/1399780): Use standard refcounting for structures where possible +- [#1399995](https://bugzilla.redhat.com/1399995): Dump volume specific options in get-state output in a more parseable manner +- [#1400013](https://bugzilla.redhat.com/1400013): [USS,SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled +- [#1400026](https://bugzilla.redhat.com/1400026): Duplicate value assigned to GD_MSG_DAEMON_STATE_REQ_RCVD and GD_MSG_BRICK_CLEANUP_SUCCESS messages +- [#1400237](https://bugzilla.redhat.com/1400237): Ganesha services are not stopped when pacemaker quorum is lost +- [#1400613](https://bugzilla.redhat.com/1400613): [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing 
cluster nodes +- [#1400818](https://bugzilla.redhat.com/1400818): possible memory leak on client when writing to a file while another client issues a truncate +- [#1401095](https://bugzilla.redhat.com/1401095): log the error when locking the brick directory fails +- [#1401218](https://bugzilla.redhat.com/1401218): Fix compound fops memory leaks +- [#1401404](https://bugzilla.redhat.com/1401404): [Arbiter] IO's Halted and heal info command hung +- [#1401777](https://bugzilla.redhat.com/1401777): atime becomes zero when truncating file via ganesha (or gluster-NFS) +- [#1401801](https://bugzilla.redhat.com/1401801): [RFE] Use Host UUID to find local nodes to spawn workers +- [#1401812](https://bugzilla.redhat.com/1401812): RFE: Make readdirp parallel in dht +- [#1401822](https://bugzilla.redhat.com/1401822): [GANESHA]Unable to export the ganesha volume after doing volume start and stop +- [#1401836](https://bugzilla.redhat.com/1401836): update documentation to readthedocs.io +- [#1401921](https://bugzilla.redhat.com/1401921): glusterfsd crashed while taking snapshot using scheduler +- [#1402237](https://bugzilla.redhat.com/1402237): Bad spacing in error message in cli +- [#1402261](https://bugzilla.redhat.com/1402261): cli: compile warnings (unused var) if building without bd xlator +- [#1402369](https://bugzilla.redhat.com/1402369): Getting the warning message while erasing the gluster "glusterfs-server" package. +- [#1402710](https://bugzilla.redhat.com/1402710): ls and move hung on disperse volume +- [#1402730](https://bugzilla.redhat.com/1402730): self-heal not happening, as self-heal info lists the same pending shards to be healed +- [#1402828](https://bugzilla.redhat.com/1402828): Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped +- [#1402841](https://bugzilla.redhat.com/1402841): Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress. +- [#1403130](https://bugzilla.redhat.com/1403130): [GANESHA] Adding a node to cluster failed to allocate resource-agents to new node. 
+- [#1403780](https://bugzilla.redhat.com/1403780): Incorrect incrementation of volinfo refcnt during volume start +- [#1404118](https://bugzilla.redhat.com/1404118): Snapshot: After snapshot restore failure , snapshot goes into inconsistent state +- [#1404168](https://bugzilla.redhat.com/1404168): Upcall: Possible use after free when log level set to TRACE +- [#1404181](https://bugzilla.redhat.com/1404181): [Ganesha+SSL] : Ganesha crashes on all nodes on volume restarts +- [#1404410](https://bugzilla.redhat.com/1404410): [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6 +- [#1404573](https://bugzilla.redhat.com/1404573): tests/bugs/snapshot/bug-1316437.t test is causing spurious failure +- [#1404678](https://bugzilla.redhat.com/1404678): [geo-rep]: Config commands fail when the status is 'Created' +- [#1404905](https://bugzilla.redhat.com/1404905): DHT : file rename operation is successful but log has error 'key:trusted.glusterfs.dht.linkto error:File exists' , 'setting xattrs on failed (File exists)' +- [#1405165](https://bugzilla.redhat.com/1405165): Allow user to disable mem-pool +- [#1405301](https://bugzilla.redhat.com/1405301): Fix the failure in tests/basic/gfapi/bug1291259.t +- [#1405478](https://bugzilla.redhat.com/1405478): Keepalive should be set for IPv6 & IPv4 +- [#1405554](https://bugzilla.redhat.com/1405554): Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t +- [#1405775](https://bugzilla.redhat.com/1405775): GlusterFS process crashed after add-brick +- [#1405902](https://bugzilla.redhat.com/1405902): Fix spurious failure in tests/bugs/replicate/bug-1402730.t +- [#1406224](https://bugzilla.redhat.com/1406224): VM pauses due to storage I/O error, when one of the data brick is down with arbiter/replica volume +- [#1406249](https://bugzilla.redhat.com/1406249): [GANESHA] Deleting a node from ganesha cluster deletes the volume entry from /etc/ganesha/ganesha.conf file +- [#1406252](https://bugzilla.redhat.com/1406252): Free xdr-allocated compound request and response arrays +- [#1406348](https://bugzilla.redhat.com/1406348): [Eventing]: POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op +- [#1406410](https://bugzilla.redhat.com/1406410): [GANESHA] Adding node to ganesha cluster is not assigning the correct VIP to the new node +- [#1406411](https://bugzilla.redhat.com/1406411): Fail add-brick command if replica count changes +- [#1406878](https://bugzilla.redhat.com/1406878): ec prove tests fail in FB build environment. 
+- [#1408115](https://bugzilla.redhat.com/1408115): Remove-brick rebalance failed while rm -rf is in progress +- [#1408131](https://bugzilla.redhat.com/1408131): Remove tests/distaf +- [#1408395](https://bugzilla.redhat.com/1408395): [Arbiter] After Killing a brick writes drastically slow down +- [#1408712](https://bugzilla.redhat.com/1408712): with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host +- [#1408755](https://bugzilla.redhat.com/1408755): Remove tests/basic/rpm.t +- [#1408757](https://bugzilla.redhat.com/1408757): Fix failure of split-brain-favorite-child-policy.t in CentOS7 +- [#1408758](https://bugzilla.redhat.com/1408758): tests/bugs/glusterd/bug-913555.t fails spuriously +- [#1409078](https://bugzilla.redhat.com/1409078): RFE: Need a command to check op-version compatibility of clients +- [#1409186](https://bugzilla.redhat.com/1409186): Dict_t leak in dht_migration_complete_check_task and dht_rebalance_inprogress_task +- [#1409202](https://bugzilla.redhat.com/1409202): Warning messages throwing when EC volume offline brick comes up are difficult to understand for end user. +- [#1409206](https://bugzilla.redhat.com/1409206): Extra lookup/fstats are sent over the network when a brick is down. +- [#1409727](https://bugzilla.redhat.com/1409727): [ganesha + EC]posix compliance rename tests failed on EC volume with nfs-ganesha mount. +- [#1409730](https://bugzilla.redhat.com/1409730): [ganesha+ec]: Contents of original file are not seen when hardlink is created +- [#1410071](https://bugzilla.redhat.com/1410071): [Geo-rep] Geo replication status detail without master and slave volume args +- [#1410313](https://bugzilla.redhat.com/1410313): brick crashed on systemic setup +- [#1410355](https://bugzilla.redhat.com/1410355): Remove-brick rebalance failed while rm -rf is in progress +- [#1410375](https://bugzilla.redhat.com/1410375): [Mdcache] clients being served wrong information about a file, can lead to file inconsistency +- [#1410777](https://bugzilla.redhat.com/1410777): ganesha service crashed on all nodes of ganesha cluster on disperse volume when doing lookup while copying files remotely using scp +- [#1410843](https://bugzilla.redhat.com/1410843): common-ha: switch to storhaug HA, first step, remove resource agents and setup script +- [#1410853](https://bugzilla.redhat.com/1410853): glusterfs-server should depend on firewalld-filesystem +- [#1411607](https://bugzilla.redhat.com/1411607): [Geo-rep] If for some reason MKDIR failed to sync, it should not proceed further. 
+- [#1411625](https://bugzilla.redhat.com/1411625): Spurious split-brain error messages are seen in rebalance logs +- [#1411999](https://bugzilla.redhat.com/1411999): URL to Fedora distgit no longer uptodate +- [#1412002](https://bugzilla.redhat.com/1412002): Examples/getvolfile.py is not pep8 compliant +- [#1412069](https://bugzilla.redhat.com/1412069): No rollback of renames on succeeded subvols during failure +- [#1412174](https://bugzilla.redhat.com/1412174): Memory leak on mount/fuse when setxattr fails +- [#1412467](https://bugzilla.redhat.com/1412467): Remove tests/bugs/distribute/bug-1063230.t +- [#1412489](https://bugzilla.redhat.com/1412489): Upcall: Possible memleak if inode_ctx_set fails +- [#1412689](https://bugzilla.redhat.com/1412689): [Geo-rep] Slave mount log file is cluttered by logs of multiple active mounts +- [#1412917](https://bugzilla.redhat.com/1412917): OOM kill of glusterfsd during continuous add-bricks +- [#1412918](https://bugzilla.redhat.com/1412918): fuse: Resource leak in fuse-helper under GF_SOLARIS_HOST_OS +- [#1413967](https://bugzilla.redhat.com/1413967): geo-rep session faulty with ChangelogException "No such file or directory" +- [#1415226](https://bugzilla.redhat.com/1415226): packaging: python/python2(/python3) cleanup +- [#1415245](https://bugzilla.redhat.com/1415245): core: max op version +- [#1415279](https://bugzilla.redhat.com/1415279): libgfapi: remove/revert glfs_ipc() changes targeted for 4.0 +- [#1415581](https://bugzilla.redhat.com/1415581): RFE : Create trash directory only when its is enabled +- [#1415915](https://bugzilla.redhat.com/1415915): RFE: An administrator friendly way to determine rebalance completion time +- [#1415918](https://bugzilla.redhat.com/1415918): Cache security.ima xattrs as well +- [#1416285](https://bugzilla.redhat.com/1416285): EXPECT_WITHIN is taking too much time even if the result matches with expected value +- [#1416416](https://bugzilla.redhat.com/1416416): Improve output of "gluster volume status detail" +- [#1417027](https://bugzilla.redhat.com/1417027): option performance.parallel-readdir should honor cluster.readdir-optimize +- [#1417028](https://bugzilla.redhat.com/1417028): option performance.parallel-readdir can cause OOM in large volumes +- [#1417042](https://bugzilla.redhat.com/1417042): glusterd restart is starting the offline shd daemon on other node in the cluster +- [#1417135](https://bugzilla.redhat.com/1417135): [Stress] : SHD Logs flooded with "Heal Failed" messages,filling up "/" quickly +- [#1417521](https://bugzilla.redhat.com/1417521): [SNAPSHOT] With all USS plugin enable .snaps directory is not visible in cifs mount as well as windows mount +- [#1417527](https://bugzilla.redhat.com/1417527): glusterfind: After glusterfind pre command execution all temporary files and directories /usr/var/lib/misc/glusterfsd/glusterfind/// should be removed +- [#1417804](https://bugzilla.redhat.com/1417804): debug/trace: Print iatts of individual entries in readdirp callback for better debugging experience +- [#1418091](https://bugzilla.redhat.com/1418091): [RFE] Support multiple bricks in one process (multiplexing) +- [#1418536](https://bugzilla.redhat.com/1418536): Portmap allocates way too much memory (256KB) on stack +- [#1418541](https://bugzilla.redhat.com/1418541): [Ganesha+SSL] : Bonnie++ hangs during rewrites. +- [#1418623](https://bugzilla.redhat.com/1418623): client process crashed due to write behind translator
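+
+### Example: requesting a statedump from a gfapi application
+A minimal, illustrative sketch for the statedump feature noted above. It
+assumes the CLI accepts a `client <host>:<pid>` argument for gfapi processes;
+the volume name (`myvol`), host (`client1`), application name (`my-gfapi-app`)
+and PID are placeholders, and the exact syntax and dump location should be
+verified against the gluster CLI documentation for the installed build.
+```bash
+# On the client machine, find the PID of the gfapi-based application
+# (for example qemu, smbd, nfs-ganesha, or a custom libgfapi consumer).
+pgrep -f my-gfapi-app
+
+# From any node in the trusted storage pool, request a statedump of that
+# client process for the given volume (4242 is a placeholder PID).
+gluster volume statedump myvol client client1:4242
+
+# The dump is written on the client side, typically under /var/run/gluster
+# as glusterdump.<pid>.dump.<timestamp>.
+```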