## Release Notes for GlusterFS 3.7.2

This is a bugfix release. The Release Notes for [3.7.0](3.7.0.md) and [3.7.1](3.7.1.md) contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.7 stable release.

### Bugs Fixed

- [1218570](https://bugzilla.redhat.com/1218570): `gluster volume heal split-brain` tries to heal even with insufficient arguments
- [1219953](https://bugzilla.redhat.com/1219953): The python-gluster RPM should be 'noarch'
- [1221473](https://bugzilla.redhat.com/1221473): BVT: Posix crash while running BVT on 3.7beta2 build on rhel6.6
- [1221656](https://bugzilla.redhat.com/1221656): rebalance failing on one of the nodes
- [1221941](https://bugzilla.redhat.com/1221941): glusterfsd: bricks crash while executing ls on nfs-ganesha vers=3
- [1222065](https://bugzilla.redhat.com/1222065): GlusterD fills the logs when the NFS-server is disabled
- [1223390](https://bugzilla.redhat.com/1223390): packaging: .pc files included in -api-devel should be in -devel
- [1223890](https://bugzilla.redhat.com/1223890): readdirp returns 64-bit inodes even if enable-ino32 is set
- [1225320](https://bugzilla.redhat.com/1225320): ls command failed with features.read-only on while mounting ec volume
- [1225548](https://bugzilla.redhat.com/1225548): [Backup]: Misleading error message when glusterfind delete is given with a non-existent volume
- [1225551](https://bugzilla.redhat.com/1225551): [Backup]: Glusterfind session entry persists even after volume is deleted
- [1225565](https://bugzilla.redhat.com/1225565): [Backup]: RFE - Glusterfind CLI commands need to respond based on volume's start/stop state
- [1225574](https://bugzilla.redhat.com/1225574): [geo-rep]: client-rpc-fops.c:172:client3_3_symlink_cbk can be handled better/or ignore these messages in the slave cluster log
- [1225796](https://bugzilla.redhat.com/1225796): Spurious failure in tests/bugs/disperse/bug-1161621.t
- [1225809](https://bugzilla.redhat.com/1225809): [DHT-REBALANCE]-DataLoss: The data appended to a file during its migration will be lost once the migration is done
- [1225839](https://bugzilla.redhat.com/1225839): [DHT:REBALANCE]: xattrs set on the file during rebalance migration will be lost after migration is over
- [1225842](https://bugzilla.redhat.com/1225842): Minor improvements and cleanup for the build system
- [1225859](https://bugzilla.redhat.com/1225859): Glusterfs client crash during fd migration after graph switch
- [1225940](https://bugzilla.redhat.com/1225940): DHT: lookup-unhashed feature breaks runtime compatibility with older client versions
- [1225999](https://bugzilla.redhat.com/1225999): Update gluster op version to 30701
- [1226117](https://bugzilla.redhat.com/1226117): [RFE] Return proper error codes in case of snapshot failure
- [1226213](https://bugzilla.redhat.com/1226213): snap_scheduler script must be usable as a python module
- [1226224](https://bugzilla.redhat.com/1226224): [RFE] Quota: Make "quota-deem-statfs" option ON by default when quota is enabled
- [1226272](https://bugzilla.redhat.com/1226272): Volume heal info not reporting files in split brain and core dumping, after upgrading to 3.7.0
- [1226789](https://bugzilla.redhat.com/1226789): quota: ENOTCONN periodically seen in logs when setting hard/soft timeout during I/O
- [1226792](https://bugzilla.redhat.com/1226792): Statfs is hung because of frame loss in quota
- [1226880](https://bugzilla.redhat.com/1226880): Fix infinite looping in shard_readdir(p) on '/'
- [1226962](https://bugzilla.redhat.com/1226962): nfs-ganesha: Getting issues for nfs-ganesha on new glusterfs nodes; error is `/etc/ganesha/ganesha-ha.conf: line 11: VIP_=: command not found`
- [1227028](https://bugzilla.redhat.com/1227028): nfs-ganesha: Discrepancies with lock states recovery during migration
- [1227167](https://bugzilla.redhat.com/1227167): NFS: IOZone tests hang, disconnects and hung tasks seen in logs
- [1227235](https://bugzilla.redhat.com/1227235): glusterfsd crashed on a quota enabled volume where snapshots were scheduled
- [1227572](https://bugzilla.redhat.com/1227572): Sharding - Fix posix compliance test failures
- [1227576](https://bugzilla.redhat.com/1227576): libglusterfs: Copy _all_ members of gf_dirent_t in entry_copy()
- [1227611](https://bugzilla.redhat.com/1227611): Fix deadlock in timer-wheel del_timer() API
- [1227615](https://bugzilla.redhat.com/1227615): "Snap_scheduler disable" should have different return codes for different failures
- [1227674](https://bugzilla.redhat.com/1227674): Honour afr self-heal volume set options from clients
- [1227677](https://bugzilla.redhat.com/1227677): Glusterd crashes and cannot start after rebalance
- [1227887](https://bugzilla.redhat.com/1227887): Update gluster op version to 30702
- [1227916](https://bugzilla.redhat.com/1227916): auth_cache_entry structure barely gets cached
- [1228045](https://bugzilla.redhat.com/1228045): Scrubber should be disabled once bitrot is reset
- [1228065](https://bugzilla.redhat.com/1228065): Though the brick daemon is not running, gluster vol status command shows the pid
- [1228100](https://bugzilla.redhat.com/1228100): Disperse volume: brick logs are getting filled with "anonymous fd creation failed" messages
- [1228160](https://bugzilla.redhat.com/1228160): linux untar hung after the bricks came up in an 8+4 config
- [1228181](https://bugzilla.redhat.com/1228181): Simplify creation and set-up of meta-volume (shared storage)
- [1228510](https://bugzilla.redhat.com/1228510): Building packages on RHEL-5 based distributions fails
- [1228592](https://bugzilla.redhat.com/1228592): Glusterd fails to start after volume restore, tier attach and node reboot
- [1228601](https://bugzilla.redhat.com/1228601): [Virt-RHGS] Creating an image on a gluster volume using qemu-img + gfapi throws error messages related to rpc_transport
- [1228729](https://bugzilla.redhat.com/1228729): nfs-ganesha: rmdir logs "remote operation failed: Stale file handle" even though the operation is successful
- [1229100](https://bugzilla.redhat.com/1229100): Do not invoke glfs_fini for glfs-heal processes
- [1229282](https://bugzilla.redhat.com/1229282): Disperse volume: Huge memory leak of glusterfsd process
- [1229331](https://bugzilla.redhat.com/1229331): Disperse volume: glusterfs crashed
- [1229550](https://bugzilla.redhat.com/1229550): [AFR-V2] - Fix shd coredump from tests/bugs/glusterd/bug-948686.t
- [1230018](https://bugzilla.redhat.com/1230018): [SNAPSHOT]: Initializing snap_scheduler from all nodes at the same time should give a proper error message
- [1230026](https://bugzilla.redhat.com/1230026): BVT: glusterd crashed and dumped during upgrade (on rhel7.1 server)
- [1230167](https://bugzilla.redhat.com/1230167): [Snapshot] Python crashes with a traceback notification when shared storage is unmounted from a Storage Node
- [1230350](https://bugzilla.redhat.com/1230350): Client hung up on listing the files of a particular directory
- [1230560](https://bugzilla.redhat.com/1230560): data tiering: do not allow tiering related volume set options on a regular volume
- [1230563](https://bugzilla.redhat.com/1230563): tiering: glusterd crashed when trying to detach-tier commit force on a non-tiered volume
- [1230653](https://bugzilla.redhat.com/1230653): Disperse volume: client crashed while running IO
- [1230687](https://bugzilla.redhat.com/1230687): [Backup]: 'New' as well as 'Modify' entry getting recorded for a newly created hardlink
- [1230691](https://bugzilla.redhat.com/1230691): [geo-rep]: use_meta_volume config option should be validated for its values
- [1230693](https://bugzilla.redhat.com/1230693): [geo-rep]: RENAMEs are not synced to the slave when quota is enabled
- [1230694](https://bugzilla.redhat.com/1230694): [Backup]: Glusterfind pre fails with an htime xattr update error, resulting in historical changelogs not being available
- [1230712](https://bugzilla.redhat.com/1230712): [Backup]: Chown/chgrp for a directory does not get recorded as a MODIFY entry in the outfile
- [1230715](https://bugzilla.redhat.com/1230715): [Backup]: Glusterfind delete does not delete the session related information present in $GLUSTERD_WORKDIR
- [1230783](https://bugzilla.redhat.com/1230783): [Backup]: Crash observed when glusterfind pre is run after deleting a directory containing files
- [1230791](https://bugzilla.redhat.com/1230791): [Backup]: 'Glusterfind list' should display an appropriate output when there are no active sessions
- [1231213](https://bugzilla.redhat.com/1231213): [geo-rep]: rsync should be made a dependent package for geo-replication
- [1231366](https://bugzilla.redhat.com/1231366): NFS Authentication Performance Issue
- [1231516](https://bugzilla.redhat.com/1231516): glusterfsd process at 100% cpu, upcall busy loop in reaper thread
- [1231646](https://bugzilla.redhat.com/1231646): [glusterd] glusterd crashed while trying to remove bricks - one selected from each replica set - after shrinking nX3 to nX2 to nX1
- [1231832](https://bugzilla.redhat.com/1231832): bitrot: (rfe) object signing wait time value should be tunable
- [1232002](https://bugzilla.redhat.com/1232002): nfs-ganesha: 8 node pcs cluster setup fails
- [1232135](https://bugzilla.redhat.com/1232135): Quota: "E [quota.c:1197:quota_check_limit] 0-ecvol-quota: Failed to check quota size limit" in brick logs
- [1232143](https://bugzilla.redhat.com/1232143): nfs-ganesha: bringing up nfs-ganesha on three nodes shows an error although pcs status and the ganesha process are running on all three nodes
- [1232155](https://bugzilla.redhat.com/1232155): Not able to export a volume using nfs-ganesha
- [1232335](https://bugzilla.redhat.com/1232335): nfs-ganesha: volume is not in the list of exports in case of volume stop followed by volume start
- [1232589](https://bugzilla.redhat.com/1232589): [Bitrot] Gluster v set bitrot enable command succeeds, which is not a supported way to enable bitrot
- [1233042](https://bugzilla.redhat.com/1233042): use after free bug in dht
- [1233117](https://bugzilla.redhat.com/1233117): quota: quota list displays double the size of the previous value, post heal completion
- [1233484](https://bugzilla.redhat.com/1233484): Possible double execution of the state machine for fops that start other subfops
- [1233056](https://bugzilla.redhat.com/1233056): Not able to create snapshots for geo-replicated volumes when the session is created with the root user
- [1233044](https://bugzilla.redhat.com/1233044): Segmentation faults are observed on all the master nodes
- [1232179](https://bugzilla.redhat.com/1232179): Objects are not signed upon truncate()

### Known Issues

- [1212842](https://bugzilla.redhat.com/1212842): tar on a glusterfs mount displays "file changed as we read it" even though the file was not changed
- [1229226](https://bugzilla.redhat.com/1229226): Gluster split-brain not logged and data integrity not enforced
- [1213352](https://bugzilla.redhat.com/1213352): nfs-ganesha: HA issue, the iozone process is not moving ahead once nfs-ganesha is killed
- [1220270](https://bugzilla.redhat.com/1220270): nfs-ganesha: Rename fails while executing Cthon general category test
- [1214169](https://bugzilla.redhat.com/1214169): glusterfsd crashed while rebalance and self-heal were in progress
- [1225567](https://bugzilla.redhat.com/1225567): Traceback "ValueError: filedescriptor out of range in select()" observed while creating a huge set of data on the master
- [1226233](https://bugzilla.redhat.com/1226233): Mount broker user add command removes an existing volume for a mountbroker user when a second volume is attached to the same user
- [1231539](https://bugzilla.redhat.com/1231539): Detect and send ENOTSUP if the upcall feature is not enabled
- [1232333](https://bugzilla.redhat.com/1232333): Ganesha-ha.sh cluster setup not working with RHEL7 and derivatives
- [1218961](https://bugzilla.redhat.com/1218961): snapshot: Cannot activate the name provided while creating snaps to do any further access
- [1219399](https://bugzilla.redhat.com/1219399): NFS interoperability problem: Gluster Striped-Replicated volumes can't be read on the vmware esxi 5.x NFS client
- Addition of bricks dynamically to cold or hot tiers in a tiered volume is not supported.
- The following configuration changes are necessary for qemu and samba integration with libgfapi to work seamlessly:

  1. Enable insecure ports on the volume:

     ~~~
     # gluster volume set <volname> server.allow-insecure on
     ~~~

  2. Edit `/etc/glusterfs/glusterd.vol` to contain this line:
     `option rpc-auth-allow-insecure on`

  After step 1, restarting the volume is necessary:

  ~~~
  # gluster volume stop <volname>
  # gluster volume start <volname>
  ~~~

  After step 2, restarting glusterd is necessary:

  ~~~
  # service glusterd restart
  ~~~

  or

  ~~~
  # systemctl restart glusterd
  ~~~
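
  As a quick, hypothetical sanity check (assuming a server named `server1`, a volume named `testvol`, and a qemu build with gfapi support), creating an image over libgfapi should now succeed:

  ~~~
  # qemu-img create -f qcow2 gluster://server1/testvol/test.qcow2 1G
  ~~~

  qemu connects to the bricks from an unprivileged port, so if the insecure-port settings above are missing, this command typically fails to connect to the volume.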