# Release notes for GlusterFS-v3.7.10

GlusterFS-v3.7.10 is back on the regular release schedule after the long 3.7.9 release cycle.

## Bugs fixed

The following bugs have been fixed in 3.7.10:

- [1299712](https://bugzilla.redhat.com/1299712) - [HC] Implement fallocate, discard and zerofill with sharding
- [1304963](https://bugzilla.redhat.com/1304963) - [GlusterD]: After log rotate of cmd_history.log file, the next executed gluster commands are not present in the cmd_history.log file.
- [1310445](https://bugzilla.redhat.com/1310445) - Gluster not resolving hosts with IPv6 only lookups
- [1311441](https://bugzilla.redhat.com/1311441) - Fix mem leaks related to gfapi applications
- [1311578](https://bugzilla.redhat.com/1311578) - SMB: SMB crashes with AIO enabled on reads + vers=3.0
- [1312721](https://bugzilla.redhat.com/1312721) - tar complains: : file changed as we read it
- [1313312](https://bugzilla.redhat.com/1313312) - Client self-heals block the FOP that triggered the heals
- [1313623](https://bugzilla.redhat.com/1313623) - [georep+disperse]: Geo-Rep session went to faulty with errors "[Errno 5] Input/output error"
- [1314366](https://bugzilla.redhat.com/1314366) - Peer information is not propagated to all the nodes in the cluster, when the peer is probed with its second interface FQDN/IP
- [1315141](https://bugzilla.redhat.com/1315141) - RFE: "heal" commands output should have a fixed fields
- [1315147](https://bugzilla.redhat.com/1315147) - Peer probe from a reinstalled node should fail
- [1315626](https://bugzilla.redhat.com/1315626) - glusterd crashed when probing a node with firewall enabled on only one node
- [1315628](https://bugzilla.redhat.com/1315628) - After resetting diagnostics.client-log-level, still Debug messages are logging in scrubber log
- [1316099](https://bugzilla.redhat.com/1316099) - AFR+SNAPSHOT: File with hard link have different inode number in USS
- [1316391](https://bugzilla.redhat.com/1316391) - Brick ports get changed after GlusterD restart
- [1316806](https://bugzilla.redhat.com/1316806) - snapd doesn't come up automatically after node reboot.
- [1316808](https://bugzilla.redhat.com/1316808) - Data Tiering:tier volume status shows as in-progress on all nodes of a cluster even if the node is not part of volume
- [1317363](https://bugzilla.redhat.com/1317363) - Errors seen in cli.log, while executing the command 'gluster snapshot info --xml'
- [1317366](https://bugzilla.redhat.com/1317366) - Tier: Actual files are not demoted and keep on trying to demoted deleted files
- [1317425](https://bugzilla.redhat.com/1317425) - "gluster_shared_storage"
- [1317482](https://bugzilla.redhat.com/1317482) - Different epoch values for each of NFS-Ganesha heads
- [1317788](https://bugzilla.redhat.com/1317788) - Cache swift xattrs
- [1317861](https://bugzilla.redhat.com/1317861) - Probing a new node, which is part of another cluster, should throw proper error message in logs and CLI
- [1317863](https://bugzilla.redhat.com/1317863) - glfs_dup() functionality is broken
- [1318498](https://bugzilla.redhat.com/1318498) - [Tier]: Following volume restart, tierd shows failure at status on some nodes
- [1318505](https://bugzilla.redhat.com/1318505) - gluster volume status xml output of tiered volume has all the common services tagged under
- [1318750](https://bugzilla.redhat.com/1318750) - bash tab completion fails with "grep: Invalid range end"
- [1318965](https://bugzilla.redhat.com/1318965) - disperse: Provide an option to enable/disable eager lock
- [1319645](https://bugzilla.redhat.com/1319645) - setting enable-shared-storage without mentioning the domain, doesn't enables shared storage
- [1319649](https://bugzilla.redhat.com/1319649) - libglusterfs : glusterd was not restarting after setting key=value length beyond PATH_MAX (4096) character
- [1319989](https://bugzilla.redhat.com/1319989) - smbd crashes while accessing multiple volume shares via same client
- [1320020](https://bugzilla.redhat.com/1320020) - add-brick on a replicate volume could lead to data-loss
- [1320024](https://bugzilla.redhat.com/1320024) - Client's App is having issues retrieving files from share 1002976973
- [1320367](https://bugzilla.redhat.com/1320367) - Add a script that converts the gfid-string of a directory into absolute path name w.r.t the brick path.
- [1320374](https://bugzilla.redhat.com/1320374) - Glusterd crashed just after a peer probe command failed.
- [1320377](https://bugzilla.redhat.com/1320377) - Setting of any option using volume set fails when the clients are in older version.
- [1320821](https://bugzilla.redhat.com/1320821) - volume set on user.* domain trims all white spaces in the value
- [1320892](https://bugzilla.redhat.com/1320892) - Over some time Files which were accessible become inaccessible(music files)
- [1321514](https://bugzilla.redhat.com/1321514) - [GSS]-gluster v heal volname info does not work with enabled ssl/tls
- [1322242](https://bugzilla.redhat.com/1322242) - Installing glusterfs-ganesha-3.7.9-1.el6rhs.x86_64 fails with dependency on /usr/bin/dbus-send
- [1322431](https://bugzilla.redhat.com/1322431) - pre failed: Traceback ...
- [1322516](https://bugzilla.redhat.com/1322516) - RFE: Need type of gfid in index_readdir
- [1322521](https://bugzilla.redhat.com/1322521) - Choose self-heal source as local subvolume if possible
- [1322552](https://bugzilla.redhat.com/1322552) - Self-heal and manual heal not healing some file

### Known Issues

[1322772](https://bugzilla.redhat.com/1322772): glusterd: glusterd didn't come up after node reboot error" realpath () failed for brick /run/gluster/snaps/130949baac8843cda443cf8a6441157f/brick3/b3. The underlying file system may be in bad state [No such file or directory]"

* Problem: If the cluster has activated snapshots and a node is rebooted, the glusterd instance on that node does not come up, and the error "The underlying file system may be in bad state [No such file or directory]" is seen in the glusterd log file.
* Workaround: Run [this script](https://gist.github.com/atinmu/a3682ba6782e1d79cf4362d040a89bd1#file-bz1322772-work-around-sh) and then restart the glusterd service on all the nodes, as shown in the sketch below.
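The following is a minimal sketch of that workaround, assuming the script from the gist above has been saved locally as `bz1322772-work-around.sh` (name inferred from the gist file name) and that glusterd is managed by systemd; on SysV-init systems use `service glusterd restart` instead.

```sh
# Hypothetical sketch of the workaround above; run on every node in the cluster.

# 1. Run the workaround script saved from the gist linked above.
sh ./bz1322772-work-around.sh

# 2. Restart glusterd so it comes back up with the repaired brick state.
systemctl restart glusterd    # or: service glusterd restart
```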
[1323287](https://bugzilla.redhat.com/1323287): TIER : Attach tier fails

* Problem: This is not a tiering issue; it is a glusterd issue. On a multi-node cluster, if volume operations are performed while one of the nodes (or its glusterd instance) is down, then once the faulty node or glusterd instance comes back, the real_path information is not repopulated for the existing bricks, causing subsequent volume create, attach tier and add-brick commands to fail.
* Workaround: Restart the glusterd instance once again, as shown in the sketch below.
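A minimal sketch of that workaround, again assuming a systemd-managed glusterd (use `service glusterd restart` on SysV-init systems); restarting only the node that was down is an assumption, not something the bug report spells out:

```sh
# Hypothetical sketch: once the previously-down node is back online and
# volume create / attach tier / add-brick still fails, restart glusterd
# (at minimum on the node that was down) so the brick real_path info is repopulated.
systemctl restart glusterd    # or: service glusterd restart

# Then re-run the volume operation that previously failed.
```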