author    Kaushal M <kaushal@redhat.com>  2016-06-24 16:47:59 +0530
committer Kaushal M <kaushal@redhat.com>  2016-06-24 16:47:59 +0530
commit    5cf8e289889dadb5b66ea1d4de74896dcbf733da (patch)
tree      ef9f6cc105aa9e6f0e999553d9528c9797d0a8fe
parent    ed16cfb0455e41ee39addf6b3cdacdbe0d98308a (diff)
Add release notes for v3.7.12 (tag: v3.7.12)
Change-Id: I3e60cc3431ece2e613d4f9dfdc6399c1a225fdee
-rw-r--r-- doc/release-notes/3.7.12.md | 135
1 file changed, 135 insertions(+), 0 deletions(-)
diff --git a/doc/release-notes/3.7.12.md b/doc/release-notes/3.7.12.md
new file mode 100644
index 00000000000..4e5f799e848
--- /dev/null
+++ b/doc/release-notes/3.7.12.md
@@ -0,0 +1,135 @@
+# Release notes for GlusterFS-3.7.12
+
+GlusterFS-3.7.12 is a normal minor release for GlusterFS-v3.7.
+
+## Bugs Fixed
+
+The following bugs have been fixed in this release.
+
+- [1063506](https://bugzilla.redhat.com/1063506) - No xml output on gluster volume heal info command with --xml
+- [1212676](https://bugzilla.redhat.com/1212676) - NetBSD port
+- [1257894](https://bugzilla.redhat.com/1257894) - "rm -rf *" from multiple mount points fails to remove directories on all the subvolumes
+- [1268125](https://bugzilla.redhat.com/1268125) - glusterd memory overcommit
+- [1283972](https://bugzilla.redhat.com/1283972) - dht must avoid fresh lookups when a single replica pair goes offline
+- [1294077](https://bugzilla.redhat.com/1294077) - uses deprecated find -perm +xxx syntax
+- [1294675](https://bugzilla.redhat.com/1294675) - Healing queue rarely empty
+- [1312721](https://bugzilla.redhat.com/1312721) - tar complains: <fileName>: file changed as we read it
+- [1312722](https://bugzilla.redhat.com/1312722) - when any brick/sub-vol is down and rebalance is not performing any action (fixing layout or migrating data) it should not say 'Starting rebalance on volume <vol-name> has been successful'.
+- [1316533](https://bugzilla.redhat.com/1316533) - RFE: Provide a mechanism to disable some tests in regression
+- [1316808](https://bugzilla.redhat.com/1316808) - Data Tiering: tier volume status shows as in-progress on all nodes of a cluster even if the node is not part of the volume
+- [1319380](https://bugzilla.redhat.com/1319380) - trash xlator: trash_unlink_mkdir_cbk() enters an infinite loop which results in a segfault
+- [1322523](https://bugzilla.redhat.com/1322523) - Fd based fops should not be logging ENOENT/ESTALE
+- [1323017](https://bugzilla.redhat.com/1323017) - Ordering query results from libgfdb
+- [1323564](https://bugzilla.redhat.com/1323564) - [scale] Brick process does not start after node reboot
+- [1324510](https://bugzilla.redhat.com/1324510) - Continuous nfs_grace_monitor log messages observed in /var/log/messages
+- [1325843](https://bugzilla.redhat.com/1325843) - [HC] Add disk in a Hyper-converged environment fails when glusterfs is running in directIO mode
+- [1325857](https://bugzilla.redhat.com/1325857) - Multi-threaded SHD support
+- [1326174](https://bugzilla.redhat.com/1326174) - Volume stop is failing when one of the bricks is down due to an underlying filesystem crash
+- [1326212](https://bugzilla.redhat.com/1326212) - gluster volume heal info shows conservative merge entries as in split-brain
+- [1326413](https://bugzilla.redhat.com/1326413) - /var/lib/glusterd/$few-directories not owned by any package, causing it to remain after glusterfs-server is uninstalled
+- [1327863](https://bugzilla.redhat.com/1327863) - cluster/afr: Fix partial heals in 3-way replication
+- [1327864](https://bugzilla.redhat.com/1327864) - assert failure happens when parallel rm -rf is issued on nfs mounts
+- [1328410](https://bugzilla.redhat.com/1328410) - SAMBA+TIER: Wrong message displayed. On detach tier success the message reflects 'Tier command failed'.
+- [1328473](https://bugzilla.redhat.com/1328473) - DHT: If directory creation is in progress and a rename of that directory comes from another mount point, then after both operations a few files are not accessible and not listed on the mount, and more than one directory has the same gfid
+- [1328706](https://bugzilla.redhat.com/1328706) - [geo-rep]: geo status shows $MASTER Nodes always with hostname even if volume is configured with IP
+- [1328836](https://bugzilla.redhat.com/1328836) - glusterfs-libs postun ldconfig: relative path `1' used to build cache
+- [1329062](https://bugzilla.redhat.com/1329062) - Inconsistent directory structure on dht subvols caused by parent layouts going stale during entry create operations because of fix-layout
+- [1329115](https://bugzilla.redhat.com/1329115) - quota: fix null dereference issue
+- [1329492](https://bugzilla.redhat.com/1329492) - Peers go to rejected state after reboot of one node when quota is enabled on a cloned volume.
+- [1329779](https://bugzilla.redhat.com/1329779) - Inode leaks found in data-self-heal
+- [1329989](https://bugzilla.redhat.com/1329989) - snapshot-clone: clone volume doesn't start after node reboot
+- [1330018](https://bugzilla.redhat.com/1330018) - RFE Sort volume quota <volume> list output alphabetically by path
+- [1330132](https://bugzilla.redhat.com/1330132) - Disperse volume fails on high load and logs show some assertion failures
+- [1330241](https://bugzilla.redhat.com/1330241) - Backport 'nuke' (fast tree deletion) functionality to 3.7
+- [1330249](https://bugzilla.redhat.com/1330249) - glusterd: SSL certificate depth volume option is incorrect
+- [1330428](https://bugzilla.redhat.com/1330428) - [tiering]: during detach tier operation, Input/output error is seen with new file writes on NFS mount
+- [1330450](https://bugzilla.redhat.com/1330450) - [geo-rep]: schedule_georep.py doesn't touch the mount in every iteration
+- [1330529](https://bugzilla.redhat.com/1330529) - [DHT-Rebalance]: with few brick process down, rebalance process isn't killed even after stopping rebalance process
+- [1330545](https://bugzilla.redhat.com/1330545) - setting lower op-version should throw failure message
+- [1330739](https://bugzilla.redhat.com/1330739) - [RFE] We need more debug info from stack wind and unwind calls
+- [1330765](https://bugzilla.redhat.com/1330765) - Migration does not work when EC is used as a tiered volume.
+- [1330855](https://bugzilla.redhat.com/1330855) - A replicated volume takes too long to come online when one server is down
+- [1330892](https://bugzilla.redhat.com/1330892) - nfs-ganesha crashes with segfault error while doing refresh config on volume.
+- [1331263](https://bugzilla.redhat.com/1331263) - SAMBA+TIER : File size is not getting updated when created on windows samba share mount
+- [1331264](https://bugzilla.redhat.com/1331264) - libgfapi: Setting need_lookup on wrong list
+- [1331342](https://bugzilla.redhat.com/1331342) - self-heal does fsyncs even after setting ensure-durability off
+- [1331502](https://bugzilla.redhat.com/1331502) - Under high read load, sometimes the message "XDR decoding failed" appears in the logs and read fails
+- [1331759](https://bugzilla.redhat.com/1331759) - runner: extract and return actual exit status of child
+- [1331772](https://bugzilla.redhat.com/1331772) - glusterd: fix max pmap alloc to GF_PORT_MAX
+- [1331924](https://bugzilla.redhat.com/1331924) - [geo-rep]: schedule_georep.py doesn't work when invoked using cron
+- [1331933](https://bugzilla.redhat.com/1331933) - rm -rf to a dir gives directory not empty(ENOTEMPTY) error
+- [1331934](https://bugzilla.redhat.com/1331934) - glusterd restart is failing if a volume brick is down due to an underlying FS crash.
+- [1331938](https://bugzilla.redhat.com/1331938) - glusterfsd: return actual exit status on mount process
+- [1331941](https://bugzilla.redhat.com/1331941) - rpc: fix gf_process_reserved_ports
+- [1332072](https://bugzilla.redhat.com/1332072) - values for Number of Scrubbed files, Number of Unsigned files, Last completed scrub time and Duration of last scrub are shown as zeros in bit rot scrub status
+- [1332074](https://bugzilla.redhat.com/1332074) - Marker: Lot of dict_get errors in brick log!!
+- [1332372](https://bugzilla.redhat.com/1332372) - Do not succeed mkdir without gfid-req
+- [1332397](https://bugzilla.redhat.com/1332397) - posix: Set correct d_type for readdirp() calls
+- [1332404](https://bugzilla.redhat.com/1332404) - the wrong variable was being checked for gf_strdup
+- [1332433](https://bugzilla.redhat.com/1332433) - Ganesha+Tiering: Continuous "0-glfs_h_poll_cache_invalidation: invalid argument" messages getting logged in ganesha-gfapi logs.
+- [1332776](https://bugzilla.redhat.com/1332776) - glusterd + bitrot : Creating clone of snapshot. error "xlator.c:148:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/3.7.9/xlator/features/bitrot.so: cannot open shared object file:
+- [1332790](https://bugzilla.redhat.com/1332790) - quota: client gets IO error instead of disk quota exceed when the limit is exceeded
+- [1332838](https://bugzilla.redhat.com/1332838) - rpc: assign port only if it is unreserved
+- [1333237](https://bugzilla.redhat.com/1333237) - DHT: Once remove-brick start has failed in between, remove-brick commit should not be allowed
+- [1333239](https://bugzilla.redhat.com/1333239) - [AFR]: "volume heal info" command is failing during in-service upgrade to latest.
+- [1333241](https://bugzilla.redhat.com/1333241) - Fix excessive logging due to NULL dict in dht
+- [1333268](https://bugzilla.redhat.com/1333268) - SMB: while running I/O on a cifs mount, doing a graph switch causes the cifs mount to hang.
+- [1333528](https://bugzilla.redhat.com/1333528) - Unexporting a volume sometimes fails with "Dynamic export addition/deletion failed".
+- [1333645](https://bugzilla.redhat.com/1333645) - NFS+attach tier: IOs hang while attach tier is issued
+- [1333661](https://bugzilla.redhat.com/1333661) - ganesha exported volumes don't get synced up on a shutdown node when it comes up.
+- [1333934](https://bugzilla.redhat.com/1333934) - Detach tier fired before the background fixlayout is complete may result in failure
+- [1334204](https://bugzilla.redhat.com/1334204) - Test open-behind.t failing fairly often on NetBSD
+- [1334441](https://bugzilla.redhat.com/1334441) - SAMBA-VSS : Permission denied issue while restoring the directory from windows client 1 when files are deleted from windows client 2
+- [1334566](https://bugzilla.redhat.com/1334566) - Self heal shows different information for the same volume from each node
+- [1334700](https://bugzilla.redhat.com/1334700) - readdir-ahead does not fetch xattrs that md-cache needs in its internal calls
+- [1334750](https://bugzilla.redhat.com/1334750) - stop all gluster processes should also include the glusterfs mount process
+- [1335016](https://bugzilla.redhat.com/1335016) - set errno in case of inode_link failures
+- [1335686](https://bugzilla.redhat.com/1335686) - Self Heal fails on a replica3 volume with 'disk quota exceeded'
+- [1335728](https://bugzilla.redhat.com/1335728) - [geo-rep]: Multiple geo-rep session to the same slave is allowed for different users
+- [1335729](https://bugzilla.redhat.com/1335729) - mount/fuse: Logging improvements
+- [1335792](https://bugzilla.redhat.com/1335792) - [Tiering]: Detach tier commit is allowed before rebalance is complete
+- [1335813](https://bugzilla.redhat.com/1335813) - rpc: change client insecure port ceiling from 65535 to 49151
+- [1335821](https://bugzilla.redhat.com/1335821) - Revert "features/shard: Make o-direct writes work with sharding: http://review.gluster.org/#/c/13846/"
+- [1335836](https://bugzilla.redhat.com/1335836) - Heal info shows split-brain for .shard directory though only one brick was down
+- [1336148](https://bugzilla.redhat.com/1336148) - [Tiering]: Files remain in hot tier even after detach tier completes
+- [1336199](https://bugzilla.redhat.com/1336199) - failover is not working with latest builds.
+- [1336284](https://bugzilla.redhat.com/1336284) - Worker dies with [Errno 5] Input/output error upon creation of entries at slave
+- [1336331](https://bugzilla.redhat.com/1336331) - Unexporting a volume sometimes fails with "Dynamic export addition/deletion failed".
+- [1336470](https://bugzilla.redhat.com/1336470) - [Tiering]: The message 'Max cycle time reached..exiting migration' incorrectly displayed as an 'error' in the logs
+- [1336948](https://bugzilla.redhat.com/1336948) - [NFS-Ganesha] : stonith-enabled option not set with new versions of cman,pacemaker,corosync and pcs
+- [1337022](https://bugzilla.redhat.com/1337022) - DHT: few files are not accessible and not listed on mount + more than one directory have the same gfid + (sometimes) attributes show ?? in ls output after renaming directories from multiple clients at the same time
+- [1337113](https://bugzilla.redhat.com/1337113) - Modified volume options are not syncing once glusterd comes up.
+- [1337653](https://bugzilla.redhat.com/1337653) - log flooded with Could not map name=xxxx to a UUID when config'd with long hostnames
+- [1337779](https://bugzilla.redhat.com/1337779) - tests/bugs/write-behind/1279730.t fails spuriously
+- [1337831](https://bugzilla.redhat.com/1337831) - one of the VMs goes to paused state when the network goes down and comes back up
+- [1337837](https://bugzilla.redhat.com/1337837) - Files present in the .shard folder even after deleting all the VMs from the UI
+- [1337872](https://bugzilla.redhat.com/1337872) - Some of the VMs go to paused state when there is concurrent I/O on the VMs
+- [1338668](https://bugzilla.redhat.com/1338668) - AFR: fuse, nfs mount hangs when directories with the same names are created and deleted continuously
+- [1338969](https://bugzilla.redhat.com/1338969) - common-ha: ganesha.nfsd not put into NFS-GRACE after fail-back
+- [1339226](https://bugzilla.redhat.com/1339226) - gfapi: set mem_acct for the variables created for upcall
+- [1339446](https://bugzilla.redhat.com/1339446) - ENOTCONN error during parallel rmdir
+- [1340992](https://bugzilla.redhat.com/1340992) - Directory creation (mkdir) fails when remove brick is initiated for replicated volumes accessed via nfs-ganesha
+- [1341068](https://bugzilla.redhat.com/1341068) - [geo-rep]: Monitor crashed with [Errno 3] No such process
+- [1341121](https://bugzilla.redhat.com/1341121) - [geo-rep]: If the session is renamed, geo-rep configuration is not retained
+- [1341478](https://bugzilla.redhat.com/1341478) - [geo-rep]: Snapshot creation on a volume having a geo-rep session is broken
+- [1341952](https://bugzilla.redhat.com/1341952) - changelog: changelog_rollover breaks when number of fds opened is more than 1024
+- [1342348](https://bugzilla.redhat.com/1342348) - Log parameters such as the gfid, fd address, offset and length of the reads upon failure for easier debugging
+- [1342374](https://bugzilla.redhat.com/1342374) - [quota+snapshot]: Directories are inaccessible from activated snapshot, when the snapshot was created during directory creation
+- [1342431](https://bugzilla.redhat.com/1342431) - [georep]: Stopping volume fails if it has geo-rep session (Even in stopped state)
+- [1342453](https://bugzilla.redhat.com/1342453) - upgrade path when slave volume uuid used in geo-rep session
+- [1342903](https://bugzilla.redhat.com/1342903) - O_DIRECT support for sharding
+- [1342964](https://bugzilla.redhat.com/1342964) - self heal daemon killed due to OOM kills on a dist-disperse volume using nfs ganesha
+- [1343362](https://bugzilla.redhat.com/1343362) - Input/Output error when chmoding files on NFS mount point
+- [1344422](https://bugzilla.redhat.com/1344422) - fd leak in disperse
+- [1344561](https://bugzilla.redhat.com/1344561) - conservative merge happening on an x3 volume for a deleted file
+- [1344595](https://bugzilla.redhat.com/1344595) - [disperse] mkdir after rebalance gives Input/Output Error
+- [1344605](https://bugzilla.redhat.com/1344605) - [geo-rep]: Add-Brick use case: create push-pem force on existing geo-rep fails
+- [1346132](https://bugzilla.redhat.com/1346132) - tiering: Multiple brick processes crashed on tiered volume while taking snapshots
+- [1346184](https://bugzilla.redhat.com/1346184) - quota: rectify quota-deem-statfs default value in gluster v set help command
+- [1346751](https://bugzilla.redhat.com/1346751) - Unsafe access to inode->fd_list
+
+
+## Known Issues
+
+- Commit b33f3c9, which introduced changes to improve IPv6 support in GlusterFS, has been reverted, as it exposed problems in network encryption that could cause a GlusterFS cluster to fail to operate correctly when management network encryption is used.
+- Network encryption has an issue that could sometimes prevent reconnections from happening correctly.