author    Kaushal M <kaushal@redhat.com>    2016-02-24 15:09:57 +0530
committer Kaushal M <kaushal@redhat.com>    2016-02-28 20:46:17 -0800
commit    06f4a475f47f2acb243c9d1ad2e2f16587f0ff69 (patch)
tree      53d76ba7a6a3c79ea0c98ce32f9c7471ceb91454 /doc
parent    29adf166aa5f15202c5fe49369ad4f11df799c5b (diff)
Add missing release-notes
Change-Id: I16ad79b951c1e366fc94501063441459f9a4faad
BUG: 1311451
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/13509
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Diffstat (limited to 'doc')
-rw-r--r--  doc/release-notes/3.7.5.md                            77
-rw-r--r--  doc/release-notes/3.7.6.md                             3
-rw-r--r--  doc/release-notes/3.7.7.md                           171
-rw-r--r--  doc/release-notes/3.7.8.md                             5
-rw-r--r--  doc/release-notes/upgrading-from-3.7.2-or-older.md    37
5 files changed, 293 insertions, 0 deletions
diff --git a/doc/release-notes/3.7.5.md b/doc/release-notes/3.7.5.md
new file mode 100644
index 00000000000..47ac742a541
--- /dev/null
+++ b/doc/release-notes/3.7.5.md
@@ -0,0 +1,77 @@
+## Bugs fixed
+The following bugs were fixed in this release.
+
+- [1246397](https://bugzilla.redhat.com/1246397) - POSIX ACLs as used by a FUSE mount can not use more than 32 groups
+- [1248890](https://bugzilla.redhat.com/1248890) - AFR: Make [f]xattrop metadata transaction
+- [1248941](https://bugzilla.redhat.com/1248941) - Logging : unnecessary log message "REMOVEXATTR No data available " when files are written to glusterfs mount
+- [1250388](https://bugzilla.redhat.com/1250388) - [RFE] changes needed in snapshot info command's xml output.
+- [1251821](https://bugzilla.redhat.com/1251821) - /usr/lib/glusterfs/ganesha/ganesha_ha.sh is distro specific
+- [1255110](https://bugzilla.redhat.com/1255110) - client is sending io to arbiter with replica 2
+- [1255384](https://bugzilla.redhat.com/1255384) - Detached node list stale snaps
+- [1257394](https://bugzilla.redhat.com/1257394) - Provide more meaningful errors on peer probe and peer detach
+- [1258113](https://bugzilla.redhat.com/1258113) - snapshot delete all command fails with --xml option.
+- [1258244](https://bugzilla.redhat.com/1258244) - Data Tiering:Change error message as detach-tier error message throws as "remove-brick"
+- [1258313](https://bugzilla.redhat.com/1258313) - Start self-heal and display correct heal info after replace brick
+- [1258338](https://bugzilla.redhat.com/1258338) - Data Tiering: Tiering related information is not displayed in gluster volume info xml output
+- [1258340](https://bugzilla.redhat.com/1258340) - Data Tiering:Volume task status showing as remove brick when detach tier is trigger
+- [1258347](https://bugzilla.redhat.com/1258347) - Data Tiering: Tiering related information is not displayed in gluster volume status xml output
+- [1258377](https://bugzilla.redhat.com/1258377) - ACL created on a dht.linkto file on a files that skipped rebalance
+- [1258406](https://bugzilla.redhat.com/1258406) - porting log messages to a new framework
+- [1258411](https://bugzilla.redhat.com/1258411) - trace xlator: Print write size also in trace_writev logs
+- [1258717](https://bugzilla.redhat.com/1258717) - gluster-nfs : contents of export file is not updated correctly in its context
+- [1258727](https://bugzilla.redhat.com/1258727) - porting logging messages to new logging framework
+- [1258736](https://bugzilla.redhat.com/1258736) - porting log messages to a new framework
+- [1258769](https://bugzilla.redhat.com/1258769) - Porting log messages to new framework
+- [1258798](https://bugzilla.redhat.com/1258798) - bug-948686.t fails spuriously
+- [1258845](https://bugzilla.redhat.com/1258845) - Change order of marking AFR post op
+- [1258976](https://bugzilla.redhat.com/1258976) - packaging: gluster-server install failure due to %ghost of hooks/.../delete
+- [1259078](https://bugzilla.redhat.com/1259078) - should not spawn another migration daemon on graph switch
+- [1259079](https://bugzilla.redhat.com/1259079) - Data Tiering:3.7.0:data loss:detach-tier not flushing data to cold-tier
+- [1259081](https://bugzilla.redhat.com/1259081) - I/O failure on attaching tier on fuse client
+- [1259225](https://bugzilla.redhat.com/1259225) - Add node of nfs-ganesha not working on rhel7.1
+- [1259360](https://bugzilla.redhat.com/1259360) - garbage files created in /var/run/gluster
+- [1259652](https://bugzilla.redhat.com/1259652) - quota test 'quota-nfs.t' fails spuriously
+- [1259659](https://bugzilla.redhat.com/1259659) - Fix bug in arbiter-statfs.t
+- [1259694](https://bugzilla.redhat.com/1259694) - Data Tiering:Regression:Commit of detach tier passes directly without even issuing a detach tier start
+- [1259697](https://bugzilla.redhat.com/1259697) - Disperse volume: Huge memory leak of glusterfsd process
+- [1259726](https://bugzilla.redhat.com/1259726) - Fix reads on zero-byte shards representing holes in the file
+- [1260511](https://bugzilla.redhat.com/1260511) - fuse client crashed during i/o
+- [1260593](https://bugzilla.redhat.com/1260593) - man or info page of gluster needs to be updated with self-heal commands.
+- [1260856](https://bugzilla.redhat.com/1260856) - xml output for volume status on tiered volume
+- [1260858](https://bugzilla.redhat.com/1260858) - glusterd: volume status backward compatibility
+- [1260859](https://bugzilla.redhat.com/1260859) - snapshot: from nfs-ganesha mount no content seen in .snaps/<snapshot-name> directory
+- [1260919](https://bugzilla.redhat.com/1260919) - Quota+Rebalance : While rebalance is in progress , quota list shows 'Used Space' more than the Hard Limit set
+- [1261008](https://bugzilla.redhat.com/1261008) - Do not expose internal sharding xattrs to the application.
+- [1261234](https://bugzilla.redhat.com/1261234) - Possible memory leak during rebalance with large quantity of files
+- [1261444](https://bugzilla.redhat.com/1261444) - cli : volume start will create/overwrite ganesha export file
+- [1261664](https://bugzilla.redhat.com/1261664) - Tiering status command is very cumbersome.
+- [1261715](https://bugzilla.redhat.com/1261715) - [HC] Fuse mount crashes, when client-quorum is not met
+- [1261716](https://bugzilla.redhat.com/1261716) - read/write performance improvements for VM workload
+- [1261742](https://bugzilla.redhat.com/1261742) - Tier: glusterd crash when trying to detach , when hot tier is having exactly one brick and cold tier is of replica type
+- [1262197](https://bugzilla.redhat.com/1262197) - DHT: Few files are missing after remove-brick operation
+- [1262335](https://bugzilla.redhat.com/1262335) - Fix invalid logic in tier.t
+- [1262341](https://bugzilla.redhat.com/1262341) - Database locking due to write contention between CTR sql connection and tier migrator sql connection
+- [1262344](https://bugzilla.redhat.com/1262344) - quota: numbers of warning messages in nfs.log a single file itself
+- [1262408](https://bugzilla.redhat.com/1262408) - Data Tiering:Detach tier status shows number of failures even when all files are migrated successfully
+- [1262547](https://bugzilla.redhat.com/1262547) - `getfattr -n replica.split-brain-status <file>' command hung on the mount
+- [1262700](https://bugzilla.redhat.com/1262700) - DHT + rebalance :- file permission got changed (sticky bit and setgid is set) after file migration failure
+- [1262881](https://bugzilla.redhat.com/1262881) - nfs-ganesha: refresh-config stdout output includes dbus messages "method return sender=:1.61 -> dest=:1.65 reply_serial=2"
+- [1263191](https://bugzilla.redhat.com/1263191) - Error not propagated correctly if selfheal layout lock fails
+- [1263746](https://bugzilla.redhat.com/1263746) - Data Tiering:Setting only promote frequency and no demote frequency causes crash
+- [1264738](https://bugzilla.redhat.com/1264738) - 'gluster v tier/attach-tier/detach-tier help' command shows the usage, and then throws 'Tier command failed' error message
+- [1265633](https://bugzilla.redhat.com/1265633) - AFR : "gluster volume heal <volume_name> info" doesn't report the fqdn of storage nodes.
+- [1265890](https://bugzilla.redhat.com/1265890) - rm command fails with "Transport end point not connected" during add brick
+- [1265892](https://bugzilla.redhat.com/1265892) - Data Tiering : Writes to a file being promoted/demoted are missing once the file migration is complete
+- [1266822](https://bugzilla.redhat.com/1266822) - Add more logs in failure code paths + port existing messages to the msg-id framework
+- [1266872](https://bugzilla.redhat.com/1266872) - FOP handling during file migration is broken in the release-3.7 branch.
+- [1266882](https://bugzilla.redhat.com/1266882) - RFE: posix: xattrop 'GF_XATTROP_ADD_DEF_ARRAY' implementation
+- [1267149](https://bugzilla.redhat.com/1267149) - Perf: Getting bad performance while doing ls
+- [1267532](https://bugzilla.redhat.com/1267532) - Data Tiering:CLI crashes with segmentation fault when user tries "gluster v tier" command
+- [1267817](https://bugzilla.redhat.com/1267817) - No quota API to get real hard-limit value.
+- [1267822](https://bugzilla.redhat.com/1267822) - Have a way to disable readdirp on dht from glusterd volume set command
+- [1267823](https://bugzilla.redhat.com/1267823) - Perf: Getting bad performance while doing ls
+- [1268804](https://bugzilla.redhat.com/1268804) - Test tests/bugs/shard/bug-1245547.t failing consistently when run with patch http://review.gluster.org/#/c/11938/
+
+## Upgrade notes
+
+If upgrading from v3.7.2 or older, please follow instructions in [upgrading-from-3.7.2-or-older](./upgrading-from-3.7.2-or-older.md).
diff --git a/doc/release-notes/3.7.6.md b/doc/release-notes/3.7.6.md
index 5f3d69b5e92..a1cebedf658 100644
--- a/doc/release-notes/3.7.6.md
+++ b/doc/release-notes/3.7.6.md
@@ -71,3 +71,6 @@ bugs fixed in the GlusterFS 3.7 stable release.
- Volume commands fail with "staging failed" message when few nodes in trusted storage pool have 3.7.6 installed and other nodes have 3.7.5 installed. Please upgrade all nodes to recover from this error. This issue is not seen if upgrading from 3.7.4 or previous to 3.7.6.
+### Upgrade notes
+
+If upgrading from v3.7.2 or older, please follow instructions in [upgrading-from-3.7.2-or-older](./upgrading-from-3.7.2-or-older.md).
diff --git a/doc/release-notes/3.7.7.md b/doc/release-notes/3.7.7.md
new file mode 100644
index 00000000000..cfbc1bd37a7
--- /dev/null
+++ b/doc/release-notes/3.7.7.md
@@ -0,0 +1,171 @@
+## Bugs fixed
+The following bugs were fixed in this release.
+
+- [1212676](https://bugzilla.redhat.com/1212676) - NetBSD port
+- [1225567](https://bugzilla.redhat.com/1225567) - [geo-rep]: Traceback "ValueError: filedescriptor out of range in select()" observed while creating huge set of data on master
+- [1250410](https://bugzilla.redhat.com/1250410) - [Backup]: Password of the peer nodes prompted whenever a glusterfind session is deleted.
+- [1251467](https://bugzilla.redhat.com/1251467) - ec sequentializes all reads, limiting read throughput
+- [1257141](https://bugzilla.redhat.com/1257141) - [Backup]: Glusterfind pre attribute '--output-prefix' not working as expected in case of DELETEs
+- [1257546](https://bugzilla.redhat.com/1257546) - [Backup]: Glusterfind list shows the session as corrupted on the peer node
+- [1257710](https://bugzilla.redhat.com/1257710) - Copy NFS-Ganesha export files as part of volume snapshot creation
+- [1258594](https://bugzilla.redhat.com/1258594) - build: compile error on RHEL5
+- [1262860](https://bugzilla.redhat.com/1262860) - Data Tiering: Tiering daemon is seeing each part of a file in a Disperse cold volume as a different file
+- [1264441](https://bugzilla.redhat.com/1264441) - Data Tiering:Regression:Detach tier commit is passing when detach tier is in progress
+- [1266880](https://bugzilla.redhat.com/1266880) - Tiering: unlink failed with error "Invaid argument"
+- [1269702](https://bugzilla.redhat.com/1269702) - Glusterfsd crashes on pmap signin failure
+- [1272007](https://bugzilla.redhat.com/1272007) - tools/glusterfind: add query command to list files without session
+- [1272926](https://bugzilla.redhat.com/1272926) - libgfapi: brick process crashes if attr KEY length > 255 for glfs_lgetxattr(...)
+- [1274100](https://bugzilla.redhat.com/1274100) - need a way to pause/stop tiering to take snapshot
+- [1275173](https://bugzilla.redhat.com/1275173) - geo-replication: [RFE] Geo-replication + Tiering
+- [1276907](https://bugzilla.redhat.com/1276907) - Arbiter volume becomes replica volume in some cases
+- [1277390](https://bugzilla.redhat.com/1277390) - snap-max-hard-limit for snapshots always shows as 256 in info file.
+- [1278640](https://bugzilla.redhat.com/1278640) - Files in a tiered volume gets promoted when bitd signs them
+- [1278744](https://bugzilla.redhat.com/1278744) - ec-readdir.t is failing consistently
+- [1279059](https://bugzilla.redhat.com/1279059) - [Tier]: restarting volume reports "insert/update failure" in cold brick logs
+- [1279095](https://bugzilla.redhat.com/1279095) - I/O failure on attaching tier on nfs client
+- [1279306](https://bugzilla.redhat.com/1279306) - Dist-geo-rep : checkpoint doesn't reach even though all the files have been synced through hybrid crawl.
+- [1279309](https://bugzilla.redhat.com/1279309) - Message shown in gluster vol tier <volname> status output is incorrect.
+- [1279331](https://bugzilla.redhat.com/1279331) - quota: removexattr on /d/backends/patchy/.glusterfs/79/99/799929ec-f546-4bbf-8549-801b79623262 (for trusted.glusterfs.quota.add7e3f8-833b-48ec-8a03-f7cd09925468.contri) [No such file or directory]
+- [1279345](https://bugzilla.redhat.com/1279345) - Fails to build twice in a row
+- [1279351](https://bugzilla.redhat.com/1279351) - [GlusterD]: Volume start fails post add-brick on a volume which is not started
+- [1279362](https://bugzilla.redhat.com/1279362) - Monitor should restart the worker process when Changelog agent dies
+- [1279644](https://bugzilla.redhat.com/1279644) - Starting geo-rep session
+- [1279776](https://bugzilla.redhat.com/1279776) - stop-all-gluster-processes.sh doesn't return correct return status
+- [1280715](https://bugzilla.redhat.com/1280715) - fops-during-migration-pause.t spurious failure
+- [1281226](https://bugzilla.redhat.com/1281226) - Remove selinux mount option from "man mount.glusterfs"
+- [1281893](https://bugzilla.redhat.com/1281893) - packaging: gfind_missing_files are not in geo-rep %if ... %endif conditional
+- [1282315](https://bugzilla.redhat.com/1282315) - Data Tiering:Metadata changes to a file should not heat/promote the file
+- [1282465](https://bugzilla.redhat.com/1282465) - [Backup]: Crash observed when keyboard interrupt is encountered in the middle of any glusterfind command
+- [1282675](https://bugzilla.redhat.com/1282675) - ./tests/basic/tier/record-metadata-heat.t is failing upstream
+- [1283036](https://bugzilla.redhat.com/1283036) - Index entries are not being purged in case of file does not exist
+- [1283038](https://bugzilla.redhat.com/1283038) - libgfapi to support set_volfile-server-transport type "unix"
+- [1283060](https://bugzilla.redhat.com/1283060) - [RFE] Geo-replication support for Volumes running in docker containers
+- [1283107](https://bugzilla.redhat.com/1283107) - Setting security.* xattrs fails
+- [1283138](https://bugzilla.redhat.com/1283138) - core dump in protocol/client:client_submit_request
+- [1283142](https://bugzilla.redhat.com/1283142) - glusterfs does not register with rpcbind on restart
+- [1283187](https://bugzilla.redhat.com/1283187) - [GlusterD]: Incorrect peer status showing if volume restart done before entire cluster update.
+- [1283288](https://bugzilla.redhat.com/1283288) - cache mode must be the default mode for tiered volumes
+- [1283302](https://bugzilla.redhat.com/1283302) - volume start command is failing when glusterfs compiled with debug enabled
+- [1283473](https://bugzilla.redhat.com/1283473) - Dist-geo-rep: Too many "remote operation failed: No such file or directory" warning messages in auxiliary mount log on slave while executing "rm -rf"
+- [1283478](https://bugzilla.redhat.com/1283478) - While file is self healing append to the file hangs
+- [1283480](https://bugzilla.redhat.com/1283480) - Data Tiering:Rename of cold file to a hot file causing split brain and showing two copies of files in mount point
+- [1283568](https://bugzilla.redhat.com/1283568) - quota/marker: backward compatibility with quota xattr versioning
+- [1283570](https://bugzilla.redhat.com/1283570) - Better indication of arbiter brick presence in a volume.
+- [1283679](https://bugzilla.redhat.com/1283679) - remove mount-nfs-auth.t from bad tests lists
+- [1283756](https://bugzilla.redhat.com/1283756) - self-heal won't work in disperse volumes when they are attached as tiers
+- [1283757](https://bugzilla.redhat.com/1283757) - EC: File healing promotes it to hot tier
+- [1283833](https://bugzilla.redhat.com/1283833) - Warning messages seen in glusterd logs in executing gluster volume set help
+- [1283856](https://bugzilla.redhat.com/1283856) - [Tier]: Space is missed b/w the words in the detach tier stop error message
+- [1283881](https://bugzilla.redhat.com/1283881) - BitRot :- Data scrubbing status is not available
+- [1283923](https://bugzilla.redhat.com/1283923) - Data Tiering: "ls" count taking link files and promote/demote files into consideration both on fuse and nfs mount
+- [1283956](https://bugzilla.redhat.com/1283956) - Self-heal triggered every couple of seconds and a 3-node 1-arbiter setup
+- [1284453](https://bugzilla.redhat.com/1284453) - Dist-geo-rep: Support geo-replication to work with sharding
+- [1284737](https://bugzilla.redhat.com/1284737) - Geo-replication is logging in Localtime
+- [1284746](https://bugzilla.redhat.com/1284746) - tests/geo-rep: Existing geo-rep regression test suite is time consuming.
+- [1284850](https://bugzilla.redhat.com/1284850) - Resource leak in marker
+- [1284863](https://bugzilla.redhat.com/1284863) - Full heal of volume fails on some nodes "Commit failed on X", and glustershd logs "Couldn't get xlator xl-0"
+- [1285139](https://bugzilla.redhat.com/1285139) - Extending writes filling incorrect final size in postbuf
+- [1285168](https://bugzilla.redhat.com/1285168) - vol heal info fails when transport.socket.bind-address is set in glusterd
+- [1285174](https://bugzilla.redhat.com/1285174) - Create doesn't remember flags it is opened with
+- [1285335](https://bugzilla.redhat.com/1285335) - [Tier]: Stopping and Starting tier volume triggers fixing layout which fails on local host
+- [1285629](https://bugzilla.redhat.com/1285629) - Snapshot creation after attach-tier causes glusterd crash
+- [1285688](https://bugzilla.redhat.com/1285688) - sometimes files are not getting demoted from hot tier to cold tier
+- [1285758](https://bugzilla.redhat.com/1285758) - Brick crashes because of race in bit-rot init
+- [1285762](https://bugzilla.redhat.com/1285762) - reads fail on sharded volume while running iozone
+- [1285793](https://bugzilla.redhat.com/1285793) - Data Tiering: Change the error message when a detach-tier status is issued on a non-tier volume
+- [1285961](https://bugzilla.redhat.com/1285961) - glusterfsd to support volfile-server-transport type "unix"
+- [1285978](https://bugzilla.redhat.com/1285978) - AFR self-heal-daemon option is still set on volume though tier is detached
+- [1286169](https://bugzilla.redhat.com/1286169) - We need to skip data self-heal for arbiter bricks
+- [1286517](https://bugzilla.redhat.com/1286517) - cli/geo-rep : remove unused code
+- [1286601](https://bugzilla.redhat.com/1286601) - vol quota enable fails when transport.socket.bind-address is set in glusterd
+- [1286985](https://bugzilla.redhat.com/1286985) - Tier: ec xattrs are set on a newly created file present in the non-ec hot tier
+- [1287079](https://bugzilla.redhat.com/1287079) - nfs-ganesha: Upcall sent on null gfid
+- [1287456](https://bugzilla.redhat.com/1287456) - [geo-rep]: Recommended Shared volume use on geo-replication is broken
+- [1287531](https://bugzilla.redhat.com/1287531) - Perf: Metadata operation(ls -l) performance regression.
+- [1287538](https://bugzilla.redhat.com/1287538) - [Snapshot]: Clone creation fails on tiered volume with pre-validation failed message
+- [1287560](https://bugzilla.redhat.com/1287560) - Data Tiering:Don't allow or reset the frequency threshold values to zero when record counter features.record-counter is turned off
+- [1287583](https://bugzilla.redhat.com/1287583) - Data Tiering:Read heat not getting calculated and read operations not heating the file with counter enabled
+- [1287597](https://bugzilla.redhat.com/1287597) - [upgrade] Error messages seen in glusterd logs, while upgrading from RHGS 2.1.6 to RHGS 3.1
+- [1287877](https://bugzilla.redhat.com/1287877) - glusterfs does not allow passing standard SElinux mount options to fuse
+- [1287960](https://bugzilla.redhat.com/1287960) - Geo-Replication fails on uppercase hostnames
+- [1288027](https://bugzilla.redhat.com/1288027) - [geo-rep+tiering]: symlinks are not getting synced to slave on tiered master setup
+- [1288030](https://bugzilla.redhat.com/1288030) - Clone creation should not be successful when the node participating in volume goes down.
+- [1288052](https://bugzilla.redhat.com/1288052) - [Quota]: Peer status is in "Rejected" state with Quota enabled volume
+- [1288056](https://bugzilla.redhat.com/1288056) - glusterd: all the daemon's of existing volume stopping upon peer detach
+- [1288060](https://bugzilla.redhat.com/1288060) - glusterd: disable ping timer b/w glusterd and make epoll thread count default 1
+- [1288352](https://bugzilla.redhat.com/1288352) - Few snapshot creation fails with pre-validation failed message on tiered volume.
+- [1288484](https://bugzilla.redhat.com/1288484) - tiering: quota list command is not working after attach or detach
+- [1288716](https://bugzilla.redhat.com/1288716) - add bug-924726.t to ignore list in regression
+- [1288922](https://bugzilla.redhat.com/1288922) - Use after free bug in notify_kernel_loop in fuse-bridge code
+- [1288963](https://bugzilla.redhat.com/1288963) - [GlusterD]Probing a node having standalone volume, should not happen
+- [1288992](https://bugzilla.redhat.com/1288992) - Possible memory leak in the tiered daemon
+- [1289063](https://bugzilla.redhat.com/1289063) - quota cli: enhance quota list command to list usage even if the limit is not set
+- [1289414](https://bugzilla.redhat.com/1289414) - [tiering]: Tier daemon crashed on two of eight nodes and lot of "demotion failed" seen in the system
+- [1289570](https://bugzilla.redhat.com/1289570) - Iozone on sharded volume fails on NFS
+- [1289602](https://bugzilla.redhat.com/1289602) - After detach-tier start writes still go to hot tier
+- [1289898](https://bugzilla.redhat.com/1289898) - Without detach tier commit, status changes back to tier migration
+- [1290048](https://bugzilla.redhat.com/1290048) - [Tier]: Failed to open "demotequeryfile-master-tier-dht" errors logged on the node having only cold bricks
+- [1290295](https://bugzilla.redhat.com/1290295) - tiering: Seeing error messages E "/usr/lib64/glusterfs/3.7.5/xlator/features/changetimerecorder.so(ctr_lookup+0x54f) [0x7f6c435c116f] ) 0-ctr: invalid argument: loc->name [Invalid argument] after attach tier
+- [1290363](https://bugzilla.redhat.com/1290363) - Data Tiering:File create terminates with "Input/output error" as split brain is observed
+- [1290532](https://bugzilla.redhat.com/1290532) - Several intermittent regression failures
+- [1290534](https://bugzilla.redhat.com/1290534) - Minor improvements and cleanup for the build system
+- [1290655](https://bugzilla.redhat.com/1290655) - Sharding: Remove dependency on performance.strict-write-ordering
+- [1290658](https://bugzilla.redhat.com/1290658) - tests/basic/afr/arbiter-statfs.t fails most of the times on NetBSD
+- [1290719](https://bugzilla.redhat.com/1290719) - Geo-replication doesn't deal properly with sparse files
+- [1291002](https://bugzilla.redhat.com/1291002) - File is not demoted after self heal (split-brain)
+- [1291046](https://bugzilla.redhat.com/1291046) - spurious failure of bug-1279376-rename-demoted-file.t
+- [1291208](https://bugzilla.redhat.com/1291208) - Regular files are listed as 'T' files on nfs mount
+- [1291546](https://bugzilla.redhat.com/1291546) - bitrot: bitrot scrub status command should display the correct value of total number of scrubbed, unsigned file
+- [1291557](https://bugzilla.redhat.com/1291557) - Data Tiering:File create terminates with "Input/output error" as split brain is observed
+- [1291970](https://bugzilla.redhat.com/1291970) - Data Tiering: new set of gluster v tier commands not working as expected
+- [1291985](https://bugzilla.redhat.com/1291985) - store afr pending xattrs as a volume option
+- [1292046](https://bugzilla.redhat.com/1292046) - Renames/deletes failed with "No such file or directory" when few of the bricks from the hot tier went offline
+- [1292254](https://bugzilla.redhat.com/1292254) - hook script for CTDB should not change Samba config
+- [1292359](https://bugzilla.redhat.com/1292359) - [tiering]: read/write freq-threshold allows negative values
+- [1292697](https://bugzilla.redhat.com/1292697) - Symlinks Rename fails in Symlink not exists in Slave
+- [1292755](https://bugzilla.redhat.com/1292755) - S30Samba scripts do not work on systemd systems
+- [1292945](https://bugzilla.redhat.com/1292945) - [tiering]: cluster.tier-max-files option in tiering is not honored
+- [1293224](https://bugzilla.redhat.com/1293224) - Disperse: Disperse volume (cold vol) crashes while writing files on tier volume
+- [1293265](https://bugzilla.redhat.com/1293265) - md5sum of files mismatch after the self-heal is complete on the file
+- [1293300](https://bugzilla.redhat.com/1293300) - Detach tier fails to migrate the files when there are corrupted objects in hot tier.
+- [1293309](https://bugzilla.redhat.com/1293309) - [georep+tiering]: Geo-replication sync is broken if cold tier is EC
+- [1293342](https://bugzilla.redhat.com/1293342) - Data Tiering:Watermark:File continuously trying to demote itself but failing " [dht-rebalance.c:608:__dht_rebalance_create_dst_file] 0-wmrk-tier-dht: chown failed for //AP.BH.avi on wmrk-cold-dht (No such file or directory)"
+- [1293348](https://bugzilla.redhat.com/1293348) - first file created after hot tier full fails to create, but gets database entry and later ends up as a stale erroneous file (file with ???????????)
+- [1293536](https://bugzilla.redhat.com/1293536) - afr: warn if pending xattrs missing during init()
+- [1293584](https://bugzilla.redhat.com/1293584) - Corrupted objects list does not get cleared even after all the files in the volume are deleted and count increases as old + new count
+- [1293595](https://bugzilla.redhat.com/1293595) - [geo-rep]: ChangelogException: [Errno 22] Invalid argument observed upon rebooting the ACTIVE master node
+- [1293659](https://bugzilla.redhat.com/1293659) - Creation of files on hot tier volume taking very long time
+- [1293698](https://bugzilla.redhat.com/1293698) - [Tier]: start tier daemon using rebal tier start doesn't start tierd if it has failed on any single node
+- [1293827](https://bugzilla.redhat.com/1293827) - fops-during-migration.t fails if hot and cold tiers are dist-rep
+- [1294410](https://bugzilla.redhat.com/1294410) - Friend update floods can render the cluster incapable of handling other commands
+- [1294608](https://bugzilla.redhat.com/1294608) - quota: limit xattr not healed for a sub-directory on a newly added bricks
+- [1294609](https://bugzilla.redhat.com/1294609) - quota: handle quota xattr removal when quota is enabled again
+- [1294797](https://bugzilla.redhat.com/1294797) - "Transport endpoint not connected" in heal info though hot tier bricks are up
+- [1294942](https://bugzilla.redhat.com/1294942) - [tiering]: Incorrect display of 'gluster v tier help'
+- [1294954](https://bugzilla.redhat.com/1294954) - tier-snapshot.t runs too slowly on RHEL6
+- [1294969](https://bugzilla.redhat.com/1294969) - Large system file distribution is broken
+- [1296024](https://bugzilla.redhat.com/1296024) - Unable to modify quota hard limit on tier volume after disk limit got exceeded
+- [1296108](https://bugzilla.redhat.com/1296108) - xattrs on directories are unavailable on distributed replicated volume after adding new bricks
+- [1296795](https://bugzilla.redhat.com/1296795) - Good files do not get promoted in a tiered volume when bitrot is enabled
+- [1296996](https://bugzilla.redhat.com/1296996) - Stricter dependencies for glusterfs-server
+- [1297213](https://bugzilla.redhat.com/1297213) - Stale stat information for corrupted objects (replicated volume)
+- [1297305](https://bugzilla.redhat.com/1297305) - [GlusterD]: Peer detach happening with a node which is hosting volume bricks
+- [1297309](https://bugzilla.redhat.com/1297309) - Rebalance crashed after detach tier.
+- [1297862](https://bugzilla.redhat.com/1297862) - Ganesha hook script executes showmount and causes a hang
+- [1299314](https://bugzilla.redhat.com/1299314) - glusterfs crash during load testing
+- [1299712](https://bugzilla.redhat.com/1299712) - [HC] Implement fallocate, discard and zerofill with sharding
+- [1299822](https://bugzilla.redhat.com/1299822) - Snapshot creation fails on a tiered volume
+- [1300174](https://bugzilla.redhat.com/1300174) - volume info xml does not show arbiter details
+- [1300210](https://bugzilla.redhat.com/1300210) - Fix sparse-file-self-heal.t and remove from bad tests
+- [1300243](https://bugzilla.redhat.com/1300243) - Quota Aux mount crashed
+- [1300600](https://bugzilla.redhat.com/1300600) - tests/bugs/quota/bug-1049323.t fails in fedora
+- [1300924](https://bugzilla.redhat.com/1300924) - Fix mem leaks related to gfapi applications
+- [1300978](https://bugzilla.redhat.com/1300978) - I/O failure during a graph change followed by an option change.
+- [1302012](https://bugzilla.redhat.com/1302012) - [Tiering]: Values of watermarks, min free disk etc will be miscalculated with quota set on root directory of gluster volume
+- [1302199](https://bugzilla.redhat.com/1302199) - Scrubber crash (list corruption)
+- [1302521](https://bugzilla.redhat.com/1302521) - Improve error message for unsupported clients
+- [1302943](https://bugzilla.redhat.com/1302943) - Lot of Inode not found messages in glfsheal log file
+
+## Upgrade notes
+
+If upgrading from v3.7.2 or older, please follow instructions in [upgrading-from-3.7.2-or-older](./upgrading-from-3.7.2-or-older.md).
diff --git a/doc/release-notes/3.7.8.md b/doc/release-notes/3.7.8.md
index cec924941f0..f4d969575d0 100644
--- a/doc/release-notes/3.7.8.md
+++ b/doc/release-notes/3.7.8.md
@@ -17,3 +17,8 @@ The following bugs have been fixed in addition to the above two reverts,
- [1288857](https://bugzilla.redhat.com/1288857) - Use after free bug in notify_kernel_loop in fuse-bridge code
- [1288922](https://bugzilla.redhat.com/1288922) - Use after free bug in notify_kernel_loop in fuse-bridge code
- [1296400](https://bugzilla.redhat.com/1296400) - Fix spurious failure in bug-1221481-allow-fops-on-dir-split-brain.t
+
+
+## Upgrade notes
+
+If upgrading from v3.7.2 or older, please follow instructions in [upgrading-from-3.7.2-or-older](./upgrading-from-3.7.2-or-older.md).
diff --git a/doc/release-notes/upgrading-from-3.7.2-or-older.md b/doc/release-notes/upgrading-from-3.7.2-or-older.md
new file mode 100644
index 00000000000..f4f41568455
--- /dev/null
+++ b/doc/release-notes/upgrading-from-3.7.2-or-older.md
@@ -0,0 +1,37 @@
+A new feature in 3.7.3 causes trouble during upgrades from previous versions of GlusterFS to 3.7.3.
+The details of the feature, the issue, and the workaround are below.
+
+## Feature
+In GlusterFS-3.7.3, insecure ports have been enabled by default. This
+means that, by default, servers accept connections from insecure ports
+and clients use insecure ports to connect to servers. This change
+particularly benefits usage of libgfapi, for example when it is used
+in qemu run by a normal user.
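+
+As an illustration of that use case (hypothetical host, volume, and image
+names): a qemu process started by a normal user cannot bind to privileged
+(secure) ports, so it connects from an insecure port, which now works with
+the default settings:
+
+```
+# Access a GlusterFS-backed disk image over libgfapi as a non-root user.
+# With the 3.7.3 defaults, no extra volume configuration is needed.
+qemu-img info gluster://gluster-host/myvol/vm-disk.qcow2
+```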
+
+## Issue
+This has caused troubles when upgrading from previous versions to
+3.7.3 in rolling upgrades and when attempting to use 3.7.3 clients
+with older servers. The 3.7.3 clients establish connections using
+insecure ports by default. But the older servers still expect
+connections to come from secure-ports (if this setting has not been
+changed). This causes servers to reject connections from 3.7.3, and
+leads to broken clusters during upgrade and rejected clients.
+
+## Workaround
+There are two possible workarounds; apply either of them before
+upgrading (example commands follow the list).
+
+1. Set 'client.bind-insecure off' on all volumes.
+This forces 3.7.3 clients to use secure ports to connect to the servers.
+This does not affect older clients as this setting is the default for them.
+
+2. Set 'server.allow-insecure on' on all volumes.
+This enables servers to accept connections from insecure ports.
+The new clients can successfully connect to the servers with this set.
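+
+As a quick reference, the corresponding `gluster volume set` commands are
+sketched below; `<volname>` is a placeholder and the chosen command must be
+run for every volume in the pool (only one of the two settings is needed):
+
+```
+# Workaround 1: force 3.7.3 clients to keep using secure ports
+gluster volume set <volname> client.bind-insecure off
+
+# Workaround 2: allow servers to accept connections from insecure ports
+gluster volume set <volname> server.allow-insecure on
+```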
+
+
+If anyone faces any problems with these workarounds, please let us know via email[1][1] or on IRC[2][2].
+
+
+[1]: gluster-devel at gluster dot org / gluster-users at gluster dot org
+[2]: #gluster / #gluster-dev @ freenode