## Release Notes for GlusterFS 3.7.3

This is a bugfix release. The Release Notes for [3.7.0](3.7.0.md), [3.7.1](3.7.1.md) and [3.7.2](3.7.2.md) contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.7 stable releases.

### Bugs Fixed

- [1212842](https://bugzilla.redhat.com/1212842): tar on a glusterfs mount displays "file changed as we read it" even though the file was not changed
- [1214169](https://bugzilla.redhat.com/1214169): glusterfsd crashed while rebalance and self-heal were in progress
- [1217722](https://bugzilla.redhat.com/1217722): Tracker bug for Logging framework expansion.
- [1219358](https://bugzilla.redhat.com/1219358): Disperse volume: client crashed while running iozone
- [1223318](https://bugzilla.redhat.com/1223318): brick-op failure for glusterd command should log error message in cmd_history.log
- [1226666](https://bugzilla.redhat.com/1226666): BitRot :- Handle brick re-connection sanely in bitd/scrub process
- [1226830](https://bugzilla.redhat.com/1226830): Scrubber crash upon pause
- [1227572](https://bugzilla.redhat.com/1227572): Sharding - Fix posix compliance test failures.
- [1227808](https://bugzilla.redhat.com/1227808): Issues reported by Cppcheck static analysis tool
- [1228535](https://bugzilla.redhat.com/1228535): Memory leak in marker xlator
- [1228640](https://bugzilla.redhat.com/1228640): afr: unrecognized option in re-balance volfile
- [1229282](https://bugzilla.redhat.com/1229282): Disperse volume: Huge memory leak of glusterfsd process
- [1229563](https://bugzilla.redhat.com/1229563): Disperse volume: Failed to update version and size (error 2) seen during delete operations
- [1230327](https://bugzilla.redhat.com/1230327): context of access control translator should be updated properly for GF_POSIX_ACL_*_KEY xattrs
- [1230399](https://bugzilla.redhat.com/1230399): [Snapshot] Scheduled job is not processed when one of the node of shared storage volume is down
- [1230523](https://bugzilla.redhat.com/1230523): glusterd: glusterd crashes if rebalance and volume status commands are run in parallel.
- [1230857](https://bugzilla.redhat.com/1230857): Files migrated should stay on a tier for a full cycle
- [1231024](https://bugzilla.redhat.com/1231024): scrub frequency and throttle change information need to be present in the scrubber log
- [1231608](https://bugzilla.redhat.com/1231608): Add regression test for cluster lock in a heterogeneous cluster
- [1231767](https://bugzilla.redhat.com/1231767): tiering:compiler warning with gcc v5.1.1
- [1232173](https://bugzilla.redhat.com/1232173): Incomplete self-heal and split-brain on directories found when self-healing files/dirs on a replaced disk
- [1232185](https://bugzilla.redhat.com/1232185): cli correction: attempting to create multiple bricks on the same server shows a replicate volume message instead of a disperse volume message
- [1232199](https://bugzilla.redhat.com/1232199): Skip zero byte files when triggering signing
- [1232333](https://bugzilla.redhat.com/1232333): Ganesha-ha.sh cluster setup not working with RHEL7 and derivatives
- [1232335](https://bugzilla.redhat.com/1232335): nfs-ganesha: volume is not in list of exports in case of volume stop followed by volume start
- [1232602](https://bugzilla.redhat.com/1232602): bug-857330/xml.t fails spuriously
- [1232612](https://bugzilla.redhat.com/1232612): Disperse volume: misleading unsuccessful message with heal and heal full
- [1232883](https://bugzilla.redhat.com/1232883): Snapshot daemon failed to run on newly created dist-rep volume with uss enabled
- [1232885](https://bugzilla.redhat.com/1232885): [SNAPSHOT]: "man gluster" needs modification for a few snapshot commands
- [1232886](https://bugzilla.redhat.com/1232886): [SNAPSHOT]: Output message when snapshot create is issued while multiple bricks are down needs to be improved
- [1232887](https://bugzilla.redhat.com/1232887): [SNAPSHOT] : Snapshot delete fails with error - Snap might not be in an usable state
- [1232889](https://bugzilla.redhat.com/1232889): Snapshot: When cluster.enable-shared-storage is enabled, shared storage should get mounted after node reboot
- [1233041](https://bugzilla.redhat.com/1233041): glusterd crashed when testing heal full on replaced disks
- [1233158](https://bugzilla.redhat.com/1233158): Null pointer dereference in dht_migrate_complete_check_task
- [1233518](https://bugzilla.redhat.com/1233518): [Backup]: Glusterfind session(s) created before starting the volume results in 'changelog not available' error, eventually
- [1233555](https://bugzilla.redhat.com/1233555): gluster v set help needs to be updated for cluster.enable-shared-storage option
- [1233559](https://bugzilla.redhat.com/1233559): libglusterfs: avoid crash due to ctx being NULL
- [1233611](https://bugzilla.redhat.com/1233611): Incomplete conservative merge for split-brained directories
- [1233632](https://bugzilla.redhat.com/1233632): Disperse volume: client crashed while running iozone
- [1233651](https://bugzilla.redhat.com/1233651): pthread cond and mutex variables of the fs struct have to be destroyed conditionally.
- [1234216](https://bugzilla.redhat.com/1234216): nfs-ganesha: add node fails to add a new node to the cluster
- [1234225](https://bugzilla.redhat.com/1234225): Data Tiering: add tiering set options to volume set help (cluster.tier-demote-frequency and cluster.tier-promote-frequency)
- [1234297](https://bugzilla.redhat.com/1234297): Quota: Porting logging messages to new logging framework
- [1234408](https://bugzilla.redhat.com/1234408): STACK_RESET may crash with concurrent statedump requests to a glusterfs process
- [1234584](https://bugzilla.redhat.com/1234584): nfs-ganesha: delete node throws an error and pcs status also notifies about failures; I/O also doesn't resume post grace period
- [1234679](https://bugzilla.redhat.com/1234679): Disperse volume : 'ls -ltrh' doesn't list correct size of the files every time
- [1234695](https://bugzilla.redhat.com/1234695): [geo-rep]: Setting meta volume config to false when the meta volume is stopped/deleted leads geo-rep to a Faulty state
- [1234843](https://bugzilla.redhat.com/1234843): GlusterD does not store updated peerinfo objects.
- [1234898](https://bugzilla.redhat.com/1234898): [geo-rep]: Feature fan-out fails with the use of meta volume config
- [1235203](https://bugzilla.redhat.com/1235203): tiering: tier status shows as "progressing" but there is no rebalance daemon running
- [1235208](https://bugzilla.redhat.com/1235208): glusterd: glusterd crashes while importing a USS enabled volume which is already started
- [1235242](https://bugzilla.redhat.com/1235242): changelog: directory renames not getting recorded
- [1235258](https://bugzilla.redhat.com/1235258): nfs-ganesha: ganesha-ha.sh --refresh-config not working
- [1235297](https://bugzilla.redhat.com/1235297): [geo-rep]: set_geo_rep_pem_keys.sh needs modification in gluster path to support mount broker functionality
- [1235360](https://bugzilla.redhat.com/1235360): [geo-rep]: Mountbroker setup goes to Faulty with ssh 'Permission Denied' Errors
- [1235428](https://bugzilla.redhat.com/1235428): Mount broker user add command removes the existing volume for a mountbroker user when a second volume is attached to the same user
- [1235512](https://bugzilla.redhat.com/1235512): quorum calculation might go for toss for a concurrent peer probe command
- [1235629](https://bugzilla.redhat.com/1235629): Missing trusted.ec.config xattr for files after heal process
- [1235904](https://bugzilla.redhat.com/1235904): fgetxattr() crashes when key name is NULL
- [1235923](https://bugzilla.redhat.com/1235923): POSIX: brick logs filled with _gf_log_callingfn due to this==NULL in dict_get
- [1235928](https://bugzilla.redhat.com/1235928): memory corruption in the way we maintain migration information in inodes.
- [1235934](https://bugzilla.redhat.com/1235934): Allow only lookup and delete operation on file that is in split-brain
- [1235939](https://bugzilla.redhat.com/1235939): Provide and use a common way to do reference counting of (internal) structures
- [1235966](https://bugzilla.redhat.com/1235966): [RHEV-RHGS] After self-heal operation, VM Image file loses the sparseness property
- [1235990](https://bugzilla.redhat.com/1235990): quota: marker accounting miscalculated when renaming a file on which a write is in progress
- [1236019](https://bugzilla.redhat.com/1236019): peer probe results in Peer Rejected(Connected)
- [1236093](https://bugzilla.redhat.com/1236093): [geo-rep]: worker died with "ESTALE" when performed rm -rf on a directory from mount of master volume
- [1236260](https://bugzilla.redhat.com/1236260): [Quota] The root of the volume on which quota is set shows a size larger than the actual volume size when checked with the "df" command.
- [1236269](https://bugzilla.redhat.com/1236269): FSAL_GLUSTER : symlinks are not working properly if acl is enabled
- [1236271](https://bugzilla.redhat.com/1236271): Introduce an ATOMIC_WRITE flag in posix writev
- [1236274](https://bugzilla.redhat.com/1236274): Upcall: Directory or file creation should send cache invalidation requests to parent directories
- [1236282](https://bugzilla.redhat.com/1236282): [Backup]: File movement across directories does not get captured in the output file in a X3 volume
- [1236288](https://bugzilla.redhat.com/1236288): Data Tiering: Files not getting promoted once demoted
- [1236933](https://bugzilla.redhat.com/1236933): Ganesha volume export failed
- [1238052](https://bugzilla.redhat.com/1238052): Quota list is not working on tiered volume.
- [1238057](https://bugzilla.redhat.com/1238057): Incorrect state created in '/var/lib/nfs/statd'
- [1238073](https://bugzilla.redhat.com/1238073): protocol/server doesn't reconfigure auth.ssl-allow options
- [1238476](https://bugzilla.redhat.com/1238476): Throttle background heals in disperse volumes
- [1238752](https://bugzilla.redhat.com/1238752): Consecutive volume start/stop operations when ganesha.enable is on lead to errors
- [1239270](https://bugzilla.redhat.com/1239270): [Scheduler]: Unable to create Snapshots on RHEL-7.1 using Scheduler
- [1240183](https://bugzilla.redhat.com/1240183): Renamed Files are missing after self-heal
- [1240190](https://bugzilla.redhat.com/1240190): do an explicit lookup on the inodes linked in readdirp
- [1240603](https://bugzilla.redhat.com/1240603): glusterfsd crashed after volume start force
- [1240607](https://bugzilla.redhat.com/1240607): [geo-rep]: UnboundLocalError: local variable 'fd' referenced before assignment
- [1240616](https://bugzilla.redhat.com/1240616): Unable to pause geo-rep session if one of the nodes in the cluster is not part of the master volume.
- [1240906](https://bugzilla.redhat.com/1240906): quota+afr: quotad crash "afr_local_init (local=0x0, priv=0x7fddd0372220, op_errno=0x7fddce1434dc) at afr-common.c:4112"
- [1240955](https://bugzilla.redhat.com/1240955): [USS]: snapd process is not killed once the glusterd comes back
- [1241134](https://bugzilla.redhat.com/1241134): nfs-ganesha: execution of script ganesha-ha.sh throws an error for a file
- [1241487](https://bugzilla.redhat.com/1241487): quota/marker: lk_owner is null while acquiring inodelk in rename operation
- [1241529](https://bugzilla.redhat.com/1241529): BitRot :- Files marked as 'Bad' should not be accessible from mount
- [1241666](https://bugzilla.redhat.com/1241666): glfs_loc_link: Update loc.inode with the existing inode in case it already exists
- [1241776](https://bugzilla.redhat.com/1241776): [Data Tiering]: HOT Files get demoted from hot tier
- [1241784](https://bugzilla.redhat.com/1241784): Gluster commands timeout on SSL enabled system, after adding new node to trusted storage pool
- [1241831](https://bugzilla.redhat.com/1241831): quota: marker accounting can get miscalculated after upgrade to 3.7
- [1241841](https://bugzilla.redhat.com/1241841): gf_msg_callingfn does not log the callers of the function in which it is called
- [1241885](https://bugzilla.redhat.com/1241885): Ganesha volume export fails on RHEL 7.1
- [1241963](https://bugzilla.redhat.com/1241963): Peer not recognized after IP address change
- [1242031](https://bugzilla.redhat.com/1242031): nfs-ganesha: bricks crash while executing acl related operation for named group/user
- [1242044](https://bugzilla.redhat.com/1242044): nfs-ganesha : Multiple setting of nfs4_acl on a same file will cause brick crash
- [1242192](https://bugzilla.redhat.com/1242192): nfs-ganesha: add-node logic does not copy the "/etc/ganesha/exports" directory to the correct path on the newly added node
- [1242274](https://bugzilla.redhat.com/1242274): Migration does not work when EC is used as a tiered volume.
- [1242329](https://bugzilla.redhat.com/1242329): [Quota] : Inode quota spurious failure
- [1242515](https://bugzilla.redhat.com/1242515): race condition in nfs/auth-cache feature
- [1242718](https://bugzilla.redhat.com/1242718): [RFE] Improve I/O latency during signing
- [1242728](https://bugzilla.redhat.com/1242728): replacing an offline brick fails with the "replace-brick" command
- [1242734](https://bugzilla.redhat.com/1242734): GlusterD crashes when management encryption is enabled
- [1242882](https://bugzilla.redhat.com/1242882): Quota: Quota Daemon doesn't start after node reboot
- [1242898](https://bugzilla.redhat.com/1242898): Crash in Quota enforcer
- [1243408](https://bugzilla.redhat.com/1243408): syncop: Include iatt in 'syncop_link' args
- [1243642](https://bugzilla.redhat.com/1243642): GF_CONTENT_KEY should not be handled unless we are sure no other operations are in progress
- [1243644](https://bugzilla.redhat.com/1243644): Metadata self-heal does not handle failures during heal properly
- [1243647](https://bugzilla.redhat.com/1243647): Disperse volume : data corruption with appending writes in 8+4 config
- [1243648](https://bugzilla.redhat.com/1243648): Disperse volume: NFS crashed
- [1243654](https://bugzilla.redhat.com/1243654): fops fail with EIO on nfs mount after add-brick and rebalance
- [1243655](https://bugzilla.redhat.com/1243655): Sharding - Use (f)xattrop (as opposed to (f)setxattr) to update shard size and block count
- [1243898](https://bugzilla.redhat.com/1243898): huge mem leak in posix xattrop
- [1244100](https://bugzilla.redhat.com/1244100): using fop's dict for resolving causes problems
- [1244103](https://bugzilla.redhat.com/1244103): Gluster CLI logs an invalid argument error on every gluster command execution
- [1244114](https://bugzilla.redhat.com/1244114): unix domain sockets on Gluster/NFS are created as fifo/pipe
- [1244116](https://bugzilla.redhat.com/1244116): quota: brick crashes when create and remove performed in parallel
- [1245908](https://bugzilla.redhat.com/1245908): snap-view: mount crashes if debug mode is enabled
- [1245934](https://bugzilla.redhat.com/1245934): [RHEV-RHGS] App VMs paused due to IO error caused by split-brain, after initiating remove-brick operation
- [1246121](https://bugzilla.redhat.com/1246121): Disperse volume : client glusterfs crashed while running IO
- [1246481](https://bugzilla.redhat.com/1246481): rpc: fix binding brick issue while bind-insecure is enabled
- [1246728](https://bugzilla.redhat.com/1246728): client3_3_removexattr_cbk floods the logs with "No data available" messages
- [1246809](https://bugzilla.redhat.com/1246809): glusterd crashed when a client which doesn't support SSL tries to mount a SSL enabled gluster volume
- [1246987](https://bugzilla.redhat.com/1246987): Misleading log messages like "Failing STAT on gfid : split-brain observed. [Input/output error]" reported
- [1246988](https://bugzilla.redhat.com/1246988): sharding - Populate the aggregated ia_size and ia_blocks before unwinding (f)setattr to upper layers
- [1247012](https://bugzilla.redhat.com/1247012): Initialize daemons on demand

### Known Issues

- [1219399](https://bugzilla.redhat.com/1219399): NFS interoperability problem: Gluster Striped-Replicated can't read on vmware esxi 5.x NFS client
- [1225077](https://bugzilla.redhat.com/1225077): Fix regression test spurious failures
- [1207023](https://bugzilla.redhat.com/1207023): [RFE] Snapshot scheduler enhancements (both GUI Console & CLI)
- [1218990](https://bugzilla.redhat.com/1218990): failing installation of glusterfs-server-3.7.0beta1-0.14.git09bbd5c.el7.centos.x86_64
- [1221957](https://bugzilla.redhat.com/1221957): Fully support data-tiering in 3.7.x, remove out of 'experimental' status
- [1225567](https://bugzilla.redhat.com/1225567): [geo-rep]: Traceback ValueError: filedescriptor out of range in select() observed while creating a huge set of data on master
- [1227656](https://bugzilla.redhat.com/1227656): Unable to mount a replicated volume without all bricks online.
- [1235964](https://bugzilla.redhat.com/1235964): Disperse volume: FUSE I/O error after self healing the failed disk files
- [1231539](https://bugzilla.redhat.com/1231539): Detect and send ENOTSUP if upcall feature is not enabled
- [1240920](https://bugzilla.redhat.com/1240920): libgfapi: Segfault seen when glfs_*() methods are invoked with invalid glfd


- Addition of bricks dynamically to cold or hot tiers in a tiered volume is not supported.
- The following configuration changes are necessary for qemu and samba integration with libgfapi to work seamlessly:

    1. Enable insecure ports on the volume:

        ~~~
        # gluster volume set <volname> server.allow-insecure on
        ~~~

    2. Edit `/etc/glusterfs/glusterd.vol` to contain this line: `option rpc-auth-allow-insecure on` (a minimal sketch of the edited file follows this list).
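
    For reference, here is a minimal sketch of how the edited `/etc/glusterfs/glusterd.vol` might look, assuming a default installation; the surrounding option lines vary between setups, and only the `rpc-auth-allow-insecure` line is the actual addition:

    ~~~
    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        # ... other default options remain unchanged ...
        option rpc-auth-allow-insecure on
    end-volume
    ~~~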

    After change 1, the volume must be restarted:

    ~~~
    # gluster volume stop <volname>
    # gluster volume start <volname>
    ~~~

    After change 2, glusterd must be restarted:

    ~~~
    # service glusterd restart
    ~~~

    or

    ~~~
    # systemctl restart glusterd
    ~~~
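
    Once both changes are in place and the restarts are done, libgfapi access can be sanity-checked from a client host. One way, assuming qemu is built with the GlusterFS block driver, is to create an image directly on the volume over a `gluster://` URL (the host name `server1` and the image name here are placeholders):

    ~~~
    # qemu-img create -f qcow2 gluster://server1/<volname>/test.qcow2 1G
    ~~~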