## Release Notes for GlusterFS 3.7.4

This is a bugfix release. The Release Notes for [3.7.0](3.7.0.md), [3.7.1](3.7.1.md), [3.7.2](3.7.2.md) and [3.7.3](3.7.3.md) contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.7 stable releases.

### Bugs Fixed

Release 3.7.4 contains 92 bug fixes.

- [1223945](https://bugzilla.redhat.com/1223945): Scripts/Binaries are not installed with +x bit
- [1228216](https://bugzilla.redhat.com/1228216): Disperse volume: gluster volume status doesn't show shd status
- [1228521](https://bugzilla.redhat.com/1228521): USS: Take ref on root inode
- [1231678](https://bugzilla.redhat.com/1231678): geo-rep: gverify.sh throws error if slave_host entry is not added to known_hosts file
- [1235202](https://bugzilla.redhat.com/1235202): tiering: tier daemon not restarting during volume/glusterd restart
- [1235964](https://bugzilla.redhat.com/1235964): Disperse volume: FUSE I/O error after self healing the failed disk files
- [1236050](https://bugzilla.redhat.com/1236050): Disperse volume: fuse mount hung after self healing
- [1238706](https://bugzilla.redhat.com/1238706): snapd/quota/nfs daemons run on the node even after that node was detached from the trusted storage pool
- [1240920](https://bugzilla.redhat.com/1240920): libgfapi: Segfault seen when glfs_*() methods are invoked with invalid glfd
- [1242536](https://bugzilla.redhat.com/1242536): Data Tiering: Rename of file is not heating up the file
- [1243384](https://bugzilla.redhat.com/1243384): EC volume: Replace bricks is not healing version of root directory
- [1244721](https://bugzilla.redhat.com/1244721): glusterd: Porting left out log messages to new logging API
- [1244724](https://bugzilla.redhat.com/1244724): quota: allowed to set soft-limit percentage beyond 100%
- [1245922](https://bugzilla.redhat.com/1245922): [SNAPSHOT]: Correction required in output message after initialising snap_scheduler
- [1245923](https://bugzilla.redhat.com/1245923): [Snapshot] Scheduler should check whether vol-name exists before adding scheduled jobs
- [1247014](https://bugzilla.redhat.com/1247014): sharding - Fix unlink of sparse files
- [1247153](https://bugzilla.redhat.com/1247153): SSL improvements: ECDH, DH, CRL, and accessible options
- [1247551](https://bugzilla.redhat.com/1247551): forgotten inodes are not being signed
- [1247615](https://bugzilla.redhat.com/1247615): tests/bugs/replicate/bug-1238508-self-heal.t fails in 3.7 branch
- [1247833](https://bugzilla.redhat.com/1247833): sharding - OS installation on vm image hangs on a sharded volume
- [1247850](https://bugzilla.redhat.com/1247850): Glusterfsd crashes because of thread-unsafe code in gf_authenticate
- [1247882](https://bugzilla.redhat.com/1247882): [geo-rep]: killing brick from replica pair makes geo-rep session faulty with Traceback "ChangelogException"
- [1247910](https://bugzilla.redhat.com/1247910): Gluster peer probe with negative num
- [1247917](https://bugzilla.redhat.com/1247917): ./tests/basic/volume-snapshot.t spurious failure causing glusterd crash.
- [1248325](https://bugzilla.redhat.com/1248325): quota: In enforcer, caching parents in ctx during build ancestry is not working
- [1248337](https://bugzilla.redhat.com/1248337): Data Tiering: Change the error message when a detach-tier status is issued on a non-tier volume
- [1248450](https://bugzilla.redhat.com/1248450): rpc: check for unprivileged port should start at 1024 and not beyond 1024
- [1248962](https://bugzilla.redhat.com/1248962): quota/marker: errors in log file 'Failed to get metadata for'
- [1249461](https://bugzilla.redhat.com/1249461): 'unable to get transaction op-info' error seen in glusterd log while executing gluster volume status command
- [1249547](https://bugzilla.redhat.com/1249547): [geo-rep]: rename followed by deletes causes ESTALE
- [1249921](https://bugzilla.redhat.com/1249921): [upgrade] After upgrading from 3.5 to version 3.6 or later, bumping up the op-version failed
- [1249925](https://bugzilla.redhat.com/1249925): DHT-rebalance: Rebalance hangs on distribute volume when glusterd is stopped on peer node
- [1249983](https://bugzilla.redhat.com/1249983): Rebalance is failing in test cluster framework.
- [1250601](https://bugzilla.redhat.com/1250601): nfs-ganesha: remove the entry of the deleted node
- [1250628](https://bugzilla.redhat.com/1250628): nfs-ganesha: ganesha-ha.sh --status is actually the same as "pcs status"
- [1250809](https://bugzilla.redhat.com/1250809): Enable multi-threaded epoll for glusterd process
- [1250810](https://bugzilla.redhat.com/1250810): Make ping-timeout option configurable at a volume-level
- [1250834](https://bugzilla.redhat.com/1250834): Sharding - Excessive logging of messages of the kind 'Failed to get trusted.glusterfs.shard.file-size for bf292f5b-6dd6-45a8-b03c-aaf5bb973c50'
- [1250864](https://bugzilla.redhat.com/1250864): ec returns EIO error in cases where a more specific error could be returned
- [1251106](https://bugzilla.redhat.com/1251106): sharding - Renames on non-sharded files failing with ENOMEM
- [1251380](https://bugzilla.redhat.com/1251380): statfs giving incorrect values for AFR arbiter volumes
- [1252272](https://bugzilla.redhat.com/1252272): rdma: pending - porting log messages to a new framework
- [1252297](https://bugzilla.redhat.com/1252297): Quota: volume-reset shouldn't remove quota-deem-statfs, unless explicitly specified, when quota is enabled.
- [1252348](https://bugzilla.redhat.com/1252348): using fop's dict for resolving causes problems
- [1252680](https://bugzilla.redhat.com/1252680): probing and detaching a peer generated a CRITICAL error - "Could not find peer" in glusterd logs
- [1252727](https://bugzilla.redhat.com/1252727): tiering: Tier daemon stopped prior to graph switch.
- [1252873](https://bugzilla.redhat.com/1252873): gluster vol quota dist-vol list is not displaying quota information.
- [1252903](https://bugzilla.redhat.com/1252903): Fix invalid logic in tier.t
- [1252907](https://bugzilla.redhat.com/1252907): Unable to demote files in tiered volumes when cold tier is EC.
- [1253148](https://bugzilla.redhat.com/1253148): gf_store_save_value fails to check for errors, leading to emptying files in /var/lib/glusterd/
- [1253151](https://bugzilla.redhat.com/1253151): Sharding - Individual shards' ownership differs from that of the original file
- [1253160](https://bugzilla.redhat.com/1253160): while re-configuring the scrubber frequency, scheduling is not happening based on current time
- [1253165](https://bugzilla.redhat.com/1253165): glusterd services are not handled properly when reconfiguring services
- [1253212](https://bugzilla.redhat.com/1253212): snapd crashed due to stack overflow
- [1253260](https://bugzilla.redhat.com/1253260): posix_make_ancestryfromgfid doesn't set op_errno
- [1253542](https://bugzilla.redhat.com/1253542): rebalance stuck at 0 byte when auth.allow is set
- [1253607](https://bugzilla.redhat.com/1253607): gluster snapshot status --xml gives back unexpected non xml output
- [1254419](https://bugzilla.redhat.com/1254419): nfs-ganesha: new volume creation tries to bring up glusterfs-nfs even when nfs-ganesha is already on
- [1254436](https://bugzilla.redhat.com/1254436): logging:  Revert usage of global xlator for log buffer
- [1254437](https://bugzilla.redhat.com/1254437): tiering: rename fails with "Device or resource busy" error message
- [1254438](https://bugzilla.redhat.com/1254438): Tiering: segfault when trying to rename a file
- [1254439](https://bugzilla.redhat.com/1254439): Quota list is not working on tiered volume.
- [1254442](https://bugzilla.redhat.com/1254442): tiering/snapshot: Tier daemon failed to start during volume start after restoring into a tiered volume from a non-tiered volume.
- [1254468](https://bugzilla.redhat.com/1254468): Data Tiering : Some tier xlator_fops translate to the default fops
- [1254494](https://bugzilla.redhat.com/1254494): nfs-ganesha: refresh-config stdout output does not make sense
- [1254503](https://bugzilla.redhat.com/1254503): fuse: check return value of setuid
- [1254607](https://bugzilla.redhat.com/1254607): rpc: Address issues with transport object reference and leak
- [1254865](https://bugzilla.redhat.com/1254865): non-default symver macros are incorrect
- [1255244](https://bugzilla.redhat.com/1255244): Quota: After a rename operation, the gluster v quota <volname> list-objects command gives an incorrect no. of files in its output
- [1255311](https://bugzilla.redhat.com/1255311): Snapshot: When the soft limit is reached and auto-delete is enabled, snapshot create doesn't log anything in the log files
- [1255351](https://bugzilla.redhat.com/1255351): fail the fops if inode context get fails
- [1255604](https://bugzilla.redhat.com/1255604): Not able to recover the corrupted file on Replica volume
- [1255605](https://bugzilla.redhat.com/1255605): Scrubber log should mark file corrupted message as Alert not as information
- [1255636](https://bugzilla.redhat.com/1255636): Remove unwanted tests from volume-snapshot.t
- [1255644](https://bugzilla.redhat.com/1255644): quota: display the size equivalent to the soft limit percentage in gluster v quota <volname> list* command
- [1255690](https://bugzilla.redhat.com/1255690): AFR: gluster v restart force or brick process restart doesn't heal the files
- [1255698](https://bugzilla.redhat.com/1255698): Write performance from a Windows client on 3-way replicated volume decreases substantially when one brick in the replica set is brought down
- [1256265](https://bugzilla.redhat.com/1256265): Data Loss: Remove brick commit passing when remove-brick process has not even started (due to killing glusterd)
- [1256283](https://bugzilla.redhat.com/1256283): [remove-brick]: Creation of file from NFS writes to the decommissioned subvolume and subsequent lookup from fuse creates a link
- [1256307](https://bugzilla.redhat.com/1256307): [Backup]: Glusterfind session entry persists even after volume is deleted
- [1256485](https://bugzilla.redhat.com/1256485): [Snapshot]/[NFS-Ganesha] mount point hangs upon snapshot create-activate and 'cd' into .snaps directory
- [1256605](https://bugzilla.redhat.com/1256605): `gluster volume heal <vol-name> split-brain` changes required for entry-split-brain
- [1256616](https://bugzilla.redhat.com/1256616): libgfapi : adding follow flag to glfs_h_lookupat()
- [1256669](https://bugzilla.redhat.com/1256669): Though scrubber settings were changed on only one volume, the log shows scrubber information for all volumes
- [1256702](https://bugzilla.redhat.com/1256702): remove-brick: avoid mknod op falling on decommissioned brick even after fix-layout has happened on parent directory
- [1256909](https://bugzilla.redhat.com/1256909): Unable to examine file in metadata split-brain after setting `replica.split-brain-choice` attribute to a particular replica
- [1257193](https://bugzilla.redhat.com/1257193): protocol server: Pending - porting log messages to a new framework
- [1257204](https://bugzilla.redhat.com/1257204): sharding - VM image size as seen from the mount keeps growing beyond configured size on a sharded volume
- [1257441](https://bugzilla.redhat.com/1257441): marker: set loc.parent if NULL
- [1257881](https://bugzilla.redhat.com/1257881): Quota list on a volume hangs after glusterd restart on a node.
- [1258306](https://bugzilla.redhat.com/1258306): bug-1238706-daemons-stop-on-peer-cleanup.t fails occasionally
- [1258344](https://bugzilla.redhat.com/1258344): tests: rebasing bad tests from mainline branch to release-3.7 branch

### Upgrade notes

#### Insecure ports by default

GlusterFS uses insecure ports by default starting with release 3.7.3. This causes problems when upgrading from release 3.7.2 or earlier to 3.7.3 or later. Performing the following steps before upgrading helps avoid these problems.

- Enable insecure ports for all volumes.

  ```
  gluster volume set <VOLNAME> server.allow-insecure on
  gluster volume set <VOLNAME> client.bind-insecure on
  ```
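
  If the cluster has many volumes, the same options can be applied in a loop. This is a minimal sketch, assuming `gluster volume list` prints one volume name per line:

  ```
  for vol in $(gluster volume list); do
      gluster volume set "$vol" server.allow-insecure on
      gluster volume set "$vol" client.bind-insecure on
  done
  ```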

- Enable insecure ports for GlusterD. Add the following line to `/etc/glusterfs/glusterd.vol`:

  ```
  option rpc-auth-allow-insecure on
  ```

  This needs to be done on all the members in the cluster.
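
  For reference, a `glusterd.vol` with this option added might look like the sketch below; the exact set of existing options varies between installations:

  ```
  volume management
      type mgmt/glusterd
      option working-directory /var/lib/glusterd
      option rpc-auth-allow-insecure on
  end-volume
  ```

  GlusterD must be restarted (for example, with `systemctl restart glusterd`) on each node for the change to take effect.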