# Release notes for Gluster 3.8.10

This is a bugfix release. The [Release Notes for 3.8.0](3.8.0.md),
[3.8.1](3.8.1.md), [3.8.2](3.8.2.md), [3.8.3](3.8.3.md), [3.8.4](3.8.4.md),
[3.8.5](3.8.5.md), [3.8.6](3.8.6.md), [3.8.7](3.8.7.md), [3.8.8](3.8.8.md) and
[3.8.9](3.8.9.md) contain a listing of all the new features that were added and
bugs fixed in the GlusterFS 3.8 stable release.


## Improved configuration with additional 'virt' options

This release includes 5 more options in the `virt` group (used for VM workloads)
for optimal performance.

Updating to the glusterfs version containing this patch will not automatically
set these newer options on already existing volumes that have the virt group
configured. The changes take effect only when, post-upgrade,

    # gluster volume set <VOL> group virt

is run again.
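
As a quick post-upgrade check, the effective value of any of these options can
be queried on the volume. This is only an illustrative verification step,
assuming the `gluster volume get` command is available on the upgraded nodes:

    # gluster volume get <VOL> cluster.shd-max-threads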

For already existing volumes, users may execute the following six commands, if
the options are not already set:

    # gluster volume set <VOL> performance.low-prio-threads 32
    # gluster volume set <VOL> cluster.locking-scheme granular
    # gluster volume set <VOL> features.shard on
    # gluster volume set <VOL> cluster.shd-max-threads 8
    # gluster volume set <VOL> cluster.shd-wait-qlength 10000
    # gluster volume set <VOL> user.cifs off

It is likely that `features.shard` was already set on the volume before the
upgrade, in which case the third `volume set` command above may be skipped.
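
To confirm whether sharding is already enabled before skipping that command,
the option can be queried directly (an optional check, again assuming
`gluster volume get` is available):

    # gluster volume get <VOL> features.shard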


## Bugs addressed

A total of 18 patches have been merged, addressing 16 bugs:

- [#1387878](https://bugzilla.redhat.com/1387878): Rebalance after add bricks corrupts files
- [#1412994](https://bugzilla.redhat.com/1412994): Memory leak on mount/fuse when setxattr fails
- [#1420993](https://bugzilla.redhat.com/1420993): Modified volume options not synced once offline nodes comes up.
- [#1422352](https://bugzilla.redhat.com/1422352): glustershd process crashed on systemic setup
- [#1422394](https://bugzilla.redhat.com/1422394): Gluster NFS server crashing in __mnt3svc_umountall
- [#1422811](https://bugzilla.redhat.com/1422811): [Geo-rep] Recreating geo-rep session with same slave after deleting with reset-sync-time fails to sync
- [#1424915](https://bugzilla.redhat.com/1424915): dht_setxattr returns EINVAL when a file is deleted during the FOP
- [#1424934](https://bugzilla.redhat.com/1424934): Include few more options in virt file
- [#1424974](https://bugzilla.redhat.com/1424974): remove-brick status shows 0 rebalanced files
- [#1425112](https://bugzilla.redhat.com/1425112): [Ganesha] : Unable to bring up a Ganesha HA cluster on RHEL 6.9.
- [#1425307](https://bugzilla.redhat.com/1425307): Fix statvfs for FreeBSD in Python
- [#1427390](https://bugzilla.redhat.com/1427390): systemic testing: seeing lot of ping time outs which would lead to splitbrains
- [#1427419](https://bugzilla.redhat.com/1427419): Warning messages throwing when EC volume offline brick comes up are difficult to understand for end user.
- [#1428743](https://bugzilla.redhat.com/1428743): Fix crash in dht resulting from tests/features/nuke.t
- [#1429312](https://bugzilla.redhat.com/1429312): Prevent reverse heal from happening
- [#1429405](https://bugzilla.redhat.com/1429405): Restore atime/mtime for symlinks and other non-regular files.