# Release notes for Gluster 3.10.0

This is a major Gluster release that includes some substantial changes. The
features revolve around better support in container environments, scaling to a
larger number of bricks per node, and a few usability and performance
improvements, among other bug fixes.

The most notable features and changes are documented on this page. A full list
of bugs that have been addressed is included further below.

## Major changes and features

### Brick multiplexing
*Notes for users:*

*Limitations:*

*Known Issues:*

### Support to display op-version information from clients
*Notes for users:*

To get information on which op-versions are supported by the clients, users can
invoke the `gluster volume status` command for clients. Along with information
on hostname, port, bytes read, bytes written, and the number of clients
connected per brick, the output now also includes the op-version on which each
client operates. Following is an example of the usage:

```bash
# gluster volume status <VOLNAME|all> clients
```

*Limitations:*

*Known Issues:*

### Support to get maximum op-version in a heterogeneous cluster
*Notes for users:*

A heterogeneous cluster operates on a common op-version that can be supported
across all the nodes in the trusted storage pool. Upon an upgrade of the nodes
in the cluster, the cluster might support a higher op-version. Users can
retrieve the maximum op-version to which the cluster could be bumped by
invoking the `gluster volume get` command on the newly introduced global
option, `cluster.max-op-version`. The usage is as follows:

```bash
# gluster volume get all cluster.max-op-version
```

*Limitations:*

*Known Issues:*

### Support for rebalance time to completion estimation
*Notes for users:*
Users can now see approximately how much time the rebalance
operation will take to complete across all nodes.

The estimated time left for rebalance to complete is displayed
as part of the rebalance status. Use the command:

```bash
# gluster volume rebalance VOLNAME status
```

*Limitations:*
The rebalance process calculates the time left based on the rate at which
files are processed on the node and the total number of files on the brick,
which is determined using statfs. The limitations of this approach are:

 * A single fs partition must host only one brick. Multiple bricks on
the same fs partition will cause the statfs results to be invalid.

 * The estimates are dynamic and are recalculated every time the rebalance
status command is invoked. The estimates become more accurate over time, so
short-running rebalance operations may not benefit.


*Known Issues:*
As glusterfs does not store the number of files on the brick, we use statfs to
estimate the number. The contents of the .glusterfs directory can significantly
skew this number and affect the calculated estimates.
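
For reference, the following is a minimal sketch of the rate-based arithmetic
described above. The numbers and variable names are illustrative only; this is
not GlusterFS code:

```bash
#!/bin/bash
# Illustrative sketch of the rebalance time-left estimate, not GlusterFS internals.
elapsed=600            # seconds since rebalance started on this node
files_processed=12000  # files migrated so far
total_files=100000     # rough file count derived from statfs (f_files - f_ffree)

rate=$(( files_processed / elapsed ))                       # files per second
seconds_left=$(( (total_files - files_processed) / rate ))  # estimated time remaining
echo "estimated time left: ${seconds_left}s"
```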


### Separation of tier as its own service
*Notes for users:*

*Limitations:*

*Known Issues:*

### Switch to storhaug for HA for NFS-Ganesha and Samba
*Notes for users:*
storhaug has been packaged in Fedora and soon in the CentOS Storage SIG.

*Limitations:*

 * glusterd doesn't yet use storhaug.
 * storhaug packages are not yet available for Debian, Ubuntu, or SuSE.

*Known Issues:*

 * Packaging (glusterfs.spec(.in)) has no dependency on the storhaug rpm.
 * glusterd still starts ganesha.nfsd using systemctl or the init.d equivalent.
 * glusterd still tries to invoke ganesha-ha.sh to set up and tear down HA.

### Statedump support for gfapi based applications
*Notes for users:*

*Limitations:*

*Known Issues:*

### Disabled creation of trash directory by default
*Notes for users:*
From now onwards the trash directory, namely .trashcan, will not be created by
default upon creation of new volumes unless the feature is turned ON, and the
restrictions on it will be applicable as long as features.trash is set for a
particular volume.
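
For example, to turn the feature ON for a volume via the `features.trash`
option mentioned above:

```bash
# gluster volume set <VOLNAME> features.trash on
```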

*Limitations:*
After an upgrade, the trash directory will still be present at the root of
pre-existing volumes. Users who are not interested in this feature may have to
manually delete the directory from the mount point.

*Known Issues:*

### Implemented parallel readdirp with distribute xlator
*Notes for users:*

*Limitations:*

*Known Issues:*

### md-cache can optionally negative-cache the security.ima xattr
*Notes for users:*

*Limitations:*

*Known Issues:*

## Bugs addressed

A total of XXX patches have been sent, addressing YYY bugs:

- TODO