# Release notes for Gluster 4.1.0

This is a major release that includes a range of features enhancing management,
performance, and monitoring, and provides newer functionality such as thin
arbiters, cloud archival, and time consistency. It also contains several bug fixes.

A selection of the important features and changes is documented on this page.
A full list of bugs that have been addressed is included further below.

- [Announcements](#announcements)
- [Major changes and features](#major-changes-and-features)
- [Major issues](#major-issues)
- [Bugs addressed in the release](#bugs-addressed)

## Announcements

1. As 4.0 was a short term maintenance release, features that were included in
that release are also available with 4.1.0. These features may be of interest
to users upgrading to 4.1.0 from releases older than 4.0. The 4.0
[release notes](http://docs.gluster.org/en/latest/release-notes/) capture the list of features that were introduced with 4.0.

**NOTE:** As 4.0 was a short term maintenance release, it will reach end of
life (EOL) with the release of 4.1.0. ([reference](https://www.gluster.org/release-schedule/))

2. Releases that receive maintenance updates after the 4.1 release are 3.12 and
4.1 ([reference](https://www.gluster.org/release-schedule/))

**NOTE:** The 3.10 long term maintenance release will reach end of life (EOL)
with the release of 4.1.0. ([reference](https://www.gluster.org/release-schedule/))

3. Continuing with this release, the CentOS storage SIG will not build server
packages for CentOS6. Server packages will be available for CentOS7 only. For
ease of migrations, client packages on CentOS6 will be published and maintained.

**NOTE**: This change was announced [here](http://lists.gluster.org/pipermail/gluster-users/2018-January/033212.html)

## Major changes and features

Features are categorized into the following sections:

- [Management](#management)
- [Monitoring](#monitoring)
- [Performance](#performance)
- [Standalone](#standalone)
- [Developer related](#developer-related)

### Management

#### 1. GlusterD2

<TBD>

#### 2. Changes to Gluster based smb.conf share management

Previously, Gluster would delete the entire volume share section from smb.conf
either after a volume was stopped or when the user.cifs/user.smb volume set
options were disabled. With this release, volume share sections that were added
to smb.conf by the Samba hook scripts are no longer removed after a volume stop
or on disabling the user.cifs/user.smb volume set options. Instead, the
following share-specific smb.conf parameter is appended to the end of the
corresponding volume share section, to make it unavailable for client access:

```
available = no
```

This ensures that additional smb.conf parameters configured externally are
retained. For more details on this parameter, search for
"available (S)" in the [smb.conf(5)](https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html) manual page.
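For illustration, after stopping a hypothetical volume `vol0`, the share
section retains its externally configured parameters and gains the new
parameter. The section name, parameters, and values below are an example sketch
rather than output from a live system:

```
[gluster-vol0]
comment = Samba share for volume vol0
vfs objects = glusterfs
glusterfs:volume = vol0
path = /
read only = no
; externally added parameters above are retained; only this is appended:
available = no
```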

### Monitoring

Various xlators are enhanced to provide additional metrics that help in
determining the effectiveness of the xlator under various workloads.

These metrics can be dumped and visualized as detailed [here](https://docs.gluster.org/en/latest/release-notes/4.0.0/#monitoring).
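If the 4.0 mechanism linked above is in use, a dump can typically be triggered
by sending the gluster process a `SIGUSR2` signal; the dump location shown
below is an assumption based on the 4.0 monitoring notes and may vary by
platform and build:

```
# locate the brick or client process to inspect
pgrep -af glusterfs

# trigger a metrics dump for the chosen process
kill -USR2 <pid of the gluster process>

# inspect the dumped metrics
ls /var/run/gluster/metrics/
```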

#### 1. Additional metrics added to negative lookup cache xlator

Metrics added are:
  - negative_lookup_hit_count
  - negative_lookup_miss_count
  - get_real_filename_hit_count
  - get_real_filename_miss_count
  - nameless_lookup_count
  - inodes_with_positive_dentry_cache
  - inodes_with_negative_dentry_cache
  - dentry_invalidations_recieved
  - cache_limit
  - consumed_cache_size
  - inode_limit
  - consumed_inodes

#### 2. Additional metrics added to md-cache xlator

Metrics added are:
  - stat_cache_hit_count
  - stat_cache_miss_count
  - xattr_cache_hit_count
  - xattr_cache_miss_count
  - nameless_lookup_count
  - negative_lookup_count
  - stat_cache_invalidations_received
  - xattr_cache_invalidations_received

#### 3. Additional metrics added to quick-read xlator

Metrics added are:
  - total_files_cached
  - total_cache_used
  - cache-hit
  - cache-miss
  - cache-invalidations

### Performance

#### 1. Support for fuse writeback cache

Gluster FUSE mounts now support the FUSE extension that leverages the kernel
"writeback cache".

For usage help see `man 8 glusterfs` and `man 8 mount.glusterfs`, specifically
the options `--kernel-writeback-cache` and `--attr-times-granularity`.
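For example, assuming mount.glusterfs passes these options through like other
FUSE tunables (option names per the man pages referenced above; the granularity
value is a placeholder):

```
# mount -t glusterfs -o kernel-writeback-cache=yes,attr-times-granularity=<ns> <server>:<volname> <mntpoint>
```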

#### 2. Extended eager-lock to metadata transactions in replicate xlator

The eager lock feature in the replicate xlator is extended to support metadata
transactions in addition to data transactions. This improves performance when
the workload involves frequent metadata updates. This is typically seen with
sharded volumes by default, and in other workloads that incur a higher rate of
metadata modifications to the same set of files.

As a part of this feature, the compound FOPs feature in AFR is deprecated;
volumes that are configured to leverage compounding will start disregarding the
option `use-compound-fops`.

**NOTE:** This is an internal change in AFR xlator and is not user controlled
or configurable.

#### 3. Support for multi-threaded fuse readers

FUSE based mounts can specify the number of FUSE request processing threads
during a mount. For workloads that have high concurrency on a single client,
this helps process FUSE requests in parallel, rather than with the existing
single reader model.

This is provided as a mount time option named `reader-thread-count` and can be
used as follows:
```
# mount -t glusterfs -o reader-thread-count=<n> <server>:<volname> <mntpoint>
```

#### 4. Configurable aggregate size for write-behind xlator

The write-behind xlator provides the option `performance.aggregate-size` to
enable configurable aggregate write sizes. This option enables the write-behind
xlator to aggregate writes up to the specified size before they are sent to the
bricks.

The existing behaviour sets this size to a maximum of 128KB per file. The
configurable option provides the ability to tune this up or down based on the
workload, to improve write performance.

Usage:
```
# gluster volume set <volname> performance.aggregate-size <size>
```

#### 5. Adaptive read replica selection based on queue length

The AFR xlator is enhanced with a new value for the option `read-hash-mode`.
Setting this option to `3` will distribute reads across AFR subvolumes based on
the subvolume with the least outstanding read requests.

This helps better distribute reads, and hence improves read performance in
replicate based volumes.
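A minimal sketch of enabling this mode follows; the `cluster.` prefix is the
usual namespace for AFR options, but verify the exact key with
`gluster volume set help` on your installation:

```
# gluster volume set <volname> cluster.read-hash-mode 3
```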

### Standalone

#### 1. Thin arbiter quorum for 2-way replication

**NOTE:** This feature is available only with GlusterD2

<TBD>
  - What is it?
  - How to configure/setup?
  - How to convert an existing 2-way to thin-arbiter?
  - Better would be to add all these to gluster docs and point to that here,
  with a short introduction to the feature, and its limitations at present.

#### 2. Automatically configure backup volfile servers in clients

**NOTE:** This feature is available only with GlusterD2

Clients connecting to and mounting a Gluster volume will automatically fetch
and configure backup volfile servers. These are used for future volfile updates
and fetches, when the initial server used to fetch the volfile and mount is
down.

When using glusterd, this is achieved using the FUSE mount option
`backup-volfile-servers`, and when using GlusterD2 this is done automatically.
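For glusterd based deployments, the equivalent manual configuration looks like
this (server names are placeholders):

```
# mount -t glusterfs -o backup-volfile-servers=<server2>:<server3> <server1>:<volname> <mntpoint>
```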

#### 3. Python code base is now python3 compatible

All Python code in Gluster that is installed by the various packages is
Python 3 compatible.

#### 4. (c/m)time equivalence across replicate and disperse subvolumes

Enabling the utime feature allows Gluster to maintain consistent change and
modification time stamps on files and directories across bricks.

This feature is useful when applications are sensitive to time deltas between
operations (for example, tar may report "file changed as we read it"), as it
maintains and reports equal time stamps on a file across the subvolumes.

To enable the feature, use:
```
# gluster volume set <volname> features.utime on
```

**Limitations**:
<TBD>

### Developer related

#### 1. New API for acquiring leases and acting on lease recalls

gfapi now provides a new API to acquire a lease on an open file, and to receive
callbacks when the lease is recalled.

Refer to the [header](https://github.com/gluster/glusterfs/blob/release-4.1/api/src/glfs.h#L1112) for details on how to use this API.

#### 2. Extended language bindings for gfapi to include perl

See [libgfapi-perl](https://github.com/gluster/libgfapi-perl) - libgfapi bindings for Perl using FFI.

## Major issues

**None**

## Bugs addressed

Bugs addressed since release-4.0.0 are listed below.

<TBD>