# Release notes for Gluster 4.1.0

This is a major release that includes a range of features enhancing management,
performance, and monitoring, and that provides newer functionality such as thin
arbiters, cloud archival, and time consistency. It also contains several bug fixes.

A selection of the important features and changes are documented on this page.
A full list of bugs that have been addressed is included further below.

- [Announcements](#announcements)
- [Major changes and features](#major-changes-and-features)
- [Major issues](#major-issues)
- [Bugs addressed in the release](#bugs-addressed)

## Announcements

1. As 4.0 was a short term maintenance release, features which have been
included in that release are available with 4.1.0 as well. These features may
be of interest to users upgrading to 4.1.0 from releases older than 4.0. The 4.0
[release notes](http://docs.gluster.org/en/latest/release-notes/) capture the list of features that were introduced with 4.0.

**NOTE:** As 4.0 was a short term maintenance release, it will reach end of
life (EOL) with the release of 4.1.0. ([reference](https://www.gluster.org/release-schedule/))

2. Releases that receive maintenance updates post the 4.1 release are 3.12 and
4.1 ([reference](https://www.gluster.org/release-schedule/))

**NOTE:** The 3.10 long term maintenance release will reach end of life (EOL)
with the release of 4.1.0. ([reference](https://www.gluster.org/release-schedule/))

3. Continuing with this release, the CentOS storage SIG will not build server
packages for CentOS6. Server packages will be available for CentOS7 only. For
ease of migration, client packages on CentOS6 will be published and maintained.

**NOTE**: This change was announced [here](http://lists.gluster.org/pipermail/gluster-users/2018-January/033212.html)

## Major changes and features

Features are categorized into the following sections,

- [Management](#management)
- [Monitoring](#monitoring)
- [Performance](#performance)
- [Standalone](#standalone)
- [Developer related](#developer-related)

### Management

#### 1. GlusterD2

> **IMP:** GlusterD2 in Gluster-4.1.0 is still considered a preview and is
> experimental. It should not be considered for production use. Users should
> still expect breaking changes to be possible, though efforts will be taken to
> avoid such changes. As GD2 is still under heavy development, new features can
> be expected throughout the 4.1 release.

GD2 brings initial support for rebalance, snapshots, and intelligent volume
provisioning, along with a lot of other bug fixes and internal changes.

##### Rebalance [#786](https://github.com/gluster/glusterd2/pull/786)

GD2 supports running rebalance on volumes. Supported rebalance operations include,
- rebalance start
  - rebalance start with fix-layout
- rebalance stop
- rebalance status

Support only exists in the ReST API right now. CLI support will be introduced in subsequent releases.
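
For illustration, a rebalance start request over the ReST API might look like
the sketch below; the endpoint path and port are assumptions made here, and the
pull-request above documents the actual API.

```
# Hypothetical sketch: the endpoint path and default port are assumptions
curl -X POST http://<server>:24007/v1/volumes/<volname>/rebalance/start
```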

##### Snapshot [#533](https://github.com/gluster/glusterd2/pull/533)

Initial support for volume snapshot has been introduced. At the moment, snapshots are supported only on Thin-LVM bricks.

Supported snapshot operations include,
- create
- activate/deactivate
- list
- info
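
A hypothetical snapshot create request through the ReST API is sketched below;
the endpoint path, port, and request body are assumptions made here, and the
pull-request above documents the actual interface.

```
# Hypothetical sketch: endpoint, port, and request body are assumptions
curl -X POST http://<server>:24007/v1/snapshots \
     -H 'Content-Type: application/json' \
     -d '{"snapname": "<snapname>", "volname": "<volname>"}'
```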

##### Intelligent volume provisioning (IVP) [#661](https://github.com/gluster/glusterd2/pull/661)

GD2 brings a very early preview of intelligent volume creation, similar to
[Heketi](https://github.com/heketi/heketi).

> **IMP:** This is considered experimental, and the API and implementation are
> not final. It is very possible that both the API and the implementation will
> change.

IVP enables users to create volumes by just providing the expected volume type
and a size, without providing the brick layout. IVP is supported in the CLI via
the normal `volume create` command, as sketched below.
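
A minimal sketch of such a create follows; the flag names are assumptions made
here, and the pull-request documents the exact CLI syntax.

```
# Hypothetical sketch: flag names are assumptions, see the pull-request
glustercli volume create <volname> --replica 3 --size 1GiB
```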

More information on IVP can be found in the pull-request.

To support IVP, support for adding and managing block devices, and basic support
for zones, is available. [#783](https://github.com/gluster/glusterd2/pull/783) [#785](https://github.com/gluster/glusterd2/pull/785)

##### Other changes

Other notable changes include,
- Support for volume option levels (experimental, advanced, deprecated) [#591](https://github.com/gluster/glusterd2/pull/591)
- Support for resetting volume options [#545](https://github.com/gluster/glusterd2/pull/545)
- Option hooks for volume set [#708](https://github.com/gluster/glusterd2/pull/708)
- Support for setting quota options [#583](https://github.com/gluster/glusterd2/pull/583)
- Changes to transaction locking [#808](https://github.com/gluster/glusterd2/pull/808)
- Support for setting metadata on peers and volumes [#600](https://github.com/gluster/glusterd2/pull/600) [#689](https://github.com/gluster/glusterd2/pull/689) [#704](https://github.com/gluster/glusterd2/pull/704)
- Thin arbiter support [#673](https://github.com/gluster/glusterd2/pull/673) [#702](https://github.com/gluster/glusterd2/pull/702)

In addition to the above, many smaller bug fixes and enhancements to internal frameworks and tests have also been made.

##### Known issues

GD2 is still under heavy development and has lots of known bugs. For filing new bugs or tracking known bugs, please use the [GD2 github issue tracker](http://github.com/gluster/glusterd2/issues?q=is%3Aissue+is%3Aopen+label%3Abug).

#### 2. Changes to Gluster-based smb.conf share management

Previously, Gluster deleted the entire volume share section from smb.conf
either after a volume was stopped or when the user.cifs/user.smb volume set
options were disabled. With this release, volume share sections that were added
to smb.conf by the Samba hook scripts will no longer be removed after a volume
stop or on disabling the user.cifs/user.smb volume set options. Instead, the
following share specific smb.conf parameter is added to the end of the
corresponding volume share section to make it unavailable for client access:

```
available = no
```

This ensures that additional smb.conf parameters configured externally are
retained. For more details on this parameter, search for "available (S)" in the
[smb.conf(5)](https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html) manual page.

### Monitoring

Various xlators are enhanced to provide additional metrics that help in
determining the effectiveness of the xlator in various workloads.

These metrics can be dumped and visualized as detailed [here](https://docs.gluster.org/en/latest/release-notes/4.0.0/#monitoring).
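
As a quick reference, the sketch below assumes the SIGUSR2 based dump mechanism
and default dump location that were introduced with 4.0; the linked notes above
are authoritative.

```
# Assumption: SIGUSR2 triggers a metrics dump, as introduced in 4.0
kill -USR2 <pid-of-gluster-process>
# Assumption: dumps land in the default metrics directory
ls /var/run/gluster/metrics/
```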

#### 1. Additional metrics added to negative lookup cache xlator

Metrics added are:
  - negative_lookup_hit_count
  - negative_lookup_miss_count
  - get_real_filename_hit_count
  - get_real_filename_miss_count
  - nameless_lookup_count
  - inodes_with_positive_dentry_cache
  - inodes_with_negative_dentry_cache
  - dentry_invalidations_recieved
  - cache_limit
  - consumed_cache_size
  - inode_limit
  - consumed_inodes

#### 2. Additional metrics added to md-cache xlator

Metrics added are:
  - stat_cache_hit_count
  - stat_cache_miss_count
  - xattr_cache_hit_count
  - xattr_cache_miss_count
  - nameless_lookup_count
  - negative_lookup_count
  - stat_cache_invalidations_received
  - xattr_cache_invalidations_received

#### 3. Additional metrics added to quick-read xlator

Metrics added are:
  - total_files_cached
  - total_cache_used
  - cache-hit
  - cache-miss
  - cache-invalidations

### Performance

#### 1. Support for fuse writeback cache

Gluster FUSE mounts support a FUSE extension to leverage the kernel
"writeback cache".

For usage help, see `man 8 glusterfs` and `man 8 mount.glusterfs`, specifically
the options `-kernel-writeback-cache` and `-attr-times-granularity`.
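
A hedged sketch of a mount enabling the writeback cache follows; the exact
option syntax is an assumption here, and the man pages above are authoritative.

```
# Assumption: option syntax follows other mount.glusterfs options
mount -t glusterfs -o kernel-writeback-cache=yes,attr-times-granularity=<ns> <server>:<volname> <mntpoint>
```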

#### 2. Extended eager-lock to metadata transactions in replicate xlator

The eager lock feature in the replicate xlator is extended to support metadata
transactions in addition to data transactions. This helps improve performance
when there are frequent metadata updates in the workload. This is typically
seen with sharded volumes by default, and in other workloads that incur a
higher rate of metadata modifications to the same set of files.

As a part of this feature, the compound FOPs feature in AFR is deprecated;
volumes that are configured to leverage compounding will start disregarding
the option `use-compound-fops`.

**NOTE:** This is an internal change in AFR xlator and is not user controlled
or configurable.

#### 3. Support for multi-threaded fuse readers

FUSE based mounts can specify the number of FUSE request processing threads
during a mount. For workloads that have high concurrency on a single client,
this helps process FUSE requests in parallel, rather than with the existing
single reader model.

This is provided as a mount time option named `reader-thread-count` and can be
used as follows,
```
# mount -t glusterfs -o reader-thread-count=<n> <server>:<volname> <mntpoint>
```

#### 4. Configurable aggregate size for write-behind xlator

The write-behind xlator provides the option `performance.aggregate-size` to
enable configurable aggregate write sizes. This option enables the write-behind
xlator to aggregate writes up to the specified size before the writes are sent
to the bricks.

The existing behaviour set this size to a maximum of 128KB per file. The
configurable option provides the ability to tune this up or down, based on the
workload, to improve write performance.

Usage:
```
# gluster volume set <volname> performance.aggregate-size <size>
```

#### 5. Adaptive read replica selection based on queue length

The AFR xlator is enhanced with a newer value for the option `read-hash-mode`.
Setting this option to `3` will distribute reads across AFR subvolumes based
on the subvolume having the least outstanding read requests.

This helps distribute reads better across subvolumes, and hence improves read
performance, in replicate based volumes.
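
Usage, assuming the option's full name is `cluster.read-hash-mode` as in
previous releases:

```
# gluster volume set <volname> cluster.read-hash-mode 3
```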

### Standalone

#### 1. Thin arbiter quorum for 2-way replication

**NOTE:** This feature is available only with GlusterD2

<TBD>
  - What is it?
  - How to configure/setup?
  - How to convert an existing 2-way to thin-arbiter?
  - Better would be to add all these to gluster docs and point to that here,
  with a short introduction to the feature, and its limitations at present.

#### 2. Automatically configure backup volfile servers in clients

**NOTE:** This feature is available only with GlusterD2

Clients connecting and mounting a Gluster volume will automatically fetch and
configure backup volfile servers for future volfile updates and fetches, for
use when the initial server used to fetch the volfile and mount is down.

When using glusterd, this is achieved using the FUSE mount option
`backup-volfile-servers`, as shown below; when using GlusterD2, this is done
automatically.
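
For example, with glusterd, backup volfile servers are specified at mount time
as follows:

```
# mount -t glusterfs -o backup-volfile-servers=<server2>:<server3> <server1>:/<volname> <mntpoint>
```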

#### 3. Python code base is now python3 compatible

All Python code in Gluster that is installed by the various packages is
Python 3 compatible.

#### 4. (c/m)time equivalence across replicate and disperse subvolumes

Enabling the utime feature enables Gluster to maintain consistent change and
modification time stamps on files and directories across bricks.

This feature is useful when applications are sensitive to time deltas between
operations (for example, tar may report "file changed as we read it"), as it
maintains and reports equal time stamps on the file across the subvolumes.

To enable the feature use,
```
# gluster volume set <volname> features.utime on
```

**Limitations**:
<TBD>

### Developer related

#### 1. New API for acquiring leases and acting on lease recalls

gfapi provides a new API to acquire a lease on an open file, and to receive
callbacks when the lease is recalled.

Refer to the [header](https://github.com/gluster/glusterfs/blob/release-4.1/api/src/glfs.h#L1112) for details on how to use this API.

#### 2. Extended language bindings for gfapi to include perl

See [libgfapi-perl](https://github.com/gluster/libgfapi-perl), libgfapi bindings for Perl using FFI.

## Major issues

**None**

## Bugs addressed

Bugs addressed since release-4.0.0 are listed below.

<TBD>