# Release notes for Gluster 3.13.0

This is a major Gluster release that includes a range of usability-related
features and GFAPI developer enhancements, among other bug fixes.

The most notable features and changes are documented on this page. A full list
of bugs that have been addressed is included further below.

## Major changes and features

### Addition of summary option to the heal info CLI

**Notes for users:**
The Gluster heal info CLI has been enhanced with a summary option that
displays, per brick, statistics on the entries pending heal, in split-brain,
and currently being healed.

Usage:
```
# gluster volume heal <volname> info summary
```

Sample output:
```
Brick <brickname>
Status: Connected
Total Number of entries: 3
Number of entries in heal pending: 2
Number of entries in split-brain: 1
Number of entries possibly healing: 0

Brick <brickname>
Status: Connected
Total Number of entries: 4
Number of entries in heal pending: 3
Number of entries in split-brain: 1
Number of entries possibly healing: 0
```

This option also supports XML-format output when the CLI is invoked with the
--xml option.
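
For example:
```
# gluster volume heal <volname> info summary --xml
```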

NOTE: Summary information is gathered in the same manner as the detailed
information, so the command takes about as long to complete as before; it is
not faster.

**Limitations:**

None

**Known Issues:**

None

### Addition of checks for allowing lookups in AFR and removal of the 'cluster.quorum-reads' volume option

**Notes for users:**

Traditionally, AFR has never failed lookups unless there is a gfid mismatch.
This behavior changes with this release, as a part of fixing
[Bug#1515572](https://bugzilla.redhat.com/show_bug.cgi?id=1515572).

Lookups in replica-3 and arbiter volumes will now succeed only if there is
quorum and a good copy of the file exists. That is, the lookup has to succeed
on a quorum of bricks, and at least one of them has to hold a good copy. If
these conditions are not met, the operation fails with the ENOTCONN error.

As a part of this change, the cluster.quorum-reads volume option is removed:
a lookup failure now causes all subsequent operations (including reads) to
fail, which makes this option redundant.

Enforcing this strictness also helps prevent a long-standing
rename-leading-to-data-loss [Bug#1366818](https://bugzilla.redhat.com/show_bug.cgi?id=1366818), by disallowing lookups (and thus
renames) when a good copy is not available.

Note: These checks do not affect replica 2 volumes, where lookups work as
before, even when only 1 brick is online.

Further reference: [mailing list discussions on the topic](http://lists.gluster.org/pipermail/gluster-users/2017-September/032524.html)

### Support for max-port range in glusterd.vol

**Notes for users:**

For finer control over the range of ports that glusterd allocates for its
daemons, the upper limit can be defined via the max-port value in
glusterd.vol. The default max-port value is 65535, and the entry for this
option currently ships commented out. To configure it, set the desired value,
uncomment the option, and restart the glusterd service, as sketched below.
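
A sketch of the relevant stanza in glusterd.vol; the surrounding options are
abbreviated here, and 60999 is an illustrative upper limit:

```
volume management
    type mgmt/glusterd
    # ... other options ...
    option max-port 60999
end-volume
```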

**Limitations:**


**Known Issues:**


### Prevention of other processes accessing the mounted brick snapshots

**Notes for users:**


**Limitations:**


**Known Issues:**


### Thin client feature

**Notes for users:**


**Limitations:**


**Known Issues:**


### Ability to reserve backend storage space

**Notes for users:**
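
A hedged usage sketch, assuming this feature is exposed through the
storage.reserve volume option, which takes a percentage of brick space to
keep free (the option name is an assumption in this sketch):

```
# gluster volume set <volname> storage.reserve <percentage>
```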


**Limitations:**


**Known Issues:**


### List all the connected clients for a brick, and the bricks/snapshots exported by each brick process

**Notes for users:**
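
A hedged usage sketch, assuming this is exposed as a client-list subcommand
of volume status (the subcommand name is an assumption in this sketch):

```
# gluster volume status <volname> client-list
```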


**Limitations:**


**Known Issues:**


### Improved write performance with the Disperse xlator, by introducing parallel writes to a file

**Notes for users:**
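
A hedged sketch, assuming the behavior is governed by a
disperse.parallel-writes volume option (the option name is an assumption in
this sketch):

```
# gluster volume set <volname> disperse.parallel-writes on
```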


**Limitations:**


**Known Issues:**


### Disperse xlator now supports discard operations

**Notes for users:**
This feature enables users to punch holes in files created on disperse
volumes.

Usage:
```
# fallocate -p -o <offset> -l <len> <file_name>
```
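
For example, to punch a 1 MiB hole at the start of a file on a disperse
volume mount (illustrative path and sizes):

```
# fallocate -p -o 0 -l 1MiB /mnt/glustervol/file.img
```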

**Limitations:**

None.

**Known Issues:**

None.

### Addressed several compilation warnings with gcc 7.x

**Notes for users:**


**Limitations:**


**Known Issues:**


### Included details about memory pools in statedumps

**Notes for users:**
For troubleshooting purposes it is sometimes useful to verify the memory
allocations done by Gluster. A previous release of Gluster included a rewrite
of the memory pool internals; since those changes, `statedump`s no longer
included details about the memory pools.

This version of Gluster adds details about the memory pools in use back to
the `statedump`, making troubleshooting of memory consumption problems much
more efficient again.
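
For reference, a statedump for a volume can be triggered with the standard
CLI; the dump files are written on the server side, commonly under
/var/run/gluster (the exact location can vary by installation):

```
# gluster volume statedump <volname>
```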

**Limitations:**
There are currently no statistics included in the `statedump` about the
actual behavior of the memory pools. This means that the efficiency of the
memory pools cannot be verified.


### Gluster APIs added to register callback functions for upcalls

**Notes for developers:**
New APIs have been added to allow gfapi applications to register and
unregister for upcall events. Along with the list of events they are
interested in, applications now have to register a callback function. This
routine is invoked asynchronously, in a gluster thread context, whenever the
backend server sends an upcall.

```c
int glfs_upcall_register (struct glfs *fs, uint32_t event_list,
                          glfs_upcall_cbk cbk, void *data);
int glfs_upcall_unregister (struct glfs *fs, uint32_t event_list);
```
The libgfapi [header](https://github.com/gluster/glusterfs/blob/release-3.13/api/src/glfs.h#L970) file includes the complete synopsis of these APIs' definitions and their usage.
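
A minimal sketch of the registration flow; the callback signature, the
GLFS_EVENT_ANY event mask, and the release of the upcall object via
`glfs_free()` are assumptions here, and the linked glfs.h synopsis is
authoritative:

```c
#include <stdio.h>
#include <glusterfs/api/glfs.h>

/* Invoked asynchronously from a gluster thread whenever the backend
 * server sends an upcall. The signature follows the glfs_upcall_cbk
 * typedef as assumed here; verify against glfs.h. */
static void
upcall_handler(struct glfs_upcall *up_arg, void *data)
{
    (void)data;
    fprintf(stderr, "upcall event received\n");
    glfs_free(up_arg); /* upcall objects are released with glfs_free() */
}

/* Register the handler for all upcall events; GLFS_EVENT_ANY is an
 * assumed catch-all mask (see glfs.h for the defined events). */
static int
setup_upcalls(struct glfs *fs)
{
    return glfs_upcall_register(fs, GLFS_EVENT_ANY, upcall_handler, NULL);
}
```

Such a program would be linked against gfapi (for example via the
glusterfs-api pkg-config file).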


**Limitations:**
An application can register only a single callback function for all the upcall
events it is interested in.

**Known Issues:**
[Bug#1515748](https://bugzilla.redhat.com/show_bug.cgi?id=1515748) GlusterFS server should be able to identify the clients which
registered for upcalls and notify only those clients in case of such events

### Gluster API added with a `glfs_mem_header` for exported memory

**Notes for developers:**
Memory allocations done in `libgfapi` that return a structure to the calling
application should use `GLFS_CALLOC()` and friends. Applications can then
correctly free the memory by calling `glfs_free()`.

This is implemented with a new `glfs_mem_header` similar to how the memory
allocations are done with `GF_CALLOC()` etc. The new header includes a
`release()` function pointer that gets called to free the resource when the
application calls `glfs_free()`.

The change is a major improvement for allocating and freeing resources in a
standardized way that is transparent to `libgfapi` applications.

### Provided a new xlator to delay fops, to aid slow brick response simulation and debugging

**Notes for developers:**


**Limitations:**


**Known Issues:**


## Major issues
1. Expanding a gluster volume that is sharded may cause file corruption
    - Sharded volumes are typically used for VM images; if such volumes are
  expanded or possibly contracted (i.e., add/remove bricks and rebalance),
  there are reports of VM images getting corrupted.
    - The last known cause for corruption
  ([Bug#1515434](https://bugzilla.redhat.com/show_bug.cgi?id=1515434)) has a
  fix with this release. As further testing is still in progress, the issue
  is retained as a major issue.
    - The status of this bug can be tracked in
  [Bug#1515434](https://bugzilla.redhat.com/show_bug.cgi?id=1515434).

## Bugs addressed

Bugs addressed since release-3.12.0 are listed below.

**To Be Done**