## Release Notes for GlusterFS 3.5.3

This is a bugfix release. The [Release Notes for 3.5.0](3.5.0.md),
[3.5.1](3.5.1.md) and [3.5.2](3.5.2.md) contain a listing of all the new
features that were added and bugs fixed in the GlusterFS 3.5 stable release.

### Bugs Fixed:

- [1081016](https://bugzilla.redhat.com/1081016): glusterd needs xfsprogs and e2fsprogs packages
- [1100204](https://bugzilla.redhat.com/1100204): brick failure detection does not work for ext4 filesystems
- [1126801](https://bugzilla.redhat.com/1126801): glusterfs logrotate config file pollutes global config
- [1129527](https://bugzilla.redhat.com/1129527): DHT :- data loss - file is missing on renaming same file from multiple client at same time
- [1129541](https://bugzilla.redhat.com/1129541): [DHT:REBALANCE]: Rebalance failures are seen with error message "remote operation failed: File exists"
- [1132391](https://bugzilla.redhat.com/1132391): NFS interoperability problem: stripe-xlator removes EOF at end of READDIR
- [1133949](https://bugzilla.redhat.com/1133949): Minor typo in afr logging
- [1136221](https://bugzilla.redhat.com/1136221): Memory is exhausted quickly when handling a message that has multiple fragments in a single record
- [1136835](https://bugzilla.redhat.com/1136835): crash on fsync
- [1138922](https://bugzilla.redhat.com/1138922): DHT + rebalance : rebalance process crashed + data loss + few Directories are present on sub-volumes but not visible on mount point + lookup is not healing directories
- [1139103](https://bugzilla.redhat.com/1139103): DHT + Snapshot :- If snapshot is taken when Directory is created only on hashed sub-vol; On restoring that snapshot Directory is not listed on mount point and lookup on parent is not healing
- [1139170](https://bugzilla.redhat.com/1139170): DHT :- rm -rf is not removing stale link file and because of that unable to create file having same name as stale link file
- [1139245](https://bugzilla.redhat.com/1139245): vdsm invoked oom-killer during rebalance and Killed process 4305, UID 0, (glusterfs nfs process)
- [1140338](https://bugzilla.redhat.com/1140338): rebalance is not resulting in the hash layout changes being available to nfs client
- [1140348](https://bugzilla.redhat.com/1140348): Renaming file while rebalance is in progress causes data loss
- [1140549](https://bugzilla.redhat.com/1140549): DHT: Rebalance process crash after add-brick and `rebalance start` operation
- [1140556](https://bugzilla.redhat.com/1140556): Core: client crash while doing rename operations on the mount
- [1141558](https://bugzilla.redhat.com/1141558): AFR : "gluster volume heal <volume_name> info" prints some random characters
- [1141733](https://bugzilla.redhat.com/1141733): data loss when rebalance + renames are in progress and bricks from replica pairs go down and come back
- [1142052](https://bugzilla.redhat.com/1142052): Very high memory usage during rebalance
- [1142614](https://bugzilla.redhat.com/1142614): files with open fds getting into split-brain when bricks go offline and come back online
- [1144315](https://bugzilla.redhat.com/1144315): core: all brick processes crash when quota is enabled
- [1145000](https://bugzilla.redhat.com/1145000): Spec %post server does not wait for the old glusterd to exit
- [1147156](https://bugzilla.redhat.com/1147156): AFR client segmentation fault in afr_priv_destroy
- [1147243](https://bugzilla.redhat.com/1147243): nfs: volume set help says the rmtab file is in "/var/lib/glusterd/rmtab"
- [1149857](https://bugzilla.redhat.com/1149857): Option transport.socket.bind-address ignored
- [1153626](https://bugzilla.redhat.com/1153626): Sizeof bug for allocation of memory in afr_lookup
- [1153629](https://bugzilla.redhat.com/1153629): AFR : excessive logging of "Non blocking entrylks failed" in glfsheal log file.
- [1153900](https://bugzilla.redhat.com/1153900): Enabling Quota on existing data won't create pgfid xattrs
- [1153904](https://bugzilla.redhat.com/1153904): self heal info logs are filled with messages reporting ENOENT while self-heal is going on
- [1155073](https://bugzilla.redhat.com/1155073): Excessive logging in the self-heal daemon after a replace-brick
- [1157661](https://bugzilla.redhat.com/1157661): GlusterFS allows insecure SSL modes

### Known Issues:

- The following configuration changes are necessary for 'qemu' and 'samba vfs
  plugin' integration with libgfapi to work seamlessly:

   1. `gluster volume set <volname> server.allow-insecure on`
   2. Restart the volume:

       ~~~
       gluster volume stop <volname>
       gluster volume start <volname>
       ~~~

   3. Edit `/etc/glusterfs/glusterd.vol` to contain this line:

       ~~~
       option rpc-auth-allow-insecure on
       ~~~

   4. Restart glusterd:

       ~~~
       service glusterd restart
       ~~~

   More details are also documented in the Gluster Wiki on the [Libgfapi with qemu libvirt](http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt) page.
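
  As an illustration, once these settings are in place qemu can access images
  on the volume directly through libgfapi using its `gluster://` URI syntax.
  The server name `server1` and volume `testvol` below are placeholders, not
  part of these notes:

  ~~~
  # create a qcow2 image directly on the Gluster volume via libgfapi
  qemu-img create -f qcow2 gluster://server1/testvol/vm01.qcow2 10G

  # boot a VM from that image
  qemu-system-x86_64 -drive file=gluster://server1/testvol/vm01.qcow2,if=virtio
  ~~~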

- For Block Device translator based volumes, the open-behind translator on the
  client side needs to be disabled:

  ~~~
  gluster volume set <volname> performance.open-behind disabled
  ~~~
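
  The change can be verified with `gluster volume info`, which lists modified
  options under "Options Reconfigured", for example:

  ~~~
  gluster volume info <volname> | grep open-behind
  ~~~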

- libgfapi clients calling `glfs_fini` before a successful `glfs_init` will cause the client to
  hang as reported [here](http://lists.gnu.org/archive/html/gluster-devel/2014-04/msg00179.html).
  The workaround is NOT to call `glfs_fini` for error cases encountered before a successful
  `glfs_init`. This is being tracked in [Bug 1134050](https://bugzilla.redhat.com/1134050) for
  glusterfs-3.5 and [Bug 1093594](https://bugzilla.redhat.com/1093594) for mainline.
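
  For libgfapi applications, a minimal sketch of the workaround in C follows;
  the volume name `testvol` and server `server1` are placeholders:

  ~~~
  #include <stdio.h>
  #include <glusterfs/api/glfs.h>

  int main(void)
  {
      glfs_t *fs = glfs_new("testvol");
      if (!fs)
          return 1;

      glfs_set_volfile_server(fs, "tcp", "server1", 24007);

      if (glfs_init(fs) != 0) {
          /* Workaround for Bug 1134050: do NOT call glfs_fini() here,
           * because glfs_init() did not succeed and glfs_fini() may hang. */
          fprintf(stderr, "glfs_init failed\n");
          return 1;
      }

      /* ... use the volume ... */

      glfs_fini(fs); /* safe: glfs_init() succeeded */
      return 0;
  }
  ~~~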

- If the `/var/run/gluster` directory does not exist, enabling quota will likely
  fail ([Bug 1117888](https://bugzilla.redhat.com/show_bug.cgi?id=1117888)).
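
  A likely workaround (an assumption based on the bug report, not a verified
  fix) is to create the directory manually before enabling quota:

  ~~~
  mkdir -p /var/run/gluster
  gluster volume quota <volname> enable
  ~~~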