# Work in progress release notes for Gluster 3.8.0 (RC2)

These are the current release notes for Release Candidate 2. Follow-up changes
will add more user-friendly notes and instructions.

The release notes are being worked on by maintainers and the developers of the
different features. Assistance from others is welcome! Contributions can be made
in [this etherpad](https://public.pad.fsfe.org/p/glusterfs-3.8-release-notes).

### Changes to building from the release tarball

By default, the release tarballs contain some of the scripts from the GNU
autotools projects. These scripts detect the environment where the software is
built, including the operating system, architecture and more.

Bundling these scripts in the tarball makes it mandatory for some distributions
to replace them with newer versions. The scripts are included from the host
operating system where the tarball is generated. If this is an older operating
system (like RHEL/CentOS-6), the scripts are not current enough for some build
targets.

Many distributions habitually replace the included `config.guess` and
`config.sub` scripts. Our intention was to not include these scripts in the
release tarball at all, but that breaks some builds. We have now replaced the
scripts with dummy ones, and expect the build environment to replace them, or
to run `./configure` with the appropriate `--host=..` and `--build=..`
parameters.
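
For example, when the bundled scripts are too old for the build environment,
the triplets can be passed explicitly (the values below are only illustrative):

```bash
./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu
```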

Building directly from the git repository has not changed.


### Mandatory lock support for Multiprotocol environment
*Notes for users:*
With this release, GlusterFS is now capable of performing file operations based
on core mandatory locking concepts. Apart from Linux kernel style semantics,
GlusterFS volumes can now be configured in a special mode where all traditional
fcntl locks are treated as mandatory, so that the presence of locks is detected
before every data-modifying file operation acting on a particular byte range.
This helps applications operate on more accurate data during concurrent access
to various byte ranges within a file. Please refer to the Administration Guide
for more details.

http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Mandatory%20Locks/
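
As a brief sketch, mandatory locking is controlled through a volume option (the
option name and mode value below follow the Administration Guide linked above;
verify them against your build):

```bash
# Treat all traditional fcntl locks on this volume as mandatory.
gluster volume set <VOLNAME> locks.mandatory-locking forced
```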

### Gluster/NFS disabled by default
*Notes for users:*
The legacy Gluster NFS server (a.k.a. gnfs) is now disabled by default when new
volumes are created. Users are encouraged to use NFS-Ganesha with FSAL_GLUSTER
instead of gnfs. NFS-Ganesha is a full-featured server that is being actively
developed and maintained. It supports NFSv3, NFSv4, and NFSv4.1. The
documentation
(http://gluster.readthedocs.io/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Intergration/)
describes how to configure and use NFS-Ganesha. Users that prefer the gnfs
server (NFSv3 only) can enable the service per volume with the following
command:

```bash
# gluster volume set <VOLUME> nfs.disable false
```

Existing volumes that have gnfs enabled will remain enabled unless explicitly
disabled. You cannot run both gnfs and NFS-Ganesha servers on the same host.

The plan is to phase gnfs out of Gluster over the next several releases,
starting with documenting it as officially deprecated, then not compiling and
packaging the components, and ultimately removing the component sources from the
source tree.

### SEEK
*Notes for users:*
All modern filesystems support SEEK_DATA and SEEK_HOLE with the lseek()
system call. This improves performance when reading sparse files. GlusterFS now
supports the SEEK operation as well. Linux kernel 4.5 comes with an improved
FUSE module where lseek() can be used, and QEMU can now detect holes in VM
images when using its Gluster block driver.
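
An illustrative sketch, assuming the volume is FUSE-mounted at a hypothetical
`/mnt/glustervol` on a Linux 4.5+ kernel; sparse-aware tools can then skip
holes instead of reading zeroes:

```bash
# Create a 1 GiB file that is almost entirely hole.
truncate -s 1G /mnt/glustervol/sparse.img
# GNU cp uses lseek(SEEK_DATA/SEEK_HOLE) where available, so the copy
# reads only the allocated ranges and stays sparse.
cp --sparse=always /mnt/glustervol/sparse.img /mnt/glustervol/copy.img
du -h --apparent-size /mnt/glustervol/copy.img   # apparent size: 1G
du -h /mnt/glustervol/copy.img                   # allocated: close to 0
```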

*Limitations:*
The deprecated stripe functionality has not been extended with SEEK. SEEK for
sharding has not been implemented yet and is expected to follow in a later 3.8
update (bug 1301647). NFS-Ganesha will support SEEK over NFSv4 in the near
future, possibly with the upcoming nfs-ganesha 2.4.

### Tiering aware Geo-replication
*Notes for users:*
Tiering moves files between hot and cold tier bricks, and geo-replication syncs
files from bricks in the master volume to the slave volume. With this feature,
users can configure a geo-replication session on a tiered volume.

*Limitations:*
Configuring a geo-replication session on a tiered volume is the same as
before, but a few steps need to be followed before attaching or detaching a
tier:

Before attaching a tier to a volume with an existing geo-replication session,
the session needs to be stopped. Please find detailed steps here:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Data_Tiering-Attach_Volumes.html#idp11442496

Before detaching a tier from a tiered volume with an existing geo-replication
session, a checkpoint of the session needs to be completed. Please find
detailed steps here:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Data_Tiering-Detach_Tier.html#idp32905264
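
A condensed sketch of both procedures, with hypothetical volume, host and brick
names (see the linked guides for the complete steps):

```bash
# Stop geo-replication before attaching the hot tier, then restart it.
gluster volume geo-replication mastervol slavehost::slavevol stop
gluster volume tier mastervol attach replica 2 host1:/bricks/hot1 host2:/bricks/hot2
gluster volume geo-replication mastervol slavehost::slavevol start

# Before detaching the tier, set a checkpoint and wait for it to complete.
gluster volume geo-replication mastervol slavehost::slavevol config checkpoint now
gluster volume geo-replication mastervol slavehost::slavevol status detail
```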

### Enhance Quota enable/disable in glusterd
*Notes for users:*
This enhancement spawns a crawl process for each brick in the volume, so that
files are checked in parallel by an independent process per brick. This speeds
up the crawl, and thus the quota enable/disable process as a whole. With this
feature, the user no longer needs to wait for a long time after issuing a quota
enable or disable command.
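
The user-facing commands are unchanged; only their turnaround time improves:

```bash
# Each brick now runs its own crawler, so these complete much faster.
gluster volume quota <VOLNAME> enable
gluster volume quota <VOLNAME> disable
```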

### Automagic unsplit-brain by [ctime|mtime|size|majority] for AFR
*Notes for users:*
A new volume option called `cluster.favorite-child-policy` has been introduced.
It can be used to automatically resolve split-brains in replica volumes without
having to use the gluster CLI or the fuse-mount/setfattr-based methods to
manually select a source. Healing happens automatically based on the policy
that this option is set to. See `gluster volume set help | grep
cluster.favorite-child-policy -A3` for the various policies that you can set.
The default value is 'none', i.e. this feature is not enabled by default.
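
For example, to let the copy with the latest modification time win
automatically (policy names as in the section title):

```bash
gluster volume set <VOLNAME> cluster.favorite-child-policy mtime
# Setting it back to 'none' restores manual split-brain resolution.
gluster volume set <VOLNAME> cluster.favorite-child-policy none
```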

*Limitations:*
`cluster.favorite-child-policy` applies to all files of the volume. It is
assumed that if this option is enabled with a particular policy, you do not
want to examine the split-brain files on a per-file basis and resolve them
individually with different policies via the appropriate gluster split-brain
resolution CLI.

### glusterfs-coreutils packaged for Fedora and CentOS Storage SIG
*Notes for users:*
This is a set of coreutils designed to act on GlusterFS volumes using the
native C API, similar to standard Linux coreutils like cp, ls, mv, etc. Anyone
can use these utilities to access volumes directly, without mounting them via
some protocol. Please refer to the Admin Guide for more details:

http://gluster.readthedocs.org/en/latest/Administrator%20Guide/GlusterFS%20Coreutils/
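
A brief sketch with hypothetical host and volume names (see the linked guide
for the exact URI syntax):

```bash
# List, copy and read files over libgfapi without mounting the volume.
gfls glfs://server1/myvolume/
gfcp /etc/hosts glfs://server1/myvolume/hosts-backup
gfcat glfs://server1/myvolume/hosts-backup
```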

### WORM, Retention and Compliance
*Notes for users:*
This feature provides a WORM-based compliance/archiving solution in GlusterFS.
It adds a file-level WORM/Retention feature to the existing implementation of
the WORM translator, which works at the volume level. Users can switch between
either volume-level WORM or file-level WORM/Retention. The file-level feature
only works if the "read-only" and "worm" options on the volume are "off" and
the "worm-file-level" option is "on". A file can be in any of these three
states:

1. Normal: normal operations can be performed on the file
2. WORM-Retained: the file is immutable and undeletable
3. WORM: the file is immutable but deletable

Four volume set options have been added:
1. `features.worm-file-level`: enables the file-level WORM/Retention feature.
   It is "off" by default.
2. `features.retention-mode`: takes two values:
  1. `relax`: allows users to increase or decrease the retention period of a
     WORM/Retained file (it cannot be decreased below the modification time of
     the file).
  2. `enterprise`: allows users only to increase the retention period of a
     WORM/Retained file.
3. `features.auto-commit-period`: the time period at/after which the
   auto-commit feature should look for dormant files to transition. The default
   value is 180 seconds.
4. `features.default-retention-period`: the time period for which a file should
   remain undeletable. This value is also used to find dormant files, i.e.,
   files that have not been modified for this long qualify for the state
   transition. The default value is 120 seconds.

Users can trigger the transition manually using `chmod -w <filename>` or an
equivalent command, or the lazy auto-commit will perform the transition when
I/O is triggered after the timeouts for untouched files have passed. The next
I/O (link, unlink, rename, truncate) will then cause the transition.
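
A sketch that ties the options above together (the periods shown are the
defaults; the mount path is hypothetical):

```bash
# File-level WORM/Retention requires volume-level WORM and read-only off.
gluster volume set <VOLNAME> features.read-only off
gluster volume set <VOLNAME> features.worm off
gluster volume set <VOLNAME> features.worm-file-level on
gluster volume set <VOLNAME> features.retention-mode relax
gluster volume set <VOLNAME> features.auto-commit-period 180
gluster volume set <VOLNAME> features.default-retention-period 120
# Manually transition a file to the WORM-Retained state.
chmod -w /mnt/<VOLNAME>/important-file
```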

*Limitations:*
1. No data validation of read-only data, i.e., integration with BitRot is not
   done.
2. Internal operations like tiering, rebalancing and self-healing will fail on
   WORMed files.
3. No control over ctime.

### Lock migration
*Notes for users:*
Until this release, the lock state of a file was lost when the file moved to
another brick as part of a rebalance. With the new lock migration feature, the
locks associated with a file are migrated during a rebalance operation.

Users can enable this feature with the following command:

```bash
gluster volume set <vol-name> lock-migration on
```

*Limitations:*
The current implementation is experimental and hence not recommended for
production environments. The feature is planned to be stabilized in future
releases. Feedback from the community is welcome and greatly appreciated.

### Granular Entry self-heal for AFR
*Notes for users:*
This feature can be enabled with the following command:

```bash
gluster volume set <vol-name> granular-entry-heal on
```

*Limitations:*
1. The feature is not backward compatible, so please enable the option only
   after you have upgraded all your clients and servers to 3.8 and the
   op-version is 30800.
2. Make sure the volume is stopped and there is no pending heal before you
   enable the feature.

### Gdeploy packaged for Fedora and EPEL
*Notes for users:*
With gdeploy, deployment and configuration are a lot easier: it abstracts away
the complexity of learning and writing YAML files, and reusing gdeploy
configuration files with slight modifications is much easier than editing the
YAML files and debugging their errors.

Setting up a GlusterFS volume involves quite a few tasks, like:
1. Setting up PVs, VGs and LVs (thinpools if necessary).
2. Peer probing the nodes.
3. Running the CLI to create the volume (which can get lengthy and error prone
   as the number of nodes increases).

gdeploy simplifies the above tasks and adds many more useful features, like
installing packages, handling volumes remotely, and setting volume options
while creating the volume.
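
A minimal configuration sketch, assuming two hypothetical hosts and a
pre-created brick directory (key names per the gdeploy documentation; adjust
for a real deployment):

```bash
cat > /tmp/volume-create.conf <<'EOF'
[hosts]
server1.example.com
server2.example.com

[volume]
action=create
volname=sample_vol
replica=yes
replica_count=2
brick_dirs=/gluster/brick1/b1
force=yes
EOF

gdeploy -c /tmp/volume-create.conf
```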

*Limitations:*
gdeploy cannot perform periodic status checks or similar health monitoring of
the Gluster setup, and it does not keep track of the previous deployments you
have made. Since it does not keep any state, you need to provide every detail
that gdeploy requires at each stage of deployment.

### Glusterfind and Bareos Integration
*Notes for users:*
This is a first integration of Gluster with a backup and recovery application.
The integration consists of a Bareos plugin for GlusterFS and a Gluster Python
utility called glusterfind. It provides the ability to back up from, and
restore to, GlusterFS volumes via the libgfapi library, which interacts
directly with the GlusterFS server rather than going through a GlusterFS mount
point.

During a backup, the glusterfind utility speeds up full file listing by running
in parallel on the brick back-ends instead of using the more expensive READDIR
file operation needed when listing at a mount point. For incremental backups,
glusterfind picks up changed files from the file system changelogs instead of
crawling the entire file system scavenging for the files' modification times.
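
A sketch of the glusterfind session workflow that the plugin builds on
(hypothetical session and volume names):

```bash
# Create a session, take a full listing, then mark the session as backed up.
glusterfind create backupsession myvolume
glusterfind pre backupsession myvolume /tmp/full-list.txt
glusterfind post backupsession myvolume
# Subsequent "pre" runs emit only the changes since the last "post".
glusterfind pre backupsession myvolume /tmp/incremental-list.txt
```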

*Limitations:*
Since Bareos interfaces with GlusterFS via the libgfapi library and needs to
execute the glusterfind tool, Bareos needs to run on one of the Gluster cluster
nodes to make the most of it.

### Heketi
*Notes for users:*
Heketi provides a RESTful management interface that can be used to manage the
life cycle of GlusterFS volumes. With Heketi, cloud services like OpenStack
Manila, Kubernetes, and OpenShift can dynamically provision GlusterFS volumes
with any of the supported durability types.
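
For instance, a hedged sketch using the `heketi-cli` client against a Heketi
server at a hypothetical address:

```bash
# Create a 100 GiB 3-way replicated volume through the Heketi REST API.
export HEKETI_CLI_SERVER=http://heketi.example.com:8080
heketi-cli volume create --size=100 --durability=replicate --replica=3
```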

*Limitations:*
Currently, Heketi only provides volume create, delete, and expand commands.

## Bugs addressed

A total of 1685 (FIXME) patches have been sent, addressing 1154 (FIXME) bugs:

- [#789278](https://bugzilla.redhat.com/789278): Issues reported by Coverity static analysis tool
- [#1004332](https://bugzilla.redhat.com/1004332): Setting of any option using volume set fails when the clients are in older version.
- [#1054694](https://bugzilla.redhat.com/1054694): A replicated volume takes too much to come online when one server is down
- [#1075611](https://bugzilla.redhat.com/1075611): [FEAT] log: enhance gluster log format with message ID and standardize errno reporting
- [#1092414](https://bugzilla.redhat.com/1092414): Disable NFS by default
- [#1093692](https://bugzilla.redhat.com/1093692): Resource/Memory leak issues reported by Coverity.
- [#1094119](https://bugzilla.redhat.com/1094119): Remove replace-brick with data migration support from gluster cli
- [#1109180](https://bugzilla.redhat.com/1109180): Issues reported by Cppcheck static analysis tool
- [#1110262](https://bugzilla.redhat.com/1110262): suid,sgid,sticky bit on directories not preserved when doing add-brick
- [#1114847](https://bugzilla.redhat.com/1114847): glusterd logs are filled with  "readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)"
- [#1117886](https://bugzilla.redhat.com/1117886): Gluster not resolving hosts with IPv6 only lookups
- [#1122377](https://bugzilla.redhat.com/1122377): [SNAPSHOT]: activate and deactivate doesn't do a handshake when a glusterd comes  back
- [#1122395](https://bugzilla.redhat.com/1122395): man or info page of gluster needs to be updated with self-heal commands.
- [#1129939](https://bugzilla.redhat.com/1129939): NetBSD port
- [#1131275](https://bugzilla.redhat.com/1131275): I currently have no idea what rfc.sh is doing during at any specific moment
- [#1132465](https://bugzilla.redhat.com/1132465): [FEAT] Trash translator
- [#1141379](https://bugzilla.redhat.com/1141379): Geo-Replication - Fails to handle file renaming correctly between master and slave
- [#1142423](https://bugzilla.redhat.com/1142423): [DHT-REBALANCE]-DataLoss: The data appended to a file during its migration will be lost once the migration is done
- [#1143880](https://bugzilla.redhat.com/1143880): [FEAT] Exports and Netgroups Authentication for Gluster NFS mount
- [#1158654](https://bugzilla.redhat.com/1158654): [FEAT] Journal Based Replication (JBR - formerly NSR)
- [#1162905](https://bugzilla.redhat.com/1162905): hardcoded gsyncd path causes geo-replication to fail on non-redhat systems
- [#1163416](https://bugzilla.redhat.com/1163416): [USS]: From NFS, unable to go to .snaps directory (error: No such file or directory)
- [#1163543](https://bugzilla.redhat.com/1163543): Fix regression test spurious failures
- [#1165041](https://bugzilla.redhat.com/1165041): Different client can not execute "for((i=0;i<1000;i++));do ls -al;done" in a same directory at the sametime
- [#1166862](https://bugzilla.redhat.com/1166862): rmtab file is a bottleneck when lot of clients are accessing a volume through NFS
- [#1168819](https://bugzilla.redhat.com/1168819): [USS]: Need defined rules for snapshot-directory, setting to a/b works but in linux a/b is b is subdirectory of a
- [#1169317](https://bugzilla.redhat.com/1169317): rmtab file is a bottleneck when lot of clients are accessing a volume through NFS
- [#1170075](https://bugzilla.redhat.com/1170075): [RFE] : BitRot detection in glusterfs
- [#1171703](https://bugzilla.redhat.com/1171703): AFR+SNAPSHOT: File with hard link  have different inode number in USS
- [#1171954](https://bugzilla.redhat.com/1171954): [RFE] Rebalance Performance Improvements
- [#1174765](https://bugzilla.redhat.com/1174765): Hook scripts are not installed after make install
- [#1176062](https://bugzilla.redhat.com/1176062): Force replace-brick lead to the persistent write(use dd) return Input/output error
- [#1176837](https://bugzilla.redhat.com/1176837): [USS] : statfs call fails on USS.
- [#1178619](https://bugzilla.redhat.com/1178619): Statfs is hung because of frame loss in quota
- [#1180545](https://bugzilla.redhat.com/1180545): Incomplete conservative merge for split-brained directories
- [#1188145](https://bugzilla.redhat.com/1188145): Disperse volume: I/O error on client when USS is turned on
- [#1188242](https://bugzilla.redhat.com/1188242): Disperse volume: client crashed while running iozone
- [#1189363](https://bugzilla.redhat.com/1189363): ignore_deletes option is not something you can configure
- [#1189473](https://bugzilla.redhat.com/1189473): [RFE] While creating a snapshot the timestamp has to be appended to the snapshot name.
- [#1193388](https://bugzilla.redhat.com/1193388): Disperse volume: Failed to update version and size (error 2) seen during delete operations
- [#1193636](https://bugzilla.redhat.com/1193636): [DHT:REBALANCE]: xattrs set on the file during rebalance migration will be lost after migration is over
- [#1194640](https://bugzilla.redhat.com/1194640): Tracker bug for Logging framework expansion.
- [#1194753](https://bugzilla.redhat.com/1194753): Storage tier feature
- [#1195947](https://bugzilla.redhat.com/1195947): Reduce the contents of dependencies from glusterfs-api
- [#1196027](https://bugzilla.redhat.com/1196027): Fix memory leak while using scandir
- [#1198849](https://bugzilla.redhat.com/1198849): Minor improvements and cleanup for the build system
- [#1199894](https://bugzilla.redhat.com/1199894): RFE: Clone of a snapshot
- [#1199985](https://bugzilla.redhat.com/1199985): [RFE] arbiter for 3 way replication
- [#1200082](https://bugzilla.redhat.com/1200082): [FEAT] - Sharding xlator
- [#1200254](https://bugzilla.redhat.com/1200254): NFS-Ganesha : Locking of global option file used by NFS-Ganesha.
- [#1200262](https://bugzilla.redhat.com/1200262): Upcall framework support along with cache_invalidation usecase handled
- [#1200265](https://bugzilla.redhat.com/1200265): NFS-Ganesha: Handling GlusterFS CLI commands when NFS-Ganesha related commands are executed and other additonal checks
- [#1200267](https://bugzilla.redhat.com/1200267): Upcall: Cleanup the expired upcall entries
- [#1200271](https://bugzilla.redhat.com/1200271): Upcall: xlator options for Upcall xlator
- [#1200364](https://bugzilla.redhat.com/1200364): longevity: Incorrect log level messages in posix_istat and posix_lookup
- [#1200704](https://bugzilla.redhat.com/1200704): rdma: properly handle memory registration during network interruption
- [#1201284](https://bugzilla.redhat.com/1201284): tools/glusterfind: Use Changelogs more effectively for GFID to Path conversion
- [#1201289](https://bugzilla.redhat.com/1201289): tools/glusterfind: Support Partial Find feature
- [#1202244](https://bugzilla.redhat.com/1202244): [Quota] : To have a separate quota.conf file for inode quota.
- [#1202274](https://bugzilla.redhat.com/1202274): Minor improvements and code cleanup for libgfapi
- [#1202649](https://bugzilla.redhat.com/1202649): [georep]: Transition from xsync to changelog doesn't happen once the brick is brought online
- [#1202758](https://bugzilla.redhat.com/1202758): Disperse volume: brick logs are getting filled with "anonymous fd creation failed" messages
- [#1203089](https://bugzilla.redhat.com/1203089): Disperse volume: misleading unsuccessful message with heal and heal full
- [#1203185](https://bugzilla.redhat.com/1203185): Detached node list stale snaps
- [#1204641](https://bugzilla.redhat.com/1204641): [geo-rep] stop-all-gluster-processes.sh fails to stop all gluster processes
- [#1204651](https://bugzilla.redhat.com/1204651): libgfapi : Anonymous fd support in gfapi
- [#1205037](https://bugzilla.redhat.com/1205037): [SNAPSHOT]: "man gluster" needs modification for few snapshot commands
- [#1205186](https://bugzilla.redhat.com/1205186): RCU changes wrt peers to be done for GlusterFS-3.7.0
- [#1205540](https://bugzilla.redhat.com/1205540): Data Tiering:3.7.0:data loss:detach-tier not flushing data to cold-tier
- [#1205545](https://bugzilla.redhat.com/1205545): Effect of Trash translator over CTR translator
- [#1205596](https://bugzilla.redhat.com/1205596): [SNAPSHOT]: Output message when a snapshot create is issued when multiple bricks are down needs to be improved
- [#1205624](https://bugzilla.redhat.com/1205624): Data Tiering:rebalance fails on a tiered volume
- [#1206461](https://bugzilla.redhat.com/1206461): sparse file self heal fail under xfs version 2 with speculative preallocation feature on
- [#1206539](https://bugzilla.redhat.com/1206539): Tracker bug for GlusterFS documentation Improvement.
- [#1206546](https://bugzilla.redhat.com/1206546): [RFE] Data Tiering:Need a way from CLI to identify hot and cold tier bricks easily
- [#1206587](https://bugzilla.redhat.com/1206587): Replace contrib/uuid by a libglusterfs wrapper that uses the uuid implementation from the OS
- [#1207020](https://bugzilla.redhat.com/1207020): BitRot :- CPU/disk throttling during signature calculation
- [#1207028](https://bugzilla.redhat.com/1207028): [Backup]: User must be warned while running the 'glusterfind pre' command twice without running the post command
- [#1207029](https://bugzilla.redhat.com/1207029): BitRot :- If peer in cluster doesn't have brick then its should not start bitd on that node and should not create partial volume file
- [#1207115](https://bugzilla.redhat.com/1207115): geo-rep: add debug logs to master for slave ENTRY operation failures
- [#1207134](https://bugzilla.redhat.com/1207134): BitRot :- bitd is not signing Objects if more than 3 bricks are present on same node
- [#1207532](https://bugzilla.redhat.com/1207532): BitRot :- gluster volume help gives insufficient and ambiguous information for bitrot
- [#1207603](https://bugzilla.redhat.com/1207603): Persist file size and block count of sharded files in the form of xattrs
- [#1207615](https://bugzilla.redhat.com/1207615): sharding - Implement remaining fops
- [#1207627](https://bugzilla.redhat.com/1207627): BitRot :- Data scrubbing status is not available
- [#1207712](https://bugzilla.redhat.com/1207712): Input/Output error with disperse volume when geo-replication is started
- [#1207735](https://bugzilla.redhat.com/1207735): Disperse volume: Huge memory leak of glusterfsd process
- [#1207829](https://bugzilla.redhat.com/1207829): Incomplete self-heal and split-brain on directories found when self-healing files/dirs on a replaced disk
- [#1207979](https://bugzilla.redhat.com/1207979): BitRot :- In case of NFS mount, Object Versioning and file signing is not working as expected
- [#1208131](https://bugzilla.redhat.com/1208131): BitRot :- Tunable (scrub-throttle, scrub-frequency, pause/resume) for scrub functionality don't have any impact on scrubber
- [#1208470](https://bugzilla.redhat.com/1208470): [Dist-geo-rep] after snapshot in geo-rep setup, empty changelogs are  generated in the snapped brick.
- [#1208482](https://bugzilla.redhat.com/1208482): pthread cond and mutex variables of fs struct has to be destroyed conditionally.
- [#1209104](https://bugzilla.redhat.com/1209104): Do not let an inode evict during split-brain resolution process.
- [#1209138](https://bugzilla.redhat.com/1209138): [Backup]: Packages to be installed for glusterfind api to work
- [#1209298](https://bugzilla.redhat.com/1209298): NFS interoperability problem: Gluster Striped-Replicated can't read on vmware esxi 5.x NFS client
- [#1209329](https://bugzilla.redhat.com/1209329): glusterd services are not handled properly when re configuring services
- [#1209430](https://bugzilla.redhat.com/1209430): quota/marker: turn off inode quotas by default
- [#1209461](https://bugzilla.redhat.com/1209461): BVT: glusterd crashed and dumped during upgrade (on rhel7.1 server)
- [#1209735](https://bugzilla.redhat.com/1209735): FSAL_GLUSTER : symlinks are not working properly if acl is enabled
- [#1209752](https://bugzilla.redhat.com/1209752): BitRot :- info about bitd and scrubber daemon is not shown in volume status
- [#1209818](https://bugzilla.redhat.com/1209818): BitRot :- volume info should not show 'features.scrub: resume' if scrub process is resumed
- [#1209843](https://bugzilla.redhat.com/1209843): [Backup]: Crash observed when multiple sessions were created for the same volume
- [#1209869](https://bugzilla.redhat.com/1209869): xdata in FOPs should always be valid and never junk
- [#1210344](https://bugzilla.redhat.com/1210344): Have a fixed name for common meta-volume for nfs, snapshot and geo-rep and mount it at a fixed mount location
- [#1210562](https://bugzilla.redhat.com/1210562): Dist-geo-rep: Too many "remote operation failed: No such file or directory" warning messages in auxilary mount log on slave while executing "rm -rf"
- [#1210684](https://bugzilla.redhat.com/1210684): BitRot :- scrub pause/resume should give proper error message if scrubber is already paused/resumed and Admin tries to perform same operation
- [#1210687](https://bugzilla.redhat.com/1210687): BitRot :- If scrubber finds bad file then it should log as a 'ALERT' in log not 'Warning'
- [#1210689](https://bugzilla.redhat.com/1210689): BitRot :- Files marked as 'Bad' should not be accessible from mount
- [#1210934](https://bugzilla.redhat.com/1210934): qcow2 image creation using qemu-img hits segmentation fault
- [#1210965](https://bugzilla.redhat.com/1210965): Geo-replication very slow, not able to sync all the files to slave
- [#1211037](https://bugzilla.redhat.com/1211037): [dist-geo-rep]:Directory not empty and Stale file handle errors in geo-rep logs during deletes from master in history/changelog crawl
- [#1211123](https://bugzilla.redhat.com/1211123): ls command failed with features.read-only on while mounting ec volume.
- [#1211132](https://bugzilla.redhat.com/1211132): 'volume get' invoked on a non-existing key fails with zero as a return value
- [#1211220](https://bugzilla.redhat.com/1211220): quota: ENOTCONN parodically seen in logs when setting hard/soft timeout during I/O.
- [#1211221](https://bugzilla.redhat.com/1211221): Any operation that relies on fd->flags may not work on anonymous fds
- [#1211264](https://bugzilla.redhat.com/1211264): Data Tiering: glusterd(management) communication issues seen on tiering setup
- [#1211327](https://bugzilla.redhat.com/1211327): Changelog: Changelog should be treated as discontinuous only on changelog enable/disable
- [#1211562](https://bugzilla.redhat.com/1211562): Data Tiering:UI:changes required to CLI responses for attach and detach tier
- [#1211570](https://bugzilla.redhat.com/1211570): Data Tiering:UI:when a user looks for detach-tier help, instead command seems to be getting executed
- [#1211576](https://bugzilla.redhat.com/1211576): Gluster CLI crashes when volume create command is incomplete
- [#1211594](https://bugzilla.redhat.com/1211594): status.brick memory allocation failure.
- [#1211640](https://bugzilla.redhat.com/1211640): glusterd crash when snapshot create was in progress on different volumes at the same time - job edited to create snapshots at the given time
- [#1211749](https://bugzilla.redhat.com/1211749): glusterd crashes when brick option validation fails
- [#1211808](https://bugzilla.redhat.com/1211808): quota: inode quota not healing after upgrade
- [#1211836](https://bugzilla.redhat.com/1211836): glusterfs-api.pc versioning breaks QEMU
- [#1211848](https://bugzilla.redhat.com/1211848): Gluster namespace and module should be part of glusterfs-libs rpm
- [#1211900](https://bugzilla.redhat.com/1211900): package glupy as a subpackage under gluster namespace.
- [#1211913](https://bugzilla.redhat.com/1211913): nfs : racy condition in export/netgroup feature
- [#1211962](https://bugzilla.redhat.com/1211962): Disperse volume: Input/output  errors on nfs and fuse mounts during delete operation
- [#1212037](https://bugzilla.redhat.com/1212037): Data Tiering:Old copy of file still remaining on EC(disperse) layer, when edited after attaching tier(new copy is moved to hot tier)
- [#1212063](https://bugzilla.redhat.com/1212063): [Geo-replication] cli crashed and core dump was observed while running gluster volume geo-replication vol0 status command
- [#1212110](https://bugzilla.redhat.com/1212110): bricks process crash
- [#1212253](https://bugzilla.redhat.com/1212253): cli should return error with inode quota cmds on cluster with op_version less than 3.7
- [#1212385](https://bugzilla.redhat.com/1212385): Disable rpc throttling for glusterfs protocol
- [#1212398](https://bugzilla.redhat.com/1212398): [New] - Distribute replicate volume type is shown as Distribute Stripe in  the output of gluster volume info <volname> --xml
- [#1212400](https://bugzilla.redhat.com/1212400): Attach tier failing and messing up vol info
- [#1212410](https://bugzilla.redhat.com/1212410): dist-geo-rep : all the bricks of a node shows faulty in status if slave node to which atleast one of the brick connected goes down.
- [#1212413](https://bugzilla.redhat.com/1212413): [RFE] Return proper error codes in case of snapshot failure
- [#1212437](https://bugzilla.redhat.com/1212437): probing and detaching a peer generated a CRITICAL error - "Could not find peer" in glusterd logs
- [#1212660](https://bugzilla.redhat.com/1212660): Crashes in logging code
- [#1212816](https://bugzilla.redhat.com/1212816): NFS-Ganesha : Add-node and delete-node should start/stop NFS-Ganesha service
- [#1213063](https://bugzilla.redhat.com/1213063): The tiering feature requires counters.
- [#1213066](https://bugzilla.redhat.com/1213066): Failure in tests/performance/open-behind.t
- [#1213125](https://bugzilla.redhat.com/1213125): Bricks fail to start with tiering related logs on the brick
- [#1213295](https://bugzilla.redhat.com/1213295): Glusterd crashed after updating to 3.8 nightly build
- [#1213349](https://bugzilla.redhat.com/1213349): [Snapshot] Scheduler should check vol-name exists or not  before adding scheduled jobs
- [#1213358](https://bugzilla.redhat.com/1213358): Implement directory heal for ec
- [#1213364](https://bugzilla.redhat.com/1213364): [RFE] Quota: Make "quota-deem-statfs" option ON, by default, when quota is enabled.
- [#1213542](https://bugzilla.redhat.com/1213542): Symlink heal leaks 'linkname' memory
- [#1213752](https://bugzilla.redhat.com/1213752): nfs-ganesha: Multi-head nfs  need Upcall Cache invalidation support
- [#1213773](https://bugzilla.redhat.com/1213773): upcall: polling is done for a invalid file
- [#1213933](https://bugzilla.redhat.com/1213933): common-ha: delete-node implementation
- [#1214048](https://bugzilla.redhat.com/1214048): IO touched a file undergoing migration fails for tiered volumes
- [#1214219](https://bugzilla.redhat.com/1214219): Data Tiering:Enabling quota command fails with "quota command failed : Commit failed on localhost"
- [#1214222](https://bugzilla.redhat.com/1214222): Directories are missing on the mount point after attaching tier to distribute replicate volume.
- [#1214289](https://bugzilla.redhat.com/1214289): I/O failure on attaching tier
- [#1214561](https://bugzilla.redhat.com/1214561): [Backup]: To capture path for deletes in changelog file
- [#1214574](https://bugzilla.redhat.com/1214574): Snapshot-scheduling helper script errors out while running "snap_scheduler.py init"
- [#1215002](https://bugzilla.redhat.com/1215002): glusterd crashed on the node when tried to detach a tier after restoring data from the snapshot.
- [#1215018](https://bugzilla.redhat.com/1215018): [New] - gluster peer status goes to disconnected state.
- [#1215117](https://bugzilla.redhat.com/1215117): Disperse volume: rebalance and quotad crashed
- [#1215122](https://bugzilla.redhat.com/1215122): Data Tiering: attaching a tier with non supported replica count crashes glusterd on local host
- [#1215161](https://bugzilla.redhat.com/1215161): rpc: Memory corruption  because rpcsvc_register_notify interprets opaque mydata argument as xlator pointer
- [#1215187](https://bugzilla.redhat.com/1215187): timeout/expiry of group-cache should be set to 300 seconds
- [#1215265](https://bugzilla.redhat.com/1215265): Fixes for data self-heal in ec
- [#1215486](https://bugzilla.redhat.com/1215486): configure: automake defaults to Unix V7 tar, w/ max filename length=99 chars
- [#1215550](https://bugzilla.redhat.com/1215550): glusterfsd crashed after directory was removed from the mount point, while self-heal and rebalance were running on the volume
- [#1215571](https://bugzilla.redhat.com/1215571): Data Tiering: add tiering set options to volume set help (cluster.tier-demote-frequency and cluster.tier-promote-frequency)
- [#1215592](https://bugzilla.redhat.com/1215592): Crash in dht_getxattr_cbk
- [#1215660](https://bugzilla.redhat.com/1215660): tiering: cksum mismach for tiered volume.
- [#1215896](https://bugzilla.redhat.com/1215896): Typos in the messages logged by the CTR translator
- [#1216067](https://bugzilla.redhat.com/1216067): Autogenerated files delivered in tarball
- [#1216187](https://bugzilla.redhat.com/1216187): readdir-ahead needs to be enabled by default for new volumes on gluster-3.7
- [#1216898](https://bugzilla.redhat.com/1216898): Data Tiering: Volume inconsistency errors getting logged when attaching uneven(odd) number of hot bricks in hot tier(pure distribute tier layer) to a dist-rep volume
- [#1216931](https://bugzilla.redhat.com/1216931): [Snapshot] Snapshot scheduler show status disable even when it is enabled
- [#1216960](https://bugzilla.redhat.com/1216960): data tiering: do not allow tiering related volume set options on a regular volume
- [#1217311](https://bugzilla.redhat.com/1217311): Disperse volume: gluster volume status doesn't show shd status
- [#1217701](https://bugzilla.redhat.com/1217701): ec test spurious failures
- [#1217766](https://bugzilla.redhat.com/1217766): Spurious failures in tests/bugs/distribute/bug-1122443.t
- [#1217786](https://bugzilla.redhat.com/1217786): Data Tiering : Adding performance to unlink/link/rename in CTR Xlator
- [#1217788](https://bugzilla.redhat.com/1217788): spurious failure bug-908146.t
- [#1217937](https://bugzilla.redhat.com/1217937): DHT/Tiering/Rebalancer: The Client PID set by tiering migration is getting reset by dht migration
- [#1217949](https://bugzilla.redhat.com/1217949): Null check before freeing dir_dfmeta and tmp_container
- [#1218055](https://bugzilla.redhat.com/1218055): "Snap_scheduler disable" should have different return codes for different failures.
- [#1218060](https://bugzilla.redhat.com/1218060): [SNAPSHOT]: Initializing snap_scheduler from all nodes at the same time should give proper error message
- [#1218120](https://bugzilla.redhat.com/1218120): Regression failures in tests/bugs/snapshot/bug-1162498.t
- [#1218164](https://bugzilla.redhat.com/1218164): [SNAPSHOT] : Correction required in output message after initilalising snap_scheduler
- [#1218287](https://bugzilla.redhat.com/1218287): Use tiering only if all nodes are capable of it at proper version
- [#1218304](https://bugzilla.redhat.com/1218304): Intermittent failure of basic/afr/data-self-heal.t
- [#1218552](https://bugzilla.redhat.com/1218552): Rsync Hang and Georep fails to Sync files
- [#1218573](https://bugzilla.redhat.com/1218573): [Snapshot] Scheduled job is not processed when one of the node of shared storage volume is down
- [#1218625](https://bugzilla.redhat.com/1218625): glfs.h:46:21: fatal error: sys/acl.h: No such file or directory
- [#1218638](https://bugzilla.redhat.com/1218638): tiering documentation
- [#1218717](https://bugzilla.redhat.com/1218717): Files migrated should stay on a tier for a full cycle
- [#1218854](https://bugzilla.redhat.com/1218854): Clean up should not empty the contents of  the global config file
- [#1218951](https://bugzilla.redhat.com/1218951): Spurious failures in fop-sanity.t
- [#1218960](https://bugzilla.redhat.com/1218960): Rebalance Status output lists an extra colon " : " after  volume rebalance: <vol_name>: success:
- [#1219032](https://bugzilla.redhat.com/1219032): cli: While attaching tier cli sholud always ask question whether you really want to attach a tier or not.
- [#1219355](https://bugzilla.redhat.com/1219355): glusterd:Scrub and bitd reconfigure functions were not calling if quota is not enabled.
- [#1219442](https://bugzilla.redhat.com/1219442): [Snapshot] Do not run scheduler if ovirt scheduler is running
- [#1219479](https://bugzilla.redhat.com/1219479): [Dist-geo-rep] after snapshot in geo-rep setup, empty changelogs are  generated in the snapped brick.
- [#1219485](https://bugzilla.redhat.com/1219485): nfs-ganesha: Discrepancies with lock states recovery during migration
- [#1219637](https://bugzilla.redhat.com/1219637): Gluster small-file creates do not scale with brick count
- [#1219732](https://bugzilla.redhat.com/1219732): brick-op failure for glusterd command should log error message in cmd_history.log
- [#1219738](https://bugzilla.redhat.com/1219738): Regression failures in tests/bugs/snapshot/bug-1112559.t
- [#1219784](https://bugzilla.redhat.com/1219784): bitrot: glusterd is crashing when user enable bitrot on the volume
- [#1219816](https://bugzilla.redhat.com/1219816): Spurious failure in tests/bugs/replicate/bug-976800.t
- [#1219846](https://bugzilla.redhat.com/1219846): Data Tiering: glusterd(management) communication issues seen on tiering setup
- [#1219894](https://bugzilla.redhat.com/1219894): [georep]: Creating geo-rep session kills all the brick process
- [#1219937](https://bugzilla.redhat.com/1219937): Running status second time shows no active sessions
- [#1219954](https://bugzilla.redhat.com/1219954): The python-gluster RPM should be 'noarch'
- [#1220016](https://bugzilla.redhat.com/1220016): bitrot testcases fail spuriously
- [#1220058](https://bugzilla.redhat.com/1220058): Disable known bad tests
- [#1220173](https://bugzilla.redhat.com/1220173): SEEK_HOLE support (optimization)
- [#1220329](https://bugzilla.redhat.com/1220329): DHT Rebalance : Misleading log messages for linkfiles
- [#1220332](https://bugzilla.redhat.com/1220332): dHT rebalance: Dict_copy log messages when running rebalance on a dist-rep volume
- [#1220348](https://bugzilla.redhat.com/1220348): Client hung up on listing the files on a perticular directory
- [#1220381](https://bugzilla.redhat.com/1220381): unable to start the volume with the latest beta1 rpms
- [#1220670](https://bugzilla.redhat.com/1220670): snap_scheduler script must be usable as python module.
- [#1220713](https://bugzilla.redhat.com/1220713): Scrubber should be disabled once bitrot is reset
- [#1221008](https://bugzilla.redhat.com/1221008): libgfapi: Segfault seen when glfs_*() methods are invoked with invalid glfd
- [#1221025](https://bugzilla.redhat.com/1221025): Glusterd crashes after enabling quota limit on a distrep volume.
- [#1221095](https://bugzilla.redhat.com/1221095): Fix nfs/mount3.c build warnings reported in Koji
- [#1221104](https://bugzilla.redhat.com/1221104): Sharding - Skip update of block count and size for directories in readdirp callback
- [#1221128](https://bugzilla.redhat.com/1221128): `gluster volume heal <vol-name> split-brain' tries to heal even with insufficient arguments
- [#1221131](https://bugzilla.redhat.com/1221131): NFS-Ganesha: ACL should not be enabled by default
- [#1221145](https://bugzilla.redhat.com/1221145): ctdb's ping_pong lock tester fails with input/output error on disperse volume mounted with glusterfs
- [#1221270](https://bugzilla.redhat.com/1221270): Do not allow detach-tier commands on a non tiered volume
- [#1221481](https://bugzilla.redhat.com/1221481): `ls' on a directory which has files with mismatching gfid's does not list anything
- [#1221490](https://bugzilla.redhat.com/1221490): fuse: check return value of setuid
- [#1221544](https://bugzilla.redhat.com/1221544): [Backup]: Unable to create a glusterfind session
- [#1221577](https://bugzilla.redhat.com/1221577): glusterfsd crashed on a quota enabled volume where snapshots were scheduled
- [#1221696](https://bugzilla.redhat.com/1221696): rebalance failing on one of the node
- [#1221737](https://bugzilla.redhat.com/1221737): Multi-threaded SHD support
- [#1221889](https://bugzilla.redhat.com/1221889): Log EEXIST errors in DEBUG level in fops MKNOD and MKDIR
- [#1221914](https://bugzilla.redhat.com/1221914): Implement MKNOD fop in bit-rot.
- [#1221938](https://bugzilla.redhat.com/1221938): SIGNING FAILURE  Error messages  are poping up in the bitd log
- [#1221970](https://bugzilla.redhat.com/1221970): tiering: use sperate log/socket/pid file for tiering
- [#1222013](https://bugzilla.redhat.com/1222013): Simplify creation and set-up of meta-volume (shared storage)
- [#1222088](https://bugzilla.redhat.com/1222088): Data Tiering:3.7.0:data loss:detach-tier not flushing data to cold-tier
- [#1222092](https://bugzilla.redhat.com/1222092): rebalance failed after attaching the tier to the volume.
- [#1222126](https://bugzilla.redhat.com/1222126): DHT: lookup-unhashed feature breaks runtime compatibility with older client versions
- [#1222238](https://bugzilla.redhat.com/1222238): features/changelog:  buffer overrun in changelog-helpers
- [#1222317](https://bugzilla.redhat.com/1222317): Building packages on RHEL-5 based distributions fail
- [#1222319](https://bugzilla.redhat.com/1222319): Remove all occurrences of #include "config.h"
- [#1222378](https://bugzilla.redhat.com/1222378): GlusterD fills the logs when the NFS-server is disabled
- [#1222379](https://bugzilla.redhat.com/1222379): Fix infinite looping in shard_readdir(p) on '/'
- [#1222769](https://bugzilla.redhat.com/1222769): libglusterfs: fix uninitialized argument value
- [#1222840](https://bugzilla.redhat.com/1222840): I/O's hanging on tiered volumes (NFS)
- [#1222898](https://bugzilla.redhat.com/1222898): geo-replication: fix memory leak in gsyncd
- [#1223185](https://bugzilla.redhat.com/1223185): [SELinux] [BVT]: Selinux throws AVC errors while running DHT automation on Rhel6.6
- [#1223213](https://bugzilla.redhat.com/1223213): gluster volume status fails with locking failed error message
- [#1223280](https://bugzilla.redhat.com/1223280): [geo-rep]: worker died with "ESTALE" when performed rm -rf on a directory from mount of master volume
- [#1223338](https://bugzilla.redhat.com/1223338): glusterd could crash in remove-brick-status when local remove-brick process has just completed
- [#1223378](https://bugzilla.redhat.com/1223378): gfid-access: Remove dead increment (dead store)
- [#1223385](https://bugzilla.redhat.com/1223385): packaging: .pc files included in -api-devel should be in -devel
- [#1223432](https://bugzilla.redhat.com/1223432): Update gluster op version to 30701
- [#1223625](https://bugzilla.redhat.com/1223625): rebalance : output of rebalance status should show ' run time ' in proper format (day,hour:min:sec)
- [#1223642](https://bugzilla.redhat.com/1223642): [geo-rep]: With tarssh the file is created at slave but it doesnt get sync
- [#1223739](https://bugzilla.redhat.com/1223739): Quota: Do not allow set/unset  of quota limit in heterogeneous cluster
- [#1223741](https://bugzilla.redhat.com/1223741): non-root geo-replication session goes to faulty state, when the session is started
- [#1223759](https://bugzilla.redhat.com/1223759): Sharding - Fix posix compliance test failures.
- [#1223772](https://bugzilla.redhat.com/1223772): Though brick demon is not running, gluster vol status command shows the pid
- [#1223798](https://bugzilla.redhat.com/1223798): Quota: spurious failures with quota testcases
- [#1223889](https://bugzilla.redhat.com/1223889): readdirp return 64bits inodes even if enable-ino32 is set
- [#1223937](https://bugzilla.redhat.com/1223937): Outdated autotools helper config.* files
- [#1224016](https://bugzilla.redhat.com/1224016): NFS: IOZone tests hang, disconnects and hung tasks seen in logs.
- [#1224098](https://bugzilla.redhat.com/1224098): [geo-rep]: Even after successful sync, the DATA counter did not reset to 0
- [#1224290](https://bugzilla.redhat.com/1224290): peers connected in the middle of a transaction are participating in the transaction
- [#1224596](https://bugzilla.redhat.com/1224596): [RFE] Provide hourly scrubbing option
- [#1224600](https://bugzilla.redhat.com/1224600): [RFE] Move signing trigger mechanism to [f]setxattr()
- [#1224611](https://bugzilla.redhat.com/1224611): Skip zero byte files when triggering signing
- [#1224857](https://bugzilla.redhat.com/1224857): DHT - rebalance - when any brick/sub-vol is down and rebalance is not performing any action(fixing lay-out or migrating data) it should not say 'Starting rebalance on volume <vol-name> has been successful' .
- [#1225018](https://bugzilla.redhat.com/1225018): Scripts/Binaries are not installed with +x bit
- [#1225323](https://bugzilla.redhat.com/1225323): Glusterfs client crash during fd migration after graph switch
- [#1225328](https://bugzilla.redhat.com/1225328): afr: unrecognized option in re-balance volfile
- [#1225330](https://bugzilla.redhat.com/1225330): tiering: tier daemon not restarting during volume/glusterd restart
- [#1225424](https://bugzilla.redhat.com/1225424): [Backup]: Misleading error message when glusterfind delete is given with non-existent volume
- [#1225465](https://bugzilla.redhat.com/1225465): [Backup]: Glusterfind session entry persists even after volume is deleted
- [#1225491](https://bugzilla.redhat.com/1225491): [AFR-V2] - afr_final_errno() should treat op_ret > 0 also as success
- [#1225542](https://bugzilla.redhat.com/1225542): [geo-rep]: snapshot creation timesout even if geo-replication is in pause/stop/delete state
- [#1225564](https://bugzilla.redhat.com/1225564): [Backup]: RFE - Glusterfind CLI commands need to respond based on volume's start/stop state
- [#1225566](https://bugzilla.redhat.com/1225566): [geo-rep]:  Traceback "ValueError: filedescriptor out of range in select()" observed while creating huge set of data on master
- [#1225571](https://bugzilla.redhat.com/1225571): [geo-rep]: client-rpc-fops.c:172:client3_3_symlink_cbk can be handled better/or ignore these messages in the slave cluster log
- [#1225572](https://bugzilla.redhat.com/1225572): nfs-ganesha: Getting issues for nfs-ganesha on new nodes of glusterfs,error is /etc/ganesha/ganesha-ha.conf: line 11: VIP_<hostname with fqdn>=<ip>: command not found
- [#1225716](https://bugzilla.redhat.com/1225716): tests : remove brick command execution displays success even after, one of the bricks down.
- [#1225793](https://bugzilla.redhat.com/1225793): Spurious failure in tests/bugs/disperse/bug-1161621.t
- [#1226005](https://bugzilla.redhat.com/1226005): should not spawn another migration daemon on graph switch
- [#1226223](https://bugzilla.redhat.com/1226223): Mount broker user add command removes existing volume for a mountbroker user when second volume is attached to same user
- [#1226253](https://bugzilla.redhat.com/1226253): gluster volume heal info crashes
- [#1226276](https://bugzilla.redhat.com/1226276): Volume heal info not reporting files in split brain and core dumping, after upgrading to 3.7.0
- [#1226279](https://bugzilla.redhat.com/1226279): GF_CONTENT_KEY should not be handled unless we are sure no other operations are in progress
- [#1226307](https://bugzilla.redhat.com/1226307): Volume start fails when glusterfs is source compiled with GCC v5.1.1
- [#1226367](https://bugzilla.redhat.com/1226367): bug-973073.t fails spuriously
- [#1226384](https://bugzilla.redhat.com/1226384): build: xlators/mgmt/glusterd/src/glusterd-errno.h is not in dist tarball
- [#1226507](https://bugzilla.redhat.com/1226507): Honour afr self-heal volume set options from clients
- [#1226551](https://bugzilla.redhat.com/1226551): libglusterfs: Copy _all_ members of gf_dirent_t in entry_copy()
- [#1226714](https://bugzilla.redhat.com/1226714): auth_cache_entry structure barely gets cached
- [#1226717](https://bugzilla.redhat.com/1226717): racy condition in nfs/auth-cache feature
- [#1226829](https://bugzilla.redhat.com/1226829): gf_store_save_value fails to check for errors, leading to emptying files in /var/lib/glusterd/
- [#1226881](https://bugzilla.redhat.com/1226881): tiering:compiler warning with gcc v5.1.1
- [#1226902](https://bugzilla.redhat.com/1226902): bitrot: scrubber is crashing while user set any scrubber tunable value.
- [#1227204](https://bugzilla.redhat.com/1227204): glusterfsd: bricks crash while executing ls on nfs-ganesha vers=3
- [#1227449](https://bugzilla.redhat.com/1227449): Fix deadlock in timer-wheel del_timer() API
- [#1227583](https://bugzilla.redhat.com/1227583): [Virt-RHGS] Creating a image on gluster volume using qemu-img + gfapi throws error messages related to rpc_transport
- [#1227590](https://bugzilla.redhat.com/1227590): bug-857330/xml.t fails spuriously
- [#1227624](https://bugzilla.redhat.com/1227624): tests/geo-rep: Existing geo-rep regressino test suite is time consuming.
- [#1227646](https://bugzilla.redhat.com/1227646): Glusterd fails to start after volume restore, tier attach and node reboot
- [#1227654](https://bugzilla.redhat.com/1227654): linux untar hanged after the bricks are up in a 8+4 config
- [#1227667](https://bugzilla.redhat.com/1227667): Minor improvements and code cleanup for protocol server/client
- [#1227803](https://bugzilla.redhat.com/1227803): tiering: tier status shows as " progressing " but there is no rebalance daemon running
- [#1227884](https://bugzilla.redhat.com/1227884): Update gluster op version to 30702
- [#1227894](https://bugzilla.redhat.com/1227894): Increment op-version requirement for lookup-optimize configuration option
- [#1227904](https://bugzilla.redhat.com/1227904): Memory leak in marker xlator
- [#1227996](https://bugzilla.redhat.com/1227996): Objects are not signed upon truncate()
- [#1228093](https://bugzilla.redhat.com/1228093): Glusterd crash
- [#1228111](https://bugzilla.redhat.com/1228111): [Backup]: Crash observed when glusterfind pre is run after deleting a directory containing files
- [#1228112](https://bugzilla.redhat.com/1228112): tiering:glusterd crashed when trying to detach-tier commit force on a non-tiered volume.
- [#1228157](https://bugzilla.redhat.com/1228157): Provide and use a common way to do reference counting of (internal) structures
- [#1228415](https://bugzilla.redhat.com/1228415): Not able to export volume using nfs-ganesha
- [#1228492](https://bugzilla.redhat.com/1228492): [geo-rep]: RENAMEs are not synced to slave when quota is enabled.
- [#1228613](https://bugzilla.redhat.com/1228613): [Snapshot] Python crashes with traceback notification when shared storage is unmounted from Storage Node
- [#1228635](https://bugzilla.redhat.com/1228635): Do not invoke glfs_fini for glfs-heal processes.
- [#1228680](https://bugzilla.redhat.com/1228680): bitrot: (rfe) object signing wait time value should be tunable.
- [#1228696](https://bugzilla.redhat.com/1228696): geo-rep: gverify.sh throws error if slave_host entry is not added to known_hosts file
- [#1228731](https://bugzilla.redhat.com/1228731): nfs-ganesha: rmdir logs "remote operation failed: Stale file handle" even though the operation is successful
- [#1228818](https://bugzilla.redhat.com/1228818): Add documentation for lookup-optimize configuration option in DHT
- [#1228952](https://bugzilla.redhat.com/1228952): Disperse volume : glusterfs crashed
- [#1229127](https://bugzilla.redhat.com/1229127): afr: Correction to self-heal-daemon documentation
- [#1229134](https://bugzilla.redhat.com/1229134): [Bitrot] Gluster v set <volname> bitrot enable command succeeds, which is not the supported way to enable bitrot
- [#1229139](https://bugzilla.redhat.com/1229139): glusterd: glusterd crashes if you run re-balance and vol status commands in parallel.
- [#1229172](https://bugzilla.redhat.com/1229172): [AFR-V2] - Fix shd coredump from tests/bugs/glusterd/bug-948686.t
- [#1229297](https://bugzilla.redhat.com/1229297): [Quota] : Inode quota spurious failure
- [#1229609](https://bugzilla.redhat.com/1229609): Quota:  " E [quota.c:1197:quota_check_limit] 0-ecvol-quota: Failed to check quota size limit" in brick logs
- [#1229639](https://bugzilla.redhat.com/1229639): build: fix gitclean target
- [#1229658](https://bugzilla.redhat.com/1229658): STACK_RESET may crash with concurrent statedump requests to a glusterfs process
- [#1229825](https://bugzilla.redhat.com/1229825): Add regression test for cluster lock in a heterogeneous cluster
- [#1229860](https://bugzilla.redhat.com/1229860): context of access control translator should be updated properly for GF_POSIX_ACL_*_KEY xattrs
- [#1229948](https://bugzilla.redhat.com/1229948): Ganesha-ha.sh cluster setup not working with RHEL7 and derivatives
- [#1230007](https://bugzilla.redhat.com/1230007): [Backup]: 'New' as well as 'Modify' entry getting recorded for a newly created hardlink
- [#1230015](https://bugzilla.redhat.com/1230015): [Backup]: Glusterfind pre fails with htime xattr update error, resulting in historical changelogs not being available
- [#1230017](https://bugzilla.redhat.com/1230017): [Backup]: 'Glusterfind list' should display an appropriate output when there are no active sessions
- [#1230090](https://bugzilla.redhat.com/1230090): [geo-rep]: use_meta_volume config option should be validated for its values
- [#1230111](https://bugzilla.redhat.com/1230111): [Backup]: Glusterfind delete does not delete the session related information present in $GLUSTERD_WORKDIR
- [#1230121](https://bugzilla.redhat.com/1230121): [glusterd] glusterd crashed while trying to remove bricks - one selected from each replica set - after shrinking nX3 to nX2 to nX1
- [#1230127](https://bugzilla.redhat.com/1230127): [Backup]: Chown/chgrp for a directory does not get recorded as a MODIFY entry in the outfile
- [#1230647](https://bugzilla.redhat.com/1230647): Disperse volume : client crashed while running IO
- [#1231132](https://bugzilla.redhat.com/1231132): Detect and send ENOTSUP if upcall feature is not enabled
- [#1231197](https://bugzilla.redhat.com/1231197): Snapshot daemon failed to run on newly created dist-rep volume with uss enabled
- [#1231205](https://bugzilla.redhat.com/1231205): [geo-rep]: rsync should be made dependent package for geo-replication
- [#1231257](https://bugzilla.redhat.com/1231257): nfs-ganesha: trying to bring up nfs-ganesha on three nodes shows an error, although pcs status shows the ganesha process on all three nodes
- [#1231264](https://bugzilla.redhat.com/1231264): DHT: for many operations the directory/file path is '(null)' in the brick log
- [#1231268](https://bugzilla.redhat.com/1231268): Fix invalid logic in tier.t
- [#1231425](https://bugzilla.redhat.com/1231425): use after free bug in dht
- [#1231437](https://bugzilla.redhat.com/1231437): Rebalance is failing in test cluster framework.
- [#1231617](https://bugzilla.redhat.com/1231617): Scrubber crash upon pause
- [#1231619](https://bugzilla.redhat.com/1231619): BitRot: Handle brick re-connection sanely in bitd/scrub process
- [#1231620](https://bugzilla.redhat.com/1231620): scrub frequency and throttle change information needs to be present in scrubber log
- [#1231738](https://bugzilla.redhat.com/1231738): nfs-ganesha: volume is not in list of exports in case of volume stop followed by volume start
- [#1231789](https://bugzilla.redhat.com/1231789): Not able to create snapshots for geo-replicated volumes when session is created with root user
- [#1231876](https://bugzilla.redhat.com/1231876): Snapshot: When cluster.enable-shared-storage is enabled, shared storage should get mounted after node reboot
- [#1232001](https://bugzilla.redhat.com/1232001): nfs-ganesha: 8 node pcs cluster setup fails
- [#1232165](https://bugzilla.redhat.com/1232165): NFS Authentication Performance Issue
- [#1232172](https://bugzilla.redhat.com/1232172): Disperse volume : 'ls -ltrh' doesn't list correct size of the files every time
- [#1232183](https://bugzilla.redhat.com/1232183): cli correction: creating multiple bricks on the same server shows replicate volume instead of disperse volume
- [#1232238](https://bugzilla.redhat.com/1232238): [RHEV-RHGS] After self-heal operation, VM Image file loses the sparseness property
- [#1232304](https://bugzilla.redhat.com/1232304): libglusterfs: delete duplicate code in libglusterfs/src/dict.c
- [#1232378](https://bugzilla.redhat.com/1232378): [remove-brick]: Creation of file from NFS writes to the decommissioned subvolume and subsequent lookup from fuse creates a link
- [#1232391](https://bugzilla.redhat.com/1232391): Sharding - Use (f)xattrop (as opposed to (f)setxattr) to update shard size and block count
- [#1232430](https://bugzilla.redhat.com/1232430): [SNAPSHOT] : Snapshot delete fails with error - Snap might not be in an usable state
- [#1232572](https://bugzilla.redhat.com/1232572): quota: quota list displays double the size of previous value, post heal completion.
- [#1232658](https://bugzilla.redhat.com/1232658): Change default values of allow-insecure and bind-insecure
- [#1232666](https://bugzilla.redhat.com/1232666): [geo-rep]: Segmentation faults are observed on all the master nodes
- [#1232678](https://bugzilla.redhat.com/1232678): Disperse volume : data corruption with appending writes in 8+4 config
- [#1232686](https://bugzilla.redhat.com/1232686): quorum calculation might go for a toss for a concurrent peer probe command
- [#1232693](https://bugzilla.redhat.com/1232693): glusterd crashed when testing heal full on replaced disks
- [#1232729](https://bugzilla.redhat.com/1232729): [Backup]: Glusterfind session(s) created before starting the volume results in 'changelog not available' error, eventually
- [#1232912](https://bugzilla.redhat.com/1232912): [geo-rep]: worker died with "ESTALE" when performed rm -rf on a directory from mount of master volume
- [#1233018](https://bugzilla.redhat.com/1233018): tests: Add the command being 'TEST'ed in all gluster logs
- [#1233139](https://bugzilla.redhat.com/1233139): Null pointer dereference in dht_migrate_complete_check_task
- [#1233151](https://bugzilla.redhat.com/1233151): rm command fails with "Transport end point not connected" during add brick
- [#1233162](https://bugzilla.redhat.com/1233162): [Quota] The root of the volume on which the quota is set shows the volume size more than actual volume size, when checked with "df" command.
- [#1233246](https://bugzilla.redhat.com/1233246): nfs-ganesha: add node fails to add a new node to the cluster
- [#1233258](https://bugzilla.redhat.com/1233258): Possible double execution of the state machine for fops that start other subfops
- [#1233411](https://bugzilla.redhat.com/1233411): [geo-rep]: UnboundLocalError: local variable 'fd' referenced before assignment
- [#1233544](https://bugzilla.redhat.com/1233544): gluster v set help needs to be updated for cluster.enable-shared-storage option
- [#1233617](https://bugzilla.redhat.com/1233617): Introduce an ATOMIC_WRITE flag in posix writev
- [#1233624](https://bugzilla.redhat.com/1233624): nfs-ganesha: ganesha-ha.sh --refresh-config not working
- [#1234286](https://bugzilla.redhat.com/1234286): changelog: directory renames not getting recorded
- [#1234474](https://bugzilla.redhat.com/1234474): nfs-ganesha: delete node throws an error and pcs status also notifies about failures; in fact I/O also doesn't resume post grace period
- [#1234694](https://bugzilla.redhat.com/1234694): [geo-rep]: Setting meta volume config to false when meta volume is stopped/deleted leads geo-rep to faulty
- [#1234819](https://bugzilla.redhat.com/1234819): glusterd: glusterd crashes while importing a USS enabled volume which is already started
- [#1234842](https://bugzilla.redhat.com/1234842): GlusterD does not store updated peerinfo objects.
- [#1234882](https://bugzilla.redhat.com/1234882): [geo-rep]: Feature fan-out fails with the use of meta volume config
- [#1235007](https://bugzilla.redhat.com/1235007): Allow only lookup and delete operation on file that is in split-brain
- [#1235195](https://bugzilla.redhat.com/1235195): quota: marker accounting miscalculated when renaming a file on which a write is in progress
- [#1235216](https://bugzilla.redhat.com/1235216): tar on a glusterfs mount displays "file changed as we read it" even though the file was not changed
- [#1235231](https://bugzilla.redhat.com/1235231): unix domain sockets on Gluster/NFS are created as fifo/pipe
- [#1235246](https://bugzilla.redhat.com/1235246): Missing trusted.ec.config xattr for files after heal process
- [#1235269](https://bugzilla.redhat.com/1235269): Data Tiering: Files not getting promoted once demoted
- [#1235292](https://bugzilla.redhat.com/1235292): [geo-rep]: set_geo_rep_pem_keys.sh needs modification in gluster path to support mount broker functionality
- [#1235359](https://bugzilla.redhat.com/1235359): [geo-rep]: Mountbroker setup goes to Faulty with ssh 'Permission Denied' Errors
- [#1235538](https://bugzilla.redhat.com/1235538): Porting the left out gf_log messages to the new logging API
- [#1235542](https://bugzilla.redhat.com/1235542): Upcall: Directory or file creation should send cache invalidation requests to parent directories
- [#1235582](https://bugzilla.redhat.com/1235582): snapd crashed due to stack overflow
- [#1235751](https://bugzilla.redhat.com/1235751): peer probe results in Peer Rejected(Connected)
- [#1235921](https://bugzilla.redhat.com/1235921): POSIX: brick logs filled with _gf_log_callingfn due to this==NULL in dict_get
- [#1235927](https://bugzilla.redhat.com/1235927): memory corruption in the way we maintain migration information in inodes.
- [#1235989](https://bugzilla.redhat.com/1235989): Do null check before dict_ref
- [#1236009](https://bugzilla.redhat.com/1236009): do an explicit lookup on the inodes linked in readdirp
- [#1236032](https://bugzilla.redhat.com/1236032): Tiering: unlink failed with error "Invalid argument"
- [#1236065](https://bugzilla.redhat.com/1236065): Disperse volume: FUSE I/O error after self healing the failed disk files
- [#1236128](https://bugzilla.redhat.com/1236128): Quota list is not working on tiered volume.
- [#1236212](https://bugzilla.redhat.com/1236212): Migration does not work when EC is used as a tiered volume.
- [#1236270](https://bugzilla.redhat.com/1236270): [Backup]: File movement across directories does not get captured in the output file in a X3 volume
- [#1236512](https://bugzilla.redhat.com/1236512): DHT + rebalance: file permissions got changed (sticky bit and setgid set) after file migration failure
- [#1236561](https://bugzilla.redhat.com/1236561): Ganesha volume export failed
- [#1236945](https://bugzilla.redhat.com/1236945): glusterfsd crashed while rebalance and self-heal were in progress
- [#1237000](https://bugzilla.redhat.com/1237000): Add a test case for verifying that NO empty changelog is created
- [#1237174](https://bugzilla.redhat.com/1237174): Incorrect state created in '/var/lib/nfs/statd'
- [#1237381](https://bugzilla.redhat.com/1237381): Throttle background heals in disperse volumes
- [#1238054](https://bugzilla.redhat.com/1238054): Consecutive volume start/stop operations when ganesha.enable is on lead to errors
- [#1238063](https://bugzilla.redhat.com/1238063): libgfchangelog: Example programs are not working.
- [#1238072](https://bugzilla.redhat.com/1238072): protocol/server doesn't reconfigure auth.ssl-allow options
- [#1238135](https://bugzilla.redhat.com/1238135): Initialize daemons on demand
- [#1238188](https://bugzilla.redhat.com/1238188): Not able to recover the corrupted file on Replica volume
- [#1238224](https://bugzilla.redhat.com/1238224): setting enable-shared-storage without mentioning the domain doesn't enable shared storage
- [#1238508](https://bugzilla.redhat.com/1238508): Renamed Files are missing after self-heal
- [#1238593](https://bugzilla.redhat.com/1238593): tiering/snapshot: Tier daemon failed to start during volume start after restoring into a tiered volume from a non-tiered volume.
- [#1238661](https://bugzilla.redhat.com/1238661): When bind-insecure is enabled, bricks may not be able to bind to port assigned by Glusterd
- [#1238747](https://bugzilla.redhat.com/1238747): Crash in Quota enforcer
- [#1238788](https://bugzilla.redhat.com/1238788): Fix build on Mac OS X, header guard macros
- [#1238791](https://bugzilla.redhat.com/1238791): Fix build on Mac OS X, gfapi symbol versions
- [#1238793](https://bugzilla.redhat.com/1238793): Fix build on Mac OS X, timerwheel spinlock
- [#1238796](https://bugzilla.redhat.com/1238796): Fix build on Mac OS X, configure(.ac)
- [#1238798](https://bugzilla.redhat.com/1238798): Fix build on Mac OS X, ACLs
- [#1238936](https://bugzilla.redhat.com/1238936): 'unable to get transaction op-info' error seen in glusterd log while executing gluster volume status command
- [#1238952](https://bugzilla.redhat.com/1238952): gf_msg_callingfn does not log the callers of the function in which it is called
- [#1239037](https://bugzilla.redhat.com/1239037): disperse: Wrong values for "cluster.heal-timeout" could be assigned using CLI
- [#1239044](https://bugzilla.redhat.com/1239044): [geo-rep]: killing brick from replica pair makes geo-rep session faulty with Traceback "ChangelogException"
- [#1239269](https://bugzilla.redhat.com/1239269): [Scheduler]: Unable to create Snapshots on RHEL-7.1 using Scheduler
- [#1240161](https://bugzilla.redhat.com/1240161): glusterfsd crashed after volume start force
- [#1240184](https://bugzilla.redhat.com/1240184): snap-view: mount crash if debug mode is enabled
- [#1240210](https://bugzilla.redhat.com/1240210): Metadata self-heal is not handling failures properly while healing
- [#1240218](https://bugzilla.redhat.com/1240218): Scrubber log should mark file corrupted message as Alert not as information
- [#1240219](https://bugzilla.redhat.com/1240219): Scrubber log should mark file corrupted message as Alert not as information
- [#1240229](https://bugzilla.redhat.com/1240229): Unable to pause georep session if one of the nodes in cluster is not part of master volume.
- [#1240244](https://bugzilla.redhat.com/1240244): Unable to examine file in metadata split-brain after setting `replica.split-brain-choice' attribute to a particular replica
- [#1240254](https://bugzilla.redhat.com/1240254): quota+afr: quotad crash "afr_local_init (local=0x0, priv=0x7fddd0372220, op_errno=0x7fddce1434dc) at afr-common.c:4112"
- [#1240284](https://bugzilla.redhat.com/1240284): Disperse volume: NFS crashed
- [#1240564](https://bugzilla.redhat.com/1240564): Gluster commands timeout on SSL enabled system, after adding new node to trusted storage pool
- [#1240577](https://bugzilla.redhat.com/1240577): Data Tiering: Database locks observed on tiered volumes on continuous writes to a file
- [#1240581](https://bugzilla.redhat.com/1240581): quota/marker: marker code cleanup
- [#1240598](https://bugzilla.redhat.com/1240598): quota/marker: lk_owner is null while acquiring inodelk in rename operation
- [#1240621](https://bugzilla.redhat.com/1240621): tiering: Tier daemon stopped prior to graph switch.
- [#1240654](https://bugzilla.redhat.com/1240654): quota: allowed to set soft-limit percentage beyond 100%
- [#1240916](https://bugzilla.redhat.com/1240916): glfs_loc_link: Update loc.inode with the existing inode in case it already exists
- [#1240949](https://bugzilla.redhat.com/1240949): quota: In enforcer, caching parents in ctx during build ancestry is not working
- [#1240952](https://bugzilla.redhat.com/1240952): [USS]: snapd process is not killed once the glusterd comes back
- [#1240970](https://bugzilla.redhat.com/1240970): [Data Tiering]: HOT Files get demoted from hot tier
- [#1240991](https://bugzilla.redhat.com/1240991): Quota: After rename operation, gluster v quota <volname> list-objects command gives incorrect no. of files in output
- [#1241054](https://bugzilla.redhat.com/1241054): Data Tiering: Rename of file is not heating up the file
- [#1241071](https://bugzilla.redhat.com/1241071): Spurious failure of ./tests/bugs/snapshot/bug-1109889.t
- [#1241104](https://bugzilla.redhat.com/1241104): Handle negative fcntl flock->l_len values
- [#1241133](https://bugzilla.redhat.com/1241133): nfs-ganesha: execution of script ganesha-ha.sh throws an error for a file
- [#1241153](https://bugzilla.redhat.com/1241153): quota: marker accounting can get miscalculated after upgrade to 3.7
- [#1241274](https://bugzilla.redhat.com/1241274): Peer not recognized after IP address change
- [#1241379](https://bugzilla.redhat.com/1241379): Reduce 'CTR disabled' brick log message from ERROR to INFO/DEBUG
- [#1241480](https://bugzilla.redhat.com/1241480): ganesha volume export fails in rhel7.1
- [#1241788](https://bugzilla.redhat.com/1241788): syncop: Include iatt in 'syncop_link' args
- [#1241882](https://bugzilla.redhat.com/1241882): GlusterD cannot restart after being probed into a cluster.
- [#1241895](https://bugzilla.redhat.com/1241895): nfs-ganesha: add-node logic does not copy the "/etc/ganesha/exports" directory to the correct path on the newly added node
- [#1242030](https://bugzilla.redhat.com/1242030): nfs-ganesha: bricks crash while executing acl related operation for named group/user
- [#1242041](https://bugzilla.redhat.com/1242041): nfs-ganesha: Multiple setting of nfs4_acl on the same file will cause brick crash
- [#1242254](https://bugzilla.redhat.com/1242254): fops fail with EIO on nfs mount after add-brick and rebalance
- [#1242280](https://bugzilla.redhat.com/1242280): Handle all errors equally in dict_set_bin()
- [#1242317](https://bugzilla.redhat.com/1242317): [RFE] Improve I/O latency during signing
- [#1242333](https://bugzilla.redhat.com/1242333): rdma : pending - porting log messages to a new framework
- [#1242421](https://bugzilla.redhat.com/1242421): Enable multi-threaded epoll for glusterd process
- [#1242504](https://bugzilla.redhat.com/1242504): [Data Tiering]: Frequency counters of un-selected file in the DB won't get cleared after a promotion/demotion cycle
- [#1242570](https://bugzilla.redhat.com/1242570): GlusterD crashes when management encryption is enabled
- [#1242609](https://bugzilla.redhat.com/1242609): replacing an offline brick fails with "replace-brick" command
- [#1242742](https://bugzilla.redhat.com/1242742): Gluster peer probe with negative num
- [#1242809](https://bugzilla.redhat.com/1242809): Performance: Impact of Bitrot on I/O Performance
- [#1242819](https://bugzilla.redhat.com/1242819): Quota list on a volume hangs after glusterd restart on a node.
- [#1242875](https://bugzilla.redhat.com/1242875): Quota: Quota Daemon doesn't start after node reboot
- [#1242892](https://bugzilla.redhat.com/1242892): SMB: share entry from smb.conf is not removed after setting user.cifs and user.smb to disable.
- [#1242894](https://bugzilla.redhat.com/1242894): [RFE] 'gluster volume help' output could be sorted alphabetically
- [#1243108](https://bugzilla.redhat.com/1243108): bash tab completion fails with "grep: Invalid range end"
- [#1243187](https://bugzilla.redhat.com/1243187): Disperse volume : client glusterfs crashed while running IO
- [#1243382](https://bugzilla.redhat.com/1243382): EC volume: Replace bricks is not healing version of root directory
- [#1243391](https://bugzilla.redhat.com/1243391): fail the fops if inode context get fails
- [#1243753](https://bugzilla.redhat.com/1243753): Gluster cli logs invalid argument error on every gluster command execution
- [#1243774](https://bugzilla.redhat.com/1243774): glusterd crashed when a client which doesn't support SSL tries to mount an SSL enabled gluster volume
- [#1243785](https://bugzilla.redhat.com/1243785): [Backup]: Password of the peer nodes prompted whenever a glusterfind session is deleted.
- [#1243798](https://bugzilla.redhat.com/1243798): quota/marker: dir count in inode quota is not atomic
- [#1243805](https://bugzilla.redhat.com/1243805): Gluster-nfs : unnecessary logging message in nfs.log for export feature
- [#1243806](https://bugzilla.redhat.com/1243806): logging: Revert usage of global xlator for log buffer
- [#1243812](https://bugzilla.redhat.com/1243812): [Backup]: Crash observed when keyboard interrupt is encountered in the middle of any glusterfind command
- [#1243838](https://bugzilla.redhat.com/1243838): [Backup]: Glusterfind list shows the session as corrupted on the peer node
- [#1243890](https://bugzilla.redhat.com/1243890): huge mem leak in posix xattrop
- [#1243946](https://bugzilla.redhat.com/1243946): RFE: posix: xattrop 'GF_XATTROP_ADD_DEF_ARRAY' implementation
- [#1244109](https://bugzilla.redhat.com/1244109): quota: brick crashes when create and remove performed in parallel
- [#1244144](https://bugzilla.redhat.com/1244144): [Backup]: Glusterfind pre attribute '--output-prefix' not working as expected in case of DELETEs
- [#1244165](https://bugzilla.redhat.com/1244165): [RHEV-RHGS] App VMs paused due to IO error caused by split-brain, after initiating remove-brick operation
- [#1244613](https://bugzilla.redhat.com/1244613): using fop's dict for resolving causes problems
- [#1245045](https://bugzilla.redhat.com/1245045): Data Loss: Remove-brick commit passes when the remove-brick process has not even started (due to killing glusterd)
- [#1245065](https://bugzilla.redhat.com/1245065): "rm -rf *" from multiple mount points fails to remove directories on all the subvolumes
- [#1245142](https://bugzilla.redhat.com/1245142): DHT-rebalance: Rebalance hangs on distribute volume when glusterd is stopped on peer node
- [#1245276](https://bugzilla.redhat.com/1245276): ec returns EIO error in cases where a more specific error could be returned
- [#1245331](https://bugzilla.redhat.com/1245331): volume start command is failing when glusterfs is compiled with debug enabled
- [#1245380](https://bugzilla.redhat.com/1245380): [RFE] Render all mounts of a volume defunct upon access revocation
- [#1245425](https://bugzilla.redhat.com/1245425): IFS is not set back after being used as "[" in log_newer function of include.rc
- [#1245544](https://bugzilla.redhat.com/1245544): quota/marker: errors in log file 'Failed to get metadata for'
- [#1245547](https://bugzilla.redhat.com/1245547): sharding - Fix unlink of sparse files
- [#1245558](https://bugzilla.redhat.com/1245558): gluster vol quota dist-vol list is not displaying quota information.
- [#1245689](https://bugzilla.redhat.com/1245689): ec sequentializes all reads, limiting read throughput
- [#1245895](https://bugzilla.redhat.com/1245895): gluster snapshot status --xml gives back unexpected non xml output
- [#1245935](https://bugzilla.redhat.com/1245935): Data Tiering: Change the error message when a detach-tier status is issued on a non-tier volume
- [#1245981](https://bugzilla.redhat.com/1245981): forgotten inodes are not being signed
- [#1246052](https://bugzilla.redhat.com/1246052): Deceiving log messages like "Failing STAT on gfid : split-brain observed. [Input/output error]" reported
- [#1246082](https://bugzilla.redhat.com/1246082): sharding - Populate the aggregated ia_size and ia_blocks before unwinding (f)setattr to upper layers
- [#1246229](https://bugzilla.redhat.com/1246229): tier_lookup_heal.t contains incorrect file_on_fast_tier function
- [#1246275](https://bugzilla.redhat.com/1246275): POSIX ACLs as used by a FUSE mount can not use more than 32 groups
- [#1246432](https://bugzilla.redhat.com/1246432): ./tests/basic/volume-snapshot.t spurious failure causing glusterd crash.
- [#1246736](https://bugzilla.redhat.com/1246736): client3_3_removexattr_cbk floods the logs with "No data available" messages
- [#1246794](https://bugzilla.redhat.com/1246794): GF_LOG_NONE logs always
- [#1247108](https://bugzilla.redhat.com/1247108): sharding - OS installation on vm image hangs on a sharded volume
- [#1247152](https://bugzilla.redhat.com/1247152): SSL improvements: ECDH, DH, CRL, and accessible options
- [#1247529](https://bugzilla.redhat.com/1247529): [geo-rep]: rename followed by deletes causes ESTALE
- [#1247536](https://bugzilla.redhat.com/1247536): Dist-geo-rep : checkpoint doesn't reach even though all the files have been synced through hybrid crawl.
- [#1247563](https://bugzilla.redhat.com/1247563): ACL created on a dht.linkto file on files that skipped rebalance
- [#1247603](https://bugzilla.redhat.com/1247603): glusterfs : fix double free possibility in the code
- [#1247765](https://bugzilla.redhat.com/1247765): Glusterfsd crashes because of thread-unsafe code in gf_authenticate
- [#1247930](https://bugzilla.redhat.com/1247930): rpc: check for unprivileged port should start at 1024 and not beyond 1024
- [#1248298](https://bugzilla.redhat.com/1248298): [upgrade] After upgrade from 3.5 to 3.6 onwards version, bumping up op-version failed
- [#1248306](https://bugzilla.redhat.com/1248306): tiering: rename fails with "Device or resource busy" error message
- [#1248415](https://bugzilla.redhat.com/1248415): rebalance stuck at 0 byte when auth.allow is set
- [#1248521](https://bugzilla.redhat.com/1248521): quota : display the size equivalent to the soft limit percentage in gluster v quota <volname> list* command
- [#1248669](https://bugzilla.redhat.com/1248669): all: Make all the xlator fops static to avoid incorrect symbol resolution
- [#1248887](https://bugzilla.redhat.com/1248887): AFR: Make [f]xattrop a metadata transaction
- [#1249391](https://bugzilla.redhat.com/1249391): Fix build on Mac OS X, booleans
- [#1249499](https://bugzilla.redhat.com/1249499): Make ping-timeout option configurable at a volume-level
- [#1250009](https://bugzilla.redhat.com/1250009): Dist-geo-rep: Too many "remote operation failed: No such file or directory" warning messages in auxiliary mount log on slave while executing "rm -rf"
- [#1250170](https://bugzilla.redhat.com/1250170): Write performance from a Windows client on 3-way replicated volume decreases substantially when one brick in the replica set is brought down
- [#1250297](https://bugzilla.redhat.com/1250297): [New] - glusterfs dead when user creates a rdma volume
- [#1250387](https://bugzilla.redhat.com/1250387): [RFE] changes needed in snapshot info command's xml output.
- [#1250441](https://bugzilla.redhat.com/1250441): Sharding - Excessive logging of messages of the kind 'Failed to get trusted.glusterfs.shard.file-size for bf292f5b-6dd6-45a8-b03c-aaf5bb973c50'
- [#1250582](https://bugzilla.redhat.com/1250582): Quota: volume-reset shouldn't remove quota-deem-statfs, unless explicitly specified, when quota is enabled.
- [#1250601](https://bugzilla.redhat.com/1250601): nfs-ganesha: remove the entry of the deleted node
- [#1250628](https://bugzilla.redhat.com/1250628): nfs-ganesha: ganesha-ha.sh --status is actually same as "pcs status"
- [#1250797](https://bugzilla.redhat.com/1250797): rpc: Address issues with transport object reference and leak
- [#1250803](https://bugzilla.redhat.com/1250803): Perf: Metadata operation(ls -l) performance regression.
- [#1250828](https://bugzilla.redhat.com/1250828): Tiering: segfault when trying to rename a file
- [#1250855](https://bugzilla.redhat.com/1250855): sharding - Renames on non-sharded files failing with ENOMEM
- [#1251042](https://bugzilla.redhat.com/1251042): while re-configuring the scrubber frequency, scheduling is not happening based on current time
- [#1251121](https://bugzilla.redhat.com/1251121): Unable to demote files in tiered volumes when cold tier is EC.
- [#1251346](https://bugzilla.redhat.com/1251346): statfs giving incorrect values for AFR arbiter volumes
- [#1251446](https://bugzilla.redhat.com/1251446): Disperse volume: fuse mount hung after self healing
- [#1251449](https://bugzilla.redhat.com/1251449): posix_make_ancestryfromgfid doesn't set op_errno
- [#1251454](https://bugzilla.redhat.com/1251454): marker: set loc.parent if NULL
- [#1251592](https://bugzilla.redhat.com/1251592): Fix the tests infra
- [#1251674](https://bugzilla.redhat.com/1251674): Add known failures to bad_tests list
- [#1251821](https://bugzilla.redhat.com/1251821): /usr/lib/glusterfs/ganesha/ganesha_ha.sh is distro specific
- [#1251824](https://bugzilla.redhat.com/1251824): Sharding - Individual shards' ownership differs from that of the original file
- [#1251857](https://bugzilla.redhat.com/1251857): nfs-ganesha: new volume creation tries to bring up glusterfs-nfs even when nfs-ganesha is already on
- [#1251980](https://bugzilla.redhat.com/1251980): dist-geo-rep: geo-rep status shows Active/Passive even when all the gsync processes in a node are killed
- [#1252121](https://bugzilla.redhat.com/1252121): tier.t contains pattern matching error in check_counters function
- [#1252244](https://bugzilla.redhat.com/1252244): DHT: If directory creation is in progress and a rename of that directory comes from another mount point, then after both operations a few files are not accessible and not listed on the mount, and more than one directory has the same gfid
- [#1252263](https://bugzilla.redhat.com/1252263): Sharding - Send inode forgets on _all_ shards if/when the protocol layer (FUSE/Gfapi) at the top sends a forget on the actual file
- [#1252374](https://bugzilla.redhat.com/1252374): tests: no cleanup on receiving external signals INT, TERM and HUP
- [#1252410](https://bugzilla.redhat.com/1252410): libgfapi : adding follow flag to glfs_h_lookupat()
- [#1252448](https://bugzilla.redhat.com/1252448): Probing a new node, which is part of another cluster, should throw proper error message in logs and CLI
- [#1252586](https://bugzilla.redhat.com/1252586): Legacy files pre-existing tier attach must be promoted
- [#1252695](https://bugzilla.redhat.com/1252695): posix : pending - porting log messages to a new framework
- [#1252696](https://bugzilla.redhat.com/1252696): After resetting diagnostics.client-log-level, Debug messages are still logged in scrubber log
- [#1252737](https://bugzilla.redhat.com/1252737): xml output for volume status on tiered volume
- [#1252807](https://bugzilla.redhat.com/1252807): libgfapi : pending - porting log messages to a new framework
- [#1252808](https://bugzilla.redhat.com/1252808): protocol server : Pending - porting log messages to a new framework
- [#1252825](https://bugzilla.redhat.com/1252825): Though scrubber settings are changed on one volume, the log shows scrubber information for all volumes
- [#1252836](https://bugzilla.redhat.com/1252836): libglusterfs: Pending - Porting log messages to new framework
- [#1253149](https://bugzilla.redhat.com/1253149): performance translators: Pending - porting logging messages to new logging framework
- [#1253309](https://bugzilla.redhat.com/1253309): AFR: gluster v restart force or brick process restart doesn't heal the files
- [#1253828](https://bugzilla.redhat.com/1253828): glusterd: remove unused large memory/buffer allocations
- [#1253831](https://bugzilla.redhat.com/1253831): glusterd: clean dead initializations
- [#1253967](https://bugzilla.redhat.com/1253967): glusterfs doesn't include firewalld rules
- [#1253970](https://bugzilla.redhat.com/1253970): garbage files created in /var/run/gluster
- [#1254121](https://bugzilla.redhat.com/1254121): Start self-heal and display correct heal info after replace brick
- [#1254127](https://bugzilla.redhat.com/1254127): Spurious failure blocking NetBSD regression runs
- [#1254146](https://bugzilla.redhat.com/1254146): quota: number of warning messages in nfs.log for a single file itself
- [#1254167](https://bugzilla.redhat.com/1254167): `gluster volume heal <vol-name> split-brain' changes required for entry-split-brain
- [#1254428](https://bugzilla.redhat.com/1254428): Data Tiering : Writes to a file being promoted/demoted are missing once the file migration is complete
- [#1254451](https://bugzilla.redhat.com/1254451): Data Tiering : Some tier xlator_fops translate to the default fops
- [#1254494](https://bugzilla.redhat.com/1254494): nfs-ganesha: refresh-config stdout output does not make sense
- [#1254850](https://bugzilla.redhat.com/1254850): Fix build on Mac OS X, glfs_h_lookupat symbol version
- [#1254863](https://bugzilla.redhat.com/1254863): non-default symver macros are incorrect
- [#1255310](https://bugzilla.redhat.com/1255310): Snapshot: When soft limit is reached and auto-delete is enabled, create snapshot doesn't log anything in log files
- [#1255386](https://bugzilla.redhat.com/1255386): snapd/quota/nfs daemons run on the node even after that node was detached from trusted storage pool
- [#1255599](https://bugzilla.redhat.com/1255599): Remove unwanted tests from volume-snapshot.t
- [#1255693](https://bugzilla.redhat.com/1255693): Tiering status command is very cumbersome.
- [#1255694](https://bugzilla.redhat.com/1255694): glusterd: volume status backward compatibility
- [#1256243](https://bugzilla.redhat.com/1256243): remove-brick: avoid mknod op falling on decommissioned brick even after fix-layout has happened on parent directory
- [#1256352](https://bugzilla.redhat.com/1256352): gluster-nfs: contents of export file are not updated correctly in its context
- [#1256580](https://bugzilla.redhat.com/1256580): sharding - VM image size as seen from the mount keeps growing beyond configured size on a sharded volume
- [#1256588](https://bugzilla.redhat.com/1256588): arbiter-statfs.t fails spuriously in NetBSD regression
- [#1257076](https://bugzilla.redhat.com/1257076): DHT-rebalance: rebalance status shows failed when replica pair bricks are brought down in distrep volume while re-name of files going on
- [#1257110](https://bugzilla.redhat.com/1257110): Logging: unnecessary log message "REMOVEXATTR No data available" when files are written to glusterfs mount
- [#1257149](https://bugzilla.redhat.com/1257149): Provide more meaningful errors on peer probe and peer detach
- [#1257533](https://bugzilla.redhat.com/1257533): snapshot delete all command fails with --xml option.
- [#1257694](https://bugzilla.redhat.com/1257694): quota: removexattr on /d/backends/patchy/.glusterfs/79/99/799929ec-f546-4bbf-8549-801b79623262 (for trusted.glusterfs.quota.add7e3f8-833b-48ec-8a03-f7cd09925468.contri) [No such file or directory]
- [#1257709](https://bugzilla.redhat.com/1257709): Copy NFS-Ganesha export files as part of volume snapshot creation
- [#1257792](https://bugzilla.redhat.com/1257792): bug-1238706-daemons-stop-on-peer-cleanup.t fails occasionally
- [#1257847](https://bugzilla.redhat.com/1257847): Dist-geo-rep: Geo-replication doesn't work with NetBSD
- [#1257911](https://bugzilla.redhat.com/1257911): add policy mechanism for promotion and demotion
- [#1258196](https://bugzilla.redhat.com/1258196): gNFSd: NFS mount fails with "Remote I/O error"
- [#1258311](https://bugzilla.redhat.com/1258311): trace xlator: Print write size also in trace_writev logs
- [#1258334](https://bugzilla.redhat.com/1258334): Sharding - Unlink of VM images can sometimes fail with EINVAL
- [#1258714](https://bugzilla.redhat.com/1258714): bug-948686.t fails spuriously
- [#1258766](https://bugzilla.redhat.com/1258766): quota test 'quota-nfs.t' fails spuriously
- [#1258801](https://bugzilla.redhat.com/1258801): Change order of marking AFR post op
- [#1258883](https://bugzilla.redhat.com/1258883): build: compile error on RHEL5
- [#1258905](https://bugzilla.redhat.com/1258905): Sharding - read/write performance improvements for VM workload
- [#1258975](https://bugzilla.redhat.com/1258975): packaging: gluster-server install failure due to %ghost of hooks/.../delete
- [#1259225](https://bugzilla.redhat.com/1259225): Add node of nfs-ganesha not working on rhel7.1
- [#1259298](https://bugzilla.redhat.com/1259298): Tier xattr name is misleading (trusted.tier-gfid)
- [#1259572](https://bugzilla.redhat.com/1259572): client is sending io to arbiter with replica 2
- [#1259651](https://bugzilla.redhat.com/1259651): sharding - Fix reads on zero-byte shards representing holes in the file
- [#1260051](https://bugzilla.redhat.com/1260051): DHT: Few files are missing after remove-brick operation
- [#1260147](https://bugzilla.redhat.com/1260147): fuse client crashed during i/o
- [#1260185](https://bugzilla.redhat.com/1260185): Data Tiering: Regression: Commit of detach tier passes directly, without even issuing a detach tier start
- [#1260545](https://bugzilla.redhat.com/1260545): Quota+Rebalance : While rebalance is in progress , quota list shows 'Used Space' more than the Hard Limit set
- [#1260561](https://bugzilla.redhat.com/1260561): transport and port should be optional arguments for glfs_set_volfile_server
- [#1260611](https://bugzilla.redhat.com/1260611): snapshot: from nfs-ganesha mount no content seen in .snaps/<snapshot-name> directory
- [#1260637](https://bugzilla.redhat.com/1260637): sharding - Do not expose internal sharding xattrs to the application.
- [#1260730](https://bugzilla.redhat.com/1260730): Database locking due to write contention between CTR sql connection and tier migrator sql connection
- [#1260848](https://bugzilla.redhat.com/1260848): Disperse volume: df -h on a nfs mount throws Invalid argument error
- [#1260918](https://bugzilla.redhat.com/1260918): [BACKUP]: If more than 1 node in the cluster is not added in known_hosts, glusterfind create command hangs
- [#1261260](https://bugzilla.redhat.com/1261260): [RFE]: Have reads be performed on same bricks for a given file
- [#1261276](https://bugzilla.redhat.com/1261276): Tier/shd: Tracker bug for tier and shd compatibility
- [#1261399](https://bugzilla.redhat.com/1261399): [HC] Fuse mount crashes, when client-quorum is not met
- [#1261404](https://bugzilla.redhat.com/1261404): No quota API to get real hard-limit value.
- [#1261444](https://bugzilla.redhat.com/1261444): cli : volume start will create/overwrite ganesha export file
- [#1261482](https://bugzilla.redhat.com/1261482): glusterd_copy_file can cause file corruption
- [#1261741](https://bugzilla.redhat.com/1261741): Tier: glusterd crash when trying to detach, when hot tier has exactly one brick and cold tier is of replica type
- [#1261757](https://bugzilla.redhat.com/1261757): Tiering/glusterd: volume status failed after detach tier start
- [#1261773](https://bugzilla.redhat.com/1261773): features.sharding is not available in 'gluster volume set help'
- [#1261819](https://bugzilla.redhat.com/1261819): Data Tiering: Disallow attach tier on a volume where any rebalance process is in progress to avoid deadlock (like remove-brick commit pending, etc.)
- [#1261837](https://bugzilla.redhat.com/1261837): Data Tiering: Volume task status showing as remove brick when detach tier is triggered
- [#1261841](https://bugzilla.redhat.com/1261841): [HC] Implement fallocate, discard and zerofill with sharding
- [#1261862](https://bugzilla.redhat.com/1261862): Data Tiering: detach-tier start force command not available on a tier volume (unlike force remove-brick, where it is possible)
- [#1261927](https://bugzilla.redhat.com/1261927): Minor improvements and code cleanup for rpc
- [#1262345](https://bugzilla.redhat.com/1262345): `getfattr -n replica.split-brain-status <file>' command hung on the mount
- [#1262438](https://bugzilla.redhat.com/1262438): Error not propagated correctly if selfheal layout lock fails
- [#1262805](https://bugzilla.redhat.com/1262805): [upgrade] After upgrade from 3.5 to 3.6, probing a new 3.6 node is moving the peer to rejected state
- [#1262881](https://bugzilla.redhat.com/1262881): nfs-ganesha: refresh-config stdout output includes dbus messages "method return sender=:1.61 -> dest=:1.65 reply_serial=2"
- [#1263056](https://bugzilla.redhat.com/1263056): libgfapi: brick process crashes if attr KEY length > 255 for glfs_lgetxattr(...)
- [#1263087](https://bugzilla.redhat.com/1263087): RHEL7/systemd : can't have server in debug mode anymore
- [#1263100](https://bugzilla.redhat.com/1263100): Data Tiering: Tiering related information is not displayed in gluster volume status xml output
- [#1263177](https://bugzilla.redhat.com/1263177): Data Tiering: Change error message, as detach-tier error message shows as "remove-brick"
- [#1263204](https://bugzilla.redhat.com/1263204): Data Tiering: Setting only promote frequency and no demote frequency causes crash
- [#1263224](https://bugzilla.redhat.com/1263224): 'gluster v tier/attach-tier/detach-tier help' command shows the usage, and then throws 'Tier command failed' error message
- [#1263549](https://bugzilla.redhat.com/1263549): I/O failure on attaching tier
- [#1263726](https://bugzilla.redhat.com/1263726): Data Tiering: Detach tier status shows number of failures even when all files are migrated successfully
- [#1265148](https://bugzilla.redhat.com/1265148): Dist-geo-rep: Support geo-replication to work with sharding
- [#1265470](https://bugzilla.redhat.com/1265470): AFR: "gluster volume heal <volume_name> info" doesn't report the fqdn of storage nodes.
- [#1265479](https://bugzilla.redhat.com/1265479): AFR: cluster options like data-self-heal, metadata-self-heal and entry-self-heal should not be allowed to be set if the volume is not a distribute-replicate volume
- [#1265516](https://bugzilla.redhat.com/1265516): sharding - Add more logs in failure code paths + port existing messages to the msg-id framework
- [#1265522](https://bugzilla.redhat.com/1265522): Geo-Replication fails on uppercase hostnames
- [#1265531](https://bugzilla.redhat.com/1265531): Message ids in quota-messages.h should start from 120000 as opposed to 110000
- [#1265677](https://bugzilla.redhat.com/1265677): Have a way to disable readdirp on dht from glusterd volume set command
- [#1265893](https://bugzilla.redhat.com/1265893): Perf: Getting bad performance while doing ls
- [#1266476](https://bugzilla.redhat.com/1266476): RFE : Feature: Periodic FOP statistics dumps for v3.6.x/v3.7.x
- [#1266818](https://bugzilla.redhat.com/1266818): Disabling enable-shared-storage deletes the volume with the name - "gluster_shared_storage"
- [#1266834](https://bugzilla.redhat.com/1266834): AFR : fuse,nfs mount hangs when directories with same names are created and deleted continuously
- [#1266875](https://bugzilla.redhat.com/1266875): geo-replication: [RFE] Geo-replication + Tiering
- [#1266877](https://bugzilla.redhat.com/1266877): Possible memory leak during rebalance with large quantity of files
- [#1266883](https://bugzilla.redhat.com/1266883): protocol/server: do not define the number of inodes in terms of memory units
- [#1267539](https://bugzilla.redhat.com/1267539): Data Tiering: CLI crashes with segmentation fault when user tries "gluster v tier" command
- [#1267812](https://bugzilla.redhat.com/1267812): Data Tiering: Promotions and demotions fail after quota hard limits are hit for a tier volume
- [#1267950](https://bugzilla.redhat.com/1267950): need a way to pause/stop tiering to take snapshot
- [#1267967](https://bugzilla.redhat.com/1267967): core: use syscall wrappers instead of making direct syscalls
- [#1268755](https://bugzilla.redhat.com/1268755): Data Tiering: Throw a warning when user issues a detach-tier commit command
- [#1268790](https://bugzilla.redhat.com/1268790): Add bug-1221481-allow-fops-on-dir-split-brain.t to bad test
- [#1268796](https://bugzilla.redhat.com/1268796): Test tests/bugs/shard/bug-1245547.t failing consistently when run with patch http://review.gluster.org/#/c/11938/
- [#1268810](https://bugzilla.redhat.com/1268810): gluster v status --xml for a replicated hot tier volume
- [#1268822](https://bugzilla.redhat.com/1268822): tier/cli: number of bricks remains the same in v info --xml
- [#1269375](https://bugzilla.redhat.com/1269375): rm -rf on /run/gluster/vol/<directory name>/ is not showing the quota output header for other directories with quota limits applied
- [#1269461](https://bugzilla.redhat.com/1269461): Feature: Entry self-heal performance enhancements using more granular changelogs
- [#1269470](https://bugzilla.redhat.com/1269470): Self-heal daemon crashes when bricks go down at the time of data heal
- [#1269696](https://bugzilla.redhat.com/1269696): Glusterfsd crashes on pmap signin failure
- [#1269754](https://bugzilla.redhat.com/1269754): Core: Blocker: Segmentation fault when using fallocate command on a gluster volume
- [#1270328](https://bugzilla.redhat.com/1270328): Rare transport endpoint not connected error in tier.t tests.
- [#1270668](https://bugzilla.redhat.com/1270668): Index entries are not being purged in case the file does not exist
- [#1270694](https://bugzilla.redhat.com/1270694): Introduce priv dump in shard xlator for better debugging
- [#1271148](https://bugzilla.redhat.com/1271148): Tier: Do not promote/demote files on which POSIX locks are held
- [#1271150](https://bugzilla.redhat.com/1271150): libglusterfs: glusterd was not restarting after setting key=value length beyond PATH_MAX (4096) characters
- [#1271310](https://bugzilla.redhat.com/1271310): RFE : Feature: Tunable FOP sampling for v3.6.x/v3.7.x
- [#1271325](https://bugzilla.redhat.com/1271325): RFE: use code generation for repetitive stuff
- [#1271358](https://bugzilla.redhat.com/1271358): ECVOL: glustershd log grows quickly and fills up the root volume
- [#1271907](https://bugzilla.redhat.com/1271907): Improvement in install & package header files
- [#1272006](https://bugzilla.redhat.com/1272006): tools/glusterfind: add query command to list files without session
- [#1272207](https://bugzilla.redhat.com/1272207): Data Tiering: Filenames with spaces are not getting migrated at all
- [#1272319](https://bugzilla.redhat.com/1272319): Tier : Move common functions into tier.rc
- [#1272339](https://bugzilla.redhat.com/1272339): Creating an already deleted snapshot-clone deletes the corresponding snapshot.
- [#1272362](https://bugzilla.redhat.com/1272362): Fix in afr transaction code
- [#1272411](https://bugzilla.redhat.com/1272411): quota: set quota version for files/directories
- [#1272460](https://bugzilla.redhat.com/1272460): Disk usage mismatching after self-heal
- [#1272557](https://bugzilla.redhat.com/1272557): [Tier]: man page of gluster should be updated to list tier commands
- [#1272949](https://bugzilla.redhat.com/1272949): I/O failure on attaching tier on nfs client
- [#1272986](https://bugzilla.redhat.com/1272986): [sharding+geo-rep]: On existing slave mount, reading files fails to show sharded file content
- [#1273043](https://bugzilla.redhat.com/1273043): Data Tiering: Lots of "Promotions/Demotions failed" error messages
- [#1273215](https://bugzilla.redhat.com/1273215): Data Tiering: Promotions fail when bricks of the EC (disperse) cold layer are down
- [#1273315](https://bugzilla.redhat.com/1273315): fuse: Avoid redundant lookup on "." and ".." as part of every readdirp
- [#1273372](https://bugzilla.redhat.com/1273372): Data Tiering: getting "failed to fsync on germany-hot-dht (Structure needs cleaning)" warning
- [#1273387](https://bugzilla.redhat.com/1273387): FUSE clients in a container environment hang and do not recover post losing connections to all bricks
- [#1273726](https://bugzilla.redhat.com/1273726): Fully support data-tiering in 3.7.x, remove out of 'experimental' status
- [#1274626](https://bugzilla.redhat.com/1274626): Remove selinux mount option from "man mount.glusterfs"
- [#1274629](https://bugzilla.redhat.com/1274629): Data Tiering: error "[2015-10-14 18:15:09.270483] E [MSGID: 122037] [ec-common.c:1502:ec_update_size_version_done] 0-tiervolume-disperse-1: Failed to update version and size [Input/output error]"
- [#1274847](https://bugzilla.redhat.com/1274847): CTR should be enabled on attach tier, disabled otherwise.
- [#1275247](https://bugzilla.redhat.com/1275247): I/O hangs while self-heal is in progress on files
- [#1275383](https://bugzilla.redhat.com/1275383): Data Tiering: Getting lookup failed on files in hot tier when volume is restarted
- [#1275489](https://bugzilla.redhat.com/1275489): Enhance the naming used for bugs for better name space
- [#1275502](https://bugzilla.redhat.com/1275502): [Tier]: Typo in the output while setting the wrong value of low/hi watermark
- [#1275524](https://bugzilla.redhat.com/1275524): Data Tiering: heat counters not getting reset and also internal ops seem to be heating the files
- [#1275616](https://bugzilla.redhat.com/1275616): snap-max-hard-limit for snapshots always shows as 256 in info file.
- [#1275966](https://bugzilla.redhat.com/1275966): RFE: Exporting multiple subdirectory entries for gluster volume using cli
- [#1276018](https://bugzilla.redhat.com/1276018): Wrong value of snap-max-hard-limit observed in 'gluster volume info'.
- [#1276023](https://bugzilla.redhat.com/1276023): Clone creation should not be successful when the node participating in volume goes down.
- [#1276028](https://bugzilla.redhat.com/1276028): [RFE] Geo-replication support for Volumes running in docker containers
- [#1276031](https://bugzilla.redhat.com/1276031): Assertion failure while moving files between directories on a dispersed volume
- [#1276141](https://bugzilla.redhat.com/1276141): Data Tiering: Tiering daemon is seeing each part of a file in a Disperse cold volume as a different file
- [#1276203](https://bugzilla.redhat.com/1276203): add-brick on a replicate volume could lead to data-loss
- [#1276243](https://bugzilla.redhat.com/1276243): gluster-nfs : Server crashed due to an invalid reference
- [#1276386](https://bugzilla.redhat.com/1276386): vol replace-brick fails when transport.socket.bind-address is set in glusterd
- [#1276423](https://bugzilla.redhat.com/1276423): glusterd: probing a new node (>=3.6) from a 3.5 cluster is moving the peer to rejected state
- [#1276562](https://bugzilla.redhat.com/1276562): Data Tiering: tiering daemon crashes when trying to heat the file
- [#1276643](https://bugzilla.redhat.com/1276643): Upgrading a subset of cluster to 3.7.5 leads to issues with glusterd commands
- [#1276675](https://bugzilla.redhat.com/1276675): Arbiter volume becomes replica volume in some cases
- [#1276839](https://bugzilla.redhat.com/1276839): Geo-replication doesn't deal properly with sparse files
- [#1276989](https://bugzilla.redhat.com/1276989): ec-readdir.t is failing consistently
- [#1277024](https://bugzilla.redhat.com/1277024): BSD Smoke fails with _IOS_SAMP_DIR undeclared
- [#1277076](https://bugzilla.redhat.com/1277076): Monitor should restart the worker process when Changelog agent dies
- [#1277081](https://bugzilla.redhat.com/1277081): [New] - Message displayed after attach tier is misleading
- [#1277105](https://bugzilla.redhat.com/1277105): vol quota enable fails when transport.socket.bind-address is set in glusterd
- [#1277352](https://bugzilla.redhat.com/1277352): [Tier]: restarting volume reports "insert/update failure" in cold brick logs
- [#1277481](https://bugzilla.redhat.com/1277481): Upgrading to 3.7.5-5 has changed volume to distributed disperse
- [#1277533](https://bugzilla.redhat.com/1277533): stop-all-gluster-processes.sh doesn't return correct return status
- [#1277716](https://bugzilla.redhat.com/1277716): fix lookup-unhashed for tiered volumes.
- [#1277997](https://bugzilla.redhat.com/1277997): vol heal info fails when transport.socket.bind-address is set in glusterd
- [#1278326](https://bugzilla.redhat.com/1278326): [New] - Files in a tiered volume gets promoted when bitd signs them
- [#1278418](https://bugzilla.redhat.com/1278418): Spurious failure in bug-1275616.t
- [#1278476](https://bugzilla.redhat.com/1278476): move mount-nfs-auth.t to failed tests lists
- [#1278689](https://bugzilla.redhat.com/1278689): quota/marker: quota accounting goes wrong when renaming a file while IO is in progress
- [#1278709](https://bugzilla.redhat.com/1278709): Tests/tiering: Correct typo in bug-1214222-directories_miising_after_attach_tier.t in bad_tests
- [#1278927](https://bugzilla.redhat.com/1278927): [New] - Message shown in gluster vol tier <volname> status output is incorrect.
- [#1279166](https://bugzilla.redhat.com/1279166): Data Tiering: Metadata changes to a file should not heat/promote the file
- [#1279297](https://bugzilla.redhat.com/1279297): Remove bug-1275616.t from bad tests list
- [#1279327](https://bugzilla.redhat.com/1279327): [Snapshot]: Clone creation fails on tiered volume with pre-validation failed message
- [#1279376](https://bugzilla.redhat.com/1279376): Data Tiering: Rename of cold file to a hot file causing split-brain and showing two copies of files in mount point
- [#1279484](https://bugzilla.redhat.com/1279484): glusterfsd to support volfile-server-transport type "unix"
- [#1279637](https://bugzilla.redhat.com/1279637): Data Tiering: Regression: Detach tier commit is passing when detach tier is in progress
- [#1279705](https://bugzilla.redhat.com/1279705): AFR: 3-way-replication: "Transport endpoint not connected" error message not displayed when one of the replica pair is down
- [#1279730](https://bugzilla.redhat.com/1279730): guest paused due to IO error from gluster based storage doesn't resume automatically or manually
- [#1279739](https://bugzilla.redhat.com/1279739): libgfapi to support set_volfile-server-transport type "unix"
- [#1279836](https://bugzilla.redhat.com/1279836): Fails to build twice in a row
- [#1279921](https://bugzilla.redhat.com/1279921): volume info of %s obtained from %s: ambiguous uuid - Starting geo-rep session
- [#1280428](https://bugzilla.redhat.com/1280428): fops-during-migration-pause.t spurious failure
- [#1281230](https://bugzilla.redhat.com/1281230): dht must avoid fresh lookups when a single replica pair goes offline
- [#1281265](https://bugzilla.redhat.com/1281265): DHT: log is full of 'Found anomalies in /<DIR> (gfid = 00000000-0000-0000-0000-000000000000)' for each directory which was self-healed
- [#1281598](https://bugzilla.redhat.com/1281598): Data Tiering: "ls" count taking link files and promote/demote files into consideration both on fuse and nfs mount
- [#1281892](https://bugzilla.redhat.com/1281892): packaging: gfind_missing_files are not in geo-rep %if ... %endif conditional
- [#1282076](https://bugzilla.redhat.com/1282076): cache mode must be the default mode for tiered volumes
- [#1282322](https://bugzilla.redhat.com/1282322): [GlusterD]: Volume start fails post add-brick on a volume which is not started
- [#1282331](https://bugzilla.redhat.com/1282331): Geo-replication is logging in Localtime
- [#1282390](https://bugzilla.redhat.com/1282390): Data Tiering: delete command rm -rf not deleting the linkto (hashed) files which are under migration; possible split-brain observed and possible disk wastage
- [#1282461](https://bugzilla.redhat.com/1282461): [upgrade] Error messages seen in glusterd logs, while upgrading from RHGS 2.1.6 to RHGS 3.1
- [#1282673](https://bugzilla.redhat.com/1282673): ./tests/basic/tier/record-metadata-heat.t is failing upstream
- [#1282751](https://bugzilla.redhat.com/1282751): Large system file distribution is broken
- [#1282761](https://bugzilla.redhat.com/1282761): EC: File healing promotes it to hot tier
- [#1282915](https://bugzilla.redhat.com/1282915): glusterfs does not register with rpcbind on restart
- [#1283032](https://bugzilla.redhat.com/1283032): While a file is self-healing, append to the file hangs
- [#1283103](https://bugzilla.redhat.com/1283103): Setting security.* xattrs fails
- [#1283178](https://bugzilla.redhat.com/1283178): [GlusterD]: Incorrect peer status shown if a volume restart is done before the entire cluster is updated.
- [#1283211](https://bugzilla.redhat.com/1283211): check_host_list() should be more robust
- [#1283485](https://bugzilla.redhat.com/1283485): Warning messages seen in glusterd logs while executing gluster volume set help
- [#1283488](https://bugzilla.redhat.com/1283488): [Tier]: Space is missing between the words in the detach tier stop error message
- [#1283567](https://bugzilla.redhat.com/1283567): quota/marker: backward compatibility with quota xattr versioning
- [#1283948](https://bugzilla.redhat.com/1283948): glupy default CFLAGS conflict with our CFLAGS when --enable-debug is used
- [#1283983](https://bugzilla.redhat.com/1283983): nfs-ganesha: Upcall sent on null gfid
- [#1284090](https://bugzilla.redhat.com/1284090): sometimes files are not getting demoted from hot tier to cold tier
- [#1284357](https://bugzilla.redhat.com/1284357): Data Tiering: Change the error message when a detach-tier status is issued on a non-tier volume
- [#1284365](https://bugzilla.redhat.com/1284365): Sharding - Extending writes filling incorrect final size in postbuf
- [#1284372](https://bugzilla.redhat.com/1284372): [Tier]: Stopping and Starting tier volume triggers fixing layout which fails on local host
- [#1284419](https://bugzilla.redhat.com/1284419): Resource leak in marker
- [#1284752](https://bugzilla.redhat.com/1284752): quota cli: enhance quota list command to list usage even if the limit is not set
- [#1284789](https://bugzilla.redhat.com/1284789): Snapshot creation after attach-tier causes glusterd crash
- [#1284823](https://bugzilla.redhat.com/1284823): fops-during-migration.t fails if hot and cold tiers are dist-rep
- [#1285046](https://bugzilla.redhat.com/1285046): AFR self-heal-daemon option is still set on volume though tier is detached
- [#1285152](https://bugzilla.redhat.com/1285152): store afr pending xattrs as a volume option
- [#1285173](https://bugzilla.redhat.com/1285173): Create doesn't remember flags it is opened with
- [#1285230](https://bugzilla.redhat.com/1285230): Data Tiering: File create terminates with "Input/output error" as split-brain is observed
- [#1285241](https://bugzilla.redhat.com/1285241): Corrupted-objects list does not get cleared even after all the files in the volume are deleted, and the count increases as old + new count
- [#1285288](https://bugzilla.redhat.com/1285288): Better indication of arbiter brick presence in a volume.
- [#1285483](https://bugzilla.redhat.com/1285483): legacy_many_files.t fails upstream
- [#1285488](https://bugzilla.redhat.com/1285488): [geo-rep]: Recommended shared-volume use in geo-replication is broken
- [#1285616](https://bugzilla.redhat.com/1285616): Brick crashes because of race in bit-rot init
- [#1285634](https://bugzilla.redhat.com/1285634): Self-heal triggered every couple of seconds on a 3-node, 1-arbiter setup
- [#1285660](https://bugzilla.redhat.com/1285660): sharding - reads fail on sharded volume while running iozone
- [#1285663](https://bugzilla.redhat.com/1285663): tiering: Seeing error messages E "/usr/lib64/glusterfs/3.7.5/xlator/features/changetimerecorder.so(ctr_lookup+0x54f) [0x7f6c435c116f] ) 0-ctr: invalid argument: loc->name [Invalid argument]" after attach tier
- [#1285968](https://bugzilla.redhat.com/1285968): cli/geo-rep: remove unused code
- [#1285989](https://bugzilla.redhat.com/1285989): bitrot: bitrot scrub status command should display the correct values for the total number of scrubbed and unsigned files
- [#1286017](https://bugzilla.redhat.com/1286017): We need to skip data self-heal for arbiter bricks
- [#1286029](https://bugzilla.redhat.com/1286029): Data Tiering: File create terminates with "Input/output error" as split-brain is observed
- [#1286279](https://bugzilla.redhat.com/1286279): tools/glusterfind: add --full option to query command
- [#1286346](https://bugzilla.redhat.com/1286346): Data Tiering: Don't allow setting the frequency-threshold values, or reset them to zero, when the record counter (features.record-counter) is turned off
- [#1286656](https://bugzilla.redhat.com/1286656): Data Tiering: Read heat not getting calculated and read operations not heating the file with the counter enabled
- [#1286735](https://bugzilla.redhat.com/1286735): RFE: add setup and teardown for fuse tests
- [#1286910](https://bugzilla.redhat.com/1286910): Tier: ec xattrs are set on a newly created file present in the non-ec hot tier
- [#1286959](https://bugzilla.redhat.com/1286959): [GlusterD]: After log rotation of the cmd_history.log file, the next executed gluster commands are not present in the cmd_history.log file.
- [#1286974](https://bugzilla.redhat.com/1286974): Without detach tier commit, status changes back to tier migration
- [#1286988](https://bugzilla.redhat.com/1286988): bitrot: gluster man page and gluster CLI usage do not mention the new scrub status command
- [#1287027](https://bugzilla.redhat.com/1287027): glusterd: CLI shows command success for rebalance commands (commands which use the op_sm framework) even though staging failed on a follower node.
- [#1287455](https://bugzilla.redhat.com/1287455): glusterd: all the daemons of an existing volume stop upon peer detach
- [#1287503](https://bugzilla.redhat.com/1287503): Full heal of volume fails on some nodes "Commit failed on X", and glustershd logs "Couldn't get xlator xl-0"
- [#1287517](https://bugzilla.redhat.com/1287517): Memory leak in glusterd
- [#1287519](https://bugzilla.redhat.com/1287519): [geo-rep+tiering]: symlinks are not getting synced to slave on tiered master setup
- [#1287539](https://bugzilla.redhat.com/1287539): xattrs on directories are unavailable on distributed replicated volume after adding new bricks
- [#1287723](https://bugzilla.redhat.com/1287723): Handle Rsync/Tar errors effectively
- [#1287763](https://bugzilla.redhat.com/1287763): glusterfs does not allow passing standard SELinux mount options to fuse
- [#1287842](https://bugzilla.redhat.com/1287842): A few snapshot creations fail with a pre-validation failed message on a tiered volume.
- [#1287872](https://bugzilla.redhat.com/1287872): add bug-924726.t to ignore list in regression
- [#1287992](https://bugzilla.redhat.com/1287992): [GlusterD]: Probing a node which has a standalone volume should not happen
- [#1287996](https://bugzilla.redhat.com/1287996): [Quota]: Peer status is in "Rejected" state with a quota-enabled volume
- [#1288019](https://bugzilla.redhat.com/1288019): Possible memory leak in the tiered daemon
- [#1288059](https://bugzilla.redhat.com/1288059): glusterd: disable the ping timer between glusterds and make the default epoll thread count 1
- [#1288474](https://bugzilla.redhat.com/1288474): tiering: quota list command is not working after attach or detach
- [#1288517](https://bugzilla.redhat.com/1288517): Data Tiering: new set of gluster v tier commands not working as expected
- [#1288857](https://bugzilla.redhat.com/1288857): Use after free bug in notify_kernel_loop in fuse-bridge code
- [#1288995](https://bugzilla.redhat.com/1288995): [tiering]: Tier daemon crashed on two of eight nodes, and lots of "demotion failed" messages seen on the system
- [#1289068](https://bugzilla.redhat.com/1289068): libgfapi: Errno incorrectly set to EINVAL even on success
- [#1289258](https://bugzilla.redhat.com/1289258): core: use syscall wrappers instead of making direct syscalls; pread, pwrite
- [#1289428](https://bugzilla.redhat.com/1289428): Test ./tests/bugs/fuse/bug-924726.t fails
- [#1289447](https://bugzilla.redhat.com/1289447): Sharding - Iozone on sharded volume fails on NFS
- [#1289578](https://bugzilla.redhat.com/1289578): [Tier]: Failed to open "demotequeryfile-master-tier-dht" errors logged on the node having only cold bricks
- [#1289584](https://bugzilla.redhat.com/1289584): brick_up_status in tests/volume.rc is not correct
- [#1289602](https://bugzilla.redhat.com/1289602): After detach-tier start writes still go to hot tier
- [#1289840](https://bugzilla.redhat.com/1289840): Sharding: Remove dependency on performance.strict-write-ordering
- [#1289845](https://bugzilla.redhat.com/1289845): spurious failure of bug-1279376-rename-demoted-file.t
- [#1289859](https://bugzilla.redhat.com/1289859): Symlink rename fails if the symlink does not exist on the slave
- [#1289869](https://bugzilla.redhat.com/1289869): Compilation is broken on gluster master
- [#1289916](https://bugzilla.redhat.com/1289916): Client will not get notified about changes to a volume if the node used while mounting goes down
- [#1289935](https://bugzilla.redhat.com/1289935): Glusterfind hook script fails if the /var/lib/glusterd/glusterfind dir is absent
- [#1290125](https://bugzilla.redhat.com/1290125): tests/basic/afr/arbiter-statfs.t fails most of the times on NetBSD
- [#1290151](https://bugzilla.redhat.com/1290151): hook script for CTDB should not change Samba config
- [#1290270](https://bugzilla.redhat.com/1290270): Several intermittent regression failures
- [#1290421](https://bugzilla.redhat.com/1290421): changelog: CHANGELOG rename error is logged on every changelog rollover
- [#1290604](https://bugzilla.redhat.com/1290604): S30Samba scripts do not work on systemd systems
- [#1290677](https://bugzilla.redhat.com/1290677): tiering: T files getting created even after the disk quota is exceeded
- [#1290734](https://bugzilla.redhat.com/1290734): [GlusterD]: GlusterD log is filled with error messages - "Failed to aggregate response from node/brick"
- [#1290766](https://bugzilla.redhat.com/1290766): [RFE] quota: enhance quota enable and disable process
- [#1290865](https://bugzilla.redhat.com/1290865): nfs-ganesha server does not enter grace period during failover/failback
- [#1290965](https://bugzilla.redhat.com/1290965): [Tiering] + [DHT] - Detach tier fails to migrate the files when there are corrupted objects in hot tier.
- [#1290975](https://bugzilla.redhat.com/1290975): File is not demoted after self heal (split-brain)
- [#1291212](https://bugzilla.redhat.com/1291212): Regular files are listed as 'T' files on nfs mount
- [#1291259](https://bugzilla.redhat.com/1291259): Upcall/Cache-Invalidation:  Use parent stbuf while updating parent entry
- [#1291537](https://bugzilla.redhat.com/1291537): [RFE] Provide mechanism to spin up reproducible test environment for all developers
- [#1291566](https://bugzilla.redhat.com/1291566): first file created after the hot tier is full fails to be created, but later ends up as a stale erroneous file (file with ???????????)
- [#1291603](https://bugzilla.redhat.com/1291603): [tiering]: read/write freq-threshold allows negative values
- [#1291701](https://bugzilla.redhat.com/1291701): Renames/deletes failed with "No such file or directory" when few of the bricks from the hot tier went offline
- [#1292067](https://bugzilla.redhat.com/1292067): Data Tiering: Watermark: File continuously trying to demote itself but failing: "[dht-rebalance.c:608:__dht_rebalance_create_dst_file] 0-wmrk-tier-dht: chown failed for //AP.BH.avi on wmrk-cold-dht (No such file or directory)"
- [#1292084](https://bugzilla.redhat.com/1292084): [georep+tiering]: Geo-replication sync is broken if cold tier is EC
- [#1292112](https://bugzilla.redhat.com/1292112): [Tier]: Starting the tier daemon using rebal tier start doesn't start tierd if it failed on any single node
- [#1292379](https://bugzilla.redhat.com/1292379): md5sum of files mismatch after the self-heal is complete on the file
- [#1292463](https://bugzilla.redhat.com/1292463): [geo-rep]: ChangelogException: [Errno 22] Invalid argument observed upon rebooting the ACTIVE master node
- [#1292671](https://bugzilla.redhat.com/1292671): [tiering]: cluster.tier-max-files option in tiering is not honored
- [#1292749](https://bugzilla.redhat.com/1292749): Friend update floods can render the cluster incapable of handling other commands
- [#1292954](https://bugzilla.redhat.com/1292954): all: fix various errors/warnings reported by cppcheck
- [#1293034](https://bugzilla.redhat.com/1293034): Creation of files on a hot tier volume takes a very long time
- [#1293133](https://bugzilla.redhat.com/1293133): all: fix clang compile warnings
- [#1293223](https://bugzilla.redhat.com/1293223): Disperse: Disperse volume (cold vol) crashes while writing files on tier volume
- [#1293227](https://bugzilla.redhat.com/1293227): Minor improvements and code cleanup for locks translator
- [#1293256](https://bugzilla.redhat.com/1293256): [Tier]: "Bad file descriptor" on removal of symlink only on tiered volume
- [#1293293](https://bugzilla.redhat.com/1293293): afr: warn if pending xattrs missing during init()
- [#1293414](https://bugzilla.redhat.com/1293414): [GlusterD]: Peer detach happens for a node which is hosting volume bricks
- [#1293523](https://bugzilla.redhat.com/1293523): tier-snapshot.t runs too slowly on RHEL6
- [#1293558](https://bugzilla.redhat.com/1293558): gluster cli crashed while performing 'gluster vol bitrot <vol_name> scrub status'
- [#1293601](https://bugzilla.redhat.com/1293601): quota: handle quota xattr removal when quota is enabled again
- [#1293932](https://bugzilla.redhat.com/1293932): [Tiering]: When files are heated continuously, promotions are so aggressive that files are promoted way beyond the high watermark
- [#1293950](https://bugzilla.redhat.com/1293950): Gluster manpage doesn't show geo-replication options
- [#1293963](https://bugzilla.redhat.com/1293963): [Tier]: cannot delete symlinks from a client using rm
- [#1294051](https://bugzilla.redhat.com/1294051): Writes to a file are possible even though the file is in split-brain
- [#1294053](https://bugzilla.redhat.com/1294053): Excessive logging in mount when bricks of the replica are down
- [#1294209](https://bugzilla.redhat.com/1294209): glusterfs.spec.in: use %global per Fedora packaging guidelines
- [#1294223](https://bugzilla.redhat.com/1294223): uses deprecated find -perm +xxx syntax
- [#1294446](https://bugzilla.redhat.com/1294446): Ganesha hook script executes showmount and causes a hang
- [#1294448](https://bugzilla.redhat.com/1294448): [tiering]: Incorrect display of 'gluster v tier help'
- [#1294479](https://bugzilla.redhat.com/1294479): quota: limit xattr not healed for a sub-directory on newly added bricks
- [#1294497](https://bugzilla.redhat.com/1294497): gluster volume status xml output of tiered volume has all the common services tagged under <coldBricks>
- [#1294588](https://bugzilla.redhat.com/1294588): Dist-geo-rep: geo-rep worker crashed during init with [Errno 34] Numerical result out of range.
- [#1294600](https://bugzilla.redhat.com/1294600): [Tier]: Killing glusterfs tier process doesn't reflect as failed/faulty in tier status
- [#1294637](https://bugzilla.redhat.com/1294637): [tiering]: Tiering isn't started after attaching hot tier and hence no promotion/demotion
- [#1294743](https://bugzilla.redhat.com/1294743): Lot of Inode not found messages in glfsheal log file
- [#1294786](https://bugzilla.redhat.com/1294786): Good files are not promoted in a tiered volume when bitrot is enabled
- [#1294794](https://bugzilla.redhat.com/1294794): "Transport endpoint not connected" in heal info though hot tier bricks are up
- [#1294809](https://bugzilla.redhat.com/1294809): mount options no longer valid: noexec, nosuid, noatime
- [#1294826](https://bugzilla.redhat.com/1294826): Speed up regression tests
- [#1295107](https://bugzilla.redhat.com/1295107): Fix mem leaks related to gfapi applications
- [#1295504](https://bugzilla.redhat.com/1295504): S29CTDBsetup hook script contains outdated documentation comments
- [#1295505](https://bugzilla.redhat.com/1295505): S29CTDB hook scripts contain comment references to downstream products and versions
- [#1295520](https://bugzilla.redhat.com/1295520): Manual mount command in S29CTDBsetup script lacks options (_netdev ...)
- [#1295702](https://bugzilla.redhat.com/1295702): Fix spurious failure in bug-1221481-allow-fops-on-dir-split-brain.t
- [#1295704](https://bugzilla.redhat.com/1295704): RFE: Provide a mechanism to disable some tests in regression
- [#1295763](https://bugzilla.redhat.com/1295763): Unable to modify quota hard limit on tier volume after disk limit got exceeded
- [#1295784](https://bugzilla.redhat.com/1295784): dht: misleading indentation, gcc-6
- [#1296174](https://bugzilla.redhat.com/1296174): geo-rep: hard-link rename issue on changelog replay
- [#1296206](https://bugzilla.redhat.com/1296206): Geo-Replication Session goes "FAULTY" when application logs rolled on master
- [#1296399](https://bugzilla.redhat.com/1296399): Stale stat information for corrupted objects (replicated volume)
- [#1296496](https://bugzilla.redhat.com/1296496): [georep+disperse]: Geo-Rep session went to faulty with errors "[Errno 5] Input/output error"
- [#1296611](https://bugzilla.redhat.com/1296611): Rebalance crashed after detach tier.
- [#1296818](https://bugzilla.redhat.com/1296818): Move away from gf_log completely to gf_msg
- [#1296992](https://bugzilla.redhat.com/1296992): Stricter dependencies for glusterfs-server
- [#1297172](https://bugzilla.redhat.com/1297172): Client self-heals block the FOP that triggered the heals
- [#1297195](https://bugzilla.redhat.com/1297195): no-mtab (-n) mount option ignores the next mount option
- [#1297311](https://bugzilla.redhat.com/1297311): Attach tier : Creates fail with invalid argument errors
- [#1297373](https://bugzilla.redhat.com/1297373): [write-behind] : Write/Append to a full volume causes fuse client to crash
- [#1297638](https://bugzilla.redhat.com/1297638): "gluster vol get volname user.metadata-text" command fails with "volume get option: failed: Did you mean cluster.metadata-self-heal?"
- [#1297695](https://bugzilla.redhat.com/1297695): heal info reporting is slow when I/O is in progress on the volume
- [#1297740](https://bugzilla.redhat.com/1297740): tests/bugs/quota/bug-1049323.t fails in fedora
- [#1297750](https://bugzilla.redhat.com/1297750): volume info xml does not show arbiter details
- [#1297897](https://bugzilla.redhat.com/1297897): RFE: "heal" commands' output should have fixed fields
- [#1298111](https://bugzilla.redhat.com/1298111): Fix sparse-file-self-heal.t and remove from bad tests
- [#1298439](https://bugzilla.redhat.com/1298439): GlusterD restart starts the bricks even when server quorum is not met
- [#1298498](https://bugzilla.redhat.com/1298498): glusterfs crash during load testing
- [#1298520](https://bugzilla.redhat.com/1298520): tests : Modifying tests for crypt xlator
- [#1299410](https://bugzilla.redhat.com/1299410): [Fuse]: crash while --attribute-timeout and --entry-timeout are set to 0
- [#1299497](https://bugzilla.redhat.com/1299497): Quota Aux mount crashed
- [#1299710](https://bugzilla.redhat.com/1299710): Glusterd: Creation of a volume fails if one of the bricks is down on the server
- [#1299819](https://bugzilla.redhat.com/1299819): Snapshot creation fails on a tiered volume
- [#1300152](https://bugzilla.redhat.com/1300152): Rebalance process crashed during cleanup_and_exit
- [#1300253](https://bugzilla.redhat.com/1300253): Test open-behind.t failing fairly often on NetBSD
- [#1300412](https://bugzilla.redhat.com/1300412): Data Tiering: Change the default tiering values to optimize tiering settings
- [#1300564](https://bugzilla.redhat.com/1300564): I/O failure during a graph change followed by an option change.
- [#1300596](https://bugzilla.redhat.com/1300596): 'gluster volume get' returns 0 value for server-quorum-ratio
- [#1300929](https://bugzilla.redhat.com/1300929): Lot of assertion failures are seen in nfs logs with disperse volume
- [#1300956](https://bugzilla.redhat.com/1300956): [RFE] Schedule Geo-replication
- [#1300979](https://bugzilla.redhat.com/1300979): [Snapshot]: Snapshot restore gets stuck in post-validation.
- [#1301032](https://bugzilla.redhat.com/1301032): [georep+tiering]: Hardlink sync is broken if master volume is tiered
- [#1301227](https://bugzilla.redhat.com/1301227): Tiering should break out of iterating query file once cycle time completes.
- [#1301352](https://bugzilla.redhat.com/1301352): Point users of glusterfs-hadoop to the upstream project
- [#1301473](https://bugzilla.redhat.com/1301473): [Tiering]: Values of watermarks, min free disk, etc. are miscalculated with quota set on the root directory of a gluster volume
- [#1302200](https://bugzilla.redhat.com/1302200): Unable to get the client statedump, as /var/run/gluster directory is not available by default
- [#1302201](https://bugzilla.redhat.com/1302201): Scrubber crash (list corruption)
- [#1302205](https://bugzilla.redhat.com/1302205): Improve error message for unsupported clients
- [#1302234](https://bugzilla.redhat.com/1302234): [SNAPSHOT]: Decrease the VHD_SIZE in snapshot.rc
- [#1302257](https://bugzilla.redhat.com/1302257): [tiering]: Quota object limits not adhered to, in a tiered volume
- [#1302291](https://bugzilla.redhat.com/1302291): Self heal command gives error "Launching heal operation to perform index self heal on volume vol0 has been unsuccessful"
- [#1302307](https://bugzilla.redhat.com/1302307): Vim commands from a non-root user fail to execute on a fuse mount with the trash feature enabled
- [#1302554](https://bugzilla.redhat.com/1302554): Able to create files when quota limit is set to 0
- [#1302772](https://bugzilla.redhat.com/1302772): promotions not balanced across hot tier sub-volumes
- [#1302948](https://bugzilla.redhat.com/1302948): tar complains: <fileName>: file changed as we read it
- [#1303028](https://bugzilla.redhat.com/1303028): Tiering status and rebalance status stops getting updated
- [#1303269](https://bugzilla.redhat.com/1303269): After GlusterD restart, remove-brick commit happens even though data migration has not completed.
- [#1303501](https://bugzilla.redhat.com/1303501): access-control: spurious error log message on every setxattr call
- [#1303828](https://bugzilla.redhat.com/1303828): [USS]: If .snaps already exists, ls -la lists it even after enabling USS
- [#1303829](https://bugzilla.redhat.com/1303829): [feat] Compound translator
- [#1303895](https://bugzilla.redhat.com/1303895): promotions not happening when space is created on previously full hot tier
- [#1303945](https://bugzilla.redhat.com/1303945): Memory leak in dht
- [#1303995](https://bugzilla.redhat.com/1303995): SMB: SMB crashes with AIO enabled on reads + vers=3.0
- [#1304301](https://bugzilla.redhat.com/1304301): self-heald.t spurious failure
- [#1304348](https://bugzilla.redhat.com/1304348): Allow GlusterFS to build with URCU 0.6
- [#1304686](https://bugzilla.redhat.com/1304686): Start self-heal and display correct heal info after replace brick
- [#1304966](https://bugzilla.redhat.com/1304966): DHT: Take blocking locks while renaming files
- [#1304970](https://bugzilla.redhat.com/1304970): [quota]: Incorrect disk usage shown on a tiered volume
- [#1304988](https://bugzilla.redhat.com/1304988): DHT: Rebalance hang while migrating the files of disperse volume
- [#1305277](https://bugzilla.redhat.com/1305277): [Tier]: Multiple entries of the same file end up on the client after renaming a file which had hardlinks
- [#1305839](https://bugzilla.redhat.com/1305839): Wrong interpretation of disk size in gverify.sh script
- [#1306193](https://bugzilla.redhat.com/1306193): cd to .snaps fails with "transport endpoint not connected" after force start of the volume.
- [#1306199](https://bugzilla.redhat.com/1306199): gluster volume heal info takes extra 2 seconds
- [#1306220](https://bugzilla.redhat.com/1306220): quota: xattr trusted.glusterfs.quota.limit-objects not healed on a root of newly added brick
- [#1306264](https://bugzilla.redhat.com/1306264): glfs_lseek returns incorrect offset for SEEK_SET and SEEK_CUR flags
- [#1306560](https://bugzilla.redhat.com/1306560): Accessing the program list in build_prog_details() should be lock-protected
- [#1306807](https://bugzilla.redhat.com/1306807): use mutex on single core machines
- [#1306852](https://bugzilla.redhat.com/1306852): Tiering threads can starve each other
- [#1306897](https://bugzilla.redhat.com/1306897): Remove split-brain-healing.t from bad tests
- [#1307208](https://bugzilla.redhat.com/1307208): dht: NULL layouts referenced while the I/O is going on tiered volume
- [#1308402](https://bugzilla.redhat.com/1308402): Starting a newly created volume starts the bricks even when server quorum is not met
- [#1308900](https://bugzilla.redhat.com/1308900): build: fix build break
- [#1308961](https://bugzilla.redhat.com/1308961): quarantine folder becomes empty and bitrot status does not list any files which are corrupted
- [#1309238](https://bugzilla.redhat.com/1309238): Issues with refresh-config when the ".export_added" has different values on different nodes
- [#1309342](https://bugzilla.redhat.com/1309342): Wrong permissions set on previous copy of truncated files inside trash directory
- [#1309462](https://bugzilla.redhat.com/1309462): Upgrade from 3.7.6 to 3.7.8 causes massive drop in write performance. Fresh install of 3.7.8 also has low write performance
- [#1309659](https://bugzilla.redhat.com/1309659): [tiering]: Performing a gluster vol reset, turns off 'features.ctr-enabled' on a tiered volume
- [#1309999](https://bugzilla.redhat.com/1309999): Data Tiering:Don't allow a detach-tier commit if detach-tier start has failed to complete
- [#1310080](https://bugzilla.redhat.com/1310080): [RFE]Add --no-encode option to the `glusterfind pre` command
- [#1310171](https://bugzilla.redhat.com/1310171): Incorrect file size on mount if stat is served from the arbiter brick.
- [#1310437](https://bugzilla.redhat.com/1310437): rsyslog can't be completely removed due to dependency in libglusterfs
- [#1310620](https://bugzilla.redhat.com/1310620): gfapi: listxattr is broken for handle ops.
- [#1310677](https://bugzilla.redhat.com/1310677): glusterd crashed when probing a node with firewall enabled on only one node
- [#1310755](https://bugzilla.redhat.com/1310755): glusterd: coverity warning in glusterd-snapshot-utils.c copy_nfs_ganesha_file()
- [#1311124](https://bugzilla.redhat.com/1311124): Implement inode_forget_cbk() and similar fops in gfapi
- [#1311146](https://bugzilla.redhat.com/1311146): glfs_dup() functionality is broken
- [#1311178](https://bugzilla.redhat.com/1311178): Tier: Actual files are not demoted, and demotion keeps being attempted for deleted files
- [#1311874](https://bugzilla.redhat.com/1311874): Peer probe from a reinstalled node should fail
- [#1312036](https://bugzilla.redhat.com/1312036): tests: upstream test infra broken
- [#1312226](https://bugzilla.redhat.com/1312226): Readdirp op_ret is modified by client xlator in case of xdata_rsp presence
- [#1312346](https://bugzilla.redhat.com/1312346): nfs: fix lock variable type
- [#1312354](https://bugzilla.redhat.com/1312354): changelog: fix typecasting of function
- [#1312816](https://bugzilla.redhat.com/1312816): gfid-reset of a directory in a distributed replicate volume doesn't set the gfid on the 2nd through last subvolumes
- [#1312845](https://bugzilla.redhat.com/1312845): Protocol server/client handshake gap
- [#1312897](https://bugzilla.redhat.com/1312897): glusterfs-server %post script is not quiet, prints "success" during installation
- [#1313135](https://bugzilla.redhat.com/1313135): RFE: Need type of gfid in index_readdir
- [#1313206](https://bugzilla.redhat.com/1313206): Encrypted rpc clients do not reconnect sometimes
- [#1313228](https://bugzilla.redhat.com/1313228): promotions and demotions not happening after attach tier due to fix-layout taking a very long time (3 days)
- [#1313293](https://bugzilla.redhat.com/1313293): [HC] glusterfs mount crashed
- [#1313300](https://bugzilla.redhat.com/1313300): quota: reduce latency for testcase ./tests/bugs/quota/bug-1293601.t
- [#1313303](https://bugzilla.redhat.com/1313303): [geo-rep]: Session goes to faulty with Errno 13: Permission denied
- [#1313495](https://bugzilla.redhat.com/1313495): migrate files based on file size
- [#1313628](https://bugzilla.redhat.com/1313628): Brick ports get changed after GlusterD restart
- [#1313775](https://bugzilla.redhat.com/1313775): ec-read-policy.t can report a false-failure
- [#1313901](https://bugzilla.redhat.com/1313901): glusterd: does not start
- [#1314150](https://bugzilla.redhat.com/1314150): Choose self-heal source as local subvolume if possible
- [#1314204](https://bugzilla.redhat.com/1314204): nfs-ganesha setup fails on fedora
- [#1314291](https://bugzilla.redhat.com/1314291): tier: GCC throws an unused-variable warning for conf in the tier_link_cbk function
- [#1314549](https://bugzilla.redhat.com/1314549): remove replace-brick-self-heal.t from bad tests
- [#1314649](https://bugzilla.redhat.com/1314649): disperse: Provide an option to enable/disable eager lock
- [#1315024](https://bugzilla.redhat.com/1315024): glusterfs-libs postun scriptlet fails: /sbin/ldconfig: relative path `1' used to build cache
- [#1315168](https://bugzilla.redhat.com/1315168): Fd based fops should not be logging ENOENT/ESTALE
- [#1315186](https://bugzilla.redhat.com/1315186): setting a lower op-version should throw a failure message
- [#1315465](https://bugzilla.redhat.com/1315465): glusterfs brick process crashed
- [#1315560](https://bugzilla.redhat.com/1315560): ./tests/basic/tier/tier-file-create.t dumping core fairly often on build machines in Linux
- [#1315601](https://bugzilla.redhat.com/1315601): Geo-replication CPU usage is 100%
- [#1315659](https://bugzilla.redhat.com/1315659): [Tier]: Following volume restart, tierd shows failure at status on some nodes
- [#1315666](https://bugzilla.redhat.com/1315666): Data Tiering: tier volume status shows in-progress on all nodes of a cluster even if the node is not part of the volume
- [#1316327](https://bugzilla.redhat.com/1316327): Upgrade from 3.7.6 to 3.7.8 causes massive drop in write performance. Fresh install of 3.7.8 also has low write performance
- [#1316437](https://bugzilla.redhat.com/1316437): snapd doesn't come up automatically after node reboot.
- [#1316462](https://bugzilla.redhat.com/1316462): Fix possible failure in tests/basic/afr/arbiter.t
- [#1316499](https://bugzilla.redhat.com/1316499): volume set on the user.* domain trims all whitespace in the value
- [#1316819](https://bugzilla.redhat.com/1316819): Errors seen in cli.log, while executing the command 'gluster snapshot info --xml'
- [#1316848](https://bugzilla.redhat.com/1316848): Peers go to rejected state after reboot of one node when quota is enabled on a cloned volume.
- [#1317278](https://bugzilla.redhat.com/1317278): GlusterFS 3.8.0 tracker
- [#1317361](https://bugzilla.redhat.com/1317361): Do not succeed mkdir without gfid-req
- [#1317424](https://bugzilla.redhat.com/1317424): nfs-ganesha server does not enter grace period during failover/failback
- [#1317785](https://bugzilla.redhat.com/1317785): Cache swift xattrs
- [#1317902](https://bugzilla.redhat.com/1317902): Different epoch values for each of NFS-Ganesha heads
- [#1317948](https://bugzilla.redhat.com/1317948): inode ref leaks with perf-test.sh
- [#1318107](https://bugzilla.redhat.com/1318107): Typo in the posix_mkdir log message
- [#1318158](https://bugzilla.redhat.com/1318158): Client's App is having issues retrieving files from share 1002976973
- [#1318544](https://bugzilla.redhat.com/1318544): Glusterd crashed during volume status of snapd daemon
- [#1318546](https://bugzilla.redhat.com/1318546): Glusterd crashed just after a peer probe command failed.
- [#1318751](https://bugzilla.redhat.com/1318751): cluster/afr: Fix partial heals in 3-way replication
- [#1318757](https://bugzilla.redhat.com/1318757): trash xlator: trash_unlink_mkdir_cbk() enters an infinite loop which results in a segfault
- [#1319374](https://bugzilla.redhat.com/1319374): smbd crashes while accessing multiple volume shares via same client
- [#1319581](https://bugzilla.redhat.com/1319581): Marker: Lots of dict_get errors in the brick log
- [#1319706](https://bugzilla.redhat.com/1319706): Add a script that converts the gfid-string of a directory into an absolute path name w.r.t. the brick path.
- [#1319717](https://bugzilla.redhat.com/1319717): glusterfind pre test projects_media2 /tmp/123   rh-storage2 - pre failed: Traceback ...
- [#1319992](https://bugzilla.redhat.com/1319992): RFE: Lease support for gluster
- [#1320101](https://bugzilla.redhat.com/1320101): Client log gets flooded by default with fop stats under DEBUG level
- [#1320388](https://bugzilla.redhat.com/1320388): [GSS]: gluster v heal volname info does not work with SSL/TLS enabled
- [#1320458](https://bugzilla.redhat.com/1320458): Peer information is not propagated to all the nodes in the cluster, when the peer is probed with its second interface FQDN/IP
- [#1320489](https://bugzilla.redhat.com/1320489): glfs-mgmt: fix connecting to multiple volfile transports
- [#1320716](https://bugzilla.redhat.com/1320716): RFE: Sort volume quota <volume> list output alphabetically by path
- [#1320818](https://bugzilla.redhat.com/1320818): Over some time, files which were accessible become inaccessible (music files)
- [#1321322](https://bugzilla.redhat.com/1321322): afr: add mtime based split-brain resolution to CLI
- [#1321554](https://bugzilla.redhat.com/1321554): assert failure happens when parallel rm -rf is issued on nfs mounts
- [#1321762](https://bugzilla.redhat.com/1321762): glusterd: response not aligned
- [#1321872](https://bugzilla.redhat.com/1321872): el6 - Installing glusterfs-ganesha-3.7.9-1.el6rhs.x86_64 fails with dependency on /usr/bin/dbus-send
- [#1321955](https://bugzilla.redhat.com/1321955): Self-heal and manual heal not healing some file
- [#1322214](https://bugzilla.redhat.com/1322214): [HC] Add disk in a Hyper-converged environment fails when glusterfs is running in directIO mode
- [#1322237](https://bugzilla.redhat.com/1322237): glusterd pmap scan wastes time scanning irrelevant ports
- [#1322253](https://bugzilla.redhat.com/1322253): gluster volume heal info shows conservative merge entries as in split-brain
- [#1322262](https://bugzilla.redhat.com/1322262): Glusterd crashes when a message is passed through rpc which is not available
- [#1322320](https://bugzilla.redhat.com/1322320): build: git ignore files generated by fdl xlator
- [#1322323](https://bugzilla.redhat.com/1322323): fdl: fix make clean
- [#1322489](https://bugzilla.redhat.com/1322489): marker: account goes bad with rm -rf
- [#1322772](https://bugzilla.redhat.com/1322772): glusterd: glusterd didn't come up after node reboot with error "realpath () failed for brick /run/gluster/snaps/130949baac8843cda443cf8a6441157f/brick3/b3. The underlying file system may be in bad state [No such file or directory]"
- [#1322801](https://bugzilla.redhat.com/1322801): nfs-ganesha installation : no pacemaker package installed for RHEL 6.7
- [#1322805](https://bugzilla.redhat.com/1322805): [scale] Brick process does not start after node reboot
- [#1322825](https://bugzilla.redhat.com/1322825): IO-stats: client profile is overwritten when the client is on the same node as the bricks
- [#1322850](https://bugzilla.redhat.com/1322850): Healing queue rarely empty
- [#1323040](https://bugzilla.redhat.com/1323040): Inconsistent directory structure on dht subvols caused by parent layouts going stale during entry create operations because of fix-layout
- [#1323287](https://bugzilla.redhat.com/1323287): TIER: Attach tier fails
- [#1323360](https://bugzilla.redhat.com/1323360): quota/cli: quota list with path not working when limit is not set
- [#1323486](https://bugzilla.redhat.com/1323486): quota: check inode limits only when new file/dir is created and not with write FOP
- [#1323659](https://bugzilla.redhat.com/1323659): rpc: assign port only if it is unreserved
- [#1324004](https://bugzilla.redhat.com/1324004): arbiter volume write performance is bad.
- [#1324439](https://bugzilla.redhat.com/1324439): SAMBA+TIER: Wrong message displayed. On detach tier success, the message says the tier command failed.
- [#1324509](https://bugzilla.redhat.com/1324509): Continuous nfs_grace_monitor log messages observed in /var/log/messages
- [#1325683](https://bugzilla.redhat.com/1325683): the wrong variable was being checked for gf_strdup
- [#1325822](https://bugzilla.redhat.com/1325822): Too many log messages showing inode ctx is NULL for 00000000-0000-0000-0000-000000000000
- [#1325841](https://bugzilla.redhat.com/1325841): Volume stop is failing when one of the bricks is down due to an underlying filesystem crash
- [#1326085](https://bugzilla.redhat.com/1326085): [rfe]posix-locks: Lock migration
- [#1326308](https://bugzilla.redhat.com/1326308): WORM/Retention Feature
- [#1326410](https://bugzilla.redhat.com/1326410): /var/lib/glusterd/$few-directories not owned by any package, causing it to remain after glusterfs-server is uninstalled
- [#1326627](https://bugzilla.redhat.com/1326627): nfs-ganesha crashes with segfault error while doing refresh config on volume.
- [#1327507](https://bugzilla.redhat.com/1327507): [DHT-Rebalance]: with a few brick processes down, the rebalance process isn't killed even after stopping rebalance
- [#1327553](https://bugzilla.redhat.com/1327553): [geo-rep]: geo status shows $MASTER Nodes always with hostname even if volume is configured with IP
- [#1327976](https://bugzilla.redhat.com/1327976): [RFE] Provide vagrant developer setup for GlusterFS
- [#1328010](https://bugzilla.redhat.com/1328010): snapshot-clone: clone volume doesn't start after node reboot
- [#1328043](https://bugzilla.redhat.com/1328043): [FEAT] Renaming NSR to JBR
- [#1328399](https://bugzilla.redhat.com/1328399): [geo-rep]: schedule_georep.py doesn't touch the mount in every iteration
- [#1328502](https://bugzilla.redhat.com/1328502): Move FOP enumerations and other network protocol bits to XDR generated headers
- [#1328696](https://bugzilla.redhat.com/1328696): quota: fix null-dereference issue
- [#1329129](https://bugzilla.redhat.com/1329129): runner: extract and return actual exit status of child
- [#1329501](https://bugzilla.redhat.com/1329501): self-heal does fsyncs even after setting ensure-durability off
- [#1329503](https://bugzilla.redhat.com/1329503): [tiering]: during detach tier operation, Input/output error is seen with new file writes on NFS mount
- [#1329773](https://bugzilla.redhat.com/1329773): Inode leaks found in data-self-heal
- [#1329870](https://bugzilla.redhat.com/1329870): Lots of [global.glusterfs - usage-type (null) memusage] are seen in statedump
- [#1330052](https://bugzilla.redhat.com/1330052): [RFE] We need more debug info from stack wind and unwind calls
- [#1330225](https://bugzilla.redhat.com/1330225): gluster is not using pthread_equal to compare threads
- [#1330248](https://bugzilla.redhat.com/1330248): glusterd: SSL certificate depth volume option is incorrect
- [#1330346](https://bugzilla.redhat.com/1330346): distaflibs: structure directory tree to follow setuptools namespace packages format
- [#1330353](https://bugzilla.redhat.com/1330353): [Tiering]: promotion of files may not be balanced on distributed hot tier when promoting files with size as that of max.mb
- [#1330476](https://bugzilla.redhat.com/1330476): libgfapi: Setting need_lookup on the wrong list
- [#1330481](https://bugzilla.redhat.com/1330481): glusterd restart is failing if a volume brick is down due to an underlying FS crash.
- [#1330567](https://bugzilla.redhat.com/1330567): SAMBA+TIER : File size is not getting updated when created on windows samba share mount
- [#1330583](https://bugzilla.redhat.com/1330583): glusterfs-libs postun ldconfig: relative path '1' used to build cache
- [#1330616](https://bugzilla.redhat.com/1330616): Minor improvements and code cleanup for libglusterfs
- [#1330974](https://bugzilla.redhat.com/1330974): Swap order of GF_EVENT_SOME_CHILD_DOWN enum to match the release-3.7 branch
- [#1331042](https://bugzilla.redhat.com/1331042): glusterfsd: return actual exit status on mount process
- [#1331253](https://bugzilla.redhat.com/1331253): glusterd: fix max pmap alloc to GF_PORT_MAX
- [#1331289](https://bugzilla.redhat.com/1331289): glusterd memory overcommit
- [#1331658](https://bugzilla.redhat.com/1331658): [geo-rep]: schedule_georep.py doesn't work when invoked using cron
- [#1332020](https://bugzilla.redhat.com/1332020): multiple regression failures for tests/basic/quota-ancestry-building.t
- [#1332021](https://bugzilla.redhat.com/1332021): multiple failures for testcase: tests/basic/inode-quota-enforcing.t
- [#1332022](https://bugzilla.redhat.com/1332022): multiple failures for testcase: tests/bugs/disperse/bug-1304988.t
- [#1332045](https://bugzilla.redhat.com/1332045): multiple failures for testcase: tests/basic/quota.t
- [#1332162](https://bugzilla.redhat.com/1332162): Support mandatory locking in glusterfs
- [#1332370](https://bugzilla.redhat.com/1332370): DHT: Once remove-brick start has failed in between, remove-brick commit should not be allowed
- [#1332396](https://bugzilla.redhat.com/1332396): posix: Set correct d_type for readdirp() calls
- [#1332414](https://bugzilla.redhat.com/1332414): protocol/server: address double frees
- [#1332788](https://bugzilla.redhat.com/1332788): Wrong op-version for mandatory-locks volume set option
- [#1332789](https://bugzilla.redhat.com/1332789): quota: client gets IO error instead of disk quota exceed when the limit is exceeded
- [#1332839](https://bugzilla.redhat.com/1332839): values for Number of Scrubbed files, Number of Unsigned files, Last completed scrub time and Duration of last scrub are shown as zeros in bit rot scrub status
- [#1332845](https://bugzilla.redhat.com/1332845): Disperse volume fails on high load and logs show some assertion failures
- [#1332864](https://bugzilla.redhat.com/1332864): glusterd + bitrot: Creating a clone of a snapshot fails with error "xlator.c:148:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/3.7.9/xlator/features/bitrot.so: cannot open shared object file:
- [#1333243](https://bugzilla.redhat.com/1333243): [AFR]: "volume heal info" command is failing during in-service upgrade to the latest version.
- [#1333244](https://bugzilla.redhat.com/1333244): Fix excessive logging due to NULL dict in dht
- [#1333266](https://bugzilla.redhat.com/1333266): SMB: running I/O on a cifs mount while doing a graph switch causes the cifs mount to hang.
- [#1333711](https://bugzilla.redhat.com/1333711): [scale] Brick process does not start after node reboot
- [#1333803](https://bugzilla.redhat.com/1333803): Detach tier fired before the background fix-layout is complete may result in failure
- [#1333900](https://bugzilla.redhat.com/1333900): /var/lib/glusterd/$few-directories not owned by any package, causing it to remain after glusterfs-server is uninstalled
- [#1334074](https://bugzilla.redhat.com/1334074): No xml output on gluster volume heal info command with --xml
- [#1334268](https://bugzilla.redhat.com/1334268): GlusterFS 3.8 fails to build in the CentOS Community Build System
- [#1334287](https://bugzilla.redhat.com/1334287): Under high read load, sometimes the message "XDR decoding failed" appears in the logs and read fails
- [#1334443](https://bugzilla.redhat.com/1334443): SAMBA-VSS : Permission denied issue while restoring the directory from windows client 1 when files are deleted from windows client 2
- [#1334699](https://bugzilla.redhat.com/1334699): readdir-ahead does not fetch xattrs that md-cache needs in its internal calls
- [#1334836](https://bugzilla.redhat.com/1334836): [features/worm] - when disabled, worm xl should simply pass requested fops to its child xl
- [#1334994](https://bugzilla.redhat.com/1334994): Fix the message ids in Client
- [#1335017](https://bugzilla.redhat.com/1335017): set errno in case of inode_link failures
- [#1335282](https://bugzilla.redhat.com/1335282): Wrong constant used in length based comparison for XATTR_SECURITY_PREFIX
- [#1335283](https://bugzilla.redhat.com/1335283): Self Heal fails on a replica3 volume with 'disk quota exceeded'
- [#1335284](https://bugzilla.redhat.com/1335284): [HC] Add disk in a Hyper-converged environment fails when glusterfs is running in directIO mode
- [#1335285](https://bugzilla.redhat.com/1335285): tar complains: <fileName>: file changed as we read it
- [#1335433](https://bugzilla.redhat.com/1335433): Self heal shows different information for the same volume from each node
- [#1335726](https://bugzilla.redhat.com/1335726): stop all gluster processes should also include the glusterfs mount process
- [#1335730](https://bugzilla.redhat.com/1335730): mount/fuse: Logging improvements
- [#1335822](https://bugzilla.redhat.com/1335822): Revert "features/shard: Make o-direct writes work with sharding: http://review.gluster.org/#/c/13846/"
- [#1335829](https://bugzilla.redhat.com/1335829): Heal info shows split-brain for .shard directory though only one brick was down
- [#1336136](https://bugzilla.redhat.com/1336136): PREFIX is not honoured during build and install
- [#1336152](https://bugzilla.redhat.com/1336152): [Tiering]: Files remain in hot tier even after detach tier completes
- [#1336198](https://bugzilla.redhat.com/1336198): failover is not working with latest builds.
- [#1336268](https://bugzilla.redhat.com/1336268): features/index: clang compile warnings in index.c
- [#1336472](https://bugzilla.redhat.com/1336472): [Tiering]: The message 'Max cycle time reached..exiting migration' incorrectly displayed as an 'error' in the logs
- [#1336704](https://bugzilla.redhat.com/1336704): [geo-rep]: Multiple geo-rep session to the same slave is allowed for different users
- [#1336794](https://bugzilla.redhat.com/1336794): assorted typos and spelling mistakes from Debian lintian
- [#1336798](https://bugzilla.redhat.com/1336798): Unexporting a volume sometimes fails with "Dynamic export addition/deletion failed".
- [#1336801](https://bugzilla.redhat.com/1336801): ganesha-exported volumes don't get synced up on a shutdown node when it comes back up.
- [#1336854](https://bugzilla.redhat.com/1336854): scripts: bash-isms in scripts
- [#1336947](https://bugzilla.redhat.com/1336947): [NFS-Ganesha] : stonith-enabled option not set with new versions of cman,pacemaker,corosync and pcs
- [#1337114](https://bugzilla.redhat.com/1337114): Modified volume options are not synced once glusterd comes up.
- [#1337127](https://bugzilla.redhat.com/1337127): rpc: change client insecure port ceiling from 65535 to 49151
- [#1337130](https://bugzilla.redhat.com/1337130): Revert "glusterd/afr: store afr pending xattrs as a volume option" patch on 3.8 branch
- [#1337387](https://bugzilla.redhat.com/1337387): Add arbiter brick hotplug
- [#1337394](https://bugzilla.redhat.com/1337394): DHT: a few files are not accessible and not listed on the mount, more than one directory has the same gfid, and (sometimes) attributes show ?? in ls output, after renaming directories from multiple clients at the same time
- [#1337596](https://bugzilla.redhat.com/1337596): Mounting a volume over NFS with a subdir followed by a / returns "Invalid argument"
- [#1337638](https://bugzilla.redhat.com/1337638): Leases: Fix lease failures in certain scenarios
- [#1337652](https://bugzilla.redhat.com/1337652): log flooded with "Could not map name=xxxx to a UUID" when configured with long hostnames
- [#1337780](https://bugzilla.redhat.com/1337780): tests/bugs/write-behind/1279730.t fails spuriously
- [#1337795](https://bugzilla.redhat.com/1337795): tests/basic/afr/tarissue.t fails regression
- [#1337822](https://bugzilla.redhat.com/1337822): one of the VMs goes to a paused state when the network goes down and comes back up
- [#1337839](https://bugzilla.redhat.com/1337839): Files present in the .shard folder even after deleting all the vms from the UI
- [#1337870](https://bugzilla.redhat.com/1337870): Some of the VMs go to a paused state when there is concurrent I/O on the VMs
- [#1337908](https://bugzilla.redhat.com/1337908): SAMBA+TIER: Wrong message displayed. On detach tier success, the message says the tier command failed.
- [#1338051](https://bugzilla.redhat.com/1338051): ENOTCONN error during parallel rmdir
- [#1338968](https://bugzilla.redhat.com/1338968): common-ha: ganesha.nfsd not put into NFS-GRACE after fail-back
- [#1339137](https://bugzilla.redhat.com/1339137): fuse: In fuse_first_lookup(), the dict is not un-referenced when create_frame returns a NULL pointer.
- [#1339192](https://bugzilla.redhat.com/1339192): Missing autotools helper config.* files
- [#1339228](https://bugzilla.redhat.com/1339228): gfapi: set mem_acct for the variables created for upcall