<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/nfs/server/src, branch v3.5.2</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>nfs: prevent assertion error with MOUNT over UDP</title>
<updated>2014-07-10T07:15:18+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2014-07-08T06:59:42+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=a4369c456de4304ebdb252bc2783d634a56f8301'/>
<id>a4369c456de4304ebdb252bc2783d634a56f8301</id>
<content type='text'>
The MOUNT service over UDP runs in a separate thread. This thread does
not have the correct *THIS xlator set: *THIS points to the global (base)
xlator structure, but GF_CALLOC() requires it to be the NFS-xlator so
that assertions can be validated correctly.

This is solved by passing the NFS-xlator to the pthread function, and
setting the *THIS pointer explicitly in the new thread.

It seems that on occasion (this needs further investigation) MOUNT over
UDP does not unregister itself. There can also be issues when the kernel
NLM implementation has been registered at portmap/rpcbind, so some
unregister procedures have been added to the cleanup of the test-cases.
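A minimal sketch of the approach, with illustrative names (the actual
MOUNT-over-UDP thread function differs in detail):

  /* sketch: hand the NFS xlator to the new thread and make it *THIS */
  static void *
  mount3udp_thread (void *arg)
  {
          xlator_t *nfsx = arg;

          /* set the thread-local xlator so that GF_CALLOC() and
           * friends validate against the NFS xlator instead of the
           * global (base) one */
          THIS = nfsx;

          /* ... run the MOUNT service over UDP ... */
          return NULL;
  }

  /* caller: pass the NFS xlator instead of NULL */
  pthread_create (&amp;thread, NULL, mount3udp_thread, nfsx);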

Cherry picked from commit ec74ceedaa41047b88d270c00eeb071b73e19664:
&gt; Change-Id: I3be5a420fc800bbcc14198d0b6faf4cf2c7300b1
&gt; BUG: 1116503
&gt; Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/8241
&gt; Reviewed-by: Santosh Pradhan &lt;spradhan@redhat.com&gt;
&gt; Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;

Change-Id: I3be5a420fc800bbcc14198d0b6faf4cf2c7300b1
BUG: 1116997
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8258
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-by: Santosh Pradhan &lt;spradhan@redhat.com&gt;
</content>
</entry>
<entry>
<title>rpcsvc: Validate RPC procedure number before fetch</title>
<updated>2014-07-08T10:36:18+00:00</updated>
<author>
<name>Santosh Kumar Pradhan</name>
<email>spradhan@redhat.com</email>
</author>
<published>2014-07-03T11:41:44+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=3d7b19cd1ecd53f0808b07df7c4ac801fd48f3c3'/>
<id>3d7b19cd1ecd53f0808b07df7c4ac801fd48f3c3</id>
<content type='text'>
While accessing the procedures of a given RPC program in
rpcsvc_get_program_vector_sizer(), boundary conditions were not
checked, which could cause a buffer overflow and subsequently a SEGV.

Make sure the rpcsvc_actor_t arrays contain numactors actors.

FIX:
Validate the RPC procedure number before fetching the actor.
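A minimal sketch of such a check, with illustrative variable names:

  /* reject out-of-range procedure numbers before indexing the
   * actor array (which holds exactly program-&gt;numactors entries) */
  if (procnum &gt;= program-&gt;numactors)
          return NULL;
  actor = &amp;program-&gt;actors[procnum];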

Upstream main review: http://review.gluster.org/7726

BUG: 1096020

Change-Id: Iaf207ee976cb56fa9a554ec82c9eab36d3b289ed
Signed-off-by: Santosh Kumar Pradhan &lt;spradhan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8228
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
</content>
</entry>
<entry>
<title>gNFS: Support wildcard in RPC auth allow/reject</title>
<updated>2014-07-02T11:28:21+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2014-07-02T09:11:43+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=f25c549c959e06e70eefc5744dc5f93668411de2'/>
<id>f25c549c959e06e70eefc5744dc5f93668411de2</id>
<content type='text'>
RFE: Support wildcards in "nfs.rpc-auth-allow" and
"nfs.rpc-auth-reject", e.g.
  *.redhat.com
  192.168.1[1-5].*
  192.168.1[1-5].*, *.redhat.com, 192.168.21.9

  Along with wildcards, subnetwork or IP-range entries are supported,
  e.g.
  192.168.10.23/24

The option is validated against the following categories:
1) Anonymous, i.e. "*"
2) Wildcard pattern, i.e. a string containing any of '*', '?', '['
3) IPv4 address
4) IPv6 address
5) FQDN
6) Subnetwork or IPv4 range

Currently this does not support IPv6 subnetworks.
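For example, the options can be set from the gluster CLI like this
(hypothetical volume name "testvol"; the patterns are the ones quoted
above):

  gluster volume set testvol nfs.rpc-auth-allow \
          "192.168.1[1-5].*, *.redhat.com, 192.168.10.23/24"
  gluster volume set testvol nfs.rpc-auth-reject "192.168.21.9"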

Cherry-picked from 00e247ee44067f2b3e7ca5f7e6dc2f7934c97181:
&gt; Change-Id: Iac8caf5e490c8174d61111dad47fd547d4f67bf4
&gt; BUG: 1086097
&gt; Signed-off-by: Santosh Kumar Pradhan &lt;spradhan@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/7485
&gt; Reviewed-by: Poornima G &lt;pgurusid@redhat.com&gt;
&gt; Reviewed-by: Harshavardhana &lt;harsha@harshavardhana.net&gt;
&gt; Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;

Change-Id: I18ef0a914cd403c1f9e66d1b03ecd29465cbce95
BUG: 1115369
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8223
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Santosh Pradhan &lt;spradhan@redhat.com&gt;
</content>
</entry>
<entry>
<title>gNFS: Fix multi-homed m/c issue in NFS subdir auth</title>
<updated>2014-07-02T07:53:18+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2014-06-29T14:30:30+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=cacc1311626aa8b2dfe9f937cf1b14bb534a8937'/>
<id>cacc1311626aa8b2dfe9f937cf1b14bb534a8937</id>
<content type='text'>
NFS subdir authentication does not correctly handle multi-homed
(a host with multiple NICs, and hence multiple IP addresses) or
multi-protocol (IPv4 and IPv6) network addresses.

When a user/admin sets a HOSTNAME in the gluster CLI for NFS subdir
auth, the mnt3_verify_auth() routine does not iterate over all the
resolved network addresses returned by the getaddrinfo() API. Instead,
it only tests the first one returned.

1. Iterate over all the network addresses (a linked list) returned by
   getaddrinfo(), as sketched below.
2. Move the network mask calculation to mnt3_export_fill_hostspec()
   instead of doing it in mnt3_verify_auth(), i.e. calculating it for
   each mount request; the mask does not change between MOUNT requests.
3. Integrate the "subnet support code rpc-auth.addr.&lt;volname&gt;.allow"
   and the "NFS subdir auth code" to remove code duplication.

Cherry-picked from commit d3f0de90d0c5166e63f5764d2f21703fd29ce976:
&gt; Change-Id: I26b0def52c22cda35ca11766afca3df5fd4360bf
&gt; BUG: 1102293
&gt; Signed-off-by: Santosh Kumar Pradhan &lt;spradhan@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/8048
&gt; Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
&gt; Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;

Change-Id: Ie92a8ac602bec2cd77268acb7b23ad8ba3c52f5f
BUG: 1112980
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8198
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Santosh Pradhan &lt;spradhan@redhat.com&gt;
</content>
</entry>
<entry>
<title>gNFS: Make NFS DRC off by default</title>
<updated>2014-06-10T09:37:13+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2014-06-09T07:57:37+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=de19f3952b9e9a22db9b4af55e74b557aa71bae9'/>
<id>de19f3952b9e9a22db9b4af55e74b557aa71bae9</id>
<content type='text'>
DRC in NFS causes memory bloat, and there are known memory corruptions.
It would be good to disable DRC by default until the feature is stable.
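Administrators who still want DRC can re-enable it per volume, e.g.
(hypothetical volume name):

  gluster volume set testvol nfs.drc on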

Cherry picked from 4215d071cec4fc8a62ca4fd6212d83f931838829:
&gt; Change-Id: I93db6ef5298672c56fb117370bb582a5e5550b17
&gt; BUG: 1105524
&gt; Original-patch-by: Santosh Kumar Pradhan &lt;spradhan@redhat.com&gt;
&gt; Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/8004
&gt; Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
&gt; Reviewed-by: Santosh Pradhan &lt;spradhan@redhat.com&gt;
&gt; Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;

Change-Id: I93db6ef5298672c56fb117370bb582a5e5550b17
BUG: 1105524
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8013
Reviewed-by: Santosh Pradhan &lt;spradhan@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>rpc: warn and truncate grouplist if RPC/AUTH can not hold everything</title>
<updated>2014-05-22T13:02:21+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2014-05-12T01:51:15+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=57ec16e7f6d08b9a1c07f8ece3db630b08557372'/>
<id>57ec16e7f6d08b9a1c07f8ece3db630b08557372</id>
<content type='text'>
The GlusterFS protocol currently uses AUTH_GLUSTERFS_V2 in the RPC/AUTH
header. This header contains the uid, gid and auxiliary groups of the
user/process that accesses the Gluster Volume.

The AUTH_GLUSTERFS_V2 structure allows up to 65535 auxiliary groups to
be passed on. Unfortunately, the RPC/AUTH header is limited to 400 bytes
by the RPC specification: http://tools.ietf.org/html/rfc5531#section-8.2

In order to not cause complete failures on the client-side when trying
to encode an AUTH_GLUSTERFS_V2 structure that would exceed 400 bytes,
we can calculate the expected size of the other elements:

    1 | pid
    1 | uid
    1 | gid
    1 | groups_len
   XX | groups_val (GF_MAX_AUX_GROUPS=65535)
    1 | lk_owner_len
   YY | lk_owner_val (GF_MAX_LOCK_OWNER_LEN=1024)
  ----+-------------------------------------------
    5 | total xdr-units

  One XDR-unit is defined as BYTES_PER_XDR_UNIT = 4 bytes.
  MAX_AUTH_BYTES = 400 is the maximum, i.e. 100 xdr-units.
  XX + YY can be up to 95 to fill the 100 xdr-units.

  Note that the on-wire protocol has tighter requirements than the
  internal structures. It is possible for xlators to use more groups and
  a bigger lk_owner than can be sent by a GlusterFS-client.

This change prevents overflows when allocating the RPC/AUTH header. Two
new macros are introduced to calculate the number of groups that fit in
the RPC/AUTH header when taking the size of the lk_owner into account.
In case the list of groups exceeds the maximum possible, only the first
groups are passed over the RPC/GlusterFS protocol to the bricks.
A warning is added to the logs, so that system administrators are
informed.

Reducing the number of groups is not a new invention. The RPC/AUTH
header (AUTH_SYS or AUTH_UNIX) that NFS uses has a limit of 16 groups.
Most, if not all, NFS-clients will reduce any bigger number of groups to
16. (nfs.server-aux-gids can be used to work around the limit of 16
groups, but the Gluster NFS-server will be limited to a maximum of 93
groups, or fewer in case the lk_owner structure contains more items.)
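A sketch of the kind of macro introduced here (illustrative name and
shape; the real macros in the patch may differ). With an lk_owner of
8 bytes (2 xdr-units) it yields 100 - 5 - 2 = 93 groups, the limit
quoted above:

  /* number of groups that still fit in the RPC/AUTH header, given
   * the lk_owner length in bytes:
   *   100 xdr-units total - 5 fixed units - units used by lk_owner */
  #define MAX_AUTH_GROUPS(lk_len) \
          ((MAX_AUTH_BYTES / BYTES_PER_XDR_UNIT) - 5 - (((lk_len) + 3) / 4))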

Cherry picked from commit 8235de189845986a535d676b1fd2c894b9c02e52:
&gt; BUG: 1053579
&gt; Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/7202
&gt; Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; Reviewed-by: Harshavardhana &lt;harsha@harshavardhana.net&gt;
&gt; Reviewed-by: Santosh Pradhan &lt;spradhan@redhat.com&gt;
&gt; Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;

Change-Id: I8410e59d0fd246d601b54b961d3ae9cb5a858c10
BUG: 1096425
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7829
Reviewed-by: Lalatendu Mohanty &lt;lmohanty@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>gNFS: Set default outstanding RPC limit to 16</title>
<updated>2014-03-07T20:11:28+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2014-03-06T13:13:57+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0e8e79350947b9dedd02b1cc2339b8dc42ca599d'/>
<id>0e8e79350947b9dedd02b1cc2339b8dc42ca599d</id>
<content type='text'>
Backport of http://review.gluster.org/#/c/6696/

With the limit at 64, the NFS server hangs under a large I/O load
(~ 64 threads writing to the NFS server). Test results from Ben England
(performance expert) suggest setting it to 16 instead of 64.
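The limit remains tunable per volume via the nfs.outstanding-rpc-limit
option, e.g. (hypothetical volume name):

  gluster volume set testvol nfs.outstanding-rpc-limit 16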

Change-Id: Iaa9dda512904d2e359d8122a05e5bf65f99a7e78
BUG: 1073441
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7200
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Santosh Pradhan &lt;spradhan@redhat.com&gt;
</content>
</entry>
<entry>
<title>rpc,glusterd: Use rpc_clnt notifyfn to cleanup mydata</title>
<updated>2013-12-23T14:58:18+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2013-12-23T08:37:57+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=8c4e79c446fdfea00c1589a625ba1f1a63fdecc5'/>
<id>8c4e79c446fdfea00c1589a625ba1f1a63fdecc5</id>
<content type='text'>
Backport of http://review.gluster.org/5512

rpc:
- On an RPC_TRANSPORT_CLEANUP event, rpc_clnt_notify calls the
  registered notifyfn with a RPC_CLNT_DESTROY event. The notifyfn should
  properly clean up the saved mydata on this event.
- Break the reconnect chain when an rpc client is disabled. This
  prevents new disconnect events which can lead to crashes.

glusterd:
- Added support for RPC_CLNT_DESTROY in glusterd_brick_rpc_notify.
- Use a common glusterd_rpc_clnt_unref() function throughout glusterd in
  place of rpc_clnt_unref(). This function correctly gives up the
  big-lock before performing the unref.
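A minimal sketch of a notify callback honouring the new event
(illustrative function name; rpc_clnt_event_t and GF_FREE() are the
existing glusterfs rpc/memory APIs):

  int
  my_rpc_notify (struct rpc_clnt *rpc, void *mydata,
                 rpc_clnt_event_t event, void *data)
  {
          switch (event) {
          case RPC_CLNT_DESTROY:
                  /* the last reference is gone and no further
                   * callbacks will be delivered: free mydata here */
                  GF_FREE (mydata);
                  break;
          default:
                  /* CONNECT/DISCONNECT etc. handled as before */
                  break;
          }
          return 0;
  }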

Change-Id: I93230441c5089039643fc9f5632477ef1b695348
BUG: 962619
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6566
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>gNFS: Client cache invalidation with bad fsid</title>
<updated>2013-12-17T18:26:10+00:00</updated>
<author>
<name>Santosh Kumar Pradhan</name>
<email>spradhan@redhat.com</email>
</author>
<published>2013-12-17T12:02:14+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=54201dd0495e52e0722898ab7fdad007d28d1676'/>
<id>54201dd0495e52e0722898ab7fdad007d28d1676</id>
<content type='text'>
1. Problem:
A couple of issues are seen when NFS-ACL is turned ON, i.e.
i) NFS directory access is too slow, impacting customer workflows
   with ACL;
ii) dbench fails with 100 directories.

2. Root cause: Frequent cache invalidation on the client side when ACL
is turned ON with NFS, because the NFS server getacl() code returns the
wrong fsid to the client.

3. This attr-cache invalidation triggers frequent LOOKUP ops for each
file instead of relying on the readdir or readdirp data. As a result,
performance suffers.

4. In the case of the dbench workload, the problem is more severe, e.g.:

Client side rpcdebug output:
===========================

Dec 16 10:16:53 santosh-3 kernel: NFS:
         nfs_update_inode(0:1b/12061953567282551806 ct=2 info=0x7e7f)
Dec 16 10:16:53 santosh-3 kernel: NFS:
         nfs_fhget(0:1b/12061953567282551806 ct=2)
Dec 16 10:16:53 santosh-3 kernel: &lt;-- nfs_xdev_get_sb() = -116 [splat]
Dec 16 10:16:53 santosh-3 kernel: nfs_do_submount: done
Dec 16 10:16:53 santosh-3 kernel: &lt;-- nfs_do_submount() =
ffffffffffffff8c
Dec 16 10:16:53 santosh-3 kernel: &lt;-- nfs_follow_mountpoint() =
ffffffffffffff8c
Dec 16 10:16:53 santosh-3 kernel: NFS: dentry_delete(clients/client77,
20008)

As per Jeff Layton, this occurs when the client detects that the fsid
on a filehandle is different from its parent's. At that point, it tries
to do a new submount of the new filesystem onto the correct point. That
means the client got a superblock reference for the new fs and is now
looking to set up the root of the mount. It calls nfs_get_root to do
that, which basically takes the superblock and a filehandle and returns
a dentry. The problem here is that the dentry-&gt;d_inode you get back
looks wrong: it is not a directory as expected -- it is something else.
So the client gives up and tosses back an ESTALE.

This clearly shows that the getacl() code, while doing the stat() call
to get the attrs, forgets to populate the deviceid/fsid before going
ahead with the getxattr().

FIX:
1. Fill the deviceid in the iatt.
2. Do a bit more cleanup of the confusing parts of the code.
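A minimal sketch of fix 1, with illustrative names (struct iatt carries
the device id in its ia_dev field, from which the fsid in the reply is
derived):

  /* populate the device id in the iatt used to build the getacl()
   * reply, just like the other NFS replies already do */
  static void
  acl3_fill_deviceid (struct iatt *buf, uint64_t deviceid)
  {
          if (buf)
                  buf-&gt;ia_dev = deviceid;
  }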

NB: Many many thanks to Niels de Vos and Jeff Layton for their
help to debug the issue.

Change-Id: I44d8d2fa3ec7fb33a67dfdd4bbe2c45cdf67db8c
BUG: 1043737
Signed-off-by: Santosh Kumar Pradhan &lt;spradhan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6526
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>gNFS: Inconsistent behaviour of setfacl/getfacl</title>
<updated>2013-12-05T18:36:11+00:00</updated>
<author>
<name>Santosh Kumar Pradhan</name>
<email>spradhan@redhat.com</email>
</author>
<published>2013-12-04T02:55:07+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=3c68dc35611f75a7d401f9b61d3b40cd6cc90968'/>
<id>3c68dc35611f75a7d401f9b61d3b40cd6cc90968</id>
<content type='text'>
The permissions returned by NFS ACL are wrong and are rejected by the
NFS client with "Invalid argument". Refactor the NFS ACL code to return
the proper permissions, matching the requested permissions.
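The symptom can be checked from an NFS client mount (hypothetical
mount point and user):

  setfacl -m u:nfsnobody:rw /mnt/nfs/file   # used to fail with "Invalid argument"
  getfacl /mnt/nfs/file                     # now reports what was set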

Upstream master review: http://review.gluster.org/6368

Change-Id: Ieb079b5da98b061291b44655e18a1dee92a8e463
BUG: 1035218
Signed-off-by: Santosh Kumar Pradhan &lt;spradhan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6418
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
</feed>
