<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/rpc/rpc-lib, branch v3.5.1beta</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>rpc: warn and truncate grouplist if RPC/AUTH can not hold everything</title>
<updated>2014-05-22T13:02:21+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2014-05-12T01:51:15+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=57ec16e7f6d08b9a1c07f8ece3db630b08557372'/>
<id>57ec16e7f6d08b9a1c07f8ece3db630b08557372</id>
<content type='text'>
The GlusterFS protocol currently uses AUTH_GLUSTERFS_V2 in the RPC/AUTH
header. This header contains the uid, gid and auxiliary groups of the
user/process that accesses the Gluster Volume.

The AUTH_GLUSTERFS_V2 structure allows up to 65535 auxiliary groups to
be passed on. Unfortunately, the RPC/AUTH header is limited to 400 bytes
by the RPC specification: http://tools.ietf.org/html/rfc5531#section-8.2

To avoid complete failures on the client-side when encoding an
AUTH_GLUSTERFS_V2 structure that would exceed 400 bytes, we can
calculate the expected size of the other elements:

    1 | pid
    1 | uid
    1 | gid
    1 | groups_len
   XX | groups_val (GF_MAX_AUX_GROUPS=65535)
    1 | lk_owner_len
   YY | lk_owner_val (GF_MAX_LOCK_OWNER_LEN=1024)
  ----+-------------------------------------------
    5 | total fixed xdr-units

  One XDR-unit is defined as BYTES_PER_XDR_UNIT = 4 bytes.
  MAX_AUTH_BYTES = 400 is the maximum, which is 100 xdr-units.
  XX + YY can together use up to 95 xdr-units to fill the 100.

  Note that the on-wire protocol has tighter requirements than the
  internal structures. It is possible for xlators to use more groups and
  a bigger lk_owner than can be sent by a GlusterFS-client.

This change prevents overflows when allocating the RPC/AUTH header. Two
new macros are introduced to calculate the number of groups that fit in
the RPC/AUTH header, taking the size of the lk_owner into account. If
the list of groups exceeds the maximum that fits, only the first groups
are passed over the RPC/GlusterFS protocol to the bricks. A warning is
logged so that system administrators are informed.
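
As a rough illustration of the calculation (the macro names here are
hypothetical, not necessarily the ones introduced by this patch):

  /* One XDR-unit is BYTES_PER_XDR_UNIT = 4 bytes; MAX_AUTH_BYTES (400)
   * is 100 xdr-units, of which 5 are taken by pid, uid, gid,
   * groups_len and lk_owner_len. */
  #define SKETCH_AUTH_UNITS   (400 / 4)   /* 100 xdr-units */
  #define SKETCH_FIXED_UNITS  5           /* pid .. lk_owner_len */

  /* groups that still fit, given the lk_owner length in bytes */
  #define SKETCH_MAX_GROUPS(lk_len) \
          (SKETCH_AUTH_UNITS - SKETCH_FIXED_UNITS - (((lk_len) + 3) / 4))

With an 8-byte lk_owner this yields 100 - 5 - 2 = 93 groups, matching
the Gluster NFS-server limit mentioned below.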

Reducing the number of groups is not a new invention. The RPC/AUTH
header (AUTH_SYS or AUTH_UNIX) that NFS uses has a limit of 16 groups.
Most, if not all, NFS-clients will reduce any bigger number of groups to
16. (nfs.server-aux-gids can be used to work around the limit of 16
groups, but the Gluster NFS-server will be limited to a maximum of 93
groups, or fewer if the lk_owner structure is larger.)

Cherry picked from commit 8235de189845986a535d676b1fd2c894b9c02e52:
&gt; BUG: 1053579
&gt; Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/7202
&gt; Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; Reviewed-by: Harshavardhana &lt;harsha@harshavardhana.net&gt;
&gt; Reviewed-by: Santosh Pradhan &lt;spradhan@redhat.com&gt;
&gt; Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;

Change-Id: I8410e59d0fd246d601b54b961d3ae9cb5a858c10
BUG: 1096425
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7829
Reviewed-by: Lalatendu Mohanty &lt;lmohanty@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>libgfapi: Added support to fetch volume info from glusterd and store in glfs object.</title>
<updated>2014-05-12T15:14:52+00:00</updated>
<author>
<name>Soumya Koduri</name>
<email>skoduri@redhat.com</email>
</author>
<published>2014-05-12T10:16:57+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0c87b67ba9659a2d029d8136835331301b7b6ceb'/>
<id>0c87b67ba9659a2d029d8136835331301b7b6ceb</id>
<content type='text'>
Defined new APIs in the libgfapi module that, given a glfs object,
 * send a handshake RPC call to the glusterd process to fetch the UUID
   of the volume,
 * store it in the glusterfs_context linked to the glfs object,
 * parse the UUID from its canonical string format into a 16-byte array
   before handing it to libgfapi users.

Defined an RPC call in glusterd which other processes can use to query
volume-related info via 'clnt_handshake_procs'.

Note: currently this RPC call to the glusterd process is used only to
fetch the UUID, but it can be extended to fetch other volume-related
structures as well.

In addition to the above, defined a new variable to keep track of such
handshake RPCs still in progress, to make sure all the corresponding RPC
callbacks have been processed before libgfapi returns the initialized
glfs object.

Also bump up the GFAPI current version number, since a new API
"glfs_get_volume_id" is defined and exposed by libgfapi as part of these
changes.

Cherry picked from commit 5adb10b9ac1c634334f29732e062b12d747ae8c5:
&gt; Change-Id: I303f76d7177d32d25bdb301b1dbcf5cd73f42807
&gt; BUG: 1095775
&gt; Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/7218
&gt; Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
&gt; Reviewed-by: Harshavardhana &lt;harsha@harshavardhana.net&gt;
&gt; Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;

This change differs a little from the patch in the master branch:
- libgfapi in 3.5 does not have glfs_get_volfile(), so some merge
  conflicts had to be resolved,
- libgfapi in 3.5 is not versioned; the configure.ac changes related to
  the versioning have been skipped,
- in the master branch only the XDR .x files are available; release-3.5
  requires a manual re-generation of the related .h and .c files.

Change-Id: I52c32d0e69a52a7f4285f74164bca6fd83c4f3b3
BUG: 1095775
Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7741
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Tested-by: Niels de Vos &lt;ndevos@redhat.com&gt;
</content>
</entry>
<entry>
<title>rpcsvc: Ignore INODELK/ENTRYLK/LK for throttling</title>
<updated>2014-05-08T12:58:16+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2014-04-23T08:35:10+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=45a0322066513259e61c7a4b1b1ed1a0bd3a0827'/>
<id>45a0322066513259e61c7a4b1b1ed1a0bd3a0827</id>
<content type='text'>
Problem:
When iozone is in progress, the number of blocking inodelks sometimes
becomes greater than the threshold number of RPC requests allowed for
that client (RPCSVC_DEFAULT_OUTSTANDING_RPC_LIMIT). Subsequent requests
from that client are not read until all the outstanding requests are
processed and replied to. But because no more requests are read from
that client, unlocks of the already granted locks never arrive, so the
number of outstanding requests never comes down. This leads to a
ping-timeout on the client.

Fix:
Do not count INODELK/ENTRYLK/LK requests towards throttling.
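
A sketch of the idea (close to, but not necessarily identical to, the
actual change):

  /* Locking requests are excluded from the outstanding-RPC accounting,
   * so blocked locks can no longer exhaust the per-client limit. */
  static int
  rpcsvc_can_outstanding_req_be_ignored (rpcsvc_request_t *req)
  {
          if ((req-&gt;prognum == GLUSTER_FOP_PROGRAM) &amp;&amp;
              (req-&gt;procnum == GFS3_OP_INODELK ||
               req-&gt;procnum == GFS3_OP_ENTRYLK ||
               req-&gt;procnum == GFS3_OP_LK))
                  return 1;

          return 0;
  }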

BUG: 1089470
Change-Id: I9cc2c259d2462159cea913d95f98a565acb8e0c0
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7570
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
</content>
</entry>
<entry>
<title>gNFS: Set default outstanding RPC limit to 16</title>
<updated>2014-03-07T20:11:28+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2014-03-06T13:13:57+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0e8e79350947b9dedd02b1cc2339b8dc42ca599d'/>
<id>0e8e79350947b9dedd02b1cc2339b8dc42ca599d</id>
<content type='text'>
Backport of http://review.gluster.org/#/c/6696/

With 64, the NFS server hangs under a large I/O load (~64 threads
writing to the NFS server). Test results from Ben England (performance
expert) suggest setting it to 16 instead of 64.

Change-Id: Iaa9dda512904d2e359d8122a05e5bf65f99a7e78
BUG: 1073441
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7200
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Santosh Pradhan &lt;spradhan@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/geo-rep: more glusterd and cli fixes for geo-rep.</title>
<updated>2014-01-27T17:44:33+00:00</updated>
<author>
<name>Ajeet Jha</name>
<email>ajha@redhat.com</email>
</author>
<published>2013-12-02T07:25:18+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=f33d09f23b089bd07437eb714f8dffa43460d6b5'/>
<id>f33d09f23b089bd07437eb714f8dffa43460d6b5</id>
<content type='text'>
    -&gt; Handle option validation in the reset case.
    -&gt; Create a valid conf path when glusterd restarts.
    -&gt; Read the gsyncd worker thread status and display it.
    -&gt; Display status-detail per worker.
    -&gt; Fetch checkpoint info in geo-rep status.
    -&gt; Validate the use-tarssh value.

misc: miscellaneous geo-rep fixes for cluster, logrotate, etc.:
    -&gt; cluster/dht: fix 'stime' getxattr getting overwritten.
    -&gt; cluster/afr: return the max of the 'stime' values in the subvol.
    -&gt; geo-rep-logrotate: send SIGHUP to the geo-rep auxiliary process.
    -&gt; cluster/dht: fix convoluted logic while aggregating.
    -&gt; cluster/*: fix 'stime' min/max fetch logic.

Change-Id: I811acea0bbd6194797a3e55d89295d1ea021ac85
BUG: 1036552
Signed-off-by: Ajeet Jha &lt;ajha@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6405
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@gmail.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6810
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>rpc,glusterd: Use rpc_clnt notifyfn to cleanup mydata</title>
<updated>2013-12-23T14:58:18+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2013-12-23T08:37:57+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=8c4e79c446fdfea00c1589a625ba1f1a63fdecc5'/>
<id>8c4e79c446fdfea00c1589a625ba1f1a63fdecc5</id>
<content type='text'>
Backport of http://review.gluster.org/5512

rpc:
- On a RPC_TRANSPORT_CLEANUP event, rpc_clnt_notify calls the registered
  notifyfn with a RPC_CLNT_DESTROY event. The notifyfn should properly
  clean up the saved mydata on this event (see the sketch below).
- Break the reconnect chain when an rpc client is disabled. This
  prevents new disconnect events which can lead to crashes.

glusterd:
- Added support for RPC_CLNT_DESTROY in glusterd_brick_rpc_notify.
- Use a common glusterd_rpc_clnt_unref() function throughout glusterd in
  place of rpc_clnt_unref(). This function correctly gives up the
  big-lock before performing the unref.
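
A sketch of such a notifyfn (simplified; glusterd's actual callback does
more than this):

  int
  my_rpc_notify (struct rpc_clnt *rpc, void *mydata,
                 rpc_clnt_event_t event, void *data)
  {
          switch (event) {
          case RPC_CLNT_DESTROY:
                  /* last event for this client; release the saved mydata */
                  GF_FREE (mydata);
                  break;
          default:
                  break;
          }

          return 0;
  }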

Change-Id: I93230441c5089039643fc9f5632477ef1b695348
BUG: 962619
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6566
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>features/quota: Improvements to quota</title>
<updated>2013-11-26T18:24:02+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2013-09-16T12:16:50+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=ab3ab1978a4768e9eed8e23b47e72b25046e607a'/>
<id>ab3ab1978a4768e9eed8e23b47e72b25046e607a</id>
<content type='text'>
* Quota enforcement is done in two stages:

  Soft and hard quota: upon reaching the soft quota limit on a
  directory, an alert is logged in the quota daemon log
  (i.e. DEFAULT_LOG_DIR/quotad.log); no more writes are allowed after
  the hard quota limit is reached. After the soft-limit is crossed, the
  daemon alerts the user/admin repeatedly, every 'alert-time', which is
  configurable.

* Quota enforcer is moved to the server-side.

  It takes care of enforcing quota. Since the enforcer doesn't have the
  cluster view, it relies on another service called quota-aggregator.
  The aggregator, on query, can return the size of a directory based on
  the cluster view.

  The enforcer is always loaded in the server graph and is bypassed if
  the feature is not enabled.

  Options specific to enforcer:

  server-quota - Specifies whether the feature is on/off. It is used
  to bypass quota enforcement when turned off.

  deem-statfs - If set to on, quota limits are taken into consideration
  while estimating the fs size (df command). The algorithm followed is
  (a rough sketch follows the list):
  i.   Adjust statvfs based on the limit configured on root.
  ii.  If a limit is set on the inode passed, use the size/limits on
       that inode to populate statvfs. Otherwise, use the size/limits
       configured on root.
  iii. Upon statvfs, update the ctx-&gt;size on the inode.
  iv.  Don't let DHT aggregate; instead take the maximum of the usages
       from the subvols of the DHT, since each of them contains the
       complete information.
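
  A rough sketch of the statvfs adjustment (hypothetical helper, not
  the actual xlator code):

    #include &lt;stdint.h&gt;
    #include &lt;sys/statvfs.h&gt;

    /* shrink the reported fs size to the configured quota limit */
    static void
    quota_adjust_statvfs (struct statvfs *buf, uint64_t limit,
                          uint64_t used)
    {
            uint64_t blocks = limit / buf-&gt;f_bsize;
            uint64_t avail  = (limit &gt; used) ?
                              (limit - used) / buf-&gt;f_bsize : 0;

            if (blocks &lt; buf-&gt;f_blocks) {
                    buf-&gt;f_blocks = blocks;
                    buf-&gt;f_bfree  = avail;
                    buf-&gt;f_bavail = avail;
            }
    }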

  The enforcer also makes use of the gfid-to-path conversion
  functionality to work correctly when a client like nfs predominantly
  relies on nameless lookups.

* Quota Aggregator acts as a thin client to provide the cluster view.

  It is a lightweight *gluster client* process with no mount point,
  started upon enabling quota or restarting the volume. A single such
  process runs on each brick and can answer queries about all volumes
  in the cluster. Its volfile is stored in
  GLUSTERD_DEFAULT_WORKING_DIR/quotad/quotad.vol.

Credits:
Raghavendra Bhat        &lt;rabhat@redhat.com&gt;
Varun Shastry           &lt;vshastry@redhat.com&gt;
Shishir Gowda           &lt;sgowda@redhat.com&gt;
Kruthika Dhananjay      &lt;kdhananj@redhat.com&gt;
Brian Foster            &lt;bfoster@redhat.com&gt;
Krishnan Parthasarathi  &lt;kparthas@redhat.com&gt;

Change-Id: Id1cb25b414951da34c665a55f77385d482e0f9de
BUG: 969461
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5952
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
</entry>
<entry>
<title>gNFS: More clean up for Gluster NFS</title>
<updated>2013-11-25T13:40:57+00:00</updated>
<author>
<name>Santosh Kumar Pradhan</name>
<email>spradhan@redhat.com</email>
</author>
<published>2013-11-21T11:43:58+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=b87d96b97d4a0cdc0883bec8ea8b4730b82fb3ba'/>
<id>b87d96b97d4a0cdc0883bec8ea8b4730b82fb3ba</id>
<content type='text'>
1) Fix the typo in the NFS default ACL.
   The typo was introduced as part of the fix for BZ 1009210, i.e.
   http://review.gluster.org/5980: the user ACL xattr structure was
   passed to the default ACL xattr.

2) Clean up the NFS code to avoid an unnecessary SEGV in
   rpcsvc_drc_reconfigure(), which was not validating svc-&gt;drc. Add a
   routine rpcsvc_drc_deinit() to handle the clean-up of DRC-specific
   data structures. For init(), use rpcsvc_drc_init(). (A sketch of the
   defensive check follows below.)

3) nfs_init_state() was returning the wrong value even when the
   registration with the portmapper failed, causing the NFS server
   process to hang around. As a result it used to get a SEGV during
   rpcsvc_drc_reconfigure().

4) Clean up memfactor usage across nfs.c and nfs3.c.
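
The defensive check from (2), sketched with an illustrative, simplified
body (the real function also performs the actual reconfiguration):

  int
  rpcsvc_drc_reconfigure (rpcsvc_t *svc, dict_t *options)
  {
          /* DRC may never have been set up; initialize instead of
           * dereferencing a NULL svc-&gt;drc */
          if (!svc || !svc-&gt;drc)
                  return rpcsvc_drc_init (svc, options);

          /* ... reconfigure the existing DRC ... */
          return 0;
  }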

Change-Id: I5cea26cb68dd8a822ec0ae104952f67fe63fa703
BUG: 1009210
Signed-off-by: Santosh Kumar Pradhan &lt;spradhan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6329
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>gNFS: RFE for NFS connection behavior</title>
<updated>2013-11-15T00:07:02+00:00</updated>
<author>
<name>Santosh Kumar Pradhan</name>
<email>spradhan@redhat.com</email>
</author>
<published>2013-10-28T07:16:37+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e479660d9dd8bf7017c7dc78ccfa6edd9c51ec7a'/>
<id>e479660d9dd8bf7017c7dc78ccfa6edd9c51ec7a</id>
<content type='text'>
Implement reconfigure() for the NFS xlator so that volume set/reset
won't restart the NFS server process. A few options cannot be
reconfigured dynamically, e.g. nfs.mem-factor and nfs.port, which need
NFS to be restarted.
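
A sketch of the xlator entry point involved (illustrative only; the
real implementation re-reads each dynamically tunable option):

  int
  reconfigure (xlator_t *this, dict_t *options)
  {
          /* re-read options that may change at runtime; options such
           * as nfs.mem-factor and nfs.port still require a restart
           * and are not handled here */
          return 0;
  }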

Change-Id: Ic586fd55b7933c0a3175708d8c41ed0475d74a1c
BUG: 1027409
Signed-off-by: Santosh Kumar Pradhan &lt;spradhan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6236
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
</entry>
<entry>
<title>bd_map: Remove bd_map xlator</title>
<updated>2013-11-13T19:38:28+00:00</updated>
<author>
<name>M. Mohan Kumar</name>
<email>mohan@in.ibm.com</email>
</author>
<published>2013-11-13T17:14:42+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=15a8ecd9b3eedf80881bd3dba81f16b7d2cb7c97'/>
<id>15a8ecd9b3eedf80881bd3dba81f16b7d2cb7c97</id>
<content type='text'>
Remove the bd_map xlator and the related CLI changes.

Change-Id: If7086205df1907127c1a1fa4ba603f1c48421d09
BUG: 1028672
Signed-off-by: M. Mohan Kumar &lt;mohan@in.ibm.com&gt;
Reviewed-on: http://review.gluster.org/5747
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
</entry>
</feed>
