<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/protocol/server/src/server.h, branch v3.3.1qa1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>core: adding extra data for fops</title>
<updated>2012-03-22T23:40:27+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2012-03-20T11:52:24+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=9d3af972f516b6ba38d2736ce2016e34a452d569'/>
<id>9d3af972f516b6ba38d2736ce2016e34a452d569</id>
<content type='text'>
With this change, the xlator APIs take a dictionary as an extra
argument, which is passed between all the layers. This can be
utilized for overloading some of the operations.
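
A hedged illustration of the shape of the change, using stat as an
example fop (the parameter name and prototype below are indicative
only, not quoted from the tree); every fop and its callback gains a
trailing dict_t * in the same way:

    typedef int32_t (*fop_stat_t) (call_frame_t *frame,
                                   xlator_t *this,
                                   loc_t *loc,
                                   dict_t *xdata); /* the extra data dictionary */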

Change-Id: I58a8186b3ef647650280e63f3e5e9b9de7827b40
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
BUG: 782265
Reviewed-on: http://review.gluster.com/2960
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
</entry>
<entry>
<title>protocol/server: Handle server send reply failure gracefully.</title>
<updated>2012-03-19T15:11:43+00:00</updated>
<author>
<name>Mohammed Junaid</name>
<email>junaid@redhat.com</email>
</author>
<published>2012-03-19T14:26:21+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=83277598bda524f44b76feed6adc7b19fc49d49a'/>
<id>83277598bda524f44b76feed6adc7b19fc49d49a</id>
<content type='text'>
A failure in the server's send-reply path should not trigger server
connection cleanup, because if a reconnection happens within the
grace-timeout the connection object is reused. We must clean up only
when the grace-timeout expires.

Change-Id: I7d171a863382646ff392031c2b845fe4f0d3d5dc
BUG: 803365
Signed-off-by: Mohammed Junaid &lt;junaid@redhat.com&gt;
Reviewed-on: http://review.gluster.com/2947
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</content>
</entry>
<entry>
<title>protocol/server: Clear internal locks on disconnect</title>
<updated>2012-03-18T08:09:43+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pranithk@gluster.com</email>
</author>
<published>2012-03-18T07:37:30+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=9fd44bd90ecb60760919bda85308132341f857f9'/>
<id>9fd44bd90ecb60760919bda85308132341f857f9</id>
<content type='text'>
If a disconnect is observed on the client when the inode/entry unlock
is issued, but the reconnection to the server happens within the
grace-time period, the inode/entry lock will live on and the unlock
will never come from that client.
The internal locks should therefore be cleared on disconnect.

Change-Id: Ib45b1035cfe3b1de381ef3b331c930011e7403be
BUG: 803209
Signed-off-by: Pranith Kumar K &lt;pranithk@gluster.com&gt;
Reviewed-on: http://review.gluster.com/2966
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>protocol/server: Avoid race in add/del locker, connection_cleanup</title>
<updated>2012-03-18T06:22:24+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pranithk@gluster.com</email>
</author>
<published>2012-03-03T11:58:17+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=6aab9d9602dc1ef62a2d1d63aa1764a062bf9d9f'/>
<id>6aab9d9602dc1ef62a2d1d63aa1764a062bf9d9f</id>
<content type='text'>
The conn-&gt;ltable address keeps changing: a new ltable is created
every time server_connection_cleanup is called.
Here is the race that happened:
---------------------------------------------------
thread-1                        | thread-2
add_locker is called with       |
conn-&gt;ltable. let's call the    |
ltable address lt1              |
                                | connection cleanup is called
                                | and do_lock_table_cleanup is
                                | triggered for lt1. locker
                                | lists are splice_inited under
                                | the lt1-&gt;lock
lt1 adds the locker under       |
lt1-&gt;lock (let's call this l1)  |
                                | GF_FREE(lt1) happens in
                                | do_lock_table_cleanup

The locker l1 that was added just before lt1 was freed will never be
cleared in subsequent server_connection_cleanup calls, as no reference
to that locker exists any more. The stale lock remains in the locks
xlator even though the transport on which it was issued is destroyed.

Change-Id: I0a02f16c703d1e7598b083aa1057cda9624eb3fe
BUG: 787601
Signed-off-by: Pranith Kumar K &lt;pranithk@gluster.com&gt;
Reviewed-on: http://review.gluster.com/2957
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
</entry>
<entry>
<title>protocol/server: Remove connection from conf-&gt;conns w.o. race</title>
<updated>2012-03-13T11:58:45+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pranithk@gluster.com</email>
</author>
<published>2012-03-09T19:24:12+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=fa5b0347193f8d1a4b917a2edb338423cb175e66'/>
<id>fa5b0347193f8d1a4b917a2edb338423cb175e66</id>
<content type='text'>
1) Adding the connection to conf-&gt;conns used to happen under
conf-&gt;mutex, but removing it happened under conn-&gt;lock.
Fixed that as described below (see the sketch after point 2).
When the connection object is created, conn's ref and bind_ref counts
are set to '1'. For bind_ref, ref/unref happens under conf-&gt;mutex
whenever server_connection_get/put is called.
When bind_ref goes to '0', the connection object is removed from
conf-&gt;conns under conf-&gt;mutex. After it is removed from the list,
conn_unref is called outside conf-&gt;mutex.
conn_ref/unref still happens under conn-&gt;lock.

2) Fixed races around server_connection_cleanup in grace_timer_handler
and server_setvolume.
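
A hedged sketch of the bind_ref discipline from point 1 (the helper
name, signature and field names are simplified and illustrative, not
the exact functions in the tree; only the locking order matters here):

    static void
    put_bind_ref (server_conf_t *conf, server_connection_t *conn)
    {
            gf_boolean_t  last_bind = _gf_false;

            pthread_mutex_lock (&amp;conf-&gt;mutex);
            {
                    conn-&gt;bind_ref--;                     /* bind_ref under conf-&gt;mutex */
                    if (conn-&gt;bind_ref == 0) {
                            list_del_init (&amp;conn-&gt;list);  /* drop from conf-&gt;conns */
                            last_bind = _gf_true;
                    }
            }
            pthread_mutex_unlock (&amp;conf-&gt;mutex);

            if (last_bind)
                    conn_unref (conn);  /* outside conf-&gt;mutex; ref/unref under conn-&gt;lock */
    }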

Change-Id: Ie7b63b10f658af909a11c3327066667f5b7bd114
BUG: 801675
Signed-off-by: Pranith Kumar K &lt;pranithk@gluster.com&gt;
Reviewed-on: http://review.gluster.com/2911
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</content>
</entry>
<entry>
<title>protocol/server: Make conn object ref-counted</title>
<updated>2012-03-01T17:12:10+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pranithk@gluster.com</email>
</author>
<published>2012-02-23T09:16:04+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=86f631f4283cba7185e5b1d5a3be4b9a614ed985'/>
<id>86f631f4283cba7185e5b1d5a3be4b9a614ed985</id>
<content type='text'>
Change-Id: I992a7f8a75edfe7d75afaa1abe0ad45e8f351c8b
BUG: 796581
Signed-off-by: Pranith Kumar K &lt;pranithk@gluster.com&gt;
Reviewed-on: http://review.gluster.com/2806
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</content>
</entry>
<entry>
<title>transport/socket: configuring tcp window-size</title>
<updated>2012-02-29T10:10:44+00:00</updated>
<author>
<name>Rajesh Amaravathi</name>
<email>rajesh@redhat.com</email>
</author>
<published>2012-02-22T09:21:53+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=2da18b6724b7daf7c3a973515fc3d59e7d2c4622'/>
<id>2da18b6724b7daf7c3a973515fc3d59e7d2c4622</id>
<content type='text'>
Till now, send and receive buffer window sizes for sockets were set
to a glusterfs-specific default value. Linux's default window sizes
have been found to be better w.r.t. performance, and hence they are
no longer overridden with any default value.

However, if one wishes, there is a new configuration option:
   network.tcp-window-size &lt;sane_size&gt;
which takes a size value (integer or human-readable) and sets the
window size of sockets for both clients and servers.
NFS clients will also be updated with the same value.
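
For example (a hedged illustration; the volume name and value are
arbitrary, assuming the option is exposed through the volume set
interface):
    gluster volume set &lt;VOLNAME&gt; network.tcp-window-size 1048576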

Change-Id: I841479bbaea791b01086c42f58401ed297ff16ea
BUG: 795635
Signed-off-by: Rajesh Amaravathi &lt;rajesh@redhat.com&gt;
Reviewed-on: http://review.gluster.com/2821
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</content>
</entry>
<entry>
<title>protocol/client,server: fcntl lock self healing.</title>
<updated>2012-02-20T12:45:31+00:00</updated>
<author>
<name>Mohammed Junaid</name>
<email>junaid@redhat.com</email>
</author>
<published>2012-02-08T12:36:39+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=f764516c2e526624ce0088963924ff2d88304553'/>
<id>f764516c2e526624ce0088963924ff2d88304553</id>
<content type='text'>
Currently (without this patch), on a disconnect the server cleans up
the transport, which in turn closes the fd's and releases the locks
acquired on those fd's by that client. On a reconnect, the client just
reopens the fd's but doesn't reacquire the locks. The application that
had previously acquired the locks is still under the assumption that
it owns them, while the server might meanwhile have granted those
locks to other clients (if they request them), leading to data
corruption.

This patch allows the client to reacquire the fcntl locks (held on the
fd's) during the client-server handshake.

* The server identifies the client via process-uuid-xl (which is a
  combination of the uuid and the client-protocol name, and is assumed
  to be unique) and the lk-version number.

* The client maintains a list of (process-uuid-xl, lk-version) pairs,
  one for each accepted connection. On a connect, the server traverses
  the list for a matching pair; if a matching pair is not found, the
  server returns lk-version with value 0, else it returns the
  lk-version it has in store.

* On a disconnect, the server and the client enter a grace period. On
  completion of the grace period, the client bumps up its lk-version
  number (which means it will reacquire the locks the next time) and
  the server will destroy the connection. If the reconnection happens
  within the grace period, the server will find the matching
  (process-uuid-xl, lk-version) pair in its list, which guarantees that
  the fd's and their corresponding locks are still valid for this
  client.

Configurable options:
  To set the grace-timeout, the following options are available:
    option server.grace-timeout value
    option client.grace-timeout value

  To enable or disable lk-heal:
    option lk-heal [on|off]

The gluster volume set command can be used to set these options (see
the example below).
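
For example (a hedged illustration; the volume name and values are
arbitrary, and the volume-set keys are assumed to mirror the option
names above):
    gluster volume set &lt;VOLNAME&gt; server.grace-timeout 10
    gluster volume set &lt;VOLNAME&gt; lk-heal on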
Change-Id: Id677ef1087b300d649f278b8b2aa0d94eae85ed2
BUG: 795386
Signed-off-by: Mohammed Junaid &lt;junaid@redhat.com&gt;
Reviewed-on: http://review.gluster.com/2766
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</content>
</entry>
<entry>
<title>core: get xattrs also as part of readdirp</title>
<updated>2012-01-25T10:03:44+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amar@gluster.com</email>
</author>
<published>2012-01-18T12:36:44+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=cf8486cbef329ef66868f658fa35f470f97db462'/>
<id>cf8486cbef329ef66868f658fa35f470f97db462</id>
<content type='text'>
The readdirp_req() call sends a dict_t * as an argument, containing
all the xattr keys the caller is interested in; the entries returned
in readdirp_rsp() then carry a dictionary filled with the values of
those xattrs.
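
A hedged sketch of how a caller might use this (the xattr key, the
callback name and the surrounding code are illustrative only):

    dict_t *xattr_req = dict_new ();
    if (xattr_req)
            dict_set_int32 (xattr_req, "trusted.glusterfs.dht", 0);
    STACK_WIND (frame, my_readdirp_cbk, FIRST_CHILD (this),
                FIRST_CHILD (this)-&gt;fops-&gt;readdirp,
                fd, size, offset, xattr_req);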

Change-Id: I8b7e1290740ea3e884e67d19156ce849227167c0
Signed-off-by: Amar Tumballi &lt;amar@gluster.com&gt;
BUG: 765785
Reviewed-on: http://review.gluster.com/771
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@gluster.com&gt;
</content>
</entry>
<entry>
<title>core: change lk-owner as a 1k buffer</title>
<updated>2012-01-25T04:14:17+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amar@gluster.com</email>
</author>
<published>2012-01-16T23:58:51+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=b02afc6d008f9959db28244eb2b9dd3b9ef92393'/>
<id>b02afc6d008f9959db28244eb2b9dd3b9ef92393</id>
<content type='text'>
So NLM can send the lk-owner field directly to the locks translators.
As part of the same effort, sending a maximum of 500 aux GIDs over
the protocol is also enabled.

Change-Id: I87c2514392748416f7ffe21d5154faad2e413969
Signed-off-by: Amar Tumballi &lt;amar@gluster.com&gt;
BUG: 767229
Reviewed-on: http://review.gluster.com/779
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@gluster.com&gt;
</content>
</entry>
</feed>
