<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/protocol/server/src/server-helpers.c, branch v3.3.1qa1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>protocol/client: Concatenate the graph uuid along with process uuid in setlk_version.</title>
<updated>2012-05-25T16:33:25+00:00</updated>
<author>
<name>Mohammed Junaid</name>
<email>junaid@redhat.com</email>
</author>
<published>2012-05-11T07:21:38+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=65275d048bb031f93a690c69cb32bb751298c212'/>
<id>65275d048bb031f93a690c69cb32bb751298c212</id>
<content type='text'>
Change-Id: Idec06c5ef1d440864e465f008a38c86395b52aba
BUG: 820831
Signed-off-by: Mohammed Junaid &lt;junaid@redhat.com&gt;
Reviewed-on: http://review.gluster.com/3439
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: Idec06c5ef1d440864e465f008a38c86395b52aba
BUG: 820831
Signed-off-by: Mohammed Junaid &lt;junaid@redhat.com&gt;
Reviewed-on: http://review.gluster.com/3439
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>protocol/server: del_locker should delete one locker per unlock</title>
<updated>2012-05-22T11:30:35+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pranithk@gluster.com</email>
</author>
<published>2012-05-19T05:44:25+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=86c8e411143a9ee2bfd81270676ebd2a987cf946'/>
<id>86c8e411143a9ee2bfd81270676ebd2a987cf946</id>
<content type='text'>
BUG: 771595
Change-Id: If1c352b2d65938ad07f2e4b70c0e58c2d3be11bc
Signed-off-by: Pranith Kumar K &lt;pranithk@gluster.com&gt;
Reviewed-on: http://review.gluster.com/3399
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
BUG: 771595
Change-Id: If1c352b2d65938ad07f2e4b70c0e58c2d3be11bc
Signed-off-by: Pranith Kumar K &lt;pranithk@gluster.com&gt;
Reviewed-on: http://review.gluster.com/3399
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>protocol/server: Check if dict arg is NULL in setxattr</title>
<updated>2012-04-17T08:14:58+00:00</updated>
<author>
<name>shishir gowda</name>
<email>shishirng@gluster.com</email>
</author>
<published>2012-04-17T05:53:46+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=6b0a44b4df1c1f7d70b2296862d25bf166009ebc'/>
<id>6b0a44b4df1c1f7d70b2296862d25bf166009ebc</id>
<content type='text'>
Change-Id: I44d199ffa5d08115cc0aa7cb0b99298a9907af60
BUG: 808067
Signed-off-by: shishir gowda &lt;shishirng@gluster.com&gt;
Reviewed-on: http://review.gluster.com/3164
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: I44d199ffa5d08115cc0aa7cb0b99298a9907af60
BUG: 808067
Signed-off-by: shishir gowda &lt;shishirng@gluster.com&gt;
Reviewed-on: http://review.gluster.com/3164
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>core: adding extra data for fops</title>
<updated>2012-03-22T23:40:27+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2012-03-20T11:52:24+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=9d3af972f516b6ba38d2736ce2016e34a452d569'/>
<id>9d3af972f516b6ba38d2736ce2016e34a452d569</id>
<content type='text'>
With this change, the xlator APIs take a dictionary as an extra
argument, which is passed between all the layers. This can be
used to overload some of the operations.

Change-Id: I58a8186b3ef647650280e63f3e5e9b9de7827b40
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
BUG: 782265
Reviewed-on: http://review.gluster.com/2960
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
With this change, the xlator APIs take a dictionary as an extra
argument, which is passed between all the layers. This can be
used to overload some of the operations.

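A minimal sketch of the idea, assuming nothing beyond the description
above: dict_t here is a toy stand-in, and quota_writev/storage_writev
are invented layer names, not glusterfs functions. Each layer receives
the same extra-data dictionary and may attach keys before passing it
down the stack.

#include &lt;stdio.h&gt;

/* Simplified stand-ins for dict_t and a stacked-translator call path. */
typedef struct kv   { char key[32]; char val[64]; } kv_t;
typedef struct dict { kv_t pairs[8]; int count; }   dict_t;

static void dict_set (dict_t *d, const char *k, const char *v)
{
        if (d &amp;&amp; d-&gt;count &lt; 8) {
                snprintf (d-&gt;pairs[d-&gt;count].key, sizeof (d-&gt;pairs[0].key), "%s", k);
                snprintf (d-&gt;pairs[d-&gt;count].val, sizeof (d-&gt;pairs[0].val), "%s", v);
                d-&gt;count++;
        }
}

/* Lowest layer: sees whatever extra data the layers above attached. */
static int storage_writev (const char *path, dict_t *xdata)
{
        int i;

        printf ("write %s with %d extra key(s)\n", path, xdata ? xdata-&gt;count : 0);
        for (i = 0; xdata &amp;&amp; i &lt; xdata-&gt;count; i++)
                printf ("  %s = %s\n", xdata-&gt;pairs[i].key, xdata-&gt;pairs[i].val);
        return 0;
}

/* Middle layer: forwards the same dictionary, optionally overloading it. */
static int quota_writev (const char *path, dict_t *xdata)
{
        dict_set (xdata, "quota.size-hint", "4096");
        return storage_writev (path, xdata);
}

int main (void)
{
        dict_t xdata = { .count = 0 };          /* caller-owned extra-data dict */

        dict_set (&amp;xdata, "client.pid", "1234");
        return quota_writev ("/file", &amp;xdata);
}
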
Change-Id: I58a8186b3ef647650280e63f3e5e9b9de7827b40
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
BUG: 782265
Reviewed-on: http://review.gluster.com/2960
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>protocol/server: Clear internal locks on disconnect</title>
<updated>2012-03-18T08:09:43+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pranithk@gluster.com</email>
</author>
<published>2012-03-18T07:37:30+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=9fd44bd90ecb60760919bda85308132341f857f9'/>
<id>9fd44bd90ecb60760919bda85308132341f857f9</id>
<content type='text'>
If a disconnect is observed on the client when the inode/entry
unlock is issued, but the reconnection to the server happens within
the grace-time period, the inode/entry lock lives on and the unlock
never comes from that client.
The internal locks should therefore be cleared on disconnect.

Change-Id: Ib45b1035cfe3b1de381ef3b331c930011e7403be
BUG: 803209
Signed-off-by: Pranith Kumar K &lt;pranithk@gluster.com&gt;
Reviewed-on: http://review.gluster.com/2966
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
If a disconnect is observed on the client when the inode/entry
unlock is issued, but the reconnection to the server happens within
the grace-time period, the inode/entry lock lives on and the unlock
never comes from that client.
The internal locks should therefore be cleared on disconnect.

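A minimal sketch of that behaviour with toy stand-in types (these are
not the server xlator's own structures, and
connection_clear_internal_locks is an invented name): the disconnect
path walks the internal locks the client still holds and releases them
instead of waiting for an unlock that will never arrive.

#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;

/* Simplified stand-ins for the per-connection lock table. */
struct locker {
        char           path[64];
        struct locker *next;
};

struct connection {
        struct locker *inodelks;        /* internal inode/entry locks held */
};

/* Called from the disconnect path: release every internal lock the
 * client still held. */
static void connection_clear_internal_locks (struct connection *conn)
{
        struct locker *l = conn-&gt;inodelks;

        while (l) {
                struct locker *next = l-&gt;next;

                printf ("releasing internal lock on %s\n", l-&gt;path);
                free (l);
                l = next;
        }
        conn-&gt;inodelks = NULL;
}

int main (void)
{
        struct connection  conn = { 0 };
        struct locker     *l   = calloc (1, sizeof (*l));

        snprintf (l-&gt;path, sizeof (l-&gt;path), "/dir/file");
        conn.inodelks = l;

        connection_clear_internal_locks (&amp;conn);   /* disconnect handler */
        return 0;
}
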
Change-Id: Ib45b1035cfe3b1de381ef3b331c930011e7403be
BUG: 803209
Signed-off-by: Pranith Kumar K &lt;pranithk@gluster.com&gt;
Reviewed-on: http://review.gluster.com/2966
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>protocol/server: add and remove the transports from the list, inside the lock</title>
<updated>2012-03-18T06:27:26+00:00</updated>
<author>
<name>Raghavendra Bhat</name>
<email>raghavendrabhat@gluster.com</email>
</author>
<published>2012-03-16T10:55:07+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=fb406f942befbe48eec75043d89ecd0824f91dd6'/>
<id>fb406f942befbe48eec75043d89ecd0824f91dd6</id>
<content type='text'>
Until now, on graph changes the glusterfs client used to remember the
old graph as well, so the transport object on the server corresponding
to the old graph never received a disconnect. Now that graph cleanup
happens, the transport on the server side gets a disconnect for the
cleaned-up graph.

The server maintains all the transports in a list, but addition of a
new transport to the list, or removal of a transport from it, was not
happening within the lock. Thus if one thread is accessing a transport
(as in statedump, where each transport's information is dumped) and the
server gets a disconnect on that transport, the process segfaults.

To avoid this, do the list (of transports) manipulation inside the lock.

Change-Id: I50e8389d5ec8f1c52b8d401ef8c8ddd262e82548
BUG: 803815
Signed-off-by: Raghavendra Bhat &lt;raghavendrabhat@gluster.com&gt;
Reviewed-on: http://review.gluster.com/2958
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Until now, on graph changes the glusterfs client used to remember the
old graph as well, so the transport object on the server corresponding
to the old graph never received a disconnect. Now that graph cleanup
happens, the transport on the server side gets a disconnect for the
cleaned-up graph.

The server maintains all the transports in a list, but addition of a
new transport to the list, or removal of a transport from it, was not
happening within the lock. Thus if one thread is accessing a transport
(as in statedump, where each transport's information is dumped) and the
server gets a disconnect on that transport, the process segfaults.

To avoid this, do the list (of transports) manipulation inside the lock.

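A minimal sketch of that pattern with toy types (transport_add,
transport_del and statedump below are illustrative names, not the
server xlator's functions): insertion, removal and the statedump
traversal all run under the same mutex.

#include &lt;pthread.h&gt;
#include &lt;stdio.h&gt;

struct transport { struct transport *next; int id; };

static struct transport *xprt_list;                              /* shared transport list */
static pthread_mutex_t   list_mutex = PTHREAD_MUTEX_INITIALIZER;

static void transport_add (struct transport *t)
{
        pthread_mutex_lock (&amp;list_mutex);
        t-&gt;next   = xprt_list;          /* insert under the lock */
        xprt_list = t;
        pthread_mutex_unlock (&amp;list_mutex);
}

static void transport_del (struct transport *t)
{
        struct transport **pp;

        pthread_mutex_lock (&amp;list_mutex);
        for (pp = &amp;xprt_list; *pp; pp = &amp;(*pp)-&gt;next) {
                if (*pp == t) {
                        *pp = t-&gt;next;  /* unlink under the same lock */
                        break;
                }
        }
        pthread_mutex_unlock (&amp;list_mutex);
}

static void statedump (void)
{
        struct transport *t;

        pthread_mutex_lock (&amp;list_mutex);       /* readers take the lock too */
        for (t = xprt_list; t; t = t-&gt;next)
                printf ("transport %d\n", t-&gt;id);
        pthread_mutex_unlock (&amp;list_mutex);
}

int main (void)
{
        struct transport t1 = { .id = 1 };

        transport_add (&amp;t1);
        statedump ();
        transport_del (&amp;t1);            /* e.g. on disconnect */
        return 0;
}
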
Change-Id: I50e8389d5ec8f1c52b8d401ef8c8ddd262e82548
BUG: 803815
Signed-off-by: Raghavendra Bhat &lt;raghavendrabhat@gluster.com&gt;
Reviewed-on: http://review.gluster.com/2958
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>protocol/server: Avoid race in add/del locker, connection_cleanup</title>
<updated>2012-03-18T06:22:24+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pranithk@gluster.com</email>
</author>
<published>2012-03-03T11:58:17+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=6aab9d9602dc1ef62a2d1d63aa1764a062bf9d9f'/>
<id>6aab9d9602dc1ef62a2d1d63aa1764a062bf9d9f</id>
<content type='text'>
conn-&gt;ltable's address changes every time server_connection_cleanup
is called, i.e. a new ltable is created on every call.
Here is the race that happened:
---------------------------------------------------
thread-1                        | thread-2
add_locker is called with       |
conn-&gt;ltable. lets call the     |
ltable address lt1              |
                                | connection cleanup is called
                                | and do_lock_table_cleanup is
                                | triggered for lt1. locker
                                | lists are splice_inited under
                                | the lt1-&gt;lock
lt1 adds the locker under       |
lt1-&gt;lock (lets call this l1)   |
                                | GF_FREE(lt1) happens in
                                | do_lock_table_cleanup

The locker l1 that is added just before lt1 is freed will never be
cleared by subsequent server_connection_cleanup calls, as no
reference to the locker remains. The stale lock lives on in the
locks xlator even though the transport on which it was issued has
been destroyed.

Change-Id: I0a02f16c703d1e7598b083aa1057cda9624eb3fe
BUG: 787601
Signed-off-by: Pranith Kumar K &lt;pranithk@gluster.com&gt;
Reviewed-on: http://review.gluster.com/2957
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
conn-&gt;ltable's address changes every time server_connection_cleanup
is called, i.e. a new ltable is created on every call.
Here is the race that happened:
---------------------------------------------------
thread-1                        | thread-2
add_locker is called with       |
conn-&gt;ltable. lets call the     |
ltable address lt1              |
                                | connection cleanup is called
                                | and do_lock_table_cleanup is
                                | triggered for lt1. locker
                                | lists are splice_inited under
                                | the lt1-&gt;lock
lt1 adds the locker under       |
lt1-&gt;lock (lets call this l1)   |
                                | GF_FREE(lt1) happens in
                                | do_lock_table_cleanup

The locker l1 that is added just before lt1 is freed will never be
cleared by subsequent server_connection_cleanup calls, as no
reference to the locker remains. The stale lock lives on in the
locks xlator even though the transport on which it was issued has
been destroyed.

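The description above covers the race but not the remedy. One common
way to close such a race, sketched below with toy types and simplified
stand-ins for add_locker and the cleanup path, is to perform both the
locker insertion and the table swap under one connection-level lock,
freeing the old table only after it has been detached.

#include &lt;pthread.h&gt;
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;

struct locker     { struct locker *next; };
struct lock_table { struct locker *lockers; };

struct connection {
        pthread_mutex_t    lock;
        struct lock_table *ltable;
};

/* Insert into whatever table is current. Holding conn-&gt;lock prevents the
 * cleanup path from swapping and freeing the table between reading
 * conn-&gt;ltable and linking the locker into it. */
static void add_locker (struct connection *conn, struct locker *l)
{
        pthread_mutex_lock (&amp;conn-&gt;lock);
        l-&gt;next = conn-&gt;ltable-&gt;lockers;
        conn-&gt;ltable-&gt;lockers = l;
        pthread_mutex_unlock (&amp;conn-&gt;lock);
}

/* Detach the old table under the same lock; it is only freed afterwards,
 * once no new locker can possibly land in it. */
static struct lock_table *connection_cleanup (struct connection *conn)
{
        struct lock_table *old;

        pthread_mutex_lock (&amp;conn-&gt;lock);
        old = conn-&gt;ltable;
        conn-&gt;ltable = calloc (1, sizeof (*conn-&gt;ltable));
        pthread_mutex_unlock (&amp;conn-&gt;lock);

        return old;     /* caller drains old-&gt;lockers, then frees it */
}

int main (void)
{
        struct connection  conn = { 0 };
        struct locker      l    = { 0 };
        struct lock_table *old;

        pthread_mutex_init (&amp;conn.lock, NULL);
        conn.ltable = calloc (1, sizeof (*conn.ltable));

        add_locker (&amp;conn, &amp;l);
        old = connection_cleanup (&amp;conn);
        printf ("old table had %s lockers\n", old-&gt;lockers ? "some" : "no");

        free (old);
        free (conn.ltable);
        return 0;
}
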
Change-Id: I0a02f16c703d1e7598b083aa1057cda9624eb3fe
BUG: 787601
Signed-off-by: Pranith Kumar K &lt;pranithk@gluster.com&gt;
Reviewed-on: http://review.gluster.com/2957
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>protocol/server: Remove connection from conf-&gt;conns w.o. race</title>
<updated>2012-03-13T11:58:45+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pranithk@gluster.com</email>
</author>
<published>2012-03-09T19:24:12+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=fa5b0347193f8d1a4b917a2edb338423cb175e66'/>
<id>fa5b0347193f8d1a4b917a2edb338423cb175e66</id>
<content type='text'>
1) Adding the connection to conf-&gt;conns used to happen under
conf-&gt;mutex, but removal happened under conn-&gt;lock.
Fixed as follows:
When the connection object is created, conn's ref and bind_ref
counts are set to '1'. bind_ref is taken/released under conf-&gt;mutex
whenever server_connection_get/put is called.
When bind_ref drops to '0', the connection object is removed from
conf-&gt;conns under conf-&gt;mutex. After it is removed from the list,
conn_unref is called outside conf-&gt;mutex.
conn_ref/unref still happens under conn-&gt;lock.

2) Fixed races in server_connection_cleanup in grace_timer_handler
and server_setvolume.

Change-Id: Ie7b63b10f658af909a11c3327066667f5b7bd114
BUG: 801675
Signed-off-by: Pranith Kumar K &lt;pranithk@gluster.com&gt;
Reviewed-on: http://review.gluster.com/2911
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
1) Adding the connection to conf-&gt;conns used to happen under
conf-&gt;mutex, but removal happened under conn-&gt;lock.
Fixed as follows:
When the connection object is created, conn's ref and bind_ref
counts are set to '1'. bind_ref is taken/released under conf-&gt;mutex
whenever server_connection_get/put is called.
When bind_ref drops to '0', the connection object is removed from
conf-&gt;conns under conf-&gt;mutex. After it is removed from the list,
conn_unref is called outside conf-&gt;mutex.
conn_ref/unref still happens under conn-&gt;lock.

2) Fixed races in server_connection_cleanup in grace_timer_handler
and server_setvolume.

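A minimal sketch of the scheme in 1), using toy types (conn,
connection_put and conn_unref below are simplified stand-ins, not the
actual server xlator code): bind_ref is changed only under the conf
mutex, the connection is unlinked from the list when bind_ref reaches
zero, and the final unref runs outside that mutex.

#include &lt;pthread.h&gt;
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;

struct conn {
        struct conn     *next;
        pthread_mutex_t  lock;
        int              ref;       /* lifetime refcount, guarded by conn-&gt;lock     */
        int              bind_ref;  /* list-membership count, guarded by conf mutex */
};

static struct conn     *conns;                                   /* conf-&gt;conns */
static pthread_mutex_t  conf_mutex = PTHREAD_MUTEX_INITIALIZER;  /* conf-&gt;mutex */

static void conn_unref (struct conn *c)
{
        int drop;

        pthread_mutex_lock (&amp;c-&gt;lock);
        drop = (--c-&gt;ref == 0);
        pthread_mutex_unlock (&amp;c-&gt;lock);

        if (drop) {
                printf ("destroying connection\n");
                free (c);
        }
}

/* Release in the style described above: drop bind_ref under conf_mutex,
 * unlink from the list when it hits zero, then unref outside the mutex. */
static void connection_put (struct conn *c)
{
        int unlinked = 0;

        pthread_mutex_lock (&amp;conf_mutex);
        if (--c-&gt;bind_ref == 0) {
                struct conn **pp;

                for (pp = &amp;conns; *pp; pp = &amp;(*pp)-&gt;next) {
                        if (*pp == c) { *pp = c-&gt;next; break; }
                }
                unlinked = 1;
        }
        pthread_mutex_unlock (&amp;conf_mutex);

        if (unlinked)
                conn_unref (c);         /* outside conf_mutex */
}

int main (void)
{
        struct conn *c = calloc (1, sizeof (*c));

        pthread_mutex_init (&amp;c-&gt;lock, NULL);
        c-&gt;ref      = 1;        /* set to '1' at creation, as described above */
        c-&gt;bind_ref = 1;
        c-&gt;next = conns;        /* in real code this add happens under conf_mutex */
        conns   = c;

        connection_put (c);
        return 0;
}
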
Change-Id: Ie7b63b10f658af909a11c3327066667f5b7bd114
BUG: 801675
Signed-off-by: Pranith Kumar K &lt;pranithk@gluster.com&gt;
Reviewed-on: http://review.gluster.com/2911
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>protocol/server: Make conn object ref-counted</title>
<updated>2012-03-01T17:12:10+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pranithk@gluster.com</email>
</author>
<published>2012-02-23T09:16:04+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=86f631f4283cba7185e5b1d5a3be4b9a614ed985'/>
<id>86f631f4283cba7185e5b1d5a3be4b9a614ed985</id>
<content type='text'>
Change-Id: I992a7f8a75edfe7d75afaa1abe0ad45e8f351c8b
BUG: 796581
Signed-off-by: Pranith Kumar K &lt;pranithk@gluster.com&gt;
Reviewed-on: http://review.gluster.com/2806
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: I992a7f8a75edfe7d75afaa1abe0ad45e8f351c8b
BUG: 796581
Signed-off-by: Pranith Kumar K &lt;pranithk@gluster.com&gt;
Reviewed-on: http://review.gluster.com/2806
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterfsd: unref the dict and free the memory to avoid memleak</title>
<updated>2012-02-27T10:17:34+00:00</updated>
<author>
<name>Raghavendra Bhat</name>
<email>raghavendrabhat@gluster.com</email>
</author>
<published>2012-02-23T17:28:44+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=85471322df9676cc344cc2b03627c02ed90da3cd'/>
<id>85471322df9676cc344cc2b03627c02ed90da3cd</id>
<content type='text'>
Change-Id: Ib7a1f8cbab039fefb73dc35560a035d5688b0e32
BUG: 796186
Signed-off-by: Raghavendra Bhat &lt;raghavendrabhat@gluster.com&gt;
Reviewed-on: http://review.gluster.com/2808
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: Ib7a1f8cbab039fefb73dc35560a035d5688b0e32
BUG: 796186
Signed-off-by: Raghavendra Bhat &lt;raghavendrabhat@gluster.com&gt;
Reviewed-on: http://review.gluster.com/2808
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
