<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/rpc/rpc-transport, branch v6.2</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>transport/socket: log shutdown msg occasionally</title>
<updated>2019-04-16T13:38:58+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2019-03-22T05:10:45+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=b1186532c7fd093232617979aa8f3e932a645278'/>
<id>b1186532c7fd093232617979aa8f3e932a645278</id>
<content type='text'>
Change-Id: If3fc0884e7e2f45de2d278b98693b7a473220a5e
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Fixes: bz#1679904
(cherry picked from commit ec1b84300fe267dd12c1e42e7e91905db935f1e2)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: If3fc0884e7e2f45de2d278b98693b7a473220a5e
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Fixes: bz#1679904
(cherry picked from commit ec1b84300fe267dd12c1e42e7e91905db935f1e2)
</pre>
</div>
</content>
</entry>
<entry>
<title>socket: socket event handlers now return void</title>
<updated>2019-03-02T11:54:24+00:00</updated>
<author>
<name>Milind Changire</name>
<email>mchangir@redhat.com</email>
</author>
<published>2019-02-15T08:50:07+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=4cb1d6d94ac85c5e79171f8989b545ca098b61d9'/>
<id>4cb1d6d94ac85c5e79171f8989b545ca098b61d9</id>
<content type='text'>
Problem:
Returning any value from socket event handlers to the event sub-system
doesn't make sense, since the event sub-system cannot handle socket
sub-system errors.

Solution:
Change the return type of all socket event handlers to 'void'.

mainline:
&gt; Change-Id: I70dc2c57f12b7ea2fae41120f71aa0d7fe0b2b6f
&gt; Fixes: bz#1651246
&gt; Signed-off-by: Milind Changire &lt;mchangir@redhat.com&gt;
&gt; Reviewed-on: https://review.gluster.org/c/glusterfs/+/22221

Change-Id: I70dc2c57f12b7ea2fae41120f71aa0d7fe0b2b6f
Fixes: bz#1683900
Signed-off-by: Milind Changire &lt;mchangir@redhat.com&gt;
(cherry picked from commit 776ba851c6ee6c265253d44cf1d6e4e3d4a21772)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
Returning any value from socket event handlers to the event sub-system
doesn't make sense, since the event sub-system cannot handle socket
sub-system errors.

Solution:
Change the return type of all socket event handlers to 'void'.

mainline:
&gt; Change-Id: I70dc2c57f12b7ea2fae41120f71aa0d7fe0b2b6f
&gt; Fixes: bz#1651246
&gt; Signed-off-by: Milind Changire &lt;mchangir@redhat.com&gt;
&gt; Reviewed-on: https://review.gluster.org/c/glusterfs/+/22221

Change-Id: I70dc2c57f12b7ea2fae41120f71aa0d7fe0b2b6f
Fixes: bz#1683900
Signed-off-by: Milind Changire &lt;mchangir@redhat.com&gt;
(cherry picked from commit 776ba851c6ee6c265253d44cf1d6e4e3d4a21772)
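
A minimal sketch of the new shape of the handlers (the names and the
simplified signature below are hypothetical, not the actual glusterfs
code): the handler resolves a socket error itself, typically by
disconnecting, and returns nothing the event sub-system could
misinterpret.

#include &lt;stdio.h&gt;

/* hypothetical connection object standing in for the socket private data */
typedef struct {
    int fd;
    int connected;
} conn_t;

/* hypothetical teardown helper: errors are resolved here, not reported upward */
static void conn_disconnect(conn_t *c)
{
    c-&gt;connected = 0;
    printf("disconnecting fd %d\n", c-&gt;fd);
}

/* the event handler returns void: a socket error is handled by disconnecting,
 * never by handing an error code back to the event dispatch loop */
static void event_handler(conn_t *c, int poll_in, int poll_err)
{
    if (poll_err) {
        conn_disconnect(c);
        return;
    }
    if (poll_in)
        printf("reading from fd %d\n", c-&gt;fd);
}

int main(void)
{
    conn_t c = { 3, 1 };
    event_handler(&amp;c, 1, 0);  /* normal pollin */
    event_handler(&amp;c, 0, 1);  /* error: handled inside the handler */
    return 0;
}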
</pre>
</div>
</content>
</entry>
<entry>
<title>socket: don't pass return value from protocol handler to event handler</title>
<updated>2019-01-22T06:59:37+00:00</updated>
<author>
<name>Zhang Huan</name>
<email>zhanghuan@open-fs.com</email>
</author>
<published>2019-01-03T09:57:38+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=cd5714554627fe90ee2c77685cb410a8fb25eceb'/>
<id>cd5714554627fe90ee2c77685cb410a8fb25eceb</id>
<content type='text'>
The event handler handles socket-level errors only, while the protocol
handler handles protocol-level errors. If the protocol handler decides to
disconnect on an error, it should call disconnect itself instead of
returning an error back to the event handler.

Change-Id: I9375be98cc52cb969085333f3c7229a91207d1bd
updates: bz#1666143
Signed-off-by: Zhang Huan &lt;zhanghuan@open-fs.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The event handler handles socket-level errors only, while the protocol
handler handles protocol-level errors. If the protocol handler decides to
disconnect on an error, it should call disconnect itself instead of
returning an error back to the event handler.

Change-Id: I9375be98cc52cb969085333f3c7229a91207d1bd
updates: bz#1666143
Signed-off-by: Zhang Huan &lt;zhanghuan@open-fs.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>socket: fix issue when socket read return with EAGAIN</title>
<updated>2019-01-22T06:52:30+00:00</updated>
<author>
<name>Zhang Huan</name>
<email>zhanghuan@open-fs.com</email>
</author>
<published>2018-12-29T08:26:58+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=4d9935a4db67be0237db5fc6a2b51086635571f6'/>
<id>4d9935a4db67be0237db5fc6a2b51086635571f6</id>
<content type='text'>
When a socket read returns EAGAIN, a positive value indicating the
remaining vector count is returned. This return value is passed all the
way back to the event handler, causing it to complain.

[2018-12-29 08:02:25.603199] T [socket.c:1640:__socket_read_simple_payload] 0-test-client-0-extra.0: partial read on non-blocking socket.
[2018-12-29 08:02:25.603201] T [rpc-clnt.c:654:rpc_clnt_reply_init] 0-test-client-2-extra.1: received rpc message (RPC XID: 0xfa6 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 12) from rpc-transport (test-client-2-extra.1)
[2018-12-29 08:02:25.603207] T [socket.c:3129:socket_event_handler] 0-test-client-0-extra.0: (sock:32) socket_event_poll_in returned 1

Formerly, socket_proto_state_machine() used the return value of
socket_readv() to check whether the message had been read in completely.
With this commit, it instead checks whether all of the bytes indicated in
the header have been read in. This way, only 0 and -1 are returned from
socket_proto_state_machine(), indicating whether there was an error on
the underlying socket.

Change-Id: I8be0d178b049f0720d738a03aec41c4b375d2972
updates: bz#1666143
Signed-off-by: Zhang Huan &lt;zhanghuan@open-fs.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
When a socket read returns EAGAIN, a positive value indicating the
remaining vector count is returned. This return value is passed all the
way back to the event handler, causing it to complain.

[2018-12-29 08:02:25.603199] T [socket.c:1640:__socket_read_simple_payload] 0-test-client-0-extra.0: partial read on non-blocking socket.
[2018-12-29 08:02:25.603201] T [rpc-clnt.c:654:rpc_clnt_reply_init] 0-test-client-2-extra.1: received rpc message (RPC XID: 0xfa6 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 12) from rpc-transport (test-client-2-extra.1)
[2018-12-29 08:02:25.603207] T [socket.c:3129:socket_event_handler] 0-test-client-0-extra.0: (sock:32) socket_event_poll_in returned 1

Formerly, socket_proto_state_machine() used the return value of
socket_readv() to check whether the message had been read in completely.
With this commit, it instead checks whether all of the bytes indicated in
the header have been read in. This way, only 0 and -1 are returned from
socket_proto_state_machine(), indicating whether there was an error on
the underlying socket.

Change-Id: I8be0d178b049f0720d738a03aec41c4b375d2972
updates: bz#1666143
Signed-off-by: Zhang Huan &lt;zhanghuan@open-fs.com&gt;
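
A simplified, self-contained sketch of the approach (the helper names and
the read_state_t type are made up for illustration; this is not the actual
socket_proto_state_machine() code): completion is judged by comparing the
bytes read so far against the size announced in the header, EAGAIN simply
means "try again on the next pollin", and the caller only ever sees 0 or -1.

#include &lt;errno.h&gt;
#include &lt;fcntl.h&gt;
#include &lt;stdio.h&gt;
#include &lt;unistd.h&gt;

/* hypothetical per-connection read state */
typedef struct {
    char   *buf;
    size_t  total;   /* payload size announced in the header */
    size_t  done;    /* bytes read in so far */
} read_state_t;

/* Returns 0 when the read made progress (possibly partial) and -1 only on a
 * real socket error; EAGAIN/EWOULDBLOCK is not an error, just "not yet". */
static int read_payload(int fd, read_state_t *rs)
{
    while (rs-&gt;done &lt; rs-&gt;total) {
        ssize_t n = read(fd, rs-&gt;buf + rs-&gt;done, rs-&gt;total - rs-&gt;done);
        if (n &gt; 0) {
            rs-&gt;done += n;
            continue;
        }
        if (n &lt; 0 &amp;&amp; (errno == EAGAIN || errno == EWOULDBLOCK))
            return 0;          /* partial read, wait for the next pollin */
        return -1;             /* EOF or a genuine socket error */
    }
    return 0;                  /* header-announced size fully read in */
}

int main(void)
{
    int p[2];
    char buf[8];
    read_state_t rs = { buf, sizeof(buf), 0 };

    pipe(p);
    fcntl(p[0], F_SETFL, O_NONBLOCK);
    write(p[1], "abcd", 4);                /* only half the payload is there */
    int ret = read_payload(p[0], &amp;rs);     /* hits EAGAIN after 4 bytes */
    printf("ret=%d done=%zu complete=%d\n", ret, rs.done, rs.done == rs.total);
    return 0;
}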
</pre>
</div>
</content>
</entry>
<entry>
<title>socket: fix issue when socket write return with EAGAIN</title>
<updated>2019-01-17T08:29:48+00:00</updated>
<author>
<name>Zhang Huan</name>
<email>zhanghuan@open-fs.com</email>
</author>
<published>2018-12-29T07:51:13+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0301a66bda44582e3a48519f2a5d365b0c38090d'/>
<id>0301a66bda44582e3a48519f2a5d365b0c38090d</id>
<content type='text'>
When a socket write returns EAGAIN, the remaining vector count is
returned all the way back to the event handler, causing the follow-up
pollin event to skip handling and the dispatch loop to complain about a
failure, even though a temporary write failure is not an error.

[2018-12-29 07:31:41.772310] E [MSGID: 101191] [event-epoll.c:674:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler

Change-Id: Idf03d120b5f7619eda19720a583cbcc3e7da2504
updates: bz#1666143
Signed-off-by: Zhang Huan &lt;zhanghuan@open-fs.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
When a socket write returns EAGAIN, the remaining vector count is
returned all the way back to the event handler, causing the follow-up
pollin event to skip handling and the dispatch loop to complain about a
failure, even though a temporary write failure is not an error.

[2018-12-29 07:31:41.772310] E [MSGID: 101191] [event-epoll.c:674:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler

Change-Id: Idf03d120b5f7619eda19720a583cbcc3e7da2504
updates: bz#1666143
Signed-off-by: Zhang Huan &lt;zhanghuan@open-fs.com&gt;
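
A hedged sketch of the write side (the write_state_t type and the function
name are invented for illustration): the vectors still left to send stay
queued on the connection for the next pollout, and the value handed back
towards the dispatch loop is 0 for a temporary EAGAIN, with -1 reserved for
real socket errors.

#include &lt;errno.h&gt;
#include &lt;sys/types.h&gt;
#include &lt;sys/uio.h&gt;

/* hypothetical pending-write state kept on the connection */
typedef struct {
    struct iovec *vec;
    int           count;   /* vectors still left to send */
} write_state_t;

/* Returns 0 both on full success and on a temporary EAGAIN (the remaining
 * vectors stay queued in ws until the next pollout); only a real socket
 * error yields -1, so the dispatch loop never complains about a retry. */
int flush_pending(int fd, write_state_t *ws)
{
    while (ws-&gt;count &gt; 0) {
        ssize_t n = writev(fd, ws-&gt;vec, ws-&gt;count);
        if (n &lt; 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return 0;                  /* try again on the next pollout */
            return -1;                     /* genuine socket error */
        }
        /* drop fully written vectors, trim a partially written head vector */
        while (ws-&gt;count &gt; 0 &amp;&amp; (size_t)n &gt;= ws-&gt;vec[0].iov_len) {
            n -= (ssize_t)ws-&gt;vec[0].iov_len;
            ws-&gt;vec++;
            ws-&gt;count--;
        }
        if (ws-&gt;count &gt; 0) {
            ws-&gt;vec[0].iov_base = (char *)ws-&gt;vec[0].iov_base + n;
            ws-&gt;vec[0].iov_len -= (size_t)n;
        }
    }
    return 0;
}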
</pre>
</div>
</content>
</entry>
<entry>
<title>socket: fix counting of socket total_bytes_read and total_bytes_write</title>
<updated>2019-01-17T08:29:32+00:00</updated>
<author>
<name>Zhang Huan</name>
<email>zhanghuan@open-fs.com</email>
</author>
<published>2018-12-27T06:13:48+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=22778ca88977fa061c468ca257aec74d4e7d09f4'/>
<id>22778ca88977fa061c468ca257aec74d4e7d09f4</id>
<content type='text'>
Change-Id: If35d0dbae963facf00ab6bcf07c6e4d1706ed982
updates: bz#1666143
Signed-off-by: Zhang Huan &lt;zhanghuan@open-fs.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: If35d0dbae963facf00ab6bcf07c6e4d1706ed982
updates: bz#1666143
Signed-off-by: Zhang Huan &lt;zhanghuan@open-fs.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Revert "iobuf: Get rid of pre allocated iobuf_pool and use per thread mem pool"</title>
<updated>2019-01-08T11:16:03+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2019-01-04T07:04:50+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=37653efdc7681d1b0f255054ec2f9c9ddd4c8b14'/>
<id>37653efdc7681d1b0f255054ec2f9c9ddd4c8b14</id>
<content type='text'>
This reverts commit b87c397091bac6a4a6dec4e45a7671fad4a11770.

There seems to be some performance regression with the patch, hence it is recommended to revert it.

Updates: #325
Change-Id: Id85d6203173a44fad6cf51d39b3e96f37afcec09
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This reverts commit b87c397091bac6a4a6dec4e45a7671fad4a11770.

There seems to be some performance regression with the patch, hence it is recommended to revert it.

Updates: #325
Change-Id: Id85d6203173a44fad6cf51d39b3e96f37afcec09
</pre>
</div>
</content>
</entry>
<entry>
<title>socket: Remove redundant in_lock in incoming message handling</title>
<updated>2018-12-26T05:15:08+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2018-12-21T04:28:16+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=a1e7acc93a416fec6b87cc5601a9922759156771'/>
<id>a1e7acc93a416fec6b87cc5601a9922759156771</id>
<content type='text'>
A given epoll thread can handle only one incoming (POLLIN) request.
And until the socket is rearmed for listening, it is guaranteed that
there won't be any new incoming requests. As a result, the priv-&gt;in_lock
which guards the socket proto state machine seems redundant.

This patch removes priv-&gt;in_lock.

Change-Id: I26b6ddd852aba8c10385833b85ffd2e53e46cb8c
updates: bz#1467614
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
A given epoll thread can handle only one incoming (POLLIN) request.
And until the socket is rearmed for listening, it is guaranteed that
there won't be any new incoming requests. As a result, the priv-&gt;in_lock
which guards the socket proto state machine seems redundant.

This patch removes priv-&gt;in_lock.

Change-Id: I26b6ddd852aba8c10385833b85ffd2e53e46cb8c
updates: bz#1467614
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
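
A short sketch of why the locking is unnecessary, phrased in terms of the
standard EPOLLONESHOT mechanism (an assumption for illustration, not
necessarily how the glusterfs event layer rearms sockets): once an event is
delivered for a socket, epoll will not report that socket again until it is
explicitly rearmed, so only one thread at a time can be inside the proto
state machine for it.

#include &lt;sys/epoll.h&gt;

/* register the socket one-shot: after one event is delivered, the fd is
 * disabled until it is rearmed, so a second thread cannot race into the
 * handler for the same socket */
int register_oneshot(int epfd, int fd, void *conn)
{
    struct epoll_event ev = { 0 };

    ev.events   = EPOLLIN | EPOLLONESHOT;
    ev.data.ptr = conn;
    return epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &amp;ev);
}

/* called only after the handler has finished consuming the incoming data;
 * until this rearm no new POLLIN can arrive for fd, which is what makes a
 * lock around the proto state machine redundant */
int rearm_socket(int epfd, int fd, void *conn)
{
    struct epoll_event ev = { 0 };

    ev.events   = EPOLLIN | EPOLLONESHOT;
    ev.data.ptr = conn;
    return epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &amp;ev);
}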
</pre>
</div>
</content>
</entry>
<entry>
<title>rdma: fix possible buffer overflow</title>
<updated>2018-12-19T11:24:57+00:00</updated>
<author>
<name>Rinku Kothiya</name>
<email>rkothiya@redhat.com</email>
</author>
<published>2018-12-17T14:25:20+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e3ec41af9a9f4d906dd7b512b3f4f91a6f338f4b'/>
<id>e3ec41af9a9f4d906dd7b512b3f4f91a6f338f4b</id>
<content type='text'>
Used snprintf() instead of sprintf(), and if the source string is bigger
than the destination then a warning message is logged.

clang warning: ‘%s’ directive writing up to 1024 bytes into a region
of size 108.

updates: bz#1622665

Change-Id: Ia5e7c53d35d8178dd2c75708698599fe8bded5de
Signed-off-by: Rinku Kothiya &lt;rkothiya@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Used snprintf() instead of sprintf(), and if the source string is bigger
than the destination then a warning message is logged.

clang warning: ‘%s’ directive writing up to 1024 bytes into a region
of size 108.

updates: bz#1622665

Change-Id: Ia5e7c53d35d8178dd2c75708698599fe8bded5de
Signed-off-by: Rinku Kothiya &lt;rkothiya@redhat.com&gt;
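
A minimal illustration of the pattern (the buffer size, strings and warning
text below are made up; this is not the actual rdma code): snprintf() bounds
the copy to the destination size, and its return value reveals whether the
source would have overflowed, so a warning can be logged instead of writing
past the buffer.

#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

int main(void)
{
    char dest[108];                         /* small destination region */
    const char *src = "a-device-path-that-could-be-up-to-1024-bytes-long";
    int n;

    /* snprintf never writes more than sizeof(dest) bytes and returns the
     * length the output would have had, which is how truncation is detected */
    n = snprintf(dest, sizeof(dest), "%s", src);
    if (n &lt; 0 || (size_t)n &gt;= sizeof(dest))
        fprintf(stderr, "warning: source string (%zu bytes) truncated to "
                        "%zu bytes\n", strlen(src), sizeof(dest) - 1);
    return 0;
}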
</pre>
</div>
</content>
</entry>
<entry>
<title>iobuf: Get rid of pre allocated iobuf_pool and use per thread mem pool</title>
<updated>2018-12-18T09:35:24+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2018-11-21T06:39:39+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=b87c397091bac6a4a6dec4e45a7671fad4a11770'/>
<id>b87c397091bac6a4a6dec4e45a7671fad4a11770</id>
<content type='text'>
The current implementation of iobuf_pool has two problems:
- a preallocation of 12.5MB of memory, which limits the scale factor of
  gluster processes due to RAM requirements
- lock contention, as the current implementation has one global
  iobuf_pool lock. Credit for debugging and addressing this goes to
  Krutika Dhananjay &lt;kdhananj@redhat.com&gt;. Issue: #410

Hence the iobuf implementation is changed to use a per-thread mem pool.
This may theoretically appear to cause a perf dip, as there is no
preallocation, but the per-thread mem pool should not have a significant
perf impact because the last allocated memory is kept alive for
subsequent allocs for some time. The worst case is iobufs of random
sizes being requested each time; the best case is iobuf requests of the
same size. In the perf tests, this patch did not seem to cause any perf
decrease.

Note that, with this patch, rdma performance is going to degrade
drastically. One of the previous patchsets had fixes to avoid degrading
rdma perf, but rdma is neither supported nor tested [1]. Hence the
decision was to not carry rdma code that is untested and unsupported.

[1] https://lists.gluster.org/pipermail/gluster-users.old/2018-July/034400.html

Updates: #325
Change-Id: Ic2ef3bd498f9250dea25f25ba0c01fde19584b27
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The current implementation of iobuf_pool has two problems:
- a preallocation of 12.5MB of memory, which limits the scale factor of
  gluster processes due to RAM requirements
- lock contention, as the current implementation has one global
  iobuf_pool lock. Credit for debugging and addressing this goes to
  Krutika Dhananjay &lt;kdhananj@redhat.com&gt;. Issue: #410

Hence the iobuf implementation is changed to use a per-thread mem pool.
This may theoretically appear to cause a perf dip, as there is no
preallocation, but the per-thread mem pool should not have a significant
perf impact because the last allocated memory is kept alive for
subsequent allocs for some time. The worst case is iobufs of random
sizes being requested each time; the best case is iobuf requests of the
same size. In the perf tests, this patch did not seem to cause any perf
decrease.

Note that, with this patch, rdma performance is going to degrade
drastically. One of the previous patchsets had fixes to avoid degrading
rdma perf, but rdma is neither supported nor tested [1]. Hence the
decision was to not carry rdma code that is untested and unsupported.

[1] https://lists.gluster.org/pipermail/gluster-users.old/2018-July/034400.html

Updates: #325
Change-Id: Ic2ef3bd498f9250dea25f25ba0c01fde19584b27
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
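
A rough sketch of the idea, not the glusterfs mem-pool implementation (the
function names are invented): each thread caches its most recently freed
buffer, so a follow-up request of the same size is served without touching
any global pool or lock; requests of ever-changing sizes fall back to plain
allocation, which is the worst case described above.

#include &lt;stdlib.h&gt;

/* per-thread, single-slot cache: no global pool, hence no global lock */
static __thread void  *cached_buf  = NULL;
static __thread size_t cached_size = 0;

void *iobuf_get_sketch(size_t size)
{
    if (cached_buf &amp;&amp; cached_size == size) {  /* best case: same size again */
        void *buf = cached_buf;
        cached_buf = NULL;
        return buf;
    }
    return malloc(size);                      /* worst case: random sizes */
}

void iobuf_put_sketch(void *buf, size_t size)
{
    /* keep the last freed buffer alive for the next alloc of the same size */
    free(cached_buf);
    cached_buf  = buf;
    cached_size = size;
}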
</pre>
</div>
</content>
</entry>
</feed>
