<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/libglusterfs, branch v3.10.2</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: socketfile &amp; pidfile related fixes for brick multiplexing feature</title>
<updated>2017-05-10T10:42:05+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2017-05-08T13:59:22+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=38496dd45780e651647c294b782268557ce31836'/>
<id>38496dd45780e651647c294b782268557ce31836</id>
<content type='text'>
Problem: With brick multiplexing on, after restarting glusterd the CLI does
         not show the pid of all brick processes in all volumes.

Solution: With brick-mux on, all local brick processes communicate through one
          UNIX socket, but as per the current code (glusterd_brick_start)
          glusterd tries to communicate with a separate UNIX socket for each
          volume, with a path derived from the brick name and volume name.
          Because of the multiplexing design only one UNIX socket is actually
          opened, so glusterd throws a poller error and cannot fetch the
          correct status of the brick process for the cli process.
          To resolve the problem, add a new function
          glusterd_set_socket_filepath_for_mux, called by glusterd_brick_start
          to validate the existence of the socket path.
          To avoid continuous EPOLLERR errors in the logs, update the
          socket_connect code.
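
A minimal sketch of the idea (hypothetical names and an illustrative path
format, not the actual glusterd_set_socket_filepath_for_mux()): with
brick-mux on, every local brick is reachable via the one shared socket, so
the per-brick path must not be used.

    #include &lt;stdio.h&gt;
    #include &lt;stddef.h&gt;

    /* Pick the socket path that status queries should connect to. */
    void set_brick_socket_path(char *buf, size_t len, int brick_mux,
                               const char *shared_sock, /* mux socket */
                               const char *volname, const char *brick)
    {
        if (brick_mux)
            /* one socket serves all multiplexed bricks */
            snprintf(buf, len, "%s", shared_sock);
        else
            /* legacy per-volume/per-brick socket */
            snprintf(buf, len, "/var/run/gluster/%s-%s.socket",
                     volname, brick);
    }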

Test:     To reproduce the issue, follow the steps below:
          1) Create two distributed volumes (dist1 and dist2)
          2) Set cluster.brick-multiplex to on
          3) Kill glusterd
          4) Run the command "gluster v status"
          After applying the patch, it shows the correct pid for all volumes.

&gt; BUG: 1444596
&gt; Change-Id: I5d10af69dea0d0ca19511f43870f34295a54a4d2
&gt; Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
&gt; Reviewed-on: https://review.gluster.org/17101
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt; (cherry picked from commit 21c7f7baccfaf644805e63682e5a7d2a9864a1e6)

Change-Id: I1892c80b9ffa93974f20c92d421660bcf93c4cda
BUG: 1449002
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17210
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
</content>
</entry>
<entry>
<title>core: fix synclocks' handling of "woken" flag</title>
<updated>2017-04-13T15:39:09+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2017-03-20T16:31:33+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=f1a64f7c75fdc6a0ebea71ed2827d3dbc12761b5'/>
<id>f1a64f7c75fdc6a0ebea71ed2827d3dbc12761b5</id>
<content type='text'>
The "woken" flag wasn't being reset when it should have been, leading
(eventually) to a SEGV when someone tried to follow a synclock's waitq
to a task structure that had been freed while still on the queue.  See
the bug report for (far) more detail.
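
A rough pthread-based analogue (not the actual synctask code) of the
pattern this fix restores: the "woken" flag must be reset before each
wait, or a stale value lets a freed task linger on the waitq.

    #include &lt;pthread.h&gt;

    typedef struct {
        pthread_mutex_t mtx;
        pthread_cond_t  cond;
        int             woken;
    } waiter_t;

    void waiter_wait(waiter_t *w)
    {
        pthread_mutex_lock(&amp;w-&gt;mtx);
        w-&gt;woken = 0;                     /* reset before sleeping */
        while (!w-&gt;woken)                 /* tolerate spurious wakeups */
            pthread_cond_wait(&amp;w-&gt;cond, &amp;w-&gt;mtx);
        pthread_mutex_unlock(&amp;w-&gt;mtx);
    }

    void waiter_wake(waiter_t *w)
    {
        pthread_mutex_lock(&amp;w-&gt;mtx);
        w-&gt;woken = 1;
        pthread_cond_signal(&amp;w-&gt;cond);
        pthread_mutex_unlock(&amp;w-&gt;mtx);
    }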

Backport of:
&gt; Commit 31377765dbbb8d49292c4362837a695adcbc6cb4
&gt; BUG: 1434062
&gt; Reviewed-on: https://review.gluster.org/16926

Change-Id: I5cd9ae1bcb831555274108b292181ec2a29b6d95
BUG: 1441474
Signed-off-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
Reviewed-on: https://review.gluster.org/17043
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterfsd+libglusterfs: add null checks during attach</title>
<updated>2017-03-10T19:51:13+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2017-03-09T17:49:27+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=41eba3545c46c4cd0b9fcf6fc87284adc64ebcf5'/>
<id>41eba3545c46c4cd0b9fcf6fc87284adc64ebcf5</id>
<content type='text'>
It's possible (though unlikely) that we could get a brick-attach
request while we're not ready to process it (ctx-&gt;active not set yet).
Add code to guard against this possibility, and return appropriate
error indicators.
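
A minimal sketch of the guard (hypothetical names, with a generic type
standing in for the glusterfs context):

    typedef struct {
        void *active;   /* active translator graph; NULL until ready */
    } ctx_t;

    int handle_attach(ctx_t *ctx)
    {
        if (ctx == NULL || ctx-&gt;active == NULL)
            return -1;          /* not ready: clean error indicator */
        /* ... proceed with attaching the brick ... */
        return 0;
    }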

Backport of:
&gt; 90b2b9b29f552fe9ab53de5c4123003522399e6d
&gt; BUG: 1430860
&gt; Reviewed-on: https://review.gluster.org/16883

Change-Id: Icb3bc52ce749258a3f03cbbbdf4c2320c5c541a0
BUG: 1422769
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16888
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/ec: Don't trigger data/metadata heal on Lookups</title>
<updated>2017-02-27T15:34:08+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2017-01-25T10:01:44+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=27ac070dc9612cfcd591464dbaa40ed52b84e23f'/>
<id>27ac070dc9612cfcd591464dbaa40ed52b84e23f</id>
<content type='text'>
Problem-1:
If a Lookup, which doesn't take any locks, observes a version mismatch, it
can't be trusted. If we launch a heal based on this information, it will lead
to self-heals which will hurt I/O performance in the cases where the Lookup
is wrong. Since the self-heal daemon, and operations on the inode from clients
which do take locks, can still trigger heals, we can choose not to attempt a
heal on Lookup.
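
A sketch of the resulting policy (hypothetical names, not the actual
ec_need_heal() logic):

    typedef enum { HEAL_NONE, HEAL_NAME_ONLY, HEAL_FULL } heal_kind_t;

    heal_kind_t decide_heal(int version_mismatch, int op_takes_locks)
    {
        if (!version_mismatch)
            return HEAL_NONE;
        /* a lock-less Lookup may report a false mismatch: allow only
         * the cheap 'name' heal, never data/metadata heal */
        return op_takes_locks ? HEAL_FULL : HEAL_NAME_ONLY;
    }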

Problem-2:
Fixed spurious failure of
tests/bitrot/bug-1373520.t
For the issues above, what was happening was that ec_heal_inspect()
was preventing 'name' heal from happening.

Problem-3:
tests/basic/ec/ec-background-heals.t
To be honest I don't know what the problem was; while fixing
the 2 problems above, I made some changes to ec_heal_inspect() and
ec_need_heal(), after which, when I tried to recreate the spurious
failure, it just didn't happen even after a long time.

 &gt;BUG: 1414287
 &gt;Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
 &gt;Change-Id: Ife2535e1d0b267712973673f6d474e288f3c6834
 &gt;Reviewed-on: https://review.gluster.org/16468
 &gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
 &gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
 &gt;Reviewed-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
 &gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
 &gt;Reviewed-by: Ashish Pandey &lt;aspandey@redhat.com&gt;

BUG: 1419824
Change-Id: I340b48cd416b07890bf3a5427562f5e3f88a481f
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16765
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>posix: Fix creation of files with S_ISVTX on FreeBSD</title>
<updated>2017-02-20T15:42:51+00:00</updated>
<author>
<name>Xavier Hernandez</name>
<email>xhernandez@datalab.es</email>
</author>
<published>2017-01-10T16:21:56+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=14b26480e26dbb2e40db039c4fad95548247dddd'/>
<id>14b26480e26dbb2e40db039c4fad95548247dddd</id>
<content type='text'>
On FreeBSD the S_ISVTX flag is completely ignored when creating a
regular file. Since gluster needs to create files with this flag set,
especially for DHT link files, it's necessary to force the flag.

This fix does this by calling fchmod() after creating a file that
must have this flag set.
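
A self-contained illustration of the workaround (not the actual posix
translator code):

    #include &lt;sys/stat.h&gt;
    #include &lt;fcntl.h&gt;
    #include &lt;unistd.h&gt;

    /* Create the file, then force the sticky bit with fchmod(),
     * since FreeBSD's open() ignores S_ISVTX in the mode. */
    int create_with_sticky_bit(const char *path, mode_t mode)
    {
        int fd = open(path, O_CREAT | O_EXCL | O_WRONLY, mode);
        if (fd &lt; 0)
            return -1;

        if ((mode &amp; S_ISVTX) &amp;&amp; fchmod(fd, mode) != 0) {
            close(fd);
            unlink(path);
            return -1;
        }
        return fd;
    }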

&gt; Change-Id: I51eecfe4642974df6106b9084a0b144835a4997a
&gt; BUG: 1411228
&gt; Signed-off-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
&gt; Reviewed-on: https://review.gluster.org/16417
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
&gt; Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;

Change-Id: I2087516383bd132c59bbab98eda8f2243a2163fe
BUG: 1424973
Signed-off-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
Reviewed-on: https://review.gluster.org/16686
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>libglusterfs: Fix a crash due to race between inode_ctx_set and inode_ref</title>
<updated>2017-02-20T12:25:29+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2017-02-15T05:48:31+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=d10c5375b33520f36fd6acbd47b617d43f529ca2'/>
<id>d10c5375b33520f36fd6acbd47b617d43f529ca2</id>
<content type='text'>
Issue:
Currently the inode ref count is guarded by inode_table-&gt;lock, and
inode_ctx is guarded by inode-&gt;lock. With the new patch [1],
inode_ref was modified to change the inode_ctx to track the ref
count per xlator. Thus an inode_ref performed under inode_table-&gt;lock
modifies inode_ctx, which must only be modified under inode-&gt;lock.

Solution:
When an inode is created, the inode_ctx holder is allocated for all the
xlators. Hence, in inode_ctx_set, instead of using the first free index in
the inode_ctx holder, we can use a pre-decided index for every xlator in the
graph.
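
A sketch of the idea (hypothetical names): each xlator writes only its own
fixed slot, so no free-slot search -- and no other slot -- is touched
during set.

    #include &lt;stdint.h&gt;

    typedef struct {
        uint64_t value;   /* per-xlator context value */
    } inode_ctx_slot_t;

    /* xl_id is assigned once, when the graph is built */
    static inline void inode_ctx_set_fixed(inode_ctx_slot_t *slots,
                                           int xl_id, uint64_t value)
    {
        slots[xl_id].value = value;   /* index pre-decided, no scan */
    }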

Credits: Pranith K &lt;pkarampu@redhat.com&gt;

[1] http://review.gluster.org/13736

&gt; Reviewed-on: https://review.gluster.org/16622
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;

Change-Id: I1bfe111c211fcc4fcd761bba01dc87c4c69b5170
BUG: 1423385
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16655
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: add a cli command to trigger a statedump on a client</title>
<updated>2017-02-14T01:53:10+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2016-01-22T16:44:21+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=99ce0d43fffa9b2094edcd4917df2ff9ca7afe5d'/>
<id>99ce0d43fffa9b2094edcd4917df2ff9ca7afe5d</id>
<content type='text'>
With this, we will be able to trigger statedumps on remote Gluster
clients, mainly targeted at applications using libgfapi.

Design:
The SIGUSR1 signal is the most common way of taking a statedump in Gluster,
but it cannot be used for libgfapi-based processes, as the process
loading the library might have already consumed the SIGUSR1 signal. Hence
we go the command route instead.

One has to issue a Gluster command to initiate a statedump on the
libgfapi-based client. The command takes a hostname and PID as
arguments. All the glusterds in the cluster check whether they are connected
to the specified hostname, and send an RPC request to all the connected
clients from that hostname (via the mgmt connection).
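
A usage sketch (argument names are placeholders; confirm the exact syntax
with "gluster volume help" on your build):

    gluster volume statedump &lt;VOLNAME&gt; client &lt;HOSTNAME&gt; &lt;PID&gt;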

&gt; URL: http://review.gluster.org/16357
&gt; BUG: 1169302
&gt; Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
&gt; [ndevos: minor fixes and split patch in smaller pieces]
&gt; Reviewed-on-master: https://review.gluster.org/9228
&gt; Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
&gt; Tested-by: Niels de Vos &lt;ndevos@redhat.com&gt;
&gt; Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
&gt; Reviewed-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;

BUG: 1418981
Change-Id: Icbe4d2f026b32a2c7d5535e1bfb2cdaaff042e91
Signed-off-by: Shyam &lt;srangana@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16601
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>libglusterfs+changetimerecorder: reduce log noise</title>
<updated>2017-02-10T19:23:08+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2017-02-08T20:21:46+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=14c622976bcc30bbd76e420dc220c005ea786135'/>
<id>14c622976bcc30bbd76e420dc220c005ea786135</id>
<content type='text'>
The logging about translator options is so verbose that it
significantly slows down scalability tests - sometimes even to the
point where it induces timing-related failures.  Quiet, please.

Backport of:
&gt; Change-Id: If0766e2a80746bba586e67e6019ff7084d68b425
&gt; Reviewed-on: https://review.gluster.org/16569

Change-Id: I65117e69427ce1d6a2490832c5c9ab57ee29004e
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16599
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>libglusterfs: fix serious leak of xlator_t structures</title>
<updated>2017-02-09T21:01:45+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2017-02-09T00:45:46+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=226d7c442509172b2209515841ef499ec12fc9f2'/>
<id>226d7c442509172b2209515841ef499ec12fc9f2</id>
<content type='text'>
There's a lot of logic (and some long comments) around how to free
these structures safely, but then we didn't do it.  Now we do.

Backport of:
&gt; Change-Id: I9731ae75c60e99cc43d33d0813a86912db97fd96
&gt; BUG: 1420571
&gt; Reviewed-on: https://review.gluster.org/16570

Change-Id: I54415b614b277224196f5723bce5a4c5a404d881
BUG: 1420810
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16583
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>libglusterfs: make memory pools more thread-friendly</title>
<updated>2017-02-03T00:44:09+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2016-10-14T14:04:07+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=1ed73ffa16cb7fe4415acbdb095da6a4628f711a'/>
<id>1ed73ffa16cb7fe4415acbdb095da6a4628f711a</id>
<content type='text'>
Early multiplexing tests revealed *massive* contention on certain
pools' global locks - especially for dictionaries and secondarily for
call stubs.  For the thread counts that multiplexing can create, a
more lock-free solution is clearly needed.  Also, the current mem-pool
implementation does a poor job releasing memory back to the system,
artificially inflating memory usage to match whatever the worst case
was since the process started.  This is bad in general, but especially
so for multiplexing where there are more pools and a major point of
the whole exercise is to reduce memory consumption.

The basic ideas for the new design are these (a rough sketch follows the list):

  There is one pool, globally, for each power-of-two size range.
  Every attempt to create a new pool within this range will instead
  add a reference to the existing pool.

  Instead of adding pools for each translator within each multiplexed
  brick (potentially infinite and quite possibly thousands), we
  allocate one set of size-based pools per *thread* (hundreds at
  worst).

  Each per-thread pool is divided into hot and cold lists.  Every
  allocation first attempts to use the hot list, then the cold list.
  When objects are freed, they always go on the hot list.

  There is one global "pool sweeper" thread, which periodically
  reclaims everything in each pool's cold list and then "demotes" the
  current hot list to be the new cold list.

  For normal allocation activity, only a per-thread lock need be
  taken, and even that only to guard against very rare contention from
  the pool sweeper.  When threads start and stop, a global lock must
  be taken to add them to the pool sweeper's list.  Lock contention is
  therefore extremely low, and the hot/cold lists also provide good
  locality.
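
A rough, self-contained sketch of the per-thread hot/cold scheme
(hypothetical names, not the actual mem-pool code; each pool serves one
power-of-two size class, and objects are assumed large enough for the
freelist linkage):

   #include &lt;pthread.h&gt;
   #include &lt;stdlib.h&gt;

   typedef struct pooled { struct pooled *next; } pooled_t;

   typedef struct {
       pthread_mutex_t lock;  /* owner thread vs. sweeper only */
       pooled_t *hot;         /* recently freed, reused first */
       pooled_t *cold;        /* idle for one sweep interval */
   } per_thread_pool_t;

   void *pool_alloc(per_thread_pool_t *p, size_t size)
   {
       pooled_t *obj = NULL;
       pthread_mutex_lock(&amp;p-&gt;lock);
       if (p-&gt;hot) {                  /* try the hot list first... */
           obj = p-&gt;hot;  p-&gt;hot  = obj-&gt;next;
       } else if (p-&gt;cold) {          /* ...then the cold list */
           obj = p-&gt;cold; p-&gt;cold = obj-&gt;next;
       }
       pthread_mutex_unlock(&amp;p-&gt;lock);
       return obj ? (void *)obj : malloc(size);
   }

   void pool_free(per_thread_pool_t *p, void *ptr)
   {
       pooled_t *obj = ptr;
       pthread_mutex_lock(&amp;p-&gt;lock);
       obj-&gt;next = p-&gt;hot;            /* frees always go on the hot list */
       p-&gt;hot = obj;
       pthread_mutex_unlock(&amp;p-&gt;lock);
   }

   /* The sweeper, each interval: free() everything on 'cold', then
    * demote: cold = hot; hot = NULL. */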

A more complete explanation (of a similar earlier design) can be found
here:

 http://www.gluster.org/pipermail/gluster-devel/2016-October/051160.html

Backport of:
&gt; Change-Id: I5bc8a1ba57cfb553998f979a498886e0d006e665
&gt; BUG: 1385758
&gt; Reviewed-on: https://review.gluster.org/15645

BUG: 1418091
Change-Id: Id09bbea41f65fcd245822607bc204f3a34904dc2
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16531
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
</feed>
