<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-handler.c, branch v6.0rc1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: get-state command should not fail if any brick is gone bad</title>
<updated>2019-02-05T14:40:25+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2019-02-04T09:37:14+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=90922d20f55e26b23bfab0fbc4e179e305c38037'/>
<id>90922d20f55e26b23bfab0fbc4e179e305c38037</id>
<content type='text'>
Problem: the get-state command will error out if any of the underlying
bricks of the volumes in the cluster goes bad.

It is expected that the get-state command should not error out, but
should generate its output successfully.

Solution: In glusterd_get_state(), a statfs call is made on the brick
path of every brick of every volume to calculate the total and free
memory available. If the statfs call fails for any brick, we should not
error out, but should instead report both the total memory and the free
memory of that brick as 0.

This patch also handles a statfs failure scenario in
glusterd_store_retrieve_bricks().
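
A hypothetical sketch of the intended fallback (illustrative names,
statvfs() standing in for the statfs call; not the exact patch):

    #include &lt;stdint.h&gt;
    #include &lt;string.h&gt;
    #include &lt;sys/statvfs.h&gt;

    /* Fetch brick sizes for the get-state output.  On failure, report
     * 0 for both values instead of failing the whole request. */
    static void
    get_brick_sizes(const char *brick_path, uint64_t *total, uint64_t *free_bytes)
    {
        struct statvfs brickstat;

        memset(&amp;brickstat, 0, sizeof(brickstat));
        if (statvfs(brick_path, &amp;brickstat) != 0) {
            /* Brick has gone bad: do not error out, just report zeroes. */
            *total = 0;
            *free_bytes = 0;
            return;
        }
        *total = (uint64_t)brickstat.f_blocks * brickstat.f_frsize;
        *free_bytes = (uint64_t)brickstat.f_bfree * brickstat.f_frsize;
    }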

fixes: bz#1672205

Change-Id: Ia9e8a1d8843b65949d72fd6809bd21d39b31ad83
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem: the get-state command will error out if any of the underlying
bricks of the volumes in the cluster goes bad.

It is expected that the get-state command should not error out, but
should generate its output successfully.

Solution: In glusterd_get_state(), a statfs call is made on the brick
path of every brick of every volume to calculate the total and free
memory available. If the statfs call fails for any brick, we should not
error out, but should instead report both the total memory and the free
memory of that brick as 0.

This patch also handles a statfs failure scenario in
glusterd_store_retrieve_bricks().
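
A hypothetical sketch of the intended fallback (illustrative names,
statvfs() standing in for the statfs call; not the exact patch):

    #include &lt;stdint.h&gt;
    #include &lt;string.h&gt;
    #include &lt;sys/statvfs.h&gt;

    /* Fetch brick sizes for the get-state output.  On failure, report
     * 0 for both values instead of failing the whole request. */
    static void
    get_brick_sizes(const char *brick_path, uint64_t *total, uint64_t *free_bytes)
    {
        struct statvfs brickstat;

        memset(&amp;brickstat, 0, sizeof(brickstat));
        if (statvfs(brick_path, &amp;brickstat) != 0) {
            /* Brick has gone bad: do not error out, just report zeroes. */
            *total = 0;
            *free_bytes = 0;
            return;
        }
        *total = (uint64_t)brickstat.f_blocks * brickstat.f_frsize;
        *free_bytes = (uint64_t)brickstat.f_bfree * brickstat.f_frsize;
    }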

fixes: bz#1672205

Change-Id: Ia9e8a1d8843b65949d72fd6809bd21d39b31ad83
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>rpc: use address-family option from vol file</title>
<updated>2019-01-22T13:47:19+00:00</updated>
<author>
<name>Milind Changire</name>
<email>mchangir@redhat.com</email>
</author>
<published>2019-01-22T06:40:59+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=b6c417785e54620331ae35d6971fe8bef98b4619'/>
<id>b6c417785e54620331ae35d6971fe8bef98b4619</id>
<content type='text'>
This patch helps enable IPv6 connections in the cluster.
Without setting this option explicitly, the default address-family is IPv4.

When address-family is set to "inet6" in the /etc/glusterfs/glusterd.vol
file, the mount command-line also needs to have
-o xlator-option="transport.address-family=inet6" added to it.

This option also gets added to the brick command-line.
Snapshot and gfapi use-cases should also use this option to pass in the
inet6 address-family.
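
An illustrative end-to-end setup (a sketch; the server name, volume
name and mount point are placeholders):

    # /etc/glusterfs/glusterd.vol (only the relevant option shown)
    volume management
        type mgmt/glusterd
        option transport.address-family inet6
    end-volume

    # Matching client mount:
    mount -t glusterfs \
          -o xlator-option="transport.address-family=inet6" \
          server1:/myvol /mnt/myvol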

Change-Id: I97db91021af27bacb6d7578e33ea4817f66d7270
fixes: bz#1635863
Signed-off-by: Milind Changire &lt;mchangir@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This patch helps enable IPv6 connections in the cluster.
Without setting this option explicitly, the default address-family is IPv4.

When address-family is set to "inet6" in the /etc/glusterfs/glusterd.vol
file, the mount command-line also needs to have
-o xlator-option="transport.address-family=inet6" added to it.

This option also gets added to the brick command-line.
Snapshot and gfapi use-cases should also use this option to pass in the
inet6 address-family.
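
An illustrative end-to-end setup (a sketch; the server name, volume
name and mount point are placeholders):

    # /etc/glusterfs/glusterd.vol (only the relevant option shown)
    volume management
        type mgmt/glusterd
        option transport.address-family inet6
    end-volume

    # Matching client mount:
    mount -t glusterfs \
          -o xlator-option="transport.address-family=inet6" \
          server1:/myvol /mnt/myvol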

Change-Id: I97db91021af27bacb6d7578e33ea4817f66d7270
fixes: bz#1635863
Signed-off-by: Milind Changire &lt;mchangir@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Avoid dict_leak in __glusterd_handle_cli_uuid_get function</title>
<updated>2019-01-22T05:12:50+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2019-01-21T12:23:12+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=4deeab02e3d15bd266f24d0f7b28f0e5401fa950'/>
<id>4deeab02e3d15bd266f24d0f7b28f0e5401fa950</id>
<content type='text'>
Change-Id: Iefe08b136044495f6fa2b092c9e8c833efee1400
fixes: bz#1667905
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: Iefe08b136044495f6fa2b092c9e8c833efee1400
fixes: bz#1667905
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: fix crash</title>
<updated>2019-01-13T12:50:46+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2019-01-10T10:47:39+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=741f652769bc074fe85da1485e0e16df6e6766f1'/>
<id>741f652769bc074fe85da1485e0e16df6e6766f1</id>
<content type='text'>
Problem: running "gluster get-state glusterd odir /get-state"
resulted in a glusterd crash.

Cause: In the above command, the output directory was specified
without a trailing "/". When "/" is missing at the end, it is appended
to the path using strcat, which writes past the end of the allocated
buffer. When that buffer is later freed, glusterd crashes because of
the resulting memory corruption.

Solution: Instead of concatenating "/" to output directory, add
it to output filename.
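
A hypothetical sketch of that approach (illustrative names, not the
exact patch): leave the user-supplied output directory untouched and
put the "/" into the output-file path instead:

    #include &lt;stdio.h&gt;

    /* Build "&lt;odir&gt;/&lt;fname&gt;" in a caller-owned buffer; works whether
     * or not odir already ends with "/". */
    static int
    build_state_filepath(const char *odir, const char *fname,
                         char *buf, size_t buflen)
    {
        int ret = snprintf(buf, buflen, "%s/%s", odir, fname);

        return (ret &lt; 0 || (size_t)ret &gt;= buflen) ? -1 : 0;
    }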

Change-Id: I5dc00a71e46fbef4d07fe99ae23b36fb60dec1c2
fixes: bz#1665038
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem: running "gluster get-state glusterd odir /get-state"
resulted in a glusterd crash.

Cause: In the above command, the output directory was specified
without a trailing "/". When "/" is missing at the end, it is appended
to the path using strcat, which writes past the end of the allocated
buffer. When that buffer is later freed, glusterd crashes because of
the resulting memory corruption.

Solution: Instead of concatenating "/" to output directory, add
it to output filename.
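
A hypothetical sketch of that approach (illustrative names, not the
exact patch): leave the user-supplied output directory untouched and
put the "/" into the output-file path instead:

    #include &lt;stdio.h&gt;

    /* Build "&lt;odir&gt;/&lt;fname&gt;" in a caller-owned buffer; works whether
     * or not odir already ends with "/". */
    static int
    build_state_filepath(const char *odir, const char *fname,
                         char *buf, size_t buflen)
    {
        int ret = snprintf(buf, buflen, "%s/%s", odir, fname);

        return (ret &lt; 0 || (size_t)ret &gt;= buflen) ? -1 : 0;
    }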

Change-Id: I5dc00a71e46fbef4d07fe99ae23b36fb60dec1c2
fixes: bz#1665038
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: migrating rebalance commands to mgmt_v3 framework</title>
<updated>2018-12-18T04:01:52+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-11-30T10:46:55+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0b4b111fbd80a5d400a07d61e2b99f230f9be76f'/>
<id>0b4b111fbd80a5d400a07d61e2b99f230f9be76f</id>
<content type='text'>
Current rebalance commands use the op_state machine framework.
This patch ports them to use the mgmt_v3 framework.

Change-Id: I6faf4a6335c2e2f3d54bbde79908a7749e4613e7
fixes: bz#1655827
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Current rebalance commands use the op_state machine framework.
This patch ports them to use the mgmt_v3 framework.

Change-Id: I6faf4a6335c2e2f3d54bbde79908a7749e4613e7
fixes: bz#1655827
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Resolve memory leak in some glusterd functions</title>
<updated>2018-12-10T15:04:48+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2018-12-07T07:05:20+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=2b7b6ff28fa92335613d0b5715acd552cfcfd759'/>
<id>2b7b6ff28fa92335613d0b5715acd552cfcfd759</id>
<content type='text'>
Problem: Functions allocate memory for the req structure, but after
         submitting the request they miss cleaning up that memory.

Solution: Clean up the allocated memory after the request has been
          submitted.
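
A purely illustrative sketch of the pattern being fixed (hypothetical
names; plain malloc/free stand in for the glusterfs allocators):

    #include &lt;stdlib.h&gt;
    #include &lt;string.h&gt;

    struct req_payload {
        char *buf;
        size_t len;
    };

    /* Stub standing in for the actual submit routine. */
    static int
    submit_request_stub(struct req_payload *req)
    {
        (void)req;
        return 0;
    }

    static int
    submit_and_cleanup(const char *data)
    {
        int ret = -1;
        struct req_payload req = {0};

        req.len = strlen(data) + 1;
        req.buf = malloc(req.len);
        if (!req.buf)
            return ret;
        memcpy(req.buf, data, req.len);

        ret = submit_request_stub(&amp;req);

        /* The step the leaking functions missed: release the payload
         * after the request has been submitted. */
        free(req.buf);
        return ret;
    }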

Change-Id: I8f995787ed8986b882f008ccd588670b5d4139f5
updates: bz#1633930
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem: Functions allocate memory for the req structure, but after
         submitting the request they miss cleaning up that memory.

Solution: Clean up the allocated memory after the request has been
          submitted.
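
A purely illustrative sketch of the pattern being fixed (hypothetical
names; plain malloc/free stand in for the glusterfs allocators):

    #include &lt;stdlib.h&gt;
    #include &lt;string.h&gt;

    struct req_payload {
        char *buf;
        size_t len;
    };

    /* Stub standing in for the actual submit routine. */
    static int
    submit_request_stub(struct req_payload *req)
    {
        (void)req;
        return 0;
    }

    static int
    submit_and_cleanup(const char *data)
    {
        int ret = -1;
        struct req_payload req = {0};

        req.len = strlen(data) + 1;
        req.buf = malloc(req.len);
        if (!req.buf)
            return ret;
        memcpy(req.buf, data, req.len);

        ret = submit_request_stub(&amp;req);

        /* The step the leaking functions missed: release the payload
         * after the request has been submitted. */
        free(req.buf);
        return ret;
    }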

Change-Id: I8f995787ed8986b882f008ccd588670b5d4139f5
updates: bz#1633930
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>libglusterfs: Move devel headers under glusterfs directory</title>
<updated>2018-12-05T21:47:04+00:00</updated>
<author>
<name>ShyamsundarR</name>
<email>srangana@redhat.com</email>
</author>
<published>2018-11-29T19:08:06+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=20ef211cfa5b5fcc437484a879fdc5d4c66bbaf5'/>
<id>20ef211cfa5b5fcc437484a879fdc5d4c66bbaf5</id>
<content type='text'>
libglusterfs devel package headers are referenced in code using
program include semantics. While this works, it can be improved,
especially when dealing with out-of-tree xlator builds or, in general,
out-of-tree devel package usage.

Towards this, the following changes are made:
- Moved all devel headers under a glusterfs directory
- Included these headers using system header notation &lt;&gt; in all
code outside of libglusterfs
- Included these headers using own program notation "" within
libglusterfs

This change, although big, just moves the headers around and corrects
how they are included from other sources.

This lets us include libglusterfs headers correctly, without
namespace conflicts.
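
For example, with two representative headers:

    /* In code outside of libglusterfs (xlators, out-of-tree consumers): */
    #include &lt;glusterfs/glusterfs.h&gt;
    #include &lt;glusterfs/xlator.h&gt;

    /* Within libglusterfs itself: */
    #include "glusterfs.h"
    #include "xlator.h"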

Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
Updates: bz#1193929
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
libglusterfs devel package headers are referenced in code using
program include semantics. While this works, it can be improved,
especially when dealing with out-of-tree xlator builds or, in general,
out-of-tree devel package usage.

Towards this, the following changes are made:
- Moved all devel headers under a glusterfs directory
- Included these headers using system header notation &lt;&gt; in all
code outside of libglusterfs
- Included these headers using own program notation "" within
libglusterfs

This change, although big, just moves the headers around and corrects
how they are included from other sources.

This lets us include libglusterfs headers correctly, without
namespace conflicts.
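
For example, with two representative headers:

    /* In code outside of libglusterfs (xlators, out-of-tree consumers): */
    #include &lt;glusterfs/glusterfs.h&gt;
    #include &lt;glusterfs/xlator.h&gt;

    /* Within libglusterfs itself: */
    #include "glusterfs.h"
    #include "xlator.h"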

Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
Updates: bz#1193929
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: migrating profile commands to mgmt_v3 framework</title>
<updated>2018-12-04T16:21:42+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-11-08T13:20:18+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e48678567a5fb01166a065f9e192b80e721e4181'/>
<id>e48678567a5fb01166a065f9e192b80e721e4181</id>
<content type='text'>
Current profile commands use the op_state machine framework.
This patch ports them to use the mgmt_v3 framework.

The following tests were performed on the patch:
case 1:
1. On a 3 node cluster, created and started 3 volumes
2. Mounted all the three volumes and wrote some data
3. Started profile operation for all the volumes
4. Ran "gluster v status" from N1,
       "gluster v profile &lt;volname1&gt; info" from N2,
       "gluster v profile &lt;volname2&gt; info" from N3 simultaneously in a
   loop for around 10000 times
5. Didn't find any cores generated.

case 2:
1. Repeat steps 1, 2, and 3 from case 1.
2. Ran "gluster v status" from N1,
       "gluster v profile &lt;volname1&gt; info" from N2 (terminal 1),
       "gluster v profile &lt;volname2&gt; info" from N2 (terminal 2)
   simultaneously in a loop.
3. No cores were generated.

fixes: bz#1654181
Change-Id: I83044cf5aee3970ef94066c89fcc41783ed468a6
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Current profile commands use the op_state machine framework.
This patch ports them to use the mgmt_v3 framework.

The following tests were performed on the patch:
case 1:
1. On a 3 node cluster, created and started 3 volumes
2. Mounted all the three volumes and wrote some data
3. Started profile operation for all the volumes
4. Ran "gluster v status" from N1,
       "gluster v profile &lt;volname1&gt; info" from N2,
       "gluster v profile &lt;volname2&gt; info" from N3 simultaneously in a
   loop for around 10000 times
5. Didn't find any cores generated.

case 2:
1. Repeat steps 1, 2, and 3 from case 1.
2. Ran "gluster v status" from N1,
       "gluster v profile &lt;volname1&gt; info" from N2 (terminal 1),
       "gluster v profile &lt;volname2&gt; info" from N2 (terminal 2)
   simultaneously in a loop.
3. No cores were generated.

fixes: bz#1654181
Change-Id: I83044cf5aee3970ef94066c89fcc41783ed468a6
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: perform rcu_read_lock/unlock() under cleanup_lock mutex</title>
<updated>2018-12-03T17:03:57+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-11-28T10:43:58+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=2bb0e89e4bb113a93c6e786446a140cd99261af8'/>
<id>2bb0e89e4bb113a93c6e786446a140cd99261af8</id>
<content type='text'>
Problem: glusterd should not try to acquire locks on any resources
once it has received a SIGTERM and cleanup has started. Otherwise we
might hit a segfault, since the thread going through the cleanup path
will be freeing up the resources while some other thread might be
trying to acquire locks on the freed resources.

Solution: perform rcu_read_lock/unlock() under cleanup_lock mutex.
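
A minimal sketch of that solution, assuming a process-wide pthread
mutex named cleanup_lock (the name here is illustrative) and liburcu's
rcu_read_lock()/rcu_read_unlock():

    #include &lt;pthread.h&gt;
    #include &lt;urcu.h&gt;

    static pthread_mutex_t cleanup_lock = PTHREAD_MUTEX_INITIALIZER;

    /* The cleanup path holds cleanup_lock while freeing resources, so
     * RCU read-side locks can never be taken on already-freed state. */
    static void
    rcu_read_lock_under_cleanup(void)
    {
        pthread_mutex_lock(&amp;cleanup_lock);
        rcu_read_lock();
        pthread_mutex_unlock(&amp;cleanup_lock);
    }

    static void
    rcu_read_unlock_under_cleanup(void)
    {
        pthread_mutex_lock(&amp;cleanup_lock);
        rcu_read_unlock();
        pthread_mutex_unlock(&amp;cleanup_lock);
    }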

fixes: bz#1654270
Change-Id: I87a97cfe4f272f74f246d688660934638911ce54
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem: glusterd should not try to acquire locks on any resources
once it has received a SIGTERM and cleanup has started. Otherwise we
might hit a segfault, since the thread going through the cleanup path
will be freeing up the resources while some other thread might be
trying to acquire locks on the freed resources.

Solution: perform rcu_read_lock/unlock() under cleanup_lock mutex.
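
A minimal sketch of that solution, assuming a process-wide pthread
mutex named cleanup_lock (the name here is illustrative) and liburcu's
rcu_read_lock()/rcu_read_unlock():

    #include &lt;pthread.h&gt;
    #include &lt;urcu.h&gt;

    static pthread_mutex_t cleanup_lock = PTHREAD_MUTEX_INITIALIZER;

    /* The cleanup path holds cleanup_lock while freeing resources, so
     * RCU read-side locks can never be taken on already-freed state. */
    static void
    rcu_read_lock_under_cleanup(void)
    {
        pthread_mutex_lock(&amp;cleanup_lock);
        rcu_read_lock();
        pthread_mutex_unlock(&amp;cleanup_lock);
    }

    static void
    rcu_read_unlock_under_cleanup(void)
    {
        pthread_mutex_lock(&amp;cleanup_lock);
        rcu_read_unlock();
        pthread_mutex_unlock(&amp;cleanup_lock);
    }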

fixes: bz#1654270
Change-Id: I87a97cfe4f272f74f246d688660934638911ce54
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd/mux: Optimize brick disconnect handler code</title>
<updated>2018-11-18T06:10:31+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2018-11-15T07:48:36+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=b4faa9e7a25bdf0582f8b0fd69aa1381c307a61e'/>
<id>b4faa9e7a25bdf0582f8b0fd69aa1381c307a61e</id>
<content type='text'>
Removed an unnecessary iteration in the brick disconnect handler
when brick multiplexing is enabled.

Change-Id: I62dd3337b7e7da085da5d76aaae206e0b0edff9f
fixes: bz#1650115
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Removed an unnecessary iteration in the brick disconnect handler
when brick multiplexing is enabled.

Change-Id: I62dd3337b7e7da085da5d76aaae206e0b0edff9f
fixes: bz#1650115
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
