<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-utils.h, branch v7.0alpha</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: Optimize code to copy dictionary in handshake code path</title>
<updated>2019-05-31T14:20:25+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2019-05-17T13:56:48+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=f8f09178bb890924a8050b466cc2e7a0a30e35a7'/>
<id>f8f09178bb890924a8050b466cc2e7a0a30e35a7</id>
<content type='text'>
Problem: When a high number of volumes (around 2000) are configured,
         glusterd hits a bottleneck during the handshake while
         copying the dictionary.

Solution: To avoid the bottleneck, serialize the dictionary instead
          of copying key-value pairs one by one.

Change-Id: I9fb332f432e4f915bc3af8dcab38bed26bda2b9a
fixes: bz#1711297
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
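
As a rough illustration of the approach (a hedged sketch, not the code
from this patch; dict_allocate_and_serialize and dict_unserialize are
the libglusterfs helpers assumed here, and the wrapper name is made up):

    #include &lt;glusterfs/dict.h&gt;
    #include &lt;glusterfs/mem-pool.h&gt;

    /* Hypothetical helper: hand the contents of src to dst via one
     * serialize/unserialize round trip instead of per-key copies.
     * dst is assumed to be an already allocated dict. */
    static int
    copy_dict_via_serialization (dict_t *src, dict_t *dst)
    {
        char *buf = NULL;
        unsigned int len = 0;
        int ret = -1;

        /* One pass over src: allocate a buffer and serialize all pairs. */
        ret = dict_allocate_and_serialize (src, &amp;buf, &amp;len);
        if (ret)
            goto out;

        /* One pass on the receiving side: rebuild the pairs into dst. */
        ret = dict_unserialize (buf, len, &amp;dst);

    out:
        GF_FREE (buf);
        return ret;
    }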
</content>
</entry>
<entry>
<title>glusterd/tier: remove tier related code from glusterd</title>
<updated>2019-05-27T07:50:24+00:00</updated>
<author>
<name>Hari Gowtham</name>
<email>hgowtham@redhat.com</email>
</author>
<published>2019-05-02T13:03:34+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e1cc4275583dfd8ae8d0433587f39854c1851794'/>
<id>e1cc4275583dfd8ae8d0433587f39854c1851794</id>
<content type='text'>
The handler functions are pointed to dummy functions.
The switch-case handling for tier has also been moved to
the default case to avoid issues if tier is reintroduced.

The tier changes in DHT remain as they are.

updates: bz#1693692

Change-Id: I80d80c9a3eb862b4440a36b31ae82b2e9d92e4dc
Signed-off-by: Hari Gowtham &lt;hgowtham@redhat.com&gt;
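
Illustratively (a simplified sketch, not the glusterd code itself;
dispatch_op and handle_add_brick are made-up names, and GD_OP_ADD_BRICK
stands in for any still-supported op), the switch-case change amounts
to this pattern:

    static int
    dispatch_op (int op, dict_t *dict)
    {
        int ret = -1;

        switch (op) {
        case GD_OP_ADD_BRICK:
            ret = handle_add_brick (dict);
            break;
        /* No tier case labels here any more; if tier is ever
         * reintroduced, its ops land in the default branch. */
        default:
            ret = -1;
            break;
        }

        return ret;
    }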
</content>
</entry>
<entry>
<title>glusterd: remove glusterd_check_volume_exists() call</title>
<updated>2019-04-11T15:19:30+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2019-04-08T13:24:46+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=7108913337f24d3d7a39d919a29872fe50072cf4'/>
<id>7108913337f24d3d7a39d919a29872fe50072cf4</id>
<content type='text'>
The same functionality is already covered by glusterd_volinfo_find(), so the separate call is removed.

Updates: bz#1193929
Change-Id: I2308c5fa9b2ca9edaa95f172d0bd914103808c36
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
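
For context, a minimal sketch of the replacement pattern (the signature
of glusterd_volinfo_find is assumed from the glusterd sources; the
wrapper name is made up):

    /* glusterd_volinfo_find() already reports whether a volume exists:
     * it returns 0 and fills volinfo when the volume is known. */
    static gf_boolean_t
    volume_exists (char *volname)
    {
        glusterd_volinfo_t *volinfo = NULL;

        if (glusterd_volinfo_find (volname, &amp;volinfo) == 0)
            return _gf_true;

        return _gf_false;
    }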
</content>
</entry>
<entry>
<title>mgmt/shd: Implement multiplexing in self heal daemon</title>
<updated>2019-04-01T03:44:23+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2019-02-25T04:35:32+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=bc3694d7cfc868a2ed6344ea123faf19fce28d13'/>
<id>bc3694d7cfc868a2ed6344ea123faf19fce28d13</id>
<content type='text'>
Problem:

The shd daemon is per node, which means it creates one graph
with all volumes on it. While this is great for utilizing
resources, it is not so good in terms of performance and manageability.

Self-heal daemons do not have the capability to automatically
reconfigure their graphs, so each time any configuration change
happens to the volumes (replicate/disperse), we need to restart
shd to bring the changes into the graph.

Because of this, all ongoing heals for all other volumes have to be
stopped in the middle and restarted all over again.

Solution:

This change makes shd a per-volume daemon, so that a graph
is generated for each volume.

When we want to start/reconfigure shd for a volume, we first search
for an existing shd running on the node; if there is none, we
start a new process. If a shd daemon is already running, then
we simply detach the graph for that volume and reattach the updated
graph for it. This does not touch any of the ongoing operations
for any other volumes on the shd daemon.

Example of an shd graph when it is per volume

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /         |             \
                   /          |              \
              ---------    ---------      ----------
              | AFR-1 |    | AFR-2 |      |  AFR-3 |
               ---------    ---------      ----------

A running shd daemon with 3 volumes will look like this:

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /           |           \
                   /            |            \
              ------------   ------------  ------------
              | volume-1 |   | volume-2 |  | volume-3 |
              ------------   ------------  ------------

Change-Id: Idcb2698be3eeb95beaac47125565c93370afbd99
fixes: bz#1659708
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
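
A rough pseudo-C outline of the start/reconfigure flow described above
(hedged sketch; every helper name below is made up, not the actual
glusterd shd-svc API):

    static int
    shd_start_or_reconfigure (glusterd_volinfo_t *volinfo)
    {
        if (!shd_process_running_on_node ()) {
            /* No shd on this node yet: spawn one with this volume's graph. */
            return shd_spawn_with_graph (volinfo);
        }

        /* shd already running: swap only this volume's graph, leaving the
         * heals of every other volume in the same process untouched. */
        shd_detach_graph (volinfo);
        return shd_attach_graph (volinfo);
    }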
</content>
</entry>
<entry>
<title>libglusterfs: Move devel headers under glusterfs directory</title>
<updated>2018-12-05T21:47:04+00:00</updated>
<author>
<name>ShyamsundarR</name>
<email>srangana@redhat.com</email>
</author>
<published>2018-11-29T19:08:06+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=20ef211cfa5b5fcc437484a879fdc5d4c66bbaf5'/>
<id>20ef211cfa5b5fcc437484a879fdc5d4c66bbaf5</id>
<content type='text'>
libglusterfs devel package headers are referenced in code using the
include semantics of a program's own headers. While this works, it can
be done better, especially when dealing with out-of-tree xlator builds
or out-of-tree devel package usage in general.

Towards this, the following changes are done:
- Moved all devel headers under a glusterfs directory
- Included these headers using system header notation &lt;&gt; in all
  code outside of libglusterfs
- Included these headers using own-program notation "" within
  libglusterfs

This change, although big, just moves the headers around and corrects
how these headers are included from other sources.

This helps us include libglusterfs headers correctly, without
namespace conflicts.

Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
Updates: bz#1193929
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
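
For example, after this change the two include styles look as follows
(xlator.h and dict.h are existing libglusterfs headers, shown purely as
examples):

    /* Outside libglusterfs (xlators, daemons, out-of-tree consumers): */
    #include &lt;glusterfs/xlator.h&gt;
    #include &lt;glusterfs/dict.h&gt;

    /* Inside libglusterfs itself: */
    #include "xlator.h"
    #include "dict.h"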
</content>
</entry>
<entry>
<title>glusterd/mux: Optimize brick disconnect handler code</title>
<updated>2018-11-18T06:10:31+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2018-11-15T07:48:36+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=b4faa9e7a25bdf0582f8b0fd69aa1381c307a61e'/>
<id>b4faa9e7a25bdf0582f8b0fd69aa1381c307a61e</id>
<content type='text'>
Removed an unnecessary iteration in the brick disconnect
handler when brick multiplexing is enabled.

Change-Id: I62dd3337b7e7da085da5d76aaae206e0b0edff9f
fixes: bz#1650115
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
</entry>
<entry>
<title>core: Portmap entries showing stale brick entries when bricks are down</title>
<updated>2018-11-12T03:31:57+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2018-11-06T10:53:51+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=bcf1e8b07491b48c5372924dbbbad5b8391c6d81'/>
<id>bcf1e8b07491b48c5372924dbbbad5b8391c6d81</id>
<content type='text'>
Problem: pmap is showing stale brick entries after a brick is brought
         down, because glusterd_brick_rpc_notify calls gf_is_service_running
         before calling pmap_registry_remove to check the brick instance.

Solution: 1) Change the condition in gf_is_pid_running to verify
             process existence; use open instead of access to achieve
             this.
          2) Call search_brick_path_from_proc in __glusterd_brick_rpc_notify
             along with gf_is_service_running.

Change-Id: Ia663ac61c01fdee6c12f47c0300cdf93f19b6a19
fixes: bz#1646892
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
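
As a rough illustration of point 1 in the solution (a simplified
sketch, not the exact gf_is_pid_running change):

    #include &lt;fcntl.h&gt;
    #include &lt;stdio.h&gt;
    #include &lt;sys/types.h&gt;
    #include &lt;unistd.h&gt;

    /* open() the /proc entry to prove the process really exists,
     * where a bare access() check can succeed for a stale entry. */
    static int
    pid_is_running (pid_t pid)
    {
        char path[64];
        int fd = -1;

        snprintf (path, sizeof (path), "/proc/%d/cmdline", (int) pid);

        fd = open (path, O_RDONLY);
        if (fd == -1)
            return 0;   /* no such process */

        close (fd);
        return 1;
    }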
</content>
</entry>
<entry>
<title>glusterd: allow shared-storage to use bricks under glusterd working directory</title>
<updated>2018-11-08T11:53:57+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-11-06T14:14:16+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=bdb4ca184913c82ccf9552298f5d5b597794f2aa'/>
<id>bdb4ca184913c82ccf9552298f5d5b597794f2aa</id>
<content type='text'>
With commit 44e4db, we no longer allow the user to create a volume
using glusterd's working directory, or any subdirectory under it,
as a brick. This has broken shared-storage, since the volume
"gluster-shared-storage" is created using bricks under glusterd's
working directory.

With this patch, we let the "gluster-shared-storage" volume
use bricks under glusterd's working directory.

fixes: bz#1647029
Change-Id: Ifcbcf4576eea12cf46f199dea287b29bd3ec3bfd
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
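
Illustratively (a hedged sketch, not the actual validation code; the
function name and layout are made up), the brick-path check now carves
out an exception for the shared-storage volume:

    #include &lt;string.h&gt;

    /* Bricks under the glusterd working directory stay forbidden,
     * except for the internal gluster-shared-storage volume. */
    static int
    brick_path_allowed (const char *volname, const char *brick_path,
                        const char *glusterd_workdir)
    {
        if (strstr (brick_path, glusterd_workdir) == NULL)
            return 1;   /* not under the working directory: allowed */

        return strcmp (volname, "gluster-shared-storage") == 0;
    }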
</content>
</entry>
<entry>
<title>Land clang-format changes</title>
<updated>2018-09-12T11:52:48+00:00</updated>
<author>
<name>Gluster Ant</name>
<email>bugzilla-bot@gluster.org</email>
</author>
<published>2018-09-12T11:52:48+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=45a71c0548b6fd2c757aa2e7b7671a1411948894'/>
<id>45a71c0548b6fd2c757aa2e7b7671a1411948894</id>
<content type='text'>
Change-Id: I6f5d8140a06f3c1b2d196849299f8d483028d33b
</content>
</entry>
<entry>
<title>Some (mgmt) xlators: use dict_{setn|getn|deln|get_int32n|set_int32n|set_strn}</title>
<updated>2018-09-09T01:53:59+00:00</updated>
<author>
<name>Yaniv Kaul</name>
<email>ykaul@redhat.com</email>
</author>
<published>2018-09-03T10:55:01+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=09198e203ece6925791a8a3a6121c5f808e4e873'/>
<id>09198e203ece6925791a8a3a6121c5f808e4e873</id>
<content type='text'>
In a previous patch (https://review.gluster.org/20769) we
added the option to pass the key length to dict_* funcs, to remove
the need to strlen() it. This patch moves some xlators to use it.

- It also adds dict_get_int32n, which was missing.
- It also reduces the size of some key variables.
  They were set to 1024 bytes or PATH_MAX, where 64 bytes were
  sometimes really enough.

Please review carefully:
1. That I did not reduce the size of any key variable too much.
2. That I did not mix up any keys.

Compile-tested only!

Change-Id: Ic729baf179f40e8d02bc2350491d4bb9b6934266
updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;
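
For example, with the length-aware variants a call site looks roughly
like this (hedged sketch; SLEN is the compile-time string-length macro
from libglusterfs, and the helper, key, and 64-byte buffer are made up
to mirror the size reduction mentioned above):

    #include &lt;glusterfs/dict.h&gt;
    #include &lt;glusterfs/glusterfs.h&gt;   /* assumed location of SLEN() */

    /* The key length is supplied at the call site, so the dict_*n
     * variants do not need to strlen() the key. */
    static int
    set_and_get_count (dict_t *dict)
    {
        char key[64] = "brick-count";   /* was often 1024 bytes or PATH_MAX */
        int32_t count = 0;
        int ret = -1;

        ret = dict_set_int32n (dict, key, SLEN ("brick-count"), 10);
        if (ret)
            return ret;

        return dict_get_int32n (dict, key, SLEN ("brick-count"), &amp;count);
    }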
</content>
</entry>
</feed>
