<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-messages.h, branch release-8</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd/snapshot: Add a warning message when a volume config changes</title>
<updated>2020-03-12T11:27:16+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2020-03-10T15:36:20+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=75691851391694d27ad5dcb797df8d4a82fd9e2d'/>
<id>75691851391694d27ad5dcb797df8d4a82fd9e2d</id>
<content type='text'>
While changing a volume configuration, there is a chance that the
brick layout on disk might change. If a snapshot is present on such
a volume, the change will affect the snapshot's working. So this
patch adds a warning message if a snapshot is present while a volume
config change happens.
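
A minimal sketch of how such a warning might be emitted through the
gf_msg() logging API; the message-ID name below is an illustrative
assumption, not necessarily the one this patch adds:

    /* Hedged sketch: GD_MSG_SNAP_WARN is an assumed message-ID name. */
    if (volinfo-&gt;snap_count &gt; 0)
        gf_msg(this-&gt;name, GF_LOG_WARNING, 0, GD_MSG_SNAP_WARN,
               "Snapshot(s) exist on volume %s; the config change may "
               "alter the on-disk brick layout and affect them",
               volinfo-&gt;volname);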

Change-Id: I7256863fef734841fce0bc9ad94d5d201b1813d5
Fixes: bz#1812144
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
While changing a volume configuration, there is a chance that the
brick layout on disk might change. If a snapshot is present on such
a volume, the change will affect the snapshot's working. So this
patch adds a warning message if a snapshot is present while a volume
config change happens.
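
A minimal sketch of how such a warning might be emitted through the
gf_msg() logging API; the message-ID name below is an illustrative
assumption, not necessarily the one this patch adds:

    /* Hedged sketch: GD_MSG_SNAP_WARN is an assumed message-ID name. */
    if (volinfo-&gt;snap_count &gt; 0)
        gf_msg(this-&gt;name, GF_LOG_WARNING, 0, GD_MSG_SNAP_WARN,
               "Snapshot(s) exist on volume %s; the config change may "
               "alter the on-disk brick layout and affect them",
               volinfo-&gt;volname);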

Change-Id: I7256863fef734841fce0bc9ad94d5d201b1813d5
Fixes: bz#1812144
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Revert "glusterd: (storhaug) remove ganesha (843e1b0)"</title>
<updated>2019-08-24T02:08:57+00:00</updated>
<author>
<name>Jiffin Tony Thottan</name>
<email>jthottan@redhat.com</email>
</author>
<published>2017-10-16T08:54:29+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=673f2f90f35e5ea381bd22a43403278f89c957a5'/>
<id>673f2f90f35e5ea381bd22a43403278f89c957a5</id>
<content type='text'>
Please note that, as an additional change, the macro GLUSTERD_GET_SNAP_DIR
has moved from glusterd-store.c to glusterd-snapshot-utils.h.

Change-Id: I811efefc148453fe32e4f0d322e80455447cec71
updates: #663
Signed-off-by: Jiffin Tony Thottan &lt;jthottan@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Please note that, as an additional change, the macro GLUSTERD_GET_SNAP_DIR
has moved from glusterd-store.c to glusterd-snapshot-utils.h.

Change-Id: I811efefc148453fe32e4f0d322e80455447cec71
updates: #663
Signed-off-by: Jiffin Tony Thottan &lt;jthottan@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: create separate logdirs for cluster.rc instances</title>
<updated>2019-08-14T03:21:30+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2019-07-25T06:21:32+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=6ef9ffea7fee43ea6f59c8c36fd183f90e9c26f8'/>
<id>6ef9ffea7fee43ea6f59c8c36fd183f90e9c26f8</id>
<content type='text'>
Create a separate logdir for each host instance created by
cluster.rc. This makes it easier to identify the files
belonging to a particular instance.

Change-Id: Ic8321f83f98995412b7d5f095b3d3f0391767a8b
Fixes: bz#1733042
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Create a separate logdir for each host instance created by
cluster.rc. This makes it easier to identify the files
belonging to a particular instance.

Change-Id: Ic8321f83f98995412b7d5f095b3d3f0391767a8b
Fixes: bz#1733042
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: improve logging in __server_getspec()</title>
<updated>2019-05-08T15:04:50+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2019-05-08T02:28:27+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=d42221bec9ebb87df0c10b8e99edde60729250e3'/>
<id>d42221bec9ebb87df0c10b8e99edde60729250e3</id>
<content type='text'>
updates: bz#1193929

Change-Id: Idad745d5869c92e6bed71842f14bc1a3362ca4bd
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
updates: bz#1193929

Change-Id: Idad745d5869c92e6bed71842f14bc1a3362ca4bd
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>mgmt/shd: Implement multiplexing in self heal daemon</title>
<updated>2019-04-01T03:44:23+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2019-02-25T04:35:32+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=bc3694d7cfc868a2ed6344ea123faf19fce28d13'/>
<id>bc3694d7cfc868a2ed6344ea123faf19fce28d13</id>
<content type='text'>
Problem:

The shd daemon is per node, which means it creates a single graph
with all volumes on it. While this is great for utilizing
resources, it is not so good in terms of performance and manageability.

Self-heal daemons do not have the capability to automatically
reconfigure their graphs, so each time a configuration change
happens to a volume (replicate/disperse), we need to restart
shd to bring the change into the graph.

Because of this, all ongoing heals for all other volumes have to be
stopped midway and restarted all over again.

Solution:

This change makes shd a per-volume daemon, so that a graph
is generated for each volume.

When we want to start/reconfigure shd for a volume, we first search
for an existing shd process running on the node; if there is none, we
start a new process. If an shd daemon is already running, we simply
detach the volume's graph and reattach the updated graph for that
volume. This does not touch any ongoing operations for other volumes
on the shd daemon.
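
A rough sketch of that decision flow (every helper name below is an
illustrative assumption, not an actual glusterd function):

    /* Illustrative pseudo-C; all helper names are assumed. */
    if (!shd_running_on_node()) {
        start_new_shd_process(volinfo);      /* first volume: spawn shd */
    } else {
        detach_volume_graph(shd, volinfo);   /* drop the stale graph */
        attach_volume_graph(shd, new_graph); /* attach the updated one */
    }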

Example of an shd graph when it is per volume

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /         |             \
                   /          |              \
              ---------    ---------      ----------
              | AFR-1 |    | AFR-2 |      |  AFR-3 |
              ---------    ---------      ----------

A running shd daemon with 3 volumes will look like this:

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /           |           \
                   /            |            \
              ------------   ------------  ------------
              | volume-1 |   | volume-2 |  | volume-3 |
              ------------   ------------  ------------

Change-Id: Idcb2698be3eeb95beaac47125565c93370afbd99
fixes: bz#1659708
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:

The shd daemon is per node, which means it creates a single graph
with all volumes on it. While this is great for utilizing
resources, it is not so good in terms of performance and manageability.

Self-heal daemons do not have the capability to automatically
reconfigure their graphs, so each time a configuration change
happens to a volume (replicate/disperse), we need to restart
shd to bring the change into the graph.

Because of this, all ongoing heals for all other volumes have to be
stopped midway and restarted all over again.

Solution:

This change makes shd a per-volume daemon, so that a graph
is generated for each volume.

When we want to start/reconfigure shd for a volume, we first search
for an existing shd process running on the node; if there is none, we
start a new process. If an shd daemon is already running, we simply
detach the volume's graph and reattach the updated graph for that
volume. This does not touch any ongoing operations for other volumes
on the shd daemon.
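
A rough sketch of that decision flow (every helper name below is an
illustrative assumption, not an actual glusterd function):

    /* Illustrative pseudo-C; all helper names are assumed. */
    if (!shd_running_on_node()) {
        start_new_shd_process(volinfo);      /* first volume: spawn shd */
    } else {
        detach_volume_graph(shd, volinfo);   /* drop the stale graph */
        attach_volume_graph(shd, new_graph); /* attach the updated one */
    }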

Example of an shd graph when it is per volume

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /         |             \
                   /          |              \
              ---------    ---------      ----------
              | AFR-1 |    | AFR-2 |      |  AFR-3 |
              ---------    ---------      ----------

A running shd daemon with 3 volumes will look like this:

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /           |           \
                   /            |            \
              ------------   ------------  ------------
              | volume-1 |   | volume-2 |  | volume-3 |
              ------------   ------------  ------------

Change-Id: Idcb2698be3eeb95beaac47125565c93370afbd99
fixes: bz#1659708
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>libglusterfs: Move devel headers under glusterfs directory</title>
<updated>2018-12-05T21:47:04+00:00</updated>
<author>
<name>ShyamsundarR</name>
<email>srangana@redhat.com</email>
</author>
<published>2018-11-29T19:08:06+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=20ef211cfa5b5fcc437484a879fdc5d4c66bbaf5'/>
<id>20ef211cfa5b5fcc437484a879fdc5d4c66bbaf5</id>
<content type='text'>
libglusterfs devel package headers are referenced in code using
the include semantics of a program. While this works, it can be
improved, especially when dealing with out-of-tree xlator builds
or out-of-tree devel package usage in general.

Towards this, the following changes are done:
- Moved all devel headers under a glusterfs directory
- Included these headers using the system header notation &lt;&gt; in all
code outside of libglusterfs (see the example below)
- Included these headers using the program's own notation "" within
libglusterfs
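
For example, code outside of libglusterfs (such as an out-of-tree
xlator) now includes the headers with the system notation:

    #include &lt;glusterfs/xlator.h&gt;
    #include &lt;glusterfs/logging.h&gt;

while code within libglusterfs itself keeps the program notation:

    #include "glusterfs/xlator.h"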

This change, although big, just moves the headers around and makes
it correct to include these headers from other sources.

This helps us include libglusterfs headers correctly, without
namespace conflicts.

Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
Updates: bz#1193929
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
libglusterfs devel package headers are referenced in code using
the include semantics of a program. While this works, it can be
improved, especially when dealing with out-of-tree xlator builds
or out-of-tree devel package usage in general.

Towards this, the following changes are done:
- Moved all devel headers under a glusterfs directory
- Included these headers using the system header notation &lt;&gt; in all
code outside of libglusterfs (see the example below)
- Included these headers using the program's own notation "" within
libglusterfs
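
For example, code outside of libglusterfs (such as an out-of-tree
xlator) now includes the headers with the system notation:

    #include &lt;glusterfs/xlator.h&gt;
    #include &lt;glusterfs/logging.h&gt;

while code within libglusterfs itself keeps the program notation:

    #include "glusterfs/xlator.h"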

This change, although big, just moves the headers around and makes
it correct to include these headers from other sources.

This helps us include libglusterfs headers correctly, without
namespace conflicts.

Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
Updates: bz#1193929
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Land clang-format changes</title>
<updated>2018-09-12T11:52:48+00:00</updated>
<author>
<name>Gluster Ant</name>
<email>bugzilla-bot@gluster.org</email>
</author>
<published>2018-09-12T11:52:48+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=45a71c0548b6fd2c757aa2e7b7671a1411948894'/>
<id>45a71c0548b6fd2c757aa2e7b7671a1411948894</id>
<content type='text'>
Change-Id: I6f5d8140a06f3c1b2d196849299f8d483028d33b
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: I6f5d8140a06f3c1b2d196849299f8d483028d33b
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Introduce daemon-log-level cluster wide option</title>
<updated>2018-07-03T14:29:33+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-07-02T15:18:22+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=524c869976c837c2ef13b5ef50020e7769188e4d'/>
<id>524c869976c837c2ef13b5ef50020e7769188e4d</id>
<content type='text'>
This option, applicable to the node-level daemons, can be very helpful
in controlling the log level of these services. Please note that any
daemon started prior to setting this option to a specific value
(anything other than INFO) will need a restart for the change to take
effect.
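
For example, the level could presumably be raised cluster-wide with
something like the following (syntax assumed from the option name, not
taken verbatim from this patch):

    gluster volume set all cluster.daemon-log-level DEBUG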

Change-Id: I7f6d2620bab2b094c737f5cc816bc093e9c9c4c9
fixes: bz#1597473
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This option, applicable to the node-level daemons, can be very helpful
in controlling the log level of these services. Please note that any
daemon started prior to setting this option to a specific value
(anything other than INFO) will need a restart for the change to take
effect.
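
For example, the level could presumably be raised cluster-wide with
something like the following (syntax assumed from the option name, not
taken verbatim from this patch):

    gluster volume set all cluster.daemon-log-level DEBUG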

Change-Id: I7f6d2620bab2b094c737f5cc816bc093e9c9c4c9
fixes: bz#1597473
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: removing the unnecessary glusterd message</title>
<updated>2018-06-14T21:35:22+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-06-14T14:28:54+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=10581852dfc36eb6ead0784cb2e984cd39ae2b10'/>
<id>10581852dfc36eb6ead0784cb2e984cd39ae2b10</id>
<content type='text'>
Fixes: bz#1589253
Change-Id: I5510250a3d094e19e471b3ee47bf13ea9ee8aff5
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Fixes: bz#1589253
Change-Id: I5510250a3d094e19e471b3ee47bf13ea9ee8aff5
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Fix for shd not coming up</title>
<updated>2018-06-13T15:34:08+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-06-08T14:09:58+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=87f392e7fa22c059a95ff58f1653a285afc0f27a'/>
<id>87f392e7fa22c059a95ff58f1653a285afc0f27a</id>
<content type='text'>
Problem: After creating and starting n (n is large) distribute-replicate
volumes using a script, if we create and start the (n+1)th
distribute-replicate volume manually, the self-heal daemon is down.

Solution: In glusterd_proc_stop, if the process is still running after
being sent SIGTERM, we send it SIGKILL. As SIGKILL will not perform any
cleanup, we need to remove the pidfile ourselves.
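
A minimal sketch of that stop sequence (the grace period and structure
are assumptions; this only illustrates the SIGKILL-plus-pidfile
cleanup, not the verbatim patch):

    #include &lt;signal.h&gt;
    #include &lt;unistd.h&gt;

    kill(pid, SIGTERM);          /* ask the daemon to exit cleanly */
    sleep(1);                    /* grace period (duration assumed) */
    if (kill(pid, 0) == 0) {     /* still alive after SIGTERM? */
        kill(pid, SIGKILL);      /* force-kill; no cleanup handlers run */
        unlink(pidfile);         /* so remove the stale pidfile here */
    }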

Fixes: bz#1589253
Change-Id: I7c114334eec74c8d0f21b3e45cf7db6b8ef28af1
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem: After creating and starting n (n is large) distribute-replicate
volumes using a script, if we create and start the (n+1)th
distribute-replicate volume manually, the self-heal daemon is down.

Solution: In glusterd_proc_stop, if the process is still running after
being sent SIGTERM, we send it SIGKILL. As SIGKILL will not perform any
cleanup, we need to remove the pidfile ourselves.
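
A minimal sketch of that stop sequence (the grace period and structure
are assumptions; this only illustrates the SIGKILL-plus-pidfile
cleanup, not the verbatim patch):

    #include &lt;signal.h&gt;
    #include &lt;unistd.h&gt;

    kill(pid, SIGTERM);          /* ask the daemon to exit cleanly */
    sleep(1);                    /* grace period (duration assumed) */
    if (kill(pid, 0) == 0) {     /* still alive after SIGTERM? */
        kill(pid, SIGKILL);      /* force-kill; no cleanup handlers run */
        unlink(pidfile);         /* so remove the stale pidfile here */
    }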

Fixes: bz#1589253
Change-Id: I7c114334eec74c8d0f21b3e45cf7db6b8ef28af1
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
