<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests/basic/glusterd, branch v8dev</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>mgmt/glusterd: Make changes related to cloudsync xlator</title>
<updated>2019-04-10T08:11:56+00:00</updated>
<author>
<name>Anuradha Talur</name>
<email>atalur@commvault.com</email>
</author>
<published>2018-11-20T01:57:18+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=d837b4518c3148752b2eb4d55d22f22c57aa96c2'/>
<id>d837b4518c3148752b2eb4d55d22f22c57aa96c2</id>
<content type='text'>
1) The placement of the cloudsync xlator has been changed
to make it the shard xlator's child. If cloudsync has to
work with shard in the graph, it needs to be a child of shard.
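
As an illustration (volume names hypothetical), a fragment of the new
client-graph placement, with shard sitting directly on top of
cloudsync, would look roughly like this:

volume testvol-cloudsync
    type features/cloudsync
    subvolumes testvol-dht
end-volume

volume testvol-shard
    type features/shard
    subvolumes testvol-cloudsync
end-volume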

Change-Id: Ib55424fdcb7ce8edae9f19b8a6e3d3ba86c1f0c4
fixes: bz#1642168
Signed-off-by: Anuradha Talur &lt;atalur@commvault.com&gt;
</content>
</entry>
<entry>
<title>mgmt/shd: Implement multiplexing in self heal daemon</title>
<updated>2019-04-01T03:44:23+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2019-02-25T04:35:32+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=bc3694d7cfc868a2ed6344ea123faf19fce28d13'/>
<id>bc3694d7cfc868a2ed6344ea123faf19fce28d13</id>
<content type='text'>
Problem:

The shd daemon is per node, which means it creates one graph with all
volumes on it. While this is great for utilizing resources, it is not
so good in terms of performance and manageability.

Self-heal daemons don't have the capability to automatically
reconfigure their graphs, so each time a configuration change
happens to a volume (replicate/disperse), we need to restart
shd to bring the change into the graph.

Because of this, all ongoing heals for all other volumes have to be
stopped midway and restarted all over again.

Solution:

This change makes shd a per-volume daemon, so that a separate graph
is generated for each volume.

When we want to start/reconfigure shd for a volume, we first search
for an existing shd running on the node. If there is none, we
start a new process. If a shd daemon is already running, we
simply detach the volume's graph and reattach the updated
graph for the volume. This won't touch any ongoing operations
for the other volumes on the shd daemon.

Example of a shd graph when it is per volume:

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /         |             \
                   /          |              \
              ---------    ---------      ----------
              | AFR-1 |    | AFR-2 |      |  AFR-3 |
              ---------    ---------      ----------

A running shd daemon with 3 volumes will look like this:

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /           |           \
                   /            |            \
              ------------   ------------  ------------
              | volume-1 |   | volume-2 |  | volume-3 |
              ------------   ------------  ------------
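
As an illustration (volume and option names are only examples), a
reconfigure of one volume now detaches and reattaches only that
volume's graph; heals for the other volumes in the same shd process
keep running:

$ gluster volume set volume-2 cluster.data-self-heal off
$ gluster volume heal volume-1 info    # unaffected by the reconfigure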

Change-Id: Idcb2698be3eeb95beaac47125565c93370afbd99
fixes: bz#1659708
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
</entry>
<entry>
<title>stripe: remove the translator from build and glusterd</title>
<updated>2018-10-31T02:24:49+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2018-10-03T10:00:10+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=733139551322e49e7e5617356cf96e30780d2749'/>
<id>733139551322e49e7e5617356cf96e30780d2749</id>
<content type='text'>
Based on the proposal to remove a few features that are not
actively maintained [1], remove the stripe translator from the
build. Also make sure there are no regression tests involving the
stripe translator.

[1] https://lists.gluster.org/pipermail/gluster-users/2018-July/034400.html

Note that this patch aims at removing the translator from the build;
a follow-up patch is needed to remove the code from the repository.

Updates: bz#1364707
Change-Id: I235b305338f138e29e9f30cba65bc0dadbebbbd5
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/ec : Prevent volume create without redundant brick</title>
<updated>2018-10-24T13:15:50+00:00</updated>
<author>
<name>Sunil Kumar Acharya</name>
<email>sheggodu@redhat.com</email>
</author>
<published>2018-10-24T12:41:13+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e15a5a5cc67fa3323722353bb4ccca0ea41aa594'/>
<id>e15a5a5cc67fa3323722353bb4ccca0ea41aa594</id>
<content type='text'>
Problem:
EC volumes can be created without any redundant brick.

Solution:
Updated the conditional check to prevent volume creation without a
redundant brick.
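
For example (brick paths hypothetical), a create request with a
redundancy count of 0 should now be rejected, while a layout with at
least one redundant brick is accepted:

$ gluster volume create testvol disperse-data 4 redundancy 0 \
        server{1..4}:/bricks/b1    # rejected after this fix
$ gluster volume create testvol disperse 6 redundancy 2 \
        server{1..6}:/bricks/b1    # accepted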

fixes: bz#1642448
Change-Id: I0cb334b1b9378d67fcb8abf793dbe312c3179c0b
Signed-off-by: Sunil Kumar Acharya &lt;sheggodu@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: fix volfile_server_switch.t spurious failure</title>
<updated>2016-08-22T17:30:18+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2016-08-22T08:29:01+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=eddada59f7ad3cf21463a558a5f62591f4b72c68'/>
<id>eddada59f7ad3cf21463a558a5f62591f4b72c68</id>
<content type='text'>
1. The unnecessary self probe is removed.
2. After every probe, a peer_count check is added to give the test time
to finish the handshake.
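
A sketch of the added check, in the harness idiom used by cluster
tests (hostname and CLI variables as defined by the harness):

TEST $CLI_1 peer probe $H2;
EXPECT_WITHIN $PROBE_TIMEOUT 2 peer_count;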

Change-Id: Iab52548f8b781e7968250cd98fdbeaf02472970d
BUG: 1368953
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15231
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd-client: switch volfile server in case existing connection breaks</title>
<updated>2016-04-12T12:14:22+00:00</updated>
<author>
<name>Prasanna Kumar Kalever</name>
<email>prasanna.kalever@redhat.com</email>
</author>
<published>2016-03-17T08:20:31+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=05bc8bfd2a11d280fe0aaac6c7ae86ea5ff08164'/>
<id>05bc8bfd2a11d280fe0aaac6c7ae86ea5ff08164</id>
<content type='text'>
Problem:
Currently, say we have a 10-node gluster volume, mounted using
Node 1 (N1) as the volfile server and the rest as backup volfile servers:

$ mount -t glusterfs -obackup-volfile-servers=&lt;N2&gt;:&lt;N3&gt;:...:&lt;N10&gt; &lt;N1&gt;:/vol /mnt

If N1 goes down, we will still be able to access the same mount point,
but the problem is that if we add or remove bricks on the volume
whose volfile server is down (in our case N1), that info will not be
passed to the client, because the connection between glusterfs and
glusterd (of N1) is broken; as a result we cannot store files on the
newly added bricks until N1 comes back.

Solution:
If N1 goes down, iterate through the nodes specified in the
backup-volfile-servers list and try to establish a connection between
glusterfs and glusterd. That way we don't have to wait
until N1 comes back to store files on newly added bricks that were
successfully added while N1 was down.

Change-Id: I653c9f081a84667630608091bc243ffc3859d5cd
BUG: 1289916
Signed-off-by: Prasanna Kumar Kalever &lt;prasanna.kalever@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13002
Tested-by: Prasanna Kumar Kalever &lt;pkalever@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
<entry>
<title>mgmt/glusterd: Store arbiter-count and restore it</title>
<updated>2015-11-04T21:39:32+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2015-10-30T10:26:14+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=1918d618a91fe9d9021916d53091173f4ad6b882'/>
<id>1918d618a91fe9d9021916d53091173f4ad6b882</id>
<content type='text'>
Problem:
1) Glusterd doesn't remember arbiter information for a replica volume
   in its store. When glusterd goes down and comes back up, arbiter
   volumes become plain replica volumes.

2) Glusterd doesn't import/export arbiter information to/from the other peers.

3) Volume info doesn't show any arbiter count in the output.

Fix:
1) Persist arbiter information in the glusterd store.
2) Import/export arbiter information for the volume.
3) Change the volume info output to show the arbiter count (see the
   example below).
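
For example (brick paths hypothetical), the arbiter count should now
survive a glusterd restart:

$ gluster volume create testvol replica 3 arbiter 1 \
        server{1..3}:/bricks/b1
$ systemctl restart glusterd
$ gluster volume info testvol
# should again report the arbiter in the brick count, e.g.
# Number of Bricks: 1 x (2 + 1) = 3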

Change-Id: I2db81e73d2694b01f7d07b08a17b41ad5a55c361
BUG: 1276675
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12475
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
<entry>
<title>cli: Provide CLI to create disperse volume with data, redundancy counts</title>
<updated>2015-02-23T12:34:24+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2015-02-09T08:50:28+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=a671b9a575b89c263af161293e78e49484859ec7'/>
<id>a671b9a575b89c263af161293e78e49484859ec7</id>
<content type='text'>
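The new create syntax, for illustration (brick paths hypothetical):

$ gluster volume create testvol disperse-data 4 redundancy 2 \
        server{1..6}:/bricks/b1
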
Change-Id: Iba44be565c895e26b19b5ff85a886873f6b53e5c
BUG: 1177601
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9616
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests : spurious failure fix in heald.t</title>
<updated>2015-02-04T16:14:46+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2015-02-04T08:43:05+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=552958c95ff4c4fea7c370a87e937e4a88973da1'/>
<id>552958c95ff4c4fea7c370a87e937e4a88973da1</id>
<content type='text'>
Problem : heald.t uses EXPECT to check whether the shd process is up or not,
but as shd is spawned with NO_WAIT, the end of the volume start transaction
doesn't guarantee that the process will be up by that time.

Solution : Use EXPECT_WITHIN instead of EXPECT
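
A sketch of the change ($PROCESS_UP_TIMEOUT comes from the test
harness; the shd status helper name is illustrative):

# before, racy:
#   EXPECT "1" shd_up_status
# after, polls until the timeout expires:
EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" shd_up_status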

Change-Id: Ic81725aa7e7cde9c0c873837fcc4a73d8318dfa0
BUG: 1163543
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9575
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</content>
</entry>
<entry>
<title>mgmt/glusterd: Implement Volume heal enable/disable</title>
<updated>2015-01-20T10:24:24+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2014-12-29T10:02:28+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=7510d8edf4e7bea50e0c1f041202f063a5d138af'/>
<id>7510d8edf4e7bea50e0c1f041202f063a5d138af</id>
<content type='text'>
For volumes with replicate or disperse xlators, the self-heal daemon
should do the healing. This patch provides enable/disable functionality
for the xlators to be part of the self-heal daemon. Replicate already had
this functionality with 'gluster volume set cluster.self-heal-daemon
on/off', but this patch makes it uniform for both types of volumes.
Internally it still does a 'volume set' based on the volume type.
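
Uniform usage for both volume types (volume name hypothetical):

$ gluster volume heal testvol enable
$ gluster volume heal testvol disable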

Change-Id: Ie0f3799b74c2afef9ac658ef3d50dce3e8072b29
BUG: 1177601
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9358
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
Tested-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</content>
</entry>
</feed>
