<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src, branch v3.7.13</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: compare uuid instead of hostname address resolution</title>
<updated>2016-07-07T09:33:41+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2016-07-03T10:21:20+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=c93839ed1b4d10e492a6e0aacb19e42c761d4bf3'/>
<id>c93839ed1b4d10e492a6e0aacb19e42c761d4bf3</id>
<content type='text'>
Backport of http://review.gluster.org/14849

In glusterd_get_brickinfo () the brick's hostname is address-resolved. This adds
unnecessary latency, since it uses calls like getaddrinfo (). Instead, given that
the local brick's uuid is already known, a comparison of MY_UUID and
brickinfo-&gt;uuid is much more lightweight than the previous approach.

In scale testing on a cluster hosting ~400 volumes spanning 4 nodes, a few of
the bricks did not come up when a node was rebooted. After a few days of
analysis it was found that glusterd_pmap_signin () accounted for a significant
amount of latency, and a further code walkthrough revealed this unnecessary
address resolution. Applying this fix solves the issue, and now all the brick
processes come up on a node reboot.
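
As an illustrative sketch (editor's note, not the literal patch; names follow
the message above, and glusterd_hostname_to_uuid () stands in for the old
resolution path):

    /* Old approach: resolve the brick's hostname just to decide
     * whether the brick is local; getaddrinfo () may hit DNS. */
    ret = glusterd_hostname_to_uuid (brickinfo-&gt;hostname, host_uuid);

    /* New approach: the local node's uuid is already known, so a
     * plain uuid comparison needs no resolution at all. */
    if (!gf_uuid_compare (MY_UUID, brickinfo-&gt;uuid)) {
            /* brick is local */
    }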

Change-Id: I299b8660ce0da6f3f739354f5c637bc356d82133
BUG: 1352833
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14849
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
Reviewed-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14861
</content>
</entry>
<entry>
<title>glusterd: fail volume delete if one of the node is down</title>
<updated>2016-07-04T11:49:18+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2016-06-09T12:52:43+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=1a21cfba8e7a5f4ac1b8a8c3b8e06574b237420d'/>
<id>1a21cfba8e7a5f4ac1b8a8c3b8e06574b237420d</id>
<content type='text'>
Backport of http://review.gluster.org/14681

Deleting a volume on a cluster where one of the nodes is down is buggy, since
once that node comes back up the same volume will be resynced. Until we bring in
the soft delete feature tracked in http://review.gluster.org/12963, this is a
safeguard to block the volume deletion.

Please note that the test file backported from this commit has an issue: it
starts the volume and then tries to delete it, which is going to fail anyway, so
the test does not actually validate the fix. http://review.gluster.org/#/c/14693/
fixed the problem in master, and the same is ported as part of this commit as
well.
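
As an illustrative sketch of the kind of guard involved (editor's note; a
condensed form assuming a helper such as glusterd_are_all_peers_up (), which
may differ from the actual patch):

    /* Volume-delete staging: refuse the operation unless every
     * peer in the cluster is currently up and connected. */
    if (!glusterd_are_all_peers_up ()) {
            ret = -1;
            snprintf (msg, sizeof (msg), "Some of the peers are "
                      "down. Volume delete is not allowed.");
            goto out;
    }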

Change-Id: I9c13869c4a7e7a947f88842c6dc6f231c0eeda6c
BUG: 1344634
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14681
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-on: http://review.gluster.org/14692
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/geo-rep: Avoid started status check if same host</title>
<updated>2016-06-14T06:23:14+00:00</updated>
<author>
<name>Saravanakumar Arumugam</name>
<email>sarumuga@redhat.com</email>
</author>
<published>2016-06-06T09:14:35+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0eddfc43be1939f4fe892cd09c7356091b243b3a'/>
<id>0eddfc43be1939f4fe892cd09c7356091b243b3a</id>
<content type='text'>
After carrying out add-brick, session creation is carried out again to involve
the new brick in the session. This needs to be done even if the session is in
Started state.

While involving the slave uuid as part of a session, the user is warned if the
session is in Started state. This check needs to be skipped if it is the same
slave host, so that session creation can proceed.
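
As an illustrative sketch (editor's note; the variable names are hypothetical
stand-ins for the checks the message describes):

    /* Warn about an already-Started session only when the create
     * targets a different slave host; re-creating against the same
     * slave host (e.g. after add-brick) is allowed to proceed. */
    if (is_session_started &amp;&amp; strcmp (slave_host, existing_slave_host))
            goto warn_and_out;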

Change-Id: Ic73edd5bd9e3ee55da96f5aceec0bafa14d3f3dd
BUG: 1344605
Signed-off-by: Saravanakumar Arumugam &lt;sarumuga@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14653
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Aravinda VK &lt;avishwan@redhat.com&gt;
(cherry picked from commit c62493efadbcf5085bbd65a409eed9391301c154)
Reviewed-on: http://review.gluster.org/14710
Tested-by: Aravinda VK &lt;avishwan@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/geo-rep: upgrade path when slave vol uuid involved</title>
<updated>2016-06-13T10:36:33+00:00</updated>
<author>
<name>Saravanakumar Arumugam</name>
<email>sarumuga@redhat.com</email>
</author>
<published>2016-05-19T15:43:04+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=2b984e60b97971f1cd2e661081efee3c4a387bcc'/>
<id>2b984e60b97971f1cd2e661081efee3c4a387bcc</id>
<content type='text'>
The slave volume uuid is involved in identifying a geo-replication
session.

This patch addresses the upgrade path, where an existing geo-rep session
is gracefully upgraded to involve the slave volume uuid.
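
For illustration (editor's note; the values below are hypothetical), an entry
in the 'gsync_slaves' dictionary that previously looked like

    1:ssh://root@S1::tv2

would, after the upgrade, also carry the slave volume uuid, along the lines of

    1:ssh://root@S1::tv2:&lt;slave-vol-uuid&gt;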

Change-Id: Ib7ff5109b161592f24fc86fc7e93a407655fab86
BUG: 1342453
Reviewed-on: http://review.gluster.org/#/c/14425
Signed-off-by: Saravanakumar Arumugam &lt;sarumuga@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14641
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Aravinda VK &lt;avishwan@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/snapshot: remove quota related options from snap volfile</title>
<updated>2016-06-03T15:38:27+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2016-06-01T17:31:37+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=8d493c22deaf52db82f03c049f99d5ac5857769f'/>
<id>8d493c22deaf52db82f03c049f99d5ac5857769f</id>
<content type='text'>
Enabling inode-quota on a snapshot volume is unnecessary, because a
snapshot is a read-only volume, so there is no need to enforce quota
on it.

This patch removes the quota-related options from the snapshot
volfile.
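
As an illustrative sketch (editor's note; a condensed form of the check the
message implies, keyed on the volume info's is_snap_volume flag):

    /* While generating the volfile, skip the quota options for a
     * snapshot volume: it is read-only, so there is nothing for
     * the quota enforcer to do. */
    if (volinfo-&gt;is_snap_volume)
            goto skip_quota_options;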

Backport of:
&gt;Change-Id: Iddabcb83820dac2384924a01d45abe1ef1e95600
&gt;BUG: 1341796
&gt;Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
&gt;Reviewed-on: http://review.gluster.org/14608
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt;Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt;Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;

(cherry picked from commit 03d523504230c336cf585159266e147945f31153)

Change-Id: I3c41d8abe8631e57a5fc58e9a0d7550c558ab231
BUG: 1342374
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14629
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/snapshot: Fix snapshot creation with geo-rep</title>
<updated>2016-06-02T05:50:47+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2016-06-01T07:12:30+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=3bd5c5858fd3df42374934ac8251a8620e6b62b9'/>
<id>3bd5c5858fd3df42374934ac8251a8620e6b62b9</id>
<content type='text'>
Backport of http://review.gluster.org/#/c/14595

The construction of the path to the geo-rep session directory
was broken by the commit "http://review.gluster.org/13111",
which saves the slave volume uuid in the 'gsync_slaves'
dictionary. This patch fixes that.
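
As an illustrative sketch (editor's note; the values are hypothetical): with
the slave volume uuid appended, a 'gsync_slaves' value such as

    1:ssh://root@S1::tv2:&lt;slave-vol-uuid&gt;

can no longer be used as-is to derive the session directory name
(&lt;master-vol&gt;_&lt;slave-host&gt;_&lt;slave-vol&gt;); the uuid component has to be
stripped before the path is constructed.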

Change-Id: Ic7fc3c37d368549feb44b3a08d60157ce61227c3
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
BUG: 1341478
Reviewed-on: http://review.gluster.org/14597
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Saravanakumar Arumugam &lt;sarumuga@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: copy real_path from older brickinfo during brick import</title>
<updated>2016-05-20T09:33:20+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2016-05-11T12:54:40+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=93b72a135a7793b5b71c0eb6498c5fe529827ca6'/>
<id>93b72a135a7793b5b71c0eb6498c5fe529827ca6</id>
<content type='text'>
Backport of http://review.gluster.org/14306

In glusterd_import_new_brick (), new_brickinfo-&gt;real_path is not populated the
first time around, and hence if the underlying file system of that brick is bad,
the import fails, resulting in inconsistent configuration data.

The fix is to populate real_path from the old brickinfo object.

There were also many cases where realpath() was called unnecessarily, and that
can cause failures. For example, if remove-brick is executed on a brick whose
underlying file system has crashed, remove-brick fails because the realpath()
call fails. There is no need to call realpath() there, as the value is of no
use. Hence passing construct_realpath as _gf_false to
glusterd_volume_brickinfo_get_by_brick () is a must in such cases.
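
As an illustrative sketch (editor's note; a condensed form of the copy the
message describes):

    /* Import: reuse the canonical path already resolved on the
     * old brickinfo instead of calling realpath () again, which
     * would fail while the brick's file system is bad. */
    if (old_brickinfo)
            strncpy (new_brickinfo-&gt;real_path,
                     old_brickinfo-&gt;real_path,
                     sizeof (new_brickinfo-&gt;real_path));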

Change-Id: I7ec93871dc9e616f5d565ad5e540b2f1cacaf9dc
BUG: 1337113
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14306
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14410
</content>
</entry>
<entry>
<title>glusterd/geo-rep: slave volume uuid to identify a geo-rep session</title>
<updated>2016-05-19T06:47:09+00:00</updated>
<author>
<name>Saravanakumar Arumugam</name>
<email>sarumuga@redhat.com</email>
</author>
<published>2015-12-29T13:52:36+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=70192bfe5f7f956843d094ec9cb484b23ce45556'/>
<id>70192bfe5f7f956843d094ec9cb484b23ce45556</id>
<content type='text'>
Problem:
Currently, it is possible to create multiple geo-rep sessions from
the Master host to Slave host(s), where the Slave host(s) belong
to the same volume.

For example:
Consider Master host M1 having volume tv1, and Slave volume tv2,
which spans two Slave hosts S1 and S2. Currently, it is possible
to create a geo-rep session from M1(tv1) to S1(tv2) as well as
from M1(tv1) to S2(tv2).

When the Slave host alone is modified, it is identified as a new geo-rep
session (since the slave host and slave volume together identify the
Slave side).

Also, it is possible to create both root and non-root geo-rep sessions between
the same Master volume and Slave volume. This should also be avoided.

Solution:
This multiple geo-rep session creation must be avoided, and to avoid
it, the Slave volume uuid is used to identify a Slave.
This way, we can tell whether a session has already been created for
the same Slave volume and avoid creating it again (using a different host).

When session creation is forced in the above scenario, the existing
geo-rep session directory is renamed to mention the new Slave host.
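
As an illustrative sketch (editor's note; a hypothetical condensed form of the
duplicate-session check):

    /* Identify the Slave side by slave volume uuid rather than by
     * slave host, so the same slave volume reached via a different
     * host is recognized as an existing session. */
    if (!strcmp (existing_slave_vol_uuid, new_slave_vol_uuid) &amp;&amp; !force)
            goto reject_duplicate_session;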

Change-Id: I9239759cbc0d15dad63c48b8cf62950bb687c7c8
BUG: 1335728
Signed-off-by: Saravanakumar Arumugam &lt;sarumuga@redhat.com&gt;
Signed-off-by: Aravinda VK &lt;avishwan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13111
Reviewed-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Tested-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
(cherry picked from commit a9128cda34b1f696b717ba09fa0ac5a929be8969)
Reviewed-on: http://review.gluster.org/14322
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>cluster/tier: return -1 to cli on detach commit when detach unfinished</title>
<updated>2016-05-13T18:01:31+00:00</updated>
<author>
<name>Dan Lambright</name>
<email>dlambrig@redhat.com</email>
</author>
<published>2016-05-11T17:51:57+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=93dd9a34e9cad915ba0f99ac1cc383fc4ec129ff'/>
<id>93dd9a34e9cad915ba0f99ac1cc383fc4ec129ff</id>
<content type='text'>
If we try to commit a detach-tier before it is finished, we should
flag an error. This patch adds a return value of -1 for this case, to
be propagated back to the CLI.
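
As an illustrative sketch (editor's note; a condensed form using the defrag
status glusterd tracks for the detach operation, with hypothetical
surroundings):

    /* Detach commit requested while data is still being migrated
     * off the hot tier: fail and let the CLI report the error. */
    if (volinfo-&gt;rebal.defrag_status == GF_DEFRAG_STATUS_STARTED) {
            ret = -1;
            goto out;
    }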

Change-Id: I619dbe662b2fd06ebdd97702b2d223560017db51
BUG: 1335792
Signed-off-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14327
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>glusterd-ganesha : copy ganesha export configuration files during reboot</title>
<updated>2016-05-06T12:59:44+00:00</updated>
<author>
<name>Jiffin Tony Thottan</name>
<email>jthottan@redhat.com</email>
</author>
<published>2016-04-18T16:04:32+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=058a5a7bf5fa099fe2d6de86ac35fa5594b1ebd8'/>
<id>058a5a7bf5fa099fe2d6de86ac35fa5594b1ebd8</id>
<content type='text'>
glusterd creates the export conf file for ganesha using a hook script during
volume start, and via ganesha_manage_export() for the volume set command. But
this routine is not invoked in the glusterd restart scenario.
Consider the following case: in a three-node cluster, a volume gets exported
via ganesha while one of the nodes is offline (glusterd is not running).
When that node comes back online, the volume is not exported on it
due to the above-mentioned issue.
Unused variables have also been removed from glusterd_handle_ganesha_op().
For this patch to work, the pcs cluster should be running on that node.
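
As an illustrative sketch (editor's note; a hypothetical condensed form of the
restart-time step the message describes; the helper names are assumptions):

    /* On glusterd restart, regenerate the ganesha export config
     * for each volume that has ganesha.enable set, so a node that
     * was offline catches up with the rest of the cluster. */
    if (glusterd_check_ganesha_export (volinfo))
            manage_export_config (volinfo-&gt;volname, "on", op_errstr);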

Upstream reference
&gt;Change-Id: I5b2312c2f3cef962b1f795b9f16c8f0a27f08ee5
&gt;BUG: 1330097
&gt;Signed-off-by: Jiffin Tony Thottan &lt;jthottan@redhat.com&gt;
&gt;Reviewed-on: http://review.gluster.org/14063
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt;Reviewed-by: soumya k &lt;skoduri@redhat.com&gt;
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;

Change-Id: I5b2312c2f3cef962b1f795b9f16c8f0a27f08ee5
BUG: 1333661
Signed-off-by: Jiffin Tony Thottan &lt;jthottan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14233
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
</feed>
