<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd, branch v3.7.14</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>changelog/rpc: Fix rpc_clnt_t mem leaks</title>
<updated>2016-07-27T10:13:33+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2016-03-07T06:15:07+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=5a32c26bb59bdd20bdfc9ea00ce90c7d1c64aa04'/>
<id>5a32c26bb59bdd20bdfc9ea00ce90c7d1c64aa04</id>
<content type='text'>
Backport of http://review.gluster.org/13658

PROBLEM:
   1. Freeing the rpc_clnt object could lead to crashes. Until
      now there was no need to free the rpc-clnt object, because
      all the existing use cases reconnect on disconnect; hence
      the timer code was not taking a ref on the rpc-clnt object.

      Glusterd had some use cases that crashed due to the
      ping-timer, but only the code paths involving the
      ping-timer were fixed.

      Now that changelog has a use case where the rpc-clnt
      needs to be freed, the timer code must be fixed to take
      refs.

   2. In changelog, because of issue 1, only mydata was being
      freed, which is incorrect. There are races where the
      rpc-clnt object would access the freed mydata, which
      would lead to crashes.

      Since the changelog xlator resides on the brick side in a
      long-living process, if multiple libgfchangelog consumers
      register to changelog and disconnect/reconnect multiple
      times, one 'rpc-clnt' object is leaked per
      connect/disconnect cycle.

SOLUTION:
   1. Handle ref/unref of 'rpc_clnt' structure in timer
      functions properly.
   2. In changelog, unref 'rpc_clnt' in RPC_CLNT_DISCONNECT
      after disabling timers and free mydata on RPC_CLNT_DESTROY.

RPC SETUP IN CHANGELOG:
   1. The changelog xlator starts an rpc server, say 'changelog_rpc_server'
   2. libgfchangelog starts an rpc server, say 'libgfchangelog_rpc_server'
   3. libgfchangelog starts an rpc client and connects to 'changelog_rpc_server'
   4. In return, changelog_rpc_server starts an rpc client and connects back
      to 'libgfchangelog_rpc_server'

REF/UNREF HANDLING IN TIMER FUNCTIONS:
Let's say the rpc-clnt refcount = 1 (a self-contained sketch of these
steps follows the list).
   1. Take a ref before registering the callback with the timer queue
           &gt;&gt;&gt;&gt; rpc_clnt_ref (ref count becomes 2)
   2. Register a callback with the timer, say 'callback1'
   3. If registration fails:
           &gt;&gt;&gt;&gt; rpc_clnt_unref (ref count = 1)
   4. On timer expiration 'callback1' gets called, so unref the rpc-clnt
      at the end of 'callback1'. This matches the ref taken in step 1
           &gt;&gt;&gt;&gt; rpc_clnt_unref (ref count = 1)
   5. The cycle from step 1 to step 4 continues until a timer cancel
      event happens
   6. On timer cancel of 'callback1':
           If the cancel fails:
                 Do nothing; step 4 would have unref'd
           If the cancel succeeds:
                 &gt;&gt;&gt;&gt; rpc_clnt_unref (ref count = 1)
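
A minimal, self-contained C sketch of this ref/unref discipline (all
names here, such as obj_ref and timer_register, are hypothetical
stand-ins, not the actual glusterfs rpc-clnt/timer API):

    /* Toy model of the ref/unref steps above; not glusterfs code. */
    #include &lt;stdio.h&gt;
    #include &lt;stdlib.h&gt;

    struct rpc_clnt_sketch { int refcount; };

    static void obj_ref (struct rpc_clnt_sketch *c) { c-&gt;refcount++; }

    static void obj_unref (struct rpc_clnt_sketch *c)
    {
        if (--c-&gt;refcount == 0) {
            free (c);                   /* last ref gone: safe to free */
            puts ("rpc-clnt freed");
        }
    }

    /* A one-slot timer standing in for the real timer wheel. */
    static void (*pending_cb) (void *);
    static void *pending_data;

    static int timer_register (void (*cb) (void *), void *data)
    {
        pending_cb = cb;
        pending_data = data;
        return 0;                       /* 0 = success */
    }

    static int timer_cancel (void)
    {
        if (!pending_cb)
            return -1;                  /* too late: already fired */
        pending_cb = NULL;
        return 0;
    }

    /* Step 4: the callback drops the ref taken at registration. */
    static void callback1 (void *data)
    {
        /* ... timer work; a recurring timer would re-arm here ... */
        obj_unref (data);               /* matches the ref from step 1 */
    }

    int main (void)
    {
        struct rpc_clnt_sketch *c = calloc (1, sizeof (*c));
        c-&gt;refcount = 1;                /* owner's ref */

        obj_ref (c);                    /* step 1: ref before register */
        if (timer_register (callback1, c) != 0)
            obj_unref (c);              /* step 3: registration failed */

        if (timer_cancel () == 0)       /* step 6: cancel succeeded,   */
            obj_unref (c);              /* so drop the timer's ref     */
        /* had the cancel failed, callback1 (step 4) would unref */

        obj_unref (c);                  /* owner drops the last ref */
        return 0;
    }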

Change-Id: I91389bc511b8b1a17824941970ee8d2c29a74a09
BUG: 1359363
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
(cherry picked from commit 637ce9e2e27e9f598a4a6c5a04cd339efaa62076)
Reviewed-on: http://review.gluster.org/14993
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
</entry>
<entry>
<title>georep: add reset-sync-time option for session delete</title>
<updated>2016-07-21T10:04:23+00:00</updated>
<author>
<name>Milind Changire</name>
<email>mchangir@redhat.com</email>
</author>
<published>2016-04-22T11:26:47+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=301e4e8366759c45aaff03a7953ab5248b5f61de'/>
<id>301e4e8366759c45aaff03a7953ab5248b5f61de</id>
<content type='text'>
Set the stime xattr at all the brick roots to (0,0) if the
reset-sync-time argument has been provided on the command line.
To avoid testing against directory-specific stime, the remote
stime is assumed to be minus_infinity before the directory scan
begins, if the root directory stime is set to (0,0).
This triggers a full volume resync to the slave when a geo-rep
session is recreated with the same master-slave volume pair.

Command synopsis:
gluster volume geo-replication &lt;MASTERVOL&gt; &lt;SLAVE&gt;::&lt;SLAVEVOL&gt; delete \
    [reset-sync-time]

Update the gluster CLI man page to include the new sub-command reset-sync-time.
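
For illustration, a hedged C sketch of the reset itself. The xattr key
below is a placeholder (the real key embeds the master and slave volume
uuids), and encoding the (sec, nsec) pair as two network-byte-order
32-bit integers is an assumption, not the documented on-disk format:

    #include &lt;arpa/inet.h&gt;
    #include &lt;stdint.h&gt;
    #include &lt;stdio.h&gt;
    #include &lt;sys/xattr.h&gt;

    int main (int argc, char **argv)
    {
        const char *brick_root = argc &gt; 1 ? argv[1] : "/bricks/brick1";
        /* placeholder key; the real one embeds volume uuids */
        const char *key = "trusted.glusterfs.MASTER.SLAVE.stime";
        uint32_t stime[2] = { htonl (0), htonl (0) };   /* (0, 0) */

        if (setxattr (brick_root, key, stime, sizeof (stime), 0) != 0) {
            perror ("setxattr");
            return 1;
        }
        return 0;
    }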

Change-Id: Ie4ce03b9425ed9bb81eda8681058c0fc6f990948
BUG: 1357772
Signed-off-by: Milind Changire &lt;mchangir@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14051
Reviewed-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Aravinda VK &lt;avishwan@redhat.com&gt;
(cherry picked from commit 70fd68d94f768c098b3178c151fa92c5079a8cfd)
Reviewed-on: http://review.gluster.org/14952
</content>
</entry>
<entry>
<title>Glusterd: printing the node details on error message of rebalance</title>
<updated>2016-07-15T13:46:10+00:00</updated>
<author>
<name>hari</name>
<email>hgowtham@redhat.com</email>
</author>
<published>2016-05-23T06:37:21+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=c28aca72289c6526abc48d08f669c50e13ef4ab5'/>
<id>c28aca72289c6526abc48d08f669c50e13ef4ab5</id>
<content type='text'>
        Back-port of: http://review.gluster.org/#/c/14495

Problem: when rebalance start is issued on a volume while one of
the glusterd instances in the cluster is down, the error message
mentions only the brick path.

Fix: add the node details to the error message, as sketched below.
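
Illustrative sketch of the message change (function name and wording
are hypothetical, not glusterd's exact strings):

    #include &lt;stdio.h&gt;

    /* before the fix the message carried only brick_path; the fix
     * adds the hostname of the node the brick lives on */
    static void report_brick_error (const char *hostname,
                                    const char *brick_path)
    {
        fprintf (stderr,
                 "Received error from brick %s on node %s. "
                 "Please check if glusterd is running on that node.\n",
                 brick_path, hostname);
    }

    int main (void)
    {
        report_brick_error ("node2.example.com", "/bricks/brick1");
        return 0;
    }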

&gt;Change-Id: I5827d3a9a15b0461c9ce3a51c0b16246ca58f335
&gt;BUG: 1337899
&gt;Signed-off-by: hari &lt;hgowtham@redhat.com&gt;

Change-Id: I3075f3a73e289dfe577742a3d5086531026f567d
BUG: 1339923
Signed-off-by: hari gowtham &lt;hgowtham@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14540
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: hari gowtham &lt;hari.gowtham005@gmail.com&gt;
Reviewed-by: Zhou Zhengping &lt;johnzzpcrystal@gmail.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Fix gsyncd upgrade issue</title>
<updated>2016-07-15T13:45:23+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2016-07-11T19:09:31+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=ce1905c58fd604f1ef00a55609a832d1cf658c7e'/>
<id>ce1905c58fd604f1ef00a55609a832d1cf658c7e</id>
<content type='text'>
Backport of http://review.gluster.org/14898/

Problem:
    gluster upgrade is not generating new volfiles

Cause:
During upgrade, "glusterd --xlator-option *.upgrade=on -N"
is run to generate new volfiles. It is run after the 'glusterfs'
rpm is installed. The command fails during upgrade if
geo-replication is installed, because on glusterd start the
'gsyncd' binary is invoked to configure geo-replication.
Since the 'glusterfs' rpm is installed prior to the 'geo-rep'
rpm, the 'gsyncd' binary used by the glusterd upgrade command
is the old version, so the command fails before generating new
volfiles.

Solution:
Don't call the geo-replication configure step during
upgrade/downgrade. Geo-replication configuration happens when
glusterd starts after the upgrade.
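
A sketch of the solution's shape, with hypothetical names (glusterd's
actual option handling differs):

    #include &lt;stdbool.h&gt;
    #include &lt;stdio.h&gt;

    struct glusterd_conf_sketch { bool upgrade; bool downgrade; };

    static int configure_georep (const struct glusterd_conf_sketch *conf)
    {
        if (conf-&gt;upgrade || conf-&gt;downgrade) {
            /* defer: the post-upgrade glusterd start configures
             * geo-rep with the matching gsyncd version */
            return 0;
        }
        printf ("running gsyncd configuration ...\n"); /* placeholder */
        return 0;
    }

    int main (void)
    {
        struct glusterd_conf_sketch conf = { .upgrade = true };
        return configure_georep (&amp;conf);
    }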

Change-Id: Id58ea44ead9f69982f86fb68dc5b9ee3f6cd11a1
BUG: 1356426
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
(cherry picked from commit 1b998788ece8c8b52657e8b9aae65d3279690c5b)
Reviewed-on: http://review.gluster.org/14915
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
<entry>
<title>feature/bitrot: Show whether scrub is in progress/idle</title>
<updated>2016-07-15T13:42:25+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2016-07-04T11:55:57+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=8c6700bc7bc78ed4754bf2e59fd28a40530d4e76'/>
<id>8c6700bc7bc78ed4754bf2e59fd28a40530d4e76</id>
<content type='text'>
Backport of http://review.gluster.org/14864/

Bitrot scrub status shows whether the scrub is paused or active,
but not whether the scrubber is actually scrubbing or waiting in
the timer wheel for its next schedule. This patch reports those
two states as "In Progress" and "Idle" respectively.

Change-Id: I995d8553d1ff166503ae1e7b46282fc3ba961f0b
BUG: 1355635
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
(cherry picked from commit f4757d256e3e00132ef204c01ed61f78f705ad6b)
Reviewed-on: http://review.gluster.org/14900
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: compare uuid instead of hostname address resolution</title>
<updated>2016-07-07T09:33:41+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2016-07-03T10:21:20+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=c93839ed1b4d10e492a6e0aacb19e42c761d4bf3'/>
<id>c93839ed1b4d10e492a6e0aacb19e42c761d4bf3</id>
<content type='text'>
Backport of http://review.gluster.org/14849

In glusterd_get_brickinfo () the brick's hostname is address resolved. This
adds unnecessary latency, since it uses calls like getaddrinfo (). Instead,
given that the local brick's uuid is already known, comparing MY_UUID and
brickinfo-&gt;uuid is much more lightweight than the previous approach.
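
The cheaper check amounts to a 16-byte uuid comparison. A sketch using
libuuid (compile with -luuid; the names are illustrative, not
glusterd's own):

    #include &lt;stdbool.h&gt;
    #include &lt;uuid/uuid.h&gt;

    static bool brick_is_local (const uuid_t my_uuid,
                                const uuid_t brick_uuid)
    {
        /* plain memory comparison: no DNS lookup, no getaddrinfo()
         * latency on the signin path */
        return uuid_compare (my_uuid, brick_uuid) == 0;
    }

    int main (void)
    {
        uuid_t u;
        uuid_generate (u);
        return brick_is_local (u, u) ? 0 : 1;
    }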

In scale testing on a cluster hosting ~400 volumes spanning 4 nodes, a few of
the bricks would not come up after a node reboot. After a few days of analysis
it was found that glusterd_pmap_signin () accounted for a significant amount
of latency, and a further code walkthrough revealed this unnecessary address
resolution. Applying this fix solves the issue, and all the brick processes
now come up on a node reboot.

Change-Id: I299b8660ce0da6f3f739354f5c637bc356d82133
BUG: 1352833
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14849
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
Reviewed-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14861
</content>
</entry>
<entry>
<title>glusterd: fail volume delete if one of the node is down</title>
<updated>2016-07-04T11:49:18+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2016-06-09T12:52:43+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=1a21cfba8e7a5f4ac1b8a8c3b8e06574b237420d'/>
<id>1a21cfba8e7a5f4ac1b8a8c3b8e06574b237420d</id>
<content type='text'>
Backport of http://review.gluster.org/14681

Deleting a volume on a cluster where one of the cluster's nodes is down is
buggy, since once that node comes back up the same volume will be resynced.
Until the soft-delete feature tracked in http://review.gluster.org/12963 is
brought in, this is a safeguard to block the volume deletion.
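
The safeguard is conceptually a peer-status scan before the delete is
allowed. A self-contained sketch (hypothetical types, not glusterd's):

    #include &lt;stdbool.h&gt;
    #include &lt;stdio.h&gt;

    struct peer_sketch { const char *hostname; bool connected; };

    static int validate_volume_delete (const struct peer_sketch *peers,
                                       int n)
    {
        for (int i = 0; i &lt; n; i++) {
            if (!peers[i].connected) {
                fprintf (stderr,
                         "volume delete rejected: peer %s is down\n",
                         peers[i].hostname);
                return -1;              /* block the delete */
            }
        }
        return 0;                       /* all peers up: allow it */
    }

    int main (void)
    {
        struct peer_sketch cluster[] = { { "node1", true },
                                         { "node2", false } };
        return validate_volume_delete (cluster, 2) ? 1 : 0;
    }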

Please note that the test file backported with this commit has an issue: it
starts the volume and then tries to delete it, which is going to fail anyway.
So the test doesn't actually validate the fix.
http://review.gluster.org/#/c/14693/ fixed the problem in master, and the
same is ported as part of this commit as well.

Change-Id: I9c13869c4a7e7a947f88842c6dc6f231c0eeda6c
BUG: 1344634
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14681
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-on: http://review.gluster.org/14692
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/geo-rep: Avoid started status check if same host</title>
<updated>2016-06-14T06:23:14+00:00</updated>
<author>
<name>Saravanakumar Arumugam</name>
<email>sarumuga@redhat.com</email>
</author>
<published>2016-06-06T09:14:35+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0eddfc43be1939f4fe892cd09c7356091b243b3a'/>
<id>0eddfc43be1939f4fe892cd09c7356091b243b3a</id>
<content type='text'>
After carrying out add-brick, session creation is carried out again to
involve the new brick in the session. This needs to be done even if the
session is in the Started state.

When involving the slave uuid as part of a session, the user is warned
if the session is in the Started state. This check needs to be skipped
if the slave host is the same, so that session creation can proceed.
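
The relaxed condition, as a sketch (hypothetical names; the real check
lives in glusterd's geo-rep create handler):

    #include &lt;stdbool.h&gt;
    #include &lt;stdio.h&gt;
    #include &lt;string.h&gt;

    static int georep_create_check (const char *session_state,
                                    const char *existing_slave_host,
                                    const char *new_slave_host)
    {
        bool started  = strcmp (session_state, "Started") == 0;
        bool samehost = strcmp (existing_slave_host,
                                new_slave_host) == 0;

        if (started &amp;&amp; !samehost) {
            fprintf (stderr, "session already Started; aborting\n");
            return -1;
        }
        return 0;       /* same slave host: proceed with recreation */
    }

    int main (void)
    {
        return georep_create_check ("Started", "slave1", "slave1");
    }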

Change-Id: Ic73edd5bd9e3ee55da96f5aceec0bafa14d3f3dd
BUG: 1344605
Signed-off-by: Saravanakumar Arumugam &lt;sarumuga@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14653
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Aravinda VK &lt;avishwan@redhat.com&gt;
(cherry picked from commit c62493efadbcf5085bbd65a409eed9391301c154)
Reviewed-on: http://review.gluster.org/14710
Tested-by: Aravinda VK &lt;avishwan@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/geo-rep: upgrade path when slave vol uuid involved</title>
<updated>2016-06-13T10:36:33+00:00</updated>
<author>
<name>Saravanakumar Arumugam</name>
<email>sarumuga@redhat.com</email>
</author>
<published>2016-05-19T15:43:04+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=2b984e60b97971f1cd2e661081efee3c4a387bcc'/>
<id>2b984e60b97971f1cd2e661081efee3c4a387bcc</id>
<content type='text'>
The slave volume uuid is involved in identifying a geo-replication
session.

This patch addresses the upgrade path, where an existing geo-rep session
is gracefully upgraded to involve the slave volume uuid.

Change-Id: Ib7ff5109b161592f24fc86fc7e93a407655fab86
BUG: 1342453
Reviewed-on: http://review.gluster.org/#/c/14425
Signed-off-by: Saravanakumar Arumugam &lt;sarumuga@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14641
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Aravinda VK &lt;avishwan@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/snapshot: remove quota related options from snap volfile</title>
<updated>2016-06-03T15:38:27+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2016-06-01T17:31:37+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=8d493c22deaf52db82f03c049f99d5ac5857769f'/>
<id>8d493c22deaf52db82f03c049f99d5ac5857769f</id>
<content type='text'>
Enabling inode-quota on a snapshot volume is unnecessary: a snapshot
is a read-only volume, so there is no need to enforce quota on it.

This patch removes the quota-related options from the snapshot
volfile.
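
In effect the volfile generator skips quota keys for snapshot volumes.
A sketch (the option names shown are illustrative):

    #include &lt;stdbool.h&gt;
    #include &lt;stdio.h&gt;
    #include &lt;string.h&gt;

    static bool is_quota_option (const char *key)
    {
        return strncmp (key, "features.quota", 14) == 0 ||
               strncmp (key, "features.inode-quota", 20) == 0;
    }

    static void emit_option (bool is_snapshot, const char *key,
                             const char *val)
    {
        if (is_snapshot &amp;&amp; is_quota_option (key))
            return;                     /* read-only volume: skip */
        printf ("option %s %s\n", key, val);
    }

    int main (void)
    {
        emit_option (true, "features.inode-quota", "on"); /* skipped */
        emit_option (true, "performance.readdir-ahead", "on");
        return 0;
    }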

Backport of:
&gt;Change-Id: Iddabcb83820dac2384924a01d45abe1ef1e95600
&gt;BUG: 1341796
&gt;Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
&gt;Reviewed-on: http://review.gluster.org/14608
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt;Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt;Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;

(cherry picked from commit 03d523504230c336cf585159266e147945f31153)

Change-Id: I3c41d8abe8631e57a5fc58e9a0d7550c558ab231
BUG: 1342374
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14629
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
</feed>
