<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests, branch v4.1.6</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: ensure volinfo-&gt;caps is set to correct value</title>
<updated>2018-11-05T20:38:01+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-10-03T18:28:37+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=224895148d95742c1f36b48bb79d8b9ef1ff0cd6'/>
<id>224895148d95742c1f36b48bb79d8b9ef1ff0cd6</id>
<content type='text'>
Since commit febf5ed4848, during the volume create op we set
volinfo-&gt;caps to 0 only if a brick belongs to the same node
and its brickinfo-&gt;vg[0] is null. Previously, we used to set
volinfo-&gt;caps to 0 when either a brick did not belong to the
same node or its brickinfo-&gt;vg[0] was null.

With this patch, we again set volinfo-&gt;caps to 0 when either
a brick does not belong to the same node or its
brickinfo-&gt;vg[0] is null, restoring the behaviour prior to
commit febf5ed4848.

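A rough C sketch of the restored check, using glusterd's
internal types from memory; treat the names and fields as
assumptions rather than the actual patch:

    /* Sketch only: caps survive only if every brick is local
     * and has a volume group configured. */
    static int32_t
    compute_caps(glusterd_volinfo_t *volinfo)
    {
        glusterd_brickinfo_t *brick = NULL;
        int32_t caps = volinfo-&gt;caps; /* all caps set at create */

        cds_list_for_each_entry(brick, &amp;volinfo-&gt;bricks, brick_list)
        {
            if (gf_uuid_compare(brick-&gt;uuid, MY_UUID) != 0 ||
                brick-&gt;vg[0] == '\0')
                return 0; /* either condition clears the caps */
        }
        return caps;
    }
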
&gt; BUG: bz#1635820
&gt; Change-Id: I00a97415786b775fb088ac45566ad52b402f1a49
&gt; Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;

fixes: bz#1643052
Change-Id: I00a97415786b775fb088ac45566ad52b402f1a49
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: correction in tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t</title>
<updated>2018-10-25T13:19:16+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-10-08T14:03:58+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=79806baba7c49d028d1db60dc8aacfae7b202745'/>
<id>79806baba7c49d028d1db60dc8aacfae7b202745</id>
<content type='text'>
Patch https://review.gluster.org/#/c/glusterfs/+/19135/ has
optimised the glusterd test cases by clubbing similar test
cases into a single test case.

The test case
https://review.gluster.org/#/c/glusterfs/+/19135/15/tests/bugs/glusterd/bug-1293414-import-brickinfo-uuid.t
was deleted and its checks were added as a part of
tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t

In the original test case, we create a volume with two bricks,
each on a separate node (N1 &amp; N2). From another node in the
cluster (N3), we try to detach a node which is hosting bricks.
The detach fails, as expected.

In the new test, we created a volume with a single brick on N1
and, from another node in the cluster, tried to detach N1. We
expected the peer detach to fail, but it succeeded even though
the node hosts all the bricks of the volume.

This patch changes the new test case to cover the original
test case's scenario.

Please refer to https://bugzilla.redhat.com/show_bug.cgi?id=1642597#c1
to understand why the new test case does not fail in
centos-regression.

&gt; BUG: bz#1642597

&gt; Change-Id: Ifda12b5677143095f263fbb97a6808573f513234
&gt; Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
(cherry picked from commit 0ca6773eaf5aeb507ebc72d2c2f61902eeff414c)

fixes: bz#1643075
Change-Id: Ifda12b5677143095f263fbb97a6808573f513234
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>gfapi: Bug fixes in leases processing code-path</title>
<updated>2018-10-22T16:36:03+00:00</updated>
<author>
<name>Soumya Koduri</name>
<email>skoduri@redhat.com</email>
</author>
<published>2018-10-10T16:07:07+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=fa4710bb8fbc852971d763d8727e3755436ea9c8'/>
<id>fa4710bb8fbc852971d763d8727e3755436ea9c8</id>
<content type='text'>
This patch fixes the below issues in the gfapi leases
code-path (a usage sketch follows the list):
* 'glfs_setfsleaseid' should allow NULL input so that the
  leaseid can be reset
* Applications should be allowed to (un)register for upcall
  notifications of type GLFS_EVENT_LEASE_RECALL
* APIs added to read the contents of a GLFS_EVENT_LEASE_RECALL
  argument, which is of type "struct glfs_upcall_lease"

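A hedged sketch of how an application might use these changes;
the callback/registration APIs are public gfapi ones, while the
lease-specific names follow this patch, so treat the exact
signatures as assumptions:

    #include &lt;glusterfs/api/glfs.h&gt;

    /* upcall callback: invoked when the server recalls a lease */
    static void
    recall_cbk(struct glfs_upcall *up, void *data)
    {
        /* reason value assumed from this patch's naming */
        if (glfs_upcall_get_reason(up) == GLFS_UPCALL_RECALL_LEASE) {
            struct glfs_upcall_lease *lease = glfs_upcall_get_event(up);
            (void)lease; /* read recall details via the new APIs */
        }
        glfs_free(up);
    }

    /* after a successful glfs_init(fs): */
    static void
    setup_lease_recall(struct glfs *fs)
    {
        glfs_upcall_register(fs, GLFS_EVENT_LEASE_RECALL,
                             recall_cbk, NULL);
        /* ... perform I/O under a lease ... */
        glfs_setfsleaseid(NULL); /* NULL now resets the lease id */
        glfs_upcall_unregister(fs, GLFS_EVENT_LEASE_RECALL);
    }
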
This is a backport of the below mainline patch -
https://review.gluster.org/#/c/glusterfs/+/21391

Change-Id: I3320ddf235cc82fad561e13b9457ebd64db6c76b
updates: #350
Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: check for shd up status in bug-1637802-arbiter-stale-data-heal-lock.t</title>
<updated>2018-10-22T16:36:03+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2018-10-21T12:02:52+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=b63dfd84fc8b3e08e3f005f71bf493c633452612'/>
<id>b63dfd84fc8b3e08e3f005f71bf493c633452612</id>
<content type='text'>
Problem:
https://review.gluster.org/#/c/glusterfs/+/21427/ seems to be failing
this .t spuriously. On checking one of the failure logs, I see:

22:05:44 Launching heal operation to perform index self heal on volume patchy has been unsuccessful:
22:05:44 Self-heal daemon is not running. Check self-heal daemon log file.
22:05:44 not ok 20 , LINENUM:38

In glusterd log:
[2018-10-18 22:05:44.298832] E [MSGID: 106301] [glusterd-syncop.c:1352:gd_stage_op_phase] 0-management: Staging of operation 'Volume Heal' failed on localhost : Self-heal daemon is not running. Check self-heal daemon log file

But the tests that precede this one check, via a statedump,
whether the shd is connected to the bricks; those checks
succeeded and healing had even started. From glustershd.log:

[2018-10-18 22:05:40.975268] I [MSGID: 108026] [afr-self-heal-common.c:1732:afr_log_selfheal] 0-patchy-replicate-0: Completed data selfheal on 3b83d2dd-4cf2-4ea3-a33e-4275be40f440. sources=[0] 1  sinks=2

So the only reason I can see for launching heal via CLI
failing is a race where shd has been spawned but glusterd has
not yet updated its in-memory state that shd is up, and hence
fails the CLI.

Fix:
Check for shd up status before launching heal via CLI

Change-Id: Ic88abf14ad3d51c89cb438db601fae4df179e8f4
fixes: bz#1641761
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit 3dea105556130abd4da0fd3f8f2c523ac52398d1)
</content>
</entry>
<entry>
<title>afr: prevent winding inodelks twice for arbiter volumes</title>
<updated>2018-10-10T12:55:44+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2018-10-10T12:27:33+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=5b1a94468863451d1762063e954785f4ef065374'/>
<id>5b1a94468863451d1762063e954785f4ef065374</id>
<content type='text'>
Backport of https://review.gluster.org/#/c/glusterfs/+/21380/

Problem:
In an arbiter volume, if there is a pending data heal of a file
only on the arbiter brick, self-heal takes inodelks twice due to
a code-bug but unlocks it only once, leaving behind a stale lock
on the brick. This causes the next write to the file to hang.

Fix:
Fix the code-bug to take the lock only once. This bug was
introduced in master with commit
eb472d82a083883335bc494b87ea175ac43471ff.

Thanks to Pranith Kumar K &lt;pkarampu@redhat.com&gt; for finding the RCA.

fixes: bz#1637953
Change-Id: I15ad969e10a6a3c4bd255e2948b6be6dcddc61e1
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</content>
</entry>
<entry>
<title>afr: fix incorrect reporting of directory split-brain</title>
<updated>2018-10-05T14:43:20+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2018-09-27T12:13:34+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e3e13d2d727bab46ce168c4a3b4cce2d476638ca'/>
<id>e3e13d2d727bab46ce168c4a3b4cce2d476638ca</id>
<content type='text'>
Backport of https://review.gluster.org/#/c/glusterfs/+/21135/

Problem:
When a directory has dirty xattrs due to failed post-ops or when
replace/reset brick is performed, AFR does a conservative merge as
expected, but heal-info reports it as split-brain because there are no
clear sources.

Fix:
Modify the pending flag to carry information about both pending
heals and split-brains. For directories, if the split-brain flag
is not set, just show them as needing heal and not as being in
split-brain (see the sketch below).

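A hedged illustration of such a flag; the names loosely follow
the mainline patch, but treat them as assumptions rather than
AFR's exact code:

    #define PFLAG_PENDING (1 &lt;&lt; 0) /* heal is pending          */
    #define PFLAG_SBRAIN  (1 &lt;&lt; 1) /* genuine split-brain found */

    static const char *
    heal_info_status(int pflags)
    {
        if (pflags &amp; PFLAG_SBRAIN)
            return "split-brain"; /* only with an explicit flag  */
        if (pflags &amp; PFLAG_PENDING)
            return "needs-heal";  /* dirty directories land here */
        return "clean";
    }
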
Change-Id: I09ef821f6887c87d315ae99e6b1de05103cd9383
fixes: bz#1633634
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/afr: Delegate name-heal when possible</title>
<updated>2018-09-21T13:27:00+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2018-08-27T07:10:16+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=a642f594a00e943e1fa1e121a3f4d331fed1c70b'/>
<id>a642f594a00e943e1fa1e121a3f4d331fed1c70b</id>
<content type='text'>
Problem:
When name-self-heal is triggered on the mount, it blocks
lookup until name-self-heal completes. But that can lead
to hangs when a lot of clients access a directory which
needs name heal and all of them trigger heals, each waiting
for the other clients to complete their heals.

Fix:
When a name-heal is needed, but a quorum number of names have
the file and pending xattrs exist on the parent, it is better
to delegate the heal to the SHD, which will complete it as part
of the entry-heal of the parent directory (see the sketch
below). We could do the same when a quorum number of names are
not present, but we don't have any known use-case where this is
a frequent occurrence, so that part is unchanged for now. When
there is a gfid mismatch or a missing gfid, it is important to
complete the heal inline so that the next rename doesn't assume
everything is fine and proceed.

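A hedged sketch of the delegation decision; the helper's shape
and names are hypothetical, not AFR's actual functions:

    /* Returns 1 when the name heal can be left to the SHD. */
    static int
    should_delegate_name_heal(int names_with_file, int quorum,
                              int parent_has_pending_xattrs,
                              int gfid_ok)
    {
        if (!gfid_ok)
            return 0; /* gfid mismatch/missing: heal inline now */
        if (names_with_file &gt;= quorum &amp;&amp; parent_has_pending_xattrs)
            return 1; /* parent's entry-heal will complete it   */
        return 0;     /* otherwise keep the blocking name heal  */
    }
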
fixes: bz#1625575
Change-Id: I8b002c85dffc6eb6f2833e742684a233daefeb2c
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/afr: Delegate metadata heal with pending xattrs to SHD</title>
<updated>2018-09-21T13:27:00+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2018-08-27T06:16:33+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=00dae0e2ea1260a082af33dac5b06ca6aa68ccc3'/>
<id>00dae0e2ea1260a082af33dac5b06ca6aa68ccc3</id>
<content type='text'>
Problem:
When metadata-self-heal is triggered on the mount, it blocks
lookup until metadata-self-heal completes. But that can lead
to hangs when a lot of clients access a directory which
needs metadata heal and all of them trigger heals, each waiting
for the other clients to complete their heals.

Fix:
Only when the heal is needed but the pending xattrs are not set,
trigger the metadata heal that could block lookup. This is the
only case where, without heals, different clients may see
different metadata, which should be avoided.

updates: bz#1625575
Change-Id: I6089e9fda0770a83fb287941b229c882711f4e66
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>libgfchangelog: Fix changelog history API</title>
<updated>2018-09-21T13:25:59+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2018-08-21T10:09:44+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=bbab048f0a06c34656c2c11c3cec4f2dd9883037'/>
<id>bbab048f0a06c34656c2c11c3cec4f2dd9883037</id>
<content type='text'>
Problem:
If the requested start and end times don't fall within the
first HTIME file, the history API fails even though continuous
changelogs are available for the requested range in other
HTIME files. This is induced by a changelog disable/enable
cycle, which creates a fresh HTIME index file.

Cause and Analysis:
Each HTIME index file represents a span of continuous
changelogs. If changelog is disabled and re-enabled, a new
HTIME index file is created, marking a break in continuity.
So as long as the requested start and end fall within a single
HTIME index file, and not across files, the history API should
succeed.

But the history API checks for the changelogs only in the first
HTIME index file and errors out if they are not available there.

Fix:
Check all HTIME index files for the availability of continuous
changelogs covering the requested range (see the sketch below).

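A hedged sketch of the lookup this implies; the file layout and
the htime_file_t type are assumptions for illustration, not
libgfchangelog's actual code:

    typedef struct {
        unsigned long min_ts; /* first changelog in this HTIME file */
        unsigned long max_ts; /* last changelog in this HTIME file  */
    } htime_file_t;

    /* Pick the HTIME file whose continuous span covers the
     * requested [start, end] window; -1 means the window spans
     * a disable/enable gap. */
    static int
    find_htime_file(const htime_file_t *files, int n,
                    unsigned long start, unsigned long end)
    {
        for (int i = 0; i &lt; n; i++) {
            if (start &gt;= files[i].min_ts &amp;&amp; end &lt;= files[i].max_ts)
                return i;
        }
        return -1;
    }
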
Backport of:
 &gt; Patch: https://review.gluster.org/21016/ 
 &gt; BUG: bz#1622549
 &gt; Change-Id: I80eeceb5afbd1b89f86a9dc4c320e161907d3559
 &gt; Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
(cherry picked from commit 35aa67001c8fac99b040fbc61f36ef4f1b1590ac)
fixes: bz#1630141
Change-Id: I80eeceb5afbd1b89f86a9dc4c320e161907d3559
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
</content>
</entry>
<entry>
<title>geo-rep: Fix issues related config set</title>
<updated>2018-09-21T13:25:43+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2018-09-14T07:42:26+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e50a6ee2c913b5b4df53f0efca4de66c5262d1e1'/>
<id>e50a6ee2c913b5b4df53f0efca4de66c5262d1e1</id>
<content type='text'>
1. The '--ignore-missing-args' option for rsync is not
   being used even though the rsync version is
   greater than 3.1.0. Fixed the same.

2. The '--existing' option for rsync is also not being
   used. Fixed the same.

3. geo-rep config fails to set rsync-options as the
   value contains '--'. Interestingly, python argparse
   treats a value starting with '--' (e.g., --ignore-missing-args)
   as an option. But when it is passed as
   --value=--ignore-missing-args, parsing succeeds. Fixed the
   same.

Backport of:
 &gt; Patch: https://review.gluster.org/21191
 &gt; Change-Id: Iaeb838acaff1c2920fee9c7f920c99edce13a0a1
 &gt; Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
 &gt; BUG: 1629561

Change-Id: Iaeb838acaff1c2920fee9c7f920c99edce13a0a1
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
fixes: bz#1630140
</content>
</entry>
</feed>
