<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/cluster, branch v7.7</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>cluster/afr: Prioritize ENOSPC over other errors</title>
<updated>2020-06-22T05:42:20+00:00</updated>
<author>
<name>karthik-us</name>
<email>ksubrahm@redhat.com</email>
</author>
<published>2020-05-21T09:48:59+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=1a91aa4dbbe096ac0eb06c2a2977b6d30e555114'/>
<id>1a91aa4dbbe096ac0eb06c2a2977b6d30e555114</id>
<content type='text'>
Problem:
In a replicate/arbiter volume, if file creations or writes fail on a
quorum number of bricks, and on one brick the failure is due to
ENOSPC while on another brick it fails for a different reason, the
fop may be failed with an error other than ENOSPC in some cases.

Fix:
Prioritize ENOSPC over other lower-priority errors, and do not set
op_errno in posix_gfid_set if op_ret is 0, to avoid returning an
errno which can be misinterpreted by __afr_dir_write_finalize().

Also remove the function afr_has_arbiter_fop_cbk_quorum(), which
might consider a successful reply from a single brick as quorum
success in some cases, whereas we always need the fop to succeed on a
quorum number of bricks in an arbiter configuration.
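
A minimal sketch, with hypothetical names (not the actual afr code),
of giving ENOSPC precedence when combining per-brick errnos:

#include &lt;errno.h&gt;

/* Hypothetical sketch: combine per-brick errnos, letting ENOSPC win
 * over any other error so the application sees the real cause. */
static int
pick_op_errno (const int *errnos, int count)
{
    int chosen = 0;
    for (int i = 0; i &lt; count; i++) {
        if (errnos[i] == ENOSPC)
            return ENOSPC;       /* highest priority: report it */
        if (errnos[i] != 0 &amp;&amp; chosen == 0)
            chosen = errnos[i];  /* otherwise keep the first error */
    }
    return chosen;
}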

Change-Id: I106e267f8b9451f681022f1cccb410d9bc824c08
Fixes: #1254
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
(cherry picked from commit fa63b45ca5edf172b1b89b28b5db3c5129cc57b6)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
In a replicate/arbiter volume if file creations or writes fails on
quorum number of bricks and on one brick it is due to ENOSPC and
on other brick it fails for a different reason, it may fail with
errors other than ENOSPC in some cases.

Fix:
Prioritize ENOSPC over other lesser priority errors and do not set
op_errno in posix_gfid_set if op_ret is 0 to avoid receiving any
error_no which can be misinterpreted by __afr_dir_write_finalize().

Also removing the function afr_has_arbiter_fop_cbk_quorum() which
might consider a successful reply form a single brick as quorum
success in some cases, whereas we always need fop to be successful
on quorum number of bricks in arbiter configuration.

Change-Id: I106e267f8b9451f681022f1cccb410d9bc824c08
Fixes: #1254
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
(cherry picked from commit fa63b45ca5edf172b1b89b28b5db3c5129cc57b6)
</pre>
</div>
</content>
</entry>
<entry>
<title>afr: more quorum checks in lookup and new entry marking</title>
<updated>2020-06-18T05:57:18+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2020-06-16T05:17:47+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=2b9db55d2bf152bdb114f79dd6b7f2a12c379c26'/>
<id>2b9db55d2bf152bdb114f79dd6b7f2a12c379c26</id>
<content type='text'>
Problem: See the github issue for details.

Fix:
-In lookup, if the entry exists on 2 out of 3 bricks, don't fail the
lookup with ENOENT just because there is an entrylk on the parent.
Consider quorum before deciding (a sketch of the check follows below).

-If the entry FOP does not succeed on a quorum number of bricks, do
not perform new-entry marking.
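
A minimal sketch, with hypothetical names, of the quorum arithmetic
involved (for replica 3, a quorum is 2 bricks):

/* Hypothetical sketch: a fop (or a lookup reply) counts as quorate
 * only if it succeeded on more than half of the bricks. */
static int
have_quorum (int success_count, int child_count)
{
    return success_count &gt;= (child_count / 2 + 1);
}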

Fixes: #1303
Change-Id: I56df8c89ad53b29fa450c7930a7b7ccec9f4a6c5
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit c4a6748f25d2c1ab3ebcf89952278ebf94c8d371)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem: See github issue for details.

Fix:
-In lookup if the entry exists in 2 out of 3 bricks, don't fail the
lookup with ENOENT just because there is an entrylk on the parent.
Consider quorum before deciding.

-If entry FOP does not succeed on quorum no. of bricks, do not perform
new entry mark.

Fixes: #1303
Change-Id: I56df8c89ad53b29fa450c7930a7b7ccec9f4a6c5
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit c4a6748f25d2c1ab3ebcf89952278ebf94c8d371)
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/ec: Return correct error code and log message</title>
<updated>2020-06-15T10:50:39+00:00</updated>
<author>
<name>Ashish Pandey</name>
<email>aspandey@redhat.com</email>
</author>
<published>2020-05-05T12:47:49+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=928b56eb0d54f08ee292f582d07101c89d6fa3aa'/>
<id>928b56eb0d54f08ee292f582d07101c89d6fa3aa</id>
<content type='text'>
If a readdir is sent on an FD whose opendir failed, that FD is
useless and we fail the fop with an error. Currently we return
EINVAL without logging any message in the log file.

Return the correct error code and also log a message to make
debugging easier.
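
A minimal, self-contained sketch with hypothetical names (not the
actual ec code) of remembering why the opendir failed and surfacing
that errno on readdir:

#include &lt;stdio.h&gt;

/* Hypothetical sketch: keep the errno of the failed opendir in the fd
 * context and return it, with a log line, on a later readdir. */
struct fd_state {
    int open_failed;   /* set when opendir returned an error */
    int open_errno;    /* the errno opendir failed with */
};

static int
readdir_check (const struct fd_state *fd)
{
    if (fd-&gt;open_failed) {
        fprintf (stderr, "readdir on an fd whose opendir failed: %d\n",
                 fd-&gt;open_errno);
        return -fd-&gt;open_errno;   /* propagate the real error */
    }
    return 0;
}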

fixes: #1220
Change-Id: Iaf035254b9c5aa52fa43ace72d328be622b06169
(cherry picked from commit af70cb5eedd80207cd184e69f2a4fb252b72d070)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
In case of readdir was send with an FD on which opendir
was failed, this FD will be useless and we return it with error.
For now, we are returning it with EINVAL without logging any
message in log file.

Return a correct error code and also log the message to improve thing to debug.

fixes: #1220
Change-Id: Iaf035254b9c5aa52fa43ace72d328be622b06169
(cherry picked from commit af70cb5eedd80207cd184e69f2a4fb252b72d070)
</pre>
</div>
</content>
</entry>
<entry>
<title>afr: event gen changes</title>
<updated>2020-05-09T12:14:06+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2020-04-10T09:00:57+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=4d3a249418ef6b391c93943ff29f58145448cf7b'/>
<id>4d3a249418ef6b391c93943ff29f58145448cf7b</id>
<content type='text'>
The general idea of the changes is to prevent resetting the event
generation to zero in the inode ctx, since event gen is something that
should follow 'causal order'.

Change #1:
For a read txn, in the inode refresh cbk, if event_generation is
found to be zero, we fail the read fop. This is not needed because a
change in event gen is only a marker for the next inode refresh to
happen and should not be taken into account by the current read txn.

Change #2:
The event gen being zero above can happen if there is a racing lookup,
which resets the event gen (in afr_lookup_done) if there are non-zero
afr xattrs. The reset is done only to trigger an inode refresh and a
possible client-side heal on the next lookup. That can be achieved by
setting the need_refresh flag in the inode ctx. So all occurrences of
resetting the event gen to zero are replaced with a call to
afr_inode_need_refresh_set().

Change #3:
In both the lookup and discover paths, we do an inode refresh which is
not required since all three essentially do the same thing: update the
inode ctx with the good/bad copies from the brick replies. Inode
refresh also triggers background heals, but I think it is okay to do
that when we call refresh during the read and write txns and not in
the lookup path.

The .t tests which relied on the inode refresh in the lookup path to
trigger heals are now changed to do a read txn so that the inode
refresh and the heal happen.
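
A minimal sketch, with hypothetical structure and field names, of the
idea behind Change #2:

/* Hypothetical sketch: the event generation only moves forward; the
 * need for a refresh is expressed with a flag instead of zeroing it. */
struct inode_ctx {
    unsigned int event_gen;     /* monotonic, never reset to zero */
    int          need_refresh;  /* set to force the next refresh */
};

static void
inode_need_refresh_set (struct inode_ctx *ctx)
{
    ctx-&gt;need_refresh = 1;      /* replaces: ctx-&gt;event_gen = 0 */
}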

Change-Id: Iebf39a9be6ffd7ffd6e4046c96b0fa78ade6c5ec
Fixes: #1179
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reported-by: Erik Jacobson &lt;erik.jacobson at hpe.com&gt;
(cherry picked from commit f0fcd909ad4535b60c9208d4804ebe6afe421a09)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The general idea of the changes is to prevent resetting event generation
to zero in the inode ctx, since event gen is something that should
follow 'causal order'.

Change #1:
For a read txn, in inode refresh cbk, if event_generation is
found zero, we are failing the read fop. This is not needed
because change in event gen is only a marker for the next inode refresh to
happen and should not be taken into account by the current read txn.

Change #2:
The event gen being zero above can happen if there is a racing lookup,
which resets even get (in afr_lookup_done) if there are non zero afr
xattrs. The resetting is done only to trigger an inode refresh and a
possible client side heal on the next lookup. That can be acheived by
setting the need_refresh flag in the inode ctx. So replaced all
occurences of resetting even gen to zero with a call to
afr_inode_need_refresh_set().

Change #3:
In both lookup and discover path, we are doing an inode refresh which is
not required since all 3 essentially do the same thing- update the inode
ctx with the good/bad copies from the brick replies. Inode refresh also
triggers background heals, but I think it is okay to do it when we call
refresh during the read and write txns and not in the lookup path.

The .ts which relied on inode refresh in lookup path to trigger heals are
now changed to do read txn so that inode refresh and the heal happens.

Change-Id: Iebf39a9be6ffd7ffd6e4046c96b0fa78ade6c5ec
Fixes: #1179
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reported-by: Erik Jacobson &lt;erik.jacobson at hpe.com&gt;
(cherry picked from commit f0fcd909ad4535b60c9208d4804ebe6afe421a09)
</pre>
</div>
</content>
</entry>
<entry>
<title>afr: mark pending xattrs as a part of metadata heal</title>
<updated>2020-04-07T11:09:25+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2018-12-24T07:30:19+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=b1071f4257f139ed2a01ffb881e05d1ac00a4d37'/>
<id>b1071f4257f139ed2a01ffb881e05d1ac00a4d37</id>
<content type='text'>
...if pending xattrs are zero for all children.

Problem:
If there are no pending xattrs and a metadata heal needs to be
performed, we can end up with xattrs inadvertently deleted from all
bricks, as explained in the BZ.

Fix:
After picking one among the sources as the good copy, mark pending
xattrs on all sources to blame the sinks. Now even if this metadata
heal fails midway, a subsequent heal will still choose one of the
valid sources that it picked previously.
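
A minimal sketch with hypothetical names and a flat counter matrix
(the real code records blame in afr pending xattrs) of sources
blaming sinks:

#define CHILD_MAX 16

/* Hypothetical sketch: every source gets a pending mark blaming every
 * sink, so a heal interrupted midway can resume with a valid source. */
static void
mark_pending (const int *sources, const int *sinks,
              int pending[CHILD_MAX][CHILD_MAX], int child_count)
{
    for (int i = 0; i &lt; child_count; i++) {
        if (!sources[i])
            continue;
        for (int j = 0; j &lt; child_count; j++)
            if (sinks[j])
                pending[i][j] += 1;   /* source i blames sink j */
    }
}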

Updates: #1067
Change-Id: If1b050b70b0ad911e162c04db4d89b263e2b8d7b
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit 2d5ba449e9200b16184b1e7fc84cabd015f1f779)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
...if pending xattrs are zero for all children.

Problem:
If there are no pending xattrs and a metadata heal needs to be
performed, it can be possible that we end up with xattrs inadvertendly
deleted from all bricks, as explained in the  BZ.

Fix:
After picking one among the sources as the good copy, mark pending xattrs on
all sources to blame the sinks. Now even if this metadata heal fails midway,
a subsequent heal will still choose one of the valid sources that it
picked previously.

Updates: #1067
Change-Id: If1b050b70b0ad911e162c04db4d89b263e2b8d7b
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit 2d5ba449e9200b16184b1e7fc84cabd015f1f779)
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/ec: Change handling of heal failure to avoid crash</title>
<updated>2020-03-17T14:27:36+00:00</updated>
<author>
<name>Ashish Pandey</name>
<email>aspandey@redhat.com</email>
</author>
<published>2019-07-11T11:22:49+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=5b9e4bbeb115d39e1a5e929b4ffbe981e4597ae7'/>
<id>5b9e4bbeb115d39e1a5e929b4ffbe981e4597ae7</id>
<content type='text'>
Problem:
ec_getxattr_heal_cbk was called with NULL as its second argument when
the heal failed. This function dereferenced the "cookie" argument,
which caused a crash.

Solution:
The cookie is changed to carry the value that was supposed to be
stored in fop-&gt;data, so even when fop is NULL in the error case,
there won't be any NULL dereference.
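
A minimal sketch, with hypothetical types, of why carrying the value
in the cookie is safe:

struct heal_data { int status; };

/* Hypothetical sketch: the value formerly read from fop-&gt;data
 * travels in the cookie, so the callback never touches fop, which may
 * be NULL when the heal failed. */
static int
heal_cbk (void *cookie, void *fop)
{
    struct heal_data *data = cookie;   /* valid even if fop == NULL */
    (void) fop;                        /* deliberately unused */
    return data ? data-&gt;status : -1;
}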

Thanks to Xavi for the suggestion about the fix.

Change-Id: I0798000d5cadb17c3c2fbfa1baf77033ffc2bb8c
updates: #1061
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
ec_getxattr_heal_cbk was called with NULL as second argument
in case heal was failing.
This function was dereferencing "cookie" argument which caused crash.

Solution:
Cookie is changed to carry the value that was supposed to be
stored in fop-&gt;data, so even in the case when fop is NULL in error
case, there won't be any NULL dereference.

Thanks to Xavi for the suggestion about the fix.

Change-Id: I0798000d5cadb17c3c2fbfa1baf77033ffc2bb8c
updates: #1061
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/afr: fix race when bricks come up</title>
<updated>2020-03-16T08:18:51+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2020-03-01T18:49:04+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=87f1163e83339b2c2923106e0d9a45e8a327bb7a'/>
<id>87f1163e83339b2c2923106e0d9a45e8a327bb7a</id>
<content type='text'>
There was a problem when self-heal was sending lookups at the same
time that one of the bricks was coming up. In this case there was a
chance that the number of 'up' bricks changed in the middle of
sending the requests to the subvolumes, which caused a discrepancy
between the expected number of replies and the actual number of sent
requests.

This discrepancy caused AFR to continue executing requests before all
requests were complete. Eventually, the frame of the pending request
was destroyed when the operation terminated, causing a use-after-free
issue when the answer was finally received.

In theory the same thing could happen the other way around, i.e. AFR
could wait for more replies than sent requests, causing a hang.
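
A minimal sketch, with hypothetical names, of the underlying idea:
snapshot the set of up bricks once, and derive both the sends and the
expected reply count from that snapshot:

#include &lt;string.h&gt;

#define CHILD_MAX 16

struct txn {
    unsigned char on[CHILD_MAX];  /* frozen snapshot of up bricks */
    int           expected;       /* replies to wait for */
};

/* Hypothetical sketch: requests are sent and replies are counted from
 * the same frozen snapshot, so a brick flipping up mid-operation
 * cannot skew the bookkeeping. */
static void
txn_snapshot (struct txn *t, const unsigned char *child_up, int n)
{
    memcpy (t-&gt;on, child_up, n);
    t-&gt;expected = 0;
    for (int i = 0; i &lt; n; i++)
        t-&gt;expected += t-&gt;on[i];
}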

Backport of:
&gt; Change-Id: I7ed6108554ca379d532efb1a29b2de8085410b70
&gt; Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
&gt; Fixes: bz#1808875

Change-Id: I7ed6108554ca379d532efb1a29b2de8085410b70
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
Fixes: bz#1809438
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The was a problem when self-heal was sending lookups at the same time
that one of the bricks was coming up. In this case there was a chance
that the number of 'up' bricks changes in the middle of sending the
requests to subvolumes which caused a discrepancy in the expected
number of replies and the actual number of sent requests.

This discrepancy caused that AFR continued executing requests before
all requests were complete. Eventually, the frame of the pending
request was destroyed when the operation terminated, causing a use-
after-free issue when the answer was finally received.

In theory the same thing could happen in the reverse way, i.e. AFR
tries to wait for more replies than sent requests, causing a hang.

Backport of:
&gt; Change-Id: I7ed6108554ca379d532efb1a29b2de8085410b70
&gt; Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
&gt; Fixes: bz#1808875

Change-Id: I7ed6108554ca379d532efb1a29b2de8085410b70
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
Fixes: bz#1809438
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/ec: skip updating ctx-&gt;loc again when ec_fix_open/opendir</title>
<updated>2020-03-16T08:01:00+00:00</updated>
<author>
<name>Kinglong Mee</name>
<email>kinglongmee@gmail.com</email>
</author>
<published>2019-07-11T10:57:13+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=36e26424f5eabe1332a49c4ffaa3214d38675e29'/>
<id>36e26424f5eabe1332a49c4ffaa3214d38675e29</id>
<content type='text'>
ec_manager_open/opendir memsets ctx-&gt;loc, which causes a
memory/inode leak, and ec_fheal uses ctx-&gt;loc outside of
fd-&gt;lock, so loc_copy may copy bad data while the memset is in
progress.

This patch skips updating ctx-&gt;loc when it is already initialized.
With it, ctx-&gt;loc is filled once and never updated.
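
A minimal sketch, with hypothetical names, of the fill-once
behaviour:

#include &lt;stdio.h&gt;

/* Hypothetical sketch: the cached location is filled exactly once and
 * never memset/overwritten afterwards. */
struct cached_loc {
    int  initialized;
    char path[4096];
};

static void
ctx_loc_fill_once (struct cached_loc *ctx, const char *path)
{
    if (ctx-&gt;initialized)
        return;                  /* already filled: skip the update */
    snprintf (ctx-&gt;path, sizeof (ctx-&gt;path), "%s", path);
    ctx-&gt;initialized = 1;
}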

Change-Id: I3bf5ffce4caf4c1c667f7acaa14b451d37a3550a
fixes: bz#1806843
Signed-off-by: Kinglong Mee &lt;mijinlong@horiscale.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The ec_manager_open/opendir memsets ctx-&gt;loc which causes
memory/inode leak, and ec_fheal uses ctx-&gt;loc out of fd-&gt;lock
that loc_copy may copy bad data when memset it.

This patch skips updating ctx-&gt;loc when it is initilizaed.
With it, ctx-&gt;loc is filled once, and never updated.

Change-Id: I3bf5ffce4caf4c1c667f7acaa14b451d37a3550a
fixes: bz#1806843
Signed-off-by: Kinglong Mee &lt;mijinlong@horiscale.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>afr: prevent spurious entry heals leading to gfid split-brain</title>
<updated>2020-02-25T17:07:04+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2020-02-11T09:04:48+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=2c5b724c0071ae184dd4bfd441d7346106387ed5'/>
<id>2c5b724c0071ae184dd4bfd441d7346106387ed5</id>
<content type='text'>
Problem:
In a hyperconverged setup with granular-entry-heal enabled, if a file
is recreated while one of the bricks is down and an index heal is
triggered (with the brick still down), entry self-heal was doing a
spurious heal with just the 2 good bricks. It was doing a post-op
leading to the removal of the filename from
.glusterfs/indices/entry-changes as well as an erroneous setting of
afr xattrs on the parent. When the brick came up, the xattrs were
cleared, resulting in the renamed file not getting healed and leading
to gfid split-brain and EIO on the mount.

Fix:
Proceed with entry heal only when the shd can connect to all bricks
of the replica, just like in data and metadata heal.
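
A minimal sketch, with hypothetical names, of the gating condition:

/* Hypothetical sketch: entry heal proceeds only when the self-heal
 * daemon can reach every brick of the replica. */
static int
can_entry_heal (const unsigned char *child_up, int child_count)
{
    for (int i = 0; i &lt; child_count; i++)
        if (!child_up[i])
            return 0;    /* a brick is down: defer the heal */
    return 1;
}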

fixes: bz#1804591
Change-Id: I916ae26ad1fabf259bc6362da52d433b7223b17e
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit 06453d77d056fbaa393a137ca277a20e38d2f67e)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
In a hyperconverged setup with granular-entry-heal enabled, if a file is
recreated while one of the bricks is down, and an index heal is triggered
(with the brick still down), entry-self heal was doing a spurious heal
with just the 2 good bricks. It was doing a post-op leading to removal
of the filename from .glusterfs/indices/entry-changes as well as
erroneous setting of afr xattrs on the parent. When the brick came up,
the xattrs were cleared, resulting in the renamed file not getting
healed and leading to gfid split-brain and EIO on the mount.

Fix:
Proceed with entry heal only when shd can connect to all bricks of the replica,
just like in data and metadata heal.

fixes: bz#1804591
Change-Id: I916ae26ad1fabf259bc6362da52d433b7223b17e
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit 06453d77d056fbaa393a137ca277a20e38d2f67e)
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/thin-arbiter: Wait for TA connection before ta-file lookup</title>
<updated>2020-02-18T06:49:31+00:00</updated>
<author>
<name>Ashish Pandey</name>
<email>aspandey@redhat.com</email>
</author>
<published>2020-01-03T11:24:33+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=fc0f7442f038ce86344dd6b735abcff6a50533b8'/>
<id>fc0f7442f038ce86344dd6b735abcff6a50533b8</id>
<content type='text'>
Problem:
When we mount a ta volume, as soon as 2 data bricks are connected we
consider the mount done and then send a lookup/create on the ta file
on the ta node. However, the connection with the ta node might not
have been completed yet.
Due to this delay, the ta replica id file will not be created and we
will see an ENOTCONN error in the log file when we do the lookup.

Solution:
Since this ta node could have a higher latency, we should wait a
reasonable time for the connection to happen before sending the
lookup/create on the replica id file.
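
A minimal, self-contained sketch with hypothetical names (not the
actual ta code) of a bounded wait for the connection:

#include &lt;pthread.h&gt;
#include &lt;time.h&gt;

/* Hypothetical sketch: wait up to timeout_sec for the thin-arbiter
 * connection before sending the lookup/create on the replica id
 * file; on timeout, proceed and let the fop fail cleanly. */
static int
wait_for_ta_connection (pthread_mutex_t *mutex, pthread_cond_t *cond,
                        const int *connected, int timeout_sec)
{
    struct timespec ts;
    int ok;

    clock_gettime (CLOCK_REALTIME, &amp;ts);
    ts.tv_sec += timeout_sec;

    pthread_mutex_lock (mutex);
    while (!*connected)
        if (pthread_cond_timedwait (cond, mutex, &amp;ts) != 0)
            break;               /* timed out: stop waiting */
    ok = *connected;
    pthread_mutex_unlock (mutex);
    return ok;
}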

fixes: bz#1804058
Change-Id: I36f90865afe617e4e84cee57fec832a16f5dd6cc
(cherry picked from commit a7fa54ddea3fe429f143b37e4de06a93b49d776a)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
When we mount a ta volume, as soon as 2 data bricks are connected
we consider that the mount is done and then send a lookup/create on
ta file on ta node. However, this connection with ta node might not
have been completed.
Due to this delay, ta replica id file will not be created and we
will see ENOTCONN error in log file if we do lookup.

Solution:
As we know that this ta node could have a higher latency, we should
wait for reasonable time for connection to happen before sending
lookup/create on replica id file.

fixes: bz#1804058
Change-Id: I36f90865afe617e4e84cee57fec832a16f5dd6cc
(cherry picked from commit a7fa54ddea3fe429f143b37e4de06a93b49d776a)
</pre>
</div>
</content>
</entry>
</feed>
