<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/features/shard, branch v9dev</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>features/shard: Fix crash during shards cleanup in error cases</title>
<updated>2020-03-26T04:44:39+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2020-03-23T06:17:10+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=cc43ac8651de9aa508b01cb259b43c02d89b2afc'/>
<id>cc43ac8651de9aa508b01cb259b43c02d89b2afc</id>
<content type='text'>
A crash is seen when the background cleanup of shards is reattempted
upon remount. The crash recurs on every remount, which means a remount
is no workaround for it.

In such a situation, the in-memory base inode object will not exist
(new process, non-existent base shard), so local-&gt;resolver_base_inode
will be NULL.

In the event of an error (in this case, running out of space), the
process would crash while logging the error in the following line -

        gf_msg(this-&gt;name, GF_LOG_ERROR, local-&gt;op_errno, SHARD_MSG_FOP_FAILED,
               "failed to delete shards of %s",
               uuid_utoa(local-&gt;resolver_base_inode-&gt;gfid));

Fixed that by using local-&gt;base_gfid as the source of gfid when
local-&gt;resolver_base_inode is NULL.
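
A minimal sketch of the guarded logging, using only the fields named
above (the actual patch may structure this differently):

        if (local-&gt;resolver_base_inode)
                gf_msg(this-&gt;name, GF_LOG_ERROR, local-&gt;op_errno,
                       SHARD_MSG_FOP_FAILED, "failed to delete shards of %s",
                       uuid_utoa(local-&gt;resolver_base_inode-&gt;gfid));
        else
                gf_msg(this-&gt;name, GF_LOG_ERROR, local-&gt;op_errno,
                       SHARD_MSG_FOP_FAILED, "failed to delete shards of %s",
                       uuid_utoa(local-&gt;base_gfid));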

Change-Id: I0b49f2b58becd0d8874b3d4b14ff8d92a89d02d5
Fixes: #1127
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
</content>
</entry>
<entry>
<title>multiple: fix bad type cast</title>
<updated>2020-01-10T00:58:19+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-12-20T13:14:32+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=00c090b093c147a95bfb8fce93f08303993e1995'/>
<id>00c090b093c147a95bfb8fce93f08303993e1995</id>
<content type='text'>
When using inode_ctx_get() or inode_ctx_set(), a 'uint64_t *' is expected.
In many cases, the value to retrieve or store is a pointer, which on some
architectures (for example, 32-bit ones) is smaller than 64 bits. In that
case, directly passing the address of the pointer cast to a 'uint64_t *' is
wrong and can cause memory corruption.
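
A short illustration of the issue, assuming a hypothetical context type
'my_ctx_t' and that 'inode' and 'this' are in scope (not the actual patch):

  my_ctx_t *ctx = NULL;
  uint64_t value = 0;

  /* WRONG on 32-bit: only sizeof(void *) bytes belong to 'ctx', but
   * inode_ctx_get() writes a full uint64_t through the pointer. */
  inode_ctx_get(inode, this, (uint64_t *)&amp;ctx);

  /* Safer: read into a real uint64_t, then convert the value. */
  inode_ctx_get(inode, this, &amp;value);
  ctx = (my_ctx_t *)(uintptr_t)value;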

Change-Id: Iae616da9dda528df6743fa2f65ae5cff5ad23258
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
Fixes: bz#1785611
</content>
</entry>
<entry>
<title>tests/shard: fix tests/bugs/shard/unlinks-and-renames.t failure</title>
<updated>2019-11-01T01:03:51+00:00</updated>
<author>
<name>Sheetal Pamecha</name>
<email>spamecha@redhat.com</email>
</author>
<published>2019-10-22T09:04:06+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=13cb14811cdea477780e58ba35493479c7a04a25'/>
<id>13cb14811cdea477780e58ba35493479c7a04a25</id>
<content type='text'>
On a RHEL 8 machine, cleanup of shards does not happen properly for a
sharded file with hard links; the hard-link count needs to be refreshed
for the cleanup to succeed.

The problem occurs when a sharded file with hard links gets removed.
When the last link to the file is removed, all shards need to be cleaned up.
But in the current code structure, the shard xlator, instead of sending a
lookup to get the link count, uses stale cached values from the inode ctx,
thereby removing the base shard but not the shards present in the /.shard
directory.

This fix marks, in the first unlink's callback, that the inode ctx needs
a refresh, so that the next operation refreshes it by looking up the file
on-disk.
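
A rough sketch of the idea ('refresh' and 'need_lookup' are hypothetical
names, not the actual shard xlator fields):

  /* in the first unlink's cbk: mark the cached inode ctx as stale */
  ctx-&gt;refresh = _gf_true;

  /* in the next fop: a stale ctx means the link count must come from a
   * fresh on-disk lookup, not from the cache */
  if (ctx-&gt;refresh)
          need_lookup = _gf_true;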

fixes: bz#1764110
Change-Id: I81625c7451dabf006c0864d859b1600f3521b648
Signed-off-by: Sheetal Pamecha &lt;spamecha@redhat.com&gt;
</content>
</entry>
<entry>
<title>afr: support split-brain CLI for replica 3</title>
<updated>2019-10-09T06:35:43+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2019-09-28T03:23:08+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=47dbd753187f69b3835d2e42fdbe7485874c4b3e'/>
<id>47dbd753187f69b3835d2e42fdbe7485874c4b3e</id>
<content type='text'>
Ever since we added quorum checks for lookups in afr via commit
bd44d59741bb8c0f5d7a62c5b1094179dd0ce8a4, the split-brain resolution
commands would not work for replica 3 because there would be no
readables for the lookup fop.

The argument was that split-brains do not occur in replica 3, but we do
see (data/metadata) split-brain cases once in a while, which indicates
that a few bugs/corner cases are yet to be discovered and fixed.

Fortunately, commit 8016d51a3bbd410b0b927ed66be50a09574b7982 added
GF_CLIENT_PID_GLFS_HEALD as the pid for all fops made by glfsheal. If we
leverage this and allow lookups in afr when pid is GF_CLIENT_PID_GLFS_HEALD,
split-brain resolution commands will work for replica 3 volumes too.

Likewise, the check is added in shard_lookup as well to permit resolving
split-brains by specifying "/.shard/shard-file.xx" as the file name
(which previously used to fail with EPERM).
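
Illustrative sketch of the exemption (only frame-&gt;root-&gt;pid and
GF_CLIENT_PID_GLFS_HEALD are real identifiers from this change; the other
names are placeholders):

  gf_boolean_t from_glfsheal = (frame-&gt;root-&gt;pid == GF_CLIENT_PID_GLFS_HEALD);

  if (readable_count == 0 &amp;&amp; !from_glfsheal) {
          /* regular clients still fail the lookup as before */
          op_ret = -1;
          op_errno = EIO;
  }
  /* glfsheal's lookups proceed, so the split-brain CLI can act on the file */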

Change-Id: I3c543dea79caf7cfbc1633e9089cb1cdd2538ba9
Fixes: bz#1756938
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</content>
</entry>
<entry>
<title>features/shard: Send correct size when reads are sent beyond file size</title>
<updated>2019-08-12T13:30:20+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2019-08-07T06:42:43+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=51237eda7c4b3846d08c5d24d1e3fe9b7ffba1d4'/>
<id>51237eda7c4b3846d08c5d24d1e3fe9b7ffba1d4</id>
<content type='text'>
Change-Id: I0cebaaf55c09eb1fb77a274268ff564e871b743b
fixes bz#1738419
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
</content>
</entry>
<entry>
<title>graph/shd: Use glusterfs_graph_deactivate to free the xl rec</title>
<updated>2019-06-27T06:04:05+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2019-06-24T10:19:04+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e8f8d16fc4b8e5be48f4d7c9ff9d170934ffb7fc'/>
<id>e8f8d16fc4b8e5be48f4d7c9ff9d170934ffb7fc</id>
<content type='text'>
We were using glusterfs_graph_fini to free the xl rec from
glusterfs_process_volfp as well as glusterfs_graph_cleanup.

Instead we can use glusterfs_graph_deactivate, which does fini as well
as the other common freeing of the xl rec.
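
In effect (sketch; 'graph' stands for the graph being torn down):

  /* before: */
  glusterfs_graph_fini(graph);

  /* after: deactivate runs fini and also frees the rest of the common
   * xlator record state */
  glusterfs_graph_deactivate(graph);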

Change-Id: Ie4a5f2771e5254aa5ed9f00c3672a6d2cc8e4bc1
Updates: bz#1716695
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
</entry>
<entry>
<title>libglusterfs: cleanup iovec functions</title>
<updated>2019-06-11T13:28:07+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-05-31T16:40:30+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=952cf7e4f4393fcd9cf8c16b013d8f28915c990e'/>
<id>952cf7e4f4393fcd9cf8c16b013d8f28915c990e</id>
<content type='text'>
This patch cleans some iovec code and creates two additional helper
functions to simplify management of iovec structures.

  iov_range_copy(struct iovec *dst, uint32_t dst_count, uint32_t dst_offset,
                 struct iovec *src, uint32_t src_count, uint32_t src_offset,
                 uint32_t size);

    This function copies up to 'size' bytes from 'src' at offset
    'src_offset' to 'dst' at 'dst_offset'. It returns the number of
    bytes copied.

  iov_skip(struct iovec *iovec, uint32_t count, uint32_t size);

    This function removes the initial 'size' bytes from 'iovec' and
    returns the updated number of iovec vectors remaining.

The signature of iov_subset() has also been modified to make it safer
and easier to use. The new signature is:

  iov_subset(struct iovec *src, int src_count, uint32_t start, uint32_t size,
             struct iovec **dst, int32_t dst_count);

    This function creates a new iovec array containing the subset of the
    'src' vector starting at 'start' with size 'size'. The resulting
    array is allocated if '*dst' is NULL, or copied to '*dst' if it fits
    (based on 'dst_count'). It returns the number of iovec vectors used.

A new set of functions to iterate through an iovec array have been
created. They can be used to simplify the implementation of other
iovec-based helper functions.
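
A hedged usage sketch of the two new helpers, based only on the signatures
described above (buffer contents and return-value types are illustrative):

  char a[8] = "0123456", b[8] = "789abcd", out[6];
  struct iovec src[2] = { { a, sizeof(a) }, { b, sizeof(b) } };
  struct iovec dst[1] = { { out, sizeof(out) } };

  /* copy 6 bytes starting at offset 4 of 'src' into 'dst' at offset 0 */
  uint32_t copied = iov_range_copy(dst, 1, 0, src, 2, 4, 6);

  /* drop the first 10 bytes of 'src'; returns the remaining vector count */
  uint32_t remaining = iov_skip(src, 2, 10);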

Change-Id: Ia5fe57e388e23392a8d6cdab17670e337cadd587
Updates: bz#1193929
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</content>
</entry>
<entry>
<title>features/shard: Fix extra unref when inode object is lru'd out and added back</title>
<updated>2019-06-09T17:28:07+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2019-04-05T06:59:23+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=dc119e9c2f3db6d029ab1c5a81c171180db58192'/>
<id>dc119e9c2f3db6d029ab1c5a81c171180db58192</id>
<content type='text'>
Long tale of double unref! But do read...

In cases where a shard base inode is evicted from lru list while still
being part of fsync list but added back soon before its unlink, there
could be an extra inode_unref() leading to premature inode destruction
leading to crash.

One such specific case is the following -

Consider features.shard-deletion-rate = features.shard-lru-limit = 2.
This is an oversimplified example but explains the problem clearly.

First, a file is FALLOCATE'd to a size so that number of shards under
/.shard = 3 &gt; lru-limit.
Shards 1, 2 and 3 need to be resolved. 1 and 2 are resolved first.
Resultant lru list:
                               1 -----&gt; 2
refs on base inode -          (1)  +   (1) = 2
3 needs to be resolved. So 1 is lru'd out. Resultant lru list -
                              2 -----&gt; 3
refs on base inode -          (1)  +   (1) = 2

Note that 1 is inode_unlink()d but not destroyed because there are
non-zero refs on it since it is still participating in this ongoing
FALLOCATE operation.

FALLOCATE is sent on all participant shards. In the cbk, all of them are
added to fsync_list.
Resulting fsync list -
                               1 -----&gt; 2 -----&gt; 3 (order doesn't matter)
refs on base inode -          (1)  +   (1)  +   (1) = 3
Total refs = 3 + 2 = 5

Now an attempt is made to unlink this file. Background deletion is triggered.
The first $shard-deletion-rate shards need to be unlinked in the first batch.
So shards 1 and 2 need to be resolved. inode_resolve fails on 1 but succeeds
on 2 and so it's moved to tail of list.
lru list now -
                              3 -----&gt; 2
No change in refs.

shard 1 is looked up. In lookup_cbk, it's linked and added back to lru list
at the cost of evicting shard 3.
lru list now -
                              2 -----&gt; 1
refs on base inode:          (1)  +   (1) = 2
fsync list now -
                              1 -----&gt; 2 (again order doesn't matter)
refs on base inode -         (1)  +   (1) = 2
Total refs = 2 + 2 = 4
After eviction, it is found 3 needs fsync. So fsync is wound, yet to be ack'd.
So it is still inode_link()d.

Now deletion of shards 1 and 2 completes. lru list is empty. Base inode unref'd and
destroyed.
In the next batched deletion, 3 needs to be deleted. It is inode_resolve()able.
It is added back to the lru list, but the base inode passed to __shard_update_shards_inode_list()
is NULL since that inode is destroyed. However, its ctx-&gt;inode still contains the base inode
ptr from its first addition to the lru list, with no additional ref on it.
lru list now -
                              3
refs on base inode -         (0)
Total refs on base inode = 0
Unlink is sent on 3. It completes. Now, since the ctx contains a ptr to base_inode and the
shard is part of the lru list, the base inode is unref'd, leading to a crash.

FIX:
When a shard is re-added to the lru list, copy the base inode pointer as-is into its inode
ctx, even if it is NULL. This is needed to prevent double unrefs at the time of deleting it.
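
In essence, the re-add path in __shard_update_shards_inode_list() then does
the following (sketch; 'base_inode' stands for the base inode argument, and
ctx-&gt;inode is the field named above):

  /* store the passed-in base inode as-is, even when it is NULL, so that a
   * later delete does not unref a base inode it never took a ref on */
  ctx-&gt;inode = base_inode;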

Change-Id: I99a44039da2e10a1aad183e84f644d63ca552462
Updates: bz#1696136
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
</content>
</entry>
<entry>
<title>across: clang-scan: fix NULL dereferencing warnings</title>
<updated>2019-06-04T10:30:29+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2019-05-20T05:41:39+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e7aeab3063ac5645136303278b477d7de35266c0'/>
<id>e7aeab3063ac5645136303278b477d7de35266c0</id>
<content type='text'>
All these checks are done after analyzing the clang-scan report produced
by the CI job @ https://build.gluster.org/job/clang-scan

updates: bz#1622665
Change-Id: I590305af4ceb779be952974b2a36066ffc4865ca
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
</content>
</entry>
<entry>
<title>features/shard: Fix block-count accounting upon truncate to lower size</title>
<updated>2019-06-04T07:30:12+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2019-05-08T07:30:51+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=400b66d568ad18fefcb59949d1f8368d487b9a80'/>
<id>400b66d568ad18fefcb59949d1f8368d487b9a80</id>
<content type='text'>
The way delta_blocks is computed in shard is incorrect when a file
is truncated to a lower size: the accounting only considers the change
in size of the last of the truncated shards.

FIX:

Get the block count of each shard just before its unlink at posix, via
xdata. Their summation, plus the change in size of the last shard
(from an actual truncate), is used to compute delta_blocks, which is
then used in the xattrop for the size update.
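
A pseudo-C sketch of the computation (all names are illustrative):

  /* block counts of the unlinked shards come back from posix via xdata */
  int64_t delta_blocks = last_shard_blocks_after - last_shard_blocks_before;
  for (int i = 0; i &lt; unlinked_shard_count; i++)
          delta_blocks -= unlinked_shard_blocks[i];
  /* delta_blocks (negative for a shrinking truncate) is sent in the
   * xattrop that updates the file's size and block count */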

Change-Id: I9128a192e9bf8c3c3a959e96b7400879d03d7c53
fixes: bz#1705884
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
</content>
</entry>
</feed>
