<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests/bugs/shard, branch v6.9</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>features/shard: Send correct size when reads are sent beyond file size</title>
<updated>2019-10-24T09:24:22+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2019-08-07T06:42:43+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=02f26172c00143953310e9dd4f8ec08f3de66954'/>
<id>02f26172c00143953310e9dd4f8ec08f3de66954</id>
<content type='text'>
Change-Id: I0cebaaf55c09eb1fb77a274268ff564e871b743b
fixes bz#1737141
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
(cherry picked from commit 51237eda7c4b3846d08c5d24d1e3fe9b7ffba1d4)
</content>
</entry>
<entry>
<title>features/shard: Fix block-count accounting upon truncate to lower size</title>
<updated>2019-07-03T11:16:10+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2019-05-08T07:30:51+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=25e4a1249f3904a2a918194541566e3cda512c6b'/>
<id>25e4a1249f3904a2a918194541566e3cda512c6b</id>
<content type='text'>
Backport of:
&gt; BUG: bz#1705884
&gt; Change-Id: I9128a192e9bf8c3c3a959e96b7400879d03d7c53
&gt; Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;

The way delta_blocks is computed in shard is incorrect when a file
is truncated to a lower size: the accounting only considers the
change in size of the last of the truncated shards, and ignores the
blocks freed by the shards that are unlinked outright.

FIX:

Get the block count of each shard in xdata just before it is
unlinked at posix. The sum of these counts, plus the change in size
of the last shard (from the actual truncate), gives delta_blocks,
which is then used in the xattrop for the size update.
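
In outline, the corrected accounting looks like the sketch below
(names here are illustrative stand-ins, not the shard xlator's
actual code):

#include &lt;stdint.h&gt;

static int64_t
shard_delta_blocks_sketch(const int64_t *unlinked_shard_blocks, int n,
                          int64_t last_blocks_before,
                          int64_t last_blocks_after)
{
    int64_t delta = 0;

    /* Blocks freed by unlinking whole shards, as reported by posix
       in xdata just before each unlink. */
    for (int i = 0; i &lt; n; i++)
        delta -= unlinked_shard_blocks[i];

    /* Plus the change from the actual truncate of the last shard. */
    delta += last_blocks_after - last_blocks_before;

    return delta; /* the value fed to the xattrop for the size update */
}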

Change-Id: I9128a192e9bf8c3c3a959e96b7400879d03d7c53
fixes: bz#1716871
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
(cherry picked from commit 400b66d568ad18fefcb59949d1f8368d487b9a80)
</content>
</entry>
<entry>
<title>features/shard: Ref shard inode while adding to fsync list</title>
<updated>2019-01-24T18:14:57+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2019-01-24T08:44:39+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=72922c1fd69191b220f79905a23395c3a87f86ce'/>
<id>72922c1fd69191b220f79905a23395c3a87f86ce</id>
<content type='text'>
PROBLEM:

Many of the earlier changes to the management of shards in the lru
and fsync lists assumed that if a given shard exists in the fsync
list, it must be part of the lru list as well. This turned out not
to be true.

Consider this: a file is FALLOCATE'd to a size that makes the number
of participant shards greater than the lru list size. In this case,
some of the resolved shards that are to participate in this fop are
evicted from the lru list to give way to the rest of the shards.
Once the FALLOCATE completes, these shards are added to the fsync
list, but without a ref. After the fop completes, these shard inodes
are unref'd and destroyed while their inode ctxs are still part of
the fsync list. When an FSYNC is then called on the base file and
the fsync list is traversed, the client crashes due to illegal
memory access.

FIX:

Hold a ref on the shard inode when adding it to the fsync list as
well, and unref it under the following conditions (see the sketch
after the list):
1. when the shard is evicted from the lru list
2. when the base file is fsync'd
3. when the shards are deleted.
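
A minimal sketch of the ref half, assuming hypothetical names that
only mirror the real shard structures:

/* Assumes glusterfs's inode.h and list.h. Take a ref when the shard
   inode first joins the base file's fsync list, so it outlives any
   lru eviction; unref'd per the three conditions above. */
static void
shard_fsync_list_add_sketch(shard_inode_ctx_t *shard_ctx,
                            shard_inode_ctx_t *base_ctx,
                            inode_t *shard_inode)
{
    if (shard_ctx-&gt;fsync_needed == 0) {
        inode_ref(shard_inode);
        list_add_tail(&amp;shard_ctx-&gt;to_fsync_list,
                      &amp;base_ctx-&gt;to_fsync_list);
    }
    shard_ctx-&gt;fsync_needed++;
}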

Change-Id: Iab460667d091b8388322f59b6cb27ce69299b1b2
fixes: bz#1669077
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: Fix zero-flag.t script</title>
<updated>2018-12-19T16:02:07+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2018-12-19T12:27:58+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=8cde14a537f0112400744d518ed196eb8fa232f2'/>
<id>8cde14a537f0112400744d518ed196eb8fa232f2</id>
<content type='text'>
The default value of shard-block-size was changed from 4MB to 64MB
some time back. The script fallocates a 6MB file and expects it to
have one shard under .shard. This worked when the shard-block-size
was 4MB. With the default now at 64MB, "file1" has no shards under
.shard, and the stat on the first shard's path fails with ENOENT.
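
To make the arithmetic concrete: with a 4MB block size, a 6MB file
spans two blocks; block 0 is the base file itself and block 1 is the
lone shard under .shard. With a 64MB block size, the entire 6MB fits
in block 0, so nothing is created under .shard.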

Changed the script to explicitly set shard-block-size to 4MB.

Change-Id: I7f1785922287d16d74c95fa57cbbe12e6e66e4f7
fixes: bz#1656264
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: Mark tests/bugs/shard/zero-flag.t bad</title>
<updated>2018-12-05T08:46:05+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-12-05T05:22:30+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=9c06057bae91440aabb1a1a69e5315b56454e1a2'/>
<id>9c06057bae91440aabb1a1a69e5315b56454e1a2</id>
<content type='text'>
Change-Id: I2f4ca470c6666584e0feb129ab712f06772a86c2
Updates: bz#1656264
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>features/shard: Hold a ref on base inode when adding a shard to lru list</title>
<updated>2018-10-16T03:37:44+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2018-10-05T06:02:21+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e627977617dd765f6b58a70882c6acda6c6aab6e'/>
<id>e627977617dd765f6b58a70882c6acda6c6aab6e</id>
<content type='text'>
In __shard_update_shards_inode_list(), the shard translator was
previously not holding a ref on the base inode when a shard was
added to the lru list. But if the base inode is forgotten and
destroyed, either by fuse due to memory pressure or because the file
was deleted at some point by a different client while this client
still holds stale shards in its lru list, the client crashes when
locking lru_base_inode-&gt;lock owing to illegal memory access.

So now a ref on the base inode is stored in the inode ctx of every
shard that is added to the lru list, and held until the shard gets
lru'd out.

The patch also handles the case where none of the shards associated
with a file that is about to be deleted are part of the lru list,
and where an unlink at the beginning of the operation destroys the
base inode (because there are no refkeepers). All of the shards that
are about to be deleted are then resolved without a base shard in
memory, which, if not handled properly, could lead to a crash.
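
A minimal sketch of the ref half of this change (illustrative names;
the real logic lives in __shard_update_shards_inode_list()):

/* Pin the base inode for as long as one of its shards sits on the
   lru list; the ref is dropped again when the shard is evicted. */
static void
shard_lru_add_sketch(shard_inode_ctx_t *ctx, inode_t *base_inode,
                     struct list_head *lru_list)
{
    ctx-&gt;base_inode = inode_ref(base_inode);
    list_add_tail(&amp;ctx-&gt;ilist, lru_list);
}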

Change-Id: Ic15ca41444dd04684a9458bd4a526b1d3e160499
updates: bz#1605056
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
</content>
</entry>
<entry>
<title>Land part 2 of clang-format changes</title>
<updated>2018-09-12T12:22:45+00:00</updated>
<author>
<name>Gluster Ant</name>
<email>bugzilla-bot@gluster.org</email>
</author>
<published>2018-09-12T12:22:45+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e16868dede6455cab644805af6fe1ac312775e13'/>
<id>e16868dede6455cab644805af6fe1ac312775e13</id>
<content type='text'>
Change-Id: Ia84cc24c8924e6d22d02ac15f611c10e26db99b4
Signed-off-by: Nigel Babu &lt;nigelb@redhat.com&gt;
</content>
</entry>
<entry>
<title>features/shard: Fix crash and test case in RENAME fop</title>
<updated>2018-08-14T13:49:42+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2018-08-02T16:18:34+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=d46632247cbbeefb4798512e4426943f9768ecbf'/>
<id>d46632247cbbeefb4798512e4426943f9768ecbf</id>
<content type='text'>
Setting the refresh flag in the inode ctx in shard_rename_src_cbk()
is applicable only when the dst file exists, is sharded, and has a
hard-link count &gt; 1 at the time of the rename.

But this piece of code is exercised even when the dst doesn't exist.
In that case the mount crashes, because local-&gt;int_inodelk.loc.inode
is NULL.
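
A minimal sketch of the kind of guard this implies (the helper name
and types are stand-ins, not the actual diff):

static void
shard_set_dst_refresh_sketch(shard_local_t *local, xlator_t *this)
{
    /* Only when a sharded dst with nlink &gt; 1 existed is
       local-&gt;int_inodelk.loc.inode non-NULL and safe to touch. */
    if (local-&gt;int_inodelk.loc.inode)
        shard_inode_ctx_set_refresh_flag_sketch(
            local-&gt;int_inodelk.loc.inode, this);
}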

Change-Id: Iaf85a5ee3dff8b01a76e11972f10f2bb9dcbd407
Updates: bz#1611692
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: fix tests/bugs/shard/configure-lru-limit.t</title>
<updated>2018-08-13T04:56:40+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-08-12T09:55:21+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=212075b662820a2794f170d11b3a66734e00d6c3'/>
<id>212075b662820a2794f170d11b3a66734e00d6c3</id>
<content type='text'>
Check that the bricks are up before attempting to mount.

Change-Id: I1224908137016df3007f4467aa9760967ce0694d
Fixes: bz#1615092
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>features/shard: Make lru limit of inode list configurable</title>
<updated>2018-07-23T07:32:32+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2018-07-20T05:22:22+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=5fa004f3c4924dd6d83aecd5f33fc58badbe9305'/>
<id>5fa004f3c4924dd6d83aecd5f33fc58badbe9305</id>
<content type='text'>
Currently the lru limit is hard-coded to 16384. This patch makes it
configurable, so that it is easier to hit the limit and to test the
different cases that arise when it is reached.

The option is features.shard-lru-limit. By design it can be set only
in init() and not in reconfigure(), to avoid all the complexity
associated with evicting least recently used shards when the list
is shrunk.
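
A minimal sketch of how such an option can be wired up with
glusterfs's option table and the GF_OPTION_INIT macro (the exact
entry in shard.c may differ):

struct volume_options options[] = {
    {.key = {"shard-lru-limit"},
     .type = GF_OPTION_TYPE_INT,
     .default_value = "16384",
     .description = "Maximum number of shard inodes kept on the "
                    "lru list before eviction starts."},
    {.key = {NULL}},
};

int
init(xlator_t *this)
{
    shard_priv_t *priv = this-&gt;private;

    /* Read once here; deliberately no counterpart in reconfigure(). */
    GF_OPTION_INIT("shard-lru-limit", priv-&gt;lru_limit, uint64, err);
    return 0;
err:
    return -1;
}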

Change-Id: Ifdcc2099f634314fafe8444e2d676e192e89e295
updates: bz#1605056
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
</content>
</entry>
</feed>
