<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests, branch v5.12</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>cluster/ec: fix EIO error for concurrent writes on sparse files</title>
<updated>2020-02-25T06:48:11+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-07-17T12:50:22+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=171607c8aef54013c064c4e8b62a82e400ecab7e'/>
<id>171607c8aef54013c064c4e8b62a82e400ecab7e</id>
<content type='text'>
EC doesn't allow concurrent writes on overlapping areas; they are
serialized. However, non-overlapping writes are serviced in parallel.
When a write is not aligned, EC first needs to read the entire chunk
from disk, apply the modified fragment and write it back.

The problem appears on sparse files because a write to an offset
implicitly creates data on offsets below it (so, in some way, they
are overlapping). For example, if a file is empty and we read 10 bytes
from offset 10, read() will return 0 bytes. Now, if we write one byte
at offset 1M and retry the same read, the system call will return 10
bytes (all containing 0's).
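
This sparse-file behavior can be reproduced with plain POSIX calls. A
minimal standalone C sketch (the file name is illustrative):

#include &lt;fcntl.h&gt;
#include &lt;stdio.h&gt;
#include &lt;unistd.h&gt;

int main(void)
{
    char buf[10];
    ssize_t n;
    int fd = open("sparse.dat", O_RDWR | O_CREAT | O_TRUNC, 0600);

    /* Empty file: reading 10 bytes from offset 10 returns 0 bytes. */
    n = pread(fd, buf, sizeof(buf), 10);
    printf("before: %zd bytes\n", n);   /* prints 0 */

    /* Writing one byte at offset 1M implicitly extends the file. */
    pwrite(fd, "x", 1, 1024 * 1024);

    /* The same read now returns 10 bytes, all zeros. */
    n = pread(fd, buf, sizeof(buf), 10);
    printf("after: %zd bytes\n", n);    /* prints 10 */

    close(fd);
    return 0;
}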

So if we have two writes, the first one at offset 10 and the second one
at offset 1M, EC will send both in parallel because they do not overlap.
However, the first one will try to read the missing data from the first
chunk (i.e. offsets 0 to 9) to recombine the entire chunk and do the
final write. This read will happen in parallel with the write to 1M.
What could happen is that half of the bricks process the write before
the read, and the other half do the read before the write. Some bricks
will return 10 bytes of data while the others will return 0 bytes
(because the file on the brick has not been expanded yet).

When EC tries to recombine the answers from the bricks, it can't,
because it needs consistent answers from more than half of the bricks
to recover the data. So this read fails with an EIO error. This error
is propagated to the parent write, which is aborted, and EIO is
returned to the application.

The issue happened because EC assumed that a write to a given offset
implies that offsets below it exist.

This fix prevents the read of the chunk from the bricks when the
current size of the file is smaller than the offset of the chunk being
read. This size is correctly tracked, so this fixes the issue.

This patch also modifies Test #13 in ec-stripe.t. With this change, if
the file size is less than the offset we are writing to, we fill the
head and tail with zeros and do not count it as a stripe cache miss.
That makes sense because we already know what data that region holds,
so there is no need to read it from the bricks.
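
A hedged sketch of the idea behind the check (file_size, chunk_offset
and the helper name are illustrative, not the actual EC code):

#include &lt;stdint.h&gt;

/* Illustrative only: skip the read-modify-write read when the chunk
 * lies entirely beyond the currently known file size. */
static int ec_need_chunk_read(uint64_t file_size, uint64_t chunk_offset)
{
    if (file_size &lt;= chunk_offset) {
        /* Nothing exists there yet: the head and tail of the chunk
         * are known to be zeros, so fill them locally. */
        return 0;
    }
    return 1;
}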

Change-Id: Ic342e8c35c555b8534109e9314c9a0710b6225d6
Fixes: bz#1805053
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/ec: Prevent double pre-op xattrops</title>
<updated>2020-02-24T07:55:55+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2019-06-20T11:35:49+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=ab30da4ff9d26935fa157b2732b9f3c36dd2519e'/>
<id>ab30da4ff9d26935fa157b2732b9f3c36dd2519e</id>
<content type='text'>
Problem:
Race:
Thread-1                                   Thread-2
1) Does ec_get_size_version() to
   perform the pre-op fxattrop as
   part of write-1.
                                           2) Calls ec_set_dirty_flag() in
                                              ec_get_size_version() for write-2.
                                              This sets dirty[] to 1.
3) Completes executing
   ec_prepare_update_cbk, leading to
   ctx-&gt;dirty[] = '1'.
                                           4) Takes LOCK(inode-&gt;lock) to check
                                              if there are any flags, and sets
                                              the dirty flag because
                                              lock-&gt;waiting_flag is 0 now. This
                                              leads to an fxattrop that
                                              increments the on-disk dirty[]
                                              to '2'.

At the end of the writes, the file will be marked for heal even though
it doesn't need one.

Fix:
Perform ec_set_dirty_flag() and the other checks inside LOCK() to
prevent dirty[] from being marked as '1' in step 2) above.
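
A hedged sketch of the shape of the fix (LOCK/UNLOCK and
ec_set_dirty_flag() come from the message above; the dirty_set field
and the exact condition are illustrative):

/* Illustrative only: decide and set the dirty flag while holding the
 * inode lock, so a racing writer cannot observe a stale state. */
LOCK(&amp;inode-&gt;lock);
{
    if (lock-&gt;waiting_flag == 0 &amp;&amp; !ctx-&gt;dirty_set) {
        ec_set_dirty_flag(lock, ctx);   /* marks dirty[] exactly once */
        ctx-&gt;dirty_set = 1;
    }
}
UNLOCK(&amp;inode-&gt;lock);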

Fixes: bz#1805050
Change-Id: Icac2ab39c0b1e7e154387800fbededc561612865
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>server: Mount fails after reboot 1/3 gluster nodes</title>
<updated>2020-02-24T07:50:34+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2020-01-21T15:39:56+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0c6663284294024bd91152d7d8579f59780bd338'/>
<id>0c6663284294024bd91152d7d8579f59780bd338</id>
<content type='text'>
Problem: When one server node of a 1x3 volume comes up after a reboot,
the client gets unmounted. The client is unmounted because it receives
an AUTH_FAILED event and calls fini for the graph. The client receives
AUTH_FAILED because the brick is not attached to a graph at that
moment.

Solution: To avoid unmounting the client graph, return an ENOENT error
          from the server if the brick is not attached to the server at
          the time the client is authenticated.
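
A hedged sketch of the server-side distinction (the helper names are
illustrative; only the AUTH_FAILED-versus-ENOENT behavior comes from
this message):

/* Illustrative only: while authenticating a client, treat a brick
 * that is not attached yet as a transient condition instead of an
 * authentication failure. */
if (!brick_attached(conf, client-&gt;bound_xl_name))
    return -ENOENT;   /* client keeps the mount and retries */

if (!credentials_valid(client))
    return -EACCES;   /* real failure: client sees AUTH_FAILED */

return 0;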

&gt; Credits: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
&gt; Change-Id: Ie6fbd73cbcf23a35d8db8841b3b6036e87682f5e
&gt; Fixes: bz#1793852
&gt; Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
&gt; (cherry picked from commit f6421dff22a6ddaf14134f6894deae219948c89d)

Fixes: bz#1804512
Change-Id: Ie6fbd73cbcf23a35d8db8841b3b6036e87682f5e
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/ec: fix fd reopen</title>
<updated>2020-02-20T08:26:40+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-04-12T15:54:44+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=19f7b2db26ae43ef42a265cbb7f0b712d05a262f'/>
<id>19f7b2db26ae43ef42a265cbb7f0b712d05a262f</id>
<content type='text'>
Currently EC tries to reopen fd's that have been opened while a brick
was down. This is done as part of regular write operations, just after
having acquired the locks, and it's sent as a sub-fop of the main write
fop.

There were two problems:

1. The reopen was attempted on all UP bricks, even if a previous lock
didn't succeed. This is incorrect because the open will most probably
fail.

2. If reopen is sent and fails, the error is propagated to the main
operation, causing it to fail when it shouldn't.

To fix this, we only attempt reopens on bricks where the current fop
owns a lock, and we prevent any error from being propagated to the
main fop.

To implement this behaviour, an argument that used to indicate the
minimum number of required answers has been overloaded to also include
some flags. To make the change consistent, it has been necessary to
rename the argument, which means that a lot of files have been changed.
However, there are no functional changes.
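
A hedged sketch of overloading a count argument with flag bits (the
names and bit layout are illustrative, not the actual EC constants):

#include &lt;stdint.h&gt;

/* Illustrative only: low bits carry the minimum-answers count,
 * higher bits carry behavioural flags such as "don't propagate
 * errors to the parent fop". */
#define EC_MINIMUM_MASK   0x000000ffu
#define EC_FLAG_NO_FAIL   0x00000100u

static inline uint32_t ec_minimum(uint32_t fop_flags)
{
    return fop_flags &amp; EC_MINIMUM_MASK;
}

static inline int ec_no_fail(uint32_t fop_flags)
{
    return (fop_flags &amp; EC_FLAG_NO_FAIL) != 0;
}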

This change has also uncovered a problem in the discard code, which
didn't correctly process requests of small sizes because no real
discard fop was being processed, only a write of 0's on some region.
In this case some fields of the fop remained uninitialized or had
incorrect values. To fix this, a new function has been created to
simulate success on a fop, and it is used in the discard case.

Thanks to Pranith for providing a test script that has also detected an
issue in this patch. This patch includes a small modification of this
script to force data to be written into bricks before stopping them.

Change-Id: If272343873369186c2fb8f43c1d9c52c3ea304ec
Fixes: bz#1805047
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/afr: Heal entries when there is a source &amp; no healed_sinks</title>
<updated>2019-10-11T07:19:03+00:00</updated>
<author>
<name>karthik-us</name>
<email>ksubrahm@redhat.com</email>
</author>
<published>2019-09-05T10:44:50+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=11cbd2da4a065332d0b8eee624b654a9991a0170'/>
<id>11cbd2da4a065332d0b8eee624b654a9991a0170</id>
<content type='text'>
Problem:
In a situation where B1 blames B2, B2 blames B1, and B3 doesn't blame
anything for entry heal, heal will not complete even though we have a
clear source and sinks. This happens because in
afr_selfheal_find_direction() only the bricks which are blamed by
non-accused bricks are considered sinks. Later, when
__afr_selfheal_entry_finalize_source() tries to mark all the
non-sources as sinks, it fails to do so because there are no
healed_sinks marked, no witness present, and there is a source.

Fix:
If there is a source and no healed_sinks, then reset all the locked
sources to 0 and healed sinks to 1 to do a conservative merge.
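
A hedged sketch of the fallback (the array names follow this message;
the any-set helper and the loop are illustrative):

/* Illustrative only: with a source but no healed sinks, fall back to
 * a conservative merge by treating every locked brick as a sink. */
if (source &gt;= 0 &amp;&amp; !any_set(healed_sinks, child_count)) {
    for (int i = 0; i &lt; child_count; i++) {
        if (locked_on[i]) {
            sources[i] = 0;
            healed_sinks[i] = 1;
        }
    }
}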

Change-Id: If40d8bc95d52a52b2730f55bdcf135109b421548
Fixes: bz#1760710
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
</content>
</entry>
<entry>
<title>afr/lookup: Pass xattr_req in while doing a selfheal in lookup</title>
<updated>2019-09-05T12:43:45+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2019-09-05T12:42:35+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=fcfcb0ad04402a34f07075d29981922a6dbc858c'/>
<id>fcfcb0ad04402a34f07075d29981922a6dbc858c</id>
<content type='text'>
We were not passing xattr_req when doing a name self-heal or a
metadata heal. Because of this, some xdata was missing, which caused
I/O errors.
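
A hedged sketch of the change (the function names and call shape are
illustrative; the point is forwarding the lookup's xattr_req instead
of dropping it):

/* Illustrative only: before, the heal was launched with NULL xdata;
 * after, the original xattr_req travels with the heal request. */
int afr_heal_from_lookup(call_frame_t *frame, xlator_t *this,
                         loc_t *loc, dict_t *xattr_req)
{
    return afr_selfheal_launch(frame, this, loc, xattr_req);
}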

Backport of: https://review.gluster.org/#/c/glusterfs/+/23024/
&gt;Change-Id: Ibfb1205a7eb0195632dc3820116ffbbb8043545f
&gt;Fixes: bz#1728770
&gt;Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;

Change-Id: I740ccdbfcf2667fe4a850c12ecdc4b9eeed08293
Fixes: bz#1749352
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: add GF_TRANSPORT_BOTH_TCP_RDMA in glusterd_get_gfproxy_client_volfile</title>
<updated>2019-07-18T06:26:14+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2019-06-11T04:22:06+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=c2e67ea0b7faa5f8a5875aaf7563e8658e392d96'/>
<id>c2e67ea0b7faa5f8a5875aaf7563e8658e392d96</id>
<content type='text'>
... without which volume creation fails with "volume create: &lt;xyz&gt;: failed:
Failed to create volume files"
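
A hedged sketch of the kind of change involved (the switch shape and
helper are illustrative; GF_TRANSPORT_BOTH_TCP_RDMA is the enum value
named in the title):

/* Illustrative only: accept the combined tcp,rdma transport instead
 * of falling through to the error path that aborts volfile creation. */
switch (volinfo-&gt;transport_type) {
case GF_TRANSPORT_TCP:
case GF_TRANSPORT_RDMA:
case GF_TRANSPORT_BOTH_TCP_RDMA:   /* previously missing case */
    ret = build_gfproxy_client_volfile(volinfo, filepath);
    break;
default:
    ret = -1;   /* "Failed to create volume files" */
    break;
}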

&gt;Fixes: bz#1716812
&gt;Change-Id: I2f4c2c6d5290f066b54e1c1db19e25db9937bedb
&gt;Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;

Fixes: bz#1721106
Change-Id: I2f4c2c6d5290f066b54e1c1db19e25db9937bedb
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/ec: honor contention notifications for partially acquired locks</title>
<updated>2019-06-28T11:07:08+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-05-09T09:07:18+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=257fee6312513498ee70765b9f343d5df77b1fce'/>
<id>257fee6312513498ee70765b9f343d5df77b1fce</id>
<content type='text'>
EC was ignoring lock contention notifications received while a lock was
being acquired. When a lock is partially acquired (some bricks have
granted the lock but some others not yet) we can receive notifications
from acquired bricks, which should be honored, since we may not receive
more notifications after that.

Since EC was ignoring them, once the lock was acquired, it was not
released until the eager-lock timeout, causing unnecessary delays on
other clients.

This fix takes into consideration the notifications received before
the full lock acquisition has completed. After that, the lock will be
released as soon as possible.
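
A hedged sketch of the behavior (field and function names are
illustrative):

/* Illustrative only: remember contention that arrives while the lock
 * is only partially acquired, and act on it once acquisition ends. */
void ec_contention_notify(ec_lock_t *lock)
{
    if (lock-&gt;acquiring)
        lock-&gt;release_pending = 1;   /* honor it later, don't drop it */
    else
        ec_lock_release(lock);
}

void ec_lock_acquired(ec_lock_t *lock)
{
    lock-&gt;acquiring = 0;
    if (lock-&gt;release_pending)
        ec_lock_release(lock);       /* skip the eager-lock timeout */
}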

Backport of:
&gt; BUG: bz#1708156
&gt; Change-Id: I2a306dbdb29fb557dcab7788a258bd75d826cc12
&gt; Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;

Fixes: bz#1717282
Change-Id: I2a306dbdb29fb557dcab7788a258bd75d826cc12
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
Signed-off-by: Hari Gowtham &lt;hgowtham@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests/utils: Fix py2/py3 util python scripts</title>
<updated>2019-06-28T11:07:08+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2019-06-06T07:24:04+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=5355f5a18146183a8e27872ebee501acad7b5eb0'/>
<id>5355f5a18146183a8e27872ebee501acad7b5eb0</id>
<content type='text'>
The following files are fixed.

tests/bugs/distribute/overlap.py
tests/utils/changelogparser.py
tests/utils/create-files.py
tests/utils/gfid-access.py
tests/utils/libcxattr.py

Glupy has been marked as a bad test.

Backport of:
&gt; Change-Id: I3db857cc19e19163d368d913eaec1269fbc37140
&gt; BUG: 1193929
&gt; Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;

Change-Id: I3db857cc19e19163d368d913eaec1269fbc37140
Updates: bz#1629877
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/afr: Remove local from owners_list on failure of lock-acquisition</title>
<updated>2019-05-08T14:07:12+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2019-04-04T10:01:56+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=65505150d665b207e2eb2143599a772c39a648f8'/>
<id>65505150d665b207e2eb2143599a772c39a648f8</id>
<content type='text'>
When eager-lock lock acquisition fails because of, say, network
failures, the local is not removed from owners_list. This leads to an
accumulation of waiting frames, and the application hangs because the
waiting frames assume that another transaction is in the process of
acquiring the lock, since owners_list is not empty. This patch handles
that case as well. Asserts have been added to make it easier to find
such problems in the future.
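
A hedged sketch of the cleanup on failure (the list macros are the
kernel-style ones used in the codebase; the surrounding fields and the
assert are illustrative):

/* Illustrative only: on lock-acquisition failure, take the local out
 * of owners_list so waiting frames don't assume another transaction
 * is still acquiring the lock. */
if (ret &lt; 0) {
    LOCK(&amp;priv-&gt;lock);
    {
        list_del_init(&amp;local-&gt;transaction.owner_list);
    }
    UNLOCK(&amp;priv-&gt;lock);
    GF_ASSERT(list_empty(&amp;local-&gt;transaction.owner_list));
}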

Fixes: bz#1699736
Change-Id: I3101393265e9827755725b1f2d94a93d8709e923
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
</entry>
</feed>
