<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests, branch v5.2</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>afr: thin-arbiter 2 domain locking and in-memory state</title>
<updated>2018-12-12T14:26:42+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2018-09-23T11:29:58+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=d7a4d256bd86aadcd60668ee37079514dfcf41f3'/>
<id>d7a4d256bd86aadcd60668ee37079514dfcf41f3</id>
<content type='text'>
2 domain locking + xattrop for write-txn failures:
--------------------------------------------------
- A post-op wound on the TA takes the AFR_TA_DOM_NOTIFY range lock and
the AFR_TA_DOM_MODIFY full lock, does an xattrop on the TA, releases the
AFR_TA_DOM_MODIFY lock, and stores in memory which brick is bad.

- All further write txn failures are handled based on this in-memory
value without querying the TA.

- When shd heals the files, it does so by requesting a full lock on the
AFR_TA_DOM_NOTIFY domain. The client uses this as a cue (via upcall),
releases its AFR_TA_DOM_NOTIFY range lock and invalidates its in-memory
notion of which brick is bad. The next write txn failure is wound on the
TA to update the in-memory state again.

- Any write txns that are still incomplete when the AFR_TA_DOM_NOTIFY
upcall release request is received are completed before the lock is
released.

- Any write txns received after the release request are queued in a ta_waitq.

- After the release is complete, the ta_waitq elements are spliced to a
separate queue which is then processed one by one.

- For fops that come in parallel when the in-memory bad brick is still
unknown, only one is wound to the TA on the wire. The others are queued
in a ta_onwireq, which is processed after the response from the TA
arrives (see the sketch below).
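
Below is a minimal, self-contained C sketch of this client-side decision
flow. The struct and helper names (ta_ctx, ta_txn, and so on) are
illustrative stand-ins, not the actual AFR symbols or afr_private_t fields.

/* Illustrative model only; not the actual AFR code or data structures. */
#include &lt;pthread.h&gt;
#include &lt;stdbool.h&gt;

enum ta_bad_brick { TA_UNKNOWN = -1, TA_BRICK0 = 0, TA_BRICK1 = 1 };

struct ta_txn {
    struct ta_txn *next;           /* per-write-transaction state elided */
};

struct ta_ctx {
    pthread_mutex_t lock;
    enum ta_bad_brick bad_brick;   /* in-memory notion of the bad brick */
    bool query_on_wire;            /* an xattrop to the TA is in flight */
    bool release_in_progress;      /* NOTIFY-lock release was requested */
    struct ta_txn *ta_onwireq;     /* txns waiting for the in-flight TA reply */
    struct ta_txn *ta_waitq;       /* txns that arrived after the release request */
};

/* Called when a write transaction fails on one of the data bricks. */
static void
ta_handle_write_txn_failure(struct ta_ctx *ctx, struct ta_txn *txn)
{
    pthread_mutex_lock(&amp;ctx-&gt;lock);
    if (ctx-&gt;release_in_progress) {
        /* Hold back until the AFR_TA_DOM_NOTIFY release completes. */
        txn-&gt;next = ctx-&gt;ta_waitq;
        ctx-&gt;ta_waitq = txn;
    } else if (ctx-&gt;bad_brick != TA_UNKNOWN) {
        /* Answer from memory; no TA round trip needed. */
    } else if (ctx-&gt;query_on_wire) {
        /* A query is already on the wire; queue behind it. */
        txn-&gt;next = ctx-&gt;ta_onwireq;
        ctx-&gt;ta_onwireq = txn;
    } else {
        /* First failure with unknown state: take the NOTIFY range lock and
         * the MODIFY full lock, xattrop on the TA, then record bad_brick
         * and drain ta_onwireq with the answer. */
        ctx-&gt;query_on_wire = true;
    }
    pthread_mutex_unlock(&amp;ctx-&gt;lock);
}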

Change-Id: I32c7b61a61776663601ab0040e2f0767eca1fd64
updates: bz#1648205
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Signed-off-by: Ashish Pandey &lt;aspandey@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
2 domain locking + xattrop for write-txn failures:
--------------------------------------------------
- A post-op wound on the TA takes the AFR_TA_DOM_NOTIFY range lock and
the AFR_TA_DOM_MODIFY full lock, does an xattrop on the TA, releases the
AFR_TA_DOM_MODIFY lock, and stores in memory which brick is bad.

- All further write txn failures are handled based on this in-memory
value without querying the TA.

- When shd heals the files, it does so by requesting a full lock on the
AFR_TA_DOM_NOTIFY domain. The client uses this as a cue (via upcall),
releases its AFR_TA_DOM_NOTIFY range lock and invalidates its in-memory
notion of which brick is bad. The next write txn failure is wound on the
TA to update the in-memory state again.

- Any write txns that are still incomplete when the AFR_TA_DOM_NOTIFY
upcall release request is received are completed before the lock is
released.

- Any write txns received after the release request are queued in a ta_waitq.

- After the release is complete, the ta_waitq elements are spliced to a
separate queue which is then processed one by one.

- For fops that come in parallel when the in-memory bad brick is still
unknown, only one is wound to the TA on the wire. The others are queued
in a ta_onwireq, which is processed after the response from the TA
arrives.

Change-Id: I32c7b61a61776663601ab0040e2f0767eca1fd64
updates: bz#1648205
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Signed-off-by: Ashish Pandey &lt;aspandey@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>afr: assign gfid during name heal when no 'source' is present.</title>
<updated>2018-12-12T12:57:59+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2018-09-28T11:30:00+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=fbdaffdb6d90409124507b3d9b15fc5d6b3ed8e6'/>
<id>fbdaffdb6d90409124507b3d9b15fc5d6b3ed8e6</id>
<content type='text'>
Problem:
If parent dir is in split-brain or has dirty xattrs set, and the file
has gfid missing on one of the bricks, then name heal won't assign the
gfid.

Fix:
Use the brick we select the gfid from as the 'source'.

Note: Problem was found while trying to debug a split-brain issue on
Cynthia Zhou's setup.
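
As a purely hypothetical illustration of the fix (none of these names
are actual AFR symbols): pick the brick whose reply carries a valid
gfid and use it as the source.

/* Hypothetical sketch: choose the brick whose reply carries a valid gfid
 * as the 'source' for assigning the gfid during name heal. */
#include &lt;string.h&gt;

#define GFID_LEN 16

struct brick_reply {
    int valid;                        /* lookup on this brick succeeded */
    unsigned char gfid[GFID_LEN];
};

static int
gfid_is_null(const unsigned char *gfid)
{
    static const unsigned char zero[GFID_LEN];
    return memcmp(gfid, zero, GFID_LEN) == 0;
}

/* Returns the index of the brick to use as source, or -1 if none has a gfid. */
static int
pick_gfid_source(const struct brick_reply *replies, int brick_count)
{
    for (int i = 0; i &lt; brick_count; i++) {
        if (replies[i].valid &amp;&amp; !gfid_is_null(replies[i].gfid))
            return i;
    }
    return -1;
}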

fixes: bz#1655545
Change-Id: Id088d4f0fb017aa35122de426654194e581ed742
Reported-by: Cynthia Zhou &lt;cynthia.zhou@nokia-sbell.com&gt;
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit 4d58730c0cd6ab5db39aec8a15276f7bd3371b04)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
If parent dir is in split-brain or has dirty xattrs set, and the file
has gfid missing on one of the bricks, then name heal won't assign the
gfid.

Fix:
Use the brick we select the gfid from as the 'source'.

Note: Problem was found while trying to debug a split-brain issue on
Cynthia Zhou's setup.

fixes: bz#1655545
Change-Id: Id088d4f0fb017aa35122de426654194e581ed742
Reported-by: Cynthia Zhou &lt;cynthia.zhou@nokia-sbell.com&gt;
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit 4d58730c0cd6ab5db39aec8a15276f7bd3371b04)
</pre>
</div>
</content>
</entry>
<entry>
<title>gfapi: Offload callback notifications to synctask</title>
<updated>2018-12-12T12:53:23+00:00</updated>
<author>
<name>Soumya Koduri</name>
<email>skoduri@redhat.com</email>
</author>
<published>2018-11-18T18:08:08+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=b710f812101233c0b25edcb0f59d197bcd9d0026'/>
<id>b710f812101233c0b25edcb0f59d197bcd9d0026</id>
<content type='text'>
Upcall notifications are received from the server via epoll,
and the same thread is used to forward these notifications
to the application. This may lead to a deadlock and hang
in the following scenario.

Consider the case where, as part of handling these callbacks,
the application has to perform operations which involve
sending I/Os down the gfapi stack, which in turn have to wait
for epoll threads to receive the response. This may lead to a
deadlock if all the epoll threads are waiting to complete
these callback notifications.

To address this, instead of using the epoll thread itself,
make use of a synctask to send those notifications to the
application.
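
A hedged sketch of the offload, assuming synctask_new()'s usual
(env, fn, cbk, frame, opaque) signature from libglusterfs; the args
struct and helper names are illustrative and not the exact symbols
added by this patch.

#include "syncop.h"            /* synctask_new(), struct syncenv (in-tree header) */

struct upcall_syncop_args {
    void *fs;                  /* the glfs object */
    void *upcall_data;         /* decoded upcall payload for the application */
};

static int
upcall_syncop_deliver(void *opaque)
{
    struct upcall_syncop_args *args = opaque;
    /* Runs in synctask context, so the application callback may safely
     * issue I/O through gfapi without starving the epoll threads. */
    /* ... invoke the registered application callback with args-&gt;upcall_data ... */
    (void)args;
    return 0;
}

static int
upcall_syncop_cbk(int ret, call_frame_t *frame, void *opaque)
{
    /* ... free the args allocated on the epoll side ... */
    (void)ret; (void)frame; (void)opaque;
    return 0;
}

/* Called from the epoll thread when an upcall notification arrives:
 * do not call the application directly, schedule a synctask instead. */
static int
queue_upcall_to_app(struct syncenv *env, struct upcall_syncop_args *args)
{
    return synctask_new(env, upcall_syncop_deliver, upcall_syncop_cbk,
                        NULL, args);
}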

Change-Id: If614e0d09246e4279b9d1f40d883a32a39c8fd90
updates: bz#1651323
Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
(cherry picked from commit ad35193718a99494ab1b852ca4cbdf054f73de88)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Upcall notifications are received from the server via epoll,
and the same thread is used to forward these notifications
to the application. This may lead to a deadlock and hang
in the following scenario.

Consider the case where, as part of handling these callbacks,
the application has to perform operations which involve
sending I/Os down the gfapi stack, which in turn have to wait
for epoll threads to receive the response. This may lead to a
deadlock if all the epoll threads are waiting to complete
these callback notifications.

To address this, instead of using the epoll thread itself,
make use of a synctask to send those notifications to the
application.

Change-Id: If614e0d09246e4279b9d1f40d883a32a39c8fd90
updates: bz#1651323
Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
(cherry picked from commit ad35193718a99494ab1b852ca4cbdf054f73de88)
</pre>
</div>
</content>
</entry>
<entry>
<title>lease: Treat unlk request as noop if lease not found</title>
<updated>2018-12-12T12:53:10+00:00</updated>
<author>
<name>Soumya Koduri</name>
<email>skoduri@redhat.com</email>
</author>
<published>2018-10-29T09:11:26+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=f17188e4aa5e02d266cf147cf418e6cc27f5db21'/>
<id>f17188e4aa5e02d266cf147cf418e6cc27f5db21</id>
<content type='text'>
When the glusterfs server recalls a lease, it expects the
client to flush data and unlock the lease. If that does not
happen, it sets a timer (starting from the time it sends the
RECALL request) and revokes the lease once the timer expires.

Here we could have a race wherein the client did send the UNLK
lease request, but because of network delay it may have reached
the server only after the lease was revoked. To handle such
situations, treat such requests as a noop and return success.
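
A minimal, self-contained C sketch of the idea (illustrative names, not
the actual leases-xlator code): when no matching lease is found for an
UNLK request, report success instead of an error.

#include &lt;string.h&gt;

#define MAX_LEASES 16

struct lease_entry {
    char lease_id[64];
    int  in_use;
};

static struct lease_entry lease_table[MAX_LEASES];

static struct lease_entry *
lease_find(const char *lease_id)
{
    for (int i = 0; i &lt; MAX_LEASES; i++) {
        if (lease_table[i].in_use &amp;&amp;
            strcmp(lease_table[i].lease_id, lease_id) == 0)
            return &amp;lease_table[i];
    }
    return NULL;
}

static int
handle_unlk_lease(const char *lease_id)
{
    struct lease_entry *lease = lease_find(lease_id);

    if (!lease) {
        /* Lease already revoked (the recall timer fired before this delayed
         * UNLK reached us): treat it as a noop and return success. */
        return 0;
    }
    lease-&gt;in_use = 0;           /* normal unlock path */
    return 0;
}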

Change-Id: I166402d10273f4f115ff04030ecbc14676a01663
updates: bz#1651323
Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
(cherry picked from commit c2e758b54d8a3f778e3e63db0000bb8b63de9b25)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
When the glusterfs server recalls a lease, it expects the
client to flush data and unlock the lease. If that does not
happen, it sets a timer (starting from the time it sends the
RECALL request) and revokes the lease once the timer expires.

Here we could have a race wherein the client did send the UNLK
lease request, but because of network delay it may have reached
the server only after the lease was revoked. To handle such
situations, treat such requests as a noop and return success.

Change-Id: I166402d10273f4f115ff04030ecbc14676a01663
updates: bz#1651323
Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
(cherry picked from commit c2e758b54d8a3f778e3e63db0000bb8b63de9b25)
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/afr: Use 2 domain locking in SHD for thin-arbiter</title>
<updated>2018-11-29T15:33:37+00:00</updated>
<author>
<name>karthik-us</name>
<email>ksubrahm@redhat.com</email>
</author>
<published>2018-05-30T09:57:52+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e504e9f97053b7b755aea49dc13a1e886c896b85'/>
<id>e504e9f97053b7b755aea49dc13a1e886c896b85</id>
<content type='text'>
With this change, when SHD starts the index crawl it requests
all the clients to release the AFR_TA_DOM_NOTIFY lock, so that
clients know their in-memory state is no longer valid and any
new operations need to query the thin-arbiter if required.

When SHD completes healing all the files without any failure, it
again takes the AFR_TA_DOM_NOTIFY lock and gets the xattrs on the
TA to see whether any new failures happened in the meantime. If
new failures are marked on the TA, SHD starts the crawl immediately
to heal those failures as well. If there are no new failures, SHD
takes the AFR_TA_DOM_MODIFY lock and unsets the xattrs on the TA,
so that both the data bricks are considered good thereafter.
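
The sequence above could be sketched as follows; every function here is
an illustrative stand-in for the corresponding SHD/AFR step, not a real
symbol.

#include &lt;stdbool.h&gt;

/* Hypothetical helpers, one per step described above. */
bool shd_trigger_notify_lock_release(void);   /* full lock on AFR_TA_DOM_NOTIFY */
int  shd_crawl_and_heal_indices(void);        /* 0 only if everything healed */
bool shd_ta_xattrs_show_new_failures(void);   /* re-take NOTIFY lock, read TA xattrs */
void shd_unset_ta_xattrs(void);               /* under AFR_TA_DOM_MODIFY lock */

static void
shd_thin_arbiter_heal_cycle(void)
{
    /* Asking clients to release AFR_TA_DOM_NOTIFY makes them (via upcall)
     * drop their in-memory bad-brick state and re-query the TA if needed. */
    if (!shd_trigger_notify_lock_release())
        return;

    for (;;) {
        if (shd_crawl_and_heal_indices() != 0)
            return;                 /* failures: retry in a later heal cycle */
        if (!shd_ta_xattrs_show_new_failures())
            break;                  /* nothing new was marked on the TA */
        /* New failures were recorded while healing: crawl again immediately. */
    }

    /* Everything healed and no new failures: unset the TA xattrs so both
     * data bricks are considered good thereafter. */
    shd_unset_ta_xattrs();
}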

&gt;Change-Id: I037b89a0823648f314580ba0716d877bd5ddb1f1
&gt;fixes: bz#1579788
&gt;Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
(cherry picked from commit 5784a00f997212d34bd52b2303e20c097240d91c)

Change-Id: I037b89a0823648f314580ba0716d877bd5ddb1f1
fixes: bz#1648205
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
With this change, when SHD starts the index crawl it requests
all the clients to release the AFR_TA_DOM_NOTIFY lock, so that
clients know their in-memory state is no longer valid and any
new operations need to query the thin-arbiter if required.

When SHD completes healing all the files without any failure, it
again takes the AFR_TA_DOM_NOTIFY lock and gets the xattrs on the
TA to see whether any new failures happened in the meantime. If
new failures are marked on the TA, SHD starts the crawl immediately
to heal those failures as well. If there are no new failures, SHD
takes the AFR_TA_DOM_MODIFY lock and unsets the xattrs on the TA,
so that both the data bricks are considered good thereafter.

&gt;Change-Id: I037b89a0823648f314580ba0716d877bd5ddb1f1
&gt;fixes: bz#1579788
&gt;Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
(cherry picked from commit 5784a00f997212d34bd52b2303e20c097240d91c)

Change-Id: I037b89a0823648f314580ba0716d877bd5ddb1f1
fixes: bz#1648205
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: ensure volinfo-&gt;caps is set to correct value.</title>
<updated>2018-11-09T18:46:34+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-10-03T18:28:37+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=3782c90a617dfefc9bc8a92d0facb3927659dede'/>
<id>3782c90a617dfefc9bc8a92d0facb3927659dede</id>
<content type='text'>
With commit febf5ed4848, during the volume create op, we set
volinfo-&gt;caps to 0 only if a brick belongs to the same node
and brickinfo-&gt;vg[0] is null. Previously, volinfo-&gt;caps used
to be set to 0 when either a brick doesn't belong to the same
node or brickinfo-&gt;vg[0] is null.

With this patch, volinfo-&gt;caps is again set to 0 when either a
brick doesn't belong to the same node or brickinfo-&gt;vg[0] is
null (as was done before commit febf5ed4848).
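
A small illustrative sketch of the restored condition (not the actual
glusterd code): caps drop to 0 as soon as either a brick is not local
or its brickinfo-&gt;vg[0] is null.

#include &lt;stdbool.h&gt;

struct brickinfo_sketch {
    bool is_local;          /* brick belongs to this node */
    char vg[256];           /* LVM volume group name; "" when unset */
};

static int
compute_volume_caps(const struct brickinfo_sketch *bricks, int count,
                    int default_caps)
{
    int caps = default_caps;

    for (int i = 0; i &lt; count; i++) {
        /* Either condition alone clears the caps (pre-febf5ed4848 behaviour). */
        if (!bricks[i].is_local || bricks[i].vg[0] == '\0') {
            caps = 0;
            break;
        }
    }
    return caps;
}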

&gt; BUG: bz#1635820
&gt; Change-Id: I00a97415786b775fb088ac45566ad52b402f1a49
&gt; Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
(cherry picked from commit aae1c402b74fd02ed2f6473b896f108d82aef8e3)

fixes: bz#1647968
Change-Id: I00a97415786b775fb088ac45566ad52b402f1a49
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
With commit febf5ed4848, during the volume create op, we set
volinfo-&gt;caps to 0 only if a brick belongs to the same node
and brickinfo-&gt;vg[0] is null. Previously, volinfo-&gt;caps used
to be set to 0 when either a brick doesn't belong to the same
node or brickinfo-&gt;vg[0] is null.

With this patch, volinfo-&gt;caps is again set to 0 when either a
brick doesn't belong to the same node or brickinfo-&gt;vg[0] is
null (as was done before commit febf5ed4848).

&gt; BUG: bz#1635820
&gt; Change-Id: I00a97415786b775fb088ac45566ad52b402f1a49
&gt; Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
(cherry picked from commit aae1c402b74fd02ed2f6473b896f108d82aef8e3)

fixes: bz#1647968
Change-Id: I00a97415786b775fb088ac45566ad52b402f1a49
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tests: correction in tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t</title>
<updated>2018-11-08T14:37:17+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-10-08T14:03:58+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=2b1d28aa809fc364ca383866ad4d905016d6ef57'/>
<id>2b1d28aa809fc364ca383866ad4d905016d6ef57</id>
<content type='text'>
Patch https://review.gluster.org/#/c/glusterfs/+/19135/ has
optimised glusterd test cases by clubbing similar test
cases into a single test case.

https://review.gluster.org/#/c/glusterfs/+/19135/15/tests/bugs/glusterd/bug-1293414-import-brickinfo-uuid.t
test case has been deleted and added as a part of
tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t

In the original test case, we create a volume with two bricks,
each on a separate node (N1 &amp; N2). From another node in the
cluster (N3), we try to detach a node which is hosting bricks.
It fails.

In the new test, we created a volume with a single brick on N1,
and from another node in the cluster we tried to detach N1. We
expected peer detach to fail, but peer detach succeeded as the
node is hosting all the bricks of the volume.

Now, changing the new test case to cover the original test case scenario.

Please refer to https://bugzilla.redhat.com/show_bug.cgi?id=1642597#c1 to
understand why the new test case is not failing in centos-regression.

&gt; BUG: bz#1642597

&gt; Change-Id: Ifda12b5677143095f263fbb97a6808573f513234
&gt; Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
(cherry picked from commit 0ca6773eaf5aeb507ebc72d2c2f61902eeff414c)

fixes: bz#1643078

Change-Id: Ifda12b5677143095f263fbb97a6808573f513234
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Patch https://review.gluster.org/#/c/glusterfs/+/19135/ has
optimised glusterd test cases by clubbing similar test
cases into a single test case.

https://review.gluster.org/#/c/glusterfs/+/19135/15/tests/bugs/glusterd/bug-1293414-import-brickinfo-uuid.t
test case has been deleted and added as a part of
tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t

In the original test case, we create a volume with two bricks,
each on a separate node (N1 &amp; N2). From another node in the
cluster (N3), we try to detach a node which is hosting bricks.
It fails.

In the new test, we created a volume with a single brick on N1,
and from another node in the cluster we tried to detach N1. We
expected peer detach to fail, but peer detach succeeded as the
node is hosting all the bricks of the volume.

Now, changing the new test case to cover the original test case scenario.

Please refer to https://bugzilla.redhat.com/show_bug.cgi?id=1642597#c1 to
understand why the new test case is not failing in centos-regression.

&gt; BUG: bz#1642597

&gt; Change-Id: Ifda12b5677143095f263fbb97a6808573f513234
&gt; Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
(cherry picked from commit 0ca6773eaf5aeb507ebc72d2c2f61902eeff414c)

fixes: bz#1643078

Change-Id: Ifda12b5677143095f263fbb97a6808573f513234
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tests: check for shd up status in bug-1637802-arbiter-stale-data-heal-lock.t</title>
<updated>2018-10-25T13:12:28+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2018-10-21T12:02:52+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=686849beb424b8b0ebd17b21a9cc201f252f3547'/>
<id>686849beb424b8b0ebd17b21a9cc201f252f3547</id>
<content type='text'>
Problem:
https://review.gluster.org/#/c/glusterfs/+/21427/ seems to be failing
this .t spuriously. On checking one of the failure logs, I see:

22:05:44 Launching heal operation to perform index self heal on volume patchy has been unsuccessful:
22:05:44 Self-heal daemon is not running. Check self-heal daemon log file.
22:05:44 not ok 20 , LINENUM:38

In glusterd log:
[2018-10-18 22:05:44.298832] E [MSGID: 106301] [glusterd-syncop.c:1352:gd_stage_op_phase] 0-management: Staging of operation 'Volume Heal' failed on localhost : Self-heal daemon is not running. Check self-heal daemon log file

But the tests which precede this one check, via a statedump, whether the shd
is connected to the bricks, and they have succeeded and even started
healing. From glustershd.log:

[2018-10-18 22:05:40.975268] I [MSGID: 108026] [afr-self-heal-common.c:1732:afr_log_selfheal] 0-patchy-replicate-0: Completed data selfheal on 3b83d2dd-4cf2-4ea3-a33e-4275be40f440. sources=[0] 1  sinks=2

So the only reason I can see for launching heal via the CLI failing is a race
where shd has been spawned but glusterd has not yet updated its in-memory
state that it is up, and hence the CLI fails.

Fix:
Check for shd up status before launching heal via CLI

Change-Id: Ic88abf14ad3d51c89cb438db601fae4df179e8f4
fixes: bz#1641872
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit 3dea105556130abd4da0fd3f8f2c523ac52398d1)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
https://review.gluster.org/#/c/glusterfs/+/21427/ seems to be failing
this .t spuriously. On checking one of the failure logs, I see:

22:05:44 Launching heal operation to perform index self heal on volume patchy has been unsuccessful:
22:05:44 Self-heal daemon is not running. Check self-heal daemon log file.
22:05:44 not ok 20 , LINENUM:38

In glusterd log:
[2018-10-18 22:05:44.298832] E [MSGID: 106301] [glusterd-syncop.c:1352:gd_stage_op_phase] 0-management: Staging of operation 'Volume Heal' failed on localhost : Self-heal daemon is not running. Check self-heal daemon log file

But the tests which precede this one check, via a statedump, whether the shd
is connected to the bricks, and they have succeeded and even started
healing. From glustershd.log:

[2018-10-18 22:05:40.975268] I [MSGID: 108026] [afr-self-heal-common.c:1732:afr_log_selfheal] 0-patchy-replicate-0: Completed data selfheal on 3b83d2dd-4cf2-4ea3-a33e-4275be40f440. sources=[0] 1  sinks=2

So the only reason I can see for launching heal via the CLI failing is a race
where shd has been spawned but glusterd has not yet updated its in-memory
state that it is up, and hence the CLI fails.

Fix:
Check for shd up status before launching heal via CLI

Change-Id: Ic88abf14ad3d51c89cb438db601fae4df179e8f4
fixes: bz#1641872
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit 3dea105556130abd4da0fd3f8f2c523ac52398d1)
</pre>
</div>
</content>
</entry>
<entry>
<title>features/shard: Hold a ref on base inode when adding a shard to lru list</title>
<updated>2018-10-25T13:11:49+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2018-10-05T06:02:21+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=02a05da6989f5cd4283e2e5d62a9cfa6493d65dc'/>
<id>02a05da6989f5cd4283e2e5d62a9cfa6493d65dc</id>
<content type='text'>
Backport of:
&gt; Change-Id: Ic15ca41444dd04684a9458bd4a526b1d3e160499
&gt; Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
&gt; (cherry picked from commit e627977)
&gt; BUG: 1605056

In __shard_update_shards_inode_list(), previously shard translator
was not holding a ref on the base inode whenever a shard was added to
the lru list. But if the base shard is forgotten and destroyed either
by fuse due to memory pressure or due to the file being deleted at some
point by a different client with this client still containing stale
shards in its lru list, the client would crash at the time of locking
lru_base_inode-&gt;lock owing to illegal memory access.

So now the base shard is ref'd into the inode ctx of every shard that
is added to lru list until it gets lru'd out.

The patch also handles the case where none of the shards associated
with a file that is about to be deleted are part of the LRU list and
where an unlink at the beginning of the operation destroys the base
inode (because there are no refkeepers) and hence all of the shards
that are about to be deleted will be resolved without the existence
of a base shard in-memory. This, if not handled properly, could lead
to a crash.
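
A hedged sketch of the ref-counting change: inode_ref()/inode_unref()
are real libglusterfs calls, while the lru entry structure and helpers
below are simplified stand-ins for the shard translator's actual code.

#include "inode.h"     /* inode_t, inode_ref(), inode_unref() (in-tree header) */
#include "list.h"      /* struct list_head, list_add_tail(), list_del_init() */

struct shard_lru_entry {
    struct list_head list;
    inode_t *shard_inode;
    inode_t *base_inode;   /* ref held while the shard sits on the lru list */
};

static void
shard_lru_add(struct list_head *lru_head, struct shard_lru_entry *entry,
              inode_t *shard_inode, inode_t *base_inode)
{
    entry-&gt;shard_inode = shard_inode;
    /* Keep the base inode alive so a later operation on this stale shard
     * cannot dereference a destroyed lru_base_inode. */
    entry-&gt;base_inode = inode_ref(base_inode);
    list_add_tail(&amp;entry-&gt;list, lru_head);
}

static void
shard_lru_evict(struct shard_lru_entry *entry)
{
    list_del_init(&amp;entry-&gt;list);
    inode_unref(entry-&gt;base_inode);   /* drop the ref taken in shard_lru_add() */
    entry-&gt;base_inode = NULL;
}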

Change-Id: Ic15ca41444dd04684a9458bd4a526b1d3e160499
updates: bz#1641440
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Backport of:
&gt; Change-Id: Ic15ca41444dd04684a9458bd4a526b1d3e160499
&gt; Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
&gt; (cherry picked from commit e627977)
&gt; BUG: 1605056

In __shard_update_shards_inode_list(), previously shard translator
was not holding a ref on the base inode whenever a shard was added to
the lru list. But if the base shard is forgotten and destroyed either
by fuse due to memory pressure or due to the file being deleted at some
point by a different client with this client still containing stale
shards in its lru list, the client would crash at the time of locking
lru_base_inode-&gt;lock owing to illegal memory access.

So now the base shard is ref'd into the inode ctx of every shard that
is added to lru list until it gets lru'd out.

The patch also handles the case where none of the shards associated
with a file that is about to be deleted are part of the LRU list and
where an unlink at the beginning of the operation destroys the base
inode (because there are no refkeepers) and hence all of the shards
that are about to be deleted will be resolved without the existence
of a base shard in-memory. This, if not handled properly, could lead
to a crash.

Change-Id: Ic15ca41444dd04684a9458bd4a526b1d3e160499
updates: bz#1641440
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>gfapi: Bug fixes in leases processing code-path</title>
<updated>2018-10-18T13:26:29+00:00</updated>
<author>
<name>Soumya Koduri</name>
<email>skoduri@redhat.com</email>
</author>
<published>2018-10-10T16:07:07+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=2f2872b335018a7fa4b61193f2e6404bef8864ed'/>
<id>2f2872b335018a7fa4b61193f2e6404bef8864ed</id>
<content type='text'>
This patch fixes the below issues in the gfapi lease code-path:
* 'glfs_setfsleaseid' should allow NULL input to be
   able to reset leaseid
* Applications should be allowed to (un)register for
  upcall notifications of type GLFS_EVENT_LEASE_RECALL
* APIs added to read contents of GLFS_EVENT_LEASE_RECALL
  argument which is of type "struct glfs_upcall_lease"

This is a backport of the below mainline patch -
 https://review.gluster.org/#/c/glusterfs/+/21391
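
A hedged usage sketch from the application side: glfs_upcall_register()
and glfs_upcall_get_event() are assumed from the existing gfapi upcall
API, GLFS_EVENT_LEASE_RECALL and struct glfs_upcall_lease are taken
from the list above, and the new accessor APIs are only referenced in a
comment because their exact names are not given here.

#include &lt;glusterfs/api/glfs.h&gt;
#include &lt;glusterfs/api/glfs-handles.h&gt;

/* Invoked by gfapi when the server recalls a lease held by this client. */
static void
lease_recall_cbk(struct glfs_upcall *up_arg, void *data)
{
    /* For GLFS_EVENT_LEASE_RECALL the event payload is a
     * struct glfs_upcall_lease; read it with the new accessor APIs. */
    struct glfs_upcall_lease *lease_arg = glfs_upcall_get_event(up_arg);

    (void)lease_arg;
    (void)data;
    /* ... flush dirty data and issue the UNLK lease request here ... */
}

static int
register_for_lease_recalls(struct glfs *fs)
{
    /* Applications can now (un)register specifically for lease recalls. */
    return glfs_upcall_register(fs, GLFS_EVENT_LEASE_RECALL,
                                lease_recall_cbk, NULL);
}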

Change-Id: I3320ddf235cc82fad561e13b9457ebd64db6c76b
updates: #350
Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This patch fixes the below issues in the gfapi lease code-path:
* 'glfs_setfsleaseid' should allow NULL input to be
   able to reset leaseid
* Applications should be allowed to (un)register for
  upcall notifications of type GLFS_EVENT_LEASE_RECALL
* APIs added to read contents of GLFS_EVENT_LEASE_RECALL
  argument which is of type "struct glfs_upcall_lease"

This is a backport of the below mainline patch -
 https://review.gluster.org/#/c/glusterfs/+/21391

Change-Id: I3320ddf235cc82fad561e13b9457ebd64db6c76b
updates: #350
Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
