<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators, branch v5.0</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>debug/io-stats: io stats filenames contain garbage</title>
<updated>2018-10-18T12:48:09+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2018-10-17T13:18:19+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=7e918e9a0724fd50a672fa5a9c844af7f903e9fc'/>
<id>7e918e9a0724fd50a672fa5a9c844af7f903e9fc</id>
<content type='text'>
As dict_unserialize does not null-terminate the value,
using snprintf copies garbage characters into the buffer
used to create the filename.
The code also used this-&gt;name in the filename, which
is the same for all bricks of a volume, so the files
were overwritten whenever a node hosted multiple bricks
of the same volume. The code now uses conf-&gt;unique
instead, when available.
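
A minimal sketch of the bug, assuming a dict value that exposes
data/len fields as in the dict API (the filename format here is
hypothetical):

    char filename[PATH_MAX];
    /* Buggy: value-&gt;data is not NUL-terminated, so "%s" reads past
     * the value and copies garbage into the filename. */
    snprintf(filename, sizeof(filename), "%s.dump", value-&gt;data);
    /* Safe: bound the read with the known length. */
    snprintf(filename, sizeof(filename), "%.*s.dump",
             value-&gt;len, value-&gt;data);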

Change-Id: I2c72534b32634b87961d3b3f7d53c5f2ca2c068c
fixes: bz#1640392
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
(cherry picked from commit 219cd649fdbd7bfd6c2268a0a4f66bcc15918e31)
</content>
</entry>
<entry>
<title>afr: fix incorrect reporting of directory split-brain</title>
<updated>2018-10-11T11:00:02+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2018-10-11T01:52:09+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=34e8058b7c8e906c889d6d0e155ea63037148eea'/>
<id>34e8058b7c8e906c889d6d0e155ea63037148eea</id>
<content type='text'>
Backport of https://review.gluster.org/#/c/glusterfs/+/21135/

Problem:
When a directory has dirty xattrs due to failed post-ops or when
replace/reset brick is performed, AFR does a conservative merge as
expected, but heal-info reports it as split-brain because there are no
clear sources.

Fix:
Modify the pending flag to contain information about pending heals and
split-brains. For directories, if the split-brain flag is not set, just
show them as needing heal rather than being in split-brain.
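
A hedged sketch of the new reporting decision (flag and status names
hypothetical, not the actual patch):

    if (IA_ISDIR(ia_type) &amp;&amp; !(pending &amp; FLAG_SBRAIN)) {
        /* dirty, but no split-brain bit: a conservative merge can
         * heal it, so report it as needing heal only. */
        status = "possibly-healing";
    } else if (pending &amp; FLAG_SBRAIN) {
        status = "split-brain";
    }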

Change-Id: I09ef821f6887c87d315ae99e6b1de05103cd9383
fixes: bz#1638163
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</content>
</entry>
<entry>
<title>afr: prevent winding inodelks twice for arbiter volumes</title>
<updated>2018-10-11T10:56:41+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2018-10-11T01:01:40+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=c7b933bc460bf47a8d4404055bf1a52225a138cb'/>
<id>c7b933bc460bf47a8d4404055bf1a52225a138cb</id>
<content type='text'>
Backport of https://review.gluster.org/#/c/glusterfs/+/21380/

Problem:
In an arbiter volume, if there is a pending data heal of a file only on
the arbiter brick, self-heal takes inodelks twice due to a code-bug but
unlocks them only once, leaving behind a stale lock on the brick. This
causes the next write to the file to hang.

Fix:
Fix the code-bug to take the lock only once. The bug was introduced on
master with commit eb472d82a083883335bc494b87ea175ac43471ff.
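
A hedged illustration of the imbalance (lock-wind calls abstracted as
take_inodelk/release_inodelk; the actual fix removes the duplicate
wind):

    take_inodelk(dom, &amp;flock);    /* first wind                     */
    take_inodelk(dom, &amp;flock);    /* buggy duplicate wind of the    */
                                  /* same lock                      */
    do_data_heal();
    release_inodelk(dom, &amp;flock); /* only one unlock: a stale lock  */
                                  /* remains, hanging the next write */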

Thanks to Pranith Kumar K &lt;pkarampu@redhat.com&gt; for finding the RCA.

fixes: bz#1638159
Change-Id: I15ad969e10a6a3c4bd255e2948b6be6dcddc61e1
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</content>
</entry>
<entry>
<title>posix: Fix exporting default value for `export-statfs-size`</title>
<updated>2018-10-08T05:40:48+00:00</updated>
<author>
<name>Aravinda VK</name>
<email>avishwan@redhat.com</email>
</author>
<published>2018-09-17T08:46:09+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=3a962b17ad82a7a80da804550cfdf84ec4ec82fd'/>
<id>3a962b17ad82a7a80da804550cfdf84ec4ec82fd</id>
<content type='text'>
No default value was specified for `export-statfs-size` in posix
option table. Glusterd2 sets default value as `off` since the
option type is `bool`. Posix treats `export-statfs-size=on` if
not specified in volfile(That means default value is `on`)

This patch sets default value as `on`
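
A hedged sketch of the corrected entry in the posix option table
(volume_options fields abbreviated):

    {.key = {"export-statfs-size"},
     .type = GF_OPTION_TYPE_BOOL,
     .default_value = "on"},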

&gt; Change-Id: I5c6341183be9b62a78fdbc94621220f9284e1382
&gt; updates: #302
&gt; Signed-off-by: Aravinda VK &lt;avishwan@redhat.com&gt;

(cherry picked from commit 07088d95e450f847722e5decbfa5da18a0dbd9de)

Change-Id: Ib6b3accdb9921376c16040bd2312b99b0226a26f
Fixes: bz#1636842
Signed-off-by: Aravinda VK &lt;avishwan@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/afr: Make data eager-lock decision based on number of locks</title>
<updated>2018-10-05T14:37:01+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2018-09-18T06:45:57+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=45b5b48bc1e21099ab04cf0b1d432c6cb015cc05'/>
<id>45b5b48bc1e21099ab04cf0b1d432c6cb015cc05</id>
<content type='text'>
For both virt and block workloads the file is opened multiple times,
which leads to eager-lock being dynamically turned off for the
workload. Changing the logic to depend on the number of inodelks
instead of the number of open fds gives better performance than the
earlier logic: when there is an eager-lock and the number of inodelks
is more than 1, we know there is a conflicting lock, so we use that
information to decide whether the current transaction should go
through delayed post-op or not.
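
A hedged sketch of the new decision (field names hypothetical):

    /* The inodelk count comes back from the brick in the lock/xattrop
     * response; a count above 1 means another lock is waiting. */
    if (inodelk_count &gt; 1)
        local-&gt;delayed_post_op = _gf_false;  /* finish and release */
    else
        local-&gt;delayed_post_op = _gf_true;   /* keep the eager-lock */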

The locks xlator has no implementation for querying the number of
locks in fxattrop in releases older than 3.10, so to keep things
backward compatible in 3.12, data transactions use the new logic
whereas fxattrop transactions use the old logic. I am planning to send
one more patch which makes metadata-domain locks also depend on the
inodelk count.

Profile info for a dd of 500MB to a file with another fd opened
on the file using exec 250&gt;filename

Without this patch:
 0.14      67.41 us      16.72 us    3870.82 us  892 FINODELK
 0.59     279.87 us      95.71 us    2085.89 us  898 FXATTROP
 3.46     366.43 us      81.75 us    6952.79 us 4000 WRITE
95.79  148733.99 us   50568.12 us  919127.86 us  273 FSYNC

With this patch:
 0.00      51.01 us      38.07 us      80.16 us    4 FINODELK
 0.00     235.43 us     235.43 us     235.43 us    1 TRUNCATE
 0.00     125.07 us      56.80 us     193.33 us    2 GETXATTR
 0.00     135.86 us      62.13 us     209.59 us    2  INODELK
 0.00     197.88 us     155.39 us     253.90 us    4 FXATTROP
 0.00     450.59 us     394.28 us     506.89 us    2  XATTROP
 0.00      56.96 us      19.06 us     406.59 us   23    FLUSH
37.81  273648.93 us      48.43 us 6017657.05 us   44   LOOKUP
62.18    4951.86 us      93.80 us 1143154.75 us 3999    WRITE

postgresql benchmark performance changed from ~1130 TPS to ~2300 TPS.
A random-I/O fio job inside an oVirt-based VM went from ~600 IOPS to
~2000 IOPS.

fixes bz#1635972
Change-Id: If7f7388d2f08cf7f17ca517a4ea222560661dc36
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/afr: Batch writes in same lock even when multiple fds are open</title>
<updated>2018-10-05T14:37:01+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2018-09-06T09:39:42+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=320901d1dc9e1d4b2985e7277ca22816e7936f2f'/>
<id>320901d1dc9e1d4b2985e7277ca22816e7936f2f</id>
<content type='text'>
Problem:
When eager-lock is disabled because multiple fds are open and
application writes land on conflicting regions, the number of locks
grows very fast, and all the CPU time is spent just locking and
unlocking, traversing huge queues in the locks xlator to grant locks.

Fix:
Reduce the number of locks in transit by bundling the writes under the
same lock, and disable delayed piggy-backing when we learn that
multiple fds are open on the file. This reduces the size of the queues
in the locks xlator as well as the number of network calls like
inodelk/fxattrop.
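
A hedged sketch of the bundling (function and field names
hypothetical):

    /* While the lock is already held, drain every queued write under
     * it instead of winding one inodelk per write. */
    afr_lock(frame, this);
    while (!list_empty(&amp;local-&gt;wq))
        wind_next_write(frame, this);  /* reuses the held lock */
    afr_unlock(frame, this);           /* post-op done eagerly when
                                        * multiple fds are open */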

Please note that this problem can still happen if eager-lock is
disabled, as the writes will then not be bundled in the same lock.

fixes bz#1635975
Change-Id: I8fd1cf229aed54ce5abd4e6226351a039924dd91
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>mgmt/glusterd: use proper path to the volfile</title>
<updated>2018-10-05T14:24:35+00:00</updated>
<author>
<name>Raghavendra Bhat</name>
<email>raghavendra@redhat.com</email>
</author>
<published>2018-10-01T21:30:19+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=5254a4b0c15ff75502806cc61a9eb9d21b55d411'/>
<id>5254a4b0c15ff75502806cc61a9eb9d21b55d411</id>
<content type='text'>
Until now, glusterd generated the volfile path for a snapshot
volume's bricks like this:

/snaps/&lt;snap name&gt;/&lt;brick volfile&gt;

But in reality, the path to the brick volfile for a snapshot volume is

/snaps/&lt;snap name&gt;/&lt;snap volume name&gt;/&lt;brick volfile&gt;

The first form was a workaround used to distinguish between a mount
command mounting the snapshot volume and a brick of the snapshot
volume, so that glusterd could return the proper volfile (the client
volfile for the former and the brick volfile for the latter). But this
caused problems for snapshot restore when brick multiplexing is
enabled, because with brick multiplexing the brick process looks up
the volfile and sends a GETSPEC RPC call to glusterd using the second
form of the path, i.e.

/snaps/&lt;snap name&gt;/&lt;snap volume name&gt;/&lt;brick volfile&gt;

So, when the multiplexed snapshot brick sent a GETSPEC RPC request to
glusterd to obtain the brick volume file, glusterd returned the client
volume file of the snapshot volume instead of the brick volume file.
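
A hedged sketch of the corrected path construction (variable names
hypothetical):

    snprintf(path, sizeof(path), "/snaps/%s/%s/%s",
             snap-&gt;snapname, snap_vol-&gt;volname, brick_volfile_name);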

Change-Id: I28b2dfa5d9b379fe943db92c2fdfea879a6a594e
fixes: bz#1636162
Signed-off-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
(cherry picked from commit 83a89296a3d12a3fc2a643c0630be5ce659204ea)
</content>
</entry>
<entry>
<title>mdcache: Fix asan reported potential heap buffer overflow</title>
<updated>2018-10-02T18:41:39+00:00</updated>
<author>
<name>ShyamsundarR</name>
<email>srangana@redhat.com</email>
</author>
<published>2018-09-28T17:47:33+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=9eeca96885560ee555826ca0eb4294d226928728'/>
<id>9eeca96885560ee555826ca0eb4294d226928728</id>
<content type='text'>
The char pointer mdc_xattr_str in the function mdc_xattr_list_populate
is malloc'd, and doing a strcat into a malloc'd region can overflow
the allocation, because strcat appends after whatever stale contents
the uninitialized memory happens to hold.

Added a NUL termination to the malloc'd region to prevent the
overflow and treat the fresh buffer as an empty string.
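
A hedged sketch of the fix (allocation length and names assumed):

    mdc_xattr_str = malloc(alloc_len);
    if (!mdc_xattr_str)
        goto out;
    mdc_xattr_str[0] = '\0';            /* start from an empty string */
    strcat(mdc_xattr_str, xattr_name);  /* appends at offset 0, not
                                         * after stale heap contents */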

Change-Id: If0decab669551581230a8ede4c44c319ff04bac9
Updates: bz#1635373
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
(cherry picked from commit d00a2a1b398346bbdc5ac9b3ba4b09fb1ce1e699)
</content>
</entry>
<entry>
<title>ctime: Provide noatime option</title>
<updated>2018-10-02T12:45:23+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2018-09-03T13:07:58+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=315b45f85ecba15d7fc8f2342468b89ee4747c48'/>
<id>315b45f85ecba15d7fc8f2342468b89ee4747c48</id>
<content type='text'>
Most applications are {c|m}time dependent and very few are atime
dependent, so provide a noatime option that avoids updating atime
when the ctime feature is enabled.

This option also has to be enabled along with the ctime feature to
avoid unnecessary self-heal: since AFR/EC reads data from a single
subvolume, atime would otherwise be updated on only one subvolume,
triggering self-heal.
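
A hedged sketch of the guard in the metadata-update path (flag and
helper names hypothetical):

    /* ctime/mtime updates proceed as usual; only the atime bit is
     * dropped when noatime is set. */
    if (priv-&gt;noatime)
        flags &amp;= ~MDATA_ATIME;
    update_mdata_xattr(this, inode, time, flags);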

Backport of:
&gt; Patch: https://review.gluster.org/21073
&gt; BUG: 1593538
&gt; Change-Id: I085fb33c882296545345f5df194cde7b6cbc337e
&gt; Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
(cherry picked from commit 89636be4c73b12de2e11c75d8e59527bb243f147)

updates: bz#1633015
Change-Id: I085fb33c882296545345f5df194cde7b6cbc337e
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: acquire lock to update volinfo structure</title>
<updated>2018-09-27T15:36:12+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-09-11T08:49:42+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=a39d7273eb8efecf88613a7ba284a9a3cd970be9'/>
<id>a39d7273eb8efecf88613a7ba284a9a3cd970be9</id>
<content type='text'>
Problem: With commit cb0339f92, we use a separate synctask for
restart_bricks. Two threads can therefore access and update the same
volinfo structure at the same time, which can leave volinfo with
inconsistent values and cause assertion failures because of
unexpected values.

Solution: While updating the volinfo structure, acquire a
store_volinfo_lock, and release it only after the thread has
completed its critical section.
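
A hedged sketch of the locking (the lock is assumed to be a mutex;
the store helper's signature is abbreviated):

    pthread_mutex_lock(&amp;volinfo-&gt;store_volinfo_lock);
    {
        /* update volinfo fields and persist them to the store */
        ret = glusterd_store_volinfo(volinfo, ac);
    }
    pthread_mutex_unlock(&amp;volinfo-&gt;store_volinfo_lock);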

&gt; BUG: bz#1627610
&gt; Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;

&gt; Change-Id: I545e4e2368e3285d8f7aa28081ff4448abb72f5d
(cherry picked from commit 484f417da945cf83afdbf136bb4817311862a8d2)

fixes: bz#1633552

Change-Id: I545e4e2368e3285d8f7aa28081ff4448abb72f5d
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
</feed>
