<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-locks.c, branch v5.8</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>Land part 2 of clang-format changes</title>
<updated>2018-09-12T12:22:45+00:00</updated>
<author>
<name>Gluster Ant</name>
<email>bugzilla-bot@gluster.org</email>
</author>
<published>2018-09-12T12:22:45+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=e16868dede6455cab644805af6fe1ac312775e13'/>
<id>e16868dede6455cab644805af6fe1ac312775e13</id>
<content type='text'>
Change-Id: Ia84cc24c8924e6d22d02ac15f611c10e26db99b4
Signed-off-by: Nigel Babu &lt;nigelb@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: Ia84cc24c8924e6d22d02ac15f611c10e26db99b4
Signed-off-by: Nigel Babu &lt;nigelb@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>xlators: move from strlen() to sizeof()</title>
<updated>2018-08-31T02:14:06+00:00</updated>
<author>
<name>Yaniv Kaul</name>
<email>ykaul@redhat.com</email>
</author>
<published>2018-08-21T16:33:16+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=d6d729b0609957c0382749c30da507dda77561b7'/>
<id>d6d729b0609957c0382749c30da507dda77561b7</id>
<content type='text'>
xlators/features/index/src/index.c
xlators/features/shard/src/shard.c
xlators/features/upcall/src/upcall-internal.c
xlators/mgmt/glusterd/src/glusterd-bitrot.c
xlators/mgmt/glusterd/src/glusterd-locks.c
xlators/mgmt/glusterd/src/glusterd-mountbroker.c
xlators/mgmt/glusterd/src/glusterd-op-sm.c

For const strings, just do the size calculation at compile time instead of at runtime.
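
A minimal sketch of the pattern (hypothetical prefix, not taken from
the actual diff): sizeof() on a string literal is a compile-time
constant, while strlen() walks the string at runtime.

    #include &lt;stdio.h&gt;
    #include &lt;string.h&gt;

    #define KEY "trusted.glusterfs."

    int main(void)
    {
        const char *attr = "trusted.glusterfs.dht";
        /* before: length recomputed at runtime on every call */
        size_t n_run = strlen(KEY);
        /* after: compile-time constant; - 1 drops the trailing NUL
         * that sizeof() counts */
        size_t n_cmp = sizeof(KEY) - 1;
        printf("%zu %zu %d\n", n_run, n_cmp,
               strncmp(attr, KEY, sizeof(KEY) - 1) == 0);
        return 0;
    }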

Compile-tested only!

Change-Id: I995b2b89f14454b3855a4cd0ca90b3f01d5e080f
updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
xlators/features/index/src/index.c
xlators/features/shard/src/shard.c
xlators/features/upcall/src/upcall-internal.c
xlators/mgmt/glusterd/src/glusterd-bitrot.c
xlators/mgmt/glusterd/src/glusterd-locks.c
xlators/mgmt/glusterd/src/glusterd-mountbroker.c
xlators/mgmt/glusterd/src/glusterd-op-sm.c

For const strings, just do the size calculation at compile time instead of at runtime.

Compile-tested only!

Change-Id: I995b2b89f14454b3855a4cd0ca90b3f01d5e080f
updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: ignore importing volume which is undergoing a delete operation</title>
<updated>2018-08-16T07:02:04+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-07-31T07:03:49+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=0b450b8b35cf8f03d51eeb5b0941832cde636993'/>
<id>0b450b8b35cf8f03d51eeb5b0941832cde636993</id>
<content type='text'>
Problem explanation:

Assume a 3-node cluster. If N1 originates a delete operation and,
while N1's commit phase completes, the glusterd service of either N2
or N3 gets disconnected from N1 (before completing the commit phase),
N1 will end up importing the volume, which is still in-flight for a
delete on the other nodes, as a fresh volume, resulting in an
incorrect configuration state.

Fix:

Mark a volume as stage deleted once a volume delete operation passes
its staging phase, and reset this flag during the unlock phase. If
the same volume gets imported to other peers during this intermediate
phase, it should not be considered for recreation.
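
A minimal sketch of the idea, with illustrative names rather than the
exact glusterd structures:

    #include &lt;stdbool.h&gt;

    struct volinfo {
        bool stage_deleted; /* set once a delete passes staging */
    };

    /* staging phase of a volume delete: mark the volume */
    void delete_stage(struct volinfo *vol)
    {
        vol-&gt;stage_deleted = true;
    }

    /* unlock phase: the delete has finished either way, reset it */
    void unlock_phase(struct volinfo *vol)
    {
        vol-&gt;stage_deleted = false;
    }

    /* peer import path: a volume that is mid-delete must not be
     * treated as a freshly created one */
    bool should_import(const struct volinfo *vol)
    {
        return !vol-&gt;stage_deleted;
    }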

An automated .t is quite tough to implement with the current infra.

Test Case:

1. Keep creating and deleting volumes in a loop on a 3 node cluster
2. Simulate n/w failure between the peers (ifdown followed by ifup)
3. Check that the output of 'gluster v list | wc -l' is the same
across all 3 nodes during steps 1 &amp; 2.

Change-Id: Ifdd5dc39699120258d7fdd42fe2deb9de25c6246
Fixes: bz#1605077
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem explanation:

Assume a 3-node cluster. If N1 originates a delete operation and,
while N1's commit phase completes, the glusterd service of either N2
or N3 gets disconnected from N1 (before completing the commit phase),
N1 will end up importing the volume, which is still in-flight for a
delete on the other nodes, as a fresh volume, resulting in an
incorrect configuration state.

Fix:

Mark a volume as stage deleted once a volume delete operation passes
its staging phase, and reset this flag during the unlock phase. If
the same volume gets imported to other peers during this intermediate
phase, it should not be considered for recreation.

An automated .t is quite tough to implement with the current infra.

Test Case:

1. Keep creating and deleting volumes in a loop on a 3 node cluster
2. Simulate n/w failure between the peers (ifdown followed by ifup)
3. Check that the output of 'gluster v list | wc -l' is the same
across all 3 nodes during steps 1 &amp; 2.

Change-Id: Ifdd5dc39699120258d7fdd42fe2deb9de25c6246
Fixes: bz#1605077
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>All: run codespell on the code and fix issues.</title>
<updated>2018-07-22T14:40:16+00:00</updated>
<author>
<name>Yaniv Kaul</name>
<email>ykaul@redhat.com</email>
</author>
<published>2018-07-16T14:03:17+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=621138ce763eda8270d0a4f6d7209fd50ada8787'/>
<id>621138ce763eda8270d0a4f6d7209fd50ada8787</id>
<content type='text'>
Please review; it's not always just the comments that were fixed.
Of course, I've had to revert all calls to creat() that were changed
to create() ...
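
For context, a small aside (generic POSIX, not part of the change):
creat() is a real system call, which is why the rename to create()
had to be reverted.

    #include &lt;fcntl.h&gt;
    #include &lt;unistd.h&gt;

    int main(void)
    {
        /* creat(path, mode) is equivalent to
         * open(path, O_WRONLY | O_CREAT | O_TRUNC, mode) */
        int fd = creat("/tmp/example", 0644);
        if (fd &gt;= 0)
            close(fd);
        return 0;
    }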

Only compile-tested!

Change-Id: I7d02e82d9766e272a7fd9cc68e51901d69e5aab5
updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Please review; it's not always just the comments that were fixed.
Of course, I've had to revert all calls to creat() that were changed
to create() ...

Only compile-tested!

Change-Id: I7d02e82d9766e272a7fd9cc68e51901d69e5aab5
updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: glusterd is releasing the locks before timeout</title>
<updated>2018-05-28T14:10:45+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-04-17T12:40:01+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=c54702a34aa1feb86e2f5f2b1b238966a52ae37b'/>
<id>c54702a34aa1feb86e2f5f2b1b238966a52ae37b</id>
<content type='text'>
Problem: We introduced a lock timer in mgmt v3, which releases the
lock 3 minutes after command execution starts. Some commands related
to heal/profile take more time to execute, so for these commands the
timeout is set to 10 minutes. But since the lock timer is set to
3 minutes, glusterd releases the lock after 3 minutes, i.e. before
the command has completed its execution.

Solution: Pass a timeout parameter from the CLI to glusterd whenever
the default timeout value changes (the default can be changed through
the command line, and for commands related to profile/heal we change
it to 10 minutes). glusterd will then set the lock timer's timeout
according to the value passed.
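
A minimal sketch of the idea, assuming the CLI ships its timeout
value with the request (names are illustrative, not the actual
glusterd API):

    #define DEFAULT_LOCK_TIMEOUT_SECS (3 * 60)

    /* choose the lock-timer timeout from the value the CLI passed,
     * falling back to the 3-minute default when none was sent */
    int lock_timeout_secs(int cli_timeout_secs)
    {
        if (cli_timeout_secs &lt;= 0)
            return DEFAULT_LOCK_TIMEOUT_SECS;
        return cli_timeout_secs; /* e.g. 600 for heal/profile */
    }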

Change-Id: I7b7a9a4f95ed44aca39ef9d9907f546bca99c69d
fixes: bz#1577731
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem: We introduced a lock timer in mgmt v3, which releases the
lock 3 minutes after command execution starts. Some commands related
to heal/profile take more time to execute, so for these commands the
timeout is set to 10 minutes. But since the lock timer is set to
3 minutes, glusterd releases the lock after 3 minutes, i.e. before
the command has completed its execution.

Solution: Pass a timeout parameter from the CLI to glusterd whenever
the default timeout value changes (the default can be changed through
the command line, and for commands related to profile/heal we change
it to 10 minutes). glusterd will then set the lock timer's timeout
according to the value passed.

Change-Id: I7b7a9a4f95ed44aca39ef9d9907f546bca99c69d
fixes: bz#1577731
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: setting mgmt_v3_timer-&gt;timer to NULL after deleting mgmt_v3_timer</title>
<updated>2018-04-02T17:28:40+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-04-02T15:00:16+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=64376e107733e129497cd200a69725c489e9b286'/>
<id>64376e107733e129497cd200a69725c489e9b286</id>
<content type='text'>
We are setting mgmt_v3_timer-&gt;timer to NULL after mgmt_v3_timer has
been deleted, which is unnecessary. So the statement is removed.

This issue is caught while running glusterd with ASAN.
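
An illustrative reconstruction of the removed pattern in generic C
(not the actual glusterd code):

    #include &lt;stdlib.h&gt;

    struct mgmt_v3_timer_t { void *timer; };

    void cleanup(struct mgmt_v3_timer_t *t)
    {
        free(t); /* the whole struct is gone here */
        /* t-&gt;timer = NULL;  the removed statement: writing through
         * a freed pointer is a use-after-free, which ASAN flags */
    }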

Change-Id: Ied1f91590a2c64ec1af36d4de9c3febd6cf94bb9
Fixes: bz#1562907
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
We are setting mgmt_v3_timer-&gt;timer to NULL after mgmt_v3_timer has
been deleted, which is unnecessary. So the statement is removed.

This issue is caught while running glusterd with ASAN.

Change-Id: Ied1f91590a2c64ec1af36d4de9c3febd6cf94bb9
Fixes: bz#1562907
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: glusterd crash in gd_mgmt_v3_unlock_timer_cbk</title>
<updated>2018-03-15T05:00:56+00:00</updated>
<author>
<name>Gaurav Yadav</name>
<email>gyadav@redhat.com</email>
</author>
<published>2018-03-15T04:57:55+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=97233b3f69595a7d7a3da3a80a6911b8b4985881'/>
<id>97233b3f69595a7d7a3da3a80a6911b8b4985881</id>
<content type='text'>
Freeing the same pointer twice inside gd_mgmt_v3_unlock_timer_cbk
causes glusterd to crash.
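
The crash class in miniature (generic C, not the actual glusterd
code path):

    #include &lt;stdlib.h&gt;

    int main(void)
    {
        char *key = malloc(16);
        free(key);
        free(key); /* second free of the same pointer: undefined
                    * behaviour, typically a heap-corruption abort */
        return 0;
    }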

Change-Id: I9147241d995780619474047b1010317a89b9965a
BUG: 1550339
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Freeing the same pointer twice inside gd_mgmt_v3_unlock_timer_cbk
causes glusterd to crash.

Change-Id: I9147241d995780619474047b1010317a89b9965a
BUG: 1550339
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd : memory leak in mgmt_v3 lock functionality</title>
<updated>2018-03-06T08:05:24+00:00</updated>
<author>
<name>Gaurav Yadav</name>
<email>gyadav@redhat.com</email>
</author>
<published>2018-03-01T09:14:34+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=4511b45bf4d13581cd74c4b87495eda6d54ec5be'/>
<id>4511b45bf4d13581cd74c4b87495eda6d54ec5be</id>
<content type='text'>
In order to take care of the stale lock issue, a timer was introduced
in the mgmt_v3 lock. This timer was not freeing its memory, which is
how this leak got introduced.

With this fix, memory cleanup in locking is handled properly.
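
A minimal sketch of the fixed pattern, with illustrative names: the
context allocated when the timer is armed must be released in the
callback.

    #include &lt;stdlib.h&gt;

    struct lock_timer_ctx { char *key; };

    /* timer callback: after the unlock work, free the context that
     * was allocated when the timer was armed */
    void unlock_timer_cbk(void *data)
    {
        struct lock_timer_ctx *ctx = data;
        /* ... perform the unlock ... */
        free(ctx-&gt;key); /* without these frees, every timed lock
                          * leaked its context */
        free(ctx);
    }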

Change-Id: I2e1ce3ebba3520f7660321f3d97554080e4e22f4
BUG: 1550339
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
In order to take care of the stale lock issue, a timer was introduced
in the mgmt_v3 lock. This timer was not freeing its memory, which is
how this leak got introduced.

With this fix, memory cleanup in locking is handled properly.

Change-Id: I2e1ce3ebba3520f7660321f3d97554080e4e22f4
BUG: 1550339
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: revert coverity fix for freeing key_dup</title>
<updated>2017-11-29T16:13:47+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2017-11-29T07:47:37+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=bd63d164eabba59596f991c2aa7b66b43ed732a8'/>
<id>bd63d164eabba59596f991c2aa7b66b43ed732a8</id>
<content type='text'>
key_dup can't be freed here, as it is still referenced in
gd_mgmt_v3_unlock_timer_cbk.
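
The ownership rule in miniature (illustrative names): the duplicated
key is handed to the timer callback, so the arming path must not
free it.

    #include &lt;stdlib.h&gt;
    #include &lt;string.h&gt;

    struct timer_ctx { char *key; };

    /* arming path: ownership of the duplicated key moves into the
     * context; freeing it here would leave the callback with a
     * dangling pointer */
    struct timer_ctx *arm_unlock_timer(const char *key)
    {
        struct timer_ctx *ctx = malloc(sizeof(*ctx));
        if (ctx)
            ctx-&gt;key = strdup(key); /* freed later, in the callback */
        return ctx;
    }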

Change-Id: I85667f98c82d1acebcce59137dfc0dd1ca93b4eb
BUG: 789278
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
key_dup can't be freed here, as it is still referenced in
gd_mgmt_v3_unlock_timer_cbk.

Change-Id: I85667f98c82d1acebcce59137dfc0dd1ca93b4eb
BUG: 789278
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Fix few coverity errors</title>
<updated>2017-11-06T03:56:48+00:00</updated>
<author>
<name>Prashanth Pai</name>
<email>ppai@redhat.com</email>
</author>
<published>2017-11-03T06:23:12+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=f461d75b226a5bc6c088280e43a96f9b54f33af5'/>
<id>f461d75b226a5bc6c088280e43a96f9b54f33af5</id>
<content type='text'>
Fixes issues 810, 248, 491, 499, 85, 786, 811, 43, and 44
from the report at [1].

[1]: https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-10-30-9aa574a5/html/

BUG: 789278
Change-Id: I27ebae2ffb2256b8eef0757d768cc46e5a942e9f
Signed-off-by: Prashanth Pai &lt;ppai@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Fixes issues 810, 248, 491, 499, 85, 786, 811, 43, and 44
from the report at [1].

[1]: https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-10-30-9aa574a5/html/

BUG: 789278
Change-Id: I27ebae2ffb2256b8eef0757d768cc46e5a942e9f
Signed-off-by: Prashanth Pai &lt;ppai@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
