<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/libglusterfs/src, branch v7.3</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>volgen: make thin-arbiter name unique in 'pending-xattr' option</title>
<updated>2020-02-17T09:52:06+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amar@kadalu.io</email>
</author>
<published>2020-02-04T17:32:51+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=8b45b798a1fec498cdad36c79c5976b607bb1be9'/>
<id>8b45b798a1fec498cdad36c79c5976b607bb1be9</id>
<content type='text'>
The thin-arbiter module uses the translator's 'pending-xattr' name as
the name of the file which gets created on the thin-arbiter node. By
making this name unique, a single thin-arbiter node can serve multiple
clusters.
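
A minimal sketch of the idea (the name format and option helper are
illustrative assumptions, not the actual volgen code):

    /* derive the pending-xattr name from the volume name so that two
       clusters sharing one thin-arbiter node create different files */
    char pending_xattr[256];

    snprintf(pending_xattr, sizeof(pending_xattr), "%s-ta-2",
             volinfo-&gt;volname);               /* e.g. "myvol-ta-2" */
    xlator_set_option(xl, "pending-xattr", pending_xattr);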

Updates: #763
Change-Id: Ib3c732e7e04e6dba229e71ae3e64f1f3cb6d794d
Signed-off-by: Amar Tumballi &lt;amar@kadalu.io&gt;
(cherry picked from commit 8db8202f716fd24c8c52f8ee5f66e169310dc9b1)
</content>
</entry>
<entry>
<title>gf-event: Handle unix volfile-servers</title>
<updated>2020-02-10T07:31:40+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2019-10-24T06:54:35+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=f4f24c8c782bf4fa601f7ef14bbf2e2b6583cd90'/>
<id>f4f24c8c782bf4fa601f7ef14bbf2e2b6583cd90</id>
<content type='text'>
Problem:
The glfsheal program uses a unix-socket-based volfile server, so the
volfile server is the path to a socket in this case. gf_event expects
it to be a hostname in all cases, so getaddrinfo will fail on the
unix-socket path and events won't be sent.

Fix:
In the case of unix sockets, default to localhost.
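
A minimal sketch of the fix (variable names are illustrative, not the
exact gf_event code):

    /* a unix-socket "server" is a filesystem path, which getaddrinfo()
       can never resolve; fall back to the local node instead */
    const char *host = ctx-&gt;cmd_args.volfile_server;

    if (!host || host[0] == '/')
        host = "localhost";

    ret = getaddrinfo(host, port_str, &amp;hints, &amp;result);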

fixes: bz#1793085
Change-Id: I60d27608792c29d83fb82beb5fde5ef4754bece8
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>timer: fix event destruction race</title>
<updated>2020-01-10T05:36:52+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-12-19T10:58:54+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=2fc16af00b12b3aa92d373aa1d5a4b9fd6ce6b94'/>
<id>2fc16af00b12b3aa92d373aa1d5a4b9fd6ce6b94</id>
<content type='text'>
In the current timer implementation, each event has an absolute time at
which it will be fired. When the first timer of the queue has not
elapsed yet, a pthread_cond_timedwait() is used to wait until the
expected time.

At first glance that's fine. However, the time passed to that function
was a pointer to the timespec structure contained in the event itself.
This is problematic because of how pthread_cond_timedwait() works
internally.

Simplifying a bit, pthread_cond_timedwait() basically queues itself as
a waiter for the given condition variable and releases the mutex. Then
it does the timed wait using the passed value.

With that in mind, the following case is possible:

   Timer Thread                            Other Thread
   ------------                            ------------

                                           gf_timer_call_cancel()
   pthread_mutex_lock()                    |
                                           + pthread_mutex_lock()
   event = current_event()                 |
   pthread_cond_timedwait(&amp;event-&gt;at)      |
   + pthread_mutex_unlock()                |
   |                                       + remove_event()
   |                                       + destroy_event()
   + timed_wait(&amp;event-&gt;at)

As we can see, the time is used after it has been destroyed, which means
we have a use-after-free problem.

This patch fixes the problem by copying the time to a local variable
before calling pthread_cond_timedwait().
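
A sketch of the fix ('cond' and 'mutex' stand in for the timer
registry's condition variable and lock):

    /* copy the expiry time while the mutex is still held; after
       pthread_cond_timedwait() drops the mutex, another thread may
       cancel and destroy 'event', but the local copy stays valid */
    struct timespec at = event-&gt;at;

    pthread_cond_timedwait(&amp;cond, &amp;mutex, &amp;at);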

Backport of:
&gt; Change-Id: I0f4e8eded24fe3a1276dc75c6cf093bae973d26b
&gt; Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
&gt; Fixes: bz#1785208

Change-Id: I0f4e8eded24fe3a1276dc75c6cf093bae973d26b
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
Fixes: bz#1767264
</content>
</entry>
<entry>
<title>afr: expose cluster.optimistic-change-log to CLI.</title>
<updated>2020-01-09T13:22:22+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2020-01-08T05:41:53+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=dc2c94c6df87cb3e97adfff080123db883f7a591'/>
<id>dc2c94c6df87cb3e97adfff080123db883f7a591</id>
<content type='text'>
Backport of https://review.gluster.org/#/c/glusterfs/+/23960/

This volume option was not made available to the `gluster volume set` CLI.
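
Exposing it amounts to one entry in glusterd's volume-option table; a
sketch of such an entry (field names are an assumption, not the exact
glusterd struct):

    { .key        = "cluster.optimistic-change-log",
      .voltype    = "cluster/replicate",
      .op_version = 1,  /* placeholder; the patch uses the release's op-version */
      .flags      = VOLOPT_FLAG_CLIENT_OPT },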

Reported-by: epolakis (https://github.com/kinsu) in
https://github.com/gluster/glusterfs/issues/781

fixes: bz#1788785
Change-Id: I7141bdd4e53ee99e22b354edde8d023bfc0b2cd7
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</content>
</entry>
<entry>
<title>afr: make heal info lockless</title>
<updated>2019-12-16T05:38:25+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2019-11-07T09:48:30+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=ac108947b0f25293c154707b70ea01eb3774f542'/>
<id>ac108947b0f25293c154707b70ea01eb3774f542</id>
<content type='text'>
Changes in locks xlator:
Added support for per-domain inodelk count requests.
The caller needs to set the GLUSTERFS_MULTIPLE_DOM_LK_CNT_REQUESTS key
in the dict and then set one key per domain, named
'GLUSTERFS_INODELK_DOM_PREFIX:&lt;domain name&gt;'.
In the response dict, the xlator will send the per-domain count as the
value for each of these keys.
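
A sketch of a request built this way (error handling omitted; the
domain name is an example):

    dict_t *req = dict_new();
    char key[256];

    /* ask the locks xlator for per-domain inodelk counts */
    dict_set_uint32(req, GLUSTERFS_MULTIPLE_DOM_LK_CNT_REQUESTS, 1);

    /* one key per domain of interest */
    snprintf(key, sizeof(key), "%s:%s", GLUSTERFS_INODELK_DOM_PREFIX,
             "myvol-replicate-0");
    dict_set_uint32(req, key, 1);
    /* the response dict carries the count as the value of this key */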

Changes in AFR:
Replaced afr_selfheal_locked_inspect() with afr_lockless_inspect().
Logic has been added to make the latter behave the same as the former,
thus not breaking the current heal info output behaviour.

fixes: bz#1783858
Change-Id: Ie9e83c162aa77f44a39c2ba7115de558120ada4d
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit d7e049160a9dea988ded5816491c2234d40ab6b3)
</content>
</entry>
<entry>
<title>rpc: event_slot_alloc turned into an infinite loop once slot_used reached 1024</title>
<updated>2019-12-13T07:07:52+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2019-12-10T03:05:23+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=749c1b461cc38b0f61a7d9bfdfe54af7d24ee69b'/>
<id>749c1b461cc38b0f61a7d9bfdfe54af7d24ee69b</id>
<content type='text'>
Problem: Commit faf5ac13c4ee00a05e9451bf8da3be2a9043bbf2 missed one
         condition to break out of the loop, so once slot_used reached
         1024 the loop became infinite.

Solution: Correct the code path to avoid the infinite loop.
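
A sketch of the corrected loop shape (simplified; the real code is
__event_slot_alloc() in libglusterfs):

    /* scan at most EVENT_EPOLL_TABLES tables; if every slot of every
       table is in use, fail instead of spinning forever */
    for (i = 0; i &lt; EVENT_EPOLL_TABLES; i++) {
        if (event_pool-&gt;slots_used[i] &lt; EVENT_EPOLL_SLOTS)
            break;                /* found a table with a free slot */
    }
    if (i == EVENT_EPOLL_TABLES)
        return -1;                /* no free slot available */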

&gt; Change-Id: Ia02a109571f0d8cc9902c32db3e9b9282ee5c1db
&gt; Fixes: bz#1781440
&gt; Credits: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
&gt; Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
&gt; (cherry picked from commit 8030f9c0f092170ceb50cedf59b9c330022825b7)

Change-Id: Ia02a109571f0d8cc9902c32db3e9b9282ee5c1db
Fixes: bz#1782826
Credits: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
</content>
</entry>
<entry>
<title>rpc: Synchronize slot allocation code</title>
<updated>2019-12-05T07:22:29+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2019-10-03T08:36:52+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=286e17ac84428f581330ae2a4b0b5912e559e795'/>
<id>286e17ac84428f581330ae2a4b0b5912e559e795</id>
<content type='text'>
Problem: The current slot allocation/deallocation code path is not
         synchronized. There are scenarios where a race condition in
         the slot allocation/deallocation code path crashes the brick.

Solution: Synchronize the slot allocation/deallocation code path to
          avoid the issue.
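
A sketch of the synchronization, following glusterfs' usual pattern of
a locked wrapper around an unlocked __worker (simplified):

    pthread_mutex_lock(&amp;event_pool-&gt;mutex);
    {
        idx = __event_slot_alloc(event_pool, fd, notify_poller_death);
    }
    pthread_mutex_unlock(&amp;event_pool-&gt;mutex);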

&gt; Change-Id: I4fb659a75234218ffa0e5e0bf9308f669f75fc25
&gt; Fixes: bz#1763036
&gt; Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
&gt; (cherry picked from commit faf5ac13c4ee00a05e9451bf8da3be2a9043bbf2)

Change-Id: I4fb659a75234218ffa0e5e0bf9308f669f75fc25
Fixes: bz#1778175
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
</content>
</content>
</entry>
<entry>
<title>core: fix memory allocation issues</title>
<updated>2019-09-16T02:02:00+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-06-21T09:28:08+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=a2201d804d8e76ca82a9d086a6ee545cb26228a1'/>
<id>a2201d804d8e76ca82a9d086a6ee545cb26228a1</id>
<content type='text'>
Two problems have been identified that caused gluster's memory usage
to be twice as high as required (see the sketch after the list).

1. An off-by-one error caused all objects allocated from the memory
   pools to be taken from a pool bigger than required. Since each pool
   corresponds to a size equal to a power of two, this was wasting half
   of the available memory.

2. The header information used for accounting on each memory object was
   not taken into consideration when searching for a suitable memory
   pool. It was added later, when each individual block was allocated.
   This made this space "invisible" to memory accounting.
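
A sketch of the corrected pool selection (log2_ceil() is a hypothetical
stand-in for the real rounding code in mem-pool):

    /* account for the header up front, then pick the smallest
       power-of-two pool that really fits, without the off-by-one */
    size_t required = requested_size + sizeof(pooled_obj_hdr_t);
    unsigned int power = log2_ceil(required); /* smallest 2^power &gt;= required */

    if (power &lt; POOL_SMALLEST)
        power = POOL_SMALLEST;
    pool = &amp;pools[power - POOL_SMALLEST];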

Credits: Thanks to Nithya Balachandran for identifying this problem and
         testing this patch.

&gt; Fixes: bz#1722802
&gt; Change-Id: I90e27ad795fe51ca11c13080f62207451f6c138c
&gt; Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
&gt; (cherry picked from commit 1716a907da1a835b658740f1325033d7ddd44952)

Fixes: bz#1748774
Change-Id: I90e27ad795fe51ca11c13080f62207451f6c138c
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</content>
</entry>
<entry>
<title>core/syncop: Bail out if frame creation fails</title>
<updated>2019-09-15T13:20:25+00:00</updated>
<author>
<name>Soumya Koduri</name>
<email>skoduri@redhat.com</email>
</author>
<published>2019-09-03T15:22:42+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=8c5bc03a47ee38b6cfec8725224248896c75f5d8'/>
<id>8c5bc03a47ee38b6cfec8725224248896c75f5d8</id>
<content type='text'>
There could be cases (due to either insufficient memory or a corrupted
mem-pool) in which frame creation fails. Bail out with an error in such
cases.
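
A sketch of the check, using the usual glusterfs frame-creation idiom
(the exact syncop call site may differ):

    frame = create_frame(this, this-&gt;ctx-&gt;pool);
    if (!frame) {
        errno = ENOMEM;   /* frame pool exhausted or corrupted */
        return -1;
    }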

This is a backport of the mainline fix below:
&gt; Fixes: bz#1748448
&gt; review url: https://review.gluster.org/#/c/glusterfs/+/23350/

Change-Id: I8cc0a5852f6f04d2bac991e4eb79ecb42577da11
Fixes: bz#1751556
Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
</content>
</entry>
<entry>
<title>ctime: Fix incorrect realtime passed to frame-&gt;root-&gt;ctime</title>
<updated>2019-08-28T05:49:36+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2019-08-20T10:19:40+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=b85d550a552d485f4a7f1eedbc00bdf1f67d6263'/>
<id>b85d550a552d485f4a7f1eedbc00bdf1f67d6263</id>
<content type='text'>
On systems that don't support "timespec_get" (e.g., CentOS 6), we were
using "clock_gettime" with "CLOCK_MONOTONIC" to get the unix epoch
time, which is incorrect. This patch introduces "timespec_now_realtime",
which uses "clock_gettime" with "CLOCK_REALTIME" and fixes
the issue.
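
The new helper is essentially this (sketch; the real version lives in
libglusterfs):

    /* CLOCK_REALTIME yields wall-clock (unix epoch) time, whereas
       CLOCK_MONOTONIC counts from an arbitrary point such as boot */
    void
    timespec_now_realtime(struct timespec *ts)
    {
        clock_gettime(CLOCK_REALTIME, ts);
    }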

Backport of:
 &gt; Patch: https://review.gluster.org/23274/
 &gt; Change-Id: I57be35ce442d7e05319e82112b687eb4f28d7612
 &gt; Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
 &gt; BUG: 1743652
(cherry picked from commit d14d0749340d9cb1ef6fc4b35f2fb3015ed0339d)

Change-Id: I57be35ce442d7e05319e82112b687eb4f28d7612
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
fixes: bz#1746145
</content>
</entry>
</feed>
