<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/performance, branch v3.11.2</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>readdir-ahead: Fix duplicate listing and cache size calculation</title>
<updated>2017-06-21T12:48:09+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2017-06-12T05:29:04+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=1f4cc8fb8d5647ef923e6cfa7e4f027d6aab97f8'/>
<id>1f4cc8fb8d5647ef923e6cfa7e4f027d6aab97f8</id>
<content type='text'>
Issue:
If an opendir is followed by a closedir without a readdir, then even
though the prefetched entries were freed, the freed size was not
accounted for in priv-&gt;rda_cache_size. Thus the cache limit will be
exceeded if there are multiple opendirs followed by closedirs.

Fix:
Fix the priv-&gt;rda_cache_size calculation. Also, the inode_ctx_size
has been removed: each perf xlator has its own cache limit that it
works with, and the inode_ctx size can change if a forget/invalidate
or any other factor alters the inode_ctx.
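
Illustrative sketch (plain C with made-up names, not the actual
readdir-ahead code) of the accounting idea: whatever grows the cache
size counter on prefetch must shrink it again when entries are freed,
including the opendir/closedir-without-readdir path.

    #include &lt;stddef.h&gt;

    struct rda_cache_like {
        size_t cache_size;   /* analogous to priv-&gt;rda_cache_size */
        size_t cache_limit;  /* configured limit */
    };

    /* prefetch path: grow the counter when entries are cached */
    static void cache_add(struct rda_cache_like *c, size_t bytes) {
        c-&gt;cache_size += bytes;
    }

    /* free path (entries consumed by readdir OR fd closed without a
     * readdir): shrink the counter by exactly what is released, so
     * repeated opendir/closedir cycles cannot inflate cache_size */
    static void cache_release(struct rda_cache_like *c, size_t bytes) {
        c-&gt;cache_size -= (bytes &gt; c-&gt;cache_size) ? c-&gt;cache_size : bytes;
    }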


&gt; Reviewed-on: https://review.gluster.org/17504
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt; (cherry picked from commit e97c32ee9913969a726f8a8286cf714f907729d6)

Change-Id: I9707ec558076ce046e58a55989ec9513c70ea029
BUG: 1460898
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17529
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Issue:
If an opendir is followed by a closedir without a readdir, then even
though the prefetched entries were freed, the freed size was not
accounted for in priv-&gt;rda_cache_size. Thus the cache limit will be
exceeded if there are multiple opendirs followed by closedirs.

Fix:
Fix the priv-&gt;rda_cache_size calculation. Also, the inode_ctx_size
has been removed: each perf xlator has its own cache limit that it
works with, and the inode_ctx size can change if a forget/invalidate
or any other factor alters the inode_ctx.


&gt; Reviewed-on: https://review.gluster.org/17504
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt; (cherry picked from commit e97c32ee9913969a726f8a8286cf714f907729d6)

Change-Id: I9707ec558076ce046e58a55989ec9513c70ea029
BUG: 1460898
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17529
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>core: fix spelling errors</title>
<updated>2017-06-13T14:34:34+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2017-06-01T10:56:22+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=cac435fa282fc1b6be4a09e5869e0181538c9e43'/>
<id>cac435fa282fc1b6be4a09e5869e0181538c9e43</id>
<content type='text'>
fixes for various minor spelling errors and typos

master BUG: 1457808
master: https://review.gluster.org/17442

Reported-by: Patrick Matthäi &lt;pmatthaei@debian.org&gt;
Change-Id: Ic1be36f82e3d822bbdc9559878bd79520fc0fcd5
BUG: 1459090
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17475
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
fixes for various minor spelling errors and typos

master BUG: 1457808
master: https://review.gluster.org/17442

Reported-by: Patrick Matthäi &lt;pmatthaei@debian.org&gt;
Change-Id: Ic1be36f82e3d822bbdc9559878bd79520fc0fcd5
BUG: 1459090
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17475
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>perf/ioc: Fix race causing crash when accessing freed page</title>
<updated>2017-06-06T12:54:48+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2017-05-29T09:51:39+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=a443560b541bfe854291229ee9407498f60a8e97'/>
<id>a443560b541bfe854291229ee9407498f60a8e97</id>
<content type='text'>
ioc_inode_wakeup does not lock the ioc_inode for the duration
of the operation, leaving a window where ioc_prune could find
a NULL waitq and hence free the page which ioc_inode_wakeup later
tries to access.
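
A generic sketch of the locking pattern (plain pthreads, hypothetical
names, not the io-cache code): holding the lock for the whole wakeup
keeps a concurrent prune from freeing the page mid-operation.

    #include &lt;pthread.h&gt;
    #include &lt;stdlib.h&gt;

    struct page_like { char *data; };

    struct inode_like {
        pthread_mutex_t   lock;
        struct page_like *page;   /* may be freed by the pruner */
    };

    /* wakeup path: keep the lock held for the duration, so the pruner
     * cannot free i-&gt;page between the check and the access */
    void wakeup(struct inode_like *i) {
        pthread_mutex_lock(&amp;i-&gt;lock);
        if (i-&gt;page)
            i-&gt;page-&gt;data = NULL;   /* safe to touch the page here */
        pthread_mutex_unlock(&amp;i-&gt;lock);
    }

    /* prune path: frees the page only while holding the same lock */
    void prune(struct inode_like *i) {
        pthread_mutex_lock(&amp;i-&gt;lock);
        free(i-&gt;page);
        i-&gt;page = NULL;
        pthread_mutex_unlock(&amp;i-&gt;lock);
    }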

Thanks to Mohit for the analysis.

credit: moagrawa@redhat.com

&gt; BUG: 1456385
&gt; Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
&gt; Reviewed-on: https://review.gluster.org/17410
&gt; Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt; Tested-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;

Change-Id: I54b064857e2694826d0c03b23f8014e3984a3330
BUG: 1457058
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17424
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
ioc_inode_wakeup does not lock the ioc_inode for the duration
of the operation, leaving a window where ioc_prune could find
a NULL waitq and hence free the page which ioc_inode_wakeup later
tries to access.

Thanks to Mohit for the analysis.

credit: moagrawa@redhat.com

&gt; BUG: 1456385
&gt; Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
&gt; Reviewed-on: https://review.gluster.org/17410
&gt; Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt; Tested-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;

Change-Id: I54b064857e2694826d0c03b23f8014e3984a3330
BUG: 1457058
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17424
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>nl-cache: Remove null check validation for frame-&gt;local in lookup cbk</title>
<updated>2017-06-06T12:54:27+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2017-05-30T04:48:04+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=42f325c3284482899e8d9f72f9beb8cf00294d43'/>
<id>42f325c3284482899e8d9f72f9beb8cf00294d43</id>
<content type='text'>
For nameless lookups, nl-cache does not initialize frame-&gt;local, so the
cbk throws up messages like the one below, flooding the logs, especially
whenever a gfid lookup on '/' is done (i.e. loc.path="/" and loc.gfid=1).

[2017-05-30 04:35:31.628443] E [nl-cache.c:201:nlc_lookup_cbk]
(--&gt;/usr/lib64/glusterfs/3.8.4/xlator/performance/io-cache.so(+0x3d81)
[0x7f0883005d81]
--&gt;/usr/lib64/glusterfs/3.8.4/xlator/performance/quick-read.so(+0x3127)
[0x7f0882dfb127]
--&gt;/usr/lib64/glusterfs/3.8.4/xlator/performance/nl-cache.so(+0x4cd3)
[0x7f08829e0cd3] ) 0-distrep-nl-cache: invalid argument: local [Invalid
argument]

Fixed it by removing the NULL check validation for frame-&gt;local in the
lookup cbk.
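
Conceptually (hypothetical names, not the nl-cache source), the cbk now
treats a missing local as "nothing was cached for this call" instead of
logging an error:

    #include &lt;stddef.h&gt;

    struct frame_like { void *local; };

    /* lookup callback sketch: a NULL local just means this was a
     * nameless lookup that nl-cache did not track, so fall through
     * silently instead of logging "invalid argument: local" */
    int lookup_cbk_like(struct frame_like *frame) {
        if (frame-&gt;local == NULL)
            return 0;   /* nothing to update; not an error */
        /* ... update the negative-lookup cache from frame-&gt;local ... */
        return 0;
    }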

&gt; Reviewed-on: https://review.gluster.org/17417
&gt; Tested-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Poornima G &lt;pgurusid@redhat.com&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
(cherry picked from commit ec86167d09bcbb763e31b73fb3d688efaa5444d7)

Change-Id: I21cb44a9d2a324617e43f46fed83c9a0942d3a0b
BUG: 1457901
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17446
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Poornima G &lt;pgurusid@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
For nameless lookups, nl-cache does not initialize frame-&gt;local, so the
cbk throws up messages like the one below, flooding the logs, especially
whenever a gfid lookup on '/' is done (i.e. loc.path="/" and loc.gfid=1).

[2017-05-30 04:35:31.628443] E [nl-cache.c:201:nlc_lookup_cbk]
(--&gt;/usr/lib64/glusterfs/3.8.4/xlator/performance/io-cache.so(+0x3d81)
[0x7f0883005d81]
--&gt;/usr/lib64/glusterfs/3.8.4/xlator/performance/quick-read.so(+0x3127)
[0x7f0882dfb127]
--&gt;/usr/lib64/glusterfs/3.8.4/xlator/performance/nl-cache.so(+0x4cd3)
[0x7f08829e0cd3] ) 0-distrep-nl-cache: invalid argument: local [Invalid
argument]

Fixed it by removing the NULL check validation for frame-&gt;local in the
lookup cbk.

&gt; Reviewed-on: https://review.gluster.org/17417
&gt; Tested-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Poornima G &lt;pgurusid@redhat.com&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
(cherry picked from commit ec86167d09bcbb763e31b73fb3d688efaa5444d7)

Change-Id: I21cb44a9d2a324617e43f46fed83c9a0942d3a0b
BUG: 1457901
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17446
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Poornima G &lt;pgurusid@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>nl-cache: Remove the max limit for nl-cache-limit and nl-cache-timeout</title>
<updated>2017-05-26T16:34:17+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2017-05-12T04:57:28+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=1f1f66ef6662ee84f13d49911cdf72556b1c73ef'/>
<id>1f1f66ef6662ee84f13d49911cdf72556b1c73ef</id>
<content type='text'>
The max limit is better left unset when it is arbitrary. Otherwise,
if the max has to be changed in the future, it can break backward
compatibility.

&gt;Reviewed-on: https://review.gluster.org/17261
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
&gt;(cherry picked from commit 64f41b962b643b966e376a10a16671c569bf6299)

Change-Id: I4337a3789a2d0d5cc8e2bf687a22536c97608461
BUG: 1453152
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17400
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The max limit is better left unset when it is arbitrary. Otherwise,
if the max has to be changed in the future, it can break backward
compatibility.

&gt;Reviewed-on: https://review.gluster.org/17261
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
&gt;(cherry picked from commit 64f41b962b643b966e376a10a16671c569bf6299)

Change-Id: I4337a3789a2d0d5cc8e2bf687a22536c97608461
BUG: 1453152
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17400
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>nl-cache: In case of nameless operations do not cache</title>
<updated>2017-05-25T15:30:34+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2017-05-16T13:55:20+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=b8b398a5ee7a0e02582b2c441548bd758ebdb71c'/>
<id>b8b398a5ee7a0e02582b2c441548bd758ebdb71c</id>
<content type='text'>
Issue:
In nameless lookups/other fops, the parent inode will be NULL; when we
try to add the cache to the NULL inode, it causes a crash.

Hence, handle the scenario of nameless fops and do not cache/serve
such fops.
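
A minimal sketch of that guard (hypothetical names, not the nl-cache
functions): bail out of caching/serving when there is no parent inode
to attach the cache to.

    #include &lt;stddef.h&gt;

    struct loc_like { void *parent; const char *name; };

    /* returns 1 if this fop can be cached/served by the negative
     * lookup cache, 0 for nameless fops where the parent is NULL */
    static int nlc_can_handle(const struct loc_like *loc) {
        if (loc == NULL || loc-&gt;parent == NULL || loc-&gt;name == NULL)
            return 0;   /* nameless fop: skip the cache, just wind through */
        return 1;
    }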

&gt;Reviewed-on: https://review.gluster.org/17316
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;(cherry picked from commit 284cd8851bfe60984d2f11b5c52fe3204ff43b06)

Change-Id: I3b90f882ac89e6aaf3419db89e6f890797f37700
BUG: 1454569
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17361
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Issue:
In nameless lookups/other fops, the parent inode will be NULL; when we
try to add the cache to the NULL inode, it causes a crash.

Hence, handle the scenario of nameless fops and do not cache/serve
such fops.

&gt;Reviewed-on: https://review.gluster.org/17316
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;(cherry picked from commit 284cd8851bfe60984d2f11b5c52fe3204ff43b06)

Change-Id: I3b90f882ac89e6aaf3419db89e6f890797f37700
BUG: 1454569
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17361
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>rda, glusterd: Change the max of rda-cache-limit to INFINITY</title>
<updated>2017-05-22T15:05:43+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2017-05-19T05:39:13+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=42fc1abdb41817b691cda87ddc7ea94129279475'/>
<id>42fc1abdb41817b691cda87ddc7ea94129279475</id>
<content type='text'>
Issue:
The max value of rda-cache-limit is 1GB before this patch.
When parallel-readdir is enabled, there will be many instances of
readdir-ahead; hence the rda-cache-limit depends on the number of
instances. E.g., on a volume with distribute count 4, the
rda-cache-limit, when parallel-readdir is enabled, will be 4GB
instead of 1GB. Consider the following sequence of operations:
- Enable parallel-readdir
- Set rda-cache-limit to, say, 3GB
- Disable parallel-readdir; this results in one instance of readdir-ahead
  and the rda-cache-limit will be back to 1GB, but the current value is
  3GB, and hence the mount will stop working as 3GB &gt; max 1GB.

Solution:
To fix this, we could limit the cache to 1GB even when parallel-readdir
is enabled. But there is no necessity to limit the cache to 1GB; it
can be increased if the system has enough resources. Hence getting rid
of the rda-cache-limit max value is more apt. If we just change the
rda-cache-limit max to INFINITY, we will break older (&lt; 3.11) clients
when the rda-cache-limit is set to &gt; 1GB (as the older clients
still expect a value &lt; 1GB). To safely change the max value of
rda-cache-limit to INFINITY, add a check in glusterd to verify that all
the clients are &gt; 3.11 if the value exceeds 1GB.
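
A rough sketch of that compatibility gate (illustrative only; the real
check lives in glusterd's option validation, and the version encoding
here is made up): values above 1GB are only accepted when every
connected client is new enough.

    #include &lt;stdint.h&gt;

    #define GB (1024ULL * 1024 * 1024)

    /* accept a new rda-cache-limit only if it stays within the old 1GB
     * ceiling, or every connected client runs at least min_version */
    static int limit_allowed(uint64_t new_limit,
                             uint32_t oldest_client_version,
                             uint32_t min_version) {
        if (new_limit &lt;= 1 * GB)
            return 1;
        return oldest_client_version &gt;= min_version;
    }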

&gt;Reviewed-on: https://review.gluster.org/17338
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;(cherry picked from commit e43b40296956d132c70ffa3aa07b0078733b39d4)

Change-Id: Id0cdda3b053287b659c7bf511b13db2e45b92032
BUG: 1453152
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17354
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Issue:
The max value of rda-cache-limit is 1GB before this patch.
When parallel-readdir is enabled, there will be many instances of
readdir-ahead; hence the rda-cache-limit depends on the number of
instances. E.g., on a volume with distribute count 4, the
rda-cache-limit, when parallel-readdir is enabled, will be 4GB
instead of 1GB. Consider the following sequence of operations:
- Enable parallel-readdir
- Set rda-cache-limit to, say, 3GB
- Disable parallel-readdir; this results in one instance of readdir-ahead
  and the rda-cache-limit will be back to 1GB, but the current value is
  3GB, and hence the mount will stop working as 3GB &gt; max 1GB.

Solution:
To fix this, we could limit the cache to 1GB even when parallel-readdir
is enabled. But there is no necessity to limit the cache to 1GB; it
can be increased if the system has enough resources. Hence getting rid
of the rda-cache-limit max value is more apt. If we just change the
rda-cache-limit max to INFINITY, we will break older (&lt; 3.11) clients
when the rda-cache-limit is set to &gt; 1GB (as the older clients
still expect a value &lt; 1GB). To safely change the max value of
rda-cache-limit to INFINITY, add a check in glusterd to verify that all
the clients are &gt; 3.11 if the value exceeds 1GB.

&gt;Reviewed-on: https://review.gluster.org/17338
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;(cherry picked from commit e43b40296956d132c70ffa3aa07b0078733b39d4)

Change-Id: Id0cdda3b053287b659c7bf511b13db2e45b92032
BUG: 1453152
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17354
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>performance/read-ahead: prevent stale data being returned to application.</title>
<updated>2017-05-18T14:53:06+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2014-04-11T10:28:47+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=c1ee349754547b55aefc507f7bd056a8a2a6628d'/>
<id>c1ee349754547b55aefc507f7bd056a8a2a6628d</id>
<content type='text'>
Assume that an fd is shared by two application threads/processes.

T0 read is triggered from app-thread t1 and read call passes through
   write-behind.
T1 app-thread t2 issues a write. The page on which the read from t1 is
   waiting is marked stale.
T2 write-behind caches the write and indicates write completion to the
   application.
T3 app-thread t2 issues a read to the same region. Since there is already a
   page for that region (created as part of read at T0), this read
   request waits on that page to be filled (though it is stale, which
   is a bug).
T4 read (triggered at T0) completes from brick (with write still
   pending). Now both read requests from t1 and t2 are served this data
   (though data is stale from app-thread t2's perspective - which is a
   bug).
T5 write is flushed to brick by write-behind.

The fix is to not serve data from a stale page, but instead to initiate
a fresh read to the back-end.
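
A simplified sketch of that decision (generic C, hypothetical names, not
the read-ahead xlator code): a waiting reader checks the stale flag and,
rather than consuming stale contents, bypasses the page and issues a
fresh read.

    #include &lt;stddef.h&gt;

    struct ra_page_like {
        int ready;   /* data has arrived from the brick */
        int stale;   /* a later write invalidated this page */
    };

    /* stand-in for winding the read down to the brick */
    static size_t backend_read(char *buf, size_t len) {
        (void) buf;
        return len;
    }

    /* serve a read request: never hand out data from a stale page */
    size_t serve_read(struct ra_page_like *pg, char *buf, size_t len) {
        if (pg-&gt;ready &amp;&amp; !pg-&gt;stale)
            return len;                 /* copy from the cached page (omitted) */
        return backend_read(buf, len);  /* stale or missing: read fresh data */
    }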

&gt;Change-Id: Id6af733464fa41bb4e81fd29c7451c73d06453fb
&gt;BUG: 1414242
&gt;Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt;Reviewed-on: https://review.gluster.org/7447
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Csaba Henk &lt;csaba@redhat.com&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Zhou Zhengping &lt;johnzzpcrystal@gmail.com&gt;
&gt;Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;

(cherry picked from commit 2ff39c5cbea6fbda0d7a442f55e6dc2a72efb171)
Change-Id: Id6af733464fa41bb4e81fd29c7451c73d06453fb
BUG: 1449311
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17221
Reviewed-by: Zhou Zhengping &lt;johnzzpcrystal@gmail.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Assume that an fd is shared by two application threads/processes.

T0 read is triggered from app-thread t1 and read call passes through
   write-behind.
T1 app-thread t2 issues a write. The page on which the read from t1 is
   waiting is marked stale.
T2 write-behind caches the write and indicates write completion to the
   application.
T3 app-thread t2 issues a read to the same region. Since there is already a
   page for that region (created as part of read at T0), this read
   request waits on that page to be filled (though it is stale, which
   is a bug).
T4 read (triggered at T0) completes from brick (with write still
   pending). Now both read requests from t1 and t2 are served this data
   (though data is stale from app-thread t2's perspective - which is a
   bug).
T5 write is flushed to brick by write-behind.

The fix is to not serve data from a stale page, but instead to initiate
a fresh read to the back-end.

&gt;Change-Id: Id6af733464fa41bb4e81fd29c7451c73d06453fb
&gt;BUG: 1414242
&gt;Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt;Reviewed-on: https://review.gluster.org/7447
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Csaba Henk &lt;csaba@redhat.com&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Zhou Zhengping &lt;johnzzpcrystal@gmail.com&gt;
&gt;Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;

(cherry picked from commit 2ff39c5cbea6fbda0d7a442f55e6dc2a72efb171)
Change-Id: Id6af733464fa41bb4e81fd29c7451c73d06453fb
BUG: 1449311
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17221
Reviewed-by: Zhou Zhengping &lt;johnzzpcrystal@gmail.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>nl-cache: free nlc_conf_t in fini()</title>
<updated>2017-05-16T00:29:12+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-05-12T11:12:39+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=9a20120790a462608a6121504bab27e4e910b471'/>
<id>9a20120790a462608a6121504bab27e4e910b471</id>
<content type='text'>
The (xlator_t*)-&gt;private structure in negative-lookup-cache is allocated
in the init() function of the xlator, but never freed. Valgrind
detected this as:

    656 bytes in 1 blocks are definitely lost in loss record X of Y
       at 0x..+ calloc (/builddir/build/BUILD/valgrind-3.11.0/coregrind/m_replacemalloc/vg_replace_malloc.c:711)
       by 0x.. __gf_calloc (/usr/src/debug/glusterfs-3.11dev/libglusterfs/src/mem-pool.c:117)
       by 0x.. init (/usr/src/debug/glusterfs-3.11dev/xlators/performance/nl-cache/src/nl-cache.c:669)
       by 0x.. __xlator_init (/usr/src/debug/glusterfs-3.11dev/libglusterfs/src/xlator.c:472)
       by 0x.. xlator_init (/usr/src/debug/glusterfs-3.11dev/libglusterfs/src/xlator.c:498)
       by 0x.. glusterfs_graph_init (/usr/src/debug/glusterfs-3.11dev/libglusterfs/src/graph.c:321)
       by 0x.. glusterfs_graph_activate (/usr/src/debug/glusterfs-3.11dev/libglusterfs/src/graph.c:693)
       by 0x.. glfs_process_volfp (/usr/src/debug/glusterfs-3.11dev/api/src/glfs-mgmt.c:79)
       by 0x.. glfs_volumes_init (/usr/src/debug/glusterfs-3.11dev/api/src/glfs.c:160)
       by 0x.. glfs_init_common (/usr/src/debug/glusterfs-3.11dev/api/src/glfs.c:868)
       by 0x.. glfs_init@@GFAPI_3.4.0 (/usr/src/debug/glusterfs-3.11dev/api/src/glfs.c:913)
       by 0x.. main (/root/gluster-debug/gfapi-load-volfile/gfapi-load-volfile.c:54)

When the xlator is unloaded, it should free the resources it allocated.
This can easily be done in the fini() function.
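
In outline (a generic sketch, not the actual nl-cache code), whatever
init() allocates into the private pointer is released again on teardown:

    #include &lt;stdlib.h&gt;

    struct xlator_like { void *private_data; };

    /* init(): allocate the per-xlator configuration */
    int init_like(struct xlator_like *this) {
        this-&gt;private_data = calloc(1, 656);   /* 656 bytes, as in the leak above */
        return this-&gt;private_data ? 0 : -1;
    }

    /* fini(): hand the allocation back so nothing is lost on unload */
    void fini_like(struct xlator_like *this) {
        free(this-&gt;private_data);
        this-&gt;private_data = NULL;
    }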

Cherry picked from commit d7e9dcfad228f385ad64526b1f06b55e98b06964:
&gt; Change-Id: I079e78cc207145bc542e2282fc4cf2bb4dadc28a
&gt; BUG: 1442569
&gt; Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
&gt; Reviewed-on: https://review.gluster.org/17143
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;

Change-Id: I079e78cc207145bc542e2282fc4cf2bb4dadc28a
BUG: 1450267
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17263
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Zhou Zhengping &lt;johnzzpcrystal@gmail.com&gt;
Reviewed-by: Poornima G &lt;pgurusid@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The (xlator_t*)-&gt;private structure in negative-lookup-cache is allocated
in the init() function of the xlator, but never freed. Valgrind
detected this as:

    656 bytes in 1 blocks are definitely lost in loss record X of Y
       at 0x..+ calloc (/builddir/build/BUILD/valgrind-3.11.0/coregrind/m_replacemalloc/vg_replace_malloc.c:711)
       by 0x.. __gf_calloc (/usr/src/debug/glusterfs-3.11dev/libglusterfs/src/mem-pool.c:117)
       by 0x.. init (/usr/src/debug/glusterfs-3.11dev/xlators/performance/nl-cache/src/nl-cache.c:669)
       by 0x.. __xlator_init (/usr/src/debug/glusterfs-3.11dev/libglusterfs/src/xlator.c:472)
       by 0x.. xlator_init (/usr/src/debug/glusterfs-3.11dev/libglusterfs/src/xlator.c:498)
       by 0x.. glusterfs_graph_init (/usr/src/debug/glusterfs-3.11dev/libglusterfs/src/graph.c:321)
       by 0x.. glusterfs_graph_activate (/usr/src/debug/glusterfs-3.11dev/libglusterfs/src/graph.c:693)
       by 0x.. glfs_process_volfp (/usr/src/debug/glusterfs-3.11dev/api/src/glfs-mgmt.c:79)
       by 0x.. glfs_volumes_init (/usr/src/debug/glusterfs-3.11dev/api/src/glfs.c:160)
       by 0x.. glfs_init_common (/usr/src/debug/glusterfs-3.11dev/api/src/glfs.c:868)
       by 0x.. glfs_init@@GFAPI_3.4.0 (/usr/src/debug/glusterfs-3.11dev/api/src/glfs.c:913)
       by 0x.. main (/root/gluster-debug/gfapi-load-volfile/gfapi-load-volfile.c:54)

When the xlator is unloaded, it should free the resources it allocated.
This can easily be done in the fini() function.

Cherry picked from commit d7e9dcfad228f385ad64526b1f06b55e98b06964:
&gt; Change-Id: I079e78cc207145bc542e2282fc4cf2bb4dadc28a
&gt; BUG: 1442569
&gt; Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
&gt; Reviewed-on: https://review.gluster.org/17143
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;

Change-Id: I079e78cc207145bc542e2282fc4cf2bb4dadc28a
BUG: 1450267
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17263
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Zhou Zhengping &lt;johnzzpcrystal@gmail.com&gt;
Reviewed-by: Poornima G &lt;pgurusid@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>core: make the per glusterfs_ctx_t timer-wheel refcounted</title>
<updated>2017-05-12T13:32:32+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-04-17T10:20:07+00:00</published>
<link rel='alternate' type='text/html' href='http://git.gluster.org/cgit/glusterfs.git/commit/?id=45a5cea1ad028bdff5f33770df8ecdd9ac69b6f1'/>
<id>45a5cea1ad028bdff5f33770df8ecdd9ac69b6f1</id>
<content type='text'>
xlators can use a 'global' timer-wheel for scheduling events. This
timer-wheel is managed per glusterfs_ctx_t, but does not need to be
allocated for every graph. When an xlator wants to use the timer-wheel,
it will be instantiated on demand, and provided to xlators that request
it later on.

By adding a reference counter to the glusterfs_ctx_t for the
timer-wheel, the threads and structures can be cleaned up when the last
xlator no longer needs it. In general, xlators request the timer-wheel
in init(), and they should return it in fini().

Because the timer-wheel is managed per glusterfs_ctx_t, the functions
can be added to ctx.c and do not need to live in their very minimal
tw.[ch] files.
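
A compact sketch of the refcounting scheme (plain pthreads, hypothetical
names, not the libglusterfs API): the shared instance is created on the
first acquire and torn down when the last user releases it.

    #include &lt;pthread.h&gt;
    #include &lt;stdlib.h&gt;

    struct ctx_like {
        pthread_mutex_t lock;
        void           *tw;        /* the shared timer-wheel instance */
        int             tw_refcnt; /* how many xlators currently hold it */
    };

    /* acquire: create the timer-wheel on first use, bump the refcount */
    void *tw_get(struct ctx_like *ctx) {
        pthread_mutex_lock(&amp;ctx-&gt;lock);
        if (ctx-&gt;tw == NULL)
            ctx-&gt;tw = malloc(1);       /* stand-in for the real setup */
        ctx-&gt;tw_refcnt++;
        pthread_mutex_unlock(&amp;ctx-&gt;lock);
        return ctx-&gt;tw;
    }

    /* release (typically from fini()): last holder triggers teardown */
    void tw_put(struct ctx_like *ctx) {
        pthread_mutex_lock(&amp;ctx-&gt;lock);
        if (--ctx-&gt;tw_refcnt == 0) {
            free(ctx-&gt;tw);             /* stand-in for the real teardown */
            ctx-&gt;tw = NULL;
        }
        pthread_mutex_unlock(&amp;ctx-&gt;lock);
    }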


&gt;Reported-by: Poornima G &lt;pgurusid@redhat.com&gt;
&gt;Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
&gt;Reviewed-on: https://review.gluster.org/17068
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
&gt;Reviewed-by: Zhou Zhengping &lt;johnzzpcrystal@gmail.com&gt;
&gt;Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
&gt;(cherry picked from commit 73fcf3a874b2049da31d01b8363d1ac85c9488c2)

Change-Id: I19d225b39aaa272d9005ba7adc3104c3764f1572
BUG: 1450267
Reviewed-on: https://review.gluster.org/17262
Tested-by: Poornima G &lt;pgurusid@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
xlators can use a 'global' timer-wheel for scheduling events. This
timer-wheel is managed per glusterfs_ctx_t, but does not need to be
allocated for every graph. When an xlator wants to use the timer-wheel,
it will be instantiated on demand, and provided to xlators that request
it later on.

By adding a reference counter to the glusterfs_ctx_t for the
timer-wheel, the threads and structures can be cleaned up when the last
xlator no longer needs it. In general, xlators request the timer-wheel
in init(), and they should return it in fini().

Because the timer-wheel is managed per glusterfs_ctx_t, the functions
can be added to ctx.c and do not need to live in their very minimal
tw.[ch] files.


&gt;Reported-by: Poornima G &lt;pgurusid@redhat.com&gt;
&gt;Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
&gt;Reviewed-on: https://review.gluster.org/17068
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
&gt;Reviewed-by: Zhou Zhengping &lt;johnzzpcrystal@gmail.com&gt;
&gt;Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
&gt;(cherry picked from commit 73fcf3a874b2049da31d01b8363d1ac85c9488c2)

Change-Id: I19d225b39aaa272d9005ba7adc3104c3764f1572
BUG: 1450267
Reviewed-on: https://review.gluster.org/17262
Tested-by: Poornima G &lt;pgurusid@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
