path: root/glusterfsd
author    Jeff Darcy <jdarcy@redhat.com>    2016-10-14 10:04:07 -0400
committer Shyamsundar Ranganathan <srangana@redhat.com>    2017-02-02 19:44:09 -0500
commit    1ed73ffa16cb7fe4415acbdb095da6a4628f711a (patch)
tree      872af8763080b4221e82c7295bd61c63122203ab /glusterfsd
parent    0a0112b2c02a30bcb7eca8fa9ecb7fbbe84aa7f8 (diff)
libglusterfs: make memory pools more thread-friendly
Early multiplexing tests revealed *massive* contention on certain pools'
global locks - especially for dictionaries and secondarily for call stubs.
For the thread counts that multiplexing can create, a more lock-free
solution is clearly needed. Also, the current mem-pool implementation does
a poor job releasing memory back to the system, artificially inflating
memory usage to match whatever the worst case was since the process
started. This is bad in general, but especially so for multiplexing where
there are more pools and a major point of the whole exercise is to reduce
memory consumption.

The basic ideas for the new design are these:

There is one pool, globally, for each power-of-two size range. Every
attempt to create a new pool within this range will instead add a
reference to the existing pool.

Instead of adding pools for each translator within each multiplexed brick
(potentially infinite and quite possibly thousands), we allocate one set
of size-based pools per *thread* (hundreds at worst).

Each per-thread pool is divided into hot and cold lists. Every allocation
first attempts to use the hot list, then the cold list. When objects are
freed, they always go on the hot list.

There is one global "pool sweeper" thread, which periodically reclaims
everything in each pool's cold list and then "demotes" the current hot
list to be the new cold list.

For normal allocation activity, only a per-thread lock need be taken, and
even that only to guard against very rare contention from the pool
sweeper. When threads start and stop, a global lock must be taken to add
them to the pool sweeper's list. Lock contention is therefore extremely
low, and the hot/cold lists also provide good locality.

A more complete explanation (of a similar earlier design) can be found
here:

  http://www.gluster.org/pipermail/gluster-devel/2016-October/051160.html

Backport of:
> Change-Id: I5bc8a1ba57cfb553998f979a498886e0d006e665
> BUG: 1385758
> Reviewed-on: https://review.gluster.org/15645

BUG: 1418091
Change-Id: Id09bbea41f65fcd245822607bc204f3a34904dc2
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: https://review.gluster.org/16531
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
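To make the hot/cold scheme concrete, here is a minimal illustrative sketch
in C. It is not the libglusterfs implementation (the real code is in
mem-pool.c in libglusterfs); per_thread_pool_t, pool_init, pool_alloc,
pool_free and pool_sweep are hypothetical names, and the global per-size
pools, reference counting, and thread registration described above are
omitted.

/*
 * Illustrative sketch only -- hypothetical names, not the libglusterfs API.
 */
#include <pthread.h>
#include <stdlib.h>

typedef struct pooled_obj {
        struct pooled_obj *next;        /* intrusive free-list link */
} pooled_obj_t;

typedef struct per_thread_pool {
        pthread_spinlock_t  lock;       /* contended only by the sweeper   */
        pooled_obj_t       *hot_list;   /* freed since the last sweep      */
        pooled_obj_t       *cold_list;  /* untouched since the last sweep  */
        size_t              obj_size;   /* power-of-two size class,
                                           must be >= sizeof (pooled_obj_t) */
} per_thread_pool_t;

void
pool_init (per_thread_pool_t *pool, size_t obj_size)
{
        pthread_spin_init (&pool->lock, PTHREAD_PROCESS_PRIVATE);
        pool->hot_list  = NULL;
        pool->cold_list = NULL;
        pool->obj_size  = obj_size;
}

/* Allocation: hot list first, then cold list, then the system allocator. */
void *
pool_alloc (per_thread_pool_t *pool)
{
        pooled_obj_t *obj;

        pthread_spin_lock (&pool->lock);
        obj = pool->hot_list;
        if (obj)
                pool->hot_list = obj->next;
        else if ((obj = pool->cold_list))
                pool->cold_list = obj->next;
        pthread_spin_unlock (&pool->lock);

        return obj ? (void *) obj : malloc (pool->obj_size);
}

/* Free: released objects always go back on the hot list. */
void
pool_free (per_thread_pool_t *pool, void *ptr)
{
        pooled_obj_t *obj = ptr;

        pthread_spin_lock (&pool->lock);
        obj->next      = pool->hot_list;
        pool->hot_list = obj;
        pthread_spin_unlock (&pool->lock);
}

/*
 * One sweeper pass over one per-thread pool: return the cold list to the
 * system, then demote the current hot list to be the new cold list.
 */
void
pool_sweep (per_thread_pool_t *pool)
{
        pooled_obj_t *victim, *next;

        pthread_spin_lock (&pool->lock);
        victim          = pool->cold_list;
        pool->cold_list = pool->hot_list;
        pool->hot_list  = NULL;
        pthread_spin_unlock (&pool->lock);

        while (victim) {
                next = victim->next;
                free (victim);
                victim = next;
        }
}

Even in this toy form the key property is visible: normal allocation and
free touch only the per-thread lock, and the only other party that ever
takes it is the sweeper, which does so briefly and infrequently.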
Diffstat (limited to 'glusterfsd')
-rw-r--r--   glusterfsd/src/glusterfsd.c   |   7
1 file changed, 7 insertions, 0 deletions
diff --git a/glusterfsd/src/glusterfsd.c b/glusterfsd/src/glusterfsd.c
index 1f7b63e7594..f402246e78e 100644
--- a/glusterfsd/src/glusterfsd.c
+++ b/glusterfsd/src/glusterfsd.c
@@ -2472,6 +2472,13 @@ main (int argc, char *argv[])
         if (ret)
                 goto out;
 
+        /*
+         * If we do this before daemonize, the pool-sweeper thread dies with
+         * the parent, but we want to do it as soon as possible after that in
+         * case something else depends on pool allocations.
+         */
+        mem_pools_init ();
+
 #ifdef GF_LINUX_HOST_OS
         ret = set_oom_score_adj (ctx);
         if (ret)
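The comment added in this hunk comes down to fork () semantics: daemonizing
forks and lets the parent exit, and fork () duplicates only the calling
thread into the child, so a pool-sweeper thread started before that point
would exist only in the short-lived parent. A stand-alone sketch of that
behavior (purely illustrative, not GlusterFS code; sweeper () is a
hypothetical stand-in for the real pool-sweeper thread):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

/* Stand-in for the pool-sweeper thread. */
static void *
sweeper (void *arg)
{
        (void) arg;
        for (;;)
                sleep (1);      /* pretend to reclaim cold lists */
        return NULL;
}

int
main (void)
{
        pthread_t tid;
        pid_t     pid;

        if (pthread_create (&tid, NULL, sweeper, NULL) != 0)
                abort ();

        /* Roughly what daemonizing does: fork and let the parent exit.
         * fork () copies only the calling thread, so the child has no
         * sweeper thread at all. */
        pid = fork ();
        if (pid < 0)
                abort ();
        if (pid > 0)
                _exit (0);      /* parent (and its sweeper) are gone */

        printf ("daemon pid %d: the sweeper thread was never copied here\n",
                (int) getpid ());
        return 0;
}

That is why, as the added comment says, the mem_pools_init () call goes in
right after the daemonize step rather than earlier in main ().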