path: root/libglusterfs/src/event-epoll.c

* libglusterfs-event-epoll: structure logging (yatipadia, 2020-02-06; 1 file, -54/+39)

  Convert gf_msg() to gf_smsg().

  Change-Id: Idf5bfc826b0c9f1a2674eea2a2e6164f30806b00
  Updates: #657
  Signed-off-by: yatipadia <ypadia@redhat.com>

* rpc: event_slot_alloc turned into an infinite loop once slot_used reached 1024 (Mohit Agrawal, 2019-12-10; 1 file, -1/+1)

  Problem: Commit faf5ac13c4ee00a05e9451bf8da3be2a9043bbf2 missed one
  condition for coming out of the slot-search loop, so once slot_used
  reached 1024 the loop became infinite.

  Solution: Correct the code path to avoid the infinite loop.

  Change-Id: Ia02a109571f0d8cc9902c32db3e9b9282ee5c1db
  Fixes: bz#1781440
  Credits: Xavi Hernandez <xhernandez@redhat.com>
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>

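  A minimal model of the bug pattern, for illustration only; this is a
  sketch, not the actual event-epoll code, though the 1024-slot size follows
  the commit text:

      #include <stdio.h>

      #define SLOT_TABLE_SIZE 1024   /* matches the 1024 slots in the commit text */

      /* a search loop whose only exit is "found a free slot" spins forever
       * once every slot is in use; bounding the scan and returning -1
       * restores the missing exit condition */
      static int
      slot_alloc_bounded(const int *slot_used)
      {
          for (int i = 0; i < SLOT_TABLE_SIZE; i++) {
              if (!slot_used[i])
                  return i;          /* free slot found */
          }
          return -1;                 /* table full: caller must allocate a new table */
      }

      int
      main(void)
      {
          int used[SLOT_TABLE_SIZE] = {0};
          printf("first free slot: %d\n", slot_alloc_bounded(used));
          return 0;
      }
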
* rpc: Synchronize slot allocation code (Mohit Agrawal, 2019-10-22; 1 file, -33/+41)

  Problem: The slot allocation/deallocation code paths are not synchronized.
  There are scenarios in which a race between allocation and deallocation
  crashes a brick.

  Solution: Synchronize the slot allocation/deallocation code paths to avoid
  the issue.

  Change-Id: I4fb659a75234218ffa0e5e0bf9308f669f75fc25
  Fixes: bz#1763036
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>

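  A minimal sketch of the synchronization this commit adds, assuming a
  single mutex guarding the slot table; all names here are illustrative:

      #include <pthread.h>

      static pthread_mutex_t slot_lock = PTHREAD_MUTEX_INITIALIZER;
      static int slot_used[1024];

      int
      slot_alloc(void)
      {
          int idx = -1;

          pthread_mutex_lock(&slot_lock);
          for (int i = 0; i < 1024; i++) {
              if (!slot_used[i]) {
                  slot_used[i] = 1;   /* claim under the lock: no double alloc */
                  idx = i;
                  break;
              }
          }
          pthread_mutex_unlock(&slot_lock);
          return idx;
      }

      void
      slot_dealloc(int idx)
      {
          pthread_mutex_lock(&slot_lock);
          slot_used[idx] = 0;         /* release under the same lock */
          pthread_mutex_unlock(&slot_lock);
      }
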
* multiple files: another attempt to remove includes (Yaniv Kaul, 2019-06-14; 1 file, -6/+0)

  There are many include statements that are not needed. A previous, more
  ambitious attempt failed because of the *BSD platform (see
  https://review.gluster.org/#/c/glusterfs/+/21929/), so this is a more
  conservative reduction. It does not solve all the circular dependencies we
  have, but it does reduce some of them. There is just too much to handle
  reasonably (dht-common.h includes dht-lock.h, which includes dht-common.h,
  ...), but it reduces the overall number of include lines we need to look
  at in order to understand and fix the mess later on.

  Change-Id: I550cd001bdefb8be0fe67632f783c0ef6bee3f9f
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>

* socket: socket event handlers now return void (Milind Changire, 2019-02-18; 1 file, -4/+3)

  Problem: Returning any value from socket event handlers to the event
  sub-system doesn't make sense, since the event sub-system cannot handle
  socket sub-system errors.

  Solution: Change the return type of all socket event handlers to 'void'.

  Change-Id: I70dc2c57f12b7ea2fae41120f71aa0d7fe0b2b6f
  Fixes: bz#1651246
  Signed-off-by: Milind Changire <mchangir@redhat.com>

* core: make gf_thread_create() easier to use (Xavi Hernandez, 2019-02-01; 1 file, -12/+2)

  This patch creates a specific function to set the thread name using a
  string format and a variable argument list, like printf(). This function
  is used to set the thread name from gf_thread_create(), which now accepts
  a variable argument list to create the full name. It's no longer necessary
  to use a local array to build the name of the thread; this is done
  automatically.

  Change-Id: Idd8d01fd462c227359b96e98699f8c6d962dc17c
  Updates: bz#1193929
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>

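  A hedged sketch of the pattern this commit describes; the helper names are
  illustrative, not the actual glusterfs functions, and the Linux-specific
  pthread_setname_np() call is an assumption about the platform:

      #define _GNU_SOURCE
      #include <pthread.h>
      #include <stdarg.h>
      #include <stdio.h>

      /* build the thread name from a printf-style format, then apply it */
      static void
      thread_set_name_v(pthread_t thread, const char *fmt, va_list args)
      {
          char name[16];   /* Linux limits thread names to 15 chars + NUL */

          vsnprintf(name, sizeof(name), fmt, args);
          pthread_setname_np(thread, name);
      }

      static void
      thread_set_name(pthread_t thread, const char *fmt, ...)
      {
          va_list args;

          va_start(args, fmt);
          thread_set_name_v(thread, fmt, args);
          va_end(args);
      }

      /* usage: thread_set_name(tid, "epoll%d", idx); no caller-side
       * buffer needed, which is the convenience the commit is after */
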
* Multiple files: reduce work while under lock. (Yaniv Kaul, 2019-01-29; 1 file, -2/+2)

  Mostly, unlock before logging. In some cases, code that did not need to be
  under the lock (for example, taking the time, or malloc'ing) was moved to
  execute before taking the lock.

  Note: log ordering might be slightly less accurate now, since logging may
  no longer happen under the lock, so the order of log messages is racy. I
  think it's a reasonable compromise.

  Compile-tested only!

  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
  Change-Id: I2438710016afc9f4f62a176ef1a0d3ed793b4f89

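  The pattern in miniature; an illustrative sketch, not code from this
  patch:

      #include <pthread.h>
      #include <stdio.h>

      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static int counter;

      void
      bump_and_log(void)
      {
          int snapshot;

          pthread_mutex_lock(&lock);
          snapshot = ++counter;   /* only the shared update stays under lock */
          pthread_mutex_unlock(&lock);

          /* logging (slow: formatting, I/O) happens outside the critical
           * section; the trade-off is that messages may appear out of order */
          printf("counter is now %d\n", snapshot);
      }
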
* socket: fix issue on concurrent handling of a socket (Zhang Huan, 2019-01-28; 1 file, -0/+6)

  Found an issue with concurrent invocation of the event handler on the same
  socket fd, causing memory corruption. This issue arises after applying
  commit "socket: Remove redundant in_lock in incoming message handling",
  which removes the priv->in_lock that serialized socket reads.

  The following call sequence describes how concurrent socket event handling
  happens:

  thread 1                    thread 2                  thread 3
  --------                    --------                  --------
  epoll_wait() returns
  (slot->in_handler is 0)
                              call select_on_epoll()
                              and epoll_ctl() on fd
                                                        epoll_wait() returns
  slot->in_handler++
  (slot->in_handler is 1)
                                                        slot->in_handler++
                                                        (slot->in_handler is 2)
  call handler()
                                                        call handler()

  Fix this issue by skipping the invocation of the handler if a handler is
  already in progress.

  Change-Id: I437126ac772debcadb00993a948919c931cd607b
  updates: bz#1467614
  Signed-off-by: Zhang Huan <zhanghuan@open-fs.com>

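  A minimal sketch of the guard described above, assuming slot->in_handler
  is protected by a per-slot lock; the field names follow the commit text,
  everything else is illustrative:

      #include <pthread.h>

      struct event_slot {
          pthread_mutex_t lock;
          int in_handler;
          int fd;
      };

      typedef void (*handler_fn)(int fd, void *data);

      static void
      dispatch_event(struct event_slot *slot, handler_fn handler, void *data)
      {
          pthread_mutex_lock(&slot->lock);
          if (slot->in_handler > 0) {
              /* a handler for this fd is already running: skip this event */
              pthread_mutex_unlock(&slot->lock);
              return;
          }
          slot->in_handler++;
          pthread_mutex_unlock(&slot->lock);

          handler(slot->fd, data);   /* only one thread reaches here per fd */

          pthread_mutex_lock(&slot->lock);
          slot->in_handler--;
          pthread_mutex_unlock(&slot->lock);
      }
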
* libglusterfs: Move devel headers under glusterfs directory (ShyamsundarR, 2018-12-05; 1 file, -6/+6)

  libglusterfs devel package headers are referenced in code using the
  include semantics of a program's own headers. While this works, it can be
  done better, especially when dealing with out-of-tree xlator builds or
  out-of-tree devel package usage in general.

  Towards this, the following changes are done:
  - moved all devel headers under a glusterfs directory
  - included these headers using system header notation <> in all code
    outside of libglusterfs
  - included these headers using own-program notation "" within libglusterfs

  This change, although big, just moves the headers around and makes their
  inclusion correct when they are included from other sources. This helps us
  correctly include libglusterfs headers without namespace conflicts.

  Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
  Updates: bz#1193929
  Signed-off-by: ShyamsundarR <srangana@redhat.com>

* rpcsvc: provide each request handler thread its own queue (Raghavendra Gowdappa, 2018-11-29; 1 file, -13/+101)

  A single global per-program queue is contended by all request handler
  threads and event threads. This can lead to high contention. So, reduce
  the contention by providing each request handler thread its own private
  queue.

  Thanks to "Manoj Pillai" <mpillai@redhat.com> for the idea of pairing a
  single queue with a fixed request-handler-thread and event-thread, which
  brought down the performance regression due to the overhead of queuing
  significantly.

  Thanks to "Xavi Hernandez" <xhernandez@redhat.com> for the discussion on
  how to communicate the event-thread death to the request-handler-thread.

  Thanks to "Karan Sandha" <ksandha@redhat.com> for voluntarily running the
  perf benchmarks to qualify that the performance regression introduced by
  ping-timer-fixes is fixed with this patch, and for patiently running many
  iterations of regression tests while RCAing the issue.

  Thanks to "Milind Changire" <mchangir@redhat.com> for patiently running
  the many iterations of perf benchmarking tests while RCAing the regression
  caused by ping-timer-expiry fixes.

  Change-Id: I578c3fc67713f4234bd3abbec5d3fbba19059ea5
  Fixes: bz#1644629
  Signed-off-by: Raghavendra Gowdappa <rgowdapp@redhat.com>

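  A minimal sketch of the queue pairing described above; all names are
  illustrative, not the rpcsvc structures: each request-handler thread owns
  one private queue, fed only by its paired event thread, so no single
  global queue is contended by everyone.

      #include <pthread.h>

      struct req {
          struct req *next;
      };

      struct req_queue {
          pthread_mutex_t lock;   /* shared by exactly one producer/consumer pair */
          pthread_cond_t cond;
          struct req *head, *tail;
      };

      /* the paired event thread enqueues onto this handler's private queue */
      static void
      enqueue_request(struct req_queue *q, struct req *r)
      {
          r->next = NULL;
          pthread_mutex_lock(&q->lock);
          if (q->tail)
              q->tail->next = r;
          else
              q->head = r;
          q->tail = r;
          pthread_cond_signal(&q->cond);   /* wake the owning handler thread */
          pthread_mutex_unlock(&q->lock);
      }
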
* Land part 2 of clang-format changes (Gluster Ant, 2018-09-12; 1 file, -769/+749)

  Change-Id: Ia84cc24c8924e6d22d02ac15f611c10e26db99b4
  Signed-off-by: Nigel Babu <nigelb@redhat.com>

* clang-scan: fix multiple issues (Amar Tumballi, 2018-08-31; 1 file, -0/+25)

  * Buffer overflow issue in glusterfsd
  * Null argument passed to function expecting non-null (event-epoll)
  * Make sure the op_ret value is set in macro (posix)

  Updates: bz#1622665
  Change-Id: I32b378fc40a5e3ee800c0dfbc13335d44c9db9ac
  Signed-off-by: Amar Tumballi <amarts@redhat.com>

* coverity: libglusterfs issues (Amar Tumballi, 2018-08-19; 1 file, -4/+11)

  CID: 1391415, 1274122, 1274201, 1382408, 1382437, 1389436,
  1288798, 1288106, 1288110

  updates: bz#789278
  Change-Id: I48c7a50f22f5f4580310040c66463d9f7dd26204
  Signed-off-by: Amar Tumballi <amarts@redhat.com>

* build: rename event.h to gf-event.h (Niels de Vos, 2018-07-27; 1 file, -1/+1)

  Newer FreeBSD versions (noticed with 10.3-RELEASE) provide an event.h file
  that on occasion gets included instead of the libglusterfs one. When this
  happens, 'struct event_pool' will not be defined and the build fails with
  errors like:

      autoscale-threads.c:18:55: error: incomplete definition of type 'struct event_pool'
              int thread_count = pool->eventthreadcount;
                                 ~~~~^
      autoscale-threads.c:17:16: note: forward declaration of 'struct event_pool'
              struct event_pool *pool = ctx->event_pool;
                     ^

  This problem is caused by 'pkg-config --cflags uuid' adding
  /usr/local/include to the GF_CPPFLAGS. The use of libuuid is preferred so
  that the contrib/uuid/ directory can be removed.

  By renaming event.h to gf-event.h there is no longer a conflict between
  the different event.h files, and compiling on FreeBSD works without
  issues.

  Change-Id: Ie69f6b8a4f8f8e9630d39a86693eb74674f0f763
  Updates: bz#1607319
  Signed-off-by: Niels de Vos <ndevos@redhat.com>

* core (named threads): flood of -Wformat-truncation warnings with gcc-7.1 (Kaleb S. KEITHLEY, 2018-07-23; 1 file, -3/+2)

  Starting in Fedora 26, which has gcc-7.1.x, -Wformat-truncation is enabled
  with -Wformat, resulting in a flood of new warnings. This many warnings is
  a concern because it makes it harder to see other warnings that should be
  addressed. An example is at
  https://kojipkgs.fedoraproject.org//packages/glusterfs/3.12.0/1.fc28/data/logs/x86_64/build.log

  For more info see https://review.gluster.org/#/c/18267/

  I can't find much (or good) documentation on the heuristics the compiler
  uses for this warning. In the case of printing integer types, it appears
  to look at the available space in the destination and the range of values
  of the variable and/or its type.

  To address the specific question about why 0x3ff versus 0xfff to mask the
  value: either would suffice to hint to the compiler that the printed value
  will fit in three characters. But the loop runs from 0...1023 (or 0...0x3ff
  if you prefer), so I chose that as the more "accurate" mask, since it
  exactly matches the range of values of the loop.

  Fixes: bz#1492847
  Change-Id: I6e309ba42159841131d8241bfc0566ef09e00aa9

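  A hedged sketch of the masking trick described above; the thread-name
  format string here is illustrative, not the exact glusterfs string:

      #include <stdio.h>

      /* masking the loop index with 0x3ff bounds its range to 0..1023;
       * gcc can then prove the formatted value fits in the buffer and
       * stops emitting -Wformat-truncation */
      int
      main(void)
      {
          char name[16];

          for (int i = 0; i < 1024; i++) {
              /* a value <= 0x3ff prints as at most 3 hex digits, so
               * "epoll" + 3 chars + NUL always fits in 16 bytes */
              snprintf(name, sizeof(name), "epoll%03x", i & 0x3ff);
          }
          printf("last name: %s\n", name);
          return 0;
      }
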
* All: run codespell on the code and fix issues. (Yaniv Kaul, 2018-07-22; 1 file, -2/+2)

  Please review; it's not always just the comments that were fixed. I've had
  to revert, of course, all calls to creat() that were changed to create() ...

  Only compile-tested!

  Change-Id: I7d02e82d9766e272a7fd9cc68e51901d69e5aab5
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>

* rpc: conditionally cleanup event slot (Milind Changire, 2018-05-29; 1 file, -0/+8)

  Problem: socket_server_event_handler() may call rpc_transport_unref() for
  a socket which failed registration with the event sub-system. This causes
  an assert to fail in event_unregister_epoll_common().

  Solution: Conditionally clean up the event slot only if the slot index
  is >= 0.

  Change-Id: I6ca7c133712e894adc335c7763879691b7e4b213
  fixes: bz#1582704
  BUG: 1582704
  Signed-off-by: Milind Changire <mchangir@redhat.com>

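  A minimal sketch of the conditional cleanup, assuming the transport
  records -1 as its slot index until event registration succeeds; the struct
  is an illustrative stand-in and the event_unregister prototype is my
  recollection of the libglusterfs event API, so treat it as an assumption:

      struct event_pool;   /* opaque, as in libglusterfs */
      extern int event_unregister(struct event_pool *pool, int fd, int idx);

      struct transport_sketch {
          struct event_pool *event_pool;
          int fd;
          int event_idx;   /* -1 until registration succeeds */
      };

      static void
      transport_cleanup(struct transport_sketch *trans)
      {
          if (trans->event_idx >= 0) {
              /* registration succeeded: a slot exists to unregister */
              event_unregister(trans->event_pool, trans->fd, trans->event_idx);
              trans->event_idx = -1;
          }
          /* when event_idx is still -1, registration failed and there is
           * no slot; skipping the call avoids the failed assert above */
      }
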
* glusterfs: Use gcc builtin ATOMIC operator to increase/decrease refcount. (Mohit Agrawal, 2017-12-12; 1 file, -8/+4)

  Problem: In the glusterfs code base we call mutex_lock/unlock to take a
  reference or drop a reference on an object. At times this can also be a
  source of lock contention.

  Solution: There is no need to use a mutex to increase/decrease a ref
  counter; use the gcc builtin ATOMIC operations instead.

  Test: I have not yet measured how much performance glusterfs itself gains
  from this patch, but I tested the same with the small program below (mutex
  and atomic both) and saw a good difference. (Includes and the declaration
  of glob were implied by the original listing and are restored here.)

      #include <stdio.h>
      #include <stdlib.h>
      #include <stdint.h>
      #include <pthread.h>

      static int64_t glob;
      static int numOuterLoops;

      static void *
      threadFunc(void *arg)
      {
          int j;

          for (j = 0; j < numOuterLoops; j++) {
              __atomic_add_fetch(&glob, 1, __ATOMIC_ACQ_REL);
          }
          return NULL;
      }

      int
      main(int argc, char *argv[])
      {
          int s, j;
          int numThreads;
          pthread_t *thread;
          int64_t n = 0;

          if (argc < 2) {
              printf(" Please provide 2 args Num of threads && Outer Loop\n");
              exit(-1);
          }
          numThreads = atoi(argv[1]);
          numOuterLoops = atoi(argv[2]);
          printf("\tthreads: %d; outer loops: %d;\n", numThreads, numOuterLoops);

          thread = calloc(numThreads, sizeof(pthread_t));
          if (thread == NULL) {
              printf("calloc error so exit\n");
              exit(-1);
          }
          __atomic_store(&glob, &n, __ATOMIC_RELEASE);
          for (j = 0; j < numThreads; j++) {
              s = pthread_create(&thread[j], NULL, threadFunc, NULL);
              if (s != 0) {
                  printf("pthread_create failed so exit\n");
                  exit(-1);
              }
          }
          for (j = 0; j < numThreads; j++) {
              s = pthread_join(thread[j], NULL);
              if (s != 0) {
                  printf("pthread_join failed so exit\n");
                  exit(-1);
              }
          }
          printf("glob value is %ld\n", __atomic_load_n(&glob, __ATOMIC_RELAXED));
          exit(0);
      }

      time ./thr_count 800 800000
          threads: 800; outer loops: 800000;
      glob value is 640000000
      real 1m10.288s
      user 0m57.269s
      sys  3m31.565s

      time ./thr_count_atomic 800 800000
          threads: 800; outer loops: 800000;
      glob value is 640000000
      real 0m20.313s
      user 1m20.558s
      sys  0m0.028s

  Change-Id: Ie5030a52ea264875e002e108dd4b207b15ab7cc7
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>

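  The before/after in miniature; field names are illustrative, since the
  test program above only shows the atomic side:

      #include <pthread.h>

      struct object {
          pthread_mutex_t lock;
          int refcount;
      };

      /* old scheme: every ref serializes on the object's mutex */
      static void
      object_ref_mutex(struct object *obj)
      {
          pthread_mutex_lock(&obj->lock);
          obj->refcount++;
          pthread_mutex_unlock(&obj->lock);
      }

      /* new scheme: a single lock-free read-modify-write */
      static void
      object_ref_atomic(struct object *obj)
      {
          __atomic_add_fetch(&obj->refcount, 1, __ATOMIC_ACQ_REL);
      }
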
* event/epoll: don't call handler for events received after a pollerr (Raghavendra G, 2017-09-06; 1 file, -6/+17)

  We register sockets with EPOLLONESHOT, which means a socket has to be
  explicitly added back through epoll_ctl to receive more events. Normally
  we do this once the handler completes processing of the current event. But
  event_select_on_epoll is one asynchronous codepath where the socket can be
  added back for polling while an event on the same socket is being
  processed. event_select_on_epoll has a check for whether an event is being
  processed, in the form of slot->in_handler, but this check is not
  sufficient to prevent parallel events, since slot->in_handler is not
  incremented atomically with respect to reception of the event. This means
  the following imaginary sequence of events can happen:

  * epoll_wait returns with a POLLERR - say POLLERR1 - on a socket (sock1)
    associated with slot s1. socket_event_handle_pollerr is yet to be
    invoked.
  * an event_select_on called from __socket_ioq_churn, which was called in
    the request/reply/msg submission codepath (as opposed to
    __socket_ioq_churn called as part of POLLOUT handling - we cannot
    receive a POLLOUT due to EPOLLONESHOT), adds back sock1 for polling.
  * since sock1 was added back for polling in step 2 and our polling is
    level-triggered, another thread picks up another POLLERR event - say
    POLLERR2. socket_event_handler is invoked as part of processing POLLERR2
    and completes execution, setting priv->sock to -1.
  * event_unregister_epoll called as part of __socket_reset due to POLLERR1
    would receive fd as -1, resulting in an assert failure. Also, since the
    first pollerr event has done rpc_transport_unref, subsequent parallel
    events (not just pollerr, but other events too) could be acting on a
    freed-up transport.

  Change-Id: I5db755068e7890ec755b59f7a35a57da110339eb
  BUG: 1486134
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-on: https://review.gluster.org/18129
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>

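  A minimal sketch of the one-shot registration pattern the entry above
  revolves around; epfd, fd, and data are illustrative. With EPOLLONESHOT
  the kernel disarms the fd after delivering one event, so the handler must
  re-arm it with EPOLL_CTL_MOD when, and only when, it has finished:

      #include <sys/epoll.h>

      static int
      rearm_socket(int epfd, int fd, void *data)
      {
          struct epoll_event ev;

          ev.events = EPOLLIN | EPOLLPRI | EPOLLONESHOT;
          ev.data.ptr = data;

          /* MOD, not ADD: the fd is already in the interest list, just
           * disarmed after the previous event was delivered */
          return epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
      }
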
* libglusterfs: Name threads on creation (Raghavendra Talur, 2017-07-19; 1 file, -5/+13)

  Set names to threads on creation for easier debugging.

  Output of top -H -p <PID-OF-GLUSTERFSD>

  Before:
  19773 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
  19774 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
  19775 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
  19776 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
  19777 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
  19778 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
  19779 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
  19780 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
  19781 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
  19782 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
  19783 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
  19784 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
  19785 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterfsd
  19786 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterfsd
  19787 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterfsd
  19789 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
  19790 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
  25178 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
   5398 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
   7881 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd

  After:
  19773 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
  19774 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustertimer
  19775 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
  19776 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustermemsweep
  19777 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustersproc0
  19778 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustersproc1
  19779 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterepoll0
  19780 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusteridxwrker
  19781 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusteriotwr0
  19782 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterbrssign
  19783 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterbrswrker
  19784 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterclogecon
  19785 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterclogd0
  19786 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterclogd1
  19787 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterclogd2
  19789 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterposixjan
  19790 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterposixfsy
  25178 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterepoll1
   5398 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterepoll2
   7881 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterposixhc

  Change-Id: Id5f333755c1ba168a2ffaa4fce6e71c375e10703
  BUG: 1254002
  Updates: #271
  Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
  Reviewed-on: https://review.gluster.org/11926
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* core: assorted typos and spelling mistakes from Debian lintian (Kaleb S. KEITHLEY, 2017-07-03; 1 file, -12/+12)

  Plus minor readability improvements.

  Reported-by: pmatthaei@debian.org
  Change-Id: I5393819a2fc9f240a19811143bb57b127df717cf
  BUG: 1466785
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: https://review.gluster.org/17660
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* event/epoll: Add back socket for polling of events immediately after reading the entire rpc message from the wire (Raghavendra G, 2017-05-12; 1 file, -29/+52)

  Currently the socket is added back for future events after higher layers
  (rpc, xlators, etc.) have processed the message. If message processing
  involves significant delay (as in writev replies processed by Erasure
  Coding), performance takes a hit. Hence this patch modifies
  transport/socket to add back the socket for polling of events immediately
  after reading the entire rpc message, but before notification to higher
  layers.

  credits: Thanks to "Kotresh Hiremath Ravishankar" <khiremat@redhat.com>
  for assistance in fixing a regression in bitrot caused by this patch.

  Change-Id: I04b6b9d0b51a1cfb86ecac3c3d87a5f388cf5800
  BUG: 1448364
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-on: https://review.gluster.org/15036
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>

* core: run many bricks within one glusterfsd process (Jeff Darcy, 2017-01-30; 1 file, -1/+2)

  This patch adds support for multiple brick translator stacks running in a
  single brick server process. This reduces our per-brick memory usage by
  approximately 3x, and our appetite for TCP ports even more. It also
  creates the potential to avoid process/thread thrashing, and to improve
  QoS by scheduling more carefully across the bricks, but realizing that
  potential will require further work.

  Multiplexing is controlled by the "cluster.brick-multiplex" global option.
  By default it's off, and bricks are started in separate processes as
  before. If multiplexing is enabled, then *compatible* bricks (mostly those
  with the same transport options) will be started in the same process.

  Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
  BUG: 1385758
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: https://review.gluster.org/14763
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* core: use syscall wrappers instead of direct syscalls - libglusterfs (Kaleb S. KEITHLEY, 2015-11-16; 1 file, -2/+3)

  Various xlators and other components are invoking system calls directly
  instead of using the libglusterfs/syscall.[ch] wrappers. Where the system
  call wrappers are not used, there should be a comment in the source
  explaining why the wrapper isn't used.

  Change-Id: Ieeca2d36adbc884e4cfa0026dba40df70310d40b
  BUG: 1267967
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/12275
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

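  A hedged sketch of the wrapper pattern, not the actual
  libglusterfs/syscall.c code; the point is that each sys_*() wrapper gives
  a single place to hang instrumentation, error normalization, or platform
  quirks:

      #include <unistd.h>

      /* illustrative wrapper; the real syscall.h names may differ */
      static ssize_t
      sys_read_sketch(int fd, void *buf, size_t count)
      {
          return read(fd, buf, count);
      }

      /* callers then use sys_read_sketch(fd, buf, len) instead of read(2)
       * directly; per this commit, a direct syscall needs a source comment
       * explaining why the wrapper is bypassed */
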
* event-epoll: Use pollers[] to check if event_pool_dispatch was called (Krishnan Parthasarathi, 2015-08-28; 1 file, -1/+19)

  BUG: 1242421
  Change-Id: I1a0044653f15d33f89ffe16edc5baba40393dec3
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/12004
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
  Reviewed-by: Raghavendra Talur <rtalur@redhat.com>

* event: add dispatched flag to know if event_dispatch was called (Krishnan Parthasarathi, 2015-08-21; 1 file, -1/+5)

  This is important for glusterfs processes that choose to reconfigure the
  no. of event-threads (a.k.a epoll worker-threads) before they call
  event_dispatch on the event_pool. glusterd needs this today.

  Change-Id: Ia8df3c958545324472262c555ed84b71797f002e
  BUG: 1242421
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/11911
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>

* event,parse-utils,quota-common-utils,rbthash/libglusterfs: Porting to new logging framework (Mohamed Ashiq, 2015-06-24; 1 file, -39/+41)

  Change-Id: I13ae9a97fe442c06cf50fe77c63718ea0858dc4d
  BUG: 1194640
  Signed-off-by: Mohamed Ashiq <ashiq333@gmail.com>
  Reviewed-on: http://review.gluster.org/10823
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* build: do not #include "config.h" in each file (Niels de Vos, 2015-05-29; 1 file, -5/+0)

  Instead of including config.h in each file, have it included from the
  compiler command line (via the -include option). Previously, when a .c
  file tested for a certain #define and config.h had not been included,
  incorrect assumptions were made. With this change, that can not happen
  again.

  BUG: 1222319
  Change-Id: I4f9097b8740b81ecfe8b218d52ca50361f74cb64
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/10808
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* event_pool: Add the code to destroy the poller threads and event pool gracefully. (Poornima G, 2015-03-03; 1 file, -10/+84)

  Change-Id: I49b6ceebb45773620c318fb5d20b81623db75ab6
  BUG: 1093594
  Signed-off-by: Poornima G <pgurusid@redhat.com>
  Reviewed-on: http://review.gluster.org/9691
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

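  A minimal sketch of graceful poller teardown, assuming each poller loop
  checks a destroy flag between epoll_wait() calls; all names are
  illustrative, not the actual event-epoll structures:

      #include <pthread.h>

      struct pool_sketch {
          pthread_mutex_t mutex;
          pthread_t pollers[4];
          int npollers;
          int destroy;
      };

      static void
      pool_destroy(struct pool_sketch *pool)
      {
          int i;

          pthread_mutex_lock(&pool->mutex);
          pool->destroy = 1;   /* pollers exit their loop on seeing this */
          pthread_mutex_unlock(&pool->mutex);

          /* in practice the pollers also need a wakeup (an epoll_wait
           * timeout or a self-pipe event) so they notice the flag */
          for (i = 0; i < pool->npollers; i++)
              pthread_join(pool->pollers[i], NULL);   /* wait for clean exit */

          /* only now is it safe to free slot tables and the pool itself */
      }
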
* epoll: Fix a leak in slot allocation. (Poornima G, 2015-03-02; 1 file, -1/+3)

  When the slots_used in a table becomes 0, the table will not get reused,
  leading to a leak. This patch fixes the leak.

  Change-Id: Ib86826d287368174ea7ebe0d0d64b2dec574634e
  BUG: 1093594
  Signed-off-by: Poornima G <pgurusid@redhat.com>
  Reviewed-on: http://review.gluster.org/9725
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* epoll: Fix broken RPC throttling due to MT epoll (Shyam, 2015-03-01; 1 file, -2/+2)

  The RPC throttle, which kicks in by setting the poll-in event on a socket
  to false, is broken since the MT epoll commit. This is because the poll-in
  event handler attempts to read as much out of the socket as it can, until
  it receives an EAGAIN, which may never happen, and hence we would be
  processing far more RPCs than we want to.

  This is being fixed by changing the epoll mode from ET to LT, and reading
  one request at a time, so that we honor the throttle. The downside is that
  we do not drain the socket, but go back to epoll_wait before reading the
  next request. But when the throttle kicks in we need to do that anyway,
  so a busy connection would degrade to LT behaviour regardless in order to
  maintain the throttle. As a result, this change should not cause much
  deviation in performance for busy connections.

  Change-Id: I522d284d2d0f40e1812ab4c1a453c8aec666464c
  BUG: 1192114
  Signed-off-by: Shyam <srangana@redhat.com>
  Reviewed-on: http://review.gluster.org/9726
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Tested-by: Raghavendra G <rgowdapp@redhat.com>

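  A hedged sketch of how the throttle pairs with LT mode; names are
  illustrative. Clearing EPOLLIN stops further read wakeups, and because
  the socket is read one request per wakeup rather than drained to EAGAIN,
  no already-buffered backlog can bypass the throttle:

      #include <sys/epoll.h>

      static int
      set_poll_in(int epfd, int fd, void *data, int enable)
      {
          struct epoll_event ev;

          ev.data.ptr = data;
          /* LT semantics: while EPOLLIN is set, the kernel keeps reporting
           * readiness on every epoll_wait(); once cleared, unread bytes
           * simply wait in the socket buffer until poll-in is re-enabled */
          ev.events = enable ? EPOLLIN : 0;
          return epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
      }
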
* socket: allow only one epoll thread to read msg fragments (Krishnan Parthasarathi, 2015-02-27; 1 file, -3/+5)

  The __socket_read_reply function releases the socket's priv->lock briefly
  to notify higher layers of a message's xid. This could allow other epoll
  threads that are processing events on this socket to read further
  fragments of the same message, which may lead to incorrect fragment
  processing and result in a crash.

  Change-Id: I915665b2e54ca16f2ad65970e51bf76c65d954a4
  BUG: 1197118
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Signed-off-by: Shyam <srangana@redhat.com>
  Reviewed-on: http://review.gluster.org/9742
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Tested-by: Raghavendra G <rgowdapp@redhat.com>

* epoll: Adding the ability to configure epoll threads (Shyam, 2015-02-07; 1 file, -23/+187)

  Add the ability to configure the number of event threads for various
  gluster services. Currently, with the multi-threaded epoll patch, it is
  possible to have more than one thread waiting on socket activity and
  processing it. This thread count was static; this commit makes it dynamic.

  The services which use the IO path, i.e. brick processes and any client
  process (nfs, FUSE, gfapi, heal, rebalance, etc.), gain 2 settable
  parameters to control the number of threads that are processing events:
  - client.event-threads <n>
  - server.event-threads <n>

  The client setting affects the client graph consumers, and the server
  setting affects the brick processes. These are processed and
  inited/reconfigured using the client/server protocol xlators. Other
  services (say glusterd) would need to extend similar configuration
  settings to take advantage of multi-threaded event processing. At present
  glusterd is not enabled with this commit, as it does not stand to gain
  from this multi-threading (as I understand it).

  Change-Id: Id8422fc57a9f95a135158eb6477ccf9d3c9ea4d9
  BUG: 1104462
  Signed-off-by: Shyam <srangana@redhat.com>
  Reviewed-on: http://review.gluster.org/9488
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

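  For context, these settings are applied through the usual volume-set
  interface; a typical invocation (the volume name here is hypothetical)
  would be:

      gluster volume set myvol client.event-threads 4
      gluster volume set myvol server.event-threads 4
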
* epoll: edge triggered and multi-threaded epoll (Vijaikumar M, 2015-02-07; 1 file, -284/+463)

  - edge triggered (oneshot) polling with epoll
  - pick one event to avoid multiple events getting picked up by the same
    thread, and so get better distribution of events across multiple threads
  - wire support for multiple poll threads to call epoll_wait in parallel
  - evdata stores the absolute index, not a hint, for epoll
  - store index and gen of the slot instead of fd and index hint
  - perform fd close asynchronously inside event.c for multi-thread safety
  - poll is still single threaded

  (A sketch of the resulting worker model follows this entry.)

  Change-Id: I536851dda0ab224c5d5a1b130a571397c9cace8f
  BUG: 1104462
  Signed-off-by: Anand Avati <avati@redhat.com>
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Signed-off-by: Shyam <srangana@redhat.com>
  Reviewed-on: http://review.gluster.org/3842
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

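  A minimal sketch of the "pick one event" worker model from the entry
  above; handle_event and the thread plumbing are illustrative, not the
  actual event-epoll code. Each poller thread asks epoll_wait() for exactly
  one event, so the kernel spreads ready events across the thread pool:

      #include <sys/epoll.h>

      extern void handle_event(struct epoll_event *ev);   /* hypothetical dispatch */

      void *
      poller_worker(void *arg)
      {
          int epfd = *(int *)arg;
          struct epoll_event ev;

          for (;;) {
              /* maxevents = 1: one event per wakeup, per thread */
              int n = epoll_wait(epfd, &ev, 1, -1);
              if (n == 1)
                  handle_event(&ev);   /* one-shot: the handler re-arms the fd */
          }
          return NULL;
      }
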
* event: peel out poll and epoll specific code into separate files (Anand Avati, 2012-09-12; 1 file, -0/+463)

  Code re-org; no change in logic.

  Change-Id: Ib1cb3d3f4cf8989a916df7476479c26570d07a84
  BUG: 821087
  Signed-off-by: Anand Avati <avati@redhat.com>
  Reviewed-on: http://review.gluster.org/3932
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>