path: root/rpc/rpc-lib/src/libgfrpc.sym
Commit message | Author | Date | Files | Lines
* rpcsvc: Add latency tracking for rpc programs (Pranith Kumar K, 2020-09-07, 1 file, -0/+1)
  Added latency tracking of rpc-handling code. With this change we should
  be able to monitor the amount of time the rpc-handling code consumes
  for each rpc call.

  fixes: #1466
  Change-Id: I04fc7f3b12bfa5053c0fc36885f271cb78f581cd
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
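Latency tracking of this kind typically amounts to a timestamp taken before the handler runs and a delta accumulated after it returns. A minimal C sketch of that shape, with hypothetical names throughout (the actual rpcsvc structures and helpers differ):

```c
#include <stdint.h>
#include <time.h>

/* Hypothetical per-procedure latency accumulator; the real structure
 * in rpcsvc differs. */
struct proc_latency {
    uint64_t total_ns; /* cumulative handler time */
    uint64_t count;    /* number of calls measured */
};

static uint64_t
now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Wrap an rpc handler invocation with latency measurement. */
static int
handle_with_latency(struct proc_latency *lat,
                    int (*actor)(void *req), void *req)
{
    uint64_t start = now_ns();
    int ret = actor(req);

    lat->total_ns += now_ns() - start;
    lat->count++;
    return ret;
}
```

Dividing total_ns by count then gives the mean handling latency per procedure.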
* rpc: Remove duplicate code (Pranith Kumar K, 2019-03-28, 1 file, -1/+0)
  rpc_clnt_disable() and rpc_clnt_disconnect() had the same code. Removed
  rpc_clnt_disconnect() and redirected its callers to rpc_clnt_disable().

  updates bz#1193929
  Change-Id: I965f57cc1d5af36d266810125558b6f5e5f279d4
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
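For illustration, the shape of the cleanup looks like this; only the two function names come from the commit message, the body is invented:

```c
struct rpc_clnt; /* opaque here */

/* The surviving entry point: tear down the client connection. */
int
rpc_clnt_disable(struct rpc_clnt *rpc)
{
    /* ... disconnect the transport, cancel timers, drop refs ... */
    (void)rpc;
    return 0;
}

/* rpc_clnt_disconnect() duplicated the body above line for line, so
 * the patch deletes it and repoints its call sites at
 * rpc_clnt_disable() rather than keeping two copies in sync. */
```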
* server: Resolve memory leak path in server_init (Mohit Agrawal, 2018-12-03, 1 file, -0/+1)
  Problem:
  1) server_init does not clean up allocated resources when it fails
     before returning an error
  2) dict leak at the time of graph destruction

  Solution:
  1) Free resources when server_init fails
  2) Take a dict_ref on the graph xlator before destroying the graph,
     to avoid the leak

  Change-Id: I9e31e156b9ed6bebe622745a8be0e470774e3d15
  fixes: bz#1654917
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
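Fix (1) is the classic goto-unwind pattern for C initialization functions. A minimal sketch, with invented resource names (server_init's real allocations differ):

```c
#include <stdlib.h>

/* Unwind partially-allocated resources on failure instead of
 * returning early and leaking them. Names are illustrative, not the
 * actual server_init internals. */
static int
server_init_sketch(void)
{
    char *conf = NULL;
    char *listener = NULL;

    conf = malloc(64);
    if (!conf)
        goto err;

    listener = malloc(64);
    if (!listener)
        goto err; /* conf is released below instead of leaking */

    return 0;

err:
    free(listener); /* free(NULL) is a safe no-op */
    free(conf);
    return -1;
}
```

Fix (2) is refcount discipline: taking a dict_ref on the graph xlator's dict before the graph-destroy call keeps the dict's lifetime accounted for, so it can be unref'd cleanly afterwards.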
* rpcsvc: provide each request handler thread its own queue (Raghavendra Gowdappa, 2018-11-29, 1 file, -1/+0)
  A single global per-program queue is contended by all request handler
  threads and event threads. This can lead to high contention. So, reduce
  the contention by providing each request handler thread its own private
  queue.

  Thanks to "Manoj Pillai" <mpillai@redhat.com> for the idea of pairing a
  single queue with a fixed request-handler-thread and event-thread,
  which brought down the performance regression due to the overhead of
  queuing significantly.

  Thanks to "Xavi Hernandez" <xhernandez@redhat.com> for the discussion
  on how to communicate the event-thread death to the
  request-handler-thread.

  Thanks to "Karan Sandha" <ksandha@redhat.com> for voluntarily running
  the perf benchmarks to qualify that the performance regression
  introduced by the ping-timer fixes is fixed with this patch, and for
  patiently running many iterations of regression tests while RCAing the
  issue.

  Thanks to "Milind Changire" <mchangir@redhat.com> for patiently running
  the many iterations of perf benchmarking tests while RCAing the
  regression caused by the ping-timer-expiry fixes.

  Change-Id: I578c3fc67713f4234bd3abbec5d3fbba19059ea5
  Fixes: bz#1644629
  Signed-off-by: Raghavendra Gowdappa <rgowdapp@redhat.com>
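A minimal sketch of the pairing, using simplified types (the real rpcsvc code is considerably more involved): each request-handler thread owns one mutex-protected queue, and only its paired event thread pushes to it, so at most two threads ever touch a given lock:

```c
#include <pthread.h>
#include <stddef.h>

struct req {
    struct req *next;
};

/* One queue per request-handler thread, paired with one event thread. */
struct req_queue {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    struct req *head, *tail;
};

/* Called by the paired event thread only; contention is limited to
 * one producer and one consumer instead of all threads. */
static void
queue_push(struct req_queue *q, struct req *r)
{
    r->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail)
        q->tail->next = r;
    else
        q->head = r;
    q->tail = r;
    pthread_cond_signal(&q->cond);
    pthread_mutex_unlock(&q->lock);
}

/* Called by the owning request-handler thread. */
static struct req *
queue_pop(struct req_queue *q)
{
    pthread_mutex_lock(&q->lock);
    while (!q->head)
        pthread_cond_wait(&q->cond, &q->lock);
    struct req *r = q->head;
    q->head = r->next;
    if (!q->head)
        q->tail = NULL;
    pthread_mutex_unlock(&q->lock);
    return r;
}
```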
* rpc/clnt: Don't let consumers manage "connected" state (Raghavendra G, 2018-06-04, 1 file, -2/+0)
  The state management of "connected" in rpc is ad hoc as far as the
  responsibility goes. Note that there is nothing wrong with the
  functionality itself. The rpc layer manages this state in the
  disconnect codepath and has exposed an api for consumers to manage it.
  Note that the rpc layer never sets "connected" to true by itself, which
  forces consumers to use this api to get a working rpc connection. The
  situation is best captured by a comment in the code from Jeff Darcy in
  glusterfsd/src/gf-attach.c:

  -/*
  - * In a sane world, the generic RPC layer would be capable of tracking
  - * connection status by itself, with no help from us. It might invoke our
  - * callback if we had registered one, but only to provide information. Sadly,
  - * we don't live in that world. Instead, the callback *must* exist and *must*
  - * call rpc_clnt_{set,unset}_connected, because that's the only way those
  - * fields get set (with RPC both above and below us on the stack). If we don't
  - * do that, then rpc_clnt_submit doesn't think we're connected even when we
  - * are. It calls the socket code to reconnect, but the socket code tracks this
  - * stuff in a sane way so it knows we're connected and returns EINPROGRESS.
  - * Then we're stuck, connected but unable to use the connection. To make it
  - * work, we define and register this trivial callback.
  - */

  Also, consumers of rpc know about the state of the connection only
  through the notifications sent by rpc-clnt, so they have no extra
  information with which to manage the state; letting them manage it is
  counter-intuitive. This patch cleans that up and moves the
  responsibility for state management into the rpc layer itself.

  Change-Id: I31e641a60795fc480ca753917f4b2579f1e05094
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
  Fixes: bz#1585585
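In outline, the cleanup means the rpc layer flips its own flag on transport CONNECT/DISCONNECT events before notifying consumers, rather than exporting set/unset helpers for them to call. A rough sketch with simplified types (not the actual rpc-clnt code):

```c
#include <pthread.h>
#include <stdbool.h>

struct rpc_conn {
    pthread_mutex_t lock;
    bool connected;
};

enum transport_event { T_CONNECT, T_DISCONNECT };

/* Invoked by the transport; consumers are only notified afterwards
 * and no longer call rpc_clnt_{set,unset}_connected themselves. */
static void
rpc_transport_notify_sketch(struct rpc_conn *conn, enum transport_event ev)
{
    pthread_mutex_lock(&conn->lock);
    conn->connected = (ev == T_CONNECT);
    pthread_mutex_unlock(&conn->lock);
    /* ... then forward the event to the consumer's notify callback ... */
}
```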
* build: address linkage issues (Kaleb S. KEITHLEY, 2018-03-05, 1 file, -0/+2)
  We have the following undefined symbol errors from protocol/server.so:

      glusterfs_mgmt_pmap_signout
      glusterfs_autoscale_threads

  See https://review.gluster.org/19225 (bz#1532238) and
  https://review.gluster.org/19657 (bz#1550895)
  (why are there two different bzs for the same bug?)

  IMO this is a cleaner solution, i.e. moving the above two functions to
  libgfrpc (.../rpc/rpc-lib/...).

  I would also, for (foolish) consistency's sake, like to see
  glusterfs_mgmt_pmap_signin() moved from glusterfsd to libgfrpc as well.

  This works on f28/rawhide, with its new, more restrictive run-time link
  semantics. The smoke and regression tests on earlier fedora and centos
  will confirm that it works on those platforms too.

  Change-Id: I9cfbd1cc15e7ebd9fc31b56ac791287fa2c584de
  BUG: 1550895
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
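Since this log tracks libgfrpc.sym itself, the visible half of the fix is the two new entries in the export list; the file is a plain list of exported symbol names, one per line:

```
glusterfs_mgmt_pmap_signout
glusterfs_autoscale_threads
```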
* rpcsvc: scale rpcsvc_request_handler threads (Milind Changire, 2018-02-26, 1 file, -0/+1)
  Scale rpcsvc_request_handler threads to match the scaling of event
  handler threads.

  Please refer to
  https://bugzilla.redhat.com/show_bug.cgi?id=1467614#c51 for a
  discussion about why we need multi-threaded rpcsvc request handlers.

  Change-Id: Ib6838fb8b928e15602a3d36fd66b7ba08999430b
  Signed-off-by: Milind Changire <mchangir@redhat.com>
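A minimal sketch of the scaling step, with hypothetical names (the real code ties the count to rpcsvc's own configuration): spawn one request-handler thread per event thread so neither pool bottlenecks the other:

```c
#include <pthread.h>

#define MAX_HANDLERS 64

static void *
request_handler_sketch(void *arg)
{
    /* drain this thread's request queue; see the per-thread-queue
     * sketch earlier in this log */
    (void)arg;
    return NULL;
}

/* Scale handler threads to the event-thread count (hypothetical API;
 * the caller provides a tids array of at least MAX_HANDLERS slots). */
static int
scale_request_handlers(pthread_t *tids, int event_thread_count)
{
    if (event_thread_count > MAX_HANDLERS)
        event_thread_count = MAX_HANDLERS;
    for (int i = 0; i < event_thread_count; i++) {
        if (pthread_create(&tids[i], NULL, request_handler_sketch, NULL))
            return -1;
    }
    return 0;
}
```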
* rpc: Adds rpcbind6 programs to libgfrpc symbols (Sheena Artrip, 2018-02-13, 1 file, -0/+2)
  Building with --with-default-ipv6 causes shared components of gluster
  that call the rpcbind6 functions to fail. Adding the symbols to the
  list is all that is necessary. Building without ipv6 keeps the same
  behavior.

  No test cases, as this is a build-specific fix.

  Change-Id: I248d3291bf17326b07d152d9b79cdcfaf9068f0d
  BUG: 1544961
  Signed-off-by: Sheena Artrip <sheenobu@fb.com>
* rpc: use export map to minimize exported symbols in libgf{rpc,xdr}.so (Kaleb S. KEITHLEY, 2018-01-12, 1 file, -0/+65)
  Without an export map (at link time), libgfrpc and libgfxdr export over
  150 and 450 symbols, respectively. Many are not used by anything else.
  (It is unclear what the unused symbols are; some may be simple
  sloppiness, e.g. not declaring functions static that should be. Others
  may be intra-library calls that can't be static but aren't part of the
  API, per se.)

  By linking with an export map, the number of exported symbols is
  reduced to ~60 and ~250, respectively. This parallels the similar
  change made to libglusterfs recently and the older changes to the
  xlators to minimize the symbols that are visible (exported) from the
  .so.

  And I don't know, do we want to go all the way to symbol versions? For
  these libs? And for libglusterfs?

  fixes gluster/glusterfs#392
  Change-Id: I9cdc3eee10e5f1408d7e7f2f29fad597c97e4003
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
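Mechanically, a .sym file is a plain one-symbol-per-line list that libtool consumes at link time. A sketch of how the pieces plausibly fit together; the excerpted symbols are real libgfrpc entry points, but the Makefile.am line is illustrative rather than copied from the tree:

```
# rpc/rpc-lib/src/libgfrpc.sym (excerpt): one exported symbol per line
rpc_clnt_new
rpc_clnt_submit
rpcsvc_program_register

# Makefile.am (sketch): restrict the .so to the listed symbols
libgfrpc_la_LDFLAGS += -export-symbols $(top_srcdir)/rpc/rpc-lib/src/libgfrpc.sym
```

Going "all the way to symbol versions" would instead mean a GNU ld version script (--version-script) with versioned nodes, whose `local: *;` clause likewise hides everything unlisted.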