path: root/xlators/protocol/server/src/server.c
Commit history for this file (most recent first); each entry shows the subject, author, date, files changed, and lines removed/added.

* rpcsvc: Add latency tracking for rpc programs (Pranith Kumar K, 2020-09-07; 1 file, -0/+2)

  Added latency tracking to the rpc-handling code. With this change we should be
  able to monitor the amount of time the rpc-handling code consumes for each rpc
  call.

  fixes: #1466
  Change-Id: I04fc7f3b12bfa5053c0fc36885f271cb78f581cd
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
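
  A minimal sketch of how per-call latency could be measured around an rpc
  handler; the function and field names below are illustrative, not the actual
  rpcsvc code from this patch.

      #include <stdint.h>
      #include <time.h>

      /* illustrative stand-in for an rpc handler */
      typedef int (*rpc_actor_fn)(void *req);

      /* accumulate total time and call count so an average latency can be derived */
      struct latency_stats {
          uint64_t total_ns;
          uint64_t count;
      };

      static int
      call_with_latency(rpc_actor_fn actor, void *req, struct latency_stats *stats)
      {
          struct timespec begin, end;
          int ret;

          clock_gettime(CLOCK_MONOTONIC, &begin);
          ret = actor(req);                       /* run the real rpc handler */
          clock_gettime(CLOCK_MONOTONIC, &end);

          stats->total_ns += (uint64_t)((end.tv_sec - begin.tv_sec) * 1000000000LL +
                                        (end.tv_nsec - begin.tv_nsec));
          stats->count++;
          return ret;
      }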

* protocol/server: Fix coverity issue RESOURCE_LEAK (Sheetal Pamecha, 2020-04-10; 1 file, -0/+1)

  Handle case of arg not freed.

  CID: 1422174
  Updates: #1060
  Change-Id: Ibd03908a3ea8369035c2b7f6e024b3e5be48f436
  Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>

* core[brick_mux]: brick crashed when creating and deleting volumes over time (Mohit Agrawal, 2020-03-27; 1 file, -3/+9)

  Problem: In a brick_mux environment, when volumes are created/stopped in a loop
  for a long time, the main brick crashes. It crashes because the main brick
  process did not clean up the memory of all objects at the time of detaching a
  volume. The objects missed while detaching a volume are:
    1) the xlator object for the brick graph
    2) the local_pool of the posix_lock xlator
    3) the rpc object of the quota xlator
    4) an inode leak in the brick xlator

  Solution: To avoid the crash, resolve all of these leaks at the time of
  detaching a brick.

  Change-Id: Ibb6e46c5fba22b9441a88cbaf6b3278823235913
  updates: #977
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>

* protocol/server: structure logging (yatip, 2020-03-03; 1 file, -109/+70)

  Convert all gf_msg() to gf_smsg().

  Updates: #657
  Change-Id: Ic54b03f05e2766c87f50df0b3a66803b5519fad9
  Signed-off-by: yatip <ypadia@redhat.com>

* server.c: fix Coverity issue 1405844 (memory - illegal access) (Yaniv Kaul, 2020-01-23; 1 file, -2/+2)

  We can use memcpy() instead of strncpy(), as both strings are 37 bytes
  (GF_UUID_BUF_SIZE) long.

  fixes: CID#1405844
  Change-Id: Ic74e8817cd790c13e29f3e6be8f18f2bfff77115
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
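
  A minimal illustration of the pattern described above (copying between two
  equal-sized, NUL-terminated UUID string buffers); the constant and function
  names are taken from the commit message or invented, not from the actual patch.

      #include <string.h>

      #define UUID_STR_SIZE 37   /* 36-char canonical UUID plus NUL, per the commit message */

      static void
      copy_uuid_str(char dst[UUID_STR_SIZE], const char src[UUID_STR_SIZE])
      {
          /* strncpy(dst, src, UUID_STR_SIZE) would also work, but triggers
           * Coverity warnings and scans for a NUL; since both buffers are
           * exactly UUID_STR_SIZE bytes and src is always NUL-terminated,
           * copying the whole buffer is equivalent and simpler. */
          memcpy(dst, src, UUID_STR_SIZE);
      }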

* protocol/handshake: pass volume-id for extra check (Amar Tumballi, 2019-09-30; 1 file, -0/+2)

  With the added check of volume-id during the handshake, we can be sure not to
  connect to a brick if it gets re-used in another volume. This prevents the
  accidental issues which can happen with a stale client process lurking around.

  Also added a test case for testing the same volume name fetching a different
  volfile (i.e., different bricks, different type), and a different volume name
  but the same brick.

  For reference: Currently a client<->server handshake happens in glusterfs
  through the protocol/client translator (setvolume) to protocol/server, using a
  dictionary which contains many keys. Rejection happens on the server side if
  some of the required keys are missing from the handshake dictionary. Until
  now, there was no single unique identifier for a client to validate that it is
  actually talking to the corresponding server. All we look at in protocol/client
  is a key called 'remote-subvolume', which should match a subvolume name in the
  server volume file; for any volume with the same brick name (which can exist
  in the same cluster due to re-creation), it would be the same. This could cause
  a major issue: a client that was connected to a given brick in one volume
  would end up connected to another volume's brick if that brick is
  re-created/re-used. To prevent this behavior, we now pass along 'volume-id' in
  the handshake, which is preserved for the life of the client process and
  prevents these accidental connections.

  NOTE: This behavior is not applicable for user-snapshot-enabled volumes, as
  snapshotted volumes have different volume-ids.

  Fixes: bz#1620580
  Change-Id: Ie98286e94ce95ae09c2135fd6ec7d7c2ca1e8095
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
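
  A minimal sketch of the kind of check described above, with the handshake
  parameters reduced to a plain struct; the real code works on the setvolume
  request dictionary, and all names below are illustrative.

      #include <string.h>

      /* illustrative handshake parameters extracted from the setvolume dict */
      struct handshake_req {
          const char *remote_subvolume;  /* brick/subvolume name requested by the client */
          const char *volume_id;         /* may be NULL for older clients */
      };

      /* returns 0 to accept the client, -1 to reject it */
      static int
      validate_handshake(const struct handshake_req *req,
                         const char *server_subvolume, const char *server_volume_id)
      {
          if (strcmp(req->remote_subvolume, server_subvolume) != 0)
              return -1;                         /* wrong brick requested */

          /* the brick name alone is not unique across re-created volumes, so
           * also compare the volume-id when the client sends one */
          if (req->volume_id && strcmp(req->volume_id, server_volume_id) != 0)
              return -1;                         /* stale client from another volume */

          return 0;
      }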

* posix: In brick_mux brick is crashed while start/stop volume in loop (Mohit Agrawal, 2019-08-20; 1 file, -1/+5)

  Problem: In a brick_mux environment the brick sometimes crashes while a volume
  is stopped/started in a loop. The brick crashes in the janitor task while
  accessing priv: if the posix priv is cleaned up before the janitor task runs,
  the janitor task crashes.

  Solution: To avoid the crash in the brick_mux environment, introduce a new
  flag janitor_task_stop in posix_private, and before sending the CHILD_DOWN
  event wait for the flag to be updated by janitor_task_done.

  Change-Id: Id9fa5d183a463b2b682774ab5cb9868357d139a4
  fixes: bz#1730409
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>

* libglusterfs: remove dependency of rpc (Amar Tumballi, 2019-08-16; 1 file, -1/+0)

  Goal: 'libglusterfs' files shouldn't have any dependency outside of the tree;
  especially the header files shouldn't '#include' anything from outside the
  tree.

  Fixes:
    * Introduced libglusterd, so methods and structures required only by
      mgmt/glusterd and cli/ are separated out of 'libglusterfs/'.
    * Removed rpc/xdr/gen from the build, which was used mainly so that the
      dependencies of libglusterfs could be satisfied.
    * Moved rpcsvc_auth_data to client_t.h, so all dependencies could be handled.

  Updates: bz#1636297
  Change-Id: I0e80243a5a3f4615e6fac6e1b947ad08a9363fce
  Signed-off-by: Amar Tumballi <amarts@redhat.com>

* event: rename event_XXX with gf_ prefixed (Xiubo Li, 2019-07-29; 1 file, -2/+2)

  I hit a crash when using libgfapi. libgfapi calls glfs_poller() -->
  event_dispatch() in api/src/glfs.c:721, and event_dispatch() is defined
  locally by libglusterfs; the problem is that its name is exactly the same as
  the one from the OS's libevent package.

  For example, for an executable program Foo which links both libevent and
  libgfapi at the same time, I can hit a crash like:

    kernel: glfs_glfspoll[68486]: segfault at 1c0 ip 00007fef006fd2b8 sp
    00007feeeaffce30 error 4 in libevent-2.0.so.5.1.9[7fef006ed000+46000]

  The link flags for Foo are:

    lib_foo_LDADD = -levent $(GFAPI_LIBS)

  It crashes because glfs_poller() ends up calling the event_dispatch() from
  libevent, not the one from libglusterfs. The gfapi link info is:

    GFAPI_LIBS = -lacl -lgfapi -lglusterfs -lgfrpc -lgfxdr -luuid

  If I link Foo like:

    lib_foo_LDADD = $(GFAPI_LIBS) -levent

  it works without any problem. And if Foo calls a private library, such as
  handler_glfs.so, which links GFAPI_LIBS directly while Foo doesn't and only
  dlopen()s handler_glfs.so, then the crash is hit every time. The link info
  then is:

    foo_LDADD = -levent
    libhandler_glfs_LIBADD = $(GFAPI_LIBS)

  I can avoid the crash temporarily by linking GFAPI_LIBS in Foo too:

    foo_LDADD = $(GFAPI_LIBS) -levent
    libhandler_glfs_LIBADD = $(GFAPI_LIBS)

  But this is ugly, since Foo doesn't use any APIs from GFAPI_LIBS, and in some
  cases when the --as-needed link option is added (on many distributions it is
  the default) the crash comes back, so the workaround above doesn't hold.

  Fixes: #699
  Change-Id: I38f0200b941bd1cff4bf3066fca2fc1f9a5263aa
  Signed-off-by: Xiubo Li <xiubli@redhat.com>

* server.c: fix Coverity CID 1399758 (Yaniv Kaul, 2019-03-21; 1 file, -1/+2)

  1399758: Dereference before null check. It was introduced in commit
  67f48bfcc16a38052e6c9ae7c25e69b03b8ae008.

  updates: bz#789278
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
  Change-Id: I1424b008b240691fe2a8924e31c708d0fb4f362d
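
  A generic illustration of the "dereference before null check" class of defect
  and its fix; the structure and field names are invented for the example and
  are not from server.c.

      struct client_ctx {
          int bound;
      };

      /* Defect pattern: ctx is dereferenced before the NULL check, so the
       * check cannot protect the dereference. */
      static int
      is_bound_buggy(struct client_ctx *ctx)
      {
          int bound = ctx->bound;      /* dereference first ...          */
          if (!ctx)                    /* ... then the too-late NULL check */
              return 0;
          return bound;
      }

      /* Fix: check first, dereference afterwards. */
      static int
      is_bound_fixed(struct client_ctx *ctx)
      {
          if (!ctx)
              return 0;
          return ctx->bound;
      }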

* glusterfsd: Brick is getting crash at the time of startup (Mohit Agrawal, 2019-03-13; 1 file, -5/+5)

  Problem: The brick crashes because the graph was not yet activated at the time
  of accessing server_conf.

  Solution: To avoid the crash, check ctx->active before processing a request.

  Change-Id: Ib112e0eace19189e45f430abdac5511c026bed47
  fixes: bz#1687705
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>

* server.c: use dict_() funcs with key length (Yaniv Kaul, 2019-02-18; 1 file, -15/+19)

  Changed to use the dict_() functions which take the key length. This also
  happens to reduce work under the lock in one case.

  Compile-tested only!

  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
  Change-Id: I958fcc29e95286fe3c74178cae3f01a8b2db26f2
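
  The point of the key-length variants is that callers using string-literal keys
  already know the length at compile time, so the dictionary code does not have
  to re-run strlen() on every call. A self-contained, generic illustration of
  that idea (not the gluster dict API itself; its exact signatures differ):

      #include <string.h>

      struct kv { const char *key; const char *value; };

      /* classic form: the key length is recomputed with strlen() on every call */
      static const char *
      dict_get(const struct kv *d, size_t n, const char *key)
      {
          size_t keylen = strlen(key);
          for (size_t i = 0; i < n; i++)
              if (strlen(d[i].key) == keylen && memcmp(d[i].key, key, keylen) == 0)
                  return d[i].value;
          return NULL;
      }

      /* key-length form: callers pass the length they already know at compile
       * time, typically via a macro like  #define SLEN(s) (sizeof(s) - 1)  */
      #define SLEN(s) (sizeof(s) - 1)

      static const char *
      dict_getn(const struct kv *d, size_t n, const char *key, size_t keylen)
      {
          for (size_t i = 0; i < n; i++)
              if (strlen(d[i].key) == keylen && memcmp(d[i].key, key, keylen) == 0)
                  return d[i].value;
          return NULL;
      }

      /* usage:  dict_getn(tbl, n, "remote-subvolume", SLEN("remote-subvolume")); */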

* core: heketi-cli is throwing error "target is busy" (Mohit Agrawal, 2019-01-24; 1 file, -0/+17)

  Problem: While deleting a block hosting volume through heketi-cli, it throws
  the error "target is busy". The cli throws the error because the brick is not
  detached successfully, and the brick is not detached due to a race condition
  in cleaning up the xprt associated with the detached brick.

  Solution: To avoid the xprt-specific race condition, introduce an atomic flag
  on rpc_transport.

  Change-Id: Id4ff1fe8375a63be71fb3343f455190a1b8bb6d4
  fixes: bz#1668190
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
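
  A minimal sketch of using an atomic flag so that only one of two racing paths
  performs the transport cleanup; the structure and field names are illustrative,
  not the actual rpc_transport code.

      #include <stdatomic.h>
      #include <stdbool.h>

      struct transport {
          atomic_bool cleanup_claimed;   /* set once cleanup has been claimed */
          /* ... connection state ... */
      };

      /* Called from the racing code paths; only the first caller that flips the
       * flag from false to true actually runs the cleanup. */
      static bool
      transport_claim_cleanup(struct transport *xprt)
      {
          bool expected = false;
          return atomic_compare_exchange_strong(&xprt->cleanup_claimed,
                                                &expected, true);
      }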

* libglusterfs: Move devel headers under glusterfs directory (ShyamsundarR, 2018-12-05; 1 file, -5/+5)

  The libglusterfs devel package headers are referenced in code using
  program-include semantics. While this works, it can be done better, especially
  when dealing with out-of-tree xlator builds or out-of-tree devel package usage
  in general.

  Towards this, the following changes are done:
    - moved all devel headers under a glusterfs directory
    - included these headers using system header notation <> in all code outside
      of libglusterfs
    - included these headers using own-program notation "" within libglusterfs

  This change, although big, just moves the headers around and makes their
  inclusion from other sources correct. It helps us include the libglusterfs
  headers correctly without namespace conflicts.

  Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
  Updates: bz#1193929
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
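
  As an example of what the new layout looks like for a consumer outside
  libglusterfs, such as server.c (the particular header chosen here is only
  illustrative):

      /* before: program-include semantics, relying on -I paths pointing
       * straight at the libglusterfs headers */
      /* #include "xlator.h" */

      /* after: code outside libglusterfs uses system-header notation with the
       * new glusterfs/ directory prefix */
      #include <glusterfs/xlator.h>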

* protocol/server: support server.all-squash (Xie Changlong, 2018-12-05; 1 file, -2/+11)

  We still use gnfs on our side, so do a little work to support
  server.all-squash. Just like server.root-squash, it is also a volume-wide
  option. Also see bz#1285126.

    $ gluster volume set <VOLNAME> server.all-squash on

  Note: If you enable server.root-squash and server.all-squash at the same time,
  only server.all-squash takes effect. Please refer to the following table:

    +----------------+-----------------+-----------------------------+
    |                | all_squash      | no_all_squash               |
    +----------------+-----------------+-----------------------------+
    | root_squash    | anonuid/anongid | anonuid/anongid for root    |
    |                |                 | useruid/usergid for no-root |
    +----------------+-----------------+-----------------------------+
    | no_root_squash | anonuid/anongid | useruid/usergid             |
    +----------------+-----------------+-----------------------------+

  Updates bz#1285126
  Signed-off-by: Xie Changlong <xiechanglong@cmss.chinamobile.com>
  Signed-off-by: Xue Chuanyu <xuechuanyu@cmss.chinamobile.com>
  Change-Id: Iea043318fe6e9a75fa92b396737985062a26b47e
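
  A minimal sketch of the uid/gid mapping that the table above describes; the
  function and field names are illustrative, not the server xlator's actual data
  structures.

      #include <stdbool.h>
      #include <sys/types.h>

      struct squash_conf {
          bool  all_squash;
          bool  root_squash;
          uid_t anonuid;
          gid_t anongid;
      };

      /* Map the client's credentials according to the squash settings:
       * all-squash maps everyone, root-squash maps only root (uid 0). */
      static void
      squash_creds(const struct squash_conf *conf, uid_t *uid, gid_t *gid)
      {
          if (conf->all_squash) {            /* wins even if root_squash is also set */
              *uid = conf->anonuid;
              *gid = conf->anongid;
          } else if (conf->root_squash && *uid == 0) {
              *uid = conf->anonuid;
              *gid = conf->anongid;
          }
          /* otherwise keep the user's own uid/gid */
      }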

* rpc: bump up server.event-threads (Milind Changire, 2018-12-04; 1 file, -1/+1)

  Problem: A single event-thread causes performance issues in the system.

  Solution: Bump up event-threads to 2 to make the system more performant. This
  helps in making the system more responsive and helps avoid the
  ping-timer-expiry problem as well. However, setting the event-threads to 2 is
  not the only thing required to avoid ping-timer-expiry issues.

  Change-Id: Idb0fd49e078db3bd5085dd083b0cdc77b59ddb00
  fixes: bz#1653277
  Signed-off-by: Milind Changire <mchangir@redhat.com>

* server: Resolve memory leak path in server_init (Mohit Agrawal, 2018-12-03; 1 file, -9/+45)

  Problem:
    1) server_init does not clean up allocated resources when it fails before
       returning an error
    2) dict leak at the time of destroying the graph

  Solution:
    1) free the resources in case server_init fails
    2) take a dict_ref on the graph xlator before destroying the graph to avoid
       the leak

  Change-Id: I9e31e156b9ed6bebe622745a8be0e470774e3d15
  fixes: bz#1654917
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
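
  A minimal sketch of the usual C pattern for releasing partially allocated
  resources on an init failure path; the init function and its members are
  hypothetical, not the actual server_init().

      #include <stdlib.h>

      struct conf {
          char *volname;
          int  *table;
      };

      /* On any failure, everything allocated so far is released before the
       * error is returned, so a failed init leaks nothing. */
      static struct conf *
      conf_init(void)
      {
          struct conf *conf = calloc(1, sizeof(*conf));
          if (!conf)
              goto err;

          conf->volname = malloc(64);
          if (!conf->volname)
              goto err;

          conf->table = calloc(1024, sizeof(int));
          if (!conf->table)
              goto err;

          return conf;

      err:
          if (conf) {
              free(conf->volname);
              free(conf->table);
              free(conf);
          }
          return NULL;
      }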

* rpcsvc: provide each request handler thread its own queue (Raghavendra Gowdappa, 2018-11-29; 1 file, -6/+3)

  A single global per-program queue is contended by all request handler threads
  and event threads. This can lead to high contention. So, reduce the contention
  by providing each request handler thread its own private queue.

  Thanks to "Manoj Pillai"<mpillai@redhat.com> for the idea of pairing a single
  queue with a fixed request-handler-thread and event-thread, which brought down
  the performance regression due to the overhead of queuing significantly.

  Thanks to "Xavi Hernandez"<xhernandez@redhat.com> for discussion on how to
  communicate the event-thread death to the request-handler-thread.

  Thanks to "Karan Sandha"<ksandha@redhat.com> for voluntarily running the perf
  benchmarks to qualify that the performance regression introduced by
  ping-timer-fixes is fixed with this patch, and for patiently running many
  iterations of regression tests while RCAing the issue.

  Thanks to "Milind Changire"<mchangir@redhat.com> for patiently running the
  many iterations of perf benchmarking tests while RCAing the regression caused
  by the ping-timer-expiry fixes.

  Change-Id: I578c3fc67713f4234bd3abbec5d3fbba19059ea5
  Fixes: bz#1644629
  Signed-off-by: Raghavendra Gowdappa <rgowdapp@redhat.com>

* core: create a constant for default network timeout (Xavi Hernandez, 2018-11-23; 1 file, -1/+1)

  A new constant named GF_NETWORK_TIMEOUT has been defined and all references to
  the hard-coded timeout of 42 seconds have been replaced with this constant.

  Change-Id: Id30f5ce4f1230f9288d9e300538624bcf1a6da27
  fixes: bz#1652852
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
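
  The shape of the change, sketched; the constant name and value come from the
  commit message, while the before/after lines are illustrative.

      /* defined once, in a common header */
      #define GF_NETWORK_TIMEOUT 42   /* seconds */

      /* before: a magic number scattered through the code   */
      /*     conf->ping_timeout = 42;                         */
      /* after: intent is explicit, and one place to change it */
      /*     conf->ping_timeout = GF_NETWORK_TIMEOUT;          */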

* all: fix the format string exceptions (Amar Tumballi, 2018-11-05; 1 file, -1/+1)

  Currently there are a few places where a user-controlled string (like a
  filename, a program parameter, etc.) can be passed as the 'fmt' argument of
  printf(), which can lead to a segfault if the user's string contains '%s' or
  '%d'. While fixing it, it makes sense to check explicitly for such issues
  across the codebase and make the format calls properly.

  Fixes: CVE-2018-14661
  Fixes: bz#1644763
  Change-Id: Ib547293f2d9eb618594cbff0df3b9c800e88bde4
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
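
  The class of bug and its fix, in a minimal generic example (not a line from
  the actual patch):

      #include <stdio.h>

      static void
      log_filename(const char *name_from_user)
      {
          /* Vulnerable: if name_from_user contains "%s" or "%n", printf reads
           * (or writes) through arguments that were never passed. */
          /* printf(name_from_user); */

          /* Fixed: the format string is a constant; user data is only an argument. */
          printf("%s\n", name_from_user);
      }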

* protocol: remove the option 'verify-volfile-checksum' (Amar Tumballi, 2018-11-05; 1 file, -1/+0)

  The 'getspec' operation has not been used between 'client' and 'server' ever
  since we off-loaded volfile management to glusterd, i.e., at least 7 years.
  There is no reason to keep the dead code!

  The removed option had no meaning, as glusterd didn't provide a way to set (or
  unset) it. So no regression should be observed in any existing glusterfs
  deployment, supported or unsupported.

  Updates: CVE-2018-14653
  Updates: bz#1644756
  Change-Id: I4a2e0f673c5bcd4644976a61dbd2d37003a428eb
  Signed-off-by: Amar Tumballi <amarts@redhat.com>

* all: fix warnings on non 64-bits architectures (Xavi Hernandez, 2018-10-10; 1 file, -2/+2)

  When compiling on other architectures, many warnings appear. Some of them are
  actual problems that prevent gluster from working correctly on those
  architectures.

  Change-Id: Icdc7107a2bc2da662903c51910beddb84bdf03c0
  fixes: bz#1632717
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>

* core: glusterfsd keeping fd open in index xlator (Mohit Agrawal, 2018-10-08; 1 file, -55/+123)

  Problem: The current resource cleanup sequence is not correct while brick mux
  is enabled.

  Solution:
    1) Destroy the xprt only after cleaning up all fds associated with a client
    2) Before calling fini for the brick xlators, ensure no stub is still
       running on the brick

  Change-Id: I86195785e428f57d3ef0da3e4061021fafacd435
  fixes: bz#1631357
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>

* Land part 2 of clang-format changes (Gluster Ant, 2018-09-12; 1 file, -1579/+1507)

  Change-Id: Ia84cc24c8924e6d22d02ac15f611c10e26db99b4
  Signed-off-by: Nigel Babu <nigelb@redhat.com>

* xlators: add classification flag to some (Amar Tumballi, 2018-09-04; 1 file, -0/+1)

  Add classification to those translators which have `xlator_api_t` already
  defined and used.

  Updates: #430
  Change-Id: I9d2772cb2c4ed4ab06aaa546500cf3b7d00bddac
  Signed-off-by: Amar Tumballi <amarts@redhat.com>

* clang-scan: fix multiple issues (Amar Tumballi, 2018-08-31; 1 file, -1/+1)

  * Buffer overflow issue in glusterfsd
  * Null argument passed to function expecting non-null (event-epoll)
  * Make sure the op_ret value is set in macro (posix)

  Updates: bz#1622665
  Change-Id: I32b378fc40a5e3ee800c0dfbc13335d44c9db9ac
  Signed-off-by: Amar Tumballi <amarts@redhat.com>

* coverity: multiple fixes (Amar Tumballi, 2018-08-31; 1 file, -3/+6)

  CID: 1390477, 1124827

  updates: bz#789278
  Change-Id: I41060d131aec6e58e7267ac8531b29a70f8c4359
  Signed-off-by: Amar Tumballi <amarts@redhat.com>

* All: remove memset() before sprintf() (Yaniv Kaul, 2018-08-14; 1 file, -5/+0)

  It's not needed. There's a good chance the compiler is smart enough to remove
  it anyway, but it can't hurt - I hope.

  Compile-tested only!

  Change-Id: Id7c054e146ba630227affa591007803f3046416b
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
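
  The pattern being removed, in a minimal generic example (buffer and format are
  illustrative, not from server.c):

      #include <stdio.h>

      static void
      format_port(char buf[32], int port)
      {
          /* redundant: sprintf() NUL-terminates what it writes, so zeroing the
           * whole buffer first gains nothing */
          /* memset(buf, 0, 32); */
          sprintf(buf, "port-%d", port);   /* (snprintf would be the safer choice) */
      }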

* build: rename event.h to gf-event.h (Niels de Vos, 2018-07-27; 1 file, -1/+1)

  Newer FreeBSD versions (noticed with 10.3-RELEASE) provide an event.h file
  that on occasion gets included instead of the libglusterfs one. When this
  happens, 'struct event_pool' will not be defined and the build fails with
  errors like:

    autoscale-threads.c:18:55: error: incomplete definition of type 'struct event_pool'
        int thread_count = pool->eventthreadcount;
                           ~~~~^
    autoscale-threads.c:17:16: note: forward declaration of 'struct event_pool'
        struct event_pool *pool = ctx->event_pool;
               ^

  This problem is caused by 'pkg-config --cflags uuid' adding /usr/local/include
  to GF_CPPFLAGS. The use of libuuid is preferred so that the contrib/uuid/
  directory can be removed.

  By renaming event.h to gf-event.h there is no conflict between the different
  event.h files anymore, and compiling on FreeBSD works without issues.

  Change-Id: Ie69f6b8a4f8f8e9630d39a86693eb74674f0f763
  Updates: bz#1607319
  Signed-off-by: Niels de Vos <ndevos@redhat.com>

* All: run codespell on the code and fix issues (Yaniv Kaul, 2018-07-22; 1 file, -2/+2)

  Please review, it's not always just the comments that were fixed. I've had to
  revert of course all calls to creat() that were changed to create() ...

  Only compile-tested!

  Change-Id: I7d02e82d9766e272a7fd9cc68e51901d69e5aab5
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>

* server: Set ssl-allow option in options table and rename ID (Prashanth Pai, 2018-07-11; 1 file, -1/+13)

  This change explicitly adds the 'ssl-allow' option to the server xlator's
  options table so that glusterd2 can see it as a settable option. It also marks
  the 'auth.allow' and 'auth.reject' options as settable.

  Glusterd2 doesn't maintain a separate volume options table. It dynamically
  loads the xlators' shared objects to read their option tables and other
  information. Glusterd2 reads 'xlator_api_t' if available; if that's not
  available, it falls back to reading just the options table directly.

  In glusterd2, volume set operations are performed by users on keys of the
  format <xlator>.<option-name>. Glusterd2 uses the xlator name set in
  'xlator_api_t.identifier'; if that's not present it uses the shared object's
  file name as the xlator name. Hence it is important for
  'xlator_api_t.identifier' to be set properly, and in this case the proper
  value is "server". This name shall be used by users as the prefix while
  setting volume options implemented in the server xlator. The name will also be
  used in the volfile.

  A user in glusterd2 can authorize a client over TLS as follows:

    $ glustercli volume set <volname> server.ssl-allow <client1-CN>[,<clientN-CN>]

  gd2 References:
    https://github.com/gluster/glusterd2/issues/971
    https://github.com/gluster/glusterd2/issues/214
    https://github.com/gluster/glusterd2/pull/967

  Updates: bz#1193929
  Change-Id: I59ef58acb8d51917e6365a83be03e79ae7c5ad17
  Signed-off-by: Prashanth Pai <ppai@redhat.com>

* glusterfs: Resolve brick crashes at the time of inode_unref (Mohit Agrawal, 2018-05-15; 1 file, -3/+3)

  Problem: Sometimes the brick process crashes while calling inode_unref in
  fd_destroy.

  Solution: The brick process crashes because the inode has already been freed
  by xlator_mem_cleanup, called by server_rpc_notify. To resolve this, move the
  code that calls transport_unref to the end of free_state.

  BUG: 1577574
  Change-Id: Ia517c230d68af4e929b6b753e4c374a26c39dc1a
  fixes: bz#1577574
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>

* Glusterfsd: brick crash during get-state (hari gowtham, 2018-05-11; 1 file, -2/+5)

  The xprt's dereferenced value wasn't checked before using it in the strcmp,
  which caused a segfault and crashed the brick process.

  Fix: Check every dereferenced variable before using it.

  Change-Id: I7f705d1c88a124e8219bb877156fadb17ecf11c3
  fixes: bz#1575864
  Signed-off-by: hari gowtham <hgowtham@redhat.com>

* protocol/server: don't assume there would be a volfile id (Amar Tumballi, 2018-05-08; 1 file, -1/+9)

  Earlier glusterfs never assumed someone would start it by hand with the right
  arguments; brick processes would be spawned by a management layer, and the
  process just assumed its role based on the volfile. Other than the volfile, no
  other arguments should technically be mandatory for glusterfs to work. With
  this patch, that assumption holds true.

  Updates: github issue #352

  A note on why this particular issue matters for this basic sanity: as per the
  design of thin-arbiter/tie-breaker, it can be started independently on any
  machine, without the need for glusterd. So, similar to 'glusterd', we should
  be able to spawn a process with any translator without options/volume id etc.

  fixes: bz#1569399
  Change-Id: I5c0650fe0bfde35ad94ccba60e63f6cdcd1ae5ff
  Signed-off-by: Amar Tumballi <amarts@redhat.com>

* server/auth: add option for strict authentication (Mohammed Rafi KC, 2018-04-20; 1 file, -0/+18)

  When this option is enabled, we check for a matching username and password; if
  no match is found, the connection is rejected. This also does a checksum
  validation of the volfile.

  The option is invalid when SSL/TLS is in use, at which point the SSL/TLS
  certificate user name is used to validate and hence authorize the right user.
  This expects TLS allow rules to be set up correctly rather than the default *.

  This option is not settable, and as a result cannot be enabled for volumes
  using the CLI. It is used with the shared storage volume, to restrict access
  to it in non-SSL/TLS environments to the gluster peers only.

  Tested:
    ./tests/bugs/protocol/bug-1321578.t
    ./tests/features/ssl-authz.t
    - Ran tests on volumes with and without strict auth checking (the brick vol
      file needed to be edited to test, or rather to enable the option)
    - Ran tests on volumes to ensure existing mounts are disconnected when we
      enable strict checking

  Change-Id: I2ac4f0cfa5b59cc789cc5a265358389b04556b59
  fixes: bz#1568844
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Signed-off-by: ShyamsundarR <srangana@redhat.com>

* server: fix unresolved symbols by moving them to libglusterfs (Mohit Agrawal, 2018-04-20; 1 file, -1/+1)

  Problem: The glusterd2 build fails due to undefined symbols
  (xlator_mem_cleanup, glusterfsd_ctx) in server.so.

  Solution: To resolve this, two changes are made:
    1) Move the xlator_mem_cleanup code from glusterfsd-mgmt.c to xlator.c so it
       becomes part of libglusterfs.so
    2) Replace glusterfsd_ctx with this->ctx, because the symbol glusterfsd_ctx
       is not available to server.so

  BUG: 1544090
  Change-Id: Ie5e6fba9ed458931d08eb0948d450aa962424ae5
  fixes: bz#1544090
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>

* gluster: Sometimes Brick process is crashed at the time of stopping brick (Mohit Agrawal, 2018-04-19; 1 file, -47/+123)

  Problem: Sometimes the brick process crashes at the time of stopping a brick
  while brick mux is enabled.

  Solution: The brick process was crashing because the rpc connection was not
  being cleaned up properly while brick mux is enabled. With this patch, after
  sending the GF_EVENT_CLEANUP notification to the xlator (server), we wait for
  all rpc client connections of that specific xlator to be destroyed. Once the
  rpc connections of all clients associated with that brick are destroyed in
  server_rpc_notify, xlator_mem_cleanup is called for the brick xlator as well
  as all child xlators. To avoid races at cleanup time, two new flags are
  introduced on each xlator: cleanup_starting and call_cleanup.

  BUG: 1544090
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>

  Note: Ran all test cases in a separate build (https://review.gluster.org/#/c/19700/)
  with the same patch after forcefully enabling brick mux; all test cases passed.

  Change-Id: Ic4ab9c128df282d146cf1135640281fcb31997bf
  updates: bz#1544090

* glusterd: volume inode/fd status broken with brick mux (hari gowtham, 2018-04-19; 1 file, -42/+50)

  Problem: The values for inode/fd were populated from the ctx received from the
  server xlator. Without brick mux, every brick from a volume belonged to a
  single brick process, so searching the server and populating from it worked.
  With brick mux, a number of bricks can be confined to a single process, and
  these bricks can be from different volumes too (if we use the
  max-bricks-per-process option). If they are from different volumes, using the
  server xlator to populate causes problems.

  Fix: Use the brick to validate and populate the inode/fd status.

  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Change-Id: I2543fa5397ea095f8338b518460037bba3dfdbfd
  fixes: bz#1566067

* rpcsvc: correct event-thread scaling (Milind Changire, 2018-03-12; 1 file, -3/+4)

  Problem: The auto thread count derived from the number of attaches and
  detaches was reset to 1 whenever server_reconfigure() was called.

  Solution: Avoid resetting the auto-thread-count to 1.

  Change-Id: Ic00e86adb81ba3c828e354a6ccb638209ae58b3e
  BUG: 1547888
  Signed-off-by: Milind Changire <mchangir@redhat.com>

* build: address linkage issues (Kaleb S. KEITHLEY, 2018-03-05; 1 file, -3/+3)

  We have the following undefined symbol errors from protocol/server.so:
    glusterfs_mgmt_pmap_signout
    glusterfs_autoscale_threads

  See https://review.gluster.org/19225 (bz#1532238) and
  https://review.gluster.org/19657 (bz#1550895). (Why are there two different
  bzs for the same bug?)

  IMO this is a cleaner solution, i.e. moving the above two functions to
  libgfrpc (.../rpc/rpc-lib/...). I would also, for (foolish) consistency's
  sake, like to see glusterfs_mgmt_pmap_signin() moved from glusterfsd to
  libgfrpc as well.

  This works on f28/rawhide, with its new, more restrictive run-time link
  semantics. The smoke and regression tests on earlier Fedora and CentOS will
  confirm that it works on those platforms too.

  Change-Id: I9cfbd1cc15e7ebd9fc31b56ac791287fa2c584de
  BUG: 1550895
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>

* glusterfsd: Memleak in glusterfsd process while brick mux is on (Mohit Agrawal, 2018-02-27; 1 file, -1/+0)

  Problem: When a volume is stopped while brick multiplexing is enabled, memory
  is not cleaned up from all server-side xlators.

  Solution: To clean up memory for all server-side xlators, call fini in
  glusterfs_handle_terminate after sending the GF_EVENT_CLEANUP notification to
  the top xlator.

  BUG: 1544090
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>

  Note: Ran all test cases in a separate build (https://review.gluster.org/19574)
  with the same patch after forcefully enabling brick mux; all test cases passed.

  Change-Id: Ia10dc7f2605aa50f2b90b3fe4eb380ba9299e2fc

* rpcsvc: scale rpcsvc_request_handler threads (Milind Changire, 2018-02-26; 1 file, -2/+8)

  Scale rpcsvc_request_handler threads to match the scaling of event handler
  threads.

  Please refer to https://bugzilla.redhat.com/show_bug.cgi?id=1467614#c51 for a
  discussion about why we need multi-threaded rpcsvc request handlers.

  Change-Id: Ib6838fb8b928e15602a3d36fd66b7ba08999430b
  Signed-off-by: Milind Changire <mchangir@redhat.com>

* Revert "glusterfsd: Memleak in glusterfsd process while brick mux is on" (Mohit Agrawal, 2018-02-19; 1 file, -0/+1)

  There still remain some code paths where cleanup is required while brick mux
  is on. I will upload a new patch after resolving all of those code paths.

  This reverts commit b313d97faa766443a7f8128b6e19f3d2f1b267dd.

  BUG: 1544090
  Change-Id: I26ef1d29061092bd9a409c8933d5488e968ed90e
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>

* glusterfsd: Memleak in glusterfsd process while brick mux is on (Mohit Agrawal, 2018-02-15; 1 file, -1/+0)

  Problem: When a volume is stopped while brick multiplexing is enabled, memory
  is not cleaned up from all server-side xlators.

  Solution: To clean up memory for all server-side xlators, call fini in
  glusterfs_handle_terminate after sending the GF_EVENT_CLEANUP notification to
  the top xlator.

  BUG: 1544090
  Change-Id: Ifa1525e25b697371276158705026b421b4f81140
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>

* protocol: Remove lock recovery logic from client and server (Anoop C S, 2018-01-29; 1 file, -182/+18)

  Change-Id: I27f5e1e34fe3eac96c7dd88e90753fb5d3d14550
  BUG: 1272030
  Signed-off-by: Anoop C S <anoopcs@redhat.com>

* protocol/server: Fix "auth-path" in options table (Kotresh HR, 2018-01-22; 1 file, -0/+5)

  Setting options will crash the brick with glusterd2. glusterd used to set
  "auth-path" during volfile generation; with glusterd2, it should come from the
  options table.

  Updates: #302
  Change-Id: Ie41a17779c185b87ace7e0bce0d0ba594b415a75
  Signed-off-by: Kotresh HR <khiremat@redhat.com>

* locks: added inodelk/entrylk contention upcall notifications (Xavier Hernandez, 2018-01-16; 1 file, -6/+38)

  The locks xlator is now able to send a contention notification to the current
  owner of a lock. This is only a notification, which can be used to improve the
  performance of some client-side operations that might benefit from an extended
  duration of lock ownership. Nothing is done if the lock owner decides to
  ignore the message and not release the lock. For forced release of acquired
  resources, leases must be used.

  Change-Id: I7f1ad32a0b4b445505b09908a050080ad848f8e0
  Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>

* protocol/server: add dump_metrics method (Amar Tumballi, 2017-12-18; 1 file, -9/+55)

  As the 'xlator_api' export is a requirement for adding dump_metrics, use it.

  Updates #168
  Change-Id: Iaba8bb9151ef35038b0ff48bb26e8399d67aa039
  Signed-off-by: Amar Tumballi <amarts@redhat.com>

* protocol/server: Update xlator options table (Kaushal M, 2017-11-17; 1 file, -13/+43)

  Updates #302
  Change-Id: Ifb604914a5d8b5c47ea2de0c026043b71a783387
  Signed-off-by: Kaushal M <kaushal@redhat.com>

* libglusterfs/options: Add new member 'setkey' to xlator options (Kaushal M, 2017-11-15; 1 file, -2/+5)

  The 'setkey' will be used as the key by GD2 when setting the option during
  volgen. 'setkey' also supports using varstrings. This is mainly meant for
  options which use a different key for 'volume set' than in volfiles, e.g. the
  'auth.*' options of protocol/server. The protocol/server xlator has been
  updated to make use of this for the auth.allow and auth.reject options.

  Updates #302
  Change-Id: I1fd2fd69625c9db48595bd3f494c221625255169
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
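
  A sketch of what an options-table entry using 'setkey' might look like. The
  stand-in struct, the field names, and both key strings (including the
  varstring syntax) are assumptions made for illustration; the real
  volume_option_t in libglusterfs has more fields and may name things
  differently.

      /* minimal stand-in type to illustrate the idea */
      struct opt {
          const char *key;     /* option name the xlator and users know          */
          const char *setkey;  /* key GD2 writes into the volfile during volgen  */
          const char *description;
      };

      /* hypothetical entry: the strings below are invented for illustration */
      static const struct opt server_opts[] = {
          { .key = "auth.allow",
            .setkey = "auth.addr.{{ brick.path }}.allow",
            .description = "List of addresses allowed to connect to the brick" },
          { 0 }
      };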