path: root/glusterfsd
* event: rename event_XXX with gf_ prefixed (Xiubo Li, 2019-08-21, 1 file, -3/+3)

  I hit a crash when using libgfapi. In libgfapi, glfs_poller() calls
  event_dispatch() (api/src/glfs.c:721), and event_dispatch() is defined
  locally by libglusterfs. The problem is that this name is exactly the
  same as the one exported by the OS libevent package. For example, with
  an executable Foo that uses and links both libevent and libgfapi, I can
  hit the crash:

  kernel: glfs_glfspoll[68486]: segfault at 1c0 ip 00007fef006fd2b8 sp 00007feeeaffce30 error 4 in libevent-2.0.so.5.1.9[7fef006ed000+46000]

  The link for Foo is:

  lib_foo_LADD = -levent $(GFAPI_LIBS)

  It crashes because glfs_poller() ends up calling event_dispatch() from
  libevent, not from libglusterfs. The gfapi link info is:

  GFAPI_LIBS = -lacl -lgfapi -lglusterfs -lgfrpc -lgfxdr -luuid

  If I link Foo like:

  lib_foo_LADD = $(GFAPI_LIBS) -levent

  it works without any problem. And if Foo calls a private library, such
  as handler_glfs.so, where handler_glfs.so links GFAPI_LIBS directly
  while Foo does not and instead dlopen()s handler_glfs.so, then the crash
  is hit every time. The link info is then:

  foo_LADD = -levent
  libhandler_glfs_LIBADD = $(GFAPI_LIBS)

  I can avoid the crash temporarily by linking GFAPI_LIBS in Foo too:

  foo_LADD = $(GFAPI_LIBS) -levent
  libhandler_glfs_LIBADD = $(GFAPI_LIBS)

  But this is ugly since Foo does not use any APIs from GFAPI_LIBS, and in
  some cases when the --as-needed link option is added (on many distros it
  is the default), the crash comes back and the above workaround no longer
  works.

  Backport of:
  > https://review.gluster.org/#/c/glusterfs/+/23110/
  > Change-Id: I38f0200b941bd1cff4bf3066fca2fc1f9a5263aa
  > Fixes: #699
  > Signed-off-by: Xiubo Li <xiubli@redhat.com>

  Change-Id: I38f0200b941bd1cff4bf3066fca2fc1f9a5263aa
  updates: bz#1740519
  Signed-off-by: Xiubo Li <xiubli@redhat.com>
  (cherry picked from commit 799edc73c3d4f694c365c6a7c27c9ab8eed5f260)
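  A minimal sketch of the collision being fixed (these declarations are illustrative,
  not copied from the tree): when two shared libraries export the same symbol name,
  the dynamic linker binds every caller to whichever copy it resolves first, so
  prefixing the libglusterfs symbols with gf_ keeps them out of libevent's namespace.

      /* Before: same name as libevent's API, so a binary linked with
       * "-levent -lgfapi ..." may bind this call to libevent and crash. */
      int event_dispatch(struct event_pool *event_pool);

      /* After: unambiguous, only satisfied by libglusterfs. */
      int gf_event_dispatch(struct event_pool *event_pool);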
* glusterd/svc: update pid of mux volumes from the shd process (Mohammed Rafi KC, 2019-07-24, 2 files, -9/+59)

  For a normal volume, we update the pid from the process while we
  daemonize, or at the end of init if it is no-daemon mode. Along with
  updating the pid we also lock the file, to make sure that the process is
  running fine.

  With brick mux, we were updating the pidfile from glusterd after an
  attach/detach request. There are two problems with this approach.

  1) We are not holding a pidlock for any file other than the parent
     process.
  2) There is a chance of race conditions with attach/detach. For example,
     shd start and a volume stop could race. Say we are starting an shd
     and it is attached to a volume; while we are trying to link the pid
     file to the running process, the file could already have been deleted
     by the thread doing the volume stop.

  Backport of: https://review.gluster.org/#/c/glusterfs/+/22935/
  >Change-Id: I29a00352102877ce09ea3f376ca52affceb5cf1a
  >Updates: bz#1722541
  >Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>

  Change-Id: I29a00352102877ce09ea3f376ca52affceb5cf1a
  Updates: bz#1732668
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* core: fedora 30 compiler warnings (SheetalPamecha, 2019-06-18, 1 file, -3/+1)

  warning: ‘%s’ directive argument is null [-Wformat-overflow=]

  Change-Id: I69b8d47f0002c58b00d1cc947fac6f1c64e0b295
  updates: bz#1193929
  Signed-off-by: SheetalPamecha <spamecha@redhat.com>
* multiple files: another attempt to remove includes (Yaniv Kaul, 2019-06-14, 1 file, -2/+0)

  There are many include statements that are not needed. A previous, more
  ambitious attempt failed because of the *BSD platforms (see
  https://review.gluster.org/#/c/glusterfs/+/21929/). Now trying a more
  conservative reduction.

  It does not solve all the circular dependencies we have, but it does
  reduce some of them. There is just too much to handle reasonably
  (dht-common.h includes dht-lock.h which includes dht-common.h ...), but
  it does reduce the overall number of include lines we need to look at in
  the future to understand and fix the mess later on.

  Change-Id: I550cd001bdefb8be0fe67632f783c0ef6bee3f9f
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* tests: add tests for different signal handling (Amar Tumballi, 2019-05-30, 1 file, -4/+2)

  Also some cleanup:
  * old-protocol.t was actually added to make sure we have line-coverage
  * first-test.t should have been removed as per the comment. It doesn't
    do anything.
  * add statvfs to rpc-coverage so we can cover statvfs in a few xlators.

  updates: bz#1693692
  Change-Id: Ie8651ce007de484c4abced16b4de765aa5e517be
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* Fix some "Null pointer dereference" coverity issuesXavi Hernandez2019-05-261-11/+13
| | | | | | | | | | | | | | | | | | | | | | This patch fixes the following CID's: * 1124829 * 1274075 * 1274083 * 1274128 * 1274135 * 1274141 * 1274143 * 1274197 * 1274205 * 1274210 * 1274211 * 1288801 * 1398629 Change-Id: Ia7c86cfab3245b20777ffa296e1a59748040f558 Updates: bz#789278 Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* core: avoid dynamic TLS allocation when possible (Xavi Hernandez, 2019-04-24, 1 file, -3/+1)

  Some interdependencies between logging and memory management functions
  make it impossible to use the logging framework before initializing the
  memory subsystem, because they both depend on Thread Local Storage
  allocated through pthread_key_create() during initialization. This
  causes a crash when we try to log something very early in the
  initialization phase.

  To prevent this, several dynamically allocated TLS structures have been
  replaced by static TLS reserved at compile time using the '__thread'
  keyword. This also reduces the number of error sources, making
  initialization simpler.

  Updates: bz#1193929
  Change-Id: I8ea2e072411e30790d50084b6b7e909c7bb01d50
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
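  A minimal, self-contained sketch of the two TLS styles (variable names are
  illustrative, not the ones used in the tree): dynamic TLS is unusable until
  pthread_key_create() has run, while __thread storage is reserved at compile/load
  time and can be touched from the very first instruction of the program.

      #include <pthread.h>
      #include <stdio.h>

      /* Dynamic TLS: needs explicit initialization before use. */
      static pthread_key_t log_key;

      /* Static TLS: one slot per thread, no runtime setup required. */
      static __thread int log_depth;

      int main(void)
      {
          log_depth++;                       /* safe even before any init code */
          pthread_key_create(&log_key, NULL);
          pthread_setspecific(log_key, &log_depth);
          printf("depth=%d slot=%p\n", log_depth, pthread_getspecific(log_key));
          return 0;
      }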
* core: Log level changes do not effect on running client process (Mohit Agrawal, 2019-04-15, 3 files, -18/+29)

  Problem: commit c34e4161f3cb6539ec83a9020f3d27eb4759a975 set the
  log-level per xlator during reconfigure only for a brick process, not
  for the client process.

  Solution: 1) Change the per-xlator log-level only if brick_mux is
  enabled. To know whether brick multiplexing is on, introduce a flag
  brick_mux at ctx->cmd_args.

  Note: There are two other changes done with this patch
  1) Ignore the client-log-level option when attaching a brick to an
     already running brick process if brick_mux is enabled
  2) Add a log to print the pid of the running process to make debugging
     easier

  Change-Id: I39e85de778e150d0685cd9a79425ce8b4783f9c9
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
  Fixes: bz#1696046
* mgmt/shd: Implement multiplexing in self heal daemon (Mohammed Rafi KC, 2019-04-01, 3 files, -23/+236)

  Problem: The shd daemon is per node, which means it creates a graph with
  all volumes on it. While this is great for utilizing resources, it is
  not so good in terms of performance and manageability, because self-heal
  daemons don't have the capability to automatically reconfigure their
  graphs. So each time any configuration change happens to the
  (replicate/disperse) volumes, we need to restart shd to bring the
  changes into the graph. Because of this, all ongoing heals for all other
  volumes have to be stopped in the middle and restarted all over again.

  Solution: This change makes shd a per-volume daemon, so that a graph
  will be generated for each volume. When we want to start/reconfigure shd
  for a volume, we first search for an existing shd running on the node;
  if there is none, we start a new process. If a daemon is already running
  for shd, then we simply detach the graph for the volume and reattach the
  updated graph for the volume. This won't touch any ongoing operations
  for any other volumes on the shd daemon.

  Example of an shd graph when it is a per-volume graph:

           -----------------------
           |    debug-iostat     |
           -----------------------
             /        |        \
            /         |         \
     ---------    ---------    ----------
     | AFR-1 |    | AFR-2 |    | AFR-3  |
     ---------    ---------    ----------

  A running shd daemon with 3 volumes will have a graph like:

           -----------------------
           |    debug-iostat     |
           -----------------------
             /        |        \
            /         |         \
    ------------  ------------  ------------
    | volume-1 |  | volume-2 |  | volume-3 |
    ------------  ------------  ------------

  Change-Id: Idcb2698be3eeb95beaac47125565c93370afbd99
  fixes: bz#1659708
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* rpc/transport: Missing a ref on dict while creating transport object (Mohammed Rafi KC, 2019-03-20, 1 file, -3/+15)

  While creating the rpc_transport object, we store a dictionary without
  taking a ref on the dict, but we do an unref during the cleanup of the
  transport object. So the rpc layer expects the caller to take a ref on
  the dictionary before passing the dict to the rpc layer. This leads to a
  lot of confusion across the code base and leads to ref leaks.

  Semantically, this is not correct. It is the rpc layer's responsibility
  to take a ref when storing the dict, and to free it during cleanup.

  I'm listing the issues or leaks across the code base caused by this
  confusion. These issues are currently present in the upstream master.

  1) changelog_rpc_client_init
  2) quota_enforcer_init
  3) rpcsvc_create_listeners: when there are two transports, like tcp,rdma
  4) quotad_aggregator_init
  5) glusterd: init
  6) nfs3_init_state
  7) server: init
  8) client: init

  This patch does the cleanup according to the semantics.

  Change-Id: I46373af9630373eb375ee6de0e6f2bbe2a677425
  updates: bz#1659708
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
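  A small sketch of the ownership rule the patch establishes (the transport struct
  and function names here are illustrative; only dict_ref()/dict_unref() are the real
  glusterfs dict API): whoever stores the dictionary takes its own reference and
  drops that same reference in its own cleanup, so callers no longer pre-ref on its
  behalf.

      #include <glusterfs/dict.h>

      struct my_transport {
          dict_t *options;
      };

      static void my_transport_store_options(struct my_transport *trans, dict_t *opts)
      {
          trans->options = dict_ref(opts);   /* our reference, taken when storing */
      }

      static void my_transport_cleanup(struct my_transport *trans)
      {
          if (trans->options)
              dict_unref(trans->options);    /* release only what we took */
          trans->options = NULL;
      }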
* glusterfsd: Multiple shd processes are spawned on brick_mux environment (Mohit Agrawal, 2019-03-12, 1 file, -6/+16)

  Problem: Multiple shd processes are spawned while starting volumes in a
  loop in a brick_mux environment. glusterd spawns a process based on a
  pidfile, and the shd daemon takes some time to update the pid in the
  pidfile, so glusterd is not able to get the shd pid.

  Solution: Commit cd249f4cb783f8d79e79468c455732669e835a4f changed the
  code to update the pidfile in the parent for any gluster daemon after
  getting the status of the forked child in the parent. To resolve this,
  correct the condition: update the pidfile in the parent only for
  glusterd; for the rest of the daemons the pidfile is updated in the
  child.

  Change-Id: Ifd14797fa949562594a285ec82d58384ad717e81
  fixes: bz#1684404
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* glusterfsd: Do not process PROFILE_NFS_INFO if graph is not ready (hujianfei, 2019-02-19, 1 file, -0/+5)

  Otherwise, gnfs will crash in the following situation. Also see commit
  2f9e555f.

  Reproducible steps:
  1. kill the gnfs process
  2. service glusterd restart; gluster volume profile [vol] info nfs

  dump trace info:
  /lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xc2)[0x7fcf5cb6a872]
  /lib64/libglusterfs.so.0(gf_print_trace+0x324)[0x7fcf5cb743a4]
  /lib64/libc.so.6(+0x35670)[0x7fcf5b1d5670]
  /usr/sbin/glusterfs(glusterfs_handle_nfs_profile+0x114)[0x7fcf5d066474]
  /lib64/libglusterfs.so.0(synctask_wrap+0x12)[0x7fcf5cba1502]
  /lib64/libc.so.6(+0x47110)[0x7fcf5b1e7110]

  Fixes: bz#1677559
  Change-Id: Id68edb3e4646c39544e0b4c90b5e0a9083b37b0d
  Signed-off-by: hujianfei <hujianfei@cmss.chinamobile.com>
* glusterd: adding a comment for code readability (Sanju Rakonde, 2019-02-19, 1 file, -0/+10)

  Adding a comment in the source code, so that anyone reading the code
  will understand the changes done by d4fa29 better.

  fixes: bz#1654270
  Change-Id: I75aff4243420c434c47d69a4b310f77bf161bb29
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* core: implement a global thread pool (Xavi Hernandez, 2019-02-18, 2 files, -1/+37)

  This patch implements a thread pool that is wait-free for adding jobs to
  the queue and uses a very small locked region to get jobs. This makes it
  possible to decrease contention drastically. It's based on the wfcqueue
  structure provided by the urcu library.

  It automatically enables more threads when load demands it, and stops
  them when not needed. There's a maximum number of threads that can be
  used. This value can be configured.

  Depending on the workload, the maximum number of threads plays an
  important role, so it needs to be configured for optimal performance.
  Currently the thread pool doesn't self-adjust the maximum for the
  workload, so this configuration needs to be changed manually. For this
  reason, the global thread pool has been made optional, so that volumes
  can still use the thread pool provided by io-threads.

  To enable it for bricks, the following option needs to be set:

    config.global-threading = on

  This option has no effect if bricks are already running. A restart is
  required to activate it. It's recommended to also enable the following
  option when running bricks with the global thread pool:

    performance.iot-pass-through = on

  To enable it for a FUSE mount point, the option '--global-threading'
  must be added to the mount command. To change it, an umount and remount
  is needed. It's recommended to disable the following option when using
  global threading on a mount point:

    performance.client-io-threads = off

  To enable it for services managed by glusterd, glusterd needs to be
  started with option '--global-threading'. In this case all daemons, like
  self-heal, will be using the global thread pool.

  Currently it can only be enabled for bricks, FUSE mounts and glusterd
  services.

  The maximum number of threads for clients and bricks can be configured
  using the following options:

    config.client-threads
    config.brick-threads

  These options can be applied online and their effect is immediate most
  of the time. If one of them is set to 0, the maximum number of threads
  will be calculated as #cores * 2.

  Some distributions use a very old userspace-rcu library (version 0.7).
  For this reason, some header files from version 0.10 have been copied
  into contrib/userspace-rcu and are used if the detected version is 0.7
  or older.

  An additional change has been made to io-threads to prevent threads from
  being started when iot-pass-through is set.

  Change-Id: I09d19e246b9e6d53c6247b29dfca6af6ee00a24b
  updates: #532
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* fuse: reflect the actual default for lru-limit option (Amar Tumballi, 2019-02-11, 1 file, -1/+1)

  in both `--help` text and man page

  updates: bz#1193929
  Change-Id: I9aa9367c6863ac8e2403255280697c9e6be26cf0
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* mount/fuse: expose auto-invalidation as a mount option (Raghavendra Gowdappa, 2019-02-02, 2 files, -0/+25)

  Auto-invalidation is necessary when the same (meta)data is shared or
  accessed across multiple mounts. However, if (meta)data is not shared,
  all relevant I/O goes through the cache of a single mount and hence is
  always coherent with the (meta)data on bricks. So, fuse
  auto-invalidation can be disabled for this case, which gives a huge
  performance boost for workloads that write data and then immediately
  read the data they just wrote.

  From glusterfs --help:

  <snip>
  --auto-invalidation[=BOOL]   controls whether fuse-kernel can
                               auto-invalidate attribute, dentry and
                               page-cache. Disable this only if same
                               files/directories are not accessed across
                               two different mounts concurrently
                               [default: "on"]
  </snip>

  Details on how disabling auto-invalidation helped to reduce pgbench init
  times can be found at [1]. Time taken for pgbench init of scale 8000 was
  8340s. That is an improvement of 86% (59280s vs 8340s) with
  auto-invalidation turned off along with other optimizations. Just
  disabling auto-invalidation contributed a 56% improvement by reducing
  the total time taken by 33260s.

  [1] https://www.spinics.net/lists/gluster-devel/msg25907.html

  Change-Id: I0ed730dba9064bd9c576ad1800170a21e100e1ce
  Signed-off-by: Raghavendra Gowdappa <rgowdapp@redhat.com>
  updates: bz#1664934
* Multiple files: reduce work while under lock. (Yaniv Kaul, 2019-01-29, 1 file, -7/+10)

  Mostly, unlock before logging. In some cases, code that did not need to
  be under the lock (for example, taking the time, or malloc'ing) was
  moved to execute before taking the lock.

  Note: logging might be slightly less accurate in order, since it may no
  longer be done under the lock, so the order of logs is racy. I think
  it's a reasonable compromise.

  Compile-tested only!

  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
  Change-Id: I2438710016afc9f4f62a176ef1a0d3ed793b4f89
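  A minimal sketch of the pattern (lock and variable names are illustrative): do only
  the shared-state work inside the critical section, capture what the log message
  needs, and log after unlocking, accepting that log ordering across threads becomes
  racy.

      #include <pthread.h>
      #include <stdio.h>

      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static int counter;

      static void bump_and_log(void)
      {
          int snapshot;

          pthread_mutex_lock(&lock);
          snapshot = ++counter;          /* only the shared update is locked */
          pthread_mutex_unlock(&lock);

          printf("counter is now %d\n", snapshot);   /* logged outside the lock */
      }

      int main(void)
      {
          bump_and_log();
          return 0;
      }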
* rpc: Fix double free (Poornima G, 2019-01-22, 1 file, -2/+0)

  The value rsp.xdata.xdata_val was being freed twice. It was assigned to
  dict->extra_stdfree, so dict_destroy would free it, and there was also
  an explicit free. This patch gets rid of the explicit free.

  Change-Id: Ia9c73454bec3970b33f154fa754398bf3b045645
  fixes: bz#1668268
  Signed-off-by: Poornima G <pgurusid@redhat.com>
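  A sketch of the ownership hand-off involved (the wrapper function and buffer names
  are illustrative; dict_new(), dict_unref() and the extra_stdfree field are the
  glusterfs dict API described above): once the buffer is hung on extra_stdfree,
  destroying the dict frees it, so an additional explicit free is a double free.

      #include <stdlib.h>
      #include <glusterfs/dict.h>

      /* buf: heap buffer handed over by the XDR layer (e.g. rsp.xdata.xdata_val). */
      static void consume_reply(char *buf)
      {
          dict_t *reply = dict_new();
          if (!reply) {
              free(buf);                /* dict never took ownership */
              return;
          }

          reply->extra_stdfree = buf;   /* dict takes ownership of buf */
          /* ... use the dict ... */
          dict_unref(reply);            /* frees buf along with the dict */
          /* free(buf);  <- the removed explicit free: would be a double free */
      }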
* rpc: use address-family option from vol file (Milind Changire, 2019-01-22, 1 file, -1/+4)

  This patch helps enable IPv6 connections in the cluster. The default
  address-family is IPv4 when this option is not set explicitly.

  When address-family is set to "inet6" in the /etc/glusterfs/glusterd.vol
  file, the mount command-line also needs to have
  -o xlator-option="transport.address-family=inet6" added to it. This
  option also gets added to the brick command-line. Snapshot and gfapi
  use-cases should also use this option to pass in the inet6
  address-family.

  Change-Id: I97db91021af27bacb6d7578e33ea4817f66d7270
  fixes: bz#1635863
  Signed-off-by: Milind Changire <mchangir@redhat.com>
* core: Resolve memory leak for brick (Mohit Agrawal, 2019-01-16, 1 file, -1/+7)

  Problem: Some functions are not freeing memory allocated by
  xdr_to_generic, so it becomes a leak.

  Solution: Call free to avoid the leak.

  Change-Id: I3524fe2831d1511d378a032f21467edae3850314
  fixes: bz#1656682
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* core: glusterd/add-brick-and-validate-replicated-volume-options.t is crash (Mohit Agrawal, 2019-01-14, 1 file, -6/+2)

  Problem: Sometimes a brick crashes while handling a pmap signin request.

  Solution: glusterfs_mgmt_pmap_signin uses the same frame to send the
  pmap signin requests; to avoid the crash, send each signin request on a
  separate frame.

  Change-Id: I443f854171ec4372e8d5f84bdc576c468e92c493
  fixes: bz#1665656
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* core: Resolve dict_leak at the time of destroying graph (Mohit Agrawal, 2019-01-14, 1 file, -2/+1)

  Problem: In some places the gluster code calls get_new_dict to create a
  dictionary without taking a reference, so at the time of dict_unref it
  becomes a leak.

  Solution: To resolve this, call dict_new instead of get_new_dict.

  updates bz#1650403
  Change-Id: I3ccbbf5af07079a4fa09aad2cd0458c8625b2f06
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
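  A small sketch of the recommended pattern (the key/value used are just examples):
  dict_new() hands back a dictionary with a reference already held for the caller,
  so pairing it with a single dict_unref() releases the dictionary cleanly, which is
  what the get_new_dict() call sites here did not guarantee.

      #include <glusterfs/dict.h>

      static void build_options(void)
      {
          dict_t *opts = dict_new();     /* reference held for us */
          if (!opts)
              return;

          dict_set_str(opts, "key", "value");
          /* ... pass opts to whoever consumes it ... */

          dict_unref(opts);              /* balances the dict_new() reference */
      }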
* glusterd: kill the process without releasing the cleanup mutex lock (Sanju Rakonde, 2019-01-02, 1 file, -4/+3)

  Problem: glusterd acquires a cleanup mutex lock before it starts the
  cleanup process, so that any other thread which tries to acquire a lock
  on any resource will be blocked on the cleanup mutex lock. We don't want
  any thread to try to acquire any resource once the cleanup has started,
  because other threads might try to acquire locks on resources which have
  already been freed by the thread going through the cleanup phase.

  Previously we were releasing the cleanup mutex lock before the process
  exit. Because the cleanup mutex lock is released before the process can
  exit, some other thread blocked on the cleanup mutex lock can acquire it
  and try to use resources which were already freed as part of cleanup.
  This leads glusterd to crash.

  Solution: We should exit the process without releasing the cleanup mutex
  lock.

  Change-Id: Ibae1c62260f141019017f7a547519a5d38dc2bb6
  fixes: bz#1654270
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
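  A minimal sketch of the idea (the mutex and handler names are illustrative): the
  cleanup path takes the mutex, tears everything down and exits while still holding
  it, so threads blocked on the mutex never wake up to touch freed resources.

      #include <pthread.h>
      #include <stdlib.h>

      static pthread_mutex_t cleanup_lock = PTHREAD_MUTEX_INITIALIZER;

      static void cleanup_and_exit_sketch(int status)
      {
          pthread_mutex_lock(&cleanup_lock);

          /* ... free shared resources, stop services ... */

          /* Exit without unlocking: any thread waiting on cleanup_lock
           * stays blocked instead of racing into freed state. */
          exit(status);
      }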
* posix: use synctask for janitor (Poornima G, 2018-12-19, 1 file, -5/+3)

  With brick mux, the number of threads increases as the number of bricks
  increases. As an initiative to reduce the number of threads in the brick
  mux scenario, the janitor thread is replaced with the synctask infra.
  close() and closedir() are now handled by a separate janitor thread
  which is linked with glusterfs_ctx.

  Updates #475
  Change-Id: I0c4aaf728125ab7264442fde59f3d08542785f73
  Signed-off-by: Poornima G <pgurusid@redhat.com>
* fuse: add --lru-limit option (Amar Tumballi, 2018-12-14, 2 files, -0/+25)

  The inode LRU mechanism is moot in the fuse xlator (i.e. there is no
  limit for the LRU list), as fuse inodes are referenced from kernel
  context, and thus they can only be dropped on request of the kernel.
  This might result in a high number of passive inodes which are useless
  for the glusterfs client, causing a significant memory overhead.

  This change tries to remedy this by extending the LRU semantics and
  allowing a finite limit to be set on the fuse inode LRU.

  A brief history of the problem: When gluster's inode table was designed,
  fuse didn't have any 'invalidate' method, which means a userspace
  application could never ask the kernel to send a 'forget()' fop, and
  instead had to wait for the kernel to send it based on the kernel's
  parameters. The inode table remembers the number of times the kernel has
  cached the inode based on the 'nlookup' parameter. The 'nlookup' field
  is not used by any other entry points (like server-protocol, gfapi
  etc.). Hence the inode table of the fuse module always had to have
  lru-limit as '0', which means no limit. GlusterFS always had to keep all
  inodes in memory as the kernel would have a reference to them. Again,
  the reason for this is that the kernel's glusterfs inode reference was a
  pointer to the 'inode_t' structure in glusterfs. As it is a pointer, we
  could never free it (to prevent segfaults or memory corruption).

  Solution: In the inode table, handle the prune case of inodes with
  'nlookup' differently, and call an 'invalidator' method, which in this
  case is fuse_invalidate(); it sends the request to the kernel for
  getting the forget request. When the kernel sends the forget, it means
  it has dropped all references to the inode, and it sends the forget with
  the 'nlookup' parameter too. We just need to make sure to reduce the
  'nlookup' value we hold when we get the forget. That automatically
  causes the relevant prune to happen.

  Credits: Csaba Henk, Xavier Hernandez, Raghavendra Gowdappa, Nithya B

  fixes: bz#1560969
  Change-Id: Ifee0737b23b12b1426c224ec5b8f591f487d83a2
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* glusterfsd: Fix coverity issue (Iraj Jamali, 2018-12-14, 1 file, -5/+0)

  Problem reported: value assigned to a variable is never used

  Fixes CID: 1274230

  updates: bz#789278
  Change-Id: I7afcb411876dea81c6820c5b31ae0a2896f9ca15
  Signed-off-by: Iraj Jamali <ijamali@redhat.com>
* libglusterfs: Move devel headers under glusterfs directory (ShyamsundarR, 2018-12-05, 5 files, -31/+31)

  libglusterfs devel package headers are referenced in code using the
  include semantics of a program; while this works, it can be done better,
  especially when dealing with out-of-tree xlator builds or out-of-tree
  devel package usage in general.

  Towards this, the following changes are done:
  - moved all devel headers under a glusterfs directory
  - included these headers using system header notation <> in all code
    outside of libglusterfs
  - included these headers using own program notation "" within
    libglusterfs

  This change, although big, is just moving around the headers and making
  it correct when including these headers from other sources. This helps
  us correctly include libglusterfs includes without namespace conflicts.

  Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
  Updates: bz#1193929
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
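  A small sketch of the convention described above (the header names are examples of
  headers shipped in the devel package): code outside libglusterfs, including
  out-of-tree xlators, picks the headers up from the installed glusterfs/ include
  directory, while libglusterfs itself keeps using plain program notation.

      /* In an out-of-tree xlator or any code outside libglusterfs: */
      #include <glusterfs/glusterfs.h>
      #include <glusterfs/xlator.h>

      /* Inside libglusterfs itself: */
      #include "glusterfs.h"
      #include "xlator.h"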
* server: Resolve memory leak path in server_init (Mohit Agrawal, 2018-12-03, 1 file, -0/+4)

  Problem:
  1) server_init does not clean up allocated resources when it fails
     before returning an error
  2) dict leak at the time of destroying the graph

  Solution:
  1) Free resources in case server_init fails
  2) Take a dict_ref of the graph xlator before destroying the graph to
     avoid the leak

  Change-Id: I9e31e156b9ed6bebe622745a8be0e470774e3d15
  fixes: bz#1654917
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* glusterd: perform store operation in cleanup lock (Atin Mukherjee, 2018-11-27, 1 file, -35/+38)

  All glusterd store operations and the cleanup thread should work under a
  critical section to avoid any partial store writes.

  Change-Id: I4f12e738f597a1f925c87ea2f42565dcf9ecdb9d
  Fixes: bz#1652430
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* coverity: Fix coverity issues (Mohammed Rafi KC, 2018-11-26, 1 file, -2/+5)

  This patch fixes the following coverity CIDs:

  CID: 1356537
  https://scan6.coverity.com/reports.htm#v42907/p10714/fileInstanceId=87389108&defectInstanceId=26791927&mergedDefectId=1356537
  CID: 1395666
  https://scan6.coverity.com/reports.htm#v42907/p10714/fileInstanceId=87389187&defectInstanceId=26791932&mergedDefectId=1395666
  CID: 1351707
  https://scan6.coverity.com/reports.htm#v42907/p10714/fileInstanceId=87389027&defectInstanceId=26791973&mergedDefectId=1351707
  CID: 1396910
  https://scan6.coverity.com/reports.htm#v42907/p10714/fileInstanceId=87389027&defectInstanceId=26791973&mergedDefectId=13596910

  Change-Id: I8094981a741f4d61b083c05a98df23dcf5b022a2
  updates: bz#789278
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* core: Resolve memory leak at the time of graph init (Mohit Agrawal, 2018-11-22, 1 file, -3/+34)

  Problem: In commit 751b14f2bfd40e08ad395ccd98c6eb0a41ac4e91 one code
  path was missed, leaving a leak at the time of calling graph init.

  Solution: Before destroying the graph, call the xlator fini to avoid the
  leak for server-side xlators that call init during graph init.

  Credit: Pranith Kumar Karampuri

  fixes: bz#1651431
  Change-Id: I6e7cff0d792ab9d954524b28667e94f2d9ec19a2
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* core: Resolve memory leak at the time of graph init (Mohit Agrawal, 2018-11-20, 1 file, -4/+7)

  Problem: Memory leaks when graph init fails during the volfile exchange
  between brick and glusterd.

  Solution: Fix the error code path in glusterfs_graph_init.

  Change-Id: If62bee61283fccb7fd60abc6ea217cfac12358fa
  fixes: bz#1651431
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* glusterfsd: NULL pointer dereferencing clang fix (Iraj Jamali, 2018-11-20, 1 file, -1/+1)

  Added a check to avoid clang warning

  Updates: bz#1622665
  Change-Id: If9ae4e4f2ae13c85dad0e87d8dd6930dde74bbda
  Signed-off-by: Iraj Jamali <ijamali@redhat.com>
* glusterfsd: Make io-stats xlator search position independent (Pranith Kumar K, 2018-11-15, 1 file, -8/+10)

  Problem: The glusterfsd notify trigger for the profile info command
  expects the decompounder xlator to have the name of the brick and its
  immediate child to be the io-stats xlator. In GD2 the decompounder
  xlator doesn't exist, so this prevents the io-stats xlator from
  receiving the profile info collection notification.

  Fix: Search for the io-stats xlator below the server xlator until the
  first instance is found and send the notification to it.

  fixes bz#1649709
  Change-Id: I92a1d9019bbd5546050ab43d50d571c444e027ed
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* glusterfsd: Make each multiplexed brick sign in (Prashanth Pai, 2018-11-12, 1 file, -4/+22)

  NOTE: This change will be consumed by the brick mux implementation of
  glusterd2 only. No corresponding change has been made in glusterd1.

  When a multiplexed brick process is shutting down, it sends sign-out
  requests to glusterd for all bricks that it contains. However, a sign-in
  request is only sent for a single brick. Consequently, glusterd has to
  use some tricky means to repopulate the pmap registry with information
  about multiplexed bricks during glusterd restart.

  This change makes each multiplexed brick send a sign-in request to
  glusterd2, which ensures that glusterd2 can easily repopulate the pmap
  registry with port information. As a bonus, the sign-in request will now
  also contain the PID of the brick sending the request, so that glusterd2
  can rely on this instead of having to read/manage brick pidfiles.

  Change-Id: I409501515bd9a28ee7a960faca080e97cabe5858
  updates: bz#1193929
  Signed-off-by: Prashanth Pai <ppai@redhat.com>
* glusterfsd: Do not process GLUSTERD_NODE_STATUS if graph is not ready (Hu Jianfei, 2018-11-07, 1 file, -0/+5)

  Otherwise, gnfs will crash if we try to get the nfs clients status in
  the following situation. Also see commit 2f9e555f.

  Reproducible steps:
  1. systemctl restart glusterd; gluster volume status rep
  2. systemctl restart glusterd; gluster volume status rep nfs clients

  Step 1 works fine, but step 2 will lead the localhost gnfs to crash with
  a certain probability.

  /lib64/libglusterfs.so.0(+0x270f0)[0x7effb6c7b0f0]
  /lib64/libglusterfs.so.0(gf_print_trace+0x334)[0x7effb6c854a4]
  /lib64/libc.so.6(+0x35270)[0x7effb52e7270]
  /usr/sbin/glusterfs(glusterfs_handle_node_status+0x155)[0x7effb7196905]
  /lib64/libglusterfs.so.0(+0x63f40)[0x7effb6cb7f40]
  /lib64/libc.so.6(+0x46d40)[0x7effb52f8d40]

  Updates: bz#1646869
  Change-Id: Ia4cb009f821d32b2d18ba48d3467cc81a4b07747
  Signed-off-by: Xie Changlong <xiechanglong@cmss.chinamobile.com>
  Signed-off-by: Hu Jianfei <hujianfei@cmss.chinamobile.com>
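  A sketch of the kind of guard this commit and commit 2f9e555f describe (the handler
  name and the exact field consulted are assumptions here, not the actual patch): the
  management handler refuses the request while no graph is active yet, instead of
  dereferencing it.

      #include <glusterfs/glusterfs.h>

      /* Hypothetical handler guard: bail out until a graph is active. */
      static int handle_node_status(glusterfs_ctx_t *ctx)
      {
          if (!ctx || !ctx->active) {
              /* graph not ready yet: return an error instead of crashing */
              return -1;
          }
          /* ... walk ctx->active and build the status reply ... */
          return 0;
      }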
* fuse: diagnostic FLUSH interrupt (Csaba Henk, 2018-11-06, 2 files, -0/+45)

  We add dummy interrupt handling for the FLUSH fuse message. It can be
  enabled by the "--fuse-flush-handle-interrupt" hidden command line
  option, or the "-ofuse-flush-handle-interrupt=yes" mount option.

  It serves no purpose other than diagnostics and demonstration -- to
  exercise the interrupt handling framework a bit and to give a usage
  example. Documentation is also provided that showcases interrupt
  handling via FLUSH.

  Change-Id: I522f1e798501d06b74ac3592a5f73c1ab0590c60
  updates: #465
  Signed-off-by: Csaba Henk <csaba@redhat.com>
* glusterfsd: fix the asan leak message (Amar Tumballi, 2018-10-16, 1 file, -0/+1)

  Fixes below trace of ASan:

  Direct leak of 130 byte(s) in 1 object(s) allocated from:
    #0 0x7fa794bb5850 in malloc (/lib64/libasan.so.4+0xde850)
    #1 0x7fa7944e5de9 in __gf_malloc ../../../libglusterfs/src/mem-pool.c:136
    #2 0x40b85c in gf_strndup ../../../libglusterfs/src/mem-pool.h:166
    #3 0x40b85c in gf_strdup ../../../libglusterfs/src/mem-pool.h:183
    #4 0x40b85c in parse_opts ../../../glusterfsd/src/glusterfsd.c:1049
    #5 0x7fa792a98720 in argp_parse (/lib64/libc.so.6+0x101720)
    #6 0x40d89f in parse_cmdline ../../../glusterfsd/src/glusterfsd.c:2041
    #7 0x406d07 in main ../../../glusterfsd/src/glusterfsd.c:2625

  updates: bz#1633930
  Change-Id: I394b3fc24b7a994c1b03635cb5e973e7290491d3
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* glusterfsd: Fix coverity issues around unused values (ShyamsundarR, 2018-10-16, 1 file, -0/+2)

  This patch fixes CID: 1274064, 1274232

  The fix is to not add the throughput and time values into the return
  dict when computing them fails. The pattern applied is the same as when
  an error occurs while adding either the throughput or the time value to
  the dict.

  Change-Id: I33e21e75efbc691f18b818934ef3bf70dd075097
  Updates: bz#789278
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
* all: fix warnings on non 64-bits architectures (Xavi Hernandez, 2018-10-10, 1 file, -1/+1)

  When compiling on other architectures, many warnings appear. Some of
  them are actual problems that prevent gluster from working correctly on
  those architectures.

  Change-Id: Icdc7107a2bc2da662903c51910beddb84bdf03c0
  fixes: bz#1632717
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* glusterfsd/mgmt : Check for NULL after creating frame (Ashish Pandey, 2018-09-21, 1 file, -0/+4)

  CID: 1382390
  https://scan6.coverity.com/reports.htm#v42607/p10714fileInstanceId=85472585&defectInstanceId=26074725&mergedDefectId=1382390

  Change-Id: Iade073a5e72f29ad5e8f372955869bc287eb9793
  updates: bz#789278
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
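  A minimal sketch of the kind of check such a fix adds (the wrapper function and log
  message are illustrative; create_frame() is the libglusterfs frame allocator):
  verify the frame before dereferencing it.

      #include <glusterfs/xlator.h>
      #include <glusterfs/stack.h>

      static int notify_with_frame(xlator_t *this)
      {
          call_frame_t *frame = create_frame(this, this->ctx->pool);
          if (!frame) {
              /* allocation failed: report and bail out instead of crashing */
              gf_log(this->name, GF_LOG_ERROR, "failed to create frame");
              return -1;
          }
          /* ... wind the call using frame ... */
          return 0;
      }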
* Land part 2 of clang-format changes (Gluster Ant, 2018-09-12, 3 files, -4706/+4658)

  Change-Id: Ia84cc24c8924e6d22d02ac15f611c10e26db99b4
  Signed-off-by: Nigel Babu <nigelb@redhat.com>
* Land clang-format changes (Gluster Ant, 2018-09-12, 3 files, -148/+125)

  Change-Id: I6f5d8140a06f3c1b2d196849299f8d483028d33b
* mgmt/glusterd : Fix coverity issue (Ashish Pandey, 2018-09-11, 1 file, -3/+6)

  CID: 727146, 727066
  https://scan6.coverity.com/reports.htm#v42607/p10714/fileInstanceId=85393035&defectInstanceId=26034751&mergedDefectId=727146
  https://scan6.coverity.com/reports.htm#v42607/p10714/fileInstanceId=85392913&defectInstanceId=26034571&mergedDefectId=727066

  updates: bz#789278
  Change-Id: Ieaef33829ec88e68690dabce4ea21d2e61dad9f6
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
* glusterfsd/src/glusterfsd.c: Move to GF_MALLOC() instead of GF_CALLOC() when possible (Yaniv Kaul, 2018-09-07, 1 file, -5/+5)

  It doesn't make sense to calloc (allocate and clear) memory when the
  code right away fills that memory with data. It may be optimized by the
  compiler, or have a microscopic performance improvement.

  In some cases, also changed allocation size to be sizeof some struct or
  type instead of a pointer - easier to read. In some cases, removed
  redundant strlen() calls by saving the result into a variable.

  1. Only done for the straightforward cases. There's room for
     improvement.
  2. Please review carefully, especially for string allocation, with the
     terminating NULL string.

  Only compile-tested!

  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
  Change-Id: Iaed86fcc909022c5158c3e08a9106b1110b9df0a
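  A sketch of the pattern (the helper function and the gf_common_mt_char accounting
  tag are used only for illustration; GF_MALLOC/GF_CALLOC are the glusterfs
  allocation macros): when every allocated byte is overwritten immediately, zeroing
  the buffer first with GF_CALLOC buys nothing.

      #include <string.h>
      #include <glusterfs/mem-pool.h>
      #include <glusterfs/mem-types.h>

      static char *dup_path(const char *path)
      {
          size_t len = strlen(path) + 1;     /* strlen() computed once, reused */

          /* Before: GF_CALLOC(1, len, gf_common_mt_char) zeroed bytes that
           * the memcpy below overwrites anyway. */
          char *copy = GF_MALLOC(len, gf_common_mt_char);
          if (!copy)
              return NULL;

          memcpy(copy, path, len);           /* fills every allocated byte */
          return copy;
      }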
* New flag to glusterfsd binary to print libexec dir (Aravinda VK, 2018-09-05, 3 files, -1/+14)

  New CLI option for the `glusterfsd` binary to get the path of the
  libexec directory. This helps glusterd2 to detect the installed path of
  `gsyncd` and other binaries.

  Usage: `glusterfsd --print-libexecdir`

  Updates: bz#1193929
  Change-Id: I8c1a74afd9acec7ee7bd3deabed9d9f20fe3fb5f
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
* clang-scan: fix multiple issues (Amar Tumballi, 2018-08-31, 1 file, -5/+12)

  * Buffer overflow issue in glusterfsd
  * Null argument passed to function expecting non-null (event-epoll)
  * Make sure the op_ret value is set in macro (posix)

  Updates: bz#1622665
  Change-Id: I32b378fc40a5e3ee800c0dfbc13335d44c9db9ac
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* coverity: multiple fixes (Amar Tumballi, 2018-08-31, 1 file, -1/+4)

  CID: 1390477, 1124827

  updates: bz#789278
  Change-Id: I41060d131aec6e58e7267ac8531b29a70f8c4359
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* build: add --enable-asan configure options (Niels de Vos, 2018-08-30, 1 file, -1/+1)

  Introduce a `./configure --enable-asan` option to build with the
  `-fsanitize=address -fno-omit-frame-pointer` options. This uses the
  libasan.so shared library, so that needs to be available.

  While running builds with the ASAN options, several linker issues
  surfaced and these have been addressed with this change as well.

  Building with --enable-asan has been tested on Fedora 28.

  Change-Id: I428a9da70dd8f7d0056cfbe5c398619a571469b2
  Updates: #492
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
* multiple files: move from strlen() to sizeof() (Yaniv Kaul, 2018-08-29, 2 files, -3/+3)

  Files touched:
  {glusterfsd|glusterfsd-mgmt|quota-common-utils|xlator|tier|stripe}.c
  tools/setgfid2path/src/main.c
  xlators/cluster/afr/src/afr-inode-read.c
  {glusterfs-acl|glusterfs}.h

  For const strings, just do the compile-time size calculation instead of
  a runtime one.

  Compile-tested only!

  Change-Id: I303684b1ff29b05c10126fb1057f507e404ced07
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
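  A minimal sketch of the substitution (the key name is a made-up example): for a
  string literal the size is known at compile time, and sizeof includes the
  terminating NUL, so subtracting one yields the same value strlen() would compute
  at runtime.

      #include <string.h>

      #define EXAMPLE_KEY "user.glusterfs.example"

      size_t key_len_before(void) { return strlen(EXAMPLE_KEY); }     /* runtime */
      size_t key_len_after(void)  { return sizeof(EXAMPLE_KEY) - 1; } /* compile time */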