path: root/xlators/mgmt/glusterd/src/glusterd-shd-svc.c
* syncop: improve scaling and implement more tools (Xavi Hernandez, 2020-05-13, 1 file, -4/+4)

  The current scaling of the syncop thread pool is not working properly and
  can leave tasks in the run queue longer than necessary when the maximum
  number of threads has not been reached. This patch provides a better
  scaling condition that reacts faster to pending work.

  Condition variables and sleep in the context of a synctask have also been
  implemented. Their purpose is to replace regular condition variables and
  sleeps that block synctask threads and prevent other tasks from being
  executed. The new features have been applied in several places in glusterd.

  Change-Id: Ic50b7c73c104f9e41f08101a357d30b95efccfbf
  Fixes: #1116
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
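  A minimal sketch of this kind of scaling condition in plain pthreads
  (illustrative names only, not the actual syncop implementation): a worker
  is spawned as soon as work is queued and no thread is idle, rather than
  waiting for a periodic scaling pass.

      #include <pthread.h>

      struct pool {
          pthread_mutex_t lock;
          int queued;      /* tasks waiting in the run queue */
          int idle;        /* workers blocked waiting for work */
          int threads;     /* workers currently alive */
          int max_threads; /* upper bound on pool size */
      };

      /* Stub; a real worker would loop, dequeueing and running tasks. */
      static void *worker(void *arg) { (void)arg; return NULL; }

      /* Called with pool->lock held whenever a task is enqueued. */
      static void pool_scale(struct pool *p)
      {
          pthread_t t;
          if (p->queued > 0 && p->idle == 0 && p->threads < p->max_threads &&
              pthread_create(&t, NULL, worker, p) == 0) {
              pthread_detach(t);
              p->threads++;
          }
      }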
* glusterd, rpc, glusterfsd: fix coverity defects and put required annotations (Atin Mukherjee, 2019-09-10, 1 file, -0/+2)

  1404965 - Null pointer dereference
  1404316 - Program hangs
  1401715 - Program hangs
  1401713 - Program hangs

  Updates: bz#789278
  Change-Id: I6e6575daafcb067bc910445f82a9d564f43b75a2
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: stop stale bricks during handshaking in brick mux mode (Atin Mukherjee, 2019-08-25, 1 file, -6/+5)

  This patch addresses two problems:

  1. During friend handshaking, if a volume is imported due to a change in
     the version, the old bricks were not stopped, which would lead to a
     situation where bricks run with old volfiles.

  2. While attaching the shd service in glusterd_attach_svc, the volume for
     which we are attempting the attach might have become stale and be in
     the process of deletion. Hence, on every retry (if the rpc connection
     isn't ready), check for the existence of the volume and only then
     attempt the attach request.

  Fixes: bz#1733425
  Change-Id: I6bac6b871f7e31cb5bf277db979289dec196a03e
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
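  A hypothetical sketch of the retry guard in point 2 (stand-in names, not
  the real glusterd API): the volume is looked up again before every attach
  attempt.

      typedef struct volinfo volinfo_t;                     /* stand-in for glusterd_volinfo_t */
      extern volinfo_t *volinfo_find(const char *volname);  /* stand-in lookup */
      extern int send_attach(volinfo_t *v);                 /* stand-in attach rpc */

      static int shd_attach_retry(const char *volname)
      {
          volinfo_t *v = volinfo_find(volname);
          if (!v)
              return -1; /* volume was deleted while we waited: stop retrying */
          return send_attach(v);
      }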
* glusterd: create separate logdirs for cluster.rc instances (N Balachandran, 2019-08-14, 1 file, -3/+3)

  Create a separate logdir for each host instance created by cluster.rc.
  This makes it easier to determine the files belonging to a particular
  instance.

  Change-Id: Ic8321f83f98995412b7d5f095b3d3f0391767a8b
  Fixes: bz#1733042
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* glusterd/shd: Return null proc if process is not running. (Mohammed Rafi KC, 2019-08-05, 1 file, -0/+3)

  We were returning the first proc entry even if it was not running, under
  the assumption that the process could have just started and not yet
  updated the pidfile. Now that states have been introduced for the
  process, we can take the decision based on the state.

  Change-Id: Ibfc11c966b0db599a8d6a08d8b975233b2bbfb8c
  Fixes: bz#1728766
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
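  A sketch of the state-based selection this describes, with hypothetical
  state and field names:

      #include <stddef.h>

      enum proc_state { PROC_INIT, PROC_STARTING, PROC_RUNNING, PROC_DEAD };

      struct svc_proc {
          enum proc_state state;
          /* ... connection, pidfile, attached-volume list ... */
      };

      /* Return only a proc entry that is actually running; previously the
       * first entry was handed out regardless of its state. */
      static struct svc_proc *get_running_proc(struct svc_proc *procs, size_t n)
      {
          for (size_t i = 0; i < n; i++)
              if (procs[i].state == PROC_RUNNING)
                  return &procs[i];
          return NULL; /* caller must spawn a fresh process */
      }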
* glusterd/svc: update pid of mux volumes from the shd process (Mohammed Rafi KC, 2019-07-09, 1 file, -1/+7)

  For a normal volume, the pid is updated from the process itself, either
  during daemonization or at the end of init in no-daemon mode. Along with
  updating the pid, we also lock the file to make sure that the process is
  running fine.

  With brick mux, we were updating the pidfile from glusterd after an
  attach/detach request. There are two problems with this approach:
  1) We are not holding a pid lock for any file other than the parent
     process's.
  2) There is a chance of race conditions with attach/detach. For example,
     an shd start and a volume stop could race: while we try to link the
     pid file to the running process, it could already have been deleted
     by the thread doing the volume stop.

  Change-Id: I29a00352102877ce09ea3f376ca52affceb5cf1a
  Updates: bz#1722541
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
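  A sketch of the "write and lock your own pidfile" pattern the patch
  moves shd to, assuming lockf(3) as the locking mechanism (names and
  error handling are illustrative):

      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      static int pidfile_update(const char *path)
      {
          char buf[32];
          int fd = open(path, O_RDWR | O_CREAT, 0644);
          if (fd < 0)
              return -1;
          /* F_TLOCK fails if another live process already holds the lock. */
          if (lockf(fd, F_TLOCK, 0) < 0 || ftruncate(fd, 0) < 0) {
              close(fd);
              return -1;
          }
          snprintf(buf, sizeof(buf), "%d\n", getpid());
          if (write(fd, buf, strlen(buf)) < 0) {
              close(fd);
              return -1;
          }
          return fd; /* keep fd open so the lock lives as long as the process */
      }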
* glusterd/shd: Change shd logfile to a unique name (Mohammed Rafi KC, 2019-06-24, 1 file, -6/+6)

  With the shd mux changes, shd was using a logfile named after the first
  started volume. This created a lot of confusion, as other volumes' data
  was also logged to a logfile carrying a different volume's name. With
  this change the logfile gets a unique name,
  "/var/log/glusterfs/glustershd.log", which is the same logfile name used
  before shd mux.

  Change-Id: I2b94c1f0b2cf3c9493505dddf873687755a46dda
  fixes: bz#1721601
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* shd/mux: Fix race between mux_proc unlink and stop (Mohammed Rafi KC, 2019-06-24, 1 file, -0/+3)

  There is a small race window where we can have an shd proc without a
  connection: when we stop the last shd running on a process, the list
  entry was removed outside of a lock, just after stopping the process. So
  there is a window where the process has been stopped but the shd proc
  list still contains the entry.

  Change-Id: Id82a82509e5cd72acac24e8b7b87197626525441
  fixes: bz#1722541
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
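  A sketch of the fix implied here (illustrative names): the stop and the
  list removal happen inside one critical section, so no window exists
  where the list names a dead process.

      #include <pthread.h>

      struct list_head { struct list_head *next, *prev; };

      static void list_del(struct list_head *e)
      {
          e->prev->next = e->next;
          e->next->prev = e->prev;
      }

      extern pthread_mutex_t proc_list_lock;
      extern int stop_process(void *proc); /* stand-in for the real stop call */

      static void shd_proc_stop(void *proc, struct list_head *entry)
      {
          pthread_mutex_lock(&proc_list_lock);
          list_del(entry);
          stop_process(proc);
          pthread_mutex_unlock(&proc_list_lock);
      }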
* glusterd/shd: Optimize the glustershd manager to send reconfigure (Mohammed Rafi KC, 2019-05-31, 1 file, -4/+5)

  Traditionally, every svc manager executes a process stop followed by a
  start each time it is called. That is not required for shd, because the
  attach request implemented in the shd multiplex is smart enough to check
  whether a detach is required before attaching the graph. So there is no
  need to send an explicit detach request if we are sure that the next
  call is an attach request.

  Change-Id: I9157c8dcaffdac038f73286bcf5646a3f1d3d8ec
  fixes: bz#1710054
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
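  The manager flow this implies, as a sketch with stand-in names:

      extern int shd_attach(void *volinfo); /* detaches a stale graph itself */
      extern int shd_stop(void *volinfo);

      static int shd_manager(void *volinfo, int vol_started)
      {
          if (vol_started)
              return shd_attach(volinfo); /* no explicit stop/detach first */
          return shd_stop(volinfo);
      }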
* glusterd/svc: glusterd_svcs_stop should call individual wrapper function (Mohammed Rafi KC, 2019-05-31, 1 file, -2/+10)

  glusterd_svcs_stop should call each daemon's individual wrapper function
  to stop it, rather than calling glusterd_svc_stop directly. For shd, for
  example, it should call glusterd_shdsvc_stop instead of the basic API
  function, because the individual wrapper functions for each daemon may
  perform daemon-specific operations.

  Change-Id: Ie6d40590251ad470ef3901d1141ab7b22c3498f5
  fixes: bz#1712741
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
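  A sketch of the dispatch pattern (illustrative struct layout): each svc
  carries its own stop wrapper, and the stop-all path calls through it.

      typedef struct svc svc_t;

      struct svc {
          const char *name;
          int (*stop)(svc_t *svc, int sig); /* e.g. glusterd_shdsvc_stop */
      };

      static int svcs_stop(svc_t **svcs, int n, int sig)
      {
          int ret = 0;
          for (int i = 0; i < n; i++)
              if (svcs[i]->stop)
                  ret |= svcs[i]->stop(svcs[i], sig);
          return ret;
      }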
* shd/glusterd: Serialize shd manager to prevent race condition (Mohammed Rafi KC, 2019-05-10, 1 file, -0/+14)

  At the time of a glusterd restart, while doing a handshake, there is a
  possibility that multiple shd managers get executed. Because of this,
  multiple shd processes may be spawned during a glusterd restart.

  Change-Id: Ie20798441e07d7d7a93b7d38dfb924cea178a920
  fixes: bz#1707081
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
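  The serialization itself can be as simple as a single mutex around the
  manager body; a sketch with illustrative names:

      #include <pthread.h>

      static pthread_mutex_t shd_manager_lock = PTHREAD_MUTEX_INITIALIZER;

      extern int shd_manage(void *volinfo); /* stand-in for the manager body */

      static int shd_manager_serialized(void *volinfo)
      {
          int ret;
          pthread_mutex_lock(&shd_manager_lock); /* one manager at a time */
          ret = shd_manage(volinfo);
          pthread_mutex_unlock(&shd_manager_lock);
          return ret;
      }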
* glusterd: coverity fixes (Atin Mukherjee, 2019-04-26, 1 file, -1/+1)

  1400775 - USE_AFTER_FREE
  1400742 - Missing Unlock
  1400736 - CHECKED_RETURN
  1398470 - Missing Unlock

  The missing unlock is the tricky one: an annotation had been added, but
  coverity still complained. Added pthread_mutex_unlock to clean up the
  lock before destroying it, to see if that makes coverity happy.

  Updates: bz#789278
  Change-Id: I1d892612a17f805144d96c1b15004a85a1639414
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
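  The unlock-before-destroy pattern mentioned above, as a small sketch:

      #include <pthread.h>

      static void lock_cleanup(pthread_mutex_t *lock)
      {
          /* Destroying a locked mutex is undefined behaviour, so make the
           * unlocked state explicit before tearing the lock down. */
          pthread_mutex_unlock(lock);
          pthread_mutex_destroy(lock);
      }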
* glusterd: coverity fixes (Atin Mukherjee, 2019-04-25, 1 file, -2/+4)

  Addresses the following:

  * CID 1124776: Resource leaks (RESOURCE_LEAK) - Variable "aa" going out
    of scope leaks the storage it points to in glusterd-volgen.c
  * Bunch of CHECKED_RETURN defects in the callers of synctask_barrier_init
  * CID 1400755: Error handling issues (CHECKED_RETURN) - Calling
    "gf_is_service_running" without checking return value in
    xlators/mgmt/glusterd/src/glusterd-shd-svc.c:671 in glusterd_shdsvc_stop()
  * CID 1400745: Memory - illegal accesses (USE_AFTER_FREE) - Dereferencing
    freed pointer "volinfo" in
    xlators/mgmt/glusterd/src/glusterd-shd-svc.c:460 in glusterd_shdsvc_start()
  * CID 1400742: Program hangs (LOCK) - adding annotation to fix this
    false positive

  Updates: bz#789278
  Change-Id: I02f16e7eeb8c5cf72f7d0b29d00df4f03b3718b3
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd/shd: Keep a ref on volinfo until attach rpc executes cbk (Mohammed Rafi KC, 2019-04-24, 1 file, -0/+3)

  When svc attach executes while multiplexing a daemon, we have to keep a
  ref on volinfo until it finishes execution, because if the attach is an
  async call, a parallel volume delete could free the volinfo.

  Change-Id: Ibc02b89557baaed2f63db63d7fb1a7480444ae0d
  fixes: bz#1702185
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
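  The ref/unref discipline this describes, sketched with stand-in names:
  take a reference before submitting the async request and drop it in the
  callback (and on submission failure).

      typedef struct volinfo { int refcnt; /* ... */ } volinfo_t;

      extern volinfo_t *volinfo_ref(volinfo_t *v);
      extern void volinfo_unref(volinfo_t *v);
      extern int rpc_submit(void *req, void (*cbk)(void *frame), void *frame);

      static void attach_cbk(void *frame)
      {
          volinfo_unref((volinfo_t *)frame); /* ref taken in attach() below */
      }

      static int attach(volinfo_t *v, void *req)
      {
          volinfo_ref(v); /* keep v alive until attach_cbk runs */
          if (rpc_submit(req, attach_cbk, v) != 0) {
              volinfo_unref(v); /* submission failed: drop the ref now */
              return -1;
          }
          return 0;
      }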
* shd/mux: Fix coverity issues introduced by shd mux patch (Mohammed Rafi KC, 2019-04-15, 1 file, -0/+6)

  CID 1400475: Null pointer dereferences (FORWARD_NULL)
  CID 1400474: Null pointer dereferences (FORWARD_NULL)
  CID 1400471: Code maintainability issues (UNUSED_VALUE)
  CID 1400470: Null pointer dereferences (FORWARD_NULL)
  CID 1400469: Memory - illegal accesses (USE_AFTER_FREE)
  CID 1400467: Code maintainability issues (UNUSED_VALUE)

  Change-Id: I0ca1c733be335c6e5844f44850f8066626ac40d4
  updates: bz#789278
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* mgmt/shd: Implement multiplexing in self heal daemon (Mohammed Rafi KC, 2019-04-01, 1 file, -54/+486)

  Problem:
  The shd daemon is per node, which means it creates a graph with all
  volumes on it. While this is great for utilizing resources, it is not so
  good in terms of performance and manageability, because self-heal
  daemons don't have the capability to automatically reconfigure their
  graphs. So each time any configuration change happens to a volume
  (replicate/disperse), we need to restart shd to bring the change into
  the graph. Because of this, all ongoing heals for all other volumes have
  to be stopped in the middle and restarted all over again.

  Solution:
  This change makes shd a per-volume daemon, so that a graph is generated
  for each volume. When we want to start/reconfigure shd for a volume, we
  first search for an existing shd running on the node; if there is none,
  we start a new process. If a daemon is already running for shd, we
  simply detach the volume's graph and reattach the updated graph for the
  volume. This doesn't touch any ongoing operations for any other volumes
  on the shd daemon.

  Example of an shd graph when it is a per-volume graph:

           -----------------------
           |    debug-iostat     |
           -----------------------
             /        |        \
            /         |         \
      ---------   ---------   ----------
      | AFR-1 |   | AFR-2 |   | AFR-3  |
      ---------   ---------   ----------

  A running shd daemon with 3 multiplexed volumes will have a graph like:

           -----------------------
           |    debug-iostat     |
           -----------------------
             /         |         \
            /          |          \
     ------------ ------------ ------------
     | volume-1 | | volume-2 | | volume-3 |
     ------------ ------------ ------------

  Change-Id: Idcb2698be3eeb95beaac47125565c93370afbd99
  fixes: bz#1659708
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
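  The start path this solution describes, as a sketch with stand-in names:

      extern void *find_running_shd(void);               /* stand-in lookup */
      extern int spawn_shd(void *volinfo);               /* start a fresh daemon */
      extern int attach_graph(void *shd, void *volinfo); /* detach stale graph, attach new */

      static int shd_start(void *volinfo)
      {
          void *shd = find_running_shd();
          if (!shd)
              return spawn_shd(volinfo);     /* first volume: new process */
          return attach_graph(shd, volinfo); /* later volumes: mux into it */
      }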
* afr: add client-pid to all gf_event() calls (Ravishankar N, 2019-03-27, 1 file, -0/+10)

  The client-pid for glustershd is GF_CLIENT_PID_SELF_HEALD; the
  client-pid for glfsheal is GF_CLIENT_PID_GLFS_HEALD.

  updates: bz#1689250
  Change-Id: Ib3a863af160ff48c822a5e6b0c27c575c9887470
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* libglusterfs: Move devel headers under glusterfs directory (ShyamsundarR, 2018-12-05, 1 file, -2/+2)

  libglusterfs devel package headers are referenced in code using program
  include semantics; while this works, it can be done better, especially
  when dealing with out-of-tree xlator builds or out-of-tree devel package
  usage in general.

  Towards this, the following changes are made:
  - moved all devel headers under a glusterfs directory
  - included these headers using system header notation <> in all code
    outside of libglusterfs
  - included these headers using own-program notation "" within
    libglusterfs

  This change, although big, just moves the headers around and corrects
  how they are included from other sources. This lets us include
  libglusterfs headers correctly, without namespace conflicts.

  Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
  Updates: bz#1193929
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
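  Concretely, after this change the same header is included two ways
  (header name is illustrative):

      #include <glusterfs/glusterfs.h> /* from xlators and other external code */
      #include "glusterfs.h"           /* from within libglusterfs itself */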
* mgmt/glusterd: NULL pointer dereferencing clang fix (Shwetha Acharya, 2018-10-12, 1 file, -1/+1)

  Problem: dereferencing of this->name; volinfo and xl can be null.

  Solution: Replaced this->name with appropriate names in a few places,
  added a null check to avoid dereferencing volinfo, and introduced a goto
  out statement so that a null pointer value is not passed to
  volgen_xlator_link when xl becomes NULL.

  Updates: bz#1622665
  Change-Id: I77616bd23f58328cb6dbe681914a028991d49abb
  Signed-off-by: Shwetha Acharya <sacharya@redhat.com>
* mgmt/glusterd: NULL pointer dereferencing clang fix (Iraj Jamali, 2018-10-02, 1 file, -1/+1)

  Changed this->name to "glusterd".

  Updates: bz#1622665
  Change-Id: Ic8ce428cefd6a5cecf5547769d8b13f530065c56
  Signed-off-by: Iraj Jamali <ijamali@redhat.com>
* Land part 2 of clang-format changes (Gluster Ant, 2018-09-12, 1 file, -193/+186)

  Change-Id: Ia84cc24c8924e6d22d02ac15f611c10e26db99b4
  Signed-off-by: Nigel Babu <nigelb@redhat.com>
* Infra to identify process (hari gowtham, 2017-08-16, 1 file, -0/+9)

  Problem: currently we can't identify which process is running and how
  many instances of it are available.

  Fix: name the process when it is spawned, send the name to the server,
  and save it in the client_t.

  The processes that abide by this change in this patch are:
  1) fuse mount
  2) rebalance
  3) selfheal
  4) tier
  5) quota
  6) snapshot
  7) brick
  8) gfapi (by default; gfapi.<processname> if a processname is found)

  Note: fuse gets the process name native-fuse-client by default. If the
  user gives a name for the fuse mount and spawns it, it will be of the
  form --process-name native-fuse-client.<name_specified>. This can be
  used by processes like the aux mount done by quota, geo-rep, etc. by
  adding another option to the aux mount: "-o process-name=gsync_mount".

  Updates: #178
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Change-Id: Ie4d02257216839338043737691753bab9a974d5e
  Reviewed-on: https://review.gluster.org/17957
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: hari gowtham <hari.gowtham005@gmail.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
* glusterd: add async events (part 2) (Atin Mukherjee, 2016-08-23, 1 file, -0/+2)

  Change-Id: I7a5687143713c283f0051aac2383f780e3e43646
  BUG: 1360809
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/15153
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
* glusterd: Stop/restart/notify to daemons(svcs) during reset/set on a volume (anand, 2015-08-06, 1 file, -1/+65)

  Problem: reset/set commands were not working properly. The reset command
  returned success but did not send a notification to svcs when the
  corresponding graph was modified.

  Fix: Whenever a reset/set command is issued, generate the temp graph,
  compare it with the original graph, and take the following actions:
  1) If both graphs are identical, nothing needs to be done with the svcs.
  2) If there are changes in graph topology, restart/stop the service by
     calling the svc manager.
  3) If there are changes in options, send a notify signal by calling
     glusterd_fetchspec_notify.

  Change-Id: I852c4602eafed1ae6e6a02424814fe3a83e3d4c7
  BUG: 1209329
  Signed-off-by: anand <anekkunt@redhat.com>
  Reviewed-on: http://review.gluster.org/10850
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
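  The decision tree in the fix, sketched with illustrative names:

      enum graph_diff { GRAPH_SAME, GRAPH_TOPOLOGY_CHANGED, GRAPH_OPTIONS_CHANGED };

      extern enum graph_diff compare_graphs(void *old_graph, void *new_graph);
      extern int svc_restart(void *svc);      /* stop/start via the svc manager */
      extern int fetchspec_notify(void *svc); /* stand-in for glusterd_fetchspec_notify */

      static int svc_reconfigure(void *svc, void *oldg, void *newg)
      {
          switch (compare_graphs(oldg, newg)) {
          case GRAPH_SAME:
              return 0;                     /* nothing to do */
          case GRAPH_TOPOLOGY_CHANGED:
              return svc_restart(svc);
          case GRAPH_OPTIONS_CHANGED:
              return fetchspec_notify(svc); /* daemon re-fetches its volfile */
          }
          return -1;
      }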
* glusterd: initialize the daemon services on demand (Atin Mukherjee, 2015-07-27, 1 file, -4/+22)

  As of now, all the daemon services are initialized in the glusterd init
  path. Since the socket file path of a per-node daemon requires the uuid
  of the node, the MY_UUID macro is invoked as part of initialization.

  This flow breaks use cases where a gluster image is built from a
  template, be it a Dockerfile, Vagrantfile, or any kind of virtualization
  environment: instances brought up from such an image would all have the
  same UUID for the node, resulting in peer probe failures.

  The solution is to lazily initialize the services on demand.

  Change-Id: If7caa533026c83e98c7c7678bded67085d0bbc1e
  BUG: 1238135
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/11488
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
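  A sketch of the lazy-init guard (illustrative fields): initialization,
  which reads the node uuid, runs the first time the service is managed
  rather than at glusterd init.

      typedef struct svc svc_t;

      struct svc {
          int inited;
          int (*init)(svc_t *svc); /* builds socket path from the node uuid */
      };

      static int svc_manager(svc_t *svc)
      {
          if (!svc->inited) {
              if (svc->init(svc) != 0)
                  return -1;
              svc->inited = 1; /* uuid is read only now, after image clone */
          }
          /* ... proceed with the usual start/stop decisions ... */
          return 0;
      }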
* glusterd: Porting messages to new logging framework. (Nandaja Varma, 2015-06-14, 1 file, -4/+5)

  Change-Id: I56ced6fca0246c230cc389132c47a0f60472ed0c
  BUG: 1194640
  Signed-off-by: Nandaja Varma <nandaja.varma@gmail.com>
  Reviewed-on: http://review.gluster.org/9836
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: nfs,shd,quotad,snapd daemons refactoring (Atin Mukherjee, 2015-02-20, 1 file, -0/+167)

  This patch ports nfs, shd, quotad & snapd with the approach suggested in
  http://www.gluster.org/pipermail/gluster-devel/2014-December/043180.html

  Change-Id: I4ea5b38793f87fc85cc9d2cf873727351dedffd2
  BUG: 1191486
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/9428
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
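  A sketch of the common per-daemon object such a refactor introduces
  (field names are assumptions modeled on the approach in the thread
  above, not the exact glusterd definition):

      typedef struct svc svc_t;

      struct svc {
          char name[32]; /* "nfs", "glustershd", "quotad", "snapd" */
          int (*manager)(svc_t *svc, void *data, int flags);
          int (*start)(svc_t *svc, int flags);
          int (*stop)(svc_t *svc, int sig);
          int online;
      };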