path: root/xlators/mgmt/glusterd/src/glusterd-volume-ops.c
Commit log, most recent first. Each entry: subject (author, date, files changed, lines).
* glusterd: performance improvement (nik-redhat, 2020-08-18, 1 file changed, -2/+4)
  Issue: In glusterd_op_stage_create_volume(), values are fetched from the
  dict, and the same values are fetched again by glusterd_check_brick_order(),
  which is called from that function. This leads to unnecessary performance
  overhead.

  Fix: Instead of fetching the values again, pass them to
  glusterd_check_brick_order() if they were fetched before; otherwise pass
  NULL, and only then is the fetching done there. Also, a few changes were
  made to reduce the cost of operations, such as failing fast on false
  conditions, plus a bit of code cleanup.

  Fixes: #1397
  Change-Id: Ic7b523adbca8eb63ef9eb29c206e3b19e05c0815
  Signed-off-by: nik-redhat <nladha@redhat.com>
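The fetch-once-and-pass-down pattern this commit describes can be sketched in plain C. This is a minimal sketch, not the actual glusterd code: lookup_replica_count() and the negative sentinel are hypothetical stand-ins for the real dict_get_int32n()-based lookups.

```c
#include <stdio.h>

/* Hypothetical stand-in for the real dict lookup (dict_get_int32n()
 * and friends in libglusterfs). */
static int
lookup_replica_count(void *dict)
{
    (void)dict;
    return 3; /* pretend the dict carries replica-count = 3 */
}

/* Callee only performs the lookup when the caller passes the
 * sentinel, i.e. when the value was not already fetched upstream. */
static int
check_brick_order(void *dict, int replica_count)
{
    if (replica_count < 0)
        replica_count = lookup_replica_count(dict);
    printf("checking brick order, replica_count=%d\n", replica_count);
    return 0;
}

int
main(void)
{
    void *dict = NULL;
    int replica_count = lookup_replica_count(dict); /* fetched once */
    check_brick_order(dict, replica_count); /* reuse, no second fetch */
    check_brick_order(dict, -1);            /* caller had no value   */
    return 0;
}
```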
* glusterd: dereference of null pointer (nik-redhat, 2020-07-09, 1 file changed, -3/+3)
  Issue: 'this' is used before it is defined, which can lead to a NULL
  dereference.

  Fix: Moved the definition of 'this' before its use to avoid the NULL
  dereference.

  Change-Id: I6ad382192129dfa3a206426e5610040e7a905be6
  Updates: #1096
  Signed-off-by: nik-redhat <nladha@redhat.com>
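The defect class is an ordering bug: a pointer is handed to a validation/logging path before it is assigned. A minimal sketch, with VALIDATE_OR_GOTO as a stand-in for the GF_VALIDATE_OR_GOTO-style macros glusterd uses:

```c
#include <stddef.h>
#include <stdio.h>

/* Stand-in for a GF_VALIDATE_OR_GOTO-style macro. */
#define VALIDATE_OR_GOTO(ptr, label)                                   \
    do {                                                               \
        if ((ptr) == NULL)                                             \
            goto label;                                                \
    } while (0)

struct xlator { const char *name; };

static struct xlator *
get_this(void)
{
    static struct xlator self = { "glusterd" };
    return &self;
}

static int
handler(void *frame)
{
    /* Fix: assign 'this' before any path dereferences it. The buggy
     * ordering validated and logged first, then assigned 'this'. */
    struct xlator *this = get_this();
    VALIDATE_OR_GOTO(this, out);
    VALIDATE_OR_GOTO(frame, out);
    printf("%s: frame ok\n", this->name);
    return 0;
out:
    return -1;
}

int
main(void)
{
    int dummy = 0;
    return handler(&dummy);
}
```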
* glusterd: additional log information (nik-redhat, 2020-06-29, 1 file changed, -46/+92)
  Issue: Some of the functions didn't have sufficient logging of information
  in case of failure.

  Fix: Added log messages in a few functions indicating the cause of the
  failure.

  Change-Id: I301cf3a1c8d2c94505c6ae0d83072b0241c36d84
  fixes: #874
  Signed-off-by: nik-redhat <nladha@redhat.com>
* glusterd: add-brick command failure (Sanju Rakonde, 2020-06-21, 1 file changed, -6/+35)
  Problem: The add-brick operation is failing when the replica or disperse
  count is not mentioned in the add-brick command.

  Reason: With commit a113d93 we check the brick order while doing an
  add-brick operation for replica and disperse volumes. If the replica count
  or disperse count is not mentioned in the command, the dict get fails,
  resulting in add-brick operation failure.

  fixes: #1306
  Change-Id: Ie957540e303bfb5f2d69015661a60d7e72557353
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
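The failure mode, an absent optional key treated as a hard error, can be sketched like this. dict_get_int() is a hypothetical stand-in for the real dict API, and the fallback to the volume's existing count is the assumed shape of the fix:

```c
#include <stdio.h>

/* Hypothetical stand-in: returns -1 when the key is absent from the
 * request dict, as when the CLI omits 'replica N'. */
static int
dict_get_int(void *dict, const char *key, int *out)
{
    (void)dict; (void)key; (void)out;
    return -1;
}

static int
stage_add_brick(void *dict, int current_replica_count)
{
    int replica_count;
    /* A missing key is not an error: an add-brick without an
     * explicit count keeps the volume's existing count. */
    if (dict_get_int(dict, "replica-count", &replica_count) != 0)
        replica_count = current_replica_count;
    printf("proceeding with replica_count=%d\n", replica_count);
    return 0;
}

int
main(void)
{
    return stage_add_brick(NULL, 3);
}
```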
* glusterd: destroy all volume info locks and mutexes (Dmitry Antipov, 2020-06-10, 1 file changed, -2/+0)
  Add destroy calls for 'store_volinfo_lock' and 'lock' of the volume info.
  Move the initialization of 'store_volinfo_lock' from
  glusterd_op_create_volume() to the common place, which is
  glusterd_volinfo_new().

  Change-Id: I5fae4469f28eab80c4fa6f5947646528e6aedad7
  Signed-off-by: Dmitry Antipov <dmantipov@yandex.ru>
  Fixes: #1291
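The underlying rule, initialise every mutex in the one constructor and destroy it in the matching destructor, looks roughly like this. A generic sketch: the struct and function names are simplified stand-ins for glusterd_volinfo_new() and the volinfo cleanup path.

```c
#include <pthread.h>
#include <stdlib.h>

struct volinfo {
    pthread_mutex_t lock;
    pthread_mutex_t store_volinfo_lock;
};

/* Initialise every lock in the one common constructor, so no code
 * path can obtain a volinfo with an uninitialised mutex. */
struct volinfo *
volinfo_new(void)
{
    struct volinfo *v = calloc(1, sizeof(*v));
    if (!v)
        return NULL;
    pthread_mutex_init(&v->lock, NULL);
    pthread_mutex_init(&v->store_volinfo_lock, NULL);
    return v;
}

/* ...and destroy them in the matching destructor. */
void
volinfo_free(struct volinfo *v)
{
    pthread_mutex_destroy(&v->lock);
    pthread_mutex_destroy(&v->store_volinfo_lock);
    free(v);
}

int
main(void)
{
    struct volinfo *v = volinfo_new();
    if (v)
        volinfo_free(v);
    return 0;
}
```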
* glusterd: check for same node while adding bricks in disperse volume (Sheetal Pamecha, 2020-06-06, 1 file changed, -237/+2)
  The optimal way of configuring disperse and replicate volumes is to have
  all bricks on different nodes. The create operation fails saying the
  layout is not optimal, and the user must use force to override this
  behavior. The same check is now implemented for the add-brick operation,
  to avoid a situation where all the added bricks end up on the same host.
  The operation will error out accordingly, and this can be overridden with
  force, the same as for create.

  fixes: #1047
  Change-Id: I3ee9c97c1a14b73f4532893bc00187ef9355238b
  Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>
* ec: change error message for heal commands for disperse volume (Sheetal Pamecha, 2020-02-03, 1 file changed, -1/+3)
  Currently, when we issue heal statistics or similar commands for a
  disperse volume, they fail with the message "Volume is not of type
  replicate." Adding the message "this command is supported for volumes of
  type replicate" to reflect supportability and give a better understanding
  of heal functionality for disperse volumes.

  fixes: bz#1785998
  Change-Id: I9688a9fdf427cb6f657cfd5b8db2f76a6c56f6e2
  Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>
* multiple xlators: reduce key length (Yaniv Kaul, 2020-01-14, 1 file changed, -1/+1)
  In many cases, we were freely allocating long keys with no need. Smaller
  char arrays are just fine almost anywhere, so this went ahead and looked
  where smaller ones can be used. In some cases, annotated the functions as
  static and the prefixes passed as const, as it was easier to read and
  understand. Where relevant, converted the dict functions to use a known
  key length.

  Change-Id: I882ab33ea20d90b63278336cd1370c09ffdab7f2
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* [RFC] #ifdef gNFS related code if we are not compiling gNFS (Yaniv Kaul, 2019-12-18, 1 file changed, -6/+6)
  If we are not compiling gNFS (--enable-gnfs is not given in the
  ./configure script params), there is little point in compiling code that
  is related to it. This patch tries to eliminate it. My hope (and it's not
  clear from the code) is that I did not break the NFS Ganesha support as
  well. Other than that, tried to compile with and without it, and it looks
  sane.

  Change-Id: I8d6c98066b9fceab4ec10fc6f5e81ab069e853bd
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* glusterd: set xattrs after checking the brick order (Sanju Rakonde, 2019-12-05, 1 file changed, -30/+30)
  Problem: When volume creation fails complaining about bricks from the
  same host for replica volumes, the bricks can't be re-used to create any
  volume without using force at the end. It says the brick is already part
  of a volume.

  Reason: When a volume create operation is issued, we set xattrs on the
  bricks. If the transaction fails in later checks, the xattrs remain on
  the brick. When the brick is re-used, glusterd looks at the xattrs and
  thinks it is already part of a volume.

  Solution: Check the brick order for replica and disperse volumes before
  setting the xattrs.

  fixes: bz#1776801
  Change-Id: I44a971b37f520e5a20dc9fad6520286d315063b9
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd, rpc, glusterfsd: fix coverity defects and put required annotations (Atin Mukherjee, 2019-09-10, 1 file changed, -0/+1)
  1404965 - Null pointer dereference
  1404316 - Program hangs
  1401715 - Program hangs
  1401713 - Program hangs

  Updates: bz#789278
  Change-Id: I6e6575daafcb067bc910445f82a9d564f43b75a2
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: IPV6 hostname address is not parsed correctly (Mohit Agrawal, 2019-09-06, 1 file changed, -5/+11)
  Problem: An IPv6 hostname address is not parsed correctly in the function
  glusterd_check_brick_order().

  Solution: Update the code to parse the hostname address correctly.

  Change-Id: Ifb2f83f9c6e987b2292070e048e97eeb51b728ab
  Fixes: bz#1747746
  Credits: Amgad Saleh <amgad.saleh@nokia.com>
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
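The classic pitfall in this bug class is splitting a host:/path brick spec at the first colon, which truncates colon-rich IPv6 literals. A minimal sketch of the safer split at the last colon; this illustrates the bug class, not the actual patch:

```c
#include <stdio.h>
#include <string.h>

/* Split "host:/path". Splitting at the *last* colon keeps IPv6 hosts
 * such as 2001:db8::1 intact; splitting at the first colon would
 * truncate them. Sketch only: the real parser handles more cases
 * (bracketed literals, validation). */
static int
split_brick(const char *brick, char *host, size_t hlen, const char **path)
{
    const char *sep = strrchr(brick, ':');
    if (!sep || (size_t)(sep - brick) >= hlen)
        return -1;
    memcpy(host, brick, (size_t)(sep - brick));
    host[sep - brick] = '\0';
    *path = sep + 1;
    return 0;
}

int
main(void)
{
    char host[64];
    const char *path = NULL;
    if (split_brick("2001:db8::1:/data/brick1", host, sizeof(host),
                    &path) == 0)
        printf("host=%s path=%s\n", host, path);
    return 0;
}
```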
* Revert "glusterd: (storhaug) remove ganesha (843e1b0)" (Jiffin Tony Thottan, 2019-08-24, 1 file changed, -0/+46)
  Please note that, as an additional change, the macro GLUSTERD_GET_SNAP_DIR
  moved from glusterd-store.c to glusterd-snapshot-utils.h.

  Change-Id: I811efefc148453fe32e4f0d322e80455447cec71
  updates: #663
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
* glusterd: create separate logdirs for cluster.rc instances (N Balachandran, 2019-08-14, 1 file changed, -2/+1)
  Create a separate logdir for each host instance created by cluster.rc.
  This makes it easier to determine the files belonging to a particular
  instance.

  Change-Id: Ic8321f83f98995412b7d5f095b3d3f0391767a8b
  Fixes: bz#1733042
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* glusterd/thin-arbiter: Thin-arbiter integration with GD1 (Vishal Pandey, 2019-06-28, 1 file changed, -1/+73)
  gluster volume create <VOLNAME> replica 2 thin-arbiter 1 <host1>:<brick1>
  <host2>:<brick2> <thin-arbiter-host>:<path-to-store-replica-id-file> [force]

  The changes have been made in a way that the last brick in the bricks
  list will be treated as the thin-arbiter. GD1 is manipulated to consider
  the replica count as 2 and continues creating the volume like any other
  replica 2 volume, but since thin-arbiter volumes need ta-brick client
  xlator entries for each subvolume in the fuse volfile, volfile generation
  is modified to inject these entries separately in the volfile for every
  subvolume.

  A few more additions:
  1- Save the volinfo with the new fields ta_bricks list and
     thin_arbiter_count.
  2- Introduce a new option client.ta-brick-port to add remote-port to the
     ta-brick xlator entry in fuse volfiles. The option can be set using
     the following CLI syntax:
     gluster volume set <VOLNAME> client.ta-brick-port <PORTNO.>
  3- Volume Info will contain a Thin-Arbiter-path entry to distinguish it
     from other replicate volumes.

  Change-Id: Ib434e2313b29716f32476c6c211d282c4ef39406
  Updates #687
  Signed-off-by: Vishal Pandey <vpandey@redhat.com>
* glusterd-volgen.c: remove BD xlator from the graph (Yaniv Kaul, 2019-06-18, 1 file changed, -5/+0)
  The BD xlator was removed some time ago; remove it from the graph. Also
  remove the caps settings, which only the BD xlator was using, and the
  document describing the translator.

  Change-Id: Id0adcb2952f4832a5dc6301e726874522e07935d
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* glusterd/tier: remove tier related code from glusterd (Hari Gowtham, 2019-05-27, 1 file changed, -92/+0)
  The handler functions are pointed to dummy functions. The switch-case
  handling for tier has also been moved to the default case to avoid issues
  if it is reintroduced. The tier changes in DHT remain as they are.

  updates: bz#1693692
  Change-Id: I80d80c9a3eb862b4440a36b31ae82b2e9d92e4dc
  Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
* glusterd: remove glusterd_check_volume_exists() call (Atin Mukherjee, 2019-04-11, 1 file changed, -4/+7)
  The same functionality is covered by glusterd_volinfo_find().

  Updates: bz#1193929
  Change-Id: I2308c5fa9b2ca9edaa95f172d0bd914103808c36
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: remove redundant glusterd_check_volume_exists () calls (Atin Mukherjee, 2019-04-08, 1 file changed, -31/+1)
  A pattern like the following was found in multiple places, where both
  glusterd_check_volume_exists() and glusterd_volinfo_find() do the same
  job. We need only one of them, not both. In a scaled environment with
  many volumes, iterating over the volume list twice to find one volume is
  a bottleneck:

      exists = glusterd_check_volume_exists(volname);
      ret = glusterd_volinfo_find(volname, &volinfo);
      if ((ret) || (!exists)) {

  Credits: ykaul@redhat.com for finding this out
  Updates: bz#1193929
  Change-Id: Ie116fe5c93e261a2bddd267c28ccb20a2884a36f
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
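The consolidation amounts to letting one list walk answer both questions. A self-contained sketch, with volinfo_find() as a simplified stand-in for glusterd_volinfo_find():

```c
#include <stdio.h>
#include <string.h>

struct volinfo { const char *name; };

/* Simplified stand-in for glusterd_volinfo_find(): one walk over the
 * volume list, returning 0 and the volinfo on a hit. */
static int
volinfo_find(const char *volname, struct volinfo **out)
{
    static struct volinfo vols[] = { { "gv0" }, { "gv1" } };
    size_t i;
    for (i = 0; i < sizeof(vols) / sizeof(vols[0]); i++) {
        if (strcmp(vols[i].name, volname) == 0) {
            *out = &vols[i];
            return 0;
        }
    }
    return -1;
}

int
main(void)
{
    struct volinfo *v = NULL;
    /* One call answers both questions: a zero return already proves
     * the volume exists, so a separate exists-walk is redundant. */
    if (volinfo_find("gv0", &v) == 0)
        printf("found %s\n", v->name);
    return 0;
}
```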
* mgmt/shd: Implement multiplexing in self heal daemon (Mohammed Rafi KC, 2019-04-01, 1 file changed, -4/+4)
  Problem: The shd daemon is per node, which means it creates a graph with
  all volumes on it. While this is great for utilizing resources, it is not
  so good in terms of performance and manageability, because self-heal
  daemons don't have the capability to automatically reconfigure their
  graphs. So each time any configuration change happens to the
  (replicate/disperse) volumes, we need to restart shd to bring the changes
  into the graph. Because of this, all ongoing heals for all other volumes
  have to be stopped in the middle and restarted all over again.

  Solution: This change makes shd a per-volume daemon, so that a graph will
  be generated for each volume. When we want to start/reconfigure shd for a
  volume, we first search for an existing shd running on the node; if there
  is none, we start a new process. If a daemon is already running for shd,
  then we simply detach the graph for the volume and reattach the updated
  graph for the volume. This won't touch any ongoing operations for any
  other volumes on the shd daemon.

  Example of an shd graph when it is a per-volume graph:

             -----------------------
             |    debug-iostat     |
             -----------------------
              /         |        \
             /          |         \
       ---------   ---------   ----------
       | AFR-1 |   | AFR-2 |   | AFR-3  |
       ---------   ---------   ----------

  A running shd daemon with 3 volumes will have a graph like:

             -----------------------
             |    debug-iostat     |
             -----------------------
              /         |         \
             /          |          \
     ------------  ------------  ------------
     | volume-1 |  | volume-2 |  | volume-3 |
     ------------  ------------  ------------

  Change-Id: Idcb2698be3eeb95beaac47125565c93370afbd99
  fixes: bz#1659708
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* Multiple files: remove HAVE_BD_XLATOR related code. (Yaniv Kaul, 2019-03-25, 1 file changed, -148/+0)
  The BD translator was removed some time ago (in commit
  a907e468e724c32b9833ce59806fc215c7122d63). This completes the work.

  Compile-tested only!

  updates: bz#1635688
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
  Change-Id: I999df52e479a72d3cc9523f22f9056de17eb559c
* libglusterfs: Move devel headers under glusterfs directory (ShyamsundarR, 2018-12-05, 1 file changed, -3/+3)
  libglusterfs devel package headers are referenced in code using the
  include semantics of a program; while this works, it can be better,
  especially when dealing with out-of-tree xlator builds or out-of-tree
  devel package usage in general.

  Towards this, the following changes are done:
  - moved all devel headers under a glusterfs directory
  - included these headers using system header notation <> in all code
    outside of libglusterfs
  - included these headers using own program notation "" within
    libglusterfs

  This change, although big, just moves the headers around and makes
  including them correct from other sources. This helps us correctly
  include libglusterfs headers without namespace conflicts.

  Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
  Updates: bz#1193929
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
* glusterd: volume-ops calls naked malloc (Kaleb S. KEITHLEY, 2018-11-28, 1 file changed, -2/+2)
  libglusterfs provides the wrapper functions MALLOC/__gf_default_malloc,
  CALLOC/__gf_default_calloc, and REALLOC/__gf_default_realloc for those
  few places outside of mempool.c that need to call malloc/calloc/realloc
  directly. Notable exceptions are "contrib" code, e.g. rbtree and
  timer-wheel, and perhaps parsers generated by yacc+lex. But even parsers
  can be fixed to at least call the wrappers mentioned above, if not our
  own allocators.

  Change-Id: Ie6156307b6d2183be9c9aff153afb7598974f4e4
  updates: bz#1193929
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
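The idea behind such wrappers is a single choke point for raw allocation, where diagnostics or accounting can be layered in. A minimal sketch; wrapped_malloc() is a hypothetical stand-in, not the actual libglusterfs wrapper:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for a libglusterfs-style allocation wrapper:
 * one choke point so failure logging (or accounting) is uniform. */
static void *
wrapped_malloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL)
        fprintf(stderr, "allocation of %zu bytes failed\n", size);
    return p;
}

int
main(void)
{
    /* Instead of a naked malloc(128) scattered through the code. */
    char *buf = wrapped_malloc(128);
    free(buf);
    return 0;
}
```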
* glusterd: allow shared-storage to use bricks under glusterd working directory (Sanju Rakonde, 2018-11-08, 1 file changed, -1/+2)
  With commit 44e4db, we do not allow the user to create a volume using
  glusterd's working directory as a brick, or any sub-directory under
  glusterd's working directory as a brick. This has broken shared-storage,
  since the volume "gluster-shared-storage" is created using bricks under
  glusterd's working directory. With this patch, we let the
  "gluster-shared-storage" volume use bricks under glusterd's working
  directory.

  fixes: bz#1647029
  Change-Id: Ifcbcf4576eea12cf46f199dea287b29bd3ec3bfd
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd: initialise caps inside #ifdef HAVE_BD_XLATOR block (Sanju Rakonde, 2018-11-06, 1 file changed, -3/+2)
  Note: The problem is seen when we disable the bd xlator.

  Problem: When we create a volume, the volume info file has a caps value
  of 15 on nodes which host bricks for that volume. The remaining nodes in
  the cluster do not have the caps field. When glusterd is restarted, peers
  go into the rejected state because of this mismatch in the configuration
  files.

  Cause: In glusterd_op_create_volume(), we initialise the caps value to 15
  at the beginning. Later, we check whether the brick belongs to the same
  node or not. If the brick doesn't belong to the same node, the caps value
  is set to 0; if it does, the caps value is changed inside the
  #ifdef HAVE_BD_XLATOR block.

  Solution: To have consistency across the cluster, we need to initialise
  the caps value inside the #ifdef HAVE_BD_XLATOR block, only when the
  brick belongs to the same node.

  fixes: bz#1645986
  Change-Id: I2648f420b21d6e69e7c38b0f4736d41e0f15a7f5
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
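The shape of the fix can be sketched like this. CAPS_BD and the simplified volinfo are hypothetical stand-ins; the point is that the non-zero initialisation happens only under the same #ifdef and the same locality check on every node:

```c
#include <stdio.h>

struct volinfo { int caps; };

#define CAPS_BD 15 /* hypothetical capability bits */

static void
set_caps(struct volinfo *v, int brick_is_local)
{
    v->caps = 0; /* consistent default on every node in the cluster */
    if (brick_is_local) {
#ifdef HAVE_BD_XLATOR
        /* Only raised on nodes that host a brick *and* were built
         * with the BD xlator, so stored volinfo matches everywhere. */
        v->caps = CAPS_BD;
#endif
    }
    printf("caps=%d\n", v->caps);
}

int
main(void)
{
    struct volinfo v;
    set_caps(&v, 1); /* brick on this node */
    set_caps(&v, 0); /* brick elsewhere   */
    return 0;
}
```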
* stripe: remove the translator from build and glusterd (Amar Tumballi, 2018-10-31, 1 file changed, -51/+0)
  Based on the proposal to remove a few features as they are not actively
  maintained [1], remove the stripe translator from the build. Also make
  sure there are no regression tests involving the stripe translator.

  [1] https://lists.gluster.org/pipermail/gluster-users/2018-July/034400.html

  Note that this patch aims at removing the translator from the build; a
  follow-up patch is needed to remove the code from the repository.

  Updates: bz#1364707
  Change-Id: I235b305338f138e29e9f30cba65bc0dadbebbbd5
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* glusterd: ensure volinfo->caps is set to correct value. (Sanju Rakonde, 2018-10-25, 1 file changed, -0/+2)
  With commit febf5ed4848, during the volume create op we set
  volinfo->caps to 0 only if any of the bricks belong to the same node and
  brickinfo->vg[0] is null. Previously, we used to set volinfo->caps to 0
  when either a brick didn't belong to the same node or brickinfo->vg[0]
  was null.

  With this patch, we set volinfo->caps to 0 when either the brick doesn't
  belong to the same node or brickinfo->vg[0] is null (as we did before
  commit febf5ed4848).

  fixes: bz#1635820
  Change-Id: I00a97415786b775fb088ac45566ad52b402f1a49
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd: improve logging for stage_deleted flag (Sanju Rakonde, 2018-10-23, 1 file changed, -0/+4)
  Change-Id: I5f0667a47ddd24cb00949c875c19f3d1dbd8d603
  fixes: bz#1605077
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd: acquire lock to update volinfo structure (Sanju Rakonde, 2018-09-18, 1 file changed, -0/+2)
  Problem: With commit cb0339f92, we use a separate synctask for
  restart_bricks. There can be a situation where two threads access and
  update the same volinfo structure at the same time. This can leave
  volinfo with inconsistent values, and assertion failures because of
  unexpected values.

  Solution: While updating the volinfo structure, acquire the
  store_volinfo_lock, and release the lock only when the thread has
  completed its critical section.

  Fixes: bz#1627610
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
  Change-Id: I545e4e2368e3285d8f7aa28081ff4448abb72f5d
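A minimal sketch of the locking discipline, with a simplified volinfo; the lock name follows the commit, everything else is illustrative:

```c
#include <pthread.h>
#include <stdio.h>

/* Simplified volinfo: the lock name follows the commit, the payload
 * stands in for the many fields the real structure carries. */
struct volinfo {
    pthread_mutex_t store_volinfo_lock;
    int version;
};

/* Every writer holds the lock across the whole read-modify-persist
 * sequence, so a concurrent restart_bricks synctask can never see a
 * half-updated volinfo. */
static void
store_volinfo(struct volinfo *v)
{
    pthread_mutex_lock(&v->store_volinfo_lock);
    v->version++;                                  /* update...      */
    printf("persisting version %d\n", v->version); /* ...and persist */
    pthread_mutex_unlock(&v->store_volinfo_lock);
}

int
main(void)
{
    struct volinfo v = { PTHREAD_MUTEX_INITIALIZER, 0 };
    store_volinfo(&v);
    return 0;
}
```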
* Land part 2 of clang-format changes (Gluster Ant, 2018-09-12, 1 file changed, -2826/+2862)
  Change-Id: Ia84cc24c8924e6d22d02ac15f611c10e26db99b4
  Signed-off-by: Nigel Babu <nigelb@redhat.com>
* Some (mgmt) xlators: use dict_{setn|getn|deln|get_int32n|set_int32n|set_strn} (Yaniv Kaul, 2018-09-09, 1 file changed, -110/+149)
  In a previous patch (https://review.gluster.org/20769) we've added the
  key length to be passed to dict_* funcs, to remove the need to strlen()
  it. This patch moves some xlators to use it.

  - It also adds dict_get_int32n, which was missing.
  - It also reduces the size of some key variables. They were set to 1024b
    or PATH_MAX, where sometimes 64 bytes were really enough.

  Please review carefully:
  1. That I did not reduce the size of the key variables too much.
  2. That I did not mix up some keys.

  Compile-tested only!

  Change-Id: Ic729baf179f40e8d02bc2350491d4bb9b6934266
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
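The point of the *n variants is that a string literal's length is known at compile time, so no strlen() is needed per call. A sketch: dict_setn() here is a hypothetical stand-in with the real API's shape (key plus explicit key length), and SLEN mirrors the sizeof()-1 macro used in the codebase.

```c
#include <stdio.h>
#include <string.h>

/* Compile-time length of a string literal, mirroring the SLEN
 * (sizeof() - 1) macro used in the codebase. */
#define SLEN(str) (sizeof(str) - 1)

/* Hypothetical stand-in with the real API's shape: the key length is
 * passed in, so the dict never has to strlen() the key itself. */
static int
dict_setn(void *dict, const char *key, size_t keylen, int value)
{
    (void)dict;
    printf("set %.*s (keylen %zu) = %d\n", (int)keylen, key, keylen, value);
    return 0;
}

int
main(void)
{
    void *dict = NULL;
    /* Literal key: length known at compile time, no runtime strlen(). */
    return dict_setn(dict, "replica-count", SLEN("replica-count"), 3);
}
```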
* multiple xlators (mgmt): strncpy()->sprintf(), reduce strlen()'s (Yaniv Kaul, 2018-09-07, 1 file changed, -6/+17)
  xlators/mgmt/glusterd/src/glusterd-geo-rep.c
  xlators/mgmt/glusterd/src/glusterd-handshake.c
  xlators/mgmt/glusterd/src/glusterd-sm.c
  xlators/mgmt/glusterd/src/glusterd-store.c
  xlators/mgmt/glusterd/src/glusterd-utils.c
  xlators/mgmt/glusterd/src/glusterd-volgen.c
  xlators/mgmt/glusterd/src/glusterd-volume-ops.c
  xlators/mgmt/glusterd/src/glusterd.c

  strncpy may not be very efficient for short strings copied into a large
  buffer: if the length of src is less than n, strncpy() writes additional
  null bytes to dest to ensure that a total of n bytes are written.
  Instead, use snprintf(), and try to ensure the output is not truncated.

  Also:
  - save the result of strlen() and re-use it when possible.
  - move from strlen() to SLEN (sizeof()-based) for const strings.

  Compile-tested only!

  Change-Id: Ib5d001857236f43e41c4a51b5f48e1a33110aaeb
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
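The padding behaviour is easy to demonstrate: copying a short string into a large buffer with strncpy() zero-fills the whole buffer, while snprintf() writes only what is needed and always NUL-terminates. A small standalone illustration:

```c
#include <stdio.h>
#include <string.h>

int
main(void)
{
    char dest[4096];
    const char *src = "short string";

    /* strncpy() would pad all 4096 bytes with NULs here; snprintf()
     * writes strlen(src)+1 bytes and guarantees termination. */
    int n = snprintf(dest, sizeof(dest), "%s", src);
    if (n < 0 || (size_t)n >= sizeof(dest))
        fprintf(stderr, "output truncated\n");

    /* Save the strlen() result instead of recomputing it. */
    size_t len = strlen(dest);
    printf("copied %zu bytes: %s\n", len, dest);
    return 0;
}
```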
* mgmt/glusterd: Fix resource leak & unused value issues in glusterd-volume-ops.c (Vijay Bellur, 2018-08-23, 1 file changed, -0/+5)
  Addresses CID: 1274132, 1325534

  Change-Id: I176612ef5baf5618d543838a5f32db7dcd7002c3
  updates: bz#789278
  Signed-off-by: Vijay Bellur <vbellur@redhat.com>
* xlators/mgmt/glusterd/src/glusterd-volume-ops.c: reduce size of message variable (Yaniv Kaul, 2018-08-23, 1 file changed, -3/+3)
  The size of the error message variable is reduced from 2048 bytes to 64
  or 128 bytes.

  Compile-tested only!

  Change-Id: I393ba03969a6e71ccb0ce382d0e0546192897312
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* glusterd: ignore importing volume which is undergoing a delete operation (Atin Mukherjee, 2018-08-16, 1 file changed, -1/+1)
  Problem explanation: Assume a 3-node cluster. If N1 originates a delete
  operation and, while N1's commit phase completes, the glusterd service of
  N2 or N3 gets disconnected from N1 (before completing the commit phase),
  N1 will end up importing the volume, which is in-flight for a delete on
  the other nodes, as a fresh volume, resulting in an incorrect
  configuration state.

  Fix: Mark a volume as stage-deleted once a volume delete operation passes
  its staging phase, and reset this flag during the unlock phase. Now, if
  during this intermediate phase the same volume gets imported to other
  peers, it shouldn't be considered for recreation.

  An automated .t is quite tough to implement with the current infra.

  Test Case:
  1. Keep creating and deleting volumes in a loop on a 3-node cluster.
  2. Simulate n/w failure between the peers (ifdown followed by ifup).
  3. Check if the output of 'gluster v list | wc -l' is the same across
     all 3 nodes during 1 & 2.

  Change-Id: Ifdd5dc39699120258d7fdd42fe2deb9de25c6246
  Fixes: bz#1605077
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
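The mechanism reduces to a flag consulted on the import path. A minimal sketch with a simplified volinfo; the flag name follows the commit, the rest is illustrative:

```c
#include <stdbool.h>
#include <stdio.h>

/* Simplified volinfo; the flag name follows the commit. */
struct volinfo {
    const char *name;
    bool stage_deleted; /* set when delete passes staging, cleared in
                           the unlock phase */
};

/* Peer import path: a volume that is mid-delete must not be treated
 * as a fresh volume and resurrected. */
static bool
should_import(const struct volinfo *v)
{
    return !v->stage_deleted;
}

int
main(void)
{
    struct volinfo v = { "gv0", true };
    printf("import %s? %s\n", v.name, should_import(&v) ? "yes" : "no");
    return 0;
}
```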
* glusterd: Add multiple checks before attach/start a brick (Mohit Agrawal, 2018-07-27, 1 file changed, -1/+6)
  Problem: In a brick-mux scenario, sometimes glusterd is not able to
  start/attach a brick, yet 'gluster v status' shows the brick as already
  running.

  Solution:
  1) To make sure the brick is running, check for the brick_path in
     /proc/<pid>/fd; if the brick is consumed by the brick process, it
     means the brick stack has come up, otherwise not.
  2) Before starting/attaching a brick, check whether the brick is mounted.
  3) At the time of printing volume status, check that the brick is
     consumed by some brick process.

  Test: To test the same, the following procedure was used:
  1) Set up a brick-mux environment on a VM.
  2) Put a breakpoint in gdb in the function
     posix_health_check_thread_proc at the time of notifying the
     GF_EVENT_CHILD_DOWN event.
  3) Unmount any one brick path forcefully.
  4) Check 'gluster v status'; it will show N/A for the brick.
  5) Try to start the volume with the force option; glusterd throws the
     message "No device available for mount brick".
  6) Mount the brick_root path.
  7) Try to start the volume with the force option.
  8) The down brick is started successfully.

  Change-Id: I91898dad21d082ebddd12aa0d1f7f0ed012bdf69
  fixes: bz#1595320
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
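The first check in the list above, scanning /proc/<pid>/fd for a file under the brick path, can be sketched as a standalone Linux program. This is the core of the idea only; the real check carries more guards:

```c
#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Does the process hold any fd under brick_path? Read the symlinks in
 * /proc/<pid>/fd and compare their targets against the prefix. */
static int
pid_uses_path(pid_t pid, const char *brick_path)
{
    char fd_dir[64], link_path[PATH_MAX + 64], target[PATH_MAX];
    struct dirent *ent;
    int found = 0;

    snprintf(fd_dir, sizeof(fd_dir), "/proc/%d/fd", (int)pid);
    DIR *dir = opendir(fd_dir);
    if (!dir)
        return 0;
    while ((ent = readdir(dir)) != NULL) {
        ssize_t n;
        snprintf(link_path, sizeof(link_path), "%s/%s", fd_dir,
                 ent->d_name);
        n = readlink(link_path, target, sizeof(target) - 1);
        if (n < 0)
            continue;
        target[n] = '\0';
        if (strncmp(target, brick_path, strlen(brick_path)) == 0) {
            found = 1;
            break;
        }
    }
    closedir(dir);
    return found;
}

int
main(void)
{
    /* Demo on our own process: do we hold anything under /dev? */
    printf("self uses /dev? %d\n", pid_uses_path(getpid(), "/dev"));
    return 0;
}
```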
* glusterd: Coverity issues with type FORWARD_NULL (Sanju Rakonde, 2018-07-24, 1 file changed, -1/+1)
  This patch fixes coverity issues 102, 103, 112 and 119 from [1].

  [1] https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-07-23-5fa004f3/html/

  Updates: bz#789278
  Change-Id: I99762eb0bcbd974a5250434777db63520f2ce2e6
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* All: run codespell on the code and fix issues. (Yaniv Kaul, 2018-07-22, 1 file changed, -1/+1)
  Please review; it's not always just the comments that were fixed. I've
  had to revert, of course, all calls to creat() that were changed to
  create()...

  Only compile-tested!

  Change-Id: I7d02e82d9766e272a7fd9cc68e51901d69e5aab5
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* Fix compile warnings (Xavi Hernandez, 2018-07-10, 1 file changed, -22/+42)
  This patch fixes compile warnings that appear with newer compilers. The
  solution applied is only to remove the warnings, but it doesn't always
  solve the problem in the best way. It assumes that the problem will never
  happen, as the previous code assumed.

  Change-Id: I6e8470d6c2e2dbd3bd7d324b5fd2f92ffdc3d6ec
  updates: bz#1193929
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* glusterd: changing the op-version of volume stop mgmt v3 (Kaleb S. KEITHLEY, 2018-03-28, 1 file changed, -3/+3)
  Make the log message describe the actual test.

  Change-Id: I1ea7300a6b186032a65236492d6d2a6eef0ab983
  fixes: bz#1560441
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
* glusterd: changing the op-version of volume stop mgmt v3 (Sanju Rakonde, 2018-03-27, 1 file changed, -1/+1)
  Change-Id: Iefc5a00d36436b23181871fa365f27b8d90cff0a
  fixes: bz#1560441
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd: Implementing volume stop in mgmt v3 (Sanju Rakonde, 2018-03-26, 1 file changed, -1/+16)
  Change-Id: I8f9c594cf56331d54eb4884335699744685ef20d
  fixes: bz#1560441
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd: connect to an existing brick process when quorum status is NOT_APPLICABLE_QUORUM (Atin Mukherjee, 2018-01-05, 1 file changed, -1/+2)
  First of all, this patch reverts commit 635c1c3, as it was causing a
  regression with bricks not coming up on time when a node is rebooted.
  This patch tries to fix the problem in a different way, by just trying to
  connect to an existing running brick when the quorum status is not
  applicable.

  Change-Id: I0efb5901832824b1c15dcac529bffac85173e097
  BUG: 1509845
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* Tier: Stop tierd for detach start (hari gowtham, 2017-12-01, 1 file changed, -7/+10)
  Problem: tierd was stopped only after detach commit. This makes the
  detach take a longer time. The detach demotes the files to the cold
  brick, and if the promotion frequency is hit, then tierd starts to
  promote files to the hot tier again.

  Fix: Stop tierd after detach start, so the files get demoted faster.

  Note: is_tier_enabled was not maintained properly. That has been fixed
  too. Some code cleanup has been done.

  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Change-Id: I532f7410cea04fbb960105483810ea3560ca149b
  BUG: 1446381
* glusterd: fix brick restart parallelism (Atin Mukherjee, 2017-11-01, 1 file changed, -0/+8)
  glusterd's brick restart logic is not always sequential, as there are at
  least three different ways bricks are restarted:
  1. through friend-sm and glusterd_spawn_daemons ()
  2. through friend-sm and handling volume quorum action
  3. through friend handshaking when there is a mismatch on quorum on
     friend import.

  In a brick multiplexing setup, glusterd ended up trying to spawn the same
  brick process a couple of times, as two threads hit
  glusterd_brick_start () within a fraction of a millisecond, because of
  which glusterd had no way to reject either of them: in both cases the
  brick start criteria were met. As a solution, it's better to control this
  madness with two different flags: one is a boolean called
  start_triggered, which indicates a brick start has been triggered and
  continues to be true until the brick dies or is killed; the second is a
  mutex lock to ensure that for a particular brick we don't end up in
  glusterd_brick_start () more than once at the same point in time.

  Change-Id: I292f1e58d6971e111725e1baea1fe98b890b43e2
  BUG: 1506513
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
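The flag-plus-mutex discipline can be sketched as follows; the struct is a simplified stand-in for the real brickinfo, with field names following the commit:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct brickinfo {
    pthread_mutex_t restart_mutex;
    bool start_triggered; /* stays true until the brick dies or is killed */
};

/* Several racing code paths (friend-sm, spawn_daemons, quorum
 * handling) may call this at nearly the same instant; the lock plus
 * the flag ensure only one of them actually spawns the process. */
static void
brick_start(struct brickinfo *b)
{
    pthread_mutex_lock(&b->restart_mutex);
    if (!b->start_triggered) {
        b->start_triggered = true;
        printf("spawning brick process\n");
    } else {
        printf("start already triggered, skipping\n");
    }
    pthread_mutex_unlock(&b->restart_mutex);
}

int
main(void)
{
    struct brickinfo b = { PTHREAD_MUTEX_INITIALIZER, false };
    brick_start(&b);
    brick_start(&b); /* second caller is a no-op */
    return 0;
}
```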
* gluster: IPv6 single stack support (Kevin Vigor, 2017-10-24, 1 file changed, -0/+4)
  Summary:
  - This diff changes all locations in the code to prefer the inet6 family
    instead of inet. This will allow GlusterFS to operate via IPv6 instead
    of IPv4 for all internal operations while still being able to serve
    (FUSE or NFS) clients via IPv4.
  - The changes apply to NFS as well.
  - This diff ports D1892990, D1897341 & D1896522 to the 3.8 branch.

  Test Plan: Prove tests!

  Reviewers: dph, rwareing
  Signed-off-by: Shreyas Siravara <sshreyas@fb.com>
  Change-Id: I34fdaaeb33c194782255625e00616faf75d60c33
  BUG: 1406898
  Reviewed-on-3.8-fb: http://review.gluster.org/16059
  Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
  Tested-by: Shreyas Siravara <sshreyas@fb.com>
* gfproxyd: Let glusterd manage gfproxy daemon (Poornima G, 2017-10-18, 1 file changed, -0/+2)
  Updates: #242
  BUG: 1428063
  Change-Id: Iaaf2edf99b2ecc75f6d30762c752a6d445c1c826
  Signed-off-by: Poornima G <pgurusid@redhat.com>
* glusterd: fix client io-threads option for replicate volumes (Ravishankar N, 2017-10-09, 1 file changed, -0/+34)
  Problem: Commit ff075a3d6f9b142911d25c27fd209838782bfff0 disabled loading
  client-io-threads for replicate volumes (it was set to on by default in
  commit e068c1997314046658dd502e9118dab32decf879) due to performance
  issues, but in doing so inadvertently failed to load the xlator even if
  the user explicitly enabled the option using the volume set command. This
  was despite returning success for the volume set.

  Fix: Modify the check in perfxl_option_handler() and add checks in the
  volume create/add-brick/remove-brick code paths, tying it all to
  GD_OP_VERSION_3_12_2.

  Change-Id: Ib612973a999a7da818cc926f5c2601b1f0794fcf
  BUG: 1498570
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* heal: New feature heal info summary to list the status of brick and count of entries to be healed (Mohamed Ashiq Liyazudeen, 2017-09-15, 1 file changed, -0/+1)
  Command output:

  Brick 192.168.2.8:/brick/1
  Status: Connected
  Total Number of entries: 363
  Number of entries in heal pending: 362
  Number of entries in split-brain: 0
  Number of entries possibly healing: 1

  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <cliOutput>
    <healInfo>
      <bricks>
        <brick hostUuid="9105dd4b-eca8-4fdb-85b2-b81cdf77eda3">
          <name>192.168.2.8:/brick/1</name>
          <status>Connected</status>
          <totalNumberOfEntries>363</totalNumberOfEntries>
          <numberOfEntriesInHealPending>362</numberOfEntriesInHealPending>
          <numberOfEntriesInSplitBrain>0</numberOfEntriesInSplitBrain>
          <numberOfEntriesPossiblyHealing>1</numberOfEntriesPossiblyHealing>
        </brick>
      </bricks>
    </healInfo>
    <opRet>0</opRet>
    <opErrno>0</opErrno>
    <opErrstr/>
  </cliOutput>

  Change-Id: I40cb6f77a14131c9e41b292f4901b41a228863d7
  BUG: 1261463
  Signed-off-by: Mohamed Ashiq Liyazudeen <mliyazud@redhat.com>
  Reviewed-on: https://review.gluster.org/12154
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: Karthik U S <ksubrahm@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* mgmt/glusterd: Provide more information in command message (Ashish Pandey, 2017-08-10, 1 file changed, -3/+5)
  Problem: When more than one brick is present on the same node while
  creating a volume, we get a warning message that the setup is not
  optimal. We need to add more information to this error/warning.

  Solution: Add the following line to the current message:
  "Bricks should be on different nodes to have best fault tolerant
  configuration."

  Change-Id: Ica72bd6e68dff7e41c37617f3b775a981fa40c69
  BUG: 1480099
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
  Reviewed-on: https://review.gluster.org/18014
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>