path: root/xlators/mgmt
Commit message | Author | Age | Files | Lines
* glusterd: Brick process fails to come up with brickmux on | Vishal Pandey | 2020-03-17 | 1 | -6/+8

  Issue:
  1- In a cluster of 3 nodes N1, N2, N3, create 3 volumes vol1, vol2, vol3 with 3 bricks (one from each node).
  2- Set cluster.brick-multiplex on.
  3- Start all 3 volumes.
  4- Check if all bricks on a node are running on the same port.
  5- Kill N1.
  6- Set performance.readdir-ahead for volumes vol1, vol2, vol3.
  7- Bring N1 up and check volume status.
  8- Brick processes are not running on N1.

  Root cause: Since there is a difference in volfile versions on N1 as compared to N2 and N3, glusterd_import_friend_volume() is called. glusterd_import_friend_volume() copies the new_volinfo, deletes the old_volinfo and then calls glusterd_start_bricks(). glusterd_start_bricks() looks for the volfiles and sends an RPC request to glusterfs_handle_attach(). Since the volinfo has been deleted from the priv->volumes list by glusterd_delete_stale_volume() before glusterd_start_bricks(), and glusterd_create_volfiles_and_notify_services() and glusterd_list_add_order() are called only after glusterd_start_bricks(), the attach RPC request gets an empty volfile path and that causes the brick to crash.

  Fix: Call glusterd_list_add_order() and glusterd_create_volfiles_and_notify_services() before the glusterd_start_bricks() call is made in glusterd_import_friend_volume().

  > Change-Id: Idfe0e8710f7eb77ca3ddfa1cabeb45b2987f41aa
  > Bug: bz#1773856
  > Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  (cherry picked from commit 45e81aae791da9d013aba2286af44826227c05ec)
  Change-Id: Idfe0e8710f7eb77ca3ddfa1cabeb45b2987f41aa
  fixes: bz#1808964
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
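  A minimal CLI sketch of the reproduction steps above (hedged: the replica-3 layout, host names N1/N2/N3 and brick paths are placeholders, not taken from the commit):

      gluster volume create vol1 replica 3 N1:/bricks/vol1 N2:/bricks/vol1 N3:/bricks/vol1
      gluster volume create vol2 replica 3 N1:/bricks/vol2 N2:/bricks/vol2 N3:/bricks/vol2
      gluster volume create vol3 replica 3 N1:/bricks/vol3 N2:/bricks/vol3 N3:/bricks/vol3
      gluster volume set all cluster.brick-multiplex on
      gluster volume start vol1; gluster volume start vol2; gluster volume start vol3
      gluster volume status            # bricks of a node should share one port
      # stop glusterd and brick processes on N1, then from N2 or N3:
      gluster volume set vol1 performance.readdir-ahead on
      gluster volume set vol2 performance.readdir-ahead on
      gluster volume set vol3 performance.readdir-ahead on
      # bring N1 back up and verify its bricks are online:
      gluster volume status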
* glusterd: stop stale bricks during handshaking in brick mux mode | Atin Mukherjee | 2020-03-16 | 4 | -9/+55

  This patch addresses two problems:

  1. During friend handshaking, if a volume is imported due to a change in the version, the old bricks were not stopped, which would lead to a situation where bricks run with old volfiles.

  2. As part of attaching the shd service in glusterd_attach_svc, there might be a case that the volume for which we're attempting to attach an shd service has become stale and is in the process of deletion; hence on every retrial (if the rpc connection isn't ready) check for the existence of the volume and only then attempt the further attach request.

  patch on master: https://review.gluster.org/#/c/glusterfs/+/23042/

  > Bug: bz#1733425
  > Change-Id: I6bac6b871f7e31cb5bf277db979289dec196a03e
  > Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  > Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  fixes: bz#1812849
  Change-Id: I6bac6b871f7e31cb5bf277db979289dec196a03e
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* volgen: make thin-arbiter name unique in 'pending-xattr' option | Amar Tumballi | 2020-02-17 | 1 | -2/+12

  The thin-arbiter module uses the translator's 'pending-xattr' name as the filename which gets created on the thin-arbiter node. By making this unique, we can host a single thin-arbiter node for multiple clusters.

  Updates: #763
  Change-Id: Ib3c732e7e04e6dba229e71ae3e64f1f3cb6d794d
  Signed-off-by: Amar Tumballi <amar@kadalu.io>
  (cherry picked from commit 8db8202f716fd24c8c52f8ee5f66e169310dc9b1)
* afr: expose cluster.optimistic-change-log to CLI. | Ravishankar N | 2020-01-09 | 1 | -0/+5

  Backport of https://review.gluster.org/#/c/glusterfs/+/23960/

  This volume option was not made available to the `gluster volume set` CLI.

  Reported-by: epolakis (https://github.com/kinsu) in https://github.com/gluster/glusterfs/issues/781

  fixes: bz#1788785
  Change-Id: I7141bdd4e53ee99e22b354edde8d023bfc0b2cd7
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
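  With this backport the option can be managed from the CLI; a minimal sketch, assuming the volume name is a placeholder and that the option takes on/off values:

      gluster volume set <VOLNAME> cluster.optimistic-change-log on
      gluster volume get <VOLNAME> cluster.optimistic-change-log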
* glusterd: fix use-after-free of a dict_t | Xavi Hernandez | 2019-09-15 | 1 | -1/+1

  A dict was passed to a function that calls dict_unref() without taking any additional reference. Given that the same dict is also used after the function returns, this was causing a use-after-free situation.

  To fix the issue, we simply take an additional reference before calling the function.

  > Fixes: bz#1723890
  > Change-Id: I98c6b76b08fe3fa6224edf281a26e9ba1ffe3017
  > Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
  > (cherry picked from commit f36086db87aae24c10abde434f081d78b942735e)
  Fixes: bz#1752245
  Change-Id: I98c6b76b08fe3fa6224edf281a26e9ba1ffe3017
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* glusterd: IPV6 hostname address is not parsed correctly | Mohit Agrawal | 2019-09-06 | 1 | -5/+11

  Problem: IPV6 hostname address is not parsed correctly in the function glusterd_check_brick_order.

  Solution: Update the code to parse the hostname address.

  > Change-Id: Ifb2f83f9c6e987b2292070e048e97eeb51b728ab
  > Fixes: bz#1747746
  > Credits: Amgad Saleh <amgad.saleh@nokia.com>
  > Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
  > (cherry picked from commit 6563ffb04d7ba51a89726e7c5bbb85c7dbc685b5)
  Change-Id: Ifb2f83f9c6e987b2292070e048e97eeb51b728ab
  Fixes: bz#1749664
  Credits: Amgad Saleh <amgad.saleh@nokia.com>
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* event: rename event_XXX with gf_ prefixed | Xiubo Li | 2019-08-21 | 1 | -1/+1

  I hit one crash issue when using the libgfapi. In the libgfapi it will call glfs_poller() --> event_dispatch() in file api/src/glfs.c:721, and the event_dispatch() is defined by libgluster locally. The problem is that the name event_dispatch() is exactly the same as the one from the libevent package from the OS.

  For example, if an executable program Foo uses and links both libevent and libgfapi at the same time, I can hit the crash, like:

  kernel: glfs_glfspoll[68486]: segfault at 1c0 ip 00007fef006fd2b8 sp 00007feeeaffce30 error 4 in libevent-2.0.so.5.1.9[7fef006ed000+46000]

  The link for Foo is:
  lib_foo_LADD = -levent $(GFAPI_LIBS)

  It will crash. This is because the glfs_poller() is calling the event_dispatch() from libevent, not from libgluster.

  The gfapi link info:
  GFAPI_LIBS = -lacl -lgfapi -lglusterfs -lgfrpc -lgfxdr -luuid

  If I link Foo like:
  lib_foo_LADD = $(GFAPI_LIBS) -levent

  it will work well without any problem.

  And if Foo calls one private lib, such as handler_glfs.so, and the handler_glfs.so links the GFAPI_LIBS directly while Foo does not and only dlopen()s handler_glfs.so, then the crash is hit every time. The link info will be:
  foo_LADD = -levent
  libhandler_glfs_LIBADD = $(GFAPI_LIBS)

  I can avoid the crash temporarily by linking the GFAPI_LIBS in Foo too, like:
  foo_LADD = $(GFAPI_LIBS) -levent
  libhandler_glfs_LIBADD = $(GFAPI_LIBS)

  But this is ugly since Foo won't use any APIs from the GFAPI_LIBS. And in some cases when the --as-needed link option is added (on many dists it is added as default), then the crash is back again; the above workaround won't work.

  Backport of:
  > https://review.gluster.org/#/c/glusterfs/+/23110/
  > Change-Id: I38f0200b941bd1cff4bf3066fca2fc1f9a5263aa
  > Fixes: #699
  > Signed-off-by: Xiubo Li <xiubli@redhat.com>

  Change-Id: I38f0200b941bd1cff4bf3066fca2fc1f9a5263aa
  updates: bz#1740519
  Signed-off-by: Xiubo Li <xiubli@redhat.com>
  (cherry picked from commit 799edc73c3d4f694c365c6a7c27c9ab8eed5f260)
* glusterd: don't log a warning message for tier-enabled key | Sanju Rakonde | 2019-07-29 | 1 | -2/+5

  We are logging a warning message saying unknown-key for the tier-enabled key. Although the tier xlator is deprecated, this key is left behind for handling the peer-rejection issues in a heterogeneous cluster. We need not log if this key is not found/recognised.

  updates: bz#1727008
  Change-Id: Ia68661898a618f99a240ca8d8a124ff6a65ebe9d
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
  (cherry picked from commit 7d270d05f835d41e32572501246b50181dc9be56)
* glusterd/svc: update pid of mux volumes from the shd process | Mohammed Rafi KC | 2019-07-24 | 9 | -35/+114

  For a normal volume, we are updating the pid from the process while we do a daemonization, or at the end of the init if it is no-daemon mode. Along with updating the pid we also lock the file, to make sure that the process is running fine.

  With brick mux, we were updating the pidfile from glusterd after an attach/detach request. There are two problems with this approach:
  1) We are not holding a pidlock for any file other than the parent process.
  2) There is a chance for possible race conditions with attach/detach. For example, shd start and a volume stop could race. Let's say we are starting an shd and it is attached to a volume. While we are trying to link the pid file to the running process, it could have been deleted by the thread doing a volume stop.

  Backport of: https://review.gluster.org/#/c/glusterfs/+/22935/
  > Change-Id: I29a00352102877ce09ea3f376ca52affceb5cf1a
  > Updates: bz#1722541
  > Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>

  Change-Id: I29a00352102877ce09ea3f376ca52affceb5cf1a
  Updates: bz#1732668
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* glusterd: conditionally clear txn_opinfo in stage op | Atin Mukherjee | 2019-07-20 | 1 | -2/+10

  ...otherwise this leads to a crash when volume status is run on a cluster in heterogeneous mode.

  > Fixes: bz#1723658
  > Change-Id: I0d39f412b2e5e9d3ef0a3462b90b38bb5364b09d
  > Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  > (cherry picked from commit a72452fcf90679b28baec12d2769cbaa982bb4e4)
  Fixes: bz#1728127
  Change-Id: I0d39f412b2e5e9d3ef0a3462b90b38bb5364b09d
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: do not mark skip_locking as true for geo-rep operations | Sanju Rakonde | 2019-07-19 | 1 | -2/+7

  We need to send the commit req to peers in the case of geo-rep operations even though it is a no-volname operation. In the commit phase peers try to set the txn_opinfo, which will fail because it is a no-volname operation where we don't require a commit phase.

  We mark skip_locking as true for no-volname operations, but we have to give an exception to geo-rep operations, so that they can set txn_opinfo in the commit phase.

  Please refer to the detailed RCA at the bug: 1730543

  fixes: bz#1730543
  Change-Id: I9f2478b12a281f6e052035c0563c40543493a3fc
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
  (cherry picked from commit b917974ee922d7a2e079692ad7d6f61f900b37b2)
* glusterd: Show the correct brick status in get-state | Mohit Agrawal | 2019-07-15 | 3 | -2/+37

  Problem: get-state does not show the correct brick status if the brick status is not Started; it always shows Started if any value is set in brickinfo->status.

  Solution: Check the value of brickinfo->status to show the correct status in get-state.

  Change-Id: I12a79619024c2cf59f338220d144f2f034059b3b
  fixes: bz#1726905
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
  (cherry picked from commit af989db23d1db00e087f2b9d3dfc43b13ef17153)
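  A hedged way to verify the fix (the exact output location of get-state may vary by version and configuration):

      gluster get-state
      # inspect the generated state file (by default under /var/run/gluster/) and
      # check that each brick's status field reflects its real state, not always Started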
* glusterd/thin-arbiter: Thin-arbiter integration with GD1 | Vishal Pandey | 2019-07-04 | 9 | -23/+730

  gluster volume create <VOLNAME> replica 2 thin-arbiter 1 <host1>:<brick1> <host2>:<brick2> <thin-arbiter-host>:<path-to-store-replica-id-file> [force]

  The changes have been made in a way that the last brick in the bricks list will be treated as the thin-arbiter. GD1 will be manipulated to consider the replica count to be 2 and continue creating the volume like any other replica 2 volume, but since thin-arbiter volumes need ta-brick client xlator entries for each subvolume in the fuse volfile, volfile generation is modified in a way to inject these entries separately in the volfile for every subvolume.

  A few more additions:
  1- Save the volinfo with the new fields ta_bricks list and thin_arbiter_count.
  2- Introduce a new option client.ta-brick-port to add remote-port to the ta-brick xlator entry in fuse volfiles. The option can be set using the following CLI syntax:
     gluster volume set <VOLNAME> client.ta-brick-port <PORTNO.>
  3- Volume Info will contain a Thin-Arbiter-path entry to distinguish from other replicate volumes.

  Change-Id: Ib434e2313b29716f32476c6c211d282c4ef39406
  Updates #687
  Signed-off-by: Vishal Pandey <vpandey@redhat.com>
  (cherry picked from commit 9b223b15ab69fce4076de036ee162f36a058bcd2)
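  A concrete sketch of the syntax above; the volume name, host names, brick paths and port number are placeholders, not taken from the commit:

      gluster volume create tavol replica 2 thin-arbiter 1 \
          host1:/bricks/tavol host2:/bricks/tavol ta-host:/bricks/tavol-ta
      gluster volume set tavol client.ta-brick-port 24009
      gluster volume info tavol        # should show a Thin-Arbiter-path entry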
* glusterd/shd: Change shd logfile to a unique name | Mohammed Rafi KC | 2019-06-24 | 6 | -33/+39

  With the shd mux changes, shd was having a logfile with the volname of the first started volume. This was creating a lot of confusion, as other volumes' data is also logged to a logfile which has a different vol name.

  With this change the logfile will be changed to a unique name, i.e. "/var/log/glusterfs/glustershd.log". This was the same logfile name before the shd mux.

  Change-Id: I2b94c1f0b2cf3c9493505dddf873687755a46dda
  fixes: bz#1721601
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
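  With this change, self-heal daemon messages for all volumes can be followed in one place (log path taken from the commit message):

      tail -f /var/log/glusterfs/glustershd.log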
* shd/mux: Fix race between mux_proc unlink and stop | Mohammed Rafi KC | 2019-06-24 | 1 | -0/+3

  There is a small race window, where we have a shd proc without having a connection. That is when we stopped the last shd running on a process. The list was removed outside of a lock just after stopping the process. So there is a window where we stopped the process, but the shd proc list contains the entry.

  Change-Id: Id82a82509e5cd72acac24e8b7b87197626525441
  fixes: bz#1722541
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* encryption/crypt: remove from volume file | Amar Tumballi | 2019-06-20 | 2 | -34/+0

  The feature is not supported and is moved out of the codebase from the glusterfs-5.x release. It doesn't make sense to keep the code to support it.

  For those who want to upgrade from a version supporting it to a higher version, please do a 'gluster volume reset $VOL encryption reset' and then continue with the upgrade process.

  updates: bz#1648169
  Change-Id: I8cf822c0d7195940bd37f6af2432a3cac68d44d1
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* glusterd-volgen.c: remove BD xlator from the graph | Yaniv Kaul | 2019-06-18 | 8 | -49/+1

  The BD xlator was removed some time ago. Remove it from the graph. We can also remove the caps settings - only the BD xlator was using it. Lastly, remove the caps (which only BD was using) and the document describing the translator.

  Change-Id: Id0adcb2952f4832a5dc6301e726874522e07935d
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* core: fedora 30 compiler warnings | SheetalPamecha | 2019-06-18 | 2 | -4/+2

  warning: ‘%s’ directive argument is null [-Wformat-overflow=]

  Change-Id: I69b8d47f0002c58b00d1cc947fac6f1c64e0b295
  updates: bz#1193929
  Signed-off-by: SheetalPamecha <spamecha@redhat.com>
* glusterd: log error message only when rsp.op_ret is negative | Sanju Rakonde | 2019-06-17 | 1 | -1/+1

  Problem: commit d42221bec9 added a log message based on an rsp.op_ret check, but while running subdir-mount.t this message is seen even on successful mounts.

  Solution: in __server_getspec(), the return value of sys_read() is assigned to ret, which will be a non-negative number when sys_read() is successful. This value is assigned to rsp.op_ret. We should log an error only when rsp.op_ret is negative.

  fixes: bz#1718848
  Change-Id: Ieef8ba33c2c7b4a97d4aef17543f58e66fd3b341
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd: add GF_TRANSPORT_BOTH_TCP_RDMA in glusterd_get_gfproxy_client_volfile | Atin Mukherjee | 2019-06-17 | 1 | -0/+1

  ... without which volume creation fails with "volume create: <xyz>: failed: Failed to create volume files".

  Fixes: bz#1716812
  Change-Id: I2f4c2c6d5290f066b54e1c1db19e25db9937bedb
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* clang-scan: resolve warning | Amar Tumballi | 2019-06-15 | 1 | -0/+6

  dht-common.c: because there was a 'goto err' before assigning the 'local' variable, there is a possibility of a NULL dereference. As the check which was done wouldn't ever be true, removed the check.

  glusterd-geo-rep.c: a possible path where 'slave_host' could be NULL when it gets passed to strcmp() was found. strcmp() expects a valid string. Add a NULL check.

  Updates: bz#1622665
  Change-Id: I64c280bc1beac9a2b109e8fa88f2a5ce8b823c3a
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* multiple files: another attempt to remove includes | Yaniv Kaul | 2019-06-14 | 1 | -3/+0

  There are many include statements that are not needed. A previous, more ambitious attempt failed because of the *BSD platform (see https://review.gluster.org/#/c/glusterfs/+/21929/ ). Now trying a more conservative reduction.

  It does not solve all circular deps that we have, but it does reduce some of them. There is just too much to handle reasonably (dht-common.h includes dht-lock.h which includes dht-common.h ...), but it does reduce the overall number of lines of include we need to look at in the future to understand and fix the mess later on.

  Change-Id: I550cd001bdefb8be0fe67632f783c0ef6bee3f9f
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* glusterd: store fips-mode-rchecksum option in the info file | Ravishankar N | 2019-06-08 | 1 | -0/+11

  commit 146e4b45d0ce906ae50fd6941a1efafd133897ea enabled storage.fips-mode-rchecksum option for all new volumes with op-version >= GD_OP_VERSION_7_0, but `gluster vol get $volname storage.fips-mode-rchecksum` was displaying it as 'off'. This patch fixes it.

  fixes: bz#1717782
  Change-Id: Ie09f89838893c5776a3f60569dfe8d409d1494dd
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* glusterd: Fix typos | Anoop C S | 2019-06-07 | 1 | -1/+1

  Change-Id: I8cf0a153f84ef2d162e6dd03261441d211c07d40
  updates: bz#1193929
  Signed-off-by: Anoop C S <anoopcs@redhat.com>
* glusterd: coverity fix | Mohit Agrawal | 2019-06-04 | 1 | -5/+11

  1401716: Resource leak
  1401714: Dereference before null check

  updates: bz#789278
  Change-Id: I8fb0b143a1d4b37ee6be7d880d9b5b84ba00bf36
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* glusterd/tier: gluster upgrade broken because of tier | hari gowtham | 2019-06-03 | 1 | -0/+9

  Problem: While the tier code was removed, the is_tier_enabled key wasn't handled for upgrade. As this option was missing in the info file, a checksum mismatch happens during upgrade. This results in peer rejections.

  Fix: Use the op-version check and always note down is_tier_enabled. This way it will be a dummy key, but future upgrades will work fine.

  NOTE: Just having the key from 3.10 to 7 will cause issues when upgraded from 5 to 8 or any such upgrade which skips the version where we handle it.

  Change-Id: I9951e2b74f16e58e884e746c34dcf53e559c7143
  fixes: bz#1714973
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
* posix: add storage.reserve-size option | Sheetal Pamecha | 2019-06-03 | 1 | -0/+33

  The storage.reserve-size option will take a size as input instead of a percentage. If set, priority will be given to storage.reserve-size over storage.reserve. The default value of this option is 0.

  fixes: bz#1651445
  Change-Id: I7a7342c68e436e8bf65bd39c567512ee04abbcea
  Signed-off-by: Sheetal Pamecha <sheetal.pamecha08@gmail.com>
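  A hedged usage sketch; the volume name is a placeholder and the accepted size syntax (a GB suffix is assumed here) is not spelled out in the commit:

      gluster volume set <VOLNAME> storage.reserve-size 10GB    # size value format is an assumption
      gluster volume get <VOLNAME> storage.reserve-size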
* glusterd: remove trivial conditions | Sanju Rakonde | 2019-06-01 | 1 | -4/+2

  updates: bz#1193929
  Change-Id: Ieb5e35d454498bc389972f9f15fe46b640f1b97d
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd: Optimize code to copy dictionary in handshake code path | Mohit Agrawal | 2019-05-31 | 4 | -36/+179

  Problem: When a high number of volumes (around 2000) is configured, glusterd has a bottleneck during handshake at the time of copying the dictionary.

  Solution: To avoid the bottleneck, serialize the dictionary instead of copying key-value pairs one by one.

  Change-Id: I9fb332f432e4f915bc3af8dcab38bed26bda2b9a
  fixes: bz#1711297
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* glusterd/shd: Optimize the glustershd manager to send reconfigure | Mohammed Rafi KC | 2019-05-31 | 1 | -4/+5

  Traditionally, every svc manager executes a process stop followed by a start each time it is called. But that is not required by shd, because the attach request implemented in the shd multiplex has the intelligence to check whether a detach is required prior to attaching the graph. So there is no need to send an explicit detach request if we are sure that the next call is an attach request.

  Change-Id: I9157c8dcaffdac038f73286bcf5646a3f1d3d8ec
  fixes: bz#1710054
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* glusterd/svc: Stop stale process using the glusterd_proc_stop | Mohammed Rafi KC | 2019-05-31 | 1 | -3/+3

  While restarting a glusterd process, when we have a stale pid we were doing a simple kill. Instead we can use glusterd_proc_stop, because it has more logging plus a force kill in case there is any problem with kill signal handling.

  Change-Id: I4a2dadc210a7a65762dd714e809899510622b7ec
  updates: bz#1710054
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* glusterd/svc: glusterd_svcs_stop should call individual wrapper function | Mohammed Rafi KC | 2019-05-31 | 2 | -7/+15

  glusterd_svcs_stop should call the individual wrapper function to stop a daemon rather than calling glusterd_svc_stop. For example, for shd it should call glusterd_shdsvc_stop instead of calling the basic API function to stop, because the individual functions for each daemon could be doing some specific operation in their wrapper function.

  Change-Id: Ie6d40590251ad470ef3901d1141ab7b22c3498f5
  fixes: bz#1712741
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* glusterd: add an op-version check | Sanju Rakonde | 2019-05-31 | 1 | -1/+5

  Problem: "gluster v status" hangs in a heterogeneous cluster when issued from a non-upgraded node.

  Cause: commit 34e010d64 fixes the txn-opinfo mem leak in the op-sm framework by not setting the txn-opinfo if some conditions are true. When vol status is issued from a non-upgraded node, the command hangs on its upgraded peer, as the upgraded node sets the txn-opinfo based on new conditions whereas non-upgraded nodes follow different conditions.

  Fix: Add an op-version check, so that all the nodes follow the same set of conditions to set txn-opinfo.

  fixes: bz#1710159
  Change-Id: Ie1f353212c5931ddd1b728d2e6949dfe6225c4ab
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd: coverity fix | Sanju Rakonde | 2019-05-30 | 1 | -2/+0

  1401590: Deadcode

  updates: bz#789278
  Change-Id: I3aa1d3aa9769e6990f74b6a53e288e788173c5e0
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd: bulkvoldict thread is not handling all volumes | Mohit Agrawal | 2019-05-27 | 1 | -1/+1

  Problem: In commit ac70f66c5805e10b3a1072bd467918730c0aeeb4 I missed one condition to populate the volume dictionary in multiple threads while brick_multiplex is enabled. Due to that, glusterd is not sending the volume dictionary for all volumes to the peer.

  Solution: Update the condition in the code as well as the test case to avoid the issue.

  Change-Id: I06522dbdfee4f7e995d9cc7b7098fdf35340dc52
  fixes: bz#1711250
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* glusterd/tier: remove tier related code from glusterd | Hari Gowtham | 2019-05-27 | 26 | -5305/+126

  The handler functions are pointed to dummy functions. The switch-case handling for tier has also been moved to point to the default case to avoid issues if it is reintroduced. The tier changes in DHT still remain as such.

  updates: bz#1693692
  Change-Id: I80d80c9a3eb862b4440a36b31ae82b2e9d92e4dc
  Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
* Fix some "Null pointer dereference" coverity issuesXavi Hernandez2019-05-261-2/+5
| | | | | | | | | | | | | | | | | | | | | | This patch fixes the following CID's: * 1124829 * 1274075 * 1274083 * 1274128 * 1274135 * 1274141 * 1274143 * 1274197 * 1274205 * 1274210 * 1274211 * 1288801 * 1398629 Change-Id: Ia7c86cfab3245b20777ffa296e1a59748040f558 Updates: bz#789278 Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* glusterd: coverity fix | Sanju Rakonde | 2019-05-23 | 1 | -1/+1

  CID: 1401345 - Unused value

  updates: bz#789278
  Change-Id: I6b8f2611151ce0174042384b7632019c312ebae3
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd-utils.c: skip checksum when possible. | Yaniv Kaul | 2019-05-23 | 1 | -22/+18

  We only need to calculate and write the checksum in the case of !is_quota_conf. Align the code accordingly. Also, use a smaller buffer (to write few chars).

  Change-Id: I40c83ce10447df77ff9975d314d768ec2c0087c2
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* glusterd: Add gluster volume stop operation to glusterd_validate_quorum() | Vishal Pandey | 2019-05-11 | 1 | -1/+1

  ISSUE: gluster volume stop succeeds even if quorum is not met.

  Fix: Add GD_OP_STOP_VOLUME to gluster_validate_quorum in glusterd_mgmt_v3_pre_validate(). Since the volume stop command has been ported from synctask to mgmt_v3, the quorum check was missed out.

  Change-Id: I7a634ad89ec2e286ea262d7952061efad5360042
  fixes: bz#1690753
  Signed-off-by: Vishal Pandey <vpandey@redhat.com>
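  A hedged way to exercise the restored check on a test cluster; the quorum options are the standard server-quorum settings, and the exact rejection message is not taken from the commit:

      gluster volume set <VOLNAME> cluster.server-quorum-type server
      gluster volume set all cluster.server-quorum-ratio 51
      # with a majority of glusterd nodes down (quorum not met), volume stop should now be rejected:
      gluster volume stop <VOLNAME>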
* shd/glusterd: Serialize shd manager to prevent race condition | Mohammed Rafi KC | 2019-05-10 | 3 | -0/+18

  At the time of a glusterd restart, while doing a handshake there is a possibility that multiple shd managers might get executed. Because of this, there is a chance that multiple shds get spawned during a glusterd restart.

  Change-Id: Ie20798441e07d7d7a93b7d38dfb924cea178a920
  fixes: bz#1707081
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* glusterd: fix inconsistent global option output in volume get | Atin Mukherjee | 2019-05-09 | 1 | -2/+2

  "volume get all all | grep <key>" and "volume get <volname> all | grep <key>" dump two different output values for cluster.brick-multiplex and cluster.server-quorum-ratio.

  Fixes: bz#1707700
  Change-Id: Id131734e0502aa514b84768cf67fce3c22364eae
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
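  After the fix, both query forms quoted above should report the same value for a global option, for example:

      gluster volume get all all | grep cluster.server-quorum-ratio
      gluster volume get <VOLNAME> all | grep cluster.server-quorum-ratio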
* glusterd: reduce some work in glusterd-utils.c | Yaniv Kaul | 2019-05-09 | 1 | -125/+128

  Similar to https://review.gluster.org/#/c/glusterfs/+/22652/ , reduce some of the work by using smaller buffers and less conversion of parameters when snprintf()'ing them. On the way, remove some clang warnings, mainly on dead assignment.

  Change-Id: Ie51e6d6f14df6b2ccbebba314cf937af08839741
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* glusterd: improve logging in __server_getspec() | Sanju Rakonde | 2019-05-08 | 2 | -2/+15

  updates: bz#1193929
  Change-Id: Idad745d5869c92e6bed71842f14bc1a3362ca4bd
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd/store: store all key-values in one shot | Yaniv Kaul | 2019-05-08 | 2 | -323/+292

  Instead of saving each key-value separately, which is slow (especially as we fflush() after each!), store them all as one string and write them all together.

  Implements https://github.com/gluster/glusterfs/issues/629

  Change-Id: Ie77a272446b0b6785584b710a4fdd9c613dd9578
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat,.com>
* glusterd: coverity fix | Atin Mukherjee | 2019-05-06 | 1 | -2/+14

  CID: 1382403 (CHECKED_RETURN)

  Updates: bz#789278
  Change-Id: I4c57b93fd3d14c524ff8519ed876f029834de306
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: prevent use-after-free in glusterd_op_ac_send_brick_op() | Niels de Vos | 2019-05-06 | 1 | -2/+1

  Coverity reported that GF_FREE(req_ctx) could be called 2x on req_ctx.

  Change-Id: I9120686e5920de8c27688e10de0db6aa26292064
  CID: 1401115
  Updates: bz#789278
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
* glusterd-utils.c: reduce work in glusterd_add_volume_to_dict() | Yaniv Kaul | 2019-05-03 | 1 | -58/+59

  1. Use small arrays, 32 or 64 bytes should suffice.
  2. Do not repeat the pattern of snprintf '%s.%d', prefix, count over and over.

  Change-Id: Ief6de78b766d9a07acb6256fc4830f4f3cfba7c9
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* glusterd: Fix coverity defects & put coverity annotations | Atin Mukherjee | 2019-05-02 | 14 | -33/+65

  Along with fixing a few defects, put the required annotations for the defects which are marked ignore/false positive/intentional as per the coverity defect sheet. This should avoid the per-component graph showing many defects as open on the coverity glusterfs web page.

  Updates: bz#789278
  Change-Id: I19461dc3603a3bd8f88866a1ab3db43d783af8e4
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: Fix bulkvoldict thread logic in brick multiplexing | Mohit Agrawal | 2019-04-30 | 1 | -6/+18

  Problem: Currently glusterd spawns bulkvoldict threads in a brick_mux environment while the number of volumes is less than the configured glusterd.vol_count_per_thread.

  Solution: Correct the logic to spawn the bulkvoldict thread:
  1) Calculate endindex only while the total thread count is non-zero.
  2) Update the end index correctly to pass the index to the bulkvoldict thread.

  Fixes: bz#1704252
  Change-Id: I1def847fbdd6a605e7687bfc4e42b706bf0eb70b
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>