path: root/tests/bugs/glusterd
Commit history (most recent first); each entry shows the subject (author, date, files changed, lines -/+) followed by the commit message.
* glusterd: Fix Add-brick with increasing replica count failure (Sheetal Pamecha, 2020-09-24, 1 file, -0/+21)
  Problem: the add-brick operation fails with a "multiple bricks on same server" error when the replica count is increased. This happened because of extra iterations in the loop that compares hostnames: if fewer bricks than the "replica" count were supplied, a brick would be compared against itself, producing the above error.
  Fixes: #1508
  Change-Id: I8668e964340b7bf59728bb838525d2db062197ed
  Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>
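  The operation being fixed can be exercised with the standard CLI; a rough sketch, with purely illustrative volume and brick names:

      # replica 2 volume on two nodes
      gluster volume create testvol replica 2 server1:/bricks/b1 server2:/bricks/b2
      gluster volume start testvol
      # grow to replica 3 with a single new brick on a third node; before the fix
      # this form could be rejected with the "multiple bricks on same server" error
      gluster volume add-brick testvol replica 3 server3:/bricks/b3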
* tests: provide an option to mark tests as 'flaky' (Amar Tumballi, 2020-08-20, 4 files, -246/+0)
  * Also add some time gap in other tests to see if we get things properly.
  * Create a directory 'tests/000/', which can host any tests which are flaky.
  * Move all the tests mentioned in the issue to the above directory.
  * As the above dir gets tested first, all flaky tests are reported quickly.
  * Change `run-tests.sh` to continue tests even if flaky tests fail.
  Reference: gluster/project-infrastructure#72
  Updates: #1000
  Change-Id: Ifdafa38d083ebd80f7ae3cbbc9aa3b68b6d21d0e
  Signed-off-by: Amar Tumballi <amar@kadalu.io>
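  In practice, "marking" a test as flaky simply means relocating it; a sketch with an invented test file name:

      # move a known-flaky test into the early, non-fatal bucket
      git mv tests/bugs/glusterd/some-flaky-test.t tests/000/some-flaky-test.t
      # run-tests.sh executes tests/000/ first and keeps going even if those tests fail
      ./run-tests.sh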
* glusterd: getspec() returns wrong response when volfile not found (Tamar Shacked, 2020-07-23, 1 file, -0/+3)
  In a cluster environment, getspec() detects that the volfile is not found, but further on this return code is overwritten by another call, so the error is lost and not handled. As a result the server responds with an ambiguous message, {op_ret = -1, op_errno = 0..}, which causes the client to get stuck.
  Fix: on the server side, don't override the failure error.
  fixes: #1375
  Change-Id: Id394954d4d0746570c1ee7d98969649c305c6b0d
  Signed-off-by: Tamar Shacked <tshacked@redhat.com>
* tests: added volume operations to increase code coverage (nik-redhat, 2020-07-06, 1 file, -14/+0)
  Added a test for volume options like localtime-logging, fixed enable-shared-storage to include function coverage, and added a few negative tests for other volume options to increase code coverage in the glusterd component.
  Change-Id: Ib1706c1fd5bc98a64dcb5c8b15a121d639a597d7
  Updates: #1052
  Signed-off-by: nik-redhat <nladha@redhat.com>
* glusterd: add-brick command failure (Sanju Rakonde, 2020-06-21, 1 file, -0/+40)
  Problem: the add-brick operation fails when the replica or disperse count is not mentioned in the add-brick command.
  Reason: with commit a113d93 we check the brick order while doing an add-brick operation for replica and disperse volumes. If the replica count or disperse count is not mentioned in the command, the dict get fails, resulting in add-brick operation failure.
  fixes: #1306
  Change-Id: Ie957540e303bfb5f2d69015661a60d7e72557353
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
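  Omitting the count is valid CLI when the replica count does not change, which is exactly the form that used to fail; an illustrative example on a replica 3 volume:

      # add one full replica set without repeating "replica 3"
      gluster volume add-brick repvol server1:/bricks/b4 server2:/bricks/b4 server3:/bricks/b4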
* tests/glusterd: spurious failure of tests/bugs/glusterd/mgmt-handshake-and-volume-sync-post-glusterd-restart.t (Sanju Rakonde, 2020-06-17, 1 file, -3/+15)
  Test Summary Report
  -------------------
  tests/bugs/glusterd/mgmt-handshake-and-volume-sync-post-glusterd-restart.t (Wstat: 0 Tests: 23 Failed: 3)
    Failed tests: 21-23
  After a glusterd restart, volume start is failing. It looks like glusterd needs some time to sync the data, so a sleep is added for that.
  Note: all other changes are made to avoid spurious failures in the future.
  fixes: #1272
  Change-Id: Ib184757fb936e03b5b6208465e44a8e790b71c1c
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
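  The general way such restart races are handled in the .t framework looks roughly like this; the helper names ($CLI_1, start_glusterd, $PROBE_TIMEOUT) follow the usual cluster.rc conventions and are illustrative, not the exact lines added by this patch:

      function peer_count {
          $CLI_1 peer status | grep 'Peer in Cluster (Connected)' | wc -l
      }
      TEST start_glusterd 2                        # bring glusterd back on node 2
      EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count    # wait for the handshake to complete
      TEST $CLI_1 volume start $V0                 # only then attempt the volume start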
* test: Test case brick-mux-validation-in-cluster.t is failing on RHEL-8 (Mohit Agrawal, 2020-06-09, 1 file, -3/+1)
  Brick processes are not properly attached on any cluster node when some volume options are changed on a peer node while glusterd is down on that specific node.
  Solution: at the time of glusterd restart, glusterd gets a friend update request from a peer node if the peer node has some changes on a volume. If the brick process is started before the friend update request is received, the brick_mux behaviour does not work properly: all bricks are attached to the same process even though the volume options are not the same. To avoid the issue, introduce an atomic flag volpeerupdate and update its value when glusterd receives a friend update request from a peer for a specific volume. If the volpeerupdate flag is 1, the volume is started by the glusterd_import_friend_volume synctask.
  Change-Id: I4c026f1e7807ded249153670e6967a2be8d22cb7
  Credit: Sanju Rakaonde <srakonde@redhat.com>
  fixes: #1290
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* io-cache,quick-read: deprecate volume options with flawed semantics or naming (Csaba Henk, 2020-06-02, 2 files, -5/+10)
  - performance.cache-size has flawed semantics, as it is dispatched to two independent translators, io-cache and quick-read.
  - performance.qr-cache-timeout has a confusing name, as other options affecting quick-read have an unabbreviated "quick-read-..." prefix in their names.
  We keep these options operating unchanged, but in the help output we indicate their deprecation. The following better alternatives are introduced:
  - performance.io-cache-size to tune the cache-size option of io-cache
  - performance.quick-read-cache-size to tune the cache-size option of quick-read
  - performance.quick-read-cache-timeout as a preferred synonym for performance.qr-cache-timeout
  Fixes: #952
  Change-Id: Ibd04fb638de8cac450ba992ad8a415154f9f4281
  Signed-off-by: Csaba Henk <csaba@redhat.com>
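  The new option names come from the commit itself; the volume name and values below are illustrative:

      gluster volume set myvol performance.io-cache-size 64MB
      gluster volume set myvol performance.quick-read-cache-size 128MB
      gluster volume set myvol performance.quick-read-cache-timeout 5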
* tests: Fix for spurious failure for some test cases (Mohit Agrawal, 2020-04-16, 1 file, -0/+1)
  Problem: sometimes a test case fails while creating files on the mount point right after mounting the volume.
  Solution: after starting the volume we need to wait until all brick instances are completely started, so put an online_brick_count check just after the volume is started.
  Change-Id: I5020e7e417539377277ca00189f9c51d2cf877a6
  Fixes: #1162
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
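  The added check follows the common .t idiom, roughly like this (the expected brick count of 3 is illustrative):

      TEST $CLI volume start $V0
      # wait until every brick of the volume is online before touching the mount
      EXPECT_WITHIN $PROCESS_UP_TIMEOUT 3 online_brick_count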
* tests: Fix spurious failure of tests/bugs/glusterd/serialize-shd-manager-glusterd-restart.t (Mohit Agrawal, 2020-04-13, 1 file, -1/+1)
  Problem: sometimes volume status fails after restarting glusterd on one cluster node.
  Solution: wait for the glusterd handshake to finish on the restarted cluster node.
  Change-Id: Ib23ca41c943caf2903c61ebf42dc437c1b9d6054
  Fixes: #1158
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* posix: Avoid dict_del logs in posix_is_layout_stale while key is NULL (Mohit Agrawal, 2020-04-09, 1 file, -2/+2)
  Problem: the key GF_PREOP_PARENT_KEY is populated by dht; for a non-distribute volume such as a 1x3 replica the key is not populated, so posix_is_layout_stale throws a message while a file is created.
  Solution: to avoid the log, put a condition before deleting the key.
  Change-Id: I813ee7960633e7f9f5e9ad2f42f288053d9eb71f
  Fixes: #1150
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* glusterd: Brick process fails to come up with brickmux on (Vishal Pandey, 2020-02-20, 1 file, -1/+60)
  Issue:
  1- In a cluster of 3 nodes N1, N2, N3, create 3 volumes vol1, vol2, vol3 with 3 bricks (one from each node).
  2- Set cluster.brick-multiplex on.
  3- Start all 3 volumes.
  4- Check that all bricks on a node are running on the same port.
  5- Kill N1.
  6- Set performance.readdir-ahead for volumes vol1, vol2, vol3.
  7- Bring N1 up and check volume status.
  8- Brick processes are not running on N1.
  Root cause: since there is a diff in volfile versions on N1 compared to N2 and N3, glusterd_import_friend_volume() is called. glusterd_import_friend_volume() copies the new_volinfo, deletes the old_volinfo and then calls glusterd_start_bricks(). glusterd_start_bricks() looks for the volfiles and sends an rpc request to glusterfs_handle_attach(). Since the volinfo has been deleted by glusterd_delete_stale_volume() from the priv->volumes list before glusterd_start_bricks(), and glusterd_create_volfiles_and_notify_services() and glusterd_list_add_order() are called after glusterd_start_bricks(), the attach RPC request gets an empty volfile path and that causes the brick to crash.
  Fix: call glusterd_list_add_order() and glusterd_create_volfiles_and_notify_services() before the glusterd_start_bricks() call is made in glusterd_import_friend_volume.
  Change-Id: Idfe0e8710f7eb77ca3ddfa1cabeb45b2987f41aa
  Fixes: bz#1773856
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
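  The CLI side of the reproducer, sketched with the volume names from the issue steps (node operations such as killing and restarting N1 are elided):

      gluster volume set all cluster.brick-multiplex on        # step 2
      # ... stop glusterd on N1 (step 5), then on a surviving node:
      gluster volume set vol1 performance.readdir-ahead on     # step 6
      gluster volume set vol2 performance.readdir-ahead on
      gluster volume set vol3 performance.readdir-ahead on
      # bring N1 back up and verify bricks are running (steps 7-8)
      gluster volume status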
* glusterd: default options after volume reset (Sanju Rakonde, 2020-01-01, 1 file, -0/+5)
  Problem: the default option transport.address-family disappears from the volume info output after a volume reset.
  Cause: from 3.8.0 onwards the volume option transport.address-family has a default value, so any volume which is created will have this option set and volume info will show it in its output. But with volume reset, this option is not handled.
  Solution: in glusterd_enable_default_options(), we should add this option along with the other default options. This function is called by glusterd_options_reset() for the volume reset command.
  fixes: bz#1786478
  Change-Id: I58f7aa24cf01f308c4efe6cae748cc3bc8b99b1d
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
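  A quick way to observe the behaviour the fix restores (volume name illustrative):

      gluster volume reset myvol
      # with the fix, the default should still be listed among the volume's options
      gluster volume info myvol | grep transport.address-family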
* [Cli] Removing old log rotate command. (kshithijiyer, 2019-12-17, 1 file, -1/+2)
  The old command for log rotate is still present; remove it completely. Also add a testcase to test the log rotate command with both the old and the new syntax, and fix testcases which use the old syntax to use the new one.
  Code to be removed:
  1. In cli-cmd-volume.c, from struct cli_cmd volume_cmds[]:
     {"volume log rotate <VOLNAME> [BRICK]", cli_cmd_log_rotate_cbk,
      "rotate the log file for corresponding volume/brick"
      " NOTE: This is an old syntax, will be deprecated from next release."},
  2. In cli-cmd-volume.c, from cli_cmd_log_rotate_cbk():
     ||(strcmp("rotate", words[2]) == 0)))
  3. In cli-cmd-parser.c, from cli_cmd_log_rotate_parse():
     if (strcmp("rotate", words[2]) == 0)
         volname = (char *)words[3];
     else
  fixes: bz#1750387
  Change-Id: I56e4d295044e8d5fd1fc0d848bc87e135e9e32b4
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
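  For reference, the two syntaxes side by side (volume name illustrative; the removed form is the one quoted from volume_cmds[] above):

      # old syntax, removed by this change
      gluster volume log rotate myvol
      # current syntax
      gluster volume log myvol rotate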
* glusterd: Client Handling of Elastic Clusters (Mohit Agrawal, 2019-11-12, 1 file, -0/+60)
  Configure the list of gluster servers in the key GLUSTERD_BRICK_SERVERS at the time of the GETSPEC RPC call, and access the value on the client side to update the volfile server list, so that the client is able to connect to the next volfile server if the current volfile server goes down.
  Updates: #741
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
  Change-Id: I23f36ddb92982bb02ffd83937a8bd8a2c97e8104
* cli: display detailed rebalance info (Sanju Rakonde, 2019-11-12, 1 file, -0/+9)
  Problem: when one of the nodes in the cluster is down, rebalance status does not display detailed information.
  Cause: in glusterd_volume_rebalance_use_rsp_dict() we aggregate the rsp from all the nodes into a dictionary and send it to cli for printing. While assigning an index to the keys we consider all the peers instead of only the peers which are up. Because of this the index does not reach 1; while parsing the rsp, cli is unable to find the status-1 key in the dictionary and returns without printing any information.
  Solution: the simplest fix for this without much code change is to continue looking for the other keys when the status-1 key is not found.
  fixes: bz#1764119
  Change-Id: I0062839933c9706119eb85416256eade97e976dc
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
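  The command whose output is affected, for reference (volume name illustrative):

      # with one peer down, this should still print per-node rebalance details
      gluster volume rebalance myvol status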
* tests: mark tests/bugs/glusterd/quorum-value-check.t as NFS test (Sanju Rakonde, 2019-10-15, 1 file, -0/+2)
  Fixes: bz#1665358
  Change-Id: Iea000dd839d4e4dbef45941f97ab3725a2aa1726
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd: rebalance start should fail when quorum is not met (Sanju Rakonde, 2019-10-10, 1 file, -0/+2)
  Rebalance start should not succeed if quorum is not met. This patch adds a condition to check whether quorum is met in the pre-validation stage.
  fixes: bz#1760467
  Change-Id: Ic7d0d08f69e4bc6d5e7abae713ec1881531c8ad4
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
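  The kind of assertion the added test makes, sketched in .t style (option values and variable names are illustrative):

      TEST $CLI volume set all cluster.server-quorum-ratio 95
      TEST $CLI volume set $V0 cluster.server-quorum-type server
      # with a majority of glusterds down, quorum is lost and the start must fail
      TEST ! $CLI volume rebalance $V0 start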
* glusterd: ./tests/bugs/glusterd/bug-1595320.t is failing (Mohit Agrawal, 2019-08-19, 1 file, -1/+1)
  Problem: sometimes ./tests/bugs/glusterd/bug-1595320.t fails at the time of checking the brick process after sending a kill signal to the brick process.
  Solution: wait for some time after sending the kill signal to the brick process to make sure the brick process has stopped.
  Change-Id: Iee9e91284618abfc62a550d47e4f9117785def58
  Fixes: bz#1743200
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* glusterd: Can't run rebalance due to long unix socket (Mohit Agrawal, 2019-06-25, 1 file, -0/+50)
  Problem: glusterd populates the unix socket file name based on the volname; if the volname is lengthy, socket system calls fail because the maximum length defined in the kernel is breached.
  Solution: convert the unix socket name to a hash to resolve the issue.
  Change-Id: I5072e8184013095587537dbfa4767286307fff65
  fixes: bz#1720566
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
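  A sketch of the failure scenario, assuming a long-enough volume name to overflow the kernel's unix socket path limit (sun_path is roughly 108 bytes on Linux); names and paths are illustrative:

      LONGVOL=$(printf 'v%.0s' {1..230})
      gluster volume create $LONGVOL server1:/bricks/b1 server2:/bricks/b2
      gluster volume start $LONGVOL
      # before the fix, the rebalance daemon's socket path derived from $LONGVOL was too long
      gluster volume rebalance $LONGVOL start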
* glusterd: add GF_TRANSPORT_BOTH_TCP_RDMA in glusterd_get_gfproxy_client_volfile (Atin Mukherjee, 2019-06-17, 1 file, -1/+3)
  ... without which volume creation fails with "volume create: <xyz>: failed: Failed to create volume files"
  Fixes: bz#1716812
  Change-Id: I2f4c2c6d5290f066b54e1c1db19e25db9937bedb
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
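  The code path in question is exercised by creating a volume with both transports; a sketch with illustrative names:

      gluster volume create myvol transport tcp,rdma server1:/bricks/b1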
* glusterfsd/cleanup: Protect graph object under a lock (Mohammed Rafi KC, 2019-05-31, 1 file, -1/+3)
  While processing the cleanup_and_exit function we access a graph object, but this access is not protected by a lock. A parallel cleanup of the graph is quite possible, which might lead to an invalid memory access.
  Change-Id: Id05ca70d5b57e172b0401d07b6a1f5386c044e79
  fixes: bz#1708926
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* glusterd: bulkvoldict thread is not handling all volumes (Mohit Agrawal, 2019-05-27, 1 file, -6/+10)
  Problem: in commit ac70f66c5805e10b3a1072bd467918730c0aeeb4 I missed one condition for populating the volume dictionary in multiple threads while brick_multiplex is enabled. Due to that, glusterd does not send the volume dictionary for all volumes to the peer.
  Solution: update the condition in the code, and update the test case as well to avoid the issue.
  Change-Id: I06522dbdfee4f7e995d9cc7b7098fdf35340dc52
  fixes: bz#1711250
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* glusterd: Add gluster volume stop operation to glusterd_validate_quorum() (Vishal Pandey, 2019-05-11, 1 file, -1/+3)
  Issue: gluster volume stop succeeds even if quorum is not met.
  Fix: add GD_OP_STOP_VOLUME to gluster_validate_quorum in glusterd_mgmt_v3_pre_validate(). Since the volume stop command has been ported from synctask to mgmt_v3, the quorum check was missed out.
  Change-Id: I7a634ad89ec2e286ea262d7952061efad5360042
  fixes: bz#1690753
  Signed-off-by: Vishal Pandey <vpandey@redhat.com>
* shd/glusterd: Serialize shd manager to prevent race condition (Mohammed Rafi KC, 2019-05-10, 1 file, -0/+54)
  At the time of a glusterd restart, while doing a handshake there is a possibility that multiple shd managers might get executed. Because of this, there is a chance that multiple shds get spawned during a glusterd restart.
  Change-Id: Ie20798441e07d7d7a93b7d38dfb924cea178a920
  fixes: bz#1707081
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* glusterd: define dumpops in the xlator_api of glusterd (Sanju Rakonde, 2019-04-27, 1 file, -0/+13)
  Problem: statedump is not capturing information related to glusterd.
  Solution: statedump is not capturing glusterd info because trav->dumpops is null in gf_proc_dump_single_xlator_info(), where trav is the glusterd xlator object. trav->dumpops is null because we missed defining dumpops in the xlator_api of glusterd. Defining dumpops in the xlator_api of glusterd fixes the issue.
  fixes: bz#1703629
  Change-Id: If85429ecb1ef580aced8d5b88d09fc15258bfc4c
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
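  For reference, a statedump is normally triggered with SIGUSR1; a sketch, assuming the default dump location of /var/run/gluster:

      # ask glusterd to write a statedump; with this fix the glusterd section is populated
      kill -SIGUSR1 $(pidof glusterd)
      ls /var/run/gluster/*dump*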
* glusterd: Optimize glusterd handshaking code path (Mohit Agrawal, 2019-04-15, 1 file, -0/+69)
  Problem: at the time of handshaking, glusterd populates volume data in a dictionary. When more than 1500 volumes are configured, glusterd takes more than 10 minutes to generate the data. Because of this delay, rpc requests time out and rpc starts bailing out call frames.
  Solution: to optimize the code, the following changes were done:
  1) Spawn multiple threads to populate volume data in bulk in a separate dictionary, and introduce an option glusterd.brick-dict-thread-count to configure the number of threads used to populate volume data.
  2) Populate tier data only while the volume type is tier.
  3) Compare snap data only while snap_count is non-zero.
  Fixes: bz#1699339
  Change-Id: I38dc71970c049217f9d1a06fc0aaf4c26eab18f5
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
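  Assuming the new option is tuned like other cluster-wide glusterd options (this is an assumption; check "gluster volume set help" for the exact form), usage would look roughly like:

      # raise the number of threads used to build the bulk volume dictionary
      gluster volume set all glusterd.brick-dict-thread-count 10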
* core: Log level changes do not take effect on a running client process (Mohit Agrawal, 2019-04-15, 1 file, -0/+113)
  Problem: commit c34e4161f3cb6539ec83a9020f3d27eb4759a975 set the log-level per xlator during reconfigure only for a brick process, not for the client process.
  Solution: 1) change the per-xlator log-level only if brick_mux is enabled. To be sure about brick multiplexing, introduce a flag brick_mux at ctx->cmd_args.
  Note: there are two other changes done with this patch:
  1) Ignore the client-log-level option when attaching a brick to an already running brick if brick_mux is enabled.
  2) Add a log printing the pid of the running process to make debugging easier.
  Change-Id: I39e85de778e150d0685cd9a79425ce8b4783f9c9
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
  Fixes: bz#1696046
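  The behaviour being fixed is easy to exercise from the CLI (volume name illustrative):

      # change the client-side log level at runtime; with the fix this takes
      # effect on already-mounted clients as well
      gluster volume set myvol diagnostics.client-log-level DEBUG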
* mgmt/shd: Implement multiplexing in self heal daemon (Mohammed Rafi KC, 2019-04-01, 1 file, -4/+4)
  Problem: the shd daemon is per node, which means it creates a graph with all volumes on it. While this is great for utilizing resources, it is not so good in terms of performance and manageability, because self-heal daemons don't have the capability to automatically reconfigure their graphs. So each time any configuration change happens to the volumes (replicate/disperse), we need to restart shd to bring the changes into the graph. Because of this, all ongoing heals for all other volumes have to be stopped in the middle and restarted all over again.
  Solution: this change makes shd a per-volume daemon, so that a graph is generated for each volume. When we want to start/reconfigure shd for a volume, we first search for an existing shd running on the node; if there is none, we start a new process. If a daemon is already running for shd, then we simply detach the graph for the volume and reattach the updated graph for the volume. This won't touch any ongoing operations for any other volumes on the shd daemon.
  Example of an shd graph when it is a per-volume graph:

           -----------------------
           |    debug-iostat     |
           -----------------------
             /        |        \
            /         |         \
      ---------   ---------   ----------
      | AFR-1 |   | AFR-2 |   | AFR-3  |
      ---------   ---------   ----------

  A running shd daemon with 3 volumes will be like:

           -----------------------
           |    debug-iostat     |
           -----------------------
             /         |         \
            /          |          \
    ------------  ------------  ------------
    | volume-1 |  | volume-2 |  | volume-3 |
    ------------  ------------  ------------

  Change-Id: Idcb2698be3eeb95beaac47125565c93370afbd99
  fixes: bz#1659708
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* Bump up timeout for tests on AWS (Nigel Babu, 2019-02-07, 1 file, -2/+2)
  Fixes: bz#1672727
  Change-Id: I2b9be45f199f6436b858536c6f49be85902217f0
  Signed-off-by: Nigel Babu <nigelb@redhat.com>
* tests: run nfs tests only if --enable-gnfs is provided (Amar Tumballi, 2019-01-24, 1 file, -0/+2)
  Fixes: bz#1665358
  Change-Id: Idbf88ec3ac683733b32c313377eeb72f2819bf0d
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* libglusterfs/common-utils.c: Fix buffer size for checksum computation (Varsha Rao, 2019-01-11, 1 file, -0/+35)
  Problem: when the quorum count option is updated, the change is not reflected in the nfs-server.vol file. This is because in get_checksum_for_file(), when the last chunk of the file read is smaller than the buffer size, the read buffer retains stale data from the previous read alongside the newly read data.
  Solution: pass the number of bytes read instead of the fixed buffer size when calculating the checksum.
  Change-Id: I4b641607c8a262961b3f3da0028a54e08c3f8589
  fixes: bz#1657744
  Signed-off-by: Varsha Rao <varao@redhat.com>
* cluster/afr: Disable client side heals in AFR by default. (Sunil Kumar Acharya, 2019-01-10, 1 file, -0/+3)
  With this changeset, the default value for the AFR client-side heal volume options is set to "off".
  fixes: bz#1663102
  Change-Id: Ie4016932339c4896487e3e7cb5caca68739b7ba2
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
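  Assuming the options in question are the usual AFR client-side heal knobs, they can still be switched back on per volume (volume name illustrative):

      gluster volume set myvol cluster.data-self-heal on
      gluster volume set myvol cluster.metadata-self-heal on
      gluster volume set myvol cluster.entry-self-heal on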
* glusterd: coverity fixes (Atin Mukherjee, 2018-11-03, 1 file, -0/+3)
  Addresses CIDs: 1124769, 1124852, 1124864, 1134024, 1229876, 1382382
  Also addressed a spurious failure in tests/bugs/glusterd/df-results-post-replace-brick-operations.t to ensure that, post replace-brick operation and before triggering 'df' from the mount, the client has a connection to the newly replaced bricks.
  Change-Id: Ie5d7e02f89400a661491d7fc2a120d6f6a83a1cc
  Updates: bz#789278
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* tiering: remove the translator from build and glusterd (Amar Tumballi, 2018-11-02, 1 file, -78/+0)
  Based on the proposal to remove a few features as they are not actively maintained [1], remove the tier translator from the build. Also make sure no regression tests involving the tiering feature are present.
  [1] https://lists.gluster.org/pipermail/gluster-users/2018-July/034400.html
  Change-Id: I2c177f711f9b54b7b24e1a13525ff3132bd9a9c5
  updates: bz#1642807
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* glusterd: set fsid while performing replace brick (Sanju Rakonde, 2018-11-02, 1 file, -0/+58)
  While performing the replace-brick operation, we should set the fsid value for the new brick.
  fixes: bz#1637196
  Change-Id: I9e9a4962fc0c2f5dff43e4ac11767814a0c0beaf
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
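  The operation exercised by the added test, with illustrative volume and brick names:

      gluster volume replace-brick myvol server1:/bricks/old server1:/bricks/new commit force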
* tests: brick-mux-fd-cleanup.t should be under core directory (Atin Mukherjee, 2018-10-31, 1 file, -78/+0)
  Fixes: bz#1637934
  Change-Id: I5f95beab62bd2bdde3bbee94c308b0ad03e94379
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* stripe: remove the translator from build and glusterd (Amar Tumballi, 2018-10-31, 2 files, -9/+2)
  Based on the proposal to remove a few features as they are not actively maintained [1], remove the stripe translator from the build. Also make sure there are no regression tests involving the stripe translator.
  [1] https://lists.gluster.org/pipermail/gluster-users/2018-July/034400.html
  Note that this patch aims at removing the translator from the build; a follow-up patch is needed to remove the code from the repository.
  Updates: bz#1364707
  Change-Id: I235b305338f138e29e9f30cba65bc0dadbebbbd5
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* glusterd: ensure volinfo->caps is set to correct value. (Sanju Rakonde, 2018-10-25, 1 file, -0/+9)
  With commit febf5ed4848, during the volume create op we set volinfo->caps to 0 only if any of the bricks belong to the same node and brickinfo->vg[0] is null. Previously, we used to set volinfo->caps to 0 when either a brick doesn't belong to the same node or brickinfo->vg[0] is null.
  With this patch, we set volinfo->caps to 0 when either a brick doesn't belong to the same node or brickinfo->vg[0] is null (as we did earlier, before commit febf5ed4848).
  fixes: bz#1635820
  Change-Id: I00a97415786b775fb088ac45566ad52b402f1a49
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* tests: correction in tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t (Sanju Rakonde, 2018-10-25, 1 file, -9/+18)
  Patch https://review.gluster.org/#/c/glusterfs/+/19135/ optimised glusterd test cases by clubbing similar test cases into a single test case. The test case https://review.gluster.org/#/c/glusterfs/+/19135/15/tests/bugs/glusterd/bug-1293414-import-brickinfo-uuid.t was deleted and added as a part of tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t
  In the original test case, we create a volume with two bricks, each on a separate node (N1 & N2). From another node in the cluster (N3), we try to detach a node which is hosting bricks. It fails.
  In the new test, we created a volume with a single brick on N1, and from another node in the cluster we tried to detach N1. We expected peer detach to fail, but peer detach succeeded even though the node is hosting all the bricks of the volume. Now the new test case is changed to cover the original test case scenario.
  Please refer to https://bugzilla.redhat.com/show_bug.cgi?id=1642597#c1 to understand why the new test case is not failing in centos-regression.
  fixes: bz#1642597
  Change-Id: Ifda12b5677143095f263fbb97a6808573f513234
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* core: glusterfsd keeping fd open in index xlator (Mohit Agrawal, 2018-10-12, 1 file, -0/+78)
  Problem: at the time of processing GF_EVENT_PARENT_DOWN at the brick xlator, it forwards the event to the next xlator only once the xlator ensures no stub is in progress. At the io-threads xlator, stub_cnt is decreased before the stub is processed and the event is notified to the next xlator.
  Solution: introduce a new counter to save stub_cnt and decrease the counter only after the stub is processed completely at the io-threads xlator. To avoid a brick crash at the time of calling xlator_mem_cleanup, move only the brick xlator if the detached brick name is found in the graph.
  Note: thanks to Pranith for sharing a simple reproducer to reproduce the same.
  fixes: bz#1637934
  Change-Id: I1a694a001f7a5417e8771e3adf92c518969b6baa
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* tests: fix test case failure (Sanju Rakonde, 2018-09-20, 1 file, -1/+1)
  tests/bugs/glusterd/bug-1595320.t is failing in downstream. In the downstream repo, enabling brick multiplexing is made interactive, so it throws a prompt for user input. As no input is provided during test case execution, the test fails. Using the CLI macro instead of the raw gluster command bypasses the interactive prompt, so replacing the gluster command with the CLI macro addresses the issue.
  Change-Id: I6b39052d8e415a8ed08de7c80a91dadce155146a
  updates: bz#1193929
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
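  In the .t framework, $CLI is roughly a wrapper around "gluster --mode=script" (the exact definition lives in tests/include.rc), which is what suppresses interactive prompts; a sketch of the difference:

      # raw command: may prompt for confirmation in builds where the option is interactive
      gluster volume set $V0 cluster.brick-multiplex on
      # framework macro: non-interactive, suitable for automated tests
      TEST $CLI volume set $V0 cluster.brick-multiplex on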
* Land part 2 of clang-format changes (Gluster Ant, 2018-09-12, 1 file, -7/+7)
  Change-Id: Ia84cc24c8924e6d22d02ac15f611c10e26db99b4
  Signed-off-by: Nigel Babu <nigelb@redhat.com>
* glusterd: avoid using glusterd's working directory as a brick (Sanju Rakonde, 2018-09-08, 1 file, -0/+9)
  Adds checks to avoid glusterd's working directory being used as a brick for volume creation.
  fixes: bz#853601
  Change-Id: I4b16a05f752e92216aa628f542a4fdbf59b3c669
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
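  A sketch of the negative check the change enables, assuming the default working directory of /var/lib/glusterd and .t-style variables:

      # creating a brick inside glusterd's working directory should now be rejected
      TEST ! $CLI volume create badvol $H0:/var/lib/glusterd/brick1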
* tests: fix brick check orders (Atin Mukherjee, 2018-08-13, 4 files, -33/+51)
  Fix brick checks for validating-server-quorum.t & quorum-validation.t, and make the brick_up_status_1 function more generic. Also fix a timing issue in bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t
  Change-Id: I797ef4cec5b160aafa979bae7151b1e99fcb48ac
  Updates: bz#1603063
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* tests: Fix cleanup routine for some mux tests (ShyamsundarR, 2018-08-13, 1 file, -3/+2)
  Some of the mux tests set a trap to catch test exit and call cleanup. This causes cleanup to be invoked twice in case the test times out, or even otherwise, as include.rc also sets a trap to clean up on exit (TERM and others).
  This leads to the tarballs generated on failures for these tests being empty, which does not aid debugging.
  This patch corrects this pattern across the tests to the more standard cleanup at the end.
  Fixes: bz#1615037
  Change-Id: Ib83aeb09fac2aa591b390b9fb9e1f605bfef9a8b
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
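  The pattern being removed versus the one being kept, sketched (the extra trap is the problematic part, since include.rc already installs its own exit trap):

      # old, problematic pattern inside a .t test
      trap cleanup EXIT
      # ... test body ...

      # preferred pattern: no extra trap, just an explicit call at the end
      # ... test body ...
      cleanup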
* glusterd: Compare volume_id before start/attach a brick (Mohit Agrawal, 2018-08-10, 1 file, -0/+93)
  Problem: after rebooting a node, a brick does not come up because the fsid comparison fails before the brick is started.
  Solution: instead of comparing the fsid, compare the volume_id to resolve this, because the fsid changes after a node reboot while the volume_id persists as an xattr on the brick root path from the time the volume is created.
  Change-Id: Ic289aab1b4ebfd83bbcae8438fee26ae61a0fff4
  fixes: bz#1612418
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
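  The persistent identifier the comparison relies on can be inspected directly on a brick root (path illustrative):

      # the volume id survives reboots, unlike the filesystem's fsid
      getfattr -n trusted.glusterfs.volume-id -e hex /bricks/b1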
* glusterd: Bricks of normal volumes should not attach to gluster_shared_storage bricks (Sanju Rakonde, 2018-08-03, 2 files, -33/+51)
  Problem: in a brick multiplexing environment, bricks of a normal volume created by the user are getting attached to the bricks of the volume "gluster_shared_storage", which is created by enabling the enable-shared-storage option. Mounting gluster_shared_storage has strict authentication checks; when we attach bricks of a normal volume to bricks of gluster_shared_storage, mounting the normal volume created by the user fails due to those strict authentication checks.
  Solution: we should not attach bricks of a normal volume to the brick process of the gluster_shared_storage volume, and vice versa.
  fixes: bz#1610726
  Change-Id: If1b5a2a02675789a2915ba480fb48c145449163d
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
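  For context, the gluster_shared_storage volume referred to here is the one created by the cluster-wide option:

      gluster volume set all cluster.enable-shared-storage enable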
* glusterd: Introduce daemon-log-level cluster wide option (Atin Mukherjee, 2018-07-03, 1 file, -0/+93)
  This option, applicable to the node-level daemons, can be very helpful in controlling the log level of these services. Please note that any daemon which is started prior to setting the specific value of this option (if not INFO) will need to go through a restart for the change to take effect.
  Change-Id: I7f6d2620bab2b094c737f5cc816bc093e9c9c4c9
  fixes: bz#1597473
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
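  Usage presumably follows the usual cluster-wide option form; the exact option name is assumed here to be cluster.daemon-log-level and should be checked against "gluster volume set help":

      gluster volume set all cluster.daemon-log-level WARNING
      # daemons started before this change need a restart to pick it up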
* glusterd: handling brick termination in brick-mux (Sanju Rakonde, 2018-05-07, 1 file, -0/+33)
  Problem: there's a race between the glusterfs_handle_terminate() response sent to glusterd from the last brick of the process and the socket disconnect event that occurs after the brick process gets killed.
  Solution: when it is the last brick of the brick process, instead of sending GLUSTERD_BRICK_TERMINATE to the brick process, glusterd kills the process (same as we do in the case of non brick multiplexing).
  The test case is added for https://bugzilla.redhat.com/show_bug.cgi?id=1549996
  Change-Id: If94958cd7649ea48d09d6af7803a0f9437a85503
  fixes: bz#1545048
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>