path: root/tests/basic/glusterd
* Updating gluster manual. (Rishubh Jain, 2020-02-18; 1 file changed, -0/+4 lines)
  Adding disperse-data to gluster manual under volume create command.

  Change-Id: Ic9eb47c9e71a1d7a11af9394c615c8e90f8d1d69
  Fixes: bz#1668239
  Signed-off-by: Rishubh Jain <risjain@redhat.com>
  Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>
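For context, the volume create synopsis this manual change documents looks roughly like the following; this is a sketch, and the exact man-page wording may differ:

    gluster volume create <NEW-VOLNAME> [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] <NEW-BRICK> ...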
* tests: add tests for bz#1758878 (Ravishankar N, 2019-10-17; 2 files changed, -29/+61 lines)
  updates: bz#1193929

  Change-Id: I517fa29e57bde970c2c22ebc2de80fec1509cd2d
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* read-ahead/io-cache: turn off by default (Raghavendra Gowdappa, 2019-09-26; 1 file changed, -1/+1 lines)
  We've found that the perf xlators io-cache and read-ahead do not add any performance improvement. At best, read-ahead is redundant due to kernel read-ahead, and at worst io-cache degrades performance for workloads that don't involve re-reads. Given that VFS already provides both of these functionalities, this patch turns these two translators off by default for native fuse mounts. For non-native fuse mounts like gfapi (NFS-Ganesha/Samba), these xlators can be kept on through custom profiles.

  Change-Id: Ie7535788909d4c741844473696f001274dc0bb60
  Signed-off-by: Raghavendra Gowdappa <rgowdapp@redhat.com>
  fixes: bz#1676479
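Deployments that still want these translators (for example gfapi-based NFS-Ganesha or Samba setups) can turn them back on per volume; a minimal sketch, assuming a hypothetical volume name:

    gluster volume set myvol performance.read-ahead on
    gluster volume set myvol performance.io-cache on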
* glusterd/thin-arbiter: Thin-arbiter integration with GD1 (Vishal Pandey, 2019-06-28; 2 files changed, -0/+70 lines)
  gluster volume create <VOLNAME> replica 2 thin-arbiter 1 <host1>:<brick1> <host2>:<brick2> <thin-arbiter-host>:<path-to-store-replica-id-file> [force]

  The changes have been made so that the last brick in the brick list is treated as the thin-arbiter. GD1 is made to treat the replica count as 2 and continues creating the volume like any other replica 2 volume, but since thin-arbiter volumes need a ta-brick client xlator entry for each subvolume in the fuse volfile, volfile generation is modified to inject these entries separately into the volfile for every subvolume.

  A few more additions:
  1. Save the volinfo with the new fields ta_bricks list and thin_arbiter_count.
  2. Introduce a new option client.ta-brick-port to add remote-port to the ta-brick xlator entry in fuse volfiles. The option can be set using the following CLI syntax:
     gluster volume set <VOLNAME> client.ta-brick-port <PORTNO.>
  3. Volume info will contain a Thin-Arbiter-path entry to distinguish it from other replicate volumes.

  Change-Id: Ib434e2313b29716f32476c6c211d282c4ef39406
  Updates #687
  Signed-off-by: Vishal Pandey <vpandey@redhat.com>
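A hypothetical invocation of the syntax above (hostnames, brick paths, volume name, and port number are illustrative only):

    gluster volume create tavol replica 2 thin-arbiter 1 \
        server1:/bricks/b1 server2:/bricks/b1 ta-host:/bricks/ta-ids force
    gluster volume set tavol client.ta-brick-port 24009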
* mgmt/glusterd: Make changes related to cloudsync xlator (Anuradha Talur, 2019-04-10; 1 file changed, -0/+48 lines)
  1) The placement of the cloudsync xlator has been changed to make it the shard xlator's child. If cloudsync has to work with shard in the graph, it needs to be a child of shard.

  Change-Id: Ib55424fdcb7ce8edae9f19b8a6e3d3ba86c1f0c4
  fixes: bz#1642168
  Signed-off-by: Anuradha Talur <atalur@commvault.com>
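For reference, cloudsync is toggled per volume through a volume option; a sketch, assuming the features.cloudsync option exposed by this xlator and a hypothetical volume name:

    gluster volume set myvol features.cloudsync enable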
* mgmt/shd: Implement multiplexing in self heal daemon (Mohammed Rafi KC, 2019-04-01; 1 file changed, -20/+29 lines)
  Problem:
  The shd daemon is per node, which means it creates a graph with all volumes on it. While this is great for utilizing resources, it is not so good in terms of performance and manageability, because self-heal daemons don't have the capability to automatically reconfigure their graphs. So each time any configuration change happens to a volume (replicate/disperse), we need to restart shd to bring the change into the graph. Because of this, all ongoing heals for all other volumes have to be stopped in the middle and restarted all over again.

  Solution:
  This change makes shd a per-volume daemon, so that a graph is generated for each volume. When we want to start/reconfigure shd for a volume, we first search for an existing shd running on the node; if there is none, we start a new process. If a daemon is already running for shd, then we simply detach the graph for the volume and reattach the updated graph for the volume. This won't touch any of the ongoing operations for any other volumes on the shd daemon.

  Example of an shd graph when it is a per-volume graph:

             -----------------------
             |    debug-iostat     |
             -----------------------
              /         |         \
             /          |          \
        ---------   ---------   ----------
        | AFR-1 |   | AFR-2 |   | AFR-3  |
        ---------   ---------   ----------

  A running shd daemon with 3 volumes will have a graph like:

             -----------------------
             |    debug-iostat     |
             -----------------------
              /         |         \
             /          |          \
      ------------  ------------  ------------
      | volume-1 |  | volume-2 |  | volume-3 |
      ------------  ------------  ------------

  Change-Id: Idcb2698be3eeb95beaac47125565c93370afbd99
  fixes: bz#1659708
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
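After this change each volume keeps its own graph inside the shared shd process, so its self-heal daemon can be inspected per volume. A sketch with a hypothetical volume name, assuming the standard shd service argument to volume status:

    gluster volume status myvol shd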
* stripe: remove the translator from build and glusterd (Amar Tumballi, 2018-10-31; 3 files changed, -23/+1 lines)
  Based on the proposal to remove a few features that are not actively maintained [1], remove the stripe translator from the build. Also make sure there are no regression tests involving the stripe translator.

  [1] https://lists.gluster.org/pipermail/gluster-users/2018-July/034400.html

  Note that this patch aims at removing the translator from the build; a follow-up patch is needed to remove the code from the repository.

  Updates: bz#1364707
  Change-Id: I235b305338f138e29e9f30cba65bc0dadbebbbd5
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* cluster/ec: Prevent volume create without redundant brick (Sunil Kumar Acharya, 2018-10-24; 1 file changed, -0/+1 lines)
  Problem: EC volumes can be created without any redundant brick.

  Solution: Updated the conditional check to avoid volume creation without a redundant brick.

  fixes: bz#1642448
  Change-Id: I0cb334b1b9378d67fcb8abf793dbe312c3179c0b
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
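A sketch of the configuration the check now rejects, with hypothetical names and paths: a disperse volume whose data brick count equals the total brick count leaves zero redundancy and should fail to create:

    # Expected to be rejected after this fix: 3 bricks, all of them data bricks
    gluster volume create ecvol disperse-data 3 server1:/b1 server2:/b2 server3:/b3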
* tests: fix volfile_server_switch.t spurious failure (Atin Mukherjee, 2016-08-22; 1 file changed, -3/+1 lines)
  1. The unnecessary self probe is removed.
  2. After every probe a peer_count check is added to give the test time to finish the handshake.

  Change-Id: Iab52548f8b781e7968250cd98fdbeaf02472970d
  BUG: 1368953
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/15231
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
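The peer_count wait mentioned in point 2 typically takes this shape in the test framework; a sketch only, since the exact timeout variable and expected count depend on the test:

    TEST $CLI_1 peer probe $H2
    EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count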
* glusterd-client: switch volfile server in case the existing connection breaks (Prasanna Kumar Kalever, 2016-04-12; 1 file changed, -0/+48 lines)
  Problem:
  Currently, say we have a 10-node gluster volume mounted using node 1 (N1) as the volfile server and the rest as backup volfile servers:

    $ mount -t glusterfs -obackup-volfile-servers=<N2>:<N3>:...:<N10> <N1>:/vol /mnt

  If N1 goes down we will still be able to access the same mount point, but the problem is that if we add or remove bricks of the volume whose volfile server is down (in our case N1), that information will not be passed to the client, because the connection between glusterfs and glusterd (of N1) is broken. As a result we cannot store files on the newly added bricks until N1 comes back.

  Solution:
  If N1 goes down, iterate through the nodes specified in the backup-volfile-servers list and try to establish the connection between glusterfs and glusterd, so we don't really have to wait until N1 comes back to store files on bricks that were successfully added while N1 was down.

  Change-Id: I653c9f081a84667630608091bc243ffc3859d5cd
  BUG: 1289916
  Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
  Reviewed-on: http://review.gluster.org/13002
  Tested-by: Prasanna Kumar Kalever <pkalever@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Poornima G <pgurusid@redhat.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
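A concrete (hypothetical) form of the mount shown above, with the failover list spelled out:

    mount -t glusterfs -o backup-volfile-servers=node2:node3:node4 node1:/vol /mnt/gluster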
* mgmt/glusterd: Store arbiter-count and restore it (Pranith Kumar K, 2015-11-04; 2 files changed, -0/+57 lines)
  Problem:
  1) Glusterd doesn't remember arbiter information for a replica volume in its store. When glusterd goes down and comes back up, arbiter volumes become plain replica volumes.
  2) Glusterd doesn't import/export arbiter information to/from the other peers.
  3) Volume info doesn't show any arbiter count in the output.

  Fix:
  1) Persist arbiter information in the glusterd store.
  2) Import/export arbiter information of the volume.
  3) Change volume info output to show the arbiter count.

  Change-Id: I2db81e73d2694b01f7d07b08a17b41ad5a55c361
  BUG: 1276675
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/12475
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
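For reference, the arbiter information being persisted comes from volumes created with the replica/arbiter counts on the CLI; a sketch with hypothetical names and paths, where after this fix the count survives a glusterd restart and is shown by volume info:

    gluster volume create arbvol replica 3 arbiter 1 \
        server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/arb
    gluster volume info arbvol    # should keep listing the arbiter count after glusterd restarts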
* cli: Provide CLI to create disperse volume with data, redundancy counts (Pranith Kumar K, 2015-02-23; 1 file changed, -0/+81 lines)
  Change-Id: Iba44be565c895e26b19b5ff85a886873f6b53e5c
  BUG: 1177601
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/9616
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
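A sketch of the syntax this CLI change enables, with a hypothetical volume name and brick paths: 4 data bricks plus 2 redundancy bricks across 6 servers:

    gluster volume create dispvol disperse-data 4 redundancy 2 \
        server{1..6}:/bricks/b1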
* tests: spurious failure fix in heald.t (Atin Mukherjee, 2015-02-04; 1 file changed, -6/+6 lines)
  Problem: heald.t uses EXPECT to check whether the shd process is up or not, but since shd is spawned with NO_WAIT, the end of the volume start transaction doesn't guarantee that the process will be up by that time.

  Solution: Use EXPECT_WITHIN instead of EXPECT.

  Change-Id: Ic81725aa7e7cde9c0c873837fcc4a73d8318dfa0
  BUG: 1163543
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/9575
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
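The shape of the fix in test-framework terms, sketched with a hypothetical status helper (is_shd_up is illustrative; heald.t's actual check may differ):

    # shd is spawned with NO_WAIT, so poll until it is up instead of asserting once:
    # EXPECT "Y" is_shd_up $V0                               (racy)
    EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" is_shd_up $V0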
* mgmt/glusterd: Implement Volume heal enable/disable (Pranith Kumar K, 2015-01-20; 1 file changed, -0/+89 lines)
  For volumes with replicate or disperse xlators, the self-heal daemon should do the healing. This patch provides enable/disable functionality for those xlators to be part of the self-heal daemon. Replicate already had this functionality with 'gluster volume set cluster.self-heal-daemon on/off', but this patch makes it uniform for both types of volumes. Internally it still does a 'volume set' based on the volume type.

  Change-Id: Ie0f3799b74c2afef9ac658ef3d50dce3e8072b29
  BUG: 1177601
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/9358
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
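The CLI introduced by this patch, shown with a hypothetical volume name; per the commit message, each form is translated internally into a volume set of the self-heal-daemon option matching the volume type:

    gluster volume heal myvol enable
    gluster volume heal myvol disable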