path: root/tests/basic/glusterd/volfile_server_switch.t
Commit message | Author | Age | Files | Lines
* read-ahead/io-cache: turn off by default | Raghavendra Gowdappa | 2019-09-26 | 1 | -1/+1
We've found that the perf xlators io-cache and read-ahead do not add any performance improvement. At best read-ahead is redundant due to kernel read-ahead, and at worst io-cache degrades performance for workloads that don't involve re-reads. Given that the VFS already provides both of these functionalities, this patch turns these two translators off by default for native fuse mounts. For non-native fuse mounts like gfapi (NFS-ganesha/samba) we can keep these xlators on by having custom profiles.

Change-Id: Ie7535788909d4c741844473696f001274dc0bb60
Signed-off-by: Raghavendra Gowdappa <rgowdapp@redhat.com>
fixes: bz#1676479
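Should a particular workload still benefit from these translators, they can be re-enabled per volume through the standard volume-set interface; a minimal sketch, assuming a volume named testvol (a placeholder):

# testvol is a placeholder volume name
$ gluster volume set testvol performance.read-ahead on
$ gluster volume set testvol performance.io-cache on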
* tests: fix volfile_server_switch.t spurious failure | Atin Mukherjee | 2016-08-22 | 1 | -3/+1
1. An unnecessary self probe is removed.
2. After every probe a peer_count check is added to give the test time to finish the handshake.

Change-Id: Iab52548f8b781e7968250cd98fdbeaf02472970d
BUG: 1368953
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/15231
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
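The pattern this fix applies looks roughly as follows; a hedged sketch, assuming the cluster test-framework helpers (TEST, EXPECT_WITHIN, peer_count, $CLI_1, $H2, $PROBE_TIMEOUT), with the expected count of 1 chosen purely for illustration:

# probe a peer, then wait for the handshake to settle before the next step
# ($H2 and the expected peer count of 1 are illustrative)
TEST $CLI_1 peer probe $H2
EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count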
* glusterd-client: switch volfile server in case the existing connection breaks | Prasanna Kumar Kalever | 2016-04-12 | 1 | -0/+48
Problem: Say we have a 10-node Gluster volume, mounted using node 1 (N1) as the volfile server and the rest as backup volfile servers:

$ mount -t glusterfs -obackup-volfile-servers=<N2>:<N3>:...:<N10> <N1>:/vol /mnt

If N1 goes down we will still be able to access the same mount point, but if we add or remove bricks while the volfile server (N1 in our case) is down, that info will not be passed to the client, because the connection between glusterfs and glusterd (of N1) has been lost. As a result we cannot store files on the newly added bricks until N1 comes back.

Solution: If N1 goes down, iterate through the nodes specified in the backup-volfile-servers list and try to establish a connection between glusterfs and glusterd, so we don't really have to wait until N1 comes back to store files on bricks that were successfully added while N1 was down.

Change-Id: I653c9f081a84667630608091bc243ffc3859d5cd
BUG: 1289916
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-on: http://review.gluster.org/13002
Tested-by: Prasanna Kumar Kalever <pkalever@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Poornima G <pgurusid@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
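To exercise the switch end to end, one can take the volfile server down and grow the volume from a surviving node; a hedged sketch, where the hostnames (N1..N4), volume name vol, and brick path are placeholders:

$ mount -t glusterfs -obackup-volfile-servers=N2:N3 N1:/vol /mnt
# simulate the volfile server going away (glusterd assumed to be a systemd service)
$ ssh N1 'systemctl stop glusterd'
# with this fix, the client reconnects to a backup volfile server, so it
# learns about the new brick and can place files on it
$ ssh N2 'gluster volume add-brick vol N4:/bricks/b1'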