From 23ae6dc74378030ef983a25b49ec4406ab4ca4aa Mon Sep 17 00:00:00 2001
From: Ravishankar N
Date: Fri, 26 Feb 2016 15:46:03 +0530
Subject: update arbiter-volumes.md

Change-Id: Ib2b29567a737ddb27e1e19aefa2b741f0635bb3e
Signed-off-by: Ravishankar N
Reviewed-on: http://review.gluster.org/13531
Tested-by: Humble Devassy Chirammal
Reviewed-by: Humble Devassy Chirammal
---
 done/Features/afr-arbiter-volumes.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/done/Features/afr-arbiter-volumes.md b/done/Features/afr-arbiter-volumes.md
index e31bc31..3a6bbe8 100644
--- a/done/Features/afr-arbiter-volumes.md
+++ b/done/Features/afr-arbiter-volumes.md
@@ -40,7 +40,9 @@ prevent files from ending up in split-brain:
 * In all cases, if there is only one source before the FOP is initiated and if
   the FOP fails on that source, the application will receive ENOTCONN.
 
-Note: It is possible to see if a replica 3 volume has arbiter configuration from
+Note: `gluster volume info ` will have an indication of which bricks
+of the volume are the arbiter bricks.
+It is also possible to see if a replica 3 volume has arbiter configuration from
 the mount point. If
 `$mount_point/.meta/graphs/active/$V0-replicate-0/options/arbiter-count`
 exists and its value is 1, then it is an arbiter volume. Also the client volume graph
@@ -53,4 +55,4 @@ from the arbiter brick will not take place. For example if there are 2 source
 bricks B2 and B3 (B3 being arbiter brick) and B2 is down, then data-self-heal
 will *not* happen from B3 to sink brick B1, and will be pending until B2 comes
 up and heal can happen from it. Note that metadata and entry self-heals can
-still happen from B3 if it is one of the sources.cd
\ No newline at end of file
+still happen from B3 if it is one of the sources.
-- 
cgit
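
Reviewer's note: for anyone who wants to try the two detection methods this patch documents, here is a minimal shell sketch. The volume name `testvol` and mount point `/mnt/testvol` are illustrative assumptions, not part of the patch, and the exact `gluster volume info` output format can vary between GlusterFS releases.

```sh
#!/bin/sh
# Illustrative only: assumes a volume named "testvol" that is FUSE-mounted
# at /mnt/testvol. Neither name comes from the patch itself.
VOL=testvol
MNT=/mnt/testvol

# Server side: 'gluster volume info' indicates which bricks are arbiters.
gluster volume info "$VOL"

# Client side: the .meta virtual directory exposes the active client graph;
# per the doc, an arbiter volume has arbiter-count = 1 on its replicate
# subvolume.
CNT="$MNT/.meta/graphs/active/$VOL-replicate-0/options/arbiter-count"
if [ -f "$CNT" ] && [ "$(cat "$CNT")" = "1" ]; then
    echo "$VOL: replicate-0 has an arbiter brick"
else
    echo "$VOL: no arbiter configuration found for replicate-0"
fi
```

The mount-point check is handy when only client access is available; the `volume info` check is the simpler option when you can reach the servers.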