authorggarg <ggarg@redhat.com>2014-04-29 09:58:16 +0530
committerKaushal M <kaushal@redhat.com>2014-09-24 05:29:43 -0700
commitb097225202629448e3b10dc4160125c376971682 (patch)
treee5ff080e3deb3e01fd4cc6036704ad68e790cc1b /doc/admin-guide
parent70d76f20ee127fe7e8e52b2d67e2362283a01f34 (diff)
glusterd: Move brick order check from cli to glusterd.
Previously, the brick order check for replicate volumes on volume create and add-brick was done by the cli. This check would fail when a hostname wasn't resolvable and would ask the user if it was ok to continue. If the user continued, glusterd would fail the command again as the hostname wouldn't be resolvable. This was unnecessary.

This change moves the check from the cli into glusterd. The check is now performed during staging of volume create, after the bricks have been resolved, which prevents the above condition from occurring.

As a result of this change, the user is no longer questioned and given an option to continue the operation when a bad brick order is given or the brick order check fails. In such a case, the user can use 'force' to bypass the check and allow the command to succeed.

Change-Id: I009861efaf3fb7f553a9b00116a992f031f652cb
BUG: 1091935
Signed-off-by: ggarg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/7589
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
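The staging check the message describes groups the brick list into consecutive replica sets and verifies that no server appears twice within a set. A minimal sketch of that logic (a hypothetical simplification for illustration, not glusterd's actual C implementation; the function name and brick-string format are assumptions):

```python
def check_brick_order(bricks, replica_count):
    """Return True if no replica set has two bricks on the same server.

    Bricks are given as "server:/path" strings. Each consecutive group
    of `replica_count` bricks forms one replica set; placing two bricks
    of a set on one server defeats replication, so the check fails and
    the user must pass 'force' to override.
    """
    for i in range(0, len(bricks), replica_count):
        replica_set = bricks[i:i + replica_count]
        servers = [b.split(":", 1)[0] for b in replica_set]
        if len(set(servers)) != len(servers):
            # Multiple bricks of one replica set on the same server.
            return False
    return True
```

For example, `replica 2` with `server1:/brick1 server1:/brick2` in the same set would fail this check, matching the error message shown in the diff below.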
Diffstat (limited to 'doc/admin-guide')
-rw-r--r-- doc/admin-guide/en-US/markdown/admin_setting_volumes.md | 38
1 file changed, 34 insertions, 4 deletions
diff --git a/doc/admin-guide/en-US/markdown/admin_setting_volumes.md b/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
index 455238048be..028cd30647a 100644
--- a/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
+++ b/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
@@ -155,9 +155,17 @@ high-availability and high-reliability are critical.
auth.allow or auth.reject.
> **Note**:
- > Make sure you start your volumes before you try to mount them or
+
+ > - Make sure you start your volumes before you try to mount them or
> else client operations after the mount will hang.
+ > - GlusterFS will fail to create a replicate volume if more than one brick of a replica set is present on the same peer. For example, a four node replicated volume where more than one brick of a replica set is present on the same peer:
+ > ```
+ # gluster volume create <volname> replica 4 server1:/brick1 server1:/brick2 server2:/brick3 server4:/brick4
+ volume create: <volname>: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Use 'force' at the end of the command if you want to override this behavior.
+ > ```
+
+ > Use the `force` option at the end of the command if you want to create the volume in this case.
+
##Creating Striped Volumes
Striped volumes stripes data across bricks in the volume. For best
@@ -275,9 +283,17 @@ environments.
auth.allow or auth.reject.
> **Note**:
- > Make sure you start your volumes before you try to mount them or
+ > - Make sure you start your volumes before you try to mount them or
> else client operations after the mount will hang.
+ > - GlusterFS will fail to create a distributed replicated volume if more than one brick of a replica set is present on the same peer. For example, a four node distributed replicated volume where more than one brick of a replica set is present on the same peer:
+ > ```
+ # gluster volume create <volname> replica 2 server1:/brick1 server1:/brick2 server2:/brick3 server4:/brick4
+ volume create: <volname>: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Use 'force' at the end of the command if you want to override this behavior.
+ > ```
+
+ > Use the `force` option at the end of the command if you want to create the volume in this case.
+
+
##Creating Distributed Striped Replicated Volumes
Distributed striped replicated volumes distributes striped data across
@@ -312,9 +328,16 @@ Map Reduce workloads.
auth.allow or auth.reject.
> **Note**:
- > Make sure you start your volumes before you try to mount them or
+ > - Make sure you start your volumes before you try to mount them or
> else client operations after the mount will hang.
+ > - GlusterFS will fail to create a distributed striped replicated volume if more than one brick of a replica set is present on the same peer. For example, a four node distributed striped replicated volume where more than one brick of a replica set is present on the same peer:
+ > ```
+ # gluster volume create <volname> stripe 2 replica 2 server1:/brick1 server1:/brick2 server2:/brick3 server4:/brick4
+ volume create: <volname>: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Use 'force' at the end of the command if you want to override this behavior.
+ > ```
+
+ > Use the `force` option at the end of the command if you want to create the volume in this case.
+
##Creating Striped Replicated Volumes
Striped replicated volumes stripes data across replicated bricks in the
@@ -356,9 +379,16 @@ of this volume type is supported only for Map Reduce workloads.
auth.allow or auth.reject.
> **Note**:
- > Make sure you start your volumes before you try to mount them or
+ > - Make sure you start your volumes before you try to mount them or
> else client operations after the mount will hang.
+ > - GlusterFS will fail to create a striped replicated volume if more than one brick of a replica set is present on the same peer. For example, a four node striped replicated volume where more than one brick of a replica set is present on the same peer:
+ > ```
+ # gluster volume create <volname> stripe 2 replica 2 server1:/brick1 server1:/brick2 server2:/brick3 server4:/brick4
+ volume create: <volname>: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Use 'force' at the end of the command if you want to override this behavior.
+ > ```
+
+ > Use the `force` option at the end of the command if you want to create the volume in this case.
+
##Starting Volumes
You must start your volumes before you try to mount them.