author    Humble Devassy Chirammal <hchiramm@redhat.com>    2015-07-23 17:54:34 +0530
committer Humble Devassy Chirammal <humble.devassy@gmail.com>    2015-08-04 05:42:07 -0700
commit    ed9959b0e2c7f401394fa6359641857180baf1c8 (patch)
tree      3674ed24345c166335942d801fc87ef7569b28a5 /doc
parent    9e3d87639c38b20304ba2809f3f27440ad712fad (diff)
Removing admin guide from glusterfs doc repo
The admin guide is maintained at https://github.com/gluster/glusterdocs. Updates to the admin guide should be made against that repo, so that only one copy is maintained. This is based on the discussion that happened here: https://www.mail-archive.com/gluster-users@gluster.org/msg21168.html

Change-Id: If5395e7e8005d3e505d229180ce55d466cb1a1fc
BUG: 1206539
Signed-off-by: Humble Devassy Chirammal <hchiramm@redhat.com>
Reviewed-on: http://review.gluster.org/11747
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Diffstat (limited to 'doc')
-rw-r--r--  doc/admin-guide/en-US/images/640px-GlusterFS_Architecture.png | bin 97477 -> 0 bytes
-rw-r--r--  doc/admin-guide/en-US/images/Distributed_Replicated_Volume.png | bin 62929 -> 0 bytes
-rw-r--r--  doc/admin-guide/en-US/images/Distributed_Striped_Replicated_Volume.png | bin 57210 -> 0 bytes
-rw-r--r--  doc/admin-guide/en-US/images/Distributed_Striped_Volume.png | bin 53781 -> 0 bytes
-rw-r--r--  doc/admin-guide/en-US/images/Distributed_Volume.png | bin 47211 -> 0 bytes
-rw-r--r--  doc/admin-guide/en-US/images/Geo-Rep03_Internet.png | bin 131824 -> 0 bytes
-rw-r--r--  doc/admin-guide/en-US/images/Geo-Rep04_Cascading.png | bin 187341 -> 0 bytes
-rw-r--r--  doc/admin-guide/en-US/images/Geo-Rep_LAN.png | bin 163417 -> 0 bytes
-rw-r--r--  doc/admin-guide/en-US/images/Geo-Rep_WAN.png | bin 96291 -> 0 bytes
-rw-r--r--  doc/admin-guide/en-US/images/GlusterFS_Architecture.png | bin 133597 -> 0 bytes
-rw-r--r--  doc/admin-guide/en-US/images/Hadoop_Architecture.png | bin 43815 -> 0 bytes
-rw-r--r--  doc/admin-guide/en-US/images/Replicated_Volume.png | bin 44077 -> 0 bytes
-rw-r--r--  doc/admin-guide/en-US/images/Striped_Replicated_Volume.png | bin 62113 -> 0 bytes
-rw-r--r--  doc/admin-guide/en-US/images/Striped_Volume.png | bin 43316 -> 0 bytes
-rw-r--r--  doc/admin-guide/en-US/images/UFO_Architecture.png | bin 72139 -> 0 bytes
-rw-r--r--  doc/admin-guide/en-US/images/VSA_Architecture.png | bin 38875 -> 0 bytes
-rw-r--r--  doc/admin-guide/en-US/images/icon.svg | 19
-rw-r--r--  doc/admin-guide/en-US/markdown/.gitignore | 2
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_ACLs.md | 216
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_Hadoop.md | 31
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_console.md | 50
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_directory_Quota.md | 219
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md | 264
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_geo-replication.md | 681
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_logging.md | 56
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_lookup_optimization.md | 145
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_managing_snapshots.md | 316
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_managing_volumes.md | 770
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_monitoring_workload.md | 893
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_object_storage.md | 26
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_puppet.md | 499
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_rdma_transport.md | 70
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_setting_volumes.md | 674
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_settingup_clients.md | 600
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_ssl.md | 128
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_start_stop_daemon.md | 58
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_storage_pools.md | 91
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_troubleshooting.md | 495
-rw-r--r--  doc/admin-guide/en-US/markdown/did-you-know.md | 36
-rw-r--r--  doc/admin-guide/en-US/markdown/glossary.md | 300
-rw-r--r--  doc/admin-guide/en-US/markdown/glusterfs_introduction.md | 63
-rwxr-xr-x  doc/admin-guide/en-US/markdown/pdfgen.sh | 16
42 files changed, 0 insertions, 6718 deletions
diff --git a/doc/admin-guide/en-US/images/640px-GlusterFS_Architecture.png b/doc/admin-guide/en-US/images/640px-GlusterFS_Architecture.png
deleted file mode 100644
index 95f89ec8286..00000000000
--- a/doc/admin-guide/en-US/images/640px-GlusterFS_Architecture.png
+++ /dev/null
Binary files differ
diff --git a/doc/admin-guide/en-US/images/Distributed_Replicated_Volume.png b/doc/admin-guide/en-US/images/Distributed_Replicated_Volume.png
deleted file mode 100644
index 22daecdb903..00000000000
--- a/doc/admin-guide/en-US/images/Distributed_Replicated_Volume.png
+++ /dev/null
Binary files differ
diff --git a/doc/admin-guide/en-US/images/Distributed_Striped_Replicated_Volume.png b/doc/admin-guide/en-US/images/Distributed_Striped_Replicated_Volume.png
deleted file mode 100644
index d286fa99e94..00000000000
--- a/doc/admin-guide/en-US/images/Distributed_Striped_Replicated_Volume.png
+++ /dev/null
Binary files differ
diff --git a/doc/admin-guide/en-US/images/Distributed_Striped_Volume.png b/doc/admin-guide/en-US/images/Distributed_Striped_Volume.png
deleted file mode 100644
index 752fa982fa6..00000000000
--- a/doc/admin-guide/en-US/images/Distributed_Striped_Volume.png
+++ /dev/null
Binary files differ
diff --git a/doc/admin-guide/en-US/images/Distributed_Volume.png b/doc/admin-guide/en-US/images/Distributed_Volume.png
deleted file mode 100644
index 4386ca935b9..00000000000
--- a/doc/admin-guide/en-US/images/Distributed_Volume.png
+++ /dev/null
Binary files differ
diff --git a/doc/admin-guide/en-US/images/Geo-Rep03_Internet.png b/doc/admin-guide/en-US/images/Geo-Rep03_Internet.png
deleted file mode 100644
index 3cd0eaded02..00000000000
--- a/doc/admin-guide/en-US/images/Geo-Rep03_Internet.png
+++ /dev/null
Binary files differ
diff --git a/doc/admin-guide/en-US/images/Geo-Rep04_Cascading.png b/doc/admin-guide/en-US/images/Geo-Rep04_Cascading.png
deleted file mode 100644
index 54bf9f05cff..00000000000
--- a/doc/admin-guide/en-US/images/Geo-Rep04_Cascading.png
+++ /dev/null
Binary files differ
diff --git a/doc/admin-guide/en-US/images/Geo-Rep_LAN.png b/doc/admin-guide/en-US/images/Geo-Rep_LAN.png
deleted file mode 100644
index a74f6dbb50a..00000000000
--- a/doc/admin-guide/en-US/images/Geo-Rep_LAN.png
+++ /dev/null
Binary files differ
diff --git a/doc/admin-guide/en-US/images/Geo-Rep_WAN.png b/doc/admin-guide/en-US/images/Geo-Rep_WAN.png
deleted file mode 100644
index d72d72768bc..00000000000
--- a/doc/admin-guide/en-US/images/Geo-Rep_WAN.png
+++ /dev/null
Binary files differ
diff --git a/doc/admin-guide/en-US/images/GlusterFS_Architecture.png b/doc/admin-guide/en-US/images/GlusterFS_Architecture.png
deleted file mode 100644
index b506db1f4e7..00000000000
--- a/doc/admin-guide/en-US/images/GlusterFS_Architecture.png
+++ /dev/null
Binary files differ
diff --git a/doc/admin-guide/en-US/images/Hadoop_Architecture.png b/doc/admin-guide/en-US/images/Hadoop_Architecture.png
deleted file mode 100644
index 8725bd330bb..00000000000
--- a/doc/admin-guide/en-US/images/Hadoop_Architecture.png
+++ /dev/null
Binary files differ
diff --git a/doc/admin-guide/en-US/images/Replicated_Volume.png b/doc/admin-guide/en-US/images/Replicated_Volume.png
deleted file mode 100644
index 135a63f345a..00000000000
--- a/doc/admin-guide/en-US/images/Replicated_Volume.png
+++ /dev/null
Binary files differ
diff --git a/doc/admin-guide/en-US/images/Striped_Replicated_Volume.png b/doc/admin-guide/en-US/images/Striped_Replicated_Volume.png
deleted file mode 100644
index adf1f8465eb..00000000000
--- a/doc/admin-guide/en-US/images/Striped_Replicated_Volume.png
+++ /dev/null
Binary files differ
diff --git a/doc/admin-guide/en-US/images/Striped_Volume.png b/doc/admin-guide/en-US/images/Striped_Volume.png
deleted file mode 100644
index 63a84b242ab..00000000000
--- a/doc/admin-guide/en-US/images/Striped_Volume.png
+++ /dev/null
Binary files differ
diff --git a/doc/admin-guide/en-US/images/UFO_Architecture.png b/doc/admin-guide/en-US/images/UFO_Architecture.png
deleted file mode 100644
index be85d7b2825..00000000000
--- a/doc/admin-guide/en-US/images/UFO_Architecture.png
+++ /dev/null
Binary files differ
diff --git a/doc/admin-guide/en-US/images/VSA_Architecture.png b/doc/admin-guide/en-US/images/VSA_Architecture.png
deleted file mode 100644
index c3ab80cf3e8..00000000000
--- a/doc/admin-guide/en-US/images/VSA_Architecture.png
+++ /dev/null
Binary files differ
diff --git a/doc/admin-guide/en-US/images/icon.svg b/doc/admin-guide/en-US/images/icon.svg
deleted file mode 100644
index b2f16d0f61d..00000000000
--- a/doc/admin-guide/en-US/images/icon.svg
+++ /dev/null
@@ -1,19 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<svg xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.0" width="32" height="32" id="svg3017">
- <defs id="defs3019">
- <linearGradient id="linearGradient2381">
- <stop id="stop2383" style="stop-color:#ffffff;stop-opacity:1" offset="0"/>
- <stop id="stop2385" style="stop-color:#ffffff;stop-opacity:0" offset="1"/>
- </linearGradient>
- <linearGradient x1="296.4996" y1="188.81061" x2="317.32471" y2="209.69398" id="linearGradient2371" xlink:href="#linearGradient2381" gradientUnits="userSpaceOnUse" gradientTransform="matrix(0.90776,0,0,0.90776,24.35648,49.24131)"/>
- </defs>
- <g transform="matrix(0.437808,-0.437808,0.437808,0.437808,-220.8237,43.55311)" id="g5089">
- <path d="m 8.4382985,-6.28125 c -0.6073916,0 -4.3132985,5.94886271 -4.3132985,8.25 l 0,26.71875 c 0,0.846384 0.5818159,1.125 1.15625,1.125 l 25.5625,0 c 0.632342,0 1.125001,-0.492658 1.125,-1.125 l 0,-5.21875 0.28125,0 c 0.49684,0 0.906249,-0.409411 0.90625,-0.90625 l 0,-27.9375 c 0,-0.4968398 -0.40941,-0.90625 -0.90625,-0.90625 l -23.8117015,0 z" transform="translate(282.8327,227.1903)" id="path5091" style="fill:#5c5c4f;stroke:#000000;stroke-width:3.23021388;stroke-miterlimit:4;stroke-dasharray:none"/>
- <rect width="27.85074" height="29.369793" rx="1.1414107" ry="1.1414107" x="286.96509" y="227.63805" id="rect5093" style="fill:#032c87"/>
- <path d="m 288.43262,225.43675 25.2418,0 0,29.3698 -26.37615,0.0241 1.13435,-29.39394 z" id="rect5095" style="fill:#ffffff"/>
- <path d="m 302.44536,251.73726 c 1.38691,7.85917 -0.69311,11.28365 -0.69311,11.28365 2.24384,-1.60762 3.96426,-3.47694 4.90522,-5.736 0.96708,2.19264 1.83294,4.42866 4.27443,5.98941 0,0 -1.59504,-7.2004 -1.71143,-11.53706 l -6.77511,0 z" id="path5097" style="fill:#a70000;fill-opacity:1;stroke-width:2"/>
- <rect width="25.241802" height="29.736675" rx="0.89682275" ry="0.89682275" x="290.73544" y="220.92249" id="rect5099" style="fill:#809cc9"/>
- <path d="m 576.47347,725.93939 6.37084,0.41502 0.4069,29.51809 c -1.89202,-1.31785 -6.85427,-3.7608 -8.26232,-1.68101 l 0,-26.76752 c 0,-0.82246 0.66212,-1.48458 1.48458,-1.48458 z" transform="matrix(0.499065,-0.866565,0,1,0,0)" id="rect5101" style="fill:#4573b3;fill-opacity:1"/>
- <path d="m 293.2599,221.89363 20.73918,0 c 0.45101,0 0.8141,0.3631 0.8141,0.81411 0.21547,6.32836 -19.36824,21.7635 -22.36739,17.59717 l 0,-17.59717 c 0,-0.45101 0.3631,-0.81411 0.81411,-0.81411 z" id="path5103" style="opacity:0.65536726;fill:url(#linearGradient2371);fill-opacity:1"/>
- </g>
-</svg>
diff --git a/doc/admin-guide/en-US/markdown/.gitignore b/doc/admin-guide/en-US/markdown/.gitignore
deleted file mode 100644
index 9eed460045f..00000000000
--- a/doc/admin-guide/en-US/markdown/.gitignore
+++ /dev/null
@@ -1,2 +0,0 @@
-output/*.pdf
-
diff --git a/doc/admin-guide/en-US/markdown/admin_ACLs.md b/doc/admin-guide/en-US/markdown/admin_ACLs.md
deleted file mode 100644
index ebae7f71887..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_ACLs.md
+++ /dev/null
@@ -1,216 +0,0 @@
-#POSIX Access Control Lists
-
-POSIX Access Control Lists (ACLs) allow you to assign different
-permissions to different users or groups even though they do not
-correspond to the original owner or the owning group.
-
-For example: User john creates a file but does not want to allow anyone
-to do anything with this file, except another user, antony (even though
-there are other users that belong to the group john).
-
-This means, in addition to the file owner, the file group, and others,
-additional users and groups can be granted or denied access by using
-POSIX ACLs.
-
-##Activating POSIX ACLs Support
-
-To use POSIX ACLs for a file or directory, the partition of the file or
-directory must be mounted with POSIX ACLs support.
-
-###Activating POSIX ACLs Support on Server
-
-To mount the backend export directories for POSIX ACLs support, use the
-following command:
-
-`# mount -o acl <device> <mount-point>`
-
-For example:
-
-`# mount -o acl /dev/sda1 /export1 `
-
-Alternatively, if the partition is listed in the /etc/fstab file, add
-the following entry for the partition to include the POSIX ACLs option:
-
-`LABEL=/work /export1 ext3 rw,acl 1 4`
-
-###Activating POSIX ACLs Support on Client
-
-To mount the glusterfs volumes for POSIX ACLs support, use the following
-command:
-
-`# mount -t glusterfs -o acl <server>:<volume> <mount-point>`
-
-For example:
-
-`# mount -t glusterfs -o acl 198.192.198.234:glustervolume /mnt/gluster`
-
-##Setting POSIX ACLs
-
-You can set two types of POSIX ACLs: access ACLs and default
-ACLs. You can use access ACLs to grant permissions for a specific file or
-directory. You can use default ACLs only on a directory, but if a file
-inside that directory does not have an ACL, it inherits the permissions
-of the default ACLs of the directory.
-
-You can set ACLs per user, per group, for users not in the owning
-group of the file, and via the effective rights mask.
-
-##Setting Access ACLs
-
-You can apply access ACLs to grant permission for both files and
-directories.
-
-**To set or modify Access ACLs**
-
-You can set or modify access ACLs using the following command:
-
-`# setfacl -m <entry_type> <file>`
-
-The ACL entry types are the POSIX ACLs representations of owner, group,
-and other.
-
-Permissions must be a combination of the characters `r` (read), `w`
-(write), and `x` (execute). You must specify the ACL entry in the
-following format and can specify multiple entry types separated by
-commas.
-
- ACL Entry | Description
- --- | ---
- u:uid:\<permission\> | Sets the access ACLs for a user. You can specify user name or UID
- g:gid:\<permission\> | Sets the access ACLs for a group. You can specify group name or GID.
- m:\<permission\> | Sets the effective rights mask. The mask is the combination of all access permissions of the owning group and all of the user and group entries.
- o:\<permission\> | Sets the access ACLs for users other than the ones in the group for the file.
-
-If a file or directory already has a POSIX ACL, and the setfacl
-command is used, the additional permissions are added to the existing
-POSIX ACL or the existing rule is modified.
-
-For example, to give read and write permissions to user antony:
-
-`# setfacl -m u:antony:rw /mnt/gluster/data/testfile `
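-
-Multiple entry types can be combined in a single call, for instance (the
-group name `qa` here is only illustrative):
-
-`# setfacl -m u:antony:rw,g:qa:r /mnt/gluster/data/testfile`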
-
-##Setting Default ACLs
-
-You can apply default ACLs only to directories. They determine the
-permissions a file system object inherits from its parent
-directory when it is created.
-
-To set default ACLs
-
-You can set default ACLs on a directory using the following
-command:
-
-`# setfacl -d -m <entry_type> <directory>`
-
-Permissions must be a combination of the characters r (read), w (write), and x (execute). Specify the ACL entry_type as described below, separating multiple entry types with commas.
-
-u:*user_name:permissions*
- Sets the access ACLs for a user. Specify the user name, or the UID.
-
-g:*group_name:permissions*
- Sets the access ACLs for a group. Specify the group name, or the GID.
-
-m:*permission*
- Sets the effective rights mask. The mask is the combination of all access permissions of the owning group, and all user and group entries.
-
-o:*permissions*
- Sets the access ACLs for users other than the ones in the group for the file.
-
-For example, to set the default ACLs for the /data directory to read for
-users not in the user group:
-
-`# setfacl -d -m o::r /mnt/gluster/data`
-
-> **Note**
->
-> An access ACLs set for an individual file can override the default
-> ACLs permissions.
-
-**Effects of Default ACLs**
-
-The following are the ways in which the permissions of a directory's
-default ACLs are passed to the files and subdirectories in it (a short
-check follows the list):
-
-- A subdirectory inherits the default ACLs of the parent directory
- both as its default ACLs and as an access ACLs.
-- A file inherits the default ACLs as its access ACLs.
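-
-As an illustrative check (paths hypothetical), objects created under a
-directory carrying default ACLs pick them up automatically:
-
-    # mkdir /mnt/gluster/data/reports        # gets the default ACLs as default and access ACLs
-    # touch /mnt/gluster/data/reports/q1.txt # gets the default ACLs as its access ACLs
-    # getfacl /mnt/gluster/data/reports/q1.txt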
-
-##Retrieving POSIX ACLs
-
-You can view the existing POSIX ACLs for a file or directory.
-
-**To view existing POSIX ACLs**
-
-- View the existing access ACLs of a file using the following command:
-
-    `# getfacl <file>`
-
- For example, to view the existing POSIX ACLs for sample.jpg
-
- # getfacl /mnt/gluster/data/test/sample.jpg
- # owner: antony
- # group: antony
- user::rw-
- group::rw-
- other::r--
-
-- View the default ACLs of a directory using the following command:
-
-    `# getfacl <directory>`
-
- For example, to view the existing ACLs for /data/doc
-
- # getfacl /mnt/gluster/data/doc
- # owner: antony
- # group: antony
- user::rw-
- user:john:r--
- group::r--
- mask::r--
- other::r--
- default:user::rwx
- default:user:antony:rwx
- default:group::r-x
- default:mask::rwx
- default:other::r-x
-
-##Removing POSIX ACLs
-
-To remove all the permissions for a user, group, or others, use the
-following command:
-
-`# setfacl -x <entry_type> <file>`
-
-####setfacl entry_type Options
-
-The ACL entry_type translates to the POSIX ACL representations of owner, group, and other.
-
-Permissions must be a combination of the characters r (read), w (write), and x (execute). Specify the ACL entry_type as described below, separating multiple entry types with commas.
-
-u:*user_name*
- Sets the access ACLs for a user. Specify the user name, or the UID.
-
-g:*group_name*
- Sets the access ACLs for a group. Specify the group name, or the GID.
-
-m:*permission*
- Sets the effective rights mask. The mask is the combination of all access permissions of the owning group, and all user and group entries.
-
-o:*permissions*
- Sets the access ACLs for users other than the ones in the group for the file.
-
-For example, to remove all permissions from the user antony:
-
-`# setfacl -x u:antony /mnt/gluster/data/test-file`
-
-##Samba and ACLs
-
-If you are using Samba to access a GlusterFS FUSE mount, then POSIX ACLs
-are enabled by default. Samba has been compiled with the
-`--with-acl-support` option, so no special flags are required when
-accessing or mounting a Samba share.
-
-##NFS and ACLs
-
-Currently, GlusterFS supports POSIX ACL configuration through an NFS mount,
-i.e. the setfacl and getfacl commands work over an NFS mount.
diff --git a/doc/admin-guide/en-US/markdown/admin_Hadoop.md b/doc/admin-guide/en-US/markdown/admin_Hadoop.md
deleted file mode 100644
index 1f5e8d4ae49..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_Hadoop.md
+++ /dev/null
@@ -1,31 +0,0 @@
-#Managing Hadoop Compatible Storage
-
-GlusterFS provides compatibility for Apache Hadoop and it uses the
-standard file system APIs available in Hadoop to provide a new storage
-option for Hadoop deployments. Existing MapReduce based applications can
-use GlusterFS seamlessly. This new functionality opens up data within
-Hadoop deployments to any file-based or object-based application.
-
-##Advantages
-
-The following are the advantages of Hadoop Compatible Storage with
-GlusterFS:
-
-- Provides simultaneous file-based and object-based access within
- Hadoop.
-- Eliminates the centralized metadata server.
-- Provides compatibility with MapReduce applications; no rewrite is
-    required.
-- Provides a fault tolerant file system.
-
-###Pre-requisites
-
-The following are the prerequisites for installing Hadoop Compatible
-Storage (a quick check follows the list):
-
-- Java Runtime Environment
-- getfattr - command line utility
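-
-A quick, illustrative way to confirm these prerequisites are present on a
-node (command names assumed to be on the PATH):
-
-    # java -version
-    # command -v getfattr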
-
-##Installing, and Configuring Hadoop Compatible Storage
-
-See the detailed instruction set at https://forge.gluster.org/hadoop/pages/ConfiguringHadoop2
diff --git a/doc/admin-guide/en-US/markdown/admin_console.md b/doc/admin-guide/en-US/markdown/admin_console.md
deleted file mode 100644
index 126b7e2064f..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_console.md
+++ /dev/null
@@ -1,50 +0,0 @@
-##Using the Gluster Console Manager – Command Line Utility
-
-The Gluster Console Manager is a single command line utility that
-simplifies configuration and management of your storage environment. The
-Gluster Console Manager is similar to the LVM (Logical Volume Manager)
-CLI or ZFS Command Line Interface, but across multiple storage servers.
-You can use the Gluster Console Manager online, while volumes are
-mounted and active. Gluster automatically synchronizes volume
-configuration information across all Gluster servers.
-
-Using the Gluster Console Manager, you can create new volumes, start
-volumes, and stop volumes, as required. You can also add bricks to
-volumes, remove bricks from existing volumes, as well as change
-translator settings, among other operations.
-
-You can also use the commands to create scripts for automation, as well
-as use the commands as an API to allow integration with third-party
-applications.
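-
-For instance, a minimal sketch of driving the CLI from a script (the volume
-name `testvol` is illustrative); the output can then be parsed or logged by
-the calling script:
-
-    # gluster peer status
-    # gluster volume info testvol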
-
-###Running the Gluster Console Manager
-
-You can run the Gluster Console Manager on any GlusterFS server either
-by invoking the commands or by running the Gluster CLI in interactive
-mode. You can also use the gluster command remotely using SSH.
-
-- To run commands directly:
-
-    ` # gluster <command> `
-
- For example:
-
- ` # gluster peer status `
-
-- To run the Gluster Console Manager in interactive mode
-
-    `# gluster`
-
-    You can execute gluster commands from the Console Manager prompt:
-
-    ` gluster> `
-
-    For example, to view the status of the peer servers:
-
-        # gluster
-        gluster> peer status
-
-    This displays the status of the peers.
-
-
diff --git a/doc/admin-guide/en-US/markdown/admin_directory_Quota.md b/doc/admin-guide/en-US/markdown/admin_directory_Quota.md
deleted file mode 100644
index 402ac5e4fcc..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_directory_Quota.md
+++ /dev/null
@@ -1,219 +0,0 @@
-#Managing Directory Quota
-
-Directory quotas in GlusterFS allow you to set limits on the usage of disk
-space by directories or volumes. Storage administrators can control
-the disk space utilization at the directory and/or volume level in
-GlusterFS by setting limits on allocatable disk space at any level in
-the volume and directory hierarchy. This is particularly useful in cloud
-deployments to facilitate a utility billing model.
-
-> **Note**
-> For now, only hard limits are supported. The limit cannot be
-> exceeded, and attempts to use more disk space or inodes beyond the set
-> limit are denied.
-
-System administrators can also monitor the resource utilization to limit
-the storage for the users depending on their role in the organization.
-
-You can set the quota at the following levels:
-
-- **Directory level** – limits the usage at the directory level
-- **Volume level** – limits the usage at the volume level
-
-> **Note**
-> You can set the disk limit on the directory even if it is not created.
-> The disk limit is enforced immediately after creating that directory.
-
-##Enabling Quota
-
-You must enable Quota to set disk limits.
-
-**To enable quota:**
-
-- Use the following command to enable quota:
-
-        # gluster volume quota <VOLNAME> enable
-
- For example, to enable quota on the test-volume:
-
- # gluster volume quota test-volume enable
- Quota is enabled on /test-volume
-
-##Disabling Quota
-
-You can disable Quota, if needed.
-
-**To disable quota:**
-
-- Use the following command to disable quota:
-
-        # gluster volume quota <VOLNAME> disable
-
- For example, to disable quota translator on the test-volume:
-
- # gluster volume quota test-volume disable
- Quota translator is disabled on /test-volume
-
-##Setting or Replacing Disk Limit
-
-You can create new directories in your storage environment and set the
-disk limit or set disk limit for the existing directories. The directory
-name should be relative to the volume with the export directory/mount
-being treated as "/".
-
-**To set or replace disk limit:**
-
-- Set the disk limit using the following command:
-
-        # gluster volume quota <VOLNAME> limit-usage /<directory> <limit-value>
-
- For example, to set limit on data directory on the test-volume where
- data is a directory under the export directory:
-
- # gluster volume quota test-volume limit-usage /data 10GB
- Usage limit has been set on /data
-
- > **Note**
- > In a multi-level directory hierarchy, the strictest disk limit
- > will be considered for enforcement.
-
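-    For instance, a sketch of nested limits (directory names illustrative);
-    with both limits below in place, writes under /data/archive are capped
-    by the stricter 2GB limit:
-
-        # gluster volume quota test-volume limit-usage /data 10GB
-        # gluster volume quota test-volume limit-usage /data/archive 2GB
-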
-##Displaying Disk Limit Information
-
-You can display disk limit information on all the directories on which
-the limit is set.
-
-**To display disk limit information:**
-
-- Display disk limit information of all the directories on which a limit
-    is set, using the following command:
-
-        # gluster volume quota <VOLNAME> list
-
- For example, to see the set disks limit on the test-volume:
-
- # gluster volume quota test-volume list
- /Test/data 10 GB 6 GB
- /Test/data1 10 GB 4 GB
-
-- Display disk limit information on a particular directory on which a
-    limit is set, using the following command:
-
-        # gluster volume quota <VOLNAME> list <directory>
-
- For example, to view the set limit on /data directory of test-volume:
-
- # gluster volume quota test-volume list /data
- /Test/data 10 GB 6 GB
-
-###Displaying Quota Limit Information Using the df Utility
-
-You can create a report of the disk usage using the df utility by taking quota limits into consideration. To generate a report, run the following command:
-
- # gluster volume set VOLNAME quota-deem-statfs on
-
-In this case, the total disk space of the directory is taken as the quota hard limit set on the directory of the volume.
-
->**Note**
->The default value for quota-deem-statfs is off. However, it is recommended to set quota-deem-statfs to on.
-
-The following example displays the disk usage when quota-deem-statfs is off:
-
- # gluster volume set test-volume features.quota-deem-statfs off
- volume set: success
- # gluster volume quota test-volume list
- Path Hard-limit Soft-limit Used Available
- -----------------------------------------------------------
- / 300.0GB 90% 11.5GB 288.5GB
- /John/Downloads 77.0GB 75% 11.5GB 65.5GB
-
-Disk usage for volume test-volume as seen on client1:
-
- # df -hT /home
- Filesystem Type Size Used Avail Use% Mounted on
- server1:/test-volume fuse.glusterfs 400G 12G 389G 3% /home
-
-The following example displays the disk usage when quota-deem-statfs is on:
-
- # gluster volume set test-volume features.quota-deem-statfs on
- volume set: success
- # gluster vol quota test-volume list
- Path Hard-limit Soft-limit Used Available
- -----------------------------------------------------------
- / 300.0GB 90% 11.5GB 288.5GB
- /John/Downloads 77.0GB 75% 11.5GB 65.5GB
-
-Disk usage for volume test-volume as seen on client1:
-
- # df -hT /home
- Filesystem Type Size Used Avail Use% Mounted on
- server1:/test-volume fuse.glusterfs 300G 12G 289G 4% /home
-
-When the quota-deem-statfs option is set to on, the hard limit set on a directory is reported to the user as the total disk space available on it.
-
-##Updating Memory Cache Size
-
-### Setting Timeout
-
-For performance reasons, quota caches the directory sizes on the client. You
-can set a timeout indicating the maximum valid duration of directory sizes
-in the cache, from the time they are populated.
-
-For example, if there are multiple clients writing to a single
-directory, there are chances that some other client writes until the
-quota limit is exceeded. However, this new file size may not be
-reflected on the client until the size entry in the cache becomes stale
-because of the timeout. If writes happen on this client during this
-duration, they are allowed even though they lead to exceeding the
-quota limits, since the size in the cache is not in sync with the actual size.
-When the timeout happens, the size in the cache is updated from the servers,
-will be in sync, and no further writes will be allowed. A timeout of zero
-forces fetching of directory sizes from the server for every operation that
-modifies file data and effectively disables directory size caching
-on the client side.
-
-**To update the memory cache size:**
-
-- Use the following command to update the memory cache size:
-
-        # gluster volume set <VOLNAME> features.quota-timeout <seconds>
-
- For example, to update the memory cache size for every 5 seconds on
- test-volume:
-
- # gluster volume set test-volume features.quota-timeout 5
- Set volume successful
-
-##Setting Alert Time
-
-Alert time is the frequency at which you want your usage information to be logged after you reach the soft limit.
-
-**To set the alert time:**
-
-- Use the following command to set the alert time:
-
- # gluster volume quota VOLNAME alert-time time
-
- >**Note**
- >
- >The default alert-time is one week.
-
- For example, to set the alert time to one day:
-
- # gluster volume quota test-volume alert-time 1d
- volume quota : success
-
-##Removing Disk Limit
-
-You can remove a set disk limit if you no longer want quota enforcement on a directory.
-
-**To remove disk limit:**
-
-- Use the following command to remove the disk limit set on a particular directory:
-
-        # gluster volume quota <VOLNAME> remove /<directory>
-
- For example, to remove the disk limit on /data directory of
- test-volume:
-
- # gluster volume quota test-volume remove /data
- Usage limit set on /data is removed
diff --git a/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md b/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md
deleted file mode 100644
index 38c1f6725b8..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md
+++ /dev/null
@@ -1,264 +0,0 @@
-# Distributed Geo-Replication in glusterfs-3.5
-
-This is an admin how-to guide for the new distributed geo-replication released as part of glusterfs-3.5.
-
-##### Note:
-This article is targeted at users/admins who want to try the new geo-replication without going deep into the internals and the technology used.
-
-### How is it different from earlier geo-replication?
-
-- Up until now, in geo-replication, only one of the nodes in the master volume would participate in geo-replication. This meant that all the data syncing was taken care of by only one node, while the other nodes in the cluster would sit idle (not participate in data syncing). With distributed geo-replication, each node of the master volume takes the responsibility of syncing the data present on that node. In case of a replicate configuration, one of the nodes would 'Active'ly sync the data while the other node of the replica pair would be 'Passive'. The 'Passive' node only becomes 'Active' when the 'Active' pair goes down. This way the new geo-rep leverages all the nodes in the volume and removes the bottleneck of syncing from one single node.
-- A new change detection mechanism is the other thing that has been improved with the new geo-rep. So far geo-rep used to crawl through the glusterfs file system to figure out the files that need to be synced. Because crawling the filesystem can be an expensive operation, this used to be a major performance bottleneck. With distributed geo-rep, all the files that need to be synced are identified through the changelog xlator. The changelog xlator journals all the fops that modify a file, and these journals are then consumed by geo-rep to effectively identify the files that need to be synced.
-- A new syncing method, tar+ssh, has been introduced to improve the performance of a few specific data sets. You can switch between the rsync and tar+ssh syncing methods via the CLI to suit your data set (see the sketch below). tar+ssh is better suited for data sets which have a large number of small files.
-
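-For example, a sketch of switching an existing session to the tar+ssh method
-(volume and host names are illustrative; the same `config use-tarssh` command
-is described in the config section below):
-
-```sh
-gluster volume geo-replication mastervol slavehost::slavevol config use-tarssh true
-```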
-
-### Using Distributed geo-replication:
-
-#### Prerequisites:
-- Password-less SSH should be set up from at least one node in the master volume to one node in the slave volume. The geo-rep create command should be executed from this node, which has password-less ssh set up to the slave.
-
-- Unlike previous versions, the slave **must** be a gluster volume; the slave cannot be a directory. Both the master and slave volumes should be created and started before creating the geo-rep session.
-
-#### Creating secret pem pub file
-- Execute the below command from the node where you set up the password-less ssh to the slave. This creates the secret pem pub file, which contains the RSA keys of all the nodes in the master volume. When the geo-rep create command is executed, glusterd uses this file to establish the geo-rep specific ssh connections.
-```sh
-gluster system:: execute gsec_create
-```
-
-#### Creating geo-replication session.
-Create a geo-rep session between the master and slave volume using the following command. The node on which this command is executed and the <slave_host> specified in the command should have password-less ssh set up between them. The push-pem option uses the secret pem pub file created earlier and establishes geo-rep specific password-less ssh between each node in the master and each node of the slave.
-```sh
-gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> create push-pem [force]
-```
-
-If the total available size of the slave volume is less than the total size of the master, the command will throw an error message. In such cases the 'force' option can be used.
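-
-For example, a sketch of creating the session with the force option when the slave is smaller than the master (volume and host names are illustrative):
-```sh
-gluster volume geo-replication mastervol slavehost::slavevol create push-pem force
-```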
-
-In use cases where the RSA keys of the nodes in the master volume are distributed to the slave nodes through an external agent, and slave-side verifications such as:
-- if ssh port 22 is open in slave
-- has proper passwordless ssh login setup
-- slave volume is created and is empty
-- if slave has enough memory
-are taken care of by the external agent, the following command can be used to create the geo-replication session:
-```sh
-gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> create no-verify [force]
-```
-In this case the distribution of the master nodes' RSA keys to the slave nodes does not happen, and the above-mentioned slave verifications are not performed; these two things have to be taken care of externally.
-
-### Creating Non Root Geo-replication session
-
-`mountbroker` is a new service of glusterd. This service allows an
-unprivileged process to own a GlusterFS mount by registering a label
-(and DSL (Domain-specific language) options ) with glusterd through a
-glusterd volfile. Using CLI, you can send a mount request to glusterd to
-receive an alias (symlink) of the mounted volume.
-
-Upon a request from the agent, the unprivileged slave agents use the
-mountbroker service of glusterd to set up an auxiliary gluster mount for
-the agent in a special environment which ensures that the agent is only
-allowed access with special parameters that provide administrative
-level access to the particular volume.
-
-**To setup an auxiliary gluster mount for the agent**:
-
-1. In all Slave nodes, Create a new group. For example, `geogroup`
-
-2. In all Slave nodes, create an unprivileged account. For example, `geoaccount`. Make it a member of `geogroup`.
-
-3. In all Slave nodes, Create a new directory owned by root and with permissions *0711.* For example, create mountbroker-root directory `/var/mountbroker-root`
-
-4. In any one of the Slave nodes, run the following commands to add options to the glusterd vol file (`/etc/glusterfs/glusterd.vol`
-   in RPM installations and `/usr/local/etc/glusterfs/glusterd.vol` in source installations).
-
- gluster system:: execute mountbroker opt mountbroker-root /var/mountbroker-root
- gluster system:: execute mountbroker opt geo-replication-log-group geogroup
- gluster system:: execute mountbroker opt rpc-auth-allow-insecure on
-
-5. In any one of Slave node, Add Mountbroker user to glusterd vol file using,
-
- ```sh
- gluster system:: execute mountbroker user geoaccount slavevol
- ```
-
-Where `slavevol` is Slave Volume name.
-
-If you host multiple slave volumes on the Slave, repeat this for each of them, adding the corresponding user/volume options to the volfile using
-
- ```sh
- gluster system:: execute mountbroker user geoaccount2 slavevol2
- gluster system:: execute mountbroker user geoaccount3 slavevol3
- ```
-
-To add multiple volumes per mountbroker user,
-
- ```sh
- gluster system:: execute mountbroker user geoaccount1 slavevol11,slavevol12,slavevol13
- gluster system:: execute mountbroker user geoaccount2 slavevol21,slavevol22
- gluster system:: execute mountbroker user geoaccount3 slavevol31
- ```
-
-6. Restart `glusterd` service on all the Slave nodes
-
-7. Set up password-less SSH from one of the master nodes to the user on one of the slave nodes. For example, to geoaccount.
-
-8. Create a geo-replication relationship between master and slave to the user by running the following command on the master node:
- For example,
-
- ```sh
- gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> create push-pem [force]
- ```
-
-9. On the slave node which is used to create the relationship, run `/usr/libexec/glusterfs/set_geo_rep_pem_keys.sh` as root with the user name, master volume name, and slave volume name as the arguments.
-
- ```sh
- /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh <mountbroker_user> <master_volume> <slave_volume>
- ```
-
-### Create and mount meta volume
-NOTE:
-___
-If the shared meta volume is already created and mounted at '/var/run/gluster/shared_storage'
-as part of nfs or snapshot, please skip to the section 'Configure meta volume with geo-replication'.
-___
-
-A 3-way replicated common gluster meta-volume should be configured and is shared
-by nfs, snapshot and geo-replication. The name of the meta-volume should be
-'gluster_shared_storage' and should be mounted at '/var/run/gluster/shared_storage/'.
-
-The meta volume needs to be configured with geo-replication to better handle
-rename and other consistency issues in geo-replication during brick/node down
-scenarios when master volume is configured with EC(Erasure Code)/AFR.
-Following are the steps to configure meta volume
-
-Create a 3 way replicated meta volume in the master cluster with all three bricks from different nodes as follows.
-
- ```sh
- gluster volume create gluster_shared_storage replica 3 <host1>:<brick_path> <host2>:<brick_path> <host3>:<brick_path>
- ```
-
-Start the meta volume as follows.
-
- ```sh
- gluster volume start <meta_vol>
- ```
-
-Mount the meta volume as follows in all the master nodes.
- ```sh
- mount -t glusterfs <master_host>:gluster_shared_storage /var/run/gluster/shared_storage
- ```
-
-###Configure meta volume with geo-replication session as follows.
-
- ```sh
- gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config use_meta_volume true
- # If Mountbroker Setup,
- gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> config use_meta_volume true
- ```
-
-#### Starting a geo-rep session
-There is no change in this command from previous versions to this version.
-
- ```sh
- gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> start
- # If Mountbroker Setup,
- gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> start
- ```
-
-This command actually starts the session, meaning the gsyncd monitor process will be started, which in turn spawns gsyncd worker processes whenever required. It also turns on the changelog xlator (if not in the ON state already), which starts recording all the changes on each of the glusterfs bricks. If the master is empty during geo-rep start, the change detection mechanism will be changelog; otherwise it will be xsync (the changes are identified by crawling through the filesystem). Later, when the initial data is synced to the slave, the change detection mechanism will be set to changelog.
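-
-To check which change detection mechanism a session is currently using, the config interface described later can be queried; a sketch:
-
- ```sh
- gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config change-detector
- ```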
-
-#### Status of geo-replication
-
-gluster now has variants of the status command.
-
- ```sh
- gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> status
- # If Mountbroker Setup,
- gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> status
- ```
-
-This displays the status of session from each brick of the master to each brick of the slave node.
-
-If you want more detailed status, then run 'status detail'
-
- ```sh
- gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> status detail
- # If Mountbroker Setup,
- gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> status detail
- ```
-
-This command displays extra information such as total files synced, files that need to be synced, pending deletes, etc.
-
-#### Stopping geo-replication session
-
-This command stops all geo-rep related processes, i.e. the gsyncd monitor and worker processes. Note that the changelog will **not** be turned off with this command.
-
- ```sh
- gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> stop [force]
- # If Mountbroker Setup,
- gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> stop [force]
- ```
-
-The force option is to be used when one of the nodes (or glusterd on one of the nodes) is down. Once stopped, the session can be restarted at any time. Note that upon restarting the session, the change detection mechanism falls back to xsync mode. This happens even though the changelog keeps generating journals while the geo-rep session is stopped.
-
-#### Deleting geo-replication session
-
-Now you can delete the glusterfs geo-rep session. This will delete all the config data associated with the geo-rep session.
-
- ```sh
- gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> delete
- # If Mountbroker Setup,
- gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> delete
- ```
-
-This deletes all the gsyncd config files on each of the nodes. It returns a failure if any of the nodes is down. And unlike geo-rep stop, there is no 'force' option for this command.
-
-#### Changing the config values
-
-There are some configuration values which can be changed using the CLI. You can see all the current config values with the following command.
-
- ```sh
- gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config
- # If Mountbroker Setup,
- gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> config
- ```
-
-You can also check just one of them, such as log-file or change-detector:
-
- ```sh
- gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config log-file
- # If Mountbroker Setup,
- gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> config log-file
-
- gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config change-detector
- # If Mountbroker Setup,
- gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> config change-detector
-
- gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config working-dir
- # If Mountbroker Setup,
- gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> config working-dir
- ```
-
-To set a new value, just provide it with the config key. Note that not all the config values are allowed to change; some cannot be modified.
-
- ```sh
- gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config change-detector xsync
- # If Mountbroker Setup,
- gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> config change-detector xsync
- ```
-
-Make sure you provide a proper value for the config key. If you have a data set with a large number of small files, you can use tar+ssh as the syncing method. Note that, if the geo-rep session is running, this restarts gsyncd.
-
- ```sh
- gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config use-tarssh true
- # If Mountbroker Setup,
- gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> config use-tarssh true
- ```
-
-Resetting these values to their defaults is also simple.
-
- ```sh
- gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config \!use-tarssh
- # If Mountbroker Setup,
- gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> config \!use-tarssh
- ```
-
-That makes the config key (use-tarssh in this case) fall back to its default value.
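-
-To verify, the key can be queried again and should now report its default value (a sketch, using the same placeholder convention):
-
- ```sh
- gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config use-tarssh
- ```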
diff --git a/doc/admin-guide/en-US/markdown/admin_geo-replication.md b/doc/admin-guide/en-US/markdown/admin_geo-replication.md
deleted file mode 100644
index 6b1f5c6df93..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_geo-replication.md
+++ /dev/null
@@ -1,681 +0,0 @@
-#Managing Geo-replication
-
-Geo-replication provides a continuous, asynchronous, and incremental
-replication service from one site to another over Local Area Networks
-(LANs), Wide Area Network (WANs), and across the Internet.
-
-Geo-replication uses a master–slave model, whereby replication and
-mirroring occurs between the following partners:
-
-- **Master** – a GlusterFS volume
-
-- **Slave** – a slave which can be of the following types:
-
- - A local directory which can be represented as file URL like
- `file:///path/to/dir`. You can use shortened form, for example,
- ` /path/to/dir`.
-
- - A GlusterFS Volume - Slave volume can be either a local volume
- like `gluster://localhost:volname` (shortened form - `:volname`)
- or a volume served by different host like
- `gluster://host:volname` (shortened form - `host:volname`).
-
- > **Note**
- >
- > Both of the above types can be accessed remotely using SSH tunnel.
- > To use SSH, add an SSH prefix to either a file URL or gluster type
- > URL. For example, ` ssh://root@remote-host:/path/to/dir`
- > (shortened form - `root@remote-host:/path/to/dir`) or
- > `ssh://root@remote-host:gluster://localhost:volname` (shortened
- > from - `root@remote-host::volname`).
-
-This section introduces Geo-replication, illustrates the various
-deployment scenarios, and explains how to configure the system to
-provide replication and mirroring in your environment.
-
-##Replicated Volumes vs Geo-replication
-
-The following table lists the difference between replicated volumes and
-geo-replication:
-
- Replicated Volumes | Geo-replication
- --- | ---
- Mirrors data across clusters | Mirrors data across geographically distributed clusters
- Provides high-availability | Ensures backing up of data for disaster recovery
- Synchronous replication (each and every file operation is sent across all the bricks) | Asynchronous replication (checks for the changes in files periodically and syncs them on detecting differences)
-
-##Preparing to Deploy Geo-replication
-
-This section provides an overview of the Geo-replication deployment
-scenarios, describes how you can check the minimum system requirements,
-and explores common deployment scenarios.
-
-##Exploring Geo-replication Deployment Scenarios
-
-Geo-replication provides an incremental replication service over Local
-Area Networks (LANs), Wide Area Network (WANs), and across the Internet.
-This section illustrates the most common deployment scenarios for
-Geo-replication, including the following:
-
-- Geo-replication over LAN
-- Geo-replication over WAN
-- Geo-replication over the Internet
-- Multi-site cascading Geo-replication
-
-**Geo-replication over LAN**
-
-You can configure Geo-replication to mirror data over a Local Area
-Network.
-
-![ Geo-replication over LAN ][]
-
-**Geo-replication over WAN**
-
-You can configure Geo-replication to replicate data over a Wide Area
-Network.
-
-![ Geo-replication over WAN ][]
-
-**Geo-replication over Internet**
-
-You can configure Geo-replication to mirror data over the Internet.
-
-![ Geo-replication over Internet ][]
-
-**Multi-site cascading Geo-replication**
-
-You can configure Geo-replication to mirror data in a cascading fashion
-across multiple sites.
-
-![ Multi-site cascading Geo-replication ][]
-
-##Geo-replication Deployment Overview
-
-Deploying Geo-replication involves the following steps:
-
-1. Verify that your environment matches the minimum system requirement.
-2. Determine the appropriate deployment scenario.
-3. Start Geo-replication on master and slave systems, as required.
-
-##Checking Geo-replication Minimum Requirements
-
-Before deploying GlusterFS Geo-replication, verify that your systems
-match the minimum requirements.
-
-The following table outlines the minimum requirements for both master
-and slave nodes within your environment:
-
- Component | Master | Slave
- --- | --- | ---
- Operating System | GNU/Linux | GNU/Linux
- Filesystem | GlusterFS 3.2 or higher | GlusterFS 3.2 or higher (GlusterFS needs to be installed, but does not need to be running), ext3, ext4, or XFS (any other POSIX compliant file system would work, but has not been tested extensively)
- Python | Python 2.4 (with ctypes external module), or Python 2.5 (or higher) | Python 2.4 (with ctypes external module), or Python 2.5 (or higher)
- Secure shell | OpenSSH version 4.0 (or higher) | SSH2-compliant daemon
- Remote synchronization | rsync 3.0.7 or higher | rsync 3.0.7 or higher
- FUSE | GlusterFS supported versions | GlusterFS supported versions
-
-##Setting Up the Environment for Geo-replication
-
-**Time Synchronization**
-
-- On the bricks of a geo-replication master volume, all the servers' time
-    must be uniform. It is recommended to set up an NTP (Network Time
-    Protocol) service to keep the bricks synchronized in time and avoid
-    out-of-sync effects (a minimal sketch follows the example below).
-
- For example: In a Replicated volume where brick1 of the master is at
- 12.20 hrs and brick 2 of the master is at 12.10 hrs with 10 minutes
- time lag, all the changes in brick2 between this period may go
- unnoticed during synchronization of files with Slave.
-
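-    A minimal, distribution-dependent sketch of keeping the brick servers'
-    clocks in sync with NTP (commands are illustrative):
-
-        # ntpdate -u pool.ntp.org    # one-off time synchronization
-        # service ntpd start         # keep time synchronized continuously
-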
-**To setup Geo-replication for SSH**
-
-Password-less login has to be set up between the host machine (where
-geo-replication Start command will be issued) and the remote machine
-(where slave process should be launched through SSH).
-
-1. On the node where geo-replication sessions are to be set up, run the
- following command:
-
- # ssh-keygen -f /var/lib/glusterd/geo-replication/secret.pem
-
- Press Enter twice to avoid passphrase.
-
-2. Run the following command on the master for all the slave hosts:
-
-        # ssh-copy-id -i /var/lib/glusterd/geo-replication/secret.pem.pub <user>@<slave_host>
-
-##Setting Up the Environment for a Secure Geo-replication Slave
-
-You can configure a secure slave using SSH so that master is granted a
-restricted access. With GlusterFS, you need not specify configuration
-parameters regarding the slave on the master-side configuration. For
-example, the master does not require the location of the rsync program
-on slave but the slave must ensure that rsync is in the PATH of the user
-which the master connects using SSH. The only information that master
-and slave have to negotiate are the slave-side user account, slave's
-resources that master uses as slave resources, and the master's public
-key. Secure access to the slave can be established using the following
-options:
-
-- Restricting Remote Command Execution
-
-- Using `Mountbroker` for Slaves
-
-- Using IP based Access Control
-
-**Backward Compatibility**
-
-Your existing Geo-replication environment will work with GlusterFS,
-except for the following:
-
-- The process of secure reconfiguration affects only the glusterfs
- instance on slave. The changes are transparent to master with the
- exception that you may have to change the SSH target to an
- unprivileged account on slave.
-
-- The following are some exceptions where this might not work:
-
- - Geo-replication URLs which specify the slave resource when
- configuring master will include the following special
- characters: space, \*, ?, [;
-
- - Slave must have a running instance of glusterd, even if there is
- no gluster volume among the mounted slave resources (that is,
- file tree slaves are used exclusively).
-
-### Restricting Remote Command Execution
-
-If you restrict remote command execution, then the Slave audits commands
-coming from the master, and only the commands related to the given
-geo-replication session are allowed. The Slave also provides access only
-to the files within the slave resource which can be read or manipulated
-by the Master.
-
-To restrict remote command execution:
-
-1. Identify the location of the gsyncd helper utility on the Slave. This
-    utility is installed in `PREFIX/libexec/glusterfs/gsyncd`, where
-    PREFIX is a compile-time parameter of glusterfs (the `--prefix=PREFIX`
-    option passed to the configure script), with the common
-    values `/usr`, `/usr/local`, and `/opt/glusterfs/glusterfs_version`.
-
-2. Ensure that commands invoked from the master on the slave are passed
-    through the slave's gsyncd utility.
-
- You can use either of the following two options:
-
- - Set gsyncd with an absolute path as the shell for the account
- which the master connects through SSH. If you need to use a
- privileged account, then set it up by creating a new user with
- UID 0.
-
- - Setup key authentication with command enforcement to gsyncd. You
- must prefix the copy of master's public key in the Slave
- account's `authorized_keys` file with the following command:
-
- `command=<path to gsyncd>`.
-
- For example,
- `command="PREFIX/glusterfs/gsyncd" ssh-rsa AAAAB3Nza....`
-
-### Using Mountbroker for Slaves
-
-`mountbroker` is a new service of glusterd. This service allows an
-unprivileged process to own a GlusterFS mount by registering a label
-(and DSL (Domain-specific language) options ) with glusterd through a
-glusterd volfile. Using CLI, you can send a mount request to glusterd to
-receive an alias (symlink) of the mounted volume.
-
-Upon a request from the agent, the unprivileged slave agents use the
-mountbroker service of glusterd to set up an auxiliary gluster mount for
-the agent in a special environment which ensures that the agent is only
-allowed access with special parameters that provide administrative
-level access to the particular volume.
-
-**To setup an auxiliary gluster mount for the agent**:
-
-1. In all Slave nodes, create a new group. For example, `geogroup`.
-
-2. In all Slave nodes, create an unprivileged account. For example, `geoaccount`. Make it a
-   member of `geogroup`.
-
-3. In all Slave nodes, create a new directory owned by root and with permissions *0711.*
-   For example, create a mountbroker-root directory
-   `/var/mountbroker-root`.
-
-4. In any one of the Slave nodes, run the following commands to add options to the glusterd vol
-   file (`/etc/glusterfs/glusterd.vol` in RPM installations and
-   `/usr/local/etc/glusterfs/glusterd.vol` in source installations).
-
- ```sh
- gluster system:: execute mountbroker opt mountbroker-root /var/mountbroker-root
- gluster system:: execute mountbroker opt geo-replication-log-group geogroup
- gluster system:: execute mountbroker opt rpc-auth-allow-insecure on
- ```
-
-5. In any one of the Slave node, Add Mountbroker user to glusterd vol file using,
-
- ```sh
- gluster system:: execute mountbroker user geoaccount slavevol
- ```
-
- where slavevol is the Slave Volume name
-
- If you host multiple slave volumes on Slave, for each of them and add the following options to the
-volfile using,
-
- ```sh
- gluster system:: execute mountbroker user geoaccount2 slavevol2
- gluster system:: execute mountbroker user geoaccount3 slavevol3
- ```
-
- To add multiple volumes per mountbroker user,
-
- ```sh
- gluster system:: execute mountbroker user geoaccount1 slavevol11,slavevol12,slavevol13
- gluster system:: execute mountbroker user geoaccount2 slavevol21,slavevol22
- gluster system:: execute mountbroker user geoaccount3 slavevol31
- ```
-6. Restart `glusterd` service on all Slave nodes.
-
-7. Set up password-less SSH from one of the master nodes to the user on one of the slave nodes.
-For example, to geoaccount.
-
-8. Create a geo-replication relationship between master and slave to the user by running the
-following command on the master node:
-
- ```sh
- gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> create push-pem [force]
- ```
-
-9. On the slave node which is used to create the relationship, run `/usr/libexec/glusterfs/set_geo_rep_pem_keys.sh`
-as root with the user name, master volume name, and slave volume name as the arguments.
-
- ```sh
- /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh <mountbroker_user> <master_volume> <slave_volume>
- ```
-
-### Using IP based Access Control
-
-You can use the IP-based access control method to provide access control for
-the slave resources using IP addresses. You can use this method for both gluster
-slaves and file tree slaves, but in this section we focus on file tree
-slaves using this method.
-
-To set access control based on IP address for file tree slaves:
-
-1. Set a general restriction for accessibility of file tree resources:
-
- # gluster volume geo-replication '/*' config allow-network ::1,127.0.0.1
-
- This will refuse all requests for spawning slave agents except for
- requests initiated locally.
-
-2. If you want to lease the file tree at `/data/slave-tree` to Master,
-    enter the following command:
-
-        # gluster volume geo-replication '/data/slave-tree' config allow-network MasterIP
-
- `MasterIP` is the IP address of Master. The slave agent spawn
- request from master will be accepted if it is executed at
- `/data/slave-tree`.
-
-If the Master side network configuration does not enable the Slave to
-recognize the exact IP address of Master, you can use CIDR notation to
-specify a subnet instead of a single IP address as MasterIP or even
-comma-separated lists of CIDR subnets.
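-
-For example, a hypothetical rule (the file tree path and subnets are placeholders) that
-accepts slave agent spawn requests for that tree from two master subnets:
-
-    # gluster volume geo-replication '/data/slave-tree' config allow-network 192.168.10.0/24,10.1.1.0/24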
-
-If you want to extend IP based access control to gluster slaves, use the
-following command:
-
- # gluster volume geo-replication '*' config allow-network ::1,127.0.0.1
-
-##Starting Geo-replication
-
-This section describes how to configure and start Gluster
-Geo-replication in your storage environment, and verify that it is
-functioning correctly.
-
-###Starting Geo-replication
-
-To start Gluster Geo-replication
-
-- Use the following command to start geo-replication between the hosts:
-
-    # gluster volume geo-replication <MASTER> <SLAVE> start
-
- For example:
-
- # gluster volume geo-replication Volume1 example.com:/data/remote_dir start
- Starting geo-replication session between Volume1
- example.com:/data/remote_dir has been successful
-
- > **Note**
- >
- > You may need to configure the service before starting Gluster
- > Geo-replication.
-
-###Verifying Successful Deployment
-
-You can use the gluster command to verify the status of Gluster
-Geo-replication in your environment.
-
-**To verify the status of Gluster Geo-replication**
-
-- Verify the status by issuing the following command on host:
-
-    # gluster volume geo-replication <MASTER> <SLAVE> status
-
- For example:
-
-    # gluster volume geo-replication Volume1 example.com:/data/remote_dir status
- MASTER SLAVE STATUS
- ______ ______________________________ ____________
- Volume1 root@example.com:/data/remote_dir Starting....
-
-###Displaying Geo-replication Status Information
-
-You can display status information about a specific geo-replication
-master session, or a particular master-slave session, or all
-geo-replication sessions, as needed.
-
-**To display geo-replication status information**
-
-- Use the following command to display information of all geo-replication sessions:
-
-    # gluster volume geo-replication status
-
-- Use the following command to display information of a particular master slave session:
-
-    # gluster volume geo-replication <MASTER> <SLAVE> status
-
- For example, to display information of Volume1 and
- example.com:/data/remote\_dir
-
- # gluster volume geo-replication Volume1 example.com:/data/remote_dir status
-
- The status of the geo-replication between Volume1 and
- example.com:/data/remote\_dir is displayed.
-
-- Display information of all geo-replication sessions belonging to a
- master
-
- # gluster volume geo-replication MASTER status
-
- For example, to display information of Volume1
-
-    # gluster volume geo-replication Volume1 status
-
- The status of a session could be one of the following:
-
-- **Initializing**: This is the initial phase of the Geo-replication session;
- it remains in this state for a minute in order to make sure no abnormalities are present.
-
-- **Created**: The geo-replication session is created, but not started.
-
-- **Active**: The gsync daemon in this node is active and syncing the data.
-
-- **Passive**: A replica pair of the active node. The data synchronization is handled by the active node.
-  Hence, this node does not sync any data.
-
-- **Faulty**: The geo-replication session has experienced a problem, and the issue needs to be
- investigated further.
-
-- **Stopped**: The geo-replication session has stopped, but has not been deleted.
-
- The Crawl Status can be one of the following:
-
-- **Changelog Crawl**: The changelog translator has produced the changelog, which is being consumed
-  by the gsyncd daemon to sync data.
-
-- **Hybrid Crawl**: The gsyncd daemon is crawling the glusterFS file system and generating a pseudo
-  changelog to sync data.
-
-- **Checkpoint Status**: Displays the status of the checkpoint, if set. Otherwise, it displays as N/A.
-
-##Configuring Geo-replication
-
-To configure Gluster Geo-replication
-
-- Use the following command at the Gluster command line:
-
-    # gluster volume geo-replication <MASTER> <SLAVE> config [options]
-
- For example:
-
-    Use the following command to view the list of all option/value pairs:
-
- # gluster volume geo-replication Volume1 example.com:/data/remote_dir config
-
-####Configurable Options
-
-The following table provides an overview of the configurable options for a geo-replication setting:
-
- Option | Description
- --- | ---
- gluster-log-file LOGFILE | The path to the geo-replication glusterfs log file.
- gluster-log-level LOGFILELEVEL| The log level for glusterfs processes.
- log-file LOGFILE | The path to the geo-replication log file.
- log-level LOGFILELEVEL | The log level for geo-replication.
- ssh-command COMMAND | The SSH command to connect to the remote machine (the default is SSH).
- rsync-command COMMAND | The rsync command to use for synchronizing the files (the default is rsync).
- use-tarssh true | The use-tarssh command allows tar over Secure Shell protocol. Use this option to handle workloads of files that have not undergone edits.
- volume_id=UID | The command to delete the existing master UID for the intermediate/slave node.
- timeout SECONDS | The timeout period in seconds.
- sync-jobs N | The number of simultaneous files/directories that can be synchronized.
- ignore-deletes | If this option is set to 1, a file deleted on the master will not trigger a delete operation on the slave. As a result, the slave will remain as a superset of the master and can be used to recover the master in the event of a crash and/or accidental delete.
- checkpoint [LABEL&#124;now] | Sets a checkpoint with the given option LABEL. If the option is set as now, then the current time will be used as the label.
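-
-For example, a minimal sketch of setting two of the options above for the example session
-used throughout this section:
-
-    # gluster volume geo-replication Volume1 example.com:/data/remote_dir config checkpoint now
-    # gluster volume geo-replication Volume1 example.com:/data/remote_dir config log-level DEBUG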
-
-##Stopping Geo-replication
-
-You can use the gluster command to stop Gluster Geo-replication (syncing
-of data from Master to Slave) in your environment.
-
-**To stop Gluster Geo-replication**
-
-- Use the following command to stop geo-replication between the hosts:
-
-    # gluster volume geo-replication <MASTER> <SLAVE> stop
-
- For example:
-
- # gluster volume geo-replication Volume1 example.com:/data/remote_dir stop
- Stopping geo-replication session between Volume1 and
- example.com:/data/remote_dir has been successful
-
-##Restoring Data from the Slave
-
-You can restore data from the slave to the master volume, whenever the
-master volume becomes faulty for reasons like hardware failure.
-
-The example in this section assumes that you are using the Master Volume
-(Volume1) with the following configuration:
-
- machine1# gluster volume info
- Type: Distribute
- Status: Started
- Number of Bricks: 2
- Transport-type: tcp
- Bricks:
- Brick1: machine1:/export/dir16
- Brick2: machine2:/export/dir16
- Options Reconfigured:
- geo-replication.indexing: on
-
-The data is syncing from master volume (Volume1) to slave directory
-(example.com:/data/remote\_dir). To view the status of this
-geo-replication session run the following command on Master:
-
- # gluster volume geo-replication Volume1 root@example.com:/data/remote_dir status
-
-**Before Failure**
-
-Assume that the Master volume had 100 files and was mounted at
-/mnt/gluster on one of the client machines (client). Run the following
-command on Client machine to view the list of files:
-
-    client# ls /mnt/gluster | wc -l
- 100
-
-The slave directory (example.com) will have the same data as the master
-volume, which can be verified by running the following command on the slave:
-
-    example.com# ls /data/remote_dir/ | wc -l
- 100
-
-**After Failure**
-
-If one of the bricks (machine2) fails, then the status of
-Geo-replication session is changed from "OK" to "Faulty". To view the
-status of this geo-replication session run the following command on
-Master:
-
- # gluster volume geo-replication Volume1 root@example.com:/data/remote_dir status
-
-Machine2 has failed and now you can see a discrepancy in the number of files
-between master and slave. A few files will be missing from the master
-volume but will still be available on the slave, as shown below.
-
-Run the following command on Client:
-
-    client# ls /mnt/gluster | wc -l
- 52
-
-Run the following command on slave (example.com):
-
-    example.com# ls /data/remote_dir/ | wc -l
- 100
-
-**To restore data from the slave machine**
-
-1. Use the following command to stop all Master's geo-replication sessions:
-
-    # gluster volume geo-replication <MASTER> <SLAVE> stop
-
- For example:
-
- machine1# gluster volume geo-replication Volume1
- example.com:/data/remote_dir stop
-
- Stopping geo-replication session between Volume1 &
- example.com:/data/remote_dir has been successful
-
- > **Note**
- >
-    > Repeat the `# gluster volume geo-replication <MASTER> <SLAVE> stop` command on all
-    > active geo-replication sessions of the master volume.
-
-2. Use the following command to replace the faulty brick in the master:
-
-    # gluster volume replace-brick <VOLNAME> <BRICK> <NEW-BRICK> start
-
- For example:
-
- machine1# gluster volume replace-brick Volume1 machine2:/export/dir16 machine3:/export/dir16 start
- Replace-brick started successfully
-
-3. Use the following command to commit the migration of data:
-
-    # gluster volume replace-brick <VOLNAME> <BRICK> <NEW-BRICK> commit force
-
- For example:
-
- machine1# gluster volume replace-brick Volume1 machine2:/export/dir16 machine3:/export/dir16 commit force
- Replace-brick commit successful
-
-4. Use the following command to verify the migration of brick by viewing the volume info:
-
- # gluster volume info
-
- For example:
-
- machine1# gluster volume info
- Volume Name: Volume1
- Type: Distribute
- Status: Started
- Number of Bricks: 2
- Transport-type: tcp
- Bricks:
- Brick1: machine1:/export/dir16
- Brick2: machine3:/export/dir16
- Options Reconfigured:
- geo-replication.indexing: on
-
-5. Run rsync command manually to sync data from slave to master
- volume's client (mount point).
-
- For example:
-
- example.com# rsync -PavhS --xattrs --ignore-existing /data/remote_dir/ client:/mnt/gluster
-
- Verify that the data is synced by using the following command:
-
- On master volume, run the following command:
-
-        client# ls /mnt/gluster | wc -l
- 100
-
- On the Slave run the following command:
-
-        example.com# ls /data/remote_dir/ | wc -l
- 100
-
-    Now the Master volume and the Slave directory are in sync.
-
-6. Use the following command to restart geo-replication session from master to slave:
-
-    # gluster volume geo-replication <MASTER> <SLAVE> start
-
- For example:
-
- machine1# gluster volume geo-replication Volume1
- example.com:/data/remote_dir start
- Starting geo-replication session between Volume1 &
- example.com:/data/remote_dir has been successful
-
-##Best Practices
-
-**Manually Setting Time**
-
-If you have to change the time on your bricks manually, then you must
-set uniform time on all bricks. Setting time backward corrupts the
-geo-replication index, so the recommended way to set the time manually is:
-
-1. Stop geo-replication between the master and slave using the
- following command:
-
-    # gluster volume geo-replication <MASTER> <SLAVE> stop
-
-2. Stop the geo-replication indexing using the following command:
-
-    # gluster volume set <VOLNAME> geo-replication.indexing off
-
-3. Set uniform time on all bricks.
-
-4. Use the following command to restart your geo-replication session:
-
-    # gluster volume geo-replication <MASTER> <SLAVE> start
-
-**Running Geo-replication commands in one system**
-
-It is advisable to run the geo-replication commands on one of the bricks
-in the trusted storage pool. This is because the log files for the
-geo-replication session are stored on the *server* where the
-geo-replication start command is initiated, so it is easier to locate
-the log files when required.
-
-**Isolation**
-
-Geo-replication slave operation is not sandboxed as of now and is run as
-a privileged service. For security reasons, it is advised that the administrator
-create a sandbox environment (dedicated machine / dedicated virtual
-machine / chroot/container type solution) to run
-the geo-replication slave in. An enhancement in this regard will be
-available in a follow-up minor release.
-
- [ Geo-replication over LAN ]: ../images/Geo-Rep_LAN.png
- [ Geo-replication over WAN ]: ../images/Geo-Rep_WAN.png
- [ Geo-replication over Internet ]: ../images/Geo-Rep03_Internet.png
- [ Multi-site cascading Geo-replication ]: ../images/Geo-Rep04_Cascading.png
diff --git a/doc/admin-guide/en-US/markdown/admin_logging.md b/doc/admin-guide/en-US/markdown/admin_logging.md
deleted file mode 100644
index f15907bbe61..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_logging.md
+++ /dev/null
@@ -1,56 +0,0 @@
-# GlusterFS service Logs and locations
-
-The following lists the component, service, and functionality based logs in the GlusterFS server. As per the Filesystem Hierarchy Standard (FHS), all the log files are placed in the `/var/log` directory.
-
-
-##glusterd:
-
-glusterd logs are located at `/var/log/glusterfs/etc-glusterfs-glusterd.vol.log`. One glusterd log file per server. This log file also contains the snapshot and user logs.
-
-##gluster cli command:
-Gluster CLI logs are located at `/var/log/glusterfs/cmd_history.log`. Gluster commands executed on a node in a GlusterFS Trusted Storage Pool are logged in this file.
-
-##bricks:
-Brick logs are located at `/var/log/glusterfs/bricks/<path extraction of brick path>.log`. One log file per brick on the server.
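-
-As an illustration (with a hypothetical brick at `/export/brick1`), the "path extraction"
-replaces the slashes in the brick path with dashes, so that brick's log can be followed with:
-
-    tail -f /var/log/glusterfs/bricks/export-brick1.log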
-
-##rebalance:
-rebalance logs are located at `/var/log/glusterfs/VOLNAME-rebalance.log` . One log file per volume on the server.
-
-##self-heal daemon:
-Self-heal daemon logs are located at `/var/log/glusterfs/glustershd.log`. One log file per server.
-
-##quota:
-
-`/var/log/glusterfs/quotad.log`: Log of the quota daemons running on each node.
-`/var/log/glusterfs/quota-crawl.log`: Whenever quota is enabled, a file system crawl is performed and the corresponding log is stored in this file.
-`/var/log/glusterfs/quota-mount-VOLNAME.log`: An auxiliary FUSE client is mounted in `<gluster-run-dir>/VOLNAME` of glusterFS and the corresponding client logs are found in this file.
-
-One log file per server (and per volume for quota-mount).
-
-##Gluster NFS:
-
-`/var/log/glusterfs/nfs.log ` One log file per server
-
-##SAMBA Gluster:
-
-`/var/log/samba/glusterfs-VOLNAME-<ClientIP>.log` . If the client mounts this on a glusterFS server node, the actual log file or the mount point may not be found. In such a case, the mount outputs of all the glusterFS type mount operations need to be considered.
-
-##Ganesha NFS :
-`/var/log/nfs-ganesha.log`
-
-##FUSE Mount:
-`/var/log/glusterfs/<mountpoint path extraction>.log `
-
-##Geo-replication:
-
-`/var/log/glusterfs/geo-replication/<master>`
-`/var/log/glusterfs/geo-replication-slaves `
-
-##gluster volume heal VOLNAME info command:
-`/var/log/glusterfs/glfsheal-VOLNAME.log` . One log file per server on which the command is executed.
-
-##gluster-swift:
-`/var/log/messages`
-
-##SwiftKrbAuth:
-`/var/log/httpd/error_log `
diff --git a/doc/admin-guide/en-US/markdown/admin_lookup_optimization.md b/doc/admin-guide/en-US/markdown/admin_lookup_optimization.md
deleted file mode 100644
index ccab44f87bb..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_lookup_optimization.md
+++ /dev/null
@@ -1,145 +0,0 @@
-# DHT lookup optimization
-
-Distribute xlator (or DHT) has a performance penalty when dealing with negative
-lookups. This document explains the problem and optimization provided for
-alleviating the same in GlusterFS.
-
-## Negative lookups and issues surrounding them
-
-Negative lookups are lookup operations for entries that are not present in the
-volume. In other words, a lookup for a file/directory that does not exist is a negative
-lookup.
-
-DHT normally looks up an entry in the hashed subvolume first (based on the
-layout). If it is not found in the hashed location, DHT fans out a lookup across all
-of its subvolumes to ensure that the entry is not present in another subvolume.
-This behavior exists so that, if a rebalance is in progress
-and the layout on disk is temporarily out of alignment with the actual location
-of the file, the entry is still found by the fan-out lookup.
-
-Such fan out lookups are costly and typically slow down file creates. This
-especially impacts small file performance, where a large number of files are
-being added/created in quick succession to the volume.
-
-## Optimizing lookups in DHT
-
-A balanced volume is either a newly created volume that has had no bricks
-added or removed, or a volume that has undergone expansion
-(or reduction) of bricks and on which a full rebalance has been run.
-
-In such volumes, the fan out lookup behavior can be turned off to speed up
-negative lookups, as files are in their respective hashed locations (or at
-least their DHT link-to entries are present in the hashed location).
-
-With GlusterFS 3.7.2 the negative lookup fan-out behavior is optimized by not
-performing the fan-out in a balanced volume.
-
-The optimization further detects a cluster that is out of balance (when a
-fix-layout is done, or a brick is removed) and automatically turns the
-fan-out negative lookup behavior back **on**, thereby preventing duplicate entry creation
-in the volume until the volume is brought into balance again.
-
-## Configuration options to enable optimized lookups
-
-With Gluster 3.7.2 the following options are provided to enable DHT lookup
-optimization,
-
-Option: cluster.lookup-optimize
-Description: "This option if set to ON enables the optimization of -ve lookups,
-by not doing a lookup on non-hashed subvolumes for files, in case the hashed
-subvolume does not return any result. This option disregards the
-lookup-unhashed setting, when enabled."
-Default: OFF
-
-CLI command to enable this option:
- gluster volume set <volname> cluster.lookup-optimize <on/off>
-
-### Client compatibility support
-
-As the DHT xlator runs on the client stack of gluster (i.e. on the machine where the
-FUSE client/NFS server/SAMBA server runs), this configuration requires that the
-cluster and the clients are upgraded to at least version 3.7.2.
-
-When setting this option, if any Gluster brick node or connected clients are of
-an older version, the option will error out stating incompatible version
-detected in the cluster and not allow the configuration change.
-
-Older clients connecting to the cluster after this configuration option is set
-will also error out and not be able to mount the volume due to the version
-incompatibility.
-
-### Compatibility with the lookup-unhashed setting
-
-In older DHT versions, the configuration option lookup-unhashed emulated a
-similar behavior for a balanced cluster. The downside of this option is that
-if the cluster grows or becomes unbalanced, there is a risk of losing entry
-consistency. The current changes to gluster and specifically in DHT, prevent
-this inconsistency from occurring when using the new option (lookup-optimize).
-
-Additionally, if the lookup-optimize option is set, the older lookup-unhashed
-setting is ignored by DHT.
-
-## Requirements for the optimization to function
-
-When the lookup-optimize option is enabled, there are a few prerequisites
-that must be met before the option is honored by DHT. The following list provides
-these conditions and ways to meet them.
-
-1. New volume
- A new volume is a volume that has just been created and is unused or not
- started
- - For a volume that is just created
- - Prerequisite: Before starting and accessing this volume, set the lookup
- optimization to ON
- - Gotchas: All directories that are created post the above setting, will
- leverage the negative lookup optimization, except entries in the root of
- the volume.
- NOTE: The root of the volume, or the brick root on each brick of the
- volume, is already created prior to the start of the volume, or the
- ability to set this option. As a result the root of the volume gains this
- optimization only post the first full rebalance, or is treated equivalent
- to an existing directory (see (2)-(1) below).
-
-2. Existing volume
- An existing volume is one which is under use, and may have had bricks added
- or removed in its lifetime. In this scenario there are 2 cases where the
- lookup optimization behavior changes,
- Prerequisite: Enable the lookup-optimize option
- 1. New directory creation
- - All directories created beyond this point will gain the negative lookup
- optimization
- 2. Existing directories
- - Existing directories will not gain the lookup optimization
- - To enable existing directories to also gain the lookup optimization a
- full rebalance on the volume needs to be performed
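-
-For example, a minimal sketch of enabling the optimization on an existing volume (the volume
-name `testvol` is a placeholder) and then running the full rebalance that lets existing
-directories gain it:
-
-    gluster volume set testvol cluster.lookup-optimize on
-    gluster volume rebalance testvol start
-    gluster volume rebalance testvol status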
-
-The optimization is also bypassed by the code automatically in the following
-conditions,
-
-1. Brick removed
- - When a remove-brick is executed for a volume, it immediately triggers a
- rebalance to move data out of the removed bricks. In these circumstances the
- optimization is bypassed and a fan out lookup is performed for negative
- lookups.
- - Post removal of the brick, the lookup optimization would automatically kick
- in
-2. Brick added and only fix-layout is executed
- - When a brick is added and a fix-layout only is executed, the files are
- still not present in the correct hashed locations. As a result under these
- conditions the lookup optimization is bypassed by DHT.
- - A full rebalance post fix-layout would get the optimization enabled
- NOTE: Although fix-layout is deprecated, it is still present and honored,
- as a result this distinction is presented in this document. This is not an
- endorsement of fix-layout still being supported.
-
-## FAQ
-<< TBD >>
-1. How do I verify that I have a problem with negative lookups? OR
- When should I use this option?
-2. Can I roll back to an older client post enabling this optimization?
-3. How do I verify this option is working for me?
-4. What additional meta-data does this option add to the bricks?
-5. I see duplicate entries after enabling this option, what should I do?
-6. I see the following error in my client logs, help!
-7. My create performance is still poor, help!
-8. <Other suggestions welcome>
diff --git a/doc/admin-guide/en-US/markdown/admin_managing_snapshots.md b/doc/admin-guide/en-US/markdown/admin_managing_snapshots.md
deleted file mode 100644
index 672a8ceb4c6..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_managing_snapshots.md
+++ /dev/null
@@ -1,316 +0,0 @@
-Managing GlusterFS Volume Snapshots
-==========================
-
-This section describes how to perform common GlusterFS volume snapshot
-management operations.
-
-Pre-requisites
-=====================
-
-The GlusterFS volume snapshot feature is based on thinly provisioned LVM snapshots.
-To make use of the snapshot feature, a GlusterFS volume should fulfill the following
-pre-requisites:
-
-* Each brick should be on an independent thinly provisioned LVM.
-* The brick LVM should not contain any data other than the brick.
-* None of the bricks should be on a thick-provisioned LVM.
-* The gluster version should be 3.6 or above.
-
-Details of how to create thin volume can be found at the following link.
-https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/thinly_provisioned_volume_creation.html
-
-
-A few features of snapshots are:
-=============================
-
-**Crash Consistency**
-
-When a snapshot is taken at a particular point in time, it is made sure that
-the snapshot is crash consistent. When the snapshot is restored,
-the data is identical to what it was at the time the snapshot was taken.
-
-
-**Online Snapshot**
-
-When the snapshot is being taken the file system and its associated data
-continue to be available for the clients.
-
-
-**Quorum Based**
-
-The quorum feature ensures that the volume is in a good condition while bricks
-are down. Quorum is not met if any brick is down in an n-way replication where
-n <= 2. Quorum is met when m bricks are up, where m >= (n/2 + 1) if n is odd,
-and m >= n/2 with the first brick up if n is even. Snapshot creation fails
-if quorum is not met.
-
-
-**Barrier**
-
-During snapshot creation some of the fops are blocked to guarantee crash
-consistency. There is a default time-out of 2 minutes; if snapshot creation
-is not complete within that span, the fops are unbarriered. If the unbarrier happens
-before the snapshot creation is complete, the snapshot creation operation
-fails. This ensures that the snapshot is in a consistent state.
-
-
-
-Snapshot Management
-=====================
-
-
-**Snapshot creation**
-
-Syntax :
-*gluster snapshot create <snapname\> <volname\> \[no-timestamp] \[description <description\>\] \[force\]*
-
-Details :
-Creates a snapshot of a GlusterFS volume. The user can provide a snap name and a
-description to identify the snap. The description cannot be more than 1024
-characters.
-The snapshot will be created by appending a timestamp to the user-provided snap name.
-The user can override this behaviour by giving the no-timestamp flag.
-
-NOTE : To be able to take a snapshot, volume should be present and it
-should be in started state.
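-
-For example, a minimal sketch following the syntax above (the snapshot and volume names are
-placeholders):
-
-    gluster snapshot create snap1 testvol no-timestamp description "before upgrade"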
-
------------------------------------------------------------------------------
-
-**Snapshot clone**
-
-Syntax :
-*gluster snapshot clone <clonename\> <snapname\>*
-
-Details :
-Creates a clone of a snapshot. Upon successful completion, a new GlusterFS
-volume will be created from snapshot. The clone will be a space efficient clone,
-i.e, the snapshot and the clone will share the backend disk.
-
-NOTE : To be able to take a clone from snapshot, snapshot should be present
-and it should be in activated state.
-
------------------------------------------------------------------------------
-
-**Restoring snaps**
-
-Syntax :
-*gluster snapshot restore <snapname\>*
-
-Details :
-Restores an already taken snapshot of a GlusterFS volume.
-Snapshot restore is an offline activity therefore if the volume is
-online (in started state) then the restore operation will fail.
-
-Once the snapshot is restored it will not be available in the
-list of snapshots.
-
----------------------------------------------------------------------------
-
-**Deleting snaps**
-
-Syntax :
-*gluster snapshot delete \(all | <snapname\> | volume <volname\>\)*
-
-Details :
-If snapname is specified, the mentioned snapshot is deleted.
-If volname is specified, all snapshots belonging to that particular
-volume are deleted. If the keyword *all* is used, all snapshots belonging
-to the system are deleted.
-
---------------------------------------------------------------------------
-
-**Listing of available snaps**
-
-Syntax:
-*gluster snapshot list \[volname\]*
-
-Details:
-Lists all snapshots taken.
-If volname is provided, then only the snapshots belonging to
-that particular volume are listed.
-
--------------------------------------------------------------------------
-
-**Information of available snaps**
-
-Syntax:
-*gluster snapshot info \[\(snapname | volume <volname\>\)\]*
-
-Details:
-This command gives information such as snapshot name, snapshot UUID,
-time at which snapshot was created, and it lists down the snap-volume-name,
-number of snapshots already taken and number of snapshots still available
-for that particular volume, and the state of the snapshot.
-
-------------------------------------------------------------------------
-
-**Status of snapshots**
-
-Syntax:
-*gluster snapshot status \[\(snapname | volume <volname\>\)\]*
-
-Details:
-This command gives the status of the snapshot.
-The details included are the snapshot brick path, volume group (LVM details),
-status of the snapshot bricks, PID of the bricks, data percentage filled for
-the volume group to which the snapshots belong, and the total size
-of the logical volume.
-
-If snapname is specified, the status of the mentioned snapshot is displayed.
-If volname is specified, the status of all snapshots belonging to that volume
-is displayed. If neither snapname nor volname is specified, the status of all
-the snapshots present in the system is displayed.
-
-------------------------------------------------------------------------
-
-**Configuring the snapshot behavior**
-
-Syntax:
-*snapshot config \[volname\] \(\[snap-max-hard-limit <count\>\] \[snap-max-soft-limit <percent>\]\)
- | \(\[auto-delete <enable|disable\>\]\)
- | \(\[activate-on-create <enable|disable\>\]\)*
-
-Details:
-Displays and sets the snapshot config values.
-
-snapshot config without any keywords displays the snapshot config values of
-all volumes in the system. If volname is provided, then the snapshot config
-values of that volume are displayed.
-
-The snapshot config command along with keywords can be used to change the existing
-config values. If volname is provided, the config value of that volume is
-changed; otherwise it sets/changes the system limit.
-
-snap-max-soft-limit and auto-delete are global options, that will be
-inherited by all volumes in the system and cannot be set to individual volumes.
-
-The system limit takes precedence over the volume specific limit.
-
-When auto-delete feature is enabled, then upon reaching the soft-limit,
-with every successful snapshot creation, the oldest snapshot will be deleted.
-
-When auto-delete feature is disabled, then upon reaching the soft-limit,
-the user gets a warning with every successful snapshot creation.
-
-Upon reaching the hard-limit, further snapshot creations will not be allowed.
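-
-For example, a minimal sketch following the syntax above (the volume name and limit are
-placeholders): a per-volume hard limit, plus the global auto-delete behaviour:
-
-    gluster snapshot config testvol snap-max-hard-limit 100
-    gluster snapshot config auto-delete enable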
-
-activate-on-create is disabled by default. If you enable activate-on-create,
-then each further snapshot will be activated at the time of its creation.
-
--------------------------------------------------------------------------
-
-**Activating a snapshot**
-
-Syntax:
-*gluster snapshot activate <snapname\>*
-
-Details:
-Activates the mentioned snapshot.
-
-Note : By default the snapshot is activated during snapshot creation.
-
--------------------------------------------------------------------------
-
-**Deactivate a snapshot**
-
-Syntax:
-*gluster snapshot deactivate <snapname\>*
-
-Details:
-Deactivates the mentioned snapshot.
-
--------------------------------------------------------------------------
-
-**Accessing the snapshot**
-
-Snapshots can be accessed in the following ways.
-
-1) Mounting the snapshot:
-
-The snapshot can be accessed via a FUSE mount (FUSE only). To do that it has to be
-mounted first. A snapshot can be mounted via FUSE with the command below:
-
-*mount -t glusterfs <hostname>:/snaps/<snap-name>/<volume-name> <mount-path>*
-
-i.e. say "host1" is one of the peers. Let "vol" be the volume name and "my-snap"
-be the snapshot name. In this case a snapshot can be mounted via this command
-
-*mount -t glusterfs host1:/snaps/my-snap/vol /mnt/snapshot*
-
-
-2) User serviceability:
-
-Apart from the above method of mounting the snapshot, a list of available
-snapshots and the contents of each snapshot can be viewed from any of the mount
-points accessing the glusterfs volume (either FUSE or NFS or SMB). For having
-user serviceable snapshots, it has to be enabled for a volume first. User
-serviceability can be enabled for a volume using the below command.
-
-*gluster volume set <volname> features.uss enable*
-
-Once enabled, from any directory (including the root of the filesystem) an
-access point to the snapshot world is available. The access point is a hidden
-directory; cd-ing into it takes the user into the snapshot world. By
-default the hidden directory is ".snaps". Once user serviceability is enabled,
-one will be able to cd into .snaps from any directory. Doing "ls" on that
-directory shows a list of directories which are nothing but the snapshots
-present for that volume. Say there are 3 snapshots ("snap1", "snap2",
-"snap3"); then doing ls in the .snaps directory will show those 3 names as
-directory entries. They represent the state of the directory from which .snaps
-was entered, at different points in time.
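-
-For example, a minimal sketch from a client that has the volume mounted (the mount point and
-sub-directory are placeholders; the three snapshot names echo the example above):
-
-    cd /mnt/glusterfs/some/dir/.snaps
-    ls
-    snap1  snap2  snap3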
-
-NOTE: The access to the snapshots is read-only.
-
-Also, the name of the hidden directory (or the access point to the snapshot
-world) can be changed using the below command.
-
-*gluster volume set <volname> snapshot-directory <new-name>*
-
-3) Accessing from Windows:
-The glusterfs volumes can be made accessible to Windows via samba (the
-glusterfs plugin for samba helps achieve this, without having to re-export
-a fuse-mounted glusterfs volume). The snapshots of a glusterfs volume can
-also be viewed in Windows Explorer.
-
-There are 2 ways:
-* Give the path of the entry point directory
-(\\<hostname>\<samba-share>\<directory>\<entry-point path>) in the run command
-window
-* Go to the samba share via windows explorer. Make hidden files and folders
-visible so that in the root of the samba share a folder icon for the entry point
-can be seen.
-
-NOTE: From the explorer, snapshot world can be entered via entry point only from
-the root of the samba share. If snapshots have to be seen from subfolders, then
-the path should be provided in the run command window.
-
-For snapshots to be accessible from windows, below 2 options can be used.
-A) The glusterfs plugin for samba should give the option "snapdir-entry-path"
-while starting. The option is an indication to glusterfs, that samba is loading
-it and the value of the option should be the path that is being used as the
-share for windows.
-Ex: Say, there is a glusterfs volume and a directory called "export" from the
-root of the volume is being used as the samba share, then samba has to load
-glusterfs with this option as well.
-
- ret = glfs_set_xlator_option(fs, "*-snapview-client",
- "snapdir-entry-path", "/export");
-The xlator option "snapdir-entry-path" is not exposed via volume set options and
-cannot be changed from the CLI. It is an option that has to be provided at the time of
-mounting glusterfs or when samba loads glusterfs.
-B) The accessibility of snapshots via the root of the samba share from Windows
-is configurable. By default it is turned off. It is a volume set option which can
-be changed via the CLI:
-
-gluster volume set <volname> features.show-snapshot-directory <on|off>
-
-By default it is off.
-
-Only when both the above options have been provided (i.e. snapdir-entry-path
-contains a valid unix path that is exported and the show-snapshot-directory option
-is set to true) can snapshots be accessed via Windows Explorer.
-
-If only 1st option (i.e. snapdir-entry-path) is set via samba and 2nd option
-(i.e. show-snapshot-directory) is off, then snapshots can be accessed from
-windows via the run command window, but not via the explorer.
-
-
---------------------------------------------------------------------------------------
diff --git a/doc/admin-guide/en-US/markdown/admin_managing_volumes.md b/doc/admin-guide/en-US/markdown/admin_managing_volumes.md
deleted file mode 100644
index f45567a1141..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_managing_volumes.md
+++ /dev/null
@@ -1,770 +0,0 @@
-#Managing GlusterFS Volumes
-
-This section describes how to perform common GlusterFS management
-operations, including the following:
-
-- [Tuning Volume Options](#tuning-options)
-- [Configuring Transport Types for a Volume](#configuring-transport-types-for-a-volume)
-- [Expanding Volumes](#expanding-volumes)
-- [Shrinking Volumes](#shrinking-volumes)
-- [Migrating Volumes](#migrating-volumes)
-- [Rebalancing Volumes](#rebalancing-volumes)
-- [Stopping Volumes](#stopping-volumes)
-- [Deleting Volumes](#deleting-volumes)
-- [Triggering Self-Heal on Replicate](#triggering-self-heal-on-replicate)
-- [Non Uniform File Allocation(NUFA)](#non-uniform-file-allocation)
-
-<a name="tuning-options" />
-##Tuning Volume Options
-
-You can tune volume options, as needed, while the cluster is online and
-available.
-
-> **Note**
->
-> It is recommended that you set the server.allow-insecure option to ON if
-> there are too many bricks in each volume or if there are too many
-> services which have already utilized all the privileged ports in the
-> system. Turning this option ON allows ports to accept/reject messages
-> from insecure ports. So, use this option only if your deployment
-> requires it.
-
-Tune volume options using the following command:
-
-    # gluster volume set <VOLNAME> <OPTION> <VALUE>
-
-For example, to specify the performance cache size for test-volume:
-
- # gluster volume set test-volume performance.cache-size 256MB
- Set volume successful
-
-The following table lists the volume options along with their
-description and default value:
-
-> **Note**
->
-> The default options given here are subject to modification at any
-> given time and may not be the same for all versions.
-
-Option | Description | Default Value | Available Options
---- | --- | --- | ---
-auth.allow | IP addresses of the clients which should be allowed to access the volume. | \* (allow all) | Valid IP address which includes wild card patterns including \*, such as 192.168.1.\*
-auth.reject | IP addresses of the clients which should be denied to access the volume. | NONE (reject none) | Valid IP address which includes wild card patterns including \*, such as 192.168.2.\*
-client.grace-timeout | Specifies the duration for the lock state to be maintained on the client after a network disconnection. | 10 | 10 - 1800 secs
-cluster.self-heal-window-size | Specifies the maximum number of blocks per file on which self-heal would happen simultaneously. | 16 | 0 - 1025 blocks
-cluster.data-self-heal-algorithm | Specifies the type of self-heal. If you set the option as "full", the entire file is copied from source to destinations. If the option is set to "diff" the file blocks that are not in sync are copied to destinations. Reset uses a heuristic model. If the file does not exist on one of the subvolumes, or a zero-byte file exists (created by entry self-heal) the entire content has to be copied anyway, so there is no benefit from using the "diff" algorithm. If the file size is about the same as page size, the entire file can be read and written with a few operations, which will be faster than "diff" which has to read checksums and then read and write. | reset | full/diff/reset
-cluster.min-free-disk | Specifies the percentage of disk space that must be kept free. Might be useful for non-uniform bricks | 10% | Percentage of required minimum free disk space
-cluster.stripe-block-size | Specifies the size of the stripe unit that will be read from or written to. | 128 KB (for all files) | size in bytes
-cluster.self-heal-daemon | Allows you to turn-off proactive self-heal on replicated | On | On/Off
-cluster.ensure-durability | This option makes sure the data/metadata is durable across abrupt shutdown of the brick. | On | On/Off
-diagnostics.brick-log-level | Changes the log-level of the bricks. | INFO | DEBUG/WARNING/ERROR/CRITICAL/NONE/TRACE
-diagnostics.client-log-level | Changes the log-level of the clients. | INFO | DEBUG/WARNING/ERROR/CRITICAL/NONE/TRACE
-diagnostics.latency-measurement | Statistics related to the latency of each operation would be tracked. | Off | On/Off
-diagnostics.dump-fd-stats | Statistics related to file-operations would be tracked. | Off | On
-features.read-only | Enables you to mount the entire volume as read-only for all the clients (including NFS clients) accessing it. | Off | On/Off
-features.lock-heal | Enables self-healing of locks when the network disconnects. | On | On/Off
-features.quota-timeout | For performance reasons, quota caches the directory sizes on client. You can set timeout indicating the maximum duration of directory sizes in cache, from the time they are populated, during which they are considered valid | 0 | 0 - 3600 secs
-geo-replication.indexing | Use this option to automatically sync the changes in the filesystem from Master to Slave. | Off | On/Off
-network.frame-timeout | The time frame after which the operation has to be declared as dead, if the server does not respond for a particular operation. | 1800 (30 mins) | 1800 secs
-network.ping-timeout | The time duration for which the client waits to check if the server is responsive. When a ping timeout happens, there is a network disconnect between the client and server. All resources held by server on behalf of the client get cleaned up. When a reconnection happens, all resources will need to be re-acquired before the client can resume its operations on the server. Additionally, the locks will be acquired and the lock tables updated. This reconnect is a very expensive operation and should be avoided. | 42 Secs | 42 Secs
-nfs.enable-ino32 | For 32-bit nfs clients or applications that do not support 64-bit inode numbers or large files, use this option from the CLI to make Gluster NFS return 32-bit inode numbers instead of 64-bit inode numbers. | Off | On/Off
-nfs.volume-access | Set the access type for the specified sub-volume. | read-write | read-write/read-only
-nfs.trusted-write | If there is an UNSTABLE write from the client, STABLE flag will be returned to force the client to not send a COMMIT request. In some environments, combined with a replicated GlusterFS setup, this option can improve write performance. This flag allows users to trust Gluster replication logic to sync data to the disks and recover when required. COMMIT requests if received will be handled in a default manner by fsyncing. STABLE writes are still handled in a sync manner. | Off | On/Off
-nfs.trusted-sync | All writes and COMMIT requests are treated as async. This implies that no write requests are guaranteed to be on server disks when the write reply is received at the NFS client. Trusted sync includes trusted-write behavior. | Off | On/Off
-nfs.export-dir | This option can be used to export specified comma separated subdirectories in the volume. The path must be an absolute path. Along with path allowed list of IPs/hostname can be associated with each subdirectory. If provided connection will allowed only from these IPs. Format: \<dir\>[(hostspec[hostspec...])][,...]. Where hostspec can be an IP address, hostname or an IP range in CIDR notation. **Note**: Care must be taken while configuring this option as invalid entries and/or unreachable DNS servers can introduce unwanted delay in all the mount calls. | No sub directory exported. | Absolute path with allowed list of IP/hostname
-nfs.export-volumes | Enable/Disable exporting entire volumes, instead if used in conjunction with nfs3.export-dir, can allow setting up only subdirectories as exports. | On | On/Off
-nfs.rpc-auth-unix | Enable/Disable the AUTH\_UNIX authentication type. This option is enabled by default for better interoperability. However, you can disable it if required. | On | On/Off
-nfs.rpc-auth-null | Enable/Disable the AUTH\_NULL authentication type. It is not recommended to change the default value for this option. | On | On/Off
-nfs.rpc-auth-allow\<IP- Addresses\> | Allow a comma separated list of addresses and/or hostnames to connect to the server. By default, all clients are disallowed. This allows you to define a general rule for all exported volumes. | Reject All | IP address or Host name
-nfs.rpc-auth-reject\<IP- Addresses\> | Reject a comma separated list of addresses and/or hostnames from connecting to the server. By default, all connections are disallowed. This allows you to define a general rule for all exported volumes. | Reject All | IP address or Host name
-nfs.ports-insecure | Allow client connections from unprivileged ports. By default only privileged ports are allowed. This is a global setting in case insecure ports are to be enabled for all exports using a single option. | Off | On/Off
-nfs.addr-namelookup | Turn-off name lookup for incoming client connections using this option. In some setups, the name server can take too long to reply to DNS queries resulting in timeouts of mount requests. Use this option to turn off name lookups during address authentication. Note, turning this off will prevent you from using hostnames in rpc-auth.addr.\* filters. | On | On/Off
-nfs.register-with-portmap | For systems that need to run multiple NFS servers, you need to prevent more than one from registering with portmap service. Use this option to turn off portmap registration for Gluster NFS. | On | On/Off
-nfs.port \<PORT- NUMBER\> | Use this option on systems that need Gluster NFS to be associated with a non-default port number. | NA | 38465- 38467
-nfs.disable | Turn-off volume being exported by NFS | Off | On/Off
-performance.write-behind-window-size | Size of the per-file write-behind buffer. | 1MB | Write-behind cache size
-performance.io-thread-count | The number of threads in IO threads translator. | 16 | 0-65
-performance.flush-behind | If this option is set ON, instructs write-behind translator to perform flush in background, by returning success (or any errors, if any of previous writes were failed) to application even before flush is sent to backend filesystem. | On | On/Off
-performance.cache-max-file-size | Sets the maximum file size cached by the io-cache translator. Can use the normal size descriptors of KB, MB, GB,TB or PB (for example, 6GB). Maximum size uint64. | 2 \^ 64 -1 bytes | size in bytes
-performance.cache-min-file-size | Sets the minimum file size cached by the io-cache translator. Values same as "max" above | 0B | size in bytes
-performance.cache-refresh-timeout | The cached data for a file will be retained till 'cache-refresh-timeout' seconds, after which data re-validation is performed. | 1s | 0-61
-performance.cache-size | Size of the read cache. | 32 MB | size in bytes
-server.allow-insecure | Allow client connections from unprivileged ports. By default only privileged ports are allowed. This is a global setting in case insecure ports are to be enabled for all exports using a single option. | On | On/Off
-server.grace-timeout | Specifies the duration for the lock state to be maintained on the server after a network disconnection. | 10 | 10 - 1800 secs
-server.statedump-path | Location of the state dump file. | tmp directory of the brick | New directory path
-storage.health-check-interval | Number of seconds between health-checks done on the filesystem that is used for the brick(s). Defaults to 30 seconds, set to 0 to disable. | 30 secs | Number of seconds (0 to disable)
-
-You can view the changed volume options using command:
-
- # gluster volume info
-
-<a name="configuring-transport-types-for-a-volume" />
-##Configuring Transport Types for a Volume
-
-A volume can support one or more transport types for communication between clients and brick processes.
-There are three types of supported transport, which are tcp, rdma, and tcp,rdma.
-
-To change the supported transport types of a volume, follow the procedure:
-
-1. Unmount the volume on all the clients using the following command:
-
- # umount mount-point
-
-2. Stop the volumes using the following command:
-
- # gluster volume stop volname
-
-3. Change the transport type. For example, to enable both tcp and rdma execute the following command:
-
- # gluster volume set volname config.transport tcp,rdma OR tcp OR rdma
-
-4. Mount the volume on all the clients. For example, to mount using rdma transport, use the following command:
-
- # mount -t glusterfs -o transport=rdma server1:/test-volume /mnt/glusterfs
-
-<a name="expanding-volumes" />
-##Expanding Volumes
-
-You can expand volumes, as needed, while the cluster is online and
-available. For example, you might want to add a brick to a distributed
-volume, thereby increasing the distribution and adding to the capacity
-of the GlusterFS volume.
-
-Similarly, you might want to add a group of bricks to a distributed
-replicated volume, increasing the capacity of the GlusterFS volume.
-
-> **Note**
->
-> When expanding distributed replicated and distributed striped volumes,
-> you need to add a number of bricks that is a multiple of the replica
-> or stripe count. For example, to expand a distributed replicated
-> volume with a replica count of 2, you need to add bricks in multiples
-> of 2 (such as 4, 6, 8, etc.).
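-
-For example, a minimal sketch (server and brick names are placeholders) of expanding a
-hypothetical distributed replicated volume with a replica count of 2 by one replica pair,
-using the add-brick command described below:
-
-    # gluster volume add-brick rep-volume server5:/exp5 server6:/exp6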
-
-**To expand a volume**
-
-1. On the first server in the cluster, probe the server to which you
- want to add the new brick using the following command:
-
-    `# gluster peer probe <SERVERNAME>`
-
- For example:
-
- # gluster peer probe server4
- Probe successful
-
-2. Add the brick using the following command:
-
-    `# gluster volume add-brick <VOLNAME> <NEW-BRICK>`
-
- For example:
-
- # gluster volume add-brick test-volume server4:/exp4
- Add Brick successful
-
-3. Check the volume information using the following command:
-
- `# gluster volume info `
-
- The command displays information similar to the following:
-
- Volume Name: test-volume
- Type: Distribute
- Status: Started
- Number of Bricks: 4
- Bricks:
- Brick1: server1:/exp1
- Brick2: server2:/exp2
- Brick3: server3:/exp3
- Brick4: server4:/exp4
-
-4. Rebalance the volume to ensure that all files are distributed to the
- new brick.
-
- You can use the rebalance command as described in [Rebalancing Volumes](#rebalancing-volumes)
-
-<a name="shrinking-volumes" />
-##Shrinking Volumes
-
-You can shrink volumes, as needed, while the cluster is online and
-available. For example, you might need to remove a brick that has become
-inaccessible in a distributed volume due to hardware or network failure.
-
-> **Note**
->
-> Data residing on the brick that you are removing will no longer be
-> accessible at the Gluster mount point. Note however that only the
-> configuration information is removed - you can continue to access the
-> data directly from the brick, as necessary.
-
-When shrinking distributed replicated and distributed striped volumes,
-you need to remove a number of bricks that is a multiple of the replica
-or stripe count. For example, to shrink a distributed striped volume
-with a stripe count of 2, you need to remove bricks in multiples of 2
-(such as 4, 6, 8, etc.). In addition, the bricks you are trying to
-remove must be from the same sub-volume (the same replica or stripe
-set).
-
-**To shrink a volume**
-
-1. Remove the brick using the following command:
-
-    `# gluster volume remove-brick <VOLNAME> <BRICK> start`
-
- For example, to remove server2:/exp2:
-
- # gluster volume remove-brick test-volume server2:/exp2 force
-
- Removing brick(s) can result in data loss. Do you want to Continue? (y/n)
-
-2. Enter "y" to confirm the operation. The command displays the
- following message indicating that the remove brick operation is
- successfully started:
-
- Remove Brick successful
-
-3. (Optional) View the status of the remove brick operation using the
- following command:
-
-    `# gluster volume remove-brick <VOLNAME> <BRICK> status`
-
- For example, to view the status of remove brick operation on
- server2:/exp2 brick:
-
- # gluster volume remove-brick test-volume server2:/exp2 status
- Node Rebalanced-files size scanned status
- --------- ---------------- ---- ------- -----------
- 617c923e-6450-4065-8e33-865e28d9428f 34 340 162 in progress
-
-4. Check the volume information using the following command:
-
- `# gluster volume info `
-
- The command displays information similar to the following:
-
- # gluster volume info
- Volume Name: test-volume
- Type: Distribute
- Status: Started
- Number of Bricks: 3
- Bricks:
- Brick1: server1:/exp1
- Brick3: server3:/exp3
- Brick4: server4:/exp4
-
-5. Rebalance the volume to ensure that all files are distributed to the
- new brick.
-
- You can use the rebalance command as described in [Rebalancing Volumes](#rebalancing-volumes)
-
-<a name="migrating-volumes" />
-##Migrating Volumes
-
-You can migrate the data from one brick to another, as needed, while the
-cluster is online and available.
-
-**To migrate a volume**
-
-1. Make sure the new brick, server5 in this example, is successfully
- added to the cluster.
-
-2. Migrate the data from one brick to another using the following
- command:
-
-    `# gluster volume replace-brick <VOLNAME> <BRICK> <NEW-BRICK> start`
-
- For example, to migrate the data in server3:/exp3 to server5:/exp5
- in test-volume:
-
-        # gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 start
- Replace brick start operation successful
-
- > **Note**
- >
- > You need to have the FUSE package installed on the server on which
- > you are running the replace-brick command for the command to work.
-
-3. To pause the migration operation, if needed, use the following
- command:
-
-    `# gluster volume replace-brick <VOLNAME> <BRICK> <NEW-BRICK> pause`
-
- For example, to pause the data migration from server3:/exp3 to
- server5:/exp5 in test-volume:
-
- # gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 pause
- Replace brick pause operation successful
-
-4. To abort the migration operation, if needed, use the following
- command:
-
-    `# gluster volume replace-brick <VOLNAME> <BRICK> <NEW-BRICK> abort`
-
- For example, to abort the data migration from server3:/exp3 to
- server5:/exp5 in test-volume:
-
-        # gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 abort
- Replace brick abort operation successful
-
-5. Check the status of the migration operation using the following
- command:
-
-    `# gluster volume replace-brick <VOLNAME> <BRICK> <NEW-BRICK> status`
-
- For example, to check the data migration status from server3:/exp3
- to server5:/exp5 in test-volume:
-
- # gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 status
- Current File = /usr/src/linux-headers-2.6.31-14/block/Makefile
- Number of files migrated = 10567
- Migration complete
-
- The status command shows the current file being migrated along with
- the current total number of files migrated. After completion of
- migration, it displays Migration complete.
-
-6. Commit the migration of data from one brick to another using the
- following command:
-
-    `# gluster volume replace-brick <VOLNAME> <BRICK> <NEW-BRICK> commit`
-
- For example, to commit the data migration from server3:/exp3 to
- server5:/exp5 in test-volume:
-
- # gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 commit
- replace-brick commit successful
-
-7. Verify the migration of brick by viewing the volume info using the
- following command:
-
- `# gluster volume info `
-
- For example, to check the volume information of new brick
- server5:/exp5 in test-volume:
-
- # gluster volume info test-volume
- Volume Name: testvolume
- Type: Replicate
- Status: Started
- Number of Bricks: 4
- Transport-type: tcp
- Bricks:
- Brick1: server1:/exp1
- Brick2: server2:/exp2
- Brick3: server4:/exp4
- Brick4: server5:/exp5
-
- The new volume details are displayed.
-
-    In the above example, there were previously bricks 1, 2, 3, and 4,
-    and now brick 3 has been replaced by brick 5.
-
-<a name="rebalancing-volumes" />
-##Rebalancing Volumes
-
-After expanding or shrinking a volume (using the add-brick and
-remove-brick commands respectively), you need to rebalance the data
-among the servers. New directories created after expanding or shrinking
-of the volume will be evenly distributed automatically. For all the
-existing directories, the distribution can be fixed by rebalancing the
-layout and/or data.
-
-This section describes how to rebalance GlusterFS volumes in your
-storage environment, using the following common scenarios:
-
-- **Fix Layout** - Fixes the layout changes so that the files can actually
- go to newly added nodes.
-
-- **Fix Layout and Migrate Data** - Rebalances volume by fixing the layout
- changes and migrating the existing data.
-
-###Rebalancing Volume to Fix Layout Changes
-
-Fixing the layout is necessary because the layout structure is static
-for a given directory. In a scenario where new bricks have been added to
-the existing volume, newly created files in existing directories will
-still be distributed only among the old bricks. The
-`# gluster volume rebalance <VOLNAME> fix-layout start` command will fix the
-layout information so that the files can also go to newly added nodes.
-When this command is issued, all the file stat information which is
-already cached will get revalidated.
-
-As of GlusterFS 3.6, the assignment of files to bricks will take into account
-the sizes of the bricks. For example, a 20TB brick will be assigned twice as
-many files as a 10TB brick. In versions before 3.6, the two bricks were
-treated as equal regardless of size, and would have been assigned an equal
-share of files.
-
-A fix-layout rebalance will only fix the layout changes and does not
-migrate data. If you want to migrate the existing data, use the
-`# gluster volume rebalance <VOLNAME> start` command to rebalance data among
-the servers.
-
-**To rebalance a volume to fix layout changes**
-
--   Start the rebalance operation on any one of the servers using the
-    following command:
-
-    `# gluster volume rebalance <VOLNAME> fix-layout start`
-
- For example:
-
- # gluster volume rebalance test-volume fix-layout start
- Starting rebalance on volume test-volume has been successful
-
-###Rebalancing Volume to Fix Layout and Migrate Data
-
-After expanding or shrinking a volume (using the add-brick and
-remove-brick commands respectively), you need to rebalance the data
-among the servers.
-
-**To rebalance a volume to fix layout and migrate the existing data**
-
--   Start the rebalance operation on any one of the servers using the
-    following command:
-
-    `# gluster volume rebalance <VOLNAME> start`
-
- For example:
-
- # gluster volume rebalance test-volume start
- Starting rebalancing on volume test-volume has been successful
-
--   Start the migration operation forcefully on any one of the servers
-    using the following command:
-
-    `# gluster volume rebalance <VOLNAME> start force`
-
- For example:
-
- # gluster volume rebalance test-volume start force
- Starting rebalancing on volume test-volume has been successful
-
-###Displaying Status of Rebalance Operation
-
-You can display the status information about rebalance volume operation,
-as needed.
-
-- Check the status of the rebalance operation, using the following
- command:
-
-    `# gluster volume rebalance <VOLNAME> status`
-
- For example:
-
- # gluster volume rebalance test-volume status
- Node Rebalanced-files size scanned status
- --------- ---------------- ---- ------- -----------
- 617c923e-6450-4065-8e33-865e28d9428f 416 1463 312 in progress
-
-    The time taken to complete the rebalance operation depends on the
-    number of files on the volume and their sizes. Continue checking the
-    rebalance status, verifying that the number of files rebalanced or
-    the total files scanned keeps increasing. A small polling sketch is
-    shown after the examples below.
-
- For example, running the status command again might display a result
- similar to the following:
-
- # gluster volume rebalance test-volume status
- Node Rebalanced-files size scanned status
- --------- ---------------- ---- ------- -----------
- 617c923e-6450-4065-8e33-865e28d9428f 498 1783 378 in progress
-
- The rebalance status displays the following when the rebalance is
- complete:
-
- # gluster volume rebalance test-volume status
- Node Rebalanced-files size scanned status
- --------- ---------------- ---- ------- -----------
- 617c923e-6450-4065-8e33-865e28d9428f 502 1873 334 completed
-
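-Below is a minimal shell sketch, assuming a volume named test-volume, that
-polls the rebalance status every 30 seconds until the output no longer
-reports "in progress":
-
-    # wait for the rebalance on test-volume to finish, then print the final status
-    while gluster volume rebalance test-volume status | grep -q "in progress"; do
-        sleep 30
-    done
-    gluster volume rebalance test-volume status
-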
-###Stopping Rebalance Operation
-
-You can stop the rebalance operation, as needed.
-
-- Stop the rebalance operation using the following command:
-
-    `# gluster volume rebalance <VOLNAME> stop`
-
- For example:
-
- # gluster volume rebalance test-volume stop
- Node Rebalanced-files size scanned status
- --------- ---------------- ---- ------- -----------
- 617c923e-6450-4065-8e33-865e28d9428f 59 590 244 stopped
- Stopped rebalance process on volume test-volume
-
-<a name="stopping-volumes" />
-##Stopping Volumes
-
-1. Stop the volume using the following command:
-
-    `# gluster volume stop <VOLNAME>`
-
- For example, to stop test-volume:
-
- # gluster volume stop test-volume
- Stopping volume will make its data inaccessible. Do you want to continue? (y/n)
-
-2. Enter `y` to confirm the operation. The output of the command
- displays the following:
-
- Stopping volume test-volume has been successful
-
-<a name="deleting-volumes" />
-##Deleting Volumes
-
-1. Delete the volume using the following command:
-
-    `# gluster volume delete <VOLNAME>`
-
- For example, to delete test-volume:
-
- # gluster volume delete test-volume
- Deleting volume will erase all information about the volume. Do you want to continue? (y/n)
-
-2. Enter `y` to confirm the operation. The command displays the
- following:
-
- Deleting volume test-volume has been successful
-
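-Both the stop and delete commands above prompt for confirmation. As a
-minimal sketch for unattended use (assuming a volume named test-volume that
-is no longer needed), the prompts can be answered from a pipe:
-
-    # stop and then delete test-volume without interactive prompts
-    echo y | gluster volume stop test-volume
-    echo y | gluster volume delete test-volume
-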
-<a name="triggering-self-heal-on-replicate" />
-##Triggering Self-Heal on Replicate
-
-In the replicate module, you previously had to trigger a self-heal
-manually when a brick went offline and came back online, to bring all
-the replicas back in sync. Now the pro-active self-heal daemon runs in
-the background, diagnoses issues and automatically initiates
-self-healing every 10 minutes on the files which require *healing*.
-
-You can view the list of files that need *healing*, the list of files
-which have been *healed*, and the list of files which are in
-split-brain state, and you can manually trigger self-heal on the entire
-volume or only on the files which need *healing*. A short sketch that
-summarizes the pending heal counts follows the examples below.
-
--   Trigger self-heal only on the files which require *healing*:
-
-    `# gluster volume heal <VOLNAME>`
-
-    For example, to trigger self-heal on the files of test-volume which
-    require *healing*:
-
- # gluster volume heal test-volume
- Heal operation on volume test-volume has been successful
-
-- Trigger self-heal on all the files of a volume:
-
-    `# gluster volume heal <VOLNAME> full`
-
-    For example, to trigger self-heal on all the files of
-    test-volume:
-
- # gluster volume heal test-volume full
- Heal operation on volume test-volume has been successful
-
--   View the list of files that need *healing*:
-
-    `# gluster volume heal <VOLNAME> info`
-
-    For example, to view the list of files on test-volume that need
-    *healing*:
-
- # gluster volume heal test-volume info
- Brick :/gfs/test-volume_0
- Number of entries: 0
-
- Brick :/gfs/test-volume_1
- Number of entries: 101
- /95.txt
- /32.txt
- /66.txt
- /35.txt
- /18.txt
- /26.txt
- /47.txt
- /55.txt
- /85.txt
- ...
-
-- View the list of files that are self-healed:
-
-    `# gluster volume heal <VOLNAME> info healed`
-
- For example, to view the list of files on test-volume that are
- self-healed:
-
- # gluster volume heal test-volume info healed
- Brick :/gfs/test-volume_0
- Number of entries: 0
-
- Brick :/gfs/test-volume_1
- Number of entries: 69
- /99.txt
- /93.txt
- /76.txt
- /11.txt
- /27.txt
- /64.txt
- /80.txt
- /19.txt
- /41.txt
- /29.txt
- /37.txt
- /46.txt
- ...
-
-- View the list of files of a particular volume on which the self-heal
- failed:
-
-    `# gluster volume heal <VOLNAME> info failed`
-
- For example, to view the list of files of test-volume that are not
- self-healed:
-
- # gluster volume heal test-volume info failed
- Brick :/gfs/test-volume_0
- Number of entries: 0
-
- Brick server2:/gfs/test-volume_3
- Number of entries: 72
- /90.txt
- /95.txt
- /77.txt
- /71.txt
- /87.txt
- /24.txt
- ...
-
-- View the list of files of a particular volume which are in
- split-brain state:
-
-    `# gluster volume heal <VOLNAME> info split-brain`
-
- For example, to view the list of files of test-volume which are in
- split-brain state:
-
- # gluster volume heal test-volume info split-brain
- Brick server1:/gfs/test-volume_2
- Number of entries: 12
- /83.txt
- /28.txt
- /69.txt
- ...
-
- Brick :/gfs/test-volume_2
- Number of entries: 12
- /83.txt
- /28.txt
- /69.txt
- ...
-
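-As a minimal sketch (assuming a replicated volume named test-volume), the
-per-brick counts of entries still needing heal can be summarized with:
-
-    # show only the brick names and their pending-heal entry counts
-    gluster volume heal test-volume info | grep -E "Brick|Number of entries"
-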
-<a name="non-uniform-file-allocation" />
-##Non Uniform File Allocation
-
-The NUFA (Non Uniform File Access) translator is designed to give higher preference
-to a local drive when used in an HPC type of environment. It can be applied to the Distribute and Replica translators;
-in the latter case it ensures that *one* copy is local if space permits.
-
-When a client on a server creates files, the files are allocated to a brick in the volume based on the file name.
-This allocation may not be ideal, as there is higher latency and unnecessary network traffic for read/write operations
-to a non-local brick or export directory. NUFA ensures that the files are created in the local export directory
-of the server, and as a result, reduces latency and conserves bandwidth for that server accessing that file.
-This can also be useful for applications running on mount points on the storage server.
-
-If the local brick runs out of space or reaches the minimum disk free limit, instead of allocating files
-to the local brick, NUFA distributes files to other bricks in the same volume if there is
-space available on those bricks.
-
-NUFA should be enabled before creating any data in the volume.
-
-Use the following command to enable NUFA:
-
- # gluster volume set VOLNAME cluster.nufa enable on
-
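-As a minimal sketch (assuming a newly created distributed volume named
-dist-vol that holds no data yet, a server named server1, and a mount point
-of /mnt/dist-vol), NUFA would be enabled before the volume is started and
-mounted:
-
-    # enable NUFA on the empty volume, then start and mount it
-    gluster volume set dist-vol cluster.nufa enable on
-    gluster volume start dist-vol
-    mount -t glusterfs server1:/dist-vol /mnt/dist-vol
-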
-**Important**
-
-NUFA is supported under the following conditions:
-
--   Volumes with only one brick per server.
--   For use with a FUSE client. NUFA is not supported with NFS or SMB.
-- A client that is mounting a NUFA-enabled volume must be present within the trusted storage pool.
-
-A NUFA scheduler also exists for use with the Unify translator; an example volume file section is shown below.
-
- volume bricks
- type cluster/nufa
- option local-volume-name brick1
- subvolumes brick1 brick2 brick3 brick4 brick5 brick6 brick7
- end-volume
-
-#####NUFA additional options
-
-- lookup-unhashed
-
- This is an advanced option where files are looked up in all subvolumes if they are missing on the subvolume matching the hash value of the filename. The default is on.
-
-- local-volume-name
-
- The volume name to consider local and prefer file creations on. The default is to search for a volume matching the hostname of the system.
-
-- subvolumes
-
- This option lists the subvolumes that are part of this 'cluster/nufa' volume. This translator requires more than one subvolume.
-
-<a name="bitrot-detection" />
-##BitRot Detection
-
-With BitRot detection in Gluster, it's possible to identify an "insidious" type of disk
-error where data is silently corrupted with no indication from the disk to the storage
-software layer that an error has occurred. This also helps in catching "backend" tinkering
-with bricks (where data is directly manipulated on the bricks without going through FUSE,
-NFS or any other access protocol).
-
-BitRot detection is disabled by default and needs to be enabled to make use of the other
-sub-commands.
-
-1. To enable bitrot detection for a given volume <VOLNAME>:
-
- `# gluster volume bitrot <VOLNAME> enable`
-
- and similarly to disable bitrot use:
-
- `# gluster volume bitrot <VOLNAME> disable`
-
-NOTE: Enabling bitrot spawns the Signer & Scrubber daemons per node. The Signer is responsible
-      for signing an object (calculating a checksum for each file) and the Scrubber verifies the
-      calculated checksum against the object's data.
-
-2. The Scrubber daemon has three (3) throttling modes that adjust the rate at which objects
-    are verified. A combined configuration sketch appears at the end of this section.
-
-    `# gluster volume bitrot <VOLNAME> scrub-throttle lazy`
-
-    `# gluster volume bitrot <VOLNAME> scrub-throttle normal`
-
-    `# gluster volume bitrot <VOLNAME> scrub-throttle aggressive`
-
-3. By default the scrubber scrubs the filesystem biweekly. It's possible to tune it to scrub
-    based on a predefined frequency such as monthly, etc. This can be done as shown below:
-
-    `# gluster volume bitrot <VOLNAME> scrub-frequency daily`
-
-    `# gluster volume bitrot <VOLNAME> scrub-frequency weekly`
-
-    `# gluster volume bitrot <VOLNAME> scrub-frequency biweekly`
-
-    `# gluster volume bitrot <VOLNAME> scrub-frequency monthly`
-
-NOTE: Daily scrubbing would not be available with GA release.
-
-4. The Scrubber daemon can be paused and later resumed when required. This can be done as
-    shown below:
-
-    `# gluster volume bitrot <VOLNAME> scrub pause`
-
-    and to resume scrubbing:
-
-    `# gluster volume bitrot <VOLNAME> scrub resume`
-
-NOTE: Signing cannot be paused (and resumed) and would always be active as long as
- bitrot is enabled for that particular volume.
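-
-As a minimal sketch (assuming a volume named test-volume on which silent
-corruption is a concern), the commands above can be combined to enable
-detection with a light-weight scrub schedule:
-
-    # enable bitrot detection, keep scrubbing cheap, and scrub weekly
-    gluster volume bitrot test-volume enable
-    gluster volume bitrot test-volume scrub-throttle lazy
-    gluster volume bitrot test-volume scrub-frequency weekly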
diff --git a/doc/admin-guide/en-US/markdown/admin_monitoring_workload.md b/doc/admin-guide/en-US/markdown/admin_monitoring_workload.md
deleted file mode 100644
index c3ac0609b99..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_monitoring_workload.md
+++ /dev/null
@@ -1,893 +0,0 @@
-#Monitoring your GlusterFS Workload
-
-You can monitor GlusterFS volumes on different parameters.
-Monitoring volumes helps in capacity planning and performance tuning
-of GlusterFS volumes. Using this information, you can identify
-and troubleshoot issues.
-
-You can use Volume Top and Profile commands to view the performance and
-identify bottlenecks/hotspots of each brick of a volume. This helps
-system administrators to get vital performance information whenever
-performance needs to be probed.
-
-You can also perform statedump of the brick processes and nfs server
-process of a volume, and also view volume status and volume information.
-
-##Running GlusterFS Volume Profile Command
-
-GlusterFS Volume Profile command provides an interface to get the
-per-brick I/O information for each File Operation (FOP) of a volume. The
-per brick information helps in identifying bottlenecks in the storage
-system.
-
-This section describes how to run GlusterFS Volume Profile command by
-performing the following operations:
-
-- [Start Profiling](#start-profiling)
-- [Displaying the I/O Information](#displaying-io)
-- [Stop Profiling](#stop-profiling)
-
-<a name="start-profiling" />
-###Start Profiling
-
-You must start profiling to view the File Operation information for
-each brick.
-
-To start profiling, use the following command:
-
-`# gluster volume profile <VOLNAME> start`
-
-For example, to start profiling on test-volume:
-
- # gluster volume profile test-volume start
- Profiling started on test-volume
-
-When profiling on the volume is started, the following additional
-options are displayed in the Volume Info:
-
- diagnostics.count-fop-hits: on
- diagnostics.latency-measurement: on
-
-<a name="displaying-io" />
-###Displaying the I/O Information
-
-You can view the I/O information of each brick by using the following command:
-
-`# gluster volume profile <VOLNAME> info`
-
-For example, to see the I/O information on test-volume:
-
- # gluster volume profile test-volume info
- Brick: Test:/export/2
- Cumulative Stats:
-
- Block 1b+ 32b+ 64b+
- Size:
- Read: 0 0 0
- Write: 908 28 8
-
- Block 128b+ 256b+ 512b+
- Size:
- Read: 0 6 4
- Write: 5 23 16
-
- Block 1024b+ 2048b+ 4096b+
- Size:
- Read: 0 52 17
- Write: 15 120 846
-
- Block 8192b+ 16384b+ 32768b+
- Size:
- Read: 52 8 34
- Write: 234 134 286
-
- Block 65536b+ 131072b+
- Size:
- Read: 118 622
- Write: 1341 594
-
-
-     %-latency   Avg-latency   Min-Latency   Max-Latency   calls   Fop
-     ___________________________________________________________
- 4.82 1132.28 21.00 800970.00 4575 WRITE
- 5.70 156.47 9.00 665085.00 39163 READDIRP
- 11.35 315.02 9.00 1433947.00 38698 LOOKUP
- 11.88 1729.34 21.00 2569638.00 7382 FXATTROP
- 47.35 104235.02 2485.00 7789367.00 488 FSYNC
-
- ------------------
-
- ------------------
-
- Duration : 335
-
- BytesRead : 94505058
-
- BytesWritten : 195571980
-
-<a name="stop-profiling" />
-###Stop Profiling
-
-You can stop profiling the volume, if you do not need profiling
-information anymore.
-
-Stop profiling using the following command:
-
-    `# gluster volume profile <VOLNAME> stop`
-
-For example, to stop profiling on test-volume:
-
-    `# gluster volume profile test-volume stop`
-
-    `Profiling stopped on test-volume`
-
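-The following is a minimal sketch, assuming a volume named test-volume and a
-workload that runs for a few minutes; it records the per-brick FOP statistics
-for just that interval (the output file path is arbitrary):
-
-    # profile only the duration of the workload
-    gluster volume profile test-volume start
-    # ... run the workload here ...
-    gluster volume profile test-volume info > /tmp/test-volume-profile.txt
-    gluster volume profile test-volume stop
-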
-##Running GlusterFS Volume TOP Command
-
-GlusterFS Volume Top command allows you to view the glusterfs bricks’
-performance metrics like read, write, file open calls, file read calls,
-file write calls, directory open calls, and directory read calls. The
-top command displays up to 100 results.
-
-This section describes how to run and view the results for the following
-GlusterFS Top commands:
-
-- [Viewing Open fd Count and Maximum fd Count](#open-fd-count)
-- [Viewing Highest File Read Calls](#file-read)
-- [Viewing Highest File Write Calls](#file-write)
-- [Viewing Highest Open Calls on Directories](#open-dir)
-- [Viewing Highest Read Calls on Directory](#read-dir)
-- [Viewing List of Read Performance on each Brick](#read-perf)
-- [Viewing List of Write Performance on each Brick](#write-perf)
-
-<a name="open-fd-count" />
-###Viewing Open fd Count and Maximum fd Count
-
-You can view both the current open fd count (the list of files that are
-currently the most opened, and their open counts) on a brick and the
-maximum open fd count (the highest number of files open at any given
-point of time since the servers have been up and running). If the brick
-name is not specified, then the open fd metrics of all the bricks
-belonging to the volume will be displayed.
-
-- View open fd count and maximum fd count using the following command:
-
-    `# gluster volume top <VOLNAME> open [brick <BRICK-NAME>] [list-cnt <cnt>]`
-
-    For example, to view open fd count and maximum fd count on brick
-    server:/export of test-volume and list the top 10 open calls:
-
-    `# gluster volume top test-volume open brick server:/export list-cnt 10`
-
- `Brick: server:/export/dir1 `
-
- `Current open fd's: 34 Max open fd's: 209 `
-
- ==========Open file stats========
-
-               open call count          file name
-
- 2 /clients/client0/~dmtmp/PARADOX/
- COURSES.DB
-
- 11 /clients/client0/~dmtmp/PARADOX/
- ENROLL.DB
-
- 11 /clients/client0/~dmtmp/PARADOX/
- STUDENTS.DB
-
- 10 /clients/client0/~dmtmp/PWRPNT/
- TIPS.PPT
-
- 10 /clients/client0/~dmtmp/PWRPNT/
- PCBENCHM.PPT
-
- 9 /clients/client7/~dmtmp/PARADOX/
- STUDENTS.DB
-
- 9 /clients/client1/~dmtmp/PARADOX/
- STUDENTS.DB
-
- 9 /clients/client2/~dmtmp/PARADOX/
- STUDENTS.DB
-
- 9 /clients/client0/~dmtmp/PARADOX/
- STUDENTS.DB
-
- 9 /clients/client8/~dmtmp/PARADOX/
- STUDENTS.DB
-
-<a name="file-read" />
-###Viewing Highest File Read Calls
-
-You can view the highest read calls on each brick. If the brick name is not
-specified, then by default a list of 100 files will be displayed.
-
-- View highest file Read calls using the following command:
-
-    `# gluster volume top <VOLNAME> read [brick <BRICK-NAME>] [list-cnt <cnt>]`
-
- For example, to view highest Read calls on brick server:/export of
- test-volume:
-
-    `# gluster volume top test-volume read brick server:/export list-cnt 10`
-
- `Brick:` server:/export/dir1
-
- ==========Read file stats========
-
-      read call count          filename
-
- 116 /clients/client0/~dmtmp/SEED/LARGE.FIL
-
- 64 /clients/client0/~dmtmp/SEED/MEDIUM.FIL
-
- 54 /clients/client2/~dmtmp/SEED/LARGE.FIL
-
- 54 /clients/client6/~dmtmp/SEED/LARGE.FIL
-
- 54 /clients/client5/~dmtmp/SEED/LARGE.FIL
-
- 54 /clients/client0/~dmtmp/SEED/LARGE.FIL
-
- 54 /clients/client3/~dmtmp/SEED/LARGE.FIL
-
- 54 /clients/client4/~dmtmp/SEED/LARGE.FIL
-
- 54 /clients/client9/~dmtmp/SEED/LARGE.FIL
-
- 54 /clients/client8/~dmtmp/SEED/LARGE.FIL
-
-<a name="file-write" />
-###Viewing Highest File Write Calls
-
-You can view the list of files which have the highest file write calls on each
-brick. If the brick name is not specified, then by default a list of 100
-files will be displayed.
-
-- View highest file Write calls using the following command:
-
-    `# gluster volume top <VOLNAME> write [brick <BRICK-NAME>] [list-cnt <cnt>]`
-
- For example, to view highest Write calls on brick server:/export of
- test-volume:
-
-    `# gluster volume top test-volume write brick server:/export list-cnt 10`
-
- `Brick: server:/export/dir1 `
-
- ==========Write file stats========
- write call count filename
-
- 83 /clients/client0/~dmtmp/SEED/LARGE.FIL
-
- 59 /clients/client7/~dmtmp/SEED/LARGE.FIL
-
- 59 /clients/client1/~dmtmp/SEED/LARGE.FIL
-
- 59 /clients/client2/~dmtmp/SEED/LARGE.FIL
-
- 59 /clients/client0/~dmtmp/SEED/LARGE.FIL
-
- 59 /clients/client8/~dmtmp/SEED/LARGE.FIL
-
- 59 /clients/client5/~dmtmp/SEED/LARGE.FIL
-
- 59 /clients/client4/~dmtmp/SEED/LARGE.FIL
-
- 59 /clients/client6/~dmtmp/SEED/LARGE.FIL
-
- 59 /clients/client3/~dmtmp/SEED/LARGE.FIL
-
-<a name="open-dir" />
-###Viewing Highest Open Calls on Directories
-
-You can view the list of directories which have the highest open calls on
-each brick. If the brick name is not specified, then the metrics of all
-the bricks belonging to that volume will be displayed.
-
-- View list of open calls on each directory using the following
- command:
-
-    `# gluster volume top <VOLNAME> opendir [brick <BRICK-NAME>] [list-cnt <cnt>]`
-
- For example, to view open calls on brick server:/export/ of
- test-volume:
-
-    `# gluster volume top test-volume opendir brick server:/export list-cnt 10`
-
- `Brick: server:/export/dir1 `
-
- ==========Directory open stats========
-
- Opendir count directory name
-
- 1001 /clients/client0/~dmtmp
-
- 454 /clients/client8/~dmtmp
-
- 454 /clients/client2/~dmtmp
-
- 454 /clients/client6/~dmtmp
-
- 454 /clients/client5/~dmtmp
-
- 454 /clients/client9/~dmtmp
-
- 443 /clients/client0/~dmtmp/PARADOX
-
- 408 /clients/client1/~dmtmp
-
- 408 /clients/client7/~dmtmp
-
- 402 /clients/client4/~dmtmp
-
-<a name="read-dir" />
-###Viewing Highest Read Calls on Directory
-
-You can view the list of directories which have the highest directory read
-calls on each brick. If the brick name is not specified, then the metrics
-of all the bricks belonging to that volume will be displayed.
-
-- View list of highest directory read calls on each brick using the
- following command:
-
-    `# gluster volume top <VOLNAME> readdir [brick <BRICK-NAME>] [list-cnt <cnt>]`
-
- For example, to view highest directory read calls on brick
- server:/export of test-volume:
-
-    `# gluster volume top test-volume readdir brick server:/export list-cnt 10`
-
- `Brick: `
-
- ==========Directory readdirp stats========
-
- readdirp count directory name
-
- 1996 /clients/client0/~dmtmp
-
- 1083 /clients/client0/~dmtmp/PARADOX
-
- 904 /clients/client8/~dmtmp
-
- 904 /clients/client2/~dmtmp
-
- 904 /clients/client6/~dmtmp
-
- 904 /clients/client5/~dmtmp
-
- 904 /clients/client9/~dmtmp
-
- 812 /clients/client1/~dmtmp
-
- 812 /clients/client7/~dmtmp
-
- 800 /clients/client4/~dmtmp
-
-<a name="read-perf" />
-###Viewing List of Read Performance on each Brick
-
-You can view the read throughput of files on each brick. If brick name
-is not specified, then the metrics of all the bricks belonging to that
-volume will be displayed. The output will be the read throughput.
-
- ==========Read throughput file stats========
-
-       read throughput(MBps)       filename        Time
-
- 2570.00 /clients/client0/~dmtmp/PWRPNT/ -2011-01-31
- TRIDOTS.POT 15:38:36.894610
- 2570.00 /clients/client0/~dmtmp/PWRPNT/ -2011-01-31
- PCBENCHM.PPT 15:38:39.815310
- 2383.00 /clients/client2/~dmtmp/SEED/ -2011-01-31
- MEDIUM.FIL 15:52:53.631499
-
- 2340.00 /clients/client0/~dmtmp/SEED/ -2011-01-31
- MEDIUM.FIL 15:38:36.926198
-
- 2299.00 /clients/client0/~dmtmp/SEED/ -2011-01-31
- LARGE.FIL 15:38:36.930445
-
- 2259.00 /clients/client0/~dmtmp/PARADOX/ -2011-01-31
- COURSES.X04 15:38:40.549919
-
- 2221.00 /clients/client0/~dmtmp/PARADOX/ -2011-01-31
- STUDENTS.VAL 15:52:53.298766
-
- 2221.00 /clients/client3/~dmtmp/SEED/ -2011-01-31
- COURSES.DB 15:39:11.776780
-
- 2184.00 /clients/client3/~dmtmp/SEED/ -2011-01-31
- MEDIUM.FIL 15:39:10.251764
-
- 2184.00 /clients/client5/~dmtmp/WORD/ -2011-01-31
- BASEMACH.DOC 15:39:09.336572
-
-This command will initiate a dd for the specified count and block size
-and measure the corresponding throughput.
-
-- View list of read performance on each brick using the following
- command:
-
-    `# gluster volume top <VOLNAME> read-perf [bs <blk-size> count <count>] [brick <BRICK-NAME>] [list-cnt <cnt>]`
-
- For example, to view read performance on brick server:/export/ of
- test-volume, 256 block size of count 1, and list count 10:
-
-    `# gluster volume top test-volume read-perf bs 256 count 1 brick server:/export/ list-cnt 10`
-
- `Brick: server:/export/dir1 256 bytes (256 B) copied, Throughput: 4.1 MB/s `
-
- ==========Read throughput file stats========
-
-       read throughput(MBps)       filename        Time
-
- 2912.00 /clients/client0/~dmtmp/PWRPNT/ -2011-01-31
- TRIDOTS.POT 15:38:36.896486
-
- 2570.00 /clients/client0/~dmtmp/PWRPNT/ -2011-01-31
- PCBENCHM.PPT 15:38:39.815310
-
- 2383.00 /clients/client2/~dmtmp/SEED/ -2011-01-31
- MEDIUM.FIL 15:52:53.631499
-
- 2340.00 /clients/client0/~dmtmp/SEED/ -2011-01-31
- MEDIUM.FIL 15:38:36.926198
-
- 2299.00 /clients/client0/~dmtmp/SEED/ -2011-01-31
- LARGE.FIL 15:38:36.930445
-
- 2259.00 /clients/client0/~dmtmp/PARADOX/ -2011-01-31
- COURSES.X04 15:38:40.549919
-
- 2221.00 /clients/client9/~dmtmp/PARADOX/ -2011-01-31
- STUDENTS.VAL 15:52:53.298766
-
- 2221.00 /clients/client8/~dmtmp/PARADOX/ -2011-01-31
- COURSES.DB 15:39:11.776780
-
- 2184.00 /clients/client3/~dmtmp/SEED/ -2011-01-31
- MEDIUM.FIL 15:39:10.251764
-
- 2184.00 /clients/client5/~dmtmp/WORD/ -2011-01-31
- BASEMACH.DOC 15:39:09.336572
-
-<a name="write-perf" />
-###Viewing List of Write Performance on each Brick
-
-You can view the write throughput of files on each brick. If the brick
-name is not specified, then the metrics of all the bricks belonging to
-that volume will be displayed. The output will be the write throughput.
-
-This command will initiate a dd for the specified count and block size
-and measure the corresponding throughput. A combined read/write
-measurement sketch appears after the example output. To view the list
-of write performance on each brick:
-
-- View list of write performance on each brick using the following
- command:
-
-    `# gluster volume top <VOLNAME> write-perf [bs <blk-size> count <count>] [brick <BRICK-NAME>] [list-cnt <cnt>]`
-
- For example, to view write performance on brick server:/export/ of
- test-volume, 256 block size of count 1, and list count 10:
-
-    `# gluster volume top test-volume write-perf bs 256 count 1 brick server:/export/ list-cnt 10`
-
- `Brick`: server:/export/dir1
-
- `256 bytes (256 B) copied, Throughput: 2.8 MB/s `
-
- ==========Write throughput file stats========
-
-       write throughput(MBps)       filename                 Time
-
- 1170.00 /clients/client0/~dmtmp/SEED/ -2011-01-31
- SMALL.FIL 15:39:09.171494
-
- 1008.00 /clients/client6/~dmtmp/SEED/ -2011-01-31
- LARGE.FIL 15:39:09.73189
-
- 949.00 /clients/client0/~dmtmp/SEED/ -2011-01-31
- MEDIUM.FIL 15:38:36.927426
-
- 936.00 /clients/client0/~dmtmp/SEED/ -2011-01-31
- LARGE.FIL 15:38:36.933177
- 897.00 /clients/client5/~dmtmp/SEED/ -2011-01-31
- MEDIUM.FIL 15:39:09.33628
-
- 897.00 /clients/client6/~dmtmp/SEED/ -2011-01-31
- MEDIUM.FIL 15:39:09.27713
-
- 885.00 /clients/client0/~dmtmp/SEED/ -2011-01-31
- SMALL.FIL 15:38:36.924271
-
- 528.00 /clients/client5/~dmtmp/SEED/ -2011-01-31
- LARGE.FIL 15:39:09.81893
-
- 516.00 /clients/client6/~dmtmp/ACCESS/ -2011-01-31
- FASTENER.MDB 15:39:01.797317
-
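-The following is a minimal sketch, assuming a volume named test-volume with a
-brick at server:/export/, that records both read and write throughput for the
-same brick so the two can be compared:
-
-    # measure read and write throughput on the same brick
-    gluster volume top test-volume read-perf bs 256 count 1 brick server:/export/ list-cnt 10
-    gluster volume top test-volume write-perf bs 256 count 1 brick server:/export/ list-cnt 10
-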
-##Displaying Volume Information
-
-You can display information about a specific volume, or all volumes, as
-needed.
-
-- Display information about a specific volume using the following
- command:
-
-    `# gluster volume info <VOLNAME>`
-
- For example, to display information about test-volume:
-
- # gluster volume info test-volume
- Volume Name: test-volume
- Type: Distribute
- Status: Created
- Number of Bricks: 4
- Bricks:
- Brick1: server1:/exp1
- Brick2: server2:/exp2
- Brick3: server3:/exp3
- Brick4: server4:/exp4
-
-- Display information about all volumes using the following command:
-
- `# gluster volume info all`
-
- # gluster volume info all
-
- Volume Name: test-volume
- Type: Distribute
- Status: Created
- Number of Bricks: 4
- Bricks:
- Brick1: server1:/exp1
- Brick2: server2:/exp2
- Brick3: server3:/exp3
- Brick4: server4:/exp4
-
- Volume Name: mirror
- Type: Distributed-Replicate
- Status: Started
- Number of Bricks: 2 X 2 = 4
- Bricks:
- Brick1: server1:/brick1
- Brick2: server2:/brick2
- Brick3: server3:/brick3
- Brick4: server4:/brick4
-
- Volume Name: Vol
- Type: Distribute
- Status: Started
- Number of Bricks: 1
- Bricks:
- Brick: server:/brick6
-
-##Performing Statedump on a Volume
-
-Statedump is a mechanism through which you can get details of all
-internal variables and state of the glusterfs process at the time of
-issuing the command. You can perform statedumps of the brick processes
-and nfs server process of a volume using the statedump command. The
-following options can be used to determine what information is to be
-dumped:
-
-- **mem** - Dumps the memory usage and memory pool details of the
- bricks.
-
-- **iobuf** - Dumps iobuf details of the bricks.
-
-- **priv** - Dumps private information of loaded translators.
-
-- **callpool** - Dumps the pending calls of the volume.
-
-- **fd** - Dumps the open fd tables of the volume.
-
-- **inode** - Dumps the inode tables of the volume.
-
-**To display volume statedump**
-
-- Display statedump of a volume or NFS server using the following
- command:
-
-    `# gluster volume statedump <VOLNAME> [nfs] [all|mem|iobuf|callpool|priv|fd|inode]`
-
- For example, to display statedump of test-volume:
-
- # gluster volume statedump test-volume
- Volume statedump successful
-
- The statedump files are created on the brick servers in the` /tmp`
- directory or in the directory set using `server.statedump-path`
- volume option. The naming convention of the dump file is
- `<brick-path>.<brick-pid>.dump`.
-
--   By default, the output of the statedump is stored in the
-    `/tmp/<brickname.PID.dump>` file on that particular server. Change
-    the directory of the statedump file using the following command:
-
-    `# gluster volume set <VOLNAME> server.statedump-path <path>`
-
- For example, to change the location of the statedump file of
- test-volume:
-
- # gluster volume set test-volume server.statedump-path /usr/local/var/log/glusterfs/dumps/
- Set volume successful
-
- You can view the changed path of the statedump file using the
- following command:
-
-    `# gluster volume info <VOLNAME>`
-
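-As a minimal sketch (assuming a volume named test-volume and the dump
-directory used in the example above), the statedump path can be changed, a
-dump taken, and the resulting files listed in one pass:
-
-    # send future statedumps to a dedicated directory, then take one
-    gluster volume set test-volume server.statedump-path /usr/local/var/log/glusterfs/dumps/
-    gluster volume statedump test-volume
-    ls /usr/local/var/log/glusterfs/dumps/
-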
-##Displaying Volume Status
-
-You can display the status information about a specific volume, brick or
-all volumes, as needed. Status information can be used to understand the
-current status of the bricks, nfs processes, and the overall file system.
-Status information can also be used to monitor and debug a volume. You
-can view the status of the volume along with the following details (a
-troubleshooting sketch that chains several of these subcommands appears
-at the end of this section):
-
-- **detail** - Displays additional information about the bricks.
-
-- **clients** - Displays the list of clients connected to the volume.
-
-- **mem** - Displays the memory usage and memory pool details of the
- bricks.
-
-- **inode** - Displays the inode tables of the volume.
-
-- **fd** - Displays the open fd (file descriptors) tables of the
- volume.
-
-- **callpool** - Displays the pending calls of the volume.
-
-**To display volume status**
-
-- Display information about a specific volume using the following
- command:
-
-    `# gluster volume status [all|<VOLNAME> [<BRICK>]] [detail|clients|mem|inode|fd|callpool]`
-
- For example, to display information about test-volume:
-
- # gluster volume status test-volume
- STATUS OF VOLUME: test-volume
- BRICK PORT ONLINE PID
- --------------------------------------------------------
- arch:/export/1 24009 Y 22445
- --------------------------------------------------------
- arch:/export/2 24010 Y 22450
-
-- Display information about all volumes using the following command:
-
- `# gluster volume status all`
-
- # gluster volume status all
- STATUS OF VOLUME: volume-test
- BRICK PORT ONLINE PID
- --------------------------------------------------------
- arch:/export/4 24010 Y 22455
-
- STATUS OF VOLUME: test-volume
- BRICK PORT ONLINE PID
- --------------------------------------------------------
- arch:/export/1 24009 Y 22445
- --------------------------------------------------------
- arch:/export/2 24010 Y 22450
-
-- Display additional information about the bricks using the following
- command:
-
-    `# gluster volume status <VOLNAME> detail`
-
- For example, to display additional information about the bricks of
- test-volume:
-
-        # gluster volume status test-volume detail
- STATUS OF VOLUME: test-volume
- -------------------------------------------
- Brick : arch:/export/1
- Port : 24009
- Online : Y
- Pid : 16977
- File System : rootfs
- Device : rootfs
- Mount Options : rw
- Disk Space Free : 13.8GB
- Total Disk Space : 46.5GB
- Inode Size : N/A
- Inode Count : N/A
- Free Inodes : N/A
-
-
-- Display the list of clients accessing the volumes using the
- following command:
-
-    `# gluster volume status <VOLNAME> clients`
-
- For example, to display the list of clients connected to
- test-volume:
-
- # gluster volume status test-volume clients
- Brick : arch:/export/1
- Clients connected : 2
- Hostname Bytes Read BytesWritten
- -------- --------- ------------
- 127.0.0.1:1013 776 676
- 127.0.0.1:1012 50440 51200
-
-- Display the memory usage and memory pool details of the bricks using
- the following command:
-
-    `# gluster volume status <VOLNAME> mem`
-
- For example, to display the memory usage and memory pool details of
- the bricks of test-volume:
-
-        # gluster volume status test-volume mem
-        Memory status for volume : test-volume
- ----------------------------------------------
- Brick : arch:/export/1
- Mallinfo
- --------
- Arena : 434176
- Ordblks : 2
- Smblks : 0
- Hblks : 12
- Hblkhd : 40861696
- Usmblks : 0
- Fsmblks : 0
- Uordblks : 332416
- Fordblks : 101760
- Keepcost : 100400
-
- Mempool Stats
- -------------
- Name HotCount ColdCount PaddedSizeof AllocCount MaxAlloc
- ---- -------- --------- ------------ ---------- --------
- test-volume-server:fd_t 0 16384 92 57 5
- test-volume-server:dentry_t 59 965 84 59 59
- test-volume-server:inode_t 60 964 148 60 60
- test-volume-server:rpcsvc_request_t 0 525 6372 351 2
- glusterfs:struct saved_frame 0 4096 124 2 2
- glusterfs:struct rpc_req 0 4096 2236 2 2
- glusterfs:rpcsvc_request_t 1 524 6372 2 1
- glusterfs:call_stub_t 0 1024 1220 288 1
- glusterfs:call_stack_t 0 8192 2084 290 2
- glusterfs:call_frame_t 0 16384 172 1728 6
-
-- Display the inode tables of the volume using the following command:
-
-    `# gluster volume status <VOLNAME> inode`
-
- For example, to display the inode tables of the test-volume:
-
- # gluster volume status test-volume inode
- inode tables for volume test-volume
- ----------------------------------------------
- Brick : arch:/export/1
- Active inodes:
- GFID Lookups Ref IA type
- ---- ------- --- -------
- 6f3fe173-e07a-4209-abb6-484091d75499 1 9 2
- 370d35d7-657e-44dc-bac4-d6dd800ec3d3 1 1 2
-
- LRU inodes:
- GFID Lookups Ref IA type
- ---- ------- --- -------
- 80f98abe-cdcf-4c1d-b917-ae564cf55763 1 0 1
- 3a58973d-d549-4ea6-9977-9aa218f233de 1 0 1
- 2ce0197d-87a9-451b-9094-9baa38121155 1 0 2
-
-- Display the open fd tables of the volume using the following
- command:
-
-    `# gluster volume status <VOLNAME> fd`
-
- For example, to display the open fd tables of the test-volume:
-
- # gluster volume status test-volume fd
-
- FD tables for volume test-volume
- ----------------------------------------------
- Brick : arch:/export/1
- Connection 1:
- RefCount = 0 MaxFDs = 128 FirstFree = 4
- FD Entry PID RefCount Flags
- -------- --- -------- -----
- 0 26311 1 2
- 1 26310 3 2
- 2 26310 1 2
- 3 26311 3 2
-
- Connection 2:
- RefCount = 0 MaxFDs = 128 FirstFree = 0
- No open fds
-
- Connection 3:
- RefCount = 0 MaxFDs = 128 FirstFree = 0
- No open fds
-
-- Display the pending calls of the volume using the following command:
-
-    `# gluster volume status <VOLNAME> callpool`
-
- Each call has a call stack containing call frames.
-
- For example, to display the pending calls of test-volume:
-
-        # gluster volume status test-volume callpool
-
- Pending calls for volume test-volume
- ----------------------------------------------
- Brick : arch:/export/1
- Pending calls: 2
- Call Stack1
- UID : 0
- GID : 0
- PID : 26338
- Unique : 192138
- Frames : 7
- Frame 1
- Ref Count = 1
- Translator = test-volume-server
- Completed = No
- Frame 2
- Ref Count = 0
- Translator = test-volume-posix
- Completed = No
- Parent = test-volume-access-control
- Wind From = default_fsync
- Wind To = FIRST_CHILD(this)->fops->fsync
- Frame 3
- Ref Count = 1
- Translator = test-volume-access-control
- Completed = No
- Parent = repl-locks
- Wind From = default_fsync
- Wind To = FIRST_CHILD(this)->fops->fsync
- Frame 4
- Ref Count = 1
- Translator = test-volume-locks
- Completed = No
- Parent = test-volume-io-threads
- Wind From = iot_fsync_wrapper
- Wind To = FIRST_CHILD (this)->fops->fsync
- Frame 5
- Ref Count = 1
- Translator = test-volume-io-threads
- Completed = No
- Parent = test-volume-marker
- Wind From = default_fsync
- Wind To = FIRST_CHILD(this)->fops->fsync
- Frame 6
- Ref Count = 1
- Translator = test-volume-marker
- Completed = No
- Parent = /export/1
- Wind From = io_stats_fsync
- Wind To = FIRST_CHILD(this)->fops->fsync
- Frame 7
- Ref Count = 1
- Translator = /export/1
- Completed = No
- Parent = test-volume-server
- Wind From = server_fsync_resume
- Wind To = bound_xl->fops->fsync
-
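-As a quick troubleshooting sketch (assuming a volume named test-volume that
-is showing client-side slowness), the status subcommands above can be chained
-to collect a snapshot of brick health, connected clients, and memory usage in
-one pass:
-
-    # collect a one-shot status snapshot of test-volume
-    gluster volume status test-volume detail
-    gluster volume status test-volume clients
-    gluster volume status test-volume mem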
-
diff --git a/doc/admin-guide/en-US/markdown/admin_object_storage.md b/doc/admin-guide/en-US/markdown/admin_object_storage.md
deleted file mode 100644
index 71edab64536..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_object_storage.md
+++ /dev/null
@@ -1,26 +0,0 @@
-# SwiftOnFile
-
-The SwiftOnFile project enables a GlusterFS volume to be used as a backend for
-OpenStack Swift - a distributed object store. This allows objects PUT over Swift's
-RESTful API to be accessed as files over the filesystem interface and vice versa,
-i.e. files created over the filesystem interface (NFS/FUSE/native) can be accessed
-as objects over Swift's RESTful API.
-
-SwiftOnFile project was formerly known as `gluster-swift` and also as `UFO
-(Unified File and Object)` before that. More information about SwiftOnFile can
-be found [here](https://github.com/swiftonfile/swiftonfile/blob/master/doc/markdown/quick_start_guide.md).
-There are differences in working of gluster-swift (now obsolete) and swiftonfile
-projects. The older gluster-swift code and relevant documentation can be found
-in [icehouse branch](https://github.com/swiftonfile/swiftonfile/tree/icehouse)
-of swiftonfile repo.
-
-## SwiftOnFile vs gluster-swift
-
-| Gluster-Swift | SwiftOnFile |
-|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
-| One GlusterFS volume maps to and stores only one Swift account. Mountpoint Hierarchy: `container/object` | One GlusterFS volume or XFS partition can have multiple accounts. Mountpoint Hierarchy: `acc/container/object` |
-| Overrides the account server, container server and object server. We need to keep in sync with upstream Swift and often may need code changes or workarounds to support new Swift features. | Implements only the object-server. Much less need to catch up with Swift, as new features at the proxy, container and account level would very likely be compatible with SwiftOnFile since it's just a storage policy. |
-| Does not use DBs for accounts and containers. A container listing involves a filesystem crawl. A HEAD on account/container gives inaccurate or stale results without an FS crawl. | Uses Swift's DBs to store account and container information. An account or container listing does not involve an FS crawl. Accurate info on HEAD to account/container – ability to support account quotas. |
-| GET on a container and account lists actual files in filesystem. | GET on a container and account only lists objects PUT over Swift. Files created over filesystem interface do not appear in container and object listings. |
-| Standalone deployment required and does not integrate with existing Swift cluster. | Integrates with any existing Swift deployment as a Storage Policy. |
-
diff --git a/doc/admin-guide/en-US/markdown/admin_puppet.md b/doc/admin-guide/en-US/markdown/admin_puppet.md
deleted file mode 100644
index 103449be0e7..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_puppet.md
+++ /dev/null
@@ -1,499 +0,0 @@
-#Puppet-Gluster
-<!---
-GlusterFS module by James
-Copyright (C) 2010-2013+ James Shubin
-Written by James Shubin <james@shubin.ca>
-
-This program is free software: you can redistribute it and/or modify
-it under the terms of the GNU Affero General Public License as published by
-the Free Software Foundation, either version 3 of the License, or
-(at your option) any later version.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-GNU Affero General Public License for more details.
-
-You should have received a copy of the GNU Affero General Public License
-along with this program. If not, see <http://www.gnu.org/licenses/>.
--->
-##A GlusterFS Puppet module by [James](https://ttboj.wordpress.com/)
-####Available from:
-####[https://github.com/purpleidea/puppet-gluster/](https://github.com/purpleidea/puppet-gluster/)
-
-####Also available from:
-####[https://forge.gluster.org/puppet-gluster/](https://forge.gluster.org/puppet-gluster/)
-
-####Table of Contents
-
-1. [Overview](#overview)
-2. [Module description - What the module does](#module-description)
-3. [Setup - Getting started with Puppet-Gluster](#setup)
- * [What can Puppet-Gluster manage?](#what-can-puppet-gluster-manage)
- * [Simple setup](#simple-setup)
- * [Elastic setup](#elastic-setup)
- * [Advanced setup](#advanced-setup)
-4. [Usage/FAQ - Notes on management and frequently asked questions](#usage-and-frequently-asked-questions)
-5. [Reference - Class and type reference](#reference)
- * [gluster::simple](#glustersimple)
- * [gluster::elastic](#glusterelastic)
- * [gluster::server](#glusterserver)
- * [gluster::host](#glusterhost)
- * [gluster::brick](#glusterbrick)
- * [gluster::volume](#glustervolume)
- * [gluster::volume::property](#glustervolumeproperty)
-6. [Examples - Example configurations](#examples)
-7. [Limitations - Puppet versions, OS compatibility, etc...](#limitations)
-8. [Development - Background on module development](#development)
-9. [Author - Author and contact information](#author)
-
-##Overview
-
-The Puppet-Gluster module installs, configures, and manages a GlusterFS cluster.
-
-##Module Description
-
-This Puppet-Gluster module handles installation, configuration, and management
-of GlusterFS across all of the hosts in the cluster.
-
-##Setup
-
-###What can Puppet-Gluster manage?
-
-Puppet-Gluster is designed to be able to manage as much or as little of your
-GlusterFS cluster as you wish. All features are optional. If there is a feature
-that doesn't appear to be optional, and you believe it should be, please let me
-know. Having said that, it makes good sense to me to have Puppet-Gluster manage
-as much of your GlusterFS infrastructure as it can. At the moment, it cannot
-rack new servers, but I am accepting funding to explore this feature ;) At the
-moment it can manage:
-
-* GlusterFS packages (rpm)
-* GlusterFS configuration files (/var/lib/glusterd/)
-* GlusterFS host peering (gluster peer probe)
-* GlusterFS storage partitioning (fdisk)
-* GlusterFS storage formatting (mkfs)
-* GlusterFS brick creation (mkdir)
-* GlusterFS services (glusterd)
-* GlusterFS firewalling (whitelisting)
-* GlusterFS volume creation (gluster volume create)
-* GlusterFS volume state (started/stopped)
-* GlusterFS volume properties (gluster volume set)
-* And much more...
-
-###Simple setup
-
-include '::gluster::simple' is enough to get you up and running. When using the
-gluster::simple class, or with any other Puppet-Gluster configuration,
-identical definitions must be used on all hosts in the cluster. The simplest
-way to accomplish this is with a single shared puppet host definition like:
-
-```puppet
-node /^annex\d+$/ { # annex{1,2,..N}
- class { '::gluster::simple':
- }
-}
-```
-
-If you wish to pass in different parameters, you can specify them in the class
-before you provision your hosts:
-
-```puppet
-class { '::gluster::simple':
- replica => 2,
- volume => ['volume1', 'volume2', 'volumeN'],
-}
-```
-
-###Elastic setup
-
-The gluster::elastic class is not yet available. Stay tuned!
-
-###Advanced setup
-
-Some system administrators may wish to manually itemize each of the required
-components for the Puppet-Gluster deployment. This happens automatically with
-the higher level modules, but may still be a desirable feature, particularly
-for non-elastic storage pools where the configuration isn't expected to change
-very often (if ever).
-
-To put together your cluster piece by piece, you must manually include and
-define each class and type that you wish to use. If there are certain aspects
-that you wish to manage yourself, you can omit them from your configuration.
-See the [reference](#reference) section below for the specifics. Here is one
-possible example:
-
-```puppet
-class { '::gluster::server':
- shorewall => true,
-}
-
-gluster::host { 'annex1.example.com':
- # use uuidgen to make these
- uuid => '1f660ca2-2c78-4aa0-8f4d-21608218c69c',
-}
-
-# note that this is using a folder on your existing file system...
-# this can be useful for prototyping gluster using virtual machines
-# if this isn't a separate partition, remember that your root fs will
-# run out of space when your gluster volume does!
-gluster::brick { 'annex1.example.com:/data/gluster-storage1':
- areyousure => true,
-}
-
-gluster::host { 'annex2.example.com':
- # NOTE: specifying a host uuid is now optional!
- # if you don't choose one, one will be assigned
- #uuid => '2fbe6e2f-f6bc-4c2d-a301-62fa90c459f8',
-}
-
-gluster::brick { 'annex2.example.com:/data/gluster-storage2':
- areyousure => true,
-}
-
-$brick_list = [
- 'annex1.example.com:/data/gluster-storage1',
- 'annex2.example.com:/data/gluster-storage2',
-]
-
-gluster::volume { 'examplevol':
- replica => 2,
- bricks => $brick_list,
- start => undef, # i'll start this myself
-}
-
-# namevar must be: <VOLNAME>#<KEY>
-gluster::volume::property { 'examplevol#auth.reject':
- value => ['192.0.2.13', '198.51.100.42', '203.0.113.69'],
-}
-```
-
-##Usage and frequently asked questions
-
-All management should be done by manipulating the arguments on the appropriate
-Puppet-Gluster classes and types. Since certain manipulations are either not
-yet possible with Puppet-Gluster, or are not supported by GlusterFS, attempting
-to manipulate the Puppet configuration in an unsupported way will result in
-undefined behaviour, and possibly even data loss, although this is unlikely.
-
-###How do I change the replica count?
-
-You must set this before volume creation. This is a limitation of GlusterFS.
-There are certain situations where you can change the replica count by adding
-a multiple of the existing brick count to get this desired effect. These cases
-are not yet supported by Puppet-Gluster. If you want to use Puppet-Gluster
-before and / or after this transition, you can do so, but you'll have to do the
-changes manually.
-
-###Do I need to use a virtual IP?
-
-Using a virtual IP (VIP) is strongly recommended as a distributed lock manager
-(DLM) and also to provide a highly-available (HA) IP address for your clients
-to connect to. For a more detailed explanation of the reasoning please see:
-
-[https://ttboj.wordpress.com/2012/08/23/how-to-avoid-cluster-race-conditions-or-how-to-implement-a-distributed-lock-manager-in-puppet/](https://ttboj.wordpress.com/2012/08/23/how-to-avoid-cluster-race-conditions-or-how-to-implement-a-distributed-lock-manager-in-puppet/)
-
-Remember that even if you're using a hosted solution (such as AWS) that doesn't
-provide an additional IP address, or you want to avoid using an additional IP,
-and you're okay not having full HA client mounting, you can use an unused
-private RFC1918 IP address as the DLM VIP. Remember that a layer 3 IP can
-co-exist on the same layer 2 network with the layer 3 network that is used by
-your cluster.
-
-###Is it possible to have Puppet-Gluster complete in a single run?
-
-No. This is a limitation of Puppet, and is related to how GlusterFS operates.
-For example, it is not reliably possible to predict which ports a particular
-GlusterFS volume will run on until after the volume is started. As a result,
-this module will initially whitelist connections from GlusterFS host IP
-addresses, and then further restrict this to only allow individual ports once
-this information is known. This is possible in conjunction with the
-[puppet-shorewall](https://github.com/purpleidea/puppet-shorewall) module.
-You should notice that each run should complete without error. If you do see an
-error, it means that either something is wrong with your system and / or
-configuration, or because there is a bug in Puppet-Gluster.
-
-###Can you integrate this with vagrant?
-
-Not until vagrant properly supports libvirt/KVM. I have no desire to use
-VirtualBox for fun.
-
-###Awesome work, but it's missing support for a feature and/or platform!
-
-Since this is an Open Source / Free Software project that I also give away for
-free (as in beer, free as in gratis, free as in libre), I'm unable to provide
-unlimited support. Please consider donating funds, hardware, virtual machines,
-and other resources. For specific needs, you could perhaps sponsor a feature!
-
-###You didn't answer my question, or I have a question!
-
-Contact me through my [technical blog](https://ttboj.wordpress.com/contact/)
-and I'll do my best to help. If you have a good question, please remind me to
-add my answer to this documentation!
-
-##Reference
-Please note that there are a number of undocumented options. For more
-information on these options, please view the source at:
-[https://github.com/purpleidea/puppet-gluster/](https://github.com/purpleidea/puppet-gluster/).
-If you feel that a well used option needs documenting here, please contact me.
-
-###Overview of classes and types
-
-* [gluster::simple](#glustersimple): Simple Puppet-Gluster deployment.
-* [gluster::elastic](#glusterelastic): Under construction.
-* [gluster::server](#glusterserver): Base class for server hosts.
-* [gluster::host](#glusterhost): Host type for each participating host.
-* [gluster::brick](#glusterbrick): Brick type for each defined brick, per host.
-* [gluster::volume](#glustervolume): Volume type for each defined volume.
-* [gluster::volume::property](#glustervolumeproperty): Manages properties for each volume.
-
-###gluster::simple
-This is gluster::simple. It should probably take care of 80% of all use cases.
-It is particularly useful for deploying quick test clusters. It uses a
-finite-state machine (FSM) to decide when the cluster has settled and volume
-creation can begin. For more information on the FSM in Puppet-Gluster see:
-[https://ttboj.wordpress.com/2013/09/28/finite-state-machines-in-puppet/](https://ttboj.wordpress.com/2013/09/28/finite-state-machines-in-puppet/)
-
-####`replica`
-The replica count. Can't be changed automatically after initial deployment.
-
-####`volume`
-The volume name or list of volume names to create.
-
-####`path`
-The valid brick path for each host. Defaults to local file system. If you need
-a different path per host, then Gluster::Simple will not meet your needs.
-
-####`vip`
-The virtual IP address to be used for the cluster distributed lock manager.
-
-####`shorewall`
-Boolean to specify whether puppet-shorewall integration should be used or not.
-
-###gluster::elastic
-Under construction.
-
-###gluster::server
-Main server class for the cluster. Must be included when building the GlusterFS
-cluster manually. Wrapper classes such as [gluster::simple](#glustersimple)
-include this automatically.
-
-####`vip`
-The virtual IP address to be used for the cluster distributed lock manager.
-
-####`shorewall`
-Boolean to specify whether puppet-shorewall integration should be used or not.
-
-###gluster::host
-Main host type for the cluster. Each host participating in the GlusterFS
-cluster must define this type on itself, and on every other host. As a result,
-this is not a singleton like the [gluster::server](#glusterserver) class.
-
-####`ip`
-Specify which IP address this host is using. This defaults to the
-_$::ipaddress_ variable. Be sure to set this manually if you're declaring this
-yourself on each host without using exported resources. If each host thinks the
-other hosts should have the same IP address as itself, then Puppet-Gluster and
-GlusterFS won't work correctly.
-
-####`uuid`
-Universally unique identifier (UUID) for the host. If empty, Puppet-Gluster
-will generate this automatically for the host. You can generate your own
-manually with _uuidgen_, and set them yourself. I found this particularly
-useful for testing, because I would pick easy to recognize UUID's like:
-_aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa_,
-_bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb_, and so on. If you set a UUID manually,
-and Puppet-Gluster has a chance to run, then it will remember your choice, and
-store it locally to be used again if you no longer specify the UUID. This is
-particularly useful for upgrading an existing un-managed GlusterFS installation
-to a Puppet-Gluster managed one, without changing any UUID's.
-
-###gluster::brick
-Main brick type for the cluster. Each brick is an individual storage segment to
-be used on a host. Each host must have at least one brick to participate in the
-cluster, but usually a host will have multiple bricks. A brick can be as simple
-as a file system folder, or it can be a separate file system. Please read the
-official GlusterFS documentation, if you aren't entirely comfortable with the
-concept of a brick.
-
-For most test clusters, and for experimentation, it is easiest to use a
-directory on the root file system. You can even use a _/tmp_ sub folder if you
-don't care about the persistence of your data. For more serious clusters, you
-might want to create separate file systems for your data. On self-hosted iron,
-it is not uncommon to create multiple RAID-6 drive pools, and to then create a
-separate file system per virtual drive. Each file system can then be used as a
-single brick.
-
-So that each volume in GlusterFS has the maximum ability to grow, without
-having to partition storage separately, the bricks in Puppet-Gluster are
-actually folders (on whatever backing store you wish) which then contain
-sub folders-- one for each volume. As a result, all the volumes on a given
-GlusterFS cluster can share the total available storage space. If you wish to
-limit the storage used by each volume, you can setup quotas. Alternatively, you
-can buy more hardware, and elastically grow your GlusterFS volumes, since the
-price per GB will be significantly less than any proprietary storage system.
-The one downside to this brick sharing, is that if you have chosen the brick
-per host count specifically to match your performance requirements, and
-each GlusterFS volume on the same cluster has drastically different brick per
-host performance requirements, then this won't suit your needs. I doubt that
-anyone actually has such requirements, but if you do insist on needing this
-compartmentalization, then you can probably use the Puppet-Gluster grouping
-feature to accomplish this goal. Please let me know about your use-case, and
-be warned that the grouping feature hasn't been extensively tested.
-
-To prove to you that I care about automation, this type offers the ability to
-automatically partition and format your file systems. This means you can plug
-in new iron, boot, provision and configure the entire system automatically.
-Regrettably, I don't have a lot of test hardware to routinely use this feature.
-If you'd like to donate some, I'd be happy to test this thoroughly. Having said
-that, I have used this feature, I consider it to be extremely safe, and it has
-never caused me to lose data. If you're uncertain, feel free to look at the
-code, or avoid using this feature entirely. If you think there's a way to make
-it even safer, then feel free to let me know.
-
-####`dev`
-Block device, such as _/dev/sdc_ or _/dev/disk/by-id/scsi-0123456789abcdef_. By
-default, Puppet-Gluster will assume you're using a folder to store the brick
-data, if you don't specify this parameter.
-
-####`fsuuid`
-File system UUID. This ensures we can distinctly identify a file system. You
-can set this to be used with automatic file system creation, or you can specify
-the file system UUID that you'd like to use.
-
-####`labeltype`
-Only _gpt_ is supported. Other options include _msdos_, but this has never been
-used because of its size limitations.
-
-####`fstype`
-This should be _xfs_ or _ext4_. Using _xfs_ is recommended, but _ext4_ is also
-quite common. This only affects a file system that is getting created by this
-module. If you provision a new machine, with a root file system of _ext4_, and
-the brick you create is a root file system path, then this option does nothing.
-
-####`xfs_inode64`
-Set _inode64_ mount option when using the _xfs_ fstype. Choose _true_ to set.
-
-####`xfs_nobarrier`
-Set _nobarrier_ mount option when using the _xfs_ fstype. Choose _true_ to set.
-
-####`ro`
-Whether the file system should be mounted read only. For emergencies only.
-
-####`force`
-If _true_, this will overwrite any xfs file system it sees. This is useful for
-rebuilding GlusterFS repeatedly and wiping data. There are other safeties in
-place to stop this. In general, you probably don't ever want to touch this.
-
-####`areyousure`
-Do you want to allow Puppet-Gluster to do dangerous things? You have to set
-this to _true_ to allow Puppet-Gluster to _fdisk_ and _mkfs_ your file system.
-
-###gluster::volume
-Main volume type for the cluster. This is where a lot of the magic happens.
-Remember that changing some of these parameters after the volume has been
-created won't work, and you'll experience undefined behaviour. There could be
-FSM based error checking to verify that no changes occur, but it has been left
-out so that this code base can eventually support such changes, and so that the
-user can manually change a parameter if they know that it is safe to do so.
-
-####`bricks`
-List of bricks to use for this volume. If this is left at the default value of
-_true_, then this list is built automatically. The algorithm that determines
-this order does not support all possible situations, and most likely can't
-handle certain corner cases. It is possible to examine the FSM to view the
-selected brick order before it has a chance to create the volume. The volume
-creation script won't run until there is a stable brick list as seen by the FSM
-running on the host that has the DLM. If you specify this list of bricks
-manually, you must choose the order to match your desired volume layout. If you
-aren't sure about how to order the bricks, you should review the GlusterFS
-documentation first.
-
-####`transport`
-Only _tcp_ is supported. Possible values can include _rdma_, but this won't get
-any testing if I don't have access to infiniband hardware. Donations welcome.
-
-####`replica`
-Replica count. Usually you'll want to set this to _2_. Some users choose _3_.
-Other values are seldom seen. A value of _1_ can be used for simply testing a
-distributed setup, when you don't care about your data or high availability. A
-value greater than _4_ is probably wasteful and unnecessary. It might even
-cause performance issues if a synchronous write is waiting on a slow fourth
-server.
-
-####`stripe`
-Stripe count. Thoroughly unsupported and untested option. Not recommended for
-use by GlusterFS.
-
-####`ping`
-Do we want to include ping checks with _fping_?
-
-####`settle`
-Do we want to run settle checks?
-
-####`start`
-Requested state for the volume. Valid values include: _true_ (start), _false_
-(stop), or _undef_ (un-managed start/stop state).
-
-###gluster::volume::property
-Main volume property type for the cluster. This allows you to manage GlusterFS
-volume specific properties. There are a wide range of properties that volumes
-support. For the full list of properties, you should consult the GlusterFS
-documentation, or run the _gluster volume set help_ command. To set a property
-you must use the special name pattern of: _volume_#_key_. The value argument is
-used to set the associated value. It is smart enough to accept values in the
-most logical format for that specific property. Some properties aren't yet
-supported, so please report any problems you have with this functionality.
-Because this feature is an awesome way to _document as code_ the volume
-specific optimizations that you've made, make sure you use this feature even if
-you don't use all the others.
-
-####`value`
-The value to be used for this volume property.
-
-##Examples
-For example configurations, please consult the [examples/](https://github.com/purpleidea/puppet-gluster/tree/master/examples) directory in the git
-source repository. It is available from:
-
-[https://github.com/purpleidea/puppet-gluster/tree/master/examples](https://github.com/purpleidea/puppet-gluster/tree/master/examples)
-
-It is also available from:
-
-[https://forge.gluster.org/puppet-gluster/puppet-gluster/trees/master/examples](https://forge.gluster.org/puppet-gluster/puppet-gluster/trees/master/examples/)
-
-##Limitations
-
-This module has been tested against open source Puppet 3.2.4 and higher.
-
-The module has been tested on:
-
-* CentOS 6.4
-
-It will probably work without incident, or with only minor modification, on:
-
-* CentOS 5.x/6.x
-* RHEL 5.x/6.x
-
-It will most likely work with other Puppet versions and on other platforms, but
-testing under other conditions has been light due to lack of resources. It will
-most likely not work on Debian/Ubuntu systems without modification. I would
-really love to add support for these operating systems, but I do not have any
-test resources to do so. Please sponsor this if you'd like to see it happen.
-
-##Development
-
-This is my personal project that I work on in my free time.
-Donations of funding, hardware, virtual machines, and other resources are
-appreciated. Please contact me if you'd like to sponsor a feature, invite me to
-talk/teach or for consulting.
-
-You can follow along [on my technical blog](https://ttboj.wordpress.com/).
-
-##Author
-
-Copyright (C) 2010-2013+ James Shubin
-
-* [github](https://github.com/purpleidea/)
-* [@purpleidea](https://twitter.com/#!/purpleidea)
-* [https://ttboj.wordpress.com/](https://ttboj.wordpress.com/)
-
diff --git a/doc/admin-guide/en-US/markdown/admin_rdma_transport.md b/doc/admin-guide/en-US/markdown/admin_rdma_transport.md
deleted file mode 100644
index 872adb31a08..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_rdma_transport.md
+++ /dev/null
@@ -1,70 +0,0 @@
-# Introduction
-
-GlusterFS supports using the RDMA protocol for communication between glusterfs clients and glusterfs bricks.
-GlusterFS clients include the FUSE client, libgfapi clients (Samba and NFS-Ganesha included), the gNFS server and other glusterfs processes that communicate with bricks, such as the self-heal daemon, quotad and the rebalance process.
-
-NOTE: As of now, only the FUSE client and the gNFS server support RDMA transport.
-
-
-NOTE:
-NFS client to gNFS server/NFS-Ganesha server communication still happens over tcp.
-CIFS clients/Windows clients to Samba server communication still happens over tcp.
-
-# Setup
-Please refer to this external documentation to set up RDMA on your machines:
-http://pkg-ofed.alioth.debian.org/howto/infiniband-howto.html
-http://people.redhat.com/dledford/infiniband_get_started.html
-
-## Creating Trusted Storage Pool
-All the servers in the Trusted Storage Pool must have RDMA devices if either RDMA or TCP,RDMA volumes are to be created in the storage pool.
-The peer probe must be performed using the IP address/hostname assigned to the RDMA device.
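-
-For example (the hostname here is illustrative and assumed to resolve to the address of the RDMA device):
-
-    # gluster peer probe ib-server2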
-
-## Ports and Firewall
-The glusterd process will listen on both tcp and rdma if an rdma device is found. The port used for rdma is 24008. Similarly, brick processes will also listen on two ports for a volume created with transport "tcp,rdma".
-
-Make sure you update the firewall to accept packets on these ports.
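-
-For example, with iptables, rules along the following lines could be used (the chain name and the brick port range are illustrative and will vary with your distribution and volume layout):
-
-    # iptables -A INPUT -m state --state NEW -p tcp --dport 24007:24008 -j ACCEPT
-    # iptables -A INPUT -m state --state NEW -p tcp --dport 49152:49156 -j ACCEPT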
-
-# Gluster Volume Create
-
-A volume can support one or more transport types for communication between clients and brick processes. There are three supported transport types: tcp, rdma, and tcp,rdma.
-
-Example: To create a distributed volume with four storage servers over InfiniBand:
-
-    # gluster volume create test-volume transport rdma server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
-    Creation of test-volume has been successful
-    Please start the volume to access data.
-
-# Changing Transport of Volume
-To change the supported transport types of an existing volume, follow the procedure below. A consolidated example is given after the note at the end of this section.
-NOTE: This is possible only if the volume was created with the IP/hostname assigned to the RDMA device.
-
- 1. Unmount the volume on all the clients using the following command:
-`# umount mount-point`
- 2. Stop the volumes using the following command:
-`# gluster volume stop volname`
- 3. Change the transport type.
-For example, to enable both tcp and rdma execute the following command:
-`# gluster volume set volname config.transport tcp,rdma`
- 4. Mount the volume on all the clients.
-For example, to mount using rdma transport, use the following command:
-`# mount -t glusterfs -o transport=rdma server1:/test-volume /mnt/glusterfs`
-
-NOTE:
-The config.transport option does not have an entry in the help output of the gluster CLI:
-`# gluster volume set help | grep config.transport`
-However, the key is a valid one.
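-
-Putting the steps together for the test-volume example used earlier (server names are illustrative; note that the volume has to be started again after the transport change before it can be mounted):
-
-    # umount /mnt/glusterfs
-    # gluster volume stop test-volume
-    # gluster volume set test-volume config.transport tcp,rdma
-    # gluster volume start test-volume
-    # mount -t glusterfs -o transport=rdma server1:/test-volume /mnt/glusterfs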
-
-# Mounting a Volume using RDMA
-
-You can use the mount option "transport" to specify the transport type that the FUSE client must use to communicate with bricks. If the volume was created with only one transport type, then that becomes the default when no value is specified. In the case of a tcp,rdma volume, tcp is the default.
-
-For example, to mount using rdma transport, use the following command:
-`# mount -t glusterfs -o transport=rdma server1:/test-volume /mnt/glusterfs`
-
-# Transport used by auxiliary processes
-All the auxiliary processes like the self-heal daemon and the rebalance process use the default transport. In the case of a tcp,rdma volume they will use tcp;
-in the case of an rdma volume, rdma will be used.
-Configuration options to select the transport used by these processes when the volume is tcp,rdma are not yet available and will come in later releases.
-
-
-
diff --git a/doc/admin-guide/en-US/markdown/admin_setting_volumes.md b/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
deleted file mode 100644
index d66a6894152..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
+++ /dev/null
@@ -1,674 +0,0 @@
-#Setting up GlusterFS Server Volumes
-
-A volume is a logical collection of bricks where each brick is an export
-directory on a server in the trusted storage pool. Most of the gluster
-management operations are performed on the volume.
-
-To create a new volume in your storage environment, specify the bricks
-that comprise the volume. After you have created a new volume, you must
-start it before attempting to mount it.
-
-###Formatting and Mounting Bricks
-
-####Creating a Thinly Provisioned Logical Volume
-
-To create a thinly provisioned logical volume, proceed with the following steps:
-
- 1. Create a physical volume (PV) by using the pvcreate command.
- For example:
-
- `pvcreate --dataalignment 1280K /dev/sdb`
-
- Here, /dev/sdb is a storage device.
- Use the correct dataalignment option based on your device.
-
- >**Note**
- >
- >The device name and the alignment value will vary based on the device you are using.
-
- 2. Create a Volume Group (VG) from the PV using the vgcreate command:
-
-    For example:
-
- `vgcreate --physicalextentsize 128K gfs_vg /dev/sdb`
-
-    It is recommended that only one VG be created from one storage device.
-
- 3. Create a thin-pool using the following commands:
-
- 1. Create an LV to serve as the metadata device using the following command:
-
- `lvcreate -L metadev_sz --name metadata_device_name VOLGROUP`
-
- For example:
-
- `lvcreate -L 16776960K --name gfs_pool_meta gfs_vg`
-
- 2. Create an LV to serve as the data device using the following command:
-
- `lvcreate -L datadev_sz --name thin_pool VOLGROUP`
-
- For example:
-
- `lvcreate -L 536870400K --name gfs_pool gfs_vg`
-
- 3. Create a thin pool from the data LV and the metadata LV using the following command:
-
- `lvconvert --chunksize STRIPE_WIDTH --thinpool VOLGROUP/thin_pool --poolmetadata VOLGROUP/metadata_device_name`
-
- For example:
-
- `lvconvert --chunksize 1280K --thinpool gfs_vg/gfs_pool --poolmetadata gfs_vg/gfs_pool_meta`
-
- >**Note**
- >
-    >By default, the newly provisioned chunks in a thin pool are zeroed to prevent data leaking between different block devices. If this protection is not required, zeroing can be disabled with the following command:
-
-    `lvchange --zero n VOLGROUP/thin_pool`
-
- For example:
-
- `lvchange --zero n gfs_vg/gfs_pool`
-
- 4. Create a thinly provisioned volume from the previously created pool using the lvcreate command:
-
- For example:
-
- `lvcreate -V 1G -T gfs_vg/gfs_pool -n gfs_lv`
-
- It is recommended that only one LV should be created in a thin pool.
-
-Format bricks using the supported XFS configuration, mount the bricks, and verify the bricks are mounted correctly.
-
- 1. Run `# mkfs.xfs -f -i size=512 -n size=8192 -d su=128K,sw=10 DEVICE` to format the bricks to the supported XFS file system format. Here, DEVICE is the thin LV. The inode size is set to 512 bytes to accommodate the extended attributes used by GlusterFS.
-
-    Run `# mkdir /mountpoint` to create a directory to link the brick to.
-
-    Add an entry in /etc/fstab:
-
-    `/dev/gfs_vg/gfs_lv /mountpoint xfs rw,inode64,noatime,nouuid 1 2`
-
-    Run `# mount /mountpoint` to mount the brick.
-
-    Run the `df -h` command to verify the brick is successfully mounted:
-
- `# df -h
- /dev/gfs_vg/gfs_lv 16G 1.2G 15G 7% /exp1`
-
-- Volumes of the following types can be created in your storage
- environment:
-
-    - **Distributed** - Distributed volumes distribute files throughout
-      the bricks in the volume. You can use distributed volumes where
-      the requirement is to scale storage and the redundancy is either
-      not important or is provided by other hardware/software layers.
-
-    - **Replicated** – Replicated volumes replicate files across bricks
-      in the volume. You can use replicated volumes in environments
-      where high-availability and high-reliability are critical.
-
-    - **Striped** – Striped volumes stripe data across bricks in the
-      volume. For best results, you should use striped volumes only in
-      high concurrency environments accessing very large files.
-
-    - **Distributed Striped** - Distributed striped volumes stripe data
-      across two or more nodes in the cluster. You should use
-      distributed striped volumes where the requirement is to scale
-      storage and, in high concurrency environments, access to very
-      large files is critical.
-
-    - **Distributed Replicated** - Distributed replicated volumes
-      distribute files across replicated bricks in the volume. You
-      can use distributed replicated volumes in environments where the
-      requirement is to scale storage and high-reliability is
-      critical. Distributed replicated volumes also offer improved
-      read performance in most environments.
-
-    - **Distributed Striped Replicated** – Distributed striped replicated
-      volumes distribute striped data across replicated bricks in the
-      cluster. For best results, you should use distributed striped
-      replicated volumes in highly concurrent environments where
-      parallel access of very large files and performance is critical.
-      In this release, configuration of this volume type is supported
-      only for Map Reduce workloads.
-
-    - **Striped Replicated** – Striped replicated volumes stripe data
-      across replicated bricks in the cluster. For best results, you
-      should use striped replicated volumes in highly concurrent
-      environments where there is parallel access of very large files
-      and performance is critical. In this release, configuration of
-      this volume type is supported only for Map Reduce workloads.
-
-    - **Dispersed** - Dispersed volumes are based on erasure codes,
-      providing space-efficient protection against disk or server failures.
-      They store an encoded fragment of the original file on each brick in
-      a way that only a subset of the fragments is needed to recover the
-      original file. The number of bricks that can be missing without
-      losing access to data is configured by the administrator at volume
-      creation time.
-
-    - **Distributed Dispersed** - Distributed dispersed volumes distribute
-      files across dispersed subvolumes. This has the same advantages as
-      distributed replicated volumes, but uses dispersion instead of
-      replication to store the data on the bricks.
-
-**To create a new volume**
-
-- Create a new volume:
-
-    `# gluster volume create <NEW-VOLNAME> [stripe <COUNT> | replica <COUNT> | disperse <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...`
-
- For example, to create a volume called test-volume consisting of
- server3:/exp3 and server4:/exp4:
-
- # gluster volume create test-volume server3:/exp3 server4:/exp4
- Creation of test-volume has been successful
- Please start the volume to access data.
-
-##Creating Distributed Volumes
-
-In a distributed volume, files are spread randomly across the bricks in
-the volume. Use distributed volumes where you need to scale storage and
-redundancy is either not important or is provided by other
-hardware/software layers.
-
-> **Note**:
-> Disk/server failure in distributed volumes can result in a serious
-> loss of data because directory contents are spread randomly across the
-> bricks in the volume.
-
-![][]
-
-**To create a distributed volume**
-
-1. Create a trusted storage pool.
-
-2. Create the distributed volume:
-
-    `# gluster volume create <NEW-VOLNAME> [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...`
-
- For example, to create a distributed volume with four storage
- servers using tcp:
-
- # gluster volume create test-volume server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
- Creation of test-volume has been successful
- Please start the volume to access data.
-
- (Optional) You can display the volume information:
-
- # gluster volume info
- Volume Name: test-volume
- Type: Distribute
- Status: Created
- Number of Bricks: 4
- Transport-type: tcp
- Bricks:
- Brick1: server1:/exp1
- Brick2: server2:/exp2
- Brick3: server3:/exp3
- Brick4: server4:/exp4
-
- For example, to create a distributed volume with four storage
- servers over InfiniBand:
-
- # gluster volume create test-volume transport rdma server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
- Creation of test-volume has been successful
- Please start the volume to access data.
-
- If the transport type is not specified, *tcp* is used as the
- default. You can also set additional options if required, such as
- auth.allow or auth.reject.
-
- > **Note**:
- > Make sure you start your volumes before you try to mount them or
- > else client operations after the mount will hang.
-
-##Creating Replicated Volumes
-
-Replicated volumes create copies of files across multiple bricks in the
-volume. You can use replicated volumes in environments where
-high-availability and high-reliability are critical.
-
-> **Note**:
-> The number of bricks should be equal to the replica count for a
-> replicated volume. To protect against server and disk failures, it is
-> recommended that the bricks of the volume are from different servers.
-
-![][1]
-
-**To create a replicated volume**
-
-1. Create a trusted storage pool.
-
-2. Create the replicated volume:
-
-    `# gluster volume create <NEW-VOLNAME> [replica <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...`
-
- For example, to create a replicated volume with two storage servers:
-
- # gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
- Creation of test-volume has been successful
- Please start the volume to access data.
-
- If the transport type is not specified, *tcp* is used as the
- default. You can also set additional options if required, such as
- auth.allow or auth.reject.
-
- > **Note**:
-
- > - Make sure you start your volumes before you try to mount them or
- > else client operations after the mount will hang.
-
-    > - GlusterFS will fail to create a replicated volume if more than one brick of a replica set is present on the same peer. For example, a four-node replicated volume where more than one brick of a replica set is on the same peer:
- > ```
- # gluster volume create <volname> replica 4 server1:/brick1 server1:/brick2 server2:/brick3 server4:/brick4
- volume create: <volname>: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Use 'force' at the end of the command if you want to override this behavior.```
-
- > Use the `force` option at the end of command if you want to create the volume in this case.
-
-###Arbiter configuration for replica volumes
-Arbiter volumes are replica 3 volumes where the 3rd brick acts as the arbiter brick. This configuration has mechanisms that prevent the occurrence of split-brains.
-It can be created with the following command:
-`# gluster volume create <VOLNAME> replica 3 arbiter 1 host1:brick1 host2:brick2 host3:brick3`
-More information about this configuration can be found at `doc/features/afr-arbiter-volumes.md`.
-Note that the arbiter configuration for replica 3 can be used to create distributed-replicate volumes as well, as shown in the example below.
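-
-For instance, a distributed-replicate arbiter volume over six bricks (host and brick names are illustrative) could be created as:
-
-    # gluster volume create <VOLNAME> replica 3 arbiter 1 host1:/brick1 host2:/brick1 host3:/brick1 host4:/brick2 host5:/brick2 host6:/brick2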
-
-##Creating Striped Volumes
-
-Striped volumes stripe data across bricks in the volume. For best
-results, you should use striped volumes only in high concurrency
-environments accessing very large files.
-
-> **Note**:
-> The number of bricks should be equal to the stripe count for a
-> striped volume.
-
-![][2]
-
-**To create a striped volume**
-
-1. Create a trusted storage pool.
-
-2. Create the striped volume:
-
-    `# gluster volume create <NEW-VOLNAME> [stripe <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...`
-
- For example, to create a striped volume across two storage servers:
-
- # gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2
- Creation of test-volume has been successful
- Please start the volume to access data.
-
- If the transport type is not specified, *tcp* is used as the
- default. You can also set additional options if required, such as
- auth.allow or auth.reject.
-
- > **Note**:
- > Make sure you start your volumes before you try to mount them or
- > else client operations after the mount will hang.
-
-##Creating Distributed Striped Volumes
-
-Distributed striped volumes stripe files across two or more nodes in
-the cluster. For best results, you should use distributed striped
-volumes where the requirement is to scale storage and, in high
-concurrency environments, access to very large files is critical.
-
-> **Note**:
-> The number of bricks should be a multiple of the stripe count for a
-> distributed striped volume.
-
-![][3]
-
-**To create a distributed striped volume**
-
-1. Create a trusted storage pool.
-
-2. Create the distributed striped volume:
-
-    `# gluster volume create <NEW-VOLNAME> [stripe <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...`
-
- For example, to create a distributed striped volume across eight
- storage servers:
-
- # gluster volume create test-volume stripe 4 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8
- Creation of test-volume has been successful
- Please start the volume to access data.
-
- If the transport type is not specified, *tcp* is used as the
- default. You can also set additional options if required, such as
- auth.allow or auth.reject.
-
- > **Note**:
- > Make sure you start your volumes before you try to mount them or
- > else client operations after the mount will hang.
-
-##Creating Distributed Replicated Volumes
-
-Distributed replicated volumes distribute files across replicated bricks in the volume. You can use
-distributed replicated volumes in environments where the requirement is
-to scale storage and high-reliability is critical. Distributed
-replicated volumes also offer improved read performance in most
-environments.
-
-> **Note**:
-> The number of bricks should be a multiple of the replica count for a
-> distributed replicated volume. Also, the order in which bricks are
-> specified has a great effect on data protection. Each replica\_count
-> consecutive bricks in the list you give will form a replica set, with
-> all replica sets combined into a volume-wide distribute set. To make
-> sure that replica-set members are not placed on the same node, list
-> the first brick on every server, then the second brick on every server
-> in the same order, and so on.
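-
-For example, with two bricks on each of two servers (paths are illustrative), the following ordering keeps each replica pair on different servers:
-
-    # gluster volume create test-volume replica 2 server1:/exp1 server2:/exp1 server1:/exp2 server2:/exp2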
-
-![][4]
-
-**To create a distributed replicated volume**
-
-1. Create a trusted storage pool.
-
-2. Create the distributed replicated volume:
-
-    `# gluster volume create <NEW-VOLNAME> [replica <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...`
-
-    For example, to create a four-node distributed replicated volume with a
-    two-way mirror:
-
- # gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
- Creation of test-volume has been successful
- Please start the volume to access data.
-
-    For example, to create a six-node distributed replicated volume
-    with a two-way mirror:
-
- # gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6
- Creation of test-volume has been successful
- Please start the volume to access data.
-
- If the transport type is not specified, *tcp* is used as the
- default. You can also set additional options if required, such as
- auth.allow or auth.reject.
-
- > **Note**:
- > - Make sure you start your volumes before you try to mount them or
- > else client operations after the mount will hang.
-
-    > - GlusterFS will fail to create a distributed replicated volume if more than one brick of a replica set is present on the same peer. For example, a four-node distributed replicated volume where more than one brick of a replica set is on the same peer:
- > ```
- # gluster volume create <volname> replica 2 server1:/brick1 server1:/brick2 server2:/brick3 server4:/brick4
- volume create: <volname>: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Use 'force' at the end of the command if you want to override this behavior.```
-
- > Use the `force` option at the end of command if you want to create the volume in this case.
-
-
-##Creating Distributed Striped Replicated Volumes
-
-Distributed striped replicated volumes distributes striped data across
-replicated bricks in the cluster. For best results, you should use
-distributed striped replicated volumes in highly concurrent environments
-where parallel access of very large files and performance is critical.
-In this release, configuration of this volume type is supported only for
-Map Reduce workloads.
-
-> **Note**:
-> The number of bricks should be a multiple of the product of the stripe
-> count and the replica count for a distributed striped replicated volume.
-
-**To create a distributed striped replicated volume**
-
-1. Create a trusted storage pool.
-
-2. Create a distributed striped replicated volume using the following
- command:
-
-    `# gluster volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...`
-
-    For example, to create a distributed striped replicated volume
-    across eight storage servers:
-
- # gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8
- Creation of test-volume has been successful
- Please start the volume to access data.
-
- If the transport type is not specified, *tcp* is used as the
- default. You can also set additional options if required, such as
- auth.allow or auth.reject.
-
- > **Note**:
- > - Make sure you start your volumes before you try to mount them or
- > else client operations after the mount will hang.
-
-    > - GlusterFS will fail to create a distributed striped replicated volume if more than one brick of a replica set is present on the same peer. For example, a configuration where more than one brick of a replica set is on the same peer:
- > ```
- # gluster volume create <volname> stripe 2 replica 2 server1:/brick1 server1:/brick2 server2:/brick3 server4:/brick4
- volume create: <volname>: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Use 'force' at the end of the command if you want to override this behavior.```
-
- > Use the `force` option at the end of command if you want to create the volume in this case.
-
-##Creating Striped Replicated Volumes
-
-Striped replicated volumes stripe data across replicated bricks in the
-cluster. For best results, you should use striped replicated volumes in
-highly concurrent environments where there is parallel access of very
-large files and performance is critical. In this release, configuration
-of this volume type is supported only for Map Reduce workloads.
-
-> **Note**:
-> The number of bricks should be a multiple of the product of the replica
-> count and the stripe count for a striped replicated volume.
-
-![][5]
-
-**To create a striped replicated volume**
-
-1. Create a trusted storage pool consisting of the storage servers that
- will comprise the volume.
-
-2. Create a striped replicated volume:
-
-    `# gluster volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...`
-
- For example, to create a striped replicated volume across four
- storage servers:
-
- # gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
- Creation of test-volume has been successful
- Please start the volume to access data.
-
- To create a striped replicated volume across six storage servers:
-
- # gluster volume create test-volume stripe 3 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6
- Creation of test-volume has been successful
- Please start the volume to access data.
-
- If the transport type is not specified, *tcp* is used as the
- default. You can also set additional options if required, such as
- auth.allow or auth.reject.
-
- > **Note**:
- > - Make sure you start your volumes before you try to mount them or
- > else client operations after the mount will hang.
-
-    > - GlusterFS will fail to create a striped replicated volume if more than one brick of a replica set is present on the same peer. For example, a configuration where more than one brick of a replica set is on the same peer:
- > ```
- # gluster volume create <volname> stripe 2 replica 2 server1:/brick1 server1:/brick2 server2:/brick3 server4:/brick4
- volume create: <volname>: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Use `force` at the end of the command if you want to override this behavior.```
-
- > Use the `force` option at the end of command if you want to create the volume in this case.
-
-##Creating Dispersed Volumes
-
-Dispersed volumes are based on erasure codes. They stripe the encoded data of
-files, with some redundancy added, across multiple bricks in the volume. You
-can use dispersed volumes to have a configurable level of reliability with a
-minimum of wasted space.
-
-**Redundancy**
-
-Each dispersed volume has a redundancy value defined when the volume is
-created. This value determines how many bricks can be lost without
-interrupting the operation of the volume. It also determines the amount of
-usable space of the volume using this formula:
-
- <Usable size> = <Brick size> * (#Bricks - Redundancy)
-
-All bricks of a disperse set should have the same capacity; otherwise, when
-the smallest brick becomes full, no additional data will be allowed in the
-disperse set.
-
-It's important to note that a configuration with 3 bricks and redundancy 1
-will have less usable space (66.7% of the total physical space) than a
-configuration with 10 bricks and redundancy 1 (90%). However, the first one
-will be safer than the second one (roughly, the probability of failure of
-the second configuration is more than 4.5 times bigger than that of the first).
-
-For example, a dispersed volume composed of 6 bricks of 4TB and a redundancy
-of 2 will be completely operational even with two bricks inaccessible. However
-a third inaccessible brick will bring the volume down because it won't be
-possible to read or write to it. The usable space of the volume will be equal
-to 16TB.
-
-The implementation of erasure codes in GlusterFS limits the redundancy to a
-value smaller than #Bricks / 2 (or equivalently, redundancy * 2 < #Bricks).
-Having a redundancy equal to half of the number of bricks would be almost
-equivalent to a replica-2 volume, and probably a replicated volume will
-perform better in this case.
-
-**Optimal volumes**
-
-One of the worst things erasure codes have in terms of performance is the
-RMW (Read-Modify-Write) cycle. Erasure codes operate on blocks of a certain
-size and cannot work with smaller ones. This means that if a user issues
-a write of a portion of a file that doesn't fill a full block, it needs to
-read the remaining portion from the current contents of the file, merge them,
-compute the updated encoded block and, finally, write the resulting data.
-
-This adds latency, reducing performance when this happens. Some GlusterFS
-performance xlators can help to reduce or even eliminate this problem for
-some workloads, but it should be taken into account when using dispersed
-volumes for a specific use case.
-
-The current implementation of dispersed volumes uses blocks of a size that depends
-on the number of bricks and redundancy: 512 * (#Bricks - redundancy) bytes.
-This value is also known as the stripe size.
-
-Using combinations of #Bricks/redundancy that give a power of two for the
-stripe size will make the dispersed volume perform better in most workloads
-because it's more common to write information in blocks whose size is a
-power of two (for example databases, virtual machines and many applications).
-
-These combinations are considered *optimal*.
-
-For example, a configuration with 6 bricks and redundancy 2 will have a stripe
-size of 512 * (6 - 2) = 2048 bytes, so it's considered optimal. A configuration
-with 7 bricks and redundancy 2 would have a stripe size of 2560 bytes, needing
-a RMW cycle for many writes (of course this always depends on the use case).
-
-**To create a dispersed volume**
-
-1. Create a trusted storage pool.
-
-2. Create the dispersed volume:
-
-    `# gluster volume create <NEW-VOLNAME> [disperse [<count>]] [redundancy <count>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...`
-
-    A dispersed volume can be created by specifying the number of bricks in a
-    disperse set, by specifying the number of redundancy bricks, or both (an
-    example with both counts given explicitly is shown after this procedure).
-
- If *disperse* is not specified, or the _&lt;count&gt;_ is missing, the
-    entire volume will be treated as a single disperse set composed of all
- bricks enumerated in the command line.
-
- If *redundancy* is not specified, it is computed automatically to be the
- optimal value. If this value does not exist, it's assumed to be '1' and a
- warning message is shown:
-
- # gluster volume create test-volume disperse 4 server{1..4}:/bricks/test-volume
- There isn't an optimal redundancy value for this configuration. Do you want to create the volume with redundancy 1 ? (y/n)
-
- In all cases where *redundancy* is automatically computed and it's not
- equal to '1', a warning message is displayed:
-
- # gluster volume create test-volume disperse 6 server{1..6}:/bricks/test-volume
- The optimal redundancy for this configuration is 2. Do you want to create the volume with this value ? (y/n)
-
- _redundancy_ must be greater than 0, and the total number of bricks must
- be greater than 2 * _redundancy_. This means that a dispersed volume must
- have a minimum of 3 bricks.
-
- If the transport type is not specified, *tcp* is used as the default. You
- can also set additional options if required, like in the other volume
- types.
-
- > **Note**:
-
- > - Make sure you start your volumes before you try to mount them or
- > else client operations after the mount will hang.
-
- > - GlusterFS will fail to create a dispersed volume if more than one brick of a disperse set is present on the same peer.
-
- > ```
- # gluster volume create <volname> disperse 3 server1:/brick{1..3}
- volume create: <volname>: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal.
- Do you still want to continue creating the volume? (y/n)```
-
- > Use the `force` option at the end of command if you want to create the volume in this case.
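-
-As noted above, both the disperse and redundancy counts can also be given explicitly; for example (server and brick names are illustrative):
-
-    # gluster volume create test-volume disperse 6 redundancy 2 server{1..6}:/bricks/test-volume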
-
-##Creating Distributed Dispersed Volumes
-
-Distributed dispersed volumes are the equivalent to distributed replicated
-volumes, but using dispersed subvolumes instead of replicated ones.
-
-**To create a distributed dispersed volume**
-
-1. Create a trusted storage pool.
-
-2. Create the distributed dispersed volume:
-
-    `# gluster volume create <NEW-VOLNAME> disperse <count> [redundancy <count>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...`
-
-    To create a distributed dispersed volume, the *disperse* keyword and
-    &lt;count&gt; is mandatory, and the number of bricks specified in the
-    command line must be a multiple of the disperse count (see the example
-    after this procedure).
-
- *redundancy* is exactly the same as in the dispersed volume.
-
- If the transport type is not specified, *tcp* is used as the default. You
- can also set additional options if required, like in the other volume
- types.
-
- > **Note**:
-
- > - Make sure you start your volumes before you try to mount them or
- > else client operations after the mount will hang.
-
- > - GlusterFS will fail to create a distributed dispersed volume if more than one brick of a disperse set is present on the same peer.
-
- > ```
- # gluster volume create <volname> disperse 3 server1:/brick{1..6}
- volume create: <volname>: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal.
- Do you still want to continue creating the volume? (y/n)```
-
- > Use the `force` option at the end of command if you want to create the volume in this case.
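-
-For example, the following command (server and brick names are illustrative) creates a distributed dispersed volume with two disperse subvolumes of three bricks each, one brick per server:
-
-    # gluster volume create test-volume disperse 3 redundancy 1 server{1..6}:/bricks/test-volume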
-
-##Starting Volumes
-
-You must start your volumes before you try to mount them.
-
-**To start a volume**
-
-- Start a volume:
-
-    `# gluster volume start <VOLNAME>`
-
- For example, to start test-volume:
-
- # gluster volume start test-volume
- Starting test-volume has been successful
-
- []: ../images/Distributed_Volume.png
- [1]: ../images/Replicated_Volume.png
- [2]: ../images/Striped_Volume.png
- [3]: ../images/Distributed_Striped_Volume.png
- [4]: ../images/Distributed_Replicated_Volume.png
- [5]: ../images/Striped_Replicated_Volume.png
diff --git a/doc/admin-guide/en-US/markdown/admin_settingup_clients.md b/doc/admin-guide/en-US/markdown/admin_settingup_clients.md
deleted file mode 100644
index 909eca5ae0a..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_settingup_clients.md
+++ /dev/null
@@ -1,600 +0,0 @@
-#Accessing Data - Setting Up GlusterFS Client
-
-You can access gluster volumes in multiple ways. You can use Gluster
-Native Client method for high concurrency, performance and transparent
-failover in GNU/Linux clients. You can also use NFS v3 to access gluster
-volumes. Extensive testing has been done on GNU/Linux clients and on the NFS
-implementations of other operating systems, such as FreeBSD and Mac OS X,
-as well as Windows 7 (Professional and Up) and Windows Server 2003.
-Other NFS client implementations may work with the gluster NFS server.
-
-You can use CIFS to access volumes when using Microsoft Windows as well
-as SAMBA clients. For this access method, Samba packages need to be
-present on the client side.
-
-##Gluster Native Client
-
-The Gluster Native Client is a FUSE-based client running in user space.
-Gluster Native Client is the recommended method for accessing volumes
-when high concurrency and high write performance are required.
-
-This section introduces the Gluster Native Client and explains how to
-install the software on client machines. This section also describes how
-to mount volumes on clients (both manually and automatically) and how to
-verify that the volume has mounted successfully.
-
-###Installing the Gluster Native Client
-
-Before you begin installing the Gluster Native Client, you need to
-verify that the FUSE module is loaded on the client and has access to
-the required modules as follows:
-
-1. Add the FUSE loadable kernel module (LKM) to the Linux kernel:
-
- `# modprobe fuse`
-
-2. Verify that the FUSE module is loaded:
-
- `# dmesg | grep -i fuse `
- `fuse init (API version 7.13)`
-
-### Installing on Red Hat Package Manager (RPM) Distributions
-
-To install Gluster Native Client on RPM distribution-based systems
-
-1. Install required prerequisites on the client using the following
- command:
-
- `$ sudo yum -y install openssh-server wget fuse fuse-libs openib libibverbs`
-
-2. Ensure that TCP and UDP ports 24007 and 24008 are open on all
- Gluster servers. Apart from these ports, you need to open one port
- for each brick starting from port 49152 (instead of 24009 onwards as
- with previous releases). The brick ports assignment scheme is now
- compliant with IANA guidelines. For example: if you have
- five bricks, you need to have ports 49152 to 49156 open.
-
- You can use the following chains with iptables:
-
- `$ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT `
- `$ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 49152:49156 -j ACCEPT`
-
- > **Note**
- >
- > If you already have iptable chains, make sure that the above
- > ACCEPT rules precede the DROP rules. This can be achieved by
- > providing a lower rule number than the DROP rule.
-
-3. Download the latest glusterfs, glusterfs-fuse, and glusterfs-rdma
- RPM files to each client. The glusterfs package contains the Gluster
- Native Client. The glusterfs-fuse package contains the FUSE
- translator required for mounting on client systems and the
- glusterfs-rdma packages contain OpenFabrics verbs RDMA module for
- Infiniband.
-
- You can download the software at [GlusterFS download page][1].
-
-4. Install Gluster Native Client on the client.
-
- `$ sudo rpm -i glusterfs-3.3.0qa30-1.x86_64.rpm `
- `$ sudo rpm -i glusterfs-fuse-3.3.0qa30-1.x86_64.rpm `
- `$ sudo rpm -i glusterfs-rdma-3.3.0qa30-1.x86_64.rpm`
-
- > **Note**
- >
- > The RDMA module is only required when using Infiniband.
-
-### Installing on Debian-based Distributions
-
-To install Gluster Native Client on Debian-based distributions
-
-1. Install OpenSSH Server on each client using the following command:
-
- `$ sudo apt-get install openssh-server vim wget`
-
-2. Download the latest GlusterFS .deb file and checksum to each client.
-
- You can download the software at [GlusterFS download page][1].
-
-3. For each .deb file, get the checksum (using the following command)
- and compare it against the checksum for that file in the md5sum
- file.
-
- `$ md5sum GlusterFS_DEB_file.deb `
-
- The md5sum of the packages is available at: [GlusterFS download page][2]
-
-4. Uninstall GlusterFS v3.1 (or an earlier version) from the client
- using the following command:
-
- `$ sudo dpkg -r glusterfs `
-
-    (Optional) Run `$ sudo dpkg --purge glusterfs` to purge the
- configuration files.
-
-5. Install Gluster Native Client on the client using the following
- command:
-
- `$ sudo dpkg -i GlusterFS_DEB_file `
-
- For example:
-
- `$ sudo dpkg -i glusterfs-3.3.x.deb `
-
-6. Ensure that TCP and UDP ports 24007 and 24008 are open on all
- Gluster servers. Apart from these ports, you need to open one port
- for each brick starting from port 49152 (instead of 24009 onwards as
- with previous releases). The brick ports assignment scheme is now
- compliant with IANA guidelines. For example: if you have
- five bricks, you need to have ports 49152 to 49156 open.
-
- You can use the following chains with iptables:
-
- `$ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT `
- `$ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 49152:49156 -j ACCEPT`
-
- > **Note**
- >
- > If you already have iptable chains, make sure that the above
- > ACCEPT rules precede the DROP rules. This can be achieved by
- > providing a lower rule number than the DROP rule.
-
-### Performing a Source Installation
-
-To build and install Gluster Native Client from the source code
-
-1. Create a new directory using the following commands:
-
- `# mkdir glusterfs `
- `# cd glusterfs`
-
-2. Download the source code.
-
-    You can download the source at the [GlusterFS download page][1].
-
-3. Extract the source code using the following command:
-
- `# tar -xvzf SOURCE-FILE `
-
-4. Run the configuration utility using the following command:
-
- `# ./configure `
-
- GlusterFS configure summary
- ===========================
- FUSE client : yes
- Infiniband verbs : yes
- epoll IO multiplex : yes
- argp-standalone : no
- fusermount : no
- readline : yes
-
- The configuration summary shows the components that will be built
- with Gluster Native Client.
-
-5. Build the Gluster Native Client software using the following
- commands:
-
- `# make `
- `# make install`
-
-6. Verify that the correct version of Gluster Native Client is
- installed, using the following command:
-
-    `# glusterfs --version`
-
-##Mounting Volumes
-
-After installing the Gluster Native Client, you need to mount Gluster
-volumes to access data. There are two methods you can choose:
-
-- [Manually Mounting Volumes](#manual-mount)
-- [Automatically Mounting Volumes](#auto-mount)
-
-> **Note**
->
-> Server names selected during creation of Volumes should be resolvable
-> in the client machine. You can use appropriate /etc/hosts entries or
-> DNS server to resolve server names to IP addresses.
-
-<a name="manual-mount" />
-### Manually Mounting Volumes
-
-- To mount a volume, use the following command:
-
- `# mount -t glusterfs HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR`
-
- For example:
-
- `# mount -t glusterfs server1:/test-volume /mnt/glusterfs`
-
- > **Note**
- >
- > The server specified in the mount command is only used to fetch
- > the gluster configuration volfile describing the volume name.
- > Subsequently, the client will communicate directly with the
- > servers mentioned in the volfile (which might not even include the
- > one used for mount).
- >
- > If you see a usage message like "Usage: mount.glusterfs", mount
- > usually requires you to create a directory to be used as the mount
- > point. Run "mkdir /mnt/glusterfs" before you attempt to run the
- > mount command listed above.
-
-**Mounting Options**
-
-You can specify the following options when using the
-`mount -t glusterfs` command. Note that you need to separate all options
-with commas.
-
-backupvolfile-server=server-name
-
-volfile-max-fetch-attempts=number of attempts
-
-log-level=loglevel
-
-log-file=logfile
-
-transport=transport-type
-
-direct-io-mode=[enable|disable]
-
-use-readdirp=[yes|no]
-
-For example:
-
-`# mount -t glusterfs -o backupvolfile-server=volfile_server2,use-readdirp=no,volfile-max-fetch-attempts=2,log-level=WARNING,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs`
-
-If the `backupvolfile-server` option is added while mounting the fuse client,
-then when the first volfile server fails, the server specified in the
-`backupvolfile-server` option is used as the volfile server to mount the
-client.
-
-In the `volfile-max-fetch-attempts=X` option, specify the number of
-attempts made to fetch volume files while mounting a volume. This option is
-useful when you mount a server with multiple IP addresses or when
-round-robin DNS is configured for the server name.
-
-If `use-readdirp` is set to yes, it forces the use of readdirp
-mode in the fuse kernel module.
-
-<a name="auto-mount" />
-### Automatically Mounting Volumes
-
-You can configure your system to automatically mount the Gluster volume
-each time your system starts.
-
-The server specified in the mount command is only used to fetch the
-gluster configuration volfile describing the volume name. Subsequently,
-the client will communicate directly with the servers mentioned in the
-volfile (which might not even include the one used for mount).
-
-- To mount a volume, edit the /etc/fstab file and add the following
- line:
-
- `HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR glusterfs defaults,_netdev 0 0 `
-
- For example:
-
- `server1:/test-volume /mnt/glusterfs glusterfs defaults,_netdev 0 0`
-
-**Mounting Options**
-
-You can specify the following options when updating the /etc/fstab file.
-Note that you need to separate all options with commas.
-
-log-level=loglevel
-
-log-file=logfile
-
-transport=transport-type
-
-direct-io-mode=[enable|disable]
-
-use-readdirp=no
-
-For example:
-
-`HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR glusterfs defaults,_netdev,log-level=WARNING,log-file=/var/log/gluster.log 0 0 `
-
-### Testing Mounted Volumes
-
-To test mounted volumes
-
-- Use the following command:
-
- `# mount `
-
- If the gluster volume was successfully mounted, the output of the
- mount command on the client will be similar to this example:
-
-    `server1:/test-volume on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)`
-
-- Use the following command:
-
- `# df`
-
- The output of df command on the client will display the aggregated
- storage space from all the bricks in a volume similar to this
- example:
-
-    # df -h /mnt/glusterfs
-    Filesystem            Size  Used  Avail  Use%  Mounted on
-    server1:/test-volume   28T   22T   5.4T   82%  /mnt/glusterfs
-
-- Change to the directory and list the contents by entering the
- following:
-
- `# cd MOUNTDIR `
- `# ls`
-
-- For example,
-
- `# cd /mnt/glusterfs `
- `# ls`
-
-#NFS
-
-You can use NFS v3 to access gluster volumes. Extensive testing has
-been done on GNU/Linux clients and on the NFS implementations of other
-operating systems, such as FreeBSD and Mac OS X. Windows 7
-(Professional and Up), Windows Server 2003, and others may also work with
-the gluster NFS server implementation.
-
-GlusterFS now includes network lock manager (NLM) v4. NLM enables
-applications on NFSv3 clients to do record locking on files on NFS
-server. It is started automatically whenever the NFS server is run.
-
-You must install the nfs-common package on both servers and clients
-(only for Debian-based distributions).
-
-This section describes how to use NFS to mount Gluster volumes (both
-manually and automatically) and how to verify that the volume has been
-mounted successfully.
-
-##Using NFS to Mount Volumes
-
-You can use either of the following methods to mount Gluster volumes:
-
-- [Manually Mounting Volumes Using NFS](#manual-nfs)
-- [Automatically Mounting Volumes Using NFS](#auto-nfs)
-
-**Prerequisite**: Install nfs-common package on both servers and clients
-(only for Debian-based distribution), using the following command:
-
-`$ sudo aptitude install nfs-common `
-
-<a name="manual-nfs" />
-### Manually Mounting Volumes Using NFS
-
-**To manually mount a Gluster volume using NFS**
-
-- To mount a volume, use the following command:
-
- `# mount -t nfs -o vers=3 HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR`
-
- For example:
-
- `# mount -t nfs -o vers=3 server1:/test-volume /mnt/glusterfs`
-
- > **Note**
- >
- > Gluster NFS server does not support UDP. If the NFS client you are
- > using defaults to connecting using UDP, the following message
- > appears:
- >
- > `requested NFS version or transport protocol is not supported`.
-
- **To connect using TCP**
-
-- Add the following option to the mount command:
-
- `-o mountproto=tcp `
-
- For example:
-
- `# mount -o mountproto=tcp -t nfs server1:/test-volume /mnt/glusterfs`
-
-**To mount Gluster NFS server from a Solaris client**
-
-- Use the following command:
-
- `# mount -o proto=tcp,vers=3 nfs://HOSTNAME-OR-IPADDRESS:38467/VOLNAME MOUNTDIR`
-
- For example:
-
- ` # mount -o proto=tcp,vers=3 nfs://server1:38467/test-volume /mnt/glusterfs`
-
-<a name="auto-nfs" />
-### Automatically Mounting Volumes Using NFS
-
-You can configure your system to automatically mount Gluster volumes
-using NFS each time the system starts.
-
-**To automatically mount a Gluster volume using NFS**
-
-- To mount a volume, edit the /etc/fstab file and add the following
- line:
-
- `HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR nfs defaults,_netdev,vers=3 0 0`
-
- For example,
-
- `server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,vers=3 0 0`
-
- > **Note**
- >
- > Gluster NFS server does not support UDP. If the NFS client you are
- > using defaults to connecting using UDP, the following message
- > appears:
- >
- > `requested NFS version or transport protocol is not supported.`
-
- To connect using TCP
-
-- Add the following entry in /etc/fstab file :
-
- `HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR nfs defaults,_netdev,mountproto=tcp 0 0`
-
- For example,
-
- `server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,mountproto=tcp 0 0`
-
-**To automount NFS mounts**
-
-Gluster supports the standard \*nix method of automounting NFS mounts.
-Update the /etc/auto.master and /etc/auto.misc files and restart the autofs
-service. After that, whenever a user or process attempts to access the
-directory, it will be mounted in the background.
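-
-For example, assuming the volume and server names used earlier and a direct autofs map (file locations may vary by distribution), add to /etc/auto.master:
-
-    /-    /etc/auto.misc
-
-and to /etc/auto.misc:
-
-    /mnt/glusterfs    -fstype=nfs,vers=3,mountproto=tcp    server1:/test-volume
-
-then restart the autofs service.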
-
-### Testing Volumes Mounted Using NFS
-
-You can confirm that Gluster directories are mounting successfully.
-
-**To test mounted volumes**
-
-- Use the mount command by entering the following:
-
- `# mount`
-
- For example, the output of the mount command on the client will
- display an entry like the following:
-
- `server1:/test-volume on /mnt/glusterfs type nfs (rw,vers=3,addr=server1)`
-
-- Use the df command by entering the following:
-
- `# df`
-
- For example, the output of df command on the client will display the
- aggregated storage space from all the bricks in a volume.
-
- # df -h /mnt/glusterfs
- Filesystem Size Used Avail Use% Mounted on
- server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs
-
-- Change to the directory and list the contents by entering the
- following:
-
- `# cd MOUNTDIR`
- `# ls`
-
-#CIFS
-
-You can use CIFS to access volumes when using Microsoft Windows as
-well as Samba clients. For this access method, Samba packages need to be
-present on the client side. You can export the glusterfs mount point as a
-Samba share, and then mount it using the CIFS protocol.
-
-This section describes how to mount CIFS shares on Microsoft
-Windows-based clients (both manually and automatically) and how to
-verify that the volume has mounted successfully.
-
-> **Note**
->
-> CIFS access using the Mac OS X Finder is not supported, however, you
-> can use the Mac OS X command line to access Gluster volumes using
-> CIFS.
-
-##Using CIFS to Mount Volumes
-
-You can use either of the following methods to mount Gluster volumes:
-
-- [Exporting Gluster Volumes Through Samba](#export-samba)
-- [Manually Mounting Volumes Using CIFS](#cifs-manual)
-- [Automatically Mounting Volumes Using CIFS](#cifs-auto)
-
-You can also use Samba for exporting Gluster Volumes through CIFS
-protocol.
-
-<a name="export-samba" />
-### Exporting Gluster Volumes Through Samba
-
-We recommend using Samba for exporting Gluster volumes through the
-CIFS protocol.
-
-**To export volumes through CIFS protocol**
-
-1. Mount a Gluster volume.
-
-2. Setup Samba configuration to export the mount point of the Gluster
- volume.
-
-    For example, if a Gluster volume is mounted on /mnt/glusterfs, you
-    must edit the smb.conf file to enable exporting this through CIFS. Open
-    the smb.conf file in an editor and add the following lines for a simple
-    configuration:
-
-        [glustertest]
-        comment = For testing a Gluster volume exported through CIFS
-        path = /mnt/glusterfs
-        read only = no
-        guest ok = yes
-
-Save the changes and start the smb service using your system's init
-scripts (/etc/init.d/smb [re]start).
-
-> **Note**
->
-> To be able to mount from any server in the trusted storage pool, you must
-> repeat these steps on each Gluster node. For more advanced
-> configurations, see Samba documentation.
-
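-A GNU/Linux Samba client can then mount the exported share using the CIFS protocol. A minimal sketch, assuming the [glustertest] share defined above with its guest-accessible configuration (the mount point is illustrative):
-
-    # mkdir -p /mnt/smb
-    # mount -t cifs -o guest //server1/glustertest /mnt/smb
-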
-<a name="cifs-manual" />
-### Manually Mounting Volumes Using CIFS
-
-You can manually mount Gluster volumes using CIFS on Microsoft
-Windows-based client machines.
-
-**To manually mount a Gluster volume using CIFS**
-
-1. Using Windows Explorer, choose **Tools \> Map Network Drive…** from
-    the menu. The **Map Network Drive** window appears.
-
-2. Choose the drive letter using the **Drive** drop-down list.
-
-3. Click **Browse**, select the volume to map to the network drive, and
- click **OK**.
-
-4. Click **Finish.**
-
-The network drive (mapped to the volume) appears in the Computer window.
-
-Alternatively, you can manually mount a Gluster volume using CIFS by going to
-**Start \> Run** and entering the network path manually.
-
-<a name="cifs-auto" />
-### Automatically Mounting Volumes Using CIFS
-
-You can configure your system to automatically mount Gluster volumes
-using CIFS on Microsoft Windows-based clients each time the system
-starts.
-
-**To automatically mount a Gluster volume using CIFS**
-
-The network drive (mapped to the volume) appears in the Computer window
-and is reconnected each time the system starts.
-
-1. Using Windows Explorer, choose **Tools \> Map Network Drive…** from
-    the menu. The **Map Network Drive** window appears.
-
-2. Choose the drive letter using the **Drive** drop-down list.
-
-3. Click **Browse**, select the volume to map to the network drive, and
- click **OK**.
-
-4. Click the **Reconnect at logon** checkbox.
-
-5. Click **Finish.**
-
-### Testing Volumes Mounted Using CIFS
-
-You can confirm that Gluster directories are mounting successfully by
-navigating to the directory using Windows Explorer.
-
- []: http://bits.gluster.com/gluster/glusterfs/3.3.0qa30/x86_64/
- [1]: http://www.gluster.org/download/
- [2]: http://download.gluster.com/pub/gluster/glusterfs
diff --git a/doc/admin-guide/en-US/markdown/admin_ssl.md b/doc/admin-guide/en-US/markdown/admin_ssl.md
deleted file mode 100644
index 4522bcedf88..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_ssl.md
+++ /dev/null
@@ -1,128 +0,0 @@
-# Setting up GlusterFS with SSL/TLS
-
-GlusterFS allows its communication to be secured using the [Transport Layer
-Security][tls] standard (which supersedes Secure Sockets Layer), using the
-[OpenSSL][ossl] library. Setting this up requires a basic working knowledge of
-some SSL/TLS concepts, which can only be briefly summarized here.
-
- * "Authentication" is the process of one entity (e.g. a machine, process, or
- person) proving its identity to a second entity.
-
- * "Authorization" is the process of checking whether an entity has permission
- to perform an action.
-
- * TLS provides authentication and encryption. It does not provide
- authorization, though GlusterFS can use TLS-authenticated identities to
- authorize client connections to bricks/volumes.
-
- * An entity X which must authenticate to a second entity Y does so by sharing
- with Y a *certificate*, which contains information sufficient to prove X's
-   identity. X's proof of identity also requires possession of a *private key*
-   which matches its certificate, but this key is never shared with Y or anyone
-   else. Because the certificate is already public, anyone who obtains the key
-   can claim that identity, so the key must be kept secret.
-
- * Each certificate contains the identity of its principal (owner) along with
- the identity of a *certifying authority* or CA who can verify the integrity
- of the certificate's contents. The principal and CA can be the same (a
- "self-signed certificate"). If they are different, the CA must *sign* the
- certificate by appending information derived from both the certificate
- contents and the CA's own private key.
-
- * Certificate-signing relationships can extend through multiple levels. For
- example, a company X could sign another company Y's certificate, which could
- then be used to sign a third certificate Z for a specific user or purpose.
- Anyone who trusts X (and is willing to extend that trust through a
- *certificate depth* of two or more) would therefore be able to authenticate
- Y and Z as well.
-
- * Any entity willing to accept other entities' authentication attempts must
-   have some sort of database seeded with the certificates that it already accepts.
-
-In GlusterFS's case, a client or server X uses the following files to contain
-TLS-related information:
-
- * /etc/ssl/glusterfs.pem X's own certificate
-
- * /etc/ssl/glusterfs.key X's private key
-
- * /etc/ssl/glusterfs.ca concatenation of *others'* certificates
-
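-For illustration only (not prescribed by GlusterFS itself), a self-signed
-certificate and key could be generated with OpenSSL, and a copy of the other
-side's certificate (here called peer-node.pem, a hypothetical filename)
-appended to the CA file; the common name chosen here becomes the TLS identity
-used for authorization later:
-
-    # openssl genrsa -out /etc/ssl/glusterfs.key 2048
-    # openssl req -new -x509 -days 365 -key /etc/ssl/glusterfs.key \
-          -subj "/CN=Zaphod" -out /etc/ssl/glusterfs.pem
-    # cat peer-node.pem >> /etc/ssl/glusterfs.ca
-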
-GlusterFS always performs *mutual authentication*, though clients do not
-currently do anything with the authenticated server identity. Thus, if client X
-wants to communicate with server Y, then X's certificate (or that of a signer)
-must be in Y's CA file, and vice versa.
-
-For all uses of TLS in GlusterFS, if one side of a connection is configured to
-use TLS then the other side must use it as well. There is no automatic fallback
-to non-TLS communication, or allowance for concurrent TLS and non-TLS access to
-the same resource, because either would be insecure. Instead, any such "mixed
-mode" connections will be rejected by the TLS-using side, sacrificing
-availability to maintain security.
-
-## Enabling TLS on the I/O Path
-
-To enable authentication and encryption between clients and brick servers, two
-options must be set:
-
- gluster volume set MYVOLUME client.ssl on
- gluster volume set MYVOLUME server.ssl on
-
-Note that the above options affect only the GlusterFS native protocol. Foreign
-protocols such as NFS, SMB, or Swift will not be affected.
-
-## Using TLS Identities for Authorization
-
-Once TLS has been enabled on the I/O path, TLS identities can be used instead of
-IP addresses or plain usernames to control access to specific volumes. For
-example:
-
- gluster volume set MYVOLUME auth.ssl-allow Zaphod
-
-Here, we're allowing the TLS-authenticated identity "Zaphod" to access MYVOLUME.
-This is intentionally identical to the existing "auth.allow" option, except that
-the name is taken from a TLS certificate instead of a command-line string. Note
-that infelicities in the gluster CLI preclude using names that include spaces,
-which would otherwise be allowed.
-
-## Enabling TLS on the Management Path
-
-Management-daemon traffic is not controlled by an option. Instead, it is
-controlled by the presence of a file on each machine:
-
- /var/lib/glusterd/secure-access
-
-Creating this file will cause glusterd connections made from that machine to use
-TLS. Note that even clients must do this to communicate with a remote glusterd
-while mounting, but not thereafter.
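-
-For instance, on each server (and on any client that needs to reach a remote
-glusterd at mount time), a minimal sequence might be the following, assuming
-SysV init scripts as used elsewhere in this guide:
-
-    # touch /var/lib/glusterd/secure-access
-    # /etc/init.d/glusterd restart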
-
-## Additional Options
-
-The GlusterFS TLS implementation supports two additional options related to TLS
-internals.
-
-The first option allows the user to set the certificate depth, as mentioned
-above.
-
- gluster volume set MYVOLUME ssl.cert-depth 2
-
-Here, we're setting our certificate depth to two, as in the introductory
-example. By default this value is zero, meaning that only certificates which
-are directly specified in the local CA file will be accepted (i.e. no signed
-certificates at all).
-
-The second option allows the user to specify the set of allowed TLS ciphers.
-
- gluster volume set MYVOLUME ssl.cipher-list HIGH:!SSLv2
-
-Cipher lists are negotiated between the two parties to a TLS connection, so
-that both sides' security needs are satisfied. In this example, we're setting
-the initial cipher list to HIGH, representing ciphers that the cryptography
-community still believes to be unbroken. We are also explicitly disallowing
-ciphers specific to SSL version 2. The default is based on this example but
-also excludes CBC-based cipher modes to provide extra mitigation against the
-[POODLE][poo] attack.
-
-[tls]: http://tools.ietf.org/html/rfc5246
-[ossl]: https://www.openssl.org/
-[poo]: http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566
diff --git a/doc/admin-guide/en-US/markdown/admin_start_stop_daemon.md b/doc/admin-guide/en-US/markdown/admin_start_stop_daemon.md
deleted file mode 100644
index a47ece8d95b..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_start_stop_daemon.md
+++ /dev/null
@@ -1,58 +0,0 @@
-#Managing the glusterd Service
-
-After installing GlusterFS, you must start the glusterd service. The
-glusterd service serves as the Gluster elastic volume manager,
-overseeing glusterfs processes and coordinating dynamic volume
-operations, such as adding and removing volumes across multiple storage
-servers non-disruptively.
-
-This section describes how to start the glusterd service in the
-following ways:
-
-- [Starting and Stopping glusterd Manually](#manual)
-- [Starting glusterd Automatically](#auto)
-
-> **Note**: You must start glusterd on all GlusterFS servers.
-
-<a name="manual" />
-##Starting and Stopping glusterd Manually
-
-This section describes how to start and stop glusterd manually.
-
-- To start glusterd manually, enter the following command:
-
- `# /etc/init.d/glusterd start `
-
-- To stop glusterd manually, enter the following command:
-
- `# /etc/init.d/glusterd stop`
-
-<a name="auto" />
-##Starting glusterd Automatically
-
-This section describes how to configure the system to automatically
-start the glusterd service every time the system boots.
-
-###Red Hat and Fedora distros
-
-To configure Red Hat-based systems to automatically start the glusterd
-service every time the system boots, enter the following from the
-command line:
-
-`# chkconfig glusterd on `
-
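-On newer Red Hat and Fedora releases that use systemd (an assumption about
-your platform), the equivalent would be:
-
-`# systemctl enable glusterd`
-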
-###Debian and derivatives like Ubuntu
-
-To configure Debian-based systems to automatically start the glusterd
-service every time the system boots, enter the following from the
-command line:
-
-`# update-rc.d glusterd defaults`
-
-###Systems Other than Red Hat and Debian
-
-To configure systems other than Red Hat or Debian to automatically start
-the glusterd service every time the system boots, add the following
-entry to the */etc/rc.local* file:
-
-`# echo "glusterd" >> /etc/rc.local `
diff --git a/doc/admin-guide/en-US/markdown/admin_storage_pools.md b/doc/admin-guide/en-US/markdown/admin_storage_pools.md
deleted file mode 100644
index de181f58c18..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_storage_pools.md
+++ /dev/null
@@ -1,91 +0,0 @@
-#Setting up Trusted Storage Pools
-
-Before you can configure a GlusterFS volume, you must create a trusted
-storage pool consisting of the storage servers that provide bricks to a
-volume.
-
-A storage pool is a trusted network of storage servers. When you start
-the first server, the storage pool consists of that server alone. To add
-additional storage servers to the storage pool, you can use the probe
-command from a storage server that is already trusted.
-
-> **Note**: Do not self-probe the first server/localhost from itself.
-
-The glusterd service must be running on all storage servers that you
-want to add to the storage pool. See the section on managing the glusterd
-service for more information.
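-
-For example, on a server using SysV init scripts (as assumed elsewhere in
-this guide), you can start the daemon and confirm it is running with:
-
-    # /etc/init.d/glusterd start
-    # pidof glusterd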
-
-##Adding Servers to Trusted Storage Pool
-
-To create a trusted storage pool, add servers to the trusted storage
-pool as follows:
-
-1. **The servers used to create the storage pool must be resolvable by
- hostname.**
-
- To add a server to the storage pool:
-
-    `# gluster peer probe <SERVERNAME>`
-
- For example, to create a trusted storage pool of four servers, add
- three servers to the storage pool from server1:
-
- # gluster peer probe server2
- Probe successful
-
- # gluster peer probe server3
- Probe successful
-
- # gluster peer probe server4
- Probe successful
-
-2. **Verify the peer status from the first server using the following
- commands:**
-
- # gluster peer status
- Number of Peers: 3
-
- Hostname: server2
- Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5
- State: Peer in Cluster (Connected)
-
- Hostname: server3
- Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7
- State: Peer in Cluster (Connected)
-
- Hostname: server4
- Uuid: 3e0caba-9df7-4f66-8e5d-cbc348f29ff7
- State: Peer in Cluster (Connected)
-
-3. **Assign the hostname to the first server by probing it from another
-   server (not the server used in steps 1 and 2):**
-
- server2# gluster peer probe server1
- Probe successful
-
-4. **Verify the peer status from the same server you used in step 3 using the following
- command:**
-
- server2# gluster peer status
- Number of Peers: 3
-
- Hostname: server1
- Uuid: ceed91d5-e8d1-434d-9d47-63e914c93424
- State: Peer in Cluster (Connected)
-
- Hostname: server3
- Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7
- State: Peer in Cluster (Connected)
-
- Hostname: server4
- Uuid: 3e0caba-9df7-4f66-8e5d-cbc348f29ff7
- State: Peer in Cluster (Connected)
-
-##Removing Servers from the Trusted Storage Pool
-
-To remove a server from the storage pool:
-
-`# gluster peer detach <SERVERNAME>`
-
-For example, to remove server4 from the trusted storage pool:
-
- # gluster peer detach server4
- Detach successful
diff --git a/doc/admin-guide/en-US/markdown/admin_troubleshooting.md b/doc/admin-guide/en-US/markdown/admin_troubleshooting.md
deleted file mode 100644
index fa19a2f71de..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_troubleshooting.md
+++ /dev/null
@@ -1,495 +0,0 @@
-#Troubleshooting GlusterFS
-
-This section describes how to manage GlusterFS logs and most common
-troubleshooting scenarios related to GlusterFS.
-
-##Contents
-* [Managing GlusterFS Logs](#logs)
-* [Troubleshooting Geo-replication](#georep)
-* [Troubleshooting POSIX ACLs](#posix-acls)
-* [Troubleshooting Hadoop Compatible Storage](#hadoop)
-* [Troubleshooting NFS](#nfs)
-* [Troubleshooting File Locks](#file-locks)
-
-<a name="logs" />
-##Managing GlusterFS Logs
-
-###Rotating Logs
-
-Administrators can rotate the log file in a volume, as needed.
-
-**To rotate a log file**
-
-    `# gluster volume log rotate <VOLNAME>`
-
-For example, to rotate the log file on test-volume:
-
- # gluster volume log rotate test-volume
- log rotate successful
-
-> **Note**
-> When a log file is rotated, the contents of the current log file
-> are moved to log-file-name.epoch-time-stamp.
-
-<a name="georep" />
-##Troubleshooting Geo-replication
-
-This section describes the most common troubleshooting scenarios related
-to GlusterFS Geo-replication.
-
-###Locating Log Files
-
-For every Geo-replication session, the following three log files are
-associated with it (four, if the slave is a gluster volume):
-
-- **Master-log-file** - log file for the process which monitors the Master
- volume
-- **Slave-log-file** - log file for process which initiates the changes in
- slave
-- **Master-gluster-log-file** - log file for the maintenance mount point
- that Geo-replication module uses to monitor the master volume
-- **Slave-gluster-log-file** - the slave's counterpart of the
-  Master-gluster-log-file (present only when the slave is a gluster volume)
-
-**Master Log File**
-
-To get the Master-log-file for geo-replication, use the following
-command:
-
-`# gluster volume geo-replication <MASTER> <SLAVE> config log-file`
-
-For example:
-
-`# gluster volume geo-replication Volume1 example.com:/data/remote_dir config log-file `
-
-**Slave Log File**
-
-To get the log file for Geo-replication on slave (glusterd must be
-running on slave machine), use the following commands:
-
-1. On master, run the following command:
-
- `# gluster volume geo-replication Volume1 example.com:/data/remote_dir config session-owner 5f6e5200-756f-11e0-a1f0-0800200c9a66 `
-
- Displays the session owner details.
-
-2. On slave, run the following command:
-
- `# gluster volume geo-replication /data/remote_dir config log-file /var/log/gluster/${session-owner}:remote-mirror.log `
-
-3. Substitute the session owner details (output of Step 1) into the output
-    of Step 2 to get the location of the log file.
-
- `/var/log/gluster/5f6e5200-756f-11e0-a1f0-0800200c9a66:remote-mirror.log`
-
-###Rotating Geo-replication Logs
-
-Administrators can rotate the log file of a particular master-slave
-session, as needed. When you run geo-replication's `log-rotate`
-command, the log file is backed up with the current timestamp suffixed
-to the file name and signal is sent to gsyncd to start logging to a new
-log file.
-
-**To rotate a geo-replication log file**
-
-- Rotate log file for a particular master-slave session using the
- following command:
-
-    `# gluster volume geo-replication <MASTER> <SLAVE> log-rotate`
-
- For example, to rotate the log file of master `Volume1` and slave
- `example.com:/data/remote_dir` :
-
- # gluster volume geo-replication Volume1 example.com:/data/remote_dir log rotate
- log rotate successful
-
-- Rotate log file for all sessions for a master volume using the
- following command:
-
-    `# gluster volume geo-replication <MASTER> log-rotate`
-
- For example, to rotate the log file of master `Volume1`:
-
- # gluster volume geo-replication Volume1 log rotate
- log rotate successful
-
-- Rotate log file for all sessions using the following command:
-
- `# gluster volume geo-replication log-rotate`
-
- For example, to rotate the log file for all sessions:
-
- # gluster volume geo-replication log rotate
- log rotate successful
-
-###Synchronization is not complete
-
-**Description**: GlusterFS Geo-replication did not synchronize the data
-completely, but the geo-replication status is displayed as OK.
-
-**Solution**: You can enforce a full sync of the data by erasing the
-index and restarting GlusterFS Geo-replication. After restarting,
-GlusterFS Geo-replication begins synchronizing all the data. All files
-are compared using checksums, which can be a lengthy and resource-intensive
-operation on large data sets.
-
-
-###Issues in Data Synchronization
-
-**Description**: Geo-replication displays the status as OK, but the files do
-not get synced; only directories and symlinks get synced, with the
-following error message in the log:
-
- [2011-05-02 13:42:13.467644] E [master:288:regjob] GMaster: failed to
-    sync ./some_file
-
-**Solution**: Geo-replication invokes rsync v3.0.0 or higher on the host
-and the remote machine. You must verify if you have installed the
-required version.
-
-###Geo-replication status displays Faulty very often
-
-**Description**: Geo-replication displays status as faulty very often
-with a backtrace similar to the following:
-
-    [2011-04-28 14:06:18.378859] E [syncdutils:131:log_raise_exception]
-    <top>: FAIL: Traceback (most recent call last): File
-    "/usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py", line
-    152, in twraptf(*aa) File
-    "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 118, in
-    listen rid, exc, res = recv(self.inf) File
-    "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 42, in
-    recv return pickle.load(inf) EOFError
-
-**Solution**: This error indicates that the RPC communication between
-the master gsyncd module and slave gsyncd module is broken and this can
-happen for various reasons. Check if it satisfies all the following
-pre-requisites:
-
-- Password-less SSH is set up properly between the host and the remote
- machine.
-- FUSE is installed on the machine, because the geo-replication module
-  mounts the GlusterFS volume using FUSE to sync data.
-- If the **Slave** is a volume, check if that volume is started.
-- If the Slave is a plain directory, verify if the directory has been
- created already with the required permissions.
-- If GlusterFS 3.2 or higher is not installed in the default location
- (in Master) and has been prefixed to be installed in a custom
- location, configure the `gluster-command` for it to point to the
- exact location.
-- If GlusterFS 3.2 or higher is not installed in the default location
- (in slave) and has been prefixed to be installed in a custom
- location, configure the `remote-gsyncd-command` for it to point to
- the exact place where gsyncd is located.
-
-###Intermediate Master goes to Faulty State
-
-**Description**: In a cascading set-up, the intermediate master goes to
-faulty state with the following log:
-
-    raise RuntimeError ("aborting on uuid change from %s to %s" % \
- RuntimeError: aborting on uuid change from af07e07c-427f-4586-ab9f-
- 4bf7d299be81 to de6b5040-8f4e-4575-8831-c4f55bd41154
-
-**Solution**: In a cascading set-up the Intermediate master is loyal to
-the original primary master. The above log means that the
-geo-replication module has detected a change in the primary master. If this is
-the desired behavior, delete the config option volume-id in the session
-initiated from the intermediate master.
-
-<a name="posix-acls" />
-##Troubleshooting POSIX ACLs
-
-This section describes the most common troubleshooting issues related to
-POSIX ACLs.
-
-###setfacl command fails with “setfacl: \<file or directory name\>: Operation not supported” error
-
-You may face this error when the backend file system on one of the
-servers is not mounted with the "-o acl" option. This can be confirmed
-by checking the server's log file for the following error message:
-"Posix access control list is not supported".
-
-**Solution**: Remount the backend file system with "-o acl" option.
-
-<a name="hadoop" />
-##Troubleshooting Hadoop Compatible Storage
-
-###Time Sync
-
-**Problem**: Running a MapReduce job may throw exceptions if the time is out of sync on
-the hosts in the cluster.
-
-**Solution**: Sync the time on all hosts using the ntpd program.
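-
-For example, a one-off synchronization could be done with `ntpdate` against a
-public NTP pool (the pool hostname is only illustrative; ongoing
-synchronization is then handled by ntpd):
-
-    # ntpdate -u pool.ntp.org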
-
-<a name="nfs" />
-##Troubleshooting NFS
-
-This section describes the most common troubleshooting issues related to
-NFS.
-
-###mount command on NFS client fails with “RPC Error: Program not registered”
-
-**Start the portmap or rpcbind service on the NFS server.**
-
-This error is encountered when the server has not started correctly.
-On most Linux distributions this is fixed by starting portmap:
-
-`$ /etc/init.d/portmap start`
-
-On some distributions where portmap has been replaced by rpcbind, the
-following command is required:
-
-`$ /etc/init.d/rpcbind start `
-
-After starting portmap or rpcbind, gluster NFS server needs to be
-restarted.
-
-###NFS server start-up fails with “Port is already in use” error in the log file.
-
-Another Gluster NFS server is running on the same machine.
-
-This error can arise in case there is already a Gluster NFS server
-running on the same machine. This situation can be confirmed from the
-log file, if the following error lines exist:
-
- [2010-05-26 23:40:49] E [rpc-socket.c:126:rpcsvc_socket_listen] rpc-socket: binding socket failed:Address already in use
- [2010-05-26 23:40:49] E [rpc-socket.c:129:rpcsvc_socket_listen] rpc-socket: Port is already in use
- [2010-05-26 23:40:49] E [rpcsvc.c:2636:rpcsvc_stage_program_register] rpc-service: could not create listening connection
- [2010-05-26 23:40:49] E [rpcsvc.c:2675:rpcsvc_program_register] rpc-service: stage registration of program failed
- [2010-05-26 23:40:49] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
- [2010-05-26 23:40:49] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
- [2010-05-26 23:40:49] C [nfs.c:531:notify] nfs: Failed to initialize protocols
-
-To resolve this error one of the Gluster NFS servers will have to be
-shut down. At this time, the Gluster NFS server does not support running
-multiple NFS servers on the same machine.
-
-###mount command fails with “rpc.statd” related error message
-
-The mount command may fail with the following error message:
-
- mount.nfs: rpc.statd is not running but is required for remote locking.
- mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
-
-For NFS clients to mount the NFS server, the rpc.statd service must be
-running on the clients. Start the rpc.statd service by running the following command:
-
-`$ rpc.statd `
-
-###mount command takes too long to finish.
-
-**Start rpcbind service on the NFS client**
-
-The problem is that the rpcbind or portmap service is not running on the
-NFS client. The resolution for this is to start either of these services
-by running the following command:
-
-`$ /etc/init.d/portmap start`
-
-On some distributions where portmap has been replaced by rpcbind, the
-following command is required:
-
-`$ /etc/init.d/rpcbind start`
-
-###NFS server glusterfsd starts but initialization fails with “nfsrpc-service: portmap registration of program failed” error message in the log.
-
-NFS start-up can succeed but the initialization of the NFS service can
-still fail preventing clients from accessing the mount points. Such a
-situation can be confirmed from the following error messages in the log
-file:
-
-    [2010-05-26 23:33:47] E [rpcsvc.c:2598:rpcsvc_program_register_portmap] rpc-service: Could not register with portmap
- [2010-05-26 23:33:47] E [rpcsvc.c:2682:rpcsvc_program_register] rpc-service: portmap registration of program failed
- [2010-05-26 23:33:47] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
- [2010-05-26 23:33:47] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
- [2010-05-26 23:33:47] C [nfs.c:531:notify] nfs: Failed to initialize protocols
- [2010-05-26 23:33:49] E [rpcsvc.c:2614:rpcsvc_program_unregister_portmap] rpc-service: Could not unregister with portmap
- [2010-05-26 23:33:49] E [rpcsvc.c:2731:rpcsvc_program_unregister] rpc-service: portmap unregistration of program failed
- [2010-05-26 23:33:49] E [rpcsvc.c:2744:rpcsvc_program_unregister] rpc-service: Program unregistration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
-
-1. **Start portmap or rpcbind service on the NFS server**
-
- On most Linux distributions, portmap can be started using the
- following command:
-
- `$ /etc/init.d/portmap start `
-
- On some distributions where portmap has been replaced by rpcbind,
- run the following command:
-
- `$ /etc/init.d/rpcbind start `
-
- After starting portmap or rpcbind, gluster NFS server needs to be
- restarted.
-
-2. **Stop another NFS server running on the same machine**
-
- Such an error is also seen when there is another NFS server running
- on the same machine but it is not the Gluster NFS server. On Linux
- systems, this could be the kernel NFS server. Resolution involves
- stopping the other NFS server or not running the Gluster NFS server
- on the machine. Before stopping the kernel NFS server, ensure that
- no critical service depends on access to that NFS server's exports.
-
- On Linux, kernel NFS servers can be stopped by using either of the
- following commands depending on the distribution in use:
-
- `$ /etc/init.d/nfs-kernel-server stop`
-
- `$ /etc/init.d/nfs stop`
-
-3. **Restart Gluster NFS server**
-
-###mount command fails with NFS server failed error.
-
-mount command fails with following error
-
- *mount: mount to NFS server '10.1.10.11' failed: timed out (retrying).*
-
-Perform one of the following to resolve this issue:
-
-1. **Disable name lookup requests from NFS server to a DNS server**
-
- The NFS server attempts to authenticate NFS clients by performing a
- reverse DNS lookup to match hostnames in the volume file with the
- client IP addresses. There can be a situation where the NFS server
- either is not able to connect to the DNS server or the DNS server is
-    taking too long to respond to DNS requests. These delays can result
- in delayed replies from the NFS server to the NFS client resulting
- in the timeout error seen above.
-
- NFS server provides a work-around that disables DNS requests,
- instead relying only on the client IP addresses for authentication.
- The following option can be added for successful mounting in such
- situations:
-
- `option rpc-auth.addr.namelookup off `
-
-    > **Note**: Remember that disabling name lookup forces the NFS server to
-    > authenticate clients using only IP addresses; if the authentication
-    > rules in the volume file use hostnames, those authentication rules
-    > will fail and disallow mounting for those clients.
-
- **OR**
-
-2. **NFS version used by the NFS client is other than version 3**
-
-    The Gluster NFS server supports version 3 of the NFS protocol. In recent
-    Linux kernels, the default NFS version has been changed from 3 to 4.
-    It is possible that the client machine is unable to connect to the
-    Gluster NFS server because it is using version 4 messages which are
-    not understood by the Gluster NFS server. The timeout can be resolved by
- forcing the NFS client to use version 3. The **vers** option to
- mount command is used for this purpose:
-
- `$ mount -o vers=3 `
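-
-    For example (the server name, volume name, and mount point are
-    illustrative):
-
-    `$ mount -o vers=3 server1:/test-volume /mnt/nfs`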
-
-###showmount fails with clnt\_create: RPC: Unable to receive
-
-Check your firewall settings: port 111 must be open for portmap
-requests/replies, along with the ports used by the Gluster NFS server for
-its requests/replies. The Gluster NFS server operates over the following
-port numbers: 38465, 38466, and 38467.
-
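-As an illustration (assuming an iptables-based firewall; adapt the rules to
-your environment), the required ports could be opened with:
-
-    # iptables -I INPUT -p tcp -m multiport --dports 111,38465:38467 -j ACCEPT
-    # iptables -I INPUT -p udp --dport 111 -j ACCEPT
-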
-###Application fails with "Invalid argument" or "Value too large for defined data type" error.
-
-These two errors generally happen for 32-bit NFS clients, or applications
-that do not support 64-bit inode numbers or large files. Use the
-following option from the CLI to make Gluster NFS return 32-bit inode
-numbers instead: `nfs.enable-ino32 <on|off>`
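-
-For example, on a volume named test-volume (a sketch; substitute your own
-volume name):
-
-    # gluster volume set test-volume nfs.enable-ino32 on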
-
-Applications that will benefit are those that were either:
-
-- built 32-bit and run on 32-bit machines such that they do not
- support large files by default
-- built 32-bit on 64-bit systems
-
-This option is disabled by default so NFS returns 64-bit inode numbers
-by default.
-
-Applications which can be rebuilt from source should be rebuilt
-using the following flag with gcc:
-
-` -D_FILE_OFFSET_BITS=64`
-
-<a name="file-locks" />
-##Troubleshooting File Locks
-
-In GlusterFS 3.3 you can use the `statedump` command to list the locks held
-on files. The statedump output also provides information on each lock
-with its range, basename, PID of the application holding the lock, and
-so on. You can analyze the output to identify locks whose
-owner/application is no longer running or no longer interested in the lock.
-After ensuring that no application is using the file, you can clear the
-lock using the following `clear-locks` commands.
-
-1. **Perform statedump on the volume to view the files that are locked
- using the following command:**
-
-    `# gluster volume statedump <VOLNAME> inode`
-
- For example, to display statedump of test-volume:
-
- # gluster volume statedump test-volume
- Volume statedump successful
-
- The statedump files are created on the brick servers in the` /tmp`
- directory or in the directory set using `server.statedump-path`
- volume option. The naming convention of the dump file is
- `<brick-path>.<brick-pid>.dump`.
-
- The following are the sample contents of the statedump file. It
- indicates that GlusterFS has entered into a state where there is an
-    entry lock (entrylk) and an inode lock (inodelk). Ensure that these
-    are stale locks and that no application still holds them.
-
- [xlator.features.locks.vol-locks.inode]
- path=/
- mandatory=0
- entrylk-count=1
- lock-dump.domain.domain=vol-replicate-0
- xlator.feature.locks.lock-dump.domain.entrylk.entrylk[0](ACTIVE)=type=ENTRYLK_WRLCK on basename=file1, pid = 714782904, owner=ffffff2a3c7f0000, transport=0x20e0670, , granted at Mon Feb 27 16:01:01 2012
-
- conn.2.bound_xl./gfs/brick1.hashsize=14057
- conn.2.bound_xl./gfs/brick1.name=/gfs/brick1/inode
- conn.2.bound_xl./gfs/brick1.lru_limit=16384
- conn.2.bound_xl./gfs/brick1.active_size=2
- conn.2.bound_xl./gfs/brick1.lru_size=0
- conn.2.bound_xl./gfs/brick1.purge_size=0
-
- [conn.2.bound_xl./gfs/brick1.active.1]
- gfid=538a3d4a-01b0-4d03-9dc9-843cd8704d07
- nlookup=1
- ref=2
- ia_type=1
- [xlator.features.locks.vol-locks.inode]
- path=/file1
- mandatory=0
- inodelk-count=1
- lock-dump.domain.domain=vol-replicate-0
- inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 714787072, owner=00ffff2a3c7f0000, transport=0x20e0670, , granted at Mon Feb 27 16:01:01 2012
-
-2. **Clear the lock using the following command:**
-
-    `# gluster volume clear-locks <VOLNAME> <path> kind granted entry <basename>`
-
- For example, to clear the entry lock on `file1` of test-volume:
-
- # gluster volume clear-locks test-volume / kind granted entry file1
- Volume clear-locks successful
- vol-locks: entry blocked locks=0 granted locks=1
-
-3. **Clear the inode lock using the following command:**
-
-    `# gluster volume clear-locks <VOLNAME> <path> kind granted inode <range>`
-
- For example, to clear the inode lock on `file1` of test-volume:
-
- # gluster volume clear-locks test-volume /file1 kind granted inode 0,0-0
- Volume clear-locks successful
- vol-locks: inode blocked locks=0 granted locks=1
-
- You can perform statedump on test-volume again to verify that the
- above inode and entry locks are cleared.
-
-
diff --git a/doc/admin-guide/en-US/markdown/did-you-know.md b/doc/admin-guide/en-US/markdown/did-you-know.md
deleted file mode 100644
index 085b4a81a7a..00000000000
--- a/doc/admin-guide/en-US/markdown/did-you-know.md
+++ /dev/null
@@ -1,36 +0,0 @@
-#Did you know?
-
-This document is an attempt to describe less-documented behaviours and features
-of GlusterFS that an admin always wanted to know but was too shy or busy to
-ask.
-
-## Trusted Volfiles
-
-Observant admins would have wondered why there are two similar volume files for
-every volume, namely trusted-<VOLNAME>-fuse.vol and <VOLNAME>-fuse.vol. To
-appreciate this one needs to know about the IP address/hostname based access
-restriction schemes available in GlusterFS. They are "auth-allow" and
-"auth-reject". The "auth-allow" and "auth-reject" options take a comma
-separated list of IP addresses/hostnames as value. "auth-allow" allows access
-_only_ to clients running on machines whose IP address/hostname are on this
-list. It is highly likely for an admin to configure the "auth-allow" option
-without including the list of nodes in the cluster. One would expect this to
-work. Previously, in this configuration (internal) clients such as
-gluster-nfs, glustershd etc., running in the trusted storage pool, would be
-denied access to the volume. This is undesirable and counter-intuitive. The
-work around was to add the IP address/hostnames of all the nodes in the trusted
-storage pool to the "auth-allow" list. This is bad for a reasonably large
-number of nodes. To fix this, an alternate authentication mechanism for nodes
-in the storage pool was introduced. Following is a brief explanation of how
-this works.
-
-The volume file with trusted prefix in its name (i.e trusted-volfile) has a
-username and password option in the client xlator. The trusted-volfile is used
-_only_ by mount processes running in the trusted storage pool (hence the name).
-The username and password, when present, allow "mount" (and other glusterfs)
-processes to access the brick processes even if the node they are running on is
-not explicitly added in "auth-allow" addresses. 'Regular' mount processes,
-running on nodes outside the trusted storage pool, use the non-trusted-volfile.
-The important thing to note is that "trusted" in this context only implies
-belonging to the trusted storage pool.
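-
-As a concrete illustration of the "auth-allow" option discussed above (the
-volume name and address pattern are hypothetical):
-
-    # gluster volume set testvol auth.allow 192.168.1.*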
-
diff --git a/doc/admin-guide/en-US/markdown/glossary.md b/doc/admin-guide/en-US/markdown/glossary.md
deleted file mode 100644
index 496d0a428d4..00000000000
--- a/doc/admin-guide/en-US/markdown/glossary.md
+++ /dev/null
@@ -1,300 +0,0 @@
-Glossary
-========
-
-**Brick**
-: A Brick is the basic unit of storage in GlusterFS, represented by an export
- directory on a server in the trusted storage pool.
- A brick is expressed by combining a server with an export directory in the following format:
-
- `SERVER:EXPORT`
- For example:
- `myhostname:/exports/myexportdir/`
-
-**Volume**
-: A volume is a logical collection of bricks. Most of the gluster
- management operations happen on the volume.
-
-
-**Subvolume**
-: A brick after being processed by at least one translator; in other words, a
-  set of one or more xlators stacked together is called a subvolume.
-
-
-**Volfile**
-: Volume (vol) files are configuration files that determine the behavior of the
-  GlusterFS trusted storage pool. A volume file is a textual representation of a
- collection of modules (also known as translators) that together implement the
- various functions required. The collection of modules are arranged in a graph-like
-  fashion. For example, a replicated volume's volfile, among other things, would have a
- section describing the replication translator and its tunables.
- This section describes how the volume would replicate data written to it.
- Further, a client process that serves a mount point, would interpret its volfile
- and load the translators described in it. While serving I/O, it would pass the
- request to the collection of modules in the order specified in the volfile.
-
-  At a high level, GlusterFS has three entities, that is, Server, Client, and Management daemon.
-  Each of these entities has its own volume files.
- Volume files for servers and clients are generated by the management daemon
- after the volume is created.
-
- Server and Client Vol files are located in /var/lib/glusterd/vols/VOLNAME directory.
- The management daemon vol file is named as glusterd.vol and is located in /etc/glusterfs/
- directory.
-
-**glusterd**
-: The daemon/service that manages volumes and cluster membership. It is required to
- run on all the servers in the trusted storage pool.
-
-**Cluster**
-: A trusted pool of linked computers working together, resembling a single computing resource.
-  In GlusterFS, a cluster is also referred to as a trusted storage pool.
-
-**Client**
-: Any machine that mounts a GlusterFS volume. Any application that uses the libgfapi access
-  mechanism can also be treated as a client in the GlusterFS context.
-
-
-**Server**
-: The machine (virtual or bare metal) that hosts the bricks in which data is stored.
-
-
-**Block Storage**
-: Block special files, or block devices, correspond to devices through which the system moves
- data in the form of blocks. These device nodes often represent addressable devices such as
- hard disks, CD-ROM drives, or memory regions. GlusterFS requires a filesystem (like XFS) that
- supports extended attributes.
-
-
-
-**Filesystem**
-: A method of storing and organizing computer files and their data.
- Essentially, it organizes these files into a database for the
- storage, organization, manipulation, and retrieval by the computer's
- operating system.
-
- Source: [Wikipedia][]
-
-**Distributed File System**
-: A file system that allows multiple clients to concurrently access data which is spread across
- servers/bricks in a trusted storage pool. Data sharing among multiple locations is fundamental
- to all distributed file systems.
-
-**Virtual File System (VFS)**
-: VFS is a kernel software layer which handles all system calls related to the standard Linux file system.
-  It provides a common interface to several kinds of file systems.
-
-**POSIX**
-: Portable Operating System Interface (for Unix) is the name of a
- family of related standards specified by the IEEE to define the
- application programming interface (API), along with shell and
- utilities interfaces for software compatible with variants of the
- Unix operating system. Gluster exports a fully POSIX compliant file
- system.
-
-**Extended Attributes**
-: Extended file attributes (abbreviated xattr) is a filesystem feature
- that enables users/programs to associate files/dirs with metadata.
-
-
-**FUSE**
-: Filesystem in Userspace (FUSE) is a loadable kernel module for
- Unix-like computer operating systems that lets non-privileged users
- create their own filesystems without editing kernel code. This is
- achieved by running filesystem code in user space while the FUSE
- module provides only a "bridge" to the actual kernel interfaces.
-
- Source: [Wikipedia][1]
-
-
-**GFID**
-: Each file/directory on a GlusterFS volume has a unique 128-bit number
- associated with it called the GFID. This is analogous to inode in a
- regular filesystem.
-
-
-**InfiniBand**
-: InfiniBand is a switched fabric computer network communications link
- used in high-performance computing and enterprise data centers.
-
-**Metadata**
-: Metadata is data providing information about one or more other
- pieces of data.
-
-**Namespace**
-: Namespace is an abstract container or environment created to hold a
- logical grouping of unique identifiers or symbols. Each Gluster
- volume exposes a single namespace as a POSIX mount point that
- contains every file in the cluster.
-
-**Node**
-: A server or computer that hosts one or more bricks.
-
-**Open Source**
-: Open source describes practices in production and development that
- promote access to the end product's source materials. Some consider
- open source a philosophy, others consider it a pragmatic
- methodology.
-
- Before the term open source became widely adopted, developers and
- producers used a variety of phrases to describe the concept; open
- source gained hold with the rise of the Internet, and the attendant
- need for massive retooling of the computing source code.
-
- Opening the source code enabled a self-enhancing diversity of
- production models, communication paths, and interactive communities.
- Subsequently, a new, three-word phrase "open source software" was
- born to describe the environment that the new copyright, licensing,
- domain, and consumer issues created.
-
- Source: [Wikipedia][2]
-
-**Petabyte**
-: A petabyte (derived from the SI prefix peta- ) is a unit of
- information equal to one quadrillion (short scale) bytes, or 1000
- terabytes. The unit symbol for the petabyte is PB. The prefix peta-
- (P) indicates a power of 1000:
-
-    1 PB = 1,000,000,000,000,000 B = 1000^5 B = 10^15 B.
-
- The term "pebibyte" (PiB), using a binary prefix, is used for the
- corresponding power of 1024.
-
- Source: [Wikipedia][3]
-
-
-
-**Quorum**
-: The configuration of quorum in a trusted storage pool determines the
- number of server failures that the trusted storage pool can sustain.
- If an additional failure occurs, the trusted storage pool becomes
- unavailable.
-
-**Quota**
-: Quota allows you to set limits on usage of disk space by directories or
- by volumes.
-
-**RAID**
-: Redundant Array of Inexpensive Disks (RAID) is a technology that
- provides increased storage reliability through redundancy, combining
- multiple low-cost, less-reliable disk drives components into a
- logical unit where all drives in the array are interdependent.
-
-**RDMA**
-: Remote direct memory access (RDMA) is a direct memory access from the
- memory of one computer into that of another without involving either
- one's operating system. This permits high-throughput, low-latency
- networking, which is especially useful in massively parallel computer
- clusters.
-
-**Rebalance**
-: A process of fixing the layout and redistributing data in a volume when a
- brick is added or removed.
-
-**RRDNS**
-: Round Robin Domain Name Service (RRDNS) is a method to distribute
- load across application servers. RRDNS is implemented by creating
- multiple A records with the same name and different IP addresses in
- the zone file of a DNS server.
-
-**Samba**
-: Samba allows file and print sharing between computers running Windows and
- computers running Linux. It is an implementation of several services and
- protocols including SMB and CIFS.
-
-**Self-Heal**
-: The self-heal daemon runs in the background; it identifies
-  inconsistencies in files/dirs in a replicated volume and then resolves
-  or heals them. This healing process is usually required when one or more
-  bricks of a volume go down and then come up later.
-
-**Split-brain**
-: This is a situation where data on two or more bricks in a replicated
- volume start to diverge in terms of content or metadata. In this state,
-  one cannot determine programmatically which set of data is "right" and
- which is "wrong".
-
-**Translator**
-: Translators (also called xlators) are stackable modules where each
- module has a very specific purpose. Translators are stacked in a
-  hierarchical structure called a graph. A translator receives data
- from its parent translator, performs necessary operations and then
- passes the data down to its child translator in hierarchy.
-
-**Trusted Storage Pool**
-: A storage pool is a trusted network of storage servers. When you
- start the first server, the storage pool consists of that server
- alone.
-
-**Scale-Up Storage**
-: Increases the capacity of the storage device in a single dimension.
- For example, adding additional disk capacity to an existing trusted storage pool.
-
-**Scale-Out Storage**
-: Scale-out systems are designed to scale on both capacity and performance.
-  They increase the capability of a storage system in multiple dimensions.
-  For example, adding more systems of the same size, or adding servers to a trusted storage pool
-  that increases CPU, disk capacity, and throughput for the trusted storage pool.
-
-**Userspace**
-: Applications running in user space don’t directly interact with
- hardware, instead using the kernel to moderate access. Userspace
- applications are generally more portable than applications in kernel
- space. Gluster is a user space application.
-
-
-**Geo-Replication**
-: Geo-replication provides a continuous, asynchronous, and incremental
-  replication service from one site to another over Local Area Networks
-  (LANs), Wide Area Networks (WANs), and across the Internet.
-
-**N-way Replication**
-: Local synchronous data replication which is typically deployed across campus
- or Amazon Web Services Availability Zones.
-
-**Distributed Hash Table Terminology**
-**Hashed subvolume**
-: A Distributed Hash Table Translator subvolume to which the file or directory name hashes.
-
-**Cached subvolume**
-: A Distributed Hash Table Translator subvolume where the file content is actually present.
-  For directories, the concept of a cached subvolume is not relevant. The term is loosely used to mean
-  subvolumes which are not the hashed subvolume.
-
-**Linkto-file**
-
-: For a newly created file, the hashed and cached subvolumes are the same.
- When directory entry operations like rename (which can change the name and hence hashed
- subvolume of the file) are performed on the file, instead of moving the entire data in the file
- to a new hashed subvolume, a file is created with the same name on the newly hashed subvolume.
- The purpose of this file is only to act as a pointer to the node where the data is present.
- In the extended attributes of this file, the name of the cached subvolume is stored.
- This file on the newly hashed-subvolume is called a linkto-file.
- The linkto file is relevant only for non-directory entities.
-
-**Directory Layout**
-: The directory layout specifies, for a directory, the hash ranges of its entries and the
-  subvolumes they correspond to.
-
-**Properties of directory layouts:**
-: The layouts are created at the time of directory creation and are persisted as extended attributes
- of the directory.
-  A subvolume is not included in the layout if it was offline at the time of directory creation;
-  in that case, no directory entries (such as files and directories) of that directory are created on
-  that subvolume. The subvolume is not made part of the layout until a fix-layout is completed
-  as part of running the rebalance command. If a subvolume is down during access (after directory creation),
-  access to any files that hash to that subvolume fails.
-
-**Fix Layout**
-: A command that is executed during the rebalance process.
-  The rebalance process itself comprises two stages. The first stage fixes the
-  layouts of directories to accommodate any subvolumes that are added or removed;
-  it also heals the directories, checks whether the layout is non-contiguous,
-  persists the layout in extended attributes if needed, and ensures that the
-  directories have the same attributes across all the subvolumes.
-  The second stage migrates the data from the cached subvolume to the hashed subvolume.
-
- [Wikipedia]: http://en.wikipedia.org/wiki/Filesystem
- [1]: http://en.wikipedia.org/wiki/Filesystem_in_Userspace
- [2]: http://en.wikipedia.org/wiki/Open_source
- [3]: http://en.wikipedia.org/wiki/Petabyte
diff --git a/doc/admin-guide/en-US/markdown/glusterfs_introduction.md b/doc/admin-guide/en-US/markdown/glusterfs_introduction.md
deleted file mode 100644
index 02334f7b108..00000000000
--- a/doc/admin-guide/en-US/markdown/glusterfs_introduction.md
+++ /dev/null
@@ -1,63 +0,0 @@
-Introducing Gluster File System
-===============================
-
-GlusterFS is an open source, distributed file system capable of scaling to
-several petabytes and handling thousands of clients. It is a file system with
-a modular, stackable design, and a unique no-metadata server architecture.
-This no-metadata server architecture ensures better performance,
-linear scalability, and reliability. GlusterFS can be
-flexibly combined with commodity physical, virtual, and cloud resources
-to deliver highly available and performant enterprise storage at a
-fraction of the cost of traditional solutions.
-
-GlusterFS clusters together storage building blocks over Infiniband RDMA
-and/or TCP/IP interconnect, aggregating disk and memory resources and
-managing data in a single global namespace.
-
-GlusterFS aggregates various storage servers over network interconnects
-into one large parallel network file system. Based on a stackable user-space
-design, it delivers exceptional performance for diverse workloads.
-POSIX-compatible GlusterFS servers store data on any on-disk file system that
-supports extended attributes (for example, ext4 or XFS), and can be accessed
-using industry-standard access protocols including Network File System (NFS)
-and Server Message Block (SMB).
-
-![ Virtualized Cloud Environments ](../images/640px-GlusterFS_Architecture.png)
-
-GlusterFS is designed for today's high-performance, virtualized cloud
-environments. Unlike traditional data centers, cloud environments
-require multi-tenancy along with the ability to grow or shrink resources
-on demand. Enterprises can scale capacity, performance, and availability
-on demand, with no vendor lock-in, across on-premise, public cloud, and
-hybrid environments.
-
-GlusterFS is in production at thousands of enterprises spanning media,
-healthcare, government, education, web 2.0, and financial services.
-
-## Commercial offerings and support ##
-
-Several companies offer support or consulting - http://www.gluster.org/consultants/.
-
-Red Hat Storage (http://www.redhat.com/en/technologies/storage/storage-server)
-is a commercial storage software product, based on GlusterFS.
-
-
-## About On-premise Installation ##
-
-GlusterFS for On-Premise allows physical storage to be utilized as a
-virtualized, scalable, and centrally managed pool of storage.
-
-GlusterFS can be installed on commodity servers resulting in a
-powerful, massively scalable, and highly available NAS environment.
-
-GlusterFS On-premise enables enterprises to treat physical storage as a
-virtualized, scalable, and centrally managed storage pool by using commodity
-storage hardware. It supports multi-tenancy by partitioning users or groups into
-logical volumes on shared storage. It enables users to eliminate, decrease, or
-manage their dependence on high-cost, monolithic and difficult-to-deploy storage arrays.
-You can add capacity in a matter of minutes across a wide variety of workloads without
-affecting performance. Storage can also be centrally managed across a variety of
-workloads, thus increasing storage efficiency.
-
-
diff --git a/doc/admin-guide/en-US/markdown/pdfgen.sh b/doc/admin-guide/en-US/markdown/pdfgen.sh
deleted file mode 100755
index 68b320617b1..00000000000
--- a/doc/admin-guide/en-US/markdown/pdfgen.sh
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/bin/bash
-# pdfgen.sh simple pdf generation helper script.
-# Copyright (C) 2012-2013 James Shubin
-# Written by James Shubin <james@shubin.ca>
-
-#dir='/tmp/pdf'
-dir=`pwd`'/output/'
-ln -sf ../images images  # symlink images so relative image paths resolve; -f tolerates an existing link
-mkdir -p "$dir"
-
-for i in *.md; do
-    pandoc "$i" -o "$dir"`echo "$i" | sed 's/\.md$/\.pdf/'`  # quote "$i" so filenames with spaces work
-done
-
-rm images # remove symlink
-