author	Prashanth Pai <ppai@redhat.com>	2013-12-13 16:20:11 +0530
committer	Vijay Bellur <vbellur@redhat.com>	2013-12-16 07:01:33 -0800
commit	588185463d1bbf1b011e3b0471771b3d4f4aa145 (patch)
tree	998835cc31c7d5bbf3c88b2ac08de0ff2f73b042 /doc/admin-guide
parent	a9623ada6f7b39ac2d567f66a496072487d8e6ec (diff)
doc: Fix markdown format errors
Made the following minor changes:

* Fix broken links and point to correct image paths
* Remove dead links and references
* Fix table format to conform to Github Flavoured Markdown
* Add few common terms to glossary
* Maintain consistency of format in writing headings <h1..h6>
* Remove irrelevant files
* Remove references to contact Red Hat support.

Change-Id: I4aed4945d56b5d68b8ea133ce5fa3162bfc2864f
Signed-off-by: Prashanth Pai <ppai@redhat.com>
Reviewed-on: http://review.gluster.org/6514
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
Diffstat (limited to 'doc/admin-guide')
-rw-r--r-- doc/admin-guide/en-US/markdown/Administration_Guide.md | 1
-rw-r--r-- doc/admin-guide/en-US/markdown/Author_Group.md | 5
-rw-r--r-- doc/admin-guide/en-US/markdown/Book_Info.md | 1
-rw-r--r-- doc/admin-guide/en-US/markdown/Chapter.md | 18
-rw-r--r-- doc/admin-guide/en-US/markdown/Preface.md | 22
-rw-r--r-- doc/admin-guide/en-US/markdown/Revision_History.md | 4
-rw-r--r-- doc/admin-guide/en-US/markdown/admin_ACLs.md | 46
-rw-r--r-- doc/admin-guide/en-US/markdown/admin_Hadoop.md | 60
-rw-r--r-- doc/admin-guide/en-US/markdown/admin_UFO.md | 286
-rw-r--r-- doc/admin-guide/en-US/markdown/admin_commandref.md | 180
-rw-r--r-- doc/admin-guide/en-US/markdown/admin_console.md | 5
-rw-r--r-- doc/admin-guide/en-US/markdown/admin_directory_Quota.md | 26
-rw-r--r-- doc/admin-guide/en-US/markdown/admin_geo-replication.md | 134
-rw-r--r-- doc/admin-guide/en-US/markdown/admin_managing_volumes.md | 274
-rw-r--r-- doc/admin-guide/en-US/markdown/admin_monitoring_workload.md | 118
-rw-r--r-- doc/admin-guide/en-US/markdown/admin_setting_volumes.md | 161
-rw-r--r-- doc/admin-guide/en-US/markdown/admin_settingup_clients.md | 120
-rw-r--r-- doc/admin-guide/en-US/markdown/admin_start_stop_daemon.md | 34
-rw-r--r-- doc/admin-guide/en-US/markdown/admin_storage_pools.md | 21
-rw-r--r-- doc/admin-guide/en-US/markdown/admin_troubleshooting.md | 214
-rw-r--r-- doc/admin-guide/en-US/markdown/gfs_introduction.md | 29
-rw-r--r-- doc/admin-guide/en-US/markdown/glossary.md | 104
22 files changed, 634 insertions, 1229 deletions
diff --git a/doc/admin-guide/en-US/markdown/Administration_Guide.md b/doc/admin-guide/en-US/markdown/Administration_Guide.md
deleted file mode 100644
index 8b137891791..00000000000
--- a/doc/admin-guide/en-US/markdown/Administration_Guide.md
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/doc/admin-guide/en-US/markdown/Author_Group.md b/doc/admin-guide/en-US/markdown/Author_Group.md
deleted file mode 100644
index ef2a5e677fe..00000000000
--- a/doc/admin-guide/en-US/markdown/Author_Group.md
+++ /dev/null
@@ -1,5 +0,0 @@
-Divya
-Muntimadugu
-Red Hat
-Engineering Content Services
-divya@redhat.com
diff --git a/doc/admin-guide/en-US/markdown/Book_Info.md b/doc/admin-guide/en-US/markdown/Book_Info.md
deleted file mode 100644
index 8b137891791..00000000000
--- a/doc/admin-guide/en-US/markdown/Book_Info.md
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/doc/admin-guide/en-US/markdown/Chapter.md b/doc/admin-guide/en-US/markdown/Chapter.md
deleted file mode 100644
index 8420259c439..00000000000
--- a/doc/admin-guide/en-US/markdown/Chapter.md
+++ /dev/null
@@ -1,18 +0,0 @@
-Test Chapter
-============
-
-This is a test paragraph
-
-Test Section 1
-==============
-
-This is a test paragraph in a section
-
-Test Section 2
-==============
-
-This is a test paragraph in Section 2
-
-1. listitem text
-
-
diff --git a/doc/admin-guide/en-US/markdown/Preface.md b/doc/admin-guide/en-US/markdown/Preface.md
deleted file mode 100644
index f7e934ae84b..00000000000
--- a/doc/admin-guide/en-US/markdown/Preface.md
+++ /dev/null
@@ -1,22 +0,0 @@
-Preface
-=======
-
-This guide describes how to configure, operate, and manage Gluster File
-System (GlusterFS).
-
-Audience
-========
-
-This guide is intended for Systems Administrators interested in
-configuring and managing GlusterFS.
-
-This guide assumes that you are familiar with the Linux operating
-system, concepts of File System, GlusterFS concepts, and GlusterFS
-Installation
-
-License
-=======
-
-The License information is available at [][].
-
- []: http://www.redhat.com/licenses/rhel_rha_eula.html
diff --git a/doc/admin-guide/en-US/markdown/Revision_History.md b/doc/admin-guide/en-US/markdown/Revision_History.md
deleted file mode 100644
index 2084309d195..00000000000
--- a/doc/admin-guide/en-US/markdown/Revision_History.md
+++ /dev/null
@@ -1,4 +0,0 @@
-Revision History
-================
-
-1-0 Thu Apr 5 2012 Divya Muntimadugu <divya@redhat.com> Draft
diff --git a/doc/admin-guide/en-US/markdown/admin_ACLs.md b/doc/admin-guide/en-US/markdown/admin_ACLs.md
index 308e069ca50..8fc4e1dae70 100644
--- a/doc/admin-guide/en-US/markdown/admin_ACLs.md
+++ b/doc/admin-guide/en-US/markdown/admin_ACLs.md
@@ -1,5 +1,4 @@
-POSIX Access Control Lists
-==========================
+#POSIX Access Control Lists
POSIX Access Control Lists (ACLs) allows you to assign different
permissions for different users or groups even though they do not
@@ -13,14 +12,12 @@ This means, in addition to the file owner, the file group, and others,
additional users and groups can be granted or denied access by using
POSIX ACLs.
-Activating POSIX ACLs Support
-=============================
+##Activating POSIX ACLs Support
To use POSIX ACLs for a file or directory, the partition of the file or
directory must be mounted with POSIX ACLs support.
-Activating POSIX ACLs Support on Sever
---------------------------------------
+###Activating POSIX ACLs Support on Server
To mount the backend export directories for POSIX ACLs support, use the
following command:
@@ -36,8 +33,7 @@ the following entry for the partition to include the POSIX ACLs option:
`LABEL=/work /export1 ext3 rw, acl 14 `
-Activating POSIX ACLs Support on Client
----------------------------------------
+###Activating POSIX ACLs Support on Client
To mount the glusterfs volumes for POSIX ACLs support, use the following
command:
@@ -48,8 +44,7 @@ For example:
`# mount -t glusterfs -o acl 198.192.198.234:glustervolume /mnt/gluster`
-Setting POSIX ACLs
-==================
+##Setting POSIX ACLs
You can set two types of POSIX ACLs, that is, access ACLs and default
ACLs. You can use access ACLs to grant permission for a specific file or
@@ -60,8 +55,7 @@ of the default ACLs of the directory.
You can set ACLs for per user, per group, for users not in the user
group for the file, and via the effective right mask.
-Setting Access ACLs
--------------------
+##Setting Access ACLs
You can apply access ACLs to grant permission for both files and
directories.
@@ -80,12 +74,12 @@ Permissions must be a combination of the characters `r` (read), `w`
following format and can specify multiple entry types separated by
commas.
- ACL Entry Description
- ---------------------- --------------------------------------------------------------------------------------------------------------------------------------------------
- u:uid:\<permission\> Sets the access ACLs for a user. You can specify user name or UID
- g:gid:\<permission\> Sets the access ACLs for a group. You can specify group name or GID.
- m:\<permission\> Sets the effective rights mask. The mask is the combination of all access permissions of the owning group and all of the user and group entries.
- o:\<permission\> Sets the access ACLs for users other than the ones in the group for the file.
+ ACL Entry | Description
+ --- | ---
+ u:uid:\<permission\> | Sets the access ACLs for a user. You can specify user name or UID
+ g:gid:\<permission\> | Sets the access ACLs for a group. You can specify group name or GID.
+ m:\<permission\> | Sets the effective rights mask. The mask is the combination of all access permissions of the owning group and all of the user and group entries.
+ o:\<permission\> | Sets the access ACLs for users other than the ones in the group for the file.
If a file or directory already has an POSIX ACLs, and the setfacl
command is used, the additional permissions are added to the existing
@@ -95,8 +89,7 @@ For example, to give read and write permissions to user antony:
`# setfacl -m u:antony:rw /mnt/gluster/data/testfile `
-Setting Default ACLs
---------------------
+##Setting Default ACLs
You can apply default ACLs only to directories. They determine the
permissions of a file system objects that inherits from its parent
@@ -126,11 +119,9 @@ default ACLs are passed to the files and subdirectories in it:
- A subdirectory inherits the default ACLs of the parent directory
both as its default ACLs and as an access ACLs.
-
- A file inherits the default ACLs as its access ACLs.
-Retrieving POSIX ACLs
-=====================
+##Retrieving POSIX ACLs
You can view the existing POSIX ACLs for a file or directory.
@@ -169,8 +160,7 @@ You can view the existing POSIX ACLs for a file or directory.
default:mask::rwx
default:other::r-x
-Removing POSIX ACLs
-===================
+##Removing POSIX ACLs
To remove all the permissions for a user, groups, or others, use the
following command:
@@ -181,16 +171,14 @@ For example, to remove all permissions from the user antony:
`# setfacl -x u:antony /mnt/gluster/data/test-file`
-Samba and ACLs
-==============
+##Samba and ACLs
If you are using Samba to access GlusterFS FUSE mount, then POSIX ACLs
are enabled by default. Samba has been compiled with the
`--with-acl-support` option, so no special flags are required when
accessing or mounting a Samba share.
-NFS and ACLs
-============
+##NFS and ACLs
Currently we do not support ACLs configuration through NFS, i.e. setfacl
and getfacl commands do not work. However, ACLs permissions set using
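The ACL-entry table rebuilt in this diff maps directly onto `setfacl -m` arguments. A minimal shell sketch of composing such an entry list (the names `antony` and `staff` and the mount path are illustrative, taken from the surrounding examples; the command is echoed rather than executed, since running it needs an ACL-mounted volume):

```shell
# Compose a comma-separated setfacl entry list per the ACL-entry table above.
user_entry="u:antony:rw"   # access ACL for user antony: read and write
group_entry="g:staff:r"    # access ACL for group staff: read only
mask_entry="m:rw"          # effective rights mask
other_entry="o:r"          # users outside the owning group: read only
acl_spec="${user_entry},${group_entry},${mask_entry},${other_entry}"
echo "setfacl -m ${acl_spec} /mnt/gluster/data/testfile"
```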
diff --git a/doc/admin-guide/en-US/markdown/admin_Hadoop.md b/doc/admin-guide/en-US/markdown/admin_Hadoop.md
index 2894fa71302..742e8ad6255 100644
--- a/doc/admin-guide/en-US/markdown/admin_Hadoop.md
+++ b/doc/admin-guide/en-US/markdown/admin_Hadoop.md
@@ -1,5 +1,4 @@
-Managing Hadoop Compatible Storage
-==================================
+#Managing Hadoop Compatible Storage
GlusterFS provides compatibility for Apache Hadoop and it uses the
standard file system APIs available in Hadoop to provide a new storage
@@ -7,54 +6,44 @@ option for Hadoop deployments. Existing MapReduce based applications can
use GlusterFS seamlessly. This new functionality opens up data within
Hadoop deployments to any file-based or object-based application.
-Architecture Overview
-=====================
+##Architecture Overview
The following diagram illustrates Hadoop integration with GlusterFS:
-Advantages
-==========
+![ Hadoop Architecture ](../images/Hadoop_Architecture.png)
+
+##Advantages
The following are the advantages of Hadoop Compatible Storage with
GlusterFS:
- Provides simultaneous file-based and object-based access within
Hadoop.
-
- Eliminates the centralized metadata server.
-
- Provides compatibility with MapReduce applications and rewrite is
not required.
-
- Provides a fault tolerant file system.
-Preparing to Install Hadoop Compatible Storage
-==============================================
+##Preparing to Install Hadoop Compatible Storage
This section provides information on pre-requisites and list of
dependencies that will be installed during installation of Hadoop
compatible storage.
-Pre-requisites
---------------
+###Pre-requisites
The following are the pre-requisites to install Hadoop Compatible
Storage :
- Hadoop 0.20.2 is installed, configured, and is running on all the
machines in the cluster.
-
- Java Runtime Environment
-
- Maven (mandatory only if you are building the plugin from the
source)
-
- JDK (mandatory only if you are building the plugin from the source)
-
- getfattr - command line utility
-Installing, and Configuring Hadoop Compatible Storage
-=====================================================
+##Installing and Configuring Hadoop Compatible Storage
This section describes how to install and configure Hadoop Compatible
Storage in your storage environment and verify that it is functioning
@@ -70,9 +59,8 @@ correctly.
The following files will be extracted:
- - /usr/local/lib/glusterfs-Hadoop-version-gluster\_plugin\_version.jar
-
- - /usr/local/lib/conf/core-site.xml
+ - /usr/local/lib/glusterfs-Hadoop-version-gluster\_plugin\_version.jar
+ - /usr/local/lib/conf/core-site.xml
3. (Optional) To install Hadoop Compatible Storage in a different
location, run the following command:
@@ -116,22 +104,13 @@ correctly.
The following are the configurable fields:
- -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
- Property Name Default Value Description
- ---------------------- -------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
- fs.default.name glusterfs://fedora1:9000 Any hostname in the cluster as the server and any port number.
-
- fs.glusterfs.volname hadoopvol GlusterFS volume to mount.
-
- fs.glusterfs.mount /mnt/glusterfs The directory used to fuse mount the volume.
-
- fs.glusterfs.server fedora2 Any hostname or IP address on the cluster except the client/master.
-
- quick.slave.io Off Performance tunable option. If this option is set to On, the plugin will try to perform I/O directly from the disk file system (like ext3 or ext4) the file resides on. Hence read performance will improve and job would run faster.
- > **Note**
- >
- > This option is not tested widely
- -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Property Name | Default Value | Description
+ --- | --- | ---
+ fs.default.name | glusterfs://fedora1:9000 | Any hostname in the cluster as the server and any port number.
+ fs.glusterfs.volname | hadoopvol | GlusterFS volume to mount.
+ fs.glusterfs.mount | /mnt/glusterfs | The directory used to fuse mount the volume.
+ fs.glusterfs.server | fedora2 | Any hostname or IP address on the cluster except the client/master.
+ quick.slave.io | Off | Performance tunable option. If this option is set to On, the plugin will try to perform I/O directly from the disk file system (like ext3 or ext4) the file resides on. Hence read performance will improve and job would run faster. **Note**: This option is not tested widely
5. Create a soft link in Hadoop’s library and configuration directory
for the downloaded files (in Step 3) using the following commands:
@@ -141,7 +120,6 @@ correctly.
For example,
    `# ln -s /usr/local/lib/glusterfs-0.20.2-0.1.jar /lib/glusterfs-0.20.2-0.1.jar`
-
    `# ln -s /usr/local/lib/conf/core-site.xml /conf/core-site.xml `
6. (Optional) You can run the following command on Hadoop master to
@@ -150,8 +128,7 @@ correctly.
`# build-deploy-jar.py -d -c `
-Starting and Stopping the Hadoop MapReduce Daemon
-=================================================
+##Starting and Stopping the Hadoop MapReduce Daemon
To start and stop MapReduce daemon
@@ -164,7 +141,6 @@ To start and stop MapReduce daemon
`# /bin/stop-mapred.sh `
> **Note**
->
> You must start Hadoop MapReduce daemon on all servers.
[]: http://download.gluster.com/pub/gluster/glusterfs/qa-releases/3.3-beta-2/glusterfs-hadoop-0.20.2-0.1.x86_64.rpm
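The configurable fields reformatted into the table above correspond to entries in the plugin's `core-site.xml`. A hedged sketch that writes the table's sample values out as XML (`fedora1`, `fedora2`, and `hadoopvol` are the table's example values, not real hosts or volumes):

```shell
# Render the sample values from the table above as core-site.xml properties.
cat > core-site.xml <<'EOF'
<configuration>
  <property><name>fs.default.name</name><value>glusterfs://fedora1:9000</value></property>
  <property><name>fs.glusterfs.volname</name><value>hadoopvol</value></property>
  <property><name>fs.glusterfs.mount</name><value>/mnt/glusterfs</value></property>
  <property><name>fs.glusterfs.server</name><value>fedora2</value></property>
  <property><name>quick.slave.io</name><value>Off</value></property>
</configuration>
EOF
grep -c '<property>' core-site.xml   # one line per configurable field: 5
```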
diff --git a/doc/admin-guide/en-US/markdown/admin_UFO.md b/doc/admin-guide/en-US/markdown/admin_UFO.md
index 3311eff0188..88271041046 100644
--- a/doc/admin-guide/en-US/markdown/admin_UFO.md
+++ b/doc/admin-guide/en-US/markdown/admin_UFO.md
@@ -1,5 +1,4 @@
-Managing Unified File and Object Storage
-========================================
+#Managing Unified File and Object Storage
Unified File and Object Storage (UFO) unifies NAS and object storage
technology. It provides a system for data storage that enables users to
@@ -35,8 +34,7 @@ a traditional file system. You will not be able to mount this system
like traditional SAN or NAS volumes and perform POSIX compliant
operations.
-Components of Object Storage
-============================
+##Components of Object Storage
The major components of Object Storage are:
@@ -88,35 +86,26 @@ objects within that account. If a user wants to access the content from
another account, they must have API access key or a session token
provided by their authentication system.
-Advantages of using GlusterFS Unified File and Object Storage
-=============================================================
+##Advantages of using GlusterFS Unified File and Object Storage
The following are the advantages of using GlusterFS UFO:
- No limit on upload and download files sizes as compared to Open
Stack Swift which limits the object size to 5GB.
-
- A unified view of data across NAS and Object Storage technologies.
-
- Using GlusterFS's UFO has other advantages like the following:
-
- High availability
-
- Scalability
-
- Replication
-
- Elastic Volume management
-Preparing to Deploy Unified File and Object Storage
-===================================================
+##Preparing to Deploy Unified File and Object Storage
This section provides information on pre-requisites and list of
dependencies that will be installed during the installation of Unified
File and Object Storage.
-Pre-requisites
---------------
+###Pre-requisites
GlusterFS's Unified File and Object Storage needs `user_xattr` support
from the underlying disk file system. Use the following command to
@@ -128,50 +117,33 @@ For example,
`# mount -o remount,user_xattr /dev/hda1 `
-Dependencies
+####Dependencies
------------
The following packages are installed on GlusterFS when you install
Unified File and Object Storage:
- curl
-
- memcached
-
- openssl
-
- xfsprogs
-
- python2.6
-
- pyxattr
-
- python-configobj
-
- python-setuptools
-
- python-simplejson
-
- python-webob
-
- python-eventlet
-
- python-greenlet
-
- python-pastedeploy
-
- python-netifaces
-Installing and Configuring Unified File and Object Storage
-==========================================================
+##Installing and Configuring Unified File and Object Storage
This section provides instructions on how to install and configure
Unified File and Object Storage in your storage environment.
-Installing Unified File and Object Storage
-------------------------------------------
-
-To install Unified File and Object Storage:
+##Installing Unified File and Object Storage
1. Download `rhel_install.sh` install script from [][] .
@@ -197,15 +169,13 @@ To install Unified File and Object Storage:
> use a load balancer like pound, nginx, and so on to distribute the
> request across the machines.
-Adding Users
-------------
+###Adding Users
The authentication system allows the administrator to grant different
levels of access to different users based on the requirement. The
following are the types of user permissions:
- admin user
-
- normal user
Admin user has read and write permissions on the account. By default, a
@@ -228,10 +198,7 @@ For example,
> the `proxy-server.conf` file. It is highly recommended that you remove
> all the default sample user entries from the configuration file.
-For more information on setting ACLs, see ?.
-
-Configuring Proxy Server
-------------------------
+##Configuring Proxy Server
The Proxy Server is responsible for connecting to the rest of the
OpenStack Object Storage architecture. For each request, it looks up the
@@ -251,7 +218,8 @@ The configurable options pertaining to proxy server are stored in
account_autocreate=true
[filter:tempauth]
- use = egg:swift#tempauth user_admin_admin=admin.admin.reseller_admin
+ use = egg:swift#tempauth
+ user_admin_admin=admin.admin.reseller_admin
user_test_tester=testing.admin
user_test2_tester2=testing2.admin
user_test_tester3=testing3
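The `[filter:tempauth]` entries reflowed above follow the pattern `user_<account>_<user>=<key>[.group ...]`. A small shell sketch decomposing one such entry (pure string handling, no Swift services assumed; optional spaces around `=` are ignored here):

```shell
# Split a tempauth credential line into account, user, and key.
entry="user_test_tester=testing.admin"   # sample entry from proxy-server.conf above
lhs=${entry%%=*}                         # "user_test_tester"
key=${entry#*=}                          # "testing.admin" (key plus .admin group)
account=$(printf '%s' "$lhs" | cut -d_ -f2)
user=$(printf '%s' "$lhs" | cut -d_ -f3)
echo "account=$account user=$user key=$key"
```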
@@ -266,15 +234,12 @@ By default, GlusterFS's Unified File and Object Storage is configured to
support HTTP protocol and uses temporary authentication to authenticate
the HTTP requests.
-Configuring Authentication System
----------------------------------
+###Configuring Authentication System
-Proxy server must be configured to authenticate using `
-
- `.
+There are several different authentication systems like tempauth, keystone,
+swauth etc. Their respective documentation has detailed usage.
-Configuring Proxy Server for HTTPS
-----------------------------------
+###Configuring Proxy Server for HTTPS
By default, proxy server only handles HTTP request. To configure the
proxy server to process HTTPS requests, perform the following steps:
@@ -288,8 +253,8 @@ proxy server to process HTTPS requests, perform the following steps:
[DEFAULT]
bind_port = 443
- cert_file = /etc/swift/cert.crt
- key_file = /etc/swift/cert.key
+ cert_file = /etc/swift/cert.crt
+ key_file = /etc/swift/cert.key
3. Restart the servers using the following commands:
@@ -298,41 +263,40 @@ proxy server to process HTTPS requests, perform the following steps:
The following are the configurable options:
- Option Default Description
- ------------ ------------ -------------------------------
- bind\_ip 0.0.0.0 IP Address for server to bind
- bind\_port 80 Port for server to bind
- swift\_dir /etc/swift Swift configuration directory
- workers 1 Number of workers to fork
- user swift swift user
- cert\_file Path to the ssl .crt
- key\_file Path to the ssl .key
+ Option | Default | Description
+ ------------ | ------------ | -------------------------------
+ bind\_ip | 0.0.0.0 | IP Address for server to bind
+ bind\_port | 80 | Port for server to bind
+ swift\_dir | /etc/swift | Swift configuration directory
+ workers | 1 | Number of workers to fork
+ user | swift | swift user
+ cert\_file | | Path to the ssl .crt
+ key\_file | | Path to the ssl .key
: proxy-server.conf Default Options in the [DEFAULT] section
- Option Default Description
- ------------------------------- ----------------- -----------------------------------------------------------------------------------------------------------
- use paste.deploy entry point for the container server. For most cases, this should be `egg:swift#container`.
- log\_name proxy-server Label used when logging
- log\_facility LOG\_LOCAL0 Syslog log facility
- log\_level INFO Log level
- log\_headers True If True, log headers in each request
- recheck\_account\_existence 60 Cache timeout in seconds to send memcached for account existence
- recheck\_container\_existence 60 Cache timeout in seconds to send memcached for container existence
- object\_chunk\_size 65536 Chunk size to read from object servers
- client\_chunk\_size 65536 Chunk size to read from clients
- memcache\_servers 127.0.0.1:11211 Comma separated list of memcached servers ip:port
- node\_timeout 10 Request timeout to external services
- client\_timeout 60 Timeout to read one chunk from a client
- conn\_timeout 0.5 Connection timeout to external services
- error\_suppression\_interval 60 Time in seconds that must elapse since the last error for a node to be considered no longer error limited
- error\_suppression\_limit 10 Error count to consider a node error limited
- allow\_account\_management false Whether account `PUT`s and `DELETE`s are even callable
+ Option | Default | Description
+ ------------------------------- | ----------------- | -----------------------------------------------------------------------
+ use | | paste.deploy entry point for the container server. For most cases, this should be `egg:swift#container`.
+ log\_name | proxy-server | Label used when logging
+ log\_facility | LOG\_LOCAL0 | Syslog log facility
+ log\_level | INFO | Log level
+ log\_headers | True | If True, log headers in each request
+ recheck\_account\_existence | 60 | Cache timeout in seconds to send memcached for account existence
+ recheck\_container\_existence | 60 | Cache timeout in seconds to send memcached for container existence
+ object\_chunk\_size | 65536 | Chunk size to read from object servers
+ client\_chunk\_size | 65536 | Chunk size to read from clients
+ memcache\_servers | 127.0.0.1:11211 | Comma separated list of memcached servers ip:port
+ node\_timeout | 10 | Request timeout to external services
+ client\_timeout | 60 | Timeout to read one chunk from a client
+ conn\_timeout | 0.5 | Connection timeout to external services
+ error\_suppression\_interval | 60 | Time in seconds that must elapse since the last error for a node to be considered no longer error limited
+ error\_suppression\_limit | 10 | Error count to consider a node error limited
+ allow\_account\_management | false | Whether account `PUT`s and `DELETE`s are even callable
: proxy-server.conf Server Options in the [proxy-server] section
-Configuring Object Server
--------------------------
+##Configuring Object Server
The Object Server is a very simple blob storage server that can store,
retrieve, and delete objects stored on local devices. Objects are stored
@@ -368,36 +332,35 @@ The configurable options pertaining Object Server are stored in the file
The following are the configurable options:
- Option Default Description
- -------------- ------------ ----------------------------------------------------------------------------------------------------
- swift\_dir /etc/swift Swift configuration directory
- devices /srv/node Mount parent directory where devices are mounted
- mount\_check true Whether or not check if the devices are mounted to prevent accidentally writing to the root device
- bind\_ip 0.0.0.0 IP Address for server to bind
- bind\_port 6000 Port for server to bind
- workers 1 Number of workers to fork
+ Option | Default | Description
+ -------------- | ------------ | ----------------------------------------------------------------------------------------------
+ swift\_dir | /etc/swift | Swift configuration directory
+ devices | /srv/node | Mount parent directory where devices are mounted
+ mount\_check | true | Whether or not check if the devices are mounted to prevent accidentally writing to the root device
+ bind\_ip | 0.0.0.0 | IP Address for server to bind
+ bind\_port | 6000 | Port for server to bind
+ workers | 1 | Number of workers to fork
: object-server.conf Default Options in the [DEFAULT] section
- Option Default Description
- ---------------------- --------------- ----------------------------------------------------------------------------------------------------
- use paste.deploy entry point for the object server. For most cases, this should be `egg:swift#object`.
- log\_name object-server log name used when logging
- log\_facility LOG\_LOCAL0 Syslog log facility
- log\_level INFO Logging level
- log\_requests True Whether or not to log each request
- user swift swift user
- node\_timeout 3 Request timeout to external services
- conn\_timeout 0.5 Connection timeout to external services
- network\_chunk\_size 65536 Size of chunks to read or write over the network
- disk\_chunk\_size 65536 Size of chunks to read or write to disk
- max\_upload\_time 65536 Maximum time allowed to upload an object
- slow 0 If \> 0, Minimum time in seconds for a `PUT` or `DELETE` request to complete
+ Option | Default | Description
+ ---------------------- | --------------- | ------------
+ use | | paste.deploy entry point for the object server. For most cases, this should be `egg:swift#object`.
+ log\_name | object-server | log name used when logging
+ log\_facility | LOG\_LOCAL0 | Syslog log facility
+ log\_level | INFO | Logging level
+ log\_requests | True | Whether or not to log each request
+ user | swift | swift user
+ node\_timeout | 3 | Request timeout to external services
+ conn\_timeout | 0.5 | Connection timeout to external services
+ network\_chunk\_size | 65536 | Size of chunks to read or write over the network
+ disk\_chunk\_size | 65536 | Size of chunks to read or write to disk
+ max\_upload\_time | 65536 | Maximum time allowed to upload an object
+ slow | 0 | If \> 0, Minimum time in seconds for a `PUT` or `DELETE` request to complete
: object-server.conf Server Options in the [object-server] section
-Configuring Container Server
-----------------------------
+##Configuring Container Server
The Container Server’s primary job is to handle listings of objects. The
listing is done by querying the GlusterFS mount point with path. This
@@ -430,32 +393,31 @@ The configurable options pertaining to container server are stored in
The following are the configurable options:
- Option Default Description
- -------------- ------------ ----------------------------------------------------------------------------------------------------
- swift\_dir /etc/swift Swift configuration directory
- devices /srv/node Mount parent directory where devices are mounted
- mount\_check true Whether or not check if the devices are mounted to prevent accidentally writing to the root device
- bind\_ip 0.0.0.0 IP Address for server to bind
- bind\_port 6001 Port for server to bind
- workers 1 Number of workers to fork
- user swift Swift user
+ Option | Default | Description
+ -------------- | ------------ | ------------
+ swift\_dir | /etc/swift | Swift configuration directory
+ devices | /srv/node | Mount parent directory where devices are mounted
+ mount\_check | true | Whether or not check if the devices are mounted to prevent accidentally writing to the root device
+ bind\_ip | 0.0.0.0 | IP Address for server to bind
+ bind\_port | 6001 | Port for server to bind
+ workers | 1 | Number of workers to fork
+ user | swift | Swift user
: container-server.conf Default Options in the [DEFAULT] section
- Option Default Description
- --------------- ------------------ ----------------------------------------------------------------------------------------------------------
- use paste.deploy entry point for the container server. For most cases, this should be `egg:swift#container`.
- log\_name container-server Label used when logging
- log\_facility LOG\_LOCAL0 Syslog log facility
- log\_level INFO Logging level
- node\_timeout 3 Request timeout to external services
- conn\_timeout 0.5 Connection timeout to external services
+ Option | Default | Description
+ --------------- | ------------------ | ------------
+ use | | paste.deploy entry point for the container server. For most cases, this should be `egg:swift#container`.
+ log\_name | container-server | Label used when logging
+ log\_facility | LOG\_LOCAL0 | Syslog log facility
+ log\_level | INFO | Logging level
+ node\_timeout | 3 | Request timeout to external services
+ conn\_timeout | 0.5 | Connection timeout to external services
: container-server.conf Server Options in the [container-server]
section
-Configuring Account Server
---------------------------
+##Configuring Account Server
The Account Server is very similar to the Container Server, except that
it is responsible for listing of containers rather than objects. In UFO,
@@ -489,29 +451,28 @@ The configurable options pertaining to account server are stored in
The following are the configurable options:
- Option Default Description
- -------------- ------------ ----------------------------------------------------------------------------------------------------
- swift\_dir /etc/swift Swift configuration directory
- devices /srv/node mount parent directory where devices are mounted
- mount\_check true Whether or not check if the devices are mounted to prevent accidentally writing to the root device
- bind\_ip 0.0.0.0 IP Address for server to bind
- bind\_port 6002 Port for server to bind
- workers 1 Number of workers to fork
- user swift Swift user
+ Option | Default | Description
+ -------------- | ------------ | ---------------------------
+ swift\_dir | /etc/swift | Swift configuration directory
+ devices | /srv/node | Parent directory where devices are mounted
+ mount\_check | true | Whether or not to check if the devices are mounted, to prevent accidentally writing to the root device
+ bind\_ip | 0.0.0.0 | IP Address for server to bind
+ bind\_port | 6002 | Port for server to bind
+ workers | 1 | Number of workers to fork
+ user | swift | Swift user
: account-server.conf Default Options in the [DEFAULT] section
- Option Default Description
- --------------- ---------------- ----------------------------------------------------------------------------------------------------------
- use paste.deploy entry point for the container server. For most cases, this should be `egg:swift#container`.
- log\_name account-server Label used when logging
- log\_facility LOG\_LOCAL0 Syslog log facility
- log\_level INFO Logging level
+ Option | Default | Description
+ --------------- | ---------------- | ---------------------------
+ use | | paste.deploy entry point for the account server. For most cases, this should be `egg:swift#account`.
+ log\_name | account-server | Label used when logging
+ log\_facility | LOG\_LOCAL0 | Syslog log facility
+ log\_level | INFO | Logging level
: account-server.conf Server Options in the [account-server] section
-Starting and Stopping Server
-----------------------------
+##Starting and Stopping Server
You must start the server manually when the system reboots and whenever you
update/modify the configuration files.
@@ -524,16 +485,14 @@ update/modify the configuration files.
`# swift_init main stop`
-Working with Unified File and Object Storage
-============================================
+##Working with Unified File and Object Storage
This section describes the REST API for administering and managing
Object Storage. All requests will be directed to the host and URL
described in the `X-Storage-URL` HTTP header obtained during successful
authentication.
-Configuring Authenticated Access
---------------------------------
+###Configuring Authenticated Access
Authentication is the process of proving identity to the system. To use
the REST interface, you must obtain an authorization token using GET
@@ -581,8 +540,7 @@ the headers of the response.
>
> The authentication tokens are valid for a 24-hour period.
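The token exchange described above can be sketched with curl. The endpoint path (`/auth/v1.0`) and the `X-Auth-User`/`X-Auth-Key` header names follow Swift's tempauth convention and are assumptions here, as are the host and credentials:

```shell
# Hypothetical token request (tempauth-style); host, user, and key are
# placeholders. -v prints the response headers, which carry
# X-Storage-URL and X-Auth-Token on success.
AUTH_URL="https://example.storage.com:443/auth/v1.0"
AUTH_CMD="curl -v -H 'X-Auth-User: test:tester' -H 'X-Auth-Key: testing' $AUTH_URL -k"
echo "$AUTH_CMD"
```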
-Working with Accounts
----------------------
+##Working with Accounts
This section describes the list of operations you can perform at the
account level of the URL.
@@ -593,11 +551,11 @@ You can list the objects of a specific container, or all containers, as
needed, using the GET command. You can use the following optional parameters
with a GET request to refine the results:
- Parameter Description
- ----------- --------------------------------------------------------------------------
- limit Limits the number of results to at most *n* value.
- marker Returns object names greater in value than the specified marker.
- format Specify either json or xml to return the respective serialized response.
+ Parameter | Description
+ ----------- | --------------------------------------------------------------------------
+ limit       | Limits the number of results to at most *n*.
+ marker | Returns object names greater in value than the specified marker.
+ format | Specify either json or xml to return the respective serialized response.
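As a sketch, the parameters above are ordinary query-string arguments appended to the storage URL; the host and account below are placeholders borrowed from the surrounding curl examples:

```shell
# Build a container listing URL for the account, combining the optional
# parameters; the request itself would use the token obtained at
# authentication.
BASE_URL="https://example.storage.com:443/v1/AUTH_test"
LIST_URL="${BASE_URL}?limit=10&marker=images&format=json"
echo "$LIST_URL"
# e.g.  curl -X GET -H 'X-Auth-Token: <token>' "$LIST_URL" -k
```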
**To display container information**
@@ -660,8 +618,7 @@ containers and the total bytes stored in the account.
AUTH_tkde3ad38b087b49bbbac0494f7600a554'
https://example.storage.com:443/v1/AUTH_test -k
-Working with Containers
------------------------
+##Working with Containers
This section describes the list of operations you can perform at the
container level of the URL.
@@ -706,14 +663,14 @@ You can list the objects of a container using the GET command. You can use
the following optional parameters with a GET request to refine the
results:
- Parameter Description
- ----------- --------------------------------------------------------------------------------------------------------------
- limit Limits the number of results to at most *n* value.
- marker Returns object names greater in value than the specified marker.
- prefix Displays the results limited to object names beginning with the substring x. beginning with the substring x.
- path Returns the object names nested in the pseudo path.
- format Specify either json or xml to return the respective serialized response.
- delimiter Returns all the object names nested in the container.
+ Parameter | Description
+ ----------- | --------------------------------------------------------------------------------------------------------------
+ limit       | Limits the number of results to at most *n*.
+ marker | Returns object names greater in value than the specified marker.
+ prefix      | Limits the results to object names beginning with the substring *x*.
+ path | Returns the object names nested in the pseudo path.
+ format | Specify either json or xml to return the respective serialized response.
+ delimiter | Returns all the object names nested in the container.
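As with account listings, these parameters combine in the query string. A sketch with an invented container and pseudo-path:

```shell
# List only objects under the hypothetical pseudo-path "photos/",
# one level deep, in the images container; host and token as in the
# curl examples in this guide.
BASE_URL="https://example.storage.com:443/v1/AUTH_test/images"
LIST_URL="${BASE_URL}?prefix=photos/&delimiter=/"
echo "$LIST_URL"
```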
To display objects of a container
@@ -896,8 +853,7 @@ container using cURL (for the above example), run the following command:
https://example.storage.com:443/v1/AUTH_test/images
-H 'X-Container-Read: .r:*' -k
-Working with Objects
---------------------
+##Working with Objects
An object represents the data and any metadata for the files stored in
the system. Through the REST interface, metadata for an object can be
diff --git a/doc/admin-guide/en-US/markdown/admin_commandref.md b/doc/admin-guide/en-US/markdown/admin_commandref.md
deleted file mode 100644
index 4ff05f4eff2..00000000000
--- a/doc/admin-guide/en-US/markdown/admin_commandref.md
+++ /dev/null
@@ -1,180 +0,0 @@
-Command Reference
-=================
-
-This section describes the available commands and includes the following
-section:
-
-- gluster Command
-
- Gluster Console Manager (command line interpreter)
-
-- glusterd Daemon
-
- Gluster elastic volume management daemon
-
-gluster Command
-===============
-
-**NAME**
-
-gluster - Gluster Console Manager (command line interpreter)
-
-**SYNOPSIS**
-
-To run the program and display the gluster prompt:
-
-**gluster**
-
-To specify a command directly: gluster [COMMANDS] [OPTIONS]
-
-**DESCRIPTION**
-
-The Gluster Console Manager is a command line utility for elastic volume
-management. You can run the gluster command on any export server. The
-command enables administrators to perform cloud operations such as
-creating, expanding, shrinking, rebalancing, and migrating volumes
-without needing to schedule server downtime.
-
-**COMMANDS**
-
- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
- Command Description
- ---------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
- **Volume**
-
- volume info [all | VOLNAME] Displays information about all volumes, or the specified volume.
-
- volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK ... Creates a new volume of the specified type using the specified bricks and transport type (the default transport type is tcp).
-
- volume delete VOLNAME Deletes the specified volume.
-
- volume start VOLNAME Starts the specified volume.
-
- volume stop VOLNAME [force] Stops the specified volume.
-
- volume rename VOLNAME NEW-VOLNAME Renames the specified volume.
-
- volume help Displays help for the volume command.
-
- **Brick**
-
- volume add-brick VOLNAME NEW-BRICK ... Adds the specified brick to the specified volume.
-
- volume replace-brick VOLNAME (BRICK NEW-BRICK) start | pause | abort | status Replaces the specified brick.
-
- volume remove-brick VOLNAME [(replica COUNT)|(stripe COUNT)] BRICK ... Removes the specified brick from the specified volume.
-
- **Rebalance**
-
- volume rebalance VOLNAME start Starts rebalancing the specified volume.
-
- volume rebalance VOLNAME stop Stops rebalancing the specified volume.
-
- volume rebalance VOLNAME status Displays the rebalance status of the specified volume.
-
- **Log**
-
- volume log filename VOLNAME [BRICK] DIRECTORY Sets the log directory for the corresponding volume/brick.
-
- volume log rotate VOLNAME [BRICK] Rotates the log file for corresponding volume/brick.
-
- volume log locate VOLNAME [BRICK] Locates the log file for corresponding volume/brick.
-
- **Peer**
-
- peer probe HOSTNAME Probes the specified peer.
-
- peer detach HOSTNAME Detaches the specified peer.
-
- peer status Displays the status of peers.
-
- peer help Displays help for the peer command.
-
- **Geo-replication**
-
- volume geo-replication MASTER SLAVE start Start geo-replication between the hosts specified by MASTER and SLAVE. You can specify a local master volume as :VOLNAME.
-
- You can specify a local slave volume as :VOLUME and a local slave directory as /DIRECTORY/SUB-DIRECTORY. You can specify a remote slave volume as DOMAIN::VOLNAME and a remote slave directory as DOMAIN:/DIRECTORY/SUB-DIRECTORY.
-
- volume geo-replication MASTER SLAVE stop Stop geo-replication between the hosts specified by MASTER and SLAVE. You can specify a local master volume as :VOLNAME and a local master directory as /DIRECTORY/SUB-DIRECTORY.
-
- You can specify a local slave volume as :VOLNAME and a local slave directory as /DIRECTORY/SUB-DIRECTORY. You can specify a remote slave volume as DOMAIN::VOLNAME and a remote slave directory as DOMAIN:/DIRECTORY/SUB-DIRECTORY.
-
- volume geo-replication MASTER SLAVE config [options] Configure geo-replication options between the hosts specified by MASTER and SLAVE.
-
- gluster-command COMMAND The path where the gluster command is installed.
-
- gluster-log-level LOGFILELEVEL The log level for gluster processes.
-
- log-file LOGFILE The path to the geo-replication log file.
-
- log-level LOGFILELEVEL The log level for geo-replication.
-
- remote-gsyncd COMMAND The path where the gsyncd binary is installed on the remote machine.
-
- ssh-command COMMAND The ssh command to use to connect to the remote machine (the default is ssh).
-
- rsync-command COMMAND The rsync command to use for synchronizing the files (the default is rsync).
-
- volume\_id= UID The command to delete the existing master UID for the intermediate/slave node.
-
- timeout SECONDS The timeout period.
-
- sync-jobs N The number of simultaneous files/directories that can be synchronized.
-
- ignore-deletes If this option is set to 1, a file deleted on master will not trigger a delete operation on the slave. Hence, the slave will remain as a superset of the master and can be used to recover the master in case of crash and/or accidental delete.
-
- **Other**
-
- help Display the command options.
-
- quit Exit the gluster command line interface.
- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-
-**FILES**
-
-/var/lib/glusterd/\*
-
-**SEE ALSO**
-
-fusermount(1), mount.glusterfs(8), glusterfs-volgen(8), glusterfs(8),
-glusterd(8)
-
-glusterd Daemon
-===============
-
-**NAME**
-
-glusterd - Gluster elastic volume management daemon
-
-**SYNOPSIS**
-
-glusterd [OPTION...]
-
-**DESCRIPTION**
-
-The glusterd daemon is used for elastic volume management. The daemon
-must be run on all export servers.
-
-**OPTIONS**
-
- Option Description
- ----------------------------------- ----------------------------------------------------------------------------------------------------------------
- **Basic**
- -l=LOGFILE, --log-file=LOGFILE Files to use for logging (the default is /usr/local/var/log/glusterfs/glusterfs.log).
- -L=LOGLEVEL, --log-level=LOGLEVEL Logging severity. Valid options are TRACE, DEBUG, INFO, WARNING, ERROR and CRITICAL (the default is INFO).
- --debug Runs the program in debug mode. This option sets --no-daemon, --log-level to DEBUG, and --log-file to console.
- -N, --no-daemon Runs the program in the foreground.
- **Miscellaneous**
- -?, --help Displays this help.
- --usage Displays a short usage message.
- -V, --version Prints the program version.
-
-**FILES**
-
-/var/lib/glusterd/\*
-
-**SEE ALSO**
-
-fusermount(1), mount.glusterfs(8), glusterfs-volgen(8), glusterfs(8),
-gluster(8)
diff --git a/doc/admin-guide/en-US/markdown/admin_console.md b/doc/admin-guide/en-US/markdown/admin_console.md
index 9b69de02d3b..126b7e2064f 100644
--- a/doc/admin-guide/en-US/markdown/admin_console.md
+++ b/doc/admin-guide/en-US/markdown/admin_console.md
@@ -1,5 +1,4 @@
-Using the Gluster Console Manager – Command Line Utility
-========================================================
+##Using the Gluster Console Manager – Command Line Utility
The Gluster Console Manager is a single command line utility that
simplifies configuration and management of your storage environment. The
@@ -18,7 +17,7 @@ You can also use the commands to create scripts for automation, as well
as use the commands as an API to allow integration with third-party
applications.
-**Running the Gluster Console Manager**
+###Running the Gluster Console Manager
You can run the Gluster Console Manager on any GlusterFS server either
by invoking the commands or by running the Gluster CLI in interactive
diff --git a/doc/admin-guide/en-US/markdown/admin_directory_Quota.md b/doc/admin-guide/en-US/markdown/admin_directory_Quota.md
index 09c757781bd..21e42c66902 100644
--- a/doc/admin-guide/en-US/markdown/admin_directory_Quota.md
+++ b/doc/admin-guide/en-US/markdown/admin_directory_Quota.md
@@ -1,5 +1,4 @@
-Managing Directory Quota
-========================
+#Managing Directory Quota
Directory quotas in GlusterFS allow you to set limits on usage of disk
space by directories or volumes. The storage administrators can control
@@ -19,9 +18,8 @@ the storage for the users depending on their role in the organization.
You can set the quota at the following levels:
-- Directory level – limits the usage at the directory level
-
-- Volume level – limits the usage at the volume level
+- **Directory level** – limits the usage at the directory level
+- **Volume level** – limits the usage at the volume level
> **Note**
>
@@ -29,8 +27,7 @@ You can set the quota at the following levels:
> The disk limit is enforced immediately after creating that directory.
-Enabling Quota
-==============
+##Enabling Quota
You must enable Quota to set disk limits.
@@ -45,8 +42,7 @@ You must enable Quota to set disk limits.
# gluster volume quota test-volume enable
Quota is enabled on /test-volume
-Disabling Quota
-===============
+##Disabling Quota
You can disable Quota, if needed.
@@ -61,8 +57,7 @@ You can disable Quota, if needed.
# gluster volume quota test-volume disable
Quota translator is disabled on /test-volume
-Setting or Replacing Disk Limit
-===============================
+##Setting or Replacing Disk Limit
You can create new directories in your storage environment and set the
disk limit or set disk limit for the existing directories. The directory
@@ -86,8 +81,7 @@ being treated as "/".
> In a multi-level directory hierarchy, the strictest disk limit
> will be considered for enforcement.
-Displaying Disk Limit Information
-=================================
+##Displaying Disk Limit Information
You can display disk limit information on all the directories on which
the limit is set.
@@ -119,8 +113,7 @@ the limit is set.
/Test/data 10 GB 6 GB
-Updating Memory Cache Size
-==========================
+##Updating Memory Cache Size
For performance reasons, quota caches the directory sizes on the client. You
can set a timeout indicating the maximum valid duration of directory sizes
@@ -151,8 +144,7 @@ on client side.
# gluster volume set test-volume features.quota-timeout 5
Set volume successful
-Removing Disk Limit
-===================
+##Removing Disk Limit
You can remove the disk limit that was set, if you do not need quota anymore.
diff --git a/doc/admin-guide/en-US/markdown/admin_geo-replication.md b/doc/admin-guide/en-US/markdown/admin_geo-replication.md
index 849957244a9..47a2f66283f 100644
--- a/doc/admin-guide/en-US/markdown/admin_geo-replication.md
+++ b/doc/admin-guide/en-US/markdown/admin_geo-replication.md
@@ -1,5 +1,4 @@
-Managing Geo-replication
-========================
+#Managing Geo-replication
Geo-replication provides a continuous, asynchronous, and incremental
replication service from one site to another over Local Area Networks
@@ -8,9 +7,9 @@ replication service from one site to another over Local Area Networks
Geo-replication uses a master–slave model, whereby replication and
mirroring occurs between the following partners:
-- Master – a GlusterFS volume
+- **Master** – a GlusterFS volume
-- Slave – a slave which can be of the following types:
+- **Slave** – a slave which can be of the following types:
- A local directory which can be represented as a file URL like
`file:///path/to/dir`. You can use the shortened form, for example,
@@ -34,37 +33,24 @@ This section introduces Geo-replication, illustrates the various
deployment scenarios, and explains how to configure the system to
provide replication and mirroring in your environment.
-Replicated Volumes vs Geo-replication
-=====================================
+##Replicated Volumes vs Geo-replication
The following table lists the difference between replicated volumes and
geo-replication:
- Replicated Volumes Geo-replication
- --------------------------------------------------------------------------------------- -----------------------------------------------------------------------------------------------------------------
- Mirrors data across clusters Mirrors data across geographically distributed clusters
- Provides high-availability Ensures backing up of data for disaster recovery
- Synchronous replication (each and every file operation is sent across all the bricks) Asynchronous replication (checks for the changes in files periodically and syncs them on detecting differences)
+ Replicated Volumes | Geo-replication
+ --- | ---
+ Mirrors data across clusters | Mirrors data across geographically distributed clusters
+ Provides high availability | Ensures backing up of data for disaster recovery
+ Synchronous replication (each and every file operation is sent across all the bricks) | Asynchronous replication (checks for the changes in files periodically and syncs them on detecting differences)
-Preparing to Deploy Geo-replication
-===================================
+##Preparing to Deploy Geo-replication
This section provides an overview of the Geo-replication deployment
scenarios, describes how you can check the minimum system requirements,
and explores common deployment scenarios.
-- ?
-
-- ?
-
-- ?
-
-- ?
-
-- ?
-
-Exploring Geo-replication Deployment Scenarios
-----------------------------------------------
+##Exploring Geo-replication Deployment Scenarios
Geo-replication provides an incremental replication service over Local
Area Networks (LANs), Wide Area Network (WANs), and across the Internet.
@@ -72,11 +58,8 @@ This section illustrates the most common deployment scenarios for
Geo-replication, including the following:
- Geo-replication over LAN
-
- Geo-replication over WAN
-
- Geo-replication over the Internet
-
- Multi-site cascading Geo-replication
**Geo-replication over LAN**
@@ -106,22 +89,15 @@ across multiple sites.
![ Multi-site cascading Geo-replication ][]
-Geo-replication Deployment Overview
------------------------------------
+##Geo-replication Deployment Overview
Deploying Geo-replication involves the following steps:
1. Verify that your environment matches the minimum system requirements.
- For more information, see ?.
-
-2. Determine the appropriate deployment scenario. For more information,
- see ?.
+2. Determine the appropriate deployment scenario.
+3. Start Geo-replication on master and slave systems, as required.
-3. Start Geo-replication on master and slave systems, as required. For
- more information, see ?.
-
-Checking Geo-replication Minimum Requirements
----------------------------------------------
+##Checking Geo-replication Minimum Requirements
Before deploying GlusterFS Geo-replication, verify that your systems
match the minimum requirements.
@@ -129,17 +105,16 @@ match the minimum requirements.
The following table outlines the minimum requirements for both master
and slave nodes within your environment:
- Component Master Slave
- ------------------------ --------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
- Operating System GNU/Linux GNU/Linux
- Filesystem GlusterFS 3.2 or higher GlusterFS 3.2 or higher (GlusterFS needs to be installed, but does not need to be running), ext3, ext4, or XFS (any other POSIX compliant file system would work, but has not been tested extensively)
- Python Python 2.4 (with ctypes external module), or Python 2.5 (or higher) Python 2.4 (with ctypes external module), or Python 2.5 (or higher)
- Secure shell OpenSSH version 4.0 (or higher) SSH2-compliant daemon
- Remote synchronization rsync 3.0.7 or higher rsync 3.0.7 or higher
- FUSE GlusterFS supported versions GlusterFS supported versions
+ Component | Master | Slave
+ --- | --- | ---
+ Operating System | GNU/Linux | GNU/Linux
+ Filesystem | GlusterFS 3.2 or higher | GlusterFS 3.2 or higher (GlusterFS needs to be installed, but does not need to be running), ext3, ext4, or XFS (any other POSIX compliant file system would work, but has not been tested extensively)
+ Python | Python 2.4 (with ctypes external module), or Python 2.5 (or higher) | Python 2.4 (with ctypes external module), or Python 2.5 (or higher)
+ Secure shell | OpenSSH version 4.0 (or higher) | SSH2-compliant daemon
+ Remote synchronization | rsync 3.0.7 or higher | rsync 3.0.7 or higher
+ FUSE | GlusterFS supported versions | GlusterFS supported versions
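The version minimums in the table can be checked with a small helper; this sketch relies on GNU coreutils' `sort -V` and is not part of the guide's tooling:

```shell
# version_ge CHECKED MINIMUM: succeeds when $1 >= $2, using sort -V
# (GNU coreutils) for version-aware comparison.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}
version_ge 3.0.9 3.0.7 && echo "3.0.9 satisfies the 3.0.7 minimum"
```

On a real node one would feed it the installed version, for example `version_ge "$(rsync --version | awk 'NR==1{print $3}')" 3.0.7`.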
-Setting Up the Environment for Geo-replication
-----------------------------------------------
+##Setting Up the Environment for Geo-replication
**Time Synchronization**
@@ -172,8 +147,7 @@ geo-replication Start command will be issued) and the remote machine
`# ssh-copy-id -i /var/lib/glusterd/geo-replication/secret.pem.pub @`
-Setting Up the Environment for a Secure Geo-replication Slave
--------------------------------------------------------------
+##Setting Up the Environment for a Secure Geo-replication Slave
You can configure a secure slave using SSH so that the master is granted
restricted access. With GlusterFS, you need not specify configuration
@@ -366,25 +340,13 @@ following command:
`# gluster volume geo-replication '*' config allow-network ::1,127.0.0.1`
-Starting Geo-replication
-========================
+##Starting Geo-replication
This section describes how to configure and start Gluster
Geo-replication in your storage environment, and verify that it is
functioning correctly.
-- ?
-
-- ?
-
-- ?
-
-- ?
-
-- ?
-
-Starting Geo-replication
-------------------------
+###Starting Geo-replication
To start Gluster Geo-replication
@@ -401,10 +363,9 @@ To start Gluster Geo-replication
> **Note**
>
> You may need to configure the service before starting Gluster
- > Geo-replication. For more information, see ?.
+ > Geo-replication.
-Verifying Successful Deployment
--------------------------------
+###Verifying Successful Deployment
You can use the gluster command to verify the status of Gluster
Geo-replication in your environment.
@@ -425,8 +386,7 @@ Geo-replication in your environment.
______ ______________________________ ____________
Volume1 root@example.com:/data/remote_dir Starting....
-Displaying Geo-replication Status Information
----------------------------------------------
+###Displaying Geo-replication Status Information
You can display status information about a specific geo-replication
master session, or a particular master-slave session, or all
@@ -480,15 +440,13 @@ geo-replication sessions, as needed.
- **OK**: The geo-replication session is in a stable state.
- **Faulty**: The geo-replication session has witnessed some
- abnormality and the situation has to be investigated further. For
- further information, see ? section.
+ abnormality and the situation has to be investigated further.
- **Corrupt**: The monitor thread which is monitoring the
geo-replication session has died. This situation should not occur
- normally, if it persists contact Red Hat Support[][1].
+ normally.
-Configuring Geo-replication
----------------------------
+##Configuring Geo-replication
To configure Gluster Geo-replication
@@ -496,16 +454,13 @@ To configure Gluster Geo-replication
`# gluster volume geo-replication config [options]`
- For more information about the options, see ?.
-
For example:
To view the list of all option/value pairs, use the following command:
`# gluster volume geo-replication Volume1 example.com:/data/remote_dir config`
-Stopping Geo-replication
-------------------------
+##Stopping Geo-replication
You can use the gluster command to stop Gluster Geo-replication (syncing
of data from Master to Slave) in your environment.
@@ -522,10 +477,7 @@ of data from Master to Slave) in your environment.
Stopping geo-replication session between Volume1 and
example.com:/data/remote_dir has been successful
- See ? for more information about the gluster command.
-
-Restoring Data from the Slave
-=============================
+##Restoring Data from the Slave
You can restore data from the slave to the master volume, whenever the
master volume becomes faulty for reasons like hardware failure.
@@ -687,15 +639,13 @@ Run the following command on slave (example.com):
Starting geo-replication session between Volume1 &
example.com:/data/remote_dir has been successful
-Best Practices
-==============
+##Best Practices
**Manually Setting Time**
If you have to change the time on your bricks manually, then you must
-set uniform time on all bricks. This avoids the out-of-time sync issue
-described in ?. Setting time backward corrupts the geo-replication
-index, so the recommended way to set the time manually is:
+set uniform time on all bricks. Setting time backward corrupts the
+geo-replication index, so the recommended way to set the time manually is:
1. Stop geo-replication between the master and slave using the
following command:
@@ -730,9 +680,9 @@ machine / chroot/container type solution) by the administrator to run
the geo-replication slave in it. Enhancement in this regard will be
available in a follow-up minor release.
- [ Geo-replication over LAN ]: images/Geo-Rep_LAN.png
- [ Geo-replication over WAN ]: images/Geo-Rep_WAN.png
- [ Geo-replication over Internet ]: images/Geo-Rep03_Internet.png
- [ Multi-site cascading Geo-replication ]: images/Geo-Rep04_Cascading.png
+ [ Geo-replication over LAN ]: ../images/Geo-Rep_LAN.png
+ [ Geo-replication over WAN ]: ../images/Geo-Rep_WAN.png
+ [ Geo-replication over Internet ]: ../images/Geo-Rep03_Internet.png
+ [ Multi-site cascading Geo-replication ]: ../images/Geo-Rep04_Cascading.png
[]: http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Migration_Planning_Guide/ch04s07.html
diff --git a/doc/admin-guide/en-US/markdown/admin_managing_volumes.md b/doc/admin-guide/en-US/markdown/admin_managing_volumes.md
index 6c06e27a0a2..f59134b8005 100644
--- a/doc/admin-guide/en-US/markdown/admin_managing_volumes.md
+++ b/doc/admin-guide/en-US/markdown/admin_managing_volumes.md
@@ -1,167 +1,104 @@
-Managing GlusterFS Volumes
-==========================
+#Managing GlusterFS Volumes
This section describes how to perform common GlusterFS management
operations, including the following:
-- ?
+- [Tuning Volume Options](#tuning-options)
+- [Expanding Volumes](#expanding-volumes)
+- [Shrinking Volumes](#shrinking-volumes)
+- [Migrating Volumes](#migrating-volumes)
+- [Rebalancing Volumes](#rebalancing-volumes)
+- [Stopping Volumes](#stopping-volumes)
+- [Deleting Volumes](#deleting-volumes)
+- [Triggering Self-Heal on Replicate](#self-heal)
-- ?
-
-- ?
-
-- ?
-
-- ?
-
-- ?
-
-- ?
-
-- ?
-
-Tuning Volume Options
-=====================
+<a name="tuning-options" />
+##Tuning Volume Options
You can tune volume options, as needed, while the cluster is online and
available.
> **Note**
>
-> Red Hat recommends you to set server.allow-insecure option to ON if
+> It is recommended that you set the server.allow-insecure option to ON if
> there are too many bricks in each volume or if there are too many
> services which have already utilized all the privileged ports in the
> system. Turning this option ON allows ports to accept/reject messages
> from insecure ports. So, use this option only if your deployment
> requires it.
-To tune volume options
-
-- Tune volume options using the following command:
+Tune volume options using the following command:
`# gluster volume set `
- For example, to specify the performance cache size for test-volume:
-
- # gluster volume set test-volume performance.cache-size 256MB
- Set volume successful
-
- The following table lists the Volume options along with its
- description and default value:
-
- > **Note**
- >
- > The default options given here are subject to modification at any
- > given time and may not be the same for all versions.
-
- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
- Option Description Default Value Available Options
- -------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------- ---------------------------------------------------------------------------------------
- auth.allow IP addresses of the clients which should be allowed to access the volume. \* (allow all) Valid IP address which includes wild card patterns including \*, such as 192.168.1.\*
-
- auth.reject IP addresses of the clients which should be denied to access the volume. NONE (reject none) Valid IP address which includes wild card patterns including \*, such as 192.168.2.\*
-
- client.grace-timeout Specifies the duration for the lock state to be maintained on the client after a network disconnection. 10 10 - 1800 secs
-
- cluster.self-heal-window-size Specifies the maximum number of blocks per file on which self-heal would happen simultaneously. 16 0 - 1025 blocks
-
- cluster.data-self-heal-algorithm Specifies the type of self-heal. If you set the option as "full", the entire file is copied from source to destinations. If the option is set to "diff" the file blocks that are not in sync are copied to destinations. Reset uses a heuristic model. If the file does not exist on one of the subvolumes, or a zero-byte file exists (created by entry self-heal) the entire content has to be copied anyway, so there is no benefit from using the "diff" algorithm. If the file size is about the same as page size, the entire file can be read and written with a few operations, which will be faster than "diff" which has to read checksums and then read and write. reset full | diff | reset
-
- cluster.min-free-disk Specifies the percentage of disk space that must be kept free. Might be useful for non-uniform bricks. 10% Percentage of required minimum free disk space
-
- cluster.stripe-block-size Specifies the size of the stripe unit that will be read from or written to. 128 KB (for all files) size in bytes
-
- cluster.self-heal-daemon Allows you to turn-off proactive self-heal on replicated volumes. on On | Off
-
- cluster.ensure-durability This option makes sure the data/metadata is durable across abrupt shutdown of the brick. on On | Off
-
- diagnostics.brick-log-level Changes the log-level of the bricks. INFO DEBUG|WARNING|ERROR|CRITICAL|NONE|TRACE
-
- diagnostics.client-log-level Changes the log-level of the clients. INFO DEBUG|WARNING|ERROR|CRITICAL|NONE|TRACE
-
- diagnostics.latency-measurement Statistics related to the latency of each operation would be tracked. off On | Off
-
- diagnostics.dump-fd-stats Statistics related to file-operations would be tracked. off On | Off
-
- feature.read-only Enables you to mount the entire volume as read-only for all the clients (including NFS clients) accessing it. off On | Off
-
- features.lock-heal Enables self-healing of locks when the network disconnects. on On | Off
-
- features.quota-timeout For performance reasons, quota caches the directory sizes on client. You can set timeout indicating the maximum duration of directory sizes in cache, from the time they are populated, during which they are considered valid. 0 0 - 3600 secs
-
- geo-replication.indexing Use this option to automatically sync the changes in the filesystem from Master to Slave. off On | Off
-
- network.frame-timeout The time frame after which the operation has to be declared as dead, if the server does not respond for a particular operation. 1800 (30 mins) 1800 secs
-
- network.ping-timeout The time duration for which the client waits to check if the server is responsive. When a ping timeout happens, there is a network disconnect between the client and server. All resources held by server on behalf of the client get cleaned up. When a reconnection happens, all resources will need to be re-acquired before the client can resume its operations on the server. Additionally, the locks will be acquired and the lock tables updated. 42 Secs 42 Secs
- This reconnect is a very expensive operation and should be avoided.
+For example, to specify the performance cache size for test-volume:
- nfs.enable-ino32 For 32-bit nfs clients or applications that do not support 64-bit inode numbers or large files, use this option from the CLI to make Gluster NFS return 32-bit inode numbers instead of 64-bit inode numbers. Applications that will benefit are those that were either: off On | Off
- \* Built 32-bit and run on 32-bit machines.
-
- \* Built 32-bit on 64-bit systems.
-
- \* Built 64-bit but use a library built 32-bit, especially relevant for python and perl scripts.
-
- Either of the conditions above can lead to application on Linux NFS clients failing with "Invalid argument" or "Value too large for defined data type" errors.
+ # gluster volume set test-volume performance.cache-size 256MB
+ Set volume successful
- nfs.volume-access Set the access type for the specified sub-volume. read-write read-write|read-only
+The following table lists the volume options along with their
+descriptions and default values:
- nfs.trusted-write If there is an UNSTABLE write from the client, STABLE flag will be returned to force the client to not send a COMMIT request. off On | Off
- In some environments, combined with a replicated GlusterFS setup, this option can improve write performance. This flag allows users to trust Gluster replication logic to sync data to the disks and recover when required. COMMIT requests if received will be handled in a default manner by fsyncing. STABLE writes are still handled in a sync manner.
-
- nfs.trusted-sync All writes and COMMIT requests are treated as async. This implies that no write requests are guaranteed to be on server disks when the write reply is received at the NFS client. Trusted sync includes trusted-write behavior. off On | Off
-
- nfs.export-dir This option can be used to export specified comma separated subdirectories in the volume. The path must be an absolute path. Along with path allowed list of IPs/hostname can be associated with each subdirectory. If provided connection will allowed only from these IPs. Format: \<dir\>[(hostspec[|hostspec|...])][,...]. Where hostspec can be an IP address, hostname or an IP range in CIDR notation. **Note**: Care must be taken while configuring this option as invalid entries and/or unreachable DNS servers can introduce unwanted delay in all the mount calls. No sub directory exported. Absolute path with allowed list of IP/hostname.
-
- nfs.export-volumes Enable/Disable exporting entire volumes, instead if used in conjunction with nfs3.export-dir, can allow setting up only subdirectories as exports. on On | Off
-
- nfs.rpc-auth-unix Enable/Disable the AUTH\_UNIX authentication type. This option is enabled by default for better interoperability. However, you can disable it if required. on On | Off
-
- nfs.rpc-auth-null Enable/Disable the AUTH\_NULL authentication type. It is not recommended to change the default value for this option. on On | Off
-
- nfs.rpc-auth-allow\<IP- Addresses\> Allow a comma separated list of addresses and/or hostnames to connect to the server. By default, all clients are disallowed. This allows you to define a general rule for all exported volumes. Reject All IP address or Host name
-
- nfs.rpc-auth-reject IP- Addresses Reject a comma separated list of addresses and/or hostnames from connecting to the server. By default, all connections are disallowed. This allows you to define a general rule for all exported volumes. Reject All IP address or Host name
-
- nfs.ports-insecure Allow client connections from unprivileged ports. By default only privileged ports are allowed. This is a global setting in case insecure ports are to be enabled for all exports using a single option. off On | Off
-
- nfs.addr-namelookup Turn-off name lookup for incoming client connections using this option. In some setups, the name server can take too long to reply to DNS queries resulting in timeouts of mount requests. Use this option to turn off name lookups during address authentication. Note, turning this off will prevent you from using hostnames in rpc-auth.addr.\* filters. on On | Off
-
- nfs.register-with- portmap For systems that need to run multiple NFS servers, you need to prevent more than one from registering with portmap service. Use this option to turn off portmap registration for Gluster NFS. on On | Off
-
- nfs.port \<PORT- NUMBER\> Use this option on systems that need Gluster NFS to be associated with a non-default port number. 38465- 38467
-
- nfs.disable Turn-off volume being exported by NFS off On | Off
-
- performance.write-behind-window-size Size of the per-file write-behind buffer. 1 MB Write-behind cache size
-
- performance.io-thread-count The number of threads in IO threads translator. 16 0 - 65
-
- performance.flush-behind If this option is set ON, instructs write-behind translator to perform flush in background, by returning success (or any errors, if any of previous writes were failed) to application even before flush is sent to backend filesystem. On On | Off
-
- performance.cache-max-file-size Sets the maximum file size cached by the io-cache translator. Can use the normal size descriptors of KB, MB, GB,TB or PB (for example, 6GB). Maximum size uint64. 2 \^ 64 -1 bytes size in bytes
-
- performance.cache-min-file-size Sets the minimum file size cached by the io-cache translator. Values same as "max" above. 0B size in bytes
-
- performance.cache-refresh-timeout The cached data for a file will be retained till 'cache-refresh-timeout' seconds, after which data re-validation is performed. 1 sec 0 - 61
-
- performance.cache-size Size of the read cache. 32 MB size in bytes
-
- server.allow-insecure Allow client connections from unprivileged ports. By default only privileged ports are allowed. This is a global setting in case insecure ports are to be enabled for all exports using a single option. on On | Off
-
- server.grace-timeout Specifies the duration for the lock state to be maintained on the server after a network disconnection. 10 10 - 1800 secs
-
- server.statedump-path Location of the state dump file. /tmp directory of the brick New directory path
-
- storage.health-check-interval Number of seconds between health-checks done on the filesystem that is used for the brick(s). Defaults to 30 seconds, set to 0 to disable. /tmp directory of the brick New directory path
- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-
- You can view the changed volume options using
- the` # gluster volume info ` command. For more information, see ?.
-
-Expanding Volumes
-=================
+> **Note**
+>
+> The default options given here are subject to modification at any
+> given time and may not be the same for all versions.
+
+
+Option | Description | Default Value | Available Options
+--- | --- | --- | ---
+auth.allow | IP addresses of the clients which should be allowed to access the volume. | \* (allow all) | Valid IP address which includes wild card patterns including \*, such as 192.168.1.\*
+auth.reject | IP addresses of the clients which should be denied access to the volume. | NONE (reject none) | Valid IP address which includes wild card patterns including \*, such as 192.168.2.\*
+client.grace-timeout | Specifies the duration for the lock state to be maintained on the client after a network disconnection. | 10 | 10 - 1800 secs
+cluster.self-heal-window-size | Specifies the maximum number of blocks per file on which self-heal would happen simultaneously. | 16 | 0 - 1025 blocks
+cluster.data-self-heal-algorithm | Specifies the type of self-heal. If you set the option as "full", the entire file is copied from source to destinations. If the option is set to "diff" the file blocks that are not in sync are copied to destinations. Reset uses a heuristic model. If the file does not exist on one of the subvolumes, or a zero-byte file exists (created by entry self-heal) the entire content has to be copied anyway, so there is no benefit from using the "diff" algorithm. If the file size is about the same as page size, the entire file can be read and written with a few operations, which will be faster than "diff" which has to read checksums and then read and write. | reset | full/diff/reset
+cluster.min-free-disk | Specifies the percentage of disk space that must be kept free. Might be useful for non-uniform bricks. | 10% | Percentage of required minimum free disk space
+cluster.stripe-block-size | Specifies the size of the stripe unit that will be read from or written to. | 128 KB (for all files) | size in bytes
+cluster.self-heal-daemon | Allows you to turn off proactive self-heal on replicated volumes. | On | On/Off
+cluster.ensure-durability | This option makes sure the data/metadata is durable across abrupt shutdown of the brick. | On | On/Off
+diagnostics.brick-log-level | Changes the log-level of the bricks. | INFO | DEBUG/WARNING/ERROR/CRITICAL/NONE/TRACE
+diagnostics.client-log-level | Changes the log-level of the clients. | INFO | DEBUG/WARNING/ERROR/CRITICAL/NONE/TRACE
+diagnostics.latency-measurement | Statistics related to the latency of each operation would be tracked. | Off | On/Off
+diagnostics.dump-fd-stats | Statistics related to file-operations would be tracked. | Off | On/Off
+feature.read-only | Enables you to mount the entire volume as read-only for all the clients (including NFS clients) accessing it. | Off | On/Off
+features.lock-heal | Enables self-healing of locks when the network disconnects. | On | On/Off
+features.quota-timeout | For performance reasons, quota caches the directory sizes on client. You can set timeout indicating the maximum duration of directory sizes in cache, from the time they are populated, during which they are considered valid. | 0 | 0 - 3600 secs
+geo-replication.indexing | Use this option to automatically sync the changes in the filesystem from Master to Slave. | Off | On/Off
+network.frame-timeout | The time frame after which the operation has to be declared as dead, if the server does not respond for a particular operation. | 1800 (30 mins) | 1800 secs
+network.ping-timeout | The time duration for which the client waits to check if the server is responsive. When a ping timeout happens, there is a network disconnect between the client and server. All resources held by server on behalf of the client get cleaned up. When a reconnection happens, all resources will need to be re-acquired before the client can resume its operations on the server. Additionally, the locks will be acquired and the lock tables updated. This reconnect is a very expensive operation and should be avoided. | 42 Secs | 42 Secs
+nfs.enable-ino32 | For 32-bit nfs clients or applications that do not support 64-bit inode numbers or large files, use this option from the CLI to make Gluster NFS return 32-bit inode numbers instead of 64-bit inode numbers. Applications that will benefit are those built 32-bit and run on 32-bit machines, built 32-bit on 64-bit systems, or built 64-bit but using a library built 32-bit (especially relevant for python and perl scripts). Any of these conditions can lead to the application on Linux NFS clients failing with "Invalid argument" or "Value too large for defined data type" errors. | Off | On/Off
+nfs.volume-access | Set the access type for the specified sub-volume. | read-write | read-write/read-only
+nfs.trusted-write | If there is an UNSTABLE write from the client, STABLE flag will be returned to force the client to not send a COMMIT request. In some environments, combined with a replicated GlusterFS setup, this option can improve write performance. This flag allows users to trust Gluster replication logic to sync data to the disks and recover when required. COMMIT requests if received will be handled in a default manner by fsyncing. STABLE writes are still handled in a sync manner. | Off | On/Off
+nfs.trusted-sync | All writes and COMMIT requests are treated as async. This implies that no write requests are guaranteed to be on server disks when the write reply is received at the NFS client. Trusted sync includes trusted-write behavior. | Off | On/Off
+nfs.export-dir | This option can be used to export specified comma separated subdirectories in the volume. The path must be an absolute path. Along with the path, an allowed list of IPs/hostnames can be associated with each subdirectory. If provided, connections will be allowed only from these IPs. Format: \<dir\>[(hostspec[\|hostspec\|...])][,...]. Where hostspec can be an IP address, hostname or an IP range in CIDR notation. **Note**: Care must be taken while configuring this option as invalid entries and/or unreachable DNS servers can introduce unwanted delay in all the mount calls. | No sub directory exported. | Absolute path with allowed list of IP/hostname
+nfs.export-volumes | Enable/Disable exporting entire volumes, instead if used in conjunction with nfs3.export-dir, can allow setting up only subdirectories as exports. | On | On/Off
+nfs.rpc-auth-unix | Enable/Disable the AUTH\_UNIX authentication type. This option is enabled by default for better interoperability. However, you can disable it if required. | On | On/Off
+nfs.rpc-auth-null | Enable/Disable the AUTH\_NULL authentication type. It is not recommended to change the default value for this option. | On | On/Off
+nfs.rpc-auth-allow \<IP-Addresses\> | Allow a comma separated list of addresses and/or hostnames to connect to the server. By default, all clients are disallowed. This allows you to define a general rule for all exported volumes. | Reject All | IP address or Host name
+nfs.rpc-auth-reject \<IP-Addresses\> | Reject a comma separated list of addresses and/or hostnames from connecting to the server. By default, all connections are disallowed. This allows you to define a general rule for all exported volumes. | Reject All | IP address or Host name
+nfs.ports-insecure | Allow client connections from unprivileged ports. By default only privileged ports are allowed. This is a global setting in case insecure ports are to be enabled for all exports using a single option. | Off | On/Off
+nfs.addr-namelookup | Turn-off name lookup for incoming client connections using this option. In some setups, the name server can take too long to reply to DNS queries resulting in timeouts of mount requests. Use this option to turn off name lookups during address authentication. Note, turning this off will prevent you from using hostnames in rpc-auth.addr.\* filters. | On | On/Off
+nfs.register-with-portmap | For systems that need to run multiple NFS servers, you need to prevent more than one from registering with portmap service. Use this option to turn off portmap registration for Gluster NFS. | On | On/Off
+nfs.port \<PORT-NUMBER\> | Use this option on systems that need Gluster NFS to be associated with a non-default port number. | NA | 38465-38467
+nfs.disable | Turn off volume being exported by NFS. | Off | On/Off
+performance.write-behind-window-size | Size of the per-file write-behind buffer. | 1MB | Write-behind cache size
+performance.io-thread-count | The number of threads in IO threads translator. | 16 | 0-65
+performance.flush-behind | If this option is set ON, instructs write-behind translator to perform flush in background, by returning success (or any errors, if any of previous writes were failed) to application even before flush is sent to backend filesystem. | On | On/Off
+performance.cache-max-file-size | Sets the maximum file size cached by the io-cache translator. Can use the normal size descriptors of KB, MB, GB,TB or PB (for example, 6GB). Maximum size uint64. | 2 \^ 64 -1 bytes | size in bytes
+performance.cache-min-file-size | Sets the minimum file size cached by the io-cache translator. Values same as "max" above | 0B | size in bytes
+performance.cache-refresh-timeout | The cached data for a file will be retained till 'cache-refresh-timeout' seconds, after which data re-validation is performed. | 1s | 0-61
+performance.cache-size | Size of the read cache. | 32 MB | size in bytes
+server.allow-insecure | Allow client connections from unprivileged ports. By default only privileged ports are allowed. This is a global setting in case insecure ports are to be enabled for all exports using a single option. | On | On/Off
+server.grace-timeout | Specifies the duration for the lock state to be maintained on the server after a network disconnection. | 10 | 10 - 1800 secs
+server.statedump-path | Location of the state dump file. | /tmp directory of the brick | New directory path
+storage.health-check-interval | Number of seconds between health-checks done on the filesystem that is used for the brick(s). Defaults to 30 seconds, set to 0 to disable. | 30 secs | Number of seconds (0 to disable)
+
+You can view the changed volume options using the following command:
+
+`# gluster volume info`
+
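A quick sketch of the tuning workflow above (the volume name and option values here are illustrative, not defaults):

```shell
# Set a couple of the tunables from the table on a volume.
gluster volume set test-volume performance.cache-size 256MB
gluster volume set test-volume network.ping-timeout 42

# The changed options appear under "Options Reconfigured" in volume info.
gluster volume info test-volume

# Revert a single option back to its default value.
gluster volume reset test-volume performance.cache-size
```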
+<a name="expanding-volumes" />
+##Expanding Volumes
You can expand volumes, as needed, while the cluster is online and
available. For example, you might want to add a brick to a distributed
@@ -221,8 +158,8 @@ replicated volume, increasing the capacity of the GlusterFS volume.
You can use the rebalance command as described in ?.
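The expansion described above can be sketched as follows (the server and brick names are hypothetical):

```shell
# Bring the new server into the trusted storage pool.
gluster peer probe server4

# Add its brick to the volume, growing capacity while the volume stays online.
gluster volume add-brick test-volume server4:/exp4

# Rebalance so existing data spreads onto the new brick.
gluster volume rebalance test-volume start
```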
-Shrinking Volumes
-=================
+<a name="shrinking-volumes" />
+##Shrinking Volumes
You can shrink volumes, as needed, while the cluster is online and
available. For example, you might need to remove a brick that has become
@@ -295,8 +232,8 @@ set).
You can use the rebalance command as described in ?.
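A minimal remove-brick sequence, assuming the brick to decommission is server2:/exp2 (a hypothetical name):

```shell
# Start decommissioning; data on the brick is migrated elsewhere first.
gluster volume remove-brick test-volume server2:/exp2 start

# Poll until the migration status shows completed.
gluster volume remove-brick test-volume server2:/exp2 status

# Commit the removal once migration has completed.
gluster volume remove-brick test-volume server2:/exp2 commit
```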
-Migrating Volumes
-=================
+<a name="migrating-volumes" />
+##Migrating Volumes
You can migrate the data from one brick to another, as needed, while the
cluster is online and available.
@@ -306,8 +243,6 @@ cluster is online and available.
1. Make sure the new brick, server5 in this example, is successfully
added to the cluster.
- For more information, see ?.
-
2. Migrate the data from one brick to another using the following
command:
@@ -401,8 +336,8 @@ cluster is online and available.
In the above example, previously, there were bricks; 1,2,3, and 4
and now brick 3 is replaced by brick 5.
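The brick replacement in the example can be sketched with the replace-brick command (brick paths are hypothetical):

```shell
# Begin migrating data from brick 3 to the new brick 5.
gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 start

# Check migration progress.
gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 status

# Commit once migration is complete; brick 5 replaces brick 3.
gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 commit
```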
-Rebalancing Volumes
-===================
+<a name="rebalancing-volumes" />
+##Rebalancing Volumes
After expanding or shrinking a volume (using the add-brick and
remove-brick commands respectively), you need to rebalance the data
@@ -414,15 +349,13 @@ layout and/or data.
This section describes how to rebalance GlusterFS volumes in your
storage environment, using the following common scenarios:
-- Fix Layout - Fixes the layout changes so that the files can actually
- go to newly added nodes. For more information, see ?.
+- **Fix Layout** - Fixes the layout changes so that the files can actually
+ go to newly added nodes.
-- Fix Layout and Migrate Data - Rebalances volume by fixing the layout
- changes and migrating the existing data. For more information, see
- ?.
+- **Fix Layout and Migrate Data** - Rebalances volume by fixing the layout
+ changes and migrating the existing data.
-Rebalancing Volume to Fix Layout Changes
-----------------------------------------
+###Rebalancing Volume to Fix Layout Changes
Fixing the layout is necessary because the layout structure is static
for a given directory. In a scenario where new bricks have been added to
@@ -450,8 +383,7 @@ the servers.
# gluster volume rebalance test-volume fix-layout start
Starting rebalance on volume test-volume has been successful
-Rebalancing Volume to Fix Layout and Migrate Data
--------------------------------------------------
+###Rebalancing Volume to Fix Layout and Migrate Data
After expanding or shrinking a volume (using the add-brick and
remove-brick commands respectively), you need to rebalance the data
@@ -479,14 +411,11 @@ among the servers.
# gluster volume rebalance test-volume start force
Starting rebalancing on volume test-volume has been successful
-Displaying Status of Rebalance Operation
-----------------------------------------
+###Displaying Status of Rebalance Operation
You can display the status information about rebalance volume operation,
as needed.
-**To view status of rebalance volume**
-
- Check the status of the rebalance operation, using the following
command:
@@ -520,13 +449,10 @@ as needed.
--------- ---------------- ---- ------- -----------
617c923e-6450-4065-8e33-865e28d9428f 502 1873 334 completed
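The status check shown above takes the volume name; a sketch, assuming the volume is test-volume:

```shell
gluster volume rebalance test-volume status
```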
-Stopping Rebalance Operation
-----------------------------
+###Stopping Rebalance Operation
You can stop the rebalance operation, as needed.
-**To stop rebalance**
-
- Stop the rebalance operation using the following command:
`# gluster volume rebalance stop`
@@ -539,10 +465,8 @@ You can stop the rebalance operation, as needed.
617c923e-6450-4065-8e33-865e28d9428f 59 590 244 stopped
Stopped rebalance process on volume test-volume
-Stopping Volumes
-================
-
-To stop a volume
+<a name="stopping-volumes" />
+##Stopping Volumes
1. Stop the volume using the following command:
@@ -558,10 +482,8 @@ To stop a volume
Stopping volume test-volume has been successful
-Deleting Volumes
-================
-
-To delete a volume
+<a name="" />
+##Deleting Volumes
1. Delete the volume using the following command:
@@ -577,8 +499,8 @@ To delete a volume
Deleting volume test-volume has been successful
-Triggering Self-Heal on Replicate
-=================================
+<a name="self-heal" />
+##Triggering Self-Heal on Replicate
In replicate module, previously you had to manually trigger a self-heal
when a brick goes offline and comes back online, to bring all the
diff --git a/doc/admin-guide/en-US/markdown/admin_monitoring_workload.md b/doc/admin-guide/en-US/markdown/admin_monitoring_workload.md
index 0312bd048ea..c3ac0609b99 100644
--- a/doc/admin-guide/en-US/markdown/admin_monitoring_workload.md
+++ b/doc/admin-guide/en-US/markdown/admin_monitoring_workload.md
@@ -1,5 +1,4 @@
-Monitoring your GlusterFS Workload
-==================================
+#Monitoring your GlusterFS Workload
You can monitor the GlusterFS volumes on different parameters.
Monitoring volumes helps in capacity planning and performance tuning
@@ -14,8 +13,7 @@ performance needs to be probed.
You can also perform statedump of the brick processes and nfs server
process of a volume, and also view volume status and volume information.
-Running GlusterFS Volume Profile Command
-========================================
+##Running GlusterFS Volume Profile Command
GlusterFS Volume Profile command provides an interface to get the
per-brick I/O information for each File Operation (FOP) of a volume. The
@@ -25,21 +23,17 @@ system.
This section describes how to run GlusterFS Volume Profile command by
performing the following operations:
-- ?
+- [Start Profiling](#start-profiling)
+- [Displaying the I/O Information](#displaying-io)
+- [Stop Profiling](#stop-profiling)
-- ?
-
-- ?
-
-Start Profiling
----------------
+<a name="start-profiling" />
+###Start Profiling
You must start the Profiling to view the File Operation information for
each brick.
-**To start profiling:**
-
-- Start profiling using the following command:
+To start profiling, use the following command:
`# gluster volume profile start `
@@ -52,17 +46,12 @@ When profiling on the volume is started, the following additional
options are displayed in the Volume Info:
diagnostics.count-fop-hits: on
-
diagnostics.latency-measurement: on
-Displaying the I/0 Information
-------------------------------
-
-You can view the I/O information of each brick.
-
-To display I/O information:
+<a name="displaying-io" />
+###Displaying the I/O Information
-- Display the I/O information using the following command:
+You can view the I/O information of each brick by using the following command:
`# gluster volume profile info`
@@ -117,26 +106,23 @@ For example, to see the I/O information on test-volume:
BytesWritten : 195571980
-Stop Profiling
---------------
+<a name="stop-profiling" />
+###Stop Profiling
You can stop profiling the volume, if you do not need profiling
information anymore.
-**To stop profiling**
-
-- Stop profiling using the following command:
+Stop profiling using the following command:
`# gluster volume profile stop`
- For example, to stop profiling on test-volume:
+For example, to stop profiling on test-volume:
`# gluster volume profile stop`
`Profiling stopped on test-volume`
-Running GlusterFS Volume TOP Command
-====================================
+##Running GlusterFS Volume TOP Command
GlusterFS Volume Top command allows you to view the glusterfs bricks’
performance metrics like read, write, file open calls, file read calls,
@@ -146,22 +132,16 @@ top command displays up to 100 results.
This section describes how to run and view the results for the following
GlusterFS Top commands:
-- ?
-
-- ?
-
-- ?
-
-- ?
-
-- ?
+- [Viewing Open fd Count and Maximum fd Count](#open-fd-count)
+- [Viewing Highest File Read Calls](#file-read)
+- [Viewing Highest File Write Calls](#file-write)
+- [Viewing Highest Open Calls on Directories](#open-dir)
+- [Viewing Highest Read Calls on Directory](#read-dir)
+- [Viewing List of Read Performance on each Brick](#read-perf)
+- [Viewing List of Write Performance on each Brick](#write-perf)
-- ?
-
-- ?
-
-Viewing Open fd Count and Maximum fd Count
-------------------------------------------
+<a name="open-fd-count" />
+###Viewing Open fd Count and Maximum fd Count
You can view both current open fd count (list of files that are
currently the most opened and the count) on the brick and the maximum
@@ -171,8 +151,6 @@ servers are up and running). If the brick name is not specified, then
open fd metrics of all the bricks belonging to the volume will be
displayed.
-**To view open fd count and maximum fd count:**
-
- View open fd count and maximum fd count using the following command:
`# gluster volume top open [brick ] [list-cnt ]`
@@ -221,14 +199,12 @@ displayed.
9 /clients/client8/~dmtmp/PARADOX/
STUDENTS.DB
-Viewing Highest File Read Calls
--------------------------------
+<a name="file-read" />
+###Viewing Highest File Read Calls
You can view highest read calls on each brick. If brick name is not
specified, then by default, list of 100 files will be displayed.
-**To view highest file Read calls:**
-
- View highest file Read calls using the following command:
`# gluster volume top read [brick ] [list-cnt ] `
@@ -265,15 +241,13 @@ specified, then by default, list of 100 files will be displayed.
54 /clients/client8/~dmtmp/SEED/LARGE.FIL
-Viewing Highest File Write Calls
---------------------------------
+<a name="file-write" />
+###Viewing Highest File Write Calls
You can view list of files which has highest file write calls on each
brick. If brick name is not specified, then by default, list of 100
files will be displayed.
-**To view highest file Write calls:**
-
- View highest file Write calls using the following command:
`# gluster volume top write [brick ] [list-cnt ] `
@@ -308,15 +282,13 @@ files will be displayed.
59 /clients/client3/~dmtmp/SEED/LARGE.FIL
-Viewing Highest Open Calls on Directories
------------------------------------------
+<a name="open-dir" />
+###Viewing Highest Open Calls on Directories
You can view list of files which has highest open calls on directories
of each brick. If brick name is not specified, then the metrics of all
the bricks belonging to that volume will be displayed.
-To view list of open calls on each directory
-
- View list of open calls on each directory using the following
command:
@@ -353,15 +325,13 @@ To view list of open calls on each directory
402 /clients/client4/~dmtmp
-Viewing Highest Read Calls on Directory
----------------------------------------
+<a name="read-dir" />
+###Viewing Highest Read Calls on Directory
You can view list of files which has highest directory read calls on
each brick. If brick name is not specified, then the metrics of all the
bricks belonging to that volume will be displayed.
-**To view list of highest directory read calls on each brick**
-
- View list of highest directory read calls on each brick using the
following command:
@@ -398,8 +368,8 @@ bricks belonging to that volume will be displayed.
800 /clients/client4/~dmtmp
-Viewing List of Read Performance on each Brick
-----------------------------------------------
+<a name="read-perf" />
+###Viewing List of Read Performance on each Brick
You can view the read throughput of files on each brick. If brick name
is not specified, then the metrics of all the bricks belonging to that
@@ -443,8 +413,6 @@ volume will be displayed. The output will be the read throughput.
This command initiates a dd for the specified count and block size
and measures the corresponding throughput.
-**To view list of read performance on each brick**
-
- View list of read performance on each brick using the following
command:
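As a sketch (the volume and brick names are placeholders), a read-performance query that also sets the dd block size and count might look like:

    # gluster volume top test-volume read-perf bs 256 count 1 brick server1:/export/brick1 list-cnt 10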
@@ -494,9 +462,8 @@ and measures the corresponding throughput.
2184.00 /clients/client5/~dmtmp/WORD/ -2011-01-31
BASEMACH.DOC 15:39:09.336572
-
-Viewing List of Write Performance on each Brick
------------------------------------------------
+<a name="write-perf" />
+###Viewing List of Write Performance on each Brick
You can view the write throughput of files on each brick. If the brick
name is not specified, then the metrics of all the bricks belonging to
@@ -552,14 +519,11 @@ performance on each brick:
516.00 /clients/client6/~dmtmp/ACCESS/ -2011-01-31
FASTENER.MDB 15:39:01.797317
-Displaying Volume Information
-=============================
+##Displaying Volume Information
You can display information about a specific volume, or all volumes, as
needed.
-**To display volume information**
-
- Display information about a specific volume using the following
command:
@@ -611,8 +575,7 @@ needed.
Bricks:
Brick: server:/brick6
-Performing Statedump on a Volume
-================================
+##Performing Statedump on a Volume
Statedump is a mechanism through which you can get details of all
internal variables and state of the glusterfs process at the time of
@@ -668,8 +631,7 @@ dumped:
`# gluster volume info `
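For instance, a statedump can be triggered with a command of the following shape (test-volume is a placeholder volume name):

    # gluster volume statedump test-volume

The resulting dump files are written to the location given by the server.statedump-path volume option.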
-Displaying Volume Status
-========================
+##Displaying Volume Status
You can display the status information about a specific volume, brick or
all volumes, as needed. Status information can be used to understand the
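For example, assuming a volume named test-volume:

    # gluster volume status test-volume

Appending `detail` to the command shows additional information about the bricks.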
diff --git a/doc/admin-guide/en-US/markdown/admin_setting_volumes.md b/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
index 4038523c841..455238048be 100644
--- a/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
+++ b/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
@@ -1,5 +1,4 @@
-Setting up GlusterFS Server Volumes
-===================================
+#Setting up GlusterFS Server Volumes
A volume is a logical collection of bricks where each brick is an export
directory on a server in the trusted storage pool. Most of the gluster
@@ -12,51 +11,46 @@ start it before attempting to mount it.
- Volumes of the following types can be created in your storage
environment:
- - Distributed - Distributed volumes distributes files throughout
+ - **Distributed** - Distributed volumes distribute files throughout
the bricks in the volume. You can use distributed volumes where
the requirement is to scale storage and the redundancy is either
not important or is provided by other hardware/software layers.
- For more information, see ? .
- - Replicated – Replicated volumes replicates files across bricks
+ - **Replicated** – Replicated volumes replicate files across bricks
in the volume. You can use replicated volumes in environments
- where high-availability and high-reliability are critical. For
- more information, see ?.
+ where high-availability and high-reliability are critical.
- - Striped – Striped volumes stripes data across bricks in the
+ - **Striped** – Striped volumes stripe data across bricks in the
volume. For best results, you should use striped volumes only in
- high concurrency environments accessing very large files. For
- more information, see ?.
+ high concurrency environments accessing very large files.
- - Distributed Striped - Distributed striped volumes stripe data
+ - **Distributed Striped** - Distributed striped volumes stripe data
across two or more nodes in the cluster. You should use
distributed striped volumes where the requirement is to scale
storage and in high concurrency environments accessing very
- large files is critical. For more information, see ?.
+ large files is critical.
- - Distributed Replicated - Distributed replicated volumes
+ - **Distributed Replicated** - Distributed replicated volumes
distribute files across replicated bricks in the volume. You
can use distributed replicated volumes in environments where the
requirement is to scale storage and high-reliability is
critical. Distributed replicated volumes also offer improved
- read performance in most environments. For more information, see
- ?.
+ read performance in most environments.
- - Distributed Striped Replicated – Distributed striped replicated
+ - **Distributed Striped Replicated** – Distributed striped replicated
volumes distribute striped data across replicated bricks in the
cluster. For best results, you should use distributed striped
replicated volumes in highly concurrent environments where
parallel access of very large files and performance is critical.
In this release, configuration of this volume type is supported
- only for Map Reduce workloads. For more information, see ?.
+ only for Map Reduce workloads.
- - Striped Replicated – Striped replicated volumes stripes data
+ - **Striped Replicated** – Striped replicated volumes stripe data
across replicated bricks in the cluster. For best results, you
should use striped replicated volumes in highly concurrent
environments where there is parallel access of very large files
and performance is critical. In this release, configuration of
- this volume type is supported only for Map Reduce workloads. For
- more information, see ?.
+ this volume type is supported only for Map Reduce workloads.
**To create a new volume**
@@ -71,16 +65,14 @@ start it before attempting to mount it.
Creation of test-volume has been successful
Please start the volume to access data.
-Creating Distributed Volumes
-============================
+##Creating Distributed Volumes
In a distributed volume, files are spread randomly across the bricks in
the volume. Use distributed volumes where you need to scale storage and
redundancy is either not important or is provided by other
hardware/software layers.
-> **Note**
->
+> **Note**:
> Disk/server failure in distributed volumes can result in a serious
> loss of data because directory contents are spread randomly across the
> bricks in the volume.
@@ -89,7 +81,7 @@ hardware/software layers.
**To create a distributed volume**
-1. Create a trusted storage pool as described earlier in ?.
+1. Create a trusted storage pool.
2. Create the distributed volume:
@@ -125,23 +117,19 @@ hardware/software layers.
If the transport type is not specified, *tcp* is used as the
default. You can also set additional options if required, such as
- auth.allow or auth.reject. For more information, see ?
+ auth.allow or auth.reject.
- > **Note**
- >
+ > **Note**:
> Make sure you start your volumes before you try to mount them or
- > else client operations after the mount will hang, see ? for
- > details.
+ > else client operations after the mount will hang.
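As a sketch, a four-brick distributed volume could be created and started as follows (server and directory names are placeholders):

    # gluster volume create test-volume transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
    # gluster volume start test-volume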
-Creating Replicated Volumes
-===========================
+##Creating Replicated Volumes
Replicated volumes create copies of files across multiple bricks in the
volume. You can use replicated volumes in environments where
high-availability and high-reliability are critical.
-> **Note**
->
+> **Note**:
> The number of bricks should be equal to the replica count for a
> replicated volume. To protect against server and disk failures, it is
> recommended that the bricks of the volume are from different servers.
@@ -150,7 +138,7 @@ high-availability and high-reliability are critical.
**To create a replicated volume**
-1. Create a trusted storage pool as described earlier in ?.
+1. Create a trusted storage pool.
2. Create the replicated volume:
@@ -164,23 +152,19 @@ high-availability and high-reliability are critical.
If the transport type is not specified, *tcp* is used as the
default. You can also set additional options if required, such as
- auth.allow or auth.reject. For more information, see ?
+ auth.allow or auth.reject.
- > **Note**
- >
+ > **Note**:
> Make sure you start your volumes before you try to mount them or
- > else client operations after the mount will hang, see ? for
- > details.
+ > else client operations after the mount will hang.
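A sketch of a two-way replicated volume (server and directory names are placeholders):

    # gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
    # gluster volume start test-volume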
-Creating Striped Volumes
-========================
+##Creating Striped Volumes
Striped volumes stripe data across bricks in the volume. For best
results, you should use striped volumes only in high concurrency
environments accessing very large files.
-> **Note**
->
+> **Note**:
> The number of bricks should be equal to the stripe count for a
> striped volume.
@@ -188,7 +172,7 @@ environments accessing very large files.
**To create a striped volume**
-1. Create a trusted storage pool as described earlier in ?.
+1. Create a trusted storage pool.
2. Create the striped volume:
@@ -202,24 +186,20 @@ environments accessing very large files.
If the transport type is not specified, *tcp* is used as the
default. You can also set additional options if required, such as
- auth.allow or auth.reject. For more information, see ?
+ auth.allow or auth.reject.
- > **Note**
- >
+ > **Note**:
> Make sure you start your volumes before you try to mount them or
- > else client operations after the mount will hang, see ? for
- > details.
+ > else client operations after the mount will hang.
-Creating Distributed Striped Volumes
-====================================
+##Creating Distributed Striped Volumes
Distributed striped volumes stripe files across two or more nodes in
the cluster. For best results, you should use distributed striped
volumes where the requirement is to scale storage and in high
concurrency environments accessing very large files is critical.
-> **Note**
->
+> **Note**:
> The number of bricks should be a multiple of the stripe count for a
> distributed striped volume.
@@ -227,7 +207,7 @@ concurrency environments accessing very large files is critical.
**To create a distributed striped volume**
-1. Create a trusted storage pool as described earlier in ?.
+1. Create a trusted storage pool.
2. Create the distributed striped volume:
@@ -242,16 +222,13 @@ concurrency environments accessing very large files is critical.
If the transport type is not specified, *tcp* is used as the
default. You can also set additional options if required, such as
- auth.allow or auth.reject. For more information, see ?
+ auth.allow or auth.reject.
- > **Note**
- >
+ > **Note**:
> Make sure you start your volumes before you try to mount them or
- > else client operations after the mount will hang, see ? for
- > details.
+ > else client operations after the mount will hang.
-Creating Distributed Replicated Volumes
-=======================================
+##Creating Distributed Replicated Volumes
Distributed replicated volumes distribute files across replicated bricks in the volume. You can use
distributed replicated volumes in environments where the requirement is
@@ -259,8 +236,7 @@ to scale storage and high-reliability is critical. Distributed
replicated volumes also offer improved read performance in most
environments.
-> **Note**
->
+> **Note**:
> The number of bricks should be a multiple of the replica count for a
> distributed replicated volume. Also, the order in which bricks are
> specified has a great effect on data protection. Each replica\_count
@@ -274,7 +250,7 @@ environments.
**To create a distributed replicated volume**
-1. Create a trusted storage pool as described earlier in ?.
+1. Create a trusted storage pool.
2. Create the distributed replicated volume:
@@ -296,16 +272,13 @@ environments.
If the transport type is not specified, *tcp* is used as the
default. You can also set additional options if required, such as
- auth.allow or auth.reject. For more information, see ?
+ auth.allow or auth.reject.
- > **Note**
- >
+ > **Note**:
> Make sure you start your volumes before you try to mount them or
- > else client operations after the mount will hang, see ? for
- > details.
+ > else client operations after the mount will hang.
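A sketch of a four-brick distributed replicated volume (server and directory names are placeholders):

    # gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4

With replica 2, bricks are grouped in the order given: server1:/exp1 and server2:/exp2 form one replica pair, server3:/exp3 and server4:/exp4 the other, so adjacent bricks in the list should come from different servers.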
-Creating Distributed Striped Replicated Volumes
-===============================================
+##Creating Distributed Striped Replicated Volumes
Distributed striped replicated volumes distribute striped data across
replicated bricks in the cluster. For best results, you should use
@@ -314,14 +287,13 @@ where parallel access of very large files and performance is critical.
In this release, configuration of this volume type is supported only for
Map Reduce workloads.
-> **Note**
->
+> **Note**:
> The number of bricks should be a multiple of the stripe count
> times the replica count for a distributed striped replicated volume.
**To create a distributed striped replicated volume**
-1. Create a trusted storage pool as described earlier in ?.
+1. Create a trusted storage pool.
2. Create a distributed striped replicated volume using the following
command:
@@ -337,16 +309,13 @@ Map Reduce workloads.
If the transport type is not specified, *tcp* is used as the
default. You can also set additional options if required, such as
- auth.allow or auth.reject. For more information, see ?
+ auth.allow or auth.reject.
- > **Note**
- >
+ > **Note**:
> Make sure you start your volumes before you try to mount them or
- > else client operations after the mount will hang, see ? for
- > details.
+ > else client operations after the mount will hang.
-Creating Striped Replicated Volumes
-===================================
+##Creating Striped Replicated Volumes
Striped replicated volumes stripe data across replicated bricks in the
cluster. For best results, you should use striped replicated volumes in
@@ -354,8 +323,7 @@ highly concurrent environments where there is parallel access of very
large files and performance is critical. In this release, configuration
of this volume type is supported only for Map Reduce workloads.
-> **Note**
->
+> **Note**:
> The number of bricks should be a multiple of the replica count
> times the stripe count for a striped replicated volume.
@@ -366,8 +334,6 @@ of this volume type is supported only for Map Reduce workloads.
1. Create a trusted storage pool consisting of the storage servers that
will comprise the volume.
- For more information, see ?.
-
2. Create a striped replicated volume :
`# gluster volume create [stripe ] [replica ] [transport tcp | rdma | tcp,rdma] `
@@ -387,16 +353,13 @@ of this volume type is supported only for Map Reduce workloads.
If the transport type is not specified, *tcp* is used as the
default. You can also set additional options if required, such as
- auth.allow or auth.reject. For more information, see ?
+ auth.allow or auth.reject.
- > **Note**
- >
+ > **Note**:
> Make sure you start your volumes before you try to mount them or
- > else client operations after the mount will hang, see ? for
- > details.
+ > else client operations after the mount will hang.
-Starting Volumes
-================
+##Starting Volumes
You must start your volumes before you try to mount them.
@@ -411,9 +374,9 @@ You must start your volumes before you try to mount them.
# gluster volume start test-volume
Starting test-volume has been successful
- []: images/Distributed_Volume.png
- [1]: images/Replicated_Volume.png
- [2]: images/Striped_Volume.png
- [3]: images/Distributed_Striped_Volume.png
- [4]: images/Distributed_Replicated_Volume.png
- [5]: images/Striped_Replicated_Volume.png
+ []: ../images/Distributed_Volume.png
+ [1]: ../images/Replicated_Volume.png
+ [2]: ../images/Striped_Volume.png
+ [3]: ../images/Distributed_Striped_Volume.png
+ [4]: ../images/Distributed_Replicated_Volume.png
+ [5]: ../images/Striped_Replicated_Volume.png
diff --git a/doc/admin-guide/en-US/markdown/admin_settingup_clients.md b/doc/admin-guide/en-US/markdown/admin_settingup_clients.md
index 85b28c9525f..bb45c8b8940 100644
--- a/doc/admin-guide/en-US/markdown/admin_settingup_clients.md
+++ b/doc/admin-guide/en-US/markdown/admin_settingup_clients.md
@@ -1,5 +1,4 @@
-Accessing Data - Setting Up GlusterFS Client
-============================================
+#Accessing Data - Setting Up GlusterFS Client
You can access gluster volumes in multiple ways. You can use the Gluster
Native Client method for high concurrency, performance, and transparent
@@ -13,8 +12,7 @@ You can use CIFS to access volumes when using Microsoft Windows as well
as SAMBA clients. For this access method, Samba packages need to be
present on the client side.
-Gluster Native Client
-=====================
+##Gluster Native Client
The Gluster Native Client is a FUSE-based client running in user space.
Gluster Native Client is the recommended method for accessing volumes
@@ -25,8 +23,7 @@ install the software on client machines. This section also describes how
to mount volumes on clients (both manually and automatically) and how to
verify that the volume has mounted successfully.
-Installing the Gluster Native Client
-------------------------------------
+###Installing the Gluster Native Client
Before you begin installing the Gluster Native Client, you need to
verify that the FUSE module is loaded on the client and has access to
@@ -39,7 +36,6 @@ the required modules as follows:
2. Verify that the FUSE module is loaded:
`# dmesg | grep -i fuse `
-
`fuse init (API version 7.13)`
### Installing on Red Hat Package Manager (RPM) Distributions
@@ -59,7 +55,6 @@ To install Gluster Native Client on RPM distribution-based systems
You can use the following chains with iptables:
`$ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT `
-
`$ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24009:24014 -j ACCEPT`
> **Note**
@@ -80,9 +75,7 @@ To install Gluster Native Client on RPM distribution-based systems
4. Install Gluster Native Client on the client.
`$ sudo rpm -i glusterfs-3.3.0qa30-1.x86_64.rpm `
-
`$ sudo rpm -i glusterfs-fuse-3.3.0qa30-1.x86_64.rpm `
-
`$ sudo rpm -i glusterfs-rdma-3.3.0qa30-1.x86_64.rpm`
> **Note**
@@ -134,7 +127,6 @@ To install Gluster Native Client on Debian-based distributions
You can use the following chains with iptables:
`$ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT `
-
`$ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24009:24014 -j ACCEPT`
> **Note**
@@ -150,7 +142,6 @@ To build and install Gluster Native Client from the source code
1. Create a new directory using the following commands:
`# mkdir glusterfs `
-
`# cd glusterfs`
2. Download the source code.
@@ -165,21 +156,14 @@ To build and install Gluster Native Client from the source code
`# ./configure `
- `GlusterFS configure summary `
-
- `================== `
-
- `FUSE client : yes `
-
- `Infiniband verbs : yes `
-
- `epoll IO multiplex : yes `
-
- `argp-standalone : no `
-
- `fusermount : no `
-
- `readline : yes`
+ GlusterFS configure summary
+ ===========================
+ FUSE client : yes
+ Infiniband verbs : yes
+ epoll IO multiplex : yes
+ argp-standalone : no
+ fusermount : no
+ readline : yes
The configuration summary shows the components that will be built
with Gluster Native Client.
@@ -188,7 +172,6 @@ To build and install Gluster Native Client from the source code
commands:
`# make `
-
`# make install`
6. Verify that the correct version of Gluster Native Client is
@@ -196,18 +179,13 @@ To build and install Gluster Native Client from the source code
`# glusterfs –-version`
-Mounting Volumes
-----------------
+##Mounting Volumes
After installing the Gluster Native Client, you need to mount Gluster
volumes to access data. There are two methods you can choose:
-- ?
-
-- ?
-
-After mounting a volume, you can test the mounted volume using the
-procedure described in ?.
+- [Manually Mounting Volumes](#manual-mount)
+- [Automatically Mounting Volumes](#auto-mount)
> **Note**
>
@@ -215,10 +193,9 @@ procedure described in ?.
> in the client machine. You can use appropriate /etc/hosts entries or
> DNS server to resolve server names to IP addresses.
+<a name="manual-mount" />
### Manually Mounting Volumes
-To manually mount a Gluster volume
-
- To mount a volume, use the following command:
`# mount -t glusterfs HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR`
@@ -272,6 +249,7 @@ attempts to fetch volume files while mounting a volume. This option is
useful when you mount a server with multiple IP addresses or when
round-robin DNS is configured for the server name.
+<a name="auto-mount" />
### Automatically Mounting Volumes
You can configure your system to automatically mount the Gluster volume
@@ -282,8 +260,6 @@ gluster configuration volfile describing the volume name. Subsequently,
the client will communicate directly with the servers mentioned in the
volfile (which might not even include the one used for mount).
-**To automatically mount a Gluster volume**
-
- To mount a volume, edit the /etc/fstab file and add the following
line:
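A sketch of such an fstab entry (hostname, volume, and mount point are placeholders):

    server1:/test-volume /mnt/glusterfs glusterfs defaults,_netdev 0 0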
@@ -337,17 +313,14 @@ To test mounted volumes
following:
`# cd MOUNTDIR `
-
`# ls`
- For example,
`# cd /mnt/glusterfs `
-
`# ls`
-NFS
-===
+#NFS
You can use NFS v3 to access gluster volumes. Extensive testing has
been done on GNU/Linux clients, and the NFS implementation in other operating
@@ -366,26 +339,23 @@ This section describes how to use NFS to mount Gluster volumes (both
manually and automatically) and how to verify that the volume has been
mounted successfully.
-Using NFS to Mount Volumes
+##Using NFS to Mount Volumes
--------------------------
You can use either of the following methods to mount Gluster volumes:
-- ?
-
-- ?
+- [Manually Mounting Volumes Using NFS](#manual-nfs)
+- [Automatically Mounting Volumes Using NFS](#auto-nfs)
**Prerequisite**: Install nfs-common package on both servers and clients
(only for Debian-based distribution), using the following command:
`$ sudo aptitude install nfs-common `
-After mounting a volume, you can test the mounted volume using the
-procedure described in ?.
-
+<a name="manual-nfs" />
### Manually Mounting Volumes Using NFS
-To manually mount a Gluster volume using NFS
+**To manually mount a Gluster volume using NFS**
- To mount a volume, use the following command:
@@ -423,6 +393,7 @@ To manually mount a Gluster volume using NFS
` # mount -o proto=tcp,vers=3 nfs://server1:38467/test-volume /mnt/glusterfs`
+<a name="auto-nfs" />
### Automatically Mounting Volumes Using NFS
You can configure your system to automatically mount Gluster volumes
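A sketch of an fstab entry for an NFS mount (names are placeholders; the gluster NFS server supports only NFS version 3 over TCP):

    server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,vers=3,proto=tcp 0 0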
@@ -494,19 +465,9 @@ You can confirm that Gluster directories are mounting successfully.
following:
`# cd MOUNTDIR`
-
`# ls`
- For example,
-
- `
-
- `
-
- `# ls`
-
-CIFS
-====
+#CIFS
You can use CIFS to access volumes when using Microsoft Windows as
well as SAMBA clients. For this access method, Samba packages need to be
@@ -523,21 +484,18 @@ verify that the volume has mounted successfully.
> can use the Mac OS X command line to access Gluster volumes using
> CIFS.
-Using CIFS to Mount Volumes
----------------------------
+##Using CIFS to Mount Volumes
You can use either of the following methods to mount Gluster volumes:
-- ?
-
-- ?
-
-After mounting a volume, you can test the mounted volume using the
-procedure described in ?.
+- [Exporting Gluster Volumes Through Samba](#export-samba)
+- [Manually Mounting Volumes Using CIFS](#cifs-manual)
+- [Automatically Mounting Volumes Using CIFS](#cifs-auto)
You can also use Samba for exporting Gluster Volumes through CIFS
protocol.
+<a name="export-samba" />
### Exporting Gluster Volumes Through Samba
We recommend using Samba to export Gluster volumes through the
@@ -545,8 +503,7 @@ CIFS protocol.
**To export volumes through CIFS protocol**
-1. Mount a Gluster volume. For more information on mounting volumes,
- see ?.
+1. Mount a Gluster volume.
2. Setup Samba configuration to export the mount point of the Gluster
volume.
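One possible share definition in smb.conf for this step, assuming the volume is mounted at /mnt/glusterfs, is the following sketch:

    [glustertest]
    comment = Gluster volume exported through Samba
    path = /mnt/glusterfs
    read only = no
    guest ok = yes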
@@ -575,6 +532,7 @@ scripts (/etc/init.d/smb [re]start).
> repeat these steps on each Gluster node. For more advanced
> configurations, see Samba documentation.
+<a name="cifs-manual" />
### Manually Mounting Volumes Using CIFS
You can manually mount Gluster volumes using CIFS on Microsoft
@@ -594,20 +552,10 @@ Windows-based client machines.
The network drive (mapped to the volume) appears in the Computer window.
-**Alternatively, to manually mount a Gluster volume using CIFS.**
-
-- Click **Start \> Run** and enter the following:
-
- `
-
- `
-
- For example:
-
- `
-
- `
+Alternatively, you can manually mount a Gluster volume using CIFS by going to
+**Start \> Run** and entering the network path manually.
+<a name="cifs-auto" />
### Automatically Mounting Volumes Using CIFS
You can configure your system to automatically mount Gluster volumes
diff --git a/doc/admin-guide/en-US/markdown/admin_start_stop_daemon.md b/doc/admin-guide/en-US/markdown/admin_start_stop_daemon.md
index 43251cd0157..a47ece8d95b 100644
--- a/doc/admin-guide/en-US/markdown/admin_start_stop_daemon.md
+++ b/doc/admin-guide/en-US/markdown/admin_start_stop_daemon.md
@@ -1,5 +1,4 @@
-Managing the glusterd Service
-=============================
+#Managing the glusterd Service
After installing GlusterFS, you must start the glusterd service. The
glusterd service serves as the Gluster elastic volume manager,
@@ -10,16 +9,13 @@ servers non-disruptively.
This section describes how to start the glusterd service in the
following ways:
-- ?
+- [Starting and Stopping glusterd Manually](#manual)
+- [Starting glusterd Automatically](#auto)
-- ?
+> **Note**: You must start glusterd on all GlusterFS servers.
-> **Note**
->
-> You must start glusterd on all GlusterFS servers.
-
-Starting and Stopping glusterd Manually
-=======================================
+<a name="manual" />
+##Starting and Stopping glusterd Manually
This section describes how to start and stop glusterd manually
@@ -31,19 +27,13 @@ This section describes how to start and stop glusterd manually
`# /etc/init.d/glusterd stop`
-Starting glusterd Automatically
-===============================
+<a name="auto" />
+##Starting glusterd Automatically
This section describes how to configure the system to automatically
start the glusterd service every time the system boots.
-To automatically start the glusterd service every time the system boots,
-enter the following from the command line:
-
-`# chkconfig glusterd on `
-
-Red Hat-based Systems
----------------------
+###Red Hat and Fedora distros
To configure Red Hat-based systems to automatically start the glusterd
service every time the system boots, enter the following from the
@@ -51,8 +41,7 @@ command line:
`# chkconfig glusterd on `
-Debian-based Systems
---------------------
+###Debian and derivatives like Ubuntu
To configure Debian-based systems to automatically start the glusterd
service every time the system boots, enter the following from the
@@ -60,8 +49,7 @@ command line:
`# update-rc.d glusterd defaults`
-Systems Other than Red Hat and Debain
--------------------------------------
+###Systems Other than Red Hat and Debian
To configure systems other than Red Hat or Debian to automatically start
the glusterd service every time the system boots, enter the following
diff --git a/doc/admin-guide/en-US/markdown/admin_storage_pools.md b/doc/admin-guide/en-US/markdown/admin_storage_pools.md
index 2a35cbea57d..a0d8837ffe2 100644
--- a/doc/admin-guide/en-US/markdown/admin_storage_pools.md
+++ b/doc/admin-guide/en-US/markdown/admin_storage_pools.md
@@ -1,5 +1,4 @@
-Setting up Trusted Storage Pools
-================================
+#Setting up Trusted Storage Pools
Before you can configure a GlusterFS volume, you must create a trusted
storage pool consisting of the storage servers that provide bricks to a
@@ -10,21 +9,18 @@ the first server, the storage pool consists of that server alone. To add
additional storage servers to the storage pool, you can use the probe
command from a storage server that is already trusted.
-> **Note**
->
-> Do not self-probe the first server/localhost.
+> **Note**: Do not self-probe the first server/localhost.
The GlusterFS service must be running on all storage servers that you
want to add to the storage pool.
-Adding Servers to Trusted Storage Pool
-======================================
+##Adding Servers to Trusted Storage Pool
To create a trusted storage pool, add servers to the trusted storage
pool:
-1. The hostnames used to create the storage pool must be resolvable by
- DNS.
+1. **The hostnames used to create the storage pool must be resolvable by
+ DNS**
To add a server to the storage pool:
@@ -42,8 +38,8 @@ pool
# gluster peer probe server4
Probe successful
-2. Verify the peer status from the first server using the following
- commands:
+2. **Verify the peer status from the first server using the following
+ commands:**
# gluster peer status
Number of Peers: 3
@@ -60,8 +56,7 @@ pool
Uuid: 3e0caba-9df7-4f66-8e5d-cbc348f29ff7
State: Peer in Cluster (Connected)
-Removing Servers from the Trusted Storage Pool
-==============================================
+##Removing Servers from the Trusted Storage Pool
To remove a server from the storage pool:
diff --git a/doc/admin-guide/en-US/markdown/admin_troubleshooting.md b/doc/admin-guide/en-US/markdown/admin_troubleshooting.md
index 88fb85c240c..fa19a2f71de 100644
--- a/doc/admin-guide/en-US/markdown/admin_troubleshooting.md
+++ b/doc/admin-guide/en-US/markdown/admin_troubleshooting.md
@@ -1,60 +1,54 @@
-Troubleshooting GlusterFS
-=========================
+#Troubleshooting GlusterFS
This section describes how to manage GlusterFS logs and the most common
troubleshooting scenarios related to GlusterFS.
-Managing GlusterFS Logs
-=======================
+##Contents
+* [Managing GlusterFS Logs](#logs)
+* [Troubleshooting Geo-replication](#georep)
+* [Troubleshooting POSIX ACLs](#posix-acls)
+* [Troubleshooting Hadoop Compatible Storage](#hadoop)
+* [Troubleshooting NFS](#nfs)
+* [Troubleshooting File Locks](#file-locks)
-This section describes how to manage GlusterFS logs by performing the
-following operation:
+<a name="logs" />
+##Managing GlusterFS Logs
-- Rotating Logs
-
-Rotating Logs
--------------
+###Rotating Logs
Administrators can rotate the log file in a volume, as needed.
**To rotate a log file**
-- Rotate the log file using the following command:
-
`# gluster volume log rotate `
- For example, to rotate the log file on test-volume:
+For example, to rotate the log file on test-volume:
- # gluster volume log rotate test-volume
- log rotate successful
+ # gluster volume log rotate test-volume
+ log rotate successful
- > **Note**
- >
- > When a log file is rotated, the contents of the current log file
- > are moved to log-file- name.epoch-time-stamp.
+> **Note**
+> When a log file is rotated, the contents of the current log file
+> are moved to log-file- name.epoch-time-stamp.
-Troubleshooting Geo-replication
-===============================
+<a name="georep" />
+##Troubleshooting Geo-replication
This section describes the most common troubleshooting scenarios related
to GlusterFS Geo-replication.
-Locating Log Files
-------------------
+###Locating Log Files
For every Geo-replication session, the following three log files are
associated with it (four, if the slave is a gluster volume):
-- Master-log-file - log file for the process which monitors the Master
+- **Master-log-file** - log file for the process which monitors the Master
volume
-
-- Slave-log-file - log file for process which initiates the changes in
+- **Slave-log-file** - log file for process which initiates the changes in
slave
-
-- Master-gluster-log-file - log file for the maintenance mount point
+- **Master-gluster-log-file** - log file for the maintenance mount point
that Geo-replication module uses to monitor the master volume
-
-- Slave-gluster-log-file - is the slave's counterpart of it
+- **Slave-gluster-log-file** - the slave's counterpart of the Master-gluster-log-file
**Master Log File**
@@ -87,8 +81,7 @@ running on slave machine), use the following commands:
`/var/log/gluster/5f6e5200-756f-11e0-a1f0-0800200c9a66:remote-mirror.log`
-Rotating Geo-replication Logs
------------------------------
+###Rotating Geo-replication Logs
Administrators can rotate the log file of a particular master-slave
session, as needed. When you run geo-replication's `log-rotate`
@@ -128,8 +121,7 @@ log file.
# gluster volume geo-replication log rotate
log rotate successful
-Synchronization is not complete
--------------------------------
+###Synchronization is not complete
**Description**: GlusterFS Geo-replication did not synchronize the data
completely, but the Geo-replication status is still displayed as OK.
@@ -138,39 +130,35 @@ completely but still the geo- replication status displayed is OK.
index and restarting GlusterFS Geo-replication. After restarting,
GlusterFS Geo-replication begins synchronizing all the data. All files
are compared using checksum, which can be a lengthy and high resource
-utilization operation on large data sets. If the error situation
-persists, contact Red Hat Support.
+utilization operation on large data sets.
-For more information about erasing index, see ?.
-Issues in Data Synchronization
-------------------------------
+###Issues in Data Synchronization
**Description**: Geo-replication displays the status as OK, but files do
not get synced; only directories and symlinks get synced, with the
following error message in the log:
-[2011-05-02 13:42:13.467644] E [master:288:regjob] GMaster: failed to
-sync ./some\_file\`
+    [2011-05-02 13:42:13.467644] E [master:288:regjob] GMaster: failed to
+    sync ./some_file
**Solution**: Geo-replication invokes rsync v3.0.0 or higher on both the
host and the remote machine. Verify that the required version is
installed on both.
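
One hedged way to check this requirement is to compare version strings with GNU `sort -V`. The sketch below uses a hard-coded sample value in place of the real output of `rsync --version`:

```shell
# Compare an installed rsync version against the 3.0.0 minimum required
# by geo-replication. "installed" is a sample value standing in for what
# `rsync --version | head -n1 | awk '{print $3}'` would report.
required="3.0.0"
installed="3.1.2"
# sort -V orders version strings numerically; if the required version
# sorts first (or ties), the installed version is new enough.
lowest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "rsync $installed satisfies >= $required"
else
  echo "rsync $installed is too old; upgrade to >= $required"
fi
```

On a real host, replace `installed` with the version reported by `rsync --version`.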
-Geo-replication status displays Faulty very often
--------------------------------------------------
+###Geo-replication status displays Faulty very often
**Description**: Geo-replication displays status as faulty very often
with a backtrace similar to the following:
-2011-04-28 14:06:18.378859] E [syncdutils:131:log\_raise\_exception]
-\<top\>: FAIL: Traceback (most recent call last): File
-"/usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py", line
-152, in twraptf(\*aa) File
-"/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 118, in
-listen rid, exc, res = recv(self.inf) File
-"/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 42, in
-recv return pickle.load(inf) EOFError
+    [2011-04-28 14:06:18.378859] E [syncdutils:131:log_raise_exception]
+    <top>: FAIL: Traceback (most recent call last): File
+    "/usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py", line
+    152, in twraptf(*aa) File
+    "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 118, in
+    listen rid, exc, res = recv(self.inf) File
+    "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 42, in
+    recv return pickle.load(inf) EOFError
**Solution**: This error indicates that the RPC communication between
the master gsyncd module and slave gsyncd module is broken and this can
@@ -179,34 +167,28 @@ pre-requisites:
- Password-less SSH is set up properly between the host and the remote
machine.
-
- FUSE is installed on the machine, because the geo-replication module
mounts the GlusterFS volume using FUSE to sync data.
-
- If the **Slave** is a volume, check if that volume is started.
-
- If the Slave is a plain directory, verify if the directory has been
created already with the required permissions.
-
- If GlusterFS 3.2 or higher is not installed in the default location
(in Master) and has been installed with a custom prefix, configure the
`gluster-command` option to point to the exact location.
-
- If GlusterFS 3.2 or higher is not installed in the default location
(in slave) and has been installed with a custom prefix, configure the
`remote-gsyncd-command` option to point to the exact place where
gsyncd is located.
-Intermediate Master goes to Faulty State
-----------------------------------------
+###Intermediate Master goes to Faulty State
**Description**: In a cascading set-up, the intermediate master goes to
faulty state with the following log:
-raise RuntimeError ("aborting on uuid change from %s to %s" % \\
-RuntimeError: aborting on uuid change from af07e07c-427f-4586-ab9f-
-4bf7d299be81 to de6b5040-8f4e-4575-8831-c4f55bd41154
+    raise RuntimeError ("aborting on uuid change from %s to %s" % \
+    RuntimeError: aborting on uuid change from af07e07c-427f-4586-ab9f-
+    4bf7d299be81 to de6b5040-8f4e-4575-8831-c4f55bd41154
**Solution**: In a cascading set-up, the intermediate master is loyal to
the original primary master. The above log means that the
@@ -214,50 +196,42 @@ geo-replication module has detected change in primary master. If this is
the desired behavior, delete the config option volume-id in the session
initiated from the intermediate master.
-Troubleshooting POSIX ACLs
-==========================
+<a name="posix-acls" />
+##Troubleshooting POSIX ACLs
This section describes the most common troubleshooting issues related to
POSIX ACLs.
-setfacl command fails with “setfacl: \<file or directory name\>: Operation not supported” error
------------------------------------------------------------------------------------------------
+###setfacl command fails with “setfacl: \<file or directory name\>: Operation not supported” error
You may face this error when the backend file system on one of the
servers is not mounted with the "-o acl" option. This can be
confirmed by the following error message in the log file of the
server: "Posix access control list is not supported".
-**Solution**: Remount the backend file system with "-o acl" option. For
-more information, see ?.
+**Solution**: Remount the backend file system with the "-o acl" option.
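
One way to confirm the remount took effect is to look at the options column of the mount table. The sketch below operates on a made-up `/proc/mounts` line; the device and brick path are hypothetical:

```shell
# Decide from a mount-table entry whether the backend file system was
# mounted with the acl option. The entry is an invented /proc/mounts
# line; on a real server you would grep the brick's mount point instead.
entry="/dev/sdb1 /exports/brick1 ext4 rw,relatime,acl 0 0"
opts=$(echo "$entry" | awk '{print $4}')      # comma-separated options
case ",$opts," in
  *,acl,*) status="acl enabled on /exports/brick1" ;;
  *)       status="acl missing; remount with -o acl" ;;
esac
echo "$status"
```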
-Troubleshooting Hadoop Compatible Storage
-=========================================
+<a name="hadoop" />
+##Troubleshooting Hadoop Compatible Storage
-This section describes the most common troubleshooting issues related to
-Hadoop Compatible Storage.
-
-Time Sync
----------
+###Time Sync
-Running MapReduce job may throw exceptions if the time is out-of-sync on
+**Problem**: Running a MapReduce job may throw exceptions if the time is out of sync on
the hosts in the cluster.
**Solution**: Sync the time on all hosts using the ntpd program.
-Troubleshooting NFS
-===================
+<a name="nfs" />
+##Troubleshooting NFS
This section describes the most common troubleshooting issues related to
NFS.
-mount command on NFS client fails with “RPC Error: Program not registered”
---------------------------------------------------------------------------
+###mount command on NFS client fails with “RPC Error: Program not registered”
-Start portmap or rpcbind service on the NFS server.
+**Start portmap or rpcbind service on the NFS server**
This error is encountered when the server has not started correctly.
-
On most Linux distributions this is fixed by starting portmap:
`$ /etc/init.d/portmap start`
@@ -270,8 +244,7 @@ following command is required:
After starting portmap or rpcbind, gluster NFS server needs to be
restarted.
-NFS server start-up fails with “Port is already in use” error in the log file."
--------------------------------------------------------------------------------
+###NFS server start-up fails with “Port is already in use” error in the log file.
Another Gluster NFS server is running on the same machine.
@@ -291,27 +264,21 @@ To resolve this error one of the Gluster NFS servers will have to be
shut down. At this time, Gluster NFS does not support running
multiple NFS servers on the same machine.
-mount command fails with “rpc.statd” related error message
-----------------------------------------------------------
+###mount command fails with “rpc.statd” related error message
If the mount command fails with the following error message:
-mount.nfs: rpc.statd is not running but is required for remote locking.
-mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
-
-Start rpc.statd
+    mount.nfs: rpc.statd is not running but is required for remote locking.
+    mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
For NFS clients to mount the NFS server, rpc.statd service must be
-running on the clients.
-
-Start rpc.statd service by running the following command:
+running on the clients. Start the rpc.statd service by running the following command:
`$ rpc.statd`
-mount command takes too long to finish.
----------------------------------------
+###mount command takes too long to finish.
-Start rpcbind service on the NFS client.
+**Start rpcbind service on the NFS client**
The problem is that the rpcbind or portmap service is not running on the
NFS client. The resolution for this is to start either of these services
@@ -324,8 +291,7 @@ following command is required:
`$ /etc/init.d/rpcbind start`
-NFS server glusterfsd starts but initialization fails with “nfsrpc- service: portmap registration of program failed” error message in the log.
-----------------------------------------------------------------------------------------------------------------------------------------------
+###NFS server glusterfsd starts but initialization fails with “nfsrpc- service: portmap registration of program failed” error message in the log.
NFS start-up can succeed but the initialization of the NFS service can
still fail preventing clients from accessing the mount points. Such a
@@ -341,7 +307,7 @@ file:
[2010-05-26 23:33:49] E [rpcsvc.c:2731:rpcsvc_program_unregister] rpc-service: portmap unregistration of program failed
[2010-05-26 23:33:49] E [rpcsvc.c:2744:rpcsvc_program_unregister] rpc-service: Program unregistration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
-1. Start portmap or rpcbind service on the NFS server.
+1. **Start portmap or rpcbind service on the NFS server**
On most Linux distributions, portmap can be started using the
following command:
@@ -356,7 +322,7 @@ file:
After starting portmap or rpcbind, gluster NFS server needs to be
restarted.
-2. Stop another NFS server running on the same machine.
+2. **Stop another NFS server running on the same machine**
Such an error is also seen when there is another NFS server running
on the same machine but it is not the Gluster NFS server. On Linux
@@ -372,18 +338,17 @@ file:
`$ /etc/init.d/nfs stop`
-3. Restart Gluster NFS server.
+3. **Restart Gluster NFS server**
-mount command fails with NFS server failed error.
--------------------------------------------------
+###mount command fails with NFS server failed error.
The mount command fails with the following error:
-*mount: mount to NFS server '10.1.10.11' failed: timed out (retrying).*
+    mount: mount to NFS server '10.1.10.11' failed: timed out (retrying).
Perform one of the following to resolve this issue:
-1. Disable name lookup requests from NFS server to a DNS server.
+1. **Disable name lookup requests from NFS server to a DNS server**
The NFS server attempts to authenticate NFS clients by performing a
reverse DNS lookup to match hostnames in the volume file with the
@@ -400,16 +365,14 @@ Perform one of the following to resolve this issue:
`option rpc-auth.addr.namelookup off `
- > **Note**
- >
- > Note: Remember that disabling the NFS server forces authentication
+ > **Note**: Remember that disabling name lookup forces the NFS server
> to authenticate clients using only IP addresses, and if the
> authentication rules in the volume file use hostnames, those rules
> will fail and disallow mounting for those clients.
- or
+ **OR**
-2. NFS version used by the NFS client is other than version 3.
+2. **NFS version used by the NFS client is other than version 3**
Gluster NFS server supports version 3 of NFS protocol. In recent
Linux kernels, the default NFS version has been changed from 3 to 4.
@@ -421,18 +384,14 @@ Perform one of the following to resolve this issue:
`$ mount -o vers=3 <HOSTNAME>:<VOLNAME> <MOUNTPOINT>`
-showmount fails with clnt\_create: RPC: Unable to receive
----------------------------------------------------------
+###showmount fails with clnt\_create: RPC: Unable to receive
Check your firewall settings to open port 111 for portmap
requests/replies as well as the Gluster NFS server requests/replies.
The Gluster NFS server operates over the following port numbers: 38465,
38466, and 38467.
-For more information, see ?.
-
-Application fails with "Invalid argument" or "Value too large for defined data type" error.
--------------------------------------------------------------------------------------------
+###Application fails with "Invalid argument" or "Value too large for defined data type" error.
These two errors generally happen for 32-bit NFS clients or applications
that do not support 64-bit inode numbers or large files. Use the
@@ -443,7 +402,6 @@ Applications that will benefit are those that were either:
- built 32-bit and run on 32-bit machines such that they do not
support large files by default
-
- built 32-bit on 64-bit systems
This option is disabled by default so NFS returns 64-bit inode numbers
@@ -454,8 +412,8 @@ using the following flag with gcc:
` -D_FILE_OFFSET_BITS=64`
-Troubleshooting File Locks
-==========================
+<a name="file-locks" />
+##Troubleshooting File Locks
In GlusterFS 3.3 you can use the `statedump` command to list the locks held
on files. The statedump output also provides information on each lock
@@ -463,16 +421,10 @@ with its range, basename, PID of the application holding the lock, and
so on. You can analyze the output to identify locks whose
owner/application is no longer running or no longer interested in that
lock. After ensuring that no application is using the file, you can clear the
-lock using the following `clear lock` command:
-
-`# `
-
-For more information on performing `statedump`, see ?
-
-**To identify locked file and clear locks**
+lock using the following `clear lock` commands.
-1. Perform statedump on the volume to view the files that are locked
- using the following command:
+1. **Perform statedump on the volume to view the files that are locked
+ using the following command:**
`# gluster volume statedump <VOLNAME> inode`
@@ -517,9 +469,9 @@ For more information on performing `statedump`, see ?
lock-dump.domain.domain=vol-replicate-0
inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 714787072, owner=00ffff2a3c7f0000, transport=0x20e0670, , granted at Mon Feb 27 16:01:01 2012
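
A lock-dump line such as the one above can be picked apart with standard text tools to find the PID and owner holding a lock. This is only a parsing illustration against the sample line, not a gluster feature:

```shell
# Extract the PID and lock owner from a sample statedump lock-dump line
# (the same line shown in the output above).
line='inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 714787072, owner=00ffff2a3c7f0000, transport=0x20e0670, , granted at Mon Feb 27 16:01:01 2012'
pid=$(echo "$line" | sed 's/.*pid = \([0-9]*\).*/\1/')
owner=$(echo "$line" | sed 's/.*owner=\([0-9a-f]*\).*/\1/')
echo "lock held by pid=$pid owner=$owner"
```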
-2. Clear the lock using the following command:
+2. **Clear the lock using the following command:**
- `# `
+ `# gluster volume clear-locks <VOLNAME> <path> kind {blocked|granted|all} entry [basename]`
For example, to clear the entry lock on `file1` of test-volume:
@@ -527,9 +479,9 @@ For more information on performing `statedump`, see ?
Volume clear-locks successful
vol-locks: entry blocked locks=0 granted locks=1
-3. Clear the inode lock using the following command:
+3. **Clear the inode lock using the following command:**
- `# `
+ `# gluster volume clear-locks <VOLNAME> <path> kind {blocked|granted|all} inode [range]`
For example, to clear the inode lock on `file1` of test-volume:
diff --git a/doc/admin-guide/en-US/markdown/gfs_introduction.md b/doc/admin-guide/en-US/markdown/gfs_introduction.md
index fd2c53dc992..9f9c058156e 100644
--- a/doc/admin-guide/en-US/markdown/gfs_introduction.md
+++ b/doc/admin-guide/en-US/markdown/gfs_introduction.md
@@ -13,7 +13,7 @@ managing data in a single global namespace. GlusterFS is based on a
stackable user space design, delivering exceptional performance for
diverse workloads.
-![ Virtualized Cloud Environments ][]
+![ Virtualized Cloud Environments ](../images/640px-GlusterFS_Architecture.png)
GlusterFS is designed for today's high-performance, virtualized cloud
environments. Unlike traditional data centers, cloud environments
@@ -24,27 +24,8 @@ hybrid environments.
GlusterFS is in production at thousands of enterprises spanning media,
healthcare, government, education, web 2.0, and financial services. The
-following table lists the commercial offerings and its documentation
-location:
+following are the commercial offerings:
- ------------------------------------------------------------------------
- Product Documentation Location
- ----------- ------------------------------------------------------------
- Red Hat [][]
- Storage
- Software
- Appliance
-
- Red Hat [][1]
- Virtual
- Storage
- Appliance
-
- Red Hat [][2]
- Storage
- ------------------------------------------------------------------------
-
- [ Virtualized Cloud Environments ]: images/640px-GlusterFS_Architecture.png
- []: http://docs.redhat.com/docs/en-US/Red_Hat_Storage_Software_Appliance/index.html
- [1]: http://docs.redhat.com/docs/en-US/Red_Hat_Virtual_Storage_Appliance/index.html
- [2]: http://docs.redhat.com/docs/en-US/Red_Hat_Storage/index.html
+* [Red Hat Storage](https://access.redhat.com/site/documentation/Red_Hat_Storage/)
+* Red Hat Storage Software Appliance
+* Red Hat Virtual Storage Appliance
diff --git a/doc/admin-guide/en-US/markdown/glossary.md b/doc/admin-guide/en-US/markdown/glossary.md
index 0febaff8fb8..0203319b08a 100644
--- a/doc/admin-guide/en-US/markdown/glossary.md
+++ b/doc/admin-guide/en-US/markdown/glossary.md
@@ -1,10 +1,10 @@
Glossary
========
-Brick
-: A Brick is the GlusterFS basic unit of storage, represented by an
+**Brick**
+: A Brick is the basic unit of storage in GlusterFS, represented by an
export directory on a server in the trusted storage pool. A Brick is
- expressed by combining a server with an export directory in the
+ represented by combining a server name with an export directory in the
following format:
`SERVER:EXPORT`
@@ -13,15 +13,22 @@ Brick
`myhostname:/exports/myexportdir/`
-Cluster
+**Client**
+: Any machine that mounts a GlusterFS volume.
+
+**Cluster**
: A cluster is a group of linked computers, working together closely
and thus in many respects forming a single computer.
-Distributed File System
+**Distributed File System**
: A file system that allows multiple clients to concurrently access
data over a computer network.
-Filesystem
+**Extended Attributes**
+: Extended file attributes (abbreviated xattr) are a file system feature
+ that enables users/programs to associate metadata with files/dirs.
+
+**Filesystem**
: A method of storing and organizing computer files and their data.
Essentially, it organizes these files into a database for the
storage, organization, manipulation, and retrieval by the computer's
@@ -29,7 +36,7 @@ Filesystem
Source: [Wikipedia][]
-FUSE
+**FUSE**
: Filesystem in Userspace (FUSE) is a loadable kernel module for
Unix-like computer operating systems that lets non-privileged users
create their own file systems without editing kernel code. This is
@@ -38,26 +45,38 @@ FUSE
Source: [Wikipedia][1]
-Geo-Replication
+**Geo-Replication**
: Geo-replication provides a continuous, asynchronous, and incremental
replication service from one site to another over Local Area Networks
(LAN), Wide Area Network (WAN), and across the Internet.
-glusterd
+**GFID**
+: Each file/directory on a GlusterFS volume has a unique 128-bit number
+ associated with it, called the GFID. This is analogous to an inode in a
+ regular filesystem.
+
+**glusterd**
: The Gluster management daemon that needs to run on all servers in
the trusted storage pool.
-Metadata
+**InfiniBand**
+: InfiniBand is a switched fabric computer network communications link
+ used in high-performance computing and enterprise data centers.
+
+**Metadata**
: Metadata is data providing information about one or more other
pieces of data.
-Namespace
+**Namespace**
: Namespace is an abstract container or environment created to hold a
logical grouping of unique identifiers or symbols. Each Gluster
volume exposes a single namespace as a POSIX mount point that
contains every file in the cluster.
-Open Source
+**Node**
+: A server or computer that hosts one or more bricks.
+
+**Open Source**
: Open source describes practices in production and development that
promote access to the end product's source materials. Some consider
open source a philosophy, others consider it a pragmatic
@@ -76,7 +95,7 @@ Open Source
Source: [Wikipedia][2]
-Petabyte
+**Petabyte**
: A petabyte (derived from the SI prefix peta- ) is a unit of
information equal to one quadrillion (short scale) bytes, or 1000
terabytes. The unit symbol for the petabyte is PB. The prefix peta-
@@ -89,7 +108,7 @@ Petabyte
Source: [Wikipedia][3]
-POSIX
+**POSIX**
: Portable Operating System Interface (for Unix) is the name of a
family of related standards specified by the IEEE to define the
application programming interface (API), along with shell and
@@ -97,34 +116,79 @@ POSIX
Unix operating system. Gluster exports a fully POSIX compliant file
system.
-RAID
+**Quorum**
+: The configuration of quorum in a trusted storage pool determines the
+ number of server failures that the trusted storage pool can sustain.
+ If an additional failure occurs, the trusted storage pool becomes
+ unavailable.
+
+**Quota**
+: Quotas allow you to set limits on usage of disk space by directories or
+ by volumes.
+
+**RAID**
: Redundant Array of Inexpensive Disks (RAID) is a technology that
provides increased storage reliability through redundancy, combining
multiple low-cost, less-reliable disk drive components into a
logical unit where all drives in the array are interdependent.
-RRDNS
+**RDMA**
+: Remote direct memory access (RDMA) is a direct memory access from the
+ memory of one computer into that of another without involving either
+ one's operating system. This permits high-throughput, low-latency
+ networking, which is especially useful in massively parallel computer
+ clusters.
+
+**Rebalance**
+: The process of fixing the layout and redistributing data in a volume
+ when a brick is added or removed.
+
+**RRDNS**
: Round Robin Domain Name Service (RRDNS) is a method to distribute
load across application servers. RRDNS is implemented by creating
multiple A records with the same name and different IP addresses in
the zone file of a DNS server.
-Trusted Storage Pool
+**Samba**
+: Samba allows file and print sharing between computers running Windows and
+ computers running Linux. It is an implementation of several services and
+ protocols including SMB and CIFS.
+
+**Self-Heal**
+: The self-heal daemon runs in the background, identifies inconsistencies
+ in files/dirs in a replicated volume, and then resolves or heals them.
+ This healing process is usually required when one or more bricks of a
+ volume go down and then come up later.
+
+**Split-brain**
+: This is a situation where data on two or more bricks in a replicated
+ volume starts to diverge in terms of content or metadata. In this state,
+ one cannot determine programmatically which set of data is "right" and
+ which is "wrong".
+
+**Translator**
+: Translators (also called xlators) are stackable modules where each
+ module has a very specific purpose. Translators are stacked in a
+ hierarchical structure called a graph. A translator receives data
+ from its parent translator, performs the necessary operations, and then
+ passes the data down to its child translator in the hierarchy.
+
+**Trusted Storage Pool**
: A storage pool is a trusted network of storage servers. When you
start the first server, the storage pool consists of that server
alone.
-Userspace
+**Userspace**
: Applications running in user space don’t directly interact with
hardware, instead using the kernel to moderate access. Userspace
applications are generally more portable than applications in kernel
space. Gluster is a user space application.
-Volfile
+**Volfile**
: Volfile is a configuration file used by the glusterfs process. It is
usually located at `/var/lib/glusterd/vols/VOLNAME`.
-Volume
+**Volume**
: A volume is a logical collection of bricks. Most of the gluster
management operations happen on the volume.