author    Amar Tumballi <amarts@redhat.com>    2012-05-28 19:32:14 +0530
committer Vijay Bellur <vijay@gluster.com>     2012-06-04 03:47:21 -0700
commit    f037fe9143706375bea140b61fd87d13e5b2b961 (patch)
tree      b000ca4c98db600ea99283ff62690f0d761a0453
parent    1b798491193add9cb296ce6817a6cbc2fdb9db34 (diff)

documentation - Admin-Guide Updates

Change-Id: I6e053b6a5f099fb7b1c228668949463c795b4fc7
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Reviewed-on: http://review.gluster.com/3496
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
-rw-r--r--  doc/admin-guide/en-US/Book_Info.xml               |   4
-rw-r--r--  doc/admin-guide/en-US/admin_ACLs.xml              |  15
-rw-r--r--  doc/admin-guide/en-US/admin_commandref.xml        |  68
-rw-r--r--  doc/admin-guide/en-US/admin_console.xml           |   9
-rw-r--r--  doc/admin-guide/en-US/admin_directory_Quota.xml   |  19
-rw-r--r--  doc/admin-guide/en-US/admin_geo-replication.xml   | 122
-rw-r--r--  doc/admin-guide/en-US/admin_managing_volumes.xml  |  11
-rw-r--r--  doc/admin-guide/en-US/admin_setting_volumes.xml   |  16
-rw-r--r--  doc/admin-guide/en-US/admin_settingup_clients.xml | 324
-rw-r--r--  doc/admin-guide/en-US/admin_storage_pools.xml     |   6
-rw-r--r--  doc/admin-guide/en-US/admin_troubleshooting.xml   |  49
-rw-r--r--  doc/admin-guide/en-US/gfs_introduction.xml        |   4
12 files changed, 280 insertions, 367 deletions
diff --git a/doc/admin-guide/en-US/Book_Info.xml b/doc/admin-guide/en-US/Book_Info.xml
index 6be6a7816ca..19fb40a2f34 100644
--- a/doc/admin-guide/en-US/Book_Info.xml
+++ b/doc/admin-guide/en-US/Book_Info.xml
@@ -5,9 +5,9 @@
]>
<bookinfo id="book-Administration_Guide-Administration_Guide">
<title>Administration Guide</title>
- <subtitle>Using Gluster File System <remark> Beta 3</remark> </subtitle>
+ <subtitle>Using Gluster File System </subtitle>
<productname>Gluster File System</productname>
- <productnumber>3.3</productnumber>
+ <productnumber>3.3.0</productnumber>
<edition>1</edition>
<pubsnumber>1</pubsnumber>
<abstract>
diff --git a/doc/admin-guide/en-US/admin_ACLs.xml b/doc/admin-guide/en-US/admin_ACLs.xml
index 156e52c17f2..edad2d67d60 100644
--- a/doc/admin-guide/en-US/admin_ACLs.xml
+++ b/doc/admin-guide/en-US/admin_ACLs.xml
@@ -13,7 +13,7 @@ be granted or denied access by using POSIX ACLs.
</para>
<section id="sect-Administration_Guide-ACLs-Activating_ACLs">
<title>Activating POSIX ACLs Support </title>
- <para>To use POSIX ACLs for a file or directory, the partition of the file or directory must be mounted with
+ <para>To use POSIX ACLs for a file or directory, the mount point where the file or directory exists must be mounted with
POSIX ACLs support.
</para>
<section id="sect-Administration_Guide-ACLs-Activating_ACLs-Server">
@@ -22,23 +22,28 @@ POSIX ACLs support.
</para>
<para><command># mount -o acl <replaceable>device-name</replaceable><replaceable>partition</replaceable></command>
</para>
+ <para>If the backend export directory is already mounted, use the following command:
+</para>
+ <para><command># mount -oremount,acl <replaceable>device-name</replaceable><replaceable>partition</replaceable></command>
+</para>
+
<para>For example:
</para>
<para><command># mount -o acl /dev/sda1 /export1 </command></para>
<para>Alternatively, if the partition is listed in the /etc/fstab file, add the following entry for the partition
to include the POSIX ACLs option:
</para>
- <para><command>LABEL=/work /export1 ext3 rw, acl 14 </command></para>
+ <para><command>LABEL=/work /export1 xfs rw,acl 1 4 </command></para>
</section>
<section>
<title>Activating POSIX ACLs Support on Client </title>
- <para>To mount the glusterfs volumes for POSIX ACLs support, use the following command:
+ <para>To mount the glusterfs volume with POSIX ACLs support, use the following command:
</para>
- <para><command># mount –t glusterfs -o acl <replaceable>severname:volume-id</replaceable><replaceable>mount point</replaceable></command>
+ <para><command># mount -t glusterfs -o acl <replaceable>servername:/volume-id</replaceable> <replaceable>mount point</replaceable></command>
</para>
<para>For example:
</para>
- <para><command># mount -t glusterfs -o acl 198.192.198.234:glustervolume /mnt/gluster</command>
+ <para><command># mount -t glusterfs -o acl 198.192.198.234:/glustervolume /mnt/gluster</command>
</para>
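<para>Once the volume is mounted with the acl option, the standard POSIX ACL tools can be used on the mount point. The sketch below is only illustrative (the user name, file name, and output are hypothetical and trimmed):</para>
<programlisting># setfacl -m u:john:rw /mnt/gluster/somefile
# getfacl /mnt/gluster/somefile
# file: mnt/gluster/somefile
user::rw-
user:john:rw-
group::r--
mask::rw-
other::r--</programlisting>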
</section>
</section>
diff --git a/doc/admin-guide/en-US/admin_commandref.xml b/doc/admin-guide/en-US/admin_commandref.xml
index df4c78f4869..05196b77326 100644
--- a/doc/admin-guide/en-US/admin_commandref.xml
+++ b/doc/admin-guide/en-US/admin_commandref.xml
@@ -2,8 +2,7 @@
<!-- This document was created with Syntext Serna Free. --><!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "docbookV4.5/docbookx.dtd" []>
<chapter id="chap-Administration_Guide-Com_Ref">
<title>Command Reference </title>
- <para>This section describes the available commands and includes the
-following section:
+ <para>This section describes the available commands and includes the following section:
</para>
<itemizedlist>
<listitem>
@@ -36,10 +35,8 @@ gluster [COMMANDS] [OPTIONS]
</para>
<para><emphasis role="bold">DESCRIPTION</emphasis>
</para>
- <para>The Gluster Console Manager is a command line utility for elastic volume management. You can run
-the gluster command on any export server. The command enables administrators to perform cloud
-operations such as creating, expanding, shrinking, rebalancing, and migrating volumes without
-needing to schedule server downtime.
+ <para>The Gluster Console Manager is a command line utility for elastic volume management. The 'gluster' command enables administrators to perform cloud
+operations such as creating, expanding, shrinking, rebalancing, and migrating volumes without needing to schedule server downtime.
</para>
<para><emphasis role="bold">COMMANDS</emphasis>
</para>
@@ -66,7 +63,8 @@ needing to schedule server downtime.
</row>
<row>
<entry>volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK ...</entry>
- <entry namest="cgen1" nameend="c1">Creates a new volume of the specified type using the specified bricks and transport type (the default transport type is tcp).</entry>
+ <entry namest="cgen1" nameend="c1">Creates a new volume of the specified type using the specified bricks and transport type (the default transport type is tcp).
+ NOTE: with the 3.3.0 release, the transport types 'rdma' and 'tcp,rdma' are not fully supported.</entry>
</row>
<row>
<entry>volume delete VOLNAME</entry>
@@ -81,10 +79,6 @@ needing to schedule server downtime.
<entry namest="cgen1" nameend="c1">Stops the specified volume. </entry>
</row>
<row>
- <entry>volume rename VOLNAME NEW-VOLNAME </entry>
- <entry namest="cgen1" nameend="c1">Renames the specified volume.</entry>
- </row>
- <row>
<entry>volume help </entry>
<entry namest="cgen1" nameend="c1">Displays help for the volume command.</entry>
</row>
@@ -94,16 +88,16 @@ needing to schedule server downtime.
</entry>
</row>
<row>
- <entry>volume add-brick VOLNAME NEW-BRICK ... </entry>
- <entry namest="cgen1" nameend="c1">Adds the specified brick to the specified volume.</entry>
+ <entry>volume add-brick VOLNAME [replica N] [stripe N] NEW-BRICK1 ... </entry>
+ <entry namest="cgen1" nameend="c1">Adds the specified brick(s) to the given VOLUME. Using add-brick, users can increase the replica/stripe count of the volume, or increase the volume capacity by adding the brick(s) without changing volume type. </entry>
</row>
<row>
- <entry>volume replace-brick VOLNAME (BRICK NEW-BRICK) start | pause | abort | status </entry>
- <entry namest="cgen1" nameend="c1">Replaces the specified brick.</entry>
+ <entry>volume replace-brick VOLNAME (BRICK NEW-BRICK) [start | start force | abort | status | commit | commit force] </entry>
+ <entry namest="cgen1" nameend="c1">Used to replace BRICK with NEW-BRICK in a given VOLUME. After replace-brick is complete, the changes to get reflected in volume information, replace-brick 'commit' command is neccessary.</entry>
</row>
<row>
- <entry>volume remove-brick VOLNAME [(replica COUNT)|(stripe COUNT)] BRICK ... </entry>
- <entry namest="cgen1" nameend="c1">Removes the specified brick from the specified volume.</entry>
+ <entry>volume remove-brick VOLNAME [replica N] BRICK1 ... [start | stop | status | commit | force ] </entry>
+ <entry namest="cgen1" nameend="c1">Removes the specified brick(s) from the specified volume. 'remove-brick' command can be used to reduce the replica count of the volume when 'replica N' option is given. To ensure data migration from the removed brick to existing bricks, give 'start' sub-command at the end of the command. After the 'status' command says remove-brick operation is complete, user can 'commit' the changes to volume file. Using 'remove-brick' without 'start' option works similar to 'force' command, which makes the changes to volume configuration without migrating the data.</entry>
</row>
<row>
<entry namest="c0" nameend="c1" align="left">
@@ -112,7 +106,7 @@ needing to schedule server downtime.
</row>
<row>
<entry>volume rebalance VOLNAME start</entry>
- <entry namest="cgen1" nameend="c1">Starts rebalancing the specified volume.</entry>
+ <entry namest="cgen1" nameend="c1">Starts rebalancing of the data on specified volume.</entry>
</row>
<row>
<entry>volume rebalance VOLNAME stop </entry>
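<para>For example, a typical rebalance cycle on a hypothetical volume (output omitted) is:</para>
<programlisting># gluster volume rebalance test-volume start
# gluster volume rebalance test-volume status
# gluster volume rebalance test-volume stop</programlisting>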
@@ -128,16 +122,31 @@ needing to schedule server downtime.
</entry>
</row>
<row>
- <entry>volume log filename VOLNAME [BRICK] DIRECTORY </entry>
- <entry namest="cgen1" nameend="c1">Sets the log directory for the corresponding volume/brick. </entry>
- </row>
- <row>
<entry>volume log rotate VOLNAME [BRICK] </entry>
<entry namest="cgen1" nameend="c1">Rotates the log file for corresponding volume/brick.</entry>
</row>
+
+ <row>
+ <entry namest="c0" nameend="c1" align="left">
+ <emphasis role="bold">Debugging</emphasis>
+ </entry>
+ </row>
<row>
- <entry>volume log locate VOLNAME [BRICK] </entry>
- <entry namest="cgen1" nameend="c1">Locates the log file for corresponding volume/brick. </entry>
+ <entry>volume top VOLNAME {[open|read|write|opendir|readdir [nfs]] |[read-perf|write-perf [nfs|{bs COUNT count COUNT}]]|[clear [nfs]]} [BRICK] [list-cnt COUNT]</entry>
+ <entry namest="cgen1" nameend="c1">Shows the operation details on the volume depending on the arguments given. </entry>
+ </row>
+
+ <row>
+ <entry>volume profile VOLNAME {start|info|stop} [nfs]</entry>
+ <entry namest="cgen1" nameend="c1">Shows the file operation details on each bricks of the volume.</entry>
+ </row>
+ <row>
+ <entry>volume status [all | VOLNAME] [nfs|shd|BRICK] [detail|clients|mem|inode|fd|callpool]</entry>
+ <entry namest="cgen1" nameend="c1">Show details of activity, internal data of the processes (nfs/shd/BRICK) corresponding to one of the next argument given. If now argument is given, this command outputs bare minimum details of the current status (include PID of brick process etc) of volume's bricks.</entry>
+ </row>
+ <row>
+ <entry>volume statedump VOLNAME [nfs] [all|mem|iobuf|callpool|priv|fd|inode|history]</entry>
+ <entry namest="cgen1" nameend="c1">Takes a statedump of the corresponding process, which captures most of its internal details. </entry>
</row>
<row>
<entry namest="c0" nameend="c1" align="left">
@@ -182,7 +191,6 @@ needing to schedule server downtime.
</row>
<row>
<entry morerows="10">volume geo-replication MASTER SLAVE config [options]</entry>
- <entry/>
<entry>Configure geo-replication options between the hosts specified by MASTER and SLAVE. </entry>
</row>
<row>
@@ -226,7 +234,6 @@ needing to schedule server downtime.
<entry>The number of simultaneous files/directories that can be synchronized.</entry>
</row>
<row>
- <entry/>
<entry>ignore-deletes</entry>
<entry>If this option is set to 1, a file deleted on master will not trigger a delete operation on the slave. Hence, the slave will remain as a superset of the master and can be used to recover the master in case of crash and/or accidental delete.</entry>
</row>
@@ -237,7 +244,6 @@ needing to schedule server downtime.
</row>
<row>
<entry>help</entry>
- <entry/>
<entry>Display the command options.</entry>
</row>
<row>
@@ -251,10 +257,8 @@ needing to schedule server downtime.
<para><emphasis role="bold">FILES</emphasis>
</para>
- <para>/etc/glusterd/*
+ <para>/var/lib/glusterd/*
</para>
- <para><emphasis role="bold">SEE ALSO </emphasis></para>
- <para>fusermount(1), mount.glusterfs(8), glusterfs-volgen(8), glusterfs(8), glusterd(8)</para>
</section>
<section>
<title>glusterd Daemon </title>
@@ -326,9 +330,7 @@ needing to schedule server downtime.
<para><emphasis role="bold">FILES</emphasis>
</para>
- <para>/etc/glusterd/*
+ <para>/var/lib/glusterd/*
</para>
- <para><emphasis role="bold">SEE ALSO </emphasis></para>
- <para>fusermount(1), mount.glusterfs(8), glusterfs-volgen(8), glusterfs(8), gluster(8)</para>
</section>
</chapter>
diff --git a/doc/admin-guide/en-US/admin_console.xml b/doc/admin-guide/en-US/admin_console.xml
index ebf273935ca..74d12b965d9 100644
--- a/doc/admin-guide/en-US/admin_console.xml
+++ b/doc/admin-guide/en-US/admin_console.xml
@@ -2,11 +2,11 @@
<!-- This document was created with Syntext Serna Free. --><!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "docbookV4.5/docbookx.dtd" []>
<chapter>
<title>Using the Gluster Console Manager – Command Line Utility</title>
- <para>The Gluster Console Manager is a single command line utility that simplifies configuration and management of your storage environment. The Gluster Console Manager is similar to the LVM (Logical Volume Manager) CLI or ZFS Command Line Interface, but across multiple storage servers. You can use the Gluster Console Manager online, while volumes are mounted and active. Gluster automatically synchronizes volume configuration information across all Gluster servers.</para>
- <para>Using the Gluster Console Manager, you can create new volumes, start volumes, and stop volumes, as required. You can also add bricks to volumes, remove bricks from existing volumes, as well as change translator settings, among other operations.</para>
- <para>You can also use the commands to create scripts for automation, as well as use the commands as an API to allow integration with third-party applications. </para>
+ <para>The Gluster Console Manager is a single command line utility that simplifies configuration and management of your storage environment. The Gluster Console Manager is similar to the LVM (Logical Volume Manager) CLI or ZFS Command Line Interface, but it works across multiple storage servers, keeping them in sync. You can use the Gluster Console Manager even while volumes are mounted and active. Gluster automatically synchronizes volume configuration information across all Gluster servers.</para>
+ <para>Using the Gluster Console Manager, you can create new volumes, start volumes, and stop volumes, as required. You can also add bricks to volumes, remove bricks from existing volumes, as well as change volume settings (such as translator-specific options), among other operations.</para>
+ <para>You can also use these CLI commands to create scripts for automation, as well as use the commands as an API to allow integration with third-party applications. </para>
<para><emphasis role="bold">Running the Gluster Console Manager</emphasis></para>
- <para>You can run the Gluster Console Manager on any GlusterFS server either by invoking the commands or by running the Gluster CLI in interactive mode. You can also use the gluster command remotely using SSH. </para>
+ <para>You can run the Gluster Console Manager on any GlusterFS server either by invoking the commands or by running the Gluster CLI in interactive mode. You can also use the gluster command remotely using SSH. </para>
<itemizedlist>
<listitem>
<para>To run commands directly: </para>
@@ -25,4 +25,5 @@
<para>Display the status of the peer.</para>
</listitem>
</itemizedlist>
+ <para>With any 'gluster' installation, use <command>gluster help</command> to list all the supported CLI commands.</para>
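<para>For example, the same query can be issued either directly or from the interactive shell (output omitted):</para>
<programlisting># gluster peer status

# gluster
gluster> peer status
gluster> volume info
gluster> quit</programlisting>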
</chapter>
diff --git a/doc/admin-guide/en-US/admin_directory_Quota.xml b/doc/admin-guide/en-US/admin_directory_Quota.xml
index 8a1012a6ac2..83c4ff451fb 100644
--- a/doc/admin-guide/en-US/admin_directory_Quota.xml
+++ b/doc/admin-guide/en-US/admin_directory_Quota.xml
@@ -2,14 +2,14 @@
<!-- This document was created with Syntext Serna Free. --><!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "docbookV4.5/docbookx.dtd" []>
<chapter id="chap-Administration_Guide-Dir_Quota">
<title>Managing Directory Quota </title>
- <para>Directory quotas in GlusterFS allow you to set limits on usage of disk space by directories or volumes.
-The storage administrators can control the disk space utilization at the directory and/or volume
-levels in GlusterFS by setting limits to allocatable disk space at any level in the volume and directory
-hierarchy. This is particularly useful in cloud deployments to facilitate utility billing model.
+ <para>Directory quotas in GlusterFS allow you to set limits on the disk space used by a given directory.
+Storage administrators can control disk space utilization at the directory level in GlusterFS by setting quota limits on the given directory. If the
+admin sets a quota limit on the '/' of the volume, it effectively acts as a volume-level quota. GlusterFS's quota implementation allows a different quota
+limit on any directory, and the limits can be nested. This is particularly useful in cloud deployments to facilitate a utility billing model.
</para>
<para> <note>
<para>For now, only Hard limit is supported. Here, the limit cannot be exceeded and attempts to use
-more disk space or inodes beyond the set limit will be denied.
+more disk space beyond the set limit will be denied.
</para>
</note></para>
<para>System administrators can also monitor the resource utilization to limit the storage for the users
@@ -46,7 +46,7 @@ immediately after creating that directory. For more information on setting disk
<para>For example, to enable quota on test-volume:
</para>
<programlisting># gluster volume quota test-volume enable
-Quota is enabled on /test-volume</programlisting>
+Quota is enabled on /test-volume</programlisting> <!-- why '/' here before test-volume? its volume name, not directory -->
</listitem>
</itemizedlist>
</section>
@@ -61,10 +61,10 @@ Quota is enabled on /test-volume</programlisting>
<para>Disable the quota using the following command:
</para>
<para><command># gluster volume quota <replaceable>VOLNAME</replaceable> disable </command></para>
- <para>For example, to disable quota translator on test-volume:
+ <para>For example, to disable quota on test-volume:
</para>
<programlisting># gluster volume quota test-volume disable
-Quota translator is disabled on /test-volume</programlisting>
+Quota translator is disabled on /test-volume</programlisting> <!-- why '/' here before test-volume? its volume name, not directory -->
</listitem>
</itemizedlist>
</section>
@@ -87,7 +87,7 @@ export directory:
<programlisting># gluster volume quota test-volume limit-usage /data 10GB
Usage limit has been set on /data</programlisting>
<para><note>
- <para>In a multi-level directory hierarchy, the strictest disk limit will be considered for enforcement.
+ <para>In a multi-level directory hierarchy, the minimum of the disk limits in the entire hierarchy will be considered for enforcement (see the example below).
</para>
</note></para>
</listitem>
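<para>For example, with hypothetical limits, a stricter limit on the volume root caps every directory beneath it regardless of a larger per-directory limit:</para>
<programlisting># gluster volume quota test-volume limit-usage / 5GB
# gluster volume quota test-volume limit-usage /data 10GB
# gluster volume quota test-volume list</programlisting>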
@@ -115,6 +115,7 @@ command:
</emphasis>
/Test/data 10 GB 6 GB
/Test/data1 10 GB 4 GB</programlisting>
+<para> Note that the directory listed here is not an absolute directory name but a path relative to the volume's root ('/'). For example, if 'test-volume' is mounted on '/mnt/glusterfs', then '/Test/data' above refers to '/mnt/glusterfs/Test/data'. </para>
</listitem>
<listitem>
<para>Display disk limit information on a particular directory on which limit is set, using the following
diff --git a/doc/admin-guide/en-US/admin_geo-replication.xml b/doc/admin-guide/en-US/admin_geo-replication.xml
index b546bb8da8c..4691116acb8 100644
--- a/doc/admin-guide/en-US/admin_geo-replication.xml
+++ b/doc/admin-guide/en-US/admin_geo-replication.xml
@@ -39,7 +39,7 @@
</thead>
<tbody>
<row>
- <entry>Mirrors data across clusters</entry>
+ <entry>Mirrors data across nodes in a cluster</entry>
<entry>Mirrors data across geographically distributed clusters </entry>
</row>
<row>
@@ -47,7 +47,7 @@
<entry>Ensures backing up of data for disaster recovery</entry>
</row>
<row>
- <entry>Synchronous replication (each and every file operation is sent across all the bricks)</entry>
+ <entry>Synchronous replication (each and every file modification operation is sent across all the bricks)</entry>
<entry>Asynchronous replication (checks for the changes in files periodically and syncs them on detecting differences) </entry>
</row>
</tbody>
@@ -79,11 +79,11 @@
<para>Geo-replication provides an incremental replication service over Local Area Networks (LANs), Wide Area Network (WANs), and across the Internet. This section illustrates the most common deployment scenarios for Geo-replication, including the following: </para>
<itemizedlist>
<listitem>
- <para>Geo-replication over LAN
+ <para>Geo-replication over LAN
</para>
</listitem>
<listitem>
- <para>Geo-replication over WAN
+ <para>Geo-replication over WAN
</para>
</listitem>
<listitem>
@@ -104,7 +104,7 @@
</imageobject>
</mediaobject>
<para><emphasis role="bold">Geo-replication over WAN</emphasis></para>
- <para>You can configure Geo-replication to replicate data over a Wide Area Network.</para>
+ <para>You can configure Geo-replication to replicate data over a Wide Area Network.</para>
<mediaobject>
<textobject>
<phrase>
@@ -116,7 +116,7 @@
</imageobject>
</mediaobject>
<para><emphasis role="bold">Geo-replication over Internet</emphasis></para>
- <para>You can configure Geo-replication to mirror data over the Internet.</para>
+ <para>You can configure Geo-replication to mirror data over the Internet.</para>
<mediaobject>
<textobject>
<phrase>
@@ -128,7 +128,7 @@
</imageobject>
</mediaobject>
<para><emphasis role="bold">Multi-site cascading Geo-replication</emphasis> </para>
- <para>You can configure Geo-replication to mirror data in a cascading fashion across multiple sites. </para>
+ <para>You can configure Geo-replication to mirror data in a cascading fashion across multiple sites. </para>
<mediaobject>
<textobject>
<phrase>
@@ -142,16 +142,16 @@
</section>
<section id="chap-Administration_Guide-Geo_Rep-Preparation-Deployment_Overview">
<title>Geo-replication Deployment Overview</title>
- <para>Deploying Geo-replication involves the following steps:</para>
+ <para>Deploying Geo-replication involves the following steps:</para>
<orderedlist>
<listitem>
- <para>Verify that your environment matches the minimum system requirement. For more information, see <xref linkend="chap-Administration_Guide-Geo_Rep-Preparation-Minimum_Reqs"/>.</para>
+ <para>Verify that your environment matches the minimum system requirements. For more information, see <xref linkend="chap-Administration_Guide-Geo_Rep-Preparation-Minimum_Reqs"/>.</para>
</listitem>
<listitem>
<para>Determine the appropriate deployment scenario. For more information, see <xref linkend="chap-Administration_Guide-Geo_Rep-Preparation-Deployment_options"/>.</para>
</listitem>
<listitem>
- <para>Start Geo-replication on master and slave systems, as required. For more information, see <xref linkend="chap-Administration_Guide-Geo_Rep-Starting"/>.</para>
+ <para>Start Geo-replication on master and slave systems, as required. For more information, see <xref linkend="chap-Administration_Guide-Geo_Rep-Starting"/>.</para>
</listitem>
</orderedlist>
</section>
@@ -180,7 +180,7 @@
<row>
<entry>Filesystem</entry>
<entry>GlusterFS 3.2 or higher</entry>
- <entry>GlusterFS 3.2 or higher (GlusterFS needs to be installed, but does not need to be running), ext3, ext4, or XFS (any other POSIX compliant file system would work, but has not been tested extensively) </entry>
+ <entry>GlusterFS 3.2 or higher, ext3, ext4, or XFS (any other POSIX compliant file system would work, but has not been tested extensively) </entry>
</row>
<row>
<entry>Python </entry>
@@ -194,8 +194,8 @@
</row>
<row>
<entry>Remote synchronization</entry>
- <entry>rsync 3.0.7 or higher </entry>
- <entry>rsync 3.0.7 or higher </entry>
+ <entry>rsync 3.0.0 or higher </entry>
+ <entry>rsync 3.0.0 or higher </entry>
</row>
<row>
<entry>FUSE </entry>
@@ -211,37 +211,36 @@
<para><emphasis role="bold">Time Synchronization</emphasis> </para>
<itemizedlist>
<listitem>
- <para>On bricks of a geo-replication master volume, all the servers&apos; time must be uniform. You are recommended to set up NTP (Network Time Protocol) service to keep the bricks sync in time and avoid out-of-time sync effect.</para>
+ <para>All servers that are part of a geo-replication master volume need to have their clocks in sync. It is recommended to run an NTP (Network Time Protocol) daemon on these servers to keep the clocks synchronized.</para>
<para>For example: In a Replicated volume where brick1 of the master is at 12.20 hrs and brick 2 of the master is at 12.10 hrs with 10 minutes time lag, all the changes in brick2 between this period may go unnoticed during synchronization of files with Slave.</para>
- <para>For more information on setting up NTP, see <ulink url="http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Migration_Planning_Guide/ch04s07.html"/>.</para>
+ <para>For more information on setting up NTP daemon, see <ulink url="http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Migration_Planning_Guide/ch04s07.html"/>.</para>
</listitem>
</itemizedlist>
<para><emphasis role="bold">To setup Geo-replication for SSH </emphasis></para>
- <para>Password-less login has to be set up between the host machine (where geo-replication Start command will be issued) and the remote machine (where slave process should be launched through SSH).</para>
+ <para>Password-less login has to be set up between the host machine (where geo-replication start command will be issued) and the remote machine (where slave process should be launched through SSH).</para>
<orderedlist>
<listitem>
- <para>On the node where geo-replication sessions are to be set up, run the following command:</para>
- <para><command># ssh-keygen -f /etc/glusterd/geo-replication/secret.pem</command>
+ <para>On the node where geo-replication start commands are to be issued, run the following command:</para>
+ <para><command># ssh-keygen -f /var/lib/glusterd/geo-replication/secret.pem</command>
</para>
<para>Press Enter twice to avoid passphrase.
</para>
</listitem>
<listitem>
<para>Run the following command on master for all the slave hosts: </para>
- <para><command># ssh-copy-id -i /etc/glusterd/geo-replication/secret.pem.pub <varname>user</varname>@<varname>slavehost</varname></command></para>
+ <para><command># ssh-copy-id -i /var/lib/glusterd/geo-replication/secret.pem.pub <varname>user</varname>@<varname>slavehost</varname></command></para>
</listitem>
</orderedlist>
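<para>After the two steps above, it is worth confirming that the master can reach the slave without being asked for a password; the command below (hypothetical user and slave host) should log in without prompting:</para>
<programlisting># ssh -i /var/lib/glusterd/geo-replication/secret.pem user@slavehost</programlisting>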
</section>
<section id="chap-Administration_Guide-Geo_Rep-Preparation-Settingup_Slave">
<title>Setting Up the Environment for a Secure Geo-replication Slave</title>
- <para>You can configure a secure slave using SSH so that master is granted a
-restricted access. With GlusterFS, you need not specify
-configuration parameters regarding the slave on the master-side
-configuration. For example, the master does not require the location of
+ <para>You can configure a secure slave using SSH so that the master is granted
+restricted access. With GlusterFS 3.3, you need not specify slave
+configuration parameters on the master side. For example, the master does not require the location of
the rsync program on slave but the slave must ensure that rsync is in
the PATH of the user which the master connects using SSH. The only
information that master and slave have to negotiate are the slave-side
-user account, slave&apos;s resources that master uses as slave resources, and
+user account, slave&apos;s resources and
the master&apos;s public key. Secure access to the slave can be established
using the following options:</para>
<itemizedlist>
@@ -256,43 +255,39 @@ using the following options:</para>
</listitem>
</itemizedlist>
<para><emphasis role="bold">Backward Compatibility</emphasis> </para>
- <para>Your existing Ge-replication environment will work with GlusterFS,
-except for the following:</para>
+ <para>Your existing Geo-replication environment will work with GlusterFS
+ 3.3, except for the following:</para>
<itemizedlist>
<listitem>
- <para>The process of secure reconfiguration affects only the glusterfs
+ <para>The process of secure reconfiguration affects only the GlusterFS
instance on slave. The changes are transparent to master with the
exception that you may have to change the SSH target to an unprivileged
- account on slave.</para>
+ account on the slave.</para>
</listitem>
<listitem>
- <para>The following are the some exceptions where this might not work:</para>
+ <para>The following are some exceptions where backward compatibility cannot be provided:</para>
<para><itemizedlist>
<listitem>
- <para>Geo-replication URLs which specify the slave resource when configuring master will include the following special characters: space, *, ?, [;</para>
+ <para>Geo-replication URLs which specify the slave resource include the following special characters: space, *, ?, [;</para>
</listitem>
<listitem>
- <para>Slave must have a running instance of glusterd, even if there is no
-gluster volume among the mounted slave resources (that is, file tree
-slaves are used exclusively) .</para>
+ <para>Slave does not have a running instance of glusterd.</para>
</listitem>
</itemizedlist></para>
</listitem>
</itemizedlist>
<section>
<title>Restricting Remote Command Execution</title>
- <para>If you restrict remote command execution, then the Slave audits commands
-coming from the master and the commands related to the given
-geo-replication session is allowed. The Slave also provides access only
-to the files within the slave resource which can be read or manipulated
-by the Master.</para>
+ <para>If you restrict remote command execution, then the slave audits commands
+coming from the master and only the pre-configured commands are allowed. The slave also provides access only
+to the files which are pre-configured to be read or manipulated by the master.</para>
<para>To restrict remote command execution:</para>
<orderedlist>
<listitem>
<para>Identify the location of the gsyncd helper utility on Slave. This utility is installed in <filename>PREFIX/libexec/glusterfs/gsyncd</filename>, where PREFIX is a compile-time parameter of glusterfs. For example, <filename>--prefix=PREFIX</filename> to the configure script with the following common values<filename> /usr, /usr/local, and /opt/glusterfs/glusterfs_version</filename>.</para>
</listitem>
<listitem>
- <para>Ensure that command invoked from master to slave passed through the slave&apos;s gsyncd utility. </para>
+ <para>Ensure that commands invoked from the master on the slave are passed through the slave&apos;s gsyncd utility. </para>
<para>You can use either of the following two options:</para>
<itemizedlist>
<listitem>
@@ -312,14 +307,13 @@ account, then set it up by creating a new user with UID 0. </para>
<section>
<title>Using Mountbroker for Slaves </title>
<para><filename>mountbroker</filename> is a new service of glusterd. This service allows an
-unprivileged process to own a GlusterFS mount by registering a label
-(and DSL (Domain-specific language) options ) with glusterd through a
-glusterd volfile. Using CLI, you can send a mount request to glusterd to
-receive an alias (symlink) of the mounted volume.</para>
- <para>A request from the agent , the unprivileged slave agents use the
-mountbroker service of glusterd to set up an auxiliary gluster mount for
-the agent in a special environment which ensures that the agent is only
-allowed to access with special parameters that provide administrative
+unprivileged process to own a GlusterFS mount. This is accomplished by registering a label
+(and DSL (Domain-specific language) options ) with glusterd through the
+glusterd volfile. Using CLI, you can send a mount request to glusterd and
+receive an alias (symlink) to the mounted volume.</para>
+ <para>The unprivileged process/agent uses the
+mountbroker service of glusterd to set up an auxiliary gluster mount. The mount
+is set up so as to allow only that agent administrative
level access to the particular volume.</para>
<para><emphasis role="bold">To setup an auxiliary gluster mount for the agent</emphasis>:</para>
<orderedlist>
@@ -330,15 +324,17 @@ level access to the particular volume.</para>
<para>Create a unprivileged account. For example, <filename> geoaccount</filename>. Make it a member of <filename> geogroup</filename>.</para>
</listitem>
<listitem>
- <para>Create a new directory owned by root and with permissions <emphasis role="italic">0711.</emphasis> For example, create a create mountbroker-root directory <filename>/var/mountbroker-root</filename>.</para>
+ <para>Create a new directory as superuser to be used as mountbroker's root. </para>
</listitem>
<listitem>
- <para>Add the following options to the glusterd volfile, assuming the name of the slave gluster volume as <filename>slavevol</filename>:</para>
+ <para> Change the permissions of the directory to <emphasis role="italic">0711</emphasis>. </para>
+ </listitem>
+ <listitem>
+ <para>Add the following options to the glusterd volfile, located at /etc/glusterfs/glusterd.vol, assuming the name of the slave gluster volume is <filename>slavevol</filename>:</para>
<para><command>option mountbroker-root /var/mountbroker-root </command></para>
<para><command>option mountbroker-geo-replication.geoaccount slavevol</command></para>
<para><command>option geo-replication-log-group geogroup</command></para>
- <para>If you are unable to locate the glusterd volfile at <filename>/etc/glusterfs/glusterd.vol</filename>, you can create a volfile containing both the default configuration and the above options and place it at <filename>/etc/glusterfs/</filename>. </para>
- <para>A sample glusterd volfile along with default options:</para>
+ <para>A sample glusterd volfile along with default options:</para>
<para><screen>volume management
type mgmt/glusterd
option working-directory /etc/glusterd
@@ -347,17 +343,18 @@ level access to the particular volume.</para>
option transport.socket.keepalive-interval 2
option transport.socket.read-fail-log off
- option mountbroker-root /var/mountbroker-root
+ option mountbroker-root /var/mountbroker-root
option mountbroker-geo-replication.geoaccount slavevol
option geo-replication-log-group geogroup
end-volume</screen></para>
- <para>If you host multiple slave volumes on Slave, you can repeat step 2. for each of them and add the following options to the <filename>volfile</filename>:</para>
+ <para>If you host multiple slave volumes, you can repeat step 2. for each of the slave volumes and add the following options to the <filename>volfile</filename>:</para>
<para><screen>option mountbroker-geo-replication.geoaccount2 slavevol2
option mountbroker-geo-replication.geoaccount3 slavevol3</screen></para>
</listitem>
<listitem>
<para>Setup Master to access Slave as <filename>geoaccount@Slave</filename>.</para>
- <para>You can add multiple slave volumes within the same account (geoaccount) by providing comma-separated list (without spaces) as the argument of <command>mountbroker-geo-replication.geogroup</command>. You can also have multiple options of the form <command>mountbroker-geo-replication.*</command>. It is recommended to use one service account per Master machine. For example, if there are multiple slave volumes on Slave for the master machines Master1, Master2, and Master3, then create a dedicated service user on Slave for them by repeating Step 2. for each (like geogroup1, geogroup2, and geogroup3), and then add the following corresponding options to the volfile:
+ <para>You can add multiple slave volumes within the same account (geoaccount) by providing a comma-separated list of slave
+ volumes (without spaces) as the argument of <command>mountbroker-geo-replication.geogroup</command>. You can also have multiple options of the form <command>mountbroker-geo-replication.*</command>. It is recommended to use one service account per Master machine. For example, if there are multiple slave volumes on Slave for the master machines Master1, Master2, and Master3, then create a dedicated service user on Slave for them by repeating Step 2. for each (like geogroup1, geogroup2, and geogroup3), and then add the following corresponding options to the volfile:
</para>
<para><command>option mountbroker-geo-replication.geoaccount1 slavevol11,slavevol12,slavevol13</command></para>
<para><command>option mountbroker-geo-replication.geoaccount2 slavevol21,slavevol22</command></para>
@@ -365,17 +362,16 @@ option mountbroker-geo-replication.geoaccount3 slavevol3</screen></para>
<para>
Now set up Master1 to ssh to geoaccount1@Slave, etc.
</para>
- <para>You must restart glusterd after making changes in the configuration to effect the updates. </para>
+ <para>You must restart glusterd to make the configuration changes effective. </para>
</listitem>
</orderedlist>
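<para>A consolidated sketch of the account and directory setup described in the steps above, using the example names from this section (adapt the names to your deployment; the service restart command depends on your distribution):</para>
<programlisting># groupadd geogroup
# useradd -G geogroup geoaccount
# mkdir /var/mountbroker-root
# chmod 0711 /var/mountbroker-root
# service glusterd restart</programlisting>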
</section>
<section>
<title>Using IP based Access Control</title>
- <para>You can use IP based access control method to provide access control for
-the slave resources using IP address. You can use method for both Slave
-and file tree slaves, but in the section, we are focusing on file tree
-slaves using this method.</para>
- <para>To set access control based on IP address for file tree slaves:</para>
+ <para>You can provide access control for the slave resources using IP
+ addresses. You can use this method for both Gluster volume and
+ file tree slaves, but in this section, we are focusing on file tree slaves.</para>
+ <para>To set IP address based access control for file tree slaves:</para>
<orderedlist>
<listitem>
<para>Set a general restriction for accessibility of file tree resources:
@@ -427,7 +423,7 @@ comma-separated lists of CIDR subnets.</para>
<listitem>
<para>Start geo-replication between the hosts using the following command:
</para>
- <para><command># gluster volume geo-replication <replaceable>MASTER SLAVE</replaceable> start</command>
+ <para><command># gluster volume geo-replication <replaceable>MASTER SLAVE</replaceable> start</command>
</para>
<para>For example:
</para>
@@ -435,7 +431,8 @@ comma-separated lists of CIDR subnets.</para>
Starting geo-replication session between Volume1
example.com:/data/remote_dir has been successful</programlisting></para>
<para><note>
- <para>You may need to configure the service before starting Gluster Geo-replication. For more information, see <xref linkend="chap-Administration_Guide-Geo_Rep-Starting-Configure"/>.</para>
+ <para>You may need to configure the Geo-replication service before
+ starting it. For more information, see <xref linkend="chap-Administration_Guide-Geo_Rep-Starting-Configure"/>.</para>
</note></para>
</listitem>
</itemizedlist>
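<para>Once the session is started, its state can be checked with the status sub-command; for example (hypothetical master volume and slave, output omitted):</para>
<programlisting># gluster volume geo-replication Volume1 example.com:/data/remote_dir status</programlisting>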
@@ -730,3 +727,4 @@ example.com:/data/remote_dir has been successful</programlisting></para>
</para>
</section>
</chapter>
+
diff --git a/doc/admin-guide/en-US/admin_managing_volumes.xml b/doc/admin-guide/en-US/admin_managing_volumes.xml
index 0c4d2e922cf..333d46cd6e4 100644
--- a/doc/admin-guide/en-US/admin_managing_volumes.xml
+++ b/doc/admin-guide/en-US/admin_managing_volumes.xml
@@ -36,7 +36,7 @@
<title>Tuning Volume Options</title>
<para>You can tune volume options, as needed, while the cluster is online and available. </para>
<para><note>
- <para>Red Hat recommends you to set server.allow-insecure option to ON if there are too many bricks in each volume or if there are too many services which have already utilized all the privileged ports in the system. Turning this option ON allows ports to accept/reject messages from insecure ports. So, use this option only if your deployment requires it. </para>
+ <para>It is recommended to set the server.allow-insecure option to ON if there are too many bricks in each volume or if there are too many services which have already utilized all the privileged ports in the system. Turning this option ON allows the server to accept connections from insecure (non-privileged) ports. So, use this option only if your deployment requires it. </para>
</note></para>
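<para>If your deployment does require it, the option mentioned in the note is set like any other volume option; a sketch with a hypothetical volume name:</para>
<programlisting># gluster volume set test-volume server.allow-insecure on</programlisting>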
<para>To tune volume options </para>
<itemizedlist>
@@ -387,7 +387,7 @@ Brick4: server4:/exp4</programlisting></para>
<para>Remove the brick using the following command:</para>
<para><command># gluster volume remove-brick <varname>VOLNAME</varname><replaceable> BRICK</replaceable></command> <command>start</command></para>
<para>For example, to remove server2:/exp2:</para>
- <para><programlisting># gluster volume remove-brick test-volume server2:/exp2
+ <para><programlisting># gluster volume remove-brick test-volume server2:/exp2 start
Removing brick(s) can result in data loss. Do you want to Continue? (y/n)</programlisting></para>
</listitem>
@@ -405,6 +405,13 @@ Removing brick(s) can result in data loss. Do you want to Continue? (y/n)</progr
617c923e-6450-4065-8e33-865e28d9428f 34 340 162 in progress</screen></para>
</listitem>
<listitem>
+ <para>Commit the remove brick operation using the following command:</para>
+ <para><command># gluster volume remove-brick <varname>VOLNAME</varname><replaceable> BRICK</replaceable></command><command> commit</command></para>
+ <para>For example, to commit the remove brick operation on the server2:/exp2 brick:</para>
+ <para><screen># gluster volume remove-brick test-volume server2:/exp2 commit</screen></para>
+ <para><programlisting>Remove Brick successful </programlisting></para>
+ </listitem>
+ <listitem>
<para>Check the volume information using the following command: </para>
<para><command># gluster volume info </command></para>
<para>The command displays information similar to the following:</para>
diff --git a/doc/admin-guide/en-US/admin_setting_volumes.xml b/doc/admin-guide/en-US/admin_setting_volumes.xml
index 6a8468d5f11..051fb723a04 100644
--- a/doc/admin-guide/en-US/admin_setting_volumes.xml
+++ b/doc/admin-guide/en-US/admin_setting_volumes.xml
@@ -41,7 +41,7 @@ information, see <xref linkend="sect-Administration_Guide-Setting_Volumes-Stripe
<itemizedlist>
<listitem>
<para>Create a new volume :</para>
- <para><command># gluster volume create<replaceable> NEW-VOLNAME</replaceable> [stripe <replaceable>COUNT</replaceable> | replica <replaceable>COUNT</replaceable>] [transport tcp | rdma | tcp, rdma] <replaceable>NEW-BRICK1 NEW-BRICK2 NEW-BRICK3...</replaceable></command></para>
+ <para><command># gluster volume create<replaceable> NEW-VOLNAME</replaceable> [stripe <replaceable>COUNT</replaceable> | replica <replaceable>COUNT</replaceable>] [transport [tcp | rdma | tcp,rdma]] <replaceable>NEW-BRICK1 NEW-BRICK2 NEW-BRICK3...</replaceable></command></para>
<para>For example, to create a volume called test-volume consisting of server3:/exp3 and server4:/exp4:</para>
<para><programlisting># gluster volume create test-volume server3:/exp3 server4:/exp4
Creation of test-volume has been successful
@@ -69,7 +69,7 @@ Please start the volume to access data.</programlisting></para>
</listitem>
<listitem>
<para>Create the distributed volume:</para>
- <para><command># gluster volume create <replaceable>NEW-VOLNAME</replaceable> [transport tcp | rdma | tcp,rdma] <replaceable>NEW-BRICK...</replaceable></command></para>
+ <para><command># gluster volume create <replaceable>NEW-VOLNAME</replaceable> [transport [tcp | rdma | tcp,rdma]] <replaceable>NEW-BRICK...</replaceable></command></para>
<para>For example, to create a distributed volume with four storage servers using tcp:</para>
<para><programlisting># gluster volume create test-volume server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
Creation of test-volume has been successful
@@ -119,7 +119,7 @@ To protect against server and disk failures, it is recommended that the bricks o
</listitem>
<listitem>
<para>Create the replicated volume:</para>
- <para><command># gluster volume create <replaceable>NEW-VOLNAME</replaceable> [replica <replaceable>COUNT</replaceable>] [transport tcp | rdma tcp,rdma] <replaceable>NEW-BRICK...</replaceable></command></para>
+ <para><command># gluster volume create <replaceable>NEW-VOLNAME</replaceable> [replica <replaceable>COUNT</replaceable>] [transport [tcp | rdma | tcp,rdma]] <replaceable>NEW-BRICK...</replaceable></command></para>
<para>For example, to create a replicated volume with two storage servers:</para>
<para><programlisting># gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
Creation of test-volume has been successful
@@ -152,7 +152,7 @@ Please start the volume to access data.</programlisting></para>
</listitem>
<listitem>
<para>Create the striped volume:</para>
- <para><command># gluster volume create <replaceable>NEW-VOLNAME</replaceable> [stripe <replaceable>COUNT</replaceable>] [transport tcp | rdma | tcp,rdma] <replaceable>NEW-BRICK...</replaceable></command></para>
+ <para><command># gluster volume create <replaceable>NEW-VOLNAME</replaceable> [stripe <replaceable>COUNT</replaceable>] [transport [tcp | rdma | tcp,rdma]] <replaceable>NEW-BRICK...</replaceable></command></para>
<para>For example, to create a striped volume across two storage servers:</para>
<para><programlisting># gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2
Creation of test-volume has been successful
@@ -185,7 +185,7 @@ Please start the volume to access data.</programlisting></para>
</listitem>
<listitem>
<para>Create the distributed striped volume:</para>
- <para><command># gluster volume create <replaceable>NEW-VOLNAME</replaceable> [stripe <replaceable>COUNT</replaceable>] [transport tcp | rdma | tcp,rdma] <replaceable>NEW-BRICK...</replaceable></command></para>
+ <para><command># gluster volume create <replaceable>NEW-VOLNAME</replaceable> [stripe <replaceable>COUNT</replaceable>] [transport [tcp | rdma | tcp,rdma]] <replaceable>NEW-BRICK...</replaceable></command></para>
<para>For example, to create a distributed striped volume across eight storage servers:</para>
<para><programlisting># gluster volume create test-volume stripe 4 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8
Creation of test-volume has been successful
@@ -218,7 +218,7 @@ Please start the volume to access data.</programlisting></para>
</listitem>
<listitem>
<para>Create the distributed replicated volume:</para>
- <para><command># gluster volume create <replaceable>NEW-VOLNAME</replaceable> [replica <replaceable>COUNT</replaceable>] [transport tcp | rdma | tcp,rdma] <replaceable>NEW-BRICK...</replaceable></command></para>
+ <para><command># gluster volume create <replaceable>NEW-VOLNAME</replaceable> [replica <replaceable>COUNT</replaceable>] [transport [tcp | rdma | tcp,rdma]] <replaceable>NEW-BRICK...</replaceable></command></para>
<para>For example, four node distributed (replicated) volume with a two-way mirror:
</para>
<para><programlisting># gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
@@ -251,7 +251,7 @@ a distributed striped replicated volume.
</listitem>
<listitem>
<para>Create a distributed striped replicated volume using the following command:</para>
- <para><command># gluster volume create <replaceable>NEW-VOLNAME</replaceable> [stripe <replaceable>COUNT</replaceable>] [replica <replaceable>COUNT</replaceable>] [transport tcp | rdma | tcp,rdma] <replaceable>NEW-BRICK...</replaceable></command></para>
+ <para><command># gluster volume create <replaceable>NEW-VOLNAME</replaceable> [stripe <replaceable>COUNT</replaceable>] [replica <replaceable>COUNT</replaceable>] [transport [tcp | rdma | tcp,rdma]] <replaceable>NEW-BRICK...</replaceable></command></para>
<para>For example, to create a distributed replicated striped volume across eight storage servers:
</para>
<para><programlisting># gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8
@@ -289,7 +289,7 @@ striped replicated volume.
</listitem>
<listitem>
<para>Create a striped replicated volume :</para>
- <para><command># gluster volume create <replaceable>NEW-VOLNAME</replaceable> [stripe <replaceable>COUNT</replaceable>] [replica <replaceable>COUNT</replaceable>] [transport tcp | rdma | tcp,rdma] <replaceable>NEW-BRICK...</replaceable></command></para>
+ <para><command># gluster volume create <replaceable>NEW-VOLNAME</replaceable> [stripe <replaceable>COUNT</replaceable>] [replica <replaceable>COUNT</replaceable>] [transport [tcp | rdma | tcp,rdma]] <replaceable>NEW-BRICK...</replaceable></command></para>
<para>For example, to create a striped replicated volume across four storage servers:
</para>
diff --git a/doc/admin-guide/en-US/admin_settingup_clients.xml b/doc/admin-guide/en-US/admin_settingup_clients.xml
index 22979acf477..233aac4389c 100644
--- a/doc/admin-guide/en-US/admin_settingup_clients.xml
+++ b/doc/admin-guide/en-US/admin_settingup_clients.xml
@@ -5,15 +5,15 @@
]>
<chapter id="chap-Administration_Guide-GlusterFS_Client">
<title>Accessing Data - Setting Up GlusterFS Client</title>
- <para>You can access gluster volumes in multiple ways. You can use Gluster Native Client method for high concurrency, performance and transparent failover in GNU/Linux clients. You can also use NFS v3 to access gluster volumes. Extensive testing has be done on GNU/Linux clients and NFS implementation in other operating system, such as FreeBSD, and Mac OS X, as well as Windows 7 (Professional and Up) and Windows Server 2003. Other NFS client implementations may work with gluster NFS server.</para>
- <para>You can use CIFS to access volumes when using Microsoft Windows as well as SAMBA clients. For this access method, Samba packages need to be present on the client side. </para>
- <section id="sect-Administration_Guide-Test_Chapter-GlusterFS_Client-Native">
+ <para>Gluster volumes can be accessed in multiple ways. You can use the Gluster Native Client method for high concurrency, performance and transparent failover on GNU/Linux clients. Gluster also exports volumes using the NFS v3 protocol.</para>
+ <para>CIFS can also be used to access volumes, by exporting the Gluster Native Client mount point as a Samba export.</para>
+ <section id="sect-Administration_Guide-GlusterFS_Client-Native">
<title>Gluster Native Client</title>
- <para>The Gluster Native Client is a FUSE-based client running in user space. Gluster Native Client is the recommended method for accessing volumes when high concurrency and high write performance is required.</para>
- <para>This section introduces the Gluster Native Client and explains how to install the software on client machines. This section also describes how to mount volumes on clients (both manually and automatically) and how to verify that the volume has mounted successfully. </para>
+ <para>The Gluster Native Client is a FUSE-based client running in user space. Gluster Native Client is the recommended method for accessing volumes when all the clustered features of GlusterFS have to be utilized.</para>
+ <para>This section introduces the Gluster Native Client and explains how to install the software on client machines. This section also describes how to mount volumes on clients (both manually and automatically). </para>
<section>
<title>Installing the Gluster Native Client</title>
- <para>Before you begin installing the Gluster Native Client, you need to verify that the FUSE module is loaded on the client and has access to the required modules as follows: </para>
+ <para>Gluster Native Client depends on the FUSE kernel module. To make sure the FUSE module is loaded, execute the following commands: </para>
<orderedlist>
<listitem>
<para>Add the FUSE loadable kernel module (LKM) to the Linux kernel:</para>
@@ -25,35 +25,24 @@
<para><command>fuse init (API version 7.13)</command></para>
</listitem>
</orderedlist>
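<para>A minimal form of the check above on a typical Linux distribution (the exact dmesg output may differ):</para>
<programlisting># modprobe fuse
# dmesg | grep -i fuse
fuse init (API version 7.13)</programlisting>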
- <section id="sect-Administration_Guide-Test_Chapter-GlusterFS_Client-Native-RPM">
- <title>Installing on Red Hat Package Manager (RPM) Distributions </title>
+ <section id="sect-Administration_Guide-GlusterFS_Client-Native-RPM">
+ <title>Installing on RPM Based Distributions </title>
<para>To install Gluster Native Client on RPM distribution-based systems</para>
<orderedlist>
<listitem>
<para>Install required prerequisites on the client using the following command:</para>
- <para><command>$ sudo yum -y install openssh-server wget fuse fuse-libs openib libibverbs</command></para>
+ <para><command>$ sudo yum -y install fuse fuse-libs</command></para>
</listitem>
<listitem>
- <para>Ensure that TCP and UDP ports 24007 and 24008 are open on all Gluster servers. Apart from these ports, you need to open one port for each brick starting from port 24009. For example: if you have five bricks, you need to have ports 24009 to 24013 open.</para>
- <para>You can use the following chains with iptables:</para>
- <para><code>$ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT </code></para>
- <para><code>$ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24009:24014 -j ACCEPT</code></para>
- <para><note>
- <para>If you already have iptable chains, make sure that the above ACCEPT rules precede the DROP rules. This can be achieved by providing a lower rule number than the DROP rule.</para>
- </note></para>
- </listitem>
- <listitem>
- <para>Download the latest glusterfs, glusterfs-fuse, and glusterfs-rdma RPM files to each client. The glusterfs package contains the Gluster Native Client. The glusterfs-fuse package contains the FUSE translator required for mounting on client systems and the glusterfs-rdma packages contain OpenFabrics verbs RDMA module for Infiniband.</para>
- <para>You can download the software at <ulink url="http://bits.gluster.com/gluster/glusterfs/3.3.0qa30/x86_64/"/>.</para>
+ <para>Download the latest glusterfs and glusterfs-fuse RPM files on each client. The glusterfs package contains the GlusterFS binary and required libraries. The glusterfs-fuse package contains the FUSE plugin (in GlusterFS terms, it is called a translator) required for mounting.</para>
+ <para><note><para>Install 'glusterfs-rdma' RPM if RDMA support is required. 'glusterfs-rdma' contains RDMA transport module for Infiniband interconnect.</para></note></para>
+ <para>You can download the software at <ulink url="http://bits.gluster.com/gluster/glusterfs/3.3.0/x86_64/"/>.</para>
</listitem>
<listitem>
<para>Install Gluster Native Client on the client.</para>
- <para><command>$ sudo rpm -i glusterfs-3.3.0qa30-1.x86_64.rpm </command></para>
- <para><command>$ sudo rpm -i glusterfs-fuse-3.3.0qa30-1.x86_64.rpm </command></para>
- <para><command>$ sudo rpm -i glusterfs-rdma-3.3.0qa30-1.x86_64.rpm</command></para>
- <para><note>
- <para>The RDMA module is only required when using Infiniband.</para>
- </note></para>
+ <para><command>$ sudo rpm -i glusterfs-3.3.0-1.x86_64.rpm </command></para>
+ <para><command>$ sudo rpm -i glusterfs-fuse-3.3.0-1.x86_64.rpm </command></para>
+ <para><command>$ sudo rpm -i glusterfs-rdma-3.3.0-1.x86_64.rpm</command></para>
</listitem>
</orderedlist>
</section>
@@ -62,21 +51,11 @@
<para>To install Gluster Native Client on Debian-based distributions</para>
<orderedlist>
<listitem>
- <para>Install OpenSSH Server on each client using the following command:</para>
- <para><command>$ sudo apt-get install openssh-server vim wget</command></para>
- </listitem>
- <listitem>
- <para>Download the latest GlusterFS .deb file and checksum to each client.</para>
+ <para>Download the latest GlusterFS .deb file.</para>
<para>You can download the software at <ulink url="http://www.gluster.org/download/"/>.</para>
</listitem>
<listitem>
- <para>For each .deb file, get the checksum (using the following command) and compare it against the checksum for that file in the md5sum file.</para>
- <para>
-<command>$ md5sum GlusterFS_DEB_file.deb </command></para>
- <para>The md5sum of the packages is available at: <ulink url="http://download.gluster.com/pub/gluster/glusterfs"/></para>
- </listitem>
- <listitem>
- <para>Uninstall GlusterFS v3.1 (or an earlier version) from the client using the following command:
+ <para>Uninstall GlusterFS v3.1.x/v3.2.x (or an earlier version) from the client using the following command:
</para>
<para><command>$ sudo dpkg -r glusterfs </command></para>
          <para>(Optional) Run <command>$ sudo dpkg --purge glusterfs </command> to purge the configuration files.</para>
@@ -84,21 +63,10 @@
<listitem>
<para>Install Gluster Native Client on the client using the following command:
</para>
- <para><command>$ sudo dpkg -i GlusterFS_DEB_file </command></para>
+ <para><command>$ sudo dpkg -i glusterfs-$version.deb </command></para>
<para>For example:
</para>
- <para><command>$ sudo dpkg -i glusterfs-3.3.x.deb </command></para>
- </listitem>
- <listitem>
- <para>Ensure that TCP and UDP ports 24007 and 24008 are open on all Gluster servers. Apart from these ports, you need to open one port for each brick starting from port 24009. For example: if you have five bricks, you need to have ports 24009 to 24013 open.
-</para>
- <para>You can use the following chains with iptables:
-</para>
- <para><code>$ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT </code></para>
- <para><code>$ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24009:24014 -j ACCEPT</code></para>
- <para><note>
- <para>If you already have iptable chains, make sure that the above ACCEPT rules precede the DROP rules. This can be achieved by providing a lower rule number than the DROP rule.</para>
- </note></para>
+ <para><command>$ sudo dpkg -i glusterfs-3.3.0.deb </command></para>
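+          <para>To confirm that the package is installed, you can, for example, list it from the dpkg database (an optional check):</para>
+          <para><command>$ dpkg -l | grep glusterfs</command></para>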
</listitem>
</orderedlist>
</section>
@@ -112,28 +80,28 @@
<para><command># cd glusterfs</command></para>
</listitem>
<listitem>
- <para>Download the source code.
-</para>
+ <para>Download the source code.</para>
<para>You can download the source at <ulink url="http://www.gluster.org/download/"/>.</para>
</listitem>
<listitem>
<para>Extract the source code using the following command:
</para>
- <para><command># tar -xvzf SOURCE-FILE </command></para>
+ <para><command># tar -xvzf glusterfs-3.3.0.tar.gz </command></para>
</listitem>
<listitem>
<para>Run the configuration utility using the following command:
</para>
<para><code># ./configure </code></para>
+ <para><code>...</code></para>
<para><code>GlusterFS configure summary </code></para>
- <para><code>================== </code></para>
- <para><code>FUSE client : yes </code></para>
- <para><code>Infiniband verbs : yes </code></para>
+ <para><code>=========================== </code></para>
+ <para><code>FUSE client : yes </code></para>
+ <para><code>Infiniband verbs : yes </code></para>
<para><code>epoll IO multiplex : yes </code></para>
- <para><code>argp-standalone : no </code></para>
- <para><code>fusermount : no </code></para>
- <para><code>readline : yes</code></para>
- <para>The configuration summary shows the components that will be built with Gluster Native Client.</para>
+ <para><code>argp-standalone : no </code></para>
+ <para><code>fusermount : no </code></para>
+ <para><code>readline : yes</code></para>
+          <para><note><para>The configuration summary shown above is a sample; it can vary depending on the packages available on your system.</para></note></para>
</listitem>
<listitem>
<para>Build the Gluster Native Client software using the following commands:
@@ -149,27 +117,27 @@
</orderedlist>
</section>
</section>
- <section id="sect-Administration_Guide-Test_Chapter-GlusterFS_Client-Mounting_Volumes">
+ <section id="sect-Administration_Guide-GlusterFS_Client-Mounting_Volumes">
<title>Mounting Volumes</title>
<para>After installing the Gluster Native Client, you need to mount Gluster volumes to access data. There are two methods you can choose: </para>
<itemizedlist>
<listitem>
- <para><xref linkend="sect-Administration_Guide-Test_Chapter-GlusterFS_Client-Manuall"/></para>
+ <para><xref linkend="sect-Administration_Guide-GlusterFS_Client-Manuall"/></para>
</listitem>
<listitem>
- <para><xref linkend="sect-Administration_Guide-Test_Chapter-GlusterFS_Client-Automatic"/></para>
+ <para><xref linkend="sect-Administration_Guide-GlusterFS_Client-Automatic"/></para>
</listitem>
</itemizedlist>
- <para>After mounting a volume, you can test the mounted volume using the procedure described in <xref linkend="sect-Administration_Guide-Test_Chapter-GlusterFS_Client-Native-Testing"/>. </para>
+ <para>After mounting a volume, you can test the mounted volume using the procedure described in <xref linkend="sect-Administration_Guide-GlusterFS_Client-Testing"/>. </para>
<para><note>
<para>Server names selected during creation of Volumes should be resolvable in the client machine. You can use appropriate /etc/hosts entries or DNS server to resolve server names to IP addresses. </para>
</note></para>
- <section id="sect-Administration_Guide-Test_Chapter-GlusterFS_Client-Manuall">
- <title>Manually Mounting Volumes</title>
- <para>To manually mount a Gluster volume </para>
- <itemizedlist>
- <listitem>
- <para>To mount a volume, use the following command:
+ <section id="sect-Administration_Guide-GlusterFS_Client-Manuall">
+ <title>Manually Mounting Volumes</title>
+ <para>To manually mount a Gluster volume </para>
+ <itemizedlist>
+ <listitem>
+ <para>To mount a volume, use the following command:
</para>
<para><command># mount -t glusterfs HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR</command>
</para>
@@ -177,40 +145,13 @@
</para>
<para><command># mount -t glusterfs server1:/test-volume /mnt/glusterfs</command></para>
<note>
- <para>The server specified in the mount command is only used to fetch the gluster configuration volfile describing the volume name. Subsequently, the client will communicate directly with the servers mentioned in the volfile (which might not even include the one used for mount).
-
-</para>
- <para>If you see a usage message like &quot;Usage: mount.glusterfs&quot;, mount usually requires you to create a directory to be used as the mount point. Run &quot;mkdir /mnt/glusterfs&quot; before you attempt to run the mount command listed above.</para>
+ <para>The server specified in the mount command is only used to fetch the gluster configuration volfile describing the volume name. Subsequently, the client will communicate directly with the servers mentioned in the volfile (which might not even include the one used for mount). </para>
</note>
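+          <para>If the mount point directory does not exist yet, create it before running the mount command (the path here is just an example):</para>
+          <para><command># mkdir -p /mnt/glusterfs</command></para>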
</listitem>
</itemizedlist>
- <para><emphasis role="bold">Mounting Options</emphasis></para>
- <para>You can specify the following options when using the <command>mount -t glusterfs</command> command. Note that you need to separate all options with commas.
-
-</para>
- <para>backupvolfile-server=server-name</para>
- <para>volfile-max-fetch-attempts=number of attempts</para>
- <para>log-level=loglevel
-</para>
- <para>log-file=logfile
-</para>
- <para>transport=transport-type
-</para>
- <para>direct-io-mode=[enable|disable]
-
-</para>
- <para>For example:
-</para>
- <para><code># mount -t glusterfs -o backupvolfile-server=volfile_server2 --volfile-max-fetch-attempts=2 log-level=WARNING,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs</code></para>
- <para>If <option>backupvolfile-server</option> option is added while mounting fuse client, when the first
-volfile server fails, then the server specified in <option>backupvolfile-server</option> option is used as volfile server to mount
-the client.</para>
- <para>In <code>--volfile-max-fetch-attempts=X</code> option, specify the number of attempts to fetch volume files while mounting a volume. This option is useful when you mount a server with multiple IP addresses or when round-robin DNS is configured for the server-name.. </para>
</section>
- <section id="sect-Administration_Guide-Test_Chapter-GlusterFS_Client-Automatic" dir="lro">
+ <section id="sect-Administration_Guide-GlusterFS_Client-Automatic" dir="lro">
<title>Automatically Mounting Volumes</title>
- <para>You can configure your system to automatically mount the Gluster volume each time your system starts. </para>
- <para>The server specified in the mount command is only used to fetch the gluster configuration volfile describing the volume name. Subsequently, the client will communicate directly with the servers mentioned in the volfile (which might not even include the one used for mount). </para>
<para><emphasis role="bold">To automatically mount a Gluster volume</emphasis></para>
<itemizedlist>
<listitem>
@@ -222,68 +163,35 @@ the client.</para>
<para><code>server1:/test-volume /mnt/glusterfs glusterfs defaults,_netdev 0 0</code></para>
</listitem>
</itemizedlist>
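+      <para>To verify the new entry without rebooting, you can, for example, mount all filesystems listed in /etc/fstab and then check the mount table (an optional check):</para>
+      <para><command># mount -a</command></para>
+      <para><command># mount | grep glusterfs</command></para>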
- <para><emphasis role="bold">Mounting Options </emphasis></para>
- <para>You can specify the following options when updating the /etc/fstab file. Note that you need to separate all options with commas.
-
-</para>
- <para>log-level=loglevel
-</para>
- <para>log-file=logfile
-</para>
- <para>transport=transport-type
-</para>
- <para>direct-io-mode=[enable|disable]
-
-</para>
- <para>For example:
-</para>
+ <para><emphasis role="bold">Mounting Options</emphasis></para>
+ <para>You can specify the following options when using the <command>mount -t glusterfs</command> command. Note that you need to separate all options with commas.</para>
+ <para>backupvolfile-server=server-name</para>
+      <para>fetch-attempts=N (where N is the number of attempts)</para>
+ <para>log-level=loglevel</para>
+ <para>log-file=logfile</para>
+ <para>direct-io-mode=[enable|disable]</para>
+      <para>ro (for read-only mounts)</para>
+      <para>acl (for enabling POSIX ACLs)</para>
+      <para>worm (mounts the volume as WORM - Write Once, Read Many)</para>
+      <para>selinux (for enabling SELinux on the GlusterFS mount)</para>
+ <para></para>
+ <para>For example: </para>
+ <para><code># mount -t glusterfs -o backupvolfile-server=volfile_server2,fetch-attempts=2,log-level=WARNING,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs</code></para>
+      <para>When using /etc/fstab, the options look like the following:</para>
<para><code>HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR glusterfs defaults,_netdev,log-level=WARNING,log-file=/var/log/gluster.log 0 0 </code></para>
- </section>
- <section id="sect-Administration_Guide-Test_Chapter-GlusterFS_Client-Native-Testing">
- <title>Testing Mounted Volumes</title>
- <para>To test mounted volumes</para>
- <itemizedlist>
- <listitem>
- <para>Use the following command:
-</para>
- <para><command># mount </command></para>
- <para>If the gluster volume was successfully mounted, the output of the mount command on the client will be similar to this example:
-
-</para>
- <para><code>server1:/test-volume on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072</code></para>
- </listitem>
- </itemizedlist>
- <itemizedlist>
- <listitem>
- <para>Use the following command:
-</para>
- <para><command># df</command>
-</para>
- <para>The output of df command on the client will display the aggregated storage space from all the bricks in a volume similar to this example:
-</para>
- <para><code># df -h /mnt/glusterfs Filesystem Size Used Avail Use% Mounted on server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs</code></para>
- </listitem>
- <listitem>
- <para>Change to the directory and list the contents by entering the following:
-</para>
- <para><command># cd MOUNTDIR </command></para>
- <para><command># ls</command></para>
- </listitem>
- <listitem>
- <para>For example,</para>
- <para><code># cd /mnt/glusterfs </code></para>
- <para><code># ls</code></para>
- </listitem>
- </itemizedlist>
+      <para>If the <option>backupvolfile-server</option> option is specified while mounting the FUSE client and the first
+      volfile server fails, the server specified in the <option>backupvolfile-server</option> option is used as the volfile server to mount the client.</para>
+      <para>The <code>fetch-attempts=N</code> option specifies the number of attempts made to fetch the volume file while mounting a volume. This option is useful when round-robin DNS is configured for the server name. </para>
</section>
</section>
</section>
+
<section id="sect-Administration_Guide-GlusterFS_Client-NFS">
<title>NFS</title>
- <para>You can use NFS v3 to access to gluster volumes. Extensive testing has be done on GNU/Linux clients and NFS implementation in other operating system, such as FreeBSD, and Mac OS X, as well as Windows 7 (Professional and Up), Windows Server 2003, and others, may work with gluster NFS server implementation. </para>
- <para>GlusterFS now includes network lock manager (NLM) v4. NLM enables applications on NFSv3 clients to do record locking on files on NFS server. It is started automatically whenever the NFS server is run.</para>
- <para condition="gfs">You must install nfs-common package on both servers and clients (only for Debian-based) distribution.</para>
- <para>This section describes how to use NFS to mount Gluster volumes (both manually and automatically) and how to verify that the volume has been mounted successfully. </para>
+    <para>You can use NFS v3 to access Gluster volumes.</para>
+    <para>GlusterFS 3.3.0 also includes the network lock manager (NLM) v4 feature. NLM enables applications on NFSv3 clients to perform record locking on files. The NLM program is started automatically with the NFS server process.</para>
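+    <para>You can optionally confirm that the NFS, mount, and lock services are registered on a server with the standard rpcinfo utility (the hostname below is illustrative); the output should list the nfs, mountd, and nlockmgr programs:</para>
+    <para><command># rpcinfo -p server1</command></para>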
+ <para>This section describes how to use NFS to mount Gluster volumes (both manually and automatically). </para>
+
<section>
<title>Using NFS to Mount Volumes</title>
<para>You can use either of the following methods to mount Gluster volumes: </para>
@@ -295,9 +203,8 @@ the client.</para>
<para><xref linkend="sect-Administration_Guide-GlusterFS_Client-NFS-Automatic"/></para>
</listitem>
</itemizedlist></para>
- <para condition="gfs"><emphasis role="bold">Prerequisite</emphasis>: Install nfs-common package on both servers and clients (only for Debian-based distribution), using the following command: </para>
- <para condition="gfs"><command>$ sudo aptitude install nfs-common </command></para>
- <para>After mounting a volume, you can test the mounted volume using the procedure described in <xref linkend="sect-Administration_Guide-GlusterFS_Client-NFS-Testing"/>. </para>
+ <para>After mounting a volume, you can test the mounted volume using the procedure described in <xref linkend="sect-Administration_Guide-GlusterFS_Client-Testing"/>. </para>
+
<section id="sect-Administration_Guide-GlusterFS_Client-NFS-Manual">
<title>Manually Mounting Volumes Using NFS </title>
<para>To manually mount a Gluster volume using NFS </para>
@@ -310,8 +217,7 @@ the client.</para>
<para>For example:</para>
<para><command># mount -t nfs -o vers=3 server1:/test-volume /mnt/glusterfs</command></para>
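+          <para>You can also confirm that the server is exporting the volume over NFS with the standard showmount utility (an optional check; the server name is illustrative):</para>
+          <para><command># showmount -e server1</command></para>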
<para><note>
- <para> Gluster NFS server does not support UDP. If the NFS client you are using defaults to connecting using UDP, the following message appears:
-</para>
+ <para> Gluster NFS server does not support UDP. If the NFS client you are using defaults to connecting using UDP, the following message appears: </para>
<para><code>requested NFS version or transport protocol is not supported</code>. </para>
</note></para>
<para><emphasis role="bold">To connect using TCP</emphasis></para>
@@ -322,7 +228,7 @@ the client.</para>
<para><command>-o mountproto=tcp </command></para>
<para>For example:
</para>
- <para><command># mount -o mountproto=tcp -t nfs server1:/test-volume /mnt/glusterfs</command></para>
+ <para><command># mount -o mountproto=tcp,vers=3 -t nfs server1:/test-volume /mnt/glusterfs</command></para>
</listitem>
</itemizedlist>
<para><emphasis role="bold">To mount Gluster NFS server from a Solaris client </emphasis></para>
@@ -337,6 +243,7 @@ For example:</para>
</listitem>
</itemizedlist>
</section>
+
<section id="sect-Administration_Guide-GlusterFS_Client-NFS-Automatic">
<title>Automatically Mounting Volumes Using NFS</title>
<para>You can configure your system to automatically mount Gluster volumes using NFS each time the system starts.</para>
@@ -347,58 +254,18 @@ For example:</para>
<para><command>HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR nfs defaults,_netdev,vers=3 0 0</command></para>
<para>For example,</para>
<para><command>server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,vers=3 0 0</command></para>
- <note>
- <para>Gluster NFS server does not support UDP. If the NFS client you are using defaults to connecting using UDP, the following message appears: </para>
- <para><command>requested NFS version or transport protocol is not supported.</command></para>
- </note>
- <para/>
- <para>To connect using TCP </para>
- </listitem>
- <listitem>
- <para>Add the following entry in /etc/fstab file :</para>
- <para><command>HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR nfs defaults,_netdev,mountproto=tcp 0 0</command></para>
- <para>For example,</para>
+          <para>If the NFS client defaults to mounting over UDP, use the following line in /etc/fstab to force TCP:</para>
<para><command>server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,mountproto=tcp 0 0</command></para>
</listitem>
</itemizedlist>
<para><emphasis role="bold">To automount NFS mounts</emphasis></para>
    <para>Gluster supports the standard *nix method of automounting NFS mounts. Update /etc/auto.master and /etc/auto.misc, and restart the autofs service. After that, whenever a user or process attempts to access the directory, it is mounted in the background. </para>
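+    <para>For example, an autofs configuration for an NFS mount of a Gluster volume could look like the following (the mount point, map file, and volume name here are illustrative):</para>
+    <para><programlisting># /etc/auto.master
+/mnt/auto  /etc/auto.misc  --timeout=60
+
+# /etc/auto.misc
+glusterfs  -fstype=nfs,vers=3,mountproto=tcp  server1:/test-volume</programlisting></para>
+    <para>With this configuration, accessing /mnt/auto/glusterfs triggers the NFS mount in the background.</para>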
</section>
- <section id="sect-Administration_Guide-GlusterFS_Client-NFS-Testing">
- <title>Testing Volumes Mounted Using NFS</title>
- <para>You can confirm that Gluster directories are mounting successfully. </para>
- <para><emphasis role="bold">To test mounted volumes</emphasis></para>
- <itemizedlist>
- <listitem>
- <para>Use the mount command by entering the following:</para>
- <para><command># mount</command></para>
- <para>For example, the output of the mount command on the client will display an entry like the following:</para>
- <para><command>server1:/test-volume on /mnt/glusterfs type nfs (rw,vers=3,addr=server1)</command></para>
- </listitem>
- </itemizedlist>
- <itemizedlist>
- <listitem>
- <para>Use the df command by entering the following:</para>
- <para><command># df</command></para>
- <para>For example, the output of df command on the client will display the aggregated storage space from all the bricks in a volume.</para>
- <para><screen># df -h /mnt/glusterfs
-Filesystem Size Used Avail Use% Mounted on
-server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs</screen></para>
- </listitem>
- <listitem>
- <para>Change to the directory and list the contents by entering the following:</para>
- <para><command># cd MOUNTDIR</command></para>
- <para><command># ls</command></para>
- <para>For example,</para>
- <para><command>
- <command># cd /mnt/glusterfs</command>
- </command></para>
- <para><command># ls</command></para>
- </listitem>
- </itemizedlist>
- </section>
+
</section>
+
</section>
+
<section id="sect-Administration_Guide-GlusterFS_Client-CIFS">
<title>CIFS</title>
  <para>You can use CIFS to access volumes from Microsoft Windows as well as Samba clients. For this access method, Samba packages need to be present on the client side. You can export the glusterfs mount point as a Samba share, and then mount it using the CIFS protocol.</para>
@@ -406,6 +273,7 @@ server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs</screen></para>
<para><note>
<para> CIFS access using the Mac OS X Finder is not supported, however, you can use the Mac OS X command line to access Gluster volumes using CIFS.</para>
</note></para>
+
<section>
<title>Using CIFS to Mount Volumes</title>
<para>You can use either of the following methods to mount Gluster volumes: </para>
@@ -417,15 +285,16 @@ server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs</screen></para>
<para><xref linkend="sect-Administration_Guide-GlusterFS_Client-CIFS-Automatic"/></para>
</listitem>
</itemizedlist>
- <para>After mounting a volume, you can test the mounted volume using the procedure described in <xref linkend="sect-Administration_Guide-GlusterFS_Client-CIFS-Testing"/>.</para>
+ <para>After mounting a volume, you can test the mounted volume using the procedure described in <xref linkend="sect-Administration_Guide-GlusterFS_Client-Testing"/>.</para>
<para>You can also use Samba for exporting Gluster Volumes through CIFS protocol.</para>
+
<section>
<title>Exporting Gluster Volumes Through Samba</title>
      <para>We recommend using Samba for exporting Gluster volumes through the CIFS protocol. </para>
<para><emphasis role="bold">To export volumes through CIFS protocol </emphasis></para>
<orderedlist>
<listitem>
- <para>Mount a Gluster volume. For more information on mounting volumes, see <xref linkend="sect-Administration_Guide-Test_Chapter-GlusterFS_Client-Mounting_Volumes"/>.</para>
+ <para>Mount a Gluster volume. For more information on mounting volumes, see <xref linkend="sect-Administration_Guide-GlusterFS_Client-Mounting_Volumes"/>.</para>
</listitem>
<listitem>
          <para>Set up the Samba configuration to export the mount point of the Gluster volume, as shown in the example below.</para>
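+          <para>For example, a minimal share definition in /etc/samba/smb.conf for a Gluster volume mounted at /mnt/glusterfs could look like the following (the share name and path are illustrative):</para>
+          <para><programlisting>[glustertest]
+    comment = Gluster volume test-volume
+    path = /mnt/glusterfs
+    read only = no
+    guest ok = yes</programlisting></para>
+          <para>After updating smb.conf, restart the Samba service so that the new share is exported.</para>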
@@ -446,6 +315,7 @@ server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs</screen></para>
        <para>To be able to mount from any server in the trusted storage pool, you must repeat these steps on each Gluster node. For more advanced configurations, see the Samba documentation. </para>
</note></para>
</section>
+
<section id="sect-Administration_Guide-GlusterFS_Client-CIFS-Manual">
<title>Manually Mounting Volumes Using CIFS </title>
<para>You can manually mount Gluster volumes using CIFS on Microsoft Windows-based client machines. </para>
@@ -479,6 +349,7 @@ server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs</screen></para>
</listitem>
</itemizedlist>
</section>
+
<section id="sect-Administration_Guide-GlusterFS_Client-CIFS-Automatic">
<title>Automatically Mounting Volumes Using CIFS</title>
<para>You can configure your system to automatically mount Gluster volumes using CIFS on Microsoft Windows-based clients each time the system starts.</para>
@@ -502,10 +373,39 @@ server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs</screen></para>
</listitem>
</orderedlist>
</section>
- <section id="sect-Administration_Guide-GlusterFS_Client-CIFS-Testing">
- <title>Testing Volumes Mounted Using CIFS</title>
- <para>You can confirm that Gluster directories are mounting successfully by navigating to the directory using Windows Explorer. </para>
- </section>
+
</section>
</section>
+ <section id="sect-Administration_Guide-GlusterFS_Client-Testing">
+ <title>Testing Mounted Volumes</title>
+ <para>To test mounted volumes</para>
+ <itemizedlist>
+ <listitem>
+ <para>Use the following command:</para>
+ <para><command># mount </command></para>
+ <para>If the gluster volume was successfully mounted, the output of the mount command on the client will be similar to this example:</para>
+        <para><code>server1:/test-volume on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)</code></para>
+ </listitem>
+ </itemizedlist>
+ <itemizedlist>
+ <listitem>
+ <para>Use the following command:</para>
+ <para><command># df -h </command></para>
+        <para>The output of the df command on the client will display the aggregated storage space from all the bricks in a volume, similar to this example:</para>
+ <para><code># df -h /mnt/glusterfs</code></para>
+ <para><code>Filesystem Size Used Avail Use% Mounted on </code></para>
+ <para><code>server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs</code></para>
+ </listitem>
+ <listitem>
+ <para>Change to the directory and list the contents by entering the following:</para>
+ <para><command># cd MOUNTDIR </command></para>
+ <para><command># ls</command></para>
+ </listitem>
+ <listitem>
+ <para>For example,</para>
+ <para><code># cd /mnt/glusterfs </code></para>
+ <para><code># ls</code></para>
+ </listitem>
+ </itemizedlist>
+ </section>
</chapter>
diff --git a/doc/admin-guide/en-US/admin_storage_pools.xml b/doc/admin-guide/en-US/admin_storage_pools.xml
index 87b6320bd4b..2c4a5cabe1d 100644
--- a/doc/admin-guide/en-US/admin_storage_pools.xml
+++ b/doc/admin-guide/en-US/admin_storage_pools.xml
@@ -3,17 +3,17 @@
<chapter id="chap-Administration_Guide-Storage-pool">
<title>Setting up Trusted Storage Pools</title>
<para>Before you can configure a GlusterFS volume, you must create a trusted storage pool consisting of the storage servers that provides bricks to a volume. </para>
- <para>A storage pool is a trusted network of storage servers. When you start the first server, the storage pool consists of that server alone. To add additional storage servers to the storage pool, you can use the probe command from a storage server that is already trusted. </para>
+ <para>A storage pool is a trusted network of storage servers. When you start the first server, the storage pool consists of that server alone. To add additional storage servers to the storage pool, you can use the probe command from a storage server that is already part of the trusted storage pool. </para>
<para><note>
<para>Do not self-probe the first server/localhost.</para>
</note></para>
- <para>The GlusterFS service must be running on all storage servers that you want to add to the storage pool. See <xref linkend="chap-Administration_Guide-Start_Stop_Daemon"/> for more information.</para>
+ <para>The glusterd service must be running on all storage servers that you want to add to the storage pool. See <xref linkend="chap-Administration_Guide-Start_Stop_Daemon"/> for more information.</para>
<section id="sect-Administration_Guide-Storage_Pools-Adding_Servers">
<title>Adding Servers to Trusted Storage Pool</title>
<para>To create a trusted storage pool, add servers to the trusted storage pool</para>
<orderedlist>
<listitem>
- <para>The hostnames used to create the storage pool must be resolvable by DNS.</para>
+      <para>The hostnames used to create the storage pool must be resolvable by DNS. Also make sure that the firewall is not blocking the probe requests and replies (for testing, you can flush the rules with <command>iptables -F</command>).</para>
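+      <para>For example, before probing a peer, you can quickly confirm from the first server that the peer's hostname resolves (the hostname is illustrative):</para>
+      <para><command># ping -c 1 server2</command></para>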
<para>To add a server to the storage pool:</para>
<para><command># gluster peer probe <replaceable>server</replaceable></command></para>
<para>For example, to create a trusted storage pool of four servers, add three servers to the storage pool from server1:</para>
diff --git a/doc/admin-guide/en-US/admin_troubleshooting.xml b/doc/admin-guide/en-US/admin_troubleshooting.xml
index dff182c5f1e..1c6866d8ba7 100644
--- a/doc/admin-guide/en-US/admin_troubleshooting.xml
+++ b/doc/admin-guide/en-US/admin_troubleshooting.xml
@@ -106,7 +106,7 @@ location of the log file.
<section>
<title>Rotating Geo-replication Logs</title>
<para>Administrators can rotate the log file of a particular master-slave session, as needed.
-When you run geo-replication&apos;s <command> log-rotate</command> command, the log file
+When you run geo-replication&apos;s <command> log-rotate</command> command, the log file
is backed up with the current timestamp suffixed to the file
name and signal is sent to gsyncd to start logging to a new
log file.</para>
@@ -139,7 +139,7 @@ log rotate successful</programlisting>
<para><command># gluster volume geo-replication log-rotate</command>
</para>
<para>For example, to rotate the log file for all sessions:</para>
- <programlisting># gluster volume geo-replication log rotate
+ <programlisting># gluster volume geo-replication log-rotate
log rotate successful</programlisting>
</listitem>
</itemizedlist>
@@ -172,18 +172,17 @@ you have installed the required version.
the following:
</para>
<para><errortext>2011-04-28 14:06:18.378859] E [syncdutils:131:log_raise_exception] &lt;top&gt;: FAIL: Traceback (most recent call last): File &quot;/usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py&quot;, line 152, in twraptf(*aa) File &quot;/usr/local/libexec/glusterfs/python/syncdaemon/repce.py&quot;, line 118, in listen rid, exc, res = recv(self.inf) File &quot;/usr/local/libexec/glusterfs/python/syncdaemon/repce.py&quot;, line 42, in recv return pickle.load(inf) EOFError </errortext></para>
- <para><emphasis role="bold">Solution</emphasis>: This error indicates that the RPC communication between the master gsyncd module and slave
-gsyncd module is broken and this can happen for various reasons. Check if it satisfies all the following
+ <para><emphasis role="bold">Solution</emphasis>: This error indicates that the RPC communication between the master geo-replication module and slave
+geo-replication module is broken and this can happen for various reasons. Check if it satisfies all the following
pre-requisites:
</para>
<itemizedlist>
<listitem>
- <para>Password-less SSH is set up properly between the host and the remote machine.
+          <para>Password-less SSH is set up properly between the host where the geo-replication command is executed and the remote machine where the slave is located.
</para>
</listitem>
<listitem>
- <para>If FUSE is installed in the machine, because geo-replication module mounts the GlusterFS volume
-using FUSE to sync data.
+          <para>FUSE is installed on the machine where the geo-replication command is executed, because the geo-replication module mounts the GlusterFS volume using FUSE to sync data.
</para>
</listitem>
<listitem>
@@ -196,15 +195,15 @@ required permissions.
</para>
</listitem>
<listitem>
- <para>If GlusterFS 3.2 or higher is not installed in the default location (in Master) and has been prefixed to be
+ <para>If GlusterFS 3.3 or higher is not installed in the default location (in Master) and has been prefixed to be
installed in a custom location, configure the <command>gluster-command</command> for it to point to the exact
location.
</para>
</listitem>
<listitem>
- <para>If GlusterFS 3.2 or higher is not installed in the default location (in slave) and has been prefixed to be
+ <para>If GlusterFS 3.3 or higher is not installed in the default location (in slave) and has been prefixed to be
installed in a custom location, configure the <command>remote-gsyncd-command</command> for it to point to the
-exact place where gsyncd is located.
+exact place where the geo-replication (gsyncd) command is located.
</para>
</listitem>
</itemizedlist>
@@ -224,7 +223,7 @@ intermediate master.
</section>
<section>
<title>Troubleshooting POSIX ACLs </title>
- <para>This section describes the most common troubleshooting issues related to POSIX ACLs.
+ <para>This section describes the most common troubleshooting issues related to POSIX ACLs.
</para>
<section>
<title>setfacl command fails with “setfacl: &lt;file or directory name&gt;: Operation not supported” error </title>
@@ -244,7 +243,7 @@ Storage.
</para>
<section id="sect-Administration_Guide-Troubleshooting-Test_Section_1">
<title>Time Sync</title>
- <para>Running MapReduce job may throw exceptions if the time is out-of-sync on the hosts in the cluster.
+      <para>Running a MapReduce job may throw exceptions if the clocks are out of sync on the hosts in the cluster.
</para>
<para><emphasis role="bold">Solution</emphasis>: Sync the time on all hosts using ntpd program.
@@ -257,7 +256,7 @@ Storage.
</para>
<section>
<title>mount command on NFS client fails with “RPC Error: Program not registered” </title>
- <para>Start portmap or rpcbind service on the NFS server.
+ <para>Start portmap or rpcbind service on the machine where NFS server is running.
</para>
<para>This error is encountered when the server has not started correctly.
</para>
@@ -280,11 +279,11 @@ required:
This situation can be confirmed from the log file, if the following error lines exist:
</para>
<para><screen>[2010-05-26 23:40:49] E [rpc-socket.c:126:rpcsvc_socket_listen] rpc-socket: binding socket failed:Address already in use
-[2010-05-26 23:40:49] E [rpc-socket.c:129:rpcsvc_socket_listen] rpc-socket: Port is already in use
-[2010-05-26 23:40:49] E [rpcsvc.c:2636:rpcsvc_stage_program_register] rpc-service: could not create listening connection
-[2010-05-26 23:40:49] E [rpcsvc.c:2675:rpcsvc_program_register] rpc-service: stage registration of program failed
-[2010-05-26 23:40:49] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
-[2010-05-26 23:40:49] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
+[2010-05-26 23:40:49] E [rpc-socket.c:129:rpcsvc_socket_listen] rpc-socket: Port is already in use
+[2010-05-26 23:40:49] E [rpcsvc.c:2636:rpcsvc_stage_program_register] rpc-service: could not create listening connection
+[2010-05-26 23:40:49] E [rpcsvc.c:2675:rpcsvc_program_register] rpc-service: stage registration of program failed
+[2010-05-26 23:40:49] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
+[2010-05-26 23:40:49] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
[2010-05-26 23:40:49] C [nfs.c:531:notify] nfs: Failed to initialize protocols</screen></para>
<para>To resolve this error one of the Gluster NFS servers will have to be shutdown. At this time,
Gluster NFS server does not support running multiple NFS servers on the same machine.
@@ -296,7 +295,7 @@ Gluster NFS server does not support running multiple NFS servers on the same mac
</para>
<para><errortext>mount.nfs: rpc.statd is not running but is required for remote locking. mount.nfs: Either use &apos;-o nolock&apos; to keep locks local, or start statd. </errortext></para>
<para><errortext>Start rpc.statd </errortext></para>
- <para>For NFS clients to mount the NFS server, rpc.statd service must be running on the clients. </para>
+      <para>For NFS clients to mount the NFS server, the rpc.statd service must be running on the client machine. </para>
<para>Start
rpc.statd service by running the following command:
</para>
@@ -317,12 +316,12 @@ required:
<para><command>$ /etc/init.d/rpcbind start</command></para>
</section>
<section>
- <title>NFS server glusterfsd starts but initialization fails with “nfsrpc- service: portmap registration of program failed” error message in the log. </title>
+      <title>NFS server (glusterfsd) starts but initialization fails with “nfsrpc-service: portmap registration of program failed” error message in the log. </title>
<para>NFS start-up can succeed but the initialization of the NFS service can still fail preventing clients
from accessing the mount points. Such a situation can be confirmed from the following error
messages in the log file:
</para>
- <para><screen>[2010-05-26 23:33:47] E [rpcsvc.c:2598:rpcsvc_program_register_portmap] rpc-service: Could notregister with portmap
+ <para><screen>[2010-05-26 23:33:47] E [rpcsvc.c:2598:rpcsvc_program_register_portmap] rpc-service: Could notregister with portmap
[2010-05-26 23:33:47] E [rpcsvc.c:2682:rpcsvc_program_register] rpc-service: portmap registration of program failed
[2010-05-26 23:33:47] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
[2010-05-26 23:33:47] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
@@ -415,14 +414,14 @@ Gluster NFS server. The timeout can be resolved by forcing the NFS client to use
server requests/replies. Gluster NFS server operates over the following port numbers: 38465,
38466, and 38467.
</para>
- <para>For more information, see <xref linkend="sect-Administration_Guide-Test_Chapter-GlusterFS_Client-Native-RPM"/>.
+ <para>For more information, see <xref linkend="sect-Administration_Guide-GlusterFS_Client-Native-RPM"/>.
</para>
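+      <para>For example, if a firewall sits between the NFS clients and the Gluster NFS server, a rule such as the following can open these ports (a sketch; chain names and rule ordering depend on your firewall setup, and ACCEPT rules must precede any DROP rules):</para>
+      <para><code># iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38467 -j ACCEPT</code></para>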
</section>
<section>
<title>Application fails with &quot;Invalid argument&quot; or &quot;Value too large for defined data type&quot; error. </title>
<para>These two errors generally happen for 32-bit nfs clients or applications that do not support 64-bit
inode numbers or large files.
-Use the following option from the CLI to make Gluster NFS return 32-bit inode numbers instead:
+Use the following option from the CLI to make the Gluster NFS server return 32-bit inode numbers instead:
nfs.enable-ino32 &lt;on|off&gt;
</para>
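+      <para>For example, to enable 32-bit inode numbers for the NFS server on a volume (the volume name is illustrative):</para>
+      <para><command># gluster volume set test-volume nfs.enable-ino32 on</command></para>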
<para>Applications that will benefit are those that were either:
@@ -436,7 +435,7 @@ nfs.enable-ino32 &lt;on|off&gt;
</para>
</listitem>
</itemizedlist>
- <para>This option is disabled by default so NFS returns 64-bit inode numbers by default.
+      <para>This option is disabled by default, so the Gluster NFS server returns 64-bit inode numbers by default.
</para>
<para>Applications which can be rebuilt from source are recommended to rebuild using the following
flag with gcc:</para>
@@ -458,7 +457,7 @@ flag with gcc:</para>
<para><programlisting># gluster volume statedump test-volume
Volume statedump successful</programlisting></para>
<para>The statedump files are created on the brick servers in the<filename> /tmp</filename> directory or in the directory set using <command>server.statedump-path</command> volume option. The naming convention of the dump file is <filename>&lt;brick-path&gt;.&lt;brick-pid&gt;.dump</filename>.</para>
- <para>The following are the sample contents of the statedump file. It indicates that GlusterFS has entered into a state where there is an entry lock (entrylk) and an inode lock (inodelk). Ensure that those are stale locks and no resources own them. </para>
+ <para>The following are the sample contents of the statedump file. It indicates that GlusterFS has entered into a state where there is an entry lock (entrylk) and an inode lock (inodelk). Ensure that those are stale locks and no resources own them before clearing. </para>
<para><screen>[xlator.features.locks.vol-locks.inode]
path=/
mandatory=0
diff --git a/doc/admin-guide/en-US/gfs_introduction.xml b/doc/admin-guide/en-US/gfs_introduction.xml
index 5fd88730556..64b1c077930 100644
--- a/doc/admin-guide/en-US/gfs_introduction.xml
+++ b/doc/admin-guide/en-US/gfs_introduction.xml
@@ -3,7 +3,7 @@
<chapter>
<title>Introducing Gluster File System</title>
<para>GlusterFS is an open source, clustered file system capable of scaling to several petabytes and handling thousands of clients. GlusterFS can be flexibly combined with commodity physical, virtual, and cloud resources to deliver highly available and performant enterprise storage at a fraction of the cost of traditional solutions.</para>
- <para>GlusterFS clusters together storage building blocks over Infiniband RDMA and/or TCP/IP interconnect, aggregating disk and memory resources and managing data in a single global namespace. GlusterFS is based on a stackable user space design, delivering exceptional performance for diverse workloads.
+ <para>GlusterFS clusters together storage building blocks over Infiniband RDMA and/or TCP/IP interconnect, aggregating disk and memory resources and managing data in a single global namespace. GlusterFS is based on a stackable user space design, delivering exceptional performance for diverse workloads.
</para>
<figure>
<title>Virtualized Cloud Environments</title>
@@ -17,7 +17,7 @@
</mediaobject>
</figure>
<para>GlusterFS is designed for today&apos;s high-performance, virtualized cloud environments. Unlike traditional data centers, cloud environments require multi-tenancy along with the ability to grow or shrink resources on demand. Enterprises can scale capacity, performance, and availability on demand, with no vendor lock-in, across on-premise, public cloud, and hybrid environments. </para>
- <para>GlusterFS is in production at thousands of enterprises spanning media, healthcare, government, education, web 2.0, and financial services. The following table lists the commercial offerings and its documentation location:
+ <para>GlusterFS is in production at thousands of enterprises spanning media, healthcare, government, education, web 2.0, and financial services. The following table lists the commercial offerings and its documentation location:
</para>
<informaltable frame="all">
<tgroup cols="2">