From f037fe9143706375bea140b61fd87d13e5b2b961 Mon Sep 17 00:00:00 2001 From: Amar Tumballi Date: Mon, 28 May 2012 19:32:14 +0530 Subject: documentation - Admin-Guide Updates Change-Id: I6e053b6a5f099fb7b1c228668949463c795b4fc7 Signed-off-by: Amar Tumballi Reviewed-on: http://review.gluster.com/3496 Tested-by: Gluster Build System Reviewed-by: Vijay Bellur --- doc/admin-guide/en-US/Book_Info.xml | 4 +- doc/admin-guide/en-US/admin_ACLs.xml | 15 +- doc/admin-guide/en-US/admin_commandref.xml | 68 ++--- doc/admin-guide/en-US/admin_console.xml | 9 +- doc/admin-guide/en-US/admin_directory_Quota.xml | 19 +- doc/admin-guide/en-US/admin_geo-replication.xml | 122 ++++---- doc/admin-guide/en-US/admin_managing_volumes.xml | 11 +- doc/admin-guide/en-US/admin_setting_volumes.xml | 16 +- doc/admin-guide/en-US/admin_settingup_clients.xml | 324 ++++++++-------------- doc/admin-guide/en-US/admin_storage_pools.xml | 6 +- doc/admin-guide/en-US/admin_troubleshooting.xml | 49 ++-- doc/admin-guide/en-US/gfs_introduction.xml | 4 +- 12 files changed, 280 insertions(+), 367 deletions(-) (limited to 'doc/admin-guide') diff --git a/doc/admin-guide/en-US/Book_Info.xml b/doc/admin-guide/en-US/Book_Info.xml index 6be6a7816ca..19fb40a2f34 100644 --- a/doc/admin-guide/en-US/Book_Info.xml +++ b/doc/admin-guide/en-US/Book_Info.xml @@ -5,9 +5,9 @@ ]> Administration Guide - Using Gluster File System Beta 3 + Using Gluster File System Gluster File System - 3.3 + 3.3.0 1 1 diff --git a/doc/admin-guide/en-US/admin_ACLs.xml b/doc/admin-guide/en-US/admin_ACLs.xml index 156e52c17f2..edad2d67d60 100644 --- a/doc/admin-guide/en-US/admin_ACLs.xml +++ b/doc/admin-guide/en-US/admin_ACLs.xml @@ -13,7 +13,7 @@ be granted or denied access by using POSIX ACLs.
Activating POSIX ACLs Support
- To use POSIX ACLs for a file or directory, the partition of the file or directory must be mounted with
+ To use POSIX ACLs for a file or directory, the mount point where the file or directory exists must be mounted with
POSIX ACLs support.
@@ -22,23 +22,28 @@ POSIX ACLs support.
# mount -o acl device-name partition
+ If the backend export directory is already mounted, use the following command:
+
+ # mount -o remount,acl device-name partition
+
+
For example:
# mount -o acl /dev/sda1 /export1
Alternatively, if the partition is listed in the /etc/fstab file, add the following entry for the partition to include the POSIX ACLs option:
- LABEL=/work /export1 ext3 rw, acl 14
+ LABEL=/work /export1 xfs rw,acl 1 4
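After remounting, the options field of the kernel's mount table shows whether acl took effect. A quick sketch of that check (the mount-table entry below is a hypothetical sample; on a live system it would be read from /proc/mounts):

```shell
# Sketch: verify that a filesystem entry carries the acl mount option.
# The sample entry is hypothetical; on a real server you would read the
# matching line from /proc/mounts (fields: device, mountpoint, fstype, options, ...).
entry="/dev/sda1 /export1 xfs rw,acl 0 0"

opts=$(echo "$entry" | awk '{print $4}')   # 4th field = mount options

case ",$opts," in
  *,acl,*) echo "acl enabled" ;;
  *)       echo "acl missing" ;;
esac
# prints: acl enabled
```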
Activating POSIX ACLs Support on Client
- To mount the glusterfs volumes for POSIX ACLs support, use the following command:
+ To mount the glusterfs volume with POSIX ACLs support, use the following command:
- # mount –t glusterfs -o acl severname:volume-idmount point
+ # mount -t glusterfs -o acl servername:/volume-id mount-point
For example:
- # mount -t glusterfs -o acl 198.192.198.234:glustervolume /mnt/gluster
+ # mount -t glusterfs -o acl 198.192.198.234:/glustervolume /mnt/gluster
diff --git a/doc/admin-guide/en-US/admin_commandref.xml b/doc/admin-guide/en-US/admin_commandref.xml
index df4c78f4869..05196b77326 100644
--- a/doc/admin-guide/en-US/admin_commandref.xml
+++ b/doc/admin-guide/en-US/admin_commandref.xml
@@ -2,8 +2,7 @@
Command Reference
- This section describes the available commands and includes the
-following section:
+ This section describes the available commands and includes the following section:
@@ -36,10 +35,8 @@
gluster [COMMANDS] [OPTIONS]
DESCRIPTION
- The Gluster Console Manager is a command line utility for elastic volume management. You can run
-the gluster command on any export server. The command enables administrators to perform cloud
-operations such as creating, expanding, shrinking, rebalancing, and migrating volumes without
-needing to schedule server downtime.
+ The Gluster Console Manager is a command line utility for elastic volume management. The 'gluster' command enables administrators to perform cloud
+operations such as creating, expanding, shrinking, rebalancing, and migrating volumes without needing to schedule server downtime.
COMMANDS
@@ -66,7 +63,8 @@ needing to schedule server downtime.
volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK ...
- Creates a new volume of the specified type using the specified bricks and transport type (the default transport type is tcp).
+ Creates a new volume of the specified type using the specified bricks and transport type (the default transport type is tcp).
+ NOTE: with the 3.3.0 release, the transport types 'rdma' and 'tcp,rdma' are not fully supported.
volume delete VOLNAME
@@ -80,10 +78,6 @@ needing to schedule server downtime.
volume stop VOLNAME [force]
Stops the specified volume.
-
- volume rename VOLNAME NEW-VOLNAME
- Renames the specified volume.
-
volume help
Displays help for the volume command.
@@ -94,16 +88,16 @@ needing to schedule server downtime.
- volume add-brick VOLNAME NEW-BRICK ...
- Adds the specified brick to the specified volume.
+ volume add-brick VOLNAME [replica N] [stripe N] NEW-BRICK1 ...
+ Adds the specified brick(s) to the given VOLUME. Using add-brick, users can increase the replica/stripe count of the volume, or increase the volume capacity by adding the brick(s) without changing the volume type.
- volume replace-brick VOLNAME (BRICK NEW-BRICK) start | pause | abort | status
- Replaces the specified brick.
+ volume replace-brick VOLNAME (BRICK NEW-BRICK) [start | start force | abort | status | commit | commit force]
+ Used to replace BRICK with NEW-BRICK in a given VOLUME. After the replace operation is complete, the replace-brick 'commit' command is necessary for the changes to be reflected in the volume information.
- volume remove-brick VOLNAME [(replica COUNT)|(stripe COUNT)] BRICK ...
- Removes the specified brick from the specified volume.
+ volume remove-brick VOLNAME [replica N] BRICK1 ... [start | stop | status | commit | force ]
+ Removes the specified brick(s) from the specified volume. The 'remove-brick' command can be used to reduce the replica count of the volume when the 'replica N' option is given. To ensure data migration from the removed brick to the existing bricks, give the 'start' sub-command at the end of the command. After the 'status' command says the remove-brick operation is complete, the user can 'commit' the changes to the volume file. Using 'remove-brick' without the 'start' option works similarly to the 'force' command, which makes the changes to the volume configuration without migrating the data.
@@ -112,7 +106,7 @@
volume rebalance VOLNAME start
- Starts rebalancing the specified volume.
+ Starts rebalancing of the data on the specified volume.
volume rebalance VOLNAME stop
@@ -127,17 +121,32 @@
Log
-
- volume log filename VOLNAME [BRICK] DIRECTORY
- Sets the log directory for the corresponding volume/brick.
volume log rotate VOLNAME [BRICK]
Rotates the log file for corresponding volume/brick.
+
+
+
+ Debugging
+
+
- volume log locate VOLNAME [BRICK]
- Locates the log file for corresponding volume/brick.
+ volume top VOLNAME {[open|read|write|opendir|readdir [nfs]] |[read-perf|write-perf [nfs|{bs COUNT count COUNT}]]|[clear [nfs]]} [BRICK] [list-cnt COUNT]
+ Shows the operation details on the volume depending on the arguments given.
+
+
+
+ volume profile VOLNAME {start|info|stop} [nfs]
+ Shows the file operation details on each brick of the volume.
+
+
+ volume status [all | VOLNAME] [nfs|shd|BRICK] [detail|clients|mem|inode|fd|callpool]
+ Shows activity details and internal data of the process (nfs/shd/BRICK) corresponding to the argument given. If no argument is given, this command outputs bare minimum details of the current status (including the PID of the brick process, etc.) of the volume's bricks.
+
+
+ statedump VOLNAME [nfs] [all|mem|iobuf|callpool|priv|fd|inode|history]
+ This command is used to take a statedump of the process, which captures most of the internal details.
@@ -182,7 +191,6 @@
volume geo-replication MASTER SLAVE config [options]
-
Configure geo-replication options between the hosts specified by MASTER and SLAVE.
@@ -226,7 +234,6 @@
The number of simultaneous files/directories that can be synchronized.
-
ignore-deletes
If this option is set to 1, a file deleted on master will not trigger a delete operation on the slave. Hence, the slave will remain as a superset of the master and can be used to recover the master in case of crash and/or accidental delete.
@@ -237,7 +244,6 @@
help
-
Display the command options.
@@ -251,10 +257,8 @@
FILES
- /etc/glusterd/*
+ /var/lib/glusterd/*
- SEE ALSO
- fusermount(1), mount.glusterfs(8), glusterfs-volgen(8), glusterfs(8), glusterd(8)
glusterd Daemon @@ -326,9 +330,7 @@ needing to schedule server downtime. FILES - /etc/glusterd/* + /var/lib/glusterd/* - SEE ALSO - fusermount(1), mount.glusterfs(8), glusterfs-volgen(8), glusterfs(8), gluster(8)
diff --git a/doc/admin-guide/en-US/admin_console.xml b/doc/admin-guide/en-US/admin_console.xml index ebf273935ca..74d12b965d9 100644 --- a/doc/admin-guide/en-US/admin_console.xml +++ b/doc/admin-guide/en-US/admin_console.xml @@ -2,11 +2,11 @@ Using the Gluster Console Manager – Command Line Utility - The Gluster Console Manager is a single command line utility that simplifies configuration and management of your storage environment. The Gluster Console Manager is similar to the LVM (Logical Volume Manager) CLI or ZFS Command Line Interface, but across multiple storage servers. You can use the Gluster Console Manager online, while volumes are mounted and active. Gluster automatically synchronizes volume configuration information across all Gluster servers. - Using the Gluster Console Manager, you can create new volumes, start volumes, and stop volumes, as required. You can also add bricks to volumes, remove bricks from existing volumes, as well as change translator settings, among other operations. - You can also use the commands to create scripts for automation, as well as use the commands as an API to allow integration with third-party applications. + The Gluster Console Manager is a single command line utility that simplifies configuration and management of your storage environment. The Gluster Console Manager is similar to the LVM (Logical Volume Manager) CLI or ZFS Command Line Interface, but it works in sync with multiple storage servers. You can use the Gluster Console Manager while volumes are mounted and active too. Gluster automatically synchronizes volume configuration information across all Gluster servers. + Using the Gluster Console Manager, you can create new volumes, start volumes, and stop volumes, as required. You can also add bricks to volumes, remove bricks from existing volumes, as well as change volume settings (such as some translator specific options), among other operations. 
+ You can also use these CLI commands to create scripts for automation, as well as use the commands as an API to allow integration with third-party applications.
Running the Gluster Console Manager
- You can run the Gluster Console Manager on any GlusterFS server either by invoking the commands or by running the Gluster CLI in interactive mode. You can also use the gluster command remotely using SSH.
+ You can run the Gluster Console Manager on any GlusterFS server either by invoking the commands or by running the Gluster CLI in interactive mode. You can also use the gluster command remotely using SSH.
To run commands directly:
@@ -25,4 +25,5 @@
Display the status of the peer.
+ With any 'gluster' installation, to check all the supported CLI commands, use 'gluster help'.
diff --git a/doc/admin-guide/en-US/admin_directory_Quota.xml b/doc/admin-guide/en-US/admin_directory_Quota.xml
index 8a1012a6ac2..83c4ff451fb 100644
--- a/doc/admin-guide/en-US/admin_directory_Quota.xml
+++ b/doc/admin-guide/en-US/admin_directory_Quota.xml
@@ -2,14 +2,14 @@
Managing Directory Quota
- Directory quotas in GlusterFS allow you to set limits on usage of disk space by directories or volumes.
-The storage administrators can control the disk space utilization at the directory and/or volume
-levels in GlusterFS by setting limits to allocatable disk space at any level in the volume and directory
-hierarchy. This is particularly useful in cloud deployments to facilitate utility billing model.
+ Directory quotas in GlusterFS allow you to set limits on usage of disk space of a given directory.
+The storage administrators can control the disk space utilization at the directory level in GlusterFS by setting quota limits on the given directory. If
+the admin sets the quota limit on the '/' of the volume, it can be treated as a 'volume level quota'. GlusterFS's quota implementation allows a different quota
+limit to be set on any directory, and quota limits can be nested.
This is particularly useful in cloud deployments to facilitate a utility billing model.
For now, only Hard limit is supported. Here, the limit cannot be exceeded and attempts to use
-more disk space or inodes beyond the set limit will be denied.
+more disk space beyond the set limit will be denied.
System administrators can also monitor the resource utilization to limit the storage for the users
@@ -46,7 +46,7 @@ immediately after creating that directory. For more information on setting disk
For example, to enable quota on test-volume:
# gluster volume quota test-volume enable
-Quota is enabled on /test-volume
+Quota is enabled on /test-volume
@@ -61,10 +61,10 @@ Quota is enabled on /test-volume
Disable the quota using the following command:
# gluster volume quota VOLNAME disable
- For example, to disable quota translator on test-volume:
+ For example, to disable quota on test-volume:
# gluster volume quota test-volume disable
-Quota translator is disabled on /test-volume
+Quota translator is disabled on /test-volume
@@ -87,7 +87,7 @@ export directory:
# gluster volume quota test-volume limit-usage /data 10GB
Usage limit has been set on /data
- In a multi-level directory hierarchy, the strictest disk limit will be considered for enforcement.
+ In a multi-level directory hierarchy, the minimum disk limit in the entire hierarchy will be considered for enforcement.
@@ -115,6 +115,7 @@ command:
/Test/data 10 GB 6 GB
/Test/data1 10 GB 4 GB
+
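The paths printed by the list command are relative to the volume root ('/'); translating one to the absolute path seen on a client is plain string concatenation. A sketch (the mount point and quota path are the hypothetical example values used in this guide):

```shell
# Sketch: map a quota-list path (relative to the volume root) to the
# absolute path on a client mount. Both values are example placeholders.
mount_point=/mnt/glusterfs   # where test-volume is mounted
quota_path=/Test/data        # path as printed by 'gluster volume quota ... list'

echo "${mount_point}${quota_path}"
# prints: /mnt/glusterfs/Test/data
```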
For example, if 'test-volume' is mounted on '/mnt/glusterfs', then for the above example, '/Test/data' means, '/mnt/glusterfs/Test/data' Display disk limit information on a particular directory on which limit is set, using the following diff --git a/doc/admin-guide/en-US/admin_geo-replication.xml b/doc/admin-guide/en-US/admin_geo-replication.xml index b546bb8da8c..4691116acb8 100644 --- a/doc/admin-guide/en-US/admin_geo-replication.xml +++ b/doc/admin-guide/en-US/admin_geo-replication.xml @@ -39,7 +39,7 @@ - Mirrors data across clusters + Mirrors data across nodes in a cluster Mirrors data across geographically distributed clusters @@ -47,7 +47,7 @@ Ensures backing up of data for disaster recovery - Synchronous replication (each and every file operation is sent across all the bricks) + Synchronous replication (each and every file modify operation is sent across all the bricks) Asynchronous replication (checks for the changes in files periodically and syncs them on detecting differences) @@ -79,11 +79,11 @@ Geo-replication provides an incremental replication service over Local Area Networks (LANs), Wide Area Network (WANs), and across the Internet. This section illustrates the most common deployment scenarios for Geo-replication, including the following: - Geo-replication over LAN + Geo-replication over LAN - Geo-replication over WAN + Geo-replication over WAN @@ -104,7 +104,7 @@ Geo-replication over WAN - You can configure Geo-replication to replicate data over a Wide Area Network. + You can configure Geo-replication to replicate data over a Wide Area Network. @@ -116,7 +116,7 @@ Geo-replication over Internet - You can configure Geo-replication to mirror data over the Internet. + You can configure Geo-replication to mirror data over the Internet. @@ -128,7 +128,7 @@ Multi-site cascading Geo-replication - You can configure Geo-replication to mirror data in a cascading fashion across multiple sites. 
+ You can configure Geo-replication to mirror data in a cascading fashion across multiple sites. @@ -142,16 +142,16 @@
Geo-replication Deployment Overview - Deploying Geo-replication involves the following steps: + Deploying Geo-replication involves the following steps: - Verify that your environment matches the minimum system requirement. For more information, see . + Verify that your environment matches the minimum system requirements. For more information, see . Determine the appropriate deployment scenario. For more information, see . - Start Geo-replication on master and slave systems, as required. For more information, see . + Start Geo-replication on master and slave systems, as required. For more information, see .
@@ -180,7 +180,7 @@
Filesystem
GlusterFS 3.2 or higher
- GlusterFS 3.2 or higher (GlusterFS needs to be installed, but does not need to be running), ext3, ext4, or XFS (any other POSIX compliant file system would work, but has not been tested extensively)
+ GlusterFS 3.2 or higher, ext3, ext4, or XFS (any other POSIX compliant file system would work, but has not been tested extensively)
Python
@@ -194,8 +194,8 @@
Remote synchronization
- rsync 3.0.7 or higher
- rsync 3.0.7 or higher
+ rsync 3.0.0 or higher
+ rsync 3.0.0 or higher
FUSE
@@ -211,37 +211,36 @@
Time Synchronization
- On bricks of a geo-replication master volume, all the servers' time must be uniform. You are recommended to set up NTP (Network Time Protocol) service to keep the bricks sync in time and avoid out-of-time sync effect.
+ All servers that are part of a geo-replication master volume need to have their clocks in sync. It is recommended to set up an NTP (Network Time Protocol) daemon to keep the clocks in sync.
For example: In a Replicated volume where brick1 of the master is at 12.20 hrs and brick2 of the master is at 12.10 hrs with a 10 minute time lag, all the changes in brick2 during this period may go unnoticed during synchronization of files with the Slave.
- For more information on setting up NTP, see .
+ For more information on setting up an NTP daemon, see .
To set up Geo-replication for SSH
- Password-less login has to be set up between the host machine (where geo-replication Start command will be issued) and the remote machine (where slave process should be launched through SSH).
+ Password-less login has to be set up between the host machine (where the geo-replication start command will be issued) and the remote machine (where the slave process should be launched through SSH).
On the node where geo-replication sessions are to be set up, run the following command:
- # ssh-keygen -f /etc/glusterd/geo-replication/secret.pem
+ On the node where geo-replication start commands are to be issued, run the following command:
+ # ssh-keygen -f /var/lib/glusterd/geo-replication/secret.pem
Press Enter twice to avoid setting a passphrase.
Run the following command on the master for all the slave hosts:
- # ssh-copy-id -i /etc/glusterd/geo-replication/secret.pem.pub user@slavehost
+ # ssh-copy-id -i /var/lib/glusterd/geo-replication/secret.pem.pub user@slavehost
Setting Up the Environment for a Secure Geo-replication Slave
- You can configure a secure slave using SSH so that master is granted a
-restricted access. With GlusterFS, you need not specify
-configuration parameters regarding the slave on the master-side
-configuration. For example, the master does not require the location of
+ You can configure a secure slave using SSH so that the master is granted
+restricted access. With GlusterFS 3.3, you need not specify slave
+configuration parameters on the master side. For example, the master does not require the location of
the rsync program on the slave, but the slave must ensure that rsync is in
the PATH of the user to which the master connects using SSH. The only
information that the master and slave have to negotiate is the slave-side
-user account, slave's resources that master uses as slave resources, and
+user account, the slave's resources, and
the master's public key. Secure access to the slave can be established
using the following options:
@@ -256,43 +255,39 @@
Backward Compatibility
- Your existing Ge-replication environment will work with GlusterFS,
-except for the following:
+ Your existing Geo-replication environment will work with GlusterFS
+ 3.3, except for the following:
- The process of secure reconfiguration affects only the glusterfs
+ The process of secure reconfiguration affects only the GlusterFS
instance on the slave. The changes are transparent to the master with the
exception that you may have to change the SSH target to an unprivileged
- account on slave.
+ account on the slave.
- The following are the some exceptions where this might not work:
+ The following are some exceptions where backward compatibility cannot be provided:
- Geo-replication URLs which specify the slave resource when configuring master will include the following special characters: space, *, ?, [;
+ Geo-replication URLs which specify the slave resource include the following special characters: space, *, ?, [;
- Slave must have a running instance of glusterd, even if there is no
-gluster volume among the mounted slave resources (that is, file tree
-slaves are used exclusively) .
+ The slave does not have glusterd running.
Restricting Remote Command Execution
- If you restrict remote command execution, then the Slave audits commands
-coming from the master and the commands related to the given
-geo-replication session is allowed. The Slave also provides access only
-to the files within the slave resource which can be read or manipulated
-by the Master.
+ If you restrict remote command execution, then the slave audits commands
+coming from the master and only the pre-configured commands are allowed. The slave also provides access only
+to the files which are pre-configured to be read or manipulated by the master.
To restrict remote command execution:
Identify the location of the gsyncd helper utility on the Slave. This utility is installed in PREFIX/libexec/glusterfs/gsyncd, where PREFIX is a compile-time parameter of glusterfs. For example, pass --prefix=PREFIX to the configure script; common values are /usr, /usr/local, and /opt/glusterfs/glusterfs_version.
- Ensure that command invoked from master to slave passed through the slave's gsyncd utility.
+ Ensure that commands invoked from the master to the slave are passed through the slave's gsyncd utility.
You can use either of the following two options:
@@ -312,14 +307,13 @@ account, then set it up by creating a new user with UID 0.
Using Mountbroker for Slaves
mountbroker is a new service of glusterd. This service allows an
-unprivileged process to own a GlusterFS mount by registering a label
-(and DSL (Domain-specific language) options ) with glusterd through a
-glusterd volfile. Using CLI, you can send a mount request to glusterd to
-receive an alias (symlink) of the mounted volume.
- A request from the agent , the unprivileged slave agents use the
-mountbroker service of glusterd to set up an auxiliary gluster mount for
-the agent in a special environment which ensures that the agent is only
-allowed to access with special parameters that provide administrative
+unprivileged process to own a GlusterFS mount. This is accomplished by registering a label
+(and DSL (Domain-specific language) options) with glusterd through the
+glusterd volfile. Using the CLI, you can send a mount request to glusterd and
+receive an alias (symlink) to the mounted volume.
+ The unprivileged process/agent uses the
+mountbroker service of glusterd to set up an auxiliary gluster mount. The mount
+is set up so as to allow only that agent to provide administrative
level access to the particular volume.
To set up an auxiliary gluster mount for the agent:
@@ -330,15 +324,17 @@ level access to the particular volume.
Create an unprivileged account. For example, geoaccount. Make it a member of geogroup.
- Create a new directory owned by root and with permissions 0711. For example, create a mountbroker-root directory /var/mountbroker-root.
+ Create a new directory as superuser to be used as the mountbroker's root.
- Add the following options to the glusterd volfile, assuming the name of the slave gluster volume as slavevol:
+ Change the permission of the directory to 0711.
+ + + Add the following options to the glusterd volfile, located at /etc/glusterfs/glusterd.vol, assuming the name of the slave gluster volume as slavevol: option mountbroker-root /var/mountbroker-root option mountbroker-geo-replication.geoaccount slavevol option geo-replication-log-group geogroup - If you are unable to locate the glusterd volfile at /etc/glusterfs/glusterd.vol, you can create a volfile containing both the default configuration and the above options and place it at /etc/glusterfs/. - A sample glusterd volfile along with default options: + A sample glusterd volfile along with default options: volume management type mgmt/glusterd option working-directory /etc/glusterd @@ -347,17 +343,18 @@ level access to the particular volume. option transport.socket.keepalive-interval 2 option transport.socket.read-fail-log off - option mountbroker-root /var/mountbroker-root + option mountbroker-root /var/mountbroker-root option mountbroker-geo-replication.geoaccount slavevol option geo-replication-log-group geogroup end-volume - If you host multiple slave volumes on Slave, you can repeat step 2. for each of them and add the following options to the volfile: + If you host multiple slave volumes, you can repeat step 2. for each of the slave volumes and add the following options to the volfile: option mountbroker-geo-replication.geoaccount2 slavevol2 option mountbroker-geo-replication.geoaccount3 slavevol3 Setup Master to access Slave as geoaccount@Slave. - You can add multiple slave volumes within the same account (geoaccount) by providing comma-separated list (without spaces) as the argument of mountbroker-geo-replication.geogroup. You can also have multiple options of the form mountbroker-geo-replication.*. It is recommended to use one service account per Master machine. For example, if there are multiple slave volumes on Slave for the master machines Master1, Master2, and Master3, then create a dedicated service user on Slave for them by repeating Step 2. 
for each (like geogroup1, geogroup2, and geogroup3), and then add the following corresponding options to the volfile: + You can add multiple slave volumes within the same account (geoaccount) by providing comma-separated list of slave + volumes (without spaces) as the argument of mountbroker-geo-replication.geogroup. You can also have multiple options of the form mountbroker-geo-replication.*. It is recommended to use one service account per Master machine. For example, if there are multiple slave volumes on Slave for the master machines Master1, Master2, and Master3, then create a dedicated service user on Slave for them by repeating Step 2. for each (like geogroup1, geogroup2, and geogroup3), and then add the following corresponding options to the volfile: option mountbroker-geo-replication.geoaccount1 slavevol11,slavevol12,slavevol13 option mountbroker-geo-replication.geoaccount2 slavevol21,slavevol22 @@ -365,17 +362,16 @@ option mountbroker-geo-replication.geoaccount3 slavevol3 Now set up Master1 to ssh to geoaccount1@Slave, etc. - You must restart glusterd after making changes in the configuration to effect the updates. + You must restart glusterd to make the configuration changes effective.
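The mountbroker-root preparation described above (a superuser-owned directory with mode 0711, i.e. traversable but neither readable nor writable by others) can be sketched as follows. A temporary path is used so the sketch is self-contained; a real deployment would use /var/mountbroker-root, created as root:

```shell
# Sketch: create a mountbroker root directory with mode 0711.
# A temp path stands in for /var/mountbroker-root for illustration.
root="$(mktemp -d)/mountbroker-root"
mkdir -p "$root"
chmod 0711 "$root"

# Show the resulting permission bits (GNU coreutils stat).
stat -c '%a' "$root"
# prints: 711
```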
Using IP based Access Control
- You can use IP based access control method to provide access control for
-the slave resources using IP address. You can use method for both Slave
-and file tree slaves, but in the section, we are focusing on file tree
-slaves using this method.
- To set access control based on IP address for file tree slaves:
+ You can provide access control for the slave resources using IP
+ addresses. You can use this method for both Gluster volume and
+ file tree slaves, but in this section, we are focusing on file tree slaves.
+ To set IP address based access control for file tree slaves:
Set a general restriction for accessibility of file tree resources:
@@ -427,7 +423,7 @@ comma-separated lists of CIDR subnets.
Start geo-replication between the hosts using the following command:
- # gluster volume geo-replication MASTER SLAVE start
+ # gluster volume geo-replication MASTER SLAVE start
For example:
Starting geo-replication session between Volume1
example.com:/data/remote_dir has been successful
- You may need to configure the service before starting Gluster Geo-replication. For more information, see .
+ You may need to configure the Geo-replication service before
+ starting it. For more information, see .
@@ -730,3 +727,4 @@ example.com:/data/remote_dir has been successful
+
diff --git a/doc/admin-guide/en-US/admin_managing_volumes.xml b/doc/admin-guide/en-US/admin_managing_volumes.xml
index 0c4d2e922cf..333d46cd6e4 100644
--- a/doc/admin-guide/en-US/admin_managing_volumes.xml
+++ b/doc/admin-guide/en-US/admin_managing_volumes.xml
@@ -36,7 +36,7 @@
Tuning Volume Options
You can tune volume options, as needed, while the cluster is online and available.
- Red Hat recommends you to set server.allow-insecure option to ON if there are too many bricks in each volume or if there are too many services which have already utilized all the privileged ports in the system. Turning this option ON allows ports to accept/reject messages from insecure ports. So, use this option only if your deployment requires it.
+ It is recommended to set the server.allow-insecure option to ON if there are too many bricks in each volume or if there are too many services which have already utilized all the privileged ports in the system. Turning this option ON allows ports to accept/reject messages from insecure ports. So, use this option only if your deployment requires it.
To tune volume options
@@ -387,7 +387,7 @@ Brick4: server4:/exp4
Remove the brick using the following command:
# gluster volume remove-brick VOLNAME BRICK start
For example, to remove server2:/exp2:
- # gluster volume remove-brick test-volume server2:/exp2
+ # gluster volume remove-brick test-volume server2:/exp2 start
Removing brick(s) can result in data loss. Do you want to Continue? (y/n)
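The full remove-brick lifecycle is start, then poll status, then commit. The sequence can be sketched as below; the gluster function is a stub that only echoes the commands, since the real CLI needs a running cluster, and the volume/brick names are the example values from this guide:

```shell
# Stub standing in for the real gluster CLI (a live cluster is required
# for the real commands); it just echoes what would be run.
gluster() {
    echo "gluster $*"
}

# 1. Start migrating data off the brick being removed.
gluster volume remove-brick test-volume server2:/exp2 start

# 2. Poll until the status reports that migration is complete.
gluster volume remove-brick test-volume server2:/exp2 status

# 3. Commit, so the brick is dropped from the volume configuration.
gluster volume remove-brick test-volume server2:/exp2 commit
```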
@@ -404,6 +404,13 @@ Removing brick(s) can result in data loss. Do you want to Continue? (y/n)
+
+ Commit the remove brick operation using the following command:
+ # gluster volume remove-brick VOLNAME BRICK commit
+ For example, to commit the remove brick operation on the server2:/exp2 brick:
+ # gluster volume remove-brick test-volume server2:/exp2 commit
+ Remove Brick successful
+
Check the volume information using the following command:
# gluster volume info
diff --git a/doc/admin-guide/en-US/admin_setting_volumes.xml b/doc/admin-guide/en-US/admin_setting_volumes.xml
index 6a8468d5f11..051fb723a04 100644
--- a/doc/admin-guide/en-US/admin_setting_volumes.xml
+++ b/doc/admin-guide/en-US/admin_setting_volumes.xml
@@ -41,7 +41,7 @@ information, see Accessing Data - Setting Up GlusterFS Client
- You can access gluster volumes in multiple ways. You can use Gluster Native Client method for high concurrency, performance and transparent failover in GNU/Linux clients. You can also use NFS v3 to access gluster volumes. Extensive testing has be done on GNU/Linux clients and NFS implementation in other operating system, such as FreeBSD, and Mac OS X, as well as Windows 7 (Professional and Up) and Windows Server 2003. Other NFS client implementations may work with gluster NFS server.
- You can use CIFS to access volumes when using Microsoft Windows as well as SAMBA clients. For this access method, Samba packages need to be present on the client side.
-
+ Gluster volumes can be accessed in multiple ways. One can use the Gluster Native Client method for high concurrency, performance, and transparent failover on GNU/Linux clients. Gluster also exports volumes using the NFS v3 protocol.
+ CIFS can also be used to access volumes by exporting the Gluster Native mount point as a Samba export.
+
Gluster Native Client
- The Gluster Native Client is a FUSE-based client running in user space. Gluster Native Client is the recommended method for accessing volumes when high concurrency and high write performance is required.
- This section introduces the Gluster Native Client and explains how to install the software on client machines. This section also describes how to mount volumes on clients (both manually and automatically) and how to verify that the volume has mounted successfully.
+ The Gluster Native Client is a FUSE-based client running in user space. Gluster Native Client is the recommended method for accessing volumes if all the clustered features of GlusterFS are to be utilized.
+ This section introduces the Gluster Native Client and explains how to install the software on client machines. This section also describes how to mount volumes on clients (both manually and automatically).
Installing the Gluster Native Client
- Before you begin installing the Gluster Native Client, you need to verify that the FUSE module is loaded on the client and has access to the required modules as follows:
+ The Gluster Native Client depends on the FUSE kernel module. To make sure the FUSE module is loaded, execute the following commands:
 Add the FUSE loadable kernel module (LKM) to the Linux kernel:
@@ -25,35 +25,24 @@
fuse init (API version 7.13)
-
- Installing on Red Hat Package Manager (RPM) Distributions +
Installing on RPM Based Distributions
 To install Gluster Native Client on RPM distribution-based systems
 Install required prerequisites on the client using the following command:
- $ sudo yum -y install openssh-server wget fuse fuse-libs openib libibverbs
+ $ sudo yum -y install fuse fuse-libs
- Ensure that TCP and UDP ports 24007 and 24008 are open on all Gluster servers. Apart from these ports, you need to open one port for each brick starting from port 24009. For example: if you have five bricks, you need to have ports 24009 to 24013 open.
- You can use the following chains with iptables:
- $ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT
- $ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24009:24014 -j ACCEPT
-
- If you already have iptable chains, make sure that the above ACCEPT rules precede the DROP rules. This can be achieved by providing a lower rule number than the DROP rule.
-
-
-
- Download the latest glusterfs, glusterfs-fuse, and glusterfs-rdma RPM files to each client. The glusterfs package contains the Gluster Native Client. The glusterfs-fuse package contains the FUSE translator required for mounting on client systems and the glusterfs-rdma packages contain OpenFabrics verbs RDMA module for Infiniband.
- You can download the software at .
+ Download the latest glusterfs and glusterfs-fuse RPM files on each client. The glusterfs package contains the GlusterFS binary and required libraries. The glusterfs-fuse package contains the FUSE plugin (in GlusterFS terms, it is called a Translator) required for mounting.
+ Install the 'glusterfs-rdma' RPM if RDMA support is required. 'glusterfs-rdma' contains the RDMA transport module for the InfiniBand interconnect.
+ You can download the software at .
 Install Gluster Native Client on the client.
- $ sudo rpm -i glusterfs-3.3.0qa30-1.x86_64.rpm - $ sudo rpm -i glusterfs-fuse-3.3.0qa30-1.x86_64.rpm - $ sudo rpm -i glusterfs-rdma-3.3.0qa30-1.x86_64.rpm - - The RDMA module is only required when using Infiniband. - + $ sudo rpm -i glusterfs-3.3.0-1.x86_64.rpm + $ sudo rpm -i glusterfs-fuse-3.3.0-1.x86_64.rpm + $ sudo rpm -i glusterfs-rdma-3.3.0-1.x86_64.rpm
@@ -62,21 +51,11 @@
 To install Gluster Native Client on Debian-based distributions
- Install OpenSSH Server on each client using the following command:
- $ sudo apt-get install openssh-server vim wget
-
- Download the latest GlusterFS .deb file and checksum to each client.
+ Download the latest GlusterFS .deb file.
 You can download the software at .
- For each .deb file, get the checksum (using the following command) and compare it against the checksum for that file in the md5sum file.
-
-$ md5sum GlusterFS_DEB_file.deb
- The md5sum of the packages is available at:
-
- Uninstall GlusterFS v3.1 (or an earlier version) from the client using the following command:
+ Uninstall GlusterFS v3.1.x/v3.2.x (or an earlier version) from the client using the following command:
 $ sudo dpkg -r glusterfs
 (Optional) Run $ sudo dpkg --purge glusterfs to purge the configuration files.
-
 Install Gluster Native Client on the client using the following command:
- $ sudo dpkg -i GlusterFS_DEB_file
+ $ sudo dpkg -i glusterfs-$version.deb
 For example:
- $ sudo dpkg -i glusterfs-3.3.x.deb
-
- Ensure that TCP and UDP ports 24007 and 24008 are open on all Gluster servers. Apart from these ports, you need to open one port for each brick starting from port 24009. For example: if you have five bricks, you need to have ports 24009 to 24013 open.
-
- You can use the following chains with iptables:
-
- $ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT
- $ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24009:24014 -j ACCEPT
-
- If you already have iptable chains, make sure that the above ACCEPT rules precede the DROP rules. This can be achieved by providing a lower rule number than the DROP rule.
-
+ $ sudo dpkg -i glusterfs-3.3.0.deb
@@ -112,28 +80,28 @@
 # cd glusterfs
- Download the source code.
-
+ Download the source code.
 You can download the source at .
 Extract the source code using the following command:
- # tar -xvzf SOURCE-FILE
+ # tar -xvzf glusterfs-3.3.0.tar.gz
 Run the configuration utility using the following command:
 # ./configure
+ ...
 GlusterFS configure summary
- ==================
- FUSE client : yes
- Infiniband verbs : yes
+ ===========================
+ FUSE client        : yes
+ Infiniband verbs   : yes
 epoll IO multiplex : yes
- argp-standalone : no
- fusermount : no
- readline : yes
- The configuration summary shows the components that will be built with Gluster Native Client.
+ argp-standalone    : no
+ fusermount         : no
+ readline           : yes
+ The configuration summary shown above is a sample; it can vary depending on the packages installed.
 Build the Gluster Native Client software using the following commands:
@@ -149,27 +117,27 @@
-
+
Mounting Volumes
 After installing the Gluster Native Client, you need to mount Gluster volumes to access data. There are two methods you can choose:
-
+
-
+
- After mounting a volume, you can test the mounted volume using the procedure described in .
+ After mounting a volume, you can test the mounted volume using the procedure described in .
 Server names used during volume creation should be resolvable on the client machine. You can use appropriate /etc/hosts entries or a DNS server to resolve server names to IP addresses.
- Manually Mounting Volumes - To manually mount a Gluster volume - - - To mount a volume, use the following command: +
+ Manually Mounting Volumes + To manually mount a Gluster volume + + + To mount a volume, use the following command: # mount -t glusterfs HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR @@ -177,40 +145,13 @@ # mount -t glusterfs server1:/test-volume /mnt/glusterfs - The server specified in the mount command is only used to fetch the gluster configuration volfile describing the volume name. Subsequently, the client will communicate directly with the servers mentioned in the volfile (which might not even include the one used for mount). - - - If you see a usage message like "Usage: mount.glusterfs", mount usually requires you to create a directory to be used as the mount point. Run "mkdir /mnt/glusterfs" before you attempt to run the mount command listed above. + The server specified in the mount command is only used to fetch the gluster configuration volfile describing the volume name. Subsequently, the client will communicate directly with the servers mentioned in the volfile (which might not even include the one used for mount). - Mounting Options - You can specify the following options when using the mount -t glusterfs command. Note that you need to separate all options with commas. - - - backupvolfile-server=server-name - volfile-max-fetch-attempts=number of attempts - log-level=loglevel - - log-file=logfile - - transport=transport-type - - direct-io-mode=[enable|disable] - - - For example: - - # mount -t glusterfs -o backupvolfile-server=volfile_server2 --volfile-max-fetch-attempts=2 log-level=WARNING,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs - If option is added while mounting fuse client, when the first -volfile server fails, then the server specified in option is used as volfile server to mount -the client. - In --volfile-max-fetch-attempts=X option, specify the number of attempts to fetch volume files while mounting a volume. 
This option is useful when you mount a server with multiple IP addresses or when round-robin DNS is configured for the server-name..
-
+
Automatically Mounting Volumes
 You can configure your system to automatically mount the Gluster volume each time your system starts.
 The server specified in the mount command is only used to fetch the gluster configuration volfile describing the volume name. Subsequently, the client will communicate directly with the servers mentioned in the volfile (which might not even include the one used for mount).
 To automatically mount a Gluster volume
@@ -222,68 +163,35 @@ the client.
 server1:/test-volume /mnt/glusterfs glusterfs defaults,_netdev 0 0
- Mounting Options
- You can specify the following options when updating the /etc/fstab file. Note that you need to separate all options with commas.
-
- log-level=loglevel
-
- log-file=logfile
-
- transport=transport-type
-
- direct-io-mode=[enable|disable]
-
-
- For example:
-
+ Mounting Options
+ You can specify the following options when using the mount -t glusterfs command. Note that you need to separate all options with commas.
+ backupvolfile-server=server-name
+ fetch-attempts=N (where N is the number of attempts)
+ log-level=loglevel
+ log-file=logfile
+ direct-io-mode=[enable|disable]
+ ro (for readonly mounts)
+ acl (for enabling posix-ACLs)
+ worm (making the mount WORM - Write Once, Read Many type)
+ selinux (enable selinux on GlusterFS mount)
+
+ For example:
+ # mount -t glusterfs -o backupvolfile-server=volfile_server2,fetch-attempts=2,log-level=WARNING,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs
+ In /etc/fstab, the options would look like the following:
 HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR glusterfs defaults,_netdev,log-level=WARNING,log-file=/var/log/gluster.log 0 0
-
-
 Testing Mounted Volumes
 To test mounted volumes
 
 
 Use the following command:
 
 # mount
 If the gluster volume was successfully mounted, the output of the mount command on the client will be similar to this example:
 
 
 server1:/test-volume on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072
 
 
 
 
 Use the following command:
 
 # df
 
 The output of df command on the client will display the aggregated storage space from all the bricks in a volume similar to this example:
 
 # df -h /mnt/glusterfs Filesystem Size Used Avail Use% Mounted on server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs
 
 
 Change to the directory and list the contents by entering the following:
 
 # cd MOUNTDIR
 # ls
 
 
 For example,
 # cd /mnt/glusterfs
 # ls
 
 
+ If the backupvolfile-server option is added while mounting the FUSE client, then when the first volfile server fails, the server specified in that option is used as the volfile server to mount the client.
+ In the fetch-attempts=N option, specify the number of attempts made to fetch the volume file while mounting a volume. This option is useful when round-robin DNS is configured for the server name.
+
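The mount options listed above are passed as a single comma-separated string. The composition can be sketched with a small helper; the server, volume, mount point, and option values below are illustrative, not required names:

```shell
#!/bin/sh
# Sketch: compose a glusterfs mount command from a comma-separated option
# string. All names and option values here are illustrative examples.
gluster_mount_cmd() {
    # $1 = SERVER:/VOLNAME, $2 = mount point, $3 = comma-separated options
    printf 'mount -t glusterfs -o %s %s %s\n' "$3" "$1" "$2"
}

gluster_mount_cmd server1:/test-volume /mnt/glusterfs \
    backupvolfile-server=volfile_server2,fetch-attempts=2,log-level=WARNING
```

The same comma-separated option string is what goes into the fourth field of an /etc/fstab entry for automatic mounting.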
NFS - You can use NFS v3 to access to gluster volumes. Extensive testing has be done on GNU/Linux clients and NFS implementation in other operating system, such as FreeBSD, and Mac OS X, as well as Windows 7 (Professional and Up), Windows Server 2003, and others, may work with gluster NFS server implementation. - GlusterFS now includes network lock manager (NLM) v4. NLM enables applications on NFSv3 clients to do record locking on files on NFS server. It is started automatically whenever the NFS server is run. - You must install nfs-common package on both servers and clients (only for Debian-based) distribution. - This section describes how to use NFS to mount Gluster volumes (both manually and automatically) and how to verify that the volume has been mounted successfully. + You can use NFS v3 to access to gluster volumes. + GlusterFS 3.3.0, now includes network lock manager (NLM) v4 feature too. NLM enables applications on NFSv3 clients to do record locking on files. NLM program is started automatically with the NFS server process. + This section describes how to use NFS to mount Gluster volumes (both manually and automatically). +
Using NFS to Mount Volumes You can use either of the following methods to mount Gluster volumes: @@ -295,9 +203,8 @@ the client. - Prerequisite: Install nfs-common package on both servers and clients (only for Debian-based distribution), using the following command: - $ sudo aptitude install nfs-common - After mounting a volume, you can test the mounted volume using the procedure described in . + After mounting a volume, you can test the mounted volume using the procedure described in . +
Manually Mounting Volumes Using NFS To manually mount a Gluster volume using NFS @@ -310,8 +217,7 @@ the client. For example: # mount -t nfs -o vers=3 server1:/test-volume /mnt/glusterfs - Gluster NFS server does not support UDP. If the NFS client you are using defaults to connecting using UDP, the following message appears: - + Gluster NFS server does not support UDP. If the NFS client you are using defaults to connecting using UDP, the following message appears: requested NFS version or transport protocol is not supported. To connect using TCP @@ -322,7 +228,7 @@ the client. -o mountproto=tcp For example: - # mount -o mountproto=tcp -t nfs server1:/test-volume /mnt/glusterfs + # mount -o mountproto=tcp,vers=3 -t nfs server1:/test-volume /mnt/glusterfs To mount Gluster NFS server from a Solaris client @@ -337,6 +243,7 @@ For example:
+
Automatically Mounting Volumes Using NFS You can configure your system to automatically mount Gluster volumes using NFS each time the system starts. @@ -347,58 +254,18 @@ For example: HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR nfs defaults,_netdev,vers=3 0 0 For example, server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,vers=3 0 0 - - Gluster NFS server does not support UDP. If the NFS client you are using defaults to connecting using UDP, the following message appears: - requested NFS version or transport protocol is not supported. - - - To connect using TCP - - - Add the following entry in /etc/fstab file : - HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR nfs defaults,_netdev,mountproto=tcp 0 0 - For example, + If default transport to mount NFS is UDP, use below line in fstab server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,mountproto=tcp 0 0 To automount NFS mounts Gluster supports *nix standard method of automounting NFS mounts. Update the /etc/auto.master and /etc/auto.misc and restart the autofs service. After that, whenever a user or process attempts to access the directory it will be mounted in the background.
-
- Testing Volumes Mounted Using NFS - You can confirm that Gluster directories are mounting successfully. - To test mounted volumes - - - Use the mount command by entering the following: - # mount - For example, the output of the mount command on the client will display an entry like the following: - server1:/test-volume on /mnt/glusterfs type nfs (rw,vers=3,addr=server1) - - - - - Use the df command by entering the following: - # df - For example, the output of df command on the client will display the aggregated storage space from all the bricks in a volume. - # df -h /mnt/glusterfs -Filesystem Size Used Avail Use% Mounted on -server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs - - - Change to the directory and list the contents by entering the following: - # cd MOUNTDIR - # ls - For example, - - # cd /mnt/glusterfs - - # ls - - -
+
+
+
CIFS
 You can use CIFS to access volumes when using Microsoft Windows as well as Samba clients. For this access method, Samba packages need to be present on the client side. You can export the glusterfs mount point as a Samba export, and then mount it using the CIFS protocol.
 
 CIFS access using the Mac OS X Finder is not supported, however, you can use the Mac OS X command line to access Gluster volumes using CIFS.
 
+
Using CIFS to Mount Volumes You can use either of the following methods to mount Gluster volumes: @@ -417,15 +285,16 @@ server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs - After mounting a volume, you can test the mounted volume using the procedure described in . + After mounting a volume, you can test the mounted volume using the procedure described in . You can also use Samba for exporting Gluster Volumes through CIFS protocol. +
Exporting Gluster Volumes Through Samba We recommend you to use Samba for exporting Gluster volumes through the CIFS protocol. To export volumes through CIFS protocol - Mount a Gluster volume. For more information on mounting volumes, see . + Mount a Gluster volume. For more information on mounting volumes, see . Setup Samba configuration to export the mount point of the Gluster volume. @@ -446,6 +315,7 @@ server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs To be able mount from any server in the trusted storage pool, you must repeat these steps on each Gluster node. For more advanced configurations, see Samba documentation.
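The Samba configuration step above can be illustrated with a generated share section. The share name glustertest, the comment, and the path are assumptions for illustration, not values mandated by Gluster; the parameters themselves (path, read only, guest ok) are standard smb.conf settings:

```shell
#!/bin/sh
# Sketch: emit a minimal smb.conf share section for a mounted Gluster
# volume. Share name and all values are illustrative assumptions.
write_share() {
cat <<'EOF'
[glustertest]
comment = Gluster volume test-volume (illustrative share)
path = /mnt/glusterfs
read only = no
guest ok = yes
EOF
}

write_share
```

Append a section like this to smb.conf and restart the Samba service, repeating on each Gluster node that should serve the export.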
+
Manually Mounting Volumes Using CIFS You can manually mount Gluster volumes using CIFS on Microsoft Windows-based client machines. @@ -479,6 +349,7 @@ server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs
+
Automatically Mounting Volumes Using CIFS You can configure your system to automatically mount Gluster volumes using CIFS on Microsoft Windows-based clients each time the system starts. @@ -502,10 +373,39 @@ server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs
-
- Testing Volumes Mounted Using CIFS - You can confirm that Gluster directories are mounting successfully by navigating to the directory using Windows Explorer. -
+
+
+
 Testing Mounted Volumes
 To test mounted volumes
 
 
 Use the following command:
 # mount
 If the gluster volume was successfully mounted, the output of the mount command on the client will be similar to this example:
 server1:/test-volume on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
 
 
 
 
 Use the following command:
 # df -h
 The output of df command on the client will display the aggregated storage space from all the bricks in a volume similar to this example:
 # df -h /mnt/glusterfs
 Filesystem Size Used Avail Use% Mounted on
 server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs
 
 
 Change to the directory and list the contents by entering the following:
 # cd MOUNTDIR
 # ls
 
 
 For example,
 # cd /mnt/glusterfs
 # ls
 
 
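The df check above can be scripted for monitoring. The sketch below extracts the Use% column for a given mount point from df-style output; the sample data embedded here is the example output shown above:

```shell
#!/bin/sh
# Sketch: extract the Use% column for a given mount point from df-style
# output read on stdin. Sample data mirrors the guide's example.
usage_percent() {
    # $1 = mount point
    awk -v mp="$1" '$NF == mp { print $(NF - 1) }'
}

printf '%s\n' \
    'Filesystem            Size  Used Avail Use% Mounted on' \
    'server1:/test-volume   28T   22T  5.4T  82% /mnt/glusterfs' |
    usage_percent /mnt/glusterfs    # prints: 82%
```

On a live client, the same function can be fed with `df -h | usage_percent /mnt/glusterfs`.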
diff --git a/doc/admin-guide/en-US/admin_storage_pools.xml b/doc/admin-guide/en-US/admin_storage_pools.xml
index 87b6320bd4b..2c4a5cabe1d 100644
--- a/doc/admin-guide/en-US/admin_storage_pools.xml
+++ b/doc/admin-guide/en-US/admin_storage_pools.xml
@@ -3,17 +3,17 @@
 Setting up Trusted Storage Pools
 Before you can configure a GlusterFS volume, you must create a trusted storage pool consisting of the storage servers that provide bricks to a volume.
- A storage pool is a trusted network of storage servers. When you start the first server, the storage pool consists of that server alone. To add additional storage servers to the storage pool, you can use the probe command from a storage server that is already trusted.
+ A storage pool is a trusted network of storage servers. When you start the first server, the storage pool consists of that server alone. To add additional storage servers to the storage pool, you can use the probe command from a storage server that is already part of the trusted storage pool.
 Do not self-probe the first server/localhost.
- The GlusterFS service must be running on all storage servers that you want to add to the storage pool. See for more information.
+ The glusterd service must be running on all storage servers that you want to add to the storage pool. See for more information.
Adding Servers to Trusted Storage Pool
 To create a trusted storage pool, add servers to the trusted storage pool
 
 
- The hostnames used to create the storage pool must be resolvable by DNS.
+ The hostnames used to create the storage pool must be resolvable by DNS. Also make sure that the firewall is not blocking the probe requests and replies (for example, iptables -F flushes all firewall rules while testing).
 To add a server to the storage pool:
 # gluster peer probe server
 For example, to create a trusted storage pool of four servers, add three servers to the storage pool from server1:
diff --git a/doc/admin-guide/en-US/admin_troubleshooting.xml b/doc/admin-guide/en-US/admin_troubleshooting.xml
index dff182c5f1e..1c6866d8ba7 100644
--- a/doc/admin-guide/en-US/admin_troubleshooting.xml
+++ b/doc/admin-guide/en-US/admin_troubleshooting.xml
@@ -106,7 +106,7 @@
 location of the log file.
Rotating Geo-replication Logs Administrators can rotate the log file of a particular master-slave session, as needed. -When you run geo-replication's log-rotate command, the log file +When you run geo-replication's log-rotate command, the log file is backed up with the current timestamp suffixed to the file name and signal is sent to gsyncd to start logging to a new log file. @@ -139,7 +139,7 @@ log rotate successful # gluster volume geo-replication log-rotate For example, to rotate the log file for all sessions: - # gluster volume geo-replication log rotate + # gluster volume geo-replication log-rotate log rotate successful @@ -172,18 +172,17 @@ you have installed the required version. the following: 2011-04-28 14:06:18.378859] E [syncdutils:131:log_raise_exception] <top>: FAIL: Traceback (most recent call last): File "/usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 152, in twraptf(*aa) File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 118, in listen rid, exc, res = recv(self.inf) File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 42, in recv return pickle.load(inf) EOFError - Solution: This error indicates that the RPC communication between the master gsyncd module and slave -gsyncd module is broken and this can happen for various reasons. Check if it satisfies all the following + Solution: This error indicates that the RPC communication between the master geo-replication module and slave +geo-replication module is broken and this can happen for various reasons. Check if it satisfies all the following pre-requisites: - Password-less SSH is set up properly between the host and the remote machine. + Password-less SSH is set up properly between the host where geo-replication command is executed and the remote machine where the slave geo-replication is located. - If FUSE is installed in the machine, because geo-replication module mounts the GlusterFS volume -using FUSE to sync data. 
+ If FUSE is installed in the machine where the geo-replication command is executed, because geo-replication module mounts the GlusterFS volume using FUSE to sync data. @@ -196,15 +195,15 @@ required permissions. - If GlusterFS 3.2 or higher is not installed in the default location (in Master) and has been prefixed to be + If GlusterFS 3.3 or higher is not installed in the default location (in Master) and has been prefixed to be installed in a custom location, configure the gluster-command for it to point to the exact location. - If GlusterFS 3.2 or higher is not installed in the default location (in slave) and has been prefixed to be + If GlusterFS 3.3 or higher is not installed in the default location (in slave) and has been prefixed to be installed in a custom location, configure the remote-gsyncd-command for it to point to the -exact place where gsyncd is located. +exact place where geo-replication is located. @@ -224,7 +223,7 @@ intermediate master.
Troubleshooting POSIX ACLs - This section describes the most common troubleshooting issues related to POSIX ACLs. + This section describes the most common troubleshooting issues related to POSIX ACLs.
setfacl command fails with “setfacl: <file or directory name>: Operation not supported” error @@ -244,7 +243,7 @@ Storage.
Time Sync - Running MapReduce job may throw exceptions if the time is out-of-sync on the hosts in the cluster. + Running MapReduce job may throw exceptions if the clocks are out-of-sync on the hosts in the cluster. Solution: Sync the time on all hosts using ntpd program. @@ -257,7 +256,7 @@ Storage.
mount command on NFS client fails with “RPC Error: Program not registered” - Start portmap or rpcbind service on the NFS server. + Start portmap or rpcbind service on the machine where NFS server is running. This error is encountered when the server has not started correctly. @@ -280,11 +279,11 @@ required: This situation can be confirmed from the log file, if the following error lines exist: [2010-05-26 23:40:49] E [rpc-socket.c:126:rpcsvc_socket_listen] rpc-socket: binding socket failed:Address already in use -[2010-05-26 23:40:49] E [rpc-socket.c:129:rpcsvc_socket_listen] rpc-socket: Port is already in use -[2010-05-26 23:40:49] E [rpcsvc.c:2636:rpcsvc_stage_program_register] rpc-service: could not create listening connection -[2010-05-26 23:40:49] E [rpcsvc.c:2675:rpcsvc_program_register] rpc-service: stage registration of program failed -[2010-05-26 23:40:49] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465 -[2010-05-26 23:40:49] E [nfs.c:125:nfs_init_versions] nfs: Program init failed +[2010-05-26 23:40:49] E [rpc-socket.c:129:rpcsvc_socket_listen] rpc-socket: Port is already in use +[2010-05-26 23:40:49] E [rpcsvc.c:2636:rpcsvc_stage_program_register] rpc-service: could not create listening connection +[2010-05-26 23:40:49] E [rpcsvc.c:2675:rpcsvc_program_register] rpc-service: stage registration of program failed +[2010-05-26 23:40:49] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465 +[2010-05-26 23:40:49] E [nfs.c:125:nfs_init_versions] nfs: Program init failed [2010-05-26 23:40:49] C [nfs.c:531:notify] nfs: Failed to initialize protocols To resolve this error one of the Gluster NFS servers will have to be shutdown. At this time, Gluster NFS server does not support running multiple NFS servers on the same machine. 
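The "Port is already in use" condition described above can be confirmed by scanning the NFS log. The sketch below embeds the quoted log lines as sample data; on a real server the same filter would read the actual nfs.log:

```shell
#!/bin/sh
# Sketch: count "Port is already in use" bind failures in an nfs.log
# excerpt. Sample lines below are the ones quoted in this section.
count_port_clashes() {
    grep -c 'Port is already in use'
}

printf '%s\n' \
    '[2010-05-26 23:40:49] E [rpc-socket.c:126:rpcsvc_socket_listen] rpc-socket: binding socket failed:Address already in use' \
    '[2010-05-26 23:40:49] E [rpc-socket.c:129:rpcsvc_socket_listen] rpc-socket: Port is already in use' |
    count_port_clashes    # prints: 1
```

A non-zero count indicates another NFS server is holding the ports, so one of the NFS servers must be shut down.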
@@ -296,7 +295,7 @@ Gluster NFS server does not support running multiple NFS servers on the same mac mount.nfs: rpc.statd is not running but is required for remote locking. mount.nfs: Either use '-o nolock' to keep locks local, or start statd. Start rpc.statd - For NFS clients to mount the NFS server, rpc.statd service must be running on the clients. + For NFS clients to mount the NFS server, rpc.statd service must be running on the client machine. Start rpc.statd service by running the following command: @@ -317,12 +316,12 @@ required: $ /etc/init.d/rpcbind start
- NFS server glusterfsd starts but initialization fails with “nfsrpc- service: portmap registration of program failed” error message in the log. + NFS server, glusterfsd starts but initialization fails with “nfsrpc- service: portmap registration of program failed” error message in the log. NFS start-up can succeed but the initialization of the NFS service can still fail preventing clients from accessing the mount points. Such a situation can be confirmed from the following error messages in the log file: - [2010-05-26 23:33:47] E [rpcsvc.c:2598:rpcsvc_program_register_portmap] rpc-service: Could notregister with portmap + [2010-05-26 23:33:47] E [rpcsvc.c:2598:rpcsvc_program_register_portmap] rpc-service: Could notregister with portmap [2010-05-26 23:33:47] E [rpcsvc.c:2682:rpcsvc_program_register] rpc-service: portmap registration of program failed [2010-05-26 23:33:47] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465 [2010-05-26 23:33:47] E [nfs.c:125:nfs_init_versions] nfs: Program init failed @@ -415,14 +414,14 @@ Gluster NFS server. The timeout can be resolved by forcing the NFS client to use server requests/replies. Gluster NFS server operates over the following port numbers: 38465, 38466, and 38467. - For more information, see . + For more information, see .
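Since firewalled ports are a common cause of the NFS timeouts described above, the ACCEPT rule for the Gluster NFS ports (38465-38467) can be previewed before applying it. The chain name follows the iptables examples used earlier in this guide; this dry-run only prints the rule:

```shell
#!/bin/sh
# Dry-run sketch: print an iptables ACCEPT rule covering the Gluster NFS
# ports 38465-38467 mentioned above. Review before applying for real.
nfs_port_rule() {
    printf 'iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport %s -j ACCEPT\n' 38465:38467
}

nfs_port_rule
```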
Application fails with "Invalid argument" or "Value too large for defined data type" error. These two errors generally happen for 32-bit nfs clients or applications that do not support 64-bit inode numbers or large files. -Use the following option from the CLI to make Gluster NFS return 32-bit inode numbers instead: +Use the following option from the CLI to make Gluster NFS server return 32-bit inode numbers instead: nfs.enable-ino32 <on|off> Applications that will benefit are those that were either: @@ -436,7 +435,7 @@ nfs.enable-ino32 <on|off> - This option is disabled by default so NFS returns 64-bit inode numbers by default. + This option is disabled by default. So Gluster NFS server returns 64-bit inode numbers by default. Applications which can be rebuilt from source are recommended to rebuild using the following flag with gcc: @@ -458,7 +457,7 @@ flag with gcc: # gluster volume statedump test-volume Volume statedump successful The statedump files are created on the brick servers in the /tmp directory or in the directory set using server.statedump-path volume option. The naming convention of the dump file is <brick-path>.<brick-pid>.dump. - The following are the sample contents of the statedump file. It indicates that GlusterFS has entered into a state where there is an entry lock (entrylk) and an inode lock (inodelk). Ensure that those are stale locks and no resources own them. + The following are the sample contents of the statedump file. It indicates that GlusterFS has entered into a state where there is an entry lock (entrylk) and an inode lock (inodelk). Ensure that those are stale locks and no resources own them before clearing. 
[xlator.features.locks.vol-locks.inode] path=/ mandatory=0 diff --git a/doc/admin-guide/en-US/gfs_introduction.xml b/doc/admin-guide/en-US/gfs_introduction.xml index 5fd88730556..64b1c077930 100644 --- a/doc/admin-guide/en-US/gfs_introduction.xml +++ b/doc/admin-guide/en-US/gfs_introduction.xml @@ -3,7 +3,7 @@ Introducing Gluster File System GlusterFS is an open source, clustered file system capable of scaling to several petabytes and handling thousands of clients. GlusterFS can be flexibly combined with commodity physical, virtual, and cloud resources to deliver highly available and performant enterprise storage at a fraction of the cost of traditional solutions. - GlusterFS clusters together storage building blocks over Infiniband RDMA and/or TCP/IP interconnect, aggregating disk and memory resources and managing data in a single global namespace. GlusterFS is based on a stackable user space design, delivering exceptional performance for diverse workloads. + GlusterFS clusters together storage building blocks over Infiniband RDMA and/or TCP/IP interconnect, aggregating disk and memory resources and managing data in a single global namespace. GlusterFS is based on a stackable user space design, delivering exceptional performance for diverse workloads.
Virtualized Cloud Environments @@ -17,7 +17,7 @@
GlusterFS is designed for today's high-performance, virtualized cloud environments. Unlike traditional data centers, cloud environments require multi-tenancy along with the ability to grow or shrink resources on demand. Enterprises can scale capacity, performance, and availability on demand, with no vendor lock-in, across on-premise, public cloud, and hybrid environments.
- GlusterFS is in production at thousands of enterprises spanning media, healthcare, government, education, web 2.0, and financial services. The following table lists the commercial offerings and its documentation location:
+ GlusterFS is in production at thousands of enterprises spanning media, healthcare, government, education, web 2.0, and financial services. The following table lists the commercial offerings and their documentation locations: