Diffstat (limited to 'doc/admin-guide/en-US/admin_troubleshooting.xml')
-rw-r--r--  doc/admin-guide/en-US/admin_troubleshooting.xml | 49
1 file changed, 24 insertions(+), 25 deletions(-)
diff --git a/doc/admin-guide/en-US/admin_troubleshooting.xml b/doc/admin-guide/en-US/admin_troubleshooting.xml
index dff182c5f1e..1c6866d8ba7 100644
--- a/doc/admin-guide/en-US/admin_troubleshooting.xml
+++ b/doc/admin-guide/en-US/admin_troubleshooting.xml
@@ -106,7 +106,7 @@ location of the log file.
<section>
<title>Rotating Geo-replication Logs</title>
<para>Administrators can rotate the log file of a particular master-slave session, as needed.
-When you run geo-replication&apos;s <command> log-rotate</command> command, the log file
+When you run geo-replication&apos;s <command> log-rotate</command> command, the log file
is backed up with the current timestamp suffixed to the file
name and signal is sent to gsyncd to start logging to a new
log file.</para>
@@ -139,7 +139,7 @@ log rotate successful</programlisting>
<para><command># gluster volume geo-replication log-rotate</command>
</para>
<para>For example, to rotate the log file for all sessions:</para>
- <programlisting># gluster volume geo-replication log rotate
+ <programlisting># gluster volume geo-replication log-rotate
log rotate successful</programlisting>
</listitem>
</itemizedlist>
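The per-session form of this command takes the master volume and the slave location before the subcommand. A minimal sketch, assuming a master volume named Volume1 replicating to example.com:/data/remote_dir (both names are hypothetical):

gluster volume geo-replication Volume1 example.com:/data/remote_dir log-rotate
# expected output: log rotate successful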
@@ -172,18 +172,17 @@ you have installed the required version.
the following:
</para>
<para><errortext>2011-04-28 14:06:18.378859] E [syncdutils:131:log_raise_exception] &lt;top&gt;: FAIL: Traceback (most recent call last): File &quot;/usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py&quot;, line 152, in twraptf(*aa) File &quot;/usr/local/libexec/glusterfs/python/syncdaemon/repce.py&quot;, line 118, in listen rid, exc, res = recv(self.inf) File &quot;/usr/local/libexec/glusterfs/python/syncdaemon/repce.py&quot;, line 42, in recv return pickle.load(inf) EOFError </errortext></para>
- <para><emphasis role="bold">Solution</emphasis>: This error indicates that the RPC communication between the master gsyncd module and slave
-gsyncd module is broken and this can happen for various reasons. Check if it satisfies all the following
+ <para><emphasis role="bold">Solution</emphasis>: This error indicates that the RPC communication between the master geo-replication module and the slave
+geo-replication module is broken, which can happen for various reasons. Verify that the setup satisfies all of the following
pre-requisites:
</para>
<itemizedlist>
<listitem>
- <para>Password-less SSH is set up properly between the host and the remote machine.
+ <para>Password-less SSH is set up properly between the host where the geo-replication command is executed and the remote machine where the slave geo-replication process runs.
</para>
</listitem>
<listitem>
- <para>If FUSE is installed in the machine, because geo-replication module mounts the GlusterFS volume
-using FUSE to sync data.
+ <para>FUSE is installed on the machine where the geo-replication command is executed, because the geo-replication module mounts the GlusterFS volume using FUSE to sync data.
</para>
</listitem>
<listitem>
@@ -196,15 +195,15 @@ required permissions.
</para>
</listitem>
<listitem>
- <para>If GlusterFS 3.2 or higher is not installed in the default location (in Master) and has been prefixed to be
+ <para>If GlusterFS 3.3 or higher is not installed in the default location (in Master) and has been prefixed to be
installed in a custom location, configure the <command>gluster-command</command> for it to point to the exact
location.
</para>
</listitem>
<listitem>
- <para>If GlusterFS 3.2 or higher is not installed in the default location (in slave) and has been prefixed to be
+ <para>If GlusterFS 3.3 or higher is not installed in the default location (in slave) and has been prefixed to be
installed in a custom location, configure the <command>remote-gsyncd-command</command> for it to point to the
-exact place where gsyncd is located.
+exact place where the slave's geo-replication daemon (gsyncd) is located.
</para>
</listitem>
</itemizedlist>
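Each item in the list above can be checked from the shell. A rough sketch, assuming the master volume Volume1, the slave example.com:/data/remote_dir, and a /usr/local install prefix (all hypothetical); gluster-command and remote-gsyncd-command are the config keys named in the list:

# password-less SSH: this must not prompt for a password
ssh root@example.com uname -a

# FUSE must be available where the geo-replication command runs
modprobe fuse && lsmod | grep fuse

# non-default GlusterFS install on the master: point gluster-command at it
gluster volume geo-replication Volume1 example.com:/data/remote_dir config gluster-command /usr/local/sbin/gluster

# non-default install on the slave: point remote-gsyncd-command at the slave's gsyncd
gluster volume geo-replication Volume1 example.com:/data/remote_dir config remote-gsyncd-command /usr/local/libexec/glusterfs/gsyncd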
@@ -224,7 +223,7 @@ intermediate master.
</section>
<section>
<title>Troubleshooting POSIX ACLs </title>
- <para>This section describes the most common troubleshooting issues related to POSIX ACLs.
+ <para>This section describes the most common troubleshooting issues related to POSIX ACLs.
</para>
<section>
<title>setfacl command fails with “setfacl: &lt;file or directory name&gt;: Operation not supported” error </title>
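A common remedy for this error is to remount the volume with ACL support enabled; a sketch with hypothetical server and mount-point names:

mount -t glusterfs -o acl server1:/test-volume /mnt/gluster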
@@ -244,7 +243,7 @@ Storage.
</para>
<section id="sect-Administration_Guide-Troubleshooting-Test_Section_1">
<title>Time Sync</title>
- <para>Running MapReduce job may throw exceptions if the time is out-of-sync on the hosts in the cluster.
+ <para>Running a MapReduce job may throw exceptions if the clocks are out of sync on the hosts in the cluster.
</para>
<para><emphasis role="bold">Solution</emphasis>: Sync the time on all hosts using ntpd program.
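On the RHEL-era systems this guide targets, that amounts to something like the following (the NTP server name is a placeholder):

ntpdate pool.ntp.org    # one-shot correction against a reachable NTP server
service ntpd start      # keep the clocks synchronized from now on
chkconfig ntpd on       # start ntpd automatically at boot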
@@ -257,7 +256,7 @@ Storage.
</para>
<section>
<title>mount command on NFS client fails with “RPC Error: Program not registered” </title>
- <para>Start portmap or rpcbind service on the NFS server.
+ <para>Start the portmap or rpcbind service on the machine where the NFS server is running.
</para>
<para>This error is encountered when the server has not started correctly.
</para>
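On most distributions of that era the service is started through its init script; which of the two names exists depends on the distribution:

/etc/init.d/portmap start    # older distributions
/etc/init.d/rpcbind start    # newer distributions
rpcinfo -p                   # verify that the port mapper now answers and lists registered programs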
@@ -280,11 +279,11 @@ required:
This situation can be confirmed from the log file, if the following error lines exist:
</para>
<para><screen>[2010-05-26 23:40:49] E [rpc-socket.c:126:rpcsvc_socket_listen] rpc-socket: binding socket failed:Address already in use
-[2010-05-26 23:40:49] E [rpc-socket.c:129:rpcsvc_socket_listen] rpc-socket: Port is already in use
-[2010-05-26 23:40:49] E [rpcsvc.c:2636:rpcsvc_stage_program_register] rpc-service: could not create listening connection
-[2010-05-26 23:40:49] E [rpcsvc.c:2675:rpcsvc_program_register] rpc-service: stage registration of program failed
-[2010-05-26 23:40:49] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
-[2010-05-26 23:40:49] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
+[2010-05-26 23:40:49] E [rpc-socket.c:129:rpcsvc_socket_listen] rpc-socket: Port is already in use
+[2010-05-26 23:40:49] E [rpcsvc.c:2636:rpcsvc_stage_program_register] rpc-service: could not create listening connection
+[2010-05-26 23:40:49] E [rpcsvc.c:2675:rpcsvc_program_register] rpc-service: stage registration of program failed
+[2010-05-26 23:40:49] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
+[2010-05-26 23:40:49] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
[2010-05-26 23:40:49] C [nfs.c:531:notify] nfs: Failed to initialize protocols</screen></para>
<para>To resolve this error one of the Gluster NFS servers will have to be shutdown. At this time,
Gluster NFS server does not support running multiple NFS servers on the same machine.
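To find out what is holding the registrations and to free them for Gluster NFS, something along these lines (service names vary by distribution):

rpcinfo -p          # shows which program currently owns the NFS and MOUNT registrations
service nfs stop    # stop the kernel NFS server so Gluster NFS can register instead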
@@ -296,7 +295,7 @@ Gluster NFS server does not support running multiple NFS servers on the same mac
</para>
<para><errortext>mount.nfs: rpc.statd is not running but is required for remote locking. mount.nfs: Either use &apos;-o nolock&apos; to keep locks local, or start statd. </errortext></para>
<para><errortext>Start rpc.statd </errortext></para>
- <para>For NFS clients to mount the NFS server, rpc.statd service must be running on the clients. </para>
+ <para>For NFS clients to mount the NFS server, the rpc.statd service must be running on the client machine. </para>
<para>Start
rpc.statd service by running the following command:
</para>
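The command itself is the daemon's name; on RHEL-style systems the distribution's lock service can be used instead (a sketch):

rpc.statd              # start the status daemon directly
service nfslock start  # or start it through the distribution's lock service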
@@ -317,12 +316,12 @@ required:
<para><command>$ /etc/init.d/rpcbind start</command></para>
</section>
<section>
- <title>NFS server glusterfsd starts but initialization fails with “nfsrpc- service: portmap registration of program failed” error message in the log. </title>
+ <title>NFS server (glusterfsd) starts but initialization fails with the “nfsrpc- service: portmap registration of program failed” error message in the log. </title>
<para>NFS start-up can succeed but the initialization of the NFS service can still fail preventing clients
from accessing the mount points. Such a situation can be confirmed from the following error
messages in the log file:
</para>
- <para><screen>[2010-05-26 23:33:47] E [rpcsvc.c:2598:rpcsvc_program_register_portmap] rpc-service: Could notregister with portmap
+ <para><screen>[2010-05-26 23:33:47] E [rpcsvc.c:2598:rpcsvc_program_register_portmap] rpc-service: Could notregister with portmap
[2010-05-26 23:33:47] E [rpcsvc.c:2682:rpcsvc_program_register] rpc-service: portmap registration of program failed
[2010-05-26 23:33:47] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
[2010-05-26 23:33:47] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
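A typical recovery sequence, sketched here, is to make sure the port mapper is running and then let the Gluster NFS server register again, for example by restarting glusterd:

/etc/init.d/rpcbind start    # the port mapper must be up before registration can succeed
rpcinfo -p                   # confirm that no other NFS implementation already holds the registrations
service glusterd restart     # glusterd respawns the Gluster NFS server, which re-registers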
@@ -415,14 +414,14 @@ Gluster NFS server. The timeout can be resolved by forcing the NFS client to use
server requests/replies. Gluster NFS server operates over the following port numbers: 38465,
38466, and 38467.
</para>
- <para>For more information, see <xref linkend="sect-Administration_Guide-Test_Chapter-GlusterFS_Client-Native-RPM"/>.
+ <para>For more information, see <xref linkend="sect-Administration_Guide-GlusterFS_Client-Native-RPM"/>.
</para>
</section>
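Both halves of that advice can be expressed directly; a sketch with hypothetical server and mount-point names:

mount -t nfs -o vers=3,proto=tcp server1:/test-volume /mnt/nfs    # force the NFS client onto TCP
iptables -A INPUT -p tcp --dport 38465:38467 -j ACCEPT            # allow the Gluster NFS ports through the firewall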
<section>
<title>Application fails with &quot;Invalid argument&quot; or &quot;Value too large for defined data type&quot; error. </title>
<para>These two errors generally happen for 32-bit nfs clients or applications that do not support 64-bit
inode numbers or large files.
-Use the following option from the CLI to make Gluster NFS return 32-bit inode numbers instead:
+Use the following option from the CLI to make the Gluster NFS server return 32-bit inode numbers instead:
nfs.enable-ino32 &lt;on|off&gt;
</para>
<para>Applications that will benefit are those that were either:
@@ -436,7 +435,7 @@ nfs.enable-ino32 &lt;on|off&gt;
</para>
</listitem>
</itemizedlist>
- <para>This option is disabled by default so NFS returns 64-bit inode numbers by default.
+ <para>This option is disabled by default, so the Gluster NFS server returns 64-bit inode numbers unless it is turned on.
</para>
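Setting the option is an ordinary volume-set operation, shown here with the test-volume name used elsewhere in this chapter:

gluster volume set test-volume nfs.enable-ino32 on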
<para>Applications which can be rebuilt from source are recommended to rebuild using the following
flag with gcc:</para>
@@ -458,7 +457,7 @@ flag with gcc:</para>
<para><programlisting># gluster volume statedump test-volume
Volume statedump successful</programlisting></para>
<para>The statedump files are created on the brick servers in the<filename> /tmp</filename> directory or in the directory set using <command>server.statedump-path</command> volume option. The naming convention of the dump file is <filename>&lt;brick-path&gt;.&lt;brick-pid&gt;.dump</filename>.</para>
- <para>The following are the sample contents of the statedump file. It indicates that GlusterFS has entered into a state where there is an entry lock (entrylk) and an inode lock (inodelk). Ensure that those are stale locks and no resources own them. </para>
+ <para>The following are sample contents of the statedump file. They indicate that GlusterFS has entered a state where there is an entry lock (entrylk) and an inode lock (inodelk). Ensure that these are stale locks and that no resources own them before clearing them. </para>
<para><screen>[xlator.features.locks.vol-locks.inode]
path=/
mandatory=0
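For completeness, the dump location can be chosen before triggering the dump; a sketch using the same test-volume and an assumed target directory:

gluster volume set test-volume server.statedump-path /var/run/gluster/   # optional: pick where dump files are written
gluster volume statedump test-volume
ls /var/run/gluster/*.dump            # one file per brick: <brick-path>.<brick-pid>.dump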