author    Aravinda VK <avishwan@redhat.com>    2015-03-26 15:45:55 +0530
committer Humble Devassy Chirammal <humble.devassy@gmail.com>    2015-04-10 09:24:41 +0000
commit    36aef0411431f62d2412f71557730543edf05726 (patch)
tree      a66907446c49a675651273494cf3b9e5f13ae68d /doc/admin-guide/en-US/markdown
parent    a0750e3a921f06c2fb84a7ea1556679ec0f1ce09 (diff)
doc/geo-rep: Mountbroker User Management
BUG: 1136312
Change-Id: I1c8374de6e7ec93e401ec1c224752bfa5538adee
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/10007
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
Tested-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
Diffstat (limited to 'doc/admin-guide/en-US/markdown')
-rw-r--r-- doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md | 197
1 file changed, 150 insertions(+), 47 deletions(-)
diff --git a/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md b/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md
index 1be5a8f98b9..bc111255b4a 100644
--- a/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md
+++ b/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md
@@ -20,7 +20,7 @@ This article is targeted towards users/admins who want to try new geo-replicatio
- Unlike previous versions, the slave **must** be a gluster volume; it cannot be a directory. Both the master and slave volumes should have been created and started before creating the geo-rep session, for example:
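
A minimal sketch of this precondition (host, brick, and volume names are placeholders):

```sh
# On the master cluster:
gluster volume create mastervol master1:/bricks/brick1 master2:/bricks/brick1
gluster volume start mastervol
# On the slave cluster:
gluster volume create slavevol slave1:/bricks/brick1 slave2:/bricks/brick1
gluster volume start slavevol
```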
#### Creating secret pem pub file
-- Execute the below command from the node where you setup the password-less ssh to slave. This will create the secret pem pub file which would have information of RSA key of all the nodes in the master volume. And when geo-rep create command is executed, glusterd uses this file to establish a geo-rep specific ssh connections.
+- Execute the below command from the node where you set up the password-less SSH to the slave. This creates the secret pem pub file, which contains the RSA public keys of all the nodes in the master volume. When the geo-rep create command is executed, glusterd uses this file to establish the geo-rep specific SSH connections.
```sh
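# Collects the RSA public keys of all the master nodes into a common secret
# pem pub file (typically under /var/lib/glusterd/geo-replication/), which the
# geo-rep "create push-pem" command then distributes to the slave.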
gluster system:: execute gsec_create
```
@@ -44,6 +44,75 @@ gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> crea
```
In this case, the master node's RSA key distribution to the slave nodes does not happen, and the above mentioned slave verification is not performed; these two things have to be taken care of externally.
+### Creating a non-root geo-replication session
+
+`mountbroker` is a new service of glusterd. This service allows an
+unprivileged process to own a GlusterFS mount by registering a label
+(and DSL (domain-specific language) options) with glusterd through a
+glusterd volfile. Using the CLI, you can send a mount request to glusterd
+and receive an alias (symlink) of the mounted volume.
+
+On request from the agent, the unprivileged slave agents use the
+mountbroker service of glusterd to set up an auxiliary gluster mount for
+the agent in a special environment, which ensures that the agent is only
+allowed access with special parameters that provide administrative
+level access to the particular volume.
+
+**To set up an auxiliary gluster mount for the agent**:
+
+1. On all slave nodes, create a new group. For example, `geogroup`.
+
+2. On all slave nodes, create an unprivileged account. For example, `geoaccount`. Make it a member of `geogroup`.
+
+3. On all slave nodes, create a new directory owned by root with permissions `0711`. For example, create the mountbroker-root directory `/var/mountbroker-root`. A sketch of steps 1-3 follows.
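+
+    A minimal sketch of steps 1-3, to be run on every slave node (the group,
+    user, and directory names are just the examples given above):
+
+    ```sh
+    groupadd geogroup                      # step 1: create the group
+    useradd -m -G geogroup geoaccount      # step 2: unprivileged user in geogroup
+    mkdir /var/mountbroker-root            # step 3: mountbroker root directory
+    chown root:root /var/mountbroker-root
+    chmod 0711 /var/mountbroker-root
+    ```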
+
+4. On any one of the slave nodes, run the following commands to add options to the glusterd vol file (`/etc/glusterfs/glusterd.vol`
+   in RPM installations and `/usr/local/etc/glusterfs/glusterd.vol` in source installations).
+
+    ```sh
+    gluster system:: execute mountbroker opt mountbroker-root /var/mountbroker-root
+    gluster system:: execute mountbroker opt geo-replication-log-group geogroup
+    gluster system:: execute mountbroker opt rpc-auth-allow-insecure on
+    ```
+
+5. On any one of the slave nodes, add the mountbroker user to the glusterd vol file using,
+
+ ```sh
+ gluster system:: execute mountbroker user geoaccount slavevol
+ ```
+
+where `slavevol` is the slave volume name.
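+
+After steps 4 and 5, the management section of `glusterd.vol` would contain
+entries roughly like the following (an illustrative sketch; the option names
+correspond to the keys set above):
+
+   ```sh
+   volume management
+       type mgmt/glusterd
+       option mountbroker-root /var/mountbroker-root
+       option mountbroker-geo-replication.geoaccount slavevol
+       option geo-replication-log-group geogroup
+       option rpc-auth-allow-insecure on
+   end-volume
+   ```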
+
+If you host multiple slave volumes on the slave, set up a mountbroker user for each of them and add the corresponding options to the volfile using
+
+ ```sh
+ gluster system:: execute mountbroker user geoaccount2 slavevol2
+ gluster system:: execute mountbroker user geoaccount3 slavevol3
+ ```
+
+To add multiple volumes per mountbroker user,
+
+ ```sh
+ gluster system:: execute mountbroker user geoaccount1 slavevol11,slavevol12,slavevol13
+ gluster system:: execute mountbroker user geoaccount2 slavevol21,slavevol22
+ gluster system:: execute mountbroker user geoaccount3 slavevol31
+ ```
+
+6. Restart the `glusterd` service on all the slave nodes.
+
+7. Set up password-less SSH from one of the master nodes to the user on one of the slave nodes. For example, to `geoaccount`, as sketched below.
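+
+    A minimal sketch (run on the master node as root; `slavenode1` is a
+    placeholder hostname):
+
+    ```sh
+    ssh-keygen                          # generate a key pair if none exists yet
+    ssh-copy-id geoaccount@slavenode1   # copy the public key to the mountbroker user
+    ```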
+
+8. Create a geo-replication relationship between the master and the slave
+   for the mountbroker user by running the following command on the master node:
+
+ ```sh
+ gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> create push-pem [force]
+ ```
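+
+    For example, with the account created above (host and volume names are
+    hypothetical):
+
+    ```sh
+    gluster volume geo-replication mastervol geoaccount@slavenode1::slavevol create push-pem
+    ```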
+
+9. On the slave node which was used to create the relationship, run `/usr/libexec/glusterfs/set_geo_rep_pem_keys.sh` as root, with the user name, master volume name, and slave volume name as the arguments.
+
+ ```sh
+ /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh <mountbroker_user> <master_volume> <slave_volume>
+ ```
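+
+    For example, with the hypothetical names used above:
+
+    ```sh
+    /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount mastervol slavevol
+    ```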
+
### Configuring meta volume
A gluster meta volume needs to be configured with geo-replication to
better handle rename and other consistency issues in geo-replication during
@@ -51,42 +120,55 @@ brick/node down scenarios when master volume is configured with EC(Erasure Code)
Following are the steps to configure the meta volume.
Create a 3-way replicated meta volume in the master cluster, with all three bricks from different nodes, as follows.
-```sh
-gluster volume create <meta_vol> replica 3 <host1>:<brick_path> <host2>:<brick_path> <host3>:<brick_path>
-```
+
+ ```sh
+ gluster volume create <meta_vol> replica 3 <host1>:<brick_path> <host2>:<brick_path> <host3>:<brick_path>
+ ```
Start the meta volume as follows.
-```sh
-gluster volume start <meta_vol>
-```
+
+ ```sh
+ gluster volume start <meta_vol>
+ ```
Configure meta volume with geo-replication session as follows.
-```sh
-gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config meta_volume <meta_vol>
-```
+
+ ```sh
+ gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config meta_volume <meta_vol>
+ # If Mountbroker Setup,
+ gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> config meta_volume <meta_vol>
+ ```
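+
+   For example, with hypothetical host, brick, and volume names:
+
+   ```sh
+   gluster volume create geo_meta replica 3 node1:/bricks/meta node2:/bricks/meta node3:/bricks/meta
+   gluster volume start geo_meta
+   gluster volume geo-replication mastervol slavehost::slavevol config meta_volume geo_meta
+   ```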
#### Starting a geo-rep session
There is no change in this command from previous versions to this version.
-```sh
-gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> start
-```
+
+ ```sh
+ gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> start
+ # If Mountbroker Setup,
+ gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> start
+ ```
+
+This command actually starts the session, meaning the gsyncd monitor process will be started, which in turn spawns gsyncd worker processes whenever required. This also turns on the changelog xlator (if not in the ON state already), which starts recording all the changes on each of the glusterfs bricks. If the master is empty during geo-rep start, the change detection mechanism will be changelog; else it'll be xsync (the changes are identified by crawling through the filesystem). Later, when the initial data is synced to the slave, the change detection mechanism will be set to changelog.
#### Status of geo-replication
Gluster now has variants of the status command.
-```sh
-gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> status
-```
+ ```sh
+ gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> status
+ # If Mountbroker Setup,
+ gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> status
+ ```
This displays the status of the session from each brick of the master to each brick of the slave node.
If you want a more detailed status, then run `status detail`.
-```sh
-gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> status detail
-```
+ ```sh
+ gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> status detail
+ # If Mountbroker Setup,
+ gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> status detail
+ ```
This command displays extra information like total files synced, files that need to be synced, pending deletes, etc.
@@ -94,18 +176,23 @@ This command displays extra information like, total files synced, files that nee
This command stops all geo-rep related processes, i.e., the gsyncd monitor and worker processes. Note that the changelog will **not** be turned off with this command.
-```sh
-gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> stop [force]
-```
+ ```sh
+ gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> stop [force]
+ # If Mountbroker Setup,
+ gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> stop [force]
+ ```
+
+The force option is to be used when one of the nodes (or glusterd on one of the nodes) is down. Once stopped, the session can be restarted any time. Note that upon restarting the session, the change detection mechanism falls back to xsync mode. This happens even though the changelog keeps generating journals while the geo-rep session is stopped.
#### Deleting geo-replication session
Now you can delete the glusterfs geo-rep session. This will delete all the config data associated with the geo-rep session.
-```sh
-gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> delete
-```
+ ```sh
+ gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> delete
+ # If Mountbroker Setup,
+ gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> delete
+ ```
This deletes all the gsync conf files on each of the nodes, and returns failure if any of the nodes is down. And unlike geo-rep stop, there is no 'force' option with this.
@@ -113,34 +200,50 @@ This deletes all the gsync conf files in each of the nodes. This returns failure
There are some configuration values which can be changed using the CLI. You can see all the current config values with the following command.
-```sh
-gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config
-```
+ ```sh
+ gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config
+ # If Mountbroker Setup,
+ gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> config
+ ```
But you can also check just a single one of them, like log-file or change-detector.
-```sh
-gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config log-file
-```
-```sh
-gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config change-detector
-```
-```sh
-gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config working-dir
-```
+ ```sh
+ gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config log-file
+ # If Mountbroker Setup,
+ gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> config log-file
+
+ gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config change-detector
+ # If Mountbroker Setup,
+ gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> config change-detector
+
+ gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config working-dir
+ # If Mountbroker Setup,
+ gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> config working-dir
+ ```
+
To set a new value, just provide it with the config command. Note that not all the config values are allowed to change; some cannot be modified.
-```sh
-gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config change-detector xsync
-```
+ ```sh
+ gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config change-detector xsync
+ # If Mountbroker Setup,
+ gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> config change-detector xsync
+ ```
+
Make sure you provide a proper value for the config option. If you have a data set with a large number of small files, you can use tar+ssh as the syncing method. Note that, if the geo-rep session is running, this restarts gsyncd.
-```sh
-gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config use-tarssh true
-```
+ ```sh
+ gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config use-tarssh true
+ # If Mountbroker Setup,
+ gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> config use-tarssh true
+ ```
+
Resetting these values to their defaults is also simple.
-```sh
-gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config \!use-tarssh
-```
-That makes the config key (tar-ssh in this case) to fall back to it’s default value.
+ ```sh
+ gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config \!use-tarssh
+ # If Mountbroker Setup,
+ gluster volume geo-replication <master_volume> <mountbroker_user>@<slave_host>::<slave_volume> config \!use-tarssh
+ ```
+
+That makes the config key (`use-tarssh` in this case) fall back to its default value.