diff options
authorM S Vishwanath Bhat <>2014-04-10 20:35:37 +0530
committerVijay Bellur <>2014-04-16 11:28:54 -0700
commit2e767af20738c63cbb285f9cba7fdbc1ca34f2ca (patch)
parent0ec8f91f1c43ef162ea3a5c9f68becf323f0b095 (diff)
Adding doc to upgrade geo-rep from older version to new geo-rep (v3.5.0)
Change-Id: Ibfebca2bf92e63b906c9eb336b4a975c534a0bf2 BUG: 1086796 Signed-off-by: M S Vishwanath Bhat <> Reviewed-on: Reviewed-by: Niels de Vos <> Reviewed-by: Lalatendu Mohanty <> Reviewed-by: Humble Devassy Chirammal <> Tested-by: Gluster Build System <>
1 files changed, 42 insertions, 0 deletions
diff --git a/doc/upgrade/ b/doc/upgrade/
new file mode 100644
index 00000000000..427daa9a7f4
--- /dev/null
+++ b/doc/upgrade/
@@ -0,0 +1,42 @@
# Steps to upgrade from an older version of geo-replication to the new distributed geo-replication
Here are the steps to upgrade your existing geo-replication setup to the new distributed geo-replication in glusterfs-3.5. The new version leverages all the nodes in your master volume and provides better performance.
#### Note:
 - Since the new version of geo-rep is very different from the older one, the upgrade has to be done offline.
 - The new version supports syncing only between two gluster volumes via ssh+gluster.
 - This doc deals with upgrading geo-rep, so upgrading the volumes themselves is not covered in detail here.
### Steps to upgrade
- Stop the geo-replication session in the older version (< 3.5) using the command below:

        gluster volume geo-replication <master_vol> <slave_host>::<slave_vol> stop
- Now upgrade the master and slave volumes separately. The steps to upgrade a volume are simple: unmount the volume from the clients, stop the volume and glusterd, upgrade to the new version using yum upgrade or by installing the new rpms, then restart glusterd and start the volume again. If you are using the quota feature, please follow the quota-specific upgrade steps as well.
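The offline volume-upgrade sequence above can be sketched as a shell function. This is only a template to adapt, not a definitive procedure: the mount point, init-script commands, and package names are assumptions that may differ on your systems.

```shell
# Hypothetical sketch of the offline volume upgrade described above.
# The mount point, service commands and package names are assumptions;
# adapt them to your distribution and environment before use.
upgrade_volume() {
    vol=$1
    umount "/mnt/$vol"            # on each client: unmount the volume
    gluster volume stop "$vol"    # stop the volume (once, from any server node)
    service glusterd stop         # on every server node
    # upgrade to the new version, or install the new rpms directly
    yum -y upgrade glusterfs glusterfs-server glusterfs-geo-replication
    service glusterd start        # on every server node
    gluster volume start "$vol"
}
```

Note that the steps do not all run on one machine: the `umount` belongs on each client, while the `service` and `yum` steps must be repeated on every server node.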
- Since the new geo-replication requires the gfids of files on the master and slave volumes to be identical, generate a file containing the gfids of all the files in the master volume:

        cd /usr/share/glusterfs/scripts/
        bash localhost:<master_vol> $PWD/ /tmp/master_gfid_file.txt
        scp /tmp/master_gfid_file.txt root@<slave_host>:/tmp
- Now go to the slave host and apply the gfids to the slave volume:

        cd /usr/share/glusterfs/scripts/
        bash localhost:<slave_vol> /tmp/master_gfid_file.txt $PWD/gsync-sync-gfid

    This will ask for the password of each node in the slave cluster; please provide them when asked.
- Note that this will restart your slave gluster volume (stop and start).
- Now create and start the geo-rep session between the master and slave volumes. For instructions on creating a new geo-rep session, please refer to the distributed-geo-rep admin guide.

        gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> create push-pem force
        gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> start
- Your session is now upgraded to use distributed-geo-rep.
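As a final sanity check (an addition here, not part of the original steps), the `status` subcommand of the geo-replication CLI can confirm the new session is running. It is wrapped in a small function below so the placeholder arguments stay explicit:

```shell
# Hypothetical post-upgrade check: confirm the distributed-geo-rep session
# is up. The three arguments mirror the placeholders used throughout this doc.
geo_rep_status() {
    master_volume=$1
    slave_host=$2
    slave_volume=$3
    gluster volume geo-replication "$master_volume" \
        "$slave_host"::"$slave_volume" status
}
# usage: geo_rep_status <master_volume> <slave_host> <slave_volume>
```

In the new distributed geo-replication, status is reported per master-volume node, so expect one row per node rather than the single row the older version printed.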