From 2e767af20738c63cbb285f9cba7fdbc1ca34f2ca Mon Sep 17 00:00:00 2001
From: M S Vishwanath Bhat
Date: Thu, 10 Apr 2014 20:35:37 +0530
Subject: Adding doc to upgrade geo-rep from older version to new geo-rep

Change-Id: Ibfebca2bf92e63b906c9eb336b4a975c534a0bf2
BUG: 1086796
Signed-off-by: M S Vishwanath Bhat
Reviewed-on: http://review.gluster.org/7436
Reviewed-by: Niels de Vos
Reviewed-by: Lalatendu Mohanty
Reviewed-by: Humble Devassy Chirammal
Tested-by: Gluster Build System
---
 doc/upgrade/geo-rep-upgrade-steps.md | 42 ++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)
 create mode 100644 doc/upgrade/geo-rep-upgrade-steps.md

diff --git a/doc/upgrade/geo-rep-upgrade-steps.md b/doc/upgrade/geo-rep-upgrade-steps.md
new file mode 100644
index 00000000000..427daa9a7f4
--- /dev/null
+++ b/doc/upgrade/geo-rep-upgrade-steps.md
@@ -0,0 +1,42 @@
# Steps to upgrade from a previous version of geo-replication to the new distributed geo-replication
-------------------------------------------------------------------------------
These are the steps to upgrade your existing geo-replication setup to the new distributed geo-replication in glusterfs-3.5. The new version leverages all the nodes in your master volume and provides better performance.

#### Note:
 - Since the new version of geo-rep is very different from the older one, the upgrade has to be done offline.
 - The new version supports syncing only between two gluster volumes via ssh+gluster.
 - This doc deals with upgrading geo-rep; upgrading the volumes themselves is not covered in detail here.

### Below are the steps to upgrade
---------------------------------------

- Stop the geo-replication session in the older version (< 3.5) using the command below:
```sh
gluster volume geo-replication <master_vol> <slave_host>::<slave_vol> stop
```
- Now upgrade the master and slave volumes separately. The steps to upgrade a volume are fairly simple: unmount the volume from the clients, stop the volume and glusterd, upgrade to the new version using yum upgrade or by installing the new rpms, then restart glusterd and start the gluster volume. If you are using the quota feature, please follow the corresponding quota upgrade steps as well. (A rough example sequence is sketched in the appendix at the end of this document.)

- Since the new geo-replication requires the gfids of files on the master and slave volumes to be the same, generate a file containing the gfids of all the files in the master:

```sh
cd /usr/share/glusterfs/scripts/ ;
bash generate-gfid-file.sh localhost:<master_vol> $PWD/get-gfid.sh /tmp/master_gfid_file.txt ;
scp /tmp/master_gfid_file.txt root@<slave_host>:/tmp
```
- Now go to the slave host and apply the gfids to the slave volume:

```sh
cd /usr/share/glusterfs/scripts/
bash slave-upgrade.sh localhost:<slave_vol> /tmp/master_gfid_file.txt $PWD/gsync-sync-gfid
```
This will ask for the password of all the nodes in the slave cluster; provide them when prompted.
- Also note that this will restart your slave gluster volume (stop and start).

- Now create and start the geo-rep session between the master and the slave. For instructions on creating a new geo-rep session, please refer to the distributed-geo-rep admin guide.

```sh
gluster volume geo-replication <master_vol> <slave_host>::<slave_vol> create push-pem force
gluster volume geo-replication <master_vol> <slave_host>::<slave_vol> start
```

- Now your session is upgraded to use distributed-geo-rep.
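- Once the session is started, you can verify it from the master side. Below is a minimal check, assuming the same `<master_vol>` and `<slave_host>::<slave_vol>` placeholders used in the commands above:

```sh
# Show the state of each master node participating in the geo-rep session
gluster volume geo-replication <master_vol> <slave_host>::<slave_vol> status
```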
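#### Appendix: example volume upgrade sequence
As referenced in the volume upgrade step above, the sketch below is only an illustrative sequence, not part of the official procedure. It assumes an rpm/yum based install, a service-managed glusterd, a client mount point of /mnt/glusterfs, and the `<master_vol>` placeholder used earlier; follow the upgrade guide for your distribution and release for the exact steps, including any quota-related steps.

```sh
# On each client: unmount the gluster volume (mount point is an assumption)
umount /mnt/glusterfs

# On one server: stop the volume
gluster volume stop <master_vol>

# On every server: stop glusterd, upgrade the packages, start glusterd again
service glusterd stop
yum upgrade 'glusterfs*'
service glusterd start

# On one server: start the volume again
gluster volume start <master_vol>
```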