path: root/extras/hook-scripts/S56glusterd-geo-rep-create-post.sh
Commit message | Author | Age | Files | Lines
* feature/glusterfind: A tool to find incremental changes | Aravinda VK | 2015-03-18 | 1 | -1/+1
Documentation is available in patch: http://review.gluster.org/#/c/9800/

A tool which helps to get the list of modified files, or the list of all
files, in a GlusterFS volume using the changelog or the find command.

Usage
=====
glusterfind --help

Create:
-------
glusterfind create --help

The tool creates the status file $GLUSTERD_WORKDIR/SESSION/VOLUME/status
and records the current timestamp to initiate the session. This
timestamp is used as the start time for subsequent runs. Create also
generates an ssh key, distributes it to all peers, and enables
build.pgfid and changelog using the volume set command.

Pre:
----
glusterfind pre --help

This command generates the list of files modified after the session
creation time or after the last run. To get the list of all files/dirs
in the volume, run the pre command with the `--full` argument.

The tool gets all node details using gluster volume info and runs a node
agent for each brick on the respective nodes via ssh. Once these node
agents generate their output files, the tool copies them locally using
scp and merges them all to produce the final output file.

Post:
-----
glusterfind post --help

After consuming the list, this subcommand is called to update the
session time based on the pre command status file.

List:
-----
glusterfind list --help

To view all the sessions.

Delete:
-------
glusterfind delete --help

Deletes a session.

Known Issues
------------
1. Deleted files will not get listed, since we can't convert a GFID to a
   path if the file/dir is deleted.
2. Only the new name will get listed if a file is renamed.
3. All hardlinks will get listed.

Change-Id: I82991feb0aea85cb6ec035fddbf80a2b276e86b0
BUG: 1193893
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/9682
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
Reviewed-by: Kotresh HR <khiremat@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
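A minimal sketch of the session lifecycle this commit describes, using
hypothetical session and volume names (mysession, myvol) and an
illustrative output path:

    # Create a session: records the start timestamp, distributes ssh
    # keys, and enables build.pgfid and changelog on the volume.
    glusterfind create mysession myvol

    # List files modified since the session was created (or since the
    # last run); pass --full to list all files/dirs in the volume.
    glusterfind pre mysession myvol /tmp/outfile.txt

    # After consuming the list, advance the session time so the next
    # pre run reports only newer changes.
    glusterfind post mysession myvol

    # View all sessions, or delete one.
    glusterfind list
    glusterfind delete mysession myvol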
* geo-rep: Handle copying of common_secret.pem.pub to slave correctly. | Kotresh HR | 2015-01-21 | 1 | -4/+23
Current behaviour:
1. Geo-replication gsec_create creates the common_secret.pem.pub file,
   containing the public keys of all the nodes of the master cluster, in
   the location /var/lib/glusterd/.
2. Geo-replication create push-pem copies common_secret.pem.pub to the
   same location on all the slave nodes, with the same name.

Problem:
Wrong public keys might get copied onto slave nodes when multiple
geo-replication sessions are created simultaneously. E.g. a geo-rep
session is established from Node1 (vol1: master) to Node2 (vol2: slave),
and one more geo-rep session where Node2 (vol3) becomes master to
Node3 (vol4), as below:

Session1: Node1 (vol1) ---> Node2 (vol2)
Session2: Node2 (vol3) ---> Node3 (vol4)

If the steps followed to create both geo-replication sessions are as
follows, wrong public keys are copied onto Node3 from Node2:

1. gsec_create is done on Node1 (vol1) - Session1
2. gsec_create is done on Node2 (vol3) - Session2
3. create push-pem is done on Node1 - Session1. This overwrites the
   common_secret.pem.pub on Node2 created by gsec_create in step 2.
4. create push-pem on Node2 (vol3) copies the overwritten
   common_secret.pem.pub keys to Node3 - Session2.

Consequence:
Session2 fails to start with "Permission denied" because of the wrong
public keys.

Solution:
On geo-rep create push-pem, don't copy the common_secret.pem.pub file
with the same name onto all slave nodes. Prefix the master and slave
volume names to the filename.

NOTE: This changes the manual steps to be followed to set up non-root
geo-replication (mountbroker). To copy the ssh public keys, two extra
arguments need to be passed:

set_geo_rep_pem_keys.sh <mountbroker_user> <master vol name> \
    <slave vol name>

Path to set_geo_rep_pem_keys.sh:
Source installation: /usr/local/libexec/glusterfs/set_geo_rep_pem_keys.sh
RPM installation: /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh

Change-Id: If38cd4e6f58d674d5fe2d93da15803c73b660c33
BUG: 1183229
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/9460
Reviewed-by: Aravinda VK <avishwan@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
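A sketch of the revised manual steps for a non-root (mountbroker)
session after this change; geoaccount, mastervol, slavehost, and
slavevol are placeholder names:

    # On the master node, as the super user:
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol geoaccount@slavehost::slavevol create push-pem

    # On the slave node, as the super user; the master and slave volume
    # names select the right (now volume-prefixed) pem file:
    /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount mastervol slavevol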
* glusterd/geo-rep: Use getent passwd instead of $HOME | Avra Sengupta | 2014-05-20 | 1 | -2/+1
$HOME might not be set in the environment variables, as is the case
when these scripts are executed using the runner framework. Hence, use
getent passwd instead of $HOME.

Change-Id: I99f6bcd788d727be534b3040600d66c8dbb7ee92
BUG: 1099041
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7803
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Kotresh HR <khiremat@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
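A one-line sketch of the replacement pattern; user_name stands in for
whichever account the script is resolving:

    # $HOME may be unset under the runner framework, so look up the
    # home directory in the passwd database instead.
    home_dir=$(getent passwd "$user_name" | cut -d ':' -f 6)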
* glusterd/geo-rep: Allow gverify.sh and S56glusterd-geo-rep-create-post.sh | Avra Sengupta | 2014-05-14 | 1 | -7/+27
to operate for non-root privileged slave volume

Mount the slave volume on the local node to perform disk checks, in
order to allow gverify.sh to operate for a non-root privileged slave
volume.

Allow the hook script S56glusterd-geo-rep-create-post.sh to operate for
a non-root privileged slave volume.

Modified peer_add_secret_pub.in to accept a username as an argument and
add the pem keys to the user's home_dir/.ssh/authorized_keys.

Wrote set_geo_rep_pem_keys.sh, which accepts a username as an argument,
copies the pem keys from the user's home directory to
$GLUSTERD_WORKING_DIR/geo-replication/, then copies the keys to the
other nodes in the cluster and adds them to the respective authorized
keys. The script takes the user name as an argument and assumes that
the user is present on all the nodes in the cluster. It is not needed
for root.

To summarize:

For a privileged slave user, execute the following on the master node
as the super user:

gluster system:: execute gsec_create
gluster volume geo-replication <master_vol> [root@]<slave_ip>::<slave_vol> create push-pem

For a non-privileged slave user, execute the following on the master
node as the super user:

gluster system:: execute gsec_create
gluster volume geo-replication <master_vol> <slave_user>@<slave_ip>::<slave_vol> create push-pem

Then, on the slave node, execute the following as the super user:

/usr/local/libexec/glusterfs/set_geo_rep_pem_keys.sh <slave_user>

BUG: 1077452
Change-Id: I88020968aa5b13a2c2ab86b1d6661b60071f6f5e
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7744
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
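A rough sketch of how a hook like peer_add_secret_pub can append the
pushed keys for a named user; user_name and pub_file are placeholders,
and the real script may differ:

    user_name=$1
    home_dir=$(getent passwd "$user_name" | cut -d ':' -f 6)

    # Make sure the user's .ssh directory exists, then append the
    # pushed public keys to authorized_keys.
    mkdir -p "$home_dir/.ssh"
    cat "$pub_file" >> "$home_dir/.ssh/authorized_keys"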
* glusterd/gverify.sh: Stops session being created with invalid slave details | Venky Shankar | 2013-09-04 | 1 | -2/+3
create force will fail with a proper message if the IP is not
reachable, or if it is unable to fetch slave details.

Change-Id: I44a3ba777b37702ffd0e48e9cb46c51e293327d4
BUG: 988314
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5516
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
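An illustrative reachability check of the kind this commit implies;
slave_host is a placeholder and the actual gverify.sh logic may differ:

    # Fail early, with a clear message, if the slave host is unreachable.
    if ! ping -c 1 "$slave_host" > /dev/null 2>&1; then
        echo "$slave_host is not reachable."
        exit 1
    fi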
* glusterd/cli changes for distributed geo-rep | Avra Sengupta | 2013-07-26 | 1 | -0/+41
Commands:
gluster system:: execute gsec_create
gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
gluster volume geo-rep <master> <slave-url> start [force]
gluster volume geo-rep <master> <slave-url> stop [force]
gluster volume geo-rep <master> <slave-url> delete
gluster volume geo-rep <master> <slave-url> config
gluster volume geo-rep <master> <slave-url> status

The geo-replication is distributed: the session will be created, and
gsyncd will be spawned, on all relevant nodes instead of only one node.

geo-rep: Collecting status detail related data

Added a persistent store for saving information about TotalFilesSynced,
TotalSyncTime, TotalBytesSynced.

Changes in the status information in the socket:

Existing (ex): FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;
New (ex): FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;

Persistent details are stored in
/var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status

Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
BUG: 847839
Original Author: Avra Sengupta <asengupt@redhat.com>
Original Author: Venky Shankar <vshankar@redhat.com>
Original Author: Aravinda VK <avishwan@redhat.com>
Original Author: Amar Tumballi <amarts@redhat.com>
Original Author: Csaba Henk <csaba@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5132
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
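A small sketch of splitting one of these semicolon-delimited status
records back into key/value pairs; status_line holds the example record
from above:

    status_line='FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;'

    # Split on ';' and print each key = value pair, skipping the empty
    # field left by the trailing semicolon.
    IFS=';' read -ra fields <<< "$status_line"
    for field in "${fields[@]}"; do
        [ -n "$field" ] && echo "${field%%=*} = ${field#*=}"
    done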