path: root/geo-replication/src/Makefile.am
* build/packaging: Debian and Ubuntu don't have /usr/libexec (Kaleb S. KEITHLEY, 2017-03-13, 1 file, -4/+4)

  GLUSTERFS_LIBEXECDIR is effectively hard-coded to /usr/libexec/glusterfs in
  configure(.ac). Debian-based distributions don't have a /usr/libexec/
  directory.

  This issue is partially mitigated by the use of $libexecdir in some of the
  Makefile.am files, but even so the incorrectly defined GLUSTERFS_LIBEXECDIR
  results in various things such as gsyncd, glusterfind, eventsd, etc., trying
  to invoke other scripts and programs from a location that doesn't exist. And
  once we correctly define GLUSTERFS_LIBEXECDIR, we might as well use it
  appropriately.

  Change-Id: If5219cadc51ae316f7ba2e2831d739235c77902d
  BUG: 1430841
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: https://review.gluster.org/16880
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Milind Changire <mchangir@redhat.com>
  Reviewed-by: Joe Julian <me@joejulian.name>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
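  As an illustration of the approach (a hypothetical configure.ac fragment, not
  the actual hunk from this change), the libexec directory can be derived from
  the configurable $libexecdir rather than hard-coded:

      # derive the gluster helper directory from ${libexecdir} so that
      # Debian/Ubuntu layouts (which lack /usr/libexec) still resolve correctly
      GLUSTERFS_LIBEXECDIR='${libexecdir}/glusterfs'
      AC_SUBST([GLUSTERFS_LIBEXECDIR])

  Makefile.am files can then install and invoke gsyncd, glusterfind, eventsd,
  etc. relative to $(GLUSTERFS_LIBEXECDIR), and the same path is valid on every
  distribution.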
* build: out-of-tree builds generate files in the wrong directory (Kaleb S KEITHLEY, 2016-09-18, 1 file, -5/+3)

  And minor cleanup of a few of the Makefile.am files while we're at it.

  Rewrite the make rules to do what xdrgen does. Now we can get rid of xdrgen.

  Note 1. netbsd6's sed doesn't do -i. Why are we still running smoke tests on
  netbsd6 and not netbsd7? We barely support netbsd7 as it is.

  Note 2. Why is/was libgfxdr.so (.../rpc/xdr/src/...) linked with libglusterfs?
  A cut-and-paste mistake? It has no references to symbols in libglusterfs.

  Note 3. "/#ifndef\|#define\|#endif/" (note the '\'s) is a _basic_ regex that
  matches the same lines as the _extended_ regex "/#(ifndef|define|endif)/". To
  match the extended regex, sed needs to be run with -r on Linux and with -E on
  *BSD; however, NetBSD's and FreeBSD's sed helpfully also provide -r for
  compatibility. Using a basic regex avoids having to use a kludge in order to
  run sed with the correct option on OS X.

  Note 4. Not copying the bit of xdrgen that inserts copyright/license
  boilerplate. AFAIK it's silly to pretend that machine-generated files like
  these can be copyrighted or need license boilerplate. The XDR source files
  have their own copyright and license, and their copyrights are bound to be
  more up to date than old boilerplate inserted by a script. From what I've
  seen of other Open Source projects -- e.g. gcc and its C parser files
  generated by yacc and lex -- IIRC they don't bother to add copyright/license
  boilerplate to their generated files.

  It appears that it's a long-standing feature of make (SysV, BSD, gnu) for
  out-of-tree builds to helpfully pretend that the source files it can find in
  the VPATH "exist" as if they are in the $cwd. rpcgen doesn't work well in
  this situation and generates files with "bad" #include directives. E.g. if
  you `rpcgen ../../../../$srcdir/rpc/xdr/src/glusterfs3-xdr.x`, you get an
  #include directive in the generated .c file like this:

      ...
      #include "../../../../$srcdir/rpc/xdr/src/glusterfs3-xdr.h"
      ...

  which (obviously) results in compile errors on out-of-tree builds because the
  (generated) header file doesn't exist at that location. Compared to
  `rpcgen ./glusterfs3-xdr.x`, where you get:

      ...
      #include "glusterfs3-xdr.h"
      ...

  which is what we need. We have to resort to some Stupid Make Tricks, like the
  addition of various .PHONY targets, to work around the VPATH "help".

  Warning: When doing an in-tree build, -I$(top_builddir)/rpc/xdr/... looks
  exactly like -I$(top_srcdir)/rpc/xdr/... Don't be fooled though, and don't
  delete the -I$(top_builddir)/rpc/xdr/... bits.

  Change-Id: Iba6ab96b2d0a17c5a7e9f92233993b318858b62e
  BUG: 1330604
  Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/14085
  Tested-by: Niels de Vos <ndevos@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
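  The include-path problem described above can be reproduced by hand (the
  paths below are illustrative, not taken verbatim from the build):

      # running rpcgen on the VPATH-resolved path embeds that path in the output
      rpcgen -c -o glusterfs3-xdr.c ../../rpc/xdr/src/glusterfs3-xdr.x
      grep '#include' glusterfs3-xdr.c   # -> #include "../../rpc/xdr/src/glusterfs3-xdr.h"

      # copying the .x file into the build directory first keeps the include local
      cp ../../rpc/xdr/src/glusterfs3-xdr.x . && rpcgen -c -o glusterfs3-xdr.c ./glusterfs3-xdr.x
      grep '#include' glusterfs3-xdr.c   # -> #include "glusterfs3-xdr.h"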
* geo-rep: Alternate command to generate SSH Keys (Aravinda VK, 2016-08-31, 1 file, -2/+10)

  `gluster system:: execute gsec_create` is used to generate SSH keys on all
  master nodes and collect the public keys on the node where the command is
  initiated. But this tool will not provide details if a peer node is down and
  unable to generate keys.

  A new command is introduced to create SSH keys on all peer nodes. Usage:

      gluster-georep-sshkey generate
      gluster-georep-sshkey generate --no-prefix

  It generates two SSH keys (one for gsyncd access and the other for tar) on
  all peer nodes and collects the public keys to the local node where it is
  initiated. It adds a `command=` prefix to common_secret.pem.pub unless the
  `--no-prefix` argument is set.

  Shows status as below:

      +-----------+-------------+---------------+
      |   NODE    | NODE STATUS | KEYGEN STATUS |
      +-----------+-------------+---------------+
      | fvm2      | UP          | OK            |
      | localhost | UP          | OK            |
      +-----------+-------------+---------------+

  BUG: 1356508
  Change-Id: Ib202811f41f9986694f07d9eedba31db6ed4d18f
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/14732
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
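  For illustration, a forced-command prefix on a collected public key looks
  roughly like this (the gsyncd path and key material are placeholders; the
  exact prefix written by the tool may differ):

      command="/usr/libexec/glusterfs/gsyncd" ssh-rsa AAAAB3Nza... root@master-node

  The prefix uses OpenSSH's forced-command mechanism to restrict what the key
  holder may execute over SSH, which is why it is added by default and only
  skipped when --no-prefix is given.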
* geo-rep: Simplify non-root user (mountbroker) setup (Aravinda VK, 2016-08-30, 1 file, -2/+12)

  Setting up non-root geo-replication is troublesome: a lot of steps are
  involved, such as user setup, directory creation, setting up proper
  permissions, editing the glusterd.vol file, etc.

  This patch introduces the `gluster-mountbroker` command. With this tool the
  non-root user setup steps are (run the following commands on any one slave
  node):

      gluster-mountbroker setup <MOUNT ROOT> <GROUP>

  For example:

      gluster-mountbroker setup /var/mountbroker-root geogroup

  Add a user using:

      gluster-mountbroker add <VOLUME> <USER>

  For example:

      gluster-mountbroker add slavevol geoaccount

  Remove a user or volume using:

      gluster-mountbroker remove [--volume <VOLUME>] [--user <USER>]

  For example:

      gluster-mountbroker remove --volume slavevol --user geoaccount
      gluster-mountbroker remove --user geoaccount
      gluster-mountbroker remove --volume slavevol

  Check the status of the setup using:

      gluster-mountbroker status

  Once the changes are complete, restart glusterd on all slave nodes and follow
  the geo-replication instructions.

  Change-Id: Ied3fa009df6a886b24ddd86d39283fcfcff68c82
  BUG: 1343333
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/14544
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Saravanakumar Arumugam <sarumuga@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
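  For context, this is roughly the manual preparation that `gluster-mountbroker
  setup` and `add` replace (an illustrative sketch using the example names
  above; the exact permissions and glusterd.vol edits are assumptions, not
  taken from this change):

      # run as root on each slave node
      groupadd geogroup
      useradd -m -G geogroup geoaccount
      mkdir -p /var/mountbroker-root
      chmod 0711 /var/mountbroker-root    # assumed mode for the mountbroker root
      # ...then add the mountbroker options to glusterd.vol and restart glusterd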
* build: fix compiling on older distributions (Niels de Vos, 2015-06-03, 1 file, -1/+1)

  data-tiering is disabled on RHEL-5 because it depends on a too new SQLite
  version. This change also prevents installing some of the files that are used
  by geo-replication, which is likewise not available on RHEL-5;
  geo-replication depends on a too recent version of Python.

  Due to an older version of OpenSSL, some of the newer functions can not be
  used, so a fallback to previous functions is done. Unfortunately RHEL-5 does
  not seem to have TLSv1.2 support, so only older versions can be used.

  Change-Id: I672264a673f5432358d2e83b17e2a34efd9fd913
  BUG: 1222317
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/10803
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* feature/glusterfind: A tool to find incremental changes (Aravinda VK, 2015-03-18, 1 file, -1/+1)

  Documentation is available in patch: http://review.gluster.org/#/c/9800/

  A tool which helps to get the list of modified files, or the list of all
  files, in a GlusterFS volume using the changelog or the find command.

  Usage
  =====
      glusterfind --help

  Create:
  -------
      glusterfind create --help

  The tool creates the status file $GLUSTERD_WORKDIR/SESSION/VOLUME/status and
  records the current timestamp to initiate the session. This timestamp will be
  used as the start time for subsequent runs. As part of create it also
  generates an ssh key and distributes it to all peers, and enables build.pgfid
  and changelog using the volume set command.

  Pre:
  ----
      glusterfind pre --help

  This command is used to generate the list of files modified after the session
  creation time or after the last run. To get the list of all files/dirs in the
  volume, run the pre command with the `--full` argument. The tool gets all
  node details using gluster volume info and runs a node agent for each brick
  on the respective nodes via ssh. Once these node agents generate their output
  files, the tool copies them to the local node using scp and merges them to
  generate the final output file.

  Post:
  -----
      glusterfind post --help

  After consuming the list, this sub-command is called to update the session
  time based on the pre command status file.

  List:
  -----
      glusterfind list --help

  To view all the sessions.

  Delete:
  -------
      glusterfind delete --help

  Delete a session.

  Known Issues
  ------------
  1. Deleted files will not get listed, since we can't convert a GFID to a path
     if the file/dir is deleted.
  2. Only the new name will get listed if a file is renamed.
  3. All hardlinks will get listed.

  Change-Id: I82991feb0aea85cb6ec035fddbf80a2b276e86b0
  BUG: 1193893
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/9682
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
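  A typical run ties the sub-commands together like this (the session name,
  volume name, and output file below are only examples):

      glusterfind create nightly gvol1                 # start a session, record the timestamp
      glusterfind pre nightly gvol1 /tmp/changes.txt   # list files changed since the last run
      # ... consume /tmp/changes.txt, e.g. feed it to a backup tool ...
      glusterfind post nightly gvol1                   # advance the session time on success
      glusterfind list                                 # view known sessions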
* geo-rep: mountbroker user management (Aravinda VK, 2015-03-17, 1 file, -1/+2)

  Non-root geo-replication setup is now simplified. This patch provides a CLI
  for mountbroker user and options management.

  To set options:

      gluster system:: execute mountbroker opt <KEY> <VALUE>

  For example:

      gluster system:: execute mountbroker opt mountbroker-root /var/mountbroker-root
      gluster system:: execute mountbroker opt geo-replication-log-group geogroup
      gluster system:: execute mountbroker opt rpc-auth-allow-insecure on

  To remove an option:

      gluster system:: execute mountbroker optdel <KEY>

  For example:

      gluster system:: execute mountbroker optdel geo-replication-log-group

  To add/edit a user:

      gluster system:: execute mountbroker user <USERNAME> <VOLUMES>

  For example:

      gluster system:: execute mountbroker user geoaccount slavevol1,slavevol2

  To remove a user:

      gluster system:: execute mountbroker userdel <USERNAME>

  For example:

      gluster system:: execute mountbroker userdel geoaccount

  For info:

      gluster system:: execute mountbroker info
      gluster system:: execute mountbroker -j info

  For JSON output add -j after mountbroker, for example:

      gluster system:: execute mountbroker -j user geoaccount slavevol1,slavevol2

  PS: Each peer prints its own JSON output; an aggregator is required on the
  consumer side.

  BUG: 1136312
  Change-Id: Ie52210c0bcc91ac2ffd3ba58988222ffca62b47f
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/9398
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: darshan n <dnarayan@redhat.com>
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
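  The option keys above end up as options in glusterd's volfile. An
  illustrative glusterd.vol fragment (the surrounding layout, the
  working-directory line, and the per-user mountbroker-geo-replication.<user>
  line are assumptions; the remaining option names are the ones listed above):

      volume management
          type mgmt/glusterd
          option working-directory /var/lib/glusterd
          option mountbroker-root /var/mountbroker-root
          option mountbroker-geo-replication.geoaccount slavevol1,slavevol2
          option geo-replication-log-group geogroup
          option rpc-auth-allow-insecure on
      end-volume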
* build: make GLUSTERD_WORKDIR rely on localstatedir (Harshavardhana, 2014-08-07, 1 file, -3/+1)

  - Break away from the previously hard-coded '/var/lib/glusterd'; instead rely
    on the 'configure' value from 'localstatedir'
  - Provide 's/lib/db' as the default working directory for the gluster
    management daemon on BSD and Darwin based installations
  - loff_t is really off_t on Darwin
  - Fix the warnings generated by clang on FreeBSD/Darwin
  - Now 'tests/*' use GLUSTERD_WORKDIR, a common variable for all platforms
  - Define a proper environment for running tests: set the correct PATH and
    LD_LIBRARY_PATH when running tests, so that the desired version of
    glusterfs is used regardless of where it is installed.
    (Thanks to manu@netbsd.org for this additional work)

  Change-Id: I2339a0d9275de5939ccad3e52b535598064a35e7
  BUG: 1111774
  Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
  Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
  Reviewed-on: http://review.gluster.org/8246
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
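  A hypothetical configure.ac fragment showing the idea (not the actual hunk
  from this change):

      # derive the management daemon's working directory from ${localstatedir}
      # instead of hard-coding /var/lib/glusterd; BSD/Darwin builds would swap
      # 'lib' for 'db' here
      GLUSTERD_WORKDIR="${localstatedir}/lib/glusterd"
      AC_SUBST([GLUSTERD_WORKDIR])

  Test scripts can then reference $GLUSTERD_WORKDIR instead of a fixed path, so
  the same tests run unchanged on Linux, BSD, and Darwin.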
* glusterd/geo-rep: Allow gverify.sh and S56glusterd-geo-rep-create-post.sh to operate for a non-root privileged slave volume (Avra Sengupta, 2014-05-14, 1 file, -2/+2)

  Mounting the slave volume on the local node to perform disk checks, in order
  to allow gverify.sh to operate for a non-root privileged slave volume.

  Allowing the hook script S56glusterd-geo-rep-create-post.sh to operate for a
  non-root privileged slave volume.

  Modified peer_add_secret_pub.in to accept a username as argument and add the
  pem keys to that user's ~/.ssh/authorized_keys.

  Wrote set_geo_rep_pem_keys.sh, which accepts a username as argument, copies
  the pem keys from the user's home directory to
  $GLUSTERD_WORKING_DIR/geo-replication/, then copies the keys to the other
  nodes in the cluster and adds them to the respective authorized keys. The
  script takes the user name as argument and assumes that the user is present
  on all the nodes in the cluster. It is not needed for root.

  To summarize:

  For a privileged slave user, execute the following on the master node as
  super user:

      gluster system:: execute gsec_create
      gluster volume geo-replication <master_vol> [root@]<slave_ip>::<slave_vol> create push_pem

  For a non-privileged slave user, execute the following on the master node as
  super user:

      gluster system:: execute gsec_create
      gluster volume geo-replication <master_vol> <slave_user>@<slave_ip>::<slave_vol> create push_pem

  then on the slave node execute the following as super user:

      /usr/local/libexec/glusterfs/set_geo_rep_pem_keys.sh <slave_user>

  BUG: 1077452
  Change-Id: I88020968aa5b13a2c2ab86b1d6661b60071f6f5e
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/7744
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Venky Shankar <vshankar@redhat.com>
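  A minimal illustration of the key placement described above (the user name
  and paths are assumptions, not taken from this change):

      # on the slave node, as root: append the collected master public keys
      # to the unprivileged user's authorized_keys
      cat common_secret.pem.pub >> /home/geoaccount/.ssh/authorized_keys
      chown geoaccount: /home/geoaccount/.ssh/authorized_keys
      chmod 600 /home/geoaccount/.ssh/authorized_keys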
* glusterd/cli changes for distributed geo-rep (Avra Sengupta, 2013-07-26, 1 file, -3/+3)

  Commands:

      gluster system:: execute gsec_create
      gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
      gluster volume geo-rep <master> <slave-url> start [force]
      gluster volume geo-rep <master> <slave-url> stop [force]
      gluster volume geo-rep <master> <slave-url> delete
      gluster volume geo-rep <master> <slave-url> config
      gluster volume geo-rep <master> <slave-url> status

  The geo-replication is distributed: the session will be created, and gsyncd
  will be spawned, on all relevant nodes instead of only one node.

  geo-rep: Collecting status-detail related data

  Added a persistent store for saving information about TotalFilesSynced,
  TotalSyncTime, and TotalBytesSynced.

  Changes in the status information in the socket:

  Existing (example):

      FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;

  New (example):

      FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;

  Persistent details are stored in
  /var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status

  Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
  BUG: 847839
  Original Author: Avra Sengupta <asengupt@redhat.com>
  Original Author: Venky Shankar <vshankar@redhat.com>
  Original Author: Aravinda VK <avishwan@redhat.com>
  Original Author: Amar Tumballi <amarts@redhat.com>
  Original Author: Csaba Henk <csaba@redhat.com>
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/5132
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Vijay Bellur <vbellur@redhat.com>
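  A concrete session lifecycle using the commands above (the volume and host
  names are only examples):

      gluster system:: execute gsec_create
      gluster volume geo-rep mastervol slavehost::slavevol create push-pem
      gluster volume geo-rep mastervol slavehost::slavevol start
      gluster volume geo-rep mastervol slavehost::slavevol status
      gluster volume geo-rep mastervol slavehost::slavevol stop
      gluster volume geo-rep mastervol slavehost::slavevol delete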
* gsyncd: distribute the crawling load (Avra Sengupta, 2013-07-26, 1 file, -0/+7)

  * Also consume the changelog for change detection
  * Status fixes
  * Use the new libgfchangelog done API
  * Process (and sync) one changelog at a time

  Change-Id: I24891615bb762e0741b1819ddfdef8802326cb16
  BUG: 847839
  Original Author: Csaba Henk <csaba@redhat.com>
  Original Author: Aravinda VK <avishwan@redhat.com>
  Original Author: Venky Shankar <vshankar@redhat.com>
  Original Author: Amar Tumballi <amarts@redhat.com>
  Original Author: Avra Sengupta <asengupt@redhat.com>
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/5131
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Vijay Bellur <vbellur@redhat.com>
* move 'xlators/marker/utils/' to 'geo-replication/' directory (Avra Sengupta, 2013-07-22, 1 file, -0/+26)

  Change-Id: Ibd0faefecc15b6713eda28bc96794ae58aff45aa
  BUG: 847839
  Original Author: Amar Tumballi <amarts@redhat.com>
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/5133
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>