path: root/geo-replication/syncdaemon/gsyncd.py
Commit log (newest first): commit message (author, date, files changed, lines -/+)
* geo-rep: Fix getting subvol number (Kotresh HR, 2016-01-03, 1 file, -1/+1)

  Fix getting the subvol number when the volume type is tier. For tier
  volumes the subvol number was calculated incorrectly, so a few of the
  workers never became ACTIVE and files were not replicated from the
  corresponding bricks. This patch fixes that.

  BUG: 1293309
  Change-Id: I318de346657d330a2394507514bdff61feb92d27
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/12994
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-on: http://review.gluster.org/13059
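
  To illustrate why a flat calculation breaks here, a hedged sketch (a
  hypothetical helper, not the actual gsyncd code; it assumes hot tier
  bricks are listed before cold tier bricks in the volume info): the hot
  and cold tiers can have different replica counts, so a plain
  brick_index // replica_count no longer yields the right subvolume.

      # Hypothetical sketch: per-tier subvol numbering for a tiered volume.
      def subvol_number(brick_idx, hot_brick_count, hot_replica, cold_replica):
          if brick_idx < hot_brick_count:
              # Brick belongs to the hot tier.
              return brick_idx // hot_replica + 1
          # Cold tier subvolumes are numbered after the hot tier's.
          hot_subvols = hot_brick_count // hot_replica
          return hot_subvols + (brick_idx - hot_brick_count) // cold_replica + 1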
* geo-rep: use cold tier bricks for namespace operations (Saravanakumar Arumugam, 2015-12-07, 1 file, -1/+1)

  Problem: symlinks are not getting synced to the slave in a
  tiering-based volume.

  Solution: symlinks are now created directly in the cold tier bricks
  (in the backend). Earlier, the cold tier was avoided for namespace
  operations and only the hot tier was used while processing changelogs.
  The cold tier is the HASH subvolume in a tiering volume, so namespace
  operations are now carried out only in the cold tier subvolume, and
  the hot tier subvolume is avoided to prevent races.

  Earlier, XSYNC was used (and changelog history avoided) during the
  initial sync in order to avoid a race while processing the history
  changelog in the hot tier. This is no longer required, as there is no
  race from the hot tier. Both live and history changelog ENTRY
  operations from the hot tier are also avoided, to prevent any race
  with the cold tier.

  Change-Id: Ia8fbb7ae037f5b6cb683f36c0df5c3fc2894636e
  BUG: 1288027
  Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
  Reviewed-on: http://review.gluster.org/12844
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  (cherry picked from commit 93f31189ce8f6e2980a39b02568ed17088e0a667)
  Reviewed-on: http://review.gluster.org/12891
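
  The filtering this commit describes can be pictured with a small
  sketch (names are assumptions, not the actual gsyncd code): ENTRY
  operations are replayed only from the cold tier (the HASH subvolume),
  while DATA and META operations still flow from every brick.

      # Hypothetical sketch: drop namespace ops recorded on hot tier bricks.
      def should_replay(op_type, brick_is_hot_tier):
          if op_type == "ENTRY" and brick_is_hot_tier:
              return False  # the cold tier records the same operation
          return True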
* geo-rep: Avoid cold tier bricks during ENTRY operation (Saravanakumar Arumugam, 2015-12-07, 1 file, -0/+1)

  This is part of a series of patches which aims to fix geo-replication
  in a tiering volume.

  Problem: consider a file placed in the volume before a hot tier is
  attached. During any operation on the file, a lookup creates a linkto
  file in the hot tier. Any namespace operation carried out on the file
  is then recorded in both the cold and the hot tier, leaving room for
  races when both changelogs are replayed.

  Solution: replay (namespace-related) operations only in the hot tier.
  Why?

  a. If the file is placed directly in the hot tier, all fops are
     recorded in the hot tier.
  b. If the file is already present in the cold tier and any fop is
     carried out, a linkto file is created in the hot tier. Operations
     like UNLINK and RENAME are then captured in the hot tier (by means
     of the linkto file).

  This way, both tiers' operations are available in the hot tier itself.
  Once the file is demoted to the cold tier, any namespace operation
  carried out there can be skipped, since the same is recorded directly
  in the hot tier.

  How?

  1. Check whether the brick is a cold tier brick and skip the ENTRY
     operation.
  2. Also, if it is a cold tier brick, use XSync (as used during the
     initial run). This gathers all cold tier brick changes via a file
     system crawl and avoids races with the hot tier brick (which could
     happen if historychangelog were used on a cold tier brick).

  Dependent patches:
  1. http://review.gluster.org/12239
  2. http://review.gluster.org/12326

  Change-Id: I7692b1dbb8813a7e253451bca02f8f09a5782dde
  BUG: 1275173
  Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
  Reviewed-on: http://review.gluster.org/12355
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  (cherry picked from commit 6188b5fcebc56b3d8af1956beeec9988f3e8f268)
  Reviewed-on: http://review.gluster.org/12429
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
* geo-rep: Fix Geo-rep logging to log datetime in GMT/UTC (Aravinda VK, 2015-12-02, 1 file, -0/+1)

  Geo-rep was logging in local time while all other Gluster logs are in
  GMT/UTC, which made it very difficult to correlate Geo-rep logs with
  other Gluster logs.

  BUG: 1284737
  Change-Id: Ieae8bda7e4788e587cf4595e21e0e772c210cfbb
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/12583
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  (cherry picked from commit 41e900199d7f369862b21b739dd11602d0d7c48d)
  Reviewed-on: http://review.gluster.org/12723
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
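
  In Python's logging module this kind of switch is a one-liner; a
  minimal, self-contained illustration (not the actual gsyncd change):

      # Make logging timestamps use GMT/UTC instead of local time.
      import logging
      import time

      logging.Formatter.converter = time.gmtime
      logging.basicConfig(
          format="[%(asctime)s] %(levelname)s - %(message)s",
          level=logging.INFO)
      logging.info("timestamps are now in UTC")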
* geo-rep: New Config option for ssh_port (Aravinda VK, 2015-11-24, 1 file, -0/+1)

  If a port other than 22 was used for SSH, Geo-replication failed to
  establish the SSH connection. The port could be set via
  config:ssh_command and config:ssh_command_tar, but the user then has
  to remember the complete ssh command, with parameters, just to add or
  modify the port.

  This patch adds a new config option for ssh_port:

      gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> \
          config ssh_port 52022

  Change-Id: I7753a09485f0b1f49d2b2a80b962c720817c96f4
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  BUG: 1283060
  Reviewed-on: http://review.gluster.org/12444
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Saravanakumar Arumugam <sarumuga@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  (cherry picked from commit 7d35eb5926869ed084295600502a85ce13be506f)
  Reviewed-on: http://review.gluster.org/12607
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
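
  How such an option typically folds into the ssh invocation, as a
  hedged sketch (the helper and defaults are assumptions, not gsyncd's
  actual code):

      # Hypothetical helper: append ssh's -p flag for non-default ports.
      def build_ssh_command(base_command, ssh_port=22):
          cmd = base_command.split()
          if ssh_port != 22:
              cmd += ["-p", str(ssh_port)]
          return cmd

      # e.g. build_ssh_command("ssh -oPasswordAuthentication=no", 52022)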
* geo-rep: Update geo-rep status, if monitor process is killed (Saravanakumar Arumugam, 2015-11-01, 1 file, -1/+2)

  Problem: when the monitor process itself gets killed, the geo-rep
  session still shows as active. The status command just picks up the
  content of the status file, and the monitor process is the one which
  updates that file. When the monitor process itself is killed, there is
  no way to update the status file, so the status command ends up
  showing the last status written to it.

  Solution: while producing the status output, check whether the monitor
  process is running. If it is not, report the status as STOPPED.

  Change-Id: I86a7ac1746dd8f27eef93658e992ef16f6068d9d
  BUG: 1276060
  Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
  Reviewed-on: http://review.gluster.org/11873
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Milind Changire <mchangir@redhat.com>
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
  (cherry picked from commit 4d4c7d5dc54850dcf916083b2b1398d9bfe2bfe6)
  Reviewed-on: http://review.gluster.org/12448
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
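
  The usual liveness probe for this pattern is signal 0, which checks a
  pid without touching the process; a minimal sketch (the pid-file path
  and names are assumptions, not the actual gsyncd code):

      import os

      def monitor_alive(pid_file):
          # Signal 0 probes the pid; OSError means no such process
          # (or no permission, which we treat as not our monitor).
          try:
              with open(pid_file) as f:
                  pid = int(f.read().strip())
              os.kill(pid, 0)
              return True
          except (IOError, OSError, ValueError):
              return False

      status = "Active"
      if not monitor_alive("/var/lib/glusterd/geo-replication/monitor.pid"):
          status = "Stopped"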
* geo-rep: Status Enhancements (Aravinda VK, 2015-05-06, 1 file, -10/+15)

  Discussion in gluster-devel:
  http://www.gluster.org/pipermail/gluster-devel/2015-April/044301.html

  MASTER NODE - Master Volume Node
  MASTER VOL - Master Volume name
  MASTER BRICK - Master Volume Brick
  SLAVE USER - Slave User to which Geo-rep session is established
  SLAVE - <SLAVE_NODE>::<SLAVE_VOL> used in Geo-rep Create command
  SLAVE NODE - Slave Node to which Master worker is connected
  STATUS - Worker Status (Created, Initializing, Active, Passive, Faulty, Paused, Stopped)
  CRAWL STATUS - Crawl type (Hybrid Crawl, History Crawl, Changelog Crawl)
  LAST_SYNCED - Last Synced Time (Local Time in CLI output and UTC in XML output)
  ENTRY - Number of Entry operations pending (resets on worker restart)
  DATA - Number of Data operations pending (resets on worker restart)
  META - Number of Meta operations pending (resets on worker restart)
  FAILURES - Number of Failures
  CHECKPOINT TIME - Checkpoint set Time (Local Time in CLI output and UTC in XML output)
  CHECKPOINT COMPLETED - Yes/No or N/A
  CHECKPOINT COMPLETION TIME - Checkpoint Completed Time (Local Time in CLI output and UTC in XML output)

  XML output (element skeleton):

      <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
      <cliOutput>
        <geoRep>
          <volume>
            <name/>
            <sessions>
              <session>
                <session_slave/>
                <pair>
                  <master_node/>
                  <master_brick/>
                  <slave_user/>
                  <slave/>
                  <slave_node/>
                  <status/>
                  <crawl_status/>
                  <entry/>
                  <data/>
                  <meta/>
                  <failures/>
                  <checkpoint_completed/>
                  <master_node_uuid/>
                  <last_synced/>
                  <checkpoint_time/>
                  <checkpoint_completion_time/>
                </pair>
              </session>
            </sessions>
          </volume>
        </geoRep>
      </cliOutput>

  BUG: 1218586
  Change-Id: I944a6c3c67f1e6d6baf9670b474233bec8f61ea3
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/10121
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Reviewed-on: http://review.gluster.org/10574
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* geo-rep: Adhering to the common storage for geo-rep (Kotresh HR, 2015-05-04, 1 file, -2/+3)

  Make geo-rep use the common storage shared by nfs, snapshot and
  geo-rep. The meta volume should be named gluster_shared_storage and
  should be mounted at /var/run/gluster/shared_storage/. Geo-rep creates
  a directory called 'geo-rep' in the meta volume, and all the lock
  files are created inside it.

  BUG: 1217939
  Change-Id: I1d88798376d68340e2b2eff018c7e4f0121a608a
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/10196
  Reviewed-on: http://review.gluster.org/10503
  Tested-by: NetBSD Build System
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
* geo-rep: Log Rsync performance (Aravinda VK, 2015-04-02, 1 file, -0/+2)

  Introduce a configurable option to log rsync performance:

      gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> \
          config log-rsync-performance true

  The default value is False. Example log:

      [2015-03-31 16:48:34.572022] I [resource(/bricks/b1):857:rsync] SSH:
      rsync performance: Number of files: 2 (reg: 1, dir: 1), Number of
      regular files transferred: 1, Total file size: 178 bytes, Total
      transferred file size: 178 bytes, Literal data: 178 bytes, Matched
      data: 0 bytes, Total bytes sent: 294, Total bytes received: 32,
      sent 294 bytes received 32 bytes 652.00 bytes/sec

  Change-Id: If11467e29e6ac502fa114bd5742a8434b7084f98
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  BUG: 764827
  Reviewed-on: http://review.gluster.org/10070
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* geo-rep: Add support for acls (Kotresh HR, 2015-03-30, 1 file, -0/+1)

  This patch adds support for ACLs. When geo-rep sees SETXATTR in the
  changelog, it adds the file to the data queue; rsync/tar+ssh takes
  care of syncing the ACLs, so user-set ACLs are synced to the slave.
  This requires "system.posix_acl_access" to go through the fuse layer
  when the client-pid equals GF_CLIENT_PID_GSYNCD.

  A new config option, sync-acls, is introduced. It can be set via
  geo-rep config (the default is true):

      gluster volume geo-replication <VOLUME> <SLAVEHOST>::<SLAVEVOL> \
          config sync-acls false

  Change-Id: I7eb3523fa72b8fed830efc98138891244e830d65
  BUG: 1187021
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/10001
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Venky Shankar <vshankar@redhat.com>
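
  The changelog handling that both this patch and the sync-xattrs patch
  below describe boils down to one branch; a hedged sketch (names are
  assumptions, not the actual gsyncd code):

      # Hypothetical sketch: a SETXATTR record just queues the file for a
      # data sync; rsync (-A/-X) or tar+ssh carries the ACLs and xattrs.
      def process_changelog_record(op, gfid, data_queue, sync_acls=True):
          if op == "SETXATTR" and sync_acls:
              data_queue.add(gfid)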
* feature/geo-rep: Active Passive Switching logic flock (Kotresh HR, 2015-03-15, 1 file, -0/+4)

  CURRENT DESIGN AND ITS LIMITATIONS:
  Geo-replication syncs changes across geography using changelogs
  captured by the changelog translator, which sits on the server side
  just above the posix translator. Hence, in a distributed replicated
  setup, both replica pairs collect changelogs w.r.t. their bricks.
  Geo-replication syncs the changes using only one brick among the
  replica pair at a time, calling it "ACTIVE" and the non-syncing brick
  "PASSIVE".

  Consider a distributed replicated setup where NODE-1 has brick b1 and
  its replicated brick b1r is on NODE-2:

      NODE-1    NODE-2
      b1        b1r

  At the beginning, geo-replication chooses to sync changes from
  NODE-1:b1, and NODE-2:b1r is "PASSIVE". The logic depends on the
  virtual getxattr 'trusted.glusterfs.node-uuid', which always returns
  the first up subvolume, i.e. NODE-1. When NODE-1 goes down, the xattr
  returns NODE-2 and that node is made ACTIVE. But when NODE-1 comes
  back, the xattr returns NODE-1 and it is made ACTIVE again. So for a
  brief interval, if NODE-2 has not finished processing the changelog,
  both NODE-2 and NODE-1 are ACTIVE, causing the rename race mentioned
  in the bug.

  SOLUTION:
  1. Have a shared replicated storage, a glusterfs management volume
     specific to geo-replication.
  2. Geo-rep creates a file per replica set on the management volume.
  3. An fcntl lock on that file is used for synchronization between
     geo-rep workers belonging to the same replica set.
  4. If no management volume is configured, geo-replication falls back
     to the previous logic of using the first up subvolume.

  Each worker tries to lock the file on the shared storage; whoever
  wins becomes ACTIVE. This solves the problem, but there is an issue
  when the shared replicated storage goes down (i.e., all replicas go
  down): the lock state is lost, so AFR needs to rebuild the lock state
  after the brick comes back up.

  NOTE:
  This patch brings in the prerequisite step of setting up a management
  volume for geo-replication during creation:

  1. Create a mgmt-vol for geo-replication and start it. The management
     volume should be part of the master cluster; a three-way
     replicated volume with each brick on a different node is
     recommended for availability.
  2. Create the geo-rep session.
  3. Configure the created mgmt-vol with the geo-replication session:

         gluster vol geo-rep <mastervol> slavenode::<slavevol> \
             config meta_volume <meta-vol-name>

  4. Start the geo-rep session.

  Backward Compatibility:
  If no management volume is configured, geo-rep falls back to the
  previous logic of using the node-uuid virtual xattr, but that is not
  recommended.

  Change-Id: I7319d2289516f534b69edd00c9d0db5a3725661a
  BUG: 1196632
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/9759
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
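
  The synchronization primitive here is an ordinary advisory lock; a
  minimal sketch (the path and names are assumptions, not the actual
  gsyncd code):

      import errno
      import fcntl
      import os

      def try_become_active(lock_path):
          # One lock file per replica set on the shared meta volume.
          fd = os.open(lock_path, os.O_CREAT | os.O_RDWR)
          try:
              fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
              return True       # lock won: this worker goes ACTIVE
          except (IOError, OSError) as e:
              if e.errno in (errno.EAGAIN, errno.EACCES):
                  os.close(fd)
                  return False  # a replica peer holds it: stay PASSIVE
              raise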
* geo-rep: Add support for xattrs (Aravinda VK, 2015-03-02, 1 file, -0/+1)

  This patch adds support for xattrs. When geo-rep sees SETXATTR in the
  changelog, it adds the file to the data queue; rsync/tar+ssh takes
  care of syncing the xattrs, so user-set xattrs are synced to the
  slave.

  A new config option, sync-xattrs, is introduced. It can be set via
  geo-rep config (the default is true):

      gluster volume geo-replication <VOLUME> <SLAVEHOST>::<SLAVEVOL> \
          config sync-xattrs false

  Change-Id: I70626d854a0d616469dd54d61e5ef155ed8b67d8
  BUG: 1196690
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/9499
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-by: Saravanakumar Arumugam <sarumuga@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* geo-rep: Archive Changelogs and avoid generating empty XSync changelogs (Aravinda VK, 2015-02-19, 1 file, -0/+2)

  With this patch:
  - Hybrid Crawl no longer generates empty changelogs.
  - Changelogs are archived once processed (Hybrid (XSync), History,
    and Changelog Crawl).
  - Passive workers clean up their processing directory.

  BUG: 1169331
  Change-Id: I1383ffaed261cdf50da91b14260b4d43177657d1
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/9453
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Venky Shankar <vshankar@redhat.com>
* geo-rep: Making replica failover check interval configurable (Aravinda VK, 2014-06-11, 1 file, -0/+2)

  The replica failover check interval was hardcoded to 60 sec. This
  option is now configurable, with a default of 1 sec. To change the
  value:

      gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> \
          config replica_failover_interval 15

  Change-Id: Iada1b80d510452dcfedebd8a21bebd62394b0597
  BUG: 1066410
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/8003
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: Venky Shankar <vshankar@redhat.com>
* gsyncd / geo-rep: FSH recommended log locations (Venky Shankar, 2014-06-10, 1 file, -1/+8)

  Upgrading "working_dir" on the fly is still a bit unclean (though it
  works), as config upgrade does not yet support expanding "old" values
  using configuration variables.

  Change-Id: I44ed65c281f2e0ce3b6b467addc5c1c88ac674e7
  BUG: 1077516
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Signed-off-by: Kotresh H R <khiremat@redhat.com>
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Signed-off-by: Ajeet Jha <ajha@redhat.com>
  Reviewed-on: http://review.gluster.org/7375
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* feature/geo-rep: Fix to retain pause state of gsyncd on restart. (Kotresh H R, 2014-06-05, 1 file, -0/+1)

  A new gsyncd option, '--pause-on-start', is introduced. When a node
  reboots and the session status is paused, gsyncd is started with this
  option. After gsyncd spawns the worker and agent, the worker sends
  SIGSTOP to the negative pid of the monitor (i.e., its process group)
  to enter pause mode.

  Change-Id: I5aad82c9a9fc8c243f384940b77d25e26e520d6d
  BUG: 1101410
  Signed-off-by: Kotresh H R <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/7885
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Venky Shankar <vshankar@redhat.com>
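
  The process-group signalling used here (and by the pause/resume patch
  below) is plain POSIX; a minimal sketch (monitor_pid is assumed to be
  the process-group leader):

      import os
      import signal

      def pause_session(monitor_pid):
          # A negative pid targets the whole process group, so workers,
          # ssh and rsync under the monitor all stop together.
          os.kill(-monitor_pid, signal.SIGSTOP)

      def resume_session(monitor_pid):
          os.kill(-monitor_pid, signal.SIGCONT)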
* geo-rep: Pause and Resume feature for geo-replication (Aravinda VK, 2014-05-09, 1 file, -0/+13)

  Changelog consumption/processing now happens in a separate process
  group from the monitor. When the monitor's process group gets SIGSTOP,
  all worker processes, ssh and rsync are paused, except the changelog
  processing; on SIGCONT it resumes operation. The changelog agent runs
  as a RepceServer, and the geo-rep worker communicates with it using a
  RepceClient.

  Change-Id: I35c333e4d8b13d03a7808aed601960eef23cfa04
  BUG: 1093602
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/7322
* geo-rep: code pep8/flake8 fixes (Aravinda VK, 2014-04-07, 1 file, -107/+185)

  pep8 is a style guide for Python:
  http://legacy.python.org/dev/peps/pep-0008/

  pep8 can be installed using `pip install pep8`.
  Usage: `pep8 <python file>`. For example, `pep8 master.py` displays
  all the coding standard errors.

  flake8 is used to identify unused imports and other issues in the
  code:

      pip install flake8
      cd $GLUSTER_REPO/geo-replication/
      flake8 syncdaemon

  Also updated the license headers in each source file.

  Change-Id: I01c7d0a6091d21bfa48720e9fb5624b77fa3db4a
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/7311
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* geo-rep: Config file upgrade (Aravinda VK, 2014-02-10, 1 file, -1/+3)

  When an old config file is used with the new geo-rep, config items
  like 'georep_session_working_dir' are missing from it. With this
  patch, geo-rep sets default values for the new items. The following
  config options are supported:
  - georep_session_working_dir
  - gluster_params
  - ssh_command_tar

  BUG: 1036539
  Change-Id: I389c62e749f3b567f9ecf96d4b41367ef962c025
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/6934
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
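
  The upgrade amounts to back-filling defaults for options an old file
  lacks; a hedged sketch (the section name and default values are
  placeholders, not gsyncd's actual config handling):

      try:
          from configparser import ConfigParser   # Python 3
      except ImportError:
          from ConfigParser import ConfigParser   # Python 2

      # Placeholder defaults for the new options named above.
      NEW_DEFAULTS = {
          "georep_session_working_dir": "/var/lib/misc/gsyncd/",
          "gluster_params": "aux-gfid-mount",
          "ssh_command_tar": "ssh",
      }

      def upgrade_config(path, section="global"):
          config = ConfigParser()
          config.read(path)
          if not config.has_section(section):
              config.add_section(section)
          for key, value in NEW_DEFAULTS.items():
              if not config.has_option(section, key):
                  config.set(section, key, value)
          with open(path, "w") as f:
              config.write(f)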
* gsyncd / geo-rep: geo-replication fixes (Ajeet Jha, 2013-12-12, 1 file, -0/+10)

  -> "threaded" hybrid crawl.
  -> Enable metadata synchronization.
  -> Handle EINVAL/ESTALE gracefully while syncing metadata.
  -> Improvements to the changelog crawl code.
  -> Initial crawl changelog generation format.
  -> No gsyncd restart when a checkpoint is updated.
  -> Fix symlink handling in the hybrid crawl.
  -> Slave's xtime key is 'stime'.
  -> tar+ssh as data synchronization.
  -> Instead of 'raise', just log at warning level for missing-xtime cases.
  -> Fix for JSON object load failure.
  -> Get the new config value after a config value reset.
  -> Skip already processed changelogs.
  -> Save the status of each individual worker thread.
  -> GFID fetch on the slave for purges.
  -> Add tar ssh keys and config options.
  -> Fix nlink count when using the backend.
  -> Include "data" operation for hardlinks.
  -> Use the changelog time prefix as the slave's time.
  -> Process changelogs in parallel.

  Change-Id: I09fcbb2e2e418149a6d8435abd2ac6b2f015bb06
  BUG: 1036539
  Signed-off-by: Ajeet Jha <ajha@redhat.com>
  Reviewed-on: http://review.gluster.org/6404
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* geo-rep: logrotate: Logrotate handling (Aravinda VK, 2013-10-03, 1 file, -2/+19)

  In the existing geo-rep, logrotate was implemented by handling SIGSTOP
  and SIGCONT, and gsyncd failed to start again after SIGSTOP. The new
  approach uses WatchedFileHandler in logging, which tracks log file
  changes such as logrotate and reopens the log file if logrotate is
  triggered or if the same log file is updated by another process.

  As per the Python docs
  (http://docs.python.org/2/library/logging.handlers.html):

      The WatchedFileHandler class, located in the logging.handlers
      module, is a FileHandler which watches the file it is logging to.
      If the file changes, it is closed and reopened using the file
      name. A file change can happen because of usage of programs such
      as newsyslog and logrotate which perform log file rotation. This
      handler, intended for use under Unix/Linux, watches the file to
      see if it has changed since the last emit. (A file is deemed to
      have changed if its device or inode have changed.) If the file
      has changed, the old file stream is closed, and the file opened
      to get a new stream.

  Change-Id: I30f65eb1e9778b12943d6e43b60a50344a7885c6
  BUG: 1012776
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/5968
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
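
  A minimal, self-contained illustration of the handler (the log path
  is just an example):

      import logging
      from logging.handlers import WatchedFileHandler

      # Reopens the file whenever its device or inode changes, so
      # logrotate needs no signal handshake with the daemon.
      handler = WatchedFileHandler(
          "/var/log/glusterfs/geo-replication/gsyncd.log")
      handler.setFormatter(
          logging.Formatter("[%(asctime)s] %(levelname)s - %(message)s"))
      log = logging.getLogger("gsyncd")
      log.addHandler(handler)
      log.setLevel(logging.INFO)
      log.info("logging survives logrotate without a restart")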
* glusterd: Saving geo-rep session details in a more specific path (Venky Shankar, 2013-09-04, 1 file, -0/+2)

  Session details are now saved under
  /var/lib/glusterd/geo-replication/<mastervol>_<slaveip>_<slavevol> to
  distinguish between two master-slave sessions where the slave name is
  the same across two different clusters.

  Change-Id: I57c93f55cc9bd4fe2bffe579028aaf5e4335b223
  BUG: 991501
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-on: http://review.gluster.org/5488
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd/cli changes for distributed geo-rep (Avra Sengupta, 2013-07-26, 1 file, -3/+4)

  Commands:

      gluster system:: execute gsec_create
      gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
      gluster volume geo-rep <master> <slave-url> start [force]
      gluster volume geo-rep <master> <slave-url> stop [force]
      gluster volume geo-rep <master> <slave-url> delete
      gluster volume geo-rep <master> <slave-url> config
      gluster volume geo-rep <master> <slave-url> status

  Geo-replication is now distributed: the session is created and gsyncd
  is spawned on all relevant nodes, instead of only one node.

  geo-rep: collecting status-detail related data. Added a persistent
  store for saving information about TotalFilesSynced, TotalSyncTime
  and TotalBytesSynced.

  Changes in the status information in the socket:

      Existing (example):
      FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;

      New (example):
      FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;
      TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;

  Persistent details are stored in
  /var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status

  Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
  BUG: 847839
  Original Author: Avra Sengupta <asengupt@redhat.com>
  Original Author: Venky Shankar <vshankar@redhat.com>
  Original Author: Aravinda VK <avishwan@redhat.com>
  Original Author: Amar Tumballi <amarts@redhat.com>
  Original Author: Csaba Henk <csaba@redhat.com>
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/5132
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Vijay Bellur <vbellur@redhat.com>
* gsyncd: distribute the crawling load (Avra Sengupta, 2013-07-26, 1 file, -15/+110)

  * Also consume the changelog for change detection.
  * Status fixes.
  * Use the new libgfchangelog done API.
  * Process (and sync) one changelog at a time.

  Change-Id: I24891615bb762e0741b1819ddfdef8802326cb16
  BUG: 847839
  Original Author: Csaba Henk <csaba@redhat.com>
  Original Author: Aravinda VK <avishwan@redhat.com>
  Original Author: Venky Shankar <vshankar@redhat.com>
  Original Author: Amar Tumballi <amarts@redhat.com>
  Original Author: Avra Sengupta <asengupt@redhat.com>
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/5131
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Vijay Bellur <vbellur@redhat.com>
* move 'xlators/marker/utils/' to 'geo-replication/' directory (Avra Sengupta, 2013-07-22, 1 file, -0/+419)

  Change-Id: Ibd0faefecc15b6713eda28bc96794ae58aff45aa
  BUG: 847839
  Original Author: Amar Tumballi <amarts@redhat.com>
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/5133
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>