path: root/geo-replication/syncdaemon/monitor.py
Commit log (commit message, author, date, files changed, lines changed):

* georep: add reset-sync-time option for session delete (Milind Changire, 2016-07-21; 1 file changed, -0/+10)
  Set the stime xattr at all brick roots to (0, 0) if the reset-sync-time argument
  has been provided on the command line. To avoid testing against directory-specific
  stime, the remote stime is assumed to be minus_infinity before the directory scan
  begins if the root directory stime is set to (0, 0). This triggers a full volume
  resync to the slave when a geo-rep session is recreated with the same master-slave
  volume pair.

  Command synopsis:

      gluster volume geo-replication <MASTERVOL> <SLAVE>::<SLAVEVOL> delete \
          [reset-sync-time]

  The gluster CLI man page is updated to include the new sub-command reset-sync-time.

  Change-Id: Ie4ce03b9425ed9bb81eda8681058c0fc6f990948
  BUG: 1357772
  Signed-off-by: Milind Changire <mchangir@redhat.com>
  Reviewed-on: http://review.gluster.org/14051
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  (cherry picked from commit 70fd68d94f768c098b3178c151fa92c5079a8cfd)
  Reviewed-on: http://review.gluster.org/14952

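  A minimal sketch of the reset idea described above, assuming a per-brick stime
  xattr stored as two network-order 32-bit integers; the xattr key shown is a
  placeholder, since gsyncd composes the real key from the master and slave
  volume UUIDs:

      # Hedged sketch, not the actual gsyncd code: reset a brick's stime
      # marker to (0, 0) so the next crawl treats the remote stime as
      # minus_infinity and performs a full resync.
      import os
      import struct

      def reset_stime(brick_root, stime_key):
          # stime assumed packed as two unsigned 32-bit ints: (sec, nsec)
          os.setxattr(brick_root, stime_key, struct.pack("!II", 0, 0))

      # Example (hypothetical key name; the real key embeds volume UUIDs):
      # reset_stime("/bricks/brick0", "trusted.glusterfs.<uuid>.<uuid>.stime")
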
* geo-rep: Handle Worker kill gracefully if worker already died (Aravinda VK, 2016-06-01; 1 file changed, -9/+9)
  If the Agent dies for any reason, the monitor tries to kill the Worker as well.
  But if the worker has also died, the kill call raises ESRCH: No such process.

      [2016-05-23 16:49:33.903965] I [monitor(monitor):326:monitor] Monitor:
          Changelog Agent died, Aborting Worker(/bricks/brick0/master_brick0)
      [2016-05-23 16:49:33.904535] E [syncdutils(monitor):276:log_raise_exception] <top>: FAIL:
      Traceback (most recent call last):
        File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 306, in twrap
          tf(*aa)
        File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 393, in wmon
          slave_host, master)
        File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 327, in monitor
          os.kill(cpid, signal.SIGKILL)
      OSError: [Errno 3] No such process

  With this patch, the monitor handles the case where the worker has already died.

  Change-Id: I3ae5f816a3a197343b64540cf46f5453167fb660
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  BUG: 1341068
  Reviewed-on: http://review.gluster.org/14512
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  (cherry picked from commit 4f4a94a35a24d781f3f0e584a8cb59c019e50d6f)
  Reviewed-on: http://review.gluster.org/14562
  Reviewed-by: Saravanakumar Arumugam <sarumuga@redhat.com>

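  A sketch of the graceful-kill pattern this fix implies (not the patch itself):
  swallow ESRCH because the worker is already gone, re-raise any other error:

      import errno
      import os
      import signal

      def kill_worker(cpid):
          try:
              os.kill(cpid, signal.SIGKILL)
          except OSError as e:
              # ESRCH: worker already exited, nothing left to kill
              if e.errno != errno.ESRCH:
                  raise
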
* geo-rep: Fix syntax errors in GsyncdError (Aravinda VK, 2016-01-03; 1 file changed, -3/+2)
  %s was not being replaced by the actual values in GsyncdError messages.

  BUG: 1279644
  Change-Id: I3c0a10f07383ca72844a46f930b4aa3d3c29f568
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/12566
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  (cherry picked from commit 74699ddd777f7e862991cf3afad91823d30e5b84)
  Reviewed-on: http://review.gluster.org/12724
  Reviewed-by: Milind Changire <mchangir@redhat.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>

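  A hypothetical illustration of this class of bug (the exact messages in
  monitor.py may differ): the placeholder is only substituted when the values
  are applied with %:

      class GsyncdError(Exception):
          pass

      brick = "/bricks/brick0"

      # buggy: the message keeps the literal "%s", the value is never applied
      bad = GsyncdError("worker died in startup phase, brick: %s")

      # fixed: substitute the value before constructing the error
      good = GsyncdError("worker died in startup phase, brick: %s" % brick)
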
* geo-rep: Fix getting subvol count (Kotresh HR, 2016-01-03; 1 file changed, -1/+4)
  Tiering doesn't support a disperse volume as the hot tier, hence the XML output
  doesn't give 'hotdisperseCount'. Remove the usage of 'hotdisperseCount' in
  geo-rep and return 0 instead.

  BUG: 1293309
  Change-Id: I3f50d21cb51db91e31faebf69af4f72360420b73
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/13062
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/13068

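  A small hedged sketch of the fallback idea, assuming the parsed volume-info
  XML ends up in a dict keyed by element name (names here are illustrative):

      def disperse_count(volinfo, tier):
          # 'hotdisperseCount' is never emitted because a disperse volume
          # cannot be the hot tier; default to 0 instead of failing.
          return int(volinfo.get("%sdisperseCount" % tier, 0))

      # disperse_count(parsed_volinfo, "hot")  -> 0 when the key is absent
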
* geo-rep: Fix getting subvol number (Kotresh HR, 2016-01-03; 1 file changed, -16/+46)
  Fix getting the subvol number when the volume type is tier. For a tier volume
  the subvol number was calculated incorrectly, hence a few workers didn't become
  ACTIVE, resulting in files not being replicated from the corresponding bricks.
  This patch addresses the same.

  BUG: 1293309
  Change-Id: I318de346657d330a2394507514bdff61feb92d27
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/12994
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-on: http://review.gluster.org/13059

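  The gist, sketched under assumptions (this is not the actual gluster mapping):
  in a tier volume the hot-tier bricks come first in the brick list, so replica
  grouping has to be computed per tier rather than across the whole list:

      def subvol_number(brick_idx, hot_bricks, hot_replica, cold_replica):
          # hot tier bricks occupy indices [0, hot_bricks)
          if brick_idx < hot_bricks:
              return brick_idx // hot_replica + 1
          # re-base the index into the cold tier before grouping by replica
          return (brick_idx - hot_bricks) // cold_replica + 1
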
* geo-rep: use cold tier bricks for namespace operations (Saravanakumar Arumugam, 2015-12-07; 1 file changed, -6/+6)
  Problem: symlinks are not getting synced to the slave in a tiering-based volume.

  Solution: symlinks are now created directly in the cold tier bricks (in the
  backend). Earlier, the cold tier was avoided for namespace operations and only
  the hot tier was used while processing changelogs. Since the cold tier is the
  HASH subvolume in a tiering volume, carry out namespace operations only in the
  cold tier subvolume and avoid the hot tier subvolume to avoid any races.

  Earlier, XSYNC was used (and changelog history avoided) during the initial sync
  in order to avoid a race while processing the history changelog in the hot tier.
  This is no longer required as there is no race from the hot tier. Also, avoid
  both live and history changelog ENTRY operations from the hot tier to avoid any
  race with the cold tier.

  Change-Id: Ia8fbb7ae037f5b6cb683f36c0df5c3fc2894636e
  BUG: 1288027
  Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
  Reviewed-on: http://review.gluster.org/12844
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  (cherry picked from commit 93f31189ce8f6e2980a39b02568ed17088e0a667)
  Reviewed-on: http://review.gluster.org/12891

* geo-rep: Avoid cold tier bricks during ENTRY operation (Saravanakumar Arumugam, 2015-12-07; 1 file changed, -4/+13)
  This is part of a series of patches aiming to fix geo-replication in a tiering
  volume.

  Problem: consider a file placed in the volume before the hot tier is attached.
  During any operation on the file, a lookup creates a linkto file in the hot
  tier. Now, any namespace operation carried out on the file is recorded in both
  the cold and hot tier changelogs, leaving room for races when both changelogs
  are replayed.

  Solution: replay (namespace related) operations only from the hot tier. Why?

  a. If the file is placed directly in the hot tier, all fops are recorded in
     the hot tier.
  b. If the file is already present in the cold tier and any fop is carried out,
     a linkto file is created in the hot tier. Operations like UNLINK and RENAME
     are then captured in the hot tier (by means of the linkto file).

  This way, both tiers' operations are available in the hot tier itself. Once
  the file is demoted to the cold tier, any namespace operation carried out on
  the cold tier can be skipped, as the same is recorded directly in the hot tier.

  How?
  1. Check whether the brick is a cold tier brick and skip the ENTRY operation.
  2. Also, if it is a cold tier brick, use Xsync (which is used during the
     initial run). This helps in getting all cold tier brick changes using a
     file-system crawl and avoids races with the hot tier brick (which can
     happen if the history changelog is used on a cold tier brick).

  Dependent patches:
  1. http://review.gluster.org/12239
  2. http://review.gluster.org/12326

  Change-Id: I7692b1dbb8813a7e253451bca02f8f09a5782dde
  BUG: 1275173
  Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
  Reviewed-on: http://review.gluster.org/12355
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  (cherry picked from commit 6188b5fcebc56b3d8af1956beeec9988f3e8f268)
  Reviewed-on: http://review.gluster.org/12429
  Reviewed-by: Venky Shankar <vshankar@redhat.com>

* geo-rep: Kill Geo-rep Worker when Agent process dies (Aravinda VK, 2015-11-24; 1 file changed, -10/+44)
  When the Changelog agent process dies, geo-replication fails to detect it and
  the worker keeps running without its Changelog agent. Status shows
  Active/Passive without any progress. With this patch, the worker process is
  killed whenever the Changelog agent dies.

  Change-Id: I30b4cc77f924f7e8174b8bfe415ac17f0b3851b4
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  BUG: 1279362
  Reviewed-on: http://review.gluster.org/12485
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  (cherry picked from commit 5d1ff7efd6ab3bd29a29922a9ea1e1aaf02544ad)
  Reviewed-on: http://review.gluster.org/12550

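  A hedged sketch of the monitor-side watchdog idea (not the patch itself):
  probe the agent pid, and if it has gone away take the worker down too so
  status cannot stay Active without an agent behind it:

      import os
      import signal
      import time

      def watch_agent(apid, cpid, interval=1):
          while True:
              try:
                  os.kill(apid, 0)   # signal 0 only checks process existence
              except OSError:
                  # agent is gone; abort the worker, let the monitor respawn both
                  os.kill(cpid, signal.SIGKILL)
                  return
              time.sleep(interval)
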
* geo-rep: Status Enhancements (Aravinda VK, 2015-05-06; 1 file changed, -39/+19)
  Discussion in gluster-devel:
  http://www.gluster.org/pipermail/gluster-devel/2015-April/044301.html

  MASTER NODE - Master Volume Node
  MASTER VOL - Master Volume name
  MASTER BRICK - Master Volume Brick
  SLAVE USER - Slave User to which the Geo-rep session is established
  SLAVE - <SLAVE_NODE>::<SLAVE_VOL> used in the Geo-rep create command
  SLAVE NODE - Slave Node to which the Master worker is connected
  STATUS - Worker Status (Created, Initializing, Active, Passive, Faulty, Paused, Stopped)
  CRAWL STATUS - Crawl type (Hybrid Crawl, History Crawl, Changelog Crawl)
  LAST_SYNCED - Last Synced Time (Local Time in CLI output and UTC in XML output)
  ENTRY - Number of entry operations pending (resets on worker restart)
  DATA - Number of data operations pending (resets on worker restart)
  META - Number of meta operations pending (resets on worker restart)
  FAILURES - Number of failures
  CHECKPOINT TIME - Checkpoint set Time (Local Time in CLI output and UTC in XML output)
  CHECKPOINT COMPLETED - Yes/No or N/A
  CHECKPOINT COMPLETION TIME - Checkpoint Completed Time (Local Time in CLI output and UTC in XML output)

  XML output (element skeleton):

      <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
      <cliOutput>
        <geoRep>
          <volume>
            <name>
            <sessions>
              <session>
                <session_slave>
                <pair>
                  <master_node>
                  <master_brick>
                  <slave_user>
                  <slave/>
                  <slave_node>
                  <status>
                  <crawl_status>
                  <entry>
                  <data>
                  <meta>
                  <failures>
                  <checkpoint_completed>
                  <master_node_uuid>
                  <last_synced>
                  <checkpoint_time>
                  <checkpoint_completion_time>

  BUG: 1218586
  Change-Id: I944a6c3c67f1e6d6baf9670b474233bec8f61ea3
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/10121
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Reviewed-on: http://review.gluster.org/10574
  Tested-by: Gluster Build System <jenkins@build.gluster.com>

* geo-rep: Adhering to the common storage for geo-rep (Kotresh HR, 2015-05-04; 1 file changed, -20/+0)
  Make geo-rep use the common storage shared by NFS, snapshot and geo-rep. The
  meta volume should be named gluster_shared_storage and mounted at
  /var/run/gluster/shared_storage/. Geo-rep creates a directory called 'geo-rep'
  in the meta volume, and all the lock files are created inside it.

  BUG: 1217939
  Change-Id: I1d88798376d68340e2b2eff018c7e4f0121a608a
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/10196
  Reviewed-on: http://review.gluster.org/10503
  Tested-by: NetBSD Build System
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Aravinda VK <avishwan@redhat.com>

* feature/geo-rep: Active Passive Switching logic flock (Kotresh HR, 2015-03-15; 1 file changed, -1/+44)
  CURRENT DESIGN AND ITS LIMITATIONS:
  Geo-replication syncs changes across geography using changelogs captured by the
  changelog translator, which sits on the server side just above the posix
  translator. Hence, in a distributed replicated setup, both replica pairs
  collect changelogs w.r.t. their bricks. Geo-replication syncs the changes using
  only one brick among the replica pair at a time, calling it "ACTIVE" and the
  other, non-syncing brick "PASSIVE".

  Consider the example of a distributed replicated setup where NODE-1 has brick
  b1 and its replicated brick b1r is on NODE-2:

      NODE-1    NODE-2
      b1        b1r

  At the beginning, geo-replication chooses to sync changes from NODE-1:b1, and
  NODE-2:b1r is "PASSIVE". The logic depends on the virtual getxattr
  'trusted.glusterfs.node-uuid', which always returns the first up subvolume,
  i.e. NODE-1. When NODE-1 goes down, the xattr returns NODE-2 and that node is
  made 'ACTIVE'. But when NODE-1 comes back, the xattr returns NODE-1 again and
  it is made 'ACTIVE' again. So for a brief interval, if NODE-2 has not finished
  processing its changelog, both NODE-2 and NODE-1 are ACTIVE, causing the rename
  race mentioned in the bug.

  SOLUTION:
  1. Have a shared replicated storage, a glusterfs management volume specific to
     geo-replication.
  2. Geo-rep creates a file per replica set on the management volume.
  3. An fcntl lock on that file is used for synchronization between geo-rep
     workers belonging to the same replica set.
  4. If the management volume is not configured, geo-replication falls back to
     the previous logic of using the first up subvolume.

  Each worker tries to lock the file on shared storage; whoever wins becomes
  ACTIVE. This solves the problem, but there is an issue when the shared
  replicated storage goes down (when all replicas go down): the lock state is
  lost. So AFR needs to rebuild the lock state after the brick comes up.

  NOTE:
  This patch brings in the pre-requisite step of setting up a management volume
  for geo-replication during creation.
  1. Create a mgmt-vol for geo-replication and start it. The management volume
     should be part of the master cluster and is recommended to be a three-way
     replicated volume with each brick on a different node for availability.
  2. Create the geo-rep session.
  3. Configure the created mgmt-vol with the geo-replication session as follows:

         gluster vol geo-rep <mastervol> slavenode::<slavevol> config meta_volume \
             <meta-vol-name>

  4. Start the geo-rep session.

  Backward Compatibility:
  If the management volume is not configured, it falls back to the previous
  logic of using the node-uuid virtual xattr. But that is not recommended.

  Change-Id: I7319d2289516f534b69edd00c9d0db5a3725661a
  BUG: 1196632
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/9759
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

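  A minimal sketch of the per-replica-set flock idea, assuming a lock file on
  the mounted meta volume (paths and names here are illustrative):

      import fcntl

      def try_become_active(lock_path):
          # one lock file per replica set on the shared meta volume
          f = open(lock_path, "w")
          try:
              fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
              return f        # lock won: this worker goes ACTIVE
          except IOError:
              f.close()
              return None     # another replica holds the lock: stay PASSIVE

      # fd = try_become_active(
      #     "/var/run/gluster/shared_storage/geo-rep/<replica-set-id>.lock")
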
* geo-rep: Fixing the typo errors (arao, 2015-02-03; 1 file changed, -1/+1)
  Change-Id: Iacc67e4ba9ac45e0858f3befe84ffb8fccf7e1c3
  BUG: 1075417
  Signed-off-by: arao <arao@redhat.com>
  Reviewed-on: http://review.gluster.org/9502
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>

* geo-rep: Handle Volume status error while getting slave nodes (Aravinda VK, 2015-01-12; 1 file changed, -1/+6)
  The gluster volume status command does not return XML output when an error
  such as "Transaction in Progress" occurs; we need to handle the return code
  along with the XML error.

  BUG: 1151412
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Change-Id: Id5b7712df7cff58744b4c5a0d00870aec1d926a8
  Reviewed-on: http://review.gluster.org/9432
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Venky Shankar <vshankar@redhat.com>

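  A hedged sketch of the defensive pattern (not the actual gsyncd helper):
  check the CLI return code before attempting to parse the XML, since a failed
  command may emit no parseable output:

      import subprocess
      import xml.etree.ElementTree as ET

      def volume_status_xml(volname):
          p = subprocess.Popen(
              ["gluster", "volume", "status", volname, "--xml"],
              stdout=subprocess.PIPE, stderr=subprocess.PIPE)
          out, err = p.communicate()
          if p.returncode != 0:
              # e.g. "Transaction in Progress": no usable XML on stdout
              raise RuntimeError("volume status failed: %s"
                                 % err.decode("utf-8", "replace").strip())
          return ET.fromstring(out)
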
* geo-rep: Failover when a Slave node goes down (Aravinda VK, 2014-10-19; 1 file changed, -5/+64)
  When a slave node goes down, the worker in the master node can connect to a
  different slave node and resume the operation. Existing geo-rep was not
  checking the status of the slave node before restarting the worker. With this
  patch, the geo-rep worker checks the node status using `gluster volume status`
  when it goes faulty.

  BUG: 1151412
  Change-Id: If3ab7fdcf47f5b3f3ba383c515703c5f1f9dd668
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/8921
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Venky Shankar <vshankar@redhat.com>

* geo-rep: Fix the fd leak in worker/agent spawn (Aravinda VK, 2014-06-30; 1 file changed, -2/+5)
  The worker and agent use a pipe to communicate; if the worker dies for some
  reason, the agent should get EOF and terminate. Each worker-agent spawning is
  done in a thread. Due to a race, multiple workers on the same node could retain
  the pipe refs of other workers, so the agent would not get EOF even if its
  worker died.

  BUG: 1114003
  Change-Id: I36b9709b9392299483606bd3ef1db764fa3f2bff
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/8194
  Tested-by: Justin Clift <justin@gluster.org>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Venky Shankar <vshankar@redhat.com>

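  One common way to keep pipe ends from leaking into unrelated children is to
  mark them close-on-exec; a sketch of that idea (the actual patch may address
  the leak differently):

      import fcntl
      import os

      def cloexec_pipe():
          # fds marked FD_CLOEXEC are closed automatically across exec(),
          # so children spawned for other workers cannot hold them open
          r, w = os.pipe()
          for fd in (r, w):
              flags = fcntl.fcntl(fd, fcntl.F_GETFD)
              fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)
          return r, w
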
* feature/geo-rep: Fix to retain pause state of gsyncd on restart (Kotresh H R, 2014-06-20; 1 file changed, -7/+18)
  On a soft reboot, the geo-rep monitor writes 'faulty' into the status file. It
  should not do so if the previous state was paused, as glusterd depends on the
  state file on node restart.

  Change-Id: Idd45abf13350b087371935f1b4f6e1a346433d27
  BUG: 1101410
  Signed-off-by: Kotresh H R <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/8097
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Venky Shankar <vshankar@redhat.com>

* feature/geo-rep: Fix for changelog agent becoming zombie. (Kotresh H R, 2014-06-17; 1 file changed, -4/+12)
  The monitor process spawns the changelog agent but does not wait on it, so the
  agent becomes a zombie. When the worker dies or is killed, the monitor respawns
  both the worker and the corresponding agent, leaving the earlier changelog
  agent in the zombie state. This patch addresses the issue by waiting on the
  agent process in the monitor process.

  Change-Id: I571b7d6487133848edca67e7446f1caa70ae01c9
  BUG: 1103643
  Signed-off-by: Kotresh H R <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/7956
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Venky Shankar <vshankar@redhat.com>

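  A minimal sketch of reaping a child so it cannot linger as a zombie; a
  non-blocking waitpid like this could run in the monitor's supervision loop
  (illustrative, not the patch):

      import os

      def reap_agent(apid):
          # WNOHANG: return immediately if the agent has not exited yet
          pid, status = os.waitpid(apid, os.WNOHANG)
          if pid == 0:
              return False    # agent still running
          return True         # agent collected; no zombie left behind
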
* feature/geo-rep: Fix to retain pause state of gsyncd on restart. (Kotresh H R, 2014-06-05; 1 file changed, -2/+7)
  A new gsyncd option '--pause-on-start' is introduced. When a node reboots and
  the status is paused, gsyncd is started with this option. After gsyncd spawns
  the worker and agent, the worker sends SIGSTOP to the negative pid of the
  monitor to enter pause mode.

  Change-Id: I5aad82c9a9fc8c243f384940b77d25e26e520d6d
  BUG: 1101410
  Signed-off-by: Kotresh H R <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/7885
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Venky Shankar <vshankar@redhat.com>

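  For reference, passing a negative pid to kill() targets the whole process
  group; a hedged sketch of the pause step described above:

      import os
      import signal

      def pause_monitor_group(monitor_pid):
          # -monitor_pid addresses the process group led by the monitor,
          # so the monitor and its workers are stopped together
          os.kill(-monitor_pid, signal.SIGSTOP)
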
* gsyncd / geo-rep: fix cli query for volinfo fetch (Venky Shankar, 2014-05-25; 1 file changed, -1/+1)
  With an unprivileged geo-replication session, the monitor was using user@slave
  for the --remote-host option of the gluster CLI, thereby failing to
  successfully connect to the slave glusterd. This patch fixes the issue by
  selecting the hostname/IP from the specified slave endpoint url.

  For privileged geo-replication sessions this patch has no effect, as the slave
  endpoint url is just the hostname/IP.

  Change-Id: I88f66c406a8d9a34db7fc626965f949075e3ceac
  BUG: 1077452
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-on: http://review.gluster.org/7818
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>

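  A small hedged sketch of the host extraction, assuming the slave endpoint
  looks like user@host (the real parsing in gsyncd is more involved):

      def remote_host(slave_endpoint):
          # "geoaccount@slave.example.com" -> "slave.example.com"
          # "slave.example.com"            -> unchanged (privileged session)
          return slave_endpoint.split("@")[-1]

      # the extracted host is what gets passed to the gluster CLI's
      # --remote-host option when fetching the slave volume info
      host = remote_host("geoaccount@slave.example.com")
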
* gsyncd/geo-rep: Fix remote vol info fetching for non-root (Kotresh H R, 2014-05-15; 1 file changed, -1/+1)
  Signed-off-by: Kotresh H R <khiremat@redhat.com>
  Change-Id: If1d2cab3fcfe2391105551e54f0b9729a7c204e4
  BUG: 1077452
  Reviewed-on: http://review.gluster.org/7767
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Venky Shankar <vshankar@redhat.com>

* gsyncd / geo-rep: Partial support for Non-root geo-replication. (Venky Shankar, 2014-05-14; 1 file changed, -1/+1)
  This patch enables geo-replication to be run as an unprivileged user. As of
  now this is only partial support, but it is very close to full functionality.

  Current limitation:
  * Geo-replication executes Gluster CLI commands on the slave via SSH. On a
    non-root setup, the Gluster CLI would run as an unprivileged user, failing
    to execute the command. As a workaround (for testing), setuid(2) the Gluster
    CLI executable or use the glusterd option to accept commands from an
    unprivileged CLI process.

  The CLI commands in question are "system::" commands (for key management) and
  remote volume info fetching. Remote volume info fetching has been modified to
  use the --remote-host gluster CLI option rather than ssh and remote CLI
  execution.

  Change-Id: Ica89e2ba9b7f48fd6e1c876c477d7822dc693617
  BUG: 1077452
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-on: http://review.gluster.org/7658
  Tested-by: Gluster Build System <jenkins@build.gluster.com>

* geo-rep: Pause and Resume feature for geo-replication (Aravinda VK, 2014-05-09; 1 file changed, -2/+32)
  Changelog consumption/processing now happens in a separate process group from
  the monitor. When the monitor process group gets SIGSTOP, all worker
  processes, ssh and rsync are paused, except the changelog processing. When it
  gets SIGCONT it resumes operation. The changelog agent runs as a RepceServer,
  and the geo-rep worker communicates with the changelog agent using RepceClient.

  Change-Id: I35c333e4d8b13d03a7808aed601960eef23cfa04
  BUG: 1093602
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/7322

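  A hedged sketch of keeping the changelog agent out of the monitor's process
  group so a SIGSTOP/SIGCONT aimed at that group does not touch it (the command
  and names below are illustrative):

      import os
      import signal
      import subprocess

      # Start the changelog agent in its own session (and hence its own
      # process group), so a pause aimed at the monitor's group skips it.
      agent = subprocess.Popen(["python", "changelogagent.py"],
                               preexec_fn=os.setsid)

      def pause(monitor_pid):
          # sent from outside the group (e.g. by the management daemon):
          # stops the monitor, workers, ssh and rsync, but not the agent above
          os.killpg(os.getpgid(monitor_pid), signal.SIGSTOP)

      def resume(monitor_pid):
          os.killpg(os.getpgid(monitor_pid), signal.SIGCONT)
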
* geo-rep: code pep8/flake8 fixes (Aravinda VK, 2014-04-07; 1 file changed, -25/+53)
  pep8 is a style guide for python: http://legacy.python.org/dev/peps/pep-0008/
  pep8 can be installed using `pip install pep8`.

  Usage: `pep8 <python file>`. For example, `pep8 master.py` will display all
  the coding standard errors.

  flake8 is used to identify unused imports and other issues in code:

      pip install flake8
      cd $GLUSTER_REPO/geo-replication/
      flake8 syncdaemon

  Also updated the license headers in each source file.

  Change-Id: I01c7d0a6091d21bfa48720e9fb5624b77fa3db4a
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/7311
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>

* geo-rep: Fix ValueError - signal only works in main thread (Aravinda VK, 2014-03-20; 1 file changed, -8/+7)
  When a worker process was not confirmed within 60 seconds of starting, the
  monitor thread was terminated instead of stopping and restarting the worker.
  Before terminating, the monitor thread tried to add a signal handler for
  SIGTERM to clean up before exit. Signal handling does not work inside a
  thread, so a ValueError was raised. With this patch the monitor thread is not
  terminated; instead only the worker is killed and restarted.

  Change-Id: I14df26c0cc3097af29293c81536c13b86075e28f
  BUG: 1078068
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/7294
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Vijay Bellur <vbellur@redhat.com>

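  A standalone illustration of the underlying Python restriction:
  signal.signal() may only be called from the main thread, so a monitor thread
  has to kill and respawn the worker instead of installing handlers (sketch,
  not the patch):

      import signal
      import threading

      def in_thread():
          try:
              # raises ValueError: signal only works in main thread
              signal.signal(signal.SIGTERM, lambda signum, frame: None)
          except ValueError:
              # the safe alternative here is simply to kill the unresponsive
              # worker (os.kill) and let the monitor loop respawn it
              pass

      t = threading.Thread(target=in_thread)
      t.start()
      t.join()
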
* geo-rep: logrotate: Logrotate handling (Aravinda VK, 2013-10-03; 1 file changed, -13/+0)
  In existing geo-rep, logrotate was handled via SIGSTOP and SIGCONT, and gsyncd
  was failing to start again after SIGSTOP. The new approach uses
  WatchedFileHandler in logging, which tracks log file changes and logrotate,
  and reopens the log file if logrotate is triggered or if the same log file is
  updated from another process.

  As per the Python documentation
  (http://docs.python.org/2/library/logging.handlers.html):

      The WatchedFileHandler class, located in the logging.handlers module, is a
      FileHandler which watches the file it is logging to. If the file changes,
      it is closed and reopened using the file name. A file change can happen
      because of usage of programs such as newsyslog and logrotate which perform
      log file rotation. This handler, intended for use under Unix/Linux,
      watches the file to see if it has changed since the last emit. (A file is
      deemed to have changed if its device or inode have changed.) If the file
      has changed, the old file stream is closed, and the file opened to get a
      new stream.

  Change-Id: I30f65eb1e9778b12943d6e43b60a50344a7885c6
  BUG: 1012776
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/5968
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

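  A minimal sketch of wiring up WatchedFileHandler (the log path below is an
  assumption, not necessarily the one gsyncd uses):

      import logging
      from logging.handlers import WatchedFileHandler

      handler = WatchedFileHandler(
          "/var/log/glusterfs/geo-replication/gsyncd.log")
      handler.setFormatter(
          logging.Formatter("[%(asctime)s] %(levelname)s: %(message)s"))

      log = logging.getLogger("gsyncd")
      log.setLevel(logging.INFO)
      log.addHandler(handler)

      # After logrotate moves the file, the next emit notices the inode
      # change and reopens the path; no SIGSTOP/SIGCONT dance is needed.
      log.info("monitor started")
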
* gsyncd / geo-rep: distributify slave (Venky Shankar, 2013-09-04; 1 file changed, -14/+1)
  Commit fbb8fd92 introduced slave distributification but had some problems (the
  monitor would crash upon gsyncd start). This patch fixes the issue and makes
  the code more pythonic ;)

  Change-Id: I2cbf5669d81966046a4aeeb4a6ad11a947aa8f09
  BUG: 1003807
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Tested-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-on: http://review.gluster.org/5761
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Tested-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* geo-replication: fix the logic of choosing the remote node to sync (Venky Shankar, 2013-09-04; 1 file changed, -2/+11)
  Change-Id: Ie15636357d89e94b6bfad0e168b1fcad53508c47
  BUG: 1003807
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-on: http://review.gluster.org/5759
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Tested-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: Saving geo-rep session details in a more specific path (Venky Shankar, 2013-09-04; 1 file changed, -1/+1)
  Session details are now saved in the
  /var/lib/glusterd/geo-replication/<mastervol>_<slaveip>_<slavevol> repo, to
  distinguish between two master-slave sessions where the slave name is the same
  across two different clusters.

  Change-Id: I57c93f55cc9bd4fe2bffe579028aaf5e4335b223
  BUG: 991501
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-on: http://review.gluster.org/5488
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* gsyncd: distribute the crawling load (Avra Sengupta, 2013-07-26; 1 file changed, -42/+174)
  * also consume changelog for change detection
  * Status fixes
  * Use new libgfchangelog done API
  * process (and sync) one changelog at a time

  Change-Id: I24891615bb762e0741b1819ddfdef8802326cb16
  BUG: 847839
  Original Author: Csaba Henk <csaba@redhat.com>
  Original Author: Aravinda VK <avishwan@redhat.com>
  Original Author: Venky Shankar <vshankar@redhat.com>
  Original Author: Amar Tumballi <amarts@redhat.com>
  Original Author: Avra Sengupta <asengupt@redhat.com>
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/5131
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Vijay Bellur <vbellur@redhat.com>

* move 'xlators/marker/utils/' to 'geo-replication/' directory (Avra Sengupta, 2013-07-22; 1 file changed, -0/+129)
  Change-Id: Ibd0faefecc15b6713eda28bc96794ae58aff45aa
  BUG: 847839
  Original Author: Amar Tumballi <amarts@redhat.com>
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/5133
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>