path: root/xlators/mgmt/glusterd/src/glusterd-syncop.c
Commit log (commit message, author, date, files changed, lines changed):
* glusterd: null dereference (nik-redhat, 2020-07-08, 1 file, -1/+1)
  Issue: There was either an explicit null dereference or a dereference
  after a null check in some cases.
  Fix: Added the proper conditions for the null checks and fixed the
  null dereferencing.
  CID: 1430106: Dereference after null check
  CID: 1430120: Explicit null dereferenced
  CID: 1430132: Dereference after null check
  CID: 1430134: Dereference after null check
  Change-Id: I7e795cf9f7146a633097c26a766f16b159881fa3
  Updates: #1060
  Signed-off-by: nik-redhat <nladha@redhat.com>
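
  The defect class is easy to see in a minimal, self-contained C sketch
  (illustrative code, not the actual glusterd functions):

      #include <stdio.h>

      struct volinfo { const char *volname; };

      /* Buggy shape flagged by Coverity: vol is checked against NULL,
       * but control still falls through to the dereference. */
      static void
      print_name_buggy(struct volinfo *vol)
      {
          if (!vol)
              fprintf(stderr, "volinfo missing\n");
          printf("volume: %s\n", vol->volname); /* deref after null check */
      }

      /* Fixed shape: the error path returns before any dereference. */
      static void
      print_name_fixed(struct volinfo *vol)
      {
          if (!vol) {
              fprintf(stderr, "volinfo missing\n");
              return;
          }
          printf("volume: %s\n", vol->volname);
      }

      int
      main(void)
      {
          struct volinfo vol = { "patchy" };
          print_name_fixed(&vol);  /* prints "volume: patchy" */
          print_name_fixed(NULL);  /* prints the error, no crash */
          print_name_buggy(&vol);
          return 0;
      }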
* glusterd: rebalance status displays stats as 0 after reboot (Sanju Rakonde, 2020-07-02, 1 file, -9/+20)
  Problem: While a rebalance is in progress, if a node is rebooted, the
  rebalance status command shows the stats of this node as 0 once the
  node is back.
  Reason: When the node comes back after the reboot,
  glusterd_volume_defrag_restart() starts the rebalance and creates the
  rpc object. But due to a race, the rebalance process sends a
  disconnect event, so the rpc object gets destroyed. As the rpc object
  is null, the request for fetching the latest stats is not sent to the
  rebalance process, and the stats are shown as the default values,
  which are 0.
  Solution: When the rpc object is null, create the rpc if the
  rebalance process is up, so that the request can be sent to the
  rebalance process using the rpc.
  fixes: #1339
  Change-Id: I1c7533fedd17dcaffc0f7a5a918c87356133a81c
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
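
  The shape of the fix, as a self-contained C sketch (all names here
  are illustrative stand-ins, not the real glusterd symbols):

      #include <stddef.h>

      struct defrag_info {
          void *rpc;  /* may have been destroyed by the disconnect race */
      };

      static int
      rebalance_process_is_running(void)
      {
          return 1;   /* stub: the real check looks at the defrag process */
      }

      static void *
      rpc_connection_create(void)
      {
          static int conn;
          return &conn;   /* stub connection object */
      }

      /* If the rpc object is gone but the rebalance process is still up,
       * recreate the connection before requesting the stats. */
      static void *
      get_defrag_rpc(struct defrag_info *defrag)
      {
          if (defrag->rpc == NULL && rebalance_process_is_running())
              defrag->rpc = rpc_connection_create();
          return defrag->rpc;  /* NULL only if the process really is gone */
      }

      int
      main(void)
      {
          struct defrag_info d = { NULL };
          return get_defrag_rpc(&d) == NULL;
      }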
* glusterd: additional log information (nik-redhat, 2020-06-29, 1 file, -9/+37)
  Issue: Some of the functions didn't have sufficient logging in case
  of failure.
  Fix: Added log messages to a few functions, indicating the cause of
  the failure.
  Change-Id: I301cf3a1c8d2c94505c6ae0d83072b0241c36d84
  fixes: #874
  Signed-off-by: nik-redhat <nladha@redhat.com>
* multiple files: reduce minor work under RCU_READ_LOCK (Yaniv Kaul, 2019-08-05, 1 file, -2/+4)
  1. Unlock sooner in error paths.
  2. Move memory allocations out of the critical section - do them
  before taking the lock.
  Change-Id: I1e9ddd80b99de45ad0f557d62a5f28951dfd54c8
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
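
  A sketch of both patterns, assuming liburcu-bp's rcu_read_lock()/
  rcu_read_unlock() and a stubbed-out lookup (illustrative, not the
  patched glusterd code):

      #include <urcu-bp.h>
      #include <stdlib.h>
      #include <string.h>

      static const char *
      lookup_protected(const char *key) /* stub for an RCU-protected lookup */
      {
          return key[0] ? "value" : NULL;
      }

      static int
      lookup_and_copy(const char *key, char **out)
      {
          char *copy = malloc(256);  /* allocation moved out of the lock */
          if (!copy)
              return -1;

          rcu_read_lock();
          const char *val = lookup_protected(key);
          if (!val) {
              rcu_read_unlock();     /* error path: unlock immediately */
              free(copy);
              return -1;
          }
          strncpy(copy, val, 255);   /* copy while still protected */
          copy[255] = '\0';
          rcu_read_unlock();

          *out = copy;
          return 0;
      }

      int
      main(void)
      {
          char *out = NULL;
          int ret = lookup_and_copy("brick-1", &out);
          free(out);
          return ret;
      }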
* glusterd/svc: update pid of mux volumes from the shd process (Mohammed Rafi KC, 2019-07-09, 1 file, -0/+2)
  For a normal volume, we update the pid from the process itself while
  daemonizing, or at the end of init in no-daemon mode. Along with
  updating the pid we also lock the pidfile, to make sure that the
  process is running fine.
  With brick mux, we were updating the pidfile from glusterd after an
  attach/detach request. There are two problems with this approach:
  1) We are not holding a pidlock for any file other than the parent
  process's.
  2) There is a chance of race conditions with attach/detach. For
  example, shd start and a volume stop could race: say we are starting
  an shd and it is attached to a volume; while we are trying to link
  the pid file to the running process, the file could already have been
  deleted by the thread doing the volume stop.
  Change-Id: I29a00352102877ce09ea3f376ca52affceb5cf1a
  Updates: bz#1722541
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
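
  A hedged sketch of the ownership rule the fix moves toward - the
  daemon writes and locks its own pidfile (path and helper name are
  illustrative):

      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      static int
      write_and_lock_pidfile(const char *path)
      {
          char buf[32];
          int fd = open(path, O_CREAT | O_RDWR, 0644);
          if (fd < 0)
              return -1;
          if (lockf(fd, F_TLOCK, 0) < 0) { /* a live owner already holds it */
              close(fd);
              return -1;
          }
          (void)ftruncate(fd, 0);
          snprintf(buf, sizeof(buf), "%d\n", (int)getpid());
          (void)write(fd, buf, strlen(buf));
          return fd;  /* keep fd open: the lock lives as long as the daemon */
      }

      int
      main(void)
      {
          return write_and_lock_pidfile("/tmp/shd-example.pid") < 0;
      }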
* glusterd/tier: remove tier related code from glusterd (Hari Gowtham, 2019-05-27, 1 file, -30/+0)
  The handler functions now point to dummy functions. The switch-case
  handling for tier has also been changed to fall through to the
  default case, to avoid issues if tier is reintroduced. The tier
  changes in DHT remain as they are.
  updates: bz#1693692
  Change-Id: I80d80c9a3eb862b4440a36b31ae82b2e9d92e4dc
  Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
* glusterd: Fix coverity defects & put coverity annotations (Atin Mukherjee, 2019-05-02, 1 file, -0/+1)
  Along with fixing a few defects, put the required annotations on the
  defects which are marked ignore/false positive/intentional as per the
  coverity defect sheet. This should avoid the per-component graph
  showing many defects as open on the coverity glusterfs web page.
  Updates: bz#789278
  Change-Id: I19461dc3603a3bd8f88866a1ab3db43d783af8e4
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: coverity fixes (Atin Mukherjee, 2019-04-25, 1 file, -4/+18)
  Addresses the following:
  * CID 1124776: Resource leaks (RESOURCE_LEAK) - variable "aa" going
    out of scope leaks the storage it points to, in glusterd-volgen.c
  * A bunch of CHECKED_RETURN defects in the callers of
    synctask_barrier_init
  * CID 1400755: Error handling issues (CHECKED_RETURN) - calling
    "gf_is_service_running" without checking the return value, in
    xlators/mgmt/glusterd/src/glusterd-shd-svc.c:671 in
    glusterd_shdsvc_stop()
  * CID 1400745: Memory - illegal accesses (USE_AFTER_FREE) -
    dereferencing freed pointer "volinfo", in
    xlators/mgmt/glusterd/src/glusterd-shd-svc.c:460 in
    glusterd_shdsvc_start()
  * CID 1400742: Program hangs (LOCK) - adding an annotation to fix
    this false positive
  Updates: bz#789278
  Change-Id: I02f16e7eeb8c5cf72f7d0b29d00df4f03b3718b3
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: fix txn-id mem leak (Atin Mukherjee, 2019-03-25, 1 file, -0/+16)
  This commit ensures the following:
  1. Don't send the commit op request to the remote nodes when "gluster
  v status all" is executed, as for the status-all transaction the
  local commit gets the names of the volumes and the remote commit ops
  are technically a no-op. So there is no need for additional rpc
  requests.
  2. In the op state machine flow, if the transaction is in staged
  state and op_info.skip_locking is true, then there is no need to set
  the txn id in the priv->glusterd_txn_opinfo dictionary, which never
  gets freed.
  Fixes: bz#1691164
  Change-Id: Ib6a9300ea29633f501abac2ba53fb72ff648c822
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: glusterd memory leak while running "gluster v profile" in a loop (Mohit Agrawal, 2019-03-05, 1 file, -1/+3)
  Problem: glusterd has a memory leak while running "gluster v profile"
  in a loop.
  Solution: Fix the leaking code path to avoid the leak.
  Change-Id: Id608703ff6d0ad34ed8f921a5d25544e24cfadcd
  fixes: bz#1685414
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* glusterd: migrating rebalance commands to mgmt_v3 framework (Sanju Rakonde, 2018-12-18, 1 file, -0/+9)
  The current rebalance commands use the op-state-machine framework.
  Port them to use the mgmt_v3 framework instead.
  Change-Id: I6faf4a6335c2e2f3d54bbde79908a7749e4613e7
  fixes: bz#1655827
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd: perform rcu_read_lock/unlock() under cleanup_lock mutex (Sanju Rakonde, 2018-12-03, 1 file, -20/+20)
  Problem: glusterd should not try to acquire locks on any resources
  when it has already received a SIGTERM and cleanup has started.
  Otherwise we might hit a segfault, since the thread going through the
  cleanup path will be freeing up the resources while some other thread
  might be trying to acquire locks on the freed resources.
  Solution: Perform rcu_read_lock/unlock() under the cleanup_lock
  mutex.
  fixes: bz#1654270
  Change-Id: I87a97cfe4f272f74f246d688660934638911ce54
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
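
  A sketch of the resulting locking order, assuming liburcu-bp (the
  real code wraps this in glusterd's RCU_READ_LOCK/RCU_READ_UNLOCK
  macros and its cleanup_lock): the RCU read lock is only taken or
  dropped while holding cleanup_lock, so the SIGTERM cleanup path -
  which grabs cleanup_lock first - can never free resources out from
  under a reader.

      #include <pthread.h>
      #include <urcu-bp.h>

      static pthread_mutex_t cleanup_lock = PTHREAD_MUTEX_INITIALIZER;

      static void
      rcu_read_lock_guarded(void)
      {
          pthread_mutex_lock(&cleanup_lock);
          rcu_read_lock();
          pthread_mutex_unlock(&cleanup_lock);
      }

      static void
      rcu_read_unlock_guarded(void)
      {
          pthread_mutex_lock(&cleanup_lock);
          rcu_read_unlock();
          pthread_mutex_unlock(&cleanup_lock);
      }

      int
      main(void)
      {
          rcu_read_lock_guarded();
          /* ... read RCU-protected peer data ... */
          rcu_read_unlock_guarded();
          return 0;
      }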
* Land part 2 of clang-format changes (Gluster Ant, 2018-09-12, 1 file, -1678/+1621)
  Change-Id: Ia84cc24c8924e6d22d02ac15f611c10e26db99b4
  Signed-off-by: Nigel Babu <nigelb@redhat.com>
* glusterd: fix some coverity issues in glusterd-syncop.c (Sunny Kumar, 2018-08-30, 1 file, -3/+5)
  This patch fixes CID 1382344, 1124655 and 1325537.
  Change-Id: I2412d6b88483e32a5de1baebb3823a985b2dcfb0
  updates: bz#789278
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* glusterd: fix some coverity issues (Bhumika Goyal, 2018-08-20, 1 file, -2/+2)
  Fixes CIDs: 1241481 1241482 1274079 1274118 1274121 1274131 1274198
  1274214 1274220 1274224 1394663 1394641 382454 1382453 1382449
  1288095
  Link: https://scan6.coverity.com/reports.htm#v42388/p10714/fileInstanceId=84772667&defectInstanceId=25770661&mergedDefectId=744716
  Change-Id: Idaf434186231c8b0fff4b27c57fa23636a89c8a7
  updates: bz#789278
  Signed-off-by: Bhumika Goyal <bgoyal@redhat.com>
* changed 'sometime' messages to 'some time' (Levi Baber, 2018-06-01, 1 file, -4/+4)
  Change-Id: I0936229fc84c011db7791218bb566c971fdea174
  fixes: bz#1584864
  Signed-off-by: Levi Baber <baber@iastate.edu>
* glusterd: glusterd is releasing the locks before timeout (Sanju Rakonde, 2018-05-28, 1 file, -0/+9)
  Problem: We introduced a lock timer in mgmt v3, which releases the
  lock 3 minutes after command execution. Some commands related to
  heal/profile take more time to execute; for these commands the
  timeout is set to 10 minutes. As the lock timer is set to 3 minutes,
  glusterd releases the lock after 3 minutes - that is, locks are
  released before the command has completed its execution.
  Solution: Pass a timeout parameter from the cli to glusterd whenever
  the default timeout value changes (the default timeout can be changed
  on the command line, and for commands related to profile/heal we
  change it to 10 minutes). glusterd sets the lock timer according to
  the timeout value passed.
  Change-Id: I7b7a9a4f95ed44aca39ef9d9907f546bca99c69d
  fixes: bz#1577731
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
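
  A toy model of the negotiation described above (struct and names are
  illustrative, not glusterd symbols; the 3- and 10-minute values come
  from the message):

      #include <stdint.h>
      #include <stdio.h>

      enum {
          DEFAULT_LOCK_TIMEOUT = 180,  /* the old fixed 3 minutes */
          LONG_OP_TIMEOUT      = 600   /* heal/profile: 10 minutes */
      };

      /* The CLI ships its timeout with the request; 0 means "not sent"
       * (e.g. an older CLI), so glusterd keeps the old default. */
      static int32_t
      lock_timer_timeout(int32_t cli_timeout)
      {
          return cli_timeout > 0 ? cli_timeout : DEFAULT_LOCK_TIMEOUT;
      }

      int
      main(void)
      {
          printf("volume status: %d s\n", lock_timer_timeout(0));
          printf("volume heal  : %d s\n",
                 lock_timer_timeout(LONG_OP_TIMEOUT));
          return 0;
      }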
* glusterd: handling brick termination in brick-mux (Sanju Rakonde, 2018-05-07, 1 file, -17/+0)
  Problem: There's a race between the glusterfs_handle_terminate()
  response sent to glusterd from the last brick of the process and the
  socket disconnect event that occurs after the brick process gets
  killed.
  Solution: When it is the last brick of the brick process, instead of
  sending GLUSTERD_BRICK_TERMINATE to the brick process, glusterd kills
  the process (the same as in the non-brick-multiplexing case).
  A test case is added for
  https://bugzilla.redhat.com/show_bug.cgi?id=1549996
  Change-Id: If94958cd7649ea48d09d6af7803a0f9437a85503
  fixes: bz#1545048
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* Revert "glusterd: handling brick termination in brick-mux"Sanju Rakonde2018-03-291-0/+17
| | | | | | | | | | | | | This reverts commit a60fc2ddc03134fb23c5ed5c0bcb195e1649416b. This commit was causing multiple tests to time out when brick multiplexing is enabled. With further debugging, it's found that even though the volume stop transaction is converted into mgmt_v3 to allow the remote nodes to follow the synctask framework to process the command, there are other callers of glusterd_brick_stop () which are not synctask based. Change-Id: I7aee687abc6bfeaa70c7447031f55ed4ccd64693 updates: bz#1545048
* glusterd: handling brick termination in brick-mux (Sanju Rakonde, 2018-03-28, 1 file, -17/+0)
  Problem: There's a race between the last glusterfs_handle_terminate()
  response sent to glusterd and the kill that happens immediately if
  the terminated brick is the last brick.
  Solution: When it is the last brick of the brick process, instead of
  glusterfsd killing itself, glusterd kills the process in case of
  brick multiplexing. The gf_attach utility is changed accordingly.
  Change-Id: I386c19ca592536daa71294a13d9fc89a26d7e8c0
  fixes: bz#1545048
  BUG: 1545048
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* cli/afr: gluster volume heal info "healed" command output is not appropriate (Mohit Agrawal, 2017-10-04, 1 file, -0/+13)
  Problem: The "gluster volume heal info [healed] [heal-failed]"
  command output on the terminal is not appropriate when a volume is
  down.
  Solution: To make the message more appropriate, change the condition
  in the function "gd_syncop_mgmt_brick_op".
  Test: To verify the fix, the following procedure was used:
  1) Create a 2x3 distributed-replicate volume
  2) Set the self-heal daemon off
  3) Kill two bricks (3, 6)
  4) Create some files on the mount point
  5) Bring bricks 3 and 6 up
  6) Kill two other bricks (2 and 4)
  7) Turn the self-heal daemon on
  8) Run "gluster v heal <vol-name>"
  Note: After applying the patch, the options (healed | heal-failed)
  will be deprecated from the command line.
  BUG: 1388509
  Change-Id: I229c320c9caeb2525c76b78b44a53a64b088545a
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* Command to identify client process (hari gowtham, 2017-09-06, 1 file, -2/+10)
  command: gluster volume status <volname/all> client-list
  output:
  Client connections for volume v1
  Name        count
  -----       ------
  fuse        2
  tierd       1
  total clients for volume v1 : 3
  -----------------------------------------------------------------
  Client connections for volume v2
  Name        count
  -----       ------
  tierd       1
  fuse.gsync  1
  total clients for volume v2 : 2
  -----------------------------------------------------------------
  Updates: #178
  Change-Id: I0ff2579d6adf57cc0d3bd0161a2ec6ac6c4747c0
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: https://review.gluster.org/18095
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: hari gowtham <hari.gowtham005@gmail.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* tier: separation of attach-tier from add-brick (hari gowtham, 2017-08-01, 1 file, -0/+1)
  PROBLEM: Both attach-tier and add-brick have the same RPC and set of
  code. This becomes a hurdle while trying to implement add-brick on a
  tiered volume.
  FIX: This patch separates add-brick and attach-tier, giving them
  separate RPCs.
  Change-Id: Iec57e972be968a9ff00b15b507e56a4f6dc398a2
  BUG: 1376326
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: https://review.gluster.org/15503
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: hari gowtham <hari.gowtham005@gmail.com>
  Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* glusterd: Introduce option to limit no. of muxed bricks per process (Samikshan Bairagya, 2017-07-10, 1 file, -1/+18)
  This commit introduces a new global option that can be set to limit
  the number of multiplexed bricks in one process.
  Usage:
  # gluster volume set all cluster.max-bricks-per-process <value>
  If this option is not set then multiplexing will happen with no
  limit; i.e. a brick process will have as many bricks multiplexed to
  it as possible. In other words, the current multiplexing behaviour
  won't change if this option isn't set to any value.
  This commit also introduces a brick process instance that contains
  information about brick processes: the number of bricks handled by
  the process (which is 1 in non-multiplexing cases), the list of
  bricks, and the port number, which also serves as a unique identifier
  for each brick process instance. The brick process list is maintained
  in 'glusterd_conf_t'.
  Updates: #151
  Change-Id: Ib987d14ab0a4f6034dac01b73a4b2839f7b0b695
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: https://review.gluster.org/17469
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* libglusterfs, gfdb, glusterfs: Add missing breaks (Nigel Babu, 2017-02-26, 1 file, -0/+1)
  A few switches did not have breaks, causing fall-throughs. Most of
  them have been fixed by adding break statements; fall-through
  comments were added for those that are intentional.
  Change-Id: I84c85726b542f38504b50fefab5eba5dbcd27a07
  BUG: 1424894
  Signed-off-by: Nigel Babu <nigelb@redhat.com>
  Reviewed-on: https://review.gluster.org/16677
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
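
  The defect class in miniature (illustrative C, not the patched code):

      #include <stdio.h>

      static void
      handle(int op)
      {
          switch (op) {
          case 0:
              printf("op 0\n");
              /* fall through -- intentional: op 0 also needs op 1's work */
          case 1:
              printf("op 1\n");
              break;  /* previously missing: op 1 leaked into op 2 */
          case 2:
              printf("op 2\n");
              break;
          default:
              break;
          }
      }

      int
      main(void)
      {
          handle(0);  /* prints "op 0" then "op 1", and stops there */
          handle(2);  /* prints only "op 2" */
          return 0;
      }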
* core: run many bricks within one glusterfsd process (Jeff Darcy, 2017-01-30, 1 file, -2/+15)
  This patch adds support for multiple brick translator stacks running
  in a single brick server process. This reduces our per-brick memory
  usage by approximately 3x, and our appetite for TCP ports even more.
  It also creates the potential to avoid process/thread thrashing, and
  to improve QoS by scheduling more carefully across the bricks, but
  realizing that potential will require further work.
  Multiplexing is controlled by the "cluster.brick-multiplex" global
  option. By default it's off, and bricks are started in separate
  processes as before. If multiplexing is enabled, then *compatible*
  bricks (mostly those with the same transport options) will be started
  in the same process.
  Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
  BUG: 1385758
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: https://review.gluster.org/14763
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
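
  The option is toggled like any other global cluster option, using the
  option name given above:

      # gluster volume set all cluster.brick-multiplex on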
* tier: Tier as a service (hari gowtham, 2017-01-16, 1 file, -3/+18)
  tierd is implemented by separating it from the rebalance process.
  The commands affected:
  1) Attach tier will trigger this process instead of the old one.
  2) tier start and tier start force will also trigger this process.
  3) volume status [tier] will show the tier daemon as a process
  instead of a task, and normal tier status and tier detach status
  work.
  4) tier stop implemented.
  5) detach tier implemented separately, along with a new detach tier
  status.
  6) volume tier <volname> status will work using the changes.
  7) volume set works.
  This patch separates the tier translator from the legacy DHT
  rebalance code. It now sends the RPCs from the CLI to glusterd
  separately from the DHT rebalance code. The daemon is now a service,
  similar to the snapshot daemon, and can be viewed using the volume
  status command. The code for the validation and commit phases is the
  same as the earlier tier validation code in DHT rebalance. The
  "brickop" phase has been changed so that the status command can use
  this framework.
  The service management framework is now used; DHT rebalance does not
  use this framework. This service framework takes care of:
  *) spawning the daemon, killing it, and other such operations.
  *) volume set options, which are written into the volfile.
  *) restart and reconfigure functions. Restart is used to restart the
  daemon at two points: 1) after gluster goes down and comes up, and
  2) to stop detach tier.
  *) reconfigure is used to make immediate volfile changes without
  restarting the daemon. It also rewrites the volfile for topological
  changes (which come into play during add and remove brick).
  With this patch the log, pid, and volfile are separated and put into
  their respective directories.
  Change-Id: I3681d0d66894714b55aa02ca2a30ac000362a399
  BUG: 1313838
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/13365
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: hari gowtham <hari.gowtham005@gmail.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: Get maximum supported op-version in a cluster (Samikshan Bairagya, 2017-01-08, 1 file, -0/+4)
  "gluster volume get <VOLNAME> cluster.op-version" gives us the
  current op-version on which the cluster is operating. There was no
  command that lets the user know the maximum supported op-version that
  the cluster can run on.
  This patch adds a new global option, cluster.max-op-version, that can
  be used to retrieve the maximum supported op-version in a cluster.
  Usage:
  # gluster volume get all cluster.max-op-version
  Example output:
  Option                    Value
  ------                    -----
  cluster.max-op-version    30900
  NOTE: The only way to test this feature for now is to set the
  GD_OP_VERSION_MAX macro to different values (30800 for 3.8, 30900 for
  3.9, and so on) and rebuild glusterd. Since the regression test
  framework currently doesn't have support to simulate these tests,
  there are no accompanying regression tests for this feature. It
  should be possible to add tests once glusto comes in and makes it
  easier to run a heterogeneous cluster.
  Change-Id: I547480ee5e7912664784643e436feb198b6d16d0
  BUG: 1365822
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: http://review.gluster.org/16283
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: Introduce reset brick (Anuradha Talur, 2016-08-29, 1 file, -0/+1)
  The command basically allows replace-brick with the src and dst
  bricks being the same.
  Usage:
  gluster v reset-brick <volname> <hostname:brick-path> start
  This command kills the brick to be reset. Once this command is run,
  the admin can do other manual operations that they need to do, like
  configuring some options for the brick. Once this is done, resetting
  the brick can be continued with the following command:
  gluster v reset-brick <vname> <hostname:brick> <hostname:brick> commit {force}
  This does the job of resetting the brick. The 'force' option should
  be used when the brick already contains the volinfo id.
  Problem: On doing a disk replacement of a brick in a replicate volume
  the following two scenarios may occur:
  a) there is a chance that reads are served from this replaced-disk
  brick, which leads to empty reads.
  b) potential data loss if the next writes succeed only on the
  replaced brick, and heal is done to the other bricks from this one.
  Solution: After disk replacement, make sure that the reset-brick
  command is run for that brick, so that pending markers are set for
  the brick and it is not chosen as a source for reads and heal. But,
  as of now, replace-brick for the same brick path is not allowed. In
  order to fix the above-mentioned problem, same-brick-path
  replace-brick is needed.
  With this patch, reset-brick commit {force} will be allowed even when
  source and destination <hostname:brickpath> are identical, as long as
  1) the destination brick is not alive, and
  2) the source and destination bricks have the same brick uuid and
  path.
  Also, the destination brick after replace-brick will use the same
  port as the source brick.
  Change-Id: I440b9e892ffb781ea4b8563688c3f85c7a7c89de
  BUG: 1266876
  Signed-off-by: Anuradha Talur <atalur@redhat.com>
  Reviewed-on: http://review.gluster.org/12250
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Ashish Pandey <aspandey@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* glusterd (syncop): fix unused variable warnings/errors (Kaleb S. KEITHLEY, 2016-08-25, 1 file, -4/+0)
  http://review.gluster.org/14085 fixes a/the "leak" - via the
  generated rpc/xdr headers - of pragmas that mask these warnings.
  However, 14085 won't pass the smoke test until all the warnings are
  fixed.
  Change-Id: I40da2a344be3da4bda2370b1ae1eb77dc00b033e
  BUG: 1369124
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/15281
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* feature/bitrot: Ondemand scrub option for bitrot (Kotresh HR, 2016-08-25, 1 file, -0/+4)
  The bitrot scrubber takes 'hourly/daily/biweekly/monthly' as the
  values for 'scrub-frequency'. There is no way to schedule the
  scrubbing when the admin wants it. Ondemand scrubbing brings in the
  new option 'ondemand', with which the admin can start scrubbing on
  demand. It starts the scrubbing immediately.
  Ondemand scrubbing succeeds only if the scrubber is 'Active (Idle)'
  (waiting for its next frequency cycle to start scrubbing). It is not
  entertained when the scrubber is 'Paused' or already running.
  Here is the command line syntax:
  gluster volume bitrot <vol name> scrub ondemand
  Change-Id: I84c28904367eed827a7dae8d6a535c14b28e9f4d
  BUG: 1366195
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/15111
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
* glusterd: clean dead initializations (Prasanna Kumar Kalever, 2016-04-04, 1 file, -6/+0)
  This patch cleans up unused variable initializations, as well as
  declarations which are nowhere used in the code.
  Change-Id: I784165fc6e91297758079699dd9583d5203b7793
  BUG: 1253831
  Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
  Reviewed-on: http://review.gluster.org/11929
  Tested-by: Prasanna Kumar Kalever <pkalever@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* glusterd/syncop: double free of frame stack (Mohammed Rafi KC, 2016-03-31, 1 file, -13/+35)
  If an rpc message from glusterd during the brick-op phase failed
  without being sent, the frame was freed in both the caller function
  and the callback function.
  Change-Id: I63cb3be30074e9a074f6895faa25b3d091f5b6a5
  BUG: 1322262
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/13854
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* glusterd: fixing a few memory leaks in glusterd (Gaurav Kumar Garg, 2016-02-23, 1 file, -0/+4)
  The current glusterd code base has a memory leak: the memory
  allocated by the dict_allocate_and_serialize function in
  "gd_syncop_mgmt_v3_lock" and "gd_syncop_mgmt_v3_unlock" is not freed
  on exit. The fix is to free the memory after these functions exit.
  Thanks to Carlos and Roman for finding the issue and the fix.
  Change-Id: Id67aa794c84969830ca7ea8c2374f80c64d7a639
  BUG: 1287517
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Signed-off-by: Carlos Chinea <carlos.chinea@nokia.com>
  Signed-off-by: Roman Tereshonkov <roman.tereshonkov@nokia.com>
  Reviewed-on: http://review.gluster.org/12927
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
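
  A self-contained sketch of the leak and the fix's shape;
  serialize_dict() here is a stand-in for dict_allocate_and_serialize(),
  whose output buffer the caller owns:

      #include <stdlib.h>
      #include <string.h>

      static int
      serialize_dict(const char *dict, char **buf, unsigned *len)
      {
          *buf = strdup(dict);  /* stub: real code serializes a dict_t */
          if (!*buf)
              return -1;
          *len = (unsigned)strlen(*buf);
          return 0;
      }

      static int
      send_lock_req(const char *dict)
      {
          char *buf = NULL;
          unsigned len = 0;
          int ret = -1;

          if (serialize_dict(dict, &buf, &len))
              goto out;

          /* ... hand buf/len to the rpc layer, which copies them ... */
          ret = 0;
      out:
          free(buf);  /* the fix: the buffer is released on every path */
          return ret;
      }

      int
      main(void)
      {
          return send_lock_req("volname=patchy");
      }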
* all: fixes for clang compile warnings (Kaleb S KEITHLEY, 2016-02-15, 1 file, -1/+1)
  cli/src/cli-cmd-parser.c (chenk)
  cli/src/cli-xml-output.c (spandit)
  cli/src/cli.c (chenk)
  libglusterfs/src/common-utils.c (vmallika)
  libglusterfs/src/gfdb/gfdb_sqlite3.c (jfernand +1)
  rpc/rpc-transport/socket/src/socket.c (?)
  xlators/cluster/afr/src/afr-transaction.c (?)
  xlators/cluster/dht/src/dht-common.h (srangana +2)
  xlators/cluster/dht/src/dht-selfheal.c (srangana +2)
  xlators/debug/io-stats/src/io-stats.c (R. Wareing)
  xlators/features/barrier/src/barrier.c (vshastry)
  xlators/features/bit-rot/src/bitd/bit-rot-scrub.h (vshankar +1)
  xlators/features/shard/src/shard.c (kdhananj +1)
  xlators/mgmt/glusterd/src/glusterd-ganesha.c (skoduri)
  xlators/mgmt/glusterd/src/glusterd-handler.c (atinmu)
  xlators/mgmt/glusterd/src/glusterd-op-sm.h (atinmu)
  xlators/mgmt/glusterd/src/glusterd-snapshot.c (spandit)
  xlators/mgmt/glusterd/src/glusterd-syncop.c (atinmu)
  xlators/mgmt/glusterd/src/glusterd-volgen.c (atinmu)
  xlators/protocol/client/src/client-messages.h (mselvaga +1)
  xlators/storage/bd/src/bd-helper.c (M. Mohan Kumar)
  xlators/storage/bd/src/bd.c (M. Mohan Kumar)
  xlators/storage/posix/src/posix.c (nbalacha +1)
  Change-Id: I85934fbcaf485932136ef3acd206f6ebecde61dd
  BUG: 1293133
  Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/13031
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
* glusterd: cli command implementation for bitrot scrub status (Gaurav Kumar Garg, 2015-11-19, 1 file, -1/+4)
  The CLI command for bitrot scrub status is:
  gluster volume bitrot <volname> scrub status
  The above command shows the statistics of the bitrot scrubber. On
  execution it shows some common scrubber tunable values for the volume
  <VOLNAME>, followed by the scrubber statistics of the individual
  nodes.
  Sample output for a single node:
  Volume name : <VOLNAME>
  State of scrub: Active
  Scrub frequency: biweekly
  Bitrot error log location: /var/log/glusterfs/bitd.log
  Scrubber error log location: /var/log/glusterfs/scrub.log
  =========================================================
  Node name:
  Number of Scrubbed files:
  Number of Unsigned files:
  Last completed scrub time:
  Duration of last scrub:
  Error count:
  =========================================================
  This is just infrastructure. The list of bad files, the last scrub
  time, and the error count will be taken care of by
  http://review.gluster.org/#/c/12503/ and
  http://review.gluster.org/#/c/12654/.
  Change-Id: I3ed3c7057c9d0c894233f4079a7f185d90c202d1
  BUG: 1207627
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10231
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd: fix op-version bump up flow (Atin Mukherjee, 2015-08-03, 1 file, -10/+16)
  If a cluster is upgraded from 3.5 to the latest version, "gluster
  volume set all cluster.op-version <VERSION>" will throw an error
  message back to the user saying unlocking failed. This is because the
  unlock phase tries to release a volume-wise lock while the lock was
  taken cluster-wide. The problem surfaced because the op-version is
  updated in the commit phase, so unlocking then works in the v3
  framework where it should have used the cluster unlock.
  The fix is to decide which lock/unlock is to be followed before
  invoking the lock phase.
  Change-Id: Iefb271a058431fe336a493c24d240ed833f279c5
  BUG: 1248298
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/11798
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Get the local txn_info based on trans_id in op_sm call backs (anand, 2015-07-06, 1 file, -12/+0)
  Issue: When two or more transactions are running concurrently in
  op_sm, the global op_info might get corrupted.
  Fix: Get the local txn_info based on trans_id instead of using the
  global txn_info for commands (rebalance, profile) which use op_sm in
  the originator.
  TODO: Handle errors properly in the callbacks and completely remove
  the global op_info from op_sm.
  Change-Id: I9d61388acc125841ddc77e2bd560cb7f17ae0a5a
  BUG: 1229139
  Signed-off-by: anand <anekkunt@redhat.com>
  Reviewed-on: http://review.gluster.org/11120
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Porting left out log messages to new framework (Nandaja Varma, 2015-06-26, 1 file, -3/+7)
  Change-Id: I70d40ae3b5f49a21e1b93f82885cd58fa2723647
  BUG: 1235538
  Signed-off-by: Nandaja Varma <nandaja.varma@gmail.com>
  Reviewed-on: http://review.gluster.org/11388
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* handshake, locks, mountbroker, syncop/glusterd: New logging framework (Nandaja Varma, 2015-06-14, 1 file, -41/+73)
  Change-Id: If491a6945b7a0afa10165ff9f9874a244aece36f
  BUG: 1194640
  Signed-off-by: Nandaja Varma <nandaja.varma@gmail.com>
  Reviewed-on: http://review.gluster.org/9864
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd/snapshot: Return correct errno in events of failure - PATCH 2 (Avra Sengupta, 2015-06-02, 1 file, -2/+3)
  ENUM          RETCODE   ERROR
  -------------------------------------------------------------
  EG_INTRNL     30800     Internal Error
  EG_OPNOTSUP   30801     Gluster Op Not Supported
  EG_ANOTRANS   30802     Another Transaction in Progress
  EG_BRCKDWN    30803     One or more brick is down
  EG_NODEDWN    30804     One or more node is down
  EG_HRDLMT     30805     Hard Limit is reached
  EG_NOVOL      30806     Volume does not exist
  EG_NOSNAP     30807     Snap does not exist
  EG_RBALRUN    30808     Rebalance is running
  EG_VOLRUN     30809     Volume is running
  EG_VOLSTP     30810     Volume is not running
  EG_VOLEXST    30811     Volume exists
  EG_SNAPEXST   30812     Snapshot exists
  EG_ISSNAP     30813     Volume is a snap volume
  EG_GEOREPRUN  30814     Geo-Replication is running
  EG_NOTTHINP   30815     Bricks are not thinly provisioned
  Change-Id: I49a170cdfd77df11fe677e09f4e063d99b159275
  BUG: 1212413
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/10588
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/snapshot: Return correct errno in events of failure - PATCH 1 (Avra Sengupta, 2015-05-28, 1 file, -3/+10)
  RETCODE   ERROR
  -------------------------------------------
  30800     Internal Error
  30801     Another Transaction In Progress
  Change-Id: Ica7fd2e513b2c28717b6df73cfb2667725dbf057
  BUG: 1212413
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/10313
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: allocate peerid to store in frame->cookie (Atin Mukherjee, 2015-05-28, 1 file, -23/+47)
  Commit a1de3b05 used a peerid from the stack, stored it in
  frame->cookie, and referred to it in the subsequent callback. The
  existence of this variable is not guaranteed in the cbk, since it is
  not dynamically allocated. The fix is to dynamically manage the
  peerid in the frame cookie.
  This patch also fixes a problem in gd_sync_task_begin() where unlock
  is not triggered if the cluster is running with an op-version lower
  than 3.6, resulting in commands failing with "another transaction is
  in progress".
  Change-Id: I0d22cf663df53ef3769585703944577461061312
  BUG: 1223213
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/10842
  Tested-by: NetBSD Build System
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
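
  A minimal model of the fix (types and names illustrative, not the
  glusterd code): copy the uuid to the heap before stashing it in the
  cookie, and let the callback free it.

      #include <stdlib.h>
      #include <string.h>

      /* A 16-byte uuid, matching the layout of the compat uuid type. */
      typedef unsigned char gf_uuid[16];

      /* Before: the cookie pointed at a stack variable that was gone by
       * the time the callback ran. After: the uuid survives on the heap. */
      static void *
      peerid_cookie_new(const gf_uuid peerid)
      {
          unsigned char *cookie = malloc(sizeof(gf_uuid));
          if (cookie)
              memcpy(cookie, peerid, sizeof(gf_uuid));
          return cookie;
      }

      static void
      lock_cbk(void *cookie)
      {
          /* ... look up the peer by this uuid ... */
          free(cookie);   /* the callback owns and releases the copy */
      }

      int
      main(void)
      {
          gf_uuid id = { 0 };
          void *cookie = peerid_cookie_new(id);
          if (cookie)
              lock_cbk(cookie);
          return 0;
      }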
* glusterd: fix double-free of rebalance process' rpc object (Krishnan Parthasarathi, 2015-05-26, 1 file, -0/+5)
  Change-Id: I0c79c4de47a160b1ecf3a8994eedc02e3f5002a9
  BUG: 1223338
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/10872
  Tested-by: NetBSD Build System
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* NFS-Ganesha: Locking global options file (Meghana Madhusudhan, 2015-05-06, 1 file, -7/+44)
  The global option "gluster features.ganesha enable" writes into the
  global 'option' file. The snapshot feature also writes into the same
  file. To handle concurrent multiple transactions correctly, a new
  lock has to be introduced on this file. Every operation using this
  file needs to contend for the new lock type.
  Change-Id: Ia8a324d2a466717b39f2700599edd9f345b939a9
  BUG: 1200254
  Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
  Reviewed-on: http://review.gluster.org/10130
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: soumya k <skoduri@redhat.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* libglusterfs: Implement the sync lock as a recursive lock to avoid a crash (anand, 2015-04-28, 1 file, -12/+0)
  Problem: In glusterd we use the big lock, which is implemented on top
  of the synctask framework, for thread synchronization, and RCU locks
  for data consistency. The synctask framework swaps threads when no
  worker pool threads are available; due to this, rcu_read_lock and
  rcu_read_unlock could happen on different threads (which urcu-bp does
  not allow), resulting in a glusterd crash.
  Fix: To avoid releasing the sync lock (big lock) in the middle of an
  RCU critical section, implement the sync lock as a recursive lock.
  More details: http://www.spinics.net/lists/gluster-devel/msg14632.html
  Change-Id: I2b56c1caf3f0470f219b1adcaf62cce29cdc6b88
  BUG: 1211640
  Signed-off-by: anand <anekkunt@redhat.com>
  Reviewed-on: http://review.gluster.org/10285
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
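
  A sketch of a recursive lock with the same owner/depth semantics,
  modeled with pthreads (illustrative; the real sync lock tracks the
  owning synctask rather than a pthread):

      #include <pthread.h>

      typedef struct {
          pthread_mutex_t guard;
          pthread_cond_t  free_cond;
          pthread_t       owner;
          int             depth;   /* 0 => unlocked */
      } rec_lock_t;

      static void
      rec_lock(rec_lock_t *l)
      {
          pthread_mutex_lock(&l->guard);
          if (l->depth && pthread_equal(l->owner, pthread_self())) {
              l->depth++;                      /* re-entry by the owner */
          } else {
              while (l->depth)
                  pthread_cond_wait(&l->free_cond, &l->guard);
              l->owner = pthread_self();
              l->depth = 1;
          }
          pthread_mutex_unlock(&l->guard);
      }

      static void
      rec_unlock(rec_lock_t *l)
      {
          pthread_mutex_lock(&l->guard);
          if (--l->depth == 0)
              pthread_cond_broadcast(&l->free_cond);
          pthread_mutex_unlock(&l->guard);
      }

      int
      main(void)
      {
          rec_lock_t l = { PTHREAD_MUTEX_INITIALIZER,
                           PTHREAD_COND_INITIALIZER };
          rec_lock(&l);
          rec_lock(&l);    /* re-entry by the same owner: no deadlock */
          rec_unlock(&l);
          rec_unlock(&l);
          return 0;
      }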
* glusterd: Remove direct references to peerinfo in frame cookies (Kaushal M, 2015-04-26, 1 file, -38/+92)
  RCU protection requires that we don't have direct references to
  protected data structures outside read-critical sections.
  This change was developed on the git branch at [1]. This commit is a
  combination of the following commits on the development branch:
  82ebfdd Remove direct references to peerinfo in frame cookies
  dec4bec Remove incorrect and unneeded code from
          gd_syncop_mgmt_v3_unlock_cbk_fn
  7aced7b Use stack allocated uuid for frame cookie.
  38e4124 Address comments from 10192/2
  [1]: https://github.com/kshlm/glusterfs/tree/urcu
  Change-Id: Ic50e5fca0be72af5090f4cf318efa55d29075de9
  BUG: 1205186
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/10192
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: NetBSD Build System
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: support for tier volumes 'detach start' and 'detach commit' (Dan Lambright, 2015-04-22, 1 file, -0/+17)
  These commands work in a manner analogous to rebalancing when
  removing a brick. The existing migration daemon detects "detach
  start" and switches to moving data off the hot tier. While in this
  state all lookups are directed to the cold tier.
  gluster v detach-tier <vol> start
  gluster v detach-tier <vol> commit
  The status and stop cli commands shall be submitted separately.
  Change-Id: I24fda5cc3ba74f5fb8aa9a3234ad51f18b80a8a0
  BUG: 1205540
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Signed-off-by: root <root@localhost.localdomain>
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/10108
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Tested-by: NetBSD Build System
* glusterd: Replace transaction peers lists (Kaushal M, 2015-04-13, 1 file, -123/+127)
  Transaction peers lists were used in GlusterD to store the peers
  belonging to a transaction. This was needed to prevent newly added
  peers from performing partial transactions, which could be incorrect.
  It was accomplished by creating a separate transaction peers list at
  the beginning of every transaction. A transaction peers list
  referenced the peerinfo data structures of the peers which were
  present at the beginning of the transaction.
  RCU protection of the peerinfos referenced by a transaction peers
  list is a hard problem, and difficult to do correctly. To have proper
  RCU protection of peerinfos, the transaction peers lists have been
  replaced by an alternative method to identify peers that belong to a
  transaction: use the global peers list along with generation numbers.
  This change introduces a global peer-list generation number, and a
  generation number for each peerinfo object. Whenever a peerinfo
  object is created, the global generation number is bumped, and the
  peerinfo's generation number is set to the bumped global generation.
  With the above changes, the algorithm to identify peers belonging to
  a transaction with RCU protection is as follows:
  - At the beginning of a transaction, the current global generation
    number is saved.
  - To identify whether a peer belongs to the transaction:
    - Start an RCU read critical section.
    - For each peer in the global peers list: if the peer's generation
      number is not greater than the saved generation number, continue
      with the action on the peer.
    - End the RCU read critical section.
  The above algorithm guarantees that:
  - the peers list is not modified while a transaction is iterating
    through it, and
  - the transaction actions are only done on peers that were present
    when the transaction started.
  But, as a transaction could iterate over the peers list multiple
  times, the algorithm cannot guarantee that the same set of peers will
  be selected every time: a peer could get deleted between two
  iterations of the list within a transaction. This problem existed
  with the transaction peers lists as well, but unlike before it will
  no longer lead to invalid memory accesses and potential crashes. The
  remaining problem will be addressed separately.
  This change was developed on the git branch at [1]. This commit is a
  combination of the following commits on the development branch:
  52ded5b Add timespec_cmp
  44aedd8 Add create timestamp to peerinfo
  7bcbea5 Fix some silly mistakes
  13e3241 Add start time to opinfo
  17a6727 Use timestamp comparisions to identify xaction peers instead
          of a xaction peer list
  3be05b6 Correct check for peerinfo age
  70d5b58 Use read-critical sections for peer list iteration
  ba4dbca Use peerinfo timestamp checks in op-sm instead of xaction
          peer list
  d63f811 Add more peer status checks when iterating peers list in
          glusterd-syncop
  1998a2a Timestamp based peer list traversal of mgmtv3 xactions
  f3c1a42 Remove transaction peer lists
  b8b08ee Remove unused labels
  32e5f5b Remove 'npeers' usage
  a075fb7 Remove 'npeers' from mgmt-v3 framework
  12c9df2 Use generation number instead of timestamps.
  9723021 Remove timespec_cmp
  80ae2c6 Remove timespec.h include
  a9479b0 Address review comments on 10147/4
  [1]: https://github.com/kshlm/glusterfs/tree/urcu
  Change-Id: I9be1033525c0a89276f5b5d83dc2eb061918b97f
  BUG: 1205186
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/10147
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
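
  The generation-number algorithm above, as a compact sketch (data
  structures illustrative; the real code walks an RCU-protected peer
  list under rcu_read_lock()):

      #include <stdint.h>
      #include <stdio.h>

      struct peer {
          uint32_t generation;   /* stamped at creation time */
          struct peer *next;
      };

      static uint32_t global_generation;
      static struct peer *peers;

      static void
      peer_add(struct peer *p)
      {
          p->generation = ++global_generation;  /* bump, then stamp */
          p->next = peers;
          peers = p;
      }

      /* A transaction snapshots the counter when it starts... */
      static uint32_t
      txn_begin(void)
      {
          return global_generation;
      }

      /* ...and later acts only on peers that existed at that point. */
      static void
      txn_foreach_peer(uint32_t txn_gen, void (*fn)(struct peer *))
      {
          /* real code: rcu_read_lock() */
          for (struct peer *p = peers; p; p = p->next)
              if (p->generation <= txn_gen)
                  fn(p);
          /* real code: rcu_read_unlock() */
      }

      static void
      act(struct peer *p)
      {
          printf("acting on peer of generation %u\n", p->generation);
      }

      int
      main(void)
      {
          struct peer a = { 0 }, b = { 0 }, late = { 0 };
          peer_add(&a);
          peer_add(&b);
          uint32_t txn = txn_begin();
          peer_add(&late);             /* joins after the txn started */
          txn_foreach_peer(txn, act);  /* visits a and b, skips late */
          return 0;
      }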
* Avoid conflict between contrib/uuid and system uuid (Emmanuel Dreyfus, 2015-04-04, 1 file, -14/+14)
  glusterfs relies on the Linux uuid implementation, whose API is
  incompatible with most other systems' uuid. As a result, libglusterfs
  has to embed contrib/uuid, which is the Linux implementation, on
  non-Linux systems. This implementation is incompatible with the
  system's built-in one, but the symbols have the same names.
  Usually this is not a problem, because when we link with -lglusterfs,
  libc's symbols are trumped. However, there is a problem when a
  program not linked with -lglusterfs dlopen()s a glusterfs component.
  In such a case, libc's uuid implementation is already loaded in the
  calling program, and it will be used instead of libglusterfs's
  implementation, causing crashes.
  A possible workaround is to pre-load libglusterfs in the calling
  program (using LD_PRELOAD on NetBSD, for instance), but such a
  mechanism is neither portable nor flexible. A much better approach is
  to rename libglusterfs's uuid_* functions to gf_uuid_* to avoid any
  possible conflict. This is what this change attempts.
  BUG: 1206587
  Change-Id: I9ccd3e13afed1c7fc18508e92c7beb0f5d49f31a
  Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
  Reviewed-on: http://review.gluster.org/10017
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
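
  An illustration of the rename with two representative prototypes
  (header-style declarations only; the patch applies the same gf_
  prefix across the whole embedded uuid API):

      typedef unsigned char uuid_t[16];  /* contrib/uuid's layout */

      /* Before: same names as libc/libuuid, so the dynamic linker may
       * bind a dlopen()ed gluster component to the system copy. */
      void uuid_generate(uuid_t out);
      int  uuid_parse(const char *in, uuid_t uu);

      /* After: prefixed names are private to gluster, no collision. */
      void gf_uuid_generate(uuid_t out);
      int  gf_uuid_parse(const char *in, uuid_t uu);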