path: root/xlators/mgmt/glusterd/src/glusterd-utils.h
Commit message (Author, Date, Files, Lines)

* libglusterfs: Move devel headers under glusterfs directory (ShyamsundarR, 2018-12-05, 1 file, -6/+6)

  libglusterfs devel package headers were referenced in code using a
  program's own include semantics. While this works, it can be improved,
  especially when dealing with out-of-tree xlator builds or out-of-tree
  devel package usage in general. Towards this, the following changes
  are done:
  - moved all devel headers under a glusterfs directory
  - included these headers using system-header notation (<>) in all
    code outside of libglusterfs
  - included these headers using own-program notation ("") within
    libglusterfs
  Although big, this change only moves the headers around and corrects
  how they are included from other sources. This lets us include
  libglusterfs headers correctly, without namespace conflicts.

  Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
  Updates: bz#1193929
  Signed-off-by: ShyamsundarR <srangana@redhat.com>

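  As a minimal sketch of the resulting include discipline (the header
  names are real libglusterfs headers, but the snippet is illustrative
  rather than taken from the patch):

      /* outside libglusterfs: system-header notation, resolved against
       * the installed devel headers, e.g. /usr/include/glusterfs/ */
      #include <glusterfs/glusterfs.h>
      #include <glusterfs/xlator.h>

      /* within libglusterfs itself: own-program notation */
      #include "glusterfs.h"
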
* glusterd/mux: Optimize brick disconnect handler code (Mohammed Rafi KC, 2018-11-18, 1 file, -1/+2)

  Removed unnecessary iteration in the brick disconnect handler when
  multiplexing is enabled.

  Change-Id: I62dd3337b7e7da085da5d76aaae206e0b0edff9f
  fixes: bz#1650115
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>

* core: Portmap entries showing stale brick entries when bricks are down (Mohit Agrawal, 2018-11-12, 1 file, -0/+2)

  Problem: pmap shows stale brick entries after a brick is brought
  down, because glusterd_brick_rpc_notify calls gf_is_service_running
  before calling pmap_registry_remove to confirm the brick instance.

  Solution:
  1) Change the condition in gf_is_pid_running to verify process
     existence, using open instead of access to achieve the same.
  2) Call search_brick_path_from_proc in __glusterd_brick_rpc_notify
     along with gf_is_service_running.

  Change-Id: Ia663ac61c01fdee6c12f47c0300cdf93f19b6a19
  fixes: bz#1646892
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>

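  A standalone sketch of the gf_is_pid_running idea (illustrative; the
  in-tree helper's exact checks may differ):

      #include <fcntl.h>
      #include <stdbool.h>
      #include <stdio.h>
      #include <unistd.h>

      static bool
      pid_is_running(pid_t pid)
      {
          char proc_path[64];
          int fd;

          snprintf(proc_path, sizeof(proc_path), "/proc/%d/cmdline",
                   (int)pid);

          /* open() actually pins the proc entry; a plain access()
           * check can report success for an entry we could never
           * open */
          fd = open(proc_path, O_RDONLY);
          if (fd < 0)
              return false;

          close(fd);
          return true;
      }
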
* glusterd: allow shared-storage to use bricks under glusterd working directory (Sanju Rakonde, 2018-11-08, 1 file, -2/+2)

  With commit 44e4db, we no longer allow a user to create a volume
  using glusterd's working directory, or any subdirectory under it, as
  a brick. This broke shared-storage, since the volume
  "gluster-shared-storage" is created using bricks under glusterd's
  working directory. With this patch, the "gluster-shared-storage"
  volume is allowed to use bricks under glusterd's working directory.

  fixes: bz#1647029
  Change-Id: Ifcbcf4576eea12cf46f199dea287b29bd3ec3bfd
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>

* Land clang-format changes (Gluster Ant, 2018-09-12, 1 file, -434/+416)

  Change-Id: I6f5d8140a06f3c1b2d196849299f8d483028d33b

* Some (mgmt) xlators: use dict_{setn|getn|deln|get_int32n|set_int32n|set_strn} (Yaniv Kaul, 2018-09-09, 1 file, -2/+3)

  In a previous patch (https://review.gluster.org/20769) we added the
  ability to pass the key length to dict_* funcs, removing the need to
  strlen() it. This patch moves some xlators to use it.
  - It also adds dict_get_int32n, which was missing.
  - It also reduces the size of some key variables. They were set to
    1024 bytes or PATH_MAX where sometimes 64 bytes were really enough.
  Please review carefully:
  1. That I did not reduce the size of the key variables too much.
  2. That I did not mix up some keys.
  Compile-tested only!

  Change-Id: Ic729baf179f40e8d02bc2350491d4bb9b6934266
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>

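  A hedged fragment showing the length-passing API in use (the key name
  here is made up; SLEN() is the compile-time strlen macro commonly
  used with these functions):

      int32_t count = 0;
      int ret;

      /* the caller supplies the key length, so dict_* need not
       * strlen() the key on every call */
      ret = dict_set_int32n(dict, "brick-count", SLEN("brick-count"), 4);
      if (ret)
          goto out;

      ret = dict_get_int32n(dict, "brick-count", SLEN("brick-count"),
                            &count);
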
* glusterd: ignore importing volume which is undergoing a delete operation (Atin Mukherjee, 2018-08-16, 1 file, -1/+1)

  Problem explanation: In a 3-node cluster, if N1 originates a delete
  operation and, while N1's commit phase completes, the glusterd
  service of N2 or N3 gets disconnected from N1 (before completing the
  commit phase), N1 ends up importing the volume, which is in flight
  for a delete on the other nodes, as a fresh volume, resulting in an
  incorrect configuration state.

  Fix: Mark a volume as stage-deleted once a volume delete operation
  passes its staging phase, and reset this flag during the unlock
  phase. If the same volume gets imported to other peers during this
  intermediate phase, it should not be considered recreated.

  An automated .t is quite tough to implement with the current infra.
  Test case:
  1. Keep creating and deleting volumes in a loop on a 3-node cluster.
  2. Simulate n/w failure between the peers (ifdown followed by ifup).
  3. Check if the output of 'gluster v list | wc -l' is the same across
     all 3 nodes during 1 & 2.

  Change-Id: Ifdd5dc39699120258d7fdd42fe2deb9de25c6246
  Fixes: bz#1605077
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>

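  The guard described above boils down to something like this sketch
  (the flag follows the description in the fix; the surrounding code is
  illustrative):

      /* skip importing a volume whose delete has passed staging on
       * the originator; it must not be re-created as a fresh volume */
      if (volinfo->stage_deleted) {
          ret = 0;
          goto out;
      }
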
* glusterd: Add multiple checks before attach/start a brick (Mohit Agrawal, 2018-07-27, 1 file, -3/+3)

  Problem: In a brick-mux scenario, glusterd is sometimes unable to
  start/attach a brick while 'gluster v status' shows the brick as
  already running.

  Solution:
  1) To make sure a brick is running, check for the brick_path in
     /proc/<pid>/fd; if the brick is consumed by the brick process, the
     brick stack has come up, otherwise not.
  2) Before starting/attaching a brick, check whether the brick is
     mounted.
  3) When printing volume status, check that the brick is consumed by
     some brick process.

  Test: To test the same, the following procedure was used:
  1) Set up a brick-mux environment on a VM.
  2) Put a breakpoint in gdb in posix_health_check_thread_proc at the
     point of notifying the GF_EVENT_CHILD_DOWN event.
  3) Forcefully unmount any one brick path.
  4) Check 'gluster v status'; it shows N/A for the brick.
  5) Try to start the volume with the force option; glusterd throws the
     message "No device available for mount brick".
  6) Mount the brick_root path.
  7) Try to start the volume with the force option.
  8) The down brick is started successfully.

  Change-Id: I91898dad21d082ebddd12aa0d1f7f0ed012bdf69
  fixes: bz#1595320
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>

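  A standalone, illustrative version of the /proc/<pid>/fd check from
  point 1 (the in-tree helper is search_brick_path_from_proc(); details
  may differ):

      #include <dirent.h>
      #include <limits.h>
      #include <stdbool.h>
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      static bool
      brick_in_use_by_pid(pid_t pid, const char *brick_path)
      {
          char fd_dir[64], link_path[512], target[PATH_MAX];
          struct dirent *entry;
          DIR *dirp;
          ssize_t len;
          bool found = false;

          snprintf(fd_dir, sizeof(fd_dir), "/proc/%d/fd", (int)pid);
          dirp = opendir(fd_dir);
          if (!dirp)
              return false;

          while ((entry = readdir(dirp)) != NULL) {
              snprintf(link_path, sizeof(link_path), "%s/%s", fd_dir,
                       entry->d_name);
              len = readlink(link_path, target, sizeof(target) - 1);
              if (len < 0)
                  continue; /* '.', '..', or an fd that just closed */
              target[len] = '\0';
              /* an open fd under the brick root means the brick stack
               * has come up in this process */
              if (strncmp(target, brick_path, strlen(brick_path)) == 0) {
                  found = true;
                  break;
              }
          }

          closedir(dirp);
          return found;
      }
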
* glusterd: show brick online after port registration even in brick-mux (Pranith Kumar K, 2018-07-05, 1 file, -1/+2)

  Problem: With brick-mux, glusterd marks bricks as online even before
  brick attach completes on them. This can lead to a race where scripts
  that check whether the bricks are online assume a brick is online
  before it is completely online.

  Fix: Wait for the callback from the brick before marking the port as
  registered, so that volume status shows the correct status of the
  brick.

  fixes bz#1597568
  Change-Id: Icd3dc62506af0cf75195e96746695db823312051
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>

* glusterd: handling brick termination in brick-mux (Sanju Rakonde, 2018-05-07, 1 file, -1/+2)

  Problem: There's a race between the glusterfs_handle_terminate()
  response sent to glusterd from the last brick of the process and the
  socket disconnect event that follows after the brick process is
  killed.

  Solution: When it is the last brick of the brick process, instead of
  sending GLUSTERD_BRICK_TERMINATE to the brick process, glusterd kills
  the process (the same as in the non-brick-multiplexing case).

  The test case is added for
  https://bugzilla.redhat.com/show_bug.cgi?id=1549996

  Change-Id: If94958cd7649ea48d09d6af7803a0f9437a85503
  fixes: bz#1545048
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>

* Revert "glusterd: handling brick termination in brick-mux"Sanju Rakonde2018-03-291-2/+1
| | | | | | | | | | | | | This reverts commit a60fc2ddc03134fb23c5ed5c0bcb195e1649416b. This commit was causing multiple tests to time out when brick multiplexing is enabled. With further debugging, it's found that even though the volume stop transaction is converted into mgmt_v3 to allow the remote nodes to follow the synctask framework to process the command, there are other callers of glusterd_brick_stop () which are not synctask based. Change-Id: I7aee687abc6bfeaa70c7447031f55ed4ccd64693 updates: bz#1545048
* glusterd: handling brick termination in brick-mux (Sanju Rakonde, 2018-03-28, 1 file, -1/+2)

  Problem: There's a race between the last glusterfs_handle_terminate()
  response sent to glusterd and the kill that happens immediately if
  the terminated brick is the last brick.

  Solution: When it is the last brick of the brick process, instead of
  glusterfsd killing itself, glusterd kills the process in the brick
  multiplexing case. The gf_attach utility is changed accordingly.

  Change-Id: I386c19ca592536daa71294a13d9fc89a26d7e8c0
  fixes: bz#1545048
  BUG: 1545048
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>

* glusterd: volume get fixes for client-io-threads & quorum-type (Ravishankar N, 2018-03-07, 1 file, -1/+2)

  1. If a replica volume created on glusterfs-3.8 was upgraded to
     glusterfs-3.12, `gluster vol get volname client-io-threads`
     displayed 'on' even though it wasn't, and the xlator wasn't loaded
     in the client graph. This was due to removing certain checks in
     glusterd_get_default_val_for_volopt as part of commit
     47604fad4c2a3951077e41e0c007ceb979bb2c24. Fix it.
  2. Also, as part of the op-version bump-up, client-io-threads was
     being loaded on the clients during volfile regeneration. Prevent
     it.
  3. AFR assumes quorum-type to be auto in newly created replica 3 (odd
     replica in general) volumes, but `gluster vol get quorum-type`
     displays 'none'. Fix it.

  Change-Id: I19e586361ed1065c70fb378533d3b4dac1095df9
  BUG: 1545056
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>

* glusterd: import volumes in separate synctask (Atin Mukherjee, 2018-02-09, 1 file, -0/+4)

  With brick multiplexing, the prerequisite for attaching a brick to an
  existing brick process is that the compatible brick finishes its
  initialization and portmap sign-in; hence the thread might have to go
  to sleep and context-switch the synctask to allow the brick process
  to communicate with glusterd. In the normal code path this works
  fine, as glusterd_restart_bricks() is launched through a separate
  synctask.

  In case there's a mismatch of the volume when glusterd restarts,
  glusterd_import_friend_volume is invoked, which then tries to call
  glusterd_start_bricks() from the main thread, and may land in a
  similar situation. Since this is not done through a separate
  synctask, the first brick will never get its turn to finish all of
  its handshaking, and as a consequence all the bricks will fail to get
  attached to it.

  Solution: Execute import volume and glusterd restart bricks in
  separate synctasks. Importing snaps also had to be done through a
  synctask, as the parent volume needs to be available for the snap
  import functionality to work.

  Change-Id: I290b244d456afcc9b913ab30be4af040d340428c
  BUG: 1540607
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: add profile_enabled flag in get-state (Atin Mukherjee, 2018-01-25, 1 file, -0/+3)

  Change-Id: I09f348ed7ae6cd481f8c4d8b4f65f2f2f6aad84e
  BUG: 1537364
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: connect to an existing brick process when quorum status is NOT_APPLICABLE_QUORUM (Atin Mukherjee, 2018-01-05, 1 file, -1/+2)

  First of all, this patch reverts commit 635c1c3, as the same was
  causing a regression with bricks not coming up on time when a node is
  rebooted. This patch tries to fix the problem in a different way, by
  just trying to connect to an existing running brick when the quorum
  status is not applicable.

  Change-Id: I0efb5901832824b1c15dcac529bffac85173e097
  BUG: 1509845
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>

* Tier: Stop tierd for detach start (hari gowtham, 2017-12-01, 1 file, -0/+3)

  Problem: tierd was stopped only after detach commit. This makes the
  detach take longer: the detach demotes files to the cold brick, and
  if the promotion frequency is hit, tierd starts to promote files to
  the hot tier again.

  Fix: Stop tierd after detach start so the files get demoted faster.

  Note: is_tier_enabled was not maintained properly. That has been
  fixed too, along with some code cleanup.

  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Change-Id: I532f7410cea04fbb960105483810ea3560ca149b
  BUG: 1446381

* Coverity Issue: PW.INCLUDE_RECURSION in several files (Girjesh Rajoria, 2017-11-09, 1 file, -1/+0)

  Coverity ID: 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417,
  418, 419, 423, 424, 425, 426, 427, 428, 429, 436, 437, 438, 439, 440,
  441, 442, 443
  Issue: Event include_recursion

  Removed redundant, recursive includes from the files.

  Change-Id: I920776b1fa089a2d4917ca722d0075a9239911a7
  BUG: 789278
  Signed-off-by: Girjesh Rajoria <grajoria@redhat.com>

* gfproxyd: Let glusterd manage gfproxy daemon (Poornima G, 2017-10-18, 1 file, -4/+0)

  Updates: #242
  BUG: 1428063
  Change-Id: Iaaf2edf99b2ecc75f6d30762c752a6d445c1c826
  Signed-off-by: Poornima G <pgurusid@redhat.com>

* gfproxy: Introduce new server-side daemon called GFProxy (Shreyas Siravara, 2017-10-10, 1 file, -0/+8)

  Summary: Adds a new server-side daemon called gfproxyd and a new FUSE
  client called gfproxy-client.

  Updates: #242
  BUG: 1428063
  Change-Id: I83210098d3a381922bc64fed1110ae1b76e6519f
  Tested-by: Shreyas Siravara <sshreyas@fb.com>
  Reviewed-by: Kevin Vigor <kvigor@fb.com>
  Signed-off-by: Shreyas Siravara <sshreyas@fb.com>
  Signed-off-by: Poornima G <pgurusid@redhat.com>

* glusterd : fix client io-threads option for replicate volumes (Ravishankar N, 2017-10-09, 1 file, -2/+1)

  Problem: Commit ff075a3d6f9b142911d25c27fd209838782bfff0 disabled
  loading client-io-threads for replicate volumes (it was set to on by
  default in commit e068c1997314046658dd502e9118dab32decf879) due to
  performance issues, but in doing so inadvertently failed to load the
  xlator even if the user explicitly enabled the option using the
  volume set command. This was despite returning success for the
  volume set.

  Fix: Modify the check in perfxl_option_handler() and add checks in
  the volume create/add-brick/remove-brick code paths, tying it all to
  GD_OP_VERSION_3_12_2.

  Change-Id: Ib612973a999a7da818cc926f5c2601b1f0794fcf
  BUG: 1498570
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>

* glusterd: do not create .glusterfs/indices (Prashanth Pai, 2017-08-23, 1 file, -11/+0)

  glusterd shouldn't concern itself with creating directories specific
  to certain xlators. The index xlator will now proceed with creating
  the '.glusterfs/indices' dir only if the parent '.glusterfs'
  directory exists, which still fixes the originally reported problem,
  i.e. the 'volume start force' command shouldn't create the brick path
  if it doesn't exist (BUG 1457202).

  This reverts most of the changes done by commit
  b58a15948fb3fc37b6c0b70171482f50ed957f42.

  Change-Id: I7fc52ad64dce220e336c218fb4d85933ca2e61c0
  Signed-off-by: Prashanth Pai <ppai@redhat.com>
  Reviewed-on: https://review.gluster.org/18003
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* core: miscellaneous cleanup (Kaleb S. KEITHLEY, 2017-07-14, 1 file, -1/+1)

  Clean up things that I tripped over while doing other changes:
  1) Fix the mishmash of random spacing in struct decls in glusterfs.h.
     Not technically a problem, just ugly to look at.
  2) Replace open-coded string constants with existing #define
     constants. A disaster waiting to happen.
  3) Use sys_access() instead of sys_stat() or sys_lstat() to test
     simple existence of a file. Why copy dozens of bytes from kernel
     to user space that aren't going to be used by anything?
  There are probably more instances like these.

  Change-Id: I28089bef4cc93d5e4e4213045fb1a2649d110f82
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: https://review.gluster.org/17769
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>

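  Point 3 in code, roughly (an illustrative sketch using the raw POSIX
  calls; GlusterFS wraps them as sys_access()/sys_stat()):

      #include <sys/stat.h>
      #include <unistd.h>

      static int
      path_exists(const char *path)
      {
          /* before: stat() fills a struct stat nobody reads */
          /* struct stat st; return stat(path, &st) == 0; */

          /* after: existence check only; nothing is copied out of
           * the kernel beyond the return code */
          return access(path, F_OK) == 0;
      }
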
* glusterd: Introduce option to limit no. of muxed bricks per process (Samikshan Bairagya, 2017-07-10, 1 file, -0/+13)

  This commit introduces a new global option that can be set to limit
  the number of multiplexed bricks in one process.

  Usage:
  # gluster volume set all cluster.max-bricks-per-process <value>

  If this option is not set, then multiplexing happens with no limit
  set; i.e. a brick process will have as many bricks multiplexed to it
  as possible. In other words, the current multiplexing behaviour won't
  change if this option isn't set to any value.

  This commit also introduces a brick process instance that contains
  information about brick processes, like the number of bricks handled
  by the process (which is 1 in non-multiplexing cases), the list of
  bricks, and the port number, which also serves as a unique identifier
  for each brick process instance. The brick process list is maintained
  in 'glusterd_conf_t'.

  Updates: #151
  Change-Id: Ib987d14ab0a4f6034dac01b73a4b2839f7b0b695
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: https://review.gluster.org/17469
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* index: Do not proceed with init if brick is not mounted (Ravishankar N, 2017-06-19, 1 file, -0/+11)

  ...or else, when a volume start force is given, we end up creating
  the /brick-path/.glusterfs/indices folder and various subdirs under
  it, and eventually starting the brick process.

  As part of this patch, glusterd_get_index_basepath() is added in
  glusterd, which is used to create the basepath during volume-create,
  add-brick, replace-brick and reset-brick. It is also used to set the
  'index-base' xlator option for the index translator.

  Change-Id: Id018cf3cb6f1e2e35b5c4cf438d1e939025cb0fc
  BUG: 1457202
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: https://review.gluster.org/17426
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

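  Conceptually, the new helper composes the path like this sketch (the
  signature here is illustrative, not the exact one in glusterd):

      #include <stdio.h>

      /* <brick-path>/.glusterfs/indices, used both for creating the
       * basepath and for the 'index-base' xlator option */
      static void
      get_index_basepath(const char *brick_path, char *out, size_t len)
      {
          snprintf(out, len, "%s/.glusterfs/indices", brick_path);
      }
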
* glusterd: Make reset-brick work correctly if brick-mux is on (Samikshan Bairagya, 2017-05-10, 1 file, -3/+9)

  Reset brick currently kills off the corresponding brick process.
  However, with brick multiplexing enabled, stopping the brick process
  would render all bricks attached to it unavailable. To handle this
  correctly, we need to make sure that the brick process is terminated
  only if brick multiplexing is disabled. Otherwise, we should send the
  GLUSTERD_BRICK_TERMINATE rpc to the respective brick process to
  detach the brick that is to be reset.

  Change-Id: I69002d66ffe6ec36ef48af09b66c522c6d35ac58
  BUG: 1446172
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: https://review.gluster.org/17128
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: cleanup pidfile on pmap signout (Atin Mukherjee, 2017-05-08, 1 file, -0/+4)

  This patch ensures:
  1. The brick pidfile is cleaned up on pmap signout.
  2. The pmap signout event is sent for all the bricks when a brick
     process shuts down.

  Change-Id: I7606a60775b484651d4b9743b6037b40323931a2
  BUG: 1444596
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/17168
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>

* glusterd : Disallow peer detach if snapshot bricks exist on it (Gaurav Yadav, 2017-03-31, 1 file, -0/+4)

  Problem:
  - Deploy gluster on 2 nodes, one brick each, one volume replicated
  - Create a snapshot
  - Lose one server
  - Add a replacement peer and new brick with a new IP address
  - replace-brick the missing brick onto the new server (wait for
    replication to finish)
  - peer detach the old server
  After doing the above steps, glusterd fails to restart.

  Solution: With the fix, detach peer produces an error: "N2 is part of
  existing snapshots. Remove those snapshots before proceeding". In
  doing so we force the user either to stay with that peer or to delete
  all snapshots.

  Change-Id: I3699afb9b2a5f915768b77f885e783bd9b51818c
  BUG: 1322145
  Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
  Reviewed-on: https://review.gluster.org/16907
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

* Tier: remove warning related to the enum (hari gowtham, 2017-02-07, 1 file, -1/+1)

  Problem: In the tier-as-a-service patch, the enums for tier (from
  gf1_op_command and gf_defrag_command) were put into a single enum,
  gf_defrag_command, which causes a warning that makes the build fail.

  Fix: Send both enums and eliminate the warning.

  Change-Id: I899ff622dfb07134e6459aa65f65ea7252765293
  BUG: 1418973
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: https://review.gluster.org/16539
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: hari gowtham <hari.gowtham005@gmail.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd : do not load io-threads in client graph for replicate volumes (Atin Mukherjee, 2017-02-03, 1 file, -1/+2)

  client.io-threads has been turned on by default from release-3.9
  onwards; however, this has an adverse effect on replicate volumes due
  to design limitations in replication. Until that gets addressed
  through server-side replication, as a preventive measure it is wiser
  not to load io-threads in the client graph for replicate volumes.

  Change-Id: Ibc576d4517da23fcdf55c6f4d17b90152a8817d7
  BUG: 1418014
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/16502
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* core: run many bricks within one glusterfsd process (Jeff Darcy, 2017-01-30, 1 file, -0/+6)

  This patch adds support for multiple brick translator stacks running
  in a single brick server process. This reduces our per-brick memory
  usage by approximately 3x, and our appetite for TCP ports even more.
  It also creates potential to avoid process/thread thrashing, and to
  improve QoS by scheduling more carefully across the bricks, but
  realizing that potential will require further work.

  Multiplexing is controlled by the "cluster.brick-multiplex" global
  option. By default it's off, and bricks are started in separate
  processes as before. If multiplexing is enabled, then *compatible*
  bricks (mostly those with the same transport options) will be started
  in the same process.

  Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
  BUG: 1385758
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: https://review.gluster.org/14763
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: add a cli command to trigger a statedump on a client (Poornima G, 2017-01-23, 1 file, -0/+4)

  With this, we will be able to trigger statedumps on remote Gluster
  clients, mainly targeted at applications using libgfapi.

  Design: The SIGUSR signal is the most common way of taking a
  statedump in Gluster, but it cannot be used for libgfapi-based
  processes, as the process loading the library might have already
  consumed the SIGUSR signal. Hence we go the command way: one has to
  issue a Gluster command to initiate a statedump on the libgfapi-based
  client. The command takes hostname and PID as arguments. All the
  glusterds in the cluster check if they are connected to the specified
  hostname, and send an RPC request to all the connected clients from
  that hostname (via the mgmt connection).

  URL: http://review.gluster.org/16357
  Change-Id: Icbe4d2f026b32a2c7d5535e1bfb2cdaaff042e91
  BUG: 1169302
  Signed-off-by: Poornima G <pgurusid@redhat.com>
  [ndevos: minor fixes and split patch in smaller pieces]
  Reviewed-on: https://review.gluster.org/9228
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Tested-by: Niels de Vos <ndevos@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>

* tier : Tier as a service (hari gowtham, 2017-01-16, 1 file, -1/+18)

  tierd is implemented by separating it from the rebalance process.
  The commands affected:
  1) Attach tier will trigger this process instead of the old one.
  2) tier start and tier start force will also trigger this process.
  3) volume status [tier] will show the tier daemon as a process
     instead of a task, and normal tier status and tier detach status
     work.
  4) tier stop implemented.
  5) detach tier implemented separately, along with a new detach tier
     status.
  6) volume tier volname status will work using the changes.
  7) volume set works.

  This patch has separated the tier translator from the legacy DHT
  rebalance code. It now sends the RPCs from the CLI to glusterd
  separately from the DHT rebalance code. The daemon is now a service,
  similar to the snapshot daemon, and can be viewed using the volume
  status command. The code for the validation and commit phases is the
  same as the earlier tier validation code in DHT rebalance. The
  "brickop" phase has been changed so that the status command can use
  this framework.

  The service management framework is now used; DHT rebalance does not
  use this framework. This service framework takes care of:
  *) spawning the daemon, killing it, and other such processes.
  *) volume set options, which are written to the volfile.
  *) restart and reconfigure functions. Restart is to restart the
     daemon at two points: 1) after gluster goes down and comes up, and
     2) to stop detach tier.
  *) reconfigure is used to make immediate volfile changes, so that we
     don't restart the daemon. It also has the code to rewrite the
     volfile for topological changes (which come into play during add
     and remove brick).

  With this patch the log, pid, and volfile are separated and put into
  respective directories.

  Change-Id: I3681d0d66894714b55aa02ca2a30ac000362a399
  BUG: 1313838
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/13365
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: hari gowtham <hari.gowtham005@gmail.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: Get maximum supported op-version in a cluster (Samikshan Bairagya, 2017-01-08, 1 file, -2/+12)

  'gluster volume get <VOLNAME> cluster.op-version' gives us the
  current op-version on which the cluster is operating. There is no
  command that lets the user know the maximum supported op-version that
  the cluster can run on. This patch adds a new global option,
  cluster.max-op-version, that can be used to retrieve the maximum
  supported op-version in a cluster.

  Usage:
  # gluster volume get all cluster.max-op-version

  Example output:
  Option                     Value
  ------                     -----
  cluster.max-op-version     30900

  NOTE: The only way to test this feature for now is to set the
  GD_OP_VERSION_MAX macro to different values (30800 for 3.8, 30900 for
  3.9, and so on) and rebuild glusterd. Since the regression test
  framework currently doesn't have support to simulate these tests,
  there are no accompanying regression tests for this feature. It
  should be possible to add tests once glusto comes in and makes it
  easier to run a heterogeneous cluster.

  Change-Id: I547480ee5e7912664784643e436feb198b6d16d0
  BUG: 1365822
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: http://review.gluster.org/16283
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd, cli: Get global options through volume get functionality (Samikshan Bairagya, 2016-12-30, 1 file, -0/+36)

  Currently it is not possible to retrieve values of global options by
  using the 'gluster volume get' functionality if there are no volumes
  present. In order to get the global options, one has to use 'gluster
  volume get' with a specific volume name. This usage creates the
  illusion that the option is set only on one volume, which is
  incorrect. When setting the global options, 'gluster volume set'
  provides a way to set them using the volume name 'all'. Similarly,
  retrieving the global options should be made possible by using the
  volume name 'all' with the 'gluster volume get' functionality. This
  patch adds that functionality to 'volume get'.

  Usage:
  # gluster volume get all <OPTION/all>

  Change-Id: Ic2fdb9eda69d4806d432dae26d117d9660fe6d4e
  BUG: 1378842
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: http://review.gluster.org/15563
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd : Introduce reset brick (Anuradha Talur, 2016-08-29, 1 file, -1/+33)

  The command basically allows replace-brick with src and dst bricks
  the same.

  Usage:
  gluster v reset-brick <volname> <hostname:brick-path> start

  This command kills the brick to be reset. Once this command is run,
  the admin can do other manual operations that they need to, like
  configuring some options for the brick. Once this is done, resetting
  the brick can be continued with the following options:

  gluster v reset-brick <vname> <hostname:brick> <hostname:brick> commit {force}

  This does the job of resetting the brick. The 'force' option should
  be used when the brick already contains the volinfo id.

  Problem: On doing a disk replacement of a brick in a replicate
  volume, the following 2 scenarios may occur:
  a) there is a chance that reads are served from this replaced-disk
     brick, which leads to empty reads.
  b) potential data loss if the next writes succeed only on the
     replaced brick, and heal is done to the other bricks from this
     one.

  Solution: After disk replacement, make sure that the reset-brick
  command is run for that brick so that pending markers are set for the
  brick and it is not chosen as a source for reads and heal. But, as of
  now, replace-brick for the same brick path is not allowed. In order
  to fix the above-mentioned problem, same-brick-path replace-brick is
  needed. With this patch, reset-brick commit {force} will be allowed
  even when source and destination <hostname:brickpath> are identical,
  as long as 1) the destination brick is not alive, and 2) the source
  and destination bricks have the same brick uuid and path. Also, the
  destination brick after replace-brick will use the same port as the
  source brick.

  Change-Id: I440b9e892ffb781ea4b8563688c3f85c7a7c89de
  BUG: 1266876
  Signed-off-by: Anuradha Talur <atalur@redhat.com>
  Reviewed-on: http://review.gluster.org/12250
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Ashish Pandey <aspandey@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* glusterd/cli: cli to get local state representation from glusterd (Samikshan Bairagya, 2016-08-26, 1 file, -0/+26)

  Currently there is no CLI that can be used to get the local state
  representation of the cluster, as maintained in glusterd, in a
  readable as well as parseable format.

  The CLI added has the following usage:
  # gluster get-state [daemon] [odir <path/to/output/dir>] [file <filename>]

  This dumps data points that reflect the local state representation of
  the cluster as maintained in glusterd (no other daemons are supported
  as of now) to a file inside the specified output directory. The
  default output directory and filename are /var/run/gluster and
  glusterd_state_<timestamp> respectively. The option for specifying
  the daemon name leaves room to add support for other daemons in the
  future.

  The following data points are captured as of now to represent the
  state from the local glusterd pov:
  * Peer: primary hostname, uuid, state, connection status, list of
    hostnames
  * Volumes: name, id, transport type, status; counts: bricks, snap,
    subvol, stripe, arbiter, disperse, redundancy; snapd status, quorum
    status, tiering-related information, rebalance status,
    replace-brick status, snapshots
  * Bricks: path, hostname (shown for all bricks); port, rdma port,
    status, mount options, filesystem type and signed-in status for
    bricks running locally
  * Services: name, online status for initialised services
  * Others: base port, last allocated port, op-version, MYUUID

  Change-Id: I4a45cc5407ab92d8afdbbd2098ece851f7e3d618
  BUG: 1353156
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: http://review.gluster.org/14873
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: Fix gsyncd upgrade issue (Kotresh HR, 2016-07-13, 1 file, -1/+3)

  Problem: gluster upgrade does not generate new volfiles.

  Cause: During upgrade, "glusterd --xlator-option *.upgrade=on -N" is
  run to generate new volfiles. It is run post 'glusterfs' rpm
  installation. The above command fails during upgrade if
  geo-replication is installed, because on glusterd start the 'gsyncd'
  binary is called to configure geo-replication related stuff. Since
  the 'glusterfs' rpm is installed prior to the 'geo-rep' rpm, the
  'gsyncd' binary used by the glusterd upgrade command is of the old
  version, and hence it fails before generating new volfiles.

  Solution: Don't call geo-replication configure during
  upgrade/downgrade. Geo-replication configuration happens during the
  start of glusterd after upgrade.

  Change-Id: Id58ea44ead9f69982f86fb68dc5b9ee3f6cd11a1
  BUG: 1355628
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/14898
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>

* glusterd: compare uuid instead of hostname address resolution (Atin Mukherjee, 2016-07-05, 1 file, -3/+2)

  In glusterd_get_brickinfo() the brick's hostname is address-resolved.
  This adds unnecessary latency, since it uses calls like
  getaddrinfo(). Instead, given that the local brick's uuid is already
  known, a comparison of MY_UUID and brickinfo->uuid is much more
  lightweight than the previous approach.

  In scale testing, where a cluster hosting ~400 volumes spans 4 nodes,
  if a node goes for a reboot, a few of the bricks don't come up. After
  a few days of analysis it was found that glusterd_pmap_sigin() was
  incurring a significant amount of latency, and a further code
  walkthrough revealed this unnecessary address resolution. Applying
  this fix solves the issue, and now all the brick processes come up on
  a node reboot.

  Change-Id: I299b8660ce0da6f3f739354f5c637bc356d82133
  BUG: 1352279
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/14849
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>

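  The cheap check reads roughly like this fragment (MY_UUID and
  gf_uuid_compare() are existing glusterd/libglusterfs names; the
  surrounding logic is a sketch, not the exact patch):

      /* no getaddrinfo() round trip: a local brick is simply one
       * whose stored uuid matches this node's uuid */
      if (!gf_uuid_compare(MY_UUID, brickinfo->uuid)) {
          /* brick is hosted on this node */
      }
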
* cli/glusterd: add/remove brick fixes for arbiter volumes (Ravishankar N, 2016-05-19, 1 file, -0/+6)

  1. Provide a command to convert replica 2 volumes to arbiter volumes.
     Existing self-heal logic will automatically heal the file
     hierarchy into the arbiter brick; the progress can be monitored
     using the heal info command.

     Syntax: gluster volume add-brick <VOLNAME> replica 3 arbiter 1
     <HOST:arbiter-brick-path>

  2. Add checks when removing bricks from arbiter volumes:
     - When converting from an arbiter to a replica 2 volume, allow
       only the arbiter brick to be removed.
     - When converting from an arbiter to a plain distribute volume,
       allow it only if the arbiter is one of the bricks being removed.

  3. Some clean-up:
     - Use GD_MSG_DICT_GET_SUCCESS instead of GD_MSG_DICT_GET_FAILED to
       log messages that are not failures.
     - Remove the unused variable `brick_list`.
     - Move 'brickinfo->group'-related functions to glusterd-utils.

  Change-Id: Ic87b8c7e4d7d3ab03f93e7b9f372b314d80947ce
  BUG: 1318289
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/14126
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: copy real_path from older brickinfo during brick import (Atin Mukherjee, 2016-05-18, 1 file, -1/+2)

  In glusterd_import_new_brick(), new_brickinfo->real_path will not be
  populated the first time, and hence if the underlying file system is
  bad for the same brick, import will fail, resulting in inconsistent
  configuration data. The fix is to populate real_path from the old
  brickinfo object.

  Also, there were many cases where we were unnecessarily calling
  realpath(), and that may cause a failure. For example, if a remove
  brick is executed with a brick whose underlying file system has
  crashed, remove-brick fails, since the realpath() call fails. We
  don't need to call realpath() here, as the value is of no use. Hence
  passing construct_realpath as _gf_false in
  glusterd_volume_brickinfo_get_by_brick() is a must in such cases.

  Change-Id: I7ec93871dc9e616f5d565ad5e540b2f1cacaf9dc
  BUG: 1335531
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/14306
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>

* quota/glusterd: enhance quota enable and disable process (vmallika, 2016-04-29, 1 file, -0/+4)

  Previously the quota crawl was done from a single mount point; this
  is a very slow process if a huge number of files exists in the
  volume. This RFE now spawns a crawl process for each brick in the
  volume, and files are looked at in parallel, independently for each
  brick. This improves the speed of the crawling process for the entire
  file system.

  This patch also fixes the problems below:
  * Previously, the mount dir was created under '/tmp'. If someone
    tries to clean up the '/tmp' directory, there is a serious risk of
    losing volume data, so create the mount point under
    /var/run/gluster/tmp instead.
  * Previously, the file-system crawl was performed from all the nodes,
    which is a redundant operation and degrades performance. This
    problem is fixed with this patch.

  Change-Id: Icabedeb44182139ace9c8106793803122388cab8
  BUG: 1290766
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/12952
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: populate brickinfo->real_path conditionally (Atin Mukherjee, 2016-04-11, 1 file, -2/+5)

  glusterd_brickinfo_new_from_brick() is called from multiple places,
  one of them being glusterd_brick_rpc_notify, where it is quite
  possible that an underlying brick's file system has crashed and a
  disconnect event has been received. In this case glusterd tries to
  build the brickinfo from the brickid in the RPC request; however,
  this fails because realpath() fails in
  glusterd_brickinfo_new_from_brick().

  The fix is to skip populating real_path if it is a disconnect event.

  Change-Id: I9d9149c64a9cf2247abb731f219c1b1eef037960
  BUG: 1325841
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/13965
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

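  The conditional amounts to the following sketch (the flag name
  follows the description above; the surrounding code is illustrative):

      if (construct_realpath) {
          /* on a disconnect event this is skipped: the brick's file
           * system may be gone, and realpath() would fail and abort
           * brickinfo construction */
          if (!realpath(brickinfo->path, brickinfo->real_path)) {
              ret = -1;
              goto out;
          }
      }
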
* glusterd / afr : Enable auto heal when replica count increases (Anuradha Talur, 2016-03-21, 1 file, -0/+10)

  In replicate volumes, when a brick is added to a replicate group,
  heal to the new brick should be triggered. Also, the new brick should
  not be considered a source for healing until it is up to date.
  Previously, extended attributes had to be set manually on the bricks
  for this to happen. This patch is part 1 of automating this process.

  Change-Id: I29958448618372bfde23bf1dac5dd23dba1ad98f
  BUG: 1276203
  Signed-off-by: Anuradha Talur <atalur@redhat.com>
  Reviewed-on: http://review.gluster.org/12451
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>

* Tier: "tier start force" command implementationhari gowtham2015-12-221-1/+1
| | | | | | | | | | | | | | | | The start command doesnt restart the tier deamon if the deamon is running at one node. hence to bring up the tierd on the nodes where the deamon is down, the force command is implemented. It skips the check for tierd running. Change-Id: I0037d3e5ecfe56637d0da201a97903c435d26436 BUG: 1292112 Signed-off-by: hari gowtham <hgowtham@redhat.com> Reviewed-on: http://review.gluster.org/12983 Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Dan Lambright <dlambrig@redhat.com> Tested-by: Dan Lambright <dlambrig@redhat.com>
* glusterd: Disallow peer with existing volumes to be probed in cluster (Atin Mukherjee, 2015-12-06, 1 file, -0/+3)

  As of now we allow a peer to be added to the trusted storage pool
  even if it has a volume configured. This is definitely not a
  supported configuration and can lead to issues, as we never claim to
  support merging clusters. A single node running a standalone volume
  can be considered a cluster.

  Change-Id: Id0cf42d6e5f20d6bfdb7ee19d860eee67c7c45be
  BUG: 1287992
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/12864
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>

* glusterd/tier: Reset to reconfigured values after detach-tier (Mohammed Rafi KC, 2015-11-26, 1 file, -0/+3)

  After detach-tier commit, we have to reset the options reconfigured
  for the tier volume.

  Change-Id: Iae0210259720d6ac14ccc0cc339dc9f54a0c4571
  BUG: 1285046
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12736
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>

* glusterd: cli command implementation for bitrot scrub status (Gaurav Kumar Garg, 2015-11-19, 1 file, -0/+5)

  The CLI command for bitrot scrub status is:
  gluster volume bitrot <volname> scrub status

  The above command shows the statistics of the bitrot scrubber. Upon
  execution it shows some common scrubber tunable values of volume
  <VOLNAME>, followed by the scrubber statistics of individual nodes.

  Sample output for a single node:
  Volume name : <VOLNAME>
  State of scrub: Active
  Scrub frequency: biweekly
  Bitrot error log location: /var/log/glusterfs/bitd.log
  Scrubber error log location: /var/log/glusterfs/scrub.log
  =========================================================
  Node name:
  Number of Scrubbed files:
  Number of Unsigned files:
  Last completed scrub time:
  Duration of last scrub:
  Error count:
  =========================================================

  This is just infrastructure. The list of bad files, last scrub time,
  and error count values are taken care of by
  http://review.gluster.org/#/c/12503/ and
  http://review.gluster.org/#/c/12654/.

  Change-Id: I3ed3c7057c9d0c894233f4079a7f185d90c202d1
  BUG: 1207627
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10231
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>

* mgmt/gluster: Handle tier brick volgen (Pranith Kumar K, 2015-11-19, 1 file, -0/+4)

  The index xlator watches only certain xattrs based on the type of
  volume, i.e. disperse/afr. When the volume becomes tiered, index does
  not add these options to the volfile, leading to no maintenance of
  indices and thus no proactive self-heals. With this fix, we write
  brick volfiles considering the type of volume they belong to.

  BUG: 1250803
  Change-Id: Ibe8f2d4ad5cb350306ab7ca0753e0f9a40b96a26
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/12595
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>

* tier/shd: inline warning when compiled with gcc v.5 (Mohammed Rafi KC, 2015-10-13, 1 file, -1/+1)

  Change-Id: I487a26263d6e940eed364a831e99f9b8390bc96a
  BUG: 1226881
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12342
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Anoop C S <anoopcs@redhat.com>
  Tested-by: Anoop C S <anoopcs@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>