path: root/rpc/xdr/src/cli1-xdr.x
Commit log (newest first). Each entry shows the commit message, author, date, and files/lines changed (-removed/+added).
* bitrot: Make number of signer threads configurable (Kotresh HR, 2020-02-07, 1 file changed, -0/+1)
| | | | | | | | | | | | | The number of signing process threads (glfs_brpobj) is set to 4 by default. The recommendation is to set it to number of cores available. This patch makes it configurable as follows gluster vol bitrot <volname> signer-threads <count> fixes: bz#1797869 Change-Id: Ia883b3e5e34e0bc8d095243508d320c9c9c58adc Signed-off-by: Kotresh HR <khiremat@redhat.com>
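  For example, on a hypothetical volume named testvol, matching the recommendation for a 4-core node:
      gluster vol bitrot testvol signer-threads 4    # "testvol" and the thread count are placeholder values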
* libglusterfs: Move devel headers under glusterfs directory (ShyamsundarR, 2018-12-05, 1 file changed, -1/+1)
| | | | | | | | | | | | | | | | | | | | | | | | libglusterfs devel package headers are referenced in code using include semantics for a program, this while it works can be better especially when dealing with out of tree xlator builds or in general out of tree devel package usage. Towards this, the following changes are done, - moved all devel headers under a glusterfs directory - Included these headers using system header notation <> in all code outside of libglusterfs - Included these headers using own program notation "" within libglusterfs This change although big, is just moving around the headers and making it correct when including these headers from other sources. This helps us correctly include libglusterfs includes without namespace conflicts. Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b Updates: bz#1193929 Signed-off-by: ShyamsundarR <srangana@redhat.com>
* Command to identify client process (hari gowtham, 2017-09-06, 1 file changed, -0/+1)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | command: gluster volume status <volname/all> client-list output: Client connections for volume v1 Name count ----- ------ fuse 2 tierd 1 total clients for volume v1 : 3 ----------------------------------------------------------------- Client connections for volume v2 Name count ----- ------ tierd 1 fuse.gsync 1 total clients for volume v2 : 2 ----------------------------------------------------------------- Updates: #178 Change-Id: I0ff2579d6adf57cc0d3bd0161a2ec6ac6c4747c0 Signed-off-by: hari gowtham <hgowtham@redhat.com> Reviewed-on: https://review.gluster.org/18095 Smoke: Gluster Build System <jenkins@build.gluster.org> Tested-by: hari gowtham <hari.gowtham005@gmail.com> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: Add option to get all volume options through get-state CLI (Samikshan Bairagya, 2017-07-25, 1 file changed, -1/+2)
  This commit makes the get-state CLI capable of returning the values of all volume options for all volumes, similar to what you get when you issue a `gluster volume get <volname> all` command. The new usage for the get-state CLI is:
      # gluster get-state [<daemon>] [[odir </path/to/output/dir/>] [file <filename>]] [detail|volumeoptions]
  Fixes: #277
  Change-Id: Ice52d936a5a389c6fa0ba5ab32416a65cdfde46d
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: https://review.gluster.org/17858
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
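  For example, a hypothetical invocation that dumps all volume options to /tmp/gstate.txt:
      # gluster get-state glusterd odir /tmp file gstate.txt volumeoptions    # daemon, directory and filename are placeholders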
* glusterd: Add client details to get-state output (Samikshan Bairagya, 2017-04-12, 1 file changed, -0/+4)
| | | | | | | | | | | | | | | | | | | | | | | | This commit optionally adds client details corresponding to the locally running bricks to the get-state output. Since getting the client details involves sending RPC requests to the respective local bricks, this is a relatively more costly operation. These client details would be added to the get-state output only if the get-state command is invoked with the 'detail' option. This commit therefore also changes the get-state CLI usage. The modified usage is as follows: # gluster get-state [<daemon>] [[odir </path/to/output/dir/>] \ [file <filename>]] [detail] Change-Id: I42cd4ef160f9e96d55a08a10d32c8ba44e4cd3d8 BUG: 1431183 Signed-off-by: Samikshan Bairagya <samikshan@gmail.com> Reviewed-on: https://review.gluster.org/17003 NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> Smoke: Gluster Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* Tier: remove warning related to the enum (hari gowtham, 2017-02-07, 1 file changed, -1/+2)
| | | | | | | | | | | | | | | | | | | PROBLEM: In the tier as a service patch the enums for tier (from gf1_op_command and gf_defrag_command) are put into a single enum gf_defrag_command which causes a warning that will make the build fail. FIX: send both the enum and eliminate the warning. Change-Id: I899ff622dfb07134e6459aa65f65ea7252765293 BUG: 1418973 Signed-off-by: hari gowtham <hgowtham@redhat.com> Reviewed-on: https://review.gluster.org/16539 Smoke: Gluster Build System <jenkins@build.gluster.org> Tested-by: hari gowtham <hari.gowtham005@gmail.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* tier: Tier as a service (hari gowtham, 2017-01-16, 1 file changed, -3/+10)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | tierd is implemented by separating from rebalance process. The commands affected: 1) Attach tier will trigger this process instead of old one 2) tier start and tier start force will also trigger this process. 3) volume status [tier] will show tier daemon as a process instead of task and normal tier status and tier detach status works. 4) tier stop implemented. 5) detach tier implemented separately along with new detach tier status 6) volume tier volname status will work using the changes. 7) volume set works This patch has separated the tier translator from the legacy DHT rebalance code. It now sends the RPCs from the CLI to glusterd separate to the DHT rebalance code. The daemon is now a service, similar to the snapshot daemon, and can be viewed using the volume status command. The code for the validation and commit phase are the same as the earlier tier validation code in DHT rebalance. The “brickop” phase has been changed so that the status command can use this framework. The service management framework is now used. DHT rebalance does not use this framework. This service framework takes care of : *) spawning the daemon, killing it and other such processes. *) volume set options , which are written on the volfile. *) restart and reconfigure functions. Restart is to restart the daemon at two points 1)after gluster goes down and comes up. 2) to stop detach tier. *) reconfigure is used to make immediate volfile changes. By doing this, we don’t restart the daemon. it has the code to rewrite the volfile for topological changes too (which comes into place during add and remove brick). With this patch the log, pid, and volfile are separated and put into respective directories. Change-Id: I3681d0d66894714b55aa02ca2a30ac000362a399 BUG: 1313838 Signed-off-by: hari gowtham <hgowtham@redhat.com> Reviewed-on: http://review.gluster.org/13365 Smoke: Gluster Build System <jenkins@build.gluster.org> Tested-by: hari gowtham <hari.gowtham005@gmail.com> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: Dan Lambright <dlambrig@redhat.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd/quota: upgrade quota.conf file during an upgrade (Manikandan Selvaganesh, 2016-11-07, 1 file changed, -0/+1)
  Problem: When quota is enabled on 3.6, quota.conf records the quota conf version as v1.1. After the node is upgraded to 3.7 it still carries v1.1 until a quota enable/disable/set-limit is issued. If that never happens and the node then peer probes a fresh 3.7 install (which has quota conf version v1.2), the result is a "Peer rejected" state. This patch fixes the issue.
  Solution: When upgrading from 3.6 to 3.7, quota.conf needs to be updated as well: the version must change from v1.1 to v1.2 because 3.7 introduces the inode quota feature, and every 16-byte UUID must be rewritten as a 17-byte UUID. Previously quota.conf was only upgraded when a quota enable/disable/set-limit was performed; with this patch the upgrade also happens during a cluster op-version bump.
  Change-Id: Idb5ba29d3e1ea0e45c85d87c952c75da9e0f99f0
  BUG: 1371539
  Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
  Reviewed-on: http://review.gluster.org/15352
  Tested-by: Atin Mukherjee <amukherj@redhat.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* build: out-of-tree builds generate files in the wrong directory (Kaleb S KEITHLEY, 2016-09-18, 1 file changed, -0/+15)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | And minor cleanup of a few of the Makefile.am files while we're at it. Rewrite the make rules to do what xdrgen does. Now we can get rid of xdrgen. Note 1. netbsd6's sed doesn't do -i. Why are we still running smoke tests on netbsd6 and not netbsd7? We barely support netbsd7 as it is. Note 2. Why is/was libgfxdr.so (.../rpc/xdr/src/...) linked with libglusterfs? A cut-and-paste mistake? It has no references to symbols in libglusterfs. Note3. "/#ifndef\|#define\|#endif/" (note the '\'s) is a _basic_ regex that matches the same lines as the _extended_ regex "/#(ifndef|define|endif)/". To match the extended regex sed needs to be run with -r on Linux; with -E on *BSD. However NetBSD's and FreeBSD's sed helpfully also provide -r for compatibility. Using a basic regex avoids having to use a kludge in order to run sed with the correct option on OS X. Note 4. Not copying the bit of xdrgen that inserts copyright/license boilerplate. AFAIK it's silly to pretend that machine generated files like these can be copyrighted or need license boilerplate. The XDR source files have their own copyright and license; and their copyrights are bound to be more up to date than old boilerplate inserted by a script. From what I've seen of other Open Source projects -- e.g. gcc and its C parser files generated by yacc and lex -- IIRC they don't bother to add copyright/license boilerplate to their generated files. It appears that it's a long-standing feature of make (SysV, BSD, gnu) for out-of-tree builds to helpfully pretend that the source files it can find in the VPATH "exist" as if they are in the $cwd. rpcgen doesn't work well in this situation and generates files with "bad" #include directives. E.g. if you `rpcgen ../../../../$srcdir/rpc/xdr/src/glusterfs3-xdr.x`, you get an #include directive in the generated .c file like this: ... #include "../../../../$srcdir/rpc/xdr/src/glusterfs3-xdr.h" ... which (obviously) results in compile errors on out-of-tree build because the (generated) header file doesn't exist at that location. Compared to `rpcgen ./glusterfs3-xdr.x` where you get: ... #include "glusterfs3-xdr.h" ... Which is what we need. We have to resort to some Stupid Make Tricks like the addition of various .PHONY targets to work around the VPATH "help". Warning: When doing an in-tree build, -I$(top_builddir)/rpc/xdr/... looks exactly like -I$(top_srcdir)/rpc/xdr/... Don't be fooled though. And don't delete the -I$(top_builddir)/rpc/xdr/... bits Change-Id: Iba6ab96b2d0a17c5a7e9f92233993b318858b62e BUG: 1330604 Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com> Reviewed-on: http://review.gluster.org/14085 Tested-by: Niels de Vos <ndevos@redhat.com> Smoke: Gluster Build System <jenkins@build.gluster.org> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Niels de Vos <ndevos@redhat.com>
* feature/bitrot: Ondemand scrub option for bitrot (Kotresh HR, 2016-08-25, 1 file changed, -0/+1)
  The bitrot scrubber takes 'hourly/daily/biweekly/monthly' as the values for 'scrub-frequency', so there is no way to schedule scrubbing exactly when the admin wants it. Ondemand scrubbing adds the new option 'ondemand', which starts scrubbing immediately. It succeeds only when the scrubber is in the 'Active (Idle)' state (waiting for its next frequency cycle); it is not entertained while the scrubber is 'Paused' or already running. Command line syntax:
      gluster volume bitrot <volname> scrub ondemand
  Change-Id: I84c28904367eed827a7dae8d6a535c14b28e9f4d
  BUG: 1366195
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/15111
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
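  For instance, an immediate scrub on a hypothetical volume named testvol could be requested with:
      gluster volume bitrot testvol scrub ondemand    # "testvol" is a placeholder volume name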
* snapshot/cli: Fix snapshot status xml output (Avra Sengupta, 2016-08-23, 1 file changed, -2/+7)
  'snap status --xml' errors out if a brick is down and has no pid. The non-xml CLI already handles this by displaying "N/A" in that scenario; the same handling is now added for the xml output. 'snap status <snapname> --xml' fails because the writer is not initialised for it, so GF_SNAP_STATUS_TYPE_ITER is used instead of GF_SNAP_STATUS_TYPE_SNAP for all-snapshots status, to differentiate between the two scenarios. Added testcase volume-snapshot-xml.t to check the xml output of all snapshot commands.
  Change-Id: I99563e8f3e84f1aaeabd865326bb825c44f5c745
  BUG: 1325831
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/14018
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* tier/glusterd: making new tier detach command throw warning (hari gowtham, 2015-12-15, 1 file changed, -1/+2)
  For detach tier, validation used to compare against the string "detach-tier", but the new commands use the string "tier". Comparing against "tier" alone is problematic because both tier status and tier detach contain that keyword, so the two were separated, and strtok is used so the check does not pass merely because the volume name contains "tier" as a substring (only the second word of the string is taken and checked against the tier feature).
  Problem: the new detach tier command does not throw warnings such as "not a tier volume" or "detach tier not started"; instead it prints empty output.
  Fix: during validation the volume is checked for being a tiered volume and, if so, whether detach tier has been started; otherwise the appropriate warning is thrown.
  Change-Id: I94246d53b18ab0e9406beaf459eaddb7c5b766c2
  BUG: 1288517
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/12883
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: cli command implementation for bitrot scrub status (Gaurav Kumar Garg, 2015-11-19, 1 file changed, -0/+1)
  The CLI command for bitrot scrub status is:
      gluster volume bitrot <volname> scrub status
  The command shows the statistics of the bitrot scrubber: some common scrubber tunable values of volume <VOLNAME>, followed by per-node scrubber statistics. Sample output for a single node:
      Volume name : <VOLNAME>
      State of scrub: Active
      Scrub frequency: biweekly
      Bitrot error log location: /var/log/glusterfs/bitd.log
      Scrubber error log location: /var/log/glusterfs/scrub.log
      =========================================================
      Node name:
      Number of Scrubbed files:
      Number of Unsigned files:
      Last completed scrub time:
      Duration of last scrub:
      Error count:
      =========================================================
  This is just infrastructure; the list of bad files, last scrub time and error count values will be taken care of by the http://review.gluster.org/#/c/12503/ and http://review.gluster.org/#/c/12654/ patches.
  Change-Id: I3ed3c7057c9d0c894233f4079a7f185d90c202d1
  BUG: 1207627
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10231
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cluster/tier: add pause tier for snapshots (Dan Lambright, 2015-10-21, 1 file changed, -1/+3)
| | | | | | | | | | | | | | | | | | | | Snaps of tiered volumes cannot handle files undergoing migration. We implement a helper mechanism to "pause" migration. Any files undergoing migration are aborted. Clean up is done to remove sticky bits and data at the destination. Migration is restarted after snap completes. For testing an internal switch is added. It is not exposed externally. gluster volume set vol1 tier-pause [true|false] Change-Id: Ia85bbf89ac142e9b7e73fcbef98bb9da86097799 BUG: 1267950 Signed-off-by: Dan Lambright <dlambrig@redhat.com> Reviewed-on: http://review.gluster.org/12304 Reviewed-by: N Balachandran <nbalacha@redhat.com> Tested-by: NetBSD Build System <jenkins@build.gluster.org> Tested-by: Gluster Build System <jenkins@build.gluster.com>
* gluster/cli: snapshot delete all does not work with xml (Rajesh Joseph, 2015-08-28, 1 file changed, -2/+4)
| | | | | | | | | | | | | Problem: snapshot delete all command fails with --xml option Fix: Provided xml support for delete all command Change-Id: I77cad131473a9160e188c783f442b6a38a37f758 BUG: 1257533 Signed-off-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-on: http://review.gluster.org/12027 Tested-by: NetBSD Build System <jenkins@build.gluster.org> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Avra Sengupta <asengupt@redhat.com>
* features/bitrot: tunable object signing waiting time value for bitrot (Gaurav Kumar Garg, 2015-06-15, 1 file changed, -0/+1)
  Currently bitrot uses a 120-second waiting time before an object is signed after all fops on it are released. This signing waiting time should be tunable. The command for changing it is:
      # gluster volume bitrot <VOLNAME> signing-time <waiting time in seconds>
  Change-Id: I89f3121564c1bbd0825f60aae6147413a2fbd798
  BUG: 1228680
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-on: http://review.gluster.org/11105
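  For example, explicitly keeping the default 120-second window on a hypothetical volume named testvol:
      # gluster volume bitrot testvol signing-time 120    # "testvol" is a placeholder; 120 is the default mentioned above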
* cli/tiering: Enhance cli output for tiering (Mohammed Rafi KC, 2015-05-08, 1 file changed, -1/+2)
| | | | | | | | | | | | | | Fix for handling cli output for attach-tier and detach-tier Change-Id: I4d17f4b09612754fe1b8cec6c2e14927029b9678 BUG: 1211562 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/10284 Reviewed-by: Dan Lambright <dlambrig@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Tested-by: NetBSD Build System Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: remove replace brick with data migration support from cli/glusterd (Gaurav Kumar Garg, 2015-05-07, 1 file changed, -10/+0)
  The replace-brick operation with data migration support has been deprecated from gluster. With this fix the replace-brick command supports only one form:
      gluster volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}
  Change-Id: Ib81d49e5d8e7eaa4ccb5830cfec2bc081191b43b
  BUG: 1094119
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10101
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* quota/marker: turn off inode quotas by default (vmallika, 2015-05-06, 1 file changed, -0/+1)
| | | | | | | | | | | | | | | | | | | | | inode quota is a new feature implemented in glusterfs-3.7 if quota is enabled in the older version and is upgraded to a new version, we can hit setxattr spike during self-heal of inode quotas. So, when a quota is enabled, turn off inode-quotas with a xlator option. With this patch, we still account for inode quotas but only when a write operation is performed for a particular file. User will be able to query inode quotas once the Inode-quota xlator option is enabled. Change-Id: I52fb28bf7024989ce7bb08ac63a303bf3ec1ec9a BUG: 1209430 Signed-off-by: vmallika <vmallika@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/10152 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: gluster volume status should show status of bitrot and scrubber daemon (Gaurav Kumar Garg, 2015-05-04, 1 file changed, -16/+18)
  The command gluster volume status <VOLNAME> should show the status and pid information of the bitrot and scrubber daemons. In addition to displaying this information in gluster volume status, there should be commands to show each daemon's status separately:
      gluster volume status <VOLNAME> bitd     (bitd daemon information only)
      gluster volume status <VOLNAME> scrub    (scrubber daemon information only)
  Change-Id: Id86aae1156c8c599347c98e2a538f294d37376e4
  BUG: 1209752
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10175
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Kaushal M <kaushal@redhat.com>
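  For example, on a hypothetical volume named testvol:
      gluster volume status testvol bitd     # bitrot daemon only; "testvol" is a placeholder
      gluster volume status testvol scrub    # scrubber daemon only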
* quota: display error when inode-quota cmds executed with cluster ver < 3.7 (vmallika, 2015-04-27, 1 file changed, -0/+1)
  In a heterogeneous cluster with op_version less than 3.7, inode quotas are accounted only on the bricks running glusterfs 3.7, since the feature is not available in older versions. This causes incorrect values to be displayed when the user queries the inode count from the CLI. This patch displays an error when inode-quota commands are executed with a cluster version less than 3.7.
  Change-Id: Ia0e6d5635d1d8e7b2e2cfc3daa7b7f9e314a263a
  BUG: 1212253
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/10261
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: fix vol_type in volume info xml (Atin Mukherjee, 2015-04-26, 1 file changed, -1/+2)
| | | | | | | | | | | | xml parsing of voltype should be inline with the cli Change-Id: I41ddddac00d07f03b56a041e1c3f5a132fbd7393 BUG: 1212398 Signed-off-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-on: http://review.gluster.org/10271 Tested-by: NetBSD Build System Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: support for tier volumes 'detach start' and 'detach commit' (Dan Lambright, 2015-04-22, 1 file changed, -2/+6)
  These commands work in a manner analogous to rebalancing when removing a brick. The existing migration daemon detects "detach start" and switches to moving data off the hot tier; while in this state all lookups are directed to the cold tier.
      gluster v detach-tier <vol> start
      gluster v detach-tier <vol> commit
  The status and stop cli commands shall be submitted separately.
  Change-Id: I24fda5cc3ba74f5fb8aa9a3234ad51f18b80a8a0
  BUG: 1205540
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Signed-off-by: root <root@localhost.localdomain>
  Reviewed-on: http://review.gluster.org/10108
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Tested-by: NetBSD Build System
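  Following the syntax above, a hypothetical detach of the hot tier from a volume named testvol would be:
      gluster v detach-tier testvol start
      gluster v detach-tier testvol commit    # "testvol" is a placeholder volume name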
* glusterd: CLI commands to create and manage tiered volumes (Dan Lambright, 2015-03-19, 1 file changed, -3/+7)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | A tiered volume is a normal volume with some number of new bricks representing "hot" storage. The "hot" bricks can be attached or detached dynamically to a normal volume. When this happens, a new graph is constructed. The root of the new graph is an instance of the tier translator. One subvolume of the tier translator leads to the old volume, and another leads to the new hot bricks. attach-tier <VOLNAME> [<replica> <COUNT>] <NEW-BRICK> ... [force] volume detach-tier <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force> gluster volume rebalance <volume> tier start gluster volume rebalance <volume> tier stop gluster volume rebalance <volume> tier status The "tier start" CLI command starts a server side daemon. The daemon initiates file level migration based on caching policies. The daemon's status can be monitored and stopped. Note development on the "tier status" command is incomplete. It will be added in a subsequent patch. When the "hot" storage is detached, the tier translator is removed from the graph and the tiered volume reverts to its original state as described in the volume's info file. For more background and design see the feature page [1]. [1] http://www.gluster.org/community/documentation/index.php/Features/data-classification Change-Id: Ic8042ce37327b850b9e199236e5be3dae95d2472 BUG: 1194753 Signed-off-by: Dan Lambright <dlambrig@redhat.com> Reviewed-on: http://review.gluster.org/9753 Reviewed-by: Vijay Bellur <vbellur@redhat.com> Tested-by: Vijay Bellur <vbellur@redhat.com>
* cli/glusterd: cli command implementation for bitrot features (Gaurav Kumar Garg, 2015-03-18, 1 file changed, -0/+10)
| | | | | | | | | | | | | | | | | CLI command for bitrot features. volume bitrot <volname> enable|disable Above command will enable/disable bitrot feature for particular volume. BUG: 1170075 Change-Id: Ie84002ef7f479a285688fdae99c7afa3e91b8b99 Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com> Signed-off-by: Anand nekkunti <anekkunt@redhat.com> Signed-off-by: Dominic P Geevarghese <dgeevarg@redhat.com> Reviewed-on: http://review.gluster.org/9866 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* features/quota: Introducing inode quota (vmallika, 2015-03-18, 1 file changed, -1/+5)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ========================================================================== Inode quota ========================================================================== = Currently, the only way to retrieve the number of files/objects in a = = directory or volume is to do a crawl of the entire directory/volume. = = This is expensive and is not scalable. = = = = The proposed mechanism will provide an easier alternative to determine = = the count of files/objects in a directory or volume. = = = = The new mechanism proposes to store count of objects/files as part of = = an extended attribute of a directory. Each directory's extended = = attribute value will indicate the number of files/objects present = = in a tree with the directory being considered as the root of the tree. = = = = The count value can be accessed by performing a getxattr(). = = Cluster translators like afr, dht and stripe will perform aggregation = = of count values from various bricks when getxattr() happens on the key = = associated with file/object count. = A new interface is introduced: ------------------------------ limit-objects : limit the number of inodes at directory level list-objects : list the directories where the limit is set remove-objects : remove the limit from the directory ========================================================================== CLI COMMAND: gluster volume quota <volname> limit-objects <path> <number> [<percent>] * <number> is a hard-limit for number of objects limitation for path "<path>" If hard-limit is exceeded, creation of file/directory is no longer permitted. * <percent> is a soft-limit for number of objects creation for path "<path>" If soft-limit is exceeded, a warning is issued for each creation. CLI COMMAND: gluster volume quota <volname> remove-objects [path] ========================================================================== CLI COMMAND: gluster volume quota <volname> list-objects [path] ... Sample output: ------------------ Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit exceeded? ------------------------------------------------------------------------ -------------------------------------- /dir 10 80% 10 0 Yes Yes ========================================================================== [root@snapshot-28 dir]# ls a b file11 file12 file13 file14 file15 file16 file17 [root@snapshot-28 dir]# touch a1 touch: cannot touch `a1': Disk quota exceeded * Nine files are created in directory "dir" and directory is included in * the count too. Hence the limit "10" is reached and further file creation fails ========================================================================== Note: We have also done some re-factoring in cli for volume name validation. New function cli_validate_volname is created ========================================================================== Change-Id: I1823497de4f790a2a20ebb1770293472ea33ee2b BUG: 1190108 Signed-off-by: Sachin Pandit <spandit@redhat.com> Signed-off-by: vmallika <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/9769 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
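  Following the sample above, the limit on /dir would have been set with a command along these lines (the volume name is hypothetical):
      gluster volume quota testvol limit-objects /dir 10 80%    # "testvol" is a placeholder; 10 objects hard limit, 80% soft limit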
* Snapshot/clone: clone of a snapshot that will act as a regular volume (Mohammed Rafi KC, 2015-03-18, 1 file changed, -0/+1)
  snapshot clone allows us to take a snapshot of a snapshot. The newly created clone volume is a regular volume with read/write permissions. CLI command:
      snapshot clone <clonename> <snapname>
  Change-Id: Icadb993fa42fff787a330f8f49452da54e9db7de
  BUG: 1199894
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/9750
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
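  For example, with a hypothetical existing snapshot snap1:
      gluster snapshot clone clone1 snap1    # "clone1" and "snap1" are placeholder names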
* feature/snapshot: Interface to delete all snapshots belonging ... (Sachin Pandit, 2014-07-25, 1 file changed, -0/+6)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | to a system as-well-as to a particular volume. Problem : With the current design we can only delete a single snapshot. And the deletion of volume which contains snapshot is not allowed. Because of that user might be forced to delete all the snapshots manually before he is allowed to delete a volume. Solution: Following is the interface with which user can delete all the snapshots of a system or belonging to a particular volume. Syntax : gluster snapshot delete all *To delete all the snapshots present in a system Syntax : gluster snapshot delete volume <volname> *To deletes all the snapshot present in a volume specified. ======================================================================== Sample Output: Case 1 : Deleting a single snapshot. [root@snapshot-24 glusterfs]# gluster snapshot delete snap1 Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y snapshot delete: snap1: snap removed successfully ----------------------------------------------------------------- Case 2 : Deleting all the snapshots in a Volume. [root@snapshot-24 glusterfs]# gluster snapshot delete volume vol1 Volume (vol1) contains 9 snapshot(s). Do you still want to continue and delete them? (y/n) y snapshot delete: snap2: snap removed successfully snapshot delete: snap3: snap removed successfully snapshot delete: snap4: snap removed successfully snapshot delete: snap5: snap removed successfully . . . ----------------------------------------------------------------- Case 3 : Deleting all the snapshots in a system. [root@snapshot-24 glusterfs]# gluster snapshot delete all System contains 4 snapshot(s). Do you still want to continue and delete them? (y/n) y snapshot delete: snap7: snap removed successfully snapshot delete: snap8: snap removed successfully snapshot delete: snap9: snap removed successfully snapshot delete: snap10: snap removed successfully ======================================================================== Change-Id: Ifec8e128ab2011cbbba208376b9c92cfbe7d8d71 BUG: 1112613 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8162 Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Avra Sengupta <asengupt@redhat.com> Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* cli/glusterd: Added support for dispersed volumes (Xavier Hernandez, 2014-07-11, 1 file changed, -1/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Two new options have been added to the 'create' command of the cli interface: disperse [<count>] redundancy <count> Both are optional. A dispersed volume is created by specifying, at least, one of them. If 'disperse' is missing or it's present but '<count>' does not, the number of bricks enumerated in the command line is taken as the disperse count. If 'redundancy' is missing, the lowest optimal value is assumed. A configuration is considered optimal (for most workloads) when the disperse count - redundancy count is a power of 2. If the resulting redundancy is 1, the volume is created normally, but if it's greater than 1, a warning is shown to the user and he/she must answer yes/no to continue volume creation. If there isn't any optimal value for the given number of bricks, a warning is also shown and, if the user accepts, a redundancy of 1 is used. If 'redundancy' is specified and the resulting volume is not optimal, another warning is shown to the user. A distributed-disperse volume can be created using a number of bricks multiple of the disperse count. Change-Id: Iab93efbe78e905cdb91f54f3741599f7ea6645e4 BUG: 1118629 Signed-off-by: Xavier Hernandez <xhernandez@datalab.es> Reviewed-on: http://review.gluster.org/7782 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
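  Using the options above, a hypothetical optimal 3-brick dispersed volume (disperse count 3 minus redundancy 1 gives 2, a power of 2) could be created with:
      gluster volume create testvol disperse 3 redundancy 1 host1:/bricks/b1 host2:/bricks/b2 host3:/bricks/b3    # volume, host and brick paths are placeholders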
* mgmt/glusterd: display snapd status as part of volume status (Raghavendra Bhat, 2014-06-30, 1 file changed, -1/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | | | * Made changes to save the port used by snapd in the info file for the volume i.e. <glusterd-working-directory>/vols/<volname>/info This is how the gluster volume status of a volume would look like for which the uss feature is enabled. [root@tatooine ~]# gluster volume status vol Status of volume: vol Gluster process Port Online Pid ------------------------------------------------------------------------------ Brick tatooine:/export1/vol 49155 Y 5041 Snapshot Daemon on localhost 49156 Y 5080 NFS Server on localhost 2049 Y 5087 Task Status of Volume vol ------------------------------------------------------------------------------ There are no active volume tasks Change-Id: I8f3e5d7d764a728497c2a5279a07486317bd7c6d BUG: 1111041 Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-on: http://review.gluster.org/8114 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* geo-rep/glusterd: Pause and Resume feature for geo-replication (Kotresh H R, 2014-05-13, 1 file changed, -1/+3)
| | | | | | | | | | | | | | | | | This patch introduces pause and resume cli command for geo-replication. Signed-off-by: Kotresh H R <khiremat@redhat.com> Change-Id: I4f5e58e9175fe85077d56088473252391fb57de7 BUG: 1093602 Signed-off-by: Kotresh H R <khiremat@redhat.com> Reviewed-on: http://review.gluster.org/7643 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Avra Sengupta <asengupt@redhat.com> Reviewed-by: Venky Shankar <vshankar@redhat.com> Tested-by: Venky Shankar <vshankar@redhat.com>
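  For example, using the geo-rep command form shown elsewhere in this log, with hypothetical master volume mastervol and slave volume slavevol on host slavehost:
      gluster volume geo-rep mastervol slavehost::slavevol pause
      gluster volume geo-rep mastervol slavehost::slavevol resume    # all names here are placeholders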
* glusterd/snapshot: Activation and De-activation of snapshot (Joseph Fernandes, 2014-05-02, 1 file changed, -7/+7)
| | | | | | | | | | | | | | | | | | | | | Previously, snapshots by default were activated on creation and there was no option to activate or deactivate them on demand. This will allow the user to activate and deactivate on demand. The CLI goes as follows 1) Activate the snap using a command "gluster snapshot activate <snapname> [force]" 2) Deactivate the snap using a command "gluster snapshot deactivate <snapname>" Note: Even now the snapshot will be activated during creation. Change-Id: I0946d800780f26c63fa1fcaf29aabc900140448f BUG: 1061685 Signed-off-by: Joseph Fernandes <josferna@redhat.com> Reviewed-on: http://review.gluster.org/7476 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* rpcgen: Remove autogenerated files instead build on demand (Harshavardhana, 2014-04-25, 1 file changed, -21/+33)
| | | | | | | | | | | | | Avoid modifying autogenerated files and keeping them in repository - autogenerate them on demand from ".x" files Change-Id: I2cdb1fe9b99768ceb80a8cb100fa00bd1d8fe2c6 BUG: 1090807 Signed-off-by: Harshavardhana <harsha@harshavardhana.net> Reviewed-on: http://review.gluster.org/7526 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Niels de Vos <ndevos@redhat.com> Reviewed-by: Anand Avati <avati@redhat.com>
* gluster: GlusterFS Volume Snapshot Feature (Avra Sengupta, 2014-04-11, 1 file changed, -0/+20)
| | | | | | | | | | | | | | | | | | | | | | | | | This is the initial patch for the Snapshot feature. Current patch includes following features: * Snapshot create * Snapshot delete * Snapshot restore * Snapshot list * Snapshot info * Snapshot status * Snapshot config Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9 BUG: 1061685 Signed-off-by: shishir gowda <sgowda@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Signed-off-by: Vijaikumar M <vmallika@redhat.com> Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com> Signed-off-by: Rajesh Joseph <rjoseph@redhat.com> Signed-off-by: Joseph Fernandes <josferna@redhat.com> Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/7128 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Add options to the CLI that let the user control the reset of stats (Dawit Alemu, 2014-01-24, 1 file changed, -0/+8)
  "volume profile info" automatically clears incremental stats, and there isn't a command to fetch stats without clearing incremental stats, or to clear both cumulative and incremental stats. This change introduces two arguments, peek and clear: 'clear' wipes both incremental and cumulative stats, while 'peek' fetches stats without wiping incremental stats.
      'volume profile info peek' - fetches incremental and cumulative stats without wiping incremental stats
      'volume profile info incremental peek' - fetches incremental stats without wiping incremental stats
      'volume profile info clear' - clears both incremental and cumulative stats
  Change-Id: I91834515ad672eca5f882809941147d7d997c4c9
  BUG: 1047416
  Signed-off-by: Dawit Alemu <dalemu@redhat.com>
  Reviewed-on: http://review.gluster.org/6620
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
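  For example, on a hypothetical volume named testvol:
      gluster volume profile testvol info peek     # fetch stats without clearing incremental stats
      gluster volume profile testvol info clear    # wipe both incremental and cumulative stats ("testvol" is a placeholder)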
* cli: More checks in rebalance status output (Kaushal M, 2013-12-03, 1 file changed, -1/+2)
| | | | | | | | | | Change-Id: Ibd2edc5608ae6d3370607bff1c626c8347c4deda BUG: 1031887 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/6337 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* cli/glusterd: Changes to quota command Quota feature re-work (Raghavendra G, 2013-11-26, 1 file changed, -13/+18)
  Following are the cli commands that are new/re-worked:
      volume quota <VOLNAME> {enable|disable|list [<path> ...]|remove <path>|default-soft-limit <percent>} |
      volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} |
      volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>}
      volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]] [detail|clients|mem|inode|fd|callpool]
      volume statedump <VOLNAME> [nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]
  glusterd changes:
  * Quota limits are now set as extended attributes by glusterd from the aux mount created by the cli.
  * The gfids of the directories on which quota limits are set for a given volume are stored in the /var/lib/glusterd/vols/<volname>/quota.conf file in binary format, and their cksum and version are stored in /var/lib/glusterd/vols/<volname>/quota.cksum.
  Original-author: Krutika Dhananjay <kdhananj@redhat.com>
  Original-author: Krishnan Parthasarathi <kparthas@redhat.com>
  BUG: 969461
  Change-Id: If32bba36c67f9c2a30417af9c6389045b2b7c13b
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-on: http://review.gluster.org/6003
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
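  For example, a hypothetical 10GB usage limit with an 80% soft limit on /data of a volume named testvol:
      gluster volume quota testvol limit-usage /data 10GB 80%    # volume name, path, size and percentage are placeholders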
* cli,glusterd: Implement 'volume status tasks' (Krutika Dhananjay, 2013-10-08, 1 file changed, -0/+1)
| | | | | | | | | | | | | | | | | | | oVirt's Gluster Integration needs an inexpensive command that can be executed every 10 seconds to monitor async tasks and their parameters, for all volumes. The solution involves adding a 'tasks' sub-command to 'volume status' to fetch only the async task IDs, type and other relevant parameters. Only the originator glusterd participates in this command as all the information needed is available on all the nodes. This is to make the command suitable for being executed every 10 seconds. Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1 BUG: 1012346 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/6006 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
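  For example, to poll the async tasks of a hypothetical volume testvol, or of all volumes:
      gluster volume status testvol tasks
      gluster volume status all tasks    # "testvol" is a placeholder volume name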
* cli/glusterd: improve rebalance fix-layout status reporting (Ravishankar N, 2013-09-19, 1 file changed, -1/+5)
  Problem: Currently the CLI rebalance status command output does not indicate the 'type' of rebalance, i.e. whether a full rebalance or only a fix-layout was carried out.
  Fix: After the rebalance status of all peers is received by the originator glusterd, alter it to reflect the type of rebalance before passing it on to the CLI process.
  Change-Id: I1940ffda0d36e25e5b33c84a0ea210394cc9e1d3
  BUG: 1004744
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/5826
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd/cli: Geo-Replication "status detail" cmd (Venky Shankar, 2013-09-04, 1 file changed, -2/+2)
  Provides detailed status info in the following format:
      MASTER <master-vol> SLAVE <slave-vol>
      NODE  HEALTH  UPTIME  FILES SYNCD  FILES PENDING  BYTES PENDING  DELETES PENDING
      -----------------------------------------------------------------------------------
  This patch introduces the "status detail" command to show crawl related information in the CLI. These values are "pulled" from gsyncd when "status detail" is executed.
  Change-Id: I1fdaf7180eacce054a864d34971dc160bd7301e1
  BUG: 990420
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-on: http://review.gluster.org/5590
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Tested-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd/cli changes for distributed geo-rep (Avra Sengupta, 2013-07-26, 1 file changed, -2/+5)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Commands: gluster system:: execute gsec_create gluster volume geo-rep <master> <slave-url> create [push-pem] [force] gluster volume geo-rep <master> <slave-url> start [force] gluster volume geo-rep <master> <slave-url> stop [force] gluster volume geo-rep <master> <slave-url> delete gluster volume geo-rep <master> <slave-url> config gluster volume geo-rep <master> <slave-url> status The geo-replication is distributed. The session will be created, and gsyncd will be spawned on all relevant nodes, instead of only one node. geo-rep: Collecting status detail related data Added persistent store for saving information about TotalFilesSynced, TotalSyncTime, TotalBytesSynced Changes in the status information in socket: Existing(Ex): FilesSynced=2;BytesSynced=2507;Uptime=00:26:01; New(Ex): FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978; TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640; Persistent details stored in /var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111 BUG: 847839 Original Author: Avra Sengupta <asengupt@redhat.com> Original Author: Venky Shankar <vshankar@redhat.com> Original Author: Aravinda VK <avishwan@redhat.com> Original Author: Amar Tumballi <amarts@redhat.com> Original Author: Csaba Henk <csaba@redhat.com> Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/5132 Reviewed-by: Vijay Bellur <vbellur@redhat.com> Tested-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Log peer op status at the appropriate time (Krutika Dhananjay, 2013-06-18, 1 file changed, -26/+0)
| | | | | | | | | | Change-Id: Ia8e1af082078f2f791708ba4faa4992bf291dd6e BUG: 961339 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/5023 Reviewed-by: Amar Tumballi <amarts@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd,cli: Enable errstr for peer detach (Kaushal M, 2012-05-18, 1 file changed, -0/+1)
| | | | | | | | | | | | | This patch adds an op_errstr member to the gf1_cli_deprobe_rsp structure to enable return of an errstr to cli. Change-Id: I0cbb6805b05d7cc0603c13d1c1550bb2bd062a7a BUG: 816840 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.com/3307 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Amar Tumballi <amarts@redhat.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd, cli: Enable errstr for peer probe (Kaushal M, 2012-05-18, 1 file changed, -0/+1)
| | | | | | | | | | | | | | | | | | | Presently glusterd only returns an errno to cli for peer probe command. This patch allows glusterd to return an errstr as well to cli. An op_errstr member has been added to gf1_cli_probe_rsp and gd1_mgmt_probe_rsp structs to allow this. In case of an error, cli will display the errstr if it was set. If errstr is not set cli will display the error message based on errno. Also, to allow for return of errstr in cases such as handshake failure, an errstr member has been added to the glusterd_peerctx_t struct. Change-Id: Iece2b44a7181555e960d9fe4517ec6cda4cdb385 BUG: 816840 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.com/3262 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Amar Tumballi <amarts@redhat.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd/remove-brick: Replace ABORT with STOP (shishir gowda, 2012-04-13, 1 file changed, -1/+1)
| | | | | | | | | | | | | | Remove-brick stop now invokes rebalance stop. This leads to a graceful stop of decommissioning. The volfile is also updated (removal of decommission) Change-Id: I5a8f725c0f54439b810ce32d988c21c02229c703 BUG: 811513 Signed-off-by: shishir gowda <shishirng@gluster.com> Reviewed-on: http://review.gluster.com/3126 Reviewed-by: Amar Tumballi <amarts@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vijay@gluster.com>
* glusterd/remove-brick: Remove support for pause option (shishir gowda, 2012-04-13, 1 file changed, -1/+0)
| | | | | | | | | | | Decommissioning through rebalance has no pause option. Change-Id: I90f165cdb2eccfaefc99365ae4b48d81320fb753 BUG: 811459 Signed-off-by: shishir gowda <shishirng@gluster.com> Reviewed-on: http://review.gluster.com/3123 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Amar Tumballi <amarts@redhat.com>
* cli,glusterd: more volume status improvements (Kaushal M, 2012-03-29, 1 file changed, -12/+13)
  The major changes are:
  * "volume status" now supports getting details of the self-heal daemon processes for replica volumes. A new cli option "shd", similar to "nfs", has been introduced for this. The "detail", "fd" and "clients" status ops are not supported for self-heal daemons.
  * The default/normal output of "volume status" has been enhanced to contain information about nfs-server and self-heal daemon processes as well. Some tweaks have been done to the cli output to show appropriate output.
  Also, rebalance/remove-brick status has been changed so that hostnames are displayed instead of uuids.
  Change-Id: I3972396dcf72d45e14837fa5f9c7d62410901df8
  BUG: 803676
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.com/3016
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
  Reviewed-by: Vijay Bellur <vijay@gluster.com>
* cli, glusterd, nfs: "volume status|profile|top" for nfs servers (Kaushal M, 2012-03-14, 1 file changed, -1/+2)
| | | | | | | | | | | | | | | | | | | | | | Enables usage of volume monitoring operations "volume status", "volume top" and "volume profile" for nfs servers. These operations can be performed on nfs-servers by passing "nfs" as an option in cli. The output is similar to the normal brick outputs for these commands. The new syntaxes for the changed commands are as below, #gluster volume profile <VOLNAME> {start|info|stop} [nfs] #gluster volume top <VOLNAME> {[open|read|write|opendir|readdir [nfs]] |[read-perf|write-perf [nfs|{bs <size> count <count>}]]} [brick <brick>] [list-cnt <count>] #gluster volume status [all | <VOLNAME> [nfs|<BRICK>]] [detail|clients|mem|inode|fd|callpool] Change-Id: Ia6eb50c60aecacf9b413d3ea993f4cdd90ec0e07 BUG: 795267 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.com/2820 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
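  For example, on a hypothetical volume named testvol:
      gluster volume profile testvol info nfs     # nfs-server profile info
      gluster volume status testvol nfs detail    # nfs-server status detail ("testvol" is a placeholder)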
* cluster/dht: Rebalance will be a new glusterfs process (shishirng, 2012-02-19, 1 file changed, -8/+2)
  Rebalance will not use any maintenance clients; it is replaced by syncops, with the volfile. Brickop (communication between the glusterd and glusterfs processes) is used for the status and stop commands. Depth-first traversal of directories is maintained, but data is migrated as and when encountered:
      fix-layout (dir)
      do
          Complete migrate-data of dir
          fix-layout (subdir)
      done
  Rebalance state is saved in the vol file for restart-ability. A disconnect event and the pidfile state determine the defrag-status.
  Signed-off-by: shishirng <shishirng@gluster.com>
  Change-Id: Iec6c80c84bbb2142d840242c28db3d5f5be94d01
  BUG: 763844
  Reviewed-on: http://review.gluster.com/2540
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
* cli/glusterd: volume status modification (Rajesh Amaravathi, 2012-02-18, 1 file changed, -1/+0)
| | | | | | | | | | | | | | | | | | | | | | | | | | * Method of getting mount details of brick has been changed from direct reading of /etc/mtab to using libc's <mntent.h>, providing a fairly portable version independent of different linux distributions. It is only supported on Linux though. * Wrong fs type (rootfs for /) in fedora-based distributions has been fixed. * Allows options (detail, mem, fd, et al) to "all" volumes. * Use of the fnmatch's GNU extension flag, FNM_LEADING_DIR is restricted to Linux hosts only. In case of non-Linux hosts, partial match functionality is absent. Change-Id: I102ce808c192ef635c2536a2167101be0aa0fc50 BUG: 786367 Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com> Reviewed-on: http://review.gluster.com/2705 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-by: Vijay Bellur <vijay@gluster.com>