path: root/cli/src/cli-cmd-parser.c
Commit log (each entry: commit message, author, date, files changed, lines -/+)
* glusterd/snapshot: Snapshot should be deactivated when it is created. (vmallika, 2014-12-18, 1 file, -10/+41)
  By default a snapshot should be deactivated, and this should be a
  configurable option. The behaviour can be configured by the command below:

    gluster snapshot config activate-on-create <enable|disable>

  Change-Id: I1911595c32beed43bb2fca4bf99f0d264b422513
  BUG: 1170921
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/8985
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/9241
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
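  Usage sketch (hedged; 'vol1' and 'snap1' are illustrative names, and the
  activate command is the one introduced by the activation/deactivation
  patch further down this log):

    # keep new snapshots deactivated on creation (the new default)
    $ gluster snapshot config activate-on-create disable
    # a snapshot then has to be activated explicitly before use
    $ gluster snapshot create snap1 vol1
    $ gluster snapshot activate snap1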
* cli/snapshot: Add confirmation dialog to snapshot restore operation. (Sachin Pandit, 2014-09-23, 1 file, -2/+19)
  When restoring a volume, the user was not prompted for confirmation.
  Since restoring a volume rolls back the data to a previous point in time,
  there is the potential for updates to be lost. Hence it is better to
  display a confirmation dialogue during the snapshot restore operation.

  Change-Id: I7b23eaeb43ad2aafa508e2ca5750d9b0fc7d6e36
  BUG: 1145092
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8525
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8806
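  Illustrative session (the exact prompt wording is an assumption; 'snap1'
  is a placeholder):

    # restore now asks for confirmation before rolling back
    $ gluster snapshot restore snap1
    Restore operation will replace the original volume with the
    snapshotted volume. Do you still want to continue? (y/n) y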
* cli/snapshot: Update the snapshot delete syntax in the documentation. (Sachin Pandit, 2014-09-23, 1 file, -2/+2)
  Change-Id: Id1a4b9684a8dd5750ee6eed841e3d5195407fb7e
  BUG: 1145084
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8534
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8805
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
* feature/snapshot: Interface to delete all snapshots belonging to a system as well as to a particular volume (Sachin Pandit, 2014-09-23, 1 file, -18/+55)
  Problem: With the current design we can only delete a single snapshot,
  and deletion of a volume which contains snapshots is not allowed.
  Because of that, the user may be forced to delete all the snapshots
  manually before being allowed to delete a volume.

  Solution: The following interface lets the user delete all the snapshots
  of a system, or all those belonging to a particular volume.

  Syntax: gluster snapshot delete all
          (deletes all the snapshots present in the system)

  Syntax: gluster snapshot delete volume <volname>
          (deletes all the snapshots present in the specified volume)

  ========================================================================
  Sample output:

  Case 1: Deleting a single snapshot.

  [root@snapshot-24 glusterfs]# gluster snapshot delete snap1
  Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
  snapshot delete: snap1: snap removed successfully
  -----------------------------------------------------------------

  Case 2: Deleting all the snapshots in a volume.

  [root@snapshot-24 glusterfs]# gluster snapshot delete volume vol1
  Volume (vol1) contains 9 snapshot(s).
  Do you still want to continue and delete them? (y/n) y
  snapshot delete: snap2: snap removed successfully
  snapshot delete: snap3: snap removed successfully
  snapshot delete: snap4: snap removed successfully
  snapshot delete: snap5: snap removed successfully
  .
  .
  .
  -----------------------------------------------------------------

  Case 3: Deleting all the snapshots in the system.

  [root@snapshot-24 glusterfs]# gluster snapshot delete all
  System contains 4 snapshot(s).
  Do you still want to continue and delete them? (y/n) y
  snapshot delete: snap7: snap removed successfully
  snapshot delete: snap8: snap removed successfully
  snapshot delete: snap9: snap removed successfully
  snapshot delete: snap10: snap removed successfully
  ========================================================================

  Change-Id: Ifec8e128ab2011cbbba208376b9c92cfbe7d8d71
  BUG: 1145083
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8162
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8798
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
* cli/snapshot: gluster volume info should not show options which are not set explicitly (Sachin Pandit, 2014-09-23, 1 file, -17/+6)
  Problem: Even though the snap-max-hard-limit, snap-max-soft-limit and
  auto-delete values were not set explicitly, they were shown in the
  output of gluster volume info.

  Solution: Check whether the value is already present in the dictionary
  (that means it is set); if the value is not present, consider the
  default value.

  NOTE: This patch doesn't solve the problem where values which are set
  globally are displayed in gluster volume info.

  Change-Id: I61445b3d2a12eb68c38a19bea53b9051ad028050
  BUG: 1145020
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8191
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8793
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
* build: make GLUSTERD_WORKDIR rely on localstatedir (Harshavardhana, 2014-09-03, 1 file, -2/+0)
  Backport from master branch - http://review.gluster.org/#/c/8246/

  - Break away from the previously hard-coded '/var/lib/glusterd';
    instead rely on the 'configure' value from 'localstatedir'
  - Provide 's/lib/db' as the default working directory for the gluster
    management daemon for BSD and Darwin based installations
  - loff_t is really off_t on Darwin
  - Fix off the warnings generated by clang on FreeBSD/Darwin
  - Now 'tests/*' use GLUSTERD_WORKDIR, a common variable for all
    platforms.
  - Define a proper environment for running tests: define correct PATH
    and LD_LIBRARY_PATH when running tests, so that the desired version
    of glusterfs is used, regardless of where it is installed.
    (Thanks to manu@netbsd.org for this additional work)

  Change-Id: I06e684ac4c26d1e74c9daf76753403ad15f79276
  BUG: 1130308
  Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
  Reviewed-on: http://review.gluster.org/8486
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
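  Hedged example of the new behaviour (--localstatedir is standard
  autoconf; the derived path is an assumption consistent with the commit):

    # the glusterd working directory now follows localstatedir instead of
    # a hard-coded path, e.g. /var/lib/glusterd here
    $ ./configure --localstatedir=/var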
* cli/glusterd: Added support for dispersed volumes (Xavier Hernandez, 2014-07-11, 1 file, -5/+198)
  Two new options have been added to the 'create' command of the cli
  interface:

    disperse [<count>] redundancy <count>

  Both are optional. A dispersed volume is created by specifying at least
  one of them. If 'disperse' is missing, or it's present but '<count>' is
  not, the number of bricks enumerated on the command line is taken as
  the disperse count.

  If 'redundancy' is missing, the lowest optimal value is assumed. A
  configuration is considered optimal (for most workloads) when the
  disperse count minus the redundancy count is a power of 2. If the
  resulting redundancy is 1, the volume is created normally, but if it's
  greater than 1, a warning is shown to the user and he/she must answer
  yes/no to continue volume creation. If there isn't any optimal value
  for the given number of bricks, a warning is also shown and, if the
  user accepts, a redundancy of 1 is used. If 'redundancy' is specified
  and the resulting volume is not optimal, another warning is shown to
  the user.

  A distributed-disperse volume can be created using a number of bricks
  that is a multiple of the disperse count.

  Change-Id: Iab93efbe78e905cdb91f54f3741599f7ea6645e4
  BUG: 1118629
  Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
  Reviewed-on: http://review.gluster.org/7782
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
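  A hedged example under these rules (host and brick names are
  placeholders): with 6 bricks and redundancy 2 the disperse count is 6,
  and the data count 6 - 2 = 4 is a power of 2, so no warning is expected:

    $ gluster volume create dispvol disperse 6 redundancy 2 \
        server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3 \
        server4:/bricks/b4 server5:/bricks/b5 server6:/bricks/b6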
* mgmt/glusterd: display snapd status as part of volume status (Raghavendra Bhat, 2014-06-30, 1 file, -0/+13)
  * Made changes to save the port used by snapd in the info file for the
    volume, i.e. <glusterd-working-directory>/vols/<volname>/info

  This is how the gluster volume status of a volume would look like for
  which the uss feature is enabled.

  [root@tatooine ~]# gluster volume status vol
  Status of volume: vol
  Gluster process                         Port    Online  Pid
  ------------------------------------------------------------------------------
  Brick tatooine:/export1/vol             49155   Y       5041
  Snapshot Daemon on localhost            49156   Y       5080
  NFS Server on localhost                 2049    Y       5087

  Task Status of Volume vol
  ------------------------------------------------------------------------------
  There are no active volume tasks

  Change-Id: I8f3e5d7d764a728497c2a5279a07486317bd7c6d
  BUG: 1111041
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-on: http://review.gluster.org/8114
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd/snapshot: Provide enable/disable option for the snapshot auto-delete feature (Sachin Pandit, 2014-06-13, 1 file, -4/+53)
  This patch provides an interface to enable or disable the auto-delete
  feature.

  Syntax: gluster snapshot config auto-delete <enable/disable>

  DETAILS:
  1) When the auto-delete feature is disabled and the soft-limit is
  reached, the user is given a warning about exceeding the soft-limit
  along with the successful snapshot creation message (the oldest
  snapshot is not deleted). Upon reaching the hard-limit, further
  snapshot creation is not allowed.

  Example:
  ------------------------------------------------------------------
  |Case 1: Upon reaching the soft-limit
  |
  |Snapshot create : snap successfully created.
  |Warning : soft-limit of volume (vol) is reached. Snapshot creation
  |is not possible once hard-limit is reached.
  |
  |-----------------------------------------------------
  |Case 2: Upon reaching the hard-limit
  |
  |Snapshot create : snap creation failed.
  |Error : hard-limit of volume (vol) is reached, hence it is not
  |possible to take further snapshots. Please delete a few snapshots
  |of the volume (vol) before taking another snapshot.
  ------------------------------------------------------------------

  2) When the auto-delete feature is enabled, as soon as the soft-limit
  is reached the oldest snapshot is deleted for every successful snapshot
  creation (same as the existing method). This ensures that the number of
  snapshots created is not more than snap-max-hard-limit.

  Change-Id: Ie3ca64bbd2c763371f541cd2e378314e73b695b4
  BUG: 1105415
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8017
  Tested-by: Justin Clift <justin@gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli/snapshot: Don't do the validation of snapshot config limit in the CLI code path (Sachin Pandit, 2014-06-04, 1 file, -13/+1)
  Problem: If we try to set the volume snap limit to more than 256, it
  always says the value cannot exceed 256, irrespective of the system max
  limit.

  Solution: Don't do the validation on the CLI side.

  Change-Id: I292c0bc91a1806cd4906fca0151dd98135e6e49a
  BUG: 1098122
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/7777
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
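  Hedged example ('vol1' and the value are illustrative):

    # no longer hard-rejected at 256 by the CLI; glusterd validates the
    # value against the actual system limit instead
    $ gluster snapshot config vol1 snap-max-hard-limit 300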
* glusterd/status: First fetch the snapcount, then send the RPC call for individual snapshots, for snapshot status (Sachin Pandit, 2014-06-03, 1 file, -2/+5)
  Problem: Initially we used to do all the calculation on the glusterd
  side; once all the information related to a snap was fetched, it was
  aggregated into one dictionary and sent back to the CLI. The problem
  with this approach was that when the number of snapshots is very high,
  the CLI will time out.

  Solution: First fetch the snapcount and snapnames from glusterd, then
  make individual calls using the snapnames fetched. This resolves the
  timeout problem.

  Change-Id: I32609b3898ed227c804dd4d8ee4516f081240756
  BUG: 1087676
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/7456
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* geo-rep/glusterd: Pause and Resume feature for geo-replication (Kotresh H R, 2014-05-13, 1 file, -2/+17)
  This patch introduces pause and resume cli commands for
  geo-replication (see the usage sketch below).

  Signed-off-by: Kotresh H R <khiremat@redhat.com>
  Change-Id: I4f5e58e9175fe85077d56088473252391fb57de7
  BUG: 1093602
  Signed-off-by: Kotresh H R <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/7643
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Venky Shankar <vshankar@redhat.com>
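  The commit body doesn't spell out the syntax; a hedged sketch based on
  the geo-rep command family shown elsewhere in this log ('mastervol' and
  the slave URL are placeholders):

    $ gluster volume geo-rep mastervol slavehost::slavevol pause [force]
    $ gluster volume geo-rep mastervol slavehost::slavevol resume [force]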
* glusterd: Volname, brickpath & volfpath length validation (Atin Mukherjee, 2014-05-03, 1 file, -2/+8)
  While creating a volume and adding a brick, validation for
  _POSIX_PATH_MAX is done on the absolute pathname instead of the
  relative pathname, due to which a brickpath having fewer than
  _POSIX_PATH_MAX characters may also fail the validation if the
  directory length is greater than
  (_POSIX_PATH_MAX - strlen(brickpath/volume name)).

  This fix also addresses one cli response message correction, which said
  the volume file is too long instead of the brick path being too long
  (when brickpath length validation doesn't fail but volfile length
  validation does).

  It is also important to note that with the current design of volfile
  naming, it cannot be guaranteed that volname and brickpath can have a
  maximum of _POSIX_PATH_MAX characters.

  Change-Id: I1283d1f9dea96ae797620002c8723719f26a866d
  BUG: 1085330
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/7420
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/snapshot: Activation and de-activation of snapshots (Joseph Fernandes, 2014-05-02, 1 file, -7/+149)
  Previously, snapshots were by default activated on creation and there
  was no option to activate or deactivate them on demand. This allows the
  user to activate and deactivate on demand. The CLI goes as follows:

  1) Activate the snap using the command
     "gluster snapshot activate <snapname> [force]"
  2) Deactivate the snap using the command
     "gluster snapshot deactivate <snapname>"

  Note: Even now the snapshot will be activated during creation.

  Change-Id: I0946d800780f26c63fa1fcaf29aabc900140448f
  BUG: 1061685
  Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  Reviewed-on: http://review.gluster.org/7476
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* [glusterd/snapshot] snapshot create force option (Joseph Fernandes, 2014-04-30, 1 file, -11/+18)
  Implement the force option in snapshot create, i.e.
  1) Creation of a snapshot fails if the original volume's bricks are
     down.
  2) With the force option, creation of the snapshot will continue even
     if the original volume's bricks are down.

  This was the fix for bugs 1089527 and 1083502.

  Change-Id: I8de0242adf8ee0af00db9fa8701d86fabc12e7fc
  BUG: 1090042
  Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  Reviewed-on: http://review.gluster.org/7520
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
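  Usage sketch ('snap1'/'vol1' are placeholders):

    # fails if any brick of vol1 is down
    $ gluster snapshot create snap1 vol1
    # proceeds even with bricks down
    $ gluster snapshot create snap1 vol1 force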
* snapshot/config: Fix for bug which states the gluster snapshot config command should only accept decimal numeric values (Sachin Pandit, 2014-04-29, 1 file, -6/+26)
  Syntax: gluster snapshot config [volname]
          [snap-max-hard-limit <count>] [snap-max-soft-limit <percentage>]

  Problem: Snapshot config used to consider an alphanumeric value
  starting with a digit as an integer (example: "9abc" is converted
  to "9").

  Solution: Refined the code to check whether the entered value is
  numeric.

  This patch also fixes some minor problems related to snapshot config:
  1) Output correction in gluster snapshot config snap-max-soft-limit.
  2) Setting the soft limit to greater than 100% displayed
     "Invalid snap-max-soft-limit 0". The error message used to display
     "zero" in the output; changed this to display the relevant value.
  3) Setting greater than the allowed snap-max-hard-limit output needs to
     have a space in between.

  Change-Id: Ie7c7045722fe57b2b3c50c873664b67c28eb3853
  BUG: 1087203
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/7457
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
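  Before/after sketch of the numeric check ('vol1' is a placeholder and
  the rejection behaviour is paraphrased from the fix):

    # previously parsed as 9; now rejected as non-numeric
    $ gluster snapshot config vol1 snap-max-hard-limit 9abc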
* cli: Error out when the character "." is present in a volume name (ggarg, 2014-04-28, 1 file, -2/+14)
  Previously it failed without showing any information about why it
  failed. Now with this fix, when "." or any other non-alphanumeric
  character is present in a volume name, it will give an error message.

  Change-Id: I17e8e69c08345c4d760f3ba333fe841e754bc9c8
  BUG: 921215
  Signed-off-by: ggarg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/7364
  Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
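  Illustrative failure ('my.vol' and the brick path are placeholders; the
  exact message wording may differ):

    # rejected: "." is not an allowed character in a volume name
    $ gluster volume create my.vol server1:/bricks/b1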
* build: MacOSX Porting fixes (Harshavardhana, 2014-04-24, 1 file, -3/+3)
  git@forge.gluster.org:~schafdog/glusterfs-core/osx-glusterfs

  Working functionality on MacOSX:
  - GlusterD (management daemon)
  - GlusterCLI (management cli)
  - GlusterFS FUSE (using OSXFUSE)
  - GlusterNFS (without NLM - issues with rpc.statd)

  Change-Id: I20193d3f8904388e47344e523b3787dbeab044ac
  BUG: 1089172
  Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
  Signed-off-by: Dennis Schafroth <dennis@schafroth.com>
  Tested-by: Harshavardhana <harsha@harshavardhana.net>
  Tested-by: Dennis Schafroth <dennis@schafroth.com>
  Reviewed-on: http://review.gluster.org/7503
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* gluster: GlusterFS Volume Snapshot Feature (Avra Sengupta, 2014-04-11, 1 file, -2/+936)
  This is the initial patch for the Snapshot feature. The current patch
  includes the following features:
  * Snapshot create
  * Snapshot delete
  * Snapshot restore
  * Snapshot list
  * Snapshot info
  * Snapshot status
  * Snapshot config

  Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9
  BUG: 1061685
  Signed-off-by: shishir gowda <sgowda@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/7128
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
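  A hedged sketch of the command family implied by the list above
  ('snap1'/'vol1' are placeholders; per-command options are omitted):

    $ gluster snapshot create snap1 vol1    # take a snapshot of vol1
    $ gluster snapshot list                 # list existing snapshots
    $ gluster snapshot info snap1           # detailed information
    $ gluster snapshot status snap1         # brick-level status
    $ gluster snapshot restore snap1        # roll the volume back
    $ gluster snapshot delete snap1         # remove the snapshot
    $ gluster snapshot config vol1          # show/set snapshot limits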
* cli: Removal of dead code (Atin Mukherjee, 2014-04-04, 1 file, -5/+0)
  Dead code reported by covscan is removed from cli-cmd-parser.c.

  Fix for coverity CID: 1195423

  Change-Id: Ice1771dc8b3ef47fd2e63b380b12e850dc1d5d95
  BUG: 789278
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/7389
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* cli: remove-brick no longer defaults to commit-force (Atin Mukherjee, 2014-04-02, 1 file, -6/+4)
  Problem: When gluster volume remove-brick is executed without any
  option, it defaults to a forced commit, which results in data loss.

  Fix: remove-brick cannot be executed without an explicit option; the
  user needs to provide the option on the command line, else the command
  returns a usage error.

  Earlier usage: volume remove-brick <VOLNAME> [replica <COUNT>]
                 <BRICK> ... [start|stop|status|commit|force]

  Current usage: volume remove-brick <VOLNAME> [replica <COUNT>]
                 <BRICK> ... <start|stop|status|commit|force>

  Change-Id: I2a49131f782a6c0dcd03b4dc8ebe5907999b0b49
  BUG: 1077682
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/7292
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Shyamsundar Ranganathan <sam.somari@gmail.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Changed log rotate command format to the common format (Raghavendra Talur, 2014-03-05, 1 file, -1/+4)
  Problem: The syntax of log rotate was:
    volume log rotate <volname> [<brick>]

  All other commands are of the form:
    volume <verb|noun> <volname> [<actions>|options]

  Solution: Changed log rotate command usage to the common format. The
  old syntax is also supported; it will be deprecated in the next
  release.

  Change-Id: Ia6f6665185a1549c6f79ca763599b446f29e2c78
  BUG: 844187
  Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
  Reviewed-on: http://review.gluster.org/4594
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
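  Old and new forms side by side (a hedged sketch; 'vol1' is a
  placeholder and the new form is inferred from the common format
  described above):

    # old syntax (still accepted, to be deprecated)
    $ gluster volume log rotate vol1
    # new common-format syntax: the volume name precedes the verb
    $ gluster volume log vol1 rotate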
* cli: Add options to the CLI that let the user control the reset of stats (Dawit Alemu, 2014-01-24, 1 file, -10/+27)
  "volume profile info" automatically clears incremental stats. There
  isn't a command to:
  - fetch stats without clearing incremental stats, and
  - clear cumulative and incremental stats.

  This change introduces two arguments (i.e. peek and clear). 'clear'
  will wipe both incremental and cumulative stats. 'peek' fetches stats
  without wiping incremental stats.

  'volume profile info peek' - fetches incremental and cumulative stats
  without wiping incremental stats
  'volume profile info incremental peek' - fetches incremental stats
  without wiping incremental stats
  'volume profile info clear' - clears both incremental and cumulative
  stats

  Change-Id: I91834515ad672eca5f882809941147d7d997c4c9
  BUG: 1047416
  Signed-off-by: Dawit Alemu <dalemu@redhat.com>
  Reviewed-on: http://review.gluster.org/6620
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: enable/disable quota using volume set cmd deprecated (Anuradha Talur, 2013-12-19, 1 file, -31/+0)
  Using the command 'gluster volume set <volname> quota {on|off}' doesn't
  really enable/disable quota, so a message is provided to the user. The
  message asks the user to use
  'gluster volume quota <volname> {enable|disable}' instead.

  An improvement of the previous method to solve the issue.

  Change-Id: Ibeda35bb49ec7b541d53e592b48fd132380fce76
  BUG: 1024626
  Signed-off-by: Anuradha Talur <atalur@redhat.com>
  Reviewed-on: http://review.gluster.org/6512
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/geo-rep: more glusterd and cli fixes for geo-rep (Ajeet Jha, 2013-12-12, 1 file, -7/+5)
  -> Handle option validation cases in the reset case.
  -> Create a valid conf path when glusterd restarts.
  -> Read the gsyncd worker thread status and display it.
  -> Display status-detail per worker.
  -> Fetch checkpoint info in geo-rep status.
  -> use-tarssh value validation added.

  misc: miscellaneous geo-rep fixes based on cluster, logrotate etc.:
  -> cluster/dht: fix 'stime' getxattr getting overwritten.
  -> cluster/afr: return max of 'stime' values in subvol.
  -> geo-rep-logrotate: Send SIGHUP to geo-rep auxiliary.
  -> cluster/dht: fix convoluted logic while aggregating.
  -> cluster/*: fix 'stime' min/max fetch logic.

  Change-Id: I811acea0bbd6194797a3e55d89295d1ea021ac85
  BUG: 1036552
  Signed-off-by: Ajeet Jha <ajha@redhat.com>
  Reviewed-on: http://review.gluster.org/6405
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Amar Tumballi <amarts@gmail.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* cli: enabling/disabling quota using volume set cmd deprecated (Anuradha, 2013-12-11, 1 file, -0/+31)
  Using the command 'gluster volume set <volname> quota {on|off}' doesn't
  really enable/disable quota, so a message is provided to the user. The
  message asks the user to use
  'gluster volume quota <volname> {enable|disable}' instead.

  Change-Id: Ic4e12d5f8298ca9233639b4f6f8601ed6937fecb
  BUG: 1024626
  Signed-off-by: Anuradha <atalur@redhat.com>
  Reviewed-on: http://review.gluster.org/6192
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Add an option to fetch just incremental or cumulative I/O information (Dawit Alemu, 2013-12-03, 1 file, -7/+23)
  "volume profile info" fetches both cumulative and incremental I/O
  statistics. There isn't a way to fetch just cumulative or incremental
  statistics.

  This change introduces two optional arguments, namely "incremental"
  and "cumulative", that can be tacked on to 'volume profile info'. In
  other words, the new command format is:

    volume profile <VOLNAME> {start | info [incremental | cumulative] | stop} [nfs]

  'volume profile info incremental' - fetches incremental stats
  'volume profile info cumulative' - fetches cumulative stats
  'volume profile info' - fetches incremental and cumulative stats

  Change-Id: I5ddb45d990542ea611d23d251efebfec46f472d0
  BUG: 1030580
  Signed-off-by: Dawit Alemu <dalemu@redhat.com>
  Reviewed-on: http://review.gluster.org/6264
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* cli, glusterd: More quota fixes ... (Krutika Dhananjay, 2013-11-30, 1 file, -1/+5)
  ... which may be grouped under the following categories:

  1. Fix incorrect cli exit status for the 'quota list' cmd.
  2. Print an appropriate error message on quota parse errors in cli.
     Authored by: Anuradha Talur <atalur@redhat.com>
  3. glusterd: Improve quota validation during stage-op.
  4. Fix peer probe issues resulting from quota conf checksum mismatches.
  5. Enhancements to CLI output in the event of quota command failures.
     Authored by: Kaushal Madappa <kmadappa@redhat.com>
  7. Move aux mount location from /tmp to /var/run/gluster.
     Authored by: Krishnan Parthasarathi <kparthas@redhat.com>
  8. Fix performance issues in quota limit-usage.
     Authored by: Krutika Dhananjay <kdhananj@redhat.com>

  Note: Some functions that were used in an earlier version of quota,
  and that aren't called anymore, have been removed.

  Change-Id: I9d874f839ae5fdcfbe6d4f2d727eac091f27ac57
  BUG: 969461
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/6366
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli/glusterd: Changes to quota command - Quota feature re-work (Raghavendra G, 2013-11-26, 1 file, -13/+116)
  Following are the cli commands that are new/re-worked:
  ======================================================
  volume quota <VOLNAME> {enable|disable|list [<path> ...]|remove <path>|default-soft-limit <percent>} |
  volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} |
  volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>}

  volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]] [detail|clients|mem|inode|fd|callpool]
  volume statedump <VOLNAME> [nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]

  glusterd changes:
  =================
  * Quota limits are now set as extended attributes by glusterd from the
    aux mount created by the cli.
  * The gfids of the directories on which quota limits are set for a
    given volume are stored in
    /var/lib/glusterd/vols/<volname>/quota.conf in binary format, and
    its cksum and version are stored in
    /var/lib/glusterd/vols/<volname>/quota.cksum.

  Original-author: Krutika Dhananjay <kdhananj@redhat.com>
  Original-author: Krishnan Parthasarathi <kparthas@redhat.com>
  BUG: 969461
  Change-Id: If32bba36c67f9c2a30417af9c6389045b2b7c13b
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-on: http://review.gluster.org/6003
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* bd_map: Remove bd_map xlator (M. Mohan Kumar, 2013-11-13, 1 file, -49/+2)
  Remove bd_map xlator and CLI related changes.

  Change-Id: If7086205df1907127c1a1fa4ba603f1c48421d09
  BUG: 1028672
  Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
  Reviewed-on: http://review.gluster.org/5747
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* cluster/afr: [Feature] Command implementation to get heal-count (Venkatesh Somyajulu, 2013-10-14, 1 file, -14/+143)
  Currently, to know the number of files to be healed, the user either
  has to go to the backend and check the number of entries present in
  the indices/xattrop directory, or run the
  "gluster volume heal vol-name info" command. If a volume consists of a
  large number of bricks, going to each backend and counting the entries
  is a time-consuming task; and with the info command, if the number of
  entries in the indices/xattrop directory is very large, it will also
  consume time.

  So, as a feature, a new command is implemented.

  Command 1: gluster volume heal vn statistics heal-count
  This command will get the number of entries present in every brick of
  a volume. The output displays only the entry counts.

  Command 2: gluster volume heal vn statistics heal-count
             replica 192.168.122.1:/home/user/brickname
  Here, if we are concerned with just one replica, providing any one
  brick of that replica will get the number of entries to be healed for
  that replica only.

  Example: Replicate volume with replica count 2.

  Backend status:
  --------------
  [root@dhcp-0-17 xattrop]# ls -lia | wc -l
  1918

  NOTE: Out of 1918, 2 entries are <xattrop-gfid> dummy entries, so the
  actual number of entries to be healed is 1916.

  [root@dhcp-0-17 xattrop]# pwd
  /home/user/2ty/.glusterfs/indices/xattrop

  Command output:
  --------------
  Gathering count of entries to be healed on volume volume3 has been successful

  Brick 192.168.122.1:/home/user/22iu
  Status: Brick is Not connected
  Entries count is not available

  Brick 192.168.122.1:/home/user/2ty
  Number of entries: 1916

  Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
  BUG: 1015990
  Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
  Reviewed-on: http://review.gluster.org/6044
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* cluster/afr: Implementation of the command "gluster volume heal vn statistics" (Venkatesh Somyajulu, 2013-10-14, 1 file, -0/+5)
  The "gluster volume heal volumename statistics" command gives a summary
  of the afr crawls done, based on the entries present in the xattrop
  directory. Whenever afr crawls are attempted, the beginning time of the
  crawl, the end time of the crawl, the number of files healed, the
  heal-failed count and the number of files in split-brain are shown,
  along with the type of the crawl. If a crawl is already in progress,
  it will give the number of files healed, the heal-failed count and the
  number of files in split-brain from the beginning of the crawl, and
  instead of the end time of the crawl a "CRAWL IN PROGRESS" message will
  be shown.

  Output format:
  command: "gluster volume heal volume-name statistics"

  Output:
  Gathering afr crawl statistics
  crawl statistics on volume volume-name has been successful
  ------------------------------------------------

  Crawl statistics for brick no 0
  Hostname of brick 192.168.122.248

  Starting time of crawl: Wed Jul 10 15:52:38 2013
  Ending time of crawl: Wed Jul 10 15:52:38 2013
  Type of crawl: INDEX
  No. of entries healed: 0
  No. of entries in split-brain: 0
  No. of heal failed entries: 0

  Starting time of crawl: Wed Jul 10 15:52:38 2013
  Ending time of crawl: Wed Jul 10 15:52:38 2013
  Type of crawl: INDEX
  No. of entries healed: 0
  No. of entries in split-brain: 0
  No. of heal failed entries: 0
  ------------------------------------------------

  Crawl statistics for brick no 1
  Hostname of brick 192.168.122.1

  Starting time of crawl: Wed Jul 10 15:52:42 2013
  Ending time of crawl: Wed Jul 10 15:52:42 2013
  Type of crawl: INDEX
  No. of entries healed: 0
  No. of entries in split-brain: 0
  No. of heal failed entries: 0

  Starting time of crawl: Wed Jul 10 15:52:42 2013
  Ending time of crawl: Wed Jul 10 15:52:42 2013
  Type of crawl: INDEX
  No. of entries healed: 0
  No. of entries in split-brain: 0
  No. of heal failed entries: 0
  --------------------------------------------------

  Change-Id: I10bf9d10b005741db9973fb1352e0dd59ed99aa9
  BUG: 949400
  Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
  Reviewed-on: http://review.gluster.org/4790
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* cli,glusterd: Implement 'volume status tasks' (Krutika Dhananjay, 2013-10-08, 1 file, -5/+14)
  oVirt's Gluster Integration needs an inexpensive command that can be
  executed every 10 seconds to monitor async tasks and their parameters,
  for all volumes.

  The solution involves adding a 'tasks' sub-command to 'volume status'
  to fetch only the async task IDs, type and other relevant parameters.
  Only the originator glusterd participates in this command, as all the
  information needed is available on all the nodes. This is to make the
  command suitable for being executed every 10 seconds.

  Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
  BUG: 1012346
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/6006
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
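  Usage sketch of the sub-command described above:

    # cheap enough to poll every 10 seconds; lists async task IDs, types
    # and parameters for all volumes
    $ gluster volume status all tasks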
* glusterd/cli: Status detail cli parse check and vol geo status crash fix (Avra Sengupta, 2013-09-21, 1 file, -0/+7)
  Change-Id: I1841864273fc4242de15fbfcf76fd5de40269f28
  BUG: 1006249
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/5889
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd/cli: Geo-Replication "status detail" cmd (Venky Shankar, 2013-09-04, 1 file, -2/+17)
  Provides detailed status info in the following format:

  MASTER <master-vol> SLAVE <slave-vol>
  NODE  HEALTH  UPTIME  FILES SYNCD  FILES PENDING  BYTES PENDING  DELETES PENDING
  -----------------------------------------------------------------------------------

  This patch introduces the "status detail" command to show
  crawl-related information in the CLI. These values are "pulled" from
  gsyncd when "status detail" is executed.

  Change-Id: I1fdaf7180eacce054a864d34971dc160bd7301e1
  BUG: 990420
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-on: http://review.gluster.org/5590
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Tested-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
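  Invocation sketch ('mastervol' and the slave URL are placeholders; the
  command form follows the geo-rep family listed further down this log):

    # crawl-level detail pulled live from gsyncd
    $ gluster volume geo-rep mastervol slavehost::slavevol status detail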
* cli: Display error messages if the virt file has been deleted or is invalid (Avra Sengupta, 2013-07-27, 1 file, -3/+17)
  "gluster volume set <VOLNAME> group virt" will display an error message
  if the virt file is deleted or is invalid.

  Change-Id: Icb202b6a445597fcd9a3dcef8001891f2601a115
  BUG: 916127
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/4586
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd/cli changes for distributed geo-rep (Avra Sengupta, 2013-07-26, 1 file, -89/+167)
  Commands:
  gluster system:: execute gsec_create
  gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
  gluster volume geo-rep <master> <slave-url> start [force]
  gluster volume geo-rep <master> <slave-url> stop [force]
  gluster volume geo-rep <master> <slave-url> delete
  gluster volume geo-rep <master> <slave-url> config
  gluster volume geo-rep <master> <slave-url> status

  The geo-replication is distributed: the session will be created, and
  gsyncd will be spawned, on all relevant nodes, instead of only one
  node.

  geo-rep: Collecting status-detail-related data
  Added a persistent store for saving information about TotalFilesSynced,
  TotalSyncTime, TotalBytesSynced.

  Changes in the status information in the socket:
  Existing (Ex):
  FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;
  New (Ex):
  FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;

  Persistent details are stored in
  /var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status

  Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
  BUG: 847839
  Original Author: Avra Sengupta <asengupt@redhat.com>
  Original Author: Venky Shankar <vshankar@redhat.com>
  Original Author: Aravinda VK <avishwan@redhat.com>
  Original Author: Amar Tumballi <amarts@redhat.com>
  Original Author: Csaba Henk <csaba@redhat.com>
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/5132
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: changes in 'volume create' behaviour (Krutika Dhananjay, 2013-04-09, 1 file, -4/+36)
  This patch incorporates all the changes suggested on the behaviour of
  the 'volume create' command in http://review.gluster.org/#change,4214
  (comment #14, to be precise).

  Change-Id: Iaac524a59738b177415595b18aa8a136090d3d25
  BUG: 948729
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/4740
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* object-storage: Restoring multi volume support in UFO (Mohammed Junaid, 2013-03-07, 1 file, -1/+2)
  * Currently, users of UFO are restricted to using just one volume at
    any given point in time. This patch removes that limitation.

  * The usage of gluster-swift-gen-builders has also changed. With this
    commit, users should mention the list of volumes they want to expose
    through UFO; only the volumes mentioned during ring file generation
    can be accessed.

  Usage: gluster-swift-gen-builders <vol-name1> [<vol-name2>]...

  This is an intermediate fix until we remove the account, container and
  object server processes. Once we have that framework running, it will
  completely eliminate the ring files.

  Change-Id: I9ad3808519fec9c7c60ad846c4f8b653117a8337
  BUG: 909053
  Signed-off-by: Mohammed Junaid <junaid@redhat.com>
  Reviewed-on: http://review.gluster.org/4485
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Peter Portante <pportant@redhat.com>
* cli: fail vol set when value = empty string/string with all whitespaces (Krutika Dhananjay, 2012-12-10, 1 file, -0/+10)
  PROBLEMS:
  a. The 'volume set' operation accepts empty strings and strings
     containing all whitespace for the value. The result: subsequent
     attempts to start glusterd fail.
  b. The 'volume set group' operation, when passed an empty string or a
     string containing all spaces as the value, fopens the wrong file.

  FIX:
  Made the volume set parser in cli check for and invalidate the value
  if its length is 0 after all the whitespace has been stripped off.

  Change-Id: I6792ee1fe9452d8e25e1fd1eeb1b9981be3389f1
  BUG: 859927
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/4246
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* BD Backend: CLI to create a full/linked clone of an image (M. Mohan Kumar, 2012-11-29, 1 file, -1/+1)
  A new CLI command added to support cloning/snapshotting of an LV
  device. The syntax is:

  $ gluster bd clone <volname>:<vg>/<lv> <newlv>
  $ gluster bd snapshot <volname>:<vg>/<lv> <snap_lv> <size>

  BUG: 805138
  Change-Id: Idc2ac14525a3998329c742bf85a06326cac8cd54
  Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
  Reviewed-on: http://review.gluster.org/3719
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* BD Backend: Volume creation support (M. Mohan Kumar, 2012-11-29, 1 file, -2/+50)
  A new parameter type is added to the volume create command. To use the
  BD xlator one has to specify the following argument in addition to the
  normal volume create syntax:

    device vg brick:<VG-NAME>

  for example,
  $ gluster volume create lv_volume device vg host:/vg1

  Changes from the previous version:
  * New type 'backend' added to the volinfo structure to differentiate
    between posix and bd xlator.
  * Most of the volume-related commands are updated to handle the BD
    xlator; for example, add-brick, heal-brick etc. refuse to work when
    the volume is of BD xlator type.
  * Only one VG (i.e. brick) can be specified for the BD xlator during
    volume creation.
  * volume info shows VG info if the volume is of BD xlator type.

  BUG: 805138
  Change-Id: I0ff90aca04840c71f364fabb0ab43ce33f9278ce
  Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
  Reviewed-on: http://review.gluster.org/3717
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Vijay Bellur <vbellur@redhat.com>
* cli: removed extra dict unrefs and memory leaks (Krutika Dhananjay, 2012-09-30, 1 file, -0/+92)
  PROBLEMS ADDRESSED:
  a. The change http://review.gluster.com/#change,3948 introduces extra
     dict unrefs in cli.
  b. There are instances of memory leaks in the gf_cli_*_cbk functions.

  FIX:
  Problem (a) is fixed by ensuring the dict is ref'd (indirectly, at the
  time of creation) and unref'd (in CLI_STACK_DESTROY) ONLY once. The
  rationale behind this approach: the number of refs and unrefs on the
  dict varies across the different commands that use it, and the cli
  thread MUST wait for the gf_cli_*_cbks to complete before the frame is
  destroyed. This rules out the need to do extra refs and unrefs in the
  code path.

  Problem (b) is fixed by doing an unref on dicts that are created for
  the purpose of unserializing the response but never freed in the
  gf_cli_*_cbk functions.

  Change-Id: I5a7431543fc8e3cac4d256f5c87d1e3c62a331be
  BUG: 823081
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/3966
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* cli/: Filter out the loopback address for brick creation (Varun Shastry, 2012-09-18, 1 file, -2/+4)
  Don't create volumes with brick addresses in the range 0.0.0.0 to
  0.255.255.255 (http://en.wikipedia.org/wiki/Reserved_IP_addresses).

  Change-Id: Ic5464ec370bddf498a5220acf2d367b877e21e66
  BUG: 787627
  Signed-off-by: Varun Shastry <vshastry@redhat.com>
  Reviewed-on: http://review.gluster.org/3865
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Fix double free in cli_add_key_group (Krishnan Parthasarathi, 2012-09-14, 1 file, -0/+2)
  Change-Id: I3c2f030ac7c53913612a3fbac5e582c47b005621
  BUG: 851237
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/3943
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* All: License message change (Varun Shastry, 2012-09-13, 1 file, -7/+6)
  License message changed for server-side, dual license GPLv2 and
  LGPLv3+.

  Change-Id: Ia9e53061b9d2df3b3ef3bc9778dceff77db46a09
  BUG: 852318
  Signed-off-by: Varun Shastry <vshastry@redhat.com>
  Reviewed-on: http://review.gluster.org/3940
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* cli: Added special key "group" for bulk volume set (Krishnan Parthasarathi, 2012-09-12, 1 file, -4/+109)
  gluster volume set VOLNAME group group_name

  - where group_name is a file under /var/lib/glusterd/groups containing
    one key, value pair per line, as below:
      key1=value1
      key2=value2
      [...]
  - the command sets key1 to value1 and so on.

  Change-Id: Ic4c8dedb98d013b29a74e57f8ee7c1d3573137d2
  BUG: 851237
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/3831
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
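  A hedged end-to-end sketch (the group file name and the keys in it are
  illustrative; any valid volume options work):

    # /var/lib/glusterd/groups/perfgroup contains, one per line:
    #   performance.cache-size=256MB
    #   performance.io-thread-count=32
    $ gluster volume set vol1 group perfgroup  # applies every key=value in the file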
* All: License message change (Varun Shastry, 2012-08-28, 1 file, -14/+5)
  The license message is changed to:

    Copyright (c) 2008-2012 Red Hat, Inc. <http://www.redhat.com>
    This file is part of GlusterFS.

    This file is licensed to you under your choice of the GNU Lesser
    General Public License, version 3 or any later version (LGPLv3 or
    later), or the GNU General Public License, version 2 (GPLv2), in all
    cases as published by the Free Software Foundation.

  Change-Id: I07d2b63ed5fbbbd1884f1e74f2dd56013d15b0f4
  BUG: 852318
  Signed-off-by: Varun Shastry <vshastry@redhat.com>
  Reviewed-on: http://review.gluster.org/3858
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Prevent creation of volumes with tokens as volnames (Pranith Kumar K, 2012-07-29, 1 file, -3/+8)
  RCA: yyparse confuses volnames with tokens when any of the tokens
  'volume', 'type', 'subvolumes', 'option', 'end-volume' is used as a
  volname. This happens because the io-stats xlator name is the same as
  the volname in the fuse volfile. Both nfs and fuse volfiles are
  affected by this problem.

  Fix: We could also fix this by changing the io-stats xlator name to
  something other than 'volname', but I am worried about the backward
  compatibility issues it may introduce. Disallowing creation of volumes
  with tokens as volnames seems like a safer fix.

  Tests: All volume creation operations with tokens as the volname give
  an "invalid volume name" error.

  Change-Id: Ifc63a5c31375e92541b954ec133aa3c8e6a56a02
  BUG: 844030
  Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
  Reviewed-on: http://review.gluster.com/3745
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
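  Illustrative failures (the brick path is a placeholder; per the tests
  above, each reserved token used as a volname is rejected):

    $ gluster volume create volume server1:/bricks/b1      # fails: invalid volume name
    $ gluster volume create subvolumes server1:/bricks/b1  # fails: invalid volume name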
* remove useless if-before-free (and free-like) functions (Jim Meyering, 2012-07-13, 1 file, -12/+6)
  See the comments in http://bugzilla.redhat.com/839925 for the code
  used to perform this change.

  Signed-off-by: Jim Meyering <meyering@redhat.com>
  BUG: 839925
  Change-Id: I10e4ecff16c3749fe17c2831c516737e08a3205a
  Reviewed-on: http://review.gluster.com/3661
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>