path: root/cli/src/cli.h
Commit history (commit message, author, date, files changed, lines changed):
* tier/cli : throw a warning when user issues a detach-tier commit/force command (Manikandan Selvaganesh, 2015-10-05; 1 file changed, -1/+1)
  Change-Id: Idf7664d509156ce46ef4308ffc07fb556a0aedd2 BUG: 1268755 Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com> Reviewed-on: http://review.gluster.org/12297 Tested-by: NetBSD Build System <jenkins@build.gluster.org> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: mohammed rafi kc <rkavunga@redhat.com> Reviewed-by: Dan Lambright <dlambrig@redhat.com>
* quota : xml output modified to give exact available space in bytes (Manikandan Selvaganesh, 2015-09-30; 1 file changed, -5/+6)
  Currently, the 'gluster v quota <VOLNAME> list' command rounds off the available space it shows to the user. Now, the 'gluster v quota <VOLNAME> list --xml' command is modified to show the exact available space in bytes.
  Change-Id: I3772e036a2537c1df12f22cf32dfe4ac7940988f BUG: 1261404 Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com> Reviewed-on: http://review.gluster.org/12137 Tested-by: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
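  For illustration only (the volume name vol1 below is hypothetical), the exact byte counts can be retrieved with:
      gluster volume quota vol1 list --xml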
* Tier/cli: tier related information in volume info xml (hari gowtham, 2015-09-29; 1 file changed, -0/+13)
  'gluster v info' didn't differentiate the hot bricks from the cold bricks, and a few other values were missing. Sample output:
  <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <cliOutput> <opRet>0</opRet> <opErrno>0</opErrno> <opErrstr/> <volInfo> <volumes> <volume> <name>rmbr</name> <id>72d223fc-96ba-4f4a-ac6e-0d0bc16ef127</id> <status>1</status> <statusStr>Started</statusStr> <brickCount>3</brickCount> <distCount>1</distCount> <stripeCount>1</stripeCount> <replicaCount>1</replicaCount> <disperseCount>0</disperseCount> <redundancyCount>0</redundancyCount> <type>5</type> <typeStr>Tier</typeStr> <transport>0</transport> <xlators/> <bricks> <hotBricks> <hotBrickType>Distribute</hotBrickType> <numberOfBricks>1</numberOfBricks> <brick uuid="81">v1:/hb1<name>v1:/hb1</name><hostUuid>81</hostUuid></brick> </hotBricks> <coldBricks> <coldBrickType>Distribute</coldBrickType> <numberOfBricks>2</numberOfBricks> <brick uuid="81">v1:/br1<name>v1:/br1</name><hostUuid>81</hostUuid></brick> <brick uuid="81">v1:/br2<name>v1:/br2</name><hostUuid>81</hostUuid></brick> <count>0</count> </coldBricks> </bricks> </volume> </volumes> </volInfo> </cliOutput>
  Change-Id: I6e52541bb6d8a6a17e17bfcb42434beaac13db56 BUG: 1261837 Signed-off-by: hari gowtham <hgowtham@redhat.com> Reviewed-on: http://review.gluster.org/12158 Tested-by: NetBSD Build System <jenkins@build.gluster.org> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Dan Lambright <dlambrig@redhat.com> Tested-by: Dan Lambright <dlambrig@redhat.com>
* cluster/tier: add gluster v tier <vol> (Dan Lambright, 2015-09-09; 1 file changed, -0/+4)
  Currently the tier feature piggy backs off the rebalance command syntax to obtain status and this is clumsy. Introduce a new tier command that can do tier specific operations, starting with volume status to display counters.
  Old commands:
      gluster volume attach-tier <vol> [replica count] {bricklist..}
      gluster volume detach-tier <vol> {start|stop|commit}
  New commands:
      gluster volume tier <vol> attach [replica count] {bricklist} | detach {start|stop|commit} | status
  Change-Id: Ic07b3c6260588162de7d34380f8cbd3d8a7f35d3 BUG: 1255693 Signed-off-by: Dan Lambright <dlambrig@redhat.com> Reviewed-on: http://review.gluster.org/11984 Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
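  As an illustration of the new syntax (the volume name tiervol, host names and brick paths below are hypothetical):
      gluster volume tier tiervol attach replica 2 srv1:/bricks/hot1 srv2:/bricks/hot2
      gluster volume tier tiervol status
      gluster volume tier tiervol detach start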
* gluster/cli: snapshot delete all does not work with xml (Rajesh Joseph, 2015-08-28; 1 file changed, -0/+18)
  Problem: the snapshot delete all command fails with the --xml option. Fix: provided xml support for the delete all command.
  Change-Id: I77cad131473a9160e188c783f442b6a38a37f758 BUG: 1257533 Signed-off-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-on: http://review.gluster.org/12027 Tested-by: NetBSD Build System <jenkins@build.gluster.org> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Avra Sengupta <asengupt@redhat.com>
* cli: on error invoke cli_cmd_broadcast_response function in separate thread (vmallika, 2015-08-28; 1 file changed, -0/+1)
  There is a problem in the current CLI framework: CLI holds the lock when processing a command. When processing the quota list command, the below sequence of steps executes in the same thread and causes a deadlock:
  1) CLI holds the lock
  2) Send rpc_clnt_submit request to quotad for quota usage
  3) If quotad is down, rpc_clnt_submit invokes the cbk function with error
  4) The cbk function cli_quotad_getlimit_cbk tries to hold the lock to broadcast the results and hangs, because the same thread is already holding the lock
  This patch fixes the problem by creating a separate thread for broadcasting the result.
  Change-Id: I53be006eadf6aaf348083d9168535530d70a8ab3 BUG: 1242819 Signed-off-by: vmallika <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/11990 Tested-by: Gluster Build System <jenkins@build.gluster.com> Tested-by: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/shared_storage: Provide a volume set option to create and mount the shared storage (Avra Sengupta, 2015-06-04; 1 file changed, -2/+2)
  Introducing a global volume set option (cluster.enable-shared-storage) which helps create and set up the shared storage meta volume.
      gluster volume set all cluster.enable-shared-storage enable
  On enabling this option, the system analyzes the number of peers in the cluster which are currently connected, and chooses three such peers (including the node the command is issued from). From these peers a volume (gluster_shared_storage) is created. Depending on the number of peers available, the volume is either a replica 3 volume (if there are 3 connected peers) or a replica 2 volume (if there are 2 connected peers). "/var/run/gluster/ss_brick" serves as the brick path on each node for the shared storage volume. We also mount the shared storage at "/var/run/gluster/shared_storage" on all the nodes in the cluster as part of enabling this option. If there is only one node in the cluster, or only one node is up, then the command will fail. Once the volume is created and mounted, the maintenance of the volume, like adding bricks, removing bricks etc., is expected to be the onus of the user.
  On disabling the option, we provide the user a warning, and on affirmation from the user we stop the shared storage volume and unmount it from all the nodes in the cluster.
      gluster volume set all cluster.enable-shared-storage disable
  Change-Id: Idd92d67b93f444244f99ede9f634ef18d2945dbc BUG: 1222013 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/10793 Tested-by: Gluster Build System <jenkins@build.gluster.com> Tested-by: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* build: do not #include "config.h" in each file (Niels de Vos, 2015-05-29; 1 file changed, -5/+0)
  Instead of including config.h in each file, have config.h included from the compiler command line (-include option). When a .c file tests for a certain #define and config.h was not included, incorrect assumptions were made. With this change, it can not happen again.
  BUG: 1222319 Change-Id: I4f9097b8740b81ecfe8b218d52ca50361f74cb64 Signed-off-by: Niels de Vos <ndevos@redhat.com> Reviewed-on: http://review.gluster.org/10808 Tested-by: Gluster Build System <jenkins@build.gluster.com> Tested-by: NetBSD Build System Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* cli/tiering: Enhance cli output for tiering (Mohammed Rafi KC, 2015-05-08; 1 file changed, -2/+4)
  Fix for handling cli output for attach-tier and detach-tier.
  Change-Id: I4d17f4b09612754fe1b8cec6c2e14927029b9678 BUG: 1211562 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/10284 Reviewed-by: Dan Lambright <dlambrig@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Tested-by: NetBSD Build System Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: remove replace brick with data migration support from cli/glusterd (Gaurav Kumar Garg, 2015-05-07; 1 file changed, -2/+2)
  The replace-brick operation with data migration support has been deprecated from gluster. With this fix, the replace-brick command will support only one command:
      gluster volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}
  Change-Id: Ib81d49e5d8e7eaa4ccb5830cfec2bc081191b43b BUG: 1094119 Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com> Reviewed-on: http://review.gluster.org/10101 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* quota/marker: turn off inode quotas by default (vmallika, 2015-05-06; 1 file changed, -0/+3)
  Inode quota is a new feature implemented in glusterfs-3.7. If quota is enabled in an older version and the cluster is upgraded to the new version, we can hit a setxattr spike during self-heal of inode quotas. So, when quota is enabled, turn off inode-quotas with a xlator option. With this patch, we still account for inode quotas, but only when a write operation is performed for a particular file. Users will be able to query inode quotas once the inode-quota xlator option is enabled.
  Change-Id: I52fb28bf7024989ce7bb08ac63a303bf3ec1ec9a BUG: 1209430 Signed-off-by: vmallika <vmallika@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/10152 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* geo-rep: Status Enhancements (Aravinda VK, 2015-05-05; 1 file changed, -0/+3)
  Discussion in gluster-devel: http://www.gluster.org/pipermail/gluster-devel/2015-April/044301.html
  MASTER NODE - Master Volume Node
  MASTER VOL - Master Volume name
  MASTER BRICK - Master Volume Brick
  SLAVE USER - Slave User to which Geo-rep session is established
  SLAVE - <SLAVE_NODE>::<SLAVE_VOL> used in Geo-rep Create command
  SLAVE NODE - Slave Node to which Master worker is connected
  STATUS - Worker Status (Created, Initializing, Active, Passive, Faulty, Paused, Stopped)
  CRAWL STATUS - Crawl type (Hybrid Crawl, History Crawl, Changelog Crawl)
  LAST_SYNCED - Last Synced Time (Local Time in CLI output and UTC in XML output)
  ENTRY - Number of Entry operations pending (Resets on worker restart)
  DATA - Number of Data operations pending (Resets on worker restart)
  META - Number of Meta operations pending (Resets on worker restart)
  FAILURES - Number of Failures
  CHECKPOINT TIME - Checkpoint set Time (Local Time in CLI output and UTC in XML output)
  CHECKPOINT COMPLETED - Yes/No or N/A
  CHECKPOINT COMPLETION TIME - Checkpoint Completed Time (Local Time in CLI output and UTC in XML output)
  XML output: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> with nested elements <cliOutput> <geoRep> <volume> <name> <sessions> <session> <session_slave> <pair> <master_node> <master_brick> <slave_user> <slave/> <slave_node> <status> <crawl_status> <entry> <data> <meta> <failures> <checkpoint_completed> <master_node_uuid> <last_synced> <checkpoint_time> <checkpoint_completion_time>
  BUG: 1212410 Change-Id: I944a6c3c67f1e6d6baf9670b474233bec8f61ea3 Signed-off-by: Aravinda VK <avishwan@redhat.com> Reviewed-on: http://review.gluster.org/10121 Tested-by: NetBSD Build System Reviewed-by: Kotresh HR <khiremat@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: support for tier volumes 'detach start' and 'detach commit' (Dan Lambright, 2015-04-22; 1 file changed, -0/+4)
  These commands work in a manner analogous to rebalancing when removing a brick. The existing migration daemon detects "detach start" and switches to moving data off the hot tier. While in this state all lookups are directed to the cold tier.
      gluster v detach-tier <vol> start
      gluster v detach-tier <vol> commit
  The status and stop cli commands shall be submitted separately.
  Change-Id: I24fda5cc3ba74f5fb8aa9a3234ad51f18b80a8a0 BUG: 1205540 Signed-off-by: Dan Lambright <dlambrig@redhat.com> Signed-off-by: root <root@localhost.localdomain> Signed-off-by: Dan Lambright <dlambrig@redhat.com> Reviewed-on: http://review.gluster.org/10108 Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com> Tested-by: NetBSD Build System
* cli: Allow options prefixed with '--' for geo-rep config cmds (Kotresh HR, 2015-04-09; 1 file changed, -0/+4)
  Geo-replication allows users to send rsync_options from cli, but while parsing, options prefixed with '--' are treated as unrecognized apart from a few options. Because of this, rsync_options can't be set through cli. This patch allows such options if it's a geo-rep config command. e.g.
      gluster vol geo-rep master fedora1::slave config rsync-options "--bwlimit=1.5m"
  Change-Id: I4b4f10c0efd3f3a362ece65e697d5ce1498f84ac BUG: 1168108 Signed-off-by: Kotresh HR <khiremat@redhat.com> Reviewed-on: http://review.gluster.org/9198 Reviewed-by: Aravinda VK <avishwan@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Support distributed replicated volumes on hot tier (Dan Lambright, 2015-04-08; 1 file changed, -1/+1)
  We did not set up the graph properly for hot tiers with replicated subvolumes. Also add check that the file has not already been moved by another replicated brick on the same node.
  Change-Id: I9adef565ab60f6774810962d912168b77a6032fa BUG: 1206517 Signed-off-by: Dan Lambright <dlambrig@redhat.com> Reviewed-on: http://review.gluster.org/10054 Reviewed-by: Joseph Fernandes <josferna@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* cli/glusterd: cli command implementation for bitrot features (Gaurav Kumar Garg, 2015-03-18; 1 file changed, -0/+3)
  CLI command for bitrot features:
      volume bitrot <volname> enable|disable
  The above command will enable/disable the bitrot feature for a particular volume.
  BUG: 1170075 Change-Id: Ie84002ef7f479a285688fdae99c7afa3e91b8b99 Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com> Signed-off-by: Anand nekkunti <anekkunt@redhat.com> Signed-off-by: Dominic P Geevarghese <dgeevarg@redhat.com> Reviewed-on: http://review.gluster.org/9866 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* features/quota : Introducing inode quota (vmallika, 2015-03-18; 1 file changed, -0/+13)
  Inode quota: Currently, the only way to retrieve the number of files/objects in a directory or volume is to do a crawl of the entire directory/volume. This is expensive and is not scalable. The proposed mechanism will provide an easier alternative to determine the count of files/objects in a directory or volume. The new mechanism proposes to store the count of objects/files as part of an extended attribute of a directory. Each directory's extended attribute value will indicate the number of files/objects present in a tree with the directory being considered as the root of the tree. The count value can be accessed by performing a getxattr(). Cluster translators like afr, dht and stripe will perform aggregation of count values from various bricks when getxattr() happens on the key associated with file/object count.
  A new interface is introduced:
      limit-objects : limit the number of inodes at directory level
      list-objects : list the directories where the limit is set
      remove-objects : remove the limit from the directory
  CLI COMMAND: gluster volume quota <volname> limit-objects <path> <number> [<percent>]
  * <number> is a hard-limit on the number of objects for path "<path>". If the hard-limit is exceeded, creation of a file/directory is no longer permitted.
  * <percent> is a soft-limit on the number of objects created under path "<path>". If the soft-limit is exceeded, a warning is issued for each creation.
  CLI COMMAND: gluster volume quota <volname> remove-objects [path]
  CLI COMMAND: gluster volume quota <volname> list-objects [path] ...
  Sample output:
  Path    Hard-limit    Soft-limit    Used    Available    Soft-limit exceeded?    Hard-limit exceeded?
  /dir    10            80%           10      0            Yes                     Yes
  [root@snapshot-28 dir]# ls
  a b file11 file12 file13 file14 file15 file16 file17
  [root@snapshot-28 dir]# touch a1
  touch: cannot touch `a1': Disk quota exceeded
  * Nine files are created in directory "dir" and the directory is included in the count too. Hence the limit "10" is reached and further file creation fails.
  Note: We have also done some re-factoring in cli for volume name validation. A new function cli_validate_volname is created.
  Change-Id: I1823497de4f790a2a20ebb1770293472ea33ee2b BUG: 1190108 Signed-off-by: Sachin Pandit <spandit@redhat.com> Signed-off-by: vmallika <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/9769 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* CLI : Global option for NFS-Ganesha (Meghana Madhusudhan, 2015-03-18; 1 file changed, -0/+3)
  A new global CLI option has been introduced for NFS-Ganesha: gluster features.ganesha enable/disable. This option is persistent and shall be inherited by new volumes created after this option is set.
      gluster features.ganesha enable
  It carries out the following functions: 1. Disables gluster-nfs across the cluster 2. Starts the NFS-Ganesha server on a subset of nodes and exports '/'. 3. Creates the HA cluster for NFS-Ganesha. 4. Writes the option into the global config file.
      gluster features.ganesha disable
  1. Stops the NFS-Ganesha server. 2. Tears down the HA cluster for NFS-Ganesha.
  With this change the older volume set options with keys "nfs-ganesha.host" and "nfs-ganesha.enable" will no longer be supported. This commit only has the CLI related changes. Another patch will be submitted to support this feature entirely.
  Change-Id: Ie4b66a16c23b33b795738654b9a68f8e2c34efe3 BUG: 1188184 Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com> Reviewed-on: http://review.gluster.org/9538 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Niels de Vos <ndevos@redhat.com>
* features/quota : Fix XML output for quota list command (Sachin Pandit, 2015-02-19; 1 file changed, -3/+13)
  Sample output:
  Sample 1) [root@snapshot-28 glusterfs]# gluster volume quota vol1 list /dir1 /dir4 /dir5 --xml
  <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <cliOutput> <opRet>0</opRet> <opErrno>0</opErrno> <opErrstr/> <volQuota> <limit> <path>/dir1</path> <hard_limit>10.0MB</hard_limit> <soft_limit>80%</soft_limit> <used_space>0Bytes</used_space> <avail_space>10.0MB</avail_space> <hl_exceeded>No</hl_exceeded> <sl_exceeded>No</sl_exceeded> </limit> <limit> <path>/dir4</path> <path>No such file or directory</path> </limit> <limit> <path>/dir5</path> <path>No such file or directory</path> </limit> </volQuota> </cliOutput>
  Sample 2) gluster volume quota vol1 list --xml
  <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <cliOutput> <opRet>0</opRet> <opErrno>0</opErrno> <opErrstr/> <volQuota/> </cliOutput>
  <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <cliOutput> <volQuota> <limit> <path>/dir</path> <hard_limit>10.0MB</hard_limit> <soft_limit>80%</soft_limit> <used_space>0Bytes</used_space> <avail_space>10.0MB</avail_space> <hl_exceeded>No</hl_exceeded> <sl_exceeded>No</sl_exceeded> </limit> <limit> <path>/dir1</path> <hard_limit>10.0MB</hard_limit> <soft_limit>80%</soft_limit> <used_space>0Bytes</used_space> <avail_space>10.0MB</avail_space> <hl_exceeded>No</hl_exceeded> <sl_exceeded>No</sl_exceeded> </limit> </volQuota> </cliOutput>
  Change-Id: I8a8d83cff88f778e5ee01fbca07d9f94c412317a BUG: 1185259 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/9481 Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* cli: volume status for tcp,rdma type volume display only tcp port (Mohammed Rafi KC, 2015-02-18; 1 file changed, -1/+2)
  For tcp,rdma type volumes there will be two ports, one for tcp and one for rdma, but the volume status command only displays the tcp port. With this change, an extra column is added for the rdma port and the existing port column shows the tcp port. Eg:
  >gluster volume status patchy
  >For tcp,rdma type volume
  Status of volume: patchy
  Gluster process                    TCP Port   RDMA Port   Online   Pid
  ------------------------------------------------------------------------------
  Brick brickname                    49152      49153       Y        14158
  >For rdma type volume
  Status of volume: patchy
  Gluster process                    TCP Port   RDMA Port   Online   Pid
  ------------------------------------------------------------------------------
  Brick brickname                    0          49153       Y        14158
  >For tcp type volume
  Status of volume: patchy
  Gluster process                    TCP Port   RDMA Port   Online   Pid
  ------------------------------------------------------------------------------
  Brick brickname                    49152      0           Y        14158
  >gluster volume status patchy detail
  Status of volume: xcube2
  ------------------------------------------------------------------------------
  Brick            : Brick brickname
  TCP Port         : 49152
  RDMA Port        : 49153
  Online           : Y
  Pid              : 14158
  File System      : ext4
  Device           : /dev/mapper/luks-2099dd4a-0050-4cae-ad7b-c6a0498c4e88
  Mount Options    : rw,seclabel,relatime,data=ordered
  Inode Size       : 256
  Disk Space Free  : 31.1GB
  Total Disk Space : 47.9GB
  Inode Count      : 3203072
  Free Inodes      : 2926789
  >gluster volume status xcube --xml
  <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <cliOutput> <opRet>0</opRet> <opErrno>0</opErrno> <opErrstr>(null)</opErrstr> <volStatus> <volumes> <volume> <volName>xcube</volName> <nodeCount>2</nodeCount> <node> <hostname>hostname</hostname> <path>/home/brick1</path> <peerid>2d7bcb95-3d26-4d4f-b3c6-e2ee01b71662</peerid> <status>1</status> <port>49152</port> <ports> <tcp>49152</tcp> <rdma>N/A</rdma> </ports> <pid>5657</pid> </node> <node> <hostname>NFS Server</hostname> <path>localhost</path> <peerid>2d7bcb95-3d26-4d4f-b3c6-e2ee01b71662</peerid> <status>1</status> <port>2049</port> <ports> <tcp>2049</tcp> <rdma>N/A</rdma> </ports> <pid>5665</pid> </node> <tasks/> </volume> </volumes> </volStatus> </cliOutput>
  Change-Id: I81aab226edbd400d29cd3f510af4f344dd99ba51 BUG: 1164079 Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com> Reviewed-on: http://review.gluster.org/9191 Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* cli/glusterd: Support of volume get for a specific volume option (Atin Mukherjee, 2014-08-26; 1 file changed, -0/+3)
  This patch introduces a cli command to display a specific volume option/all volume options of a specific volume with the following usage:
      Usage: volume get <VOLNAME> <key|all>
  Change-Id: Ic88edb33c5509d7a37cd5ade6341e45e3cdbf59d BUG: 983317 Signed-off-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-on: http://review.gluster.org/8305 Reviewed-by: Kaushal M <kaushal@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
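  A hypothetical invocation of the new command (the volume name vol1 and the chosen option key are illustrative):
      gluster volume get vol1 all
      gluster volume get vol1 nfs.disable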
* cli: Xml output for geo-replication status command. (ndarshan, 2014-07-24; 1 file changed, -0/+2)
  This patch adds xml output for the geo-replication status and status detail commands. Sample:
  <geoRep> <volume> <name>master</name> <sessions> <session> <session_slave>:2a301d66-b9d2-44b4-b827-d680d67123eb:ssh://XXXXXXXXXX::slave</session_slave> <pair> <master_node>localhost.localdomain</master_node> <master_node_uuid>2a301d66-b9d2-44b4-b827-d680d67123eb</master_node_uuid> <master_brick>/root/master_b1</master_brick> <slave>ssh://XXXXXXXXXXX::slave</slave> <status>faulty</status> <checkpoint_status>N/A</checkpoint_status> <crawl_status>N/A</crawl_status> </pair> </session> </sessions> </volume> </geoRep>
  Change-Id: Ia19dbe751c3ab1ec7cb8923cdd6c8b99c374072f BUG: 1121518 Signed-off-by: ndarshan <dnarayan@redhat.com> Reviewed-on: http://review.gluster.org/8089 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* cli: Fix no libxml compilation for snapshot status (Kaushal M, 2014-07-24; 1 file changed, -3/+3)
  BUG: 1122186 Change-Id: Ib887f2194258e85d40f65a758b6a963a17911395 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/8363 Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Santosh Pradhan <spradhan@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* cli/snapshot: provide --xml support for all snapshot commands (Rajesh Joseph, 2014-07-12; 1 file changed, -0/+11)
  Now the --xml option can be used with all snapshot commands. It will produce the cli output in xml form.
  Change-Id: Ifc0ac31d2a9f91e136e87f3b51a629df7dba94e8 BUG: 1096610 Signed-off-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-on: http://review.gluster.org/7663 Reviewed-by: Sachin Pandit <spandit@redhat.com> Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
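  For example (the snapshot and volume names below are hypothetical), any snapshot command can now be combined with --xml:
      gluster snapshot list vol1 --xml
      gluster snapshot info snap1 --xml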
* cli/glusterd: Added support for dispersed volumes (Xavier Hernandez, 2014-07-11; 1 file changed, -2/+2)
  Two new options have been added to the 'create' command of the cli interface: disperse [<count>] and redundancy <count>. Both are optional. A dispersed volume is created by specifying, at least, one of them. If 'disperse' is missing, or it's present but '<count>' is not, the number of bricks enumerated in the command line is taken as the disperse count. If 'redundancy' is missing, the lowest optimal value is assumed. A configuration is considered optimal (for most workloads) when the disperse count - redundancy count is a power of 2. If the resulting redundancy is 1, the volume is created normally, but if it's greater than 1, a warning is shown to the user and he/she must answer yes/no to continue volume creation. If there isn't any optimal value for the given number of bricks, a warning is also shown and, if the user accepts, a redundancy of 1 is used. If 'redundancy' is specified and the resulting volume is not optimal, another warning is shown to the user. A distributed-disperse volume can be created using a number of bricks multiple of the disperse count.
  Change-Id: Iab93efbe78e905cdb91f54f3741599f7ea6645e4 BUG: 1118629 Signed-off-by: Xavier Hernandez <xhernandez@datalab.es> Reviewed-on: http://review.gluster.org/7782 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
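  A hypothetical invocation following the syntax above (volume name, host names and brick paths are illustrative); with three bricks and redundancy 1 the volume tolerates the loss of any one brick:
      gluster volume create dispvol disperse 3 redundancy 1 srv1:/bricks/b1 srv2:/bricks/b2 srv3:/bricks/b3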
* gluster: GlusterFS Volume Snapshot Feature (Avra Sengupta, 2014-04-11; 1 file changed, -0/+10)
  This is the initial patch for the Snapshot feature. The current patch includes the following features: Snapshot create, Snapshot delete, Snapshot restore, Snapshot list, Snapshot info, Snapshot status, Snapshot config.
  Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9 BUG: 1061685 Signed-off-by: shishir gowda <sgowda@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Signed-off-by: Vijaikumar M <vmallika@redhat.com> Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com> Signed-off-by: Rajesh Joseph <rjoseph@redhat.com> Signed-off-by: Joseph Fernandes <josferna@redhat.com> Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/7128 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/geo-rep: more glusterd and cli fixes for geo-rep. (Ajeet Jha, 2013-12-12; 1 file changed, -14/+0)
  -> handle option validation cases in reset case.
  -> Creating valid conf path when glusterd restarts.
  -> Reading the gsyncd worker thread status and displaying it.
  -> Displaying status-detail per worker.
  -> Fetch checkpoint info in geo-rep status.
  -> use-tarssh value validation added.
  misc: misc geo-rep fixes based on cluster, logrotate etc..
  -> cluster/dht: fix 'stime' getxattr getting overwritten.
  -> cluster/afr: return max of 'stime' values in subvol.
  -> geo-rep-logrotate: Sending SIGHUP to geo-rep auxiliary.
  -> cluster/dht: fix convoluted logic while aggregating.
  -> cluster/*: fix 'stime' min/max fetch logic.
  Change-Id: I811acea0bbd6194797a3e55d89295d1ea021ac85 BUG: 1036552 Signed-off-by: Ajeet Jha <ajha@redhat.com> Reviewed-on: http://review.gluster.org/6405 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Amar Tumballi <amarts@gmail.com> Reviewed-by: Anand Avati <avati@redhat.com>
* cli: Fail creation of bricks on root partition in script mode (Krutika Dhananjay, 2013-12-04; 1 file changed, -0/+1)
  Also added an option 'wignore' to save ourselves the trouble of modifying test scripts in our regression test suite as well as those that are still under review.
  Change-Id: Id320c03595506e9da187e766991c19640bd000c5 BUG: 1028281 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/6409 Reviewed-by: Kaushal M <kaushal@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* cli, glusterd: More quota fixes ... (Krutika Dhananjay, 2013-11-30; 1 file changed, -0/+4)
  ... which may be grouped under the following categories:
  1. Fix incorrect cli exit status for 'quota list' cmd
  2. Print appropriate error message on quota parse errors in cli
  Authored by: Anuradha Talur <atalur@redhat.com>
  3. glusterd: Improve quota validation during stage-op
  4. Fix peer probe issues resulting from quota conf checksum mismatches
  5. Enhancements to CLI output in the event of quota command failures
  Authored by: Kaushal Madappa <kmadappa@redhat.com>
  7. Move aux mount location from /tmp to /var/run/gluster
  Authored by: Krishnan Parthasarathi <kparthas@redhat.com>
  8. Fix performance issues in quota limit-usage
  Authored by: Krutika Dhananjay <kdhananj@redhat.com>
  Note: Some functions that were used in earlier versions of quota, that aren't called anymore, have been removed.
  Change-Id: I9d874f839ae5fdcfbe6d4f2d727eac091f27ac57 BUG: 969461 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/6366 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli/glusterd: Changes to quota command; Quota feature re-work. (Raghavendra G, 2013-11-26; 1 file changed, -1/+8)
  Following are the cli commands that are new/re-worked:
      volume quota <VOLNAME> {enable|disable|list [<path> ...]|remove <path>| default-soft-limit <percent>} |
      volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} |
      volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>}
      volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]] [detail|clients|mem|inode|fd|callpool]
      volume statedump <VOLNAME> [nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]
  glusterd changes:
  * Quota limits are now set as extended attributes by glusterd from the aux mount created by the cli.
  * The gfids of the directories on which quota limits are set for a given volume are stored in the /var/lib/glusterd/vols/<volname>/quota.conf file in binary format, and their cksum and version are stored in /var/lib/glusterd/vols/<volname>/quota.cksum.
  Original-author: Krutika Dhananjay <kdhananj@redhat.com> Original-author: Krishnan Parthasarathi <kparthas@redhat.com> BUG: 969461 Change-Id: If32bba36c67f9c2a30417af9c6389045b2b7c13b Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Signed-off-by: Raghavendra G <rgowdapp@redhat.com> Reviewed-on: http://review.gluster.org/6003 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
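  An illustrative sequence using the reworked syntax (volume name, path and limits below are hypothetical):
      gluster volume quota vol1 enable
      gluster volume quota vol1 limit-usage /data 10GB 80%
      gluster volume quota vol1 list /data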
* bd_map: Remove bd_map xlator (M. Mohan Kumar, 2013-11-13; 1 file changed, -13/+0)
  Remove bd_map xlator and CLI related changes.
  Change-Id: If7086205df1907127c1a1fa4ba603f1c48421d09 BUG: 1028672 Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com> Reviewed-on: http://review.gluster.org/5747 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* cli,glusterd: Changes to cli-glusterd communication (Kaushal M, 2013-10-17; 1 file changed, -0/+2)
  Glusterd changes: With this patch, glusterd creates a socket file in DATADIR/run/glusterd.socket, and listens on it for cli requests. It listens for 2 rpc programs on the socket file: the glusterd cli rpc program, for all cli commands, and a reduced glusterd handshake program, just for the 'system:: getspec' command. The location of the socket file can be changed with the glusterd option 'glusterd-sockfile'. To retain compatibility with the '--remote-host' cli option, glusterd also listens for the cli requests on port 24007. But, for the sake of security, it listens using a reduced cli rpc program on the port. The reduced rpc program only contains read-only procs used for 'volume (info|list|status)', 'peer status' and 'system:: getwd' cli commands.
  CLI changes: The gluster cli now uses the glusterd socket file for communicating with glusterd by default. A new option '--gluster-sock' has been added to allow specifying the sockfile used to connect. Using the '--remote-host' option will make cli connect to the given host & port.
  Tests changes: cluster.rc has been modified to make use of socket files and use different log files for each glusterd. Some of the tests using cluster.rc have been fixed.
  Change-Id: Iaf24bc22f42f8014a5fa300ce37c7fc9b1b92b53 BUG: 980754 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/5280 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli,glusterd: Implement 'volume status tasks' (Krutika Dhananjay, 2013-10-08; 1 file changed, -0/+2)
  oVirt's Gluster Integration needs an inexpensive command that can be executed every 10 seconds to monitor async tasks and their parameters, for all volumes. The solution involves adding a 'tasks' sub-command to 'volume status' to fetch only the async task IDs, type and other relevant parameters. Only the originator glusterd participates in this command as all the information needed is available on all the nodes. This is to make the command suitable for being executed every 10 seconds.
  Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1 BUG: 1012346 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/6006 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
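  A hypothetical polling invocation of the new sub-command, covering all volumes, would look something like:
      gluster volume status all tasks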
* logging: Remove multiple definitions of DEFAULT_LOG_FILE_DIRECTORY (Vijay Bellur, 2013-09-24; 1 file changed, -2/+1)
  Change-Id: I8d670a228d3c1282aa7d70b151f166d04abc40e5 BUG: 764890 Signed-off-by: Vijay Bellur <vbellur@redhat.com> Reviewed-on: http://review.gluster.org/5909 Reviewed-by: Anand Avati <avati@redhat.com> Tested-by: Anand Avati <avati@redhat.com>
* cli: Add statusStr xml tag to task list and rebalance/remove brick status (Aravinda VK, 2013-09-12; 1 file changed, -0/+1)
  A new xml tag statusStr is added to the output of the following gluster cli commands:
      gluster volume status all --xml (for Task status)
      gluster volume rebalance <VOLNAME> status --xml
      gluster volume remove-brick <VOLNAME> <BRICK1..> status --xml
  Example (volume status all):
  <task> <type>Rebalance</type> <id>82d8d122-8738-4144-8507-d93fc98b61df</id> <status>3</status> <statusStr>completed</statusStr> </task>
  Example (volume rebalance <VOL> status):
  <node> <nodeName>localhost</nodeName> <files>0</files> <size>0</size> <lookups>0</lookups> <failures>0</failures> <status>3</status> <statusStr>completed</statusStr> </node>
  Also modified the task status in 'gluster volume status all' to show a string instead of a number. Example:
  Status of volume: gv1
  Gluster process                      Port    Online  Pid
  ------------------------------------------------------------------------------
  Brick sumne.sumne:/gfs/b1            49154   Y       15489
  Brick sumne.sumne:/gfs/b2            49155   Y       15493
  NFS Server on localhost              N/A     N       15913
  Task         ID                                     Status
  ----         --                                     ------
  Rebalance    82d8d122-8738-4144-8507-d93fc98b61df   completed
  BUG: 1003521 Change-Id: Ib283016af4c18132fb13fb33d44075782d77823c Signed-off-by: Aravinda VK <avishwan@redhat.com> Reviewed-on: http://review.gluster.org/5739 Reviewed-by: Kaushal M <kaushal@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd/cli: Geo-Replication "status detail" cmd (Venky Shankar, 2013-09-04; 1 file changed, -0/+14)
  Provides detailed status info in the following format:
  MASTER <master-vol> SLAVE <slave-vol>
  NODE  HEALTH  UPTIME  FILES SYNCD  FILES PENDING  BYTES PENDING  DELETES PENDING
  -----------------------------------------------------------------------------------
  This patch introduces the "status detail" command to show crawl related information in CLI. These values are "pulled" from gsyncd when "status detail" is executed.
  Change-Id: I1fdaf7180eacce054a864d34971dc160bd7301e1 BUG: 990420 Signed-off-by: Venky Shankar <vshankar@redhat.com> Reviewed-on: http://review.gluster.org/5590 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Avra Sengupta <asengupt@redhat.com> Tested-by: Avra Sengupta <asengupt@redhat.com> Reviewed-by: Anand Avati <avati@redhat.com>
* cli : Display error messages if virt file has been deleted or is invalid. (Avra Sengupta, 2013-07-27; 1 file changed, -1/+1)
  "gluster volume set <VOLNAME> group virt" will display an error message if the virt file is deleted or is invalid.
  Change-Id: Icb202b6a445597fcd9a3dcef8001891f2601a115 BUG: 916127 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/4586 Reviewed-by: Amar Tumballi <amarts@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* cli: Increased timeout for gluster volume heal commands (Venkatesh Somyajulu, 2013-07-23; 1 file changed, -1/+1)
  Problem: If the number of files is very large, then the 'gluster volume heal volumename info' commands take a long time, so a timeout of 2 minutes seems to be insufficient.
  Fix: Increased timeout to 10 minutes.
  Change-Id: I5f847163e01c4afbb587b726833ad80183f1a928 BUG: 986945 Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com> Reviewed-on: http://review.gluster.org/5372 Reviewed-by: Kaushal M <kaushal@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterfs : Moved option files, and statedumps from /tmp (Avra Sengupta, 2013-01-29; 1 file changed, -0/+1)
  Change-Id: Ibdede396c4d6859225937316b7a59a661bcaf9f5 BUG: 764890 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/4422 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* debug/trace: save the recent fops received in the event-history (Raghavendra Bhat, 2013-01-17; 1 file changed, -0/+3)
  * Make use of event-history in the debug/trace xlator to dump the recent fops when a statedump is given. The trace xlator saves the fop it received along with the time in the event-history and, upon a statedump signal, dumps its history. The size of the event-history can be given as a xlator option.
  * Make changes in trace to take logging into the log-file or logging to history as an option. By default both are off.
  Change-Id: I12baee5805c6efb55735cead4e2093fb94d7a6a0 BUG: 797171 Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-on: http://review.gluster.org/4088 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* cli: output xml in pretty format (Kaushal M, 2013-01-16; 1 file changed, -1/+1)
  Gluster cli now prints XML output in 'pretty' format. This solves the problem of empty elements occurring as two tags instead of being collapsed into one.
  Change-Id: Iab7aeadcff29c18ae388b58e446a16e937ac09ed BUG: 849293 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/4355 Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* BD Backend: CLI commands to create/delete image (M. Mohan Kumar, 2012-11-29; 1 file changed, -0/+13)
  Cli commands added to create/delete a LV device. The following command creates lv in a given vg:
      $ gluster bd create <volname>:<vgname>/<lvname> <size>
  The following command deletes lv in a given vg:
      $ gluster bd delete <volname>:<vgname>/<lvname>
  BUG: 805138 Change-Id: Ie4e100eca14e2ee32cf2bb4dd064b17230d673bf Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com> Reviewed-on: http://review.gluster.org/3718 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: XML output for "geo-replication <VOL> {start|stop}" (Kaushal M, 2012-11-23; 1 file changed, -0/+4)
  Change-Id: I8d2213cc00deb458fb765b848d0e3452574cc98f BUG: 864499 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/4217 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Changes and enhancements to XML output (Kaushal M, 2012-10-11; 1 file changed, -4/+32)
  This patch contains several xml related changes which fix some bugs and introduce xml output for commands which were missing it. These include:
  * XML output for rebalance & remove-brick status
  * XML output for replace-brick
  * XML output for 'volume status all' in one xml document
  * proper XML output for "volume {create|start|stop|delete}"
  * type & status of a volume in 'volume info' is now given as a string as well
  This patch also cleans up the '#if (HAVE_LIB_XML)' sections from the code-base, so that it is not littered around.
  Change-Id: I5bb022adf0fedf7e3ead92b4b79bfa02b0b5fef5 BUG: 828131 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/3869 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* cli: removed extra dict unrefs and memory leaks (Krutika Dhananjay, 2012-09-30; 1 file changed, -0/+4)
  PROBLEMS ADDRESSED: a. The following change http://review.gluster.com/#change,3948 introduces extra dict unrefs in cli. b. There are instances of memory leak in gf_cli_*_cbk functions.
  FIX: Problem (a) is fixed by ensuring the dict is ref'd (indirectly at the time of creation) and unref'd (in CLI_STACK_DESTROY) ONLY once. The following is the rationale behind this approach: The number of refs and unrefs on dict varies across the different commands that use it. The cli thread MUST wait for the gf_cli_*_cbks to complete before the frame is destroyed. This rules out the need to do extra refs and unrefs in the code path. Problem (b) is fixed by doing unref on dicts that are created for the purpose of unserializing the response but never freed in gf_cli_*_cbk functions.
  Change-Id: I5a7431543fc8e3cac4d256f5c87d1e3c62a331be BUG: 823081 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/3966 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Fix to log command status at the appropriate time (Krutika Dhananjay, 2012-09-20; 1 file changed, -0/+1)
  PROBLEM: In the existing implementation, the success/failure of execution of a command is decided (and logged) in glusterd handler functions. Strictly speaking, the logging mechanism must take into account what course the command takes within the state machine before concluding whether it succeeded or failed.
  FIX: This patch attempts to fix the above issue for vol commands. The format of the log message is as follows:
      for failure: <command string> : FAILED : <cause of failure>
      for success: <command string> : SUCCESS
  APPROACH (in a nutshell):
  * The command string is packed into dict at cli and sent to glusterd.
  * glusterd logs the command status just before doing a "submit_reply", which is called (either directly or indirectly via a call to glusterd_op_cli_send_response) at 2 places for every vol command: i. in handler functions, and ii. in glusterd_op_txn_complete
  In short, the failure of a command in the handler implies the command has indeed failed. However, its success in the handler does NOT necessarily mean the command succeeded/will succeed.
  Change-Id: I5a8a2ddc318ef2dc2a9699f704a6bcd2f0ab0277 BUG: 823081 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/3948 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* All: License message change (Varun Shastry, 2012-09-13; 1 file changed, -7/+6)
  License message changed for server-side, dual license GPLv2 and LGPLv3+.
  Change-Id: Ia9e53061b9d2df3b3ef3bc9778dceff77db46a09 BUG: 852318 Signed-off-by: Varun Shastry <vshastry@redhat.com> Reviewed-on: http://review.gluster.org/3940 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com> Reviewed-by: Anand Avati <avati@redhat.com>
* All: License message change (Varun Shastry, 2012-08-28; 1 file changed, -16/+7)
  The license message is changed to:
  Copyright (c) 2008-2012 Red Hat, Inc. <http://www.redhat.com> This file is part of GlusterFS. This file is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation.
  Change-Id: I07d2b63ed5fbbbd1884f1e74f2dd56013d15b0f4 BUG: 852318 Signed-off-by: Varun Shastry <vshastry@redhat.com> Reviewed-on: http://review.gluster.org/3858 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Proper xml output for "gluster peer status" (Kaushal M, 2012-08-20; 1 file changed, -0/+4)
  Change-Id: I5d72d3844aba0417652498f4dc706b4a68d36bd8 BUG: 847760 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/3814 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <krishnan.parthasarathi@gmail.com> Reviewed-by: Anand Avati <avati@redhat.com>
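  For example, machine-readable peer information can now be requested with:
      gluster peer status --xml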
* core: canonicalize paths (Rajesh Amaravathi, 2012-05-22; 1 file changed, -3/+0)
  Canonicalize paths during add-brick, creation of volume, setting nfs.export-dir in volgen.
  BUG: 789870 Change-Id: I1d3788ac850359b0def0457113831e825a475d58 Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com> Reviewed-on: http://review.gluster.com/3315 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>