path: root/cli/src/cli-xml-output.c
Commit message | Author | Age | Files | Lines
* cli-*: make some structs static to reduce space consumed.  (Yaniv Kaul, 2019-11-26; 1 file, -14/+15)
  There's a small difference when structs are defined static. Whenever
  possible, define them as such. Specifically, before:

     text    data     bss     dec     hex  filename
      678     216       0     894     37e  ./cli/src/cli-cmd-misc.o
   150024    1264      16  151304   24f08  ./cli/src/cli-rpc-ops.o
    71980      64       0   72044   1196c  ./cli/src/cli-cmd-parser.o
    66189       4      16   66209   102a1  ./cli/src/cli-xml-output.o

  After:

     text    data     bss     dec     hex  filename
      670     216       0     886     376  ./cli/src/cli-cmd-misc.o
   149848    1392      16  151256   24ed8  ./cli/src/cli-rpc-ops.o
    70346    1320       0   71666   117f2  ./cli/src/cli-cmd-parser.o
    66157       4      16   66177   10281  ./cli/src/cli-xml-output.o

  Change-Id: I206bd895290595d79fac7b26eee66f4279b50f92
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
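  A minimal, illustrative sketch of the kind of change this commit makes; the
  struct and its contents are made up, not the actual cli tables. A file-scope
  table used only inside its translation unit can be declared static (and
  const), which drops the exported symbol and is where the small size(1)
  differences above come from:

      #include <stdio.h>

      /* Hypothetical lookup table, local to this translation unit. */
      struct cmd_desc {
          const char *name;
          int         op;
      };

      static const struct cmd_desc cmds[] = {
          { "volume info",   1 },
          { "volume status", 2 },
      };

      int
      main(void)
      {
          for (size_t i = 0; i < sizeof(cmds) / sizeof(cmds[0]); i++)
              printf("%s -> %d\n", cmds[i].name, cmds[i].op);
          return 0;
      }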
* cli: fix distCount value  (Sanju Rakonde, 2019-10-11; 1 file, -2/+3)
  gluster volume info --xml is displaying a wrong distCount value. This
  patch addresses it.

  fixes: bz#1758878
  Change-Id: I64081597e06018361e6524587b433b0c4b2a0260
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* cli: Remove Wformat-truncation compiler warning  (Sheetal Pamecha, 2019-07-02; 1 file, -17/+55)
  This warning is issued because output truncation is left unhandled.
  Since truncation is expected in this code, the warning can be removed
  by checking the function's return value and handling it; once that is
  done, the warning is no longer issued.

  Change-Id: I1820b58fe9a7601961c20944b259df322db35057
  updates: bz#1193929
  Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>
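  A hedged sketch of the fix pattern (names are illustrative, not the actual
  cli code): -Wformat-truncation goes away once the snprintf() return value is
  inspected, because the expected truncation is then explicitly handled:

      #include <stdio.h>

      static int
      build_brick_key(char *key, size_t keylen, int index)
      {
          int ret = snprintf(key, keylen, "brick%d.path", index);

          if (ret < 0 || (size_t)ret >= keylen)
              return -1;          /* encoding error or output was truncated */
          return 0;
      }

      int
      main(void)
      {
          char key[32];

          if (build_brick_key(key, sizeof(key), 7) == 0)
              printf("%s\n", key);
          return 0;
      }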
* Fix some "Null pointer dereference" coverity issues  (Xavi Hernandez, 2019-05-26; 1 file, -1/+1)
  This patch fixes the following CIDs: 1124829, 1274075, 1274083, 1274128,
  1274135, 1274141, 1274143, 1274197, 1274205, 1274210, 1274211, 1288801,
  1398629

  Change-Id: Ia7c86cfab3245b20777ffa296e1a59748040f558
  Updates: bz#789278
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* tier/cli: remove tier code to increase code coverage in cli  (Sanju Rakonde, 2019-04-25; 1 file, -357/+26)
  Change-Id: I56cc09243dab23b3be86a7faac45001dda77181f
  updates: bz#1693692
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* Multiple files: remove HAVE_BD_XLATOR related code.  (Yaniv Kaul, 2019-03-25; 1 file, -45/+0)
  The BD translator was removed some time ago (in commit
  a907e468e724c32b9833ce59806fc215c7122d63). This completes the work.

  Compile-tested only!

  updates: bz#1635688
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
  Change-Id: I999df52e479a72d3cc9523f22f9056de17eb559c
* libglusterfs: Move devel headers under glusterfs directory  (ShyamsundarR, 2018-12-05; 1 file, -4/+4)
  libglusterfs devel package headers are referenced in code using the
  include semantics of a program; while this works, it can be better,
  especially when dealing with out-of-tree xlator builds or out-of-tree
  devel package usage in general.

  Towards this, the following changes are done:
  - moved all devel headers under a glusterfs directory
  - included these headers using system header notation <> in all code
    outside of libglusterfs
  - included these headers using own program notation "" within
    libglusterfs

  This change, although big, just moves the headers around and makes
  including them from other sources correct. It helps us include
  libglusterfs headers correctly, without namespace conflicts.

  Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
  Updates: bz#1193929
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
* cli : fix coverity issue in cli-xml-output.c  (Sunny Kumar, 2018-11-18; 1 file, -0/+10)
  This patch fixes CIDs 1124659, 1241480 and 1274196.

  Change-Id: Ib89f53b8e34fcc47184d08ad57f2ee32fd00d78c
  updates: bz#789278
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* Land part 2 of clang-format changes  (Gluster Ant, 2018-09-12; 1 file, -5074/+4850)
  Change-Id: Ia84cc24c8924e6d22d02ac15f611c10e26db99b4
  Signed-off-by: Nigel Babu <nigelb@redhat.com>
* multiple files: calloc -> malloc  (Yaniv Kaul, 2018-09-04; 1 file, -1/+1)
  Move to GF_MALLOC() instead of GF_CALLOC() when possible, in:
    xlators/cluster/stripe/src/stripe-helpers.c
    xlators/cluster/dht/src/tier.c
    xlators/cluster/dht/src/dht-layout.c
    xlators/cluster/dht/src/dht-helper.c
    xlators/cluster/dht/src/dht-common.c
    xlators/cluster/afr/src/afr.c
    xlators/cluster/afr/src/afr-inode-read.c
    tests/bugs/replicate/bug-1250170-fsync.c
    tests/basic/gfapi/gfapi-async-calls-test.c
    tests/basic/ec/ec-fast-fgetxattr.c
    rpc/xdr/src/glusterfs3.h
    rpc/rpc-transport/socket/src/socket.c
    rpc/rpc-lib/src/rpc-clnt.c
    extras/geo-rep/gsync-sync-gfid.c
    cli/src/cli-xml-output.c
    cli/src/cli-rpc-ops.c
    cli/src/cli-cmd-volume.c
    cli/src/cli-cmd-system.c
    cli/src/cli-cmd-snapshot.c
    cli/src/cli-cmd-peer.c
    cli/src/cli-cmd-global.c

  It doesn't make sense to calloc (allocate and clear) memory when the code
  right away fills that memory with data. It may be optimized by the
  compiler, or have a microscopic performance improvement.

  In some cases, also changed the allocation size to be sizeof some struct
  or type instead of a pointer - easier to read. In some cases, removed
  redundant strlen() calls by saving the result into a variable.

  1. Only done for the straightforward cases. There's room for improvement.
  2. Please review carefully, especially for string allocation, with the
     terminating NULL string.

  Only compile-tested!

  updates: bz#1193929
  Original-Author: Yaniv Kaul <ykaul@redhat.com>
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
  Change-Id: I16274dca4078a1d06ae09a0daf027d734b631ac2
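  A sketch of the idea behind this change (plain malloc/calloc stand in for
  gluster's GF_MALLOC/GF_CALLOC wrappers; the function is illustrative, not
  the actual code): zeroing memory with calloc() is wasted work when the very
  next statement overwrites the whole buffer, and the strlen() result is
  saved and reused instead of being recomputed:

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      static char *
      dup_string(const char *src)
      {
          size_t len = strlen(src);       /* computed once, reused twice */
          char  *dst = malloc(len + 1);   /* was: calloc(1, len + 1) */

          if (!dst)
              return NULL;
          memcpy(dst, src, len + 1);      /* fills the buffer, including NUL */
          return dst;
      }

      int
      main(void)
      {
          char *copy = dup_string("cli-xml-output");

          if (copy) {
              printf("%s\n", copy);
              free(copy);
          }
          return 0;
      }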
* {cli-cmd-parser|cli-rpc-ops||cli-xml-output}.c: strncpy()->sprintf(), reduce strlen()'s  (Yaniv Kaul, 2018-08-24; 1 file, -2/+4)
  strncpy may not be very efficient for short strings copied into a large
  buffer: if the length of src is less than n, strncpy() writes additional
  null bytes to dest to ensure that a total of n bytes are written.
  Instead, use snprintf().

  Also:
  - save the result of strlen() and re-use it when possible.
  - move from GF_CALLOC() to GF_MALLOC() for the strings.
  - move from strlen to sizeof() for const strings.

  Compile-tested only!

  Change-Id: I3cf49c5401ee100a5db6a4954c3d699ec1814c17
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
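  An illustrative comparison of the two patterns (buffer sizes and strings are
  made up, not taken from the cli code): strncpy() pads the whole destination
  with NUL bytes when the source is shorter, so copying a short string into a
  large buffer does a lot of unnecessary writing, while snprintf() writes the
  string plus a single terminating NUL and reports the length:

      #include <stdio.h>
      #include <string.h>

      int
      main(void)
      {
          char big[4096];

          /* old pattern: writes "volname", then ~4089 padding NUL bytes */
          strncpy(big, "volname", sizeof(big));

          /* new pattern: writes "volname" plus one NUL, returns the length */
          int len = snprintf(big, sizeof(big), "%s", "volname");

          printf("copied %d bytes\n", len);
          return 0;
      }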
* cli : fix few coverity in cli  (Sunny Kumar, 2018-08-24; 1 file, -4/+4)
  This patch fixes DEADCODE and FORWARD_NULL.
  CID: 1124495, 1124380, 1124381

  Change-Id: I79992e396dbcb1bfe6cd0614d49a8da3f67d648d
  updates: bz#789278
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* All: remove memset() before sprintf()  (Yaniv Kaul, 2018-08-14; 1 file, -139/+2)
  It's not needed. There's a good chance the compiler is smart enough to
  remove it anyway, but it can't hurt - I hope.

  Compile-tested only!

  Change-Id: Id7c054e146ba630227affa591007803f3046416b
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
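  A minimal sketch of why the memset() is redundant (the buffer and format
  string are illustrative): sprintf()/snprintf() always NUL-terminate what
  they write, so clearing the buffer just before the call adds nothing:

      #include <stdio.h>

      int
      main(void)
      {
          char msg[64];

          /* old pattern:
           *     memset(msg, 0, sizeof(msg));
           *     sprintf(msg, "op-ret: %d", 0);
           * the memset is dropped because the next call fully defines msg: */
          snprintf(msg, sizeof(msg), "op-ret: %d", 0);

          printf("%s\n", msg);
          return 0;
      }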
* All: run codespell on the code and fix issues.  (Yaniv Kaul, 2018-07-22; 1 file, -2/+2)
  Please review; it's not always just the comments that were fixed. I've
  had to revert, of course, all calls to creat() that were changed to
  create() ...

  Only compile-tested!

  Change-Id: I7d02e82d9766e272a7fd9cc68e51901d69e5aab5
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* cli: Fix for gluster volume info --xml  (Sanju Rakonde, 2018-05-15; 1 file, -2/+2)
  Problem: gluster volume info --xml is showing the same uuid for all the
  bricks of a tier volume.

  Solution: While iterating over the hot/cold bricks of a tier volume, use
  the correct iterator.

  Fixes: bz#1577627
  Change-Id: Icf6a9c2a10b9da453abc262a57b7884d6638e3ed
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* cli/xml: fix return handling  (Atin Mukherjee, 2017-07-06; 1 file, -40/+52)
  The return code of the xmlTextWriter* APIs is either the number of bytes
  written (which may be 0 because of buffering) or -1 in case of error.
  If the volume of the XML data payload is not huge, the data to be
  written usually just gets buffered; but when the payload grows, these
  APIs will sometimes return the total number of bytes written, and then
  it becomes absolutely mandatory that every such call is followed by
  XML_RET_CHECK_AND_GOTO. Otherwise we may end up returning a non-zero ret
  code that causes the overall XML generation to fail.

  Change-Id: I02ee7076e1d8c26cf654d3dc3e77b1eb17cbdab0
  BUG: 1467841
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/17702
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
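  A hedged, self-contained sketch of the checking pattern the commit
  enforces (the element names and writer setup are illustrative, not the
  actual cli-xml-output.c code): every xmlTextWriter* return value is tested
  before continuing, roughly what gluster's XML_RET_CHECK_AND_GOTO macro does:

      #include <stdio.h>
      #include <libxml/xmlwriter.h>

      int
      main(void)
      {
          xmlBufferPtr buf = xmlBufferCreate();
          xmlTextWriterPtr writer = NULL;
          int ret = -1;

          if (!buf)
              goto out;
          writer = xmlNewTextWriterMemory(buf, 0);
          if (!writer)
              goto out;

          if (xmlTextWriterStartDocument(writer, "1.0", "UTF-8", "yes") < 0)
              goto out;
          if (xmlTextWriterStartElement(writer, BAD_CAST "cliOutput") < 0)
              goto out;
          if (xmlTextWriterWriteFormatElement(writer, BAD_CAST "opRet", "%d", 0) < 0)
              goto out;
          if (xmlTextWriterEndElement(writer) < 0)
              goto out;
          if (xmlTextWriterEndDocument(writer) < 0)
              goto out;
          ret = 0;
      out:
          if (writer)
              xmlFreeTextWriter(writer);
          if (buf) {
              if (ret == 0)
                  printf("%s", (const char *)buf->content);
              xmlBufferFree(buf);
          }
          return ret;
      }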
* cli: fix build error with --disable-xml-output  (Kinglong Mee, 2017-05-08; 1 file, -13/+4)
  ./configure --disable-xml-output --disable-georeplication
  make
  Making all in src
    CC       cli.o
  In file included from cli.c:42:0:
  cli.h:440:24: error: unknown type name ‘xmlTextWriterPtr’
   cli_xml_output_common (xmlTextWriterPtr writer, int op_ret, int op_errno,
                          ^
  cli.h:443:26: error: unknown type name ‘xmlTextWriterPtr’
   cli_xml_snapshot_delete (xmlTextWriterPtr writer, xmlDocPtr doc, dict_t *dict,
                            ^
  cli.h:443:51: error: unknown type name ‘xmlDocPtr’
   cli_xml_snapshot_delete (xmlTextWriterPtr writer, xmlDocPtr doc, dict_t *dict,
                                                     ^
  make[1]: *** [cli.o] Error 1
  make: *** [all-recursive] Error 1

  Change-Id: I36c2dfc11f89d774b62dfe6f50b156826bed5b66
  Signed-off-by: Kinglong Mee <mijinlong@open-fs.com>
  Reviewed-on: https://review.gluster.org/17136
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Tested-by: Prashanth Pai <ppai@redhat.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* cli: keep 'gluster volume status detail' consistent  (Xavier Hernandez, 2017-01-19; 1 file, -6/+2)
  The output of the command 'gluster volume status <volname> detail' is
  not consistent between operating systems. On Linux hosts it shows the
  file system type, the device name, mount options and inode size of each
  brick. However, the same command executed on a FreeBSD host doesn't
  show all this information, even for bricks stored on a Linux host.
  Additionally, for hosts other than Linux, this information is often
  shown as 'N/A'.

  This has been fixed to show as much information as can be retrieved
  from the operating system.

  The file contrib/mount/mntent.c has been mostly rewritten because it
  contained many errors that caused mount information to not be retrieved
  on some operating systems.

  Change-Id: Icb6e19e8af6ec82255e7792ad71914ef679fc316
  BUG: 1411334
  Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
  Reviewed-on: http://review.gluster.org/16371
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* glusterd: Add info on op-version for clients in vol status output  (Samikshan Bairagya, 2017-01-12; 1 file, -0/+12)
  Currently the `gluster volume status <VOLNAME|all> clients` command
  gives us the following information on clients:
  1. Brick name
  2. Client count for each brick
  3. hostname:port for each client
  4. Bytes read and written for each client

  There is no information regarding op-version for each client. This
  patch adds that to the output.

  Change-Id: Ib2ece93ab00c234162bb92b7c67a7d86f3350a8d
  BUG: 1409078
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: http://review.gluster.org/16303
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* cli: Suppress unused but set warnings  (Vijay Bellur, 2016-12-14; 1 file, -3/+1)
  GCC has the following complaint during compilation:

  ../../../cli/src/cli-rpc-ops.c: In function ‘gf_cli_get_volume_cbk’:
  ../../../cli/src/cli-rpc-ops.c:846:36: warning: variable ‘caps’ set but not used [-Wunused-but-set-variable]
        char *caps = NULL;

  Change-Id: Ia378a6c83ba70ae35290802b7b38ad2830c0956c
  BUG: 1402261
  Signed-off-by: Vijay Bellur <vbellur@redhat.com>
  Reviewed-on: http://review.gluster.org/15982
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* io-stats: Add stats for upcall notifications  (tag: v3.10dev)  (Poornima G, 2016-08-31; 1 file, -1/+43)
  With this patch, there will be additional entries seen in the profile
  info:

  UPCALL: Total number of upcall events that were sent from the brick
  (in the brick profile), and the number of upcall notifications received
  by the client (in the client profile).

  Cache invalidation events:
  --------------------------
  CI_IATT   : Number of upcalls that were cache invalidations and had one
              of the IATT_UPDATE_FLAGS set. This indicates that one of the
              iatt values changed.
  CI_XATTR  : Number of upcalls that were cache invalidations and had one
              of UP_XATTR or UP_XATTR_RM set. This indicates that an xattr
              was updated or deleted.
  CI_RENAME : Number of upcalls that were cache invalidations resulting
              from the rename of a file or directory.
  CI_UNLINK : Number of upcalls that were cache invalidations resulting
              from the unlink of a file.
  CI_FORGET : Number of upcalls that were cache invalidations resulting
              from the forget of an inode on the server side.

  Lease events:
  -------------
  LEASE_RECALL : Number of lease recalls sent by the brick (in the brick
                 profile), and the number of lease recalls received by the
                 client (in the client profile).

  Note that the sum of CI_IATT, CI_XATTR, CI_RENAME, CI_UNLINK, CI_FORGET
  and LEASE_RECALL may not be equal to UPCALL. This is because each cache
  invalidation can carry multiple flags. Eg:
  - Every CI_XATTR will have CI_IATT
  - Every CI_UNLINK will also increment CI_IATT, as the link count is an
    iatt attribute.
  Also, UP_PARENT_DENTRY_FLAGS is currently not accounted for, as
  CI_RENAME and CI_UNLINK will always have the flag UP_PARENT_DENTRY_FLAGS.

  Change-Id: Ieb8cd21dde2c4c7618f12d025a5e5156f9cc0fe9
  BUG: 1371543
  Signed-off-by: Poornima G <pgurusid@redhat.com>
  Reviewed-on: http://review.gluster.org/15193
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* glusterd : Introduce reset brick  (Anuradha Talur, 2016-08-29; 1 file, -1/+1)
  The command basically allows replace brick with src and dst bricks being
  the same.

  Usage:
  gluster v reset-brick <volname> <hostname:brick-path> start
  This command kills the brick to be reset. Once this command is run,
  admins can do the other manual operations that they need to do, like
  configuring some options for the brick. Once this is done, resetting the
  brick can be continued with the following options.

  gluster v reset-brick <vname> <hostname:brick> <hostname:brick> commit {force}
  Does the job of resetting the brick. The 'force' option should be used
  when the brick already contains a volinfo id.

  Problem: On doing a disk-replacement of a brick in a replicate volume,
  the following 2 scenarios may occur:
  a) there is a chance that reads are served from this replaced-disk
     brick, which leads to empty reads.
  b) potential data loss if the next writes succeed only on the replaced
     brick, and heal is done to other bricks from this one.

  Solution: After disk-replacement, make sure that the reset-brick command
  is run for that brick so that pending markers are set for the brick and
  it is not chosen as a source for reads and heal. But, as of now,
  replace-brick for the same brick-path is not allowed. In order to fix
  the above mentioned problem, same brick-path replace-brick is needed.

  With this patch, reset-brick commit {force} will be allowed even when
  source and destination <hostname:brickpath> are identical, as long as
  1) the destination brick is not alive
  2) the source and destination bricks have the same brick uuid and path.
  Also, the destination brick after replace-brick will use the same port
  as the source brick.

  Change-Id: I440b9e892ffb781ea4b8563688c3f85c7a7c89de
  BUG: 1266876
  Signed-off-by: Anuradha Talur <atalur@redhat.com>
  Reviewed-on: http://review.gluster.org/12250
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Ashish Pandey <aspandey@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* cli: fix unused variable warnings/errors  (Kaleb S. KEITHLEY, 2016-08-27; 1 file, -20/+3)
  http://review.gluster.org/14085 fixes a/the "leak" - via the generated
  rpc/xdr headers - of pragmas that mask these warnings. However, 14085
  won't pass the smoke test until all the warnings are fixed.

  Change-Id: Ifc33762cf62259961ceb35ae9ac3cbec7094b703
  BUG: 1369124
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/15238
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
  Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/cli: cli to get local state representation from glusterd  (Samikshan Bairagya, 2016-08-26; 1 file, -14/+12)
  Currently there is no CLI that can be used to get the local state
  representation of the cluster, as maintained in glusterd, in a readable
  as well as parseable format.

  The CLI added has the following usage:

  # gluster get-state [daemon] [odir <path/to/output/dir>] [file <filename>]

  This would dump data points that reflect the local state representation
  of the cluster as maintained in glusterd (no other daemons are supported
  as of now) to a file inside the specified output directory. The default
  output directory and filename are /var/run/gluster and
  glusterd_state_<timestamp> respectively. The option for specifying the
  daemon name leaves room to add support for other daemons in the future.

  Following are the data points captured as of now to represent the state
  from the local glusterd pov:

  * Peer:
    - Primary hostname
    - uuid
    - state
    - connection status
    - List of hostnames
  * Volumes:
    - name, id, transport type, status
    - counts: bricks, snap, subvol, stripe, arbiter, disperse, redundancy
    - snapd status
    - quorum status
    - tiering related information
    - rebalance status
    - replace bricks status
    - snapshots
  * Bricks:
    - Path, hostname (this info is shown for all bricks)
    - port, rdma port, status, mount options, filesystem type and
      signed-in status for bricks running locally
  * Services:
    - name, online status for initialised services
  * Others:
    - Base port, last allocated port
    - op-version
    - MYUUID

  Change-Id: I4a45cc5407ab92d8afdbbd2098ece851f7e3d618
  BUG: 1353156
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: http://review.gluster.org/14873
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* snapshot/cli: Fix snapshot status xml output  (Avra Sengupta, 2016-08-23; 1 file, -33/+46)
  snap status --xml errors out if a brick is down and doesn't have a pid.
  The snap status cli handles this by displaying "N/A" in such a scenario;
  the same is now handled in the xml output.

  snap status <snapname> --xml fails as the writer is not initialised for
  it. Using GF_SNAP_STATUS_TYPE_ITER instead of GF_SNAP_STATUS_TYPE_SNAP
  for all snaps' status, to differentiate between the two scenarios.

  Added testcase volume-snapshot-xml.t to check the xml output of all
  snapshot commands.

  Change-Id: I99563e8f3e84f1aaeabd865326bb825c44f5c745
  BUG: 1325831
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/14018
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* snapshot: Display number of snapshots in volume info  (Avra Sengupta, 2016-08-23; 1 file, -0/+12)
  Display the number of snapshots in a volume in the volume info output.
  This number gets modified with create, delete, and restore operations.

  Change-Id: Ic9b7c2b6950980f8ce75ca362998c097ea7c863d
  BUG: 1360693
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/15029
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* Snapshot/xml: xml output for snapshot clone  (Mohammed Rafi KC, 2016-07-29; 1 file, -2/+72)
  Snapshot clone is used to create a regular volume from a snapshot.
  Currently snapshot clone does not support xml output. This change
  introduces xml output for the snapshot clone command.

  Change-Id: I417b480d36f9d84ee088004999b041c9619edd50
  BUG: 1207604
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10065
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Tested-by: Avra Sengupta <asengupt@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* cli/xml: Fix wrong XML format in volume get command  (Aravinda VK, 2016-07-18; 1 file, -0/+6)
  Without this patch:

  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <cliOutput>
    <opRet>0</opRet>
    <opErrno>0</opErrno>
    <opErrstr/>
    <volGetopts>
      <count>258</count>
      <Option>cluster.lookup-unhashed</Option>
      <Value>on</Value>
      <Option>cluster.lookup-optimize</Option>
      <Value>off</Value>
      <Option>cluster.min-free-disk</Option>
      <Value>10%</Value>
      <Option>cluster.min-free-inodes</Option>
      <Value>5%</Value>...

  With this patch:

  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <cliOutput>
    <opRet>0</opRet>
    <opErrno>0</opErrno>
    <opErrstr/>
    <volGetopts>
      <count>258</count>
      <Opt>
        <Option>cluster.lookup-unhashed</Option>
        <Value>on</Value>
      </Opt>
      <Opt>
        <Option>cluster.lookup-optimize</Option>
        <Value>off</Value>
      </Opt>
      <Opt>
        <Option>cluster.min-free-disk</Option>
        <Value>10%</Value>
      </Opt>
      <Opt>
        <Option>cluster.min-free-inodes</Option>
        <Value>5%</Value>
      </Opt>...

  BUG: 1302277
  Change-Id: I6c5a040f659f2244ddcd47c57882b4f300cbe52f
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/14931
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
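  A sketch of the shape of such a fix (the option names and writer setup are
  illustrative, not the actual gluster code): each name/value pair is wrapped
  in its own element by starting and ending it inside the loop, which is what
  turns a flat <Option>/<Value> stream into grouped <Opt> blocks like the
  output above:

      #include <stdio.h>
      #include <libxml/xmlwriter.h>

      static int
      write_opts(xmlTextWriterPtr w)
      {
          const char *names[]  = { "cluster.lookup-unhashed", "cluster.lookup-optimize" };
          const char *values[] = { "on", "off" };

          for (int i = 0; i < 2; i++) {
              if (xmlTextWriterStartElement(w, BAD_CAST "Opt") < 0)
                  return -1;
              if (xmlTextWriterWriteFormatElement(w, BAD_CAST "Option", "%s", names[i]) < 0)
                  return -1;
              if (xmlTextWriterWriteFormatElement(w, BAD_CAST "Value", "%s", values[i]) < 0)
                  return -1;
              if (xmlTextWriterEndElement(w) < 0)   /* closes <Opt> */
                  return -1;
          }
          return 0;
      }

      int
      main(void)
      {
          xmlBufferPtr buf = xmlBufferCreate();
          xmlTextWriterPtr w = buf ? xmlNewTextWriterMemory(buf, 0) : NULL;
          int ret = -1;

          if (w && xmlTextWriterStartElement(w, BAD_CAST "volGetopts") >= 0 &&
              write_opts(w) == 0 && xmlTextWriterEndElement(w) >= 0) {
              xmlTextWriterFlush(w);
              printf("%s\n", (const char *)buf->content);
              ret = 0;
          }
          if (w)
              xmlFreeTextWriter(w);
          if (buf)
              xmlBufferFree(buf);
          return ret;
      }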
* core: assorted typos and spelling mistakes reported by Debian lintian  (Kaleb S KEITHLEY, 2016-05-18; 1 file, -1/+1)
  Also add the missing bang (!) in #!/bin/bash in shell scripts.

  Change-Id: I567a4be8f0f31f6285550f243fe802895f6bc43b
  BUG: 1336793
  Reported-by: Patrick Matthäi <pmatthaei@debian.org>
  Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/14398
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
* Cli/tier: separating services from cold bricks in xml  (hari, 2016-03-16; 1 file, -6/+5)
  Fix: the coldBricks tag also included the service processes. The patch
  stops listing the processes inside the coldBricks tag; by closing the
  tag after the brick entries, the processes are listed after it instead.

  Previous output:

  <coldBricks>
    <node>
      <hostname>192.168.1.102</hostname>
      <path>/data/gluster/b3</path>
      <peerid>8c088528-e1aee3b2b40f</peerid>
      <status>1</status>
      <port>49157</port>
      <ports> <tcp>49157</tcp> <rdma>N/A</rdma> </ports>
      <pid>1160</pid>
    </node>
    <node>
      <hostname>NFS Server</hostname>
      <path>localhost</path>
      <peerid>8c088528-e1aee3b2b40f</peerid>
      <status>0</status>
      <port>N/A</port>
      <ports> <tcp>N/A</tcp> <rdma>N/A</rdma> </ports>
      <pid>-1</pid>
    </node>
  </coldBricks>

  Expected output:

  <coldBricks>
    <node>
      <hostname>192.168.1.102</hostname>
      <path>/data/gluster/b3</path>
      <peerid>8c088528-e1aee3b2b40f</peerid>
      <status>1</status>
      <port>49157</port>
      <ports> <tcp>49157</tcp> <rdma>N/A</rdma> </ports>
      <pid>1160</pid>
    </node>
  </coldBricks>
  <node>
    <hostname>NFS Server</hostname>
    <path>localhost</path>
    <peerid>8c088528-e1aee3b2b40f</peerid>
    <status>0</status>
    <port>N/A</port>
    <ports> <tcp>N/A</tcp> <rdma>N/A</rdma> </ports>
    <pid>-1</pid>
  </node>

  Change-Id: Ieccd017d7b2edb16786323f1a76402f020bdfb0d
  BUG: 1294497
  Signed-off-by: hari <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/13101
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: hari gowtham <hari.gowtham005@gmail.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
* all: fixes for clang compile warnings  (Kaleb S KEITHLEY, 2016-02-15; 1 file, -20/+21)
  cli/src/cli-cmd-parser.c (chenk)
  cli/src/cli-xml-output.c (spandit)
  cli/src/cli.c (chenk)
  libglusterfs/src/common-utils.c (vmallika)
  libglusterfs/src/gfdb/gfdb_sqlite3.c (jfernand +1)
  rpc/rpc-transport/socket/src/socket.c (?)
  xlators/cluster/afr/src/afr-transaction.c (?)
  xlators/cluster/dht/src/dht-common.h (srangana +2)
  xlators/cluster/dht/src/dht-selfheal.c (srangana +2)
  xlators/debug/io-stats/src/io-stats.c (R. Wareing)
  xlators/features/barrier/src/barrier.c (vshastry)
  xlators/features/bit-rot/src/bitd/bit-rot-scrub.h (vshankar +1)
  xlators/features/shard/src/shard.c (kdhananj +1)
  xlators/mgmt/glusterd/src/glusterd-ganesha.c (skoduri)
  xlators/mgmt/glusterd/src/glusterd-handler.c (atinmu)
  xlators/mgmt/glusterd/src/glusterd-op-sm.h (atinmu)
  xlators/mgmt/glusterd/src/glusterd-snapshot.c (spandit)
  xlators/mgmt/glusterd/src/glusterd-syncop.c (atinmu)
  xlators/mgmt/glusterd/src/glusterd-volgen.c (atinmu)
  xlators/protocol/client/src/client-messages.h (mselvaga +1)
  xlators/storage/bd/src/bd-helper.c (M. Mohan Kumar)
  xlators/storage/bd/src/bd.c (M. Mohan Kumar)
  xlators/storage/posix/src/posix.c (nbalacha +1)

  Change-Id: I85934fbcaf485932136ef3acd206f6ebecde61dd
  BUG: 1293133
  Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/13031
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
* cli: Add arbiter details to volinfo xml output  (Ravishankar N, 2016-01-19; 1 file, -3/+47)
  The following are added:
  1. "<arbiterCount>1</arbiterCount>" and
     "<coldarbiterCount>1</coldarbiterCount>"
  2. "<isArbiter>0</isArbiter>" on the brick info, like so:

  <brick uuid="cafa8612-d7d4-4007-beea-72ae7477f3bb">127.0.0.2:/home/ravi/bricks/brick1
    <name>127.0.0.2:/home/ravi/bricks/brick1</name>
    <hostUuid>cafa8612-d7d4-4007-beea-72ae7477f3bb</hostUuid>
    <isArbiter>0</isArbiter>
  </brick>

  Also fix a bug in gluster vol info where the arbiter brick shown was the
  wrong brick of the cold tier after performing a tier-attach.

  Change-Id: Id978325d02b04f1a08856427827320e169169810
  BUG: 1297750
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/13229
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* cli/xml: display correct xml output of tier volume  (Gaurav Kumar Garg, 2015-12-21; 1 file, -15/+32)
  Currently, when the hot tier type is distributed-replicate and the cold
  tier type is disperse, the `gluster volume info --xml` command does not
  give correct output: for the hot tier it displays the wrong volume type.
  With this fix it will show correct xml output for a tier volume,
  irrespective of the types of its constituent volumes.

  Change-Id: If1de8d52d1e0ef3d0523163abed37b2b571715e8
  BUG: 1292084
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/12982
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* cli/quota: show used_space,(file,dir) count even when quota limit is not set  (Manikandan Selvaganesh, 2015-12-08; 1 file, -4/+17)
  Problem: As of now, quota 'list/list-objects' lists the usage only if a
  limit is set for the directory; otherwise it fails with ENOATTR
  (assuming inode/inode-quota has already been configured for the first
  time).

  Feature: With this patch, the command is enhanced to list the usage even
  if a quota limit is not set, though the user still has to configure
  inode/inode-quota for the first time.

  Example: Consider /client/dir and /client1 (absolute paths from the
  mount point), with a quota limit set only on /client. When we try
  listing /client/dir or /client1, it shows "Limit not set".

  Fix: The patch fixes this by showing "used space" in case of the list
  command, and "file_count" & "dir_count" in case of the list-objects
  command. This works fine with xml output as well.

  Change-Id: I68b08ec77a583b3c7f39fe4d6b15d3d77adb095a
  BUG: 1284752
  Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
  Reviewed-on: http://review.gluster.org/12741
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* gluster v status --xml for a replicated hot tier volume  (hari gowtham, 2015-10-08; 1 file, -11/+12)
  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <cliOutput>
    <opRet>0</opRet> <opErrno>0</opErrno> <opErrstr/>
    <volStatus> <volumes> <volume>
      <volName>tiervol</volName>
      <nodeCount>11</nodeCount>
      <hotBricks>
        <node>
          <hostname>10.70.42.203</hostname> <path>/data/gluster/tier/b5_2</path>
          <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid> <status>1</status>
          <port>49164</port> <ports> <tcp>49164</tcp> <rdma>N/A</rdma> </ports> <pid>8684</pid>
        </node>
        <node>
          <hostname>10.70.42.203</hostname> <path>/data/gluster/tier/b5_1</path>
          <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid> <status>1</status>
          <port>49163</port> <ports> <tcp>49163</tcp> <rdma>N/A</rdma> </ports> <pid>8687</pid>
        </node>
        <node>
          <hostname>10.70.42.203</hostname> <path>/data/gluster/tier/b4_2</path>
          <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid> <status>1</status>
          <port>49162</port> <ports> <tcp>49162</tcp> <rdma>N/A</rdma> </ports> <pid>8699</pid>
        </node>
        <node>
          <hostname>10.70.42.203</hostname> <path>/data/gluster/tier/b4_1</path>
          <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid> <status>1</status>
          <port>49161</port> <ports> <tcp>49161</tcp> <rdma>N/A</rdma> </ports> <pid>8708</pid>
        </node>
      </hotBricks>
      <coldBricks>
        <node>
          <hostname>10.70.42.203</hostname> <path>/data/gluster/tier/b1_1</path>
          <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid> <status>1</status>
          <port>49155</port> <ports> <tcp>49155</tcp> <rdma>N/A</rdma> </ports> <pid>8716</pid>
        </node>
        <node>
          <hostname>10.70.42.203</hostname> <path>/data/gluster/tier/b1_2</path>
          <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid> <status>1</status>
          <port>49156</port> <ports> <tcp>49156</tcp> <rdma>N/A</rdma> </ports> <pid>8724</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname> <path>localhost</path>
          <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid> <status>1</status>
          <port>2049</port> <ports> <tcp>2049</tcp> <rdma>N/A</rdma> </ports> <pid>8678</pid>
        </node>
      </coldBricks>
      <tasks>
        <task>
          <type>Tier migration</type>
          <id>975bfcfa-077c-4edb-beba-409c2013f637</id>
          <status>1</status> <statusStr>in progress</statusStr>
        </task>
      </tasks>
    </volume> </volumes> </volStatus>
  </cliOutput>

  Change-Id: I69252a36b6e6b2f3cbe5db06e9a716f504a1dba4
  BUG: 1268810
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/12302
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* Tier/cli: number of bricks remains the same in v info --xml  (hari gowtham, 2015-10-06; 1 file, -1/+1)
  The number-of-bricks count remains one for the cold type.

  Actual result:   <numberOfBricks>1 x 2 = 2</numberOfBricks>
  Expected result: <numberOfBricks>3 x 2 = 6</numberOfBricks>

  Change-Id: I31480a7808b248ef9ea805cb64f7663d44647ddf
  BUG: 1268822
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/12303
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* quota : xml output modified to give exact available space in bytes  (Manikandan Selvaganesh, 2015-09-30; 1 file, -26/+25)
  Currently, the 'gluster v quota <VOLNAME> list' command rounds off the
  available space before showing it to the user. The
  'gluster v quota <VOLNAME> list --xml' command is now modified to show
  the exact available space in bytes.

  Change-Id: I3772e036a2537c1df12f22cf32dfe4ac7940988f
  BUG: 1261404
  Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
  Reviewed-on: http://review.gluster.org/12137
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* Tier/cli: tier related information in volume info xml  (hari gowtham, 2015-09-29; 1 file, -28/+253)
  gluster v info didn't differentiate the hot bricks from the cold bricks
  and a few other values:

  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <cliOutput>
    <opRet>0</opRet> <opErrno>0</opErrno> <opErrstr/>
    <volInfo> <volumes> <volume>
      <name>rmbr</name>
      <id>72d223fc-96ba-4f4a-ac6e-0d0bc16ef127</id>
      <status>1</status> <statusStr>Started</statusStr>
      <brickCount>3</brickCount> <distCount>1</distCount>
      <stripeCount>1</stripeCount> <replicaCount>1</replicaCount>
      <disperseCount>0</disperseCount> <redundancyCount>0</redundancyCount>
      <type>5</type> <typeStr>Tier</typeStr> <transport>0</transport>
      <xlators/>
      <bricks>
        <hotBricks>
          <hotBrickType>Distribute</hotBrickType>
          <numberOfBricks>1</numberOfBricks>
          <brick uuid="81">v1:/hb1<name>v1:/hb1</name><hostUuid>81</hostUuid></brick>
        </hotBricks>
        <coldBricks>
          <coldBrickType>Distribute</coldBrickType>
          <numberOfBricks>2</numberOfBricks>
          <brick uuid="81">v1:/br1<name>v1:/br1</name><hostUuid>81</hostUuid></brick>
          <brick uuid="81">v1:/br2<name>v1:/br2</name><hostUuid>81</hostUuid></brick>
          <count>0</count>
        </coldBricks>
      </bricks>
    </volume> </volumes> </volInfo>
  </cliOutput>

  Change-Id: I6e52541bb6d8a6a17e17bfcb42434beaac13db56
  BUG: 1261837
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/12158
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* Tier/cli: tier related information in volume status command  (hari gowtham, 2015-09-18; 1 file, -1/+33)
  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <cliOutput>
    <opRet>0</opRet> <opErrno>0</opErrno> <opErrstr/>
    <volStatus> <volumes> <volume>
      <volName>v1</volName>
      <nodeCount>5</nodeCount>
      <hotBrick>
        <node>
          <hostname>10.70.42.203</hostname> <path>/data/gluster/tier/hbr1</path>
          <peerid>137e2a4f-2bde-4a97-b3f3-470a2e092155</peerid> <status>1</status>
          <port>49154</port> <ports> <tcp>49154</tcp> <rdma>N/A</rdma> </ports> <pid>6535</pid>
        </node>
      </hotBrick>
      <coldBrick>
        <node>
          <hostname>10.70.42.203</hostname> <path>/data/gluster/tier/cb1</path>
          <peerid>137e2a4f-2bde-4a97-b3f3-470a2e092155</peerid> <status>1</status>
          <port>49152</port> <ports> <tcp>49152</tcp> <rdma>N/A</rdma> </ports> <pid>6530</pid>
        </node>
      </coldBrick>
      <coldBrick>
        <node>
          <hostname>NFS Server</hostname> <path>10.70.42.203</path>
          <peerid>137e2a4f-2bde-4a97-b3f3-470a2e092155</peerid> <status>1</status>
          <port>2049</port> <ports> <tcp>2049</tcp> <rdma>N/A</rdma> </ports> <pid>6519</pid>
        </node>
      </coldBrick>
      <tasks>
        <task>
          <type>Rebalance</type>
          <id>8da729f2-f1b2-4f55-9945-472130be93f7</id>
          <status>4</status> <statusStr>failed</statusStr>
        </task>
      </tasks>
    </volume>
    <tasks/>
    </volume> </volumes> </volStatus>
  </cliOutput>

  Change-Id: Idfdbce47d03ee2cdbf407c57159fd37a2900ad2c
  BUG: 1263100
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/12176
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* xml/tiering: enhance xml output for tiering status related cli commands  (Hari Gowtham, 2015-09-07; 1 file, -0/+107)
  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <cliOutput>
    <opRet>0</opRet> <opErrno>0</opErrno> <opErrstr/>
    <volRebalance>
      <task-id>34f47e29-2193-4a86-9b1e-c7e56bdae3d4</task-id>
      <op>7</op>
      <nodeCount>1</nodeCount>
      <node>
        <nodeName>localhost</nodeName>
        <promotedfiles>0</promotedfiles>
        <demotedfiles>0</demotedfiles>
        <statusStr>in progress</statusStr>
      </node>
    </volRebalance>
  </cliOutput>

  Change-Id: I61083f7b9b0b3bd840982b8c5d6ea4b42e27c9b3
  BUG: 1252737
  Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/11890
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
* gluster/cli: snapshot delete all does not work with xml  (Rajesh Joseph, 2015-08-28; 1 file, -68/+230)
  Problem: the snapshot delete all command fails with the --xml option.
  Fix: provided xml support for the delete all command.

  Change-Id: I77cad131473a9160e188c783f442b6a38a37f758
  BUG: 1257533
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-on: http://review.gluster.org/12027
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
* snapshot: Fix snapshot info's xml output  (Avra Sengupta, 2015-08-24; 1 file, -1/+5)
  Display the description field as (null) if no description is present
  for the snapshot, instead of removing the field altogether.

  Change-Id: I965b08cd6e54eea56c32e2712fab7daa8a663f11
  BUG: 1250387
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/11834
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* xml output: Fix non-uniform opErrstr xml output  (Avra Sengupta, 2015-08-14; 1 file, -2/+9)
  Display <opErrstr/> in case of no operrstr for all xml output of gluster
  commands.

  Change-Id: Ie16f749f90b4642357c562012408c434cd38661f
  BUG: 1245895
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/11835
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* cli/tiering: Enhance cli output for tiering  (Mohammed Rafi KC, 2015-05-08; 1 file, -5/+5)
  Fix for handling cli output for attach-tier and detach-tier.

  Change-Id: I4d17f4b09612754fe1b8cec6c2e14927029b9678
  BUG: 1211562
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/10284
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: remove replace brick with data migration support from cli/glusterd  (Gaurav Kumar Garg, 2015-05-07; 1 file, -104/+2)
  The replace-brick operation with data migration support has been
  deprecated from gluster. With this fix, the replace-brick command will
  support only one command:

  gluster volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}

  Change-Id: Ib81d49e5d8e7eaa4ccb5830cfec2bc081191b43b
  BUG: 1094119
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/10101
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* geo-rep: Status Enhancements  (Aravinda VK, 2015-05-05; 1 file, -41/+27)
  Discussion in gluster-devel:
  http://www.gluster.org/pipermail/gluster-devel/2015-April/044301.html

  MASTER NODE  - Master Volume Node
  MASTER VOL   - Master Volume name
  MASTER BRICK - Master Volume Brick
  SLAVE USER   - Slave User to which Geo-rep session is established
  SLAVE        - <SLAVE_NODE>::<SLAVE_VOL> used in Geo-rep Create command
  SLAVE NODE   - Slave Node to which Master worker is connected
  STATUS       - Worker Status (Created, Initializing, Active, Passive,
                 Faulty, Paused, Stopped)
  CRAWL STATUS - Crawl type (Hybrid Crawl, History Crawl, Changelog Crawl)
  LAST_SYNCED  - Last Synced Time (Local Time in CLI output and UTC in XML output)
  ENTRY        - Number of entry operations pending (resets on worker restart)
  DATA         - Number of data operations pending (resets on worker restart)
  META         - Number of meta operations pending (resets on worker restart)
  FAILURES     - Number of failures
  CHECKPOINT TIME            - Checkpoint set time (Local Time in CLI output
                               and UTC in XML output)
  CHECKPOINT COMPLETED       - Yes/No or N/A
  CHECKPOINT COMPLETION TIME - Checkpoint completed time (Local Time in CLI
                               output and UTC in XML output)

  XML output:
  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  cliOutput> geoRep> volume> name> sessions> session> session_slave> pair>
  master_node> master_brick> slave_user> slave/> slave_node> status>
  crawl_status> entry> data> meta> failures> checkpoint_completed>
  master_node_uuid> last_synced> checkpoint_time> checkpoint_completion_time>

  BUG: 1212410
  Change-Id: I944a6c3c67f1e6d6baf9670b474233bec8f61ea3
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/10121
  Tested-by: NetBSD Build System
  Reviewed-by: Kotresh HR <khiremat@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: fix vol_type in volume info xml  (Atin Mukherjee, 2015-04-26; 1 file, -2/+4)
  xml parsing of voltype should be in line with the cli.

  Change-Id: I41ddddac00d07f03b56a041e1c3f5a132fbd7393
  BUG: 1212398
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/10271
  Tested-by: NetBSD Build System
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* features/quota : Introducing inode quota  (vmallika, 2015-03-18; 1 file, -0/+67)
  Inode quota
  ===========
  Currently, the only way to retrieve the number of files/objects in a
  directory or volume is to do a crawl of the entire directory/volume.
  This is expensive and is not scalable.

  The proposed mechanism will provide an easier alternative to determine
  the count of files/objects in a directory or volume.

  The new mechanism proposes to store the count of objects/files as part
  of an extended attribute of a directory. Each directory's extended
  attribute value will indicate the number of files/objects present in a
  tree with the directory being considered as the root of the tree.

  The count value can be accessed by performing a getxattr(). Cluster
  translators like afr, dht and stripe will perform aggregation of count
  values from various bricks when getxattr() happens on the key associated
  with the file/object count.

  A new interface is introduced:
  ------------------------------
  limit-objects  : limit the number of inodes at directory level
  list-objects   : list the directories where the limit is set
  remove-objects : remove the limit from the directory

  CLI COMMAND:
  gluster volume quota <volname> limit-objects <path> <number> [<percent>]

  * <number> is a hard-limit on the number of objects for path "<path>".
    If the hard-limit is exceeded, creation of files/directories is no
    longer permitted.
  * <percent> is a soft-limit on the number of objects created under path
    "<path>". If the soft-limit is exceeded, a warning is issued for each
    creation.

  CLI COMMAND:
  gluster volume quota <volname> remove-objects [path]

  CLI COMMAND:
  gluster volume quota <volname> list-objects [path] ...

  Sample output:
  --------------
  Path   Hard-limit   Soft-limit   Used   Available   Soft-limit exceeded?   Hard-limit exceeded?
  -----------------------------------------------------------------------------------------------
  /dir   10           80%          10     0           Yes                     Yes

  [root@snapshot-28 dir]# ls
  a  b  file11  file12  file13  file14  file15  file16  file17
  [root@snapshot-28 dir]# touch a1
  touch: cannot touch `a1': Disk quota exceeded

  * Nine files are created in directory "dir", and the directory is
    included in the count too. Hence the limit of "10" is reached and
    further file creation fails.

  Note: We have also done some re-factoring in cli for volume name
  validation. A new function cli_validate_volname is created.

  Change-Id: I1823497de4f790a2a20ebb1770293472ea33ee2b
  BUG: 1190108
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/9769
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
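  A hedged sketch of the access pattern the commit describes, reading a
  directory's object count through getxattr(); the xattr key used here is a
  placeholder, not necessarily the exact key gluster exposes:

      #include <stdio.h>
      #include <sys/types.h>
      #include <sys/xattr.h>

      int
      main(int argc, char *argv[])
      {
          char value[64];
          const char *path = (argc > 1) ? argv[1] : ".";
          /* placeholder key name; the real quota object-count key may differ */
          ssize_t len = getxattr(path, "trusted.glusterfs.quota.object-count",
                                 value, sizeof(value) - 1);

          if (len < 0) {
              perror("getxattr");
              return 1;
          }
          value[len] = '\0';
          printf("%s: %s\n", path, value);
          return 0;
      }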
* features/quota : Fix XML output for quota list command  (Sachin Pandit, 2015-02-19; 1 file, -115/+124)
  Sample output:
  --------------
  Sample 1)
  ---------
  [root@snapshot-28 glusterfs]# gluster volume quota vol1 list /dir1 /dir4 /dir5 --xml
  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <cliOutput>
    <opRet>0</opRet> <opErrno>0</opErrno> <opErrstr/>
    <volQuota>
      <limit>
        <path>/dir1</path>
        <hard_limit>10.0MB</hard_limit> <soft_limit>80%</soft_limit>
        <used_space>0Bytes</used_space> <avail_space>10.0MB</avail_space>
        <hl_exceeded>No</hl_exceeded> <sl_exceeded>No</sl_exceeded>
      </limit>
      <limit>
        <path>/dir4</path>
        <path>No such file or directory</path>
      </limit>
      <limit>
        <path>/dir5</path>
        <path>No such file or directory</path>
      </limit>
    </volQuota>
  </cliOutput>

  Sample 2)
  ---------
  gluster volume quota vol1 list --xml
  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <cliOutput>
    <opRet>0</opRet> <opErrno>0</opErrno> <opErrstr/>
    <volQuota/>
  </cliOutput>

  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <cliOutput>
    <volQuota>
      <limit>
        <path>/dir</path>
        <hard_limit>10.0MB</hard_limit> <soft_limit>80%</soft_limit>
        <used_space>0Bytes</used_space> <avail_space>10.0MB</avail_space>
        <hl_exceeded>No</hl_exceeded> <sl_exceeded>No</sl_exceeded>
      </limit>
      <limit>
        <path>/dir1</path>
        <hard_limit>10.0MB</hard_limit> <soft_limit>80%</soft_limit>
        <used_space>0Bytes</used_space> <avail_space>10.0MB</avail_space>
        <hl_exceeded>No</hl_exceeded> <sl_exceeded>No</sl_exceeded>
      </limit>
    </volQuota>
  </cliOutput>

  Change-Id: I8a8d83cff88f778e5ee01fbca07d9f94c412317a
  BUG: 1185259
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/9481
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* cli: volume status for tcp,rdma type volume display only tcp port  (Mohammed Rafi KC, 2015-02-18; 1 file, -11/+55)
  For tcp,rdma type volumes there will be two ports, one for tcp and one
  for rdma, but the volume status command displays only the tcp port.
  With this change, an extra column is added for the rdma port and the
  existing port column is changed to the tcp port.

  Eg:
  > gluster volume status patchy

  For a tcp,rdma type volume:
  Status of volume: patchy
  Gluster process                     TCP Port  RDMA Port  Online  Pid
  ------------------------------------------------------------------------------
  Brick brickname                     49152     49153      Y       14158

  For an rdma type volume:
  Status of volume: patchy
  Gluster process                     TCP Port  RDMA Port  Online  Pid
  ------------------------------------------------------------------------------
  Brick brickname                     0         49153      Y       14158

  For a tcp type volume:
  Status of volume: patchy
  Gluster process                     TCP Port  RDMA Port  Online  Pid
  ------------------------------------------------------------------------------
  Brick brickname                     49152     0          Y       14158

  > gluster volume status patchy detail
  Status of volume: xcube2
  ------------------------------------------------------------------------------
  Brick            : Brick brickname
  TCP Port         : 49152
  RDMA Port        : 49153
  Online           : Y
  Pid              : 14158
  File System      : ext4
  Device           : /dev/mapper/luks-2099dd4a-0050-4cae-ad7b-c6a0498c4e88
  Mount Options    : rw,seclabel,relatime,data=ordered
  Inode Size       : 256
  Disk Space Free  : 31.1GB
  Total Disk Space : 47.9GB
  Inode Count      : 3203072
  Free Inodes      : 2926789

  > gluster volume status xcube --xml
  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <cliOutput>
    <opRet>0</opRet> <opErrno>0</opErrno> <opErrstr>(null)</opErrstr>
    <volStatus> <volumes> <volume>
      <volName>xcube</volName>
      <nodeCount>2</nodeCount>
      <node>
        <hostname>hostname</hostname> <path>/home/brick1</path>
        <peerid>2d7bcb95-3d26-4d4f-b3c6-e2ee01b71662</peerid> <status>1</status>
        <port>49152</port> <ports> <tcp>49152</tcp> <rdma>N/A</rdma> </ports> <pid>5657</pid>
      </node>
      <node>
        <hostname>NFS Server</hostname> <path>localhost</path>
        <peerid>2d7bcb95-3d26-4d4f-b3c6-e2ee01b71662</peerid> <status>1</status>
        <port>2049</port> <ports> <tcp>2049</tcp> <rdma>N/A</rdma> </ports> <pid>5665</pid>
      </node>
      <tasks/>
    </volume> </volumes> </volStatus>
  </cliOutput>

  Change-Id: I81aab226edbd400d29cd3f510af4f344dd99ba51
  BUG: 1164079
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/9191
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>