path: root/cli
Commit message (Author, Date, Files, Lines)
* build: MKDIR_P is not defined for Makefiles, use mkdir_p instead (Niels de Vos, 2015-11-10, 1 file, -1/+1)

    Change-Id: Id6d5263eb7b1c53e72a7668e716e9cc4e34b82cd
    Reported-by: Milind Changire <mchangir@redhat.com>
    BUG: 1198849
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/12553
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Milind Changire <mchangir@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
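    For context, a sketch of the kind of Makefile.am rule involved (the
    directory path is illustrative; the point is that $(mkdir_p) is the
    variable reliably defined for these Makefiles, while $(MKDIR_P) may
    be empty):

        # illustrative Makefile.am fragment, not the actual cli rule
        install-data-local:
                $(mkdir_p) $(DESTDIR)$(localstatedir)/run/gluster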
* tiering: Message shown in gluster vol tier <volname> status output is incorrect (Mohamed Ashiq, 2015-11-06, 1 file, -2/+4)

    Change-Id: I15a1a637090f1cc2f200d5c3582317e4aa3cf334
    BUG: 1278927
    Signed-off-by: Mohamed Ashiq <mliyazud@redhat.com>
    Reviewed-on: http://review.gluster.org/12532
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Tested-by: Dan Lambright <dlambrig@redhat.com>
* mgmt/glusterd: Store arbiter-count and restore it (Pranith Kumar K, 2015-11-04, 1 file, -12/+34)

    Problem:
    1) Glusterd does not remember the arbiter information of a replica
       volume in its store. When glusterd goes down and comes back up,
       arbiter volumes become plain replica volumes.
    2) Glusterd does not import/export arbiter information to/from the
       other peers.
    3) Volume info does not show any arbiter count in its output.

    Fix:
    1) Persist arbiter information in the glusterd store.
    2) Import/export arbiter information of the volume.
    3) Change volume info output to show the arbiter count.

    Change-Id: I2db81e73d2694b01f7d07b08a17b41ad5a55c361
    BUG: 1276675
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/12475
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* correction of message displayed after attach tier (hari gowtham, 2015-11-03, 1 file, -10/+36)

    The message printed after attach-tier referred to rebalance.
    It is changed to refer to tiering instead.

    Change-Id: I1834511f86483fa60f404d7defe5be59c025e9d6
    BUG: 1277081
    Signed-off-by: hari gowtham <hgowtham@redhat.com>
    Reviewed-on: http://review.gluster.org/12488
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Tested-by: Dan Lambright <dlambrig@redhat.com>
* quota: add version to quota xattrs (vmallika, 2015-11-02, 1 file, -6/+5)

    When quota is disabled and the clean-up process terminates without
    completely cleaning up the quota xattrs, the stale xattrs can mess
    up the accounting when quota is enabled again.

    A version number is now suffixed to all quota xattrs. This version
    number is specific to the marker xlator: when quota xattrs are
    requested by quotad or a client, marker removes the version suffix
    from the key before sending the response.

    Change-Id: I1ca2c11460645edba0f6b68db70d476d8d26e1eb
    BUG: 1272411
    Signed-off-by: vmallika <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/12386
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
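    A minimal sketch of the suffix-stripping described above (the key
    layout and helper name are assumptions for illustration, not the
    actual marker code):

        #include <stdio.h>
        #include <string.h>

        /* Strip a trailing ".<version>" suffix, e.g.
         * "trusted.glusterfs.quota.size.1" -> "trusted.glusterfs.quota.size".
         * The key layout here is illustrative, not the exact on-disk format. */
        static void
        strip_version_suffix (char *key)
        {
                char *dot = strrchr (key, '.');

                /* only strip if the last component is purely numeric */
                if (dot && *(dot + 1) &&
                    strspn (dot + 1, "0123456789") == strlen (dot + 1))
                        *dot = '\0';
        }

        int
        main (void)
        {
                char key[] = "trusted.glusterfs.quota.size.1";

                strip_version_suffix (key);
                printf ("%s\n", key);   /* trusted.glusterfs.quota.size */
                return 0;
        }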
* cli: 'gluster volume help' output sorted alphabetically (Mohamed Ashiq, 2015-10-28, 8 files, -18/+89)

    'gluster volume help' output was not sorted alphabetically. This
    makes it a little harder for the user to search, or to learn the
    usage of a few gluster volume commands straight from the gluster
    CLI.

    Change-Id: I855da2e4748a5c2ff3be319c50fa9548d676ee8a
    BUG: 1242894
    Signed-off-by: Mohamed Ashiq <mliyazud@redhat.com>
    Reviewed-on: http://review.gluster.org/11663
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
    Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
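    A minimal sketch of the underlying idea, sorting help lines with
    qsort (the sample lines are illustrative, not the CLI's actual
    command table):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        static int
        cmp_lines (const void *a, const void *b)
        {
                return strcmp (*(const char *const *)a,
                               *(const char *const *)b);
        }

        int
        main (void)
        {
                /* illustrative help lines, not the real command table */
                const char *help[] = {
                        "volume stop <VOLNAME>",
                        "volume create <NEW-VOLNAME> ...",
                        "volume info [all|<VOLNAME>]",
                };
                size_t n = sizeof (help) / sizeof (help[0]);
                size_t i;

                qsort (help, n, sizeof (help[0]), cmp_lines);
                for (i = 0; i < n; i++)
                        printf ("%s\n", help[i]);
                return 0;
        }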
* core: use syscall wrappers instead of direct syscalls - miscellaneous (Kaleb S. KEITHLEY, 2015-10-28, 3 files, -16/+18)

    Various xlators and other components invoke system calls directly
    instead of using the libglusterfs/syscall.[ch] wrappers. If the
    system call wrappers are not used, there should be a comment in the
    source explaining why the wrapper isn't used.

    Change-Id: I1f47820534c890a00b452fa61f7438eb2b3f667c
    BUG: 1267967
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/12276
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
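    An illustration of the pattern, assuming the sys_* wrappers in
    libglusterfs/syscall.h mirror the raw call signatures (this
    fragment only compiles inside the tree, where that header exists):

        #include <sys/stat.h>
        #include "syscall.h"   /* libglusterfs wrapper declarations */

        static int
        check_path (const char *path)
        {
                struct stat stbuf;

                /* before: a direct stat (path, &stbuf) call, which now
                 * requires a comment explaining why the wrapper is
                 * not used */

                /* after: the libglusterfs wrapper */
                return sys_stat (path, &stbuf);
        }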
* Tier/cli: removing warning message for tiering (hari gowtham, 2015-10-21, 1 file, -10/+0)

    The warning message stating that tiering is under experimental
    status is removed.

    Change-Id: I7d1d535d380b672c70f03ecc0d24a113600ea43f
    BUG: 1273726
    Signed-off-by: hari gowtham <hgowtham@redhat.com>
    Reviewed-on: http://review.gluster.org/12407
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Tested-by: Dan Lambright <dlambrig@redhat.com>
* cli/quota: rm -rf on /<mountpoint>/<dir> is not showing quota header (Manikandan Selvaganesh, 2015-10-14, 1 file, -4/+23)

    Currently, when 'gluster v quota <VOLNAME> list' is issued after an
    rm -rf on /run/gluster/vol/<directory>, the quota output header is
    not shown. This is because list_count was calculated correctly only
    for 'gluster v quota <VOLNAME> remove /path', not for an rm -rf.
    This patch fixes the issue.

    Change-Id: I5266a8b0b9322b7db1b9e1d6b0327065931f4bcb
    BUG: 1269375
    Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
    Reviewed-on: http://review.gluster.org/12345
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
* cli: freeing the allocated memory (Manikandan Selvaganesh, 2015-10-13, 1 file, -0/+1)

    Change-Id: Ibcbad94c091a9c24fe5aff2d7e8bcd9ac88da7bf
    BUG: 1248521
    Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
    Reviewed-on: http://review.gluster.org/12337
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
* gluster v status --xml for a replicated hot tier volume (hari gowtham, 2015-10-08, 1 file, -11/+12)

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <opRet>0</opRet>
      <opErrno>0</opErrno>
      <opErrstr/>
      <volStatus>
        <volumes>
          <volume>
            <volName>tiervol</volName>
            <nodeCount>11</nodeCount>
            <hotBricks>
              <node>
                <hostname>10.70.42.203</hostname>
                <path>/data/gluster/tier/b5_2</path>
                <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
                <status>1</status>
                <port>49164</port>
                <ports>
                  <tcp>49164</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>8684</pid>
              </node>
              <node>
                <hostname>10.70.42.203</hostname>
                <path>/data/gluster/tier/b5_1</path>
                <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
                <status>1</status>
                <port>49163</port>
                <ports>
                  <tcp>49163</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>8687</pid>
              </node>
              <node>
                <hostname>10.70.42.203</hostname>
                <path>/data/gluster/tier/b4_2</path>
                <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
                <status>1</status>
                <port>49162</port>
                <ports>
                  <tcp>49162</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>8699</pid>
              </node>
              <node>
                <hostname>10.70.42.203</hostname>
                <path>/data/gluster/tier/b4_1</path>
                <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
                <status>1</status>
                <port>49161</port>
                <ports>
                  <tcp>49161</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>8708</pid>
              </node>
            </hotBricks>
            <coldBricks>
              <node>
                <hostname>10.70.42.203</hostname>
                <path>/data/gluster/tier/b1_1</path>
                <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
                <status>1</status>
                <port>49155</port>
                <ports>
                  <tcp>49155</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>8716</pid>
              </node>
              <node>
                <hostname>10.70.42.203</hostname>
                <path>/data/gluster/tier/b1_2</path>
                <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
                <status>1</status>
                <port>49156</port>
                <ports>
                  <tcp>49156</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>8724</pid>
              </node>
              <node>
                <hostname>NFS Server</hostname>
                <path>localhost</path>
                <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
                <status>1</status>
                <port>2049</port>
                <ports>
                  <tcp>2049</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>8678</pid>
              </node>
            </coldBricks>
            <tasks>
              <task>
                <type>Tier migration</type>
                <id>975bfcfa-077c-4edb-beba-409c2013f637</id>
                <status>1</status>
                <statusStr>in progress</statusStr>
              </task>
            </tasks>
          </volume>
        </volumes>
      </volStatus>
    </cliOutput>

    Change-Id: I69252a36b6e6b2f3cbe5db06e9a716f504a1dba4
    BUG: 1268810
    Signed-off-by: hari gowtham <hgowtham@redhat.com>
    Reviewed-on: http://review.gluster.org/12302
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Tested-by: Dan Lambright <dlambrig@redhat.com>
* Tier/cli: number of bricks remains the same in v info --xml (hari gowtham, 2015-10-06, 1 file, -1/+1)

    The number of bricks count remains one for the cold type.

    Actual result:
    <numberOfBricks>1 x 2 = 2</numberOfBricks>

    Expected result:
    <numberOfBricks>3 x 2 = 6</numberOfBricks>

    Change-Id: I31480a7808b248ef9ea805cb64f7663d44647ddf
    BUG: 1268822
    Signed-off-by: hari gowtham <hgowtham@redhat.com>
    Reviewed-on: http://review.gluster.org/12303
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* tier/cli: throw a warning when user issues a detach-tier commit/force command (Manikandan Selvaganesh, 2015-10-05, 3 files, -3/+6)

    Change-Id: Idf7664d509156ce46ef4308ffc07fb556a0aedd2
    BUG: 1268755
    Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
    Reviewed-on: http://review.gluster.org/12297
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
* cli/tier: fixes cli crash when user tries "gluster v tier" command (Manikandan Selvaganesh, 2015-10-01, 1 file, -1/+1)

    Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
    Change-Id: I919d8935c849f9be6b2cb43e8332afb821778d89
    BUG: 1267539
    Reviewed-on: http://review.gluster.org/12258
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: Dan Lambright <dlambrig@redhat.com>
* quota: xml output modified to give exact available space in bytes (Manikandan Selvaganesh, 2015-09-30, 3 files, -36/+36)

    Currently, the 'gluster v quota <VOLNAME> list' command rounds off
    the available space before showing it to the user. Now, the
    'gluster v quota <VOLNAME> list --xml' command is modified to show
    the exact available space in bytes.

    Change-Id: I3772e036a2537c1df12f22cf32dfe4ac7940988f
    BUG: 1261404
    Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
    Reviewed-on: http://review.gluster.org/12137
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
* Tier/cli: tier related information in volume info xml (hari gowtham, 2015-09-29, 2 files, -28/+266)

    gluster v info did not differentiate the hot bricks from the cold
    bricks, along with a few other values:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <opRet>0</opRet>
      <opErrno>0</opErrno>
      <opErrstr/>
      <volInfo>
        <volumes>
          <volume>
            <name>rmbr</name>
            <id>72d223fc-96ba-4f4a-ac6e-0d0bc16ef127</id>
            <status>1</status>
            <statusStr>Started</statusStr>
            <brickCount>3</brickCount>
            <distCount>1</distCount>
            <stripeCount>1</stripeCount>
            <replicaCount>1</replicaCount>
            <disperseCount>0</disperseCount>
            <redundancyCount>0</redundancyCount>
            <type>5</type>
            <typeStr>Tier</typeStr>
            <transport>0</transport>
            <xlators/>
            <bricks>
              <hotBricks>
                <hotBrickType>Distribute</hotBrickType>
                <numberOfBricks>1</numberOfBricks>
                <brick uuid="81">v1:/hb1<name>v1:/hb1</name><hostUuid>81</hostUuid></brick>
              </hotBricks>
              <coldBricks>
                <coldBrickType>Distribute</coldBrickType>
                <numberOfBricks>2</numberOfBricks>
                <brick uuid="81">v1:/br1<name>v1:/br1</name><hostUuid>81</hostUuid></brick>
                <brick uuid="81">v1:/br2<name>v1:/br2</name><hostUuid>81</hostUuid></brick>
                <count>0</count>
              </coldBricks>
            </bricks>
          </volume>
        </volumes>
      </volInfo>
    </cliOutput>

    Change-Id: I6e52541bb6d8a6a17e17bfcb42434beaac13db56
    BUG: 1261837
    Signed-off-by: hari gowtham <hgowtham@redhat.com>
    Reviewed-on: http://review.gluster.org/12158
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Tested-by: Dan Lambright <dlambrig@redhat.com>
* Tier/cli: Change detach-tier commit force to detach-tier force (Mohammed Rafi KC, 2015-09-22, 2 files, -16/+5)

    The current detach-tier cli command supports 'commit force'. This is
    deprecated in favour of 'force', so the new syntax is:

    volume detach-tier <VOLNAME> <start|stop|status|commit|force>

    Change-Id: Ie86dfd72341078c0a1be94767f523730911312ef
    BUG: 1261862
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/12151
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Tested-by: Dan Lambright <dlambrig@redhat.com>
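    For example, a detach under the new syntax would run as follows
    (the volume name is illustrative):

        gluster volume detach-tier tiervol start
        gluster volume detach-tier tiervol status
        gluster volume detach-tier tiervol commit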
* Tier/cli: tier related information in volume status command (hari gowtham, 2015-09-18, 1 file, -1/+33)

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <opRet>0</opRet>
      <opErrno>0</opErrno>
      <opErrstr/>
      <volStatus>
        <volumes>
          <volume>
            <volName>v1</volName>
            <nodeCount>5</nodeCount>
            <hotBrick>
              <node>
                <hostname>10.70.42.203</hostname>
                <path>/data/gluster/tier/hbr1</path>
                <peerid>137e2a4f-2bde-4a97-b3f3-470a2e092155</peerid>
                <status>1</status>
                <port>49154</port>
                <ports>
                  <tcp>49154</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>6535</pid>
              </node>
            </hotBrick>
            <coldBrick>
              <node>
                <hostname>10.70.42.203</hostname>
                <path>/data/gluster/tier/cb1</path>
                <peerid>137e2a4f-2bde-4a97-b3f3-470a2e092155</peerid>
                <status>1</status>
                <port>49152</port>
                <ports>
                  <tcp>49152</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>6530</pid>
              </node>
            </coldBrick>
            <coldBrick>
              <node>
                <hostname>NFS Server</hostname>
                <path>10.70.42.203</path>
                <peerid>137e2a4f-2bde-4a97-b3f3-470a2e092155</peerid>
                <status>1</status>
                <port>2049</port>
                <ports>
                  <tcp>2049</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>6519</pid>
              </node>
            </coldBrick>
            <tasks>
              <task>
                <type>Rebalance</type>
                <id>8da729f2-f1b2-4f55-9945-472130be93f7</id>
                <status>4</status>
                <statusStr>failed</statusStr>
              </task>
            </tasks>
          </volume>
          <tasks/>
          </volume>
        </volumes>
      </volStatus>
    </cliOutput>

    Change-Id: Idfdbce47d03ee2cdbf407c57159fd37a2900ad2c
    BUG: 1263100
    Signed-off-by: hari gowtham <hgowtham@redhat.com>
    Reviewed-on: http://review.gluster.org/12176
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Tested-by: Dan Lambright <dlambrig@redhat.com>
* tier/cli: adding a check for gluster v tier help command (Manikandan Selvaganesh, 2015-09-16, 1 file, -0/+2)

    Currently, the 'gluster v tier/attach-tier/detach-tier help' command
    shows the usage and then prints 'Tier command failed'. With this
    patch the spurious error message is removed.

    Change-Id: I1679fe3303d73ba6b6fdbb7ee18028062d446f39
    BUG: 1263224
    Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
    Reviewed-on: http://review.gluster.org/12181
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
* cluster/tier: add gluster v tier <vol> (Dan Lambright, 2015-09-09, 4 files, -7/+194)

    Currently the tier feature piggybacks off the rebalance command
    syntax to obtain status, and this is clumsy. Introduce a new tier
    command that can do tier specific operations, starting with volume
    status to display counters.

    Old commands:
    gluster volume attach-tier <vol> [replica count] {bricklist..}
    gluster volume detach-tier <vol> {start|stop|commit}

    New commands:
    gluster volume tier <vol> attach [replica count] {bricklist} |
                              detach {start|stop|commit} |
                              status

    Change-Id: Ic07b3c6260588162de7d34380f8cbd3d8a7f35d3
    BUG: 1255693
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-on: http://review.gluster.org/11984
    Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* xml/tiering: enhance xml output for tiering status related cli commands (Hari Gowtham, 2015-09-07, 1 file, -0/+107)

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <opRet>0</opRet>
      <opErrno>0</opErrno>
      <opErrstr/>
      <volRebalance>
        <task-id>34f47e29-2193-4a86-9b1e-c7e56bdae3d4</task-id>
        <op>7</op>
        <nodeCount>1</nodeCount>
        <node>
          <nodeName>localhost</nodeName>
          <promotedfiles>0</promotedfiles>
          <demotedfiles>0</demotedfiles>
          <statusStr>in progress</statusStr>
        </node>
      </volRebalance>
    </cliOutput>

    Change-Id: I61083f7b9b0b3bd840982b8c5d6ea4b42e27c9b3
    BUG: 1252737
    Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
    Reviewed-on: http://review.gluster.org/11890
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
* all: reduce "inline" usage (Jeff Darcy, 2015-09-01, 1 file, -1/+1)

    There are three kinds of inline functions: plain inline, extern
    inline, and static inline. All three have been removed from .c
    files, except those in "contrib" which aren't our problem. Inlines
    in .h files, which are overwhelmingly "static inline" already, have
    generally been left alone. Over time we should be able to "lower"
    these into .c files, but that has to be done in a case-by-case
    fashion requiring more manual effort. This part was easy to do
    automatically without (as far as I can tell) any ill effect.

    In the process, several pieces of dead code were flagged by the
    compiler, and were removed.

    Change-Id: I56a5e614735c9e0a6ee420dab949eac22e25c155
    BUG: 1245331
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-on: http://review.gluster.org/11769
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
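    An illustration of the "lowering" described above, with hypothetical
    function and file names:

        /* before, in some_header.h (hypothetical): */
        static inline int
        is_even (int x)
        {
                return (x % 2) == 0;
        }

        /* after: some_header.h keeps only a declaration ... */
        int is_even (int x);

        /* ... and some_file.c carries the definition, no "inline" */
        int
        is_even (int x)
        {
                return (x % 2) == 0;
        }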
* gluster/cli: snapshot delete all does not work with xml (Rajesh Joseph, 2015-08-28, 4 files, -114/+347)

    Problem: the snapshot delete all command fails with the --xml option.

    Fix: provided xml support for the delete all command.

    Change-Id: I77cad131473a9160e188c783f442b6a38a37f758
    BUG: 1257533
    Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-on: http://review.gluster.org/12027
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
* cli: on error invoke cli_cmd_broadcast_response function in separate thread (vmallika, 2015-08-28, 4 files, -19/+89)

    There is a problem in the current CLI framework: CLI holds the lock
    while processing a command. When processing the quota list command,
    the below sequence of steps executes in the same thread, causing a
    deadlock:

    1) CLI holds the lock
    2) rpc_clnt_submit sends a request to quotad for quota usage
    3) If quotad is down, rpc_clnt_submit invokes the cbk function with
       an error
    4) The cbk function cli_quotad_getlimit_cbk tries to hold the lock
       to broadcast the results and hangs, because the same thread is
       already holding the lock

    This patch fixes the problem by creating a separate thread for
    broadcasting the result.

    Change-Id: I53be006eadf6aaf348083d9168535530d70a8ab3
    BUG: 1242819
    Signed-off-by: vmallika <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/11990
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
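    A minimal sketch of the workaround's shape, handing the broadcast
    off to a detached thread so the RPC thread never re-takes its own
    lock (function and argument names are hypothetical, not the actual
    CLI code):

        #include <pthread.h>

        /* hypothetical stand-in for cli_cmd_broadcast_response() */
        extern void broadcast_response (void *args);

        static void *
        broadcast_thread (void *args)
        {
                broadcast_response (args);  /* takes the CLI lock safely here */
                return NULL;
        }

        /* called from the error path of the RPC callback, which must
         * not block on the lock its own thread already holds */
        static int
        broadcast_in_new_thread (void *args)
        {
                pthread_t tid;
                int       ret;

                ret = pthread_create (&tid, NULL, broadcast_thread, args);
                if (ret == 0)
                        pthread_detach (tid);
                return ret;
        }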
* snapshot: Fix snapshot info's xml output (Avra Sengupta, 2015-08-24, 1 file, -1/+5)

    Display the description field as (null) if no description is present
    for the snapshot, instead of removing the field altogether.

    Change-Id: I965b08cd6e54eea56c32e2712fab7daa8a663f11
    BUG: 1250387
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/11834
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* quota: display the size of soft limit percentage (Manikandan Selvaganesh, 2015-08-21, 2 files, -16/+21)

    Display the size equivalent to the soft limit percentage in the
    'gluster v quota <volname> list <path>' and
    'gluster v quota <volname> list-objects <path>' commands.

    Change-Id: I31ee82e9e836068348cf9458dcaf13f043d9fd87
    BUG: 1248521
    Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
    Reviewed-on: http://review.gluster.org/11808
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* xml output: Fix non-uniform opErrstr xml output (Avra Sengupta, 2015-08-14, 1 file, -2/+9)

    Display <opErrstr/> when there is no operrstr, for all xml output
    of gluster commands.

    Change-Id: Ie16f749f90b4642357c562012408c434cd38661f
    BUG: 1245895
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/11835
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* cli: changing the assignment to comparison operator on an if statement (Hari Gowtham, 2015-08-13, 1 file, -1/+1)

    CID: 1124702

    Change-Id: I6366834224a8176824070150b7f2af76b4d65b7f
    BUG: 789278
    Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
    Reviewed-on: http://review.gluster.org/11665
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
    Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* rpc: add owner xlator argument to rpc_clnt_new (Krishnan Parthasarathi, 2015-08-12, 2 files, -2/+2)

    The @owner argument tells the RPC layer which xlator owns the
    connection, and to which xlator THIS needs to be set during network
    notifications like CONNECT and DISCONNECT.

    Code paths that originate from the head of a (volume) graph and use
    STACK_WIND ensure that the RPC local endpoint has the right xlator
    saved in the frame of the call (callback pair). This guarantees
    that the callback is executed in the right xlator context.

    The client handshake process, which includes fetching brick ports
    from glusterd and setting lk-version on the brick for the session,
    doesn't have the correct xlator set in its frames. The problem lies
    with RPC notifications: they have no provision to set THIS to the
    xlator that is registered with the corresponding RPC programs.
    e.g., the RPC_CLNT_CONNECT event received by protocol/client
    doesn't have THIS set to its xlator. This implies that
    call(-callbacks) originating from this thread don't have the right
    xlator set either.

    The fix is to save the xlator registered with the RPC connection
    during rpc_clnt_new. e.g., protocol/client's xlator would be saved
    with the RPC connection that it 'owns'. RPC notifications such as
    CONNECT, DISCONNECT, etc. inherit THIS from the RPC connection's
    xlator.

    Change-Id: I9dea2c35378c511d800ef58f7fa2ea5552f2c409
    BUG: 1235582
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/11436
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* quota: checking for absolute path in quota command (Manikandan Selvaganesh, 2015-08-12, 1 file, -0/+6)

    Currently, if an absolute path is not entered in
    'gluster volume quota <vol-name> list <path>', it just shows the
    header (Path Hard-limit Soft-limit...) instead of showing an error
    message. With this patch, it shows an error asking the user to
    enter the absolute path.

    Change-Id: I2c3d34bfdc7b924d00b11f8649b73a5069cbc2dc
    BUG: 1245558
    Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
    Reviewed-on: http://review.gluster.org/11738
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
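    The check itself can be as small as rejecting any path that does
    not begin with '/'; a sketch (the error text and helper name are
    illustrative):

        #include <stdio.h>

        /* returns 0 if the path is usable, -1 otherwise */
        static int
        validate_quota_path (const char *path)
        {
                if (path == NULL || path[0] != '/') {
                        fprintf (stderr, "Please enter an absolute path\n");
                        return -1;
                }
                return 0;
        }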
* cli: removing extra colon from rebalance status output (Sakshi, 2015-07-29, 1 file, -4/+4)

    Change-Id: I74417471d7d2a86f198037d88dbf7d072c4349c3
    BUG: 1218960
    Signed-off-by: Sakshi <sabansal@redhat.com>
    Reviewed-on: http://review.gluster.org/10475
    Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: N Balachandran <nbalacha@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* cli: changing the gluster peer probe message (Mohamed Ashiq Liyazudeen, 2015-07-28, 1 file, -2/+2)

    When given an invalid address, 'gluster peer probe' did not report
    that an IP address is also valid input.

    Change-Id: I8f58341a2b76369ccf62f88ca0ecd8a9a9529af6
    BUG: 1242742
    Signed-off-by: Mohamed Ashiq Liyazudeen <mliyazud@redhat.com>
    Reviewed-on: http://review.gluster.org/11657
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
* quota: validating soft limit percentage (Manikandan Selvaganesh, 2015-07-20, 1 file, -2/+4)

    Change-Id: I14c049c84c468b6415a1de45441b2fed94e8ed4b
    BUG: 1240654
    Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
    Reviewed-on: http://review.gluster.org/11566
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
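    A soft limit is a percentage, so the validation reduces to a range
    check; a sketch (the 0-100 bounds and optional trailing '%' are
    assumptions for illustration):

        #include <stdlib.h>

        /* returns 0 if "75" or "75%" style input is a valid percentage */
        static int
        validate_soft_limit (const char *value)
        {
                char *end = NULL;
                long  pct = strtol (value, &end, 10);

                /* allow an optional trailing '%' */
                if (end == value ||
                    (*end != '\0' && !(*end == '%' && end[1] == '\0')))
                        return -1;
                if (pct < 0 || pct > 100)
                        return -1;
                return 0;
        }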
* cli: Create missing logbuf_pool (Anoop C S, 2015-07-16, 1 file, -0/+4)

    Since logbuf_pool was not created via glusterfs_ctx_defaults_init(),
    the following error appeared repeatedly in the cli logs for each and
    every execution of a gluster command:

    E [mem-pool.c:417:mem_get0] (-->/usr/local/lib/libglusterfs.so.0(+0x7e262) [0x7fdbc0b1f262] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x804) [0x7fdbc0ac7844] -->/usr/local/lib/libglusterfs.so.0(mem_get0+0x78) [0x7fdbc0af5b48] ) 0-mem-pool: invalid argument [Invalid argument]

    This change creates ctx->logbuf_pool via
    glusterfs_ctx_defaults_init() in cli.c so that the above error is no
    longer logged in cli logs.

    Change-Id: I3fcd9cfefa06ddd52e1989b039ff5637372c3235
    BUG: 1243753
    Signed-off-by: Anoop C S <anoopcs@redhat.com>
    Reviewed-on: http://review.gluster.org/11691
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* quota/marker: fix spurious failure afr-quota-xattr-mdata-heal.t (vmallika, 2015-07-10, 1 file, -10/+6)

    During the quota-update process, if inode info is present in the
    size xattr but missing in the contri xattrs, then in the function
    '_mq_get_metadata' we set contri-size to zero (on error -2, which
    means usage info is present but inode info is missing). With this we
    calculate a wrong delta and update it.

    With this patch we ignore the errors if inode info is missing in the
    xattrs.

    Change-Id: I7940a0e299b8bb425b5b43746b1f13f775c7fb92
    BUG: 1241153
    Signed-off-by: vmallika <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/11583
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>
* common-ha: Fixing add node operation (Meghana M, 2015-06-19, 1 file, -1/+1)

    Resource create for the added node referenced a variable new_node
    that was never passed. This led to a wrong schema type in the cib
    file, and hence the added node always ended up in a failed state.
    Also, resources were wrongly created twice, which led to more
    errors. I have fixed the variable name and deleted the repetitive
    invocation of the recreate-resource function.

    The new node has to be added to the existing ganesha-ha config file
    for correct behaviour during subsequent add-node operations. This
    edited file has to be copied to all the other cluster nodes. I have
    added a fix for this as well.

    Change-Id: Ie55138e2657d22298d89db1c08f2e17930686bd6
    BUG: 1233246
    Signed-off-by: Meghana M <mmadhusu@redhat.com>
    Reviewed-on: http://review.gluster.org/11316
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: soumya k <skoduri@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd: Fix snapshot of a volume with geo-rep (Kotresh HR, 2015-06-17, 1 file, -2/+20)

    Problem: Snapshot fails for a volume configured with geo-rep if
    geo-rep is created as the root user with the following syntax:

    gluster vol geo-rep <master_vol> root@<slave_host>::<slave_vol>

    It works fine if created with the following syntax:

    gluster vol geo-rep <master_vol> <slave_host>::<slave_vol>

    Cause: Geo-rep maintains a persistent dictionary of the slaves
    associated with a master volume. The dictionary saves the slave
    info along with 'root@', as sent from the cli. Snapshot constructs
    the working dir path, used to copy configuration files, from this
    dictionary. But the actual working dir is created without
    considering 'root@'. Hence the issue.

    Fix: The fix is done at two layers.
    1. Parse and neglect 'root@' in the cli itself.
    2. For existing geo-rep sessions and upgrade scenarios, parse and
       neglect 'root@' in the snapshot code as well.

    Change-Id: If4e04f7f776ef71df4dd1e7e053ef75db98762b2
    BUG: 1231789
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/11233
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
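    The parsing step amounts to skipping a literal 'root@' prefix
    before the slave host; a sketch (the helper name is hypothetical):

        #include <string.h>

        /* "root@slavehost::slavevol" -> "slavehost::slavevol";
         * anything without the prefix is returned unchanged */
        static const char *
        skip_root_prefix (const char *slave)
        {
                static const char prefix[] = "root@";

                if (strncmp (slave, prefix, sizeof (prefix) - 1) == 0)
                        return slave + sizeof (prefix) - 1;
                return slave;
        }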
* NFS-Ganesha: Automatically export vol that was exported before vol restart (Meghana M, 2015-06-16, 1 file, -1/+2)

    Consider a volume that is exported via NFS-Ganesha. Stopping this
    volume will automatically unexport the volume. Starting this volume
    should automatically export it. Although the logic was already
    there, there was a bug in it. Fixing the same by introducing a hook
    script.

    Also, with the new CLI options the hook script S31ganesha-set.sh is
    no longer required. Hence, removing the same. Adding a comment to
    tell the user that one of the CLI commands will take a few minutes
    to complete.

    Change-Id: Ibff769ca04fef0c2a129c83fe31fc9c869350e8d
    BUG: 1231738
    Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
    Reviewed-on: http://review.gluster.org/11247
    Reviewed-by: soumya k <skoduri@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* features/bitrot: tunable object signing waiting time value for bitrot (Gaurav Kumar Garg, 2015-06-15, 1 file, -3/+31)

    Currently bitrot uses a 120-second waiting time before an object is
    signed after all fops are released. This signing waiting time value
    should be tunable.

    The command for changing the signing waiting time is:

    # gluster volume bitrot <VOLNAME> signing-time <waiting time value in seconds>

    Change-Id: I89f3121564c1bbd0825f60aae6147413a2fbd798
    BUG: 1228680
    Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
    Signed-off-by: Venky Shankar <vshankar@redhat.com>
    Reviewed-on: http://review.gluster.org/11105
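    For instance, to raise the wait to ten minutes on a volume (volume
    name and value are illustrative):

        # gluster volume bitrot demo-vol signing-time 600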
* glusterd/tier: glusterd crashed with detach-tier commit force (Mohammed Rafi KC, 2015-06-11, 1 file, -1/+1)

    glusterd crashed when doing "detach-tier commit force" on a
    non-tiered volume.

    Change-Id: I884771893bb80bec46ae8642c2cfd7e54ab116a6
    BUG: 1228112
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/11081
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Joseph Fernandes
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/shared_storage: Provide a volume set option to create and mount the shared storage (Avra Sengupta, 2015-06-04, 4 files, -21/+92)

    Introducing a global volume set option
    (cluster.enable-shared-storage) which helps create and set up the
    shared storage meta volume.

    gluster volume set all cluster.enable-shared-storage enable

    On enabling this option, the system analyzes the number of peers in
    the cluster which are currently connected, and chooses three such
    peers (including the node the command is issued from). From these
    peers a volume (gluster_shared_storage) is created. Depending on the
    number of peers available, the volume is either a replica 3 volume
    (if there are 3 connected peers) or a replica 2 volume (if there are
    2 connected peers). "/var/run/gluster/ss_brick" serves as the brick
    path on each node for the shared storage volume. We also mount the
    shared storage at "/var/run/gluster/shared_storage" on all the nodes
    in the cluster as part of enabling this option. If there is only one
    node in the cluster, or only one node is up, the command will fail.

    Once the volume is created and mounted, maintenance of the volume,
    like adding bricks, removing bricks etc., is expected to be the onus
    of the user.

    On disabling the option, we provide the user a warning, and on
    affirmation from the user we stop the shared storage volume and
    unmount it from all the nodes in the cluster.

    gluster volume set all cluster.enable-shared-storage disable

    Change-Id: Idd92d67b93f444244f99ede9f634ef18d2945dbc
    BUG: 1222013
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/10793
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* cli: Fix incorrect parse logic for volume heal commands (Anuradha, 2015-06-02, 1 file, -2/+2)

    heal-op was being incorrectly set to
    GF_SHD_OP_SBRAIN_HEAL_FROM_BIGGER_FILE.

    Change-Id: I4d4461c7737feae30102e82f7788083017485669
    BUG: 1221128
    Signed-off-by: Anuradha <atalur@redhat.com>
    Reviewed-on: http://review.gluster.org/10771
    Reviewed-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
* build: do not #include "config.h" in each file (Niels de Vos, 2015-05-29, 16 files, -80/+0)

    Instead of including config.h in each file, have config.h included
    from the compiler command line (-include option).

    When a .c file tests for a certain #define and config.h was not
    included, incorrect assumptions were made. With this change, that
    can not happen again.

    BUG: 1222319
    Change-Id: I4f9097b8740b81ecfe8b218d52ca50361f74cb64
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/10808
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
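    A sketch of the compiler-command-line include this refers to (the
    variable wiring is illustrative; the actual build may plumb the flag
    differently):

        # illustrative automake fragment: every .c file is compiled as
        # if it began with #include "config.h"
        AM_CPPFLAGS = -include $(top_builddir)/config.h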
* tiering: Correct errors in cli and glusterd (Mohammed Rafi KC, 2015-05-28, 1 file, -3/+3)

    Problem 1: volume info shows Cold Bricks instead of Tier type, e.g.:

    Volume Name: patchy2
    Type: Tier
    Volume ID: 28c25b8d-b8a1-45dc-b4b7-cbd0b344f58f
    Status: Started
    Number of Bricks: 3
    Transport-type: tcp
    Hot Tier :
    Hot Tier Type : Distribute
    Number of Bricks: 1
    Brick1: 10.70.1.35:/home/brick43
    Cold Bricks:
    Cold Tier Type : Distribute
    Number of Bricks: 2
    Brick2: 10.70.1.35:/home/brick19
    Brick3: 10.70.1.35:/home/brick16
    Options Reconfigured:

    Problem 2: detach-tier sends the enums of rebalance. detach-tier has
    its own enum to send with the detach-tier command; using that enum
    is more appropriate.

    Problem 3: the hot brick count is wrongly set during the dictionary
    copy for the response.

    Change-Id: Icc054a999a679456881bc70511470d32ff8a86e4
    BUG: 1211264
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/10768
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System
* features/bitrot: reimplement scrubbing frequency (Venky Shankar, 2015-05-28, 2 files, -2/+3)

    This patch reimplements the existing scrub-frequency mechanism used
    to schedule scrubber runs. The existing mechanism uses periodic
    sleeps (waking up periodically on minimum granularity) and performs
    a number of tracking checks based on counters and sleep times. This
    patch does away with all the nifty counters and uses a timer-wheel
    to schedule scrub runs.

    Scheduling changes are performed by merely calculating the new
    expiry time and calling mod_timer() [mod_timer_pending() in some
    cases], making the code more debuggable and easier to follow.

    This also introduces an "hourly" scrubbing tunable as an aid for
    testing scrubbing during the development/testing cycle. One could
    also implement on-demand scrubbing with ease: by invoking
    mod_timer() with an expiry of one (1) second, thereby scheduling a
    scrub run the very next second.

    Change-Id: I6c7c5f0c6c9f886bf574d88c04cde14b76e60a8b
    BUG: 1224596
    Signed-off-by: Venky Shankar <vshankar@redhat.com>
    Reviewed-on: http://review.gluster.org/10893
    Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
    Tested-by: NetBSD Build System
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
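    A sketch of the rescheduling idea described above (the timer type
    and helpers are hypothetical scaffolding; only the mod_timer()-style
    call mirrors the description):

        /* hypothetical types standing in for the timer-wheel entry and
         * the configured scrub frequency */
        struct scrub_timer { unsigned long expires; };

        extern void mod_timer (struct scrub_timer *timer,
                               unsigned long expires);
        extern unsigned long now_seconds (void);

        /* rescheduling: just compute the new expiry and re-arm */
        static void
        reschedule_scrub (struct scrub_timer *timer,
                          unsigned long freq_seconds)
        {
                mod_timer (timer, now_seconds () + freq_seconds);
        }

        /* on-demand scrubbing, per the description: expire one second
         * from now */
        static void
        scrub_now (struct scrub_timer *timer)
        {
                mod_timer (timer, now_seconds () + 1);
        }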
* cli/tiering: volume info should display details about tier (Mohammed Rafi KC, 2015-05-10, 1 file, -34/+148)

    >> gluster volume info patchy

    Volume Name: patchy
    Type: Tier
    Volume ID: 8bf1a1ca-6417-484f-821f-18973a7502a8
    Status: Created
    Number of Bricks: 8
    Transport-type: tcp
    Hot Tier :
    Hot Tier Type : Replicate
    Number of Bricks: 1 x 2 = 2
    Brick1: hostname:/home/brick30
    Brick2: hostname:/home/brick31
    Cold Bricks:
    Cold Tier Type : Disperse
    Number of Bricks: 1 x (4 + 2) = 6
    Brick3: hostname:/home/brick20
    Brick4: hostname:/home/brick21
    Brick5: hostname:/home/brick23
    Brick6: hostname:/home/brick24
    Brick7: hostname:/home/brick25
    Brick8: hostname:/home/brick26

    Change-Id: I7b9025af81263ebecd641b4b6897b20db8b67195
    BUG: 1212400
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/10339
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli/tiering: display hot tier, and cold tier separately (Mohammed Rafi KC, 2015-05-09, 1 file, -15/+65)

    cli commands display the brick information without a way to
    distinguish hot tier and cold tier. This patch changes all the
    CLI-related output, without changing the corresponding xml output.

    This patch changes the following things:

    >> gluster volume info

    Volume Name: patchy
    Type: Tier
    Volume ID: 7745d367-811a-4fe9-a500-d04e7afa94bf
    Status: Created
    Number of Bricks: 3 x 2 = 6
    Transport-type: tcp
    Hot Bricks:
    Brick1: hostname:/home/brick21
    Brick2: hostname:/home/brick20
    Cold Bricks:
    Brick3: hostname:/home/brick19
    Brick4: hostname:/home/brick16
    Brick5: hostname:/home/brick17
    Brick6: hostname:/home/brick18

    >> gluster volume status

    Status of volume: patchy
    Gluster process                   TCP Port  RDMA Port  Online  Pid
    ------------------------------------------------------------------------------
    Hot Bricks:
    Brick hostname:/home/brick21      49152     0          Y       4690
    Brick hostname:/home/brick20      49153     0          Y       4707
    Cold Bricks:
    Brick hostname:/home/brick19      49154     0          Y       4724
    Brick hostname:/home/brick16      49155     0          Y       4741
    Brick hostname:/home/brick17      49156     0          Y       4758
    Brick hostname:/home/brick18      49157     0          Y       4775
    NFS Server on localhost           2049      0          Y       4793

    Task Status of Volume patchy
    ------------------------------------------------------------------------------
    There are no active volume tasks

    >> gluster volume status patchy detail

    Status of volume: patchy
    Hot Bricks:
    ------------------------------------------------------------------------------
    Brick            : Brick hostname:/home/brick21
    TCP Port         : 49162
    RDMA Port        : 0
    Online           : Y
    Pid              : 22677
    File System      : ext4
    Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
    Mount Options    : rw,seclabel,relatime,data=ordered
    Inode Size       : 256
    Disk Space Free  : 127.3GB
    Total Disk Space : 165.4GB
    Inode Count      : 11026432
    Free Inodes      : 10998043
    ------------------------------------------------------------------------------
    Brick            : Brick hostname:/home/brick20
    TCP Port         : 49161
    RDMA Port        : 0
    Online           : Y
    Pid              : 22660
    File System      : ext4
    Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
    Mount Options    : rw,seclabel,relatime,data=ordered
    Inode Size       : 256
    Disk Space Free  : 127.3GB
    Total Disk Space : 165.4GB
    Inode Count      : 11026432
    Free Inodes      : 10998043
    Cold Bricks:
    ------------------------------------------------------------------------------
    Brick            : Brick hostname:/home/brick19
    TCP Port         : 49157
    RDMA Port        : 0
    Online           : Y
    Pid              : 22501
    File System      : ext4
    Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
    Mount Options    : rw,seclabel,relatime,data=ordered
    Inode Size       : 256
    Disk Space Free  : 127.3GB
    Total Disk Space : 165.4GB
    Inode Count      : 11026432
    Free Inodes      : 10998043
    ------------------------------------------------------------------------------
    Brick            : Brick hostname:/home/brick16
    TCP Port         : 49158
    RDMA Port        : 0
    Online           : Y
    Pid              : 22518
    File System      : ext4
    Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
    Mount Options    : rw,seclabel,relatime,data=ordered
    Inode Size       : 256
    Disk Space Free  : 127.3GB
    Total Disk Space : 165.4GB
    Inode Count      : 11026432
    Free Inodes      : 10998043
    ------------------------------------------------------------------------------
    Brick            : Brick hostname:/home/brick17
    TCP Port         : 49159
    RDMA Port        : 0
    Online           : Y
    Pid              : 22535
    File System      : ext4
    Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
    Mount Options    : rw,seclabel,relatime,data=ordered
    Inode Size       : 256
    Disk Space Free  : 127.3GB
    Total Disk Space : 165.4GB
    Inode Count      : 11026432
    Free Inodes      : 10998043
    ------------------------------------------------------------------------------
    Brick            : Brick hostname:/home/brick18
    TCP Port         : 49160
    RDMA Port        : 0
    Online           : Y
    Pid              : 22552
    File System      : ext4
    Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
    Mount Options    : rw,seclabel,relatime,data=ordered
    Inode Size       : 256
    Disk Space Free  : 127.3GB
    Total Disk Space : 165.4GB
    Inode Count      : 11026432
    Free Inodes      : 10998043

    Change-Id: I7d584eb8782129c12876cce2ba8ffba6c0a620bd
    BUG: 1206546
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/10328
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/tiering: cksum mismatch for tiered volume (Mohammed Rafi KC, 2015-05-09, 1 file, -1/+1)

    Once we updated the volinfo from the originator node, the hot type
    was overwritten with the volume type. Then the same dictionary was
    sent to the peer node to perform the commit of attach-tier; that
    caused the hot type to be replaced with the volume type, eventually
    ending up in a cksum mismatch.

    Change-Id: I402dceb4d672d0b3a7b91a92f52c1057050dbedc
    BUG: 1215660
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>

    Conflicts:
        xlators/mgmt/glusterd/src/glusterd-brick-ops.c

    Reviewed-on: http://review.gluster.org/10406
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli/tiering: Enhance cli output for tiering (Mohammed Rafi KC, 2015-05-08, 5 files, -17/+381)

    Fix for handling cli output for attach-tier and detach-tier.

    Change-Id: I4d17f4b09612754fe1b8cec6c2e14927029b9678
    BUG: 1211562
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: http://review.gluster.org/10284
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: add counter support for tiered volumes (Dan Lambright, 2015-05-08, 1 file, -78/+21)

    This fix adds support to view the number of promoted or demoted
    files from the cli. The mechanism is isomorphic to checking the
    status of volumes being rebalanced:

    gluster volume rebalance <vol> tier status

    Change-Id: I1b11ca27355ceec36c488967c23531202030e205
    BUG: 1213063
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-on: http://review.gluster.org/10292
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>