path: root/cli
* cli,glusterd: Changes to cli-glusterd communication (Kaushal M, 2013-10-17, 2 files, -14/+44)

    Glusterd changes:
    With this patch, glusterd creates a socket file at DATADIR/run/glusterd.socket and listens on it for CLI requests. It serves two RPC programs on the socket file:
    - the glusterd CLI RPC program, for all CLI commands
    - a reduced glusterd handshake program, just for the 'system:: getspec' command
    The location of the socket file can be changed with the glusterd option 'glusterd-sockfile'.

    To retain compatibility with the '--remote-host' CLI option, glusterd also listens for CLI requests on port 24007. For the sake of security, however, it serves a reduced CLI RPC program on that port. The reduced RPC program only contains the read-only procs used by the 'volume (info|list|status)', 'peer status' and 'system:: getwd' CLI commands.

    CLI changes:
    The gluster CLI now uses the glusterd socket file for communicating with glusterd by default. A new option '--gluster-sock' has been added to allow specifying the socket file used to connect. Using the '--remote-host' option makes the CLI connect to the given host and port instead.

    Test changes:
    cluster.rc has been modified to make use of socket files and to use a different log file for each glusterd. Some of the tests using cluster.rc have been fixed.

    Change-Id: Iaf24bc22f42f8014a5fa300ce37c7fc9b1b92b53
    BUG: 980754
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/5280
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
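    A usage sketch of the three connection modes described above (the socket path assumes a DATADIR of /var/run, and the host name is invented; both are illustrative):

        # Default: talk to the local glusterd over its UNIX socket
        gluster volume info

        # Point the CLI at an explicit socket file
        gluster --gluster-sock=/var/run/glusterd.socket volume info

        # Connect over TCP; port 24007 only serves the reduced
        # read-only program (volume info/list/status, peer status)
        gluster --remote-host=server1 volume status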
* cluster/afr: [Feature] Command implementation to get heal-count (Venkatesh Somyajulu, 2013-10-14, 3 files, -15/+203)

    Currently, to know the number of files to be healed, the user either has to go to the backend and check the number of entries present in the indices/xattrop directory of each brick, which is time-consuming when a volume consists of a large number of bricks, or run the "gluster volume heal vol-name info" command, which also takes a long time when the indices/xattrop directory holds a very large number of entries. So, as a feature, a new command is implemented.

    Command 1: gluster volume heal vn statistics heal-count
    This command gets the number of entries present in every brick of a volume. The output displays only the entry counts.

    Command 2: gluster volume heal vn statistics heal-count replica 192.168.122.1:/home/user/brickname
    Use this form when only one replica is of interest. Providing any one brick of a replica returns the number of entries to be healed for that replica only.

    Example: Replicate volume with replica count 2.

    Backend status:
    --------------
    [root@dhcp-0-17 xattrop]# ls -lia | wc -l
    1918

    NOTE: Out of 1918, 2 entries are <xattrop-gfid> dummy entries, so the actual number of entries to be healed is 1916.

    [root@dhcp-0-17 xattrop]# pwd
    /home/user/2ty/.glusterfs/indices/xattrop

    Command output:
    --------------
    Gathering count of entries to be healed on volume volume3 has been successful

    Brick 192.168.122.1:/home/user/22iu
    Status: Brick is Not connected
    Entries count is not available

    Brick 192.168.122.1:/home/user/2ty
    Number of entries: 1916

    Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
    BUG: 1015990
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/6044
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cluster/afr: Implementation of command "gluster volume heal vn statistics" (Venkatesh Somyajulu, 2013-10-14, 3 files, -3/+116)

    The "gluster volume heal volumename statistics" command gives a summary of the afr crawls done, based on the entries present in the xattrop directory. Whenever afr crawls are attempted, the beginning time of the crawl, the end time, the number of files healed, the heal-failed count and the number of files in split-brain are shown, along with the type of the crawl. If a crawl is already in progress, the counts since the beginning of the crawl are shown, and a "CRAWL IN PROGRESS" message is printed instead of the end time.

    Output format:
    command: "gluster volume heal volume-name statistics"
    Output:
    Gathering afr crawl statistics on volume volume-name has been successful
    ------------------------------------------------
    Crawl statistics for brick no 0
    Hostname of brick 192.168.122.248

    Starting time of crawl: Wed Jul 10 15:52:38 2013
    Ending time of crawl: Wed Jul 10 15:52:38 2013
    Type of crawl: INDEX
    No. of entries healed: 0
    No. of entries in split-brain: 0
    No. of heal failed entries: 0

    Starting time of crawl: Wed Jul 10 15:52:38 2013
    Ending time of crawl: Wed Jul 10 15:52:38 2013
    Type of crawl: INDEX
    No. of entries healed: 0
    No. of entries in split-brain: 0
    No. of heal failed entries: 0

    ------------------------------------------------
    Crawl statistics for brick no 1
    Hostname of brick 192.168.122.1

    Starting time of crawl: Wed Jul 10 15:52:42 2013
    Ending time of crawl: Wed Jul 10 15:52:42 2013
    Type of crawl: INDEX
    No. of entries healed: 0
    No. of entries in split-brain: 0
    No. of heal failed entries: 0

    Starting time of crawl: Wed Jul 10 15:52:42 2013
    Ending time of crawl: Wed Jul 10 15:52:42 2013
    Type of crawl: INDEX
    No. of entries healed: 0
    No. of entries in split-brain: 0
    No. of heal failed entries: 0

    --------------------------------------------------

    Change-Id: I10bf9d10b005741db9973fb1352e0dd59ed99aa9
    BUG: 949400
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/4790
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cli,glusterd: Implement 'volume status tasks' (Krutika Dhananjay, 2013-10-08, 5 files, -53/+156)

    oVirt's Gluster Integration needs an inexpensive command that can be executed every 10 seconds to monitor async tasks and their parameters, for all volumes.

    The solution involves adding a 'tasks' sub-command to 'volume status' to fetch only the async task IDs, type and other relevant parameters. Only the originator glusterd participates in this command, as all the information needed is available on all the nodes. This is to make the command suitable for being executed every 10 seconds.

    Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
    BUG: 1012346
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/6006
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
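    A polling sketch of the new sub-command (the exact invocation shown is illustrative):

        # Fetch only the async task IDs, types and parameters for all
        # volumes every 10 seconds, as oVirt's monitor would:
        watch -n 10 'gluster volume status all tasks'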
* cli: add node uuid in rebalance and remove brick status xml output (Bala.FA, 2013-10-03, 2 files, -11/+21)

    This patch adds the node uuid in rebalance/remove-brick status xml output.

    Output XML will look like:
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <opRet>0</opRet>
      <opErrno>0</opErrno>
      <opErrstr/>
      <volRebalance>
        <op>3</op>
        <nodeCount>1</nodeCount>
        <node>
          <nodeName>localhost</nodeName>
    ==>>  <id>883626f8-4d29-4d02-8c5d-c9f48c5b2445</id>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <status>3</status>
          <statusStr>completed</statusStr>
        </node>
        <aggregate>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <status>3</status>
          <statusStr>completed</statusStr>
        </aggregate>
      </volRebalance>
    </cliOutput>

    Change-Id: I5a1d4f9043b33b9e88150647a243ddb16154e843
    BUG: 1012296
    Signed-off-by: Bala.FA <barumuga@redhat.com>
    Reviewed-on: http://review.gluster.org/6005
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli: runtime in xml output of rebalance/remove-brick status (Aravinda VK, 2013-10-03, 1 file, -0/+21)

    "runtime in secs" is available in the CLI output of rebalance status and remove-brick status, but was not available in the xml output when --xml is passed. The runtime in the <aggregate> section is the maximum of the runtimes of all nodes.

    Example output:
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <opRet>0</opRet>
      <opErrno>0</opErrno>
      <opErrstr/>
      <volRebalance>
        <op>3</op>
        <nodeCount>1</nodeCount>
        <node>
          <nodeName>localhost</nodeName>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <skipped>0</skipped>
          <runtime>1.00</runtime>
          <status>3</status>
          <statusStr>completed</statusStr>
        </node>
        <aggregate>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <skipped>0</skipped>
          <runtime>1.00</runtime>
          <status>3</status>
          <statusStr>completed</statusStr>
        </aggregate>
      </volRebalance>
    </cliOutput>

    BUG: 1012773
    Change-Id: I8deaba08922a53cd2d3b411e097a7b3cf591b127
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/5997
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli: skipped tag in xml output of rebalance/remove-brick status (Aravinda VK, 2013-10-03, 1 file, -3/+39)

    The skipped files count is available in the CLI output of rebalance status and remove-brick status, but was not available in the xml output.

    Example output:
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <opRet>0</opRet>
      <opErrno>0</opErrno>
      <opErrstr/>
      <volRebalance>
        <op>3</op>
        <nodeCount>1</nodeCount>
        <node>
          <nodeName>localhost</nodeName>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <skipped>0</skipped>
          <status>0</status>
          <statusStr>completed</statusStr>
        </node>
        <aggregate>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <skipped>0</skipped>
          <status>0</status>
          <statusStr>completed</statusStr>
        </aggregate>
      </volRebalance>
    </cliOutput>

    BUG: 1012772
    Change-Id: I05191293403e66e0d681f0cd0422aa3c78a2d91d
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/6000
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* logging: Remove multiple definitions of DEFAULT_LOG_FILE_DIRECTORY (Vijay Bellur, 2013-09-24, 1 file, -2/+1)

    Change-Id: I8d670a228d3c1282aa7d70b151f166d04abc40e5
    BUG: 764890
    Signed-off-by: Vijay Bellur <vbellur@redhat.com>
    Reviewed-on: http://review.gluster.org/5909
    Reviewed-by: Anand Avati <avati@redhat.com>
    Tested-by: Anand Avati <avati@redhat.com>
* glusterd/cli: Status detail cli parse check and vol geo status crash fix (Avra Sengupta, 2013-09-21, 1 file, -0/+7)

    Change-Id: I1841864273fc4242de15fbfcf76fd5de40269f28
    BUG: 1006249
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/5889
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cli/glusterd: improve rebalance fix-layout status reporting (Ravishankar N, 2013-09-19, 1 file, -3/+9)

    Problem:
    Currently the CLI rebalance status command output does not indicate the 'type' of rebalance, i.e. whether a full rebalance or only a fix-layout was carried out.

    Fix:
    After the rebalance status of all peers is received by the originator glusterd, alter it to reflect the type of rebalance before passing it on to the CLI process.

    Change-Id: I1940ffda0d36e25e5b33c84a0ea210394cc9e1d3
    BUG: 1004744
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/5826
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
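    For illustration, a fix-layout run followed by a status query, whose output now reflects the rebalance type (the volume name is invented):

        # Fix only the directory layouts, without migrating data:
        gluster volume rebalance testvol fix-layout start
        # Status now distinguishes fix-layout from a full rebalance:
        gluster volume rebalance testvol status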
* cli: add aggregate status for rebalance and remove-brick status xml output (Timothy Asir, 2013-09-18, 1 file, -1/+15)

    Add aggregate status information in the <aggregate> section of the gluster volume 'rebalance status' and 'remove-brick status' CLI xml output. The aggregate status is determined based on the most critical level, and will be 'Complete' only when all the individual statuses are 'Complete'.

    Change-Id: Ie805b9dd52fd82fd277c3da9ee91cc8b6dea8212
    BUG: 1006813
    Signed-off-by: Timothy Asir <tjeyasin@redhat.com>
    Reviewed-on: http://review.gluster.org/4950
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
* cli,glusterd: Task parameters in xml output (Kaushal M, 2013-09-13, 1 file, -1/+95)

    This patch introduces task parameters for the asynchronous tasks shown in volume status. The parameters are only given for xml output. The parameters currently shown are:

    - source and destination bricks for replace-brick tasks

    ......
    <tasks>
      <task>
        <type>Replace brick</type>
        <id>3d1a1005-9d2e-4ae0-bd62-577bc1d333a3</id>
        <status>1</status>
        <params>
          <srcBrick>archm:/export/test4</srcBrick>
          <dstBrick>archm:/export/test-replace1</dstBrick>
        </params>
      </task>
    </tasks>
    ......

    - list of bricks being removed for remove-brick tasks

    ......
    <tasks>
      <task>
        <type>Remove brick</type>
        <id>901c20ca-0da2-41de-8669-5f0caca6b846</id>
        <status>1</status>
        <params>
          <brick>archm:/export/test2</brick>
          <brick>archm:/export/test3</brick>
        </params>
      </task>
    </tasks>
    ......

    The changes for non-xml output will be done in a subsequent patch.

    Change-Id: I322afe2f83ed8adeddb99f7962c25911204dc204
    BUG: 916577
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/5771
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Vijay Bellur <vbellur@redhat.com>
* cli: Add statusStr xml tag to task list and rebalance/remove brick status (Aravinda VK, 2013-09-12, 3 files, -19/+44)

    A new xml tag, statusStr, is added to the output of the following gluster cli commands:
    gluster volume status all --xml (for task status)
    gluster volume rebalance <VOLNAME> status --xml
    gluster volume remove-brick <VOLNAME> <BRICK1..> status --xml

    Example (volume status all):
    <task>
      <type>Rebalance</type>
      <id>82d8d122-8738-4144-8507-d93fc98b61df</id>
      <status>3</status>
      <statusStr>completed</statusStr>
    </task>

    Example (volume rebalance <VOL> status):
    <node>
      <nodeName>localhost</nodeName>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </node>

    Also, the task status in 'gluster volume status all' is now shown as a string instead of a number:

    Status of volume: gv1
    Gluster process                             Port    Online  Pid
    ------------------------------------------------------------------------------
    Brick sumne.sumne:/gfs/b1                   49154   Y       15489
    Brick sumne.sumne:/gfs/b2                   49155   Y       15493
    NFS Server on localhost                     N/A     N       15913

    Task          ID                                      Status
    ----          --                                      ------
    Rebalance     82d8d122-8738-4144-8507-d93fc98b61df    completed

    BUG: 1003521
    Change-Id: Ib283016af4c18132fb13fb33d44075782d77823c
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/5739
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli: Fix 'status all' xml output when volumes are not started (Kaushal M, 2013-09-11, 1 file, -10/+12)

    The CLI now outputs a single XML document for 'status all', containing only those volumes which are started.

    BUG: 1004218
    Change-Id: Id4130fe59b3b74475d8bd1cc8134ac59a28f1b7e
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/5773
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd/cli: Geo-Replication "status detail" cmd (Venky Shankar, 2013-09-04, 4 files, -141/+365)

    Provides detailed status info in the following format:

    MASTER <master-vol> SLAVE <slave-vol>
    NODE   HEALTH   UPTIME   FILES SYNCD   FILES PENDING   BYTES PENDING   DELETES PENDING
    -----------------------------------------------------------------------------------

    This patch introduces the "status detail" command to show crawl related information in the CLI. These values are "pulled" from gsyncd when "status detail" is executed.

    Change-Id: I1fdaf7180eacce054a864d34971dc160bd7301e1
    BUG: 990420
    Signed-off-by: Venky Shankar <vshankar@redhat.com>
    Reviewed-on: http://review.gluster.org/5590
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Tested-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
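    An invocation sketch, using placeholders rather than real names (the <slave-host>::<slave-vol> form of the slave URL is an assumption here):

        # Pull crawl-related status from gsyncd on each node:
        gluster volume geo-rep <master-vol> <slave-host>::<slave-vol> status detail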
* glusterd: Saving geo-rep session details in a more specific path (Venky Shankar, 2013-09-04, 1 file, -60/+8)

    Session details are now saved in the /var/lib/glusterd/geo-replication/<mastervol>_<slaveip>_<slavevol> repo, to distinguish between two master-slave sessions where the slave name is the same across two different clusters.

    Change-Id: I57c93f55cc9bd4fe2bffe579028aaf5e4335b223
    BUG: 991501
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Signed-off-by: Venky Shankar <vshankar@redhat.com>
    Reviewed-on: http://review.gluster.org/5488
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cli: Add server uuid into volume brick info xml (Timothy Asir, 2013-08-18, 1 file, -3/+23)

    Add the server uuid as an attribute to the existing brick details in the volume info CLI xml output. Currently, when a node has more than one IP, oVirt-engine fails to map the corresponding server using the IP alone. If we get the host uuid along with the brick details in the volume info command, it becomes easy for oVirt-engine to find the server, and thereby we can avoid confusion in identifying it.

    Change-Id: I3c9c9acea80e10e0b2977477759d9af045e48959
    BUG: 955588
    Signed-off-by: Timothy Asir <tjeyasin@redhat.com>
    Reviewed-on: http://review.gluster.org/4875
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* log: set ident to openlog (Bala.FA, 2013-08-13, 1 file, -1/+2)

    On the syslog side, a log message is identified by its properties like program name, pid, etc. Brick/mount processes need to be identified uniquely, as they are different processes of glusterfsd/glusterfs.

    On the rsyslog side, separating logs by program name/app-name with pid works, but in the long run it is hard to identify which process belongs to which brick/mount. This patch fixes that by setting an identity string via openlog(), which sets the program name/app-name similar to the old style log file names prefixed by gluster, glusterd, glusterfs or glusterfsd.

    Change-Id: Ia05068943fa67ae1663aaded1444cf84ea648db8
    BUG: 928648
    Signed-off-by: Bala.FA <barumuga@redhat.com>
    Reviewed-on: http://review.gluster.org/5541
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli,glusterd: Fix when tasks are shown in 'volume status' (Kaushal M, 2013-08-03, 1 file, -1/+5)

    Asynchronous tasks are shown in 'volume status' only for a normal volume status request, for either all volumes or a single volume.

    Change-Id: I9d47101511776a179d213598782ca0bbdf32b8c2
    BUG: 888752
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/5308
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cluster/dht: Treat migration failures due to space constraints as skipped (shishir gowda, 2013-07-30, 1 file, -12/+28)

    Currently rebalance/remove-brick ops count files which failed migration due to space issues (not enough space for the file, or migration leading to cluster imbalance) in the failure count. These will now be counted as skipped, and rebalance/remove-brick status will display the additional counter.

    Change-Id: I674904d380b5f8300e9ca9e6af557c3d30d6cff4
    BUG: 989846
    Signed-off-by: shishir gowda <sgowda@redhat.com>
    Reviewed-on: http://review.gluster.org/5399
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Fixing create force issues where it returned true every time (Avra Sengupta, 2013-07-29, 1 file, -5/+1)

    Now geo-rep create force will return true if a node is down, and log an appropriate message. It will also return true, with an appropriate log message, if slave verification fails. However, it will not return true if the config file is deleted or corrupted, since the state_file's path can then no longer be retrieved. It will also fail if the slave url is invalid. If the push-pem option is given and /var/lib/glusterd/geo-replication/common_secret.pem.pub is not present, the create force command will also fail.

    Change-Id: Ie7532a0884ddf9c3008bd30832d171d5b53b540e
    BUG: 988314
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/5405
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Display error messages if virt file has been deleted or is invalid (Avra Sengupta, 2013-07-27, 3 files, -6/+26)

    "gluster volume set <VOLNAME> group virt" will display an error message if the virt file is deleted or is invalid.

    Change-Id: Icb202b6a445597fcd9a3dcef8001891f2601a115
    BUG: 916127
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/4586
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cli, glusterd: Cleanup logging of bd op commands (Vijay Bellur, 2013-07-27, 1 file, -4/+4)

    This patch prevents messages of the form "bd op: %s : SUCCESS" from being logged in .cmd_log_history.

    Change-Id: Iebeb7e26d409bf99b9c8df0a5c1c5a5d30d78a61
    BUG: 823081
    Signed-off-by: Vijay Bellur <vbellur@redhat.com>
    Reviewed-on: http://review.gluster.org/4871
    Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-by: M. Mohan Kumar <mohan@in.ibm.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd/cli changes for distributed geo-rep (Avra Sengupta, 2013-07-26, 4 files, -126/+866)

    Commands:
    gluster system:: execute gsec_create
    gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
    gluster volume geo-rep <master> <slave-url> start [force]
    gluster volume geo-rep <master> <slave-url> stop [force]
    gluster volume geo-rep <master> <slave-url> delete
    gluster volume geo-rep <master> <slave-url> config
    gluster volume geo-rep <master> <slave-url> status

    The geo-replication is distributed: the session will be created, and gsyncd will be spawned, on all relevant nodes, instead of only one node.

    geo-rep: Collecting status detail related data
    Added a persistent store for saving information about TotalFilesSynced, TotalSyncTime, TotalBytesSynced.

    Changes in the status information in the socket:
    Existing (Ex):
    FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;
    New (Ex):
    FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;

    Persistent details are stored in /var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status

    Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
    BUG: 847839
    Original Author: Avra Sengupta <asengupt@redhat.com>
    Original Author: Venky Shankar <vshankar@redhat.com>
    Original Author: Aravinda VK <avishwan@redhat.com>
    Original Author: Amar Tumballi <amarts@redhat.com>
    Original Author: Csaba Henk <csaba@redhat.com>
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/5132
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Vijay Bellur <vbellur@redhat.com>
* cli: remove-brick process output leads to ambiguity (susant, 2013-07-24, 1 file, -3/+6)

    The output of remove-brick status as "Not started" leads to ambiguity. We should not show the status of server nodes which do not participate in the remove-brick process.

    Change-Id: I85fea40deb15f3e2dd5487d881f48c9aff7221de
    BUG: 986896
    Signed-off-by: susant <spalai@redhat.com>
    Reviewed-on: http://review.gluster.org/5383
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: gluster volume heal commands are more descriptive (Venkatesh Somyajulu, 2013-07-24, 1 file, -5/+30)

    1. "gluster volume heal volume-name"
       Output: Launching heal operation to perform index self heal on volume volume-name has been successful
    2. "gluster volume heal volume-name full"
       Output: Launching heal operation to perform full self heal on volume volume-name has been successful
    3. "gluster volume heal volume-name info"
       Output: Gathering list of entries to be healed on volume volume-name has been successful
    4. "gluster volume heal volume-name info healed"
       Output: Gathering list of healed entries on volume volume-name has been successful
    5. "gluster volume heal volume-name info split-brain"
       Output: Gathering list of split brain entries on volume volume-name has been successful
    6. "gluster volume heal volume-name info heal-failed"
       Output: Gathering list of heal failed entries on volume volume-name has been successful

    Change-Id: I74c90e8129d23d513ddb7879358a9d21c94a5c0d
    BUG: 978936
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/5286
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Increased timeout for gluster volume heal commands (Venkatesh Somyajulu, 2013-07-23, 2 files, -3/+6)

    Problem:
    If the number of files is very large, the "gluster volume heal volumename info" commands take a long time, so a timeout of 2 minutes seems to be insufficient.

    Fix:
    Increased the timeout to 10 minutes.

    Change-Id: I5f847163e01c4afbb587b726833ad80183f1a928
    BUG: 986945
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/5372
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli: check for null in is_server_debug_xlator() (Ravishankar N, 2013-07-12, 1 file, -0/+2)

    Command: gluster volume set <volname> diagnostics.client-log-level trace

    Expected output:
    "volume set: failed: option log-level trace: 'trace' is not valid (possible options are DEBUG, WARNING, ERROR, INFO, CRITICAL, NONE, TRACE.)"

    Current output:
    the gluster cli receives a segmentation fault

    Fix:
    check for NULL before calling strstr

    Change-Id: If4c7a85a635849a388cf122543e12349c109643c
    BUG: 982174
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/5298
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli: Fix remove-brick cli output for wrong volume name (Venkatesh Somyajulu, 2013-07-04, 1 file, -6/+3)

    Problem:
    The gluster volume remove-brick command was not printing an error when the specified volume name was wrong.

    Fix:
    Print an error message indicating that the provided volume name is invalid.

    Although the patch for bug 961669 (http://review.gluster.org/#/c/4975/) does print CLI output now, xml is still unable to use the response values.

    Change-Id: I2ee1df86c1e756fb8e93b4d6bbdd102b4f368f87
    BUG: 961307
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/4972
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Fix letter case in volume heal output (Venkatesh Somyajulu, 2013-07-03, 1 file, -2/+2)

    Change-Id: I25d13444c2cbff9b26642e91677ad1e09e77aa1e
    BUG: 978936
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/5259
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Log peer op status at the appropriate time (Krutika Dhananjay, 2013-06-18, 2 files, -142/+45)

    Change-Id: Ia8e1af082078f2f791708ba4faa4992bf291dd6e
    BUG: 961339
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/5023
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Add a cmd for getting uuid of local node (Krishnan Parthasarathi, 2013-06-10, 2 files, -0/+157)

    Usage: gluster system:: uuid get

    This is needed since we generate the uuid of a node in a lazy manner; i.e., we generate a uuid for the node only on the first volume or peer operation, when the node needs an external identity. With this command, we can force[1] the uuid generation, without a volume or peer operation being performed.

    [1]: Querying for the uuid (or uuid get) forces the uuid to come into existence.

    Change-Id: I62c8b6754117756aa4d773dd48af4ddeb1a1d878
    BUG: 971661
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/5175
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
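    A script-level sketch; the "UUID: <uuid>" output format and the uuid value are assumptions made for illustration:

        # Force uuid generation on a fresh node and capture it:
        gluster system:: uuid get
        # assumed output: UUID: 3a118a38-8366-4e26-8cea-53d46bdcf698
        uuid=$(gluster system:: uuid get | awk '{print $2}')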
* cli: Remove unused port info from peer status (Venkatesh Somyajulu, 2013-06-05, 2 files, -28/+2)

    Problem:
    "gluster peer status" on some nodes gives port info and fails to give it on others, even though it is a hard-coded value.

    Fix:
    Remove the port info from the command output.

    Change-Id: I919f0349f252e658bfc13e60bb8e171da32eaf25
    BUG: 964026
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/5027
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: set min-op-version and max-op-version for getspec (Jeff Darcy, 2013-05-30, 1 file, -0/+34)

    Change-Id: I2185df5d6b560d9367ae404c91812048e1655180
    BUG: 969193
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-on: http://review.gluster.org/5119
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd: remove-brick: prevent removal from a replicate volume (Ravishankar N, 2013-05-13, 1 file, -0/+2)

    Prevent the removal of brick(s) from a plain replicate volume and display the error message at the CLI.

    Change-Id: I8e182404564147329d8cd364b7c7931d19f14570
    BUG: 961669
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/4975
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* gluster/CLI: crash upon executing "peer status" command (Santosh Kumar Pradhan, 2013-05-10, 1 file, -0/+9)

    Problem:
    While doing "gluster peer status", cli_cmd_peer_status_cbk() creates the frame and passes it as an arg to gf_cli_list_friends(), which sets frame->local to the GF_CLI_LIST_PEERS flag (value: 0x1). It expects gf_cli_list_friends_cbk() [invoked through cli_cmd_submit()] to reset frame->local to NULL. But if cli_cmd_submit() fails somewhere before gf_cli_list_friends_cbk() gets invoked, then the flag value remains in frame->local and causes a SEGV while destroying the stack, i.e. [CLI_STACK_DESTROY => cli_local_wipe()].

    Fix:
    In gf_cli_list_friends(), if cli_cmd_submit() fails, reset the value of frame->local to NULL.

    Change-Id: Ied19f07eaf67e3bd42c75cdc2ff3729b0789e632
    BUG: 961691
    Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
    Reviewed-on: http://review.gluster.org/4976
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Avoid storing empty lines in command history (Ravishankar N, 2013-04-28, 1 file, -1/+2)

    When the console manager is run in interactive mode, it also saves empty lines (i.e. when the Enter key is pressed without running a command) in its command history. Avoid this by processing the line only if readline() returns a non-empty string. This makes it easier to navigate the history using the arrow keys.

    modified: cli/src/cli-rl.c

    Change-Id: I0fcce394474589bb345b7c9ef39d25849dc0c2af
    BUG: 957139
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/4894
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: add a command 'gluster pool list [--xml]' (Niels de Vos, 2013-04-26, 3 files, -66/+194)

    * Unlike 'gluster peer status', which lists only info about peers, this command also includes localhost in the list, so the sorted output from all the nodes should match.
    * Made the output script friendly by keeping it to one entry per line.

    Change-Id: I853656753b35c617debbcceecbb71c8d6dd3c334
    BUG: 764638
    Original-review: http://review.gluster.org/4221
    Original-author: Amar Tumballi <amarts@redhat.com>
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/4862
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
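    Illustrative output, one record per line for easy scripting (uuids, hostnames and column layout invented for the sketch):

        # gluster pool list
        UUID                                    Hostname        State
        3a118a38-8366-4e26-8cea-53d46bdcf698    server2         Connected
        f0d2e134-a6f5-4b6e-9372-0077b9a7e0f2    localhost       Connected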
* glusterd: changes in 'volume create' behaviour (Krutika Dhananjay, 2013-04-09, 2 files, -7/+66)

    This patch incorporates all the changes suggested on the behaviour of the 'volume create' command in http://review.gluster.org/#change,4214 (comment #14, to be precise).

    Change-Id: Iaac524a59738b177415595b18aa8a136090d3d25
    BUG: 948729
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/4740
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cli: Address a double free with volume info (Vijay Bellur, 2013-04-08, 1 file, -2/+2)

    A crash is observed when volume info is performed on a non-existing volume name and the output format is xml.

    Change-Id: I88aa5d9dc954b1352f5cc3b5b38742c832bc1bb8
    BUG: 949298
    Signed-off-by: Vijay Bellur <vbellur@redhat.com>
    Reviewed-on: http://review.gluster.org/4785
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cli: add more meaningful error messages (Bala.FA, 2013-04-02, 1 file, -84/+151)

    Change-Id: I6e88e6763fa537f4705427b4673d86e6443c2c98
    BUG: 928648
    Signed-off-by: Bala.FA <barumuga@redhat.com>
    Reviewed-on: http://review.gluster.org/4747
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cli: Made volume top help string clear (M S Vishwanath Bhat, 2013-04-01, 1 file, -4/+3)

    The nfs option is not applicable for read-perf and write-perf.
    The nfs option and the brick option cannot be used in the same command.

    Change-Id: I920ba0de011df0cc5e0adca6597aaea9372fe592
    BUG: 924335
    Signed-off-by: M S Vishwanath Bhat <vbhat@redhat.com>
    Reviewed-on: http://review.gluster.org/4706
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* config: better (i.e. more portable) test for libxml2 (Kaleb S. KEITHLEY, 2013-03-25, 1 file, -2/+2)

    Over the weekend I tried to build on MacOS X¹ and ran into the following issues:

    1) The recent change to autogen.sh to test for pkg-config falls down.
    2) After removing the pkg-config test in autogen.sh, without pkg-config the PKG_CHECK_MODULES macro invocation in configure[.ac] falls down. N.B. Solaris users run into this too, even though there's a (broken) pkg-config package that can be installed.
    3) There are other problems in the code related to fuse that are beyond the scope of this change.

    It seems that pkg-config is only a requirement for the definition of the PKG_CHECK_MODULES macro used to detect libxml2. Since this seems to be inherently unportable, at least to MacOS X and Solaris, I'd like to:

    A) Change the use of the PKG_CHECK_MODULES macro to the more portable AM_PATH_XML2 macro provided by the libxml2 package in /usr/.../share/aclocal/libxml.m4
    B) Revisit the decision to add the check for pkg-config in autogen.sh in BZ 921817.

    For now this is just an rfc. If people are agreeable I'll reenter this change against BZ 921817.

    ¹Mountain Lion 10.8.3, XCode 4.6.1

    Change-Id: I237b1ed8919088345b8fd943423b2a6ad289981b
    BUG: 921817
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/4720
    Reviewed-by: Justin Clift <jclift@redhat.com>
    Tested-by: Justin Clift <jclift@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cli: Better message on rebalance/remove-brick stop (Kaushal M, 2013-03-20, 1 file, -6/+43)

    A better message is displayed when rebalance or remove-brick is stopped, as migration may still be in progress.

    Change-Id: I5f301cc66f349a1a85245f7d7508bf746756bdea
    BUG: 923529
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/4695
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* object-storage: Restoring multi volume support in UFO (Mohammed Junaid, 2013-03-07, 1 file, -1/+2)

    * Currently, users of UFO are restricted to using just one volume at any given point of time. This patch removes that limitation.
    * The usage of gluster-swift-gen-builders has also changed. With this commit, users should mention the list of volumes they want to expose through UFO; only the volumes mentioned during ring file generation can be accessed.

    Usage: gluster-swift-gen-builders <vol-name1> [<vol-name2>]...

    This is an intermediate fix until we remove the account, container and object server processes. Once we have that framework running, it will completely eliminate the ring files.

    Change-Id: I9ad3808519fec9c7c60ad846c4f8b653117a8337
    BUG: 909053
    Signed-off-by: Mohammed Junaid <junaid@redhat.com>
    Reviewed-on: http://review.gluster.org/4485
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Peter Portante <pportant@redhat.com>
* cli: Display option 'device vg' only when bd backend enabled (M. Mohan Kumar, 2013-02-20, 1 file, -1/+5)

    Change-Id: If61c237948f51d48305f4897b3f226eead10bae8
    BUG: 912997
    Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
    Reviewed-on: http://review.gluster.org/4555
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd: Added validation function for performance cache max and min size (Avra Sengupta, 2013-02-19, 1 file, -6/+6)

    Change-Id: I0b8dbc4b65412b8aff24873f030c03e3dcfcb988
    BUG: 782095
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/4541
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* Better mechanism to handle memory accounting (Vijay Bellur, 2013-02-17, 1 file, -0/+2)

    Memory accounting will now be enabled if:
    1) any glusterfs process is spawned with the argument --mem-accounting, or
    2) DEBUG is defined.

    Change-Id: I3345e114127a57ce61916be0e2c4e0049a4c3432
    BUG: 834465
    Signed-off-by: Vijay Bellur <vbellur@redhat.com>
    Reviewed-on: http://review.gluster.org/4523
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
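    A spawn sketch for case 1 (server, volume and mount point names are invented):

        # Mount a volume with memory accounting enabled in the client process:
        glusterfs --mem-accounting --volfile-server=server1 --volfile-id=testvol /mnt/testvol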
* glusterd: "volume heal info" doesn't report output properlyVenkatesh Somyajulu2013-02-041-18/+31
| | | | | | | | | | | | Problem: "volume heal info" doesn't reports files to be healed when gluster* processes on one of the storage node is not running Change-Id: Iff7d41407014624e4da9b70d710039ac14b48291 BUG: 880898 Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com> Reviewed-on: http://review.gluster.org/4371 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterfs: Moved option files and statedumps from /tmp (Avra Sengupta, 2013-01-29, 2 files, -0/+6)

    Change-Id: Ibdede396c4d6859225937316b7a59a661bcaf9f5
    BUG: 764890
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/4422
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>