path: root/xlators/mgmt
Commit message (author, date, files changed, lines changed -/+)
* libglusterfs: Add monotonic clocking counter for timer thread (Harshavardhana, 2013-10-15, 1 file, -3/+2)

gettimeofday() returns the current wall clock time and timezone. Using
these functions in order to measure the passage of time (how long an
operation took) therefore seems like a no-brainer. This approach suffers
from some limitations:

a. Low resolution: "high-performance" timing, by definition, requires
   clock resolutions into the microseconds or better.

b. The clock can jump forwards and backwards in time: computer clocks all
   tick at slightly different rates, which causes the time to drift. Most
   systems have NTP enabled, which periodically adjusts the system clock
   to keep it in sync with "actual" time. The adjustment can cause the
   clock to suddenly jump forward (artificially inflating your timing
   numbers) or jump backwards (causing your timing calculations to go
   negative or hugely positive). In such cases the timer thread could go
   into an infinite loop.

From 'man gettimeofday':
----------
...
The time returned by gettimeofday() is affected by discontinuous jumps in
the system time (e.g., if the system administrator manually changes the
system time). If you need a monotonically increasing clock, see
clock_gettime(2).
...
----------

Rationale:
For calculating interval timing for the timer thread, all that's needed is
a clock that acts as a simple counter incrementing at a stable rate. This
is necessary to avoid the jumps caused by using "wall time"; such a counter
is monotonic and can never "tick" backwards, ever.

Change-Id: I701d31e71a85a73d21a6c5cd15583e7a5a645eeb
BUG: 1017993
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/6070
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>

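For reference, a minimal sketch (illustrative only, not taken from this
patch) of interval timing with a monotonic clock, which is the approach the
timer thread switches to:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main (void)
{
        struct timespec begin, end;
        double elapsed;

        clock_gettime (CLOCK_MONOTONIC, &begin);
        sleep (1);      /* stand-in for the operation being timed */
        clock_gettime (CLOCK_MONOTONIC, &end);

        /* A monotonic counter never ticks backwards, so the difference
         * cannot go negative even if the wall clock is adjusted. */
        elapsed = (end.tv_sec - begin.tv_sec) +
                  (end.tv_nsec - begin.tv_nsec) / 1e9;
        printf ("operation took %.6f seconds\n", elapsed);
        return 0;
}
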
* cluster/afr: [Feature] Command implementation to get heal-count (Venkatesh Somyajulu, 2013-10-14, 4 files, -25/+178)

Currently, to know the number of files to be healed, the user either has to
go to the backend and check the number of entries present in the
indices/xattrop directory, or run the "gluster volume heal vol-name info"
command. But if a volume consists of a large number of bricks, going to
each backend and counting the number of entries is time-consuming, and the
info command also consumes a lot of time if the number of entries in the
indices/xattrop directory is very large.

So, as a feature, a new command is implemented.

Command 1: gluster volume heal vn statistics heal-count
This command will get the number of entries present in every brick of a
volume. The output displays only the entries count.

Command 2: gluster volume heal vn statistics heal-count replica
192.168.122.1:/home/user/brickname
Use this if we are concerned with just one replica: providing any one brick
of a replica will get the number of entries to be healed for that replica
only.

Example: Replicate volume with replica count 2.

Backend status:
--------------
[root@dhcp-0-17 xattrop]# ls -lia | wc -l
1918

NOTE: Out of 1918, 2 entries are <xattrop-gfid> dummy entries, so the
actual number of entries to be healed is 1916.

[root@dhcp-0-17 xattrop]# pwd
/home/user/2ty/.glusterfs/indices/xattrop

Command output:
--------------
Gathering count of entries to be healed on volume volume3 has been successful

Brick 192.168.122.1:/home/user/22iu
Status: Brick is Not connected
Entries count is not available

Brick 192.168.122.1:/home/user/2ty
Number of entries: 1916

Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
BUG: 1015990
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/6044
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* cluster/afr: Implementation of command "gluster volume heal vn statistics" (Venkatesh Somyajulu, 2013-10-14, 2 files, -1/+85)

The "gluster volume heal volumename statistics" command gives the summary
of the afr crawls done, based on the entries present in the xattrop
directory. Whenever afr crawls are attempted, the beginning time of the
crawl, end time of the crawl, number of files healed, heal-failed count and
number of files in split-brain are shown along with the type of the crawl.
If a crawl is already in progress, then it will give the number of files
healed, heal-failed count and number of files in split-brain from the
beginning of the crawl, and instead of the end time of the crawl, a
"CRAWL IN PROGRESS" message will be shown.

Output format:
command: "gluster volume heal volume-name statistics"
Output:
Gathering afr crawl statistics
crawl statistics on volume volume-name has been successful
------------------------------------------------
Crawl statistics for brick no 0
Hostname of brick 192.168.122.248

Starting time of crawl: Wed Jul 10 15:52:38 2013
Ending time of crawl: Wed Jul 10 15:52:38 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0

Starting time of crawl: Wed Jul 10 15:52:38 2013
Ending time of crawl: Wed Jul 10 15:52:38 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0

------------------------------------------------
Crawl statistics for brick no 1
Hostname of brick 192.168.122.1

Starting time of crawl: Wed Jul 10 15:52:42 2013
Ending time of crawl: Wed Jul 10 15:52:42 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0

Starting time of crawl: Wed Jul 10 15:52:42 2013
Ending time of crawl: Wed Jul 10 15:52:42 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0

--------------------------------------------------

Change-Id: I10bf9d10b005741db9973fb1352e0dd59ed99aa9
BUG: 949400
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/4790
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* cli,glusterd: Implement 'volume status tasks' (Krutika Dhananjay, 2013-10-08, 4 files, -27/+91)

oVirt's Gluster Integration needs an inexpensive command that can be
executed every 10 seconds to monitor async tasks and their parameters, for
all volumes.

The solution involves adding a 'tasks' sub-command to 'volume status' to
fetch only the async task IDs, type and other relevant parameters. Only the
originator glusterd participates in this command as all the information
needed is available on all the nodes. This is to make the command suitable
for being executed every 10 seconds.

Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
BUG: 1012346
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/6006
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>

* cli: add node uuid in rebalance and remove brick status xml output (Bala.FA, 2013-10-03, 1 file, -1/+22)

This patch adds node uuid in rebalance/remove-brick status xml output.
Output XML will look like:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volRebalance>
    <op>3</op>
    <nodeCount>1</nodeCount>
    <node>
      <nodeName>localhost</nodeName>
==>>  <id>883626f8-4d29-4d02-8c5d-c9f48c5b2445</id>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </node>
    <aggregate>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </aggregate>
  </volRebalance>
</cliOutput>

Change-Id: I5a1d4f9043b33b9e88150647a243ddb16154e843
BUG: 1012296
Signed-off-by: Bala.FA <barumuga@redhat.com>
Reviewed-on: http://review.gluster.org/6005
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>

* glusterd: Validating invalid log-level under geo-rep config options. (Ajeet Jha, 2013-10-03, 2 files, -5/+18)

Change-Id: I8ff6b48ef41fd6e9ea68c42dfb9878f8a08ed627
BUG: 1010874
Signed-off-by: Ajeet Jha <ajha@redhat.com>
Reviewed-on: http://review.gluster.org/5989
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>

* glusterd: Calculate volume op-versions only on set/reset (Kaushal M, 2013-09-29, 3 files, -6/+69)

The volume op-versions are calculated during a volume set/reset, reading a
volume from disk and importing a volume during probe or volume sync. The
calculation of the volume op-version depends on the cluster's op-version as
some features are enabled automatically depending on the cluster's
op-version. We also don't store the volume op-versions persistently and
don't export the volume op-versions during sync. Due to this, there can
occur cases which will lead to inconsistencies in volumes in different
peers. One such case is below.

Consider a cluster made up of 3 peers P1, P2 and P3, operating at
op-version N. The cluster has two volumes V1 and V2, which have volume
op-versions N (since the volume op-version cannot be greater than the
cluster op-version). We have,
  Cluster op-version = N
  V1 op-version = N
  V2 op-version = N

A set operation on V1 causes the cluster's op-version to be bumped up to
N+1. Assume that there exist some features that are automatically enabled
on op-version N+1. The op-version of V2 remains at N as no operation has
been performed on it. So,
  Cluster op-version = N+1
  V1 op-version = N+1
  V2 op-version = N

Now, we probe a new peer P4. On the new peer we will have the following
op-versions,
  Cluster op-version = N+1
  V1 op-version = N+1
  V2 op-version = N+1

This happens because we don't send volume op-versions during the sync after
probe. P4 will freshly calculate the op-version of V2 (assuming features
have been auto enabled due to the cluster op-version being N+1) as N+1.

Another case is when glusterd on a peer restarts. Assume P3 was restarted;
glusterd will recalculate the volume op-versions during the restore state.
Again, the op-version of V2 will be calculated as N+1 assuming auto enabled
features.

This will lead to inconsistency in the volume representation in memory and
on disk, as glusterd will assume the volume contains auto enabled features,
but the volfiles don't contain them as they were not regenerated.

These kinds of issues can be solved by calculating the volume op-version
only when features are enabled and disabled (i.e. during volume set/reset),
persisting the volume op-versions and exporting/importing them.

Change-Id: I52de0668c92628622e85f4588fb28829a7231132
BUG: 1005043
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5568
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: Fix storing volumes on setting global opts (Kaushal M, 2013-09-29, 1 file, -3/+6)

Glusterd would not store all the volumes when a global option was set. When
setting a global option, like the 'nfs.*' options, glusterd used to modify
the volinfo for all the volumes, but would store only the volinfo for the
named volume. This led to a mismatch between the persisted and the
in-memory representation of those volumes, which led to problems like peers
being rejected because of volume mismatches.

Change-Id: I8bca10585e34b7135cb32af0055dbd462b3fb9b5
BUG: 1012400
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6007
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: Set errstr appropriately on peer op failure (Krutika Dhananjay, 2013-09-29, 1 file, -11/+17)

Change-Id: I27f5f7cd54115d7b236b42f6beaaa05a8b379dd7
BUG: 1010153
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/5978
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>

* gNFS: Incorrect NFS ACL encoding for XFS (Santosh Kumar Pradhan, 2013-09-29, 1 file, -3/+1)

Problem:
Incorrect NFS ACL encoding causes "system.posix_acl_default" setxattr
failure on bricks on an XFS file system. XFS (potentially others?) doesn't
understand when the 0x10 prefix is added to the ACL type field for default
ACLs (which the Linux NFS client adds), which causes setfacl()->setxattr()
to fail silently. The NFS client adds NFS_ACL_DEFAULT (0x1000) for default
ACLs.

FIX:
Mask the prefix (added by the NFS client) OFF, so the setfacl is not
rejected when it hits the FS.

Original patch by: "Richard Wareing"

Change-Id: I17ad27d84f030cdea8396eb667ee031f0d41b396
BUG: 1009210
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/5980
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>

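Illustrative sketch of the masking described above (the helper name and
exact usage are assumptions, not the actual patch):

/* NFS_ACL_DEFAULT (0x1000) is the prefix the Linux NFS client adds to the
 * ACL type for default ACLs; brick filesystems such as XFS only accept the
 * plain POSIX ACL type, so the prefix is masked off before the setxattr. */
#define NFS_ACL_DEFAULT 0x1000

static int
nfs_acl_strip_default_prefix (int acl_type)
{
        return acl_type & ~NFS_ACL_DEFAULT;
}
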
* logging: Remove multiple definitions of DEFAULT_LOG_FILE_DIRECTORY (Vijay Bellur, 2013-09-24, 1 file, -1/+0)

Change-Id: I8d670a228d3c1282aa7d70b151f166d04abc40e5
BUG: 764890
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/5909
Reviewed-by: Anand Avati <avati@redhat.com>
Tested-by: Anand Avati <avati@redhat.com>

* glusterd/cli: Status detail cli parse check and vol geo status crash fix (Avra Sengupta, 2013-09-21, 1 file, -7/+12)

Change-Id: I1841864273fc4242de15fbfcf76fd5de40269f28
BUG: 1006249
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5889
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: Adding transaction checks for cluster unlock. (Avra Sengupta, 2013-09-20, 1 file, -3/+12)

While a gluster command holding the lock is in execution, any other gluster
command which tries to run will fail to acquire the lock. As a result
command#2 will follow the cleanup code flow, which also includes unlocking
the held locks. As both the commands are run from the same node, command#2
will end up releasing the locks held by command#1 even before command#1
reaches completion.

Now we call the unlock routine in that code path only if the cluster has
been locked during the same transaction.

Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Change-Id: I7b7aa4d4c7e565e982b75b8ed1e550fca528c834
BUG: 1008172
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5937
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* gNFS: NFS daemon is limiting IOs to 64KB (Santosh Kumar Pradhan, 2013-09-20, 1 file, -0/+18)

Problem:
The Gluster NFS server is hard-coding the max rsize/wsize to 64KB, which is
far too small for NFS running over a 10GE NIC. The existing options
nfs.read-size and nfs.write-size are not working as expected.

FIX:
Make the options nfs.read-size (for rsize) and nfs.write-size (for wsize)
work to tune the NFS I/O size. The value range is 4KB (min), 64KB (default)
and 1MB (max).

NB: Credit to "Richard Wareing" for catching it.

Change-Id: I2754ecb0975692304308be8bcf496c713355f1c8
BUG: 1009223
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/5964
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>

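A rough sketch of clamping a configured transfer size into the supported
range (names are assumptions, not the actual option-handling code):

#define GF_NFS_MIN_IO_SIZE (4 * 1024)       /* 4KB minimum  */
#define GF_NFS_DEF_IO_SIZE (64 * 1024)      /* 64KB default */
#define GF_NFS_MAX_IO_SIZE (1024 * 1024)    /* 1MB maximum  */

static unsigned int
nfs_clamp_io_size (unsigned int requested)
{
        if (requested == 0)
                return GF_NFS_DEF_IO_SIZE;  /* option not set */
        if (requested < GF_NFS_MIN_IO_SIZE)
                return GF_NFS_MIN_IO_SIZE;
        if (requested > GF_NFS_MAX_IO_SIZE)
                return GF_NFS_MAX_IO_SIZE;
        return requested;
}
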
* cli/glusterd: improve rebalance fix-layout status reporting (Ravishankar N, 2013-09-19, 2 files, -2/+97)

Problem:
Currently the CLI rebalance status command output does not indicate the
'type' of rebalance, i.e. whether a full rebalance or only a fix-layout was
carried out.

Fix:
After the rebalance status of all peers is received by the originator
glusterd, alter it to reflect the type of rebalance before passing it on to
the CLI process.

Change-Id: I1940ffda0d36e25e5b33c84a0ea210394cc9e1d3
BUG: 1004744
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/5826
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: Don't reset rebalance status on add-brick (Kaushal M, 2013-09-18, 1 file, -9/+0)

The rebalance status was being reset to 'Not started' when add-brick was
performed. This would lead to odd cases where a 'rebalance status' on a
volume would show the status as 'not started' but would also include the
rebalance statistics. It also affected the display of asynchronous task
status in the 'volume status' command.

By not resetting the status we prevent the above issues from happening.
Since we use the running/not-running state of the rebalance process as the
check when performing other operations, we can safely leave the rebalance
stats collected on an add-brick.

Change-Id: I4c69d9c789d081c6de7e7a81dd0d4eba2e83ec17
BUG: 1006247
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5895
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>

* glusterd: Blocking invalid values for geo-rep config options (Avra Sengupta, 2013-09-17, 2 files, -9/+64)

Change-Id: Ia9ee44763a9c2798b26d3225bf03a974d7ece21f
BUG: 998962
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5666
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* cli,glusterd: Task parameters in xml output (Kaushal M, 2013-09-13, 3 files, -6/+165)

This patch introduces task parameters for the asynchronous tasks shown in
volume status. The parameters are only given for xml output. The parameters
shown currently are,

- source and destination bricks for replace-brick tasks

  <tasks>
    <task>
      <type>Replace brick</type>
      <id>3d1a1005-9d2e-4ae0-bd62-577bc1d333a3</id>
      <status>1</status>
      <params>
        <srcBrick>archm:/export/test4</srcBrick>
        <dstBrick>archm:/export/test-replace1</dstBrick>
      </params>
    </task>
  </tasks>

- list of bricks being removed for remove-brick tasks

  <tasks>
    <task>
      <type>Remove brick</type>
      <id>901c20ca-0da2-41de-8669-5f0caca6b846</id>
      <status>1</status>
      <params>
        <brick>archm:/export/test2</brick>
        <brick>archm:/export/test3</brick>
      </params>
    </task>
  </tasks>

The changes for non-xml output will be done in a subsequent patch.

Change-Id: I322afe2f83ed8adeddb99f7962c25911204dc204
BUG: 916577
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5771
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>

* mgmt/glusterd: Update sub_count on remove brick (Vijay Bellur, 2013-09-11, 1 file, -0/+1)

Change-Id: I7c17de39da03c6b2764790581e097936da406695
BUG: 1002556
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/5893
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: Allow bumping down a peer's op-version during probe (Kaushal M, 2013-09-10, 1 file, -15/+9)

Earlier, a peer running a higher op-version couldn't be probed into a
cluster running at a lower op-version. This created issues when trying to
expand an upgraded cluster.

This patch changes this behaviour. The cluster no longer rejects a peer
being probed if its op-version is higher than the cluster op-version. The
peer will reduce its op-version if it doesn't have any volumes. If the peer
contains volumes and needs to reduce its op-version, it fails the handshake
and the probe fails.

Change-Id: I12c6c873922799e1557b7184e956baea643d0dea
BUG: 1005038
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5715
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

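The decision described above can be sketched roughly as follows (function
and variable names are assumptions, not the actual handshake code):

#include <stdbool.h>

/* Returns true if the probe can proceed, false if the handshake fails. */
static bool
reconcile_op_version (int *peer_op_version, int cluster_op_version,
                      int peer_volume_count)
{
        if (*peer_op_version <= cluster_op_version)
                return true;                            /* nothing to do */
        if (peer_volume_count == 0) {
                *peer_op_version = cluster_op_version;  /* bump down     */
                return true;
        }
        return false;   /* peer has volumes and cannot lower its op-version */
}
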
* glusterd: Added missing MY_UUID conversion in gsync staging (Avra Sengupta, 2013-09-10, 1 file, -0/+2)

Change-Id: Ia4bf607e044d50636fb0a599a2ff91059f5b17aa
BUG: 1006177
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5887
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* gsyncd / geo-rep: "disjoint" cascading geo-replication sessions (Venky Shankar, 2013-09-04, 1 file, -0/+9)

The slave's xtime is now stored on the master itself (and that too only on
the root), which implies it cannot be propagated to the cascaded slave.
Thus the intermediate master now makes use of its own volume information to
propagate volume-mark and xtime.

On starting Geo-Replication, the "geo-replication.ignore-pid-check" marker
option is enabled, which is an override for the client-pid check in marker.
This option triggers marker updates only for the geo-replication auxiliary
mount (client-pid == -1). Since gsyncd does not do setxattr() directly on
the bricks, this option won't trigger a chain of spurious metadata updates
that would need to be processed by gsyncd.

Change-Id: If50c5ef275dfb6b4ff4fd35be2565587e2fdf3e1
BUG: 996371
Original Author: Venky Shankar <vshankar@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5592
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* features/marker: force xtime updates (configurable) for client-pid = -1 (Venky Shankar, 2013-09-04, 2 files, -0/+17)

This is required by Geo-Replication, which does an auxiliary mount with
client-pid as -1 (which has special treatment at specific places in
GlusterFS), to trigger xtime updates on the intermediate master in a
cascading setup.

Marker too had a check to "not" mark updates for geo-replication's
auxiliary mounts. With the new geo-replication design, xtimes are not set
by the master on the slave for all entities. Due to this, cascading setups
were broken.

This patch introduces the "geo-replication.ignore-pid-check" option as an
"override" for the client-pid check for gsyncd's client-pid. When this
option is enabled, marker starts "marking" even if the updates are from the
special client.

Geo-Replication enables this option when it detects that it is an
intermediate master.

Change-Id: I9f7140edd12fef5480595ee0f93f35b94cdb8345
BUG: 996371
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5591
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>

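The check being overridden can be pictured roughly like this (names are
assumptions, illustrative only):

#include <stdbool.h>

/* gsyncd's auxiliary mount uses client-pid == -1; marker normally skips
 * marking for it unless geo-replication.ignore-pid-check is enabled. */
static bool
marker_should_mark_xtime (int client_pid, bool ignore_pid_check)
{
        if (client_pid == -1 && !ignore_pid_check)
                return false;   /* old behaviour: skip the special client */
        return true;
}
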
* glusterd: Added op-version checks to geo-rep commands. (Venky Shankar, 2013-09-04, 1 file, -0/+43)

Added op-version checks to all geo-rep commands. Min op-version should
be 2.

Change-Id: I942d897404e11e4d53123409731ba5cd252668fe
BUG: 847839
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5732
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd/gverify: Check for passwordless ssh in gverify. (Venky Shankar, 2013-09-04, 2 files, -79/+77)

Change-Id: I8c2d398114ad4534bcc052f9a5be8bbb2e7e2582
BUG: 999531
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5677
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: Allowing root@hostname::slave georep sessions to be created. (Venky Shankar, 2013-09-04, 3 files, -27/+110)

non-root@hostname::slave-vol geo-rep sessions are not supported. Only
hostname and root@hostname sessions are supported, and are treated as the
same.

Change-Id: I87551e1bd4ff4e0e6520c34eb3d944587cc65476
BUG: 998933
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5659
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd/gverify.sh: Stops session being created with invalid slave details (Venky Shankar, 2013-09-04, 1 file, -8/+68)

create force will fail with a proper message if the IP is not reachable, or
if it is unable to fetch slave details.

Change-Id: I44a3ba777b37702ffd0e48e9cb46c51e293327d4
BUG: 988314
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5516
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: Saving geo-rep session details in a more specific path (Venky Shankar, 2013-09-04, 3 files, -117/+142)

Now saving the session details in the
/var/lib/glusterd/geo-replication/<mastervol>_<slaveip>_<slavevol> repo to
distinguish between two master-slave sessions where the slave name is the
same across two different clusters.

Change-Id: I57c93f55cc9bd4fe2bffe579028aaf5e4335b223
BUG: 991501
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5488
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* performance/readdir-ahead: introduce directory read-ahead translator (Brian Foster, 2013-09-04, 1 file, -0/+9)

This is a translator to improve the performance of typical, sequential
directory reads (i.e., ls). readdir-ahead begins preloading the contents of
a directory on open and serves readdir requests from the preloaded content.
readdir-ahead is currently implemented to only handle the single threaded
directory read case.

readdir-ahead is currently disabled by default. It can be enabled with the
following command:

gluster volume set <volname> readdir-ahead on

The following are results of a getdents test on a single brick volume.

Test info:
- Single VM, gluster client/server.
- Volume mounted with native client using --gid-timeout=2.
- getdents on single directory with 100k 0-byte files.

Test results:
- !readdir-ahead
  read 3120080 bytes from offset 0
  3 MiB, 4348 ops, 0:00:07.00 (416.590 KiB/sec and 594.4737 ops/sec)
- readdir-ahead
  read 3120080 bytes from offset 0
  3 MiB, 4348 ops, 0:00:03.00 (820.116 KiB/sec and 1170.3043 ops/sec)

BUG: 980517
Change-Id: Ieceb9e1eb47d1d5b5af8da2bf03839537364653f
Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-on: http://review.gluster.org/4519
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* mgmt/glusterd: Regenerate client volfiles during upgrade (shishir gowda, 2013-09-03, 3 files, -7/+30)

Change-Id: I1442bc1d115a9c6ecf139a0ca9da74d07e0fe928
BUG: 1003855
Signed-off-by: shishir gowda <sgowda@redhat.com>
Reviewed-on: http://review.gluster.org/5764
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* features/qemu-block: support for QCOW2 and QED formats (Anand Avati, 2013-09-03, 2 files, -0/+24)

This patch adds support for internal snapshots using QCOW2 and a general
framework for external snapshots (next patch) with QCOW2 and QED.

For internal snapshots, the file must be "initialized" or "formatted" into
QCOW2 format, specifying a file size. Snapshots can be created, deleted,
and applied ("goto").

e.g:

// Format and Initialize
sh# setfattr -n trusted.glusterfs.block-format -v qcow2:10GB /mnt/imgfile
sh# ls -l /mnt/imgfile
-rw-r--r-- 1 root root 10G Jul 18 21:20 imgfile

// Create a snapshot
sh# setfattr -n trusted.glusterfs.block-snapshot-create -v name1 imgfile

// Apply a snapshot
sh# setfattr -n trusted.glusterfs.block-snapshot-goto -v name1 imgfile

Change-Id: If993e057a9455967ba3fa9dcabb7f74b8b2cf4c3
BUG: 986775
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/5367
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>

* nfs: persistent caching of connected NFS-clients (Niels de Vos, 2013-08-28, 1 file, -0/+6)

Introduce /var/lib/glusterfs/nfs/rmtab to contain a list of NFS-clients
which have a volume mounted. The volume option 'nfs.mount-rmtab' can be set
to an alternative filename. When the file is located on shared storage,
multiple gNFS servers can use the same file to present a single NFS-server.

This cache is read when a system administrator calls 'showmount -a' and
updated when an NFS-client calls MNT or UMNT from the MOUNT protocol.

Usage:
- create a volume for storing the shared rmtab file
- mount the volume on all storage servers, at the same location
- make sure that the volume is mounted at boot (add to /etc/fstab)
- place the rmtab file on the volume:
  # gluster volume set <VOLUME> nfs.mount-rmtab <MOUNTPOINT>/<FILENAME>
- any subsequent mount requests will add an entry to this file
- 'showmount -a' requests will return the NFS-clients using the cluster

Note: The NFS-server does currently not support reconfigure(). When a
configuration option is set/changed, the NFS-server glusterfs process gets
restarted. This causes the active NFS-clients to be forgotten (the entries
are saved in the old rmtab, but we do not have a reference to that file any
more, so we can't re-add them). Therefore a re-mount done by the
NFS-clients is needed before they get listed in the rmtab again.

Change-Id: I58f47135d60ad112849d647bea4e1129683dd2b3
BUG: 904065
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/4430
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Tested-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>

* glusterd: add check in remove-brick start variant (Ravishankar N, 2013-08-21, 1 file, -11/+11)

The 'start' variant of the remove-brick command only applies at the dht
level, wherein we can remove all the bricks of a sub-volume (and remove
multiple such sub-volumes) but not select bricks of it.

This patch disallows removing individual replica bricks of multiple
sub-volumes (i.e. reducing the replica count of the volume) using
remove-brick 'start'. The preferred method for such an operation is to use
commit force.

This patch also reverts the check to prevent removal of bricks from a
replicate volume (commit 0d415f7).

BUG: 961669
Change-Id: I447ad27f73a0963b5e09fb317bf7267a7a5a6147
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/5566
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: release big locks while doing mount (Anand Avati, 2013-08-18, 1 file, -0/+4)

Else things can deadlock in getspec v/s glusterd_do_mount().

Change-Id: Ie70b43916e495c1c8f93e4ed0836c2fb7b0e1f1d
BUG: 997576
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/5636
Tested-by: Joe Julian <joe@julianfamily.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>

* glusterd: Try to start all bricks on 'start force' (Kaushal M, 2013-08-18, 1 file, -2/+7)

A volume would fail to start if any one of the bricks fails staging or
fails to start, even with the 'force' option. With this patch, when the
'force' option is given for a volume start, glusterd will continue and
start other bricks even if one fails staging or starting.

Also did a small fix in changelog, to prevent it crashing when it fails to
init.

Change-Id: I7efbd9ab13d12d69b0335ae54143fa17586f8f98
BUG: 994375
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5510
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* cli: Add server uuid into volume brick info xml (Timothy Asir, 2013-08-18, 1 file, -0/+10)

Add server uuid as an attribute to the existing brick details in the volume
info cli xml output. Currently, when a node has more than one ip, the
oVirt-engine fails to map the corresponding server using the ip alone. If
we get the host uuid along with brick details in volume info command it
will be easy for ovirt-engine to find out the server and thereby we can
avoid confusion in finding the server.

Change-Id: I3c9c9acea80e10e0b2977477759d9af045e48959
BUG: 955588
Signed-off-by: Timothy Asir <tjeyasin@redhat.com>
Reviewed-on: http://review.gluster.org/4875
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: Move certain logs into 'DEBUG' level (Harshavardhana, 2013-08-18, 1 file, -3/+3)

Confusing "Error" messages in logs can cause user panic and false positives
- avoid them as necessary in future.

Change-Id: I906c64eea879b19a8db099c89d1d7f874e5530db
BUG: 995784
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/5555
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: remove-brick: Allow simultaneous removal of multiple subvolumes. (Ravishankar N, 2013-08-13, 2 files, -36/+89)

Currently, remove-brick supports removal of only one distributed
stripe/replica pair at a time. Fix it to support removal of multiple pairs.
This is consistent with add-brick behaviour which supports adding multiple
stripe/replica pairs simultaneously.

Removal is successful irrespective of the order of the bricks given at the
CLI, as long as the bricks are from the same subvolume(s).

Change-Id: I7c11c1235ce07b124155978b9d48d0ea65396103
BUG: 974007
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/5210
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>

* Correcting a log message in glusterd-geo-rep.c (M S Vishwanath Bhat, 2013-08-05, 1 file, -1/+1)

Change-Id: I4352f513fc5616daa20e9a4ad51a63fb13a27dff
BUG: 847839
Signed-off-by: M S Vishwanath Bhat <vbhat@redhat.com>
Reviewed-on: http://review.gluster.org/5472
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* Add switch and nufa options to 'gluster cli' (Harshavardhana, 2013-08-03, 2 files, -17/+53)

Change-Id: Ic3c43291e0e1ead0d89c0436e8d70aa5dee2f543
BUG: 924488
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/5391
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* cli,glusterd: Fix when tasks are shown in 'volume status' (Kaushal M, 2013-08-03, 1 file, -0/+4)

Asynchronous tasks are shown in 'volume status' only for a normal volume
status request for either all volumes or a single volume.

Change-Id: I9d47101511776a179d213598782ca0bbdf32b8c2
BUG: 888752
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5308
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: Use volume op-versions during volgen (Kaushal M, 2013-08-02, 4 files, -19/+14)

Instead of using the cluster op-version, volume op-version is used to
enable open-behind during volgen. For doing this, the volume op-versions
are updated before regenerating the volfiles.

Change-Id: I675bb549bf7c7c0279030dca698fb530781addc6
BUG: 990830
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5385
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* cluster/dht: Re-initialize skipped file count in glusterd (shishir gowda, 2013-07-31, 1 file, -0/+1)

Change-Id: I42d08b3a6a7a3839f5e9953e1f83959222c080f8
Signed-off-by: shishir gowda <sgowda@redhat.com>
BUG: 989846
Reviewed-on: http://review.gluster.org/5446
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>

* glusterd: Checking session created or not in case of geo-rep stop (Avra Sengupta, 2013-07-31, 1 file, -2/+6)

Performing a statefile check in case of geo-rep stop, so as to provide a
proper error message in case the session is not created. However, in case
of geo-rep stop force, we allow the command to succeed even if the session
is not created, because the stop command is a failsafe command to stop
running geo-rep sessions on any nodes.

Change-Id: I2b6a0253de977633606c422cbbc9e37cede9a268
BUG: 989541
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5417
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: initiating gsyncd restart during add-brick (Avra Sengupta, 2013-07-31, 3 files, -25/+150)

During add-brick, when a new brick is added in one of the nodes that was
already a part of the existing volume, and gsyncd was already running on
that node, then all gsyncd processes running on that node, for that
particular master and any slave sessions, will be restarted.

If a new brick is added in a new node, then after adding the brick, the
user has to perform the following steps:
1. gluster system:: execute gsec_create
2. gluster volume geo-replication <master-vol> <slave-vol> create push-pem force
3. gluster volume geo-replication <master-vol> <slave-vol> start force

Change-Id: I4b9633e176c80e4a7cf33f42ebfa47ab8fc283f1
BUG: 989532
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5416
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* mgmt/glusterd: Fix a minor typo. (Vijay Bellur, 2013-07-31, 1 file, -1/+1)

Thanks to Patrick Matthäi <pmatthaei@debian.org> for the patch.

Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I59da74298894ccc2ab30967ffe44cc844aa73f82
BUG: 814534
Reviewed-on: http://review.gluster.org/5436
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Anand Avati <avati@redhat.com>

* cluster/dht: Treat migration failures due to space constraints as skipped (shishir gowda, 2013-07-30, 2 files, -0/+28)

Currently rebalance/remove-brick ops display a migration-failed count even
for files which failed due to space issues (not enough space for the file,
or migration leading to cluster imbalance).

These will now be counted as skipped, and rebalance/remove-brick status
will display the additional counter.

Change-Id: I674904d380b5f8300e9ca9e6af557c3d30d6cff4
BUG: 989846
Signed-off-by: shishir gowda <sgowda@redhat.com>
Reviewed-on: http://review.gluster.org/5399
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: Fixing create force issues while it returned true every time (Avra Sengupta, 2013-07-29, 1 file, -42/+71)

Now geo-rep create force will return true if a node is down, and log an
appropriate message. It will also return true with an appropriate log
message if the slave verification fails.

However it will not return true if the config file is deleted or corrupted,
since the state_file's path can then not be fetched. It will also fail if
the slave url is invalid. If the push-pem option is given and
/var/lib/glusterd/geo-replication/common_secret.pem.pub is not present,
then also the create force command will fail.

Change-Id: Ie7532a0884ddf9c3008bd30832d171d5b53b540e
BUG: 988314
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5405
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* cli, glusterd: Cleanup logging of bd op commands. (Vijay Bellur, 2013-07-27, 1 file, -1/+0)

This patch prevents messages of the form "bd op: %s : SUCCESS" from being
logged in .cmd_log_history.

Change-Id: Iebeb7e26d409bf99b9c8df0a5c1c5a5d30d78a61
BUG: 823081
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/4871
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-by: M. Mohan Kumar <mohan@in.ibm.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>

* mgmt/glusterd: let each brick write the valgrind o/p to different file (Raghavendra Bhat, 2013-07-26, 1 file, -1/+5)

Till now all the brick processes were writing the valgrind information to
the same log file.

Change-Id: I0251c943935e2901b729c71f21d0677edb9f6867
BUG: 922877
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/5394
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>