author    shravantc <shravantc@ymail.com>  2015-03-30 11:07:34 +0530
committer Kaleb KEITHLEY <kkeithle@redhat.com>  2015-04-01 10:23:16 -0700
commit    c1e1c7443a26d132b3ba6cb164f80ca934690ba6 (patch)
tree      8bf69b9274934b5bd1c53087b50d325b903e22fe
parent    dc26a253f5f393745bd435721e31d6e2e598eed1 (diff)
doc : editing admin directory quota
added 'setting alert time' to managing directory quota. added 'Displaying
Quota Limit Information Using the df Utility' to displaying quota limit
information.

Change-Id: Iaaf6201e8aa3687ee70ae28c41b870f2d7a4127f
BUG: 1206539
Signed-off-by: shravantc <shravantc@ymail.com>
Reviewed-on: http://review.gluster.org/10039
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_directory_Quota.md  | 123
-rw-r--r--  doc/admin-guide/en-US/markdown/admin_setting_volumes.md  |  87
2 files changed, 176 insertions, 34 deletions
diff --git a/doc/admin-guide/en-US/markdown/admin_directory_Quota.md b/doc/admin-guide/en-US/markdown/admin_directory_Quota.md
index 21e42c66902..402ac5e4fcc 100644
--- a/doc/admin-guide/en-US/markdown/admin_directory_Quota.md
+++ b/doc/admin-guide/en-US/markdown/admin_directory_Quota.md
@@ -1,6 +1,6 @@
#Managing Directory Quota
-Directory quotas in GlusterFS allow you to set limits on usage of disk
+Directory quotas in GlusterFS allow you to set limits on the usage of disk
space by directories or volumes. The storage administrators can control
the disk space utilization at the directory and/or volume levels in
GlusterFS by setting limits to allocatable disk space at any level in
@@ -8,10 +8,9 @@ the volume and directory hierarchy. This is particularly useful in cloud
deployments to facilitate a utility billing model.
> **Note**
->
-> For now, only Hard limit is supported. Here, the limit cannot be
+> For now, only Hard limits are supported. Here, the limit cannot be
> exceeded and attempts to use more disk space or inodes beyond the set
-> limit will be denied.
+> limit are denied.
System administrators can also monitor the resource utilization to limit
the storage for the users depending on their role in the organization.
@@ -22,22 +21,20 @@ You can set the quota at the following levels:
- **Volume level** – limits the usage at the volume level
> **Note**
->
> You can set the disk limit on the directory even if it is not created.
> The disk limit is enforced immediately after creating that directory.
-> For more information on setting disk limit, see ?.
##Enabling Quota
You must enable Quota to set disk limits.
-**To enable quota**
+**To enable quota:**
-- Enable the quota using the following command:
+- Use the following command to enable quota:
- `# gluster volume quota enable `
+ # gluster volume quota VOLNAME enable
- For example, to enable quota on test-volume:
+ For example, to enable quota on the test-volume:
# gluster volume quota test-volume enable
Quota is enabled on /test-volume
@@ -48,11 +45,11 @@ You can disable Quota, if needed.
**To disable quota:**
-- Disable the quota using the following command:
+- Use the following command to disable quota:
- `# gluster volume quota disable `
+ # gluster volume quota VOLNAME disable
- For example, to disable quota translator on test-volume:
+ For example, to disable quota translator on the test-volume:
# gluster volume quota test-volume disable
Quota translator is disabled on /test-volume
@@ -64,20 +61,19 @@ disk limit or set disk limit for the existing directories. The directory
name should be relative to the volume with the export directory/mount
being treated as "/".
-**To set or replace disk limit**
+**To set or replace disk limit:**
- Set the disk limit using the following command:
- `# gluster volume quota limit-usage /`
+ # gluster volume quota VOLNAME limit-usage /DIRECTORY LIMIT
- For example, to set limit on data directory on test-volume where
+ For example, to set limit on data directory on the test-volume where
data is a directory under the export directory:
# gluster volume quota test-volume limit-usage /data 10GB
Usage limit has been set on /data
> **Note**
- >
> In a multi-level directory hierarchy, the strictest disk limit
> will be considered for enforcement.
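The enforcement rule in the note above can be sketched as taking the minimum limit along the path. The directory names and limit values here are hypothetical, not taken from the guide:

```shell
#!/bin/sh
# Strictest-limit rule: for a write under /data/sub, the effective
# limit is the smallest limit set on the path and its ancestors.
# Hypothetical limits, in GB:
volume_limit=100   # limit-usage / 100GB
data_limit=10      # limit-usage /data 10GB
sub_limit=25       # limit-usage /data/sub 25GB

effective=$volume_limit
if [ "$data_limit" -lt "$effective" ]; then effective=$data_limit; fi
if [ "$sub_limit" -lt "$effective" ]; then effective=$sub_limit; fi
echo "${effective}GB"   # 10GB: /data's limit caps /data/sub despite its 25GB
```

So a looser limit on a subdirectory cannot widen a tighter limit set higher up the hierarchy.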
@@ -86,35 +82,78 @@ being treated as "/".
You can display disk limit information on all the directories on which
the limit is set.
-**To display disk limit information**
+**To display disk limit information:**
- Display disk limit information of all the directories on which limit
is set, using the following command:
- `# gluster volume quota list`
+ # gluster volume quota VOLNAME list
- For example, to see the set disks limit on test-volume:
+ For example, to see the set disk limits on the test-volume:
# gluster volume quota test-volume list
-
-
/Test/data 10 GB 6 GB
/Test/data1 10 GB 4 GB
- Display disk limit information on a particular directory on which
limit is set, using the following command:
- `# gluster volume quota list `
+ # gluster volume quota VOLNAME list /DIRECTORY
- For example, to see the set limit on /data directory of test-volume:
+ For example, to view the set limit on /data directory of test-volume:
# gluster volume quota test-volume list /data
+ /Test/data 10 GB 6 GB
+###Displaying Quota Limit Information Using the df Utility
- /Test/data 10 GB 6 GB
+You can make the df utility take quota limits into consideration while reporting disk usage. To generate such a report, run the following command:
+
+ # gluster volume set VOLNAME quota-deem-statfs on
+
+In this case, the total disk space of the directory is taken as the quota hard limit set on the directory of the volume.
+
+>**Note**
+>
+>The default value for quota-deem-statfs is off. However, it is recommended to set quota-deem-statfs to on.
+
+The following example displays the disk usage when quota-deem-statfs is off:
+
+ # gluster volume set test-volume features.quota-deem-statfs off
+ volume set: success
+ # gluster volume quota test-volume list
+ Path Hard-limit Soft-limit Used Available
+ -----------------------------------------------------------
+ / 300.0GB 90% 11.5GB 288.5GB
+ /John/Downloads 77.0GB 75% 11.5GB 65.5GB
+
+Disk usage for volume test-volume as seen on client1:
+
+ # df -hT /home
+ Filesystem Type Size Used Avail Use% Mounted on
+ server1:/test-volume fuse.glusterfs 400G 12G 389G 3% /home
+
+The following example displays the disk usage when quota-deem-statfs is on:
+
+ # gluster volume set test-volume features.quota-deem-statfs on
+ volume set: success
+ # gluster vol quota test-volume list
+ Path Hard-limit Soft-limit Used Available
+ -----------------------------------------------------------
+ / 300.0GB 90% 11.5GB 288.5GB
+ /John/Downloads 77.0GB 75% 11.5GB 65.5GB
+
+Disk usage for volume test-volume as seen on client1:
+
+ # df -hT /home
+ Filesystem Type Size Used Avail Use% Mounted on
+ server1:/test-volume fuse.glusterfs 300G 12G 289G 4% /home
+
+When the quota-deem-statfs option is set to on, users see the hard limit set on a directory as its total available disk space.
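The two df outputs above differ only in the Size column, and the Use% values follow from it. A small sketch of that arithmetic, using the example's figures:

```shell
#!/bin/sh
# With quota-deem-statfs off, df reports against the 400G backing
# filesystem; with it on, df reports against the 300G quota hard
# limit. The same 12G of usage therefore yields different Use%.
used=12
echo "off: $((100 * used / 400))% of 400G"   # matches the 3% shown above
echo "on:  $((100 * used / 300))% of 300G"   # matches the 4% shown above
```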
##Updating Memory Cache Size
+###Setting Timeout
+
For performance reasons, quota caches directory sizes on the client. You
can set a timeout indicating the maximum duration for which the cached
directory sizes remain valid, from the time they are populated.
@@ -132,11 +171,11 @@ force fetching of directory sizes from server for every operation that
modifies file data and will effectively disable directory size caching
on client side.
-**To update the memory cache size**
+**To update the memory cache size:**
-- Update the memory cache size using the following command:
+- Use the following command to update the memory cache size:
- `# gluster volume set features.quota-timeout`
+ # gluster volume set VOLNAME features.quota-timeout VALUE
For example, to set the timeout to 5 seconds on
test-volume:
@@ -144,21 +183,37 @@ on client side.
# gluster volume set test-volume features.quota-timeout 5
Set volume successful
+##Setting Alert Time
+
+Alert time is the frequency at which usage information is logged after the soft limit is reached.
+
+**To set the alert time:**
+
+- Use the following command to set the alert time:
+
+ # gluster volume quota VOLNAME alert-time time
+
+ >**Note**
+ >
+ >The default alert-time is one week.
+
+ For example, to set the alert time to one day:
+
+ # gluster volume quota test-volume alert-time 1d
+ volume quota : success
+
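The `1d` argument above is a duration string. As an illustration only (the exact set of unit suffixes the CLI accepts is an assumption here, `1d` and the one-week default being the forms shown in this guide), such strings map to seconds like this:

```shell
#!/bin/sh
# Hypothetical helper: convert a duration string such as "1d" or "2h"
# to seconds, for comparison with the one-week default alert-time.
to_seconds() {
    num=${1%[smhdw]}       # strip a trailing unit letter, if any
    unit=${1#"$num"}
    case "$unit" in
        s|'') echo "$num" ;;
        m)    echo $((num * 60)) ;;
        h)    echo $((num * 3600)) ;;
        d)    echo $((num * 86400)) ;;
        w)    echo $((num * 604800)) ;;
    esac
}
echo "$(to_seconds 1d) seconds in 1d"   # 86400 seconds in 1d
echo "$(to_seconds 1w) seconds in 1w"   # 604800 seconds in 1w
```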
##Removing Disk Limit
You can remove a set disk limit if you no longer want quota on that directory.
-**To remove disk limit**
+**To remove disk limit:**
-- Remove disk limit set on a particular directory using the following
- command:
+- Use the following command to remove the disk limit set on a particular directory:
- `# gluster volume quota remove `
+ # gluster volume quota VOLNAME remove /DIRECTORY
For example, to remove the disk limit on /data directory of
test-volume:
# gluster volume quota test-volume remove /data
Usage limit set on /data is removed
-
-
diff --git a/doc/admin-guide/en-US/markdown/admin_setting_volumes.md b/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
index a500a7214da..e58bb63ab23 100644
--- a/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
+++ b/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
@@ -8,6 +8,93 @@ To create a new volume in your storage environment, specify the bricks
that comprise the volume. After you have created a new volume, you must
start it before attempting to mount it.
+###Formatting and Mounting Bricks
+
+####Creating a Thinly Provisioned Logical Volume
+
+To create a thinly provisioned logical volume, proceed with the following steps:
+
+ 1. Create a physical volume (PV) using the pvcreate command.
+ For example:
+
+ `pvcreate --dataalignment 1280K /dev/sdb`
+
+ Here, /dev/sdb is a storage device.
+ Use the correct dataalignment option based on your device.
+
+ >**Note**
+ >
+ >The device name and the alignment value will vary based on the device you are using.
+
+ 2. Create a Volume Group (VG) from the PV using the vgcreate command:
+
+ For example:
+
+ `vgcreate --physicalextentsize 128K gfs_vg /dev/sdb`
+
+ It is recommended that only one VG be created from one storage device.
+
+ 3. Create a thin-pool using the following commands:
+
+ 1. Create an LV to serve as the metadata device using the following command:
+
+ `lvcreate -L metadev_sz --name metadata_device_name VOLGROUP`
+
+ For example:
+
+ `lvcreate -L 16776960K --name gfs_pool_meta gfs_vg`
+
+ 2. Create an LV to serve as the data device using the following command:
+
+ `lvcreate -L datadev_sz --name thin_pool VOLGROUP`
+
+ For example:
+
+ `lvcreate -L 536870400K --name gfs_pool gfs_vg`
+
+ 3. Create a thin pool from the data LV and the metadata LV using the following command:
+
+ `lvconvert --chunksize STRIPE_WIDTH --thinpool VOLGROUP/thin_pool --poolmetadata VOLGROUP/metadata_device_name`
+
+ For example:
+
+ `lvconvert --chunksize 1280K --thinpool gfs_vg/gfs_pool --poolmetadata gfs_vg/gfs_pool_meta`
+
+ >**Note**
+ >
+ >By default, the newly provisioned chunks in a thin pool are zeroed to prevent data leaking between different block devices. If required, you can disable this zeroing using the following command:
+
+ `lvchange --zero n VOLGROUP/thin_pool`
+
+ For example:
+
+ `lvchange --zero n gfs_vg/gfs_pool`
+
+ 4. Create a thinly provisioned volume from the previously created pool using the lvcreate command:
+
+ For example:
+
+ `lvcreate -V 1G -T gfs_vg/gfs_pool -n gfs_lv`
+
+ It is recommended that only one LV be created in a thin pool.
+
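The steps above can be gathered into one reviewable sequence. The device name and sizes are the examples' values; adjust them for your hardware. The commands are echoed rather than executed, since they require root and a real block device; drop the `echo` in `run()` to apply them:

```shell
#!/bin/sh
# Dry-run sketch of the thin-LV creation steps above. Prints each
# command with a "+ " prefix instead of running it.
run() { echo "+ $*"; }

run pvcreate --dataalignment 1280K /dev/sdb
run vgcreate --physicalextentsize 128K gfs_vg /dev/sdb
run lvcreate -L 16776960K --name gfs_pool_meta gfs_vg
run lvcreate -L 536870400K --name gfs_pool gfs_vg
run lvconvert --chunksize 1280K --thinpool gfs_vg/gfs_pool \
    --poolmetadata gfs_vg/gfs_pool_meta
run lvchange --zero n gfs_vg/gfs_pool
run lvcreate -V 1G -T gfs_vg/gfs_pool -n gfs_lv
```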
+Format bricks using the supported XFS configuration, mount the bricks, and verify the bricks are mounted correctly.
+
+ 1. Run # mkfs.xfs -f -i size=512 -n size=8192 -d su=128K,sw=10 DEVICE to format the bricks to the supported XFS file system format. Here, DEVICE is the thin LV. The inode size is set to 512 bytes to accommodate the extended attributes used by GlusterFS.
+
+ 2. Run # mkdir /mountpoint to create a directory to link the brick to.
+
+ 3. Add an entry in /etc/fstab:
+
+ `/dev/gfs_vg/gfs_lv /mountpoint xfs rw,inode64,noatime,nouuid 1 2`
+
+ 4. Run # mount /mountpoint to mount the brick.
+
+ 5. Run the df -h command to verify the brick is successfully mounted:
+
+ `# df -h
+ /dev/gfs_vg/gfs_lv 16G 1.2G 15G 7% /exp1`
+
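The alignment numbers used throughout these steps fit together, assuming the RAID geometry implied by su=128K,sw=10 (a 128K stripe unit across 10 data disks, which is an inference from the mkfs.xfs options, not stated explicitly above):

```shell
#!/bin/sh
# The full stripe width (stripe unit x data disks) equals the
# pvcreate --dataalignment and lvconvert --chunksize values used
# in the examples, so allocations stay aligned to RAID stripes.
stripe_unit_k=128
data_disks=10
echo "full stripe: $((stripe_unit_k * data_disks))K"   # full stripe: 1280K
```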
- Volumes of the following types can be created in your storage
environment: