Diffstat (limited to 'doc/features')
-rw-r--r--  doc/features/brick-failure-detection.md |  67
-rw-r--r--  doc/features/ctime.md                    |  68
-rw-r--r--  doc/features/file-snapshot.md            |  91
-rw-r--r--  doc/features/ganesha-ha.md               |  43
-rw-r--r--  doc/features/nufa.md                     |  20
-rw-r--r--  doc/features/rdma-cm-in-3.4.0.txt        |   9
-rw-r--r--  doc/features/rebalance.md                |  74
-rw-r--r--  doc/features/server-quorum.md            |  44
-rw-r--r--  doc/features/worm.md                     |  75
9 files changed, 111 insertions, 380 deletions
diff --git a/doc/features/brick-failure-detection.md b/doc/features/brick-failure-detection.md
deleted file mode 100644
index 24f2a18f39f..00000000000
--- a/doc/features/brick-failure-detection.md
+++ /dev/null
@@ -1,67 +0,0 @@
-# Brick Failure Detection
-
-This feature attempts to identify storage/file system failures and disable the failed brick without disrupting the remainder of the node's operation.
-
-## Description
-
-Detecting failures on the filesystem that a brick uses makes it possible to handle errors that originate outside of the Gluster environment.
-
-There have been hanging brick processes when the underlying storage of a brick became unavailable. A hanging brick process can still use the network and respond to clients, but actual I/O to the storage is impossible and can cause noticeable delays on the client side.
-
-The goal is to provide better detection of storage subsystem failures and to prevent brick processes from hanging when the storage hardware or the filesystem fails.
-
-A health-checker (thread) has been added to the posix xlator. This thread periodically checks the status of the filesystem (which implicitly verifies that the storage hardware is functional).
-
-`glusterd` can detect that the brick process has exited, and `gluster volume status` will show that the brick process is no longer running. System administrators checking the logs should be able to triage the cause.
-
-## Usage and Configuration
-
-The health-checker is enabled by default and runs a check every 30 seconds. This interval can be changed per volume with:
-
- # gluster volume set <VOLNAME> storage.health-check-interval <SECONDS>
-
-If `SECONDS` is set to 0, the health-checker will be disabled.
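-
-For example, to check every 10 seconds on a hypothetical volume named `myvolume`, or to disable the check again:
-
    # gluster volume set myvolume storage.health-check-interval 10
    # gluster volume set myvolume storage.health-check-interval 0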
-
-## Failure Detection
-
-Errors are logged to the standard syslog (mostly `/var/log/messages`):
-
- Jun 24 11:31:49 vm130-32 kernel: XFS (dm-2): metadata I/O error: block 0x0 ("xfs_buf_iodone_callbacks") error 5 buf count 512
- Jun 24 11:31:49 vm130-32 kernel: XFS (dm-2): I/O Error Detected. Shutting down filesystem
- Jun 24 11:31:49 vm130-32 kernel: XFS (dm-2): Please umount the filesystem and rectify the problem(s)
- Jun 24 11:31:49 vm130-32 kernel: VFS:Filesystem freeze failed
- Jun 24 11:31:50 vm130-32 GlusterFS[1969]: [2013-06-24 10:31:50.500674] M [posix-helpers.c:1114:posix_health_check_thread_proc] 0-failing_xfs-posix: health-check failed, going down
- Jun 24 11:32:09 vm130-32 kernel: XFS (dm-2): xfs_log_force: error 5 returned.
- Jun 24 11:32:20 vm130-32 GlusterFS[1969]: [2013-06-24 10:32:20.508690] M [posix-helpers.c:1119:posix_health_check_thread_proc] 0-failing_xfs-posix: still alive! -> SIGTERM
-
-The messages labelled with `GlusterFS` in the above output are also written to the logs of the brick process.
-
-## Recovery after a failure
-
-When a brick process detects that the underlying storage is not responding anymore, the process will exit. There is no automated way for the brick process to get restarted; the sysadmin will need to fix the problem with the storage first.
-
-After correcting the storage (hardware or filesystem) issue, the following command will start the brick process again:
-
- # gluster volume start <VOLNAME> force
-
-## How To Test
-
-The health-checker thread that is part of each brick process will get started automatically when a volume has been started. Verifying its functionality can be done in different ways.
-
-On virtual hardware:
-
-* disconnect the disk from the VM that holds the brick
-
-On real hardware:
-
-* simulate a RAID-card failure by unplugging the card or cables
-
-On a system that uses LVM for the bricks:
-
-* use device-mapper to load an error-table for the disk, see [this description](http://review.gluster.org/5176); a hedged sketch of this approach is given at the end of this section.
-
-On any system (writing to random offsets of the block device, more difficult to trigger):
-
-1. cause corruption on the filesystem that holds the brick
-2. read contents from the brick, hoping to hit the corrupted area
-3. the filesystem should abort after hitting a bad spot, and the health-checker should notice that shortly afterwards
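-
-As a hedged sketch of the device-mapper approach mentioned above (the device path is hypothetical and the exact table layout may differ on your system), an error table can be swapped in and the original mapping restored afterwards:
-
    # ORIG_TABLE=$(dmsetup table /dev/vg0/brick)           # save the current mapping
    # dmsetup suspend /dev/vg0/brick
    # dmsetup load /dev/vg0/brick --table "0 $(blockdev --getsz /dev/vg0/brick) error"
    # dmsetup resume /dev/vg0/brick                         # all I/O on the brick device now fails
    # dmsetup suspend /dev/vg0/brick
    # dmsetup load /dev/vg0/brick --table "$ORIG_TABLE"
    # dmsetup resume /dev/vg0/brick                         # restore the original mapping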
diff --git a/doc/features/ctime.md b/doc/features/ctime.md
new file mode 100644
index 00000000000..74a77abed4b
--- /dev/null
+++ b/doc/features/ctime.md
@@ -0,0 +1,68 @@
+# Consistent time attributes in gluster across replica/distribute
+
+
+#### Problem:
+Traditionally gluster has been using the time attributes (ctime, atime, mtime) of files/dirs from the bricks. The problem with this approach is that it is not consistent across replica and distribute bricks, and applications which depend on it break, because replica might not always return the time attributes from the same brick.
+
+Tar, in particular, gives "file changed as we read it" whenever it detects ctime differences when stat is served from different bricks. The way we have been trying to solve it is to serve the stat structures from the same brick in afr, and to use max-time in dht. But it doesn't avoid the problem completely, because there is no way to change ctime at the moment (lutimes() only allows mtime, atime), so there is little we can do to make sure ctimes match after self-heals/xattr updates/rebalance.
+
+#### Solution Proposed:
+Store the time attributes (ctime, mtime, atime) as an xattr of the file. The xattr is updated based
+on the fop. If a filesystem fop changes only mtime and ctime, update only those in the xattr for
+that file.
+
+#### Design Overview:
+1) As part of each fop, the top layer will generate a time stamp and pass it down along
+ with other information
+    - This will bring a dependency on NTP-synced clients along with the servers
+    - There can be a difference in time if the fop gets stuck in an xlator for various reasons,
+for example because of locks.
+
+ 2) On the server, the posix layer stores the value in memory (inode ctx) and will sync the data periodically to the disk as an extended attr
+    - A sync call will also force it. And if a fop comes for an inode which is not yet linked, we do the sync immediately.
+
+ 3) Each time an inode is created or initialized, it reads the data from disk and stores it in the inode ctx.
+
+ 4) Before setting to inode_ctx we compare the timestamp stored and the timestamp received, and only store if the stored value is less than the received value.
+
+ 5) So, in the best case, data will be stored and retrieved from memory. We replace the values in iatt with the values in inode_ctx.
+
+ 6) File ops that change the parent directory attr times need to be consistent across all the distributed directories across the subvolumes. (for eg: a create call will change ctime and mtime of the parent dir)
+
+    - This has to be handled separately because we only send the fop to the hashed subvolume.
+    - We can asynchronously send the time-update setattr fop to the other subvolumes and change the values for the parent directory if the file fop is successful on the hashed subvolume.
+    - This will have a window where the times are inconsistent across the dht subvolumes (Please provide your suggestions)
+
+7) Currently we have a couple of mount options for time attributes, like noatime, relatime, nodiratime etc. But we do not explicitly handle those options even if they are given as mount options for a gluster mount.
+
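+As a hedged illustration (not part of the design above; the mount point, brick path and the on-disk xattr name are assumptions), once the feature is in place the times reported by stat should match regardless of which brick serves them, and the stored values can be inspected on the brick backend:
+
    # stat /mnt/glustervol/file                      # times seen by the client
    # stat /bricks/brick1/file                       # backend times may differ per brick
    # getfattr -d -m . -e hex /bricks/brick1/file    # look for the time xattr (assumed: trusted.glusterfs.mdata)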
+
+#### Implementation Overview:
+This feature involves changes in the following xlators.
+ - utime xlator
+ - posix xlator
+
+##### utime xlator:
+This is a new client side xlator which does the following tasks.
+
+1. It generates a time stamp and passes it down in frame->root->ctime and over the network.
+2. Based on the fop, it also decides the time attributes to be updated, and this is passed using "frame->root->flags"
+
+ Patches:
+ 1. https://review.gluster.org/#/c/19857/
+
+##### posix xlator:
+The following tasks are done in the posix xlator:
+
+1. Provides APIs to set and get the xattr from the backend. It also caches the xattr in the inode context. During get, it copies the time attributes stored in the xattr into the iatt structure.
+2. Based on the flags from the utime xlator, the relevant fops update the time attributes in the xattr.
+
+ Patches:
+ 1. https://review.gluster.org/#/c/19267/
+ 2. https://review.gluster.org/#/c/19795/
+ 3. https://review.gluster.org/#/c/19796/
+
+#### Pending Work:
+1. Handling of time related mount options (noatime, relatime, etc.)
+2. Flag based create (depending on flags in open, create behaviour might change)
+3. Changes in dht for directory sync across multiple subvolumes
+4. readdirp stat needs to be worked on.
diff --git a/doc/features/file-snapshot.md b/doc/features/file-snapshot.md
deleted file mode 100644
index 7f7c419fc7f..00000000000
--- a/doc/features/file-snapshot.md
+++ /dev/null
@@ -1,91 +0,0 @@
-#File Snapshot
-This feature gives the ability to take snapshots of files.
-
-##Description
-This feature adds file snapshotting support to glusterfs. Snapshots can be created, deleted and reverted.
-
-To take a snapshot of a file, the file should be in QCOW2 format, as the code for the block layer snapshot has been taken from Qemu and put into gluster as a translator.
-
-With this feature, glusterfs will have better integration with Openstack Cinder, and in general the ability to take snapshots of files (typically VM images).
-
-A new extended attribute (xattr) will be added to identify files which are 'snapshot managed' vs raw files.
-
-##Volume Options
-The following volume option needs to be set on the volume for taking file snapshots.
-
- # features.file-snapshot on
-##CLI parameters
-The following CLI parameters need to be passed with the setfattr command to create, delete and revert file snapshots.
-
- # trusted.glusterfs.block-format
- # trusted.glusterfs.block-snapshot-create
- # trusted.glusterfs.block-snapshot-goto
-##Fully loaded Example
-Download the glusterfs 3.5 RPMs from download.gluster.org and install them.
-
-Start glusterd by using the command
-
- # service glusterd start
-Now create a volume by using the command
-
- # gluster volume create <vol_name> <brick_path>
-Run the command below to make sure that the volume is created.
-
- # gluster volume info
-Now turn on the snapshot feature on the volume by using the command
-
- # gluster volume set <vol_name> features.file-snapshot on
-Verify that the option is set by using the command
-
- # gluster volume info
-The user should be able to see another option in the volume info
-
- # features.file-snapshot: on
-Now mount the volume using fuse mount
-
- # mount -t glusterfs <vol_name> <mount point>
-cd into the mount point
- # cd <mount_point>
- # touch <file_name>
-The size of the file can be set, and its format changed to QCOW2, by running the command below. The file size can be in KB/MB/GB.
-
- # setfattr -n trusted.glusterfs.block-format -v qcow2:<file_size> <file_name>
-Now create another file and send data to that file by running the command
-
- # echo 'ABCDEFGHIJ' > <data_file1>
-copy the data to the one file to another by running the command
-
- # dd if=data-file1 of=big-file conv=notrunc
-Now take the `snapshot of the file` by running the command
-
- # setfattr -n trusted.glusterfs.block-snapshot-create -v <image1> <file_name>
-Add some more content to the file and take another file snapshot by doing the following steps
-
- # echo '1234567890' > <data_file2>
- # dd if=<data_file2> of=<file_name> conv=notrunc
- # setfattr -n trusted.glusterfs.block-snapshot-create -v <image2> <file_name>
-Now `revert` both the file snapshots and write data to some files so that data can be compared.
-
- # setfattr -n trusted.glusterfs.block-snapshot-goto -v <image1> <file_name>
- # dd if=<file_name> of=<out-file1> bs=11 count=1
- # setfattr -n trusted.glusterfs.block-snapshot-goto -v <image2> <file_name>
- # dd if=<file_name> of=<out-file2> bs=11 count=1
-Now read the contents of the files and compare as below:
-
- # cat <data_file1>, <out_file1> and compare contents.
- # cat <data_file2>, <out_file2> and compare contents.
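-
-For clarity, a concrete run of the steps above might look like this (the file names and the `10GB` size are all hypothetical):
-
    # setfattr -n trusted.glusterfs.block-format -v qcow2:10GB big-file
    # echo 'ABCDEFGHIJ' > data-file1
    # dd if=data-file1 of=big-file conv=notrunc
    # setfattr -n trusted.glusterfs.block-snapshot-create -v image1 big-file
    # echo '1234567890' > data-file2
    # dd if=data-file2 of=big-file conv=notrunc
    # setfattr -n trusted.glusterfs.block-snapshot-create -v image2 big-file
    # setfattr -n trusted.glusterfs.block-snapshot-goto -v image1 big-file
    # dd if=big-file of=out-file1 bs=11 count=1
    # cat out-file1        # expected: ABCDEFGHIJ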
-##One line description for the variables used
-file_name = File which will be created in the mount point initially.
-
-data_file1 = File which contains data 'ABCDEFGHIJ'
-
-image1 = First file snapshot which has 'ABCDEFGHIJ' + some null values.
-
-data_file2 = File which contains data '1234567890'
-
-image2 = second file snapshot which has '1234567890' + some null values.
-
-out_file1 = After reverting image1 this contains 'ABCDEFGHIJ'
-
-out_file2 = After reverting image2 this contains '1234567890'
diff --git a/doc/features/ganesha-ha.md b/doc/features/ganesha-ha.md
new file mode 100644
index 00000000000..4b226a22ccf
--- /dev/null
+++ b/doc/features/ganesha-ha.md
@@ -0,0 +1,43 @@
+# Overview of Ganesha HA Resource Agents in GlusterFS 3.7
+
+The ganesha_mon RA monitors its ganesha.nfsd daemon. While the
+daemon is running, it creates two attributes: ganesha-active and
+grace-active. When the daemon stops for any reason, the attributes
+are deleted. Deleting the ganesha-active attribute triggers the
+failover of the virtual IP (the IPaddr RA) to another node —
+according to constraint location rules — where ganesha.nfsd is
+still running.
+
+The ganesha_grace RA monitors the grace-active attribute. When
+the grace-active attribute is deleted, the ganesha_grace RA stops,
+and will not restart. This triggers pacemaker to invoke the notify
+action in the ganesha_grace RAs on the other nodes in the cluster,
+which send a DBUS message to their respective ganesha.nfsd.
+
+(N.B. grace-active is a bit of a misnomer. While the grace-active
+attribute exists, everything is normal and healthy. Deleting the
+attribute triggers putting the surviving ganesha.nfsds into GRACE.)
+
+To ensure that the remaining/surviving ganesha.nfsds are put into
+NFS-GRACE before the IPaddr (virtual IP) fails over, there is a
+short delay (sleep) between deleting the grace-active attribute
+and the ganesha-active attribute. To summarize, e.g. in a four
+node cluster:
+
+1. on node 2 ganesha_mon::monitor notices that ganesha.nfsd has died
+
+2. on node 2 ganesha_mon::monitor deletes its grace-active attribute
+
+3. on node 2 ganesha_grace::monitor notices that grace-active is gone
+and returns OCF_ERR_GENERIC, a.k.a. new error. When pacemaker tries
+to (re)start ganesha_grace, its start action will return
+OCF_NOT_RUNNING, a.k.a. known error, don't attempt further restarts.
+
+4. on nodes 1, 3, and 4, ganesha_grace::notify receives a post-stop
+notification indicating that node 2 is gone, and sends a DBUS message
+to its ganesha.nfsd, putting it into NFS-GRACE.
+
+5. on node 2 ganesha_mon::monitor waits a short period, then deletes
+its ganesha-active attribute. This triggers the IPaddr (virt IP)
+failover according to constraint location rules.
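+
+A hedged way to watch this sequence from any node (these are standard pacemaker CLI tools, not something defined by this document) is to stop ganesha.nfsd on one node and observe the node attributes and the virtual IP placement:
+
    # pcs status          # shows where the virtual IP (IPaddr) resources are running
    # crm_mon -A -1       # -A lists node attributes such as ganesha-active and grace-active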
+
diff --git a/doc/features/nufa.md b/doc/features/nufa.md
deleted file mode 100644
index 03b8194b4c0..00000000000
--- a/doc/features/nufa.md
+++ /dev/null
@@ -1,20 +0,0 @@
-# NUFA Translator
-
-The NUFA ("Non Uniform File Access") is a variant of the DHT ("Distributed Hash
-Table") translator, intended for use with workloads that have a high locality
-of reference. Instead of placing new files pseudo-randomly, it places them on
-the same nodes where they are created so that future accesses can be made
-locally. For replicated volumes, this means that one copy will be local and
-others will be remote; the read-replica selection mechanisms will then favor
-the local copy for reads. For non-replicated volumes, the only copy will be
-local.
-
-## Interface
-
-Use of NUFA is controlled by a volume option, as follows.
-
- gluster volume set myvolume cluster.nufa on
-
-This will cause the NUFA translator to be used wherever the DHT translator
-otherwise would be. The rest is all automatic.
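-
-As a hedged illustration (server, volume and path names are hypothetical), the placement can be checked by creating a file from a client running on one of the server nodes and asking DHT where it landed:
-
    mount -t glusterfs server1:/myvolume /mnt/myvolume
    touch /mnt/myvolume/localfile
    getfattr -n trusted.glusterfs.pathinfo /mnt/myvolume/localfile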
-
diff --git a/doc/features/rdma-cm-in-3.4.0.txt b/doc/features/rdma-cm-in-3.4.0.txt
deleted file mode 100644
index fd953e56b3f..00000000000
--- a/doc/features/rdma-cm-in-3.4.0.txt
+++ /dev/null
@@ -1,9 +0,0 @@
-Following is the impact of http://review.gluster.org/#change,149.
-
-New userspace packages needed:
-librdmacm
-librdmacm-devel
-
-rdmacm needs an IPoIB address for connection establishment. This requirement results in the following issues:
-* Because of bug #890502, we have to probe the peer on an IPoIB address. This imposes a restriction that all volumes created in the future have to communicate over an IPoIB address (irrespective of whether they use gluster's tcp or rdma transport).
-* Currently the client is free to choose between tcp and rdma transports while communicating with the server (by creating volumes with transport-type tcp,rdma). This independence was a byproduct of our ability to use the normal channel used with transport-type tcp for the rdma connection establishment handshake as well. However, with the new requirement of an IPoIB address for connection establishment, we lose this independence (till we bring in multi-network support - where a brick can be identified by a set of ip-addresses and we can choose different pairs of ip-addresses for communication based on our requirements - in glusterd).
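-
-As a hedged illustration of the IPoIB requirement (addresses, volume and brick names are hypothetical), peers are probed on their IPoIB addresses and a volume offering both transports is created as usual:
-
    gluster peer probe 192.168.10.2        # IPoIB address of the peer
    gluster volume create rdmavol transport tcp,rdma 192.168.10.1:/bricks/b1 192.168.10.2:/bricks/b1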
diff --git a/doc/features/rebalance.md b/doc/features/rebalance.md
deleted file mode 100644
index 29b993008d2..00000000000
--- a/doc/features/rebalance.md
+++ /dev/null
@@ -1,74 +0,0 @@
-## Background
-
-
-For a more detailed description, view Jeff Darcy's blog post [here](http://hekafs.org/index.php/2012/03/glusterfs-algorithms-distribution/)
-
-GlusterFS uses the distribute translator (DHT) to aggregate the space of multiple servers. DHT distributes files among its subvolumes using a consistent hashing method providing 32-bit hashes. Each DHT subvolume is given a range in the 32-bit hash space. A hash value is calculated for every file from its name. The file is then placed in the subvolume with the hash range that contains the hash value.
-
-## What is rebalance?
-
-The rebalance process migrates files between the DHT subvolumes when necessary.
-
-## When is rebalance required?
-
-Rebalancing is required for two main cases.
-
-1. Addition/Removal of bricks
-
-2. Renaming of a file
-
-## Addition/Removal of bricks
-
-Whenever the number or order of DHT subvolumes change, the hash range given to each subvolume is recalculated. When this happens, already existing files on the volume will need to be moved to the correct subvolume based on their hash. Rebalance does this activity.
-
-Addition of bricks that increases the size of a volume will increase the number of DHT subvolumes and lead to recalculation of hash ranges (this doesn't happen when bricks are added to a volume to increase redundancy, i.e. to increase the replica count of a volume). This will require an explicit rebalance command to be issued to migrate the files.
-
-Removal of bricks that decreases the size of a volume also causes the hash ranges of DHT to be recalculated. But we don't need to issue an explicit rebalance command in this case, as rebalance is done automatically by the remove-brick process if needed.
-
-## Renaming of a file
-
-Renaming a file will cause its hash to change. The file now needs to be moved to the correct subvolume based on its new hash. Rebalance does this.
-
-## How does rebalance work?
-
-At a high level, the rebalance process consists of the following 3 steps:
-
-1. Crawl the volume to access all files
-2. Calculate the hash for the file
-3. If needed, migrate the file to the correct subvolume.
-
-
-The rebalance process has been optimized by making it distributed across the trusted storage pool. With distributed rebalance, a rebalance process is launched on each peer in the cluster. Each rebalance process will crawl files on only those bricks of the volume which are present on it, and migrate the files which need migration to the correct brick. This speeds up the rebalance process considerably.
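-
-For reference, a rebalance is started and monitored with the standard CLI (volume name hypothetical):
-
    # gluster volume rebalance myvolume start
    # gluster volume rebalance myvolume status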
-
-## What will happen if rebalance is not run?
-
-### Addition of bricks
-
-With the current implementation of add-brick, when the size of a volume is augmented by adding new bricks, the new bricks are not put into use immediately, i.e., the hash ranges are not recalculated immediately. This means that files will still be placed only onto the existing bricks, leaving the newly added storage space unused. Starting a rebalance process on the volume will cause the hash ranges to be recalculated with the new bricks included, which allows the newly added storage space to be used.
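-
-For example (volume and brick names hypothetical), after growing a volume an explicit rebalance puts the new space into use:
-
    # gluster volume add-brick myvolume server3:/bricks/b1
    # gluster volume rebalance myvolume start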
-
-### Renaming a file
-
-When a file rename causes the file to be hashed to a new subvolume, DHT writes a link file on the new subvolume leaving the actual file on the original subvolume. A link file is an empty file, which has an extended attribute set that points to the subvolume on which the actual file exists. So, when a client accesses the renamed file, DHT first looks for the file in the hashed subvolume and gets the link file. DHT understands the link file, and gets the actual file from the subvolume pointed to by the link file. This leads to a slight reduction in performance. A rebalance will move the actual file to the hashed subvolume, allowing clients to access the file directly once again.
-
-## Are clients affected during a rebalance process?
-
-The rebalance process is transparent to applications on the clients. Applications which have open files on the volume will not be affected by the rebalance process, even if the open file requires migration. The DHT translator on the client will hide the migration from the applications.
-
-## How are open files migrated?
-
-(A more technical description of the algorithm used can be seen in the commit message of commit a07bb18c8adeb8597f62095c5d1361c5bad01f09.)
-
-To achieve migration of open files, two things need to be ensured:
-a) any writes or changes happening to the file during migration are correctly synced to the destination subvolume after the migration is complete.
-b) any further changes should be made to the destination subvolume.
-
-Both of these requirements require sending notifications to clients. Clients are notified by overloading an attribute used in every callback function. DHT understands these attributes in the callbacks and can thereby tell whether a file is being migrated or not.
-
-During rebalance, a file will be in one of two phases
-
-1. Migration in process - In this phase the file is being migrated by the rebalance process from the source subvolume to the destination subvolume. The rebalance process will set an 'in-migration' attribute on the file, which will notify the clients' DHT translator. The clients' DHT translator will then take care to send any further changes to the destination subvolume as well. This way we satisfy the first requirement.
-
-2. Migration completed - Once the file has been migrated, the rebalance process will set a 'migration-complete' attribute on the file. The clients will be notified of the completion and all further operations on the file will happen on the destination subvolume.
-
-The DHT translator handles the above and allows the applications on the clients to continue working on a file under migration.
diff --git a/doc/features/server-quorum.md b/doc/features/server-quorum.md
deleted file mode 100644
index 7b20084cea8..00000000000
--- a/doc/features/server-quorum.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Server Quorum
-
-Server quorum is a feature intended to reduce the occurrence of "split brain"
-after a brick failure or network partition. Split brain happens when different
-sets of servers are allowed to process different sets of writes, leaving data
-in a state that can not be reconciled automatically. The key to avoiding split
-brain is to ensure that there can be only one set of servers - a quorum - that
-can continue handling writes. Server quorum does this by the brutal but
-effective means of forcing down all brick daemons on cluster nodes that can no
-longer reach enough of their peers to form a majority. Because there can only
-be one majority, there can be only one set of bricks remaining, and thus split
-brain can not occur.
-
-## Options
-
-Server quorum is controlled by two parameters:
-
- * **cluster.server-quorum-type**
-
- This value may be "server" to indicate that server quorum is enabled, or
- "none" to mean it's disabled.
-
- * **cluster.server-quorum-ratio**
-
- This is the percentage of cluster nodes that must be up to maintain quorum.
- More precisely, this percentage of nodes *plus one* must be up.
-
-Note that these are cluster-wide flags. All volumes served by the cluster will
-be affected. Once these values are set, quorum actions - starting or stopping
-brick daemons in response to node or network events - will be automatic.
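-
-As a hedged example (the volume name and the ratio value are placeholders), both parameters are set with the volume set command; the ratio is a cluster-wide option set on the special volume name "all":
-
    # gluster volume set myvolume cluster.server-quorum-type server
    # gluster volume set all cluster.server-quorum-ratio 51%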
-
-## Best Practices
-
-If a cluster with an even number of nodes is split exactly down the middle,
-neither half can have quorum (which requires **more than** half of the total).
-This is particularly important when N=2, in which case the loss of either node
-leads to loss of quorum. Therefore, it is highly advisable to ensure that the
-cluster size is three or greater. The "extra" node in this case need not have
-any bricks or serve any data. It need only be present to preserve the notion
-of a quorum majority less than the entire cluster membership, allowing the
-cluster to survive the loss of a single node without losing quorum.
-
-
-
diff --git a/doc/features/worm.md b/doc/features/worm.md
deleted file mode 100644
index dba99777da5..00000000000
--- a/doc/features/worm.md
+++ /dev/null
@@ -1,75 +0,0 @@
-#WORM (Write Once Read Many)
-This feature enables you to create a `WORM volume` using the gluster CLI.
-##Description
-WORM (write once, read many) is a desired feature for users who want to store data such as `log files` where the data is not allowed to be modified.
-
-GlusterFS provides a new key `features.worm` which takes boolean values (enable/disable) for volume set.
-
-Internally, the volume set command with the 'features.worm' key will add the 'features/worm' translator to the brick's volume file.
-
-`This change would be reflected on a subsequent restart of the volume`, i.e gluster volume stop, followed by a gluster volume start.
-
-With a volume converted to WORM, the changes are as follows:
-
-* Reads are handled normally
-* Only writes to files opened with the O_APPEND flag will be supported.
-* Truncation and deletion won't be supported.
-
-##Volume Options
-Use the volume set command on a volume and see if the volume is actually turned into WORM type.
-
- # features.worm enable
-##Fully loaded Example
-The WORM feature is supported from glusterfs version 3.4.
-
-Start glusterd by using the command
-
- # service glusterd start
-Now create a volume by using the command
-
- # gluster volume create <vol_name> <brick_path>
-Start the created volume by running the command below.
-
- # gluster vol start <vol_name>
-Run the command below to make sure that the volume is created.
-
- # gluster volume info
-Now turn on the WORM feature on the volume by using the command
-
- # gluster vol set <vol_name> worm enable
-Verify that the option is set by using the command
-
- # gluster volume info
-The user should be able to see another option in the volume info
-
- # features.worm: enable
-Now restart the volume for the changes to reflect, by performing volume stop and start.
-
    # gluster volume stop <vol_name>
    # gluster volume start <vol_name>
-Now mount the volume using fuse mount
-
- # mount -t glusterfs <vol_name> <mnt_point>
-Create a file inside the mount point by running the command below
-
- # touch <file_name>
-Verify that the user is able to create a file by running the command below
-
- # ls <file_name>
-
-##How To Test
-Now try deleting the file which has been created above
-
- # rm <file_name>
-Since WORM is enabled on the volume, it gives the following error message `rm: cannot remove '/<mnt_point>/<file_name>': Read-only file system`
-
-Put some content into the file created above.
-
- # echo "at the end of the file" >> <file_name>
-Now try editing the file by running the command below and verify that it fails with a `Read-only file system` error
-
- # sed -i "1iAt the beginning of the file" <file_name>
-Now read the contents of the file and verify that file can be read.
-
    # cat <file_name>
-
-`Note: If the WORM option is set on the volume before it is started, then the volume need not be restarted for the changes to get reflected`.