authorRaghavendra Bhat <raghavendra@redhat.com>2018-06-08 09:54:00 -0400
committerAmar Tumballi <amarts@redhat.com>2018-08-24 04:18:55 +0000
commitf191bb7bc1abd250bdf0a5a6972ce95fcbd3314b (patch)
tree761211f59a43a954dc3312b570080faa68027be8 /tests
parent11a9f9f0dcca58446f306ef2060a345348ed91c1 (diff)
features/snapview-server: validate the fs instance before doing fop there
PROBLEM:
========

The USS design depends on the snapview-server translator communicating
with each individual snapshot via gfapi. So, the snapview-server xlator
maintains the glfs instance (and thus the snapshot) to which an inode
belongs by storing it inside the inode context.

Suppose a file from a snapshot is opened by an application, and the fd
is still valid from the application's point of view (i.e. the
application has not yet closed the fd). Now, if the snapshot to which
the opened file belongs is deleted, then the glfs_t instance
corresponding to that snapshot is destroyed by snapview-server as part
of the snap deletion. If the application then does IO on the fd it has
kept open, snapview-server tries to send that request to the
corresponding snap via the glfs instance stored in the inode context
of the file on which the application is sending the fop. This results
in a freed glfs_t pointer being accessed and causes a segfault.

FIX:
===

For fd-based operations, check whether the glfs instance that the
inode holds in its context is still valid. For non-fd-based
operations, lookup should usually guarantee that. But if the file was
already looked up, and the client accessing the snap data (either NFS
or the native glusterfs fuse client) does not bother to send a lookup
and directly sends a path-based fop, then that path-based fop should
ensure that the fs instance is valid.

Change-Id: I881be15ec46ecb51aa844d7fd41d5630f0d644fb
updates: bz#1602070
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
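The guard described under FIX can be pictured as a small validity check
run at the top of every fd-based (and, for clients that skip lookup,
path-based) fop handler. Below is a minimal sketch of that idea in C,
not the actual patch: svs_private_t's snapshot list, snaplist_lock,
snap_entry_t and svs_fs_instance_is_valid() are hypothetical names
standing in for whatever bookkeeping snapview-server really uses.

    /* Sketch, not the real patch: treat the glfs_t cached in the inode
     * context as valid only while its snapshot is still present in a
     * list that snapview-server maintains; snapshot deletion removes
     * the entry before destroying the glfs_t. */
    static gf_boolean_t
    svs_fs_instance_is_valid (xlator_t *this, glfs_t *fs)
    {
            svs_private_t *priv  = this->private;  /* hypothetical layout */
            snap_entry_t  *entry = NULL;
            gf_boolean_t   valid = _gf_false;

            LOCK (&priv->snaplist_lock);
            {
                    list_for_each_entry (entry, &priv->snaplist, list) {
                            if (entry->fs == fs) {
                                    valid = _gf_true;
                                    break;
                            }
                    }
            }
            UNLOCK (&priv->snaplist_lock);

            return valid;
    }

    /* in an fd-based fop handler, before winding the request to gfapi: */
    if (!svs_fs_instance_is_valid (this, inode_ctx->fs)) {
            op_ret   = -1;
            op_errno = EBADFD;  /* fail the fop instead of segfaulting */
            goto out;
    }

With a check of this shape, an fop on an fd whose snapshot has been
deleted fails with an error instead of dereferencing the freed glfs_t,
which is exactly what the test below exercises with `TEST ! fd_cat $fd1`.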
Diffstat (limited to 'tests')
-rw-r--r--  tests/basic/open-fd-snap-delete.t | 74
1 file changed, 74 insertions, 0 deletions
diff --git a/tests/basic/open-fd-snap-delete.t b/tests/basic/open-fd-snap-delete.t
new file mode 100644
index 00000000000..a9f47cac19d
--- /dev/null
+++ b/tests/basic/open-fd-snap-delete.t
@@ -0,0 +1,74 @@
+#!/bin/bash
+
+. $(dirname $0)/../include.rc
+. $(dirname $0)/../volume.rc
+. $(dirname $0)/../snapshot.rc
+. $(dirname $0)/../fileio.rc
+
+cleanup;
+
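+# gluster snapshots require LVM-backed bricks; init_n_bricks and
+# setup_lvm (from snapshot.rc) prepare thinly provisioned LVs for them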
+TEST init_n_bricks 3;
+TEST setup_lvm 3;
+
+# start glusterd
+TEST glusterd;
+
+TEST pidof glusterd;
+
+TEST $CLI volume create $V0 $H0:$L1 $H0:$L2 $H0:$L3;
+TEST $CLI volume set $V0 nfs.disable false
+
+
+TEST $CLI volume start $V0;
+
+TEST $GFS --volfile-server=$H0 --volfile-id=$V0 $M0;
+
+for i in {1..10} ; do echo "file" > $M0/file$i ; done
+
+# Create file and directory
+TEST touch $M0/f1
+TEST mkdir $M0/dir
+
+TEST $CLI snapshot config activate-on-create enable
+TEST $CLI volume set $V0 features.uss enable;
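+
+# with USS enabled, snapshots appear under the virtual .snaps
+# directory, served by the snapshot daemon (snapd) via snapview-server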
+
+for i in {1..10} ; do echo "file" > $M0/dir/file$i ; done
+
+TEST $CLI snapshot create snap1 $V0 no-timestamp;
+
+for i in {11..20} ; do echo "file" > $M0/file$i ; done
+for i in {11..20} ; do echo "file" > $M0/dir/file$i ; done
+
+TEST $CLI snapshot create snap2 $V0 no-timestamp;
+
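+# open a file that exists only in snap2 and keep the fd open across
+# the snapshot delete below; this is the access pattern that used to
+# make snapview-server dereference a freed glfs_t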
+TEST fd1=`fd_available`
+TEST fd_open $fd1 'r' $M0/.snaps/snap2/dir/file11;
+TEST fd_cat $fd1
+
+TEST $CLI snapshot delete snap2;
+
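+# with the fix, IO on the now-stale fd fails cleanly instead of
+# segfaulting snapview-server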
+TEST ! fd_cat $fd1;
+
+# the return value of this command (i.e. fd_close) depends
+# mainly on how the release operation on a file descriptor is
+# handled in the snapview-server process. As of now, snapview-server
+# returns 0 for the release operation, similar to what the posix
+# xlator does. So, for now, the expectation is that the close
+# operation succeeds.
+TEST fd_close $fd1;
+
+# This check is mainly to ensure that the snapshot daemon
+# (snapd) is up and running. If it is not running, the following
+# stat would fail with ENOTCONN.
+
+TEST stat $M0/.snaps/snap1/dir/file1
+
+TEST $CLI snapshot delete snap1;
+
+TEST rm -rf $M0/*;
+
+TEST $CLI volume stop $V0;
+
+TEST $CLI volume delete $V0;
+
+cleanup