path: root/tests/snapshot.rc
Commit log (each entry: commit message, author, date, files changed, lines -/+)
* tests: use trap mechanism to ensure that proper cleanups happen (Jeff Darcy, 2016-04-12; 1 file changed, -0/+5)
  This actually consists of several parts:

  * Added a generic cleanup-scheduling mechanism. Instead of calling
    "trap ... EXIT" directly, just call "push_trapfunc ..." and your cleanup
    function will be called along with any others.
  * Converted a few tests to use push_trapfunc.
  * Added "push_trapfunc cleanup_lvm" to snapshot.rc to address the particular
    problem that's driving this: snapshot tests not calling cleanup_lvm on
    their own and leaving bad state for the next test.

  Change-Id: I548a97a26328390992fc71ee1f03c0463703f9d7
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: http://review.gluster.org/13933
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
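  Illustrative sketch of such a trap-stacking helper (the real helper lives in
  the shared test harness and may differ in detail):

      # Keep a list of registered cleanup functions and run them all on EXIT,
      # instead of letting each "trap ... EXIT" overwrite the previous one.
      TRAP_FUNCS=""

      run_trapfuncs () {
          local f
          for f in $TRAP_FUNCS; do
              $f
          done
      }

      push_trapfunc () {
          TRAP_FUNCS="$TRAP_FUNCS $1"
          trap run_trapfuncs EXIT
      }

      # In a snapshot test, schedule LVM cleanup without clobbering other traps:
      push_trapfunc cleanup_lvm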
* snapshot: Decreasing VHD_SIZE to 300M (Avra Sengupta, 2016-01-28; 1 file changed, -3/+3)
  Decreasing the VHD_SIZE in snapshot.rc: 1G LVs are not needed for test
  setups, and allocating that much space can prevent the tests from running
  on some setups.

  Change-Id: I46ad0e2751301ba9f19f7ac548d41fa4521baa75
  BUG: 1302234
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/13297
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
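  In snapshot.rc the change amounts to roughly this (illustrative sketch):

      # Size of the virtual disk backing each test LV (was 1G):
      VHD_SIZE="300M"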
* snapshot/clone : Fix tier pause failure for snapshot clone (Avra Sengupta, 2015-12-02; 1 file changed, -0/+7)
  On a tiered volume, snapshot clone fails while trying to pause the tier,
  because we pass the snap volname to the brick_op_phase module, which looks
  for the snap volume amongst regular volumes, obviously doesn't find it, and
  fails.

  Since snapshot volumes are read-only and will not have the tiering daemon
  acting upon them, there is really no need to pause tiering while taking a
  clone of a snapshot volume. Hence removing the code that pauses and resumes
  tiering during clone create.

  Change-Id: I2266aba589a830a13a806c0d8a56fd8855143ccd
  BUG: 1279327
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/12548
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* snapshot: Inherit snap-max-hard-limit from original volume (Avra Sengupta, 2015-11-02; 1 file changed, -0/+16)
  A snapshot should inherit snap-max-hard-limit from the original volume when
  it is created, and restore the same value when the volume is restored to it.
  Similarly, a clone taken from a snapshot should inherit snap-max-hard-limit
  from the snapshot.

  Change-Id: If8e90e2ffc10e22086b803ac8e2638a16bcec968
  BUG: 1275616
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/12437
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
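  Illustrative sketch of exercising that inheritance in the test harness (the
  check helper is hypothetical, not something defined by this commit):

      TEST $CLI snapshot config $V0 snap-max-hard-limit 100   # set on the volume
      TEST $CLI snapshot create snap1 $V0 no-timestamp        # snapshot inherits 100
      TEST $CLI snapshot clone clone1 snap1                   # clone inherits it too
      EXPECT "100" get_snap_max_hard_limit clone1             # hypothetical helper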
* snapshot: Fix snapshot info's xml output (Avra Sengupta, 2015-08-24; 1 file changed, -0/+14)
  Display the description field with (null) if no description is present for
  the snapshot, instead of removing the field altogether.

  Change-Id: I965b08cd6e54eea56c32e2712fab7daa8a663f11
  BUG: 1250387
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/11834
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* Tests portability: umount(8) (Emmanuel Dreyfus, 2015-06-09; 1 file changed, -2/+2)
  1) Avoid hangs on unmounting NFS on NetBSD

  NetBSD umount(8) on an NFS mount whose server is gone will wait forever,
  because umount(8) calls realpath(3) and tries to access the mount before it
  calls unmount(2). The non-portable, NetBSD-specific umount -R flag prevents
  that behavior. We therefore introduce UMOUNT_F, defined as "umount -f" on
  Linux and "umount -f -R" on NetBSD, to take care of forced unmounts,
  especially in the NFS case.

  2) Enforce usage of the force_umount wrapper with timeout

  Whenever umount is used it should be wrapped in force_umount with timeout
  handling. That saves us from timing issues, and it handles the NetBSD NFS
  case.

  3) Clean up kernel cache flushing

  We used (cd $M0 && umount $M0) as a portable kernel-cache flush trick, but
  it does not flush everything we need on Linux. Introduce a drop_cache()
  shell function that reverts to the previously used
  "echo 3 > /proc/sys/vm/drop_caches" on Linux, and keeps
  (cd $M0 && umount $M0) on other systems.

  BUG: 1129939
  Change-Id: Iab1f5a023405f1f7270c42b595573702ca1eb6f3
  Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
  Reviewed-on: http://review.gluster.org/11114
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
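  A sketch of what the two helpers described above can look like (the exact
  definitions in the shared harness files may differ):

      case $(uname -s) in
      NetBSD)
          # -R skips the realpath(3) lookup that hangs when the NFS server is gone
          UMOUNT_F="umount -f -R"
          ;;
      *)
          UMOUNT_F="umount -f"
          ;;
      esac

      drop_cache () {
          case $(uname -s) in
          Linux)
              # flush page cache, dentries and inodes
              echo 3 > /proc/sys/vm/drop_caches
              ;;
          *)
              # the in-tree umount attempt fails but still flushes kernel caches
              ( cd $1 && umount $1 )
              ;;
          esac
      }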
* snapshot: Handshake with glusterd is not proper (Mohammed Rafi KC, 2015-05-07; 1 file changed, -0/+27)
  If a snap is activated or deactivated while a node is down, that node does
  not retrieve the data properly during the glusterd handshake.

  With this patch, a version check is made when glusterd starts. If there is
  a mismatch in version, the peers exchange the healed data.

  Change-Id: I8bd2a347723db2194d3fa73295878b4dd2e9be5d
  BUG: 1122377
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/9664
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* snapshot/uss: fix regression failure in bug-1162498.t (Avra Sengupta, 2015-05-04; 1 file changed, -0/+4)
  .snaps seems to take some time before it becomes available, depending on the
  state of the system. Use EXPECT_WITHIN instead of TEST to check the contents
  of .snaps, giving it some time to come up.

  Change-Id: Iac166500d5a09ba8bab00d994c27a9ad0a01b9c3
  BUG: 1218120
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/10518
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
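  The pattern in question, roughly (helper name and timeout are illustrative,
  not taken from the test):

      # Racy: fails outright if .snaps has not been populated yet
      # TEST ls $M0/.snaps/snap1

      # Retried: poll until the entry appears or the timeout expires
      function uss_snap_count () {
          ls $M0/.snaps 2>/dev/null | wc -l
      }
      EXPECT_WITHIN 30 "1" uss_snap_count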
* tests: fix volume_exists to be used from EXPECT_WITHIN (Jeff Darcy, 2015-04-01; 1 file changed, -6/+5)
  Fixes the spurious volume-snapshot-clone.t regression failures. In brief,
  the problem is that the script wasn't waiting for config commands to
  complete, and would *sometimes* query the status of a volume while that
  volume was still being deleted. It turns out that "!" doesn't work properly
  from EXPECT_WITHIN, so there was a choice between changing that or changing
  volume_exists. This seemed less risky. Because of code duplication, two
  instances of the function had to be changed, and the other caller
  (volume-snapshot.t) did too.

  Change-Id: I766d4dc7c5b11038ede8e45d9d1f29cd02a622a0
  BUG: 1163543
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: http://review.gluster.org/10053
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
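  A minimal sketch of a volume_exists that works under EXPECT_WITHIN (details
  assumed; the actual function in the harness may differ):

      # Print Y/N instead of returning a status, so EXPECT_WITHIN can poll it
      # directly ("!" cannot be used inside EXPECT_WITHIN).
      function volume_exists () {
          if $CLI volume info "$1" > /dev/null 2>&1; then
              echo "Y"
          else
              echo "N"
          fi
      }

      # Wait until a deleted volume has really gone away:
      EXPECT_WITHIN 30 "N" volume_exists $V0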
* snapshot: append timestamp with snapname (Mohammed Rafi KC, 2015-03-10; 1 file changed, -2/+2)
  Append a GMT timestamp to the snapname by default. If the no-timestamp flag
  is given during snapshot creation, the timestamp is not appended to the
  snapname.

  The initial consumer of this feature is Samba's Shadow Copy feature, which
  allows Windows users to get previous revisions of a file. For this feature
  to work, snapshot names under the .snaps folder (USS) should have a
  timestamp appended in the following format:

      @GMT-YYYY.MM.DD-hh.mm.ss

  PS: https://www.samba.org/samba/docs/man/manpages/vfs_shadow_copy2.8.html

  This format is configurable via the Samba conf file. Due to a limitation in
  Windows directory access, the exact format cannot be used by USS, therefore
  we have modified the format to:

      _GMT-YYYY.MM.DD-hh.mm.ss

  The snapshot scheduling feature also requires a timestamp appended to the
  snapshot name, so the timestamp is appended during snapshot creation itself
  instead of making the changes in the snapview server.

  More info: https://www.mail-archive.com/gluster-users@gluster.org/msg18895.html

  Change-Id: Idac24670948cf4c0fbe916ea6690e49cbc832d07
  BUG: 1189473
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/9597
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
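  A name in that format can be produced with date(1); the generation inside
  glusterd may differ, this is only an illustration of the layout:

      # <snapname>_GMT-YYYY.MM.DD-hh.mm.ss
      snapname="snap1_$(date -u +GMT-%Y.%m.%d-%H.%M.%S)"
      echo "$snapname"    # e.g. snap1_GMT-2015.03.10-09.15.42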
* tests: move all test-cases into component subdirectories (Niels de Vos, 2015-01-06; 1 file changed, -0/+0)
  There are around 300 regression tests, 250 of them in tests/bugs. Running a
  partial set of tests/bugs is not easy because it is a flat directory with
  almost all tests inside. It would be valuable to make partial runs of
  tests/bugs easier, and to allow the use of multiple build hosts for a single
  commit, each running a subset of the tests for a quicker result.

  Additional changes made:
  - correct the include path for *.rc shell libraries and *.py utils
  - make the testcases pass checkpatch
  - arequal-checksum in afr/self-heal.t was never executed, now it is
  - include.rc now complains loudly if it fails to find env.rc

  Change-Id: I26ffd067e9853d3be1fd63b2f37d8aa0fd1b4fea
  BUG: 1178685
  Reported-by: Emmanuel Dreyfus <manu@netbsd.org>
  Reported-by: Atin Mukherjee <amukherj@redhat.com>
  URL: http://www.gluster.org/pipermail/gluster-devel/2014-December/043414.html
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/9353
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Emmanuel Dreyfus <manu@netbsd.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
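  After the move, a test under a component subdirectory reaches the shared
  *.rc libraries one level further up, e.g. (path and filename illustrative):

      # tests/bugs/snapshot/bug-NNNN.t
      . $(dirname $0)/../../include.rc
      . $(dirname $0)/../../snapshot.rc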
* glusterd/snapshot: Snapshot should be deactivated when it is created (vmallika, 2014-11-12; 1 file changed, -1/+5)
  By default a snapshot should be deactivated on creation, and this should be
  a configurable option. The behaviour can be configured with the command
  below:

      gluster snapshot config activate-on-create <enable|disable>

  Change-Id: I1911595c32beed43bb2fca4bf99f0d264b422513
  BUG: 1157991
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/8985
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
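  For example, to restore the previous behaviour of activating snapshots as
  soon as they are created:

      gluster snapshot config activate-on-create enable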
* USS : Display only the activated snapshots (Sachin Pandit, 2014-11-12; 1 file changed, -0/+6)
  Instead of displaying all the snapshots in the USS world, it is better if we
  display only the activated snapshots.

  Change-Id: I70d3ec212b62ec15956ae3e826bc4201d8dedd17
  BUG: 1155042
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8958
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* Regression test portability: TAP (Emmanuel Dreyfus, 2014-10-30; 1 file changed, -0/+1)
  Even with successful tests on NetBSD, we had a failure message at the end:
  "No plan found in TAP output". This was caused by a whitespace-left-padded
  numerical variable; stripping the whitespace fixes the problem.

  While there, add SKIP_TEST for NetBSD on unsupported tests so that they do
  not trigger a failure.

  BUG: 1129939
  Change-Id: I8d0bc125c4208974657977568d838ee2dd19783c
  Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
  Reviewed-on: http://review.gluster.org/8981
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
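  The kind of fix involved, sketched with hypothetical variable and file names
  (the actual harness code differs):

      testcnt=$(wc -l < tests-to-run.list)   # BSD wc left-pads: "      42"
      testcnt=$(echo $testcnt)               # unquoted expansion strips the padding
      echo "1..$testcnt"                     # clean TAP plan line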
* test : Fix for spurious failure (Sachin Pandit, 2014-09-23; 1 file changed, -0/+6)
  Problem: Once features.uss is enabled, the test does not wait for the snapd
  process to be created, so checking for the snapd pid immediately can fail
  because the pid is not yet present.

  Solution: Add an EXPECT_WITHIN that waits for the pid up to a certain time
  period.

  Change-Id: If075860173a996f9eee13b346e939686b94ec3f6
  BUG: 1145450
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8814
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Regression test portability: LVM (Emmanuel Dreyfus, 2014-08-31; 1 file changed, -0/+9)
  Skip any test involving LVM on NetBSD, as LVM is not supported there.

  BUG: 1129939
  Change-Id: I2237bae1128d1a81047c9ff7f905431156daf8b7
  Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
  Reviewed-on: http://review.gluster.org/8556
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
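  A sketch of the kind of guard this adds; the harness's real skip mechanism
  (SKIP_TEST, mentioned in the TAP portability change above) may behave
  differently:

      case $(uname -s) in
      NetBSD)
          # LVM is unavailable here, so none of the snapshot tests can run
          echo "Skipping LVM-based snapshot tests on NetBSD"
          exit 0
          ;;
      esac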
* feature/snapshot : Interface to delete all snapshots belonging to a system as well as to a particular volume (Sachin Pandit, 2014-07-25; 1 file changed, -0/+66)

  Problem: With the current design we can only delete a single snapshot, and
  deleting a volume which contains snapshots is not allowed. Because of that,
  a user may be forced to delete all the snapshots manually before being
  allowed to delete a volume.

  Solution: The following interface lets the user delete all the snapshots of
  a system, or all those belonging to a particular volume.

  Syntax: gluster snapshot delete all
      *To delete all the snapshots present in a system

  Syntax: gluster snapshot delete volume <volname>
      *To delete all the snapshots present in the specified volume

  ========================================================================
  Sample Output:

  Case 1: Deleting a single snapshot.
  [root@snapshot-24 glusterfs]# gluster snapshot delete snap1
  Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
  snapshot delete: snap1: snap removed successfully
  -----------------------------------------------------------------
  Case 2: Deleting all the snapshots in a Volume.
  [root@snapshot-24 glusterfs]# gluster snapshot delete volume vol1
  Volume (vol1) contains 9 snapshot(s).
  Do you still want to continue and delete them? (y/n) y
  snapshot delete: snap2: snap removed successfully
  snapshot delete: snap3: snap removed successfully
  snapshot delete: snap4: snap removed successfully
  snapshot delete: snap5: snap removed successfully
  .
  .
  .
  -----------------------------------------------------------------
  Case 3: Deleting all the snapshots in a system.
  [root@snapshot-24 glusterfs]# gluster snapshot delete all
  System contains 4 snapshot(s).
  Do you still want to continue and delete them? (y/n) y
  snapshot delete: snap7: snap removed successfully
  snapshot delete: snap8: snap removed successfully
  snapshot delete: snap9: snap removed successfully
  snapshot delete: snap10: snap removed successfully
  ========================================================================

  Change-Id: Ifec8e128ab2011cbbba208376b9c92cfbe7d8d71
  BUG: 1112613
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8162
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd/snapshot: Activation and De-activation of snapshot (Joseph Fernandes, 2014-05-02; 1 file changed, -0/+13)
  Previously, snapshots were by default activated on creation and there was no
  option to activate or deactivate them on demand. This allows the user to
  activate and deactivate on demand. The CLI is as follows:

  1) Activate the snap using the command
     "gluster snapshot activate <snapname> [force]"
  2) Deactivate the snap using the command
     "gluster snapshot deactivate <snapname>"

  Note: Even now the snapshot will be activated during creation.

  Change-Id: I0946d800780f26c63fa1fcaf29aabc900140448f
  BUG: 1061685
  Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  Reviewed-on: http://review.gluster.org/7476
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* [glusterd/snapshot] snapshot create force option (Joseph Fernandes, 2014-04-30; 1 file changed, -0/+11)
  Implement a force option in snapshot create, i.e.:

  1) Creation of a snapshot fails if the original volume's bricks are down.
  2) With the force option, creation of the snapshot continues even if the
     original volume's bricks are down.

  This was the fix for bugs 1089527 and 1083502.

  Change-Id: I8de0242adf8ee0af00db9fa8701d86fabc12e7fc
  BUG: 1090042
  Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  Reviewed-on: http://review.gluster.org/7520
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
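  Illustrative usage, assuming the trailing-keyword form used by the other
  snapshot commands (names are examples):

      # Fails if any brick of vol1 is down:
      gluster snapshot create snap1 vol1

      # Proceeds even with bricks down:
      gluster snapshot create snap1 vol1 force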
* build: MacOSX Porting fixes (Harshavardhana, 2014-04-24; 1 file changed, -1/+7)
  git@forge.gluster.org:~schafdog/glusterfs-core/osx-glusterfs

  Working functionality on MacOSX:
  - GlusterD (management daemon)
  - GlusterCLI (management cli)
  - GlusterFS FUSE (using OSXFUSE)
  - GlusterNFS (without NLM - issues with rpc.statd)

  Change-Id: I20193d3f8904388e47344e523b3787dbeab044ac
  BUG: 1089172
  Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
  Signed-off-by: Dennis Schafroth <dennis@schafroth.com>
  Tested-by: Harshavardhana <harsha@harshavardhana.net>
  Tested-by: Dennis Schafroth <dennis@schafroth.com>
  Reviewed-on: http://review.gluster.org/7503
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* gluster: GlusterFS Volume Snapshot Feature (Avra Sengupta, 2014-04-11; 1 file changed, -0/+290)
  This is the initial patch for the Snapshot feature. The current patch
  includes the following features:
  * Snapshot create
  * Snapshot delete
  * Snapshot restore
  * Snapshot list
  * Snapshot info
  * Snapshot status
  * Snapshot config

  Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9
  BUG: 1061685
  Signed-off-by: shishir gowda <sgowda@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/7128
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
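  A quick tour of the subcommands listed above (volume and snapshot names are
  examples; exact option spellings may have changed in later releases):

      gluster snapshot create snap1 vol1   # take a snapshot of vol1
      gluster snapshot list vol1           # list snapshots of vol1
      gluster snapshot info snap1          # details of a single snapshot
      gluster snapshot status snap1        # brick/LV-level status
      gluster snapshot config vol1         # show snapshot configuration
      gluster snapshot delete snap1        # remove the snapshot
      # Restore rolls a stopped volume back to a snapshot:
      #   gluster volume stop vol1 && gluster snapshot restore snap2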