path: root/xlators
Commit message | Author | Date | Files | Lines

* Fix invalid seekdir() usage (tag: v3.6.0beta3) | Emmanuel Dreyfus | 2014-09-30 | 3 | -3/+103
  According to POSIX, seekdir() should only be given an offset obtained
  from telldir() on the same DIR *:
  http://pubs.opengroup.org/onlinepubs/9699919799/functions/seekdir.html

  Code from afr-self-heald.c and index.c operates outside the
  specification by using seekdir() with an offset from a previously
  opened, closed, and re-opened directory. This seems to work on Linux
  (although with no guarantee that it always will). On NetBSD, a
  seekdir() with an invalid offset is a no-op, which causes an infinite
  loop, since index_fill_readdir() always restarts from the beginning
  of the directory.

  The situation is fixed by using a non-anonymous fd in
  afr-self-heald.c: we explicitly open the directory so that it remains
  open on the brick side during the timeframe in which we want to reuse
  offsets in seekdir(). This requires adding an opendir fop in the
  index xlator. If the brick was not updated, the opendir will fail and
  we fall back to the standard-violating approach, for backward
  compatibility on Linux. On other systems we fail, since it never
  worked.

  While there, add tests to check seekdir() success in the index and
  posix xlators, so that incorrect usage from calling code produces an
  explicit error instead of an infinite loop. We can only do this on
  non-Linux systems, for the sake of backward compatibility when the
  brick was updated but not the client.

  Backport of I88ca90acfcfee280988124bd6addc1a1893ca7ab
  BUG: 1138897
  Change-Id: I5446a9a17d5451ec5aab8fbd10d381da9a0a23ad
  Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
  Reviewed-on: http://review.gluster.org/8860
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

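  A minimal standalone sketch of the POSIX contract at issue (written
  for illustration, not taken from the patch; the directory path is
  arbitrary):

    #include <dirent.h>
    #include <stddef.h>

    int main(void)
    {
        DIR *dirp = opendir("/tmp");
        if (dirp == NULL)
            return 1;

        readdir(dirp);                /* consume one entry */
        long offset = telldir(dirp);  /* valid only for this DIR * */

        seekdir(dirp, offset);        /* OK: same open directory stream */
        readdir(dirp);
        closedir(dirp);

        /* The pattern the patch removes: reusing 'offset' after the
         * directory has been closed and re-opened is outside POSIX.
         * On Linux it happens to work; on NetBSD the seekdir() does
         * nothing, so a readdir loop restarting from the saved offset
         * never advances -- the infinite loop described above. */
        dirp = opendir("/tmp");
        if (dirp != NULL) {
            seekdir(dirp, offset);    /* undefined behaviour per POSIX */
            closedir(dirp);
        }
        return 0;
    }
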
* glusterd/quota: Heal pgfid xattr on existing data when quota is enabled | vmallika | 2014-09-30 | 3 | -3/+40
  This is a backport of http://review.gluster.org/#/c/8878/

  The pgfid extended attributes are used to construct the ancestry path
  (from the file to the volume root) for nameless lookups on files. As
  NFS relies heavily on nameless lookups, quota enforcement through NFS
  would be inconsistent if quota were enabled on a volume with existing
  data.

  The solution is to heal the pgfid extended attributes as part of the
  lookup performed by the quota-crawl process: in a posix lookup, check
  for the pgfid xattr and set it if it is missing.

  BUG: 1147953
  Change-Id: I707d91a056e07452bfd1e070af5eddaa752a84ac
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/8890
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* cluster/afr: Fix incorrect looping of index healer | Anuradha | 2014-09-29 | 1 | -4/+9
  Backport of: http://review.gluster.org/8868

  Sending the appropriate return value from afr_selfheal() fixes the
  issue.

  Credits: Krutika Dhananjay and Pranith Kumar.

  Change-Id: I1dc8105078e99dbc12295ef557a407bf3d8cfec3
  BUG: 1147486
  Signed-off-by: Anuradha <atalur@redhat.com>
  Reviewed-on: http://review.gluster.org/8884
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* cluster/afr: Launch self-heal only when all the brick statuses are known | Pranith Kumar K | 2014-09-29 | 1 | -2/+15
  Problem: A file goes into split-brain because of wrong erasing of
  xattrs.

  RCA: The issue happens because index self-heal is triggered even
  before all the bricks are up. So what ends up happening while erasing
  the xattrs is that xattrs are erased on the sink brick only for the
  brick that it thinks is up, leading to split-brain.

  Example: say the xattrs before heal started are:

  brick 2:
    trusted.afr.vol1-client-2=0x000000020000000000000000
    trusted.afr.vol1-client-3=0x000000020000000000000000

  brick 3:
    trusted.afr.vol1-client-2=0x000010040000000000000000
    trusted.afr.vol1-client-3=0x000000000000000000000000

  If only brick-2 came up at the time of triggering the self-heal, only
  'trusted.afr.vol1-client-2' is erased, leading to the following
  xattrs:

  brick 2:
    trusted.afr.vol1-client-2=0x000000000000000000000000
    trusted.afr.vol1-client-3=0x000000020000000000000000

  brick 3:
    trusted.afr.vol1-client-2=0x000010040000000000000000
    trusted.afr.vol1-client-3=0x000000000000000000000000

  So the file goes into split-brain.

  BUG: 1142612
  Change-Id: I0c8b66e154f03b636db052c97745399a7cca265b
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/8756
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* cluster/afr: Fix inode leak | Krutika Dhananjay | 2014-09-29 | 1 | -0/+2
  Backport of: http://review.gluster.org/8875

  Change-Id: Ib000be1238d38f8d63ff25b3873bb813bf72beec
  BUG: 1145914
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/8876
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: Improve debugging experience for glusterd locks | Krishnan Parthasarathi | 2014-09-26 | 1 | -8/+10
  Today, when glusterd's internal locking mechanism fails with an
  invalid type, or when another competing lock is being held, the log
  message doesn't directly provide enough information about which
  command saw this (first). Following is a snippet of how a failure now
  looks in the log file; this greatly assists in debugging.

    [2014-09-03 04:57:58.549418] E
    [glusterd-locks.c:520:glusterd_mgmt_v3_lock]
    (-->/usr/local/lib/glusterfs/3.7dev/xlator/mgmt/glusterd.so(__glusterd_handle_create_volume+0x801) [0x7f30b071e651]
    (-->/usr/local/lib/glusterfs/3.7dev/xlator/mgmt/glusterd.so(glusterd_op_begin_synctask+0x2c) [0x7f30b072e19c]
    (-->/usr/local/lib/glusterfs/3.7dev/xlator/mgmt/glusterd.so(gd_sync_task_begin+0x55d) [0x7f30b072de6d])))
    0-management: Invalid entity. Cannot perform locking operation on vol types

  Change-Id: I0595f49d60e620e8b065f3506bdb147ccee383a7
  BUG: 1145093
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/8842
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* Do not forbid fallocate on non-Linux systems | Emmanuel Dreyfus | 2014-09-26 | 1 | -7/+2
  Linux fallocate() differs from posix_fallocate() by an extra flag
  argument that can take the FALLOC_FL_KEEP_SIZE value. Do not test for
  FALLOC_FL_KEEP_SIZE existence to enable fallocate() in the posix
  xlator, as sys_fallocate() in libglusterfs provides support for both
  implementations.

  Backport of Idf41a0396028a15e81281791bf6912d7fd674e3f
  BUG: 1138897
  Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
  Change-Id: Ie6e5ea923561630c52a6db5c7f83313cfdc34811
  Reviewed-on: http://review.gluster.org/8862
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

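  A hedged sketch of the two interfaces in question (the helper name is
  illustrative; note the differing error conventions):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <errno.h>

    /* Reserve 'len' bytes at 'offset'. Returns 0 on success, an errno
     * value on failure. */
    static int reserve_space(int fd, off_t offset, off_t len)
    {
    #ifdef FALLOC_FL_KEEP_SIZE
        /* Linux fallocate(): extra 'mode' argument; FALLOC_FL_KEEP_SIZE
         * preallocates without extending the visible file size.
         * Returns -1 and sets errno on failure. */
        if (fallocate(fd, FALLOC_FL_KEEP_SIZE, offset, len) == -1)
            return errno;
        return 0;
    #else
        /* posix_fallocate(): no mode argument; the file is extended if
         * needed. Returns the error number directly, without touching
         * errno. This is the portable path on non-Linux systems. */
        return posix_fallocate(fd, offset, len);
    #endif
    }
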
* cluster/afr: More dict_t leak fixes | Krutika Dhananjay | 2014-09-26 | 2 | -33/+68
  Backport of: http://review.gluster.org/8852

  Change-Id: I2c927092dc5a834fabdd3495a7f7a3527604a6d2
  BUG: 1136831
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/8869
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* cluster/afr: Fix locking issues in entry self-heal | Krutika Dhananjay | 2014-09-26 | 1 | -92/+123
  Backport of: http://review.gluster.org/#/c/8837/

  Original reporter of the bug & designer of the solution:
  Pranith Kumar K <pkarampu@redhat.com>

  Change-Id: I6e36326e1d9398ede82166b358ab438367dd3011
  BUG: 1136829
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/8845
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* 32-bit fix: use off_t and not size_t for truncate() | Emmanuel Dreyfus | 2014-09-25 | 1 | -7/+7
  Make sure off_t, and not size_t, is used when holding file offsets
  for ftruncate()/truncate(). This works on 64-bit machines, where
  sizeof(size_t) == sizeof(off_t) == 8, but breaks for big offsets on
  32-bit machines, where sizeof(size_t) == 4 and sizeof(off_t) == 8.

  This is a backport of Ia2637be772ba9b11731d59fdbffbd269f0ff56c8
  BUG: 1138897
  Change-Id: I8fe77a86831f0db4eff5b5c89efe004b9a0b29e9
  Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
  Reviewed-on: http://review.gluster.org/8743
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

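  A small demonstration of the breakage mode (illustrative; build
  32-bit with -D_FILE_OFFSET_BITS=64 to reproduce, output is
  platform-dependent):

    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
        /* On 32-bit builds with large-file support, size_t stays 4
         * bytes while off_t is 8, so storing a file offset in a size_t
         * silently truncates anything past 4 GiB. */
        printf("sizeof(size_t) = %zu, sizeof(off_t) = %zu\n",
               sizeof(size_t), sizeof(off_t));

        off_t  offset    = (off_t)5 * 1024 * 1024 * 1024; /* 5 GiB */
        size_t truncated = (size_t)offset;  /* loses high bits on ILP32 */

        printf("off_t offset = %lld, via size_t = %llu\n",
               (long long)offset, (unsigned long long)truncated);
        return 0;
    }
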
* OSX/FreeBSD: Regression fix | Harshavardhana | 2014-09-24 | 1 | -4/+16
  Fixes a regression introduced in
  "1f6e992f1aaa676be5bd47d17e58f1171825cf43".

  Change-Id: Id684e2f082def7d01ef3c258ea6598da6205591f
  BUG: 1117822
  Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
  Reviewed-on: http://review.gluster.org/8840
  Reviewed-by: Justin Clift <justin@gluster.org>
  Tested-by: Justin Clift <justin@gluster.org>

* cluster/afr: Fix spurious metadata self-heals (tag: v3.6.0beta2) | Pranith Kumar K | 2014-09-24 | 7 | -29/+86
  Backport of http://review.gluster.org/8709

  - Added logging for metadata and data self-heals, which helped in
    debugging this issue.
  - Added checks to skip self-heals when no sinks are available to
    heal.

  BUG: 1145987
  Change-Id: Ide03af4f531a1280ec8ad95b627285df4d7bc42d
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/8832
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* cluster/afr: Fixed mem leaks in self-heal code path | Anuradha | 2014-09-24 | 2 | -1/+17
  Backport of: http://review.gluster.org/8821

  AFR_STACK_RESET previously didn't clean up afr_local_t, leading to
  memory leaks. With this patch, cleanup is done.

  All credit goes to Pranith Kumar Karampuri.

  Change-Id: I26506dfd9273b917eff5127c3e0cf9421e60f228
  BUG: 1145914
  Reviewed-on: http://review.gluster.org/8831
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>

* Do not hardcode setfattr(1) path | Emmanuel Dreyfus | 2014-09-24 | 1 | -1/+13
  Turn the setfattr(1) absolute path into an OS-dependent macro. Let a
  compiler option override it to fit custom installations if needed.

  Backport of I8f469c5741a85b6e8d8f6299a9540b3d64611d2f
  BUG: 1138897
  Change-Id: I279752f2ec5db1abc25830cb9a23290cc401d517
  Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
  Reviewed-on: http://review.gluster.org/8828
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

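  The general pattern, as a hedged sketch (the macro name and default
  paths here are illustrative, not necessarily what the patch uses):

    #include <stdio.h>

    /* Default can be overridden at build time, e.g.
     *   cc -DSETFATTR_CMD='"/usr/pkg/bin/setfattr"' ...
     * to fit a custom installation. */
    #ifndef SETFATTR_CMD
    #ifdef __linux__
    #define SETFATTR_CMD "/usr/bin/setfattr"
    #else
    #define SETFATTR_CMD "/usr/local/bin/setfattr"
    #endif
    #endif

    int main(void)
    {
        char cmd[512];

        /* Build the command from the macro, not a hardcoded path. */
        snprintf(cmd, sizeof(cmd),
                 SETFATTR_CMD " -n trusted.example -v 1 /tmp/file");
        printf("%s\n", cmd);
        return 0;
    }
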
* glusterd: Add last successful glusterd lock backtrace | Krishnan Parthasarathi | 2014-09-24 | 1 | -0/+14
  Also, moved the backtrace-fetching logic to a separate function and
  modified it to work under memory-pressure conditions.

  Change-Id: Ie38bea425a085770f41831314aeda95595177ece
  BUG: 1145093
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/8794
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: Authenticate management handshake requests | Kaushal M | 2014-09-24 | 3 | -0/+67
  Backport of 371bb42 ("glusterd: Authenticate management handshake
  requests") from master.

  Management handshake requests, which are used to validate the
  op-version supported by the peers, are now only allowed if:
  - the glusterd doesn't have any other peer, or
  - the request was sent by another peer.

  This prevents the op-version of a peer from being changed because of
  a connection attempt by an invalid peer.

  BUG: 1144978
  Change-Id: I5a909dad37e9873efe8b75dad41b7af71ce91c3d
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/8819
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* ec: Add config information in an xattr | Xavier Hernandez | 2014-09-23 | 7 | -1/+185
  To simplify backward compatibility of the ec xlator when some
  parameter or the implementation itself is changed, a new xattr is
  added to each file with the configuration needed to recover it.

  The new attribute is called 'trusted.ec.config', and it's a 64-bit
  value containing the following information:

     8 bits: version of the config information (currently always 0)
     8 bits: algorithm used to encode the file (currently always 0)
     8 bits: size of the Galois field (currently always 8)
     8 bits: number of bricks
     8 bits: redundancy
    24 bits: chunk size (currently 512)

  This new xattr could allow, in a future version, having different
  configurations per file.

  This is a backport of http://review.gluster.org/8770/

  Change-Id: I8c12d40ff546cc201fc66caa367484be3d48aeb4
  BUG: 1140862
  Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
  Reviewed-on: http://review.gluster.org/8825
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

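  A hedged sketch of packing those fields into one 64-bit value (the
  commit lists the fields but not their bit positions, so the ordering
  below is an assumption; names are illustrative):

    #include <stdint.h>

    typedef struct {
        uint8_t  version;       /* currently always 0 */
        uint8_t  algorithm;     /* currently always 0 */
        uint8_t  gf_word_size;  /* currently always 8 */
        uint8_t  bricks;
        uint8_t  redundancy;
        uint32_t chunk_size;    /* only 24 bits used; currently 512 */
    } ec_config_t;

    static uint64_t ec_config_pack(const ec_config_t *c)
    {
        return ((uint64_t)c->version      << 56) |
               ((uint64_t)c->algorithm    << 48) |
               ((uint64_t)c->gf_word_size << 40) |
               ((uint64_t)c->bricks       << 32) |
               ((uint64_t)c->redundancy   << 24) |
               ((uint64_t)c->chunk_size & 0xFFFFFF);
    }

    static void ec_config_unpack(uint64_t v, ec_config_t *c)
    {
        c->version      = (uint8_t)(v >> 56);
        c->algorithm    = (uint8_t)(v >> 48);
        c->gf_word_size = (uint8_t)(v >> 40);
        c->bricks       = (uint8_t)(v >> 32);
        c->redundancy   = (uint8_t)(v >> 24);
        c->chunk_size   = (uint32_t)(v & 0xFFFFFF);
    }
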
* cluster/afr: Do not reset pending xattrs on gfid or type mismatch in entry-sh | Krutika Dhananjay | 2014-09-23 | 1 | -18/+79
  Backport of: http://review.gluster.org/8816

  Change-Id: I8463a579f542a2336b02edba8f5fbfea0edbbffe
  BUG: 1136829
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/8823
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* cluster/afr: Don't start heal when lookup succeeds on < 2 children | Pranith Kumar K | 2014-09-23 | 6 | -8/+29
  Backport of http://review.gluster.org/8698

  Problem: When the self-heal code doesn't see at least 2 successes
  when looking up children, self-heal can't be done. What happens now
  is that if all the lookups fail, the pending changelog in the xattrs
  is all zeros, so all the children become sources, leading to crashes
  when the code paths further on assume that some data structures are
  properly populated.

  Fix: Don't proceed with self-heals when lookup succeeds on fewer than
  2 children.

  BUG: 1145726
  Change-Id: I65465843f0e554c8ccdd8fa930ab42ac123ec023
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/8824
  Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* features/marker: Fill loc->path before sending the control to healing | Varun Shastry | 2014-09-23 | 2 | -24/+42
  Backport of: http://review.gluster.org/8296

  Problem: The xattr-healing part of marker requires the path to be
  present in the loc. Currently the path is not filled when triggering
  from readdirp_cbk.

  Solution: This patch fills the loc with the path.

  Change-Id: I2e2589ecfa6b6a6e27407c9541fa90a314649bec
  BUG: 1145623
  Signed-off-by: Varun Shastry <vshastry@redhat.com>
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/8820
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: Set the rlimit for open FDs to a higher value | Vijaikumar M | 2014-09-23 | 1 | -0/+18
  The default 'open FD' limit is 1024. As the number of volumes/bricks
  increases, the number of brick-to-glusterd socket FDs in glusterd
  also increases and runs past the limit. The solution is to set the
  'open FD' limit to a higher value in glusterd.

  Change-Id: Iaa60b2155df2fa5a0759e054bdebffbc09f63ec1
  BUG: 1145095
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/8578
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8807

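  The mechanism behind such a change, as a short hedged sketch (the
  target value is illustrative, not the one glusterd picks):

    #include <sys/resource.h>

    /* Raise the soft open-FD limit towards 'want', capped at the hard
     * limit. Returns 0 on success, -1 on failure. */
    static int raise_nofile_limit(rlim_t want)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
            return -1;
        if (rl.rlim_cur >= want)
            return 0;                   /* already high enough */
        rl.rlim_cur = (want > rl.rlim_max) ? rl.rlim_max : want;
        return setrlimit(RLIMIT_NOFILE, &rl);
    }

    int main(void)
    {
        return raise_nofile_limit(65536);   /* illustrative target */
    }
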
* glusterd/snapshot: Fail the snapshot create operation if geo-rep is running | Sachin Pandit | 2014-09-23 | 1 | -1/+11
  As one of the recommendations for taking a snapshot is not to have an
  active geo-replication session, it's better to display an error
  saying the session is active when the snapshot create command is
  issued.

  Change-Id: I94593dbd2659610e033ca316176dda1ac8dc5ce6
  BUG: 1145091
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8461
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8804

* glusterd/snapshot: Inherit the mount options of an original brick when creating snapshots | Vijaikumar M | 2014-09-23 | 7 | -122/+250
  When creating a snapshot, an LVM volume is created at the backend and
  is mounted under /var/run/gluster/snaps/... However, this mount does
  not inherit the mount options of the original brick acting as the
  parent for the snap. If the snap is restored, this could lead to
  performance degradations, functional limitations, or, in extreme
  scenarios, even potential data loss.

  Change-Id: I67d70fd83430d83dacc5380c6c928e27fb9c9e1b
  BUG: 1145088
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/8394
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8802
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: Error msg for snapshot status on non-existing snaps should be aligned with the error messages of info and list | Vijaikumar M | 2014-09-23 | 1 | -35/+42
  When a snapshot operation like status, info, or list is performed on
  a non-existing snapshot, the error messages differ:

  - For status, the error message is 'Snap not found'.
  - For list and info, the error message is 'Snapshot does not exist'.

  Use a consistent error message in all these places.

  Change-Id: I7b241217dba62fda844481731a6858e4ecb12897
  BUG: 1145087
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/8309
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8801
  Tested-by: Gluster Build System <jenkins@build.gluster.com>

* glusterd/snapshot: Proper error msg for snapshot create command | Rajesh Joseph | 2014-09-23 | 1 | -0/+77
  Problem: The snapshot command fails if one or more bricks are not
  thinly provisioned, but the error message is a generic one that is
  confusing to the user.

  Fix: Provide a correct error message in case of failure.

  Change-Id: Iad247f966423a8f73ef6da57cab7ed6cddc05861
  BUG: 1145086
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-on: http://review.gluster.org/8377
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8800
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>

* feature/snapshot: Interface to delete all snapshots belonging to a system as well as to a particular volume | Sachin Pandit | 2014-09-23 | 1 | -25/+186
  Problem: With the current design we can only delete a single
  snapshot, and the deletion of a volume which contains snapshots is
  not allowed. Because of that, a user might be forced to delete all
  the snapshots manually before being allowed to delete a volume.

  Solution: The following interface lets the user delete all the
  snapshots of a system or those belonging to a particular volume.

  Syntax: gluster snapshot delete all
    (deletes all the snapshots present in a system)

  Syntax: gluster snapshot delete volume <volname>
    (deletes all the snapshots present in the specified volume)

  ========================================================================
  Sample output:

  Case 1: Deleting a single snapshot.

    [root@snapshot-24 glusterfs]# gluster snapshot delete snap1
    Deleting snap will erase all the information about the snap.
    Do you still want to continue? (y/n) y
    snapshot delete: snap1: snap removed successfully
  -----------------------------------------------------------------

  Case 2: Deleting all the snapshots in a volume.

    [root@snapshot-24 glusterfs]# gluster snapshot delete volume vol1
    Volume (vol1) contains 9 snapshot(s).
    Do you still want to continue and delete them? (y/n) y
    snapshot delete: snap2: snap removed successfully
    snapshot delete: snap3: snap removed successfully
    snapshot delete: snap4: snap removed successfully
    snapshot delete: snap5: snap removed successfully
    . . .
  -----------------------------------------------------------------

  Case 3: Deleting all the snapshots in a system.

    [root@snapshot-24 glusterfs]# gluster snapshot delete all
    System contains 4 snapshot(s).
    Do you still want to continue and delete them? (y/n) y
    snapshot delete: snap7: snap removed successfully
    snapshot delete: snap8: snap removed successfully
    snapshot delete: snap9: snap removed successfully
    snapshot delete: snap10: snap removed successfully
  ========================================================================

  Change-Id: Ifec8e128ab2011cbbba208376b9c92cfbe7d8d71
  BUG: 1145083
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8162
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8798
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>

* glusterd/snapshot: Print correct error message on CLI for a snapshot operation performed on a cluster with op-version less than 30600 | Vijaikumar M | 2014-09-23 | 1 | -0/+10
  Currently the CLI shows the error message 'Another transaction is in
  progress. Please try again after sometime.' when a snapshot operation
  is performed on a cluster with op-version less than 30600. We need to
  print the correct error message in this case.

  Change-Id: I5f144428d928393c3796bde96ce6e3a40fca8141
  BUG: 1145068
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/8371
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8796

* cli/snapshot: gluster volume info should not show the options which are not set explicitly | Sachin Pandit | 2014-09-23 | 4 | -257/+222
  Problem: Even though the snap-max-hard-limit, snap-max-soft-limit and
  auto-delete values were not set explicitly, they were shown in the
  output of gluster volume info.

  Solution: Check if the value is already present in the dictionary
  (meaning it is set); if the value is not present, consider the
  default value.

  NOTE: This patch doesn't solve the problem where values which are set
  globally are displayed in gluster volume info.

  Change-Id: I61445b3d2a12eb68c38a19bea53b9051ad028050
  BUG: 1145020
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8191
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8793
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>

* cluster/afr: Set all the xattrs needed by index xlator | Anuradha | 2014-09-21 | 2 | -41/+30
  Backport of: http://review.gluster.org/8652

  The index xlator removes the index file from the indices xattrop
  directory in case the values for the keys sent are zero. If all the
  required keys are not set by afr, the index file might be removed in
  an invalid way. With this change, all the keys required by the index
  xlator are set by afr, so that invalid removal of files does not
  occur.

  Change-Id: I1b77904920c8566057415c52242179aec6a015e2
  BUG: 1144744
  Signed-off-by: Anuradha <atalur@redhat.com>
  Reviewed-on: http://review.gluster.org/8788
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* cluster/dht: Fix dict_t leaks in rebalance process' execution path | Krutika Dhananjay | 2014-09-20 | 1 | -4/+7
  Backport of: http://review.gluster.org/8763

  Two dict_t objects are leaked for every file migrated in the success
  codepath. It is the caller's responsibility to unref the dict that it
  gets from calls to syncop_getxattr(), and rebalance performs two
  syncop_getxattr()s per file without freeing them.

  Also, syncop_getxattr() on GF_XATTR_LINKINFO_KEY doesn't seem to be
  using the response dict. Hence, NULL is now passed as opposed to
  @dict to syncop_getxattr().

  Change-Id: I48926389db965e006da151bf0ccb6bcaf3585199
  BUG: 1144640
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/8785
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

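  The ownership rule above, as a hedged fragment against this branch's
  syncop API (assuming the four-argument syncop_getxattr() signature of
  the 3.6 era; later branches add xdata arguments):

    /* Fragment from a rebalance-style synctask; 'subvol' and 'loc'
     * come from the surrounding context, error handling trimmed. */
    dict_t *xattr = NULL;
    int     ret   = syncop_getxattr(subvol, &loc, &xattr,
                                    GF_XATTR_PATHINFO_KEY);
    if (ret == 0 && xattr != NULL) {
            /* ... consume the response dict ... */
    }
    if (xattr != NULL) {
            dict_unref(xattr);  /* the caller owns this ref; skipping
                                 * the unref leaks one dict_t per call */
            xattr = NULL;
    }

    /* When the response is not needed, don't ask for it at all: */
    ret = syncop_getxattr(subvol, &loc, NULL, GF_XATTR_LINKINFO_KEY);
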
* ec: Fix some size_t vars to 64 bits even on 32-bit machines | Xavier Hernandez | 2014-09-19 | 6 | -21/+20
  The 64-bit 'trusted.ec.size' extended attribute was incorrectly
  computed on 32-bit machines due to an overflow on negative numbers.

  Also changed some potentially dangerous uses of size_t in other
  places.

  This is a backport of http://review.gluster.org/8738/

  Change-Id: Id76cfe49a2f350e564b5c71d8c8644fb9ce86662
  BUG: 1144407
  Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
  Reviewed-on: http://review.gluster.org/8779
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* ec: Fix invalid inode lock in ftruncate | Xavier Hernandez | 2014-09-19 | 5 | -21/+21
  The fops 'truncate' and 'ftruncate' share some code, and inodelk()
  was always issued against the inode inside the loc_t structure
  instead of that of the fd_t. Since ftruncate has the loc initialized
  to NULL, this fop was executed without any lock, allowing concurrent
  modifications of the file size.

  Also changed the way in which 'fop' and 'ffop' are differentiated in
  shared code. Now it uses the 'id' field instead of checking whether
  'fd' is NULL.

  This is a backport of http://review.gluster.org/8695/

  Change-Id: Ibd18accf2652193b395a841b9029729e5f4867c6
  BUG: 1140847
  Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
  Reviewed-on: http://review.gluster.org/8780
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* cluster/afr: perform list-xattr during lookup | Ravishankar N | 2014-09-19 | 4 | -11/+222
  Detect and heal mismatching user extended attributes during lookup.

  Backport of: http://review.gluster.org/8558

  Change-Id: Id03c9746f083ffd3014711d0b3a2e5a71a45eed4
  BUG: 1144274
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/8773
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* storage/posix: Log when mkdir is on an existing gfid but non-existent path | Raghavendra G | 2014-09-19 | 1 | -1/+26
  Consider the following steps on a distribute volume:
  1. rename (src, dst) on the hashed subvolume
  2. snapshot taken
  3. restore the snapshot and do stat on src and dst

  Now we end up with two directories, src and dst, having the same
  gfid, because distribute creates directories on non-existent
  subvolumes as part of directory healing. This can happen even with a
  race between rename and directory healing in dht-lookup, and can lead
  to undefined behaviour while accessing either of the two directories.

  Hence, we log the paths of both directories, so that a sysadmin can
  take corrective action when (s)he sees this log. One corrective
  action is to copy the contents of both directories from the backend
  into a new directory and delete both directories. Since the effort
  involved in fixing this issue properly is non-trivial, this
  workaround stands until we come up with a fix.

  Change-Id: I38f4520e6787ee33180a9cd1bf2f36f46daea1ea
  BUG: 1144485
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-on-master: http://review.gluster.org/8008
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Vijay Bellur <vbellur@redhat.com>
  Reviewed-on: http://review.gluster.org/8783
  Tested-by: Gluster Build System <jenkins@build.gluster.com>

* Do not assume sizeof(size_t) | Emmanuel Dreyfus | 2014-09-18 | 2 | -2/+2
  This fixes an assumption that sizeof(size_t) == sizeof(uint64_t),
  which is not guaranteed. At least on NetBSD/i386, size_t is 32 bits
  long. Caught by tests/basics/file-snapshot.t.

  This is a backport of Ib7620a2ffe8758521886af37bc280101a040d860
  BUG: 1138897
  Change-Id: Ie0b80ee9ddbcccaf9fd4f5d28d80fcd080b0ed40
  Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
  Reviewed-on: http://review.gluster.org/8631
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* cluster/afr: Mark pending changelog xattrs for new creations | Anuradha | 2014-09-18 | 7 | -90/+148
  Backport of: http://review.gluster.org/8555

  Based on the type of file, set appropriate pending changelogs for new
  entries.

  Change-Id: Icf9af866fe9a9e511210e8ad097e968e2307d8ee
  BUG: 1141787
  Reviewed-on: http://review.gluster.org/8555
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Signed-off-by: Anuradha <atalur@redhat.com>
  Reviewed-on: http://review.gluster.org/8748
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* snapview-server: get the handle if it's absent before doing any fop | Raghavendra Bhat | 2014-09-18 | 2 | -29/+160
  Now that the NFS server does inode linking in readdirp, it can
  resolve the gfid present in the filehandle sent by the NFS client
  (i.e. find the right inode from its inode table) for an incoming fop.
  So instead of sending a lookup on that entry, it directly sends the
  fop. But snapview-server does not get the handle for the entries in
  readdirp (doing a lookup on each entry via gfapi would be costly, so
  it waits until a lookup is done on that inode to get the handle and
  the fs instance and fill them in the inode context). So when NFS
  resolves the gfid and directly sends the fop, snapview-server will
  not be able to perform the fop, as the inode context would not
  contain the fs instance and the handle.

  Therefore, fops should check for the handle before making gfapi
  calls. If the handle and fs instance are not present in the inode
  context, they should get them by doing an explicit lookup on the
  entry.

  Rebase of the patch http://review.gluster.org/#/c/8324/

  Change-Id: I70c9c8edb2e7ddad79cf6ade3e041b9d02241cd1
  BUG: 1143961
  Reviewed-on: http://review.gluster.org/8768
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* snapview-server: register a callback with glusterd to get notifications | Raghavendra Bhat | 2014-09-18 | 9 | -1117/+1230
  As of now snapview-server is polling (sending RPC requests to
  glusterd) to get the latest list of snapshots at regular,
  non-configurable time intervals. Instead, register a callback with
  glusterd so that glusterd sends notifications to snapd whenever a
  snapshot is created/deleted, and snapview-server can configure
  itself.

  Rebase of the patch http://review.gluster.org/#/c/8150/

  Change-Id: Iee2582b1a823d50c79233a41cf2106f458b40691
  BUG: 1143961
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-on: http://review.gluster.org/8767
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* USS: initialize a list before using it | Raghavendra G | 2014-09-18 | 1 | -1/+3
  Backport of the patch http://review.gluster.org/8569 by
  Raghavendra G <rgowdapp@redhat.com>

  Change-Id: I7b25fdf27c6d7ff66d24925bc73d9c6681259d37
  BUG: 1143961
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-on: http://review.gluster.org/8764
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* Cluster/DHT: Changing rename log severity | Nithya Balachandran | 2014-09-18 | 1 | -6/+5
  Changing the log level for a rename message from debug to info to
  improve debuggability.

  Change-Id: I53031fcf97fffd62095692477330ecde0cf47dcd
  BUG: 1138395
  Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
  Reviewed-on-master: http://review.gluster.org/8582
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Signed-off-by: Shyam <srangana@redhat.com>
  Reviewed-on: http://review.gluster.org/8626

* cluster/dht: Changed log level to DEBUG | Venkatesh Somyajulu | 2014-09-18 | 1 | -4/+4
  Change-Id: I7a4ee0c5a6a94bd4f31aff510a2971750913ed45
  BUG: 1142402
  Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
  Reviewed-on-master: http://review.gluster.org/8621
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
  Reviewed-by: susant palai <spalai@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Reviewed-on: http://review.gluster.org/8749

* cluster/dht: Fixed double UNWIND in lookup everywhere code | Shyam | 2014-09-16 | 1 | -4/+4
  In dht_lookup_everywhere_done, at line 1194 we call DHT_STACK_UNWIND
  and, within the same if condition, we go ahead and call "goto
  unwind_hashed_and_cached;", which at line 1371 calls another UNWIND.

  As is obvious, higher frames could clean up their locals and, on
  receiving the next unwind, cause a coredump of the process.

  Fixed the same by adding the required return after the first unwind.

  Change-Id: Ic5d57da98255b8616a65b4caaedabeba9144fd49
  BUG: 1142409
  Signed-off-by: Shyam <srangana@redhat.com>
  Reviewed-on-master: http://review.gluster.org/8666
  Reviewed-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: susant palai <spalai@redhat.com>
  Reviewed-on: http://review.gluster.org/8751

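  The shape of the hazard, as a hedged standalone sketch (the frame
  type and helper are hypothetical stand-ins for the real STACK
  machinery):

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { int id; } frame_t;

    /* Stand-in for DHT_STACK_UNWIND: replies to the caller and
     * destroys the frame. Calling it twice on one frame is therefore
     * a use-after-free. */
    static void stack_unwind(frame_t *frame)
    {
        printf("reply sent for frame %d\n", frame->id);
        free(frame);
    }

    static int lookup_done(frame_t *frame, int cond)
    {
        if (cond) {
            stack_unwind(frame);
            return 0;   /* the fix: return right after unwinding; the
                         * removed 'goto' reached a second
                         * stack_unwind(frame) further down */
        }
        stack_unwind(frame);
        return 0;
    }

    int main(void)
    {
        frame_t *frame = malloc(sizeof(*frame));
        if (frame == NULL)
            return 1;
        frame->id = 1;
        return lookup_done(frame, 1);
    }
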
* cluster/dht: fix memory corruption in locking api | Raghavendra G | 2014-09-16 | 1 | -2/+2
  <man 3 qsort>
  The contents of the array are sorted in ascending order according to
  a comparison function pointed to by compar, which is called with two
  arguments that "point to the objects being compared".
  </man 3 qsort>

  qsort passes "pointers to members of the array" to the comparison
  function. Since the members of the array happen to be (dht_lock_t *),
  the arguments passed to dht_lock_request_cmp are of type
  (dht_lock_t **). Previously we assumed them to be of type
  (dht_lock_t *), which resulted in memory corruption.

  Change-Id: Iee0758704434beaff3c3a1ad48d549cbdc9e1c96
  BUG: 1142406
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-on-master: http://review.gluster.org/8659
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Reviewed-on: http://review.gluster.org/8750

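  A minimal standalone illustration of the indirection rule (generic
  type names, not the dht structures):

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { int id; } thing_t;

    /* qsort() hands the comparator pointers to the ARRAY MEMBERS. If
     * the array holds (thing_t *), the comparator receives
     * (thing_t **), so one extra dereference is required. */
    static int thing_cmp(const void *a, const void *b)
    {
        const thing_t *ta = *(thing_t *const *)a;
        const thing_t *tb = *(thing_t *const *)b;

        return (ta->id > tb->id) - (ta->id < tb->id);
    }

    int main(void)
    {
        thing_t x = { 3 }, y = { 1 }, z = { 2 };
        thing_t *arr[] = { &x, &y, &z };

        qsort(arr, 3, sizeof(arr[0]), thing_cmp);
        for (int i = 0; i < 3; i++)
            printf("%d\n", arr[i]->id);   /* prints 1 2 3 */
        return 0;
    }
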
* features/quota: fixes to dentry management code in rename | Raghavendra G | 2014-09-16 | 1 | -10/+6
  1. After a successful rename (src, dst), the dentry <dst-parent,
     dst-basename> would be associated with the src inode.
  2. It is the src inode that survives if both src and dst are present.

  The fixes are done based on the above two observations.

  Change-Id: I7492a512e3732b1455c243b02fae12d489532bfb
  BUG: 1142411
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-on-master: http://review.gluster.org/8687
  Reviewed-by: susant palai <spalai@redhat.com>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Reviewed-on: http://review.gluster.org/8752

* cluster/dht: Rename should not fail post hardlink creation | Shyam | 2014-09-16 | 2 | -41/+99
  In the rename path, we wind the creation of the newname hardlink and
  of the linkto file on the dst hashed subvolume at the same time. If
  the linkto creation fails but the link creation succeeds, we enter
  the failure code and clean up the created newname hardlink.

  In the interim, if another client looks up newname and finds it as a
  hardlink from FUSE, it could send an unlink for oldname instead of a
  rename. This, combined with the above cleanup code, could end up
  losing all copies of the file, and thereby losing data.

  This fix separates these steps into 2 parts: creating the linkto
  first and then the link file, so that once the link file is created
  no failure path would clean up the newname file. If the linkto
  creation fails, the link is not attempted, thereby not polluting the
  namespace with newname.

  Change-Id: I61da8e906060da16a31ea1076eec2f01fd617f44
  BUG: 1138395
  Signed-off-by: Shyam <srangana@redhat.com>
  Reviewed-on-master: http://review.gluster.org/8570
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Reviewed-on: http://review.gluster.org/8615

* ec: Optimize read/write performance | Xavier Hernandez | 2014-09-16 | 13 | -268/+706
  This patch significantly improves the performance of read/write
  operations on a dispersed volume by reusing previous inodelk/entrylk
  operations on the same inode/entry. This reduces the latency of each
  individual operation considerably.

  Inode version and size are also updated when needed instead of on
  each request. This gives an additional boost.

  This is a backport of http://review.gluster.org/8369/

  Change-Id: I4b98d5508c86b53032e16e295f72a3f83fd8fcac
  BUG: 1140844
  Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
  Reviewed-on: http://review.gluster.org/8746
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>

* ec: Only heal data/metadata when inode has enough information | Xavier Hernandez | 2014-09-16 | 1 | -0/+8
  Sometimes the loc_t structure in a heal request doesn't contain
  enough information to do an inodelk call (basically the gfid is
  missing). In these cases, self-heal only recovers entry information.

  This is a backport of http://review.gluster.org/8368/

  Change-Id: I459990c7df728ff4baf164df046672ddcde3efa5
  BUG: 1140626
  Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
  Reviewed-on: http://review.gluster.org/8747
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>

* gNFS: Fix memory leak in setacl code path | Niels de Vos | 2014-09-16 | 1 | -0/+2
  If an ACL is set on a file in a Gluster NFS mount (setfacl command)
  and it succeeds, then the NFS call state data is leaked, even though
  all the failure code paths free up the memory.

  Impact: There was an OOM kill, i.e. vdsm invoked the oom-killer
  during rebalance and killed process 4305, UID 0 (the glusterfs nfs
  process).

  FIX: Make sure to deallocate the memory for the call state in
  acl3_setacl_cbk() using nfs3_call_state_wipe().

  Cherry picked from commit 5c869aea79c0f304150eac014c7177e74ce0852e:
  > Change-Id: I9caa3f851e49daaba15be3eec626f1f2dd8e45b3
  > BUG: 1139195
  > Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
  > Reviewed-on: http://review.gluster.org/8651
  > Tested-by: Gluster Build System <jenkins@build.gluster.com>
  > Reviewed-by: Niels de Vos <ndevos@redhat.com>

  Change-Id: I9caa3f851e49daaba15be3eec626f1f2dd8e45b3
  BUG: 1139244
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/8654
  Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

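  The general shape of this class of leak, as a hedged generic sketch
  (types and names are hypothetical; the actual fix adds
  nfs3_call_state_wipe() to acl3_setacl_cbk()):

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { int xid; } call_state_t;  /* hypothetical state */

    static void call_state_wipe(call_state_t *cs)
    {
        free(cs);
    }

    static int setacl_cbk(call_state_t *cs, int op_ret)
    {
        if (op_ret < 0) {
            call_state_wipe(cs);   /* failure paths already freed it */
            return -1;
        }
        printf("setacl reply sent (xid=%d)\n", cs->xid);
        /* ...but the success path used to return without this,
         * leaking one call state per successful setacl. */
        call_state_wipe(cs);
        return 0;
    }

    int main(void)
    {
        call_state_t *cs = calloc(1, sizeof(*cs));
        return cs ? setacl_cbk(cs, 0) : 1;
    }
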
* cluster/afr: Propagate EIO on inode's type mismatch | Krutika Dhananjay | 2014-09-16 | 7 | -121/+332
  Backport of: http://review.gluster.org/8574 and
  http://review.gluster.org/8586

  Original author of the test script:
  Pranith Kumar K <pkarampu@redhat.com>

  Change-Id: I0c32bdd8e666f8175c0a8fbf940934e6ce469931
  BUG: 1136830
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/8706
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* cluster/afr: Fix dict_t leaks | Krutika Dhananjay | 2014-09-16 | 8 | -35/+64
  Backport of: http://review.gluster.org/#/c/8557/

  dict_t objects that are ref'd in the alloca'd "replies" in
  afr_replies_copy() are not unref'd before "replies" goes out of
  scope.

  Change-Id: I9bb45bc673ec13292ac96dda060aceb48739ebe8
  BUG: 1136831
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/8704
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>