path: root/xlators/cluster/ec
Commit message (author, date; files changed, lines -/+)
* cluster/ec: Return correct error code and log message (Ashish Pandey, 2020-07-08; 1 file, -2/+9)
  If a readdir is sent on an FD whose opendir failed, that FD is useless and has to be returned with an error. Until now it was returned with EINVAL and nothing was written to the log file. Return the correct error code and log a message to make this easier to debug.
  fixes: #1220
  Change-Id: Iaf035254b9c5aa52fa43ace72d328be622b06169
  (cherry picked from commit af70cb5eedd80207cd184e69f2a4fb252b72d070)
* cluster/ec: Change handling of heal failure to avoid crash (Ashish Pandey, 2020-02-28; 2 files, -13/+13)
  Problem: ec_getxattr_heal_cbk was called with NULL as its second argument when heal was failing, and dereferencing this "cookie" argument caused a crash.
  Solution: The cookie now carries the value that was supposed to be stored in fop->data, so even when fop is NULL in the error case there is no NULL dereference. Thanks to Xavi for suggesting the fix.
  Change-Id: I0798000d5cadb17c3c2fbfa1baf77033ffc2bb8c
  fixes: bz#1806836
* cluster/ec: skip updating ctx->loc again when ec_fix_open/opendir (Kinglong Mee, 2020-02-26; 2 files, -10/+14)
  ec_manager_open/opendir memsets ctx->loc, which causes a memory/inode leak, and ec_fheal uses ctx->loc outside fd->lock, so loc_copy may copy bad data while the loc is being memset. This patch skips updating ctx->loc when it is already initialized: ctx->loc is filled once and never updated afterwards (see the sketch below).
  Change-Id: I3bf5ffce4caf4c1c667f7acaa14b451d37a3550a
  fixes: bz#1806838
  Signed-off-by: Kinglong Mee <mijinlong@horiscale.com>
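  A minimal sketch of the guard this patch introduces (simplified; cache_loc_once is a hypothetical helper, not the actual EC code; only loc_t and loc_copy() are real libglusterfs names):

      #include <glusterfs/xlator.h>   /* loc_t, loc_copy() */

      /* Copy the location into the cached slot only if it has not been filled
       * yet, so a later open/opendir cannot clobber a loc that healing code
       * may be reading outside fd->lock. */
      static int
      cache_loc_once(loc_t *cached, loc_t *fresh)
      {
          if (cached->inode != NULL)      /* already initialized: keep it */
              return 0;
          return loc_copy(cached, fresh); /* takes its own refs on inode/parent */
      }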
* cluster/ec: Update lock->good_mask on parent fop failure (Pranith Kumar K, 2019-10-30; 2 files, -0/+6)
  When discard/truncate performs a write fop, it should do so only after updating lock->good_mask, to make sure readv happens on the correct mask.
  fixes: bz#1739449
  Change-Id: Idfef0bbcca8860d53707094722e6ba3f81c583b7
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* cluster/ec: Fix reopen flags to avoid misbehavior (Pranith Kumar K, 2019-10-30; 2 files, -3/+8)
  Problem: when a file needs to be re-opened, O_APPEND, O_EXCL and O_CREAT are not filtered out in EC.
  - O_APPEND must be filtered because EC does not send O_APPEND below itself on open, so that writes land on the individual fragments instead of at the end of the file.
  - O_EXCL must be filtered because shd could already have created the file, so the open should succeed even when the file exists.
  - O_CREAT must be filtered because the open happens with only the gfid as parameter, so the open fop would create just the gfid, which leads to problems.
  Fix: Filter out these flags in reopen (see the sketch below).
  Change-Id: Ia280470fcb5188a09caa07bf665a2a94bce23bc4
  Fixes: bz#1739450
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
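  A minimal sketch of the filtering (the helper name is hypothetical; the flags are standard POSIX open flags):

      #include <fcntl.h>

      /* Strip flags that must not be replayed when EC re-opens an fd on a
       * brick that came back up:
       *  - O_APPEND: EC writes fragments at computed offsets, never "at EOF"
       *  - O_EXCL:   the file may already exist on the brick (shd created it)
       *  - O_CREAT:  reopen is done by gfid only, so a create would be wrong
       * O_TRUNC is already stripped by an earlier fix further down this log. */
      static int
      ec_filter_reopen_flags(int flags)
      {
          return flags & ~(O_APPEND | O_EXCL | O_CREAT | O_TRUNC);
      }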
* cluster/ec: Always read from good-mask (Pranith Kumar K, 2019-10-30; 2 files, -5/+25)
  There are cases where fop->mask may include fop->healing, and readv should not be wound on healing bricks. To avoid this, always wind readv to lock->good_mask.
  updates: bz#1739449
  Change-Id: I2226ef0229daf5ff315d51e868b980ee48060b87
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* cluster/ec: inherit healing from lock when it has info (Kinglong Mee, 2019-10-30; 1 file, -2/+3)
  If the lock has info, the fop should inherit the healing mask from it. Otherwise the fop cannot inherit the right healing mask when changed_flags is zero.
  Change-Id: Ife80c9169d2c555024347a20300b0583f7e8a87f
  updates: bz#1739449
  Signed-off-by: Kinglong Mee <mijinlong@horiscale.com>
* cluster/ec: Prevent double pre-op xattrops (Pranith Kumar K, 2019-10-30; 1 file, -6/+7)
  Problem: race between two writes on the same inode:
  1) Thread-1 calls ec_get_size_version() to perform the pre-op fxattrop as part of write-1.
  2) Thread-2 calls ec_set_dirty_flag() in ec_get_size_version() for write-2; this sets dirty[] to 1.
  3) Thread-1 completes ec_prepare_update_cbk, leading to ctx->dirty[] = '1'.
  4) Thread-2 takes LOCK(inode->lock) to check whether any flags are set and sets the dirty flag because lock->waiting_flag is 0 by now. This makes the fxattrop increment the on-disk dirty[] to '2'.
  At the end of the writes the file is marked for heal even though it does not need any.
  Fix: Perform ec_set_dirty_flag() and the other checks inside LOCK(), so that dirty[] cannot be marked as '1' in step 2) above (see the sketch below).
  Updates bz#1739446
  Change-Id: Icac2ab39c0b1e7e154387800fbededc561612865
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
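  A self-contained analogue of the fix, using a plain pthread mutex instead of the real inode lock (names and types are illustrative, not the actual ec-common.c code):

      #include <pthread.h>
      #include <stdbool.h>

      typedef struct {
          pthread_mutex_t lock;
          bool dirty_set;        /* has an in-flight fop already set dirty[]? */
      } inode_state_t;

      /* Test-and-set under the lock: only one of the racing pre-ops decides to
       * send the +1 xattrop, so on-disk dirty[] can never be bumped twice. */
      static bool
      mark_dirty_once(inode_state_t *st)
      {
          bool do_xattrop = false;

          pthread_mutex_lock(&st->lock);
          if (!st->dirty_set) {
              st->dirty_set = true;
              do_xattrop = true;
          }
          pthread_mutex_unlock(&st->lock);
          return do_xattrop;
      }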
* cluster/ec: fix EIO error for concurrent writes on sparse files (Xavi Hernandez, 2019-10-24; 1 file, -9/+17)
  EC doesn't allow concurrent writes on overlapping areas; they are serialized. Non-overlapping writes, however, are serviced in parallel. When a write is not aligned, EC first needs to read the entire chunk from disk, apply the modified fragment and write it back.
  The problem appears on sparse files, because a write at an offset implicitly creates data at offsets below it (so, in some way, they do overlap). For example, if a file is empty and we read 10 bytes from offset 10, read() returns 0 bytes. If we now write one byte at offset 1M and retry the same read, the system call returns 10 bytes (all zeros). So if we have two writes, the first at offset 10 and the second at offset 1M, EC sends both in parallel because they do not overlap. However, the first one will try to read the missing data of the first chunk (offsets 0 to 9) to recombine the entire chunk and do the final write. This read happens in parallel with the write to 1M. It can happen that half of the bricks process the write before the read and the other half process the read before the write: some bricks return 10 bytes of data while the others return 0 bytes (because the file on those bricks has not been expanded yet). When EC tries to recombine the answers it can't, because it needs more than half consistent answers to recover the data, so the read fails with EIO. The error is propagated to the parent write, which is aborted, and EIO is returned to the application.
  The issue happened because EC assumed that a write to a given offset implies that offsets below it exist. This fix prevents reading the chunk from the bricks if the current size of the file is smaller than the offset of the chunk. This size is correctly tracked, so this fixes the issue (see the sketch below).
  Also modifies the ec-stripe.t file for Test #13: with this patch, if the file size is less than the offset we are writing to, we fill the head and tail with zeros and do not count it as a stripe-cache miss. That makes sense, since we already know what data that part holds and there is no need to read it from the bricks.
  Backport of:
  > Patch: https://review.gluster.org/#/c/glusterfs/+/23066/
  > Change-Id: Ic342e8c35c555b8534109e9314c9a0710b6225d6
  > BUG: 1730715
  > Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
  (cherry picked from commit b01a43586c5abc23a874e5528a063c508f952cbd)
  Change-Id: Ic342e8c35c555b8534109e9314c9a0710b6225d6
  Fixes: bz#1739451
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
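  A simplified illustration of the added decision (not the real ec-inode-write.c logic; names are invented):

      #include <stdint.h>
      #include <string.h>

      /* Decide whether the head of a partial stripe must be read from the
       * bricks or can simply be zero-filled because the file is known to end
       * before this chunk. Skipping the read avoids the racy parallel read
       * described above. */
      static int
      prepare_head(uint8_t *head, size_t head_size, uint64_t chunk_offset,
                   uint64_t current_size)
      {
          if (current_size <= chunk_offset) {
              memset(head, 0, head_size);   /* data there is implicitly zeros */
              return 0;                     /* no brick read needed */
          }
          return 1;                         /* caller must read from bricks */
      }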
* ctime/rebalance: Heal ctime xattr on directory during rebalance (Kotresh HR, 2019-09-27; 1 file, -3/+4)
  After add-brick and rebalance, the ctime xattr is not present on rebalanced directories on the new brick. This patch fixes that. Note that ctime still does not support consistent time across distribute sub-volumes. The patch also fixes the in-memory inconsistency of time attributes when metadata is self-healed.
  Backport of:
  > Patch: https://review.gluster.org/23127
  > Change-Id: Ia20506f1839021bf61d4753191e7dc34b31bb2df
  > BUG: 1734026
  > Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Patch: https://review.gluster.org/23127
  Change-Id: Ia20506f1839021bf61d4753191e7dc34b31bb2df
  fixes: bz#1752413
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
* cluster/ec: honor contention notifications for partially acquired locks (Xavi Hernandez, 2019-06-03; 1 file, -1/+1)
  EC was ignoring lock contention notifications received while a lock was being acquired. When a lock is partially acquired (some bricks have granted it but some others not yet) we can receive notifications from the bricks that have already granted it; these should be honored, since we may not receive more notifications after that. Because EC was ignoring them, once the lock was acquired it was not released until the eager-lock timeout, causing unnecessary delays on other clients.
  This fix takes into consideration the notifications received before the full lock acquisition has completed. After that, the lock is released as soon as possible.
  Backport of:
  > BUG: bz#1708156
  > Change-Id: I2a306dbdb29fb557dcab7788a258bd75d826cc12
  > Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
  Fixes: bz#1714172
  Change-Id: I2a306dbdb29fb557dcab7788a258bd75d826cc12
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* cluster/ec: Reopen shouldn't happen with O_TRUNC (Pranith Kumar K, 2019-05-15; 1 file, -1/+1)
  Problem: Doing a re-open with O_TRUNC truncates the fragment even when that is not needed, causing extra heals.
  Fix: Don't use O_TRUNC at the time of re-open.
  fixes: bz#1709660
  Change-Id: Idc6408968efaad897b95a5a52481c66e843d3fb8
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* cluster/ec: fix fd reopen (Pranith Kumar K, 2019-05-08; 14 files, -274/+328)
  Currently EC tries to reopen fd's that have been opened while a brick was down. This is done as part of regular write operations, just after having acquired the locks, and it is sent as a sub-fop of the main write fop. There were two problems:
  1. The reopen was attempted on all UP bricks, even if a previous lock didn't succeed. This is incorrect because most probably the open will fail.
  2. If the reopen is sent and fails, the error is propagated to the main operation, causing it to fail when it shouldn't.
  To fix this, we only attempt reopens on bricks where the current fop owns a lock, and we prevent any error from being propagated to the main fop. To implement this behaviour, the argument used to indicate the minimum number of required answers has been overloaded to also include some flags (see the sketch below). To make the change consistent it was necessary to rename the argument, which means a lot of files have been touched; however there are no functional changes.
  This change has also uncovered a problem in the discard code, which didn't correctly process requests of small sizes because no real discard fop was being processed, only a write of 0's on some region. In this case some fields of the fop remained uninitialized or with incorrect values. To fix this, a new function has been created to simulate success on a fop, and it is used in the discard case.
  Thanks to Pranith for providing a test script that has also detected an issue in this patch. This patch includes a small modification of that script to force data to be written to the bricks before stopping them.
  Backport of:
  > Change-Id: If272343873369186c2fb8f43c1d9c52c3ea304ec
  > BUG: bz#1699866
  > Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
  Change-Id: If272343873369186c2fb8f43c1d9c52c3ea304ec
  Fixes: bz#1699917
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
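  A hypothetical encoding of the overloaded argument (macro names are invented, not the real ec-common.h ones): the required-answer count stays in the low bits and behavioural flags go in the high bits.

      #include <stdint.h>

      #define FOP_MINIMUM(n)        ((uint32_t)(n) & 0x00ffffffu)
      #define FOP_FLAGS(x)          ((uint32_t)(x) & 0xff000000u)
      #define FOP_FLAG_LOCK_OWNED   0x01000000u  /* wind only where a lock is held */
      #define FOP_FLAG_NO_PROPAGATE 0x02000000u  /* errors don't fail the parent fop */

      static inline uint32_t
      fop_minimum(uint32_t arg)
      {
          return FOP_MINIMUM(arg);              /* how many answers are required */
      }

      static inline int
      fop_has_flag(uint32_t arg, uint32_t flag)
      {
          return (FOP_FLAGS(arg) & flag) != 0;  /* behavioural modifier present? */
      }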
* ec: fix truncate lock to cover the write in truncate clean (Kinglong Mee, 2019-04-16; 1 file, -2/+6)
  ec_truncate_clean does its writing under the lock granted for truncate, but the lock range is calculated with ec_adjust_offset_up, so the write done in ec_truncate_clean falls outside of the lock.
  Updates: bz#1699499
  Change-Id: Idbe1fd48d26afe49c36b77db9f12e0907f5a4134
  Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
  (cherry picked from commit 0e1223491e964096384edfae5032ed0d50d028ad)
* cluster/ec: Don't enqueue an entry if it is already healing (Ashish Pandey, 2019-04-16; 5 files, -30/+127)
  Problem:
  1 - heal-wait-qlength is 128 by default. If shd is disabled and files need heal, client-side heal is needed, and accessing those files triggers it. However, it has been observed that a file can be enqueued multiple times in the heal wait queue, which fills the queue and prevents other files from being enqueued.
  2 - While a file is going through healing and a write fop from the mount comes in on that file, the write is sent on all the bricks, including the healing one, and at the end version and size are updated on all the bricks. However, the dirty flag is not unset on all the bricks, even if the write fop succeeded on all of them. After healing completes this dirty flag remains set and never gets cleaned up if shd is disabled.
  Solution:
  1 - If an entry is already in the queue or going through the heal process, don't enqueue the next client-side request to heal the same file (see the sketch below).
  2 - Unset dirty on all the bricks at the end if the fop has succeeded on all the bricks, even if some of the bricks are going through heal.
  Change-Id: Ia61ffe230c6502ce6cb934425d55e2f40dd1a727
  updates: bz#1693223
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
  (cherry picked from commit 313dcefe7a62bd16cd794040df068f9bec9c6927)
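  A self-contained analogue of solution 1 (illustrative types, not the actual ec-heal.c code): the entry carries its own state and duplicate client-side requests are dropped while it is queued or healing.

      #include <pthread.h>
      #include <stdbool.h>

      typedef enum { HEAL_IDLE, HEAL_QUEUED, HEAL_RUNNING } heal_state_t;

      typedef struct {
          pthread_mutex_t lock;
          heal_state_t state;
      } heal_entry_t;

      /* Returns true only for the first request; later requests for the same
       * entry are ignored, so the bounded wait queue (heal-wait-qlength,
       * default 128) is not filled with duplicates. */
      static bool
      try_enqueue(heal_entry_t *e)
      {
          bool enqueue = false;

          pthread_mutex_lock(&e->lock);
          if (e->state == HEAL_IDLE) {
              e->state = HEAL_QUEUED;
              enqueue = true;
          }
          pthread_mutex_unlock(&e->lock);
          return enqueue;
      }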
* cluster/ec: Fix handling of heal info cases without locks (Ashish Pandey, 2019-04-09; 1 file, -25/+17)
  When we use the heal info command it takes a lot of time, because in some cases it takes locks on entries to find out whether the entry actually needs heal or not. There are cases where we can avoid these locks and still conclude whether the entry needs heal:
  1 - We do a lookup (without lock) on an entry found in .glusterfs/indices/xattrop and see that the lock count is zero. If the file has the dirty bit set on any or all bricks, we can say that this entry needs heal.
  2 - If the lock count is one and dirty is greater than 1, then some fop left the dirty bit set, which made the dirty count of the current fop (the one holding the lock) more than one. At this point we can also definitely say that this entry needs heal.
  This patch modifies the code to take these two points into consideration (see the sketch below). It also changes the code to not call ec_heal_inspect if ec_heal_do was called from client-side heal; client-side heal triggers heal only when it is sure that heal is required. [We have changed the code to not call heal for lookup.]
  updates bz#1697764
  Change-Id: I7f09f0ecd12f65a353297aefd57026fd2bebdf9c
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
  (cherry picked from commit da47caf2405c08c9abafc4a55525a8b2c2dd5bb8)
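  The two lockless rules translated into a small self-contained helper (not the actual ec_heal_inspect() code; the inputs come from the unlocked lookup described above):

      #include <stdbool.h>
      #include <stdint.h>

      static bool
      needs_heal_without_lock(uint32_t lock_count, uint64_t dirty_count)
      {
          if (lock_count == 0 && dirty_count > 0)
              return true;   /* nobody holds a lock, yet dirty is set */
          if (lock_count == 1 && dirty_count > 1)
              return true;   /* extra dirty counts left behind by a failed fop */
          return false;      /* cannot decide cheaply: fall back to locked path */
      }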
* cluster/ec: NULL pointer dereferencing clang fix (Sheetal Pamecha, 2018-12-14; 1 file, -1/+0)
  Removing the VALIDATE_OR_GOTO check on "this".
  Change-Id: I154deaca5302b41c1cafd87077de880dd03ec613
  Updates: bz#1622665
  Signed-off-by: Sheetal Pamecha <sheetal.pamecha08@gmail.com>
* xlator: make 'xlator_api' mandatory (Amar Tumballi, 2018-12-13; 1 file, -1/+19)
  * Remove the option to load the old symbols.
  * Keep only the 'xlator_api' symbol exported through xlator.sym.
  * Add xlator_api to all the xlators where it is missing (rough shape below).
  NOTE: This covers all the xlators that have at least one test case to validate their loading. If a translator doesn't have any test, we should probably remove it from the codebase.
  fixes: #164
  Change-Id: Ibcdc8c9844cda6b4463d907a15813745d14c1ebb
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
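  Rough shape of the exported symbol (the exact xlator_api_t field set should be checked against xlator.h of the tree; fields such as op_version and dumpops are omitted here, and init/fini/fops/cbks/options refer to symbols every xlator already defines):

      #include <glusterfs/xlator.h>

      xlator_api_t xlator_api = {
          .init          = init,           /* defined elsewhere in the xlator */
          .fini          = fini,
          .reconfigure   = reconfigure,
          .mem_acct_init = mem_acct_init,
          .fops          = &fops,
          .cbks          = &cbks,
          .options       = options,
          .identifier    = "disperse",     /* name the loader reports */
      };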
* libglusterfs: Move devel headers under glusterfs directory (ShyamsundarR, 2018-12-05; 22 files, -49/+49)
  libglusterfs devel package headers are referenced in code using program-style include semantics; while this works, it can be done better, especially when dealing with out-of-tree xlator builds or out-of-tree devel package usage in general. Towards this, the following changes are done:
  - moved all devel headers under a glusterfs directory
  - included these headers using system header notation <> in all code outside of libglusterfs
  - included these headers using program notation "" within libglusterfs
  Although big, this change just moves the headers around and makes their inclusion correct when included from other sources (example below). This lets us include libglusterfs headers without namespace conflicts.
  Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
  Updates: bz#1193929
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
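  What the include change looks like in an xlator source file (defaults.h is just a second example header):

      /* Before the change, headers were found through -I flags:
       *     #include "xlator.h"
       *     #include "defaults.h"
       * After the change, everything outside libglusterfs uses the namespaced
       * system-header form, avoiding clashes with other packages' headers. */
      #include <glusterfs/xlator.h>
      #include <glusterfs/defaults.h>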
* Multiple xlator .h files: remove unused private gf_* memory types. (Yaniv Kaul, 2018-11-30; 1 file, -1/+0)
  It seems there were quite a few unused enums (which in turn cause unneeded memory allocation) in some xlators. I've removed them, hopefully without causing any damage. Compile-tested only!
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
  Change-Id: I8252bd763dc1506e2d922496d896cd2fc0886ea7
* all: fix the format string exceptions (Amar Tumballi, 2018-11-05; 1 file, -9/+9)
  Currently there are a few places where a user-controlled string (a filename, a program parameter, etc.) can be passed as the 'fmt' argument of printf(), which can lead to a segfault if the user's string contains '%s' or '%d'. While fixing these, it makes sense to check explicitly for such issues across the codebase and make every format call properly (example below).
  Fixes: CVE-2018-14661
  Fixes: bz#1644763
  Change-Id: Ib547293f2d9eb618594cbff0df3b9c800e88bde4
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
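  The class of bug being fixed, in a self-contained form (the function name is only for illustration):

      #include <stdio.h>

      static void
      log_user_string(const char *user_supplied)
      {
          /* Unsafe: printf(user_supplied);
           * crashes (or worse) if the string contains %s, %d or %n. */
          printf("%s\n", user_supplied);   /* safe: the format string is constant */
      }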
* cluster/ec: prevent infinite loop in self-heal full (Xavi Hernandez, 2018-10-31; 1 file, -5/+6)
  There was a problem in commit 7f81067 that caused an infinite loop when a full heal was triggered. That commit was made to prevent self-heal from going idle after a replace-brick operation. One of its changes was to set a flag forcing an immediate scan of the dirty directory whenever a heal on a directory succeeded (assuming it could have generated newer entries). However, that change caused an issue with full self-heal: every time an already-healed directory was checked and returned successfully, the flag was set again, forcing self-heal to start over.
  This patch fixes the issue by setting the flag only if the heal is not a full one. A full self-heal already traverses all entries automatically, so there is no need to force a new scan later.
  Change-Id: Id12dbfc04e622b18183e796cc6cc87ccc30a6d55
  fixes: bz#1636631
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* cluster/ec: Change log level to DEBUG for lookup combine (Ashish Pandey, 2018-10-31; 1 file, -1/+1)
  As lookup is not a locked fop, we cannot trust the data received in it to be the same across bricks. Change the log level to DEBUG for the case where lookup finds a difference.
  Change-Id: I39499c44688a2455c7c6c69a798762d045d21b39
  updates: bz#1640066
  BUG: 1640066
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
* cluster/ec: NULL pointer dereferencing clang fix (Sheetal Pamecha, 2018-10-19; 1 file, -2/+0)
  Removing the VALIDATE_OR_GOTO check on "this".
  Updates: bz#1622665
  Change-Id: Ic7cffbb697da814f835d0ad46e25256da6afb406
  Signed-off-by: Sheetal Pamecha <sheetal.pamecha08@gmail.com>
* all: fix warnings on non 64-bit architectures (Xavi Hernandez, 2018-10-10; 11 files, -71/+81)
  When compiling on other architectures many warnings appear. Some of them are actual problems that prevent gluster from working correctly on those architectures.
  Change-Id: Icdc7107a2bc2da662903c51910beddb84bdf03c0
  fixes: bz#1632717
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* Quota related files: use dict_{setn|getn|deln|get_int32n|set_int32n|set_strn} (Yaniv Kaul, 2018-09-26; 1 file, -1/+1)
  A previous patch (https://review.gluster.org/20769) added the key length as a parameter to the dict_* functions, to remove the need to strlen() it. This patch moves some code over to those variants (sketch below). Please review carefully. Compile-tested only!
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
  Change-Id: If4f425a9827be7c36ccfbb9761006ae824a818c6
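  A sketch of the conversion (the key name is only an example; verify the exact dict_*n signatures against the libglusterfs tree in use):

      #include <glusterfs/dict.h>

      static int
      get_limit(dict_t *dict, int32_t *limit)
      {
          /* before: return dict_get_int32(dict, "quota-limit", limit); */
          return dict_get_int32n(dict, "quota-limit",
                                 sizeof("quota-limit") - 1, limit);
      }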
* cluster/ec: variable-length array (VLA) declaration clang fix (Sheetal Pamecha, 2018-09-26; 1 file, -1/+1)
  Problem: the variable-length array size can become zero. The array is now declared with size+1 (illustrated below).
  Updates: bz#1622665
  Change-Id: I98ee8447c87f37c36c49f50058292e8c1757a1f9
  Signed-off-by: Sheetal Pamecha <sheetal.pamecha08@gmail.com>
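  A self-contained illustration of the clang finding and the fix (names invented):

      /* A VLA dimension of 0 is undefined behaviour, so the array is declared
       * one element larger than strictly needed. */
      static void
      init_seen(int count)
      {
          /* before: char seen[count];    UB when count == 0 */
          char seen[count + 1];            /* always at least one element */

          for (int i = 0; i < count; i++)
              seen[i] = 0;
          (void)seen;
      }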
* Land part 2 of clang-format changes (Gluster Ant, 2018-09-12; 21 files, -14016/+13766)
  Change-Id: Ia84cc24c8924e6d22d02ac15f611c10e26db99b4
  Signed-off-by: Nigel Babu <nigelb@redhat.com>
* Land clang-format changes (Gluster Ant, 2018-09-12; 14 files, -1018/+1056)
  Change-Id: I6f5d8140a06f3c1b2d196849299f8d483028d33b
* ec-heal: remove a duplicate definition of alloca0 (Amar Tumballi, 2018-09-10; 1 file, -1/+0)
  The same macro is already defined in common-utils.h, which seems to be a much better place for it.
  Updates: bz#1193929
  Change-Id: I409b719c291102136500b955e5827a550142ed96
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* cluster/ec: Don't update trusted.ec.version if fop succeeds (Ashish Pandey, 2018-09-07; 1 file, -0/+9)
  If a fop has succeeded on all the bricks and we are about to release the lock, there is no need to update the version for the file/entry. All it would do is increase the version from x to x+1 on all the bricks. If that update (x to x+1) fails on some brick, it marks the entry as unhealthy while in reality everything is fine with it. Avoiding this update saves one xattrop at the end of the fops, which decreases the chances of entries ending up in an unhealthy state and also improves performance.
  Change-Id: Id9fca6bd2991425db6ed7d1f36af27027accb636
  fixes: bz#1623759
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
* cluster/ec: Improve logging for some critical error messages (Ashish Pandey, 2018-09-07; 3 files, -14/+55)
  Change-Id: I037e52a3467467b81a1ba5416317870864060d4d
  updates: bz#1615703
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
* cluster/ec: Fix Coverity issue (Ashish Pandey, 2018-08-31; 2 files, -42/+0)
  Fix the following Coverity issues: CID 1382378, 1382459
  https://scan6.coverity.com/reports.htm#v42607/p10714/fileInstanceId=85091670&defectInstanceId=25915064&mergedDefectId=1382459
  https://scan6.coverity.com/reports.htm#v42607/p10714/fileInstanceId=85091670&defectInstanceId=25915063&mergedDefectId=1382378
  Problem: The ASSERT_LOCAL(this, healer) function is supposed to pick the local healer so that we can take advantage of it while healing and reading data. However, healer->local is not used anywhere. This is also not as useful in the context of EC as it is in AFR: in EC we have to read fragments from 4 bricks to heal a bad fragment on another brick.
  Change-Id: Iea8ce127ea02cc84e3823cb2be82a47872217b33
  updates: bz#789278
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
* multiple files: remove unneeded memset() (Yaniv Kaul, 2018-08-29; 1 file, -1/+0)
  This is a squash of multiple commits:
  - contrib/fuse-lib/misc.c: remove an unneeded memset(); all flock fields are properly set. (Change-Id: I8e0512c5a88daadb0e587f545fdb9b32ca8858a2)
  - libglusterfs/src/{client_t|fd|inode|stack}.c: remove some memset() calls; none of them seem needed. (Change-Id: I2be9ccc3a5cb5da51a92af73488cdabd1c527f59)
  - libglusterfs/src/xlator.c: remove an unneeded memset(); all xl->mem_acct members are properly set. (Change-Id: I7f264cd47e7a06255a3f3943c583de77ae8e3147)
  - xlators/cluster/afr/src/afr-self-heal-common.c: remove an unneeded memset(); since we go over the whole array anyway, initialize it properly to either 1 or 0. (Change-Id: Ied4210388976b6a7a2e91cc3de334534d6fef201)
  - xlators/cluster/dht/src/dht-common.c: remove an unneeded memset(); the array is initialized properly since we go over all of it anyway. (Change-Id: Idc436d2bd0563b6582908d7cbebf9dbc66a42c9a)
  - xlators/cluster/ec/src/ec-helpers.c: remove an unneeded memset(); same reasoning. (Change-Id: I81bf971f7fcecb4599e807d37f426f55711978fa)
  - xlators/mgmt/glusterd/src/glusterd-volgen.c: remove some memset() calls; none of them seem needed. (Change-Id: I476ea59ba53546b5153c269692cd5383da81ce2d)
  - xlators/mgmt/glusterd/src/glusterd-geo-rep.c: read() in 4K blocks; the current 1K seems small and 4K is usually better on Linux. Also remove a memset() between reads that doesn't seem needed. (Change-Id: I5fb7950c92d282948376db14919ad12e589eac2b)
  - xlators/storage/posix/src/posix-{gfid-path|inode-fd-ops}.c: remove memset() before the sys_*xattr() functions; there is no reason to memset the buffer passed to sys_llistxattr(), sys_lgetxattr(), sys_flistxattr() or sys_fgetxattr(). (Note: it's unclear why the sys_*xattr() functions are called with XATTR_VAL_BUF_SIZE-1 instead of XATTR_VAL_BUF_SIZE.) (Change-Id: Ief2103b56ba6c71e40ed343a93684eef6b771346)
  Each sub-commit was compile-tested only and signed off by Yaniv Kaul <ykaul@redhat.com>.
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* cluster/ec: Prevent a possible out-of-bounds read (Vijay Bellur, 2018-08-27; 1 file, -0/+1)
  Addresses CID 1370939: in ec_code_x64_epilog() there is a possibility of reading from an incorrect index of the ec_code_x64_regmap array.
  Change-Id: Ib8a228bbe13631188343634b2bde5919cdaab5a4
  Updates: bz#789278
  Signed-off-by: Vijay Bellur <vbellur@redhat.com>
* multiple files: move from strlen() to sizeof() (Yaniv Kaul, 2018-08-25; 4 files, -8/+8)
  {ec-heal|ec-combine|ec-helpers|ec-inode-read}.c: for const strings, do the size calculation at compile time instead of at runtime (example below). Compile-tested only!
  Change-Id: If92ba0a7a20f64b898d01c6e3b6708190ca93e04
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
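  The pattern in a self-contained form (the key below is only an example of a constant string; in the real patch the EC xattr names are the ones affected):

      #include <string.h>

      #define EXAMPLE_KEY "trusted.ec.size"

      static size_t
      key_len_before(void)
      {
          return strlen(EXAMPLE_KEY);       /* computed at runtime */
      }

      static size_t
      key_len_after(void)
      {
          return sizeof(EXAMPLE_KEY) - 1;   /* folded to a compile-time constant */
      }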
* cluster/ec: FORWARD_NULL coverity fix (Sunil Kumar Acharya, 2018-08-16; 2 files, -1/+5)
  Fixing FORWARD_NULL Coverity errors in EC.
  CID: 1394650
  BUG: 789278
  Change-Id: I52c99dac3483ca31a86cd7e3a959d4010b195f32
  updates: bz#789278
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
* coverity: Ignore most of SECURE_TEMP issues (ShyamsundarR, 2018-07-27; 1 file, -0/+1)
  mkstemp, as per the Linux man page, uses 0600 as the permission bits when creating the file. This is safe, and hence a Coverity warning that should be ignored. Further, all our daemons are mostly multi-threaded programs, and we cannot set and unset umask at will in a multi-threaded program to address the Coverity issue.
  This change attempts to nudge Coverity into ignoring this warning by using the pattern
    /* coverity[EVENT_TAG_NAME] ... */
    <line of code that has the issue>
  This commit is an experiment: if the next Coverity report after the merge ignores these errors, the above pattern (found via an internet search) works and can be applied to certain other warnings as well. (An example of the annotation is shown below.)
  Change-Id: I73a184ce1a54dd9e66542952b1190a74438c826a
  Updates: bz#789278
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
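  What the annotation looks like in front of an mkstemp() call (the event tag name secure_temp is an assumption; it has to match the tag used by the Coverity checker):

      #include <stdlib.h>

      static int
      make_temp(char *template_path)   /* e.g. a "...XXXXXX" template string */
      {
          /* coverity[secure_temp] mkstemp uses 0600 as the mode and is safe */
          return mkstemp(template_path);
      }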
* All: run codespell on the code and fix issues. (Yaniv Kaul, 2018-07-22; 9 files, -9/+9)
  Please review; it's not always just the comments that were fixed. I had to revert, of course, all the calls to creat() that had been changed to create(). Compile-tested only!
  Change-Id: I7d02e82d9766e272a7fd9cc68e51901d69e5aab5
  updates: bz#1193929
  Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* md-cache: Do not invalidate cache post set/remove xattr (Poornima G, 2018-07-11; 1 file, -0/+63)
  Since the setxattr and removexattr fop callbacks do not carry a poststat, the stat cache was being invalidated in the setxattr/removexattr cbk, so subsequent lookups could not be served from the cache. To prevent this invalidation, md-cache is modified to receive the poststat in the set/removexattr cbk through the dict.
  Co-authored with Xavi Hernandez.
  Change-Id: I6b946be2d20b807e2578825743c25ba5927a60b4
  fixes: bz#1586018
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
  Signed-off-by: Poornima G <pgurusid@redhat.com>
* afr,ec: Print if the subvolume is up in statedump (Pranith Kumar K, 2018-07-03; 1 file, -0/+1)
  fixes: bz#1597156
  Change-Id: I323eb9190e40b12df216698dcdba74a6d336beeb
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* cluster/ec: Fix pre-op xattrop management (Xavi Hernandez, 2018-05-23; 3 files, -32/+66)
  Multiple pre-op xattrops can be processed simultaneously. In the cbk it was checked whether the fop was waiting for some specific data (like size and version) and, if so, it was assumed that this answer should contain that data. This is not true, since a fop can be waiting for data that actually comes from the xattrop of another fop. This patch differentiates between needing some information and providing it.
  This is related to parallel writes: disabling them fixed the problem, but also prevented concurrent reads. A change has been made so that disabling parallel writes still allows parallel reads.
  Fixes: bz#1578325
  Change-Id: I74772ad6b80b7b37805da93d5ec3ae099e96b041
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* libglusterfs: Capture the dict response in syncop_xattrop_cbk (karthik-us, 2018-04-27; 1 file, -1/+1)
  Problem: Currently it is not possible to capture the xattr values that are set on the bricks by calling syncop_(f)xattrop, because the response dict is not assigned to any dictionary.
  Fix: In the xattrop callback, capture the response dict and send it back to the caller if it was requested.
  Change-Id: I9de9bcd97d6008091c9b060bcca3676cb9ae8ef9
  fixes: bz#1572076
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
* cluster/ec: Turn ON the stripe-cache option by default (Ashish Pandey, 2018-04-06; 1 file, -1/+1)
  Change-Id: I0a290396c30c635b13ee73004d20259efb76a954
  fixes: bz#1563945
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
* cluster/ec: send list-node-uuids request to all subvolumes (Xavi Hernandez, 2018-03-28; 1 file, -1/+1)
  The xattr trusted.glusterfs.list-node-uuids was only sent to a single subvolume. This was returning null uuids from the other subvolumes as if they were down. This fix forces that xattr to be requested from all subvolumes.
  Change-Id: If62eb39a6857258923ba625e153d4ad79018ea2f
  fixes: bz#1561406
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* cluster/ec: fix SHD crash for null gfid's (Xavi Hernandez, 2018-03-21; 1 file, -0/+8)
  When the self-heal daemon is doing a full sweep it uses readdirp to get extra stat information for each file. This information is obtained in two steps by the posix xlator: first the directory is read to get the entries, and then each entry is stat'ed to get additional info. Between these two steps it is possible that the file is removed by the user, so we get an error and the stat info is left empty. EC's heal daemon was using the gfid blindly, causing an assert failure when protocol/client tried to encode the gfid.
  To fix the problem a check has been added: if we detect a null gfid, we simply ignore it and continue healing (sketch below).
  Change-Id: I2e4acdcecd0b6951055e50d1c37d686a2186a228
  BUG: 1558016
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
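  The shape of the added guard (simplified, not the exact ec-heal.c hunk; gf_uuid_is_null() is the libglusterfs uuid helper):

      #include <glusterfs/compat-uuid.h>   /* uuid_t, gf_uuid_is_null() */

      static int
      should_heal_entry(uuid_t gfid)
      {
          if (gf_uuid_is_null(gfid))
              return 0;   /* file vanished between readdir and stat: skip it */
          return 1;
      }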
* cluster/ec: Change default read policy to gfid-hash (Ashish Pandey, 2018-03-14; 1 file, -1/+1)
  Problem: Whenever we read data from a file over NFS, NFS reads more data than requested and caches it. Based on the stat information it decides whether the cached/pre-read data is still valid. Consider a 4+2 EC volume with all bricks on different nodes. In EC, with the round-robin read policy, reads are sent to different sets of data bricks. This balances the read fops across all the bricks and avoids overloading (heating up) the same set of bricks. Due to small differences in clock speed, it is possible to get minor differences in atime, mtime or ctime from different bricks. That can cause a different stat to be returned to NFS, based on which NFS discards cached/pre-read data that has actually not changed and could have been used.
  Solution: Change the default read policy for EC to gfid-hash, which forces all reads of a file to go to the same set of bricks (see the option sketch below).
  Change-Id: I825441cc519e94bf3dc3aa0bd4cb7c6ae6392c84
  BUG: 1554743
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
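  Approximate shape of the option entry after the change (see ec.c for the authoritative definition; the description text here is abbreviated and the variable name is only for illustration):

      #include <glusterfs/xlator.h>

      volume_option_t example_read_policy_option = {
          .key           = {"read-policy"},
          .type          = GF_OPTION_TYPE_STR,
          .value         = {"round-robin", "gfid-hash"},
          .default_value = "gfid-hash",    /* previously "round-robin" */
          .description   = "Selects which set of bricks serves reads for a file.",
      };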
* cluster/ec: avoid delays in self-heal (Xavi Hernandez, 2018-03-14; 4 files, -48/+93)
  Self-heal creates a thread per brick to sweep the index looking for files that need to be healed. These threads are started before the volume comes online, so nothing is done but waiting for the next sweep. This happens once per minute.
  When a replace-brick command is executed, the new graph is loaded and all index sweeper threads are started. When all bricks have reported, a getxattr request is sent to the root directory of the volume. This causes a heal on it (because the new brick doesn't have good data) and marks its contents as pending heal. That work is only picked up by the index sweeper thread on the next round, one minute later. This patch solves the problem by waking all index sweeper threads after a successful check on the root directory (see the sketch below).
  Additionally, the index sweeper thread scans the index directory sequentially, but it can happen that after healing a directory entry, more index entries are created and skipped by the current directory scan. Those remaining entries are processed on the next round, one minute later, and the same can happen again, so the heal runs in bursts and takes a long time to finish, especially on volumes with many directory levels. This patch solves that by immediately restarting the index sweep when a directory has been healed.
  Change-Id: I58d9ab6ef17b30f704dc322e1d3d53b904e5f30e
  BUG: 1547662
  Signed-off-by: Xavi Hernandez <jahernan@redhat.com>
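  A self-contained analogue of the wake-up (plain pthreads, not the actual ec-heald.c code): each index sweeper sleeps on a condition variable with a 60-second timeout, and a successful check on the volume root wakes them all immediately instead of letting them wait out the remainder of the minute.

      #include <pthread.h>
      #include <stdbool.h>

      typedef struct {
          pthread_mutex_t lock;
          pthread_cond_t  wake;
          bool            rerun;   /* newer index entries may exist */
      } sweeper_t;

      static void
      wake_all_sweepers(sweeper_t *s)
      {
          pthread_mutex_lock(&s->lock);
          s->rerun = true;                  /* also forces an immediate re-scan */
          pthread_cond_broadcast(&s->wake); /* every brick's sweeper wakes up */
          pthread_mutex_unlock(&s->lock);
      }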
* cluster/ec: Do lock conflict check correctly for wait-list (Pranith Kumar K, 2018-02-01; 1 file, -8/+15)
  Problem: ec_link_has_lock_conflict() traverses only owner_list, but the function is also called with wait_list.
  Fix: Modify ec_link_has_lock_conflict() to traverse the lists correctly, and update the callers to reflect the change.
  BUG: 1540669
  Change-Id: Ibd7ea10f4498e7c2761f9a6faac6d5cb7d750c91
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* cluster/ec: EC options for GD2 (Sunil Kumar Acharya, 2018-01-22; 1 file, -1/+34)
  Updates #302
  Change-Id: I31b4648f7b1a394fceece5cba8120c579c66edd9
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>