path: root/xlators/cluster
Commit message (Author, Date; Files changed, Lines changed)
* cluster/ec: EC options for GD2 (Sunil Kumar Acharya, 2018-01-22; 1 file, -1/+34)
  Updates #302
  Change-Id: I31b4648f7b1a394fceece5cba8120c579c66edd9
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
* afr: add quorum checks in post-op (Ravishankar N, 2018-01-19; 1 file, -0/+29)
  afr relies on pending changelog xattrs to identify sources and sinks, and these xattrs
  are set in the post-op. So if the post-op fails, we need to unwind the write txn with a
  failure.
  Change-Id: I0f019ac03890108324ee7672883d774918b20be1
  BUG: 1506140
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
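A minimal, self-contained sketch of the kind of check described above, with hypothetical names rather than the actual afr data structures: count the bricks on which the post-op succeeded and fail the write transaction if they no longer form a quorum.

    #include <errno.h>
    #include <stdbool.h>

    /* Hypothetical illustration: fail a write transaction if the post-op
     * (pending-xattr update) did not succeed on a quorum of bricks. */
    static int
    postop_check_quorum(const bool *postop_ok, int brick_count)
    {
        int ok = 0;

        for (int i = 0; i < brick_count; i++)
            if (postop_ok[i])
                ok++;

        /* Simple majority quorum; afr's real policy is configurable. */
        if (ok < brick_count / 2 + 1)
            return -ENOTCONN; /* caller unwinds the write with a failure */

        return 0;
    }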
* cluster/afr: Adding option to take full file lock (karthik-us, 2018-01-19; 3 files, -3/+15)
  Problem: In replica 3 volumes there is a possibility of ending up in a split-brain
  scenario when multiple clients write to non-overlapping regions of the same file in
  parallel.
  Scenario:
  - Initially all the copies are good and all the clients see the data readables as all
    good.
  - Client C0 performs write W1, which fails on brick B0 and succeeds on the other two
    bricks.
  - C1 performs write W2, which fails on B1 and succeeds on the other two bricks.
  - C2 performs write W3, which fails on B2 and succeeds on the other two bricks.
  - All three writes happen in parallel and fall on different ranges, so afr takes
    granular locks and performs the writes in parallel. Since each client had its data
    readables as good, it does not see the file going into split-brain in the
    in_flight_split_brain check, and hence performs the post-op marking the pending
    xattrs. Now all the bricks are blamed by each other, ending up in split-brain.
  Fix: Add an option to take either a full lock or a range lock on files during data
  transactions, to prevent the possibility of ending up in split-brain. With this change
  files take a full lock during IO by default. To use the old range-lock behaviour, set
  "cluster.full-lock" to "no".
  Change-Id: I7893fa33005328ed63daa2f7c35eeed7c5218962
  BUG: 1535438
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
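A minimal sketch of the idea behind cluster.full-lock, using the standard struct flock rather than afr's own lock structures (names here are illustrative): a full lock covers the whole file (start 0, length 0), while a range lock covers only the region touched by the write.

    #include <fcntl.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Hypothetical sketch: build either a whole-file or a range write lock. */
    static void
    build_write_lock(struct flock *lk, int full_lock, off_t offset, size_t size)
    {
        memset(lk, 0, sizeof(*lk));
        lk->l_type   = F_WRLCK;
        lk->l_whence = SEEK_SET;

        if (full_lock) {
            lk->l_start = 0;
            lk->l_len   = 0;            /* 0 means "to end of file" */
        } else {
            lk->l_start = offset;
            lk->l_len   = (off_t)size;  /* only the written region */
        }
    }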
* core: fix some of the dict_{get,set} with proper APIs (Amar Tumballi, 2018-01-17; 8 files, -13/+12)
  Updates #220
  Change-Id: I6e25dbb69b2c7021e00073e8f025d212db7de0be
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* locks: added inodelk/entrylk contention upcall notifications (Xavier Hernandez, 2018-01-16; 4 files, -94/+239)
  The locks xlator is now able to send a contention notification to the current owner of
  a lock. This is only a notification, which can be used to improve the performance of
  some client-side operations that might benefit from extended duration of lock
  ownership. Nothing is done if the lock owner decides to ignore the message and not
  release the lock. For forced release of acquired resources, leases must be used.
  Change-Id: I7f1ad32a0b4b445505b09908a050080ad848f8e0
  Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
* cluster/afr: Fixing the flaws in arbiter becoming source patch (karthik-us, 2018-01-13; 7 files, -180/+271)
  Problem: Setting the write_subvol value to read_subvol in case of a metadata
  transaction during pre-op (commit 19f9bcff4aada589d4321356c2670ed283f02c03) might lead
  to the original problem of the arbiter becoming source.
  Scenario:
  1) All bricks are up and good.
  2) Two writes w1 and w2 are in progress in parallel.
  3) ctx->read_subvol is good for all the subvolumes.
  4) w1 succeeds on brick0 and fails on brick1; the on-disk post-op is yet to happen.
  5) A read/lookup comes on the same file and refreshes read_subvols back to all good.
  6) A metadata transaction happens, which assigns ctx->write_subvol from
     ctx->read_subvol, which is all good.
  7) w2 succeeds on brick1 and fails on brick0, updating the bricks in reverse order and
     leading to the arbiter becoming source.
  Fix: Instead of setting ctx->write_subvol to ctx->read_subvol in the pre-op stage when
  there is a metadata transaction, check in __afr_set_in_flight_sb_status() whether it is
  a data or metadata transaction. Use the value of ctx->write_subvol for data
  transactions and ctx->read_subvol for other transactions. With this patch we assign
  ctx->write_subvol in afr_transaction_perform_fop() with the on-disk value, instead of
  assigning it in afr_changelog_pre_op() with the in-memory value.
  Change-Id: Id2025a7e965f0578af35b1abaac793b019c43cc4
  BUG: 1482064
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
* cluster/dht: Update options for gd2 (N Balachandran, 2018-01-12; 1 file, -15/+40)
  Update DHT options for GD2.
  Updates gluster/glusterfs#302
  Change-Id: Ia597fe364e97edd7bcf72d89f4ccdd50713a8837
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* cluster/dht: Change datatype of search_unhashed variable (Varsha Rao, 2018-01-10; 1 file, -1/+1)
  The variable search_unhashed is of boolean type; change it to an integer type. This
  fixes the "increment of a boolean expression" warning.
  BUG: 1531987
  Change-Id: Ibf153f6a9ad704da38bff346b6a21a71323ed9bb
  Signed-off-by: Varsha Rao <varao@redhat.com>
* cluster/ec: OpenFD heal implementation for EC (Sunil Kumar Acharya, 2018-01-05; 7 files, -91/+243)
  Existing EC code doesn't try to heal the OpenFD to avoid unnecessary healing of the
  data later. The fix implements healing of open FDs before carrying out file operations
  on them, by making an attempt to open the FDs on the required up nodes.
  BUG: 1431955
  Change-Id: Ib696f59c41ffd8d5678a484b23a00bb02764ed15
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
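A rough, hypothetical sketch of the open-FD heal idea with generic types (not the actual EC structures): before winding a fop on an fd, try to open the file on every up brick where this fd is not yet open, so the fop can be served on all available bricks.

    #include <stdbool.h>

    struct fd_ctx_sketch {
        int   nbricks;
        bool *brick_up;   /* brick is currently reachable      */
        bool *fd_open;    /* fd already opened on this brick   */
    };

    /* Attempt to open the fd on every up brick that is missing it;
     * open_on_brick is a caller-supplied callback returning 0 on success. */
    static void
    heal_open_fd(struct fd_ctx_sketch *ctx, int (*open_on_brick)(int brick))
    {
        for (int i = 0; i < ctx->nbricks; i++) {
            if (ctx->brick_up[i] && !ctx->fd_open[i]) {
                if (open_on_brick(i) == 0)
                    ctx->fd_open[i] = true;
            }
        }
    }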
* cluster/dht: Use percentages for space check (N Balachandran, 2018-01-02; 1 file, -5/+20)
  With heterogeneous bricks now being supported in DHT, we could run into issues where
  files are not migrated even though there is sufficient space in newly added bricks
  which just happen to be considerably smaller than the older bricks. Using percentages
  instead of absolute available space for the space checks can mitigate that to some
  extent. Marking bug-1247563.t as bad, as it used to depend on the easier code to
  prevent a file from migrating. This will be removed once we find a way to force a file
  migration failure.
  Change-Id: I3452520511f304dbf5af86f0632f654a92fcb647
  BUG: 1529440
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
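One possible shape of a percentage-based space check, purely as an illustration (the helper and threshold are hypothetical, not the actual DHT logic): accept a target brick only if its used space, after receiving the file, stays below a percentage threshold, so a small but mostly empty brick remains a valid target.

    #include <stdbool.h>
    #include <stdint.h>

    static bool
    target_has_room(uint64_t free_bytes, uint64_t total_bytes,
                    uint64_t file_size, double max_used_pct)
    {
        if (total_bytes == 0 || free_bytes < file_size)
            return false;

        /* Percentage of the brick that would be used after the migration. */
        double used_after = 100.0 *
            (double)(total_bytes - free_bytes + file_size) / (double)total_bytes;

        return used_after <= max_used_pct;
    }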
* posix: Introduce flags for validity of iatt members (Ravishankar N, 2017-12-29; 2 files, -5/+5)
  v1 of the patch started off as adding new fields to iatt that can be filled up using
  statx, but the discussions were more around introducing masks to check the validity of
  different fields from a RIO perspective. To that extent, I have dropped the statx call
  in this version and introduced a 64-bit mask for the existing fields. The masks I have
  defined are similar to the statx() flags' masks. I have *not* changed iatt_to_stat()
  to use the macros IATT_TYPE_VALID, IATT_GFID_VALID etc. before blindly copying from
  struct iatt to struct stat. Also fixed warnings in xlators caused by the
  atime/mtime/ctime seconds field change from uint32_t to int64_t.
  Change-Id: I4ac614f1e8d5c8246fc99d5bc2d2a23e7941512b
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
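An illustrative sketch of the validity-mask idea with hypothetical names (the real IATT_*_VALID macros and struct iatt layout differ): each bit in a 64-bit mask records whether the corresponding member of the structure holds a valid value.

    #include <stdint.h>

    #define IA_SKETCH_TYPE_VALID   (1ULL << 0)
    #define IA_SKETCH_GFID_VALID   (1ULL << 1)
    #define IA_SKETCH_SIZE_VALID   (1ULL << 2)
    #define IA_SKETCH_MTIME_VALID  (1ULL << 3)

    struct iatt_sketch {
        uint64_t valid;       /* bitwise OR of the *_VALID flags above      */
        uint64_t size;
        int64_t  mtime;       /* seconds, widened from uint32_t to int64_t  */
        uint32_t mtime_nsec;
    };

    /* Returns nonzero only if every bit in "flags" is set in ia->valid. */
    static inline int
    iatt_sketch_has(const struct iatt_sketch *ia, uint64_t flags)
    {
        return (ia->valid & flags) == flags;
    }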
* mgmt/glusterd: Adding validation for setting quorum-count (karthik-us, 2017-12-29; 1 file, -1/+2)
  In a replicated volume it was possible to set the quorum-count to any value in the
  range [1 - 2147483647]. This patch adds validation so that the quorum-count set on a
  volume can be at most the replica_count.
  Change-Id: I13952f3c6cf498c9f2b91161503fc0fba9d94898
  BUG: 1529515
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
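A minimal sketch of the described validation (a hypothetical helper, not the actual glusterd option-validation code): quorum-count must lie between 1 and the volume's replica count.

    #include <errno.h>

    static int
    validate_quorum_count(int quorum_count, int replica_count)
    {
        /* Reject anything outside [1, replica_count]. */
        if (quorum_count < 1 || quorum_count > replica_count)
            return -EINVAL;
        return 0;
    }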
* cluster/dht: Add migration checks to dht_(f)xattrop (N Balachandran, 2017-12-26; 5 files, -47/+326)
  The dht_(f)xattrop implementation did not implement the migration phase1/phase2 checks,
  which could cause issues with rebalance on sharded volumes. This does not solve the
  issue where fops may reach the target out of order.
  Change-Id: I2416fc35115e60659e35b4b717fd51f20746586c
  BUG: 1471031
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* cluster/ec: Fix possible shift overflow (Xavier Hernandez, 2017-12-22; 3 files, -10/+12)
  A coverity scan has revealed a potential shift overflow while scanning the bitmap of
  available subvolumes. The actual overflow cannot happen, but the test used to control
  the limit has been changed to make it explicit.
  Change-Id: Ieb55f010bbca68a1d86a93e47822f7c709a26e83
  BUG: 789278
  Signed-off-by: Xavier Hernandez <jahernan@redhat.com>
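An illustrative sketch of the pattern (not the actual EC code): bound the shift amount explicitly so that 1ULL is never shifted by 64 or more bits, even when the surrounding logic already guarantees it, which makes the limit visible to static analysis.

    #include <stdint.h>

    /* Return the index of the first set bit in a subvolume bitmap, or -1. */
    static int
    first_set_subvol(uint64_t mask, int subvol_count)
    {
        /* Explicit limit: never test bits beyond the width of the mask. */
        if (subvol_count > 64)
            subvol_count = 64;

        for (int i = 0; i < subvol_count; i++) {
            if (mask & (1ULL << i))
                return i;
        }
        return -1;
    }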
* cluster/ec: Change [f]getxattr to parallel-dispatch-one (Pranith Kumar K, 2017-12-22; 4 files, -5/+23)
  At the moment in EC, [f]getxattr operations wait to acquire a lock while other
  operations are in progress, even when they come from the same mount that already holds
  a lock on the file/directory. This happens because [f]getxattr operations follow the
  model where the operation is wound on 'k' of the bricks and the answers are matched to
  make sure the data returned is the same on all of them. This consistency check requires
  that no other operations are on-going while [f]getxattr operations are wound to the
  bricks.
  We can perform [f]getxattr in another way as well: find the good_mask from the lock
  that is already granted, wind the operation on any one of the good bricks, and unwind
  the answer after adjusting size/blocks to the parent xlator. Since we take good_mask
  into account, the reply we get will be either before or after a possible on-going
  operation. Using this method, the operation doesn't need to wait for completion of
  on-going operations, which could take a long time (for example with slow disks while
  writes are in progress). Thus we reduce the time to serve [f]getxattr requests.
  I changed [f]getxattr to dispatch-one and added extra logic in
  ec_link_has_lock_conflict() to not have any conflicts for fops with EC_MINIMUM_ONE as
  fop->minimum, to achieve the effect described above. Modified scripts to make sure the
  READ fop is received in EC to trigger heals.
  Updates gluster/glusterfs#368
  Change-Id: I3b4ebf89181c336b7b8d5471b0454f016cdaf296
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* cluster/ec: Add default value for the redundancy option (Kamal Mohanan, 2017-12-21; 1 file, -0/+1)
  Updates #302
  Change-Id: Ifccc6a64c2b2a3b3a0133e6c8042cdf61a0838ce
  Signed-off-by: Kamal Mohanan <kmohanan@redhat.com>
* rchecksum/fips: Replace MD5 usage to enable fips support (Kotresh HR, 2017-12-21; 3 files, -4/+4)
  rchecksum uses MD5, which is not FIPS compliant. Hence SHA256 is used instead.
  Updates: #230
  Change-Id: I7fad016fcc2a9900395d0da919cf5ba996ec5278
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
* stripe, quiesce: volume option fixes (Amar Tumballi, 2017-12-20; 1 file, -4/+13)
  Updates: #302
  Change-Id: I8f84260ba007f2afc7fe3b7c15c06069942ad6d1
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* dht: Fill first_up_subvol before use in dht_opendir (Poornima G, 2017-12-15; 1 file, -0/+5)
  Reported by: Sam McLeod
  Change-Id: Ic8f9b46b173796afd70aff1042834b03ac3e80b2
  BUG: 1512437
  Signed-off-by: Poornima G <pgurusid@redhat.com>
* all: Simplify component message id's definition (Xavier Hernandez, 2017-12-14; 3 files, -2079/+281)
  This patch creates a new way of defining message IDs that is easier and less error
  prone, because it doesn't require so many manual changes each time a new component is
  defined or a new message is created.
  Change-Id: I71ba8af9ac068f5add7e74f316a2478bc991c67b
  Signed-off-by: Xavier Hernandez <jahernan@redhat.com>
* glusterfs: Use gcc builtin ATOMIC operator to increase/decrease refcount (Mohit Agrawal, 2017-12-12; 3 files, -37/+14)
  Problem: In the glusterfs code base we call mutex_lock/unlock to take a
  reference/dereference for an object. Sometimes it can also be a reason for lock
  contention.
  Solution: There is no need to use a mutex to increase/decrease a ref counter; use the
  gcc builtin ATOMIC operations instead.
  Test: I have not yet measured how much performance glusterfs itself gains after
  applying this patch, but I have tested the same with the small program below (mutex
  and atomic versions) and got a good difference.

    /* Includes and the declaration of "glob" were lost in the original
     * paste; they are restored here so the test program compiles. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <pthread.h>

    static int64_t glob;
    static int numOuterLoops;

    static void *
    threadFunc(void *arg)
    {
        int j;

        for (j = 0; j < numOuterLoops; j++) {
            __atomic_add_fetch(&glob, 1, __ATOMIC_ACQ_REL);
        }
        return NULL;
    }

    int
    main(int argc, char *argv[])
    {
        int s, j;
        int numThreads;
        pthread_t *thread;
        int64_t n = 0;

        if (argc < 2) {
            printf("Please provide 2 args: Num of threads && Outer Loop\n");
            exit(-1);
        }
        numThreads = atoi(argv[1]);
        numOuterLoops = atoi(argv[2]);
        printf("\tthreads: %d; outer loops: %d;\n", numThreads, numOuterLoops);

        thread = calloc(numThreads, sizeof(pthread_t));
        if (thread == NULL) {
            printf("calloc error so exit\n");
            exit(-1);
        }

        __atomic_store(&glob, &n, __ATOMIC_RELEASE);

        for (j = 0; j < numThreads; j++) {
            s = pthread_create(&thread[j], NULL, threadFunc, NULL);
            if (s != 0) {
                printf("pthread_create failed so exit\n");
                exit(-1);
            }
        }
        for (j = 0; j < numThreads; j++) {
            s = pthread_join(thread[j], NULL);
            if (s != 0) {
                printf("pthread_join failed so exit\n");
                exit(-1);
            }
        }
        printf("glob value is %ld\n", __atomic_load_n(&glob, __ATOMIC_RELAXED));
        exit(0);
    }

  Results:
    time ./thr_count 800 800000
      threads: 800; outer loops: 800000;
      glob value is 640000000
      real 1m10.288s   user 0m57.269s   sys 3m31.565s

    time ./thr_count_atomic 800 800000
      threads: 800; outer loops: 800000;
      glob value is 640000000
      real 0m20.313s   user 1m20.558s   sys 0m0.028s

  Change-Id: Ie5030a52ea264875e002e108dd4b207b15ab7cc7
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* cluster/dht: fix memory leaks in rebalance (Susant Palai, 2017-12-11; 1 file, -12/+19)
  From code reading it was found that in gf_defrag_process_dir, GF_FREE was called
  directly on dir_dfmeta->equeue, leaking the lists of entries read from all the local
  subvols in case of a failure. This patch frees the entries read from all the local
  subvols.
  Change-Id: If5e8f557372a8fc2af86628b401e8de1b54986a1
  BUG: 1430305
  Signed-off-by: Susant Palai <spalai@redhat.com>
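An illustrative sketch of the leak pattern being fixed, using a generic singly linked list rather than the actual dir_dfmeta structures: freeing only the container of list heads leaks the entries; each entry has to be freed first.

    #include <stdlib.h>

    struct entry_sketch {
        struct entry_sketch *next;
        char                *name;
    };

    /* Free every entry in every per-subvol queue, then the container. */
    static void
    free_entry_lists(struct entry_sketch **queues, int nqueues)
    {
        for (int i = 0; i < nqueues; i++) {
            struct entry_sketch *e = queues[i];
            while (e) {
                struct entry_sketch *next = e->next;
                free(e->name);
                free(e);
                e = next;
            }
        }
        free(queues);   /* only now is it safe to drop the container itself */
    }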
* dht: Send an event when disks get full (Ankit raj, 2017-12-09; 1 file, -4/+22)
  Send an event if DHT determines that a subvol is getting full or running out of inodes.
  Change-Id: Ie026f4ee1832b5df1e80b16cb949b2cc31a25d6f
  Bug: 1440659
  Signed-off-by: Ankit raj <anraj@redhat.com>
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* dht/crypt/tier: Fix use of booleans as integers (ShyamsundarR, 2017-12-06; 2 files, -16/+6)
  BUG: 1520974
  Change-Id: I19ea40c888e88a7a4ac271168ed1820c2075be93
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
* cluster/dht: Fix build error due to switch statement on a boolean (Shreyas Siravara, 2017-12-05; 1 file, -16/+5)
  Change-Id: Idf672b435e389baada732f609398404479306909
  BUG: 1520974
* cluster/ec: Fix bugs in stripe-cache feature (Ashish Pandey, 2017-12-05; 2 files, -1/+4)
  1 - Fix a bug in ec_update_stripe() that prevented some stripes from being updated
      after a write.
  2 - Handle the case in which a file does not exist and we write at an unaligned offset
      and size: the last stripe on which "end" falls should also be cached.
  Change-Id: I069cb4be1c8d59c206e3b35a6991e1fbdbc9b474
  BUG: 1520758
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
* cluster/dht: populate inode in dentry for single subvolume dht (Raghavendra G, 2017-12-02; 2 files, -1/+69)
  ... in the readdirp response, if the dentry points to a directory inode. This is a
  special case where the entire layout is stored in one single subvolume, and hence
  there is no need for a lookup to construct the layout.
  Change-Id: I44fd951e2393ec9dac2af120469be47081a32185
  BUG: 1492625
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
* cluster/dht: don't overfill the buffer in readdir(p) (Raghavendra G, 2017-12-02; 1 file, -3/+18)
  Superfluous dentries that cannot fit in the buffer size provided by the kernel are
  thrown away by fuse-bridge. This means:
  * the next readdir(p) seen by readdir-ahead will have the offset of a dentry returned
    in a previous readdir(p) response. When readdir-ahead detects a non-monotonic offset
    it turns itself off, which can result in poor readdir performance.
  * readdirp can be cpu-intensive on the brick, and there is no point reading all those
    dentries just for them to be thrown away by fuse-bridge.
  So the best strategy is to fill the buffer optimally - neither overfill nor underfill.
  Change-Id: Idb3d85dd4c08fdc4526b2df801d49e69e439ba84
  BUG: 1492625
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
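A small sketch of the "fill optimally" idea with hypothetical types (not the actual readdirp code): stop adding dentries once the next one would no longer fit in the reply buffer, instead of reading extra entries that would only be thrown away.

    #include <stddef.h>

    struct dentry_sketch {
        size_t reply_size;   /* bytes this entry occupies in the reply */
    };

    /* Pack as many dentries as fit into buf_size; returns how many were
     * taken and reports the bytes actually used. */
    static size_t
    fill_readdir_reply(const struct dentry_sketch *entries, size_t count,
                       size_t buf_size, size_t *filled_bytes)
    {
        size_t used = 0, taken = 0;

        for (size_t i = 0; i < count; i++) {
            if (used + entries[i].reply_size > buf_size)
                break;                      /* would overfill: stop here */
            used += entries[i].reply_size;
            taken++;
        }

        *filled_bytes = used;
        return taken;
    }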
* Tier: Stop tierd for detach start (hari gowtham, 2017-12-01; 1 file, -3/+10)
  Problem: tierd was stopped only after detach commit, which makes the detach take
  longer. The detach demotes the files to the cold brick, and if the promotion frequency
  is hit, tierd starts to promote files to the hot tier again.
  Fix: Stop tierd after detach start so the files get demoted faster.
  Note: is_tier_enabled was not maintained properly; that has been fixed too, and some
  code cleanup has been done.
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Change-Id: I532f7410cea04fbb960105483810ea3560ca149b
  BUG: 1446381
* dht: coverity fix in dht-rebalance.c (karthik-us, 2017-11-30; 1 file, -1/+0)
  Fixed the UNUSED_VALUE warning in dht_migrate_file.
  Issue ID: 526
  From: http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-11-30-eb013e4c
  Change-Id: I37395e8ce7088742501424fcce918f0ee8ab4f3d
  BUG: 789278
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
* cluster/ec: EC DISCARD doesn't punch hole properly (Sunil Kumar Acharya, 2017-11-28; 1 file, -2/+4)
  Problem: A DISCARD operation on an EC volume was punching a hole smaller than the
  specified size in some cases.
  Solution: EC was not handling the punch hole for the tail part in some cases. The code
  has been updated to handle it appropriately.
  BUG: 1516206
  Change-Id: If3e69e417c3e5034afee04e78f5f78855e65f932
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
* cluster/ec: Prevent self-heal to work after PARENT_DOWN (Xavier Hernandez, 2017-11-28; 2 files, -28/+52)
  When the volume is being stopped, a PARENT_DOWN event is received. This instructs EC to
  wait until all pending operations are completed before declaring itself down. However,
  heal operations were ignored and allowed to continue even after EC had said it was
  down. This may cause unexpected results and crashes.
  To solve this, heal operations are now treated exactly like any other operation, and EC
  won't propagate PARENT_DOWN until all operations, including healing, are complete. To
  avoid big delays if this happens in the middle of a big heal, a check has been added to
  quit the current heal if shutdown is detected.
  Change-Id: I26645e236ebd115eb22c7ad4972461111a2d2034
  BUG: 1515266
  Signed-off-by: Xavier Hernandez <jahernan@redhat.com>
* afr: volume option fixes for GD2 (Ravishankar N, 2017-11-27; 2 files, -39/+121)
  This patch takes care of the volume options exposed via the CLI.
  Updates #302
  Change-Id: I6fd1645604928f6b9700e2425af4147cc6446a3a
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* afr: coverity fixes (Ravishankar N, 2017-11-24; 5 files, -24/+24)
  1. afr_discover_do: COPY_PASTE_ERROR
  2. afr_fav_child_reset_sink_xattrs_cbk: REVERSE_INULL
  3. afr_fop_lock_proceed: UNUSED_VALUE
  4. afr_local_init: CHECKED_RETURN
  5. afr_set_split_brain_choice: REVERSE_INULL
  6. __afr_inode_write_finalize: FORWARD_NULL
  7. afr_refresh_heal_done: REVERSE_INULL
  8. afr_xl_op: UNUSED_VALUE
  9. afr_changelog_populate_xdata: DEADCODE
  10. set_afr_pending_xattrs_option: RESOURCE_LEAK
  Note: the RESOURCE_LEAK complaints about afr_fgetxattr_pathinfo_cbk,
  afr_getxattr_list_node_uuids_cbk and afr_getxattr_pathinfo_cbk seem to be false alarms.
  Change-Id: Ia4ca1478b5e2922084732d14c1e7b1b03ad5ac45
  BUG: 789278
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* tier: coverity fixes in tier.c (hari gowtham, 2017-11-23; 1 file, -25/+16)
  Fixes coverity issues 127, 83, 312, 314, 48 and 506 from
  https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-10-30-9aa574a5/html/
  Change-Id: Ifb206a8758790faf96619bcc9961dcf169aaad25
  BUG: 789278
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
* cluster/dht: Serialize mds update code path with lookup unwind in selfheal (Mohit Agrawal, 2017-11-22; 3 files, -306/+216)
  Problem: The test case ./tests/bugs/bug-1371806_1.t sometimes fails on CentOS due to a
  race condition between a fresh lookup and a setxattr fop.
  Solution: In the selfheal code path we save the mds on the inode_ctx, but this was not
  serialized with the lookup unwind. As a result, if the mds is not saved on the
  inode_ctx after the lookup unwind, any subsequent setxattr fop fails with ENOENT
  because no mds is found in the inode ctx. To resolve this, saving the mds on the inode
  ctx has been serialized with the lookup unwind.
  BUG: 1498966
  Change-Id: I8d4bb40a6cbf0cec35d181ec0095cc7142b02e29
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* cluster/dht: make rebalance use truncate in case (Susant Palai, 2017-11-22; 3 files, -71/+99)
  ... the brick file system does not support fallocate.
  Change-Id: Id76cda2d8bb3b223b779e5e7a34f17c8bfa6283c
  BUG: 1488103
  Signed-off-by: Susant Palai <spalai@redhat.com>
* ec: Use tiebreaker_inodelk where necessary (Pranith Kumar K, 2017-11-22; 1 file, -8/+11)
  When there are big directories or files that need to be healed, other shds are stuck
  waiting for the lock on the self-heal domain for those directories/files. With a
  tie-breaker logic, other shds can heal some other files/directories while one of the
  shds is healing the big file/directory.
  Profile data for INODELK (%-latency, avg, min, max latency, calls):
    Before this patch:  96.67  4890.64 us  12.89 us  646115887.30 us  340869  INODELK
    After this patch:   40.76    42.35 us  15.09 us       6546.50 us  438478  INODELK
  Fixes gluster/glusterfs#354
  Change-Id: Ia995b5576b44f770c064090705c78459e543cc64
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* cluster/dht: Coverity fixes for dht-rebalance.c (karthik-us, 2017-11-21; 1 file, -80/+54)
  Warning       Functions
  DEADCODE      gf_defrag_handle_migrate_error, gf_defrag_get_entry,
                gf_defrag_process_dir, gf_defrag_start_crawl, dht_migrate_file
  UNUSED_VALUE  migrate_special_files, dht_migrate_file
  FORWARD_NULL  gf_tier_do_fix_layout
  Change-Id: I6f408585b83a267581a4273dae7c22b8993163d5
  BUG: 789278
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
* cluster/dht: dead code coverity fix (Kartik_Burmee, 2017-11-21; 1 file, -3/+0)
  Function: dht_mkdir_hashed_cbk
  Issue: Execution cannot reach this statement: "call_stub_destroy(stub);"
  Fix: Removed the statement and the corresponding 'if' condition block.
  Change-Id: I3e31056ee489ede6864e51a8e666edc7da3c175f
  BUG: 789278
  Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
* cluster/dht: Don't set ACLs on linkto file (N Balachandran, 2017-11-19; 1 file, -0/+11)
  The trusted.SGI_ACL_FILE xattr appears to set posix ACLs on the linkto file that is the
  target of a file migration. This can mess up the file permissions and cause linkto
  identification to fail. Now we remove all ACL xattrs from the results of the listxattr
  call on the source before setting them on the target.
  Change-Id: I56802dbaed783a16e3fb90f59f4ce849f8a4a9b4
  BUG: 1514329
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
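An illustrative sketch of the filtering step over a plain array of xattr names, rather than the dict DHT actually uses: drop ACL-related keys collected from the source before they are applied to the migration target.

    #include <string.h>
    #include <stddef.h>

    /* ACL-related xattr names: the standard POSIX ACL keys plus the
     * trusted.SGI_ACL_FILE key mentioned in the commit message. */
    static int
    is_acl_xattr(const char *key)
    {
        return strncmp(key, "system.posix_acl_", 17) == 0 ||
               strcmp(key, "trusted.SGI_ACL_FILE") == 0;
    }

    /* Compact the key list in place, keeping only non-ACL keys;
     * returns the number of keys remaining. */
    static size_t
    filter_acl_xattrs(const char **keys, size_t count)
    {
        size_t kept = 0;

        for (size_t i = 0; i < count; i++) {
            if (!is_acl_xattr(keys[i]))
                keys[kept++] = keys[i];
        }
        return kept;
    }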
* afr: add checks for allowing lookups (Ravishankar N, 2017-11-18; 4 files, -93/+162)
  Problem: In an arbiter volume, a lookup was being served from one of the sink bricks
  (the source brick was down). shard uses the iatt values from the lookup cbk to
  calculate the size and block count, which in this case were incorrect values.
  shard_local_t->last_block was thus initialised to -1, resulting in an infinite while
  loop in shard_common_resolve_shards().
  Fix: Use the client quorum logic to allow or fail lookups from afr if there are no
  readable subvolumes. So in replica-3 or arbiter volumes, if there is no good copy or if
  quorum is not met, fail the lookup with ENOTCONN. With this fix, we are also removing
  support for the quorum-reads xlator option. So if quorum is not met, neither read nor
  write txns are allowed and we fail the fop with ENOTCONN.
  Change-Id: Ic65c00c24f77ece007328b421494eee62a505fa0
  BUG: 1467250
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* cluster/afr: Fix for arbiter becoming source (karthik-us, 2017-11-18; 4 files, -6/+102)
  Problem: When eager-lock is on and two writes happen in parallel on an FD, we observed
  the following behaviour:
  - The first write fails on one data brick.
  - Since the post-op has not yet happened, the inode refresh gets both data bricks as
    readable and sets that in the inode context.
  - The in-flight split-brain check sees both data bricks as readable and allows the
    second write.
  - The second write fails on the other data brick.
  - Now the post-op happens and marks both data bricks as bad, and the arbiter becomes
    the source for healing.
  Fix: Add one more variable called write_subvol in the inode context; it holds the
  in-memory representation of the writable subvols. Inode refresh does not update this
  value, and its lifetime is from pre-op through unlock in the afr transaction. Initially
  the pre-op sets this value to the same as read_subvol in the inode context, and the
  in-flight split-brain check then uses this value instead of read_subvol. After all the
  checks we update this value and set read_subvol to the same, to avoid keeping an
  incorrect value there.
  Change-Id: I2ef6904524ab91af861d59690974bbc529ab1af3
  BUG: 1482064
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
* tier: coverity fix for tier-common.c (hari gowtham, 2017-11-16; 1 file, -28/+5)
  Fix for coverity IDs 258 and 162.
  Change-Id: I35ba21e37e186b7c1ce54faf5b24f48858e6fc70
  BUG: 789278
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
* cluster/ec: Fix op-version for disperse.other-eager-lock (Xavier Hernandez, 2017-11-14; 1 file, -1/+1)
  The op-version used for the new option was wrong. It has been set to 3.13.0.
  Change-Id: I88fbd7834e4a8018c8906303e734c251e90be8cf
  BUG: 1502610
  Signed-off-by: Xavier Hernandez <jahernan@redhat.com>
* cluster: dead_code coverity fix (Kartik_Burmee, 2017-11-14; 1 file, -2/+0)
  Function: stripe_entry_self_heal
  Issue: Execution cannot reach this statement: "dict_unref(xdata);"
  Fix: Removed the 'if' condition and the corresponding actions, because if execution
  reaches that statement the value of xdata will always be NULL.
  Change-Id: Iebc825339e2e1236b92bed39d81a1a9aba10164e
  BUG: 789278
  Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
* cluster/ec: Keep last written stripe in in-memory cache (Ashish Pandey, 2017-11-10; 8 files, -84/+519)
  Problem: Consider an EC volume with configuration 4 + 2. The stripe size for this would
  be 512 * 4 = 2048, that is, 2048 bytes of user data stored in one stripe. Let's say
  2048 + 512 = 2560 bytes are already written on this volume; 512 bytes would be in the
  second stripe. Now, if there is a sequential write at offset 2560 and of size 1 byte,
  we have to read the whole stripe, encode it with the 1 byte and then write it back.
  The next write at offset 2561 with size 1 byte will again READ-MODIFY-WRITE the whole
  stripe. This causes bad performance because of the many READ requests travelling over
  the network. Some tools and scenarios produce exactly this kind of load without users
  being aware of it, for example fio and zip.
  Solution: One possible solution is to keep the last stripe in memory. This way we do
  not need to read it again, and we save the READ fop going over the network.
  Considering the above example, we have to keep at most the last 2048 bytes in memory
  per file.
  Change-Id: I3f95e6fc3ff81953646d374c445a40c6886b0b85
  BUG: 1471753
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
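A simplified sketch of the caching idea with a hypothetical structure (the real implementation also deals with encoding, locking and invalidation): keep the decoded last stripe in memory so that a small sequential write landing inside it can be merged without reading the stripe back from the bricks. The 2048-byte stripe matches the 4 + 2 example above.

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>
    #include <sys/types.h>

    #define STRIPE_SIZE 2048

    struct stripe_cache_sketch {
        bool  valid;
        off_t stripe_start;          /* offset of the cached stripe     */
        char  data[STRIPE_SIZE];     /* decoded user data of the stripe */
    };

    /* Returns true if the write fits entirely inside the cached stripe and
     * was merged, meaning no read-modify-write round trip is needed. */
    static bool
    merge_into_cached_stripe(struct stripe_cache_sketch *c,
                             off_t offset, const char *buf, size_t size)
    {
        if (!c->valid || offset < c->stripe_start ||
            offset + (off_t)size > c->stripe_start + STRIPE_SIZE)
            return false;

        memcpy(c->data + (offset - c->stripe_start), buf, size);
        return true;
    }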
* Coverity Issue: PW.INCLUDE_RECURSION in several files (Girjesh Rajoria, 2017-11-09; 6 files, -10/+0)
  Coverity IDs: 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 423,
  424, 425, 426, 427, 428, 429, 436, 437, 438, 439, 440, 441, 442, 443
  Issue: Event include_recursion
  Removed redundant, recursive includes from the files.
  Change-Id: I920776b1fa089a2d4917ca722d0075a9239911a7
  BUG: 789278
  Signed-off-by: Girjesh Rajoria <grajoria@redhat.com>
* cluster/ec: UNUSED_VALUE coverity fix (Sunil Kumar Acharya, 2017-11-08; 1 file, -0/+4)
  Problem: Return values are not used.
  Solution: Removed the unused values.
  BUG: 789278
  Change-Id: Ica2b95bb659dfc7ec5346de0996b9d2fcd340f8d
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
* cluster/ec: REVERSE_INULL coverity fix (Sunil Kumar Acharya, 2017-11-08; 1 file, -2/+1)
  Problem: fop could be NULL.
  Solution: A check has been added to verify fop.
  BUG: 789278
  Change-Id: I7e8d2c1bdd8960c609aa485f180688a95606ebf7
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>