From 77889c34b6f169dfc3257e2b566afa2f4bad1697 Mon Sep 17 00:00:00 2001
From: Xavi Hernandez
Date: Fri, 27 Mar 2020 23:56:15 +0100
Subject: write-behind: fix data corruption

There was a bug in write-behind that allowed a previously completed
write to overwrite the overlapping region of data from a future write.

Suppose we want to send three writes (W1, W2 and W3). W1 and W2 are
sequential, and W3 writes at the same offset as W2:

    W2.offset = W3.offset = W1.offset + W1.size

Both W1 and W2 are sent in parallel. W3 is only sent after W2
completes, so W3 should *always* overwrite the overlapping part of W2.

Suppose write-behind processes the requests from two concurrent
threads:

Thread 1                      Thread 2

wb_enqueue_tempted(W1)
/* W1 is assigned gen X */
                              wb_enqueue_tempted(W2)
                              /* W2 is assigned gen X */

                              wb_process_queue()
                                __wb_preprocess_winds()
                                  /* W1 and W2 are sequential and
                                   * all other requisites are met
                                   * to merge both requests. */
                                  __wb_collapse_small_writes(W1, W2)
                                  __wb_fulfill_request(W2)

                                __wb_pick_unwinds() -> W2
                                /* In this case, since the request
                                 * is already fulfilled,
                                 * wb_inode->gen is not updated. */

                              wb_do_unwinds()
                                STACK_UNWIND(W2)

                              /* The application has received the
                               * result of W2, so it can send W3. */
                              wb_enqueue_tempted(W3)
                              /* W3 is assigned gen X */

                              wb_process_queue()
                                /* Here we have W1 (which contains
                                 * the conflicting W2) and W3 with
                                 * the same gen, so they are
                                 * interpreted as concurrent writes
                                 * that do not conflict. */
                                __wb_pick_winds() -> W3

                              wb_do_winds()
                                STACK_WIND(W3)

wb_process_queue()
  /* Eventually W1 will be
   * ready to be sent. */
  __wb_pick_winds() -> W1
    __wb_pick_unwinds() -> W1
      /* Here wb_inode->gen is
       * incremented. */

wb_do_unwinds()
  STACK_UNWIND(W1)

wb_do_winds()
  STACK_WIND(W1)

So, as we can see, W3 is sent before W1, which shouldn't happen. The
problem is that wb_inode->gen is only incremented for requests that
have not been fulfilled, but after a merge the absorbed request is
marked as fulfilled even though its data has not been sent to the
brick. This allows future requests to be assigned the same generation,
so they can be internally reordered.

Solution: increment wb_inode->gen before any unwind, even if it is for
a fulfilled request.

Special thanks to Stefan Ring for writing a reproducer that has been
crucial to identifying the issue.

Change-Id: Id4ab0f294a09aca9a863ecaeef8856474662ab45
Signed-off-by: Xavi Hernandez
Fixes: #884
---
 xlators/performance/write-behind/src/write-behind.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xlators/performance/write-behind/src/write-behind.c b/xlators/performance/write-behind/src/write-behind.c
index ab6b76cace8..e1d0e9aaf00 100644
--- a/xlators/performance/write-behind/src/write-behind.c
+++ b/xlators/performance/write-behind/src/write-behind.c
@@ -1283,14 +1283,14 @@ __wb_pick_unwinds(wb_inode_t *wb_inode, list_head_t *lies)
 
         wb_inode->window_current += req->orig_size;
 
+        wb_inode->gen++;
+
         if (!req->ordering.fulfilled) {
             /* burden increased */
             list_add_tail(&req->lie, &wb_inode->liability);
 
             req->ordering.lied = 1;
 
-            wb_inode->gen++;
-
             uuid_utoa_r(req->gfid, gfid);
             gf_msg_debug(wb_inode->this->name, 0,
                          "(unique=%" PRIu64
-- 
cgit
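
To make the generation mechanism concrete, below is a minimal,
self-contained C sketch of the race and the fix. It is not the real
xlator code: toy_inode, toy_req, enqueue() and pick_unwind() are
hypothetical stand-ins for wb_inode_t, wb_request_t,
wb_enqueue_tempted() and __wb_pick_unwinds(), and only the gen
bookkeeping is modeled, not the queues or winds. Running it shows that
with the old behavior W3 is assigned the same generation as the
still-pending W1, while with the patched behavior W3 gets a newer
generation and can no longer be reordered ahead of it.

#include <stdio.h>

/* Hypothetical stand-in for wb_inode_t: only the gen counter. */
typedef struct {
    int gen; /* generation that will be assigned to new requests */
} toy_inode;

/* Hypothetical stand-in for a write-behind request. */
typedef struct {
    const char *name;
    int gen;       /* generation the request was enqueued under */
    int fulfilled; /* set when the request was absorbed by a merge */
} toy_req;

/* Stand-in for wb_enqueue_tempted(): a new request inherits the
 * inode's current generation. */
static void enqueue(toy_inode *inode, toy_req *req, const char *name)
{
    req->name = name;
    req->gen = inode->gen;
    req->fulfilled = 0;
    printf("%s enqueued with gen %d\n", name, req->gen);
}

/* Stand-in for the unwind path of __wb_pick_unwinds(). */
static void pick_unwind(toy_inode *inode, toy_req *req, int fixed)
{
    if (fixed) {
        /* Patched behavior: bump the generation before *any* unwind,
         * even when the request was fulfilled by a merge. */
        inode->gen++;
    } else if (!req->fulfilled) {
        /* Buggy behavior: fulfilled (merged) requests are unwound
         * without bumping the generation. */
        inode->gen++;
    }
    printf("%s unwound, next gen is %d\n", req->name, inode->gen);
}

static void scenario(int fixed)
{
    toy_inode inode = { .gen = 0 };
    toy_req w1, w2, w3;

    printf("--- %s behavior ---\n", fixed ? "patched" : "buggy");
    enqueue(&inode, &w1, "W1");
    enqueue(&inode, &w2, "W2");

    /* __wb_collapse_small_writes(W1, W2): W2 is folded into W1 and
     * marked fulfilled without its data ever reaching the brick. */
    w2.fulfilled = 1;

    pick_unwind(&inode, &w2, fixed); /* application now sees W2 done */
    enqueue(&inode, &w3, "W3");      /* ... and sends W3 */

    if (w3.gen == w1.gen)
        printf("W3 shares gen %d with the pending W1: reorderable!\n",
               w3.gen);
    else
        printf("W3 (gen %d) is ordered after W1 (gen %d): safe\n",
               w3.gen, w1.gen);
}

int main(void)
{
    scenario(0); /* buggy: W3 may be wound before W1 */
    scenario(1); /* patched: W3 is ordered after W1's generation */
    return 0;
}

Note that incrementing the generation more often can only make
ordering stricter, so the fix cannot introduce new corruption; the
likely cost is merely fewer opportunities to treat writes as
concurrent.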