author     Jens Axboe <axboe@fb.com>  2015-11-25 20:12:54 +0300
committer  Jens Axboe <axboe@fb.com>  2015-11-25 20:12:54 +0300
commit     dcd8376c369fa8fde8269e721b14f50475dd397b (patch)
tree       1ef1de4cb71304ec3a7506d43ac80a00a036c2b5 /block/blk-flush.c
parent     55ce0da1da287822e5ffb5fcd6e357180d5ba4cd (diff)
download   linux-dcd8376c369fa8fde8269e721b14f50475dd397b.tar.xz
Revert "blk-flush: Queue through IO scheduler when flush not required"
This reverts commit 1b2ff19e6a957b1ef0f365ad331b608af80e932e.
Jan writes:
--
Thanks for the report! After some investigation I found out we allocate
elevator-specific data in __get_request() only for non-flush requests. And
this is actually required, since the flush machinery uses the space in
struct request for something else. Doh. So my patch is just wrong and not
easy to fix, since at the time __get_request() is called we are not sure
whether the flush machinery will be used in the end. Jens, please revert
1b2ff19e6a957b1ef0f365ad331b608af80e932e. Thanks!
I'm somewhat surprised that you can reliably hit the race where flushing
gets disabled for the device just while the request is in flight. But I
guess during boot it makes some sense.
--
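For context, the "space in struct request" Jan refers to is storage shared between the IO scheduler and the flush machinery. The sketch below shows the rough shape of that overlap as struct request looked in kernels of this era (include/linux/blkdev.h); the field names and layout are recalled from memory rather than taken from this tree, so treat them as approximate.

/*
 * Sketch (not verbatim from this tree): the elevator-private fields and
 * the flush-machinery fields occupy the same storage, so a request that
 * goes through the flush machinery must never also be owned by an IO
 * scheduler.
 */
struct request {
	/* ... */
	union {
		struct {
			struct io_cq	*icq;
			void		*priv[2];
		} elv;		/* used by the IO scheduler */

		struct {
			unsigned int	seq;
			struct list_head list;
			rq_end_io_fn	*saved_end_io;
		} flush;	/* reused by the flush machinery */
	};
	/* ... */
};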
So let's just revert it; we can fix the queue run manually after the
fact. The race is rare enough that it didn't trigger in testing, since it
requires the specific disable-while-in-flight scenario.
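Jan's point about __get_request() is that the elevator-private allocation decision is made up front, before it is known whether the flush machinery will take the request. The helper below is a sketch of that per-bio check, modeled on blk_rq_should_init_elevator() in block/blk-core.c from around this time; the exact name, flags, and bio field are assumptions from memory and may not match this tree.

/*
 * Sketch of the per-bio decision (assumed shape, not verbatim): requests
 * carrying FLUSH/FUA never get elevator-private data, because their
 * struct request space may be taken over by the flush machinery.
 */
static inline bool blk_rq_should_init_elevator(struct bio *bio)
{
	if (!bio)
		return true;

	if (bio->bi_rw & (REQ_FLUSH | REQ_FUA))
		return false;

	return true;
}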
Diffstat (limited to 'block/blk-flush.c')
-rw-r--r--	block/blk-flush.c	| 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index c81d56ec308f..9c423e53324a 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -422,7 +422,7 @@ void blk_insert_flush(struct request *rq)
 	if (q->mq_ops) {
 		blk_mq_insert_request(rq, false, false, true);
 	} else
-		q->elevator->type->ops.elevator_add_req_fn(q, rq);
+		list_add_tail(&rq->queuelist, &q->queue_head);
 	return;
 }