author    Bart Van Assche <bvanassche@acm.org>  2023-05-19 07:40:47 +0300
committer Jens Axboe <axboe@kernel.dk>          2023-05-20 04:52:29 +0300
commit    be4c427809b0a746aff54dbb8ef663f0184291d0 (patch)
tree      e10ee35815db073751165e2af27b2d777084e2a4 /block/blk-mq.c
parent    360f264834e34d08530c2fb9b67e3ffa65318761 (diff)
download  linux-be4c427809b0a746aff54dbb8ef663f0184291d0.tar.xz
blk-mq: use the I/O scheduler for writes from the flush state machine
Send write requests issued by the flush state machine through the normal
I/O submission path, including the I/O scheduler (if present), so that
I/O scheduler policies are applied to writes with the FUA flag set.
Separate the I/O scheduler members from the flush members in struct
request, since a request may now pass through both an I/O scheduler
and the flush machinery.
Note that the actual flush requests, which have no bio attached, still
bypass the I/O scheduler.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
[hch: rebased]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Link: https://lore.kernel.org/r/20230519044050.107790-5-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block/blk-mq.c')
-rw-r--r--  block/blk-mq.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c0b394096b6b..aac67bc3d368 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -458,7 +458,7 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
 	 * Flush/passthrough requests are special and go directly to the
 	 * dispatch list.
 	 */
-	if (!op_is_flush(data->cmd_flags) &&
+	if ((data->cmd_flags & REQ_OP_MASK) != REQ_OP_FLUSH &&
 	    !blk_op_is_passthrough(data->cmd_flags)) {
 		struct elevator_mq_ops *ops = &q->elevator->type->ops;
@@ -2497,7 +2497,7 @@ static void blk_mq_insert_request(struct request *rq, blk_insert_t flags)
 		 * dispatch it given we prioritize requests in hctx->dispatch.
 		 */
 		blk_mq_request_bypass_insert(rq, flags);
-	} else if (rq->rq_flags & RQF_FLUSH_SEQ) {
+	} else if (req_op(rq) == REQ_OP_FLUSH) {
 		/*
 		 * Firstly normal IO request is inserted to scheduler queue or
 		 * sw queue, meantime we add flush request to dispatch queue(