author    | Keith Busch <kbusch@kernel.org> | 2022-09-09 21:40:22 +0300
committer | Jens Axboe <axboe@kernel.dk>    | 2022-09-12 09:10:34 +0300
commit    | 4acb83417cadfdcbe64215f9d0ddcf3132af808e (patch)
tree      | 45bdd14acf29ddc9b49cbb26e30323446d4aaeb6 /block/blk-mq-tag.c
parent    | c35227d4e8cbc70a6622cc7cc5f8c3bff513f1fa (diff)
download  | linux-4acb83417cadfdcbe64215f9d0ddcf3132af808e.tar.xz
sbitmap: fix batched wait_cnt accounting
Batched completions can clear multiple bits, but we're only decrementing
the wait_cnt by one each time. This can cause waiters to never be woken,
stalling IO. Use the batched count instead.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=215679
Signed-off-by: Keith Busch <kbusch@kernel.org>
Link: https://lore.kernel.org/r/20220909184022.1709476-1-kbusch@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
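To make the accounting problem concrete, here is a minimal user-space sketch of the idea described in the commit message. The struct and helper names below are hypothetical stand-ins for sbitmap's wait_cnt/wake_batch machinery, not kernel code: a batched completion that frees four tags but credits only one wakeup leaves wait_cnt stuck above zero, so sleeping allocators are never woken; crediting the full batch drains it as expected.

/*
 * User-space model of the wait_cnt accounting fixed here.  The type and
 * helper names are made up for illustration and do not exist in the
 * kernel.  wait_cnt plays the role of sbitmap's per-waitqueue wakeup
 * credit: it must reach zero before sleeping allocators are woken.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct sbq_wait_state_model {
	atomic_int wait_cnt;	/* wakeups owed before waiters are woken */
};

/* Old behaviour: one credit per completion, even if it freed many tags. */
static bool wake_check_one(struct sbq_wait_state_model *ws)
{
	return atomic_fetch_sub(&ws->wait_cnt, 1) - 1 <= 0;
}

/* Fixed behaviour: credit every tag the batched completion freed. */
static bool wake_check_batch(struct sbq_wait_state_model *ws, int nr)
{
	return atomic_fetch_sub(&ws->wait_cnt, nr) - nr <= 0;
}

int main(void)
{
	struct sbq_wait_state_model old_ws, new_ws;

	atomic_init(&old_ws.wait_cnt, 8);
	atomic_init(&new_ws.wait_cnt, 8);

	/* Two batched completions, each clearing four tag bits. */
	for (int i = 0; i < 2; i++) {
		wake_check_one(&old_ws);	/* decrements by 1 */
		wake_check_batch(&new_ws, 4);	/* decrements by 4 */
	}

	printf("old accounting: wait_cnt=%d -> waiters still asleep\n",
	       atomic_load(&old_ws.wait_cnt));
	printf("new accounting: wait_cnt=%d -> waiters woken\n",
	       atomic_load(&new_ws.wait_cnt));
	return 0;
}

Running the sketch prints a leftover count of 6 for the old behaviour and 0 for the fixed one, which is the "waiters never woken" stall the patch addresses.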
Diffstat (limited to 'block/blk-mq-tag.c')
-rw-r--r-- | block/blk-mq-tag.c | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 8e3b36d1cb57..9eb968e14d31 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -196,7 +196,7 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
 		 * other allocations on previous queue won't be starved.
 		 */
 		if (bt != bt_prev)
-			sbitmap_queue_wake_up(bt_prev);
+			sbitmap_queue_wake_up(bt_prev, 1);

 		ws = bt_wait_ptr(bt, data->hctx);
 	} while (1);
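For context, this series gives the wake-up helper an explicit count argument; the resulting prototype, quoted from memory and best verified against include/linux/sbitmap.h in the tree referenced above, looks like:

/* Declared in include/linux/sbitmap.h after this series (quoted from
 * memory; verify against the tree above before relying on it). */
void sbitmap_queue_wake_up(struct sbitmap_queue *sbq, int nr);

The blk_mq_get_tag() call site shown here only hands back a single wakeup credit when switching hardware queues, hence the literal 1, while the batched completion path in lib/sbitmap.c passes the number of bits it just cleared, per the commit message.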