author      zhangyi (F) <yi.zhang@huawei.com>    2019-10-23 10:10:09 +0300
committer   Jens Axboe <axboe@kernel.dk>         2019-10-24 07:09:56 +0300
commit      a1f58ba46f794b1168d1107befcf3d4b9f9fd453 (patch)
tree        7db113e60fe823f52bf4d743444f9f47898e81d7 /fs
parent      ef03681ae8df770745978148a7fb84796ae99cba (diff)
download    linux-a1f58ba46f794b1168d1107befcf3d4b9f9fd453.tar.xz
io_uring: correct timeout req sequence when inserting a new entry
The sequence number of a timeout req (req->sequence) indicates the
completion it is expected to wait for. Because each timeout req itself
consumes a sequence number, no two timeout reqs on the timeout list
should end up with the same sequence. But currently we may get the same
(and incorrect) number when inserting a new entry before the last one,
for example when submitting the following two timeout reqs on a fresh
ring instance:
req->sequence
req_1 (count = 2): 2
req_2 (count = 1): 2
Then, if we submit a nop req, req_2 will still time out even though the
nop req has completed. This patch fixes the problem by adjusting the
sequence numbers of the reordered reqs when inserting a new entry.
Signed-off-by: zhangyi (F) <yi.zhang@huawei.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
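For the scenario above, the collision comes from how the target sequence is
computed at submit time (earlier in io_timeout(), outside the hunk below,
roughly req->sequence = ctx->cached_sq_head + count - 1):

 req_1: submitted first,  cached_sq_head = 1, count = 2  ->  1 + 2 - 1 = 2
 req_2: submitted second, cached_sq_head = 2, count = 1  ->  2 + 1 - 1 = 2

Both reqs end up targeting sequence 2, so req_2 behaves as if it had to wait
for two completions and still times out after the single nop completes. With
the adjustment below, req_2's sequence drops to 1 and req_1's is bumped to 3
to account for the extra timeout slot now sitting in front of it.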
Diffstat (limited to 'fs')
-rw-r--r--   fs/io_uring.c   11
1 files changed, 10 insertions, 1 deletions
diff --git a/fs/io_uring.c b/fs/io_uring.c
index b65a68582a7c..1b46c72f8975 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1912,6 +1912,7 @@ static int io_timeout(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	struct io_ring_ctx *ctx = req->ctx;
 	struct list_head *entry;
 	struct timespec64 ts;
+	unsigned span = 0;
 
 	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
@@ -1960,9 +1961,17 @@ static int io_timeout(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		if (ctx->cached_sq_head < nxt_sq_head)
 			tmp += UINT_MAX;
 
-		if (tmp >= tmp_nxt)
+		if (tmp > tmp_nxt)
 			break;
+
+		/*
+		 * Sequence of reqs after the insert one and itself should
+		 * be adjusted because each timeout req consumes a slot.
+		 */
+		span++;
+		nxt->sequence++;
 	}
+	req->sequence -= span;
 	list_add(&req->list, entry);
 	spin_unlock_irq(&ctx->completion_lock);
 
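To see the adjustment in isolation, here is a minimal user-space sketch of
the insertion logic this patch adds. It is an illustration only: the struct,
list, and function names are made up, the kernel's list_head and the SQ-head
overflow handling (the tmp/tmp_nxt/UINT_MAX comparison above) are left out,
and the sorted timeout list is modelled as a plain singly linked list.

/*
 * timeout_insert_demo.c -- user-space sketch of the sequence adjustment
 * made when a new timeout req is inserted into the sorted timeout list.
 */
#include <stdio.h>

struct timeout_req {
	unsigned count;             /* completions the timeout waits for */
	unsigned sequence;          /* expected completion sequence      */
	struct timeout_req *next;
};

static struct timeout_req *timeout_list;   /* sorted, earliest first */
static unsigned cached_sq_head;            /* SQEs submitted so far  */

static void insert_timeout(struct timeout_req *req, unsigned count)
{
	struct timeout_req **pos = &timeout_list;
	struct timeout_req *nxt;
	unsigned span = 0;

	cached_sq_head++;                      /* the timeout SQE itself */
	req->count = count;
	req->sequence = cached_sq_head + count - 1;

	/* Find the first entry that fires at or after the new one. */
	while (*pos && (*pos)->sequence < req->sequence)
		pos = &(*pos)->next;

	/*
	 * Every entry from here on now has one more timeout req sitting
	 * in front of it, so its sequence moves up by one; the new req
	 * in turn jumped ahead of 'span' timeouts whose slots its own
	 * sequence no longer has to account for.
	 */
	for (nxt = *pos; nxt; nxt = nxt->next) {
		span++;
		nxt->sequence++;
	}
	req->sequence -= span;

	req->next = *pos;
	*pos = req;
}

int main(void)
{
	struct timeout_req req_1 = {0}, req_2 = {0}, *t;

	/* The commit-message scenario: a fresh ring, then two timeouts. */
	insert_timeout(&req_1, 2);
	insert_timeout(&req_2, 1);

	for (t = timeout_list; t; t = t->next)
		printf("count = %u, sequence = %u\n", t->count, t->sequence);
	/*
	 * Prints:
	 *   count = 1, sequence = 1   (req_2 fires after one completion)
	 *   count = 2, sequence = 3   (req_1 waits for two, plus req_2's slot)
	 * whereas before the fix both entries ended up with sequence 2.
	 */
	return 0;
}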