author		Pavel Begunkov <asml.silence@gmail.com>	2020-07-30 18:43:50 +0300
committer	Jens Axboe <axboe@kernel.dk>	2020-07-30 20:42:21 +0300
commit		01cec8c18f5ad9c27eee9f21439072832181039e (patch)
tree		5671fb2c3e87266395efd49695e7af7456ac3aa1 /fs/io_uring.c
parent		4693014340808e7f099e302c1dc40e9d79ff7667 (diff)
io_uring: get rid of atomic FAA for cq_timeouts
If ->cq_timeouts modifications are done under ->completion_lock, we
don't really need a fetch-and-add or any other complex atomics. Replace
it with a non-atomic FAA, which saves an implicit full memory barrier.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'fs/io_uring.c')
-rw-r--r--	fs/io_uring.c	8
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index efec290c6b08..fabf0b692384 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1205,7 +1205,8 @@ static void io_kill_timeout(struct io_kiocb *req)
 
 	ret = hrtimer_try_to_cancel(&req->io->timeout.timer);
 	if (ret != -1) {
-		atomic_inc(&req->ctx->cq_timeouts);
+		atomic_set(&req->ctx->cq_timeouts,
+			   atomic_read(&req->ctx->cq_timeouts) + 1);
 		list_del_init(&req->timeout.list);
 		req->flags |= REQ_F_COMP_LOCKED;
 		io_cqring_fill_event(req, 0);
@@ -4972,9 +4973,10 @@ static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer)
 	struct io_ring_ctx *ctx = req->ctx;
 	unsigned long flags;
 
-	atomic_inc(&ctx->cq_timeouts);
-
 	spin_lock_irqsave(&ctx->completion_lock, flags);
+	atomic_set(&req->ctx->cq_timeouts,
+		   atomic_read(&req->ctx->cq_timeouts) + 1);
+
 	/*
 	 * We could be racing with timeout deletion. If the list is empty,
 	 * then timeout lookup already found it and will be handling it.
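
The pattern above can be sketched outside the kernel. Below is a minimal,
hypothetical userspace analogue (not part of this commit) using C11 atomics
and a pthread mutex standing in for the kernel's atomic_t and
->completion_lock; the names ctx, timeout_event_rmw and timeout_event_plain
are made up for illustration. Because the counter is only ever written while
the lock is held, the atomic fetch-and-add can be replaced by a plain
load-plus-store on the atomic variable, dropping the read-modify-write while
still letting lockless readers observe a tear-free value.

	/*
	 * Userspace sketch of the commit's idea: a counter that is only
	 * modified under a lock does not need an atomic RMW; a plain
	 * load + store of the atomic variable is enough.
	 */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	struct ctx {
		pthread_mutex_t completion_lock;
		atomic_uint cq_timeouts;	/* written only under completion_lock */
	};

	/* Before: atomic fetch-and-add, a full read-modify-write. */
	static void timeout_event_rmw(struct ctx *c)
	{
		pthread_mutex_lock(&c->completion_lock);
		atomic_fetch_add(&c->cq_timeouts, 1);
		pthread_mutex_unlock(&c->completion_lock);
	}

	/* After: plain load + store; the lock already serialises writers. */
	static void timeout_event_plain(struct ctx *c)
	{
		pthread_mutex_lock(&c->completion_lock);
		atomic_store_explicit(&c->cq_timeouts,
				      atomic_load_explicit(&c->cq_timeouts,
							   memory_order_relaxed) + 1,
				      memory_order_relaxed);
		pthread_mutex_unlock(&c->completion_lock);
	}

	int main(void)
	{
		struct ctx c = {
			.completion_lock = PTHREAD_MUTEX_INITIALIZER,
			.cq_timeouts = 0,
		};

		timeout_event_rmw(&c);
		timeout_event_plain(&c);
		printf("cq_timeouts = %u\n", atomic_load(&c.cq_timeouts));
		return 0;
	}

The counter stays an atomic type so concurrent readers that do not take the
lock still get whole values via a simple atomic load; only the need for an
atomic read-modify-write on the writer side goes away, since the lock already
serialises all writers.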