author		Pavel Begunkov <asml.silence@gmail.com>	2023-03-27 18:38:14 +0300
committer	Jens Axboe <axboe@kernel.dk>	2023-04-03 16:16:15 +0300
commit		13bfa6f15d0b39254937076ab0557da6875bb455 (patch)
tree		673a5d032900decd9b93d7e4462a6b5696f608be
parent		07d99096e1635805fb7c60382dc12554886a39b8 (diff)
download	linux-13bfa6f15d0b39254937076ab0557da6875bb455.tar.xz
io_uring: remove extra tw trylocks
Before cond_resched()'ing in handle_tw_list() we also drop the current
ring context, and so the next loop iteration will need to pick/pin a new
context and do trylock.

The chunk removed by this patch was intended to be an optimisation
covering exactly this case, i.e. retaking the lock after reschedule, but
in reality it's skipped for the first iteration after resched as
described and will keep hammering the lock if it's contended.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/1ecec9483d58696e248d1bfd52cf62b04442df1d.1679931367.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-rw-r--r--	io_uring/io_uring.c	3
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 24be4992821b..2669aca0ba39 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1186,8 +1186,7 @@ static unsigned int handle_tw_list(struct llist_node *node,
 			/* if not contended, grab and improve batching */
 			*locked = mutex_trylock(&(*ctx)->uring_lock);
 			percpu_ref_get(&(*ctx)->refs);
-		} else if (!*locked)
-			*locked = mutex_trylock(&(*ctx)->uring_lock);
+		}
 		req->io_task_work.func(req, locked);
 		node = next;
 		count++;
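
To make the reasoning in the commit message concrete, below is an abridged
sketch of handle_tw_list() as it reads after this patch, simplified from
io_uring/io_uring.c around this commit; the prefetch of the next request and
some surrounding detail are elided, so treat it as illustrative rather than a
verbatim quote. The need_resched() path clears *ctx before cond_resched(), so
the first iteration after a reschedule always enters the req->ctx != *ctx
branch, which already does its own trylock. The removed else-if could
therefore only fire when consecutive requests shared a ctx whose earlier
trylock had failed, i.e. exactly when the lock was likely contended.

/*
 * Abridged sketch, not verbatim kernel source: prefetching and some
 * bookkeeping are omitted for clarity.
 */
static unsigned int handle_tw_list(struct llist_node *node,
				   struct io_ring_ctx **ctx, bool *locked,
				   struct llist_node *last)
{
	unsigned int count = 0;

	while (node != last) {
		struct llist_node *next = node->next;
		struct io_kiocb *req = container_of(node, struct io_kiocb,
						    io_task_work.node);

		if (req->ctx != *ctx) {
			/* switching rings: unpin the old ctx, pin the new one */
			ctx_flush_and_put(*ctx, locked);
			*ctx = req->ctx;
			/* if not contended, grab and improve batching */
			*locked = mutex_trylock(&(*ctx)->uring_lock);
			percpu_ref_get(&(*ctx)->refs);
		}
		req->io_task_work.func(req, locked);
		node = next;
		count++;
		if (unlikely(need_resched())) {
			/*
			 * *ctx is cleared here, so the next iteration
			 * re-enters the branch above and retries the
			 * trylock anyway; the else-if removed by this
			 * patch never ran on the first post-resched
			 * iteration, only on repeated failures against
			 * an already contended lock.
			 */
			ctx_flush_and_put(*ctx, locked);
			*ctx = NULL;
			cond_resched();
		}
	}

	return count;
}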