author     Linus Torvalds <torvalds@linux-foundation.org>   2023-11-02 00:25:08 +0300
committer  Linus Torvalds <torvalds@linux-foundation.org>   2023-11-02 00:25:08 +0300
commit     4de520f1fcefd4ebb7dddcf28bde1b330c2f6b5d (patch)
tree       f30a598d13266a4e8b4760bcae85c64a90d20b86 /io_uring/io_uring.c
parent     f5277ad1e9768dbd05b1ae8dcdba690215d8c5b7 (diff)
parent     8f350194d5cfd7016d4cd44e433df0faa4d4a703 (diff)
download   linux-4de520f1fcefd4ebb7dddcf28bde1b330c2f6b5d.tar.xz
Merge tag 'io_uring-futex-2023-10-30' of git://git.kernel.dk/linux
Pull io_uring futex support from Jens Axboe:
"This adds support for using futexes through io_uring - first futex
wake and wait, and then the vectored variant of waiting, futex waitv.
For both wait/wake/waitv, we support the bitset variant, as the
'normal' variants can be easily implemented on top of that.
PI and requeue are not supported through io_uring, just the above
mentioned parts. This may change in the future, but in the spirit of
keeping this small (and based on what people have been asking for),
this is what we currently have.
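As a usage illustration (not part of this merge): a minimal sketch of the
new opcodes through liburing, assuming liburing >= 2.5 (which added the
io_uring_prep_futex_*() helpers) and kernel headers that provide
FUTEX2_SIZE_U32. Passing FUTEX_BITSET_MATCH_ANY as the mask gives the
'normal' non-bitset behavior described above.

#include <liburing.h>
#include <linux/futex.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static _Atomic uint32_t futex_word;

/* Waker thread: flip the word, then wake one waiter via IORING_OP_FUTEX_WAKE. */
static void *waker(void *arg)
{
	struct io_uring ring;
	struct io_uring_cqe *cqe;

	usleep(10000);
	atomic_store(&futex_word, 1);

	io_uring_queue_init(4, &ring, 0);
	io_uring_prep_futex_wake(io_uring_get_sqe(&ring),
				 (uint32_t *)&futex_word, 1,
				 FUTEX_BITSET_MATCH_ANY, FUTEX2_SIZE_U32, 0);
	io_uring_submit(&ring);
	io_uring_wait_cqe(&ring, &cqe);		/* res = number of waiters woken */
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return NULL;
}

int main(void)
{
	struct io_uring ring;
	struct io_uring_cqe *cqe;
	pthread_t t;

	io_uring_queue_init(4, &ring, 0);
	pthread_create(&t, NULL, waker, NULL);

	/* Block while futex_word is still 0; the wake above completes this
	 * (losing the race shows up as -EAGAIN in the CQE instead). */
	io_uring_prep_futex_wait(io_uring_get_sqe(&ring),
				 (uint32_t *)&futex_word, 0,
				 FUTEX_BITSET_MATCH_ANY, FUTEX2_SIZE_U32, 0);
	io_uring_submit(&ring);
	io_uring_wait_cqe(&ring, &cqe);
	printf("futex wait completed, res=%d\n", cqe->res);
	io_uring_cqe_seen(&ring, cqe);

	pthread_join(t, NULL);
	io_uring_queue_exit(&ring);
	return 0;
}

The vectored wait is analogous: io_uring_prep_futex_waitv() takes an
array of struct futex_waitv, mirroring the futex_waitv(2) syscall.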
Wake support is pretty straightforward; most of the thought has gone
into the wait side, to avoid needing to offload wait operations to a
blocking context. Instead, we rely on the usual callbacks to retry and
post a completion event, when appropriate.
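Condensed, that wait-side shape looks roughly like the wake handler
below (names follow the io_uring/futex.c and futex helpers added by
this series, but treat it as an illustrative sketch rather than a
verbatim copy):

static void io_futex_wake_fn(struct wake_q_head *wake_q, struct futex_q *q)
{
	struct io_futex_data *ifd = container_of(q, struct io_futex_data, q);
	struct io_kiocb *req = ifd->req;

	/* __futex_wake_mark() (factored out by this series) claims and
	 * unqueues the waiter; if it was already claimed, nothing to do. */
	if (unlikely(!__futex_wake_mark(q)))
		return;

	/* This runs from the waker's context and must not sleep: complete
	 * the request via task_work, which posts the CQE from the ring's
	 * own task instead of a blocking offload. */
	io_req_set_res(req, 0, 0);
	req->io_task_work.func = io_futex_complete;
	io_req_task_work_add(req);
}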
As far as I can recall, the first request for futex support in
io_uring came from Andres Freund, working on postgres. His aio rework
of postgres was one of the early adopters of io_uring, and futex
support was a natural extension of that. This is relevant both from a
usability point of view and for efficiency and performance. In
Andres's words, for the former:
Futex wait support in io_uring makes it a lot easier to avoid
deadlocks in concurrent programs that have their own buffer pool:
Obviously pages in the application buffer pool have to be locked
during IO. If the initiator of IO A needs to wait for a held lock
B, the holder of lock B might wait for the IO A to complete. The
ability to wait for a lock and IO completions at the same time
provides an efficient way to avoid such deadlocks
and in terms of efficiency, even without unlocking the full potential
yet, Andres says:
Futex wake support in io_uring is useful because it allows for more
efficient directed wakeups. For some "locks" postgres has queues
implemented in userspace, with wakeup logic that cannot easily be
implemented with FUTEX_WAKE_BITSET on a single "futex word"
(imagine waiting for journal flushes to have completed up to a
certain point).
Thus a "lock release" sometimes needs to wake up many processes in a
row. A quick-and-dirty conversion to doing these wakeups via io_uring
led to a 3% throughput increase, with 12% fewer context switches,
albeit in a fairly extreme workload"
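Both observations map onto short liburing patterns. A hedged sketch of
each (hypothetical function and variable names; same liburing >= 2.5
assumptions as the earlier example):

#include <liburing.h>
#include <linux/futex.h>
#include <stdint.h>

enum { TAG_LOCK_WAIT = 1, TAG_READ = 2 };

/* Deadlock avoidance: queue the buffer-pool page-lock wait and the read
 * on the same ring, then handle whichever CQE arrives first - no thread
 * sleeps in a plain futex() call while its IO completion is pending. */
static void lock_wait_plus_read(struct io_uring *ring, uint32_t *lock_word,
				int fd, void *buf, unsigned len, __u64 off)
{
	struct io_uring_sqe *sqe;

	sqe = io_uring_get_sqe(ring);
	/* Blocks while *lock_word still holds the locked value (1 here);
	 * the holder's wake on release completes this SQE. */
	io_uring_prep_futex_wait(sqe, lock_word, 1, FUTEX_BITSET_MATCH_ANY,
				 FUTEX2_SIZE_U32, 0);
	sqe->user_data = TAG_LOCK_WAIT;

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_read(sqe, fd, buf, len, off);
	sqe->user_data = TAG_READ;

	io_uring_submit(ring);
}

/* Directed wakeups: one system call wakes one waiter per queued word,
 * instead of one futex(FUTEX_WAKE) syscall per process in the row. */
static void wake_all_queued(struct io_uring *ring, uint32_t **words, int nr)
{
	for (int i = 0; i < nr; i++) {
		struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

		io_uring_prep_futex_wake(sqe, words[i], 1,
					 FUTEX_BITSET_MATCH_ANY,
					 FUTEX2_SIZE_U32, 0);
		sqe->flags |= IOSQE_CQE_SKIP_SUCCESS; /* no CQE on success */
	}
	io_uring_submit(ring);
}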
* tag 'io_uring-futex-2023-10-30' of git://git.kernel.dk/linux:
io_uring: add support for vectored futex waits
futex: make the vectored futex operations available
futex: make futex_parse_waitv() available as a helper
futex: add wake_data to struct futex_q
io_uring: add support for futex wake and wait
futex: abstract out a __futex_wake_mark() helper
futex: factor out the futex wake handling
futex: move FUTEX2_VALID_MASK to futex.h
Diffstat (limited to 'io_uring/io_uring.c')
-rw-r--r--  io_uring/io_uring.c  7
1 file changed, 7 insertions, 0 deletions
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 36ae5ac2b070..ed254076c723 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -93,6 +93,7 @@
 #include "net.h"
 #include "notif.h"
 #include "waitid.h"
+#include "futex.h"
 #include "timeout.h"
 #include "poll.h"
@@ -330,6 +331,7 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 			    sizeof(struct async_poll));
 	io_alloc_cache_init(&ctx->netmsg_cache, IO_ALLOC_CACHE_MAX,
 			    sizeof(struct io_async_msghdr));
+	io_futex_cache_init(ctx);
 	init_completion(&ctx->ref_comp);
 	xa_init_flags(&ctx->personalities, XA_FLAGS_ALLOC1);
 	mutex_init(&ctx->uring_lock);
@@ -349,6 +351,9 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 	ctx->submit_state.free_list.next = NULL;
 	INIT_WQ_LIST(&ctx->locked_free_list);
 	INIT_HLIST_HEAD(&ctx->waitid_list);
+#ifdef CONFIG_FUTEX
+	INIT_HLIST_HEAD(&ctx->futex_list);
+#endif
 	INIT_DELAYED_WORK(&ctx->fallback_work, io_fallback_req_func);
 	INIT_WQ_LIST(&ctx->submit_state.compl_reqs);
 	INIT_HLIST_HEAD(&ctx->cancelable_uring_cmd);
@@ -2914,6 +2919,7 @@ static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
 	io_eventfd_unregister(ctx);
 	io_alloc_cache_free(&ctx->apoll_cache, io_apoll_cache_free);
 	io_alloc_cache_free(&ctx->netmsg_cache, io_netmsg_cache_free);
+	io_futex_cache_free(ctx);
 	io_destroy_buffers(ctx);
 	mutex_unlock(&ctx->uring_lock);
 	if (ctx->sq_creds)
@@ -3357,6 +3363,7 @@ static __cold bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
 	mutex_lock(&ctx->uring_lock);
 	ret |= io_poll_remove_all(ctx, task, cancel_all);
 	ret |= io_waitid_remove_all(ctx, task, cancel_all);
+	ret |= io_futex_remove_all(ctx, task, cancel_all);
 	ret |= io_uring_try_cancel_uring_cmd(ctx, task, cancel_all);
 	mutex_unlock(&ctx->uring_lock);
 	ret |= io_kill_timeouts(ctx, task, cancel_all);