author    Pavel Begunkov <asml.silence@gmail.com>    2022-11-30 18:21:56 +0300
committer Jens Axboe <axboe@kernel.dk>               2022-11-30 20:28:49 +0300
commit    618d653a345a477aaae307a0455900eb8789e952 (patch)
tree      33a00d4fbf44e46150be84f68b2792cf897c473b /io_uring/io_uring.h
parent    443e57550670234f1bd34983b3c577edcf2eeef5 (diff)
download  linux-618d653a345a477aaae307a0455900eb8789e952.tar.xz
io_uring: don't raw spin unlock to match cq_lock
There is one newly added place where we lock the ring with io_cq_lock() but
unlock it with a hand-coded, direct spin_unlock() call. That is ugly and
troublesome in the long run. Make it consistent with the rest of the
completion locking.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/4ca4f0564492b90214a190cd5b2a6c76522de138.1669821213.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'io_uring/io_uring.h')
-rw-r--r--    io_uring/io_uring.h    5
1 file changed, 5 insertions, 0 deletions
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 062899b1fe86..2277c05f52a6 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -93,6 +93,11 @@ static inline void io_cq_lock(struct io_ring_ctx *ctx)
 	spin_lock(&ctx->completion_lock);
 }
 
+static inline void io_cq_unlock(struct io_ring_ctx *ctx)
+{
+	spin_unlock(&ctx->completion_lock);
+}
+
 void io_cq_unlock_post(struct io_ring_ctx *ctx);
 
 static inline struct io_uring_cqe *io_get_cqe_overflow(struct io_ring_ctx *ctx,