author     Pavel Begunkov <asml.silence@gmail.com>  2020-11-06 16:00:26 +0300
committer  Jens Axboe <axboe@kernel.dk>  2020-12-09 22:04:00 +0300
commit     f6edbabb8359798c541b0776616c5eab3a840d3d (patch)
tree       97fc9338ff86d0e7c8a0b6eedaa3e49b38dd83bd /fs/io-wq.h
parent     6b81928d4ca8668513251f9c04cdcb9d38ef51c7 (diff)
download   linux-f6edbabb8359798c541b0776616c5eab3a840d3d.tar.xz
io_uring: always batch cancel in *cancel_files()
Instead of iterating over each request and cancelling it individually in
io_uring_cancel_files(), try to cancel all matching requests and use
->inflight_list only to check if there is anything left.
In many cases it should be faster, and we can reuse a lot of code from
task cancellation.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'fs/io-wq.h')
-rw-r--r--  fs/io-wq.h  1 -
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/fs/io-wq.h b/fs/io-wq.h
index cba36f03c355..069496c6d4f9 100644
--- a/fs/io-wq.h
+++ b/fs/io-wq.h
@@ -129,7 +129,6 @@ static inline bool io_wq_is_hashed(struct io_wq_work *work)
 }
 
 void io_wq_cancel_all(struct io_wq *wq);
-enum io_wq_cancel io_wq_cancel_work(struct io_wq *wq, struct io_wq_work *cwork);
 
 typedef bool (work_cancel_fn)(struct io_wq_work *, void *);