author     David Howells <dhowells@redhat.com>     2024-12-16 23:40:59 +0300
committer  Christian Brauner <brauner@kernel.org>  2024-12-21 00:34:03 +0300
commit     31fc366aa7aa911ebc0744e99c82caee4e97315a (patch)
tree       da5073dea2d87d035032b94ecf573ebb53c894c1 /fs/netfs/buffered_read.c
parent     360157829ee3dba848ffa817792d9a07969e0a95 (diff)
netfs: Drop the was_async arg from netfs_read_subreq_terminated()
Drop the was_async argument from netfs_read_subreq_terminated().  Almost
every caller is in process context and passes false.  Some filesystems
delegate the call to a workqueue to avoid doing the work in their network
message queue parsing thread.
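
For illustration, the interface change amounts to dropping the boolean from
the prototype, roughly as below (a sketch only; the header hunk is outside
the diffstat shown here, which is limited to fs/netfs/buffered_read.c):

	/* Before: callers had to say whether they were in softirq context. */
	void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq,
					  bool was_async);

	/* After: the function can assume process context. */
	void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq);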
The only exception is netfs_cache_read_terminated(), which handles
completion in the cache - this is usually a callback from the backing
filesystem in softirq context, though it can be called from process context
if an error occurred.  In this case, delegate to a workqueue.
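
A minimal sketch of that workqueue path, assuming the subrequest carries a
work item set up elsewhere in this series (the handler name below is
illustrative, not taken from this diff):

	/* Illustrative work handler: runs in process context and performs
	 * the termination that used to happen directly in the cache's
	 * completion callback.
	 */
	static void netfs_read_termination_worker(struct work_struct *work)
	{
		struct netfs_io_subrequest *subreq =
			container_of(work, struct netfs_io_subrequest, work);

		netfs_read_subreq_terminated(subreq);
	}

	/* The cache completion callback then only needs to queue it: */
	schedule_work(&subreq->work);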
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/CAHk-=wiVC5Cgyz6QKXFu6fTaA6h4CjexDR-OV9kL6Vo5x9v8=A@mail.gmail.com/
Signed-off-by: David Howells <dhowells@redhat.com>
Link: https://lore.kernel.org/r/20241216204124.3752367-10-dhowells@redhat.com
cc: Jeff Layton <jlayton@kernel.org>
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Christian Brauner <brauner@kernel.org>
Diffstat (limited to 'fs/netfs/buffered_read.c')
-rw-r--r--  fs/netfs/buffered_read.c  6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index d420d623711c..fa1013020ac9 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -154,7 +154,7 @@ static void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error
 	} else {
 		subreq->error = transferred_or_error;
 	}
-	netfs_read_subreq_terminated(subreq, was_async);
+	schedule_work(&subreq->work);
 }
 
 /*
@@ -255,7 +255,7 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
 				goto prep_iter_failed;
 			__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
 			subreq->error = 0;
-			netfs_read_subreq_terminated(subreq, false);
+			netfs_read_subreq_terminated(subreq);
 			goto done;
 		}
 
@@ -287,7 +287,7 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
 	} while (size > 0);
 
 	if (atomic_dec_and_test(&rreq->nr_outstanding))
-		netfs_rreq_terminated(rreq, false);
+		netfs_rreq_terminated(rreq);
 
 	/* Defer error return as we may need to wait for outstanding I/O. */
 	cmpxchg(&rreq->error, 0, ret);