author    : Christian Brauner <brauner@kernel.org>  2024-12-21 00:34:18 +0300
committer : Christian Brauner <brauner@kernel.org>  2024-12-21 00:34:18 +0300
commit    : 7a47db23a9f003614e15c687d2a5425c175a9ca8
tree      : 38292db9d7f2b020dc647d0871a5c938c7a82cc0 /fs/netfs/buffered_read.c
parent    : 5fe85a5c513344161cde33b79f8badc81b8aa8d3
parent    : 794d8cf3a87a6b958d520a7c32d142f7ec30cb92
download  : linux-7a47db23a9f003614e15c687d2a5425c175a9ca8.tar.xz
Merge patch series "netfs: Read performance improvements and "single-blob" support"
David Howells <dhowells@redhat.com> says:
This set of patches is primarily about two things: improving read
performance and supporting monolithic single-blob objects that have to be
read/written as such (e.g. AFS directory contents). The implementation of
the two parts is interwoven as each makes the other possible.
READ PERFORMANCE
================
The read performance improvements are intended to recover some loss of
performance detected in cifs and, to a lesser extent, in afs. The problem is
that we queue too many work items during the collection of read results:
each individual subrequest is collected by its own work item, and then they
have to interact with each other when a series of subrequests don't exactly
align with the pattern of folios that are being read by the overall
request.
Whilst the processing of the pages covered by individual subrequests as
they complete potentially allows folios to be woken in parallel and with
minimum delay, it can shuffle wakeups for sequential reads out of order -
and that is the most common I/O pattern.
The final assessment and cleanup of an operation is then held up until the
last I/O completes - and for a synchronous sequential operation, this means
the bouncing around of work items just adds latency.
Two changes have been made to make this work:
(1) All collection is now done in a single "work item" that works
progressively through the subrequests as they complete (and also
dispatches retries as necessary).
(2) For readahead and AIO, this work item is done on a workqueue and can
run in parallel with the ultimate consumer of the data; for
synchronous direct or unbuffered reads, the collection is run in the
application thread and not offloaded.
Functions such as smb2_readv_callback() then just tell netfslib that the
subrequest has terminated; netfslib does a minimal bit of processing on the
spot - stat counting and tracing mostly - and then queues/wakes up the
worker. This simplifies the logic as the collector just walks sequentially
through the subrequests as they complete and walks through the folios, if
buffered, unlocking them as it goes. It also keeps to a minimum the amount
of latency injected into the filesystem's low-level I/O handling.
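In outline, the pattern looks something like the sketch below. The ex_
structures and functions are simplified, hypothetical stand-ins, not the real
netfslib types; the actual code keys offloading off flags such as
NETFS_RREQ_OFFLOAD_COLLECTION and collects per I/O stream.

/* Minimal sketch of the single-collector pattern, with hypothetical ex_
 * stand-ins for the netfs structures.
 */
#include <linux/container_of.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/wait.h>
#include <linux/workqueue.h>

struct ex_subrequest {
	struct list_head	link;		/* on ex_request::subrequests, in file order */
	size_t			transferred;
	bool			done;
};

struct ex_request {
	struct list_head	subrequests;
	spinlock_t		lock;
	struct work_struct	collector;	/* the one collection work item */
	wait_queue_head_t	waitq;
	bool			offload;	/* readahead/AIO: collect on a workqueue */
};

/* Called from the filesystem's completion path (where, say,
 * smb2_readv_callback() would call netfs_read_subreq_terminated()): do the
 * minimum on the spot and hand off to the collector.
 */
static void ex_subreq_terminated(struct ex_request *rreq, struct ex_subrequest *subreq)
{
	WRITE_ONCE(subreq->done, true);
	if (rreq->offload)
		queue_work(system_unbound_wq, &rreq->collector);
	else
		wake_up(&rreq->waitq);	/* sync direct/unbuffered: app thread collects */
}

/* The collector walks the subrequests strictly in submission order and stops
 * at the first one still in flight; folios up to that point can be unlocked.
 */
static void ex_collect(struct work_struct *work)
{
	struct ex_request *rreq = container_of(work, struct ex_request, collector);
	struct ex_subrequest *subreq, *tmp;

	spin_lock(&rreq->lock);
	list_for_each_entry_safe(subreq, tmp, &rreq->subrequests, link) {
		if (!READ_ONCE(subreq->done))
			break;
		list_del(&subreq->link);
		/* ... fold subreq->transferred into the request, unlock the
		 * folios it covers, dispatch a retry if it failed ...
		 */
	}
	spin_unlock(&rreq->lock);
}

Because the walk is strictly in order, wakeups for a sequential read also
happen in order, rather than being shuffled by whichever subrequest's work
item happened to run first.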
The way netfs supports filesystems using the deprecated PG_private_2 flag
is changed: folios are flagged and added to a write request as they
complete and that takes care of scheduling the writes to the cache. The
originating read request can then just unlock the pages whatever happens.
SINGLE-BLOB OBJECT SUPPORT
==========================
Single-blob objects are files for which the content of the file must be
read from or written to the server in a single operation because reading
them in parts may yield inconsistent results. AFS directories are an
example of this as there exists the possibility that the contents are
generated on the fly and would differ between reads or might change due to
third party interference.
Such objects will be written to and retrieved from the cache if one is
present, though the cache I/O may be split into multiple subrequests to do so.
The important part is that the read from/write to the *server* is monolithic.
Single-blob reading is, for the moment, fully synchronous and does result
collection in the application thread. Also for the moment, the API is
supplied the buffer in the form of a folio_queue chain rather than using
the pagecache.
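As an illustration of that buffer handover (not the actual AFS or netfslib
code), a filesystem might do something like the following. ex_read_single()
is a hypothetical stand-in for the single-object read entry point this series
adds; the folioq and iov_iter helpers are the real ones used elsewhere in the
series.

/* Sketch only: hand netfslib an ITER_FOLIOQ buffer for a single-blob read.
 * ex_read_single() is hypothetical; error unwinding is elided.
 */
#include <linux/folio_queue.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/uio.h>

ssize_t ex_read_single(struct inode *inode, struct file *file,
		       struct iov_iter *iter);	/* hypothetical entry point */

static ssize_t ex_read_blob(struct inode *inode, struct file *file, size_t size)
{
	struct folio_queue *fq;
	struct iov_iter iter;
	size_t filled = 0;

	fq = kmalloc(sizeof(*fq), GFP_KERNEL);
	if (!fq)
		return -ENOMEM;
	folioq_init(fq);

	/* A real buffer would chain further folio_queue segments for large
	 * blobs; one segment of order-0 folios keeps the sketch short.
	 */
	for (unsigned int slot = 0;
	     slot < folioq_nr_slots(fq) && filled < size; slot++) {
		struct folio *folio = folio_alloc(GFP_KERNEL, 0);

		if (!folio)
			return -ENOMEM;	/* freeing the partly-built queue elided */
		folioq_append(fq, folio);
		filled += PAGE_SIZE;
	}

	/* The content is read from the server in one monolithic operation,
	 * even if any cache I/O underneath is split into subrequests.
	 */
	iov_iter_folio_queue(&iter, ITER_DEST, fq, 0, 0, size);
	return ex_read_single(inode, file, &iter);
}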
AFS CHANGES
===========
This series makes a number of changes to the kafs filesystem, primarily in
the area of directory handling:
(1) AFS's FetchData RPC reply processing is made partially asynchronous
which allows the netfs_io_request's outstanding operation counter to
be removed as part of reducing the collection to a single work item.
(2) Directory and symlink reading are plumbed through netfslib using the
single-blob object API and are now cacheable with fscache. This also
allows the afs_read struct to be eliminated and netfs_io_subrequest to
be used directly instead.
(3) Directory and symlink content are now stored in a folio_queue buffer
rather than in the pagecache. This means we don't require the RCU
read lock and xarray iteration to access it, and folios won't randomly
disappear under us because the VM wants them back.
There are some downsides to this, though: the storage folios are no
longer known to the VM, drop_caches can't flush them, and the folios are
not migratable. The inode must also be marked dirty manually to get
the data written to the cache in the background.
(4) The vnode operation lock is changed from a mutex struct to a private
lock implementation. The problem is that the lock now needs to be
dropped in a separate thread and mutexes don't permit that (a sketch of
such a lock follows this list).
(5) When a new directory or symlink is created, we now initialise it
locally and mark it valid rather than downloading it (we know what
it's likely to look like).
(6) We now use the in-directory hashtable to reduce the number of entries
we need to scan when doing a lookup. The edit routines have to
maintain the hash chains.
(7) Cancellation (e.g. by signal) of an async call after the rxrpc_call
has been set up is now offloaded to the worker thread as there will be
a notification from rxrpc upon completion. This avoids a double
cleanup.
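Item (4) boils down to needing a lock that can be released by a thread other
than the one that acquired it, which mutex_unlock() (and lockdep) forbid. A
minimal sketch of such a lock using the kernel's bit-wait helpers might look
like the following; it illustrates the idea only and is not the afs
implementation, and the ex_ names are hypothetical.

#include <linux/sched.h>
#include <linux/wait_bit.h>

#define EX_OP_LOCKED	0	/* bit number in ex_op_lock::flags */

struct ex_op_lock {
	unsigned long	flags;
};

static void ex_op_lock_init(struct ex_op_lock *lock)
{
	lock->flags = 0;
}

static void ex_op_lock(struct ex_op_lock *lock)
{
	/* Sleep until we win the bit; there is no recorded owner. */
	wait_on_bit_lock(&lock->flags, EX_OP_LOCKED, TASK_UNINTERRUPTIBLE);
}

/* Unlike mutex_unlock(), this may be called from any thread - e.g. a
 * workqueue worker finishing an operation on behalf of the original caller.
 */
static void ex_op_unlock(struct ex_op_lock *lock)
{
	clear_and_wake_up_bit(EX_OP_LOCKED, &lock->flags);
}

Because a bit-lock has no owner, the completion/notification worker can drop
it once the server call finishes, which is exactly what the mutex prevented.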
SUPPORTING CHANGES
==================
To support the above some other changes are also made:
(1) A "rolling buffer" implementation is created to abstract out the two
separate folio_queue chaining implementations I had (one for read and
one for write); a brief usage sketch follows this list.
(2) Functions are provided to create/extend a buffer in a folio_queue
chain and tear it down again. This is used to handle AFS directories,
but could also be used to create bounce buffers for content crypto and
transport crypto.
(3) The was_async argument is dropped from netfs_read_subreq_terminated().
Instead we wake the read collection work item by either queuing it or
waking up the app thread.
(4) We don't need to use BH-excluding locks when communicating between the
issuing thread and the collection thread as neither of them now run in
BH context.
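Supporting changes (1) and (2) show up in the buffered read path roughly as
below. This is a condensed sketch modelled on the new netfs_readahead() and
netfs_prepare_read_iterator() code in the diff, not a verbatim copy; it
assumes the netfs-internal headers declare the rolling_buffer API.

/* Condensed sketch of priming a request's rolling buffer from the readahead
 * window; modelled on this series' code, details approximate.
 */
#include <linux/pagevec.h>
#include "internal.h"	/* fs/netfs internal header, assumed to pull in the
			 * rolling_buffer declarations */

static int ex_prime_rolling_buffer(struct netfs_io_request *rreq)
{
	struct folio_batch put_batch;
	ssize_t added = 0;
	int ret;

	/* One rolling buffer per request replaces the two hand-rolled
	 * folio_queue chains (one for read, one for write) used previously.
	 */
	ret = rolling_buffer_init(&rreq->buffer, rreq->debug_id, ITER_DEST);
	if (ret < 0)
		return ret;

	rreq->submitted = rreq->start;
	folio_batch_init(&put_batch);
	while (rreq->submitted < rreq->start + rreq->len) {
		/* Decant folios from the readahead window onto the tail of
		 * the buffer; the buffer's iterator grows to match.
		 */
		added = rolling_buffer_load_from_ra(&rreq->buffer, rreq->ractl,
						    &put_batch);
		if (added <= 0)
			break;
		rreq->submitted += added;
	}
	folio_batch_release(&put_batch);
	return added < 0 ? added : 0;
}

The same folio_queue segments underlie the build/clean helpers of item (2),
which is what later makes bounce buffers for content and transport crypto
possible.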
MISCELLANY
==========
Also included are a number of new tracepoints and a split of the netfslib
write collection code to put retrying into its own file (it gets more
complicated with content encryption).
There are also some minor AFS fixes included, such as fixing the AFS
directory format struct layout, reducing some directory over-invalidation
and making afs_rmdir() translate EEXIST to ENOTEMPTY (which is not available
on all systems the servers support).
Finally, there's a patch to try to detect entry into the folio unlock
function with no folio_queue structs in the buffer (which isn't allowed in
the cases that can get there). This is a debugging patch, but it should have
minimal overhead.
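The check being described is of roughly this shape (sketched here with a
hypothetical ex_ function name, not the patch itself):

/* Sketch of the debug check described above: complain once and bail if the
 * writeback folio-unlock path is reached with no folio_queue in the buffer.
 */
#include <linux/bug.h>
#include <linux/folio_queue.h>

static void ex_writeback_unlock_folios(struct folio_queue *folioq)
{
	if (WARN_ON_ONCE(!folioq))
		return;

	/* ... otherwise walk the queue, unlocking the folios it covers ... */
}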
* patches from https://lore.kernel.org/r/20241216204124.3752367-1-dhowells@redhat.com: (31 commits)
netfs: Report on NULL folioq in netfs_writeback_unlock_folios()
afs: Add a tracepoint for afs_read_receive()
afs: Locally initialise the contents of a new symlink on creation
afs: Use the contained hashtable to search a directory
afs: Make afs_mkdir() locally initialise a new directory's content
netfs: Change the read result collector to only use one work item
afs: Make {Y,}FS.FetchData an asynchronous operation
afs: Fix cleanup of immediately failed async calls
afs: Eliminate afs_read
afs: Use netfslib for symlinks, allowing them to be cached
afs: Use netfslib for directories
afs: Make afs_init_request() get a key if not given a file
netfs: Add support for caching single monolithic objects such as AFS dirs
netfs: Add functions to build/clean a buffer in a folio_queue
afs: Add more tracepoints to do with tracking validity
cachefiles: Add auxiliary data trace
cachefiles: Add some subrequest tracepoints
netfs: Remove some extraneous directory invalidations
afs: Fix directory format encoding struct
afs: Fix EEXIST error returned from afs_rmdir() to be ENOTEMPTY
...
Link: https://lore.kernel.org/r/20241216204124.3752367-1-dhowells@redhat.com
Signed-off-by: Christian Brauner <brauner@kernel.org>
Diffstat (limited to 'fs/netfs/buffered_read.c'):
 fs/netfs/buffered_read.c | 290 +-
 1 file changed, 109 insertions(+), 181 deletions(-)
diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index 4dc9b8286355..f761d44b3436 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
[Full diff omitted; see the commit via the Link above. In summary: the
hand-rolled folio_queue priming (netfs_load_buffer_from_ra(),
netfs_prime_buffer(), netfs_create_singular_buffer()) is replaced by the
rolling_buffer API; the nr_outstanding counter and per-subrequest completion
give way to the single read collector, gated by NETFS_RREQ_ALL_QUEUED and
NETFS_RREQ_OFFLOAD_COLLECTION; and spin_lock_bh() becomes spin_lock().]