author		David Howells <dhowells@redhat.com>	2024-09-27 11:08:42 +0300
committer	Christian Brauner <brauner@kernel.org>	2024-09-27 19:29:20 +0300
commit		9fffa4e9b3b158f63334e603e610da7d529a0f9a (patch)
tree		0778bf4f5c0d5696e5f68f6c3aae07350105af72 /fs/netfs
parent		ff98751bae40faed1ba9c6a7287e84430f7dec64 (diff)
download	linux-9fffa4e9b3b158f63334e603e610da7d529a0f9a.tar.xz
netfs: Advance iterator correctly rather than jumping it
In netfs_write_folio(), use iov_iter_advance() to advance the iterator as we
split bits of the folio off to subrequests, rather than manually jumping the
->iov_offset value around. Jumping the offset directly becomes more
problematic when we use a bounce buffer made out of single-page folios to
cover a multipage pagecache folio.
Signed-off-by: David Howells <dhowells@redhat.com>
Link: https://lore.kernel.org/r/2238548.1727424522@warthog.procyon.org.uk
cc: Jeff Layton <jlayton@kernel.org>
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Christian Brauner <brauner@kernel.org>
Diffstat (limited to 'fs/netfs')
-rw-r--r--	fs/netfs/write_issue.c	12
1 file changed, 9 insertions, 3 deletions
diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index 04e66d587f77..f9761d11791d 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -307,6 +307,7 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
 	struct netfs_io_stream *stream;
 	struct netfs_group *fgroup; /* TODO: Use this with ceph */
 	struct netfs_folio *finfo;
+	size_t iter_off = 0;
 	size_t fsize = folio_size(folio), flen = fsize, foff = 0;
 	loff_t fpos = folio_pos(folio), i_size;
 	bool to_eof = false, streamw = false;
@@ -462,7 +463,12 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
 		if (choose_s < 0)
 			break;
 		stream = &wreq->io_streams[choose_s];
-		wreq->io_iter.iov_offset = stream->submit_off;
+
+		/* Advance the iterator(s). */
+		if (stream->submit_off > iter_off) {
+			iov_iter_advance(&wreq->io_iter, stream->submit_off - iter_off);
+			iter_off = stream->submit_off;
+		}
 		atomic64_set(&wreq->issued_to, fpos + stream->submit_off);
 
 		stream->submit_extendable_to = fsize - stream->submit_off;
@@ -477,8 +483,8 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
 			debug = true;
 	}
 
-	wreq->io_iter.iov_offset = 0;
-	iov_iter_advance(&wreq->io_iter, fsize);
+	if (fsize > iter_off)
+		iov_iter_advance(&wreq->io_iter, fsize - iter_off);
 	atomic64_set(&wreq->issued_to, fpos + fsize);
 
 	if (!debug)
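As a supplement (not part of the patch), here is a minimal userspace sketch of the pattern the fix adopts: remember how much of the iterator has already been consumed (iter_off) and advance it by the difference, rather than overwriting an absolute offset field. The toy_iter type, toy_iter_advance() helper, and the sizes used below are illustrative assumptions, not the kernel's iov_iter API.

/*
 * Userspace sketch of the relative-advance pattern used by the patch.
 * toy_iter stands in for the iterator; it is not the kernel's iov_iter.
 */
#include <stddef.h>
#include <stdio.h>

struct toy_iter {
	size_t pos;	/* bytes already consumed */
	size_t count;	/* bytes remaining */
};

static void toy_iter_advance(struct toy_iter *it, size_t n)
{
	if (n > it->count)
		n = it->count;
	it->pos += n;
	it->count -= n;
}

int main(void)
{
	struct toy_iter iter = { .pos = 0, .count = 16384 };
	size_t iter_off = 0;				/* bytes already advanced for this folio */
	size_t submit_off[] = { 0, 4096, 12288 };	/* per-stream submit offsets (made up) */
	size_t fsize = 16384;				/* folio size (made up) */

	for (size_t s = 0; s < sizeof(submit_off) / sizeof(submit_off[0]); s++) {
		/* Advance relative to what has already been consumed,
		 * rather than jumping an absolute offset. */
		if (submit_off[s] > iter_off) {
			toy_iter_advance(&iter, submit_off[s] - iter_off);
			iter_off = submit_off[s];
		}
		printf("stream %zu: iterator at %zu\n", s, iter.pos);
	}

	/* Consume whatever is left of the folio. */
	if (fsize > iter_off)
		toy_iter_advance(&iter, fsize - iter_off);
	printf("final: iterator at %zu, %zu bytes remaining\n", iter.pos, iter.count);
	return 0;
}

The relative advance works regardless of how the underlying buffer is segmented, which is what writing ->iov_offset directly gets wrong once the bounce buffer is built from single-page folios covering a multipage pagecache folio.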