commit	2c69e2057962b6bd76d72446453862eb59325b49
author	Matthew Wilcox (Oracle) <willy@infradead.org>	2022-04-29 17:40:40 +0300
committer	Matthew Wilcox (Oracle) <willy@infradead.org>	2022-05-09 23:21:44 +0300
tree	618562570ea6415752e472f1faba16ecb9c841bf /fs/mpage.c
parent	7479c505b4ab5ed5f81f35fdd68c44c58d6f0439
fs: Convert block_read_full_page() to block_read_full_folio()
This function is NOT converted to handle large folios, so include
an assert that the filesystem isn't passing one in. Otherwise, use
the folio functions instead of the page functions, where they exist.
Convert all filesystems which use block_read_full_page().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
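The assertion the message refers to lands in the helper itself (in fs/buffer.c), not in the fs/mpage.c hunks below. A minimal sketch of the converted function's entry, assuming only what the commit message states: VM_BUG_ON_FOLIO() and folio_test_large() are existing kernel helpers, and the buffer-head read path that follows is elided, not reproduced from the actual source.

#include <linux/buffer_head.h>
#include <linux/mm.h>

/* Sketch of block_read_full_folio()'s entry after this commit; the
 * real body (per-block get_block() mapping and BIO submission) is
 * elided here. */
int block_read_full_folio(struct folio *folio, get_block_t *get_block)
{
	/* Not yet converted to handle large folios, so assert the
	 * filesystem passed a single-page folio, per the message. */
	VM_BUG_ON_FOLIO(folio_test_large(folio), folio);

	/* ... map each block with get_block() and read it in ... */
	return 0;
}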
Diffstat (limited to 'fs/mpage.c')
-rw-r--r--	fs/mpage.c	10
1 file changed, 5 insertions, 5 deletions
diff --git a/fs/mpage.c b/fs/mpage.c
index 1fe56f8c495f..a04439b84ae2 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -36,7 +36,7 @@
  *
  * The mpage code never puts partial pages into a BIO (except for end-of-file).
  * If a page does not map to a contiguous run of blocks then it simply falls
- * back to block_read_full_page().
+ * back to block_read_full_folio().
  *
  * Why is this? If a page's completion depends on a number of different BIOs
  * which can complete in any order (or at the same time) then determining the
@@ -68,7 +68,7 @@ static struct bio *mpage_bio_submit(struct bio *bio)
 /*
  * support function for mpage_readahead. The fs supplied get_block might
  * return an up to date buffer. This is used to map that buffer into
- * the page, which allows readpage to avoid triggering a duplicate call
+ * the page, which allows read_folio to avoid triggering a duplicate call
  * to get_block.
  *
  * The idea is to avoid adding buffers to pages that don't already have
@@ -296,7 +296,7 @@ confused:
 	if (args->bio)
 		args->bio = mpage_bio_submit(args->bio);
 	if (!PageUptodate(page))
-		block_read_full_page(page, args->get_block);
+		block_read_full_folio(page_folio(page), args->get_block);
 	else
 		unlock_page(page);
 	goto out;
@@ -425,7 +425,7 @@ static void clean_buffers(struct page *page, unsigned first_unmapped)
 
 	/*
 	 * we cannot drop the bh if the page is not uptodate or a concurrent
-	 * readpage would fail to serialize with the bh and it would read from
+	 * read_folio would fail to serialize with the bh and it would read from
 	 * disk before we reach the platter.
 	 */
 	if (buffer_heads_over_limit && PageUptodate(page))
@@ -510,7 +510,7 @@ static int __mpage_writepage(struct page *page, struct writeback_control *wbc,
 		/*
 		 * Page has buffers, but they are all unmapped. The page was
 		 * created by pagein or read over a hole which was handled by
-		 * block_read_full_page(). If this address_space is also
+		 * block_read_full_folio(). If this address_space is also
 		 * using mpage_readahead then this can rarely happen.
 		 */
 		goto confused;
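At the mpage fallback above, page_folio() bridges the legacy struct page call site to the new helper. A filesystem that implements the read_folio address_space operation can forward its folio directly; the sketch below shows that caller pattern, where example_read_folio and example_get_block are hypothetical names standing in for a real filesystem's hooks (the converted filesystems themselves are outside this diffstat).

#include <linux/buffer_head.h>

/* Stand-in for the filesystem's real get_block_t callback. */
static int example_get_block(struct inode *inode, sector_t iblock,
			     struct buffer_head *bh_result, int create);

/* Illustrative read_folio hook: folios pass straight through to the
 * converted helper, while struct page call sites (as in the mpage
 * fallback above) must wrap the page with page_folio() first. */
static int example_read_folio(struct file *file, struct folio *folio)
{
	return block_read_full_folio(folio, example_get_block);
}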