author    Matthew Wilcox (Oracle) <willy@infradead.org>   2021-12-03 16:50:01 +0300
committer Matthew Wilcox (Oracle) <willy@infradead.org>   2022-01-08 08:28:41 +0300
commit    7b774aab7941e195d3130caa856da6904333988b (patch)
tree      8e226d20f5a1f82946e54c4303f393d909bdf769 /mm
parent    3506659e18a61ae525f3b9b4f5af23b4b149d4db (diff)
shmem: Convert part of shmem_undo_range() to use a folio
find_lock_entries() never returns tail pages. We cannot use page_folio()
here, as the pagevec may also contain swap entries, so simply cast for
now. This is an intermediate step which will be fully removed by the
end of this series.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Diffstat (limited to 'mm')
 mm/shmem.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 18f93c2d68f1..40da9075374b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -936,22 +936,22 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	while (index < end && find_lock_entries(mapping, index, end - 1,
 			&pvec, indices)) {
 		for (i = 0; i < pagevec_count(&pvec); i++) {
-			struct page *page = pvec.pages[i];
+			struct folio *folio = (struct folio *)pvec.pages[i];

 			index = indices[i];

-			if (xa_is_value(page)) {
+			if (xa_is_value(folio)) {
 				if (unfalloc)
 					continue;
 				nr_swaps_freed += !shmem_free_swap(mapping,
-								index, page);
+								index, folio);
 				continue;
 			}
-			index += thp_nr_pages(page) - 1;
+			index += folio_nr_pages(folio) - 1;

-			if (!unfalloc || !PageUptodate(page))
-				truncate_inode_page(mapping, page);
-			unlock_page(page);
+			if (!unfalloc || !folio_test_uptodate(folio))
+				truncate_inode_page(mapping, &folio->page);
+			folio_unlock(folio);
 		}
 		pagevec_remove_exceptionals(&pvec);
 		pagevec_release(&pvec);