author | Vishal Moola (Oracle) <vishal.moola@gmail.com> | 2022-12-21 21:08:45 +0300 |
---|---|---|
committer | Andrew Morton <akpm@linux-foundation.org> | 2023-01-19 04:12:47 +0300 |
commit | 318e9342fbbb6888d903d86e83865609901a1c65 (patch) | |
tree | 6d52286bb3c7871240ba2db524b203ee058d9478 /include/linux | |
parent | e11cb683b2ebc6699bc0ca200442f1b80a51553f (diff) | |
download | linux-318e9342fbbb6888d903d86e83865609901a1c65.tar.xz | |
mm/memory: add vm_normal_folio()
Patch series "Convert deactivate_page() to folio_deactivate()", v4.
Deactivate_page() has already been converted to use folios. This patch
series modifies the callers of deactivate_page() to use folios. It also
introduces vm_normal_folio() to assist with folio conversions, and
converts deactivate_page() to folio_deactivate() which takes in a folio.
This patch (of 4):
Introduce a wrapper function called vm_normal_folio(). This function
calls vm_normal_page() and returns the folio of the page found, or NULL if
no page is found.
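The definition itself lands in mm/memory.c, which is outside the include/linux
diffstat shown below; a minimal sketch consistent with the description above
(an illustration, not the literal patch hunk) looks like:

    struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
                                  pte_t pte)
    {
            /* Reuse the existing lookup to resolve the pte to a page. */
            struct page *page = vm_normal_page(vma, addr, pte);

            /* Hand back the folio containing that page, if any. */
            if (page)
                    return page_folio(page);
            return NULL;
    }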
This function allows callers to get a folio directly from a pte, which will
eventually let them replace their struct page variables with struct folio
entirely.
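For illustration only, a hypothetical caller walking page tables (vma, addr and
ptent are assumed to come from the surrounding walk and are not part of this
patch) could then operate on folios without an intermediate struct page:

    /* Hypothetical caller: resolve the pte straight to a folio. */
    struct folio *folio = vm_normal_folio(vma, addr, ptent);

    if (folio) {
            /* Work on the folio; no struct page variable is needed. */
            folio_mark_accessed(folio);
    }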
Link: https://lkml.kernel.org/r/20221221180848.20774-1-vishal.moola@gmail.com
Link: https://lkml.kernel.org/r/20221221180848.20774-2-vishal.moola@gmail.com
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'include/linux')
-rw-r--r-- | include/linux/mm.h | 2 |
1 file changed, 2 insertions, 0 deletions
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 253b2d7489e6..8e14183dfc58 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1968,6 +1968,8 @@ static inline bool can_do_mlock(void) { return false; }
 extern int user_shm_lock(size_t, struct ucounts *);
 extern void user_shm_unlock(size_t, struct ucounts *);
 
+struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
+			     pte_t pte);
 struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 			     pte_t pte);
 struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,