| author | Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> | 2021-07-08 04:10:18 +0300 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2021-07-08 21:48:23 +0300 |
| commit | 3bbda69c48d27474a9e6a90cf4680b295a7efa46 (patch) | |
| tree | e7c9acf3a396ef49be10a2385b38e232824e318c /mm | |
| parent | 97113eb39fa7972722ff490b947d8af023e1f6a2 (diff) | |
| download | linux-3bbda69c48d27474a9e6a90cf4680b295a7efa46.tar.xz | |
mm/mremap: allow arch runtime override
Patch series "Speedup mremap on ppc64", v8.
This patchset enables MOVE_PMD/MOVE_PUD support on power. This requires
the platform to support updating higher-level page tables without updating
page table entries. It also requires invalidating the Page Walk Cache on
architectures that have one.
This patch (of 3):
Architectures like ppc64 support faster mremap only with radix
translation. Hence allow a runtime, per-architecture check for fast-mremap
support (see the sketch after the sign-offs below).
Link: https://lkml.kernel.org/r/20210616045735.374532-1-aneesh.kumar@linux.ibm.com
Link: https://lkml.kernel.org/r/20210616045735.374532-2-aneesh.kumar@linux.ibm.com
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.ibm.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
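
For context, the per-architecture override that motivates this hook looks roughly like the sketch below: a later patch in this series has powerpc define arch_supports_page_table_move() so the fast mremap path is taken only under radix translation. This is an illustration of the intent, not the exact code from that patch; radix_enabled() is the existing powerpc predicate for the radix MMU, and the header placement may differ.

```c
/*
 * Illustrative kernel-header fragment (not part of this diff): an
 * architecture such as ppc64 overrides the generic default by defining
 * both the helper and the guard macro of the same name, returning true
 * only when radix translation is active.
 */
#define arch_supports_page_table_move arch_supports_page_table_move
static inline bool arch_supports_page_table_move(void)
{
	return radix_enabled();
}
```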
Diffstat (limited to 'mm')
| -rw-r--r-- | mm/mremap.c | 15 |
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/mm/mremap.c b/mm/mremap.c
index 52369139f1b9..5989d3990020 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -25,7 +25,7 @@
 #include <linux/userfaultfd_k.h>
 
 #include <asm/cacheflush.h>
-#include <asm/tlbflush.h>
+#include <asm/tlb.h>
 #include <asm/pgalloc.h>
 
 #include "internal.h"
@@ -210,6 +210,15 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
 		drop_rmap_locks(vma);
 }
 
+#ifndef arch_supports_page_table_move
+#define arch_supports_page_table_move arch_supports_page_table_move
+static inline bool arch_supports_page_table_move(void)
+{
+	return IS_ENABLED(CONFIG_HAVE_MOVE_PMD) ||
+		IS_ENABLED(CONFIG_HAVE_MOVE_PUD);
+}
+#endif
+
 #ifdef CONFIG_HAVE_MOVE_PMD
 static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 		  unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
@@ -218,6 +227,8 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 	struct mm_struct *mm = vma->vm_mm;
 	pmd_t pmd;
 
+	if (!arch_supports_page_table_move())
+		return false;
 	/*
 	 * The destination pmd shouldn't be established, free_pgtables()
 	 * should have released it.
@@ -284,6 +295,8 @@ static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
 	struct mm_struct *mm = vma->vm_mm;
 	pud_t pud;
 
+	if (!arch_supports_page_table_move())
+		return false;
 	/*
 	 * The destination pud shouldn't be established, free_pgtables()
 	 * should have released it.
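
The hunk above relies on the kernel's usual "#ifndef override" convention: the generic code supplies a weak default only when an architecture header has not already defined a macro of the same name. The standalone sketch below demonstrates that preprocessor pattern outside the kernel; the names (arch_supports_fast_path, the pretend "headers") are hypothetical and exist only for illustration.

```c
/*
 * Standalone demo of the "#ifndef override" pattern, using hypothetical
 * names. An arch header that wants its own helper defines both the
 * function and a same-named macro; the generic fallback is then skipped
 * by the #ifndef guard.
 */
#include <stdbool.h>
#include <stdio.h>

/* --- pretend arch header: provides its own helper and the guard macro --- */
#define arch_supports_fast_path arch_supports_fast_path
static inline bool arch_supports_fast_path(void)
{
	return true;	/* arch-specific condition would go here */
}

/* --- pretend generic header: compiled out because the macro is defined --- */
#ifndef arch_supports_fast_path
#define arch_supports_fast_path arch_supports_fast_path
static inline bool arch_supports_fast_path(void)
{
	return false;	/* conservative generic default */
}
#endif

int main(void)
{
	/* Prints 1: the arch override wins over the generic default. */
	printf("fast path supported: %d\n", arch_supports_fast_path());
	return 0;
}
```

The same shape is what makes this patch a no-op for existing users: with no arch override, the default returns true whenever CONFIG_HAVE_MOVE_PMD or CONFIG_HAVE_MOVE_PUD is enabled, while ppc64 can later narrow it to radix-only at runtime.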