| author | Nicholas Piggin <npiggin@gmail.com> | 2022-10-13 18:16:45 +0300 |
|---|---|---|
| committer | Michael Ellerman <mpe@ellerman.id.au> | 2022-10-18 14:46:18 +0300 |
| commit | b9ef323ea1682f9837bf63ba10c5e3750f71a20a (patch) | |
| tree | 7edf904a6f570bedf5cf93b5b863f8d95980dca7 /arch/powerpc/include/asm/book3s | |
| parent | b12eb279ff552bd67c167b0fe701ae602aa7311e (diff) | |
| download | linux-b9ef323ea1682f9837bf63ba10c5e3750f71a20a.tar.xz | |
powerpc/64s: Disable preemption in hash lazy mmu mode
apply_to_page_range() on kernel pages does not disable preemption, which
is a requirement for hash's lazy mmu mode: it keeps track of the TLB
entries to flush in a per-CPU array.
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20221013151647.1857994-1-npiggin@gmail.com
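For readers unfamiliar with the hash lazy-MMU batching, the sketch below is a simplified illustration of why preemption has to be off while the batch is active; the struct layout and names only approximate the real ppc64_tlb_batch code and are not the verbatim kernel implementation. this_cpu_ptr() binds the batch pointer to the current CPU, so a task that is preempted and migrated mid-batch would queue flushes into one CPU's array and then complete (or abandon) them from another.

```c
/*
 * Hypothetical, simplified illustration of the hash lazy-MMU batching
 * pattern.  Names and fields approximate the real ppc64_tlb_batch code
 * but are not the verbatim kernel implementation.
 */
#include <linux/percpu.h>
#include <linux/preempt.h>

struct tlb_batch {
	unsigned int active;
	unsigned long index;
	unsigned long vaddrs[192];	/* addresses queued for a later flush */
};

static DEFINE_PER_CPU(struct tlb_batch, tlb_batch);

static inline void enter_lazy_mmu(void)
{
	struct tlb_batch *batch;

	/*
	 * Callers such as apply_to_page_range() on kernel page tables do
	 * not disable preemption themselves.  Without this, the task could
	 * be preempted and migrated, leaving 'batch' pointing at another
	 * CPU's array while entries are queued into it.
	 */
	preempt_disable();
	batch = this_cpu_ptr(&tlb_batch);
	batch->active = 1;
}

static inline void leave_lazy_mmu(void)
{
	struct tlb_batch *batch = this_cpu_ptr(&tlb_batch);

	if (batch->index)
		batch->index = 0;	/* a real implementation flushes the queued entries here */
	batch->active = 0;
	preempt_enable();		/* pairs with preempt_disable() above */
}
```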
Diffstat (limited to 'arch/powerpc/include/asm/book3s')
-rw-r--r-- | arch/powerpc/include/asm/book3s/64/tlbflush-hash.h | 6 |
1 file changed, 6 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
index fab8332fe1ad..751921f6db46 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
@@ -32,6 +32,11 @@ static inline void arch_enter_lazy_mmu_mode(void)
 
 	if (radix_enabled())
 		return;
+	/*
+	 * apply_to_page_range can call us with preemption enabled when
+	 * operating on kernel page tables.
+	 */
+	preempt_disable();
 	batch = this_cpu_ptr(&ppc64_tlb_batch);
 	batch->active = 1;
 }
@@ -47,6 +52,7 @@ static inline void arch_leave_lazy_mmu_mode(void)
 	if (batch->index)
 		__flush_tlb_pending(batch);
 	batch->active = 0;
+	preempt_enable();
 }
 
 #define arch_flush_lazy_mmu_mode()	do {} while (0)
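For context, here is a simplified model of the caller side, loosely based on apply_to_pte_range() in mm/memory.c; the signature is reduced for illustration and is not the verbatim kernel code. For user mappings the PTE lock is held across this walk, which disables preemption as a side effect on non-PREEMPT_RT kernels; for kernel mappings no such lock is taken, which is why the arch hooks patched above must disable preemption themselves.

```c
/*
 * Simplified model of the caller side, loosely based on
 * apply_to_pte_range() in mm/memory.c; the callback signature is
 * reduced for illustration and is not the verbatim kernel code.
 */
static int apply_to_pte_range_sketch(pte_t *pte, unsigned long addr,
				     unsigned long end,
				     int (*fn)(pte_t *pte, unsigned long addr))
{
	int err = 0;

	/*
	 * For kernel mappings no PTE lock is taken around this walk, so
	 * preemption stays enabled here; the arch hook has to handle it.
	 */
	arch_enter_lazy_mmu_mode();
	do {
		err = fn(pte, addr);
		if (err)
			break;
	} while (pte++, addr += PAGE_SIZE, addr != end);
	arch_leave_lazy_mmu_mode();	/* flush the batch, re-enable preemption */

	return err;
}
```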