author     Peter Zijlstra <peterz@infradead.org>      2018-09-04 18:04:07 +0300
committer  Ingo Molnar <mingo@kernel.org>             2019-04-03 11:32:54 +0300
commit     6137fed0823247e32306bde2b48cac627c24f894 (patch)
tree       73fe1d4bac3022a68e12f0798b1f21d9b4af6d5f /arch/mips
parent     7bb8709d6ad3ceeb5010a98b0d7eb11db8836da1 (diff)
download   linux-6137fed0823247e32306bde2b48cac627c24f894.tar.xz
arch/tlb: Clean up simple architectures
For the architectures that do not implement their own tlb_flush() but
do already use the generic mmu_gather, there are two options:
1) the platform has an efficient flush_tlb_range() and
asm-generic/tlb.h doesn't need any overrides at all.
2) the platform lacks an efficient flush_tlb_range() and
we select MMU_GATHER_NO_RANGE to minimize full invalidates.
Convert all 'simple' architectures to one of these two forms.
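Concretely, a form-1 architecture ends up with an asm/tlb.h that adds nothing on top of the generic header, while a form-2 architecture keeps the same trivial header and additionally selects MMU_GATHER_NO_RANGE in its Kconfig. The following is only an illustrative sketch; the path and guard name are placeholders, not taken from any particular architecture:

/* arch/<arch>/include/asm/tlb.h, illustrative sketch only */
#ifndef _ASM_ARCH_TLB_H
#define _ASM_ARCH_TLB_H

/*
 * Form 1: the platform has an efficient flush_tlb_range(), so the
 * generic mmu_gather needs no overrides at all.
 *
 * Form 2: the header looks the same, but the architecture also does
 * "select MMU_GATHER_NO_RANGE" in its Kconfig so that the generic
 * tlb_flush() falls back to flush_tlb_mm() instead of relying on
 * flush_tlb_range().
 */
#include <asm-generic/tlb.h>

#endif /* _ASM_ARCH_TLB_H */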
alpha: has no range invalidate -> 2
arc: already used flush_tlb_range() -> 1
c6x: has no range invalidate -> 2
hexagon: has an efficient flush_tlb_range() -> 1
(flush_tlb_mm() is in fact a full range invalidate,
so no need to shoot down everything)
m68k: has inefficient flush_tlb_range() -> 2
microblaze: has no flush_tlb_range() -> 2
mips: has efficient flush_tlb_range() -> 1
(even though it currently seems to use flush_tlb_mm())
nds32: already uses flush_tlb_range() -> 1
nios2: has inefficient flush_tlb_range() -> 2
(no limit on range iteration)
openrisc: has inefficient flush_tlb_range() -> 2
(no limit on range iteration)
parisc: already uses flush_tlb_range() -> 1
sparc32: already uses flush_tlb_range() -> 1
unicore32: has inefficient flush_tlb_range() -> 2
(no limit on range iteration)
xtensa: has efficient flush_tlb_range() -> 1
Note this also fixes a bug in the existing code for a number of
platforms. Those platforms that did:
tlb_end_vma() -> if (!full_mm) flush_tlb_*()
tlb_flush -> if (full_mm) flush_tlb_mm()
missed the case of shift_arg_pages(), which doesn't have @fullmm set,
nor calls into tlb_*vma(), but still frees page-tables and thus needs
an invalidate. The new code handles this by detecting a non-empty
range, and either issuing the matching range invalidate or a full
invalidate, depending on the capabilities.
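The range detection described here boils down to something like the sketch below. This is a simplification, not the verbatim asm-generic/tlb.h code: the mmu_gather fields (fullmm, start, end) are real, but the dummy vma construction and the config name follow the wording of this commit message rather than any specific kernel version.

/* Simplified sketch of the generic fallback, not the verbatim code. */
static inline void tlb_flush(struct mmu_gather *tlb)
{
#ifdef CONFIG_MMU_GATHER_NO_RANGE
	/*
	 * No efficient flush_tlb_range(): any gathered range, including
	 * the shift_arg_pages() case that never sets fullmm, becomes a
	 * full mm invalidate.
	 */
	if (tlb->end)
		flush_tlb_mm(tlb->mm);
#else
	if (tlb->fullmm) {
		flush_tlb_mm(tlb->mm);
	} else if (tlb->end) {
		/* A non-empty range was gathered: issue a range invalidate. */
		struct vm_area_struct vma = { .vm_mm = tlb->mm };

		flush_tlb_range(&vma, tlb->start, tlb->end);
	}
#endif
}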
No change in behavior intended.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch/mips')
-rw-r--r--   arch/mips/include/asm/tlb.h   8
1 file changed, 0 insertions, 8 deletions
diff --git a/arch/mips/include/asm/tlb.h b/arch/mips/include/asm/tlb.h
index 32b8a8187733..90f3ad76d9e0 100644
--- a/arch/mips/include/asm/tlb.h
+++ b/arch/mips/include/asm/tlb.h
@@ -5,14 +5,6 @@
 #include <asm/cpu-features.h>
 #include <asm/mipsregs.h>
 
-#define tlb_end_vma(tlb, vma) do { } while (0)
-#define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
-
-/*
- * .. because we flush the whole mm when it fills up.
- */
-#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
-
 #define _UNIQUE_ENTRYHI(base, idx)					\
 		(((base) + ((idx) << (PAGE_SHIFT + 1))) |		\
 		 (cpu_has_tlbinv ? MIPS_ENTRYHI_EHINV : 0))