path: root/arch/arc/include/asm/mmu.h
Age | Commit message | Author | Files | Lines
2021-08-26 | ARC: mm: disintegrate mmu.h (arcv2 bits out) | Vineet Gupta | 1 | -78/+2
Non-functional change.

Tested-by: kernel test robot <lkp@intel.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25 | ARC: mm: move MMU specific bits out of entry code ... | Vineet Gupta | 1 | -0/+8
... to avoid polluting shared entry code (across three ISA variants) with ISA/MMU specific code.

Cc: Jose Abreu <joabreu@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25 | ARC: mm: move MMU specific bits out of ASID allocator | Vineet Gupta | 1 | -0/+13
And while at it, rewrite commentary on the ASID allocator.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25 | ARC: mm: move mmu/cache externs out to setup.h | Vineet Gupta | 1 | -4/+0
Don't pollute mmu.h and cache.h with ARC internal bootlog/setup related functions. Move them aside to setup.h.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25 | ARC: mm: remove tlb paranoid code | Vineet Gupta | 1 | -6/+0
This was used back in the ARC700 days when the ASID allocator was fragile. It hasn't been needed in the last 5 years.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25 | ARC: mm: use SCRATCH_DATA0 register for caching pgdir in ARCv2 only | Vineet Gupta | 1 | -4/+0
The MMU SCRATCH_DATA0 register is intended to cache the task pgd. However, the ARC700 SMP port has to repurpose it for re-entrant interrupt handling, while the UP port doesn't. We currently handle these use-cases with a fabricated #define, which has the usual issues of dependency nesting and obvious ugliness.

So clean this up: don't use the register to cache pgd on ARC700 (even in UP) and do the opposite for ARCv2. And while here, switch to the canonical pgd_offset().

Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
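A minimal sketch of the resulting shape (illustrative, not the exact hunk; register and helper names follow ARC conventions but should be treated as assumptions here). The same change also moves the fault path onto the generic pgd_offset() walker instead of open-coded pointer math:

    /* ARCv2 only: cache the task pgd in the MMU scratch reg so the TLB miss
     * handler can fetch it without touching the mm struct; ARC700 keeps the
     * register free for re-entrant interrupt handling.
     */
    static inline void mmu_setup_pgd(struct mm_struct *mm, void *pgd)
    {
    #ifdef CONFIG_ISA_ARCV2
            write_aux_reg(ARC_REG_SCRATCH_DATA0, (unsigned int)pgd);
    #endif
    }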
2021-08-25 | ARC: retire MMUv1 and MMUv2 support | Vineet Gupta | 1 | -18/+4
There's no known/active customer using them with latest kernels anyway. Removal helps clean up the code and remove the hack for MMU_VER to MMU_V[3-4] conversion.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2019-10-28 | ARC: mm: tlb flush optim: Make TLBWriteNI fallback to TLBWrite if not available | Vineet Gupta | 1 | -0/+2
TLBWriteNI was introduced in MMUv2 (to not invalidate uTLBs in the Fast Path TLB Refill Handler). To avoid #ifdef'ery, make it fall back to TLBWrite, which is available on all MMUs. This will also help with the next change.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
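A sketch of what such a fallback looks like; the command encodings follow the ARC MMU programming model but treat the exact values and the config symbol as assumptions:

    #define TLBWrite    0x1         /* write JTLB entry, also invalidates uTLBs    */
    #if (CONFIG_ARC_MMU_VER >= 2)
    #define TLBWriteNI  0x5         /* write JTLB without invalidating uTLBs       */
    #else
    #define TLBWriteNI  TLBWrite    /* no NI variant on MMUv1: degrade gracefully  */
    #endif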
2019-10-28 | ARCv2: mm: TLB Miss optim: SMP builds can cache pgd pointer in mmu scratch reg | Vineet Gupta | 1 | -0/+4
ARC700 exception (and interrupt) handling didn't have auto stack switching, so it had to rely on temporarily stashing a register (to free it up) at a known place in memory, which allowed the low level stack switching to be coded up. This however was not re-entrant in SMP, which thus had to repurpose the per-cpu MMU SCRATCH DATA register otherwise used to "cache" the task pgd pointer (vs. reading it from the mm struct).

The newer HS cores do have auto stack switching, so even SMP builds can use the MMU SCRATCH reg as originally intended. This patch limits the restriction to ARC700 SMP builds only.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
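Illustrative before/after of the guard (the macro and config names are assumptions for this sketch, not the verbatim diff):

    /* before: scratch-reg caching only when !SMP, regardless of ISA
     *
     *   #ifndef CONFIG_SMP
     *   #define ARC_USE_SCRATCH_REG
     *   #endif
     *
     * after: only ARC700 (ARCompact) + SMP still needs the reg for the manual
     * stack switch; everything else may cache the task pgd in it
     */
    #if !defined(CONFIG_SMP) || defined(CONFIG_ISA_ARCV2)
    #define ARC_USE_SCRATCH_REG
    #endif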
2019-06-19 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500 | Thomas Gleixner | 1 | -4/+1
Based on 2 normalized pattern(s):

  this program is free software you can redistribute it and or modify it
  under the terms of the gnu general public license version 2 as published
  by the free software foundation

  this program is free software you can redistribute it and or modify it
  under the terms of the gnu general public license version 2 as published
  by the free software foundation

# extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-08-04 | ARCv2: PAE40: set MSB even if !CONFIG_ARC_HAS_PAE40 but PAE exists in SoC | Vineet Gupta | 1 | -0/+2
The PAE40 configuration in hardware extends some of the address registers for TLB/cache ops to 2 words. So far the kernel was NOT setting the higher word if the feature was not enabled in software, which is wrong: those need to be set to 0 in such a case.

Normally this would be done in the cache flush / TLB ops; however, since these registers only exist conditionally, this would have to be gated on a flag set at boot, which is expensive/ugly - especially for the more common case of PAE existing but not in use. Optimize that by zeroing them once at boot - nobody will write to them afterwards.

Cc: stable@vger.kernel.org #4.4+
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
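A sketch of the boot-time zeroing (the function and aux register names mirror the ARC PAE40 high-word registers but are assumptions for this illustration):

    /* Called at boot only when the MMU BCR reports PAE40 in hardware while
     * CONFIG_ARC_HAS_PAE40 is off: the cache/TLB ops will then only ever
     * write the low words, so clear the high words once here.
     */
    static void __init pae40_exist_but_not_enab(void)
    {
            write_aux_reg(ARC_REG_TLBPD1HI, 0);
            write_aux_reg(ARC_REG_DC_PTAG_HI, 0);
            write_aux_reg(ARC_REG_IC_PTAG_HI, 0);
    }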
2017-05-05 | ARC: mm: fix build failure in linux-next for UP builds | Vineet Gupta | 1 | -0/+4
The kisskb build service reported ARC defconfig build failures in linux-next:

  | arch/arc/include/asm/mmu.h:75:21: error: 'NR_CPUS' undeclared here (not in a function)
  | make[3]: *** [arch/arc/mm/ioremap.o] Error 1
  | make[2]: *** [arch/arc/mm] Error 2
  | make[1]: *** [arch/arc] Error 2

which I bisected to a subtle side-effect of a totally benign mm patch ("mm, vmalloc: properly track vmalloc users") that caused a header include chain deviation - asm/mmu.h was using NR_CPUS before including linux/threads.h.

Fix that by adding the dependent header, and while at it fix a related header to include linux headers ahead of asm headers, as sometimes that also triggers such issues!

Reported-by: noreply@ellerman.id.au
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
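In mmu.h terms the fix boils down to something like this (a sketch; the struct shown is the usual NR_CPUS user in this header, but treat the exact layout as an assumption):

    #include <linux/threads.h>      /* NR_CPUS - don't rely on indirect includes */

    typedef struct {
            unsigned long asid[NR_CPUS];    /* 8-bit hw ASID + allocation cycle */
    } mm_context_t;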
2015-10-29 | ARC: mm: PAE40 support | Vineet Gupta | 1 | -0/+7
This is the first working implementation of 40-bit physical address extension on ARCv2.

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-06-22 | ARCv2: MMUv4: TLB programming Model changes | Vineet Gupta | 1 | -1/+23
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-06 | ARC: [SMP] ASID allocation | Vineet Gupta | 1 | -1/+1
- Track a per-CPU ASID counter
- mm-per-cpu ASID (multiple threads, or mm migrated around)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-09-05 | ARC: fix new Section mismatches in build (post __cpuinit cleanup) | Vineet Gupta | 1 | -1/+1
--------------->8--------------------
WARNING: vmlinux.o(.text+0x708): Section mismatch in reference from the function read_arc_build_cfg_regs() to the function .init.text:read_decode_cache_bcr()
WARNING: vmlinux.o(.text+0x702): Section mismatch in reference from the function read_arc_build_cfg_regs() to the function .init.text:read_decode_mmu_bcr()
--------------->8--------------------

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-30 | ARC: [ASID] Track ASID allocation cycles/generations | Vineet Gupta | 1 | -1/+1
This helps remove the asid-to-mm reverse map.

While mm->context.id contains the ASID assigned to a process, our ASID allocator also used an asid_mm_map[] reverse map. In a new allocation cycle (mm->ASID >= @asid_cache), the Round Robin ASID allocator used this to check if the new @asid_cache belonged to some mm2 (from a prev cycle). If so, it could locate that mm via the reverse map and mark it as having no allocated ASID, forcing a refresh at the time of switch_mm().

However, for SMP the reverse map has to be maintained per CPU, so it becomes 2 dimensional, hence we get rid of it. With the reverse map gone, it is NOT possible to reach out to the current assignee. So we track the ASID allocation generation/cycle and on every switch_mm() check whether the current generation of the CPU ASID is the same as the mm's ASID; if not, it is refreshed.

(Based loosely on the arch/sh implementation)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
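A minimal, UP-only sketch of the generation scheme described above (names, bit layout and helpers are illustrative, not the kernel's exact code):

    #define ASID_MASK       0x000000ffUL            /* 8-bit hardware ASID          */
    #define CYCLE_MASK      (~ASID_MASK)            /* remaining bits = generation  */

    struct mm_ctx { unsigned long asid; };          /* simplified mm->context       */

    static unsigned long asid_cache = ASID_MASK + 1;        /* first generation     */

    static void get_new_mmu_context(struct mm_ctx *ctx)
    {
            /* ASID stamped in the current generation: still valid, keep it */
            if (!((ctx->asid ^ asid_cache) & CYCLE_MASK))
                    return;

            /* stale (older generation): hand out the next ASID */
            if (!(++asid_cache & ASID_MASK))
                    local_flush_tlb_all();  /* 8-bit ASID wrapped: old entries
                                             * could now alias, so flush them */

            ctx->asid = asid_cache;         /* records ASID + generation together */
    }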
2013-08-30 | ARC: [ASID] Refactor the TLB paranoid debug code | Vineet Gupta | 1 | -1/+1
Asm code already has the SW and HW ASID values, so they can be passed to the printing routine.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-30 | ARC: [ASID] Remove legacy/unused debug code | Vineet Gupta | 1 | -3/+0
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-30 | ARC: MMUv4 preps/3 - Abstract out TLB Insert/Delete | Vineet Gupta | 1 | -0/+2
This reorganizes the current TLB operations into pseudo-ops to better pair with MMUv4's native Insert/Delete operations.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
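For illustration, the delete pseudo-op roughly takes this shape (aux register, command and config names follow the ARC MMU programming model; consider them assumptions rather than the verbatim kernel code):

    static void tlb_entry_erase(unsigned int vaddr_n_asid)
    {
            write_aux_reg(ARC_REG_TLBPD0, vaddr_n_asid);

    #ifdef CONFIG_ARC_MMU_V4
            /* MMUv4: one native command does the probe + invalidate */
            write_aux_reg(ARC_REG_TLBCOMMAND, TLBDeleteEntry);
    #else
            /* older MMUs: probe for the entry, clobber it if found */
            write_aux_reg(ARC_REG_TLBCOMMAND, TLBProbe);
            if (!(read_aux_reg(ARC_REG_TLBINDEX) & TLB_LKUP_ERR)) {
                    write_aux_reg(ARC_REG_TLBPD0, 0);
                    write_aux_reg(ARC_REG_TLBCOMMAND, TLBWrite);
            }
    #endif
    }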
2013-06-22 | ARC: Disintegrate arcregs.h | Vineet Gupta | 1 | -0/+36
* Move the various sub-system defines/types into relevant files/functions (reduces compilation time)
* Move CPU specific stuff out of asm/tlb.h into asm/mmu.h

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-06-22 | ARC: Use kconfig helper IS_ENABLED() to get rid of defines.h | Vineet Gupta | 1 | -0/+8
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
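The helper referenced in the title works roughly like this (the callees below are made up purely for illustration):

    #include <linux/kconfig.h>      /* IS_ENABLED() */

    void setup_asid(void)
    {
            /* IS_ENABLED(CONFIG_FOO) expands to a compile-time 0/1, so a plain
             * 'if' replaces #ifdef: the dead branch is discarded by the
             * compiler yet still gets parsed and type-checked.
             */
            if (IS_ENABLED(CONFIG_SMP))
                    setup_per_cpu_asid();   /* hypothetical helper */
            else
                    setup_up_asid();        /* hypothetical helper */
    }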
2013-02-15 | ARC: MMU Context Management | Vineet Gupta | 1 | -0/+23
The ARC700 MMU provides for tagging TLB entries with an 8-bit ASID to avoid having to flush the TLB on every task switch. It also allows for a quick way to invalidate all the TLB entries for a task, useful for:
* COW semantics during fork()
* task exit()ing

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
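On the hardware side the idea reduces to programming the incoming task's ASID into the MMU's PID aux register at context switch, e.g. (a sketch; the register and bit names follow ARC conventions but are assumptions here):

    #define MMU_ENABLE      (1 << 31)       /* TLB lookups enabled; shares the PID reg */

    static inline void mmu_set_asid(unsigned int hw_asid)
    {
            /* From now on TLB entries are tagged with this value, and only
             * entries whose tag matches it will hit - no global flush needed.
             */
            write_aux_reg(ARC_REG_PID, MMU_ENABLE | (hw_asid & 0xff));
    }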