Age | Commit message | Author | Files | Lines |
|
Non-functional change.
Tested-by: kernel test robot <lkp@intel.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
|
|
... to avoid polluting shared entry code (across three ISA variants)
with ISA/MMU-specific code.
Cc: Jose Abreu <joabreu@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
|
|
And while at it, rewrite the commentary on the ASID allocator.
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
|
|
Don't pollute mmu.h and cache.h with ARC-internal bootlog/setup
related functions. Move them aside to setup.h.
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
|
|
This was used back in the ARC700 days when the ASID allocator was fragile.
It hasn't been needed in the last 5 years.
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
|
|
The MMU SCRATCH_DATA0 register is intended to cache the task pgd. However,
the ARC700 SMP port has to repurpose it for re-entrant interrupt
handling, while the UP port doesn't. We currently handle these use-cases
with a fabricated #define, which has the usual issues of dependency nesting
and obvious ugliness.
So clean this up: for ARC700 don't use it to cache the pgd (even in UP), and
do the opposite for ARCv2.
And while here, switch to the canonical pgd_offset().
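A minimal sketch of the canonical helpers, for reference (pgd_offset()
walks a given mm's tables for a user address; pgd_offset_k() is its
kernel counterpart; the providing header varies across kernel versions):

    pgd_t *pgd  = pgd_offset(mm, vaddr);    /* user address in @mm */
    pgd_t *kpgd = pgd_offset_k(kvaddr);     /* kernel address */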
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
|
|
There's no known/active customer using them with latest kernels anyway.
Removal helps clean up the code and removes the hack for the
MMU_VER to MMU_V[3-4] conversion.
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
|
|
TLBWriteNI was introduced in MMUv2 (to not invalidate uTLBs in the Fast Path
TLB Refill Handler). To avoid #ifdef'ery, make it fall back to TLBWrite,
which is available on all MMUs. This will also help with the next change.
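A sketch of the fallback, assuming the command encodings of the ARC MMU
programming model:

    /* MMU commands, written to the TLBCOMMAND aux register */
    #define TLBWrite    0x1
    #define TLBRead     0x2
    #if (CONFIG_ARC_MMU_VER >= 2)
    #define TLBWriteNI  0x5         /* write JTLB without inv uTLBs */
    #else
    #define TLBWriteNI  TLBWrite    /* not in hw; uTLB inv is implicit */
    #endif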
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
ARC700 exception (and interrupt) handling didn't have auto stack switching
and thus had to stash a register temporarily (to free it up) at a
known place in memory, allowing the low-level stack switching to be coded up.
This however was not re-entrant in SMP, which thus had to repurpose the
per-cpu MMU SCRATCH DATA register otherwise used to "cache" the task pgd
pointer (vs. reading it from the mm struct).
The newer HS cores do have auto stack switching, and thus even SMP builds
can use the MMU SCRATCH reg as originally intended.
This patch limits the workaround to ARC700 SMP builds only.
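For illustration, a minimal sketch of the pgd caching this re-enables for
HS SMP (aux-register accessor and register name as used by the ARC port):

    /* cache the task pgd in the per-cpu MMU scratch reg, sparing the
     * TLB refill handler the task->mm->pgd chain of memory loads */
    static inline void mmu_setup_pgd(struct mm_struct *mm)
    {
        write_aux_reg(ARC_REG_SCRATCH_DATA0, (unsigned int)mm->pgd);
    }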
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
Based on 2 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation #
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 4122 file(s).
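The net effect on each touched file is replacing that boilerplate with a
single tag at the top, e.g.:

    // SPDX-License-Identifier: GPL-2.0-only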
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
The PAE40 configuration in hardware extends some of the address registers
for TLB/cache ops to 2 words.
So far the kernel was NOT setting the higher word if the feature was not
enabled in software, which is wrong. Those need to be set to 0 in that case.
Normally this would be done in the cache flush / TLB ops, however since
these registers only exist conditionally, this would have to be
conditional on a flag being set at boot, which is expensive/ugly -
especially for the more common case of PAE existing in hardware but not in use.
Optimize that by zeroing them once at boot - nobody will write to
them afterwards.
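A sketch of that one-time boot fixup (the high-word register names are
illustrative of the PAE40-only registers in the ARC port):

    /* hw has PAE40 but sw doesn't use it: zero the high words once,
     * so TLB/cache ops never latch stale upper address bits */
    if (mmu->pae && !IS_ENABLED(CONFIG_ARC_HAS_PAE40)) {
        write_aux_reg(ARC_REG_TLBPD1HI, 0);
        write_aux_reg(ARC_REG_IC_PTAG_HI, 0);
        write_aux_reg(ARC_REG_DC_PTAG_HI, 0);
    }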
Cc: stable@vger.kernel.org #4.4+
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
kisskb build service reported ARC defconfig build failures in linux-next
| arch/arc/include/asm/mmu.h:75:21: error: 'NR_CPUS' undeclared here (not in a function)
| make[3]: *** [arch/arc/mm/ioremap.o] Error 1
| make[2]: *** [arch/arc/mm] Error 2
| make[1]: *** [arch/arc] Error 2
which I bisected to a subtle side-effect of a totally benign mm patch
("mm, vmalloc: properly track vmalloc users") which caused a header
include chain deviation - asm/mmu.h using NR_CPUS before including
linux/threads.h.
Fix that by adding the dependent header, and while at it, fix a related
header to include linux headers ahead of asm headers, as sometimes that
also triggers such issues!
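The gist of the fix, sketched (the mm_context_t array is what trips over
NR_CPUS at mmu.h:75):

    #include <linux/threads.h>          /* NR_CPUS, before any use */

    typedef struct {
        unsigned long asid[NR_CPUS];    /* 8-bit hw ASID + sw cycle */
    } mm_context_t;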
Reported-by: noreply@ellerman.id.au
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
This is the first working implementation of 40-bit physical address
extension on ARCv2.
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
- Track a per-CPU ASID counter
- Track per-mm, per-CPU ASIDs (multiple threads, or an mm migrated around);
see the sketch below
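A sketch of the data structures this implies (names modeled on the ARC
port; illustrative):

    /* per-CPU allocator state: 8-bit hw ASID + generation cycle */
    DEFINE_PER_CPU(unsigned long, asid_cache) = MM_CTXT_FIRST_CYCLE;

    /* per-mm: the ASID last assigned to this mm on each CPU */
    typedef struct {
        unsigned long asid[NR_CPUS];
    } mm_context_t;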
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
--------------->8--------------------
WARNING: vmlinux.o(.text+0x708): Section mismatch in reference from the
function read_arc_build_cfg_regs() to the function
.init.text:read_decode_cache_bcr()
WARNING: vmlinux.o(.text+0x702): Section mismatch in reference from the
function read_arc_build_cfg_regs() to the function
.init.text:read_decode_mmu_bcr()
--------------->8--------------------
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
This helps remove the asid-to-mm reverse map.
While mm->context.id contains the ASID assigned to a process, our ASID
allocator also used an asid_mm_map[] reverse map. In a new allocation
cycle (mm->ASID >= @asid_cache), the round-robin ASID allocator used this
to check if the new @asid_cache belonged to some mm2 (from a prev cycle).
If so, it could locate that mm using the ASID reverse map and mark its
ASID as unallocated, forcing it to refresh at the time of switch_mm().
However, for SMP the reverse map has to be maintained per CPU, so it
becomes 2-dimensional; hence it is removed.
With the reverse map gone, it is NOT possible to reach the current
assignee. So we track the ASID allocation generation/cycle and,
on every switch_mm(), check if the current generation of the CPU ASID is
the same as the mm's ASID; if not, it is refreshed.
(Based loosely on the arch/sh implementation)
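A simplified UP sketch of that generation check (mask values and helper
name are illustrative):

    #define MM_CTXT_ASID_MASK   0x000000ffUL    /* hw ASID */
    #define MM_CTXT_CYCLE_MASK  0xffffff00UL    /* sw generation */

    static unsigned long asid_cache;    /* per-CPU in the SMP case */

    static unsigned long check_context(unsigned long mm_asid)
    {
        /* same generation: the mm's ASID is still valid, reuse it */
        if (!((mm_asid ^ asid_cache) & MM_CTXT_CYCLE_MASK))
            return mm_asid;

        /* stale: next ASID; low-byte rollover bumps the generation
         * (the TLB flush needed on rollover is elided here) */
        return ++asid_cache;
    }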
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
- Asm code already has the SW and HW ASID values, so they can be
passed to the printing routine.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
This reorganizes the current TLB operations into pseudo-ops to better
pair with MMUv4's native Insert/Delete operations.
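A sketch of the pseudo-op shape (aux register and command names as in the
ARC port; the pre-v4 path is simplified):

    static void tlb_entry_insert(unsigned int pd0, unsigned int pd1)
    {
        write_aux_reg(ARC_REG_TLBPD0, pd0);
        write_aux_reg(ARC_REG_TLBPD1, pd1);

    #if (CONFIG_ARC_MMU_VER >= 4)
        /* MMUv4: hw finds a slot and commits the entry natively */
        write_aux_reg(ARC_REG_TLBCOMMAND, TLBInsertEntry);
    #else
        /* older MMUs: pick an index, then write the entry to it */
        write_aux_reg(ARC_REG_TLBCOMMAND, TLBGetIndex);
        write_aux_reg(ARC_REG_TLBCOMMAND, TLBWriteNI);
    #endif
    }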
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
* Move the various sub-system defines/types into relevant files/functions
(reduces compilation time)
* Move CPU-specific stuff out of asm/tlb.h into asm/mmu.h
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
The ARC700 MMU provides for tagging TLB entries with an 8-bit ASID to avoid
having to flush the TLB on every task switch.
It also allows for a quick way to invalidate all the TLB entries for a
task (sketched below), useful for:
* COW semantics during fork()
* task exit()ing
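A hedged sketch of that fast invalidate (helper name modeled on the port):

    /* retire the mm's ASID instead of walking the TLB: old entries
     * are orphaned once the mm runs under a fresh ASID */
    void local_flush_tlb_mm(struct mm_struct *mm)
    {
        if (atomic_read(&mm->mm_users) == 0)
            return;                  /* no users left, nothing to do */

        get_new_mmu_context(mm);     /* allocate a fresh ASID */
    }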
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|