|
Under some unusual context-switching patterns, it is possible to end up
with multiple threads from the same mm running concurrently with
different ASIDs:
1. CPU x schedules task t with mm p containing ASID a and generation g
This task doesn't block and the CPU doesn't context switch.
So:
* per_cpu(active_asid, x) = {g,a}
* p->context.id = {g,a}
2. Some other CPU generates an ASID rollover. The global generation is
now (g + 1). CPU x is still running t, with no context switch and
so per_cpu(reserved_asid, x) = {g,a}
3. CPU y schedules task t', which shares mm p with t. The generation
mismatches, so we take the slowpath and hit the reserved ASID from
CPU x. p is then updated so that p->context.id = {g + 1,a}
4. CPU y schedules some other task u, which has an mm != p.
5. Some other CPU generates *another* ASID rollover. The global
generation is now (g + 2). CPU x is still running t, with no context
switch and so per_cpu(reserved_asid, x) = {g,a}.
6. CPU y once again schedules task t', but now *fails* to hit the
reserved ASID from CPU x because of the generation mismatch. This
results in a new ASID being allocated, despite the fact that t is
still running on CPU x with the same mm.
Consequently, TLBIs (e.g. as a result of CoW) will not be synchronised
between the two threads.
This patch fixes the problem by updating all of the matching reserved
ASIDs when we hit on the slowpath (i.e. in step 3 above). This keeps
the reserved ASIDs in-sync with the mm and avoids the problem.
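As a rough illustration of the fix (a user-space sketch with assumed names,
not the kernel code itself): when the slow path finds the mm's ASID among the
per-CPU reserved ASIDs, every matching reserved entry is re-tagged with the
new generation, so a later rollover still recognises it.

#include <stdbool.h>
#include <stdint.h>

#define NR_CPUS    4
#define ASID_BITS  8
#define ASID_MASK  ((1ULL << ASID_BITS) - 1)

/* One {generation | asid} value per CPU, captured at the last rollover. */
static uint64_t reserved_asids[NR_CPUS];

/*
 * Returns true if 'asid' is currently reserved by some CPU; if so, every
 * matching reserved slot is bumped to 'newgen' so it stays in sync with
 * the mm that owns the ASID.
 */
static bool check_update_reserved_asid(uint64_t asid, uint64_t newgen)
{
	bool hit = false;

	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if ((reserved_asids[cpu] & ASID_MASK) == (asid & ASID_MASK)) {
			hit = true;
			reserved_asids[cpu] = newgen | (asid & ASID_MASK);
		}
	}
	return hit;
}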
Cc: <stable@vger.kernel.org>
Reported-by: Tony Thompson <anthony.thompson@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Commit e1a5848e3398 ("ARM: 7924/1: mm: don't bother with reserved ttbr0
when running with LPAE") removed the use of the reserved TTBR0 value
for LPAE systems, since the ASID is held in the TTBR and can be updated
atomically with the pgd of the next mm.
Unfortunately, this patch forgot to update flush_context, which
deliberately avoids marking the local active ASID as allocated, since we
used to switch via ASID zero and didn't need to allocate the ASID of
the previous mm. The side-effect of this is that we can allocate the
same ASID to the next mm and, between flushing the local TLB and updating
TTBR0, we can perform speculative TLB fills for userspace nG mappings
using the page table of the previous mm.
The consequence of this is that the next mm can erroneously hit some
mappings of the previous mm. Note that this was made significantly
harder to hit by a391263cd84e ("ARM: 8203/1: mm: try to re-use old ASID
assignments following a rollover") but is still theoretically possible.
This patch fixes the problem by removing the code from flush_context
that forces the allocated ASID to zero for the local CPU. Many thanks
to the Broadcom guys for tracking this one down.
Fixes: e1a5848e3398 ("ARM: 7924/1: mm: don't bother with reserved ttbr0 when running with LPAE")
Cc: <stable@vger.kernel.org> # v3.14+
Reported-by: Raymond Ngun <rngun@broadcom.com>
Tested-by: Raymond Ngun <rngun@broadcom.com>
Reviewed-by: Gregory Fong <gregory.0xf0@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Rather than unconditionally allocating a fresh ASID to an mm from an
older generation, attempt to re-use the old assignment where possible.
This can bring performance benefits on systems where the ASID is used to
tag things other than the TLB (e.g. branch prediction resources).
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The ASID allocator has to deal with some pretty horrible behaviours by
the CPU, so expand on some of the comments in there so I remember why
we can never allocate ASID zero to a userspace task.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Since we only clear entries in the ASID bitmap on a rollover event, the
bitmap tends to consist of a block of consecutive set bits followed by
a block of consecutive clear bits. The exception to this rule is for
ASIDs which have been carried over from a previous generation, but
these are bound by the number of CPUs.
This patch optimises our bitmap searching strategy, so that we search
from the last successful allocation, rather than search from index 1
each time we allocate a new ASID.
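A minimal model of the search strategy (plain C standing in for the kernel's
bitmap and find_next_zero_bit(); names are assumptions): the scan resumes at
the last successful allocation, and only a rollover restarts it from index 1.

#include <stdbool.h>
#include <stddef.h>

#define NUM_ASIDS 256			/* 8-bit ASID space, index 0 reserved */

static bool asid_map[NUM_ASIDS];	/* true = ASID in use */
static size_t cur_idx = 1;		/* where the last allocation succeeded */

/*
 * Returns a free ASID at or above 'start', or NUM_ASIDS when that part of
 * the space is exhausted, in which case the caller performs a rollover and
 * retries from index 1.
 */
static size_t find_free_asid(size_t start)
{
	for (size_t i = start; i < NUM_ASIDS; i++) {
		if (!asid_map[i]) {
			asid_map[i] = true;
			cur_idx = i;
			return i;
		}
	}
	return NUM_ASIDS;
}

A caller would first try find_free_asid(cur_idx) and only fall back to
find_free_asid(1) after flushing on a rollover, which is the behaviour the
patch describes.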
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
With the new ASID allocation algorithm, active ASIDs at the time of a
rollover event will be marked as reserved, so active mm_structs can
continue to operate with the same ASID as before. This in turn means
that we don't need to worry about allocating a new ASID to an mm that
is currently active (installed in TTBR0).
Since updating the pgd and ASID is atomic on LPAE systems (by virtue of
the two being fields in the same hardware register), we can dispose of
the reserved TTBR0 and rely on whatever tables we currently have live.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Inner-shareable TLB invalidation is typically more expensive than local
(non-shareable) invalidation, so performing the broadcasting for
local_flush_tlb_* operations is a waste of cycles and needlessly
clobbers entries in the TLBs of other CPUs.
This patch introduces __flush_tlb_* versions for many of the TLB
invalidation functions, which only respect inner-shareable variants of
the invalidation instructions when presented with the TLB_V7_UIS_FULL
flag. The local version is also inlined to prevent SMP_ON_UP kernels
from missing flushes, where the __flush variant would be called with
the UP flags.
This gains us around 0.5% in hackbench scores for a dual-core A15, but I
would expect this to improve as more cores (and clusters) are added to
the equation.
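For illustration, the local/inner-shareable split can be sketched as below
(hand-written ARMv7 inline asm, not the kernel's tlbflush macros): the local
variant sticks to the non-shareable TLBIALL encoding so it never clobbers
other CPUs' TLBs, while the broadcasting variant uses the inner-shareable
TLBIALLIS encoding.

/* Invalidate the entire unified TLB on this CPU only (TLBIALL). */
static inline void local_tlb_flush_all(void)
{
	asm volatile("mcr p15, 0, %0, c8, c7, 0" : : "r" (0) : "memory");
	asm volatile("dsb" : : : "memory");
	asm volatile("isb" : : : "memory");
}

/* Invalidate the unified TLB across the inner-shareable domain (TLBIALLIS). */
static inline void shared_tlb_flush_all(void)
{
	asm volatile("mcr p15, 0, %0, c8, c3, 0" : : "r" (0) : "memory");
	asm volatile("dsb" : : : "memory");
	asm volatile("isb" : : : "memory");
}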
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Albin Tonnerre <Albin.Tonnerre@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Commit 93dc688 (ARM: 7684/1: errata: Workaround for Cortex-A15 erratum 798181 (TLBI/DSB operations)) causes the following undefined instruction error on a mx53 (Cortex-A8):
Internal error: Oops - undefined instruction: 0 [#1] SMP ARM
CPU: 0 PID: 275 Comm: modprobe Not tainted 3.11.0-rc2-next-20130722-00009-g9b0f371 #881
task: df46cc00 ti: df48e000 task.ti: df48e000
PC is at check_and_switch_context+0x17c/0x4d0
LR is at check_and_switch_context+0xdc/0x4d0
This problem happens because check_and_switch_context() calls dummy_flush_tlb_a15_erratum() without checking if we are really running on a Cortex-A15 or not.
To avoid this issue, only call dummy_flush_tlb_a15_erratum() inside
check_and_switch_context() if erratum_a15_798181() returns true, which means that we are really running on a Cortex-A15.
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Conflicts:
arch/arm/Makefile
arch/arm/include/asm/glue-proc.h
|
|
Looking into the active_asids array is not enough, as we also need
to look into the reserved_asids array (they both represent processes
that are currently running).
Also, not holding the ASID allocator lock is racy, as another CPU
could schedule that process and trigger a rollover, making the erratum
workaround miss an IPI.
Exposing this outside of context.c is a little ugly on the side, so
let's define a new entry point that the erratum workaround can call
to obtain the cpumask.
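A stand-alone model of that entry point (a pthread mutex and plain arrays
standing in for the kernel's raw spinlock and per-CPU data; names are
illustrative): both the active and reserved arrays are consulted, and the
lock is held so a concurrent rollover cannot slip past the scan.

#include <stdbool.h>
#include <stdint.h>
#include <pthread.h>

#define NR_CPUS 4

static pthread_mutex_t cpu_asid_lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t active_asids[NR_CPUS];
static uint64_t reserved_asids[NR_CPUS];

/*
 * Mark in 'mask' every CPU that may currently be running with this mm's
 * context id, whether it is recorded as active or merely reserved.
 */
static void asid_get_cpumask(uint64_t context_id, bool mask[NR_CPUS])
{
	pthread_mutex_lock(&cpu_asid_lock);
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		mask[cpu] = (active_asids[cpu] == context_id ||
			     reserved_asids[cpu] == context_id);
	pthread_mutex_unlock(&cpu_asid_lock);
}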
Cc: <stable@vger.kernel.org> # 3.9
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
On a CPU that never ran anything, both the active and reserved ASID
fields are set to zero. In this case the ASID_TO_IDX() macro will
return -1, which is not a very useful value to index a bitmap.
Instead of trying to offset the ASID so that ASID #1 is actually
bit 0 in the asid_map bitmap, just always ignore bit 0 and start
the search from bit 1. This makes the code a bit more readable,
and without risk of OoB access.
Cc: <stable@vger.kernel.org> # 3.9
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
When a CPU is running a process, the ASID for that process is
held in a per-CPU variable (the "active ASIDs" array). When
the ASID allocator handles a rollover, it copies the active
ASIDs into a "reserved ASIDs" array to ensure that a process
currently running on another CPU will continue to run unaffected.
The active array is zeroed to indicate that a rollover occurred.
Because of this mechanism, a reserved ASID is only remembered for
a single rollover. A subsequent rollover will completely refill
the reserved ASIDs array.
In a severely oversubscribed environment where a CPU can be
prevented from running for extended periods of time (think virtual
machines), the above has a horrible side effect:
[P{a} denotes process P running with ASID a]
CPU-0                      CPU-1

A{x}                                            [active = <x 0>]
[suspended]                runs B{y}            [active = <x y>]

[rollover:
 active   = <0 0>
 reserved = <x y>]

                           runs B{y}            [active = <0 y>
                                                 reserved = <x y>]
[rollover:
 active   = <0 0>
 reserved = <0 y>]

                           runs C{x}            [active = <0 x>]

[resumes]

runs A{x}
At that stage, both A and C have the same ASID, with deadly
consequences.
The fix is to preserve reserved ASIDs across rollovers if
the CPU doesn't have an active ASID when the rollover occurs.
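A minimal sketch of that rollover rule (a model of the idea, not the kernel's
flush_context()): a CPU whose active entry is already zero keeps whatever it
had reserved, so a suspended CPU's ASID survives any number of rollovers.

#include <stdint.h>

#define NR_CPUS 2

static uint64_t active_asids[NR_CPUS];
static uint64_t reserved_asids[NR_CPUS];

static void rollover_capture_reserved(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		uint64_t asid = active_asids[cpu];
		active_asids[cpu] = 0;

		/*
		 * This CPU has not scheduled anything since the previous
		 * rollover (e.g. a suspended virtual CPU): preserve its
		 * existing reservation instead of clobbering it with 0.
		 */
		if (asid == 0)
			asid = reserved_asids[cpu];

		reserved_asids[cpu] = asid;
	}
}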
Cc: <stable@vger.kernel.org> # 3.9
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
This patch adds TTBR accessor macros, and modifies cpu_get_pgd() and
the LPAE version of cpu_set_reserved_ttbr0() to use these instead.
In the process, we also fix these functions to correctly handle cases
where the physical address lies beyond the 4G limit of 32-bit addressing.
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
On Cortex-A15 (r0p0..r3p2) the TLBI/DSB are not adequately shooting down
all use of the old entries. This patch implements the erratum workaround
which consists of:
1. Dummy TLBIMVAIS and DSB on the CPU doing the TLBI operation.
2. Send IPI to the CPUs that are running the same mm (and ASID) as the
one being invalidated (or all the online CPUs for global pages).
3. CPU receiving the IPI executes a DMB and CLREX (part of the exception
return code already).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The ARM ARM requires branch predictor maintenance if, for a given ASID,
the instructions at a specific virtual address appear to change.
From the kernel's point of view, that means:
- Changing the kernel's view of memory (e.g. switching to the
identity map)
- ASID rollover (since ASIDs will be re-allocated to new tasks)
This patch adds explicit branch predictor maintenance when either of the
two conditions above are met.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
mm->context.id is updated under asid_lock when a new ASID is allocated
to an mm_struct. However, it is also read without the lock when a task
is being scheduled and checking whether or not the current ASID
generation is up-to-date.
If two threads of the same process are being scheduled in parallel and
the bottom bits of the generation in their mm->context.id match the
current generation (that is, the mm_struct has not been used for ~2^24
rollovers) then the non-atomic, lockless access to mm->context.id may
yield the incorrect ASID.
This patch fixes this issue by making mm->context.id an atomic64_t,
ensuring that the generation is always read consistently. For code that
only requires access to the ASID bits (e.g. TLB flushing by mm),
the value is accessed directly, which GCC converts to an ldrb.
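The shape of the change can be sketched with C11 atomics standing in for the
kernel's atomic64_t (field and helper names here are illustrative):

#include <stdatomic.h>
#include <stdint.h>

#define ASID_MASK 0xffULL

struct mm_context {
	_Atomic uint64_t id;		/* [ generation | 8-bit ASID ] */
};

/* Scheduler-side generation check: a single atomic load, so the
 * generation and ASID bits are always observed consistently. */
static uint64_t context_id(struct mm_context *ctx)
{
	return atomic_load(&ctx->id);
}

/* TLB flushing by mm only needs the low ASID bits. */
static uint8_t context_asid(struct mm_context *ctx)
{
	return (uint8_t)(atomic_load(&ctx->id) & ASID_MASK);
}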
Cc: <stable@vger.kernel.org> # 3.8
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
If a thread triggers an ASID rollover, other threads of the same process
must be made to wait until the mm->context.id for the shared mm_struct
has been updated to the new generation and the associated book-keeping
(e.g. TLB invalidation) has been performed.
However, there is a *tiny* window where both mm->context.id and the
relevant active_asids entry are updated to the new generation, but the
TLB flush has not been performed, which could allow another thread to
return to userspace with a dirty TLB, potentially leading to data
corruption. In reality this will never occur because one CPU would need
to perform a context-switch in the time it takes another to do a couple
of atomic test/set operations, but we should plug the race anyway.
This patch moves the active_asids update until after the potential TLB
flush on context-switch.
Cc: <stable@vger.kernel.org> # 3.8
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The new ASID code in b5466f8728527a05a493cc4abe9e6f034a1bbaab
("ARM: mm: remove IPI broadcasting on ASID rollover") switched to 64-bit
operations, which broke big-endian operation because the MM code accesses
sub-fields of mm->context.id.
When running in BE mode, the most significant word of mm->context.id is
stored at the lower address, so the LDR in arch/arm/mm/proc-macros.S
reads the wrong half of the field. To resolve this, change the LDR in
the mmid macro to load from +4.
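A quick stand-alone illustration of the offset (ordinary C, not the kernel
macro): the ASID lives in the low bits of the 64-bit context id, and on a
big-endian 32-bit system that low word sits at byte offset 4.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	uint64_t context_id = 0x100000042ULL;	/* generation | ASID 0x42 */
	uint32_t words[2];

	memcpy(words, &context_id, sizeof(words));

	/* Little-endian: words[0] holds 0x42; big-endian: words[1] does. */
	printf("word0=%#x word1=%#x\n", words[0], words[1]);
	return 0;
}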
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The kvm_seq value has nothing whatsoever to do with the other KVM (the
kernel-based virtual machine). Given that KVM support on ARM is imminent,
it's best to rename kvm_seq into something else that clearly identifies
what it is about, i.e. a sequence number for vmalloc section mappings.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
When allocating a new ASID, we must take care not to re-assign a
reserved ASID-value to a new mm. This requires us to check each
candidate ASID against those currently reserved by other cores before
assigning a new ASID to the current mm.
This patch improves the ASID allocation algorithm by using a
bitmap-based approach. Rather than iterating over the reserved ASID
array for each candidate ASID, we simply find the first zero bit,
ensuring that those indices corresponding to reserved ASIDs are set
when flushing during a rollover event.
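A stand-alone model of the bitmap scheme (illustrative names; the kernel uses
a real bitmap searched for its first zero bit): the rollover flush marks every
reserved ASID in the map, and allocation becomes a single scan for the first
clear bit rather than a walk of the reserved array per candidate.

#include <stdbool.h>
#include <stddef.h>

#define NR_CPUS    4
#define NUM_ASIDS  256

static bool asid_map[NUM_ASIDS];	/* true = ASID taken this generation */
static size_t reserved_asids[NR_CPUS];	/* ASIDs kept live across rollover   */

static void rollover_flush(void)
{
	for (size_t i = 0; i < NUM_ASIDS; i++)
		asid_map[i] = false;

	/* ASIDs still in use on other CPUs stay out of the free pool. */
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		asid_map[reserved_asids[cpu]] = true;
}

static size_t alloc_asid(void)
{
	for (size_t i = 1; i < NUM_ASIDS; i++) {	/* ASID 0 stays unused */
		if (!asid_map[i]) {
			asid_map[i] = true;
			return i;
		}
	}
	return 0;	/* exhausted: the caller rolls over and retries */
}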
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
When scheduling a new mm, we take a spinlock so that we can:
1. Safely allocate a new ASID, if required
2. Update our active_asids field without worrying about parallel
updates to reserved_asids
3. Ensure that we flush our local TLB, if required
However, this has the nasty effect of serialising context-switch across
all CPUs in the system. The usual (fast) case is where the next mm has
a valid ASID for the current generation. In such a scenario, we can
avoid taking the lock and instead use atomic64_xchg to update the
active_asids variable for the current CPU. If a rollover occurs on
another CPU (which must take the lock), then while copying the
active_asids into the reserved_asids, another atomic64_xchg is used to
replace each active_asids entry with 0. The fast path can then detect
this case and fall
back to spinning on the lock.
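A compressed model of the fast path (C11 atomics standing in for the
atomic64_* helpers; variable names are assumptions): the generation check plus
a single exchange replaces the lock in the common case, and a zero returned by
the exchange means a rollover raced with us.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define ASID_BITS 8

static _Atomic uint64_t asid_generation = 1ULL << ASID_BITS;
static _Atomic uint64_t active_asid_this_cpu;	/* per-CPU in the real code */

/* Returns true if the lockless fast path succeeded; false means the
 * caller must take the allocator lock and run the slow path. */
static bool try_fast_switch(uint64_t mm_context_id)
{
	/* Stale generation: a new ASID must be allocated under the lock. */
	if ((mm_context_id ^ atomic_load(&asid_generation)) >> ASID_BITS)
		return false;

	/*
	 * Publish our ASID. If a concurrent rollover has already swapped
	 * this CPU's active entry to 0, the exchange returns 0 and we fall
	 * back to the locked path.
	 */
	return atomic_exchange(&active_asid_this_cpu, mm_context_id) != 0;
}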
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
ASIDs are allocated to MMU contexts based on a rolling counter. This
means that after 255 allocations we must invalidate all existing ASIDs
via an expensive IPI mechanism to synchronise all of the online CPUs and
ensure that all tasks execute with an ASID from the new generation.
This patch changes the rollover behaviour so that we rely instead on the
hardware broadcasting of the TLB invalidation to avoid the IPI calls.
This works by keeping track of the active ASID on each core, which is
then reserved in the case of a rollover so that currently scheduled
tasks can continue to run. For cores without hardware TLB broadcasting,
we keep track of pending flushes in a cpumask, so cores can flush their
local TLB before scheduling a new mm.
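The slow-path portion of that flow can be modelled roughly as follows (a
user-space sketch with stand-in helpers; not the kernel's
check_and_switch_context()):

#include <stdbool.h>
#include <stdint.h>
#include <pthread.h>

#define NR_CPUS    4
#define ASID_BITS  8

static pthread_mutex_t cpu_asid_lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t asid_generation = 1ULL << ASID_BITS;
static uint64_t active_asids[NR_CPUS];
static bool tlb_flush_pending[NR_CPUS];	/* set for every CPU at rollover */

/* Stand-ins for the real allocator and the local TLB flush. */
static uint64_t new_context(uint64_t old_id) { (void)old_id; return asid_generation | 1; }
static void local_tlb_flush(void) { }

static void slow_switch_context(uint64_t *mm_id, int cpu)
{
	pthread_mutex_lock(&cpu_asid_lock);

	/* Stale generation: allocate (or re-reserve) an ASID for this mm,
	 * possibly triggering a rollover that sets tlb_flush_pending. */
	if ((*mm_id ^ asid_generation) >> ASID_BITS)
		*mm_id = new_context(*mm_id);

	/* Without hardware TLB broadcasting, flush locally before running
	 * with an ASID from the new generation. */
	if (tlb_flush_pending[cpu]) {
		tlb_flush_pending[cpu] = false;
		local_tlb_flush();
	}

	active_asids[cpu] = *mm_id;
	pthread_mutex_unlock(&cpu_asid_lock);
}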
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
The bfi instruction is not available on ARMv6, so instead use an and/orr
sequence in the contextidr_notifier. This gets rid of the assembler
error:
Assembler messages:
Error: selected processor does not support ARM mode `bfi r3,r2,#0,#8'
Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
This patch introduces a new Kconfig option which, when enabled, causes
the kernel to write the PID of the current task into the PROCID field
of the CONTEXTIDR on context switch. This is useful when analysing
hardware trace, since writes to this register can be configured to emit
an event into the trace stream.
The thread notifier for writing the PID is deliberately kept separate
from the ASID-writing code so that we can support newer processors using
LPAE, where the ASID is stored in TTBR0. As such, the switch_mm code is
updated to perform a read-modify-write sequence to ensure that we don't
clobber the PID on CPUs using the classic 2-level page tables.
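As a sketch (ARM-only inline asm, not the kernel's thread notifier): only the
PROCID field in bits [31:8] of CONTEXTIDR is rewritten, so the ASID held in
bits [7:0] on classic 2-level page-table CPUs is left untouched.

static inline void contextidr_write_pid(unsigned int pid)
{
	unsigned int contextidr;

	/* Read CONTEXTIDR, keep the ASID, insert the new PROCID, write back. */
	asm volatile("mrc p15, 0, %0, c13, c0, 1" : "=r" (contextidr));
	contextidr &= 0xff;
	contextidr |= pid << 8;
	asm volatile("mcr p15, 0, %0, c13, c0, 1\n\t"
		     "isb" : : "r" (contextidr));
}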
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The current_mm variable was used to store the new mm between the
switch_mm() and switch_to() calls where an IPI to reset the context
could have set the wrong mm. Since the interrupts are disabled during
context switch, there is no need for this variable, current->active_mm
already points to the current mm when interrupts are re-enabled.
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Since the ASIDs must be unique to an mm across all the CPUs in a system,
the __new_context() function needs to broadcast a context reset event to
all the CPUs during ASID allocation if a roll-over occurred. Such IPIs
cannot be issued with interrupts disabled and ARM had to define
__ARCH_WANT_INTERRUPTS_ON_CTXSW.
This patch changes the check_context() function to
check_and_switch_context() called from switch_mm(). In case of
ASID-capable CPUs (ARMv6 onwards), if a new ASID is needed and the
interrupts are disabled, it defers the __new_context() and
cpu_switch_mm() calls to the post-lock switch hook where the interrupts
are enabled. Setting the reserved TTBR0 was also moved to
check_and_switch_context() from cpu_v7_switch_mm().
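A rough model of the deferral (illustrative names and a plain flag instead of
the kernel's thread flag and hooks): with interrupts disabled, switch_mm()
only records that a switch is pending, and the post-lock-switch hook finishes
the job once interrupts are enabled again.

#include <stdbool.h>

struct mm_model { unsigned long long context_id; void *pgd; };

static bool switch_pending;		/* per-CPU / per-thread in reality */
static struct mm_model *pending_mm;

/* Stand-ins for the real helpers. */
static bool irqs_disabled_model(void) { return false; }
static void new_context_model(struct mm_model *mm) { (void)mm; }	/* may send IPIs */
static void cpu_switch_mm_model(struct mm_model *mm) { (void)mm; }

static void check_and_switch_context_model(struct mm_model *mm)
{
	if (irqs_disabled_model()) {
		/* Defer the ASID allocation and the hardware switch. */
		switch_pending = true;
		pending_mm = mm;
		return;
	}
	new_context_model(mm);
	cpu_switch_mm_model(mm);
}

/* Runs after the scheduler drops its locks and re-enables interrupts. */
static void finish_post_lock_switch_model(void)
{
	if (switch_pending) {
		switch_pending = false;
		new_context_model(pending_mm);
		cpu_switch_mm_model(pending_mm);
	}
}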
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
On ARMv7 CPUs that cache first level page table entries (like the
Cortex-A15), using a reserved ASID while changing the TTBR or flushing
the TLB is unsafe.
This is because the CPU may cache the first level entry as the result of
a speculative memory access while the reserved ASID is assigned. After
the process owning the page tables dies, the memory will be reallocated
and may be written with junk values which can be interpreted as global,
valid PTEs by the processor. This will result in the TLB being populated
with bogus global entries.
This patch avoids the use of a reserved context ID in the v7 switch_mm
and ASID rollover code by temporarily using the swapper_pg_dir pointed
at by TTBR1, which contains only global entries that are not tagged
with ASIDs.
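The switch sequence this describes, sketched as inline asm for classic
(non-LPAE) ARMv7 rather than the kernel's actual cpu_v7_switch_mm: TTBR0
briefly points at the global-only kernel tables copied from TTBR1 while the
ASID changes, so no reserved ASID is ever live.

static inline void switch_mm_via_ttbr1(unsigned int ttbr0_new, unsigned int asid_new)
{
	unsigned int ttbr1;

	asm volatile(
	"	mrc	p15, 0, %0, c2, c0, 1\n"	/* read TTBR1 (kernel tables) */
	"	mcr	p15, 0, %0, c2, c0, 0\n"	/* point TTBR0 at them        */
	"	isb\n"
	"	mcr	p15, 0, %2, c13, c0, 1\n"	/* write the new ASID         */
	"	isb\n"
	"	mcr	p15, 0, %1, c2, c0, 0\n"	/* install the user tables    */
	"	isb\n"
	: "=&r" (ttbr1)
	: "r" (ttbr0_new), "r" (asid_new)
	: "memory");
}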
Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[catalin.marinas@arm.com: add LPAE support]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
With LPAE, TTBRx registers are 64-bit. The ASID is stored in TTBR0
rather than a separate Context ID register. This patch makes the
necessary changes to handle context switching on LPAE.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Annotate the low level hardware locks which must not be preempted.
In mainline this change documents the low level nature of
the lock - otherwise there's no functional difference. Lockdep
and Sparse checking will work as usual.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
This reverts commit 45b95235b0ac86cef2ad4480b0618b8778847479.
Will Deacon reports that:
In 52af9c6c ("ARM: 6943/1: mm: use TTBR1 instead of reserved context ID")
I updated the ASID rollover code to use only the kernel page tables
whilst updating the ASID.
Unfortunately, the code to restore the user page tables was part of a
later patch which isn't yet in mainline, so this leaves the code
quite broken.
We're also in the process of eliminating __ARCH_WANT_INTERRUPTS_ON_CTXSW
from ARM, so let's revert these until we can properly sort out what we're
doing with the context switching.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
This reverts commit 52af9c6cd863fe37d1103035ec7ee22ac1296458.
Will Deacon reports that:
In 52af9c6c ("ARM: 6943/1: mm: use TTBR1 instead of reserved context ID")
I updated the ASID rollover code to use only the kernel page tables
whilst updating the ASID.
Unfortunately, the code to restore the user page tables was part of a
later patch which isn't yet in mainline, so this leaves the code
quite broken.
We're also in the process of eliminating __ARCH_WANT_INTERRUPTS_ON_CTXSW
from ARM, so let's revert these until we can properly sort out what we're
doing with the ARM context switching.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Now that ASID 0 is no longer used as a reserved value, allow it to be
allocated to tasks.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
On ARMv7 CPUs that cache first level page table entries (like the
Cortex-A15), using a reserved ASID while changing the TTBR or flushing
the TLB is unsafe.
This is because the CPU may cache the first level entry as the result of
a speculative memory access while the reserved ASID is assigned. After
the process owning the page tables dies, the memory will be reallocated
and may be written with junk values which can be interpreted as global,
valid PTEs by the processor. This will result in the TLB being populated
with bogus global entries.
This patch avoids the use of a reserved context ID in the v7 switch_mm
and ASID rollover code by temporarily using the swapper_pg_dir pointed
at by TTBR1, which contains only global entries that are not tagged
with ASIDs.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The current ASID allocation algorithm doesn't ensure the notification
of the other CPUs when the ASID rolls over. This may lead to two
processes using the same ASID (but different generation) or multiple
threads of the same process using different ASIDs.
This patch adds the broadcasting of the ASID rollover event to the
other CPUs. To avoid a race on multiple CPUs modifying "cpu_last_asid"
during the handling of the broadcast, the ASID numbering now starts at
"smp_processor_id() + 1". At rollover, the cpu_last_asid will be set
to NR_CPUS.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Erratum 411920 indicates that any "invalidate entire instruction cache"
operation can fail if the right conditions are present. This is not
limited just to those operations in flush.c, but elsewhere. Place the
workaround in the already existing __flush_icache_all() function
instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Makes code futureproof against the impending change to mm->cpu_vm_mask.
It's also a chance to use the new cpumask_ ops which take a pointer
(the older ones are deprecated, but there's no hurry for arch code).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
ARMv7 can have VIPT, PIPT or ASID-tagged VIVT I-cache. This patch
adds the necessary invalidation of the I-cache when the ASID numbers
are re-used.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Close a hole in the ASID version switch, particularly the following
scenario:
CPU0 MM PID                     CPU1 MM PID
  idle
                                  A  pid(A)
                                  A  idle(lazy tlb)

          * new asid version triggered by B *

  B  pid(B)
  A  pid(A)

          * MM A gets new asid version *

  A  idle(lazy tlb)
                                  A  pid(A)

          * CPU1 doesn't see the new ASID *
The result is that CPU1 continues running with the hardware set for the
original (stale) ASID value, while mm->context.id contains the new ASID
value. Consequently, the next MM fault on CPU1 updates the page table
entries, but flush_tlb_page() fails due to the wrong ASID.
There is a related case where a threaded application is allocated a new
ASID on one CPU while another of its threads is running on a different
CPU. That scenario is not fixed by this commit.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
On newer architectures (ARMv6, ARMv7), the depth of the prefetch and
branch prediction is implementation defined and there is a small risk
of wrong ASID tagging when changing TTBR0 before setting the new
context id. The recommended solution is to set a reserved ASID during
TTBR changing. This patch reserves ASID 0.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Rename mmu.c to context.c - it's the ARMv6 ASID context handling
code rather than generic "mmu" handling code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|