author | Linus Torvalds <torvalds@linux-foundation.org> | 2016-06-25 03:51:14 +0300
---|---|---
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2016-06-25 03:51:14 +0300
commit | d05be0d7e87b7808525e6bc000a87e095cc80040 (patch) |
tree | 097ef6c2c5d8c2a5e09f8d74f88c624e3856c307 /arch/arm64/mm/context.c |
parent | 9c46a6df3ba770162deb7abc95427fa1ddf28763 (diff) |
parent | d74b4e4f1a6dbe27acce723e071c86a6ed154bf2 (diff) |
download | linux-d05be0d7e87b7808525e6bc000a87e095cc80040.tar.xz |
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Will Deacon:
"Here are a few more arm64 fixes, but things do finally appear to be
slowing down. The main fix is avoiding hibernation in a previously
unanticipated situation where we have CPUs parked in the kernel, but
it's all good stuff.
- Fix icache/dcache sync for anonymous pages under migration
- Correct the ASID limit check
- Fix parallel builds of Image and Image.gz
- Refuse to hibernate when we have CPUs that we can't offline"
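The last fix in that list is a straightforward guard: hibernation is refused outright when secondary CPUs are parked in the kernel with no mechanism to offline them. Below is a minimal user-space sketch of that check-before-suspend pattern; all names in it (any_cpu_stuck(), try_hibernate(), the cpu_stuck array) are illustrative stand-ins, not the kernel's actual symbols.

#include <stdbool.h>
#include <stdio.h>
#include <errno.h>

#define NR_CPUS 4

/* Example state: CPU 2 is parked in the kernel and cannot be offlined. */
static bool cpu_stuck[NR_CPUS] = { false, false, true, false };

/* Analogue of asking "is any CPU stuck in the kernel?" */
static bool any_cpu_stuck(void)
{
	for (int cpu = 1; cpu < NR_CPUS; cpu++)	/* CPU 0 is the boot CPU */
		if (cpu_stuck[cpu])
			return true;
	return false;
}

static int try_hibernate(void)
{
	/* Refuse early rather than hibernate with a CPU we cannot park. */
	if (any_cpu_stuck()) {
		fprintf(stderr, "hibernate: refusing, a CPU cannot be offlined\n");
		return -EBUSY;
	}
	printf("hibernate: all secondary CPUs can be offlined, proceeding\n");
	return 0;
}

int main(void)
{
	return try_hibernate() ? 1 : 0;
}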
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: hibernate: Don't hibernate on systems with stuck CPUs
arm64: smp: Add function to determine if cpus are stuck in the kernel
arm64: mm: remove page_mapping check in __sync_icache_dcache
arm64: fix boot image dependencies to not generate invalid images
arm64: update ASID limit
Diffstat (limited to 'arch/arm64/mm/context.c')
-rw-r--r-- | arch/arm64/mm/context.c | 9 |
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index b7b397802088..efcf1f7ef1e4 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -179,7 +179,7 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 						 &asid_generation);
 		flush_context(cpu);
 
-		/* We have at least 1 ASID per CPU, so this will always succeed */
+		/* We have more ASIDs than CPUs, so this will always succeed */
 		asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
 
 set_asid:
@@ -227,8 +227,11 @@ switch_mm_fastpath:
 static int asids_init(void)
 {
 	asid_bits = get_cpu_asid_bits();
-	/* If we end up with more CPUs than ASIDs, expect things to crash */
-	WARN_ON(NUM_USER_ASIDS < num_possible_cpus());
+	/*
+	 * Expect allocation after rollover to fail if we don't have at least
+	 * one more ASID than CPUs. ASID #0 is reserved for init_mm.
+	 */
+	WARN_ON(NUM_USER_ASIDS - 1 <= num_possible_cpus());
 	atomic64_set(&asid_generation, ASID_FIRST_VERSION);
 	asid_map = kzalloc(BITS_TO_LONGS(NUM_USER_ASIDS) * sizeof(*asid_map),
 			   GFP_KERNEL);
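The updated WARN_ON encodes a counting argument: ASID #0 is reserved for init_mm (note the bitmap search above starts at bit 1), and after a generation rollover each possible CPU may re-reserve the ASID of the task it is currently running, so new_context() is only guaranteed a free slot if the usable ASIDs strictly outnumber the CPUs. Here is a minimal user-space sketch of that arithmetic; ASID_BITS, NUM_USER_ASIDS and num_possible_cpus are stand-ins for the kernel symbols, not kernel code.

#include <stdio.h>

#define ASID_BITS	8	/* arm64 uses 8- or 16-bit hardware ASIDs */
#define NUM_USER_ASIDS	(1UL << ASID_BITS)

int main(void)
{
	unsigned long num_possible_cpus = 4;	/* example system */

	/*
	 * ASID #0 never leaves the allocator (reserved for init_mm), so
	 * only NUM_USER_ASIDS - 1 values are usable. After a rollover,
	 * each CPU may re-reserve the ASID of its currently-running task
	 * before the bitmap is searched for a new context.
	 */
	unsigned long usable = NUM_USER_ASIDS - 1;

	/* Mirrors WARN_ON(NUM_USER_ASIDS - 1 <= num_possible_cpus()) */
	if (usable <= num_possible_cpus)
		printf("WARN: an allocation after rollover can fail\n");
	else
		printf("OK: at least %lu ASIDs stay free in the worst case\n",
		       usable - num_possible_cpus);
	return 0;
}

With 8-bit ASIDs and 4 CPUs this prints the OK case (251 ASIDs spare); the old check, NUM_USER_ASIDS < num_possible_cpus(), would have stayed silent even on a system with exactly as many CPUs as usable ASIDs, where a post-rollover allocation can find every bit already set.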