author     Thomas Gleixner <tglx@linutronix.de>       2020-11-03 12:27:21 +0300
committer  Thomas Gleixner <tglx@linutronix.de>       2020-11-07 01:14:55 +0300
commit     39cac191ff37939544af80d5d2af6b870fd94c9b (patch)
tree       b693f7450c7c336161489022b1a36bab6ba67811 /arch/arm/include/asm/fixmap.h
parent     157e118b55113d1e6c7f8ddfcec0a1dbf3a69511 (diff)
download   linux-39cac191ff37939544af80d5d2af6b870fd94c9b.tar.xz
arc/mm/highmem: Use generic kmap atomic implementation
Adopt the map ordering to match the other architectures and the generic
code. Also make the maximum entries limited and not dependent on the number
of CPUs. The original implementation did the following calculation:
nr_slots = mapsize >> PAGE_SHIFT;
This results in either 512 or 1024 total slots depending on
configuration. The total slots have to be divided by the number of CPUs to
get the number of slots per CPU (former KM_TYPE_NR). ARC supports up to 4k
CPUs, so this just falls apart in random ways depending on the number of
CPUs and the actual kmap (atomic) nesting. The comment in highmem.c:
* - fixmap anyhow needs a limited number of mappings. So 2M kvaddr == 256 PTE
* slots across NR_CPUS would be more than sufficient (generic code defines
* KM_TYPE_NR as 20).
is just wrong. KM_TYPE_NR (now KM_MAX_IDX) is the number of slots per CPU
because kmap_local/atomic() needs to support nested mappings (thread,
softirq, interrupt). While KM_MAX_IDX might be overestimated, the above
reasoning is just wrong and clearly the highmem code was never tested with
any system with more than a few CPUs.
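For illustration only (not part of the patch), a standalone sketch of that
arithmetic, assuming the 2M fixmap window from the quoted comment, 4 KiB
pages and the documented ARC maximum of 4096 CPUs:

  #include <stdio.h>

  int main(void)
  {
          /* Assumed illustrative values: 2 MiB fixmap window, 4 KiB pages,
           * NR_CPUS at the documented ARC maximum of 4096. */
          unsigned long mapsize  = 2UL << 20;        /* 2 MiB of kvaddr     */
          unsigned long nr_slots = mapsize >> 12;    /* 512 total slots     */
          unsigned long per_cpu  = nr_slots / 4096;  /* 0 slots per CPU (!) */

          printf("total=%lu per_cpu=%lu\n", nr_slots, per_cpu);
          return 0;
  }

Integer division already yields zero slots per CPU here, which is exactly
the failure mode described above.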
Use the default number of slots and fail the build when it does not
fit. Randomly failing at runtime is not really a good option.
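A minimal sketch of such a build-time check, assuming the kernel's
BUILD_BUG_ON() and the arch's FIXMAP_SIZE constant for the reserved fixmap
window (the expression in the actual patch may differ):

  /* Sketch only: refuse to build if the per-CPU kmap slots for all
   * possible CPUs no longer fit into the reserved fixmap region. */
  BUILD_BUG_ON(KM_MAX_IDX * NR_CPUS * PAGE_SIZE > FIXMAP_SIZE);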
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20201103095857.472289952@linutronix.de