author     Alexander Gordeev <agordeev@linux.ibm.com>	2024-03-01 09:05:39 +0300
committer  Alexander Gordeev <agordeev@linux.ibm.com>	2024-04-17 14:37:59 +0300
commit     c8aef260c86ec86c4d6065b6cd67ce7161d1ca10
tree       61a513c0492d08a4525977899b34417ba2f45517 /arch/s390/mm
parent     ecf74da64defe9e7f1862d86b4f3d4041e22dc4a
s390/boot: Swap vmalloc and Lowcore/Real Memory Copy areas
This is a preparatory rework to allow uncoupling the virtual
and physical address spaces.
Currently the order of virtual memory areas is (the lowcore
and .amode31 section are skipped, as they are irrelevant):
identity mapping (the kernel is contained within)
vmemmap
vmalloc
modules
Absolute Lowcore
Real Memory Copy
In the future the kernel will be mapped separately and placed
at the end of the virtual address space, so the layout would
look like this:
identity mapping
vmemmap
vmalloc
modules
Absolute Lowcore
Real Memory Copy
kernel
However, the distance between the kernel and modules needs to be as
small as possible, ideally none. Thus, the Absolute Lowcore
and Real Memory Copy areas would be in the way and therefore
need to be moved as well:
identity mapping
vmemmap
Absolute Lowcore
Real Memory Copy
vmalloc
modules
kernel
To facilitate such a layout, swap the vmalloc area with the
Absolute Lowcore and Real Memory Copy areas. As a result, the
current layout turns into:
identity mapping (the kernel is contained within)
vmemmap
Absolute Lowcore
Real Memory Copy
vmalloc
modules
This will allow the kernel to be located directly next to the
modules once it gets mapped separately.
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Diffstat (limited to 'arch/s390/mm')

 arch/s390/mm/vmem.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
index 85cddf904cb2..917b8a37383a 100644
--- a/arch/s390/mm/vmem.c
+++ b/arch/s390/mm/vmem.c
@@ -13,6 +13,7 @@
 #include <linux/slab.h>
 #include <linux/sort.h>
 #include <asm/page-states.h>
+#include <asm/abs_lowcore.h>
 #include <asm/cacheflush.h>
 #include <asm/nospec-branch.h>
 #include <asm/ctlreg.h>
@@ -436,7 +437,7 @@ static int modify_pagetable(unsigned long start, unsigned long end, bool add,
 	if (WARN_ON_ONCE(!PAGE_ALIGNED(start | end)))
 		return -EINVAL;
 	/* Don't mess with any tables not fully in 1:1 mapping & vmemmap area */
-	if (WARN_ON_ONCE(end > VMALLOC_START))
+	if (WARN_ON_ONCE(end > __abs_lowcore))
 		return -EINVAL;
 	for (addr = start; addr < end; addr = next) {
 		next = pgd_addr_end(addr, end);