commit 5dfe9d7d23c26d029415379630523f141a748c5b
tree 8ffc7d85d1f3bb5cd4021b115fe965aa290cb837 /arch/arm64/kernel/sleep.S
parent 61bd93ce801bb6df36eda257a9d2d16c02863cdd
author Ard Biesheuvel <ard.biesheuvel@linaro.org> 2015-06-01 14:40:33 +0300
committer Catalin Marinas <catalin.marinas@arm.com> 2015-06-02 19:44:51 +0300
arm64: reduce ID map to a single page
Commit ea8c2e112445 ("arm64: Extend the idmap to the whole kernel
image") changed the early page table code so that the entire kernel
Image is covered by the identity map. This allows functions that
need to enable or disable the MMU to reside anywhere in the kernel
Image.
However, this change has the unfortunate side effect that the Image
can no longer cross a physical 512 MB alignment boundary: the early
page table code cannot deal with the Image crossing a /virtual/
512 MB alignment boundary, and since the ID map places the Image at a
virtual address equal to its physical address, a physical boundary
crossing implies a virtual one.
So instead, reduce the ID map to a single page, which is populated by
the contents of the .idmap.text section. Only three functions reside
there at the moment: __enable_mmu(), cpu_resume_mmu() and cpu_reset().
If new code is introduced that needs to manipulate the MMU state, it
should be added to this section as well; a sketch of that pattern
follows the diff below.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Diffstat (limited to 'arch/arm64/kernel/sleep.S')
-rw-r--r--  arch/arm64/kernel/sleep.S | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index ede186cdd452..811e61a2d847 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -130,12 +130,14 @@ ENDPROC(__cpu_suspend_enter)
 /*
  * x0 must contain the sctlr value retrieved from restored context
  */
+	.pushsection ".idmap.text", "ax"
 ENTRY(cpu_resume_mmu)
 	ldr	x3, =cpu_resume_after_mmu
 	msr	sctlr_el1, x0		// restore sctlr_el1
 	isb
 	br	x3			// global jump to virtual address
 ENDPROC(cpu_resume_mmu)
+	.popsection
 cpu_resume_after_mmu:
 	mov	x0, #0			// return zero on success
 	ldp	x19, x20, [sp, #16]
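
To illustrate the commit message's guidance: any future routine that must
run while the MMU is being enabled or disabled would be bracketed by the
same .pushsection/.popsection directives so that it lands in the single
ID-mapped page. The following is a minimal sketch only; the symbol
__hypothetical_mmu_off is made up for illustration and is not part of this
commit or of the kernel tree.

	/* Sketch only: __hypothetical_mmu_off is a made-up example symbol. */
	.pushsection ".idmap.text", "ax"	// place in the ID-mapped page
ENTRY(__hypothetical_mmu_off)
	mrs	x2, sctlr_el1		// read current SCTLR_EL1
	bic	x2, x2, #1		// clear SCTLR_EL1.M (MMU enable bit)
	msr	sctlr_el1, x2		// turn the MMU off
	isb				// synchronize the context change
	br	x0			// continue at the physical address in x0
ENDPROC(__hypothetical_mmu_off)
	.popsection			// back to the regular .text section

Because the page holding .idmap.text is identity mapped, the instruction
stream stays valid across the msr that toggles the M bit, which is exactly
why such code must live in this section.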