From 353e228eb355be5a65a3c0996c774a0f46737fda Mon Sep 17 00:00:00 2001
From: Mark Rutland <mark.rutland@arm.com>
Date: Mon, 5 Oct 2020 17:43:03 +0100
Subject: arm64: initialize per-cpu offsets earlier

The current initialization of the per-cpu offset register is difficult
to follow and this initialization is not always early enough for
upcoming instrumentation with KCSAN, where the instrumentation callbacks
use the per-cpu offset.

To make it possible to support KCSAN, and to simplify reasoning about
early bringup code, let's initialize the per-cpu offset earlier, before
we run any C code that may consume it. To do so, this patch adds a new
init_this_cpu_offset() helper that's called before the usual
primary/secondary start functions. For consistency, this is also used to
re-initialize the per-cpu offset after the runtime per-cpu areas have
been allocated (which can change CPU0's offset).

So that init_this_cpu_offset() isn't subject to any instrumentation that
might consume the per-cpu offset, it is marked with noinstr, preventing
instrumentation.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201005164303.21389-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/head.S | 3 +++
 1 file changed, 3 insertions(+)

(limited to 'arch/arm64/kernel/head.S')

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 037421c66b14..2720e6ec6814 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -452,6 +452,8 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 	bl	__pi_memset
 	dsb	ishst				// Make zero page visible to PTW
 
+	bl	init_this_cpu_offset
+
 #ifdef CONFIG_KASAN
 	bl	kasan_early_init
 #endif
@@ -758,6 +760,7 @@ SYM_FUNC_START_LOCAL(__secondary_switched)
 	ptrauth_keys_init_cpu	x2, x3, x4, x5
 #endif
 
+	bl	init_this_cpu_offset
 	b	secondary_start_kernel
 SYM_FUNC_END(__secondary_switched)
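
Since this view is limited to head.S, the diff shows only the new call
sites; the init_this_cpu_offset() helper itself lives in C and is not
shown here. A minimal sketch of what such a noinstr helper can look
like, built from the existing task_cpu(), per_cpu_offset() and
set_my_cpu_offset() kernel helpers (the exact body in the full commit
may differ):

	/*
	 * Sketch of the C-side helper called from the head.S hunks above.
	 * Marked noinstr so no instrumentation callback (e.g. KCSAN's)
	 * can run and read the per-cpu offset before it is set up.
	 * task_cpu(), per_cpu_offset() and set_my_cpu_offset() are
	 * existing kernel helpers; this body is an assumption, not the
	 * commit's verbatim code.
	 */
	void noinstr init_this_cpu_offset(void)
	{
		/* Logical CPU number of the task we are booting on. */
		unsigned int cpu = task_cpu(current);

		/* Point the per-cpu offset register at this CPU's area. */
		set_my_cpu_offset(per_cpu_offset(cpu));
	}

Calling this both from __primary_switched/__secondary_switched and again
after the runtime per-cpu areas are allocated matches the commit
message: the early call makes the offset valid before any C code runs,
and the later call picks up CPU0's relocated offset.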