With CONFIG_CFI_CLANG, assembly functions indirectly called from C
code must be annotated with type identifiers to pass CFI checking. Use
SYM_TYPED_FUNC_START for the indirectly called functions, and ensure that
`bti c` is also emitted with SYM_TYPED_FUNC_START.
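For example (a minimal sketch; my_asm_callback is a hypothetical function,
not one touched by this patch):
#include <linux/cfi_types.h>
SYM_TYPED_FUNC_START(my_asm_callback)	// emits the CFI type identifier and `bti c`
	mov	x0, #0
	ret
SYM_FUNC_END(my_asm_callback)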
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20220908215504.3686827-10-samitolvanen@google.com
|
|
Now that we have SYM_FUNC_ALIAS() and SYM_FUNC_ALIAS_WEAK(), use those
to simplify and more consistently define function aliases across
arch/arm64.
Aliases are now defined in terms of a canonical function name. For
position-independent functions I've made the __pi_<func> name the
canonical name, and defined other aliases in terms of this.
The SYM_FUNC_{START,END}_PI(func) macros obscure the __pi_<func> name,
and make this hard to search for. The SYM_FUNC_START_WEAK_PI() macro
also obscures the fact that the __pi_<func> symbol is global and the
<func> symbol is weak. For clarity, I have removed these macros and used
SYM_FUNC_{START,END}() directly with the __pi_<func> name.
For example:
SYM_FUNC_START_WEAK_PI(func)
... asm insns ...
SYM_FUNC_END_PI(func)
EXPORT_SYMBOL(func)
... becomes:
SYM_FUNC_START(__pi_func)
... asm insns ...
SYM_FUNC_END(__pi_func)
SYM_FUNC_ALIAS_WEAK(func, __pi_func)
EXPORT_SYMBOL(func)
For clarity, where there are multiple annotations such as
EXPORT_SYMBOL(), I've tried to keep annotations grouped by symbol. For
example, where a function has a name and an alias which are both
exported, this is organised as:
SYM_FUNC_START(func)
... asm insns ...
SYM_FUNC_END(func)
EXPORT_SYMBOL(func)
SYM_FUNC_ALIAS(alias, func)
EXPORT_SYMBOL(alias)
For consistency with the other string functions, I've defined strrchr as
a position-independent function, as it can safely be used as such even
though we have no users today.
As we no longer use SYM_FUNC_{START,END}_ALIAS(), our local copies are
removed. The common versions will be removed by a subsequent patch.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Mark Brown <broonie@kernel.org>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220216162229.1076788-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Not all .S files include asm/assembler.h; however, the SYM_FUNC_*
definitions invoke the 'bti' macro. Include asm/assembler.h in
asm/linkage.h.
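A rough sketch of the shape of the change (the __ASSEMBLY__ guard is an
assumption here, since asm/linkage.h is also pulled into C builds):
/* arch/arm64/include/asm/linkage.h */
#ifdef __ASSEMBLY__
#include <asm/assembler.h>	/* provides the 'bti' macro used by SYM_FUNC_* */
#endif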
Fixes: 9be34be87cc8 ("arm64: Add macro version of the BTI instruction")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Now that we have a macro for BTI C that looks like a regular instruction,
change all the users of the current BTI_C macro to just emit a BTI C directly
and remove the macro.
This does mean that we now unconditionally BTI annotate all assembly
functions, meaning that they are worse in this respect than code generated
by the compiler, which only emits BTI landing pads when branch protection is
enabled. The overhead should be minimal for implementations with a
reasonable HINT implementation.
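As a rough illustration (the exact macro bodies are an assumption), the
override in asm/linkage.h goes from:
#define SYM_FUNC_START(name)				\
	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)	\
	BTI_C
... to:
#define SYM_FUNC_START(name)				\
	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)	\
	bti c ;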
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20211214152714.2380849-4-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Currently we only override the SYM_FUNC macros when we need to insert
BTI C into them; do this unconditionally instead, to make it more likely
that we'll notice bugs in our override.
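Sketched roughly (the macro bodies are an assumption), the override simply
loses its config guard:
#ifdef CONFIG_ARM64_BTI_KERNEL
#define SYM_FUNC_START(name)				\
	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)	\
	BTI_C
#endif
... becomes:
#define SYM_FUNC_START(name)				\
	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)	\
	BTI_C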
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20211214152714.2380849-3-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
BTI is only available from v8.5, so we need to encode it using HINT in
generic code and for older toolchains. Add an assembler macro based on
one written by Mark Rutland which lets us use the mnemonic and update
the existing users.
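A minimal sketch of such a macro (the in-tree implementation may differ; the
HINT immediates are the architectural encodings of BTI C/J/JC, which live in
the NOP space, so older CPUs and toolchains treat them as plain hints):
	.macro	bti, targets
	.ifc	\targets, c
	hint	#34		// bti c
	.endif
	.ifc	\targets, j
	hint	#36		// bti j
	.endif
	.ifc	\targets, jc
	hint	#38		// bti jc
	.endif
	.endm
With this, assembly can simply write "bti c" even when built with
pre-ARMv8.5 binutils.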
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20211214152714.2380849-2-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Add yet another set of assembly symbol annotations, this time for the
borderline-absurd situation of a function aliasing to a weak symbol
which itself also wants a position-independent alias.
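Usage looks like this (memmove is just an assumed example user here):
SYM_FUNC_START_WEAK_PI(memmove)		// weak 'memmove' plus a global '__pi_memmove' alias
	... asm insns ...
SYM_FUNC_END_PI(memmove)
EXPORT_SYMBOL(memmove)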
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/75545b3c4129b20b887474bb58a9cf302bf2132b.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Some ftrace features are broken since commit 714a8d02ca4d ("arm64: asm:
Override SYM_FUNC_START when building the kernel with BTI"). For example
the function_graph tracer:
$ echo function_graph > /sys/kernel/debug/tracing/current_tracer
[ 36.107016] WARNING: CPU: 0 PID: 115 at kernel/trace/ftrace.c:2691 ftrace_modify_all_code+0xc8/0x14c
When ftrace_modify_graph_caller() attempts to write a branch at
ftrace_graph_call, it finds the "BTI J" instruction inserted by
SYM_INNER_LABEL() instead of a NOP, and aborts.
It turns out we don't currently need the BTI landing pads inserted by
SYM_INNER_LABEL:
* ftrace_call and ftrace_graph_call are only used for runtime patching
of the active tracer. The patched code is not reached from a branch.
* install_el2_stub is reached from a CBZ instruction, which doesn't
change PSTATE.BTYPE.
* __guest_exit is reached from B instructions in the hyp-entry vectors,
which aren't subject to BTI checks either.
Remove the BTI annotation from SYM_INNER_LABEL.
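For reference, the patch site is left as a plain NOP for ftrace to rewrite,
roughly as below (a sketch, not the exact entry-ftrace.S source):
SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL)	// runtime-patched; must stay a NOP, not a BTI J
	nop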
Fixes: 714a8d02ca4d ("arm64: asm: Override SYM_FUNC_START when building the kernel with BTI")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20200624112253.1602786-1-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
|
|
For better or worse, GDB relies on the exact instruction sequence in the
VDSO sigreturn trampoline in order to unwind from signals correctly.
Commit c91db232da48 ("arm64: vdso: Convert to modern assembler annotations")
unfortunately added a BTI C instruction to the start of __kernel_rt_sigreturn,
which breaks this check. Thankfully, it's also not required, since the
trampoline is called from a RET instruction when returning from the signal
handler.
Remove the unnecessary BTI C instruction from __kernel_rt_sigreturn,
and do the same for the 32-bit VDSO as well for good measure.
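For reference, the sequence GDB pattern-matches is roughly the bare syscall
stub below (a sketch; directives and surrounding annotations omitted), which
the leading BTI C instruction had perturbed:
__kernel_rt_sigreturn:
	mov	x8, #__NR_rt_sigreturn
	svc	#0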
Cc: Daniel Kiss <daniel.kiss@arm.com>
Cc: Tamas Zsoldos <tamas.zsoldos@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Fixes: c91db232da48 ("arm64: vdso: Convert to modern assembler annotations")
Signed-off-by: Will Deacon <will@kernel.org>
|
|
When the kernel is built for BTI, override SYM_FUNC_START and related macros
to add a BTI landing pad to the start of all global functions, ensuring that
they are BTI safe. The ; at the end of the BTI_x macros is for the
benefit of the macro-generated functions in xen-hypercall.S.
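The override looks roughly like this (a sketch; the exact bodies are an
assumption), with the trailing ';' letting the expansion sit on a single line
when pasted together by other macros:
#define BTI_C hint 34 ;
#define SYM_FUNC_START(name)				\
	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)	\
	BTI_C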
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-4-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
|
|
As part of an effort to make the annotations in assembly code clearer and
more consistent, new macros have been introduced, including replacements
for ENTRY() and ENDPROC().
On arm64 we have ENDPIPROC(), a custom version of ENDPROC() which is
used for code that will need to run in position-independent environments
like EFI; it creates an alias for the function with the prefix __pi_ and
then emits the standard ENDPROC. Add new-style macros to replace this
which expand to the standard SYM_FUNC_*() and SYM_FUNC_ALIAS_*(),
resulting in the same object code. These are added in linkage.h for
consistency with where the generic assembler code has its macros.
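Used in a string routine, the new annotations look like this (memcpy is just
an illustrative user here):
SYM_FUNC_START_PI(memcpy)		// defines memcpy plus a __pi_memcpy alias for PIC callers such as EFI
	... asm insns ...
SYM_FUNC_END_PI(memcpy)
EXPORT_SYMBOL(memcpy)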
Signed-off-by: Mark Brown <broonie@kernel.org>
[will: Rename 'WEAK' macro, use ';' instead of ASM_NL, deprecate ENDPIPROC]
Signed-off-by: Will Deacon <will@kernel.org>
|
|
AArch64 instructions must be word aligned. The current 16-byte
alignment is more than enough. Relax it to 4-byte alignment.
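Concretely, the override in asm/linkage.h becomes (a sketch; '.align 2' is a
power-of-two exponent, i.e. 4 bytes):
#define __ALIGN		.align 2
#define __ALIGN_STR	".align 2"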
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
An interesting effect of using the generic version of linkage.h
is that the padding is defined in terms of x86 NOPs, which can have
even more interesting effects when the assembly code looks like this:
ENTRY(func1)
mov x0, xzr
ENDPROC(func1)
// fall through
ENTRY(func2)
mov x0, #1
ret
ENDPROC(func2)
Admittedly, the code is not very nice. But having code from another
architecture doesn't look completely sane either.
The fix is to add arm64's version of linkage.h, which causes the insertion
of proper AArch64 NOPs.
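The arm64 override added here looked roughly like this (a sketch), replacing
the generic '.align 4,0x90' padding that emits x86 NOP bytes:
/* arch/arm64/include/asm/linkage.h */
#define __ALIGN		.align 4
#define __ALIGN_STR	".align 4"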
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|