path: root/arch/arm64/include/asm/ptrace.h
Age | Commit message | Author | Files | Lines

2017-02-15 | arm64: ptrace: add XZR-safe regs accessors | Mark Rutland | 1 | -0/+20

In A64, XZR and the SP share the same encoding (31), and whether an instruction accesses XZR or SP for a particular register parameter depends on the definition of the instruction. We store the SP in pt_regs::regs[31], and thus when emulating instructions, we must be careful to not erroneously read from or write back to the saved SP. Unfortunately, we often fail to be this careful.

In all cases, instructions using a transfer register parameter Xt use this to refer to XZR rather than SP. This patch adds helpers so that we can more easily and consistently handle these cases.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
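
A minimal sketch of what such XZR-safe accessors can look like, assuming the pt_regs layout described above; the names and bodies are illustrative rather than quoted from the patch:

    static inline u64 pt_regs_read_reg(const struct pt_regs *regs, int r)
    {
        /* Register 31 is XZR for transfer registers: reads return zero. */
        return (r == 31) ? 0 : regs->regs[r];
    }

    static inline void pt_regs_write_reg(struct pt_regs *regs, int r, u64 val)
    {
        /* Writes to register 31 (XZR) are discarded instead of clobbering the saved SP. */
        if (r != 31)
            regs->regs[r] = val;
    }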

2016-11-07 | arm64: Add uprobe support | Pratyush Anand | 1 | -0/+8

This patch adds support for uprobes on the ARM64 architecture. Unit tests for the following have been done so far and they have been found working:

1. Step-able instructions, like sub, ldr, add etc.
2. Simulation-able instructions, like ret, cbnz, cbz etc.
3. uretprobe
4. Reject-able instructions, like sev, wfe etc.
5. Trapped and aborted xol path
6. Probe at an unaligned user address
7. longjump test cases

Currently it does not support aarch32 instruction probing.

Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2016-07-27 | Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux | Linus Torvalds | 1 | -3/+61

Pull arm64 updates from Catalin Marinas:

 - Kexec support for arm64
 - Kprobes support
 - Expose MIDR_EL1 and REVIDR_EL1 CPU identification registers to sysfs
 - Trapping of user space cache maintenance operations and emulation in the kernel (CPU errata workaround)
 - Clean-up of the early page tables creation (kernel linear mapping, EFI run-time maps) to avoid splitting larger blocks (e.g. pmds) into smaller ones (e.g. ptes)
 - VDSO support for CLOCK_MONOTONIC_RAW in clock_gettime()
 - ARCH_HAS_KCOV enabled for arm64
 - Optimise IP checksum helpers
 - SWIOTLB optimisation to only allocate/initialise the buffer if the available RAM is beyond the 32-bit mask
 - Properly handle the "nosmp" command line argument
 - Fix for the initialisation of the CPU debug state during early boot
 - vdso-offsets.h build dependency workaround
 - Build fix when RANDOMIZE_BASE is enabled with MODULES off

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (64 commits)
  arm64: arm: Fix-up the removal of the arm64 regs_query_register_name() prototype
  arm64: Only select ARM64_MODULE_PLTS if MODULES=y
  arm64: mm: run pgtable_page_ctor() on non-swapper translation table pages
  arm64: mm: make create_mapping_late() non-allocating
  arm64: Honor nosmp kernel command line option
  arm64: Fix incorrect per-cpu usage for boot CPU
  arm64: kprobes: Add KASAN instrumentation around stack accesses
  arm64: kprobes: Cleanup jprobe_return
  arm64: kprobes: Fix overflow when saving stack
  arm64: kprobes: WARN if attempting to step with PSTATE.D=1
  arm64: debug: remove unused local_dbg_{enable, disable} macros
  arm64: debug: remove redundant spsr manipulation
  arm64: debug: unmask PSTATE.D earlier
  arm64: localise Image objcopy flags
  arm64: ptrace: remove extra define for CPSR's E bit
  kprobes: Add arm64 case in kprobe example module
  arm64: Add kernel return probes support (kretprobes)
  arm64: Add trampoline code for kretprobes
  arm64: kprobes instruction simulation support
  arm64: Treat all entry code as non-kprobe-able
  ...

2016-07-27 | arm64: arm: Fix-up the removal of the arm64 regs_query_register_name() prototype | Catalin Marinas | 1 | -1/+0

Commit 0a8ea52c3eb1 ("arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature") inadvertently removed the arch/arm prototype instead of the arm64 one introduced by the original patch. There should not be any bisection issues since this function is not called from anywhere else (it could as well be removed from arch/arm at some point).

Fixes: 0a8ea52c3eb1 ("arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2016-07-21 | Merge branch 'for-next/kprobes' into for-next/core | Catalin Marinas | 1 | -2/+62

* kprobes:
  arm64: kprobes: Add KASAN instrumentation around stack accesses
  arm64: kprobes: Cleanup jprobe_return
  arm64: kprobes: Fix overflow when saving stack
  arm64: kprobes: WARN if attempting to step with PSTATE.D=1
  kprobes: Add arm64 case in kprobe example module
  arm64: Add kernel return probes support (kretprobes)
  arm64: Add trampoline code for kretprobes
  arm64: kprobes instruction simulation support
  arm64: Treat all entry code as non-kprobe-able
  arm64: Blacklist non-kprobe-able symbol
  arm64: Kprobes with single stepping support
  arm64: add conditional instruction simulation support
  arm64: Add more test functions to insn.c
  arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature

2016-07-19 | arm64: ptrace: remove extra define for CPSR's E bit | Vladimir Murzin | 1 | -1/+0

...and do not confuse source navigation tools ;)

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2016-07-19 | arm64: Kprobes with single stepping support | Sandeepa Prabhu | 1 | -2/+12

Add support for basic kernel probes (kprobes) and jump probes (jprobes) for ARM64.

Kprobes utilizes the software breakpoint and single step debug exceptions supported on ARM v8. A software breakpoint is placed at the probe address to trap the kernel execution into the kprobe handler.

ARM v8 supports enabling single stepping before the break exception return (ERET), with the next PC in the exception return address (ELR_EL1). The kprobe handler prepares an executable memory slot for out-of-line execution with a copy of the original instruction being probed, and enables single stepping. The PC is set to the out-of-line slot address before the ERET. With this scheme, the instruction is executed with the exact same register context except for the PC (and DAIF) registers.

The debug mask (PSTATE.D) is enabled only when single stepping a recursive kprobe, e.g. during kprobe re-entry, so that the probed instruction can be single stepped within the kprobe handler -exception- context. The kprobe recursion depth is always 2, i.e. upon probe re-entry any further re-entry is prevented by not calling handlers, and the case is counted as a missed kprobe.

Single stepping from the out-of-line slot has a drawback for PC-relative accesses like branches and symbolic literal accesses, as the offset from the new PC (the slot address) is not guaranteed to fit in the immediate value of the opcode. Such instructions need simulation, so reject probing them. Instructions generating exceptions or CPU mode changes are rejected for probing. Exclusive load/store instructions are rejected too. Additionally, the code is checked to see if it is inside an exclusive load/store sequence (code from Pratyush).

System instructions are mostly enabled for stepping, except MSR/MRS accesses to the "DAIF" flags in PSTATE, which are not safe for probing.

This also changes arch/arm64/include/asm/ptrace.h to use include/asm-generic/ptrace.h.

Thanks to Steve Capper and Pratyush Anand for several suggested changes.

Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Pratyush Anand <panand@redhat.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
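
Since the series also wires arm64 into the kprobe example module, usage follows the standard kprobes API. A minimal self-contained sketch (the probed symbol is an arbitrary assumption; pick any non-blacklisted kernel symbol):

    #include <linux/module.h>
    #include <linux/kprobes.h>

    /* Probe target chosen purely for illustration. */
    static struct kprobe kp = {
        .symbol_name = "do_sys_open",
    };

    /* Runs before the probed instruction is single-stepped out of line. */
    static int handler_pre(struct kprobe *p, struct pt_regs *regs)
    {
        pr_info("kprobe hit at pc=0x%lx\n", instruction_pointer(regs));
        return 0;
    }

    static int __init kprobe_test_init(void)
    {
        kp.pre_handler = handler_pre;
        return register_kprobe(&kp);
    }

    static void __exit kprobe_test_exit(void)
    {
        unregister_kprobe(&kp);
    }

    module_init(kprobe_test_init);
    module_exit(kprobe_test_exit);
    MODULE_LICENSE("GPL");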

2016-07-19 | arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature | David A. Long | 1 | -0/+50

Add HAVE_REGS_AND_STACK_ACCESS_API feature for arm64, including supporting functions and defines.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
[catalin.marinas@arm.com: Remove unused functions]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
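
The API revolves around reading saved registers and kernel stack slots by name or byte offset. A simplified sketch of the flavour of accessor involved (only a few table entries are shown, not the full set this commit adds):

    #include <linux/ptrace.h>
    #include <linux/stddef.h>

    struct pt_regs_offset {
        const char *name;
        int offset;
    };

    #define REG_OFFSET_NAME(r) { .name = #r, .offset = offsetof(struct pt_regs, r) }
    #define REG_OFFSET_END     { .name = NULL, .offset = 0 }

    /* Maps register names (e.g. "sp" in kprobe-tracer syntax) to their byte
     * offset within struct pt_regs. Truncated for brevity. */
    static const struct pt_regs_offset regoffset_table[] = {
        REG_OFFSET_NAME(sp),
        REG_OFFSET_NAME(pc),
        REG_OFFSET_NAME(pstate),
        REG_OFFSET_END,
    };

    /* Read a saved register given its byte offset into pt_regs. */
    static inline u64 regs_get_register(struct pt_regs *regs, unsigned int offset)
    {
        if (unlikely(offset > offsetof(struct pt_regs, pstate)))
            return 0;
        return *(u64 *)((char *)regs + offset);
    }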

2016-07-07 | arm64: kernel: Save and restore UAO and addr_limit on exception entry | James Morse | 1 | -0/+2

If we take an exception while at EL1, the exception handler inherits the original context's addr_limit and PSTATE.UAO values. To be consistent always reset addr_limit and PSTATE.UAO on (re-)entry to EL1. This prevents accidental re-use of the original context's addr_limit.

Based on a similar patch for arm from Russell King.

Cc: <stable@vger.kernel.org> # 4.6-
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
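
Conceptually the change looks like the sketch below; the real work happens in the EL1 entry/exit assembly, and the field name orig_addr_limit as well as the reset-to-USER_DS choice are assumptions drawn from the description above:

    /* On EL1 entry: stash the interrupted context's limit, then reset it. */
    static inline void el1_entry_sketch(struct pt_regs *regs)
    {
        regs->orig_addr_limit = current_thread_info()->addr_limit;  /* assumed field */
        current_thread_info()->addr_limit = USER_DS;
        /* PSTATE.UAO is likewise forced back to its default state here. */
    }

    /* On EL1 exit: restore whatever the interrupted context was using. */
    static inline void el1_exit_sketch(struct pt_regs *regs)
    {
        current_thread_info()->addr_limit = regs->orig_addr_limit;
    }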

2016-03-02 | arm64: Rework valid_user_regs | Mark Rutland | 1 | -29/+4

We validate pstate using PSR_MODE32_BIT, which is part of the user-provided pstate (and cannot be trusted). Also, we conflate validation of AArch32 and AArch64 pstate values, making the code difficult to reason about.

Instead, validate the pstate value based on the associated task. The task may or may not be current (e.g. when using ptrace), so this must be passed explicitly by callers. To avoid circular header dependencies via sched.h, is_compat_task is pulled out of asm/ptrace.h.

To make the code possible to reason about, the AArch64 and AArch32 validation is split into separate functions. Software must respect the RES0 policy for SPSR bits, and thus the kernel mirrors the hardware policy (RAZ/WI) for bits as-yet unallocated. When these acquire an architected meaning writes may be permitted (potentially with additional validation).

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
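
A sketch of what per-ISA validation with a RAZ/WI policy can look like for the AArch64 side (heavily simplified; the real helpers are keyed on the task and deal with more state, e.g. single-step):

    /* Keep only the bits user space may influence (RAZ/WI for the rest),
     * then insist the result still describes an EL0 context. */
    static bool sketch_valid_native_pstate(u64 *pstate)
    {
        const u64 user_bits = PSR_N_BIT | PSR_Z_BIT | PSR_C_BIT | PSR_V_BIT;

        *pstate &= user_bits | PSR_MODE_MASK;

        return (*pstate & PSR_MODE_MASK) == PSR_MODE_EL0t;
    }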

2015-10-29 | arm64: Fix compat register mappings | Robin Murphy | 1 | -8/+8

For reasons not entirely apparent, but now enshrined in history, the architectural mapping of AArch32 banked registers to AArch64 registers actually orders SP_<mode> and LR_<mode> backwards compared to the intuitive r13/r14 order, for all modes except FIQ.

Fix the compat_<reg>_<mode> macros accordingly, in the hope of avoiding subtle bugs with KVM and AArch32 guests.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
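
To illustrate the quirk, a few of the compat_<reg>_<mode> macros as the architectural aliasing lays them out (indices recalled from the ARM ARM mapping, shown for orientation rather than quoted from the patch; they are used as e.g. regs->compat_sp_irq, which expands to regs->regs[17]):

    #define compat_sp       regs[13]    /* SP_usr: the intuitive order... */
    #define compat_lr       regs[14]    /* LR_usr */

    #define compat_lr_irq   regs[16]    /* ...but the banked modes put LR first */
    #define compat_sp_irq   regs[17]
    #define compat_lr_svc   regs[18]
    #define compat_sp_svc   regs[19]

    #define compat_sp_fiq   regs[29]    /* FIQ alone keeps SP before LR */
    #define compat_lr_fiq   regs[30]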

2015-07-27 | arm64: force CONFIG_SMP=y and remove redundant #ifdefs | Will Deacon | 1 | -4/+0

Nobody seems to be producing !SMP systems anymore, so this is just becoming a source of kernel bugs, particularly if people want to use coherent DMA with non-shared pages.

This patch forces CONFIG_SMP=y for arm64, removing a modest amount of code in the process.

Signed-off-by: Will Deacon <will.deacon@arm.com>

2015-01-23 | arm64: Emulate SETEND for AArch32 tasks | Suzuki K. Poulose | 1 | -0/+7

Emulate the deprecated 'setend' instruction for AArch32 tasks.

  setend [le/be] - Sets the endianness of EL0

On systems with CPUs which support mixed endian at EL0, the hardware support for the instruction can be enabled by clearing the SCTLR_EL1.SED bit. Like the other emulated instructions it is controlled by an entry in /proc/sys/abi/. For more information see: Documentation/arm64/legacy_instructions.txt

The instruction is emulated by setting/clearing the SPSR_EL1.E bit, which will be reflected in PSTATE.E in the AArch32 context. This patch also restores the native endianness for the execution of signal handlers, since the process could have changed the endianness.

Note: All CPUs on the system must have mixed endian support at EL0. Once the handler is registered, hotplugging a CPU which doesn't support mixed endian could lead to unexpected results/behavior in applications.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
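
The core of such an emulation handler is small. A hedged sketch (the helper shape is an assumption; the real code also handles the 2-byte Thumb encoding and updates the emulation statistics):

    /* 'setend be' sets the E bit in the saved SPSR, 'setend le' clears it;
     * PSTATE.E then takes effect when the task returns to AArch32. */
    static int sketch_setend_handler(struct pt_regs *regs, bool big_endian)
    {
        if (big_endian)
            regs->pstate |= COMPAT_PSR_E_BIT;
        else
            regs->pstate &= ~COMPAT_PSR_E_BIT;

        regs->pc += 4;  /* skip the emulated A32 instruction */
        return 0;
    }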

2014-08-29 | arm64: Add brackets around user_stack_pointer() | Catalin Marinas | 1 | -1/+1

Commit 5f888a1d33 (ARM64: perf: support dwarf unwinding in compat mode) changes user_stack_pointer() to return the compat SP for 32-bit tasks but without brackets around the whole definition, with possible issues on the call sites (noticed with a subsequent fix for KSTK_ESP).

Fixes: 5f888a1d33c4 (ARM64: perf: support dwarf unwinding in compat mode)
Reported-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
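
The problem is ordinary macro hygiene: without the outer parentheses, a conditional expression can absorb operators at the call site. A before/after sketch (the definition is paraphrased, not quoted):

    /* Unbracketed: user_stack_pointer(regs) - 16 parses as
     * compat_user_mode(regs) ? (regs)->compat_sp : ((regs)->sp - 16) */
    #define user_stack_pointer_unbracketed(regs) \
        compat_user_mode(regs) ? (regs)->compat_sp : (regs)->sp

    /* Bracketed: the subtraction applies to whichever SP was selected. */
    #define user_stack_pointer(regs) \
        (compat_user_mode(regs) ? (regs)->compat_sp : (regs)->sp)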

2014-07-04 | arm64: fix el2_setup check of CurrentEL | Marc Zyngier | 1 | -0/+4

The CurrentEL system register reports the Current Exception Level of the CPU. It doesn't say anything about the stack handling, and yet we compare it to PSR_MODE_EL2t and PSR_MODE_EL2h. It works by chance because PSR_MODE_EL2t happens to match the right bits, but that's otherwise a very bad idea. Just check for the EL value instead.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[catalin.marinas@arm.com: fixed arch/arm64/kernel/efi-entry.S]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
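
CurrentEL encodes the exception level in bits [3:2] and says nothing about which stack pointer is selected, so the cleaner check compares against dedicated values. A sketch of the kind of definitions and check involved:

    /* CurrentEL[3:2] holds the exception level. */
    #define CurrentEL_EL1   (1 << 2)
    #define CurrentEL_EL2   (2 << 2)

    /*
     * el2_setup-style check, in assembly:
     *      mrs     x0, CurrentEL
     *      cmp     x0, #CurrentEL_EL2
     *      b.ne    1f              // booted at EL1, skip the EL2 setup
     */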

2014-05-12 | arm64: Add regs_return_value() in syscall.h | AKASHI Takahiro | 1 | -0/+5

This macro, regs_return_value, is used mainly by audit to record a system call's results, but may also be used in test_kprobes.c.

Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Richard Guy Briggs <rgb@redhat.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
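
On arm64 the syscall return value is delivered in x0, so the accessor is essentially a one-liner over the saved registers (paraphrased):

    #define regs_return_value(regs) ((regs)->regs[0])   /* x0 holds the result */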

2014-03-13 | ARM64: perf: support dwarf unwinding in compat mode | Jean Pihet | 1 | -1/+1

Add support for unwinding using the dwarf information in compat mode. Using the correct user stack pointer allows perf to record the frames correctly in the native and compat modes.

Note that although the dwarf frame unwinding works ok using libunwind in native mode (on ARMv7 & ARMv8), some changes are required to the libunwind code for the compat mode. Those changes are posted separately on the libunwind mailing list.

Tested on an ARMv8 platform with v8 and compat v7 binaries, the latter statically built.

Signed-off-by: Jean Pihet <jean.pihet@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2014-03-13 | ARM64: perf: add support for perf registers API | Jean Pihet | 1 | -0/+1

This patch implements the functions required for the perf registers API, allowing the perf tool to interface kernel register dumps with libunwind in order to provide userspace backtracing. Compat mode is also supported.

Only the general purpose user space registers are exported, i.e.: PERF_REG_ARM_X0, ... PERF_REG_ARM_X28, PERF_REG_ARM_FP, PERF_REG_ARM_LR, PERF_REG_ARM_SP, PERF_REG_ARM_PC, and not the PERF_REG_ARM_V* registers.

Signed-off-by: Jean Pihet <jean.pihet@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
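
Behind those exports sits a per-architecture perf_reg_value() hook that translates a register index into the saved pt_regs. A sketch of the arm64 side (the enum names are taken from the uapi perf_regs header as recalled here, and compat handling is omitted):

    #include <linux/perf_regs.h>

    u64 perf_reg_value(struct pt_regs *regs, int idx)
    {
        if (WARN_ON_ONCE((u32)idx >= PERF_REG_ARM64_MAX))
            return 0;
        if (idx == PERF_REG_ARM64_SP)
            return regs->sp;
        if (idx == PERF_REG_ARM64_PC)
            return regs->pc;
        return regs->regs[idx];     /* X0..X30 map 1:1 */
    }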

2014-02-26 | misc: debug: remove compilation warnings | Vijaya Kumar K | 1 | -1/+1

Typecast the instruction_pointer macro to unsigned long to resolve compiler warnings like:

  warning: format '%lx' expects argument of type 'long unsigned int', but argument 2 has type 'u64' [-Wformat]

Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
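
The fix amounts to having the macro yield an unsigned long instead of the raw u64 field, along these lines:

    #define instruction_pointer(regs)   ((unsigned long)(regs)->pc)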

2013-10-25 | arm64: compat: add support for big-endian (BE8) AArch32 binaries | Will Deacon | 1 | -0/+1

This patch adds support for BE8 AArch32 tasks to the compat layer.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2013-06-12 | arm64: debug: consolidate software breakpoint handlers | Will Deacon | 1 | -2/+0

The software breakpoint handlers are hooked in directly from ptrace, which makes it difficult to add additional handlers for things like kprobes and kgdb. This patch moves the handling code into debug-monitors.c, where we can dispatch to different debug subsystems more easily.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2013-01-29 | arm64: add COMPAT_PSR_*_BIT flags | Marc Zyngier | 1 | -0/+10

In order to mess with the processor state when running 32bit guests, define all the AArch32 PSR flags.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
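
For orientation, a subset of the AArch32 PSR flag bits these definitions cover (values follow the AArch32 architecture; treat the exact list as illustrative rather than a quote of the patch):

    #define COMPAT_PSR_T_BIT    0x00000020  /* Thumb state */
    #define COMPAT_PSR_F_BIT    0x00000040  /* FIQ mask */
    #define COMPAT_PSR_I_BIT    0x00000080  /* IRQ mask */
    #define COMPAT_PSR_A_BIT    0x00000100  /* asynchronous abort mask */
    #define COMPAT_PSR_E_BIT    0x00000200  /* endianness */
    #define COMPAT_PSR_Q_BIT    0x08000000  /* saturation flag */
    #define COMPAT_PSR_V_BIT    0x10000000  /* overflow flag */
    #define COMPAT_PSR_C_BIT    0x20000000  /* carry flag */
    #define COMPAT_PSR_Z_BIT    0x40000000  /* zero flag */
    #define COMPAT_PSR_N_BIT    0x80000000  /* negative flag */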

2012-12-05 | arm64: add AArch32 execution modes to ptrace.h | Marc Zyngier | 1 | -0/+10

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
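
These are the AArch32 PSR.M[4:0] mode encodings; a representative set (values per the AArch32 architecture, listed for context rather than quoted from the patch):

    #define COMPAT_PSR_MODE_USR 0x00000010
    #define COMPAT_PSR_MODE_FIQ 0x00000011
    #define COMPAT_PSR_MODE_IRQ 0x00000012
    #define COMPAT_PSR_MODE_SVC 0x00000013
    #define COMPAT_PSR_MODE_ABT 0x00000017
    #define COMPAT_PSR_MODE_HYP 0x0000001a
    #define COMPAT_PSR_MODE_UND 0x0000001b
    #define COMPAT_PSR_MODE_SYS 0x0000001f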

2012-12-05 | arm64: expand register mapping between AArch32 and AArch64 | Marc Zyngier | 1 | -2/+19

The general purpose registers in AArch32 are mapped in an architecturally defined manner into the AArch64 registers. It allows the AArch32 registers of an application or a virtual machine to be inspected by the OS or a hypervisor.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2012-10-11 | UAPI: (Scripted) Disintegrate arch/arm64/include/asm | David Howells | 1 | -74/+1

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Michael Kerrisk <mtk.manpages@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Dave Jones <davej@redhat.com>

2012-10-11 | arm64: Do not export the compat-specific definitions to the user | Catalin Marinas | 1 | -3/+9

This patch adds #ifdef __KERNEL__ guards around the COMPAT_* definitions to avoid exporting them to user space. AArch32 user space requiring the kernel headers must use those generated with ARCH=arm.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>

2012-09-27 | arm64: ptrace: remove obsolete ptrace request numbers from user headers | Will Deacon | 1 | -11/+6

The use of regsets has removed the need for many private ptrace requests, so remove the corresponding definitions from the user-visible ptrace.h.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2012-09-17 | arm64: Exception handling | Catalin Marinas | 1 | -0/+212

The patch contains the exception entry code (kernel/entry.S), the pt_regs structure and related accessors, undefined instruction trapping and stack tracing.

The AArch64 Linux kernel (including kernel threads) runs in EL1 mode using the SP1 stack. The vectors don't have a fixed address, only alignment (2^11) requirements.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
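
The heart of what this adds to ptrace.h is the kernel's view of the saved register frame. A shape-only sketch of that structure, following the description above (consult the header itself for the authoritative layout):

    struct pt_regs {
        union {
            struct user_pt_regs user_regs;  /* the layout ptrace and core dumps see */
            struct {
                u64 regs[31];               /* x0..x30 */
                u64 sp;
                u64 pc;
                u64 pstate;
            };
        };
        u64 orig_x0;                        /* x0 at syscall entry, for restarting */
        u64 syscallno;
    };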