2016-12-06  arm64: Work around broken .inst when defective gas is detected  (Marc Zyngier; 1 file, -4/+25)
Since .inst is largely broken with older binutils, it is better not to emit it at all when such a configuration is detected (as it leads to all kinds of horrors when using alternatives). Generalize the __emit_inst macro and use it extensively in asm/sysreg.h, and make it generate a .long when a broken gas is detected. The disassembly will be crap, but at least we can write semi-sane code. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
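For context, a minimal sketch of what such a fallback can look like in asm/sysreg.h, assuming the CONFIG_BROKEN_GAS_INST option from the companion detection patch (the exact header layout may differ):

    /* Emit a raw word when gas cannot assemble .inst (sketch) */
    #ifdef CONFIG_BROKEN_GAS_INST
    #define __emit_inst(x)	".long " __stringify((x)) "\n\t"
    #else
    #define __emit_inst(x)	".inst " __stringify((x)) "\n\t"
    #endif

Either way the same word lands in the text section; only the disassembler loses the instruction decoding.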
2016-12-06  arm64: Add detection code for broken .inst support in binutils  (Marc Zyngier; 1 file, -2/+8)
Binutils versions up to (and including) 2.25 have a pathological behaviour when it comes to mixing the .inst directive and arithmetic involving labels: the assembler complains about non-constant expressions and compilation stops pretty quickly. In order to detect this and work around it, let's add a bit of detection code that will set the CONFIG_BROKEN_GAS_INST option should a broken gas be detected. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-12-05  arm64: Remove reference to asm/opcodes.h  (Marc Zyngier; 1 file, -2/+0)
The asm/opcodes.h file is now gone, but probes.h still references it for no obvious reason. Removing the #include directive fixes the compilation. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-12-02  arm64: Get rid of asm/opcodes.h  (Marc Zyngier; 4 files, -13/+14)
opcodes.h drags in a lot of definitions from the 32-bit port, most of which are not required at all. Clean things up a bit by moving the bare minimum of what is required next to the actual users, and drop the include file. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-12-02  arm64: smp: Prevent raw_smp_processor_id() recursion  (Robin Murphy; 1 file, -1/+3)
Under CONFIG_DEBUG_PREEMPT=y, this_cpu_ptr() ends up calling back into raw_smp_processor_id(), resulting in some hilariously catastrophic infinite recursion. In the normal case, we have:

    #define this_cpu_ptr(ptr) raw_cpu_ptr(ptr)

and everything is dandy. However for CONFIG_DEBUG_PREEMPT, this_cpu_ptr() is defined in terms of my_cpu_offset, wherein the fun begins:

    #define my_cpu_offset per_cpu_offset(smp_processor_id())
    ...
    #define smp_processor_id() debug_smp_processor_id()
    ...
    notrace unsigned int debug_smp_processor_id(void)
    {
            return check_preemption_disabled("smp_processor_id", "");
    ...
    notrace static unsigned int check_preemption_disabled(const char *what1,
                                                          const char *what2)
    {
            int this_cpu = raw_smp_processor_id();

and bang. Use raw_cpu_ptr() directly to avoid that.

Fixes: 57c82954e77f ("arm64: make cpu number a percpu variable")
Reported-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
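The fix itself is a one-liner; a sketch of the change in asm/smp.h, assuming the cpu_number percpu variable introduced by the Fixes: commit:

    /* before: expands to debug_smp_processor_id() under CONFIG_DEBUG_PREEMPT */
    #define raw_smp_processor_id() (*this_cpu_ptr(&cpu_number))

    /* after: raw_cpu_ptr() never calls back into smp_processor_id() */
    #define raw_smp_processor_id() (*raw_cpu_ptr(&cpu_number))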
2016-11-29  Merge Will Deacon's for-next/perf branch into for-next/core  (Catalin Marinas; 7 files, -43/+372)
* will/for-next/perf:
    selftests: arm64: add test for unaligned/inexact watchpoint handling
    arm64: Allow hw watchpoint of length 3,5,6 and 7
    arm64: hw_breakpoint: Handle inexact watchpoint addresses
    arm64: Allow hw watchpoint at varied offset from base address
    hw_breakpoint: Allow watchpoint of length 3,5,6 and 7
2016-11-29  arm64: head.S: Fix CNTHCTL_EL2 access on VHE system  (Jintack Lim; 1 file, -1/+12)
The bit positions of CNTHCTL_EL2 change depending on the HCR_EL2.E2H bit: EL1PCEN and EL1PCTEN are bits 1 and 0 when E2H is not set, but bits 11 and 10 respectively when E2H is set. The current code unintentionally sets the wrong bits in CNTHCTL_EL2 when E2H is set. In fact, we don't need to set those two bits, which allow EL1 and EL0 to access the physical timer and counter respectively, if E2H and TGE are set for the host kernel; they will be configured later as necessary. First, we don't need to configure those bits for EL1, since the host kernel runs in EL2: it is the hypervisor's responsibility to configure them before entering a VM, which runs in EL0 and EL1. Second, EL0 accesses are configured in a later stage of the boot process. Signed-off-by: Jintack Lim <jintack@cs.columbia.edu> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
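For reference, a sketch of the two layouts; the bit positions are taken from the commit message, while the macro names are purely illustrative:

    /* HCR_EL2.E2H == 0 */
    #define CNTHCTL_EL1PCTEN	(1 << 0)	/* EL1/EL0 physical counter access */
    #define CNTHCTL_EL1PCEN	(1 << 1)	/* EL1/EL0 physical timer access */

    /* HCR_EL2.E2H == 1 (VHE): same controls, different bit positions */
    #define CNTHCTL_E2H_EL1PCTEN	(1 << 10)
    #define CNTHCTL_E2H_EL1PCEN	(1 << 11)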
2016-11-23  arm64: Remove I-cache invalidation from flush_cache_range()  (Catalin Marinas; 2 files, -8/+5)
The flush_cache_range() function (and similarly flush_cache_page()) is called when the kernel is changing an existing VA->PA mapping range to either a new PA or to different attributes. Since ARMv8 has PIPT-like D-caches, this function does not need to perform any D-cache maintenance. The I-cache maintenance is already handled via set_pte_at(), and flush_cache_range() cannot guarantee anyway, due to speculative loads, that no cache lines are left after invalidation. This patch makes flush_cache_range() a no-op. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-23  arm64: Enable HIBERNATION in defconfig  (Catalin Marinas; 1 file, -0/+1)
This patch adds CONFIG_HIBERNATION to the arm64 defconfig. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-21  arm64: Enable CONFIG_ARM64_SW_TTBR0_PAN  (Catalin Marinas; 1 file, -0/+8)
This patch adds the Kconfig option to enable support for TTBR0 PAN emulation. The option is off by default because of a slight performance hit when enabled, caused by the additional TTBR0_EL1 switching during user access operations and exception entry/exit code. Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-21  arm64: xen: Enable user access before a privcmd hvc call  (Catalin Marinas; 1 file, -0/+15)
Privcmd calls are issued by userspace. The kernel needs to enable access to TTBR0_EL1 since the hypervisor would issue stage 1 translations to user memory via AT instructions. Since AT instructions are not affected by the PAN bit (ARMv8.1), we only need the explicit uaccess_enable/disable if the TTBR0 PAN option is enabled. Reviewed-by: Julien Grall <julien.grall@arm.com> Acked-by: Stefano Stabellini <sstabellini@kernel.org> Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-21  arm64: Handle faults caused by inadvertent user access with PAN enabled  (Catalin Marinas; 1 file, -4/+10)
When TTBR0_EL1 is set to the reserved page, an erroneous kernel access to user space generates a translation fault. This patch adds checks for the software-set PSR_PAN_BIT to emulate a permission fault and report it accordingly. Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-21  arm64: Disable TTBR0_EL1 during normal kernel execution  (Catalin Marinas; 7 files, -20/+153)
When the TTBR0 PAN feature is enabled, the kernel entry points need to disable access to TTBR0_EL1. The PAN status of the interrupted context is stored as part of the saved pstate, reusing the PSR_PAN_BIT (22). Restoring access to TTBR0_EL1 is done on exception return if returning to user or returning to a context where PAN was disabled. Context switching via switch_mm() must defer the update of TTBR0_EL1 until a return to user or an explicit uaccess_enable() call. Special care needs to be taken for two cases where TTBR0_EL1 is set outside the normal kernel context switch operation: EFI run-time services (via efi_set_pgd) and CPU suspend (via cpu_(un)install_idmap). Code has been added to avoid deferred TTBR0_EL1 switching as in switch_mm() and restore the reserved TTBR0_EL1 when uninstalling the special TTBR0_EL1. User cache maintenance (user_cache_maint_handler and __flush_cache_user_range) needs the TTBR0_EL1 re-instated since the operations are performed by user virtual address. This patch also removes a stale comment on the switch_mm() function. Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-21  arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1  (Catalin Marinas; 10 files, -13/+146)
This patch adds the uaccess macros/functions to disable access to user space by setting TTBR0_EL1 to a reserved zeroed page. Since the value written to TTBR0_EL1 must be a physical address, for simplicity this patch introduces a reserved_ttbr0 page at a constant offset from swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value adjusted by the reserved_ttbr0 offset. Enabling access to user is done by restoring TTBR0_EL1 with the value from the struct thread_info ttbr0 variable. Interrupts must be disabled during the uaccess_ttbr0_enable code to ensure the atomicity of the thread_info.ttbr0 read and TTBR0_EL1 write. This patch also moves the get_thread_info asm macro from entry.S to assembler.h for reuse in the uaccess_ttbr0_* macros. Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
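A C-level sketch of the two operations described above; RESERVED_TTBR0_OFFSET is an assumed name for the constant offset, and the real series also provides assembly variants:

    static inline void uaccess_ttbr0_disable(void)
    {
    	unsigned long ttbr;

    	/* reserved_ttbr0 sits at a fixed offset from swapper_pg_dir, so
    	 * its physical address can be derived from the TTBR1_EL1 value
    	 * without any memory load */
    	ttbr = read_sysreg(ttbr1_el1) + RESERVED_TTBR0_OFFSET;
    	write_sysreg(ttbr, ttbr0_el1);
    	isb();
    }

    static inline void uaccess_ttbr0_enable(void)
    {
    	unsigned long flags;

    	/* IRQs off so the thread_info.ttbr0 read and the TTBR0_EL1
    	 * write appear atomic with respect to a context switch */
    	local_irq_save(flags);
    	write_sysreg(current_thread_info()->ttbr0, ttbr0_el1);
    	isb();
    	local_irq_restore(flags);
    }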
2016-11-21  arm64: Factor out TTBR0_EL1 post-update workaround into a specific asm macro  (Catalin Marinas; 2 files, -5/+14)
This patch takes the errata workaround code out of cpu_do_switch_mm into a dedicated post_ttbr0_update_workaround macro which will be reused in a subsequent patch. Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Kees Cook <keescook@chromium.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-21  arm64: Factor out PAN enabling/disabling into separate uaccess_* macros  (Catalin Marinas; 7 files, -58/+93)
This patch moves the directly coded alternatives for turning PAN on/off into separate uaccess_{enable,disable} macros or functions. The asm macros take a few arguments which will be used in subsequent patches. Note that any (unlikely) access that the compiler might generate between uaccess_enable() and uaccess_disable(), other than those explicitly specified by the user access code, will not be protected by PAN. Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Kees Cook <keescook@chromium.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
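For the PAN-only case, the C variants reduce to alternative-patched PSTATE writes; a sketch, assuming the existing SET_PSTATE_PAN() and ALTERNATIVE() macros:

    static inline void uaccess_disable(void)
    {
    	/* patched to "msr pan, #1" when ARM64_HAS_PAN is detected */
    	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
    			CONFIG_ARM64_PAN));
    }

    static inline void uaccess_enable(void)
    {
    	/* patched to "msr pan, #0"; user accesses follow */
    	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,
    			CONFIG_ARM64_PAN));
    }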
2016-11-21  arm64: Update the synchronous external abort fault description  (Catalin Marinas; 1 file, -4/+4)
This patch updates the description of the synchronous external aborts on translation table walks. Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Kees Cook <keescook@chromium.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-18  selftests: arm64: add test for unaligned/inexact watchpoint handling  (Pratyush Anand; 2 files, -1/+240)
ARM64 hardware expects a 64-bit-aligned address for watchpoint invocation. However, it provides a byte selection method to select any set of 1-8 consecutive bytes. This patch adds support to test all such byte selection options for different memory write sizes. The patch also adds a test for handling the case when the CPU does not report an address which exactly matches one of the regions we have been watching (a situation permitted by the spec if an instruction accesses both watched and unwatched regions). The test was failing on an MSM8996pro before this patch series and is passing now. Signed-off-by: Pavel Labath <labath@google.com> Signed-off-by: Pratyush Anand <panand@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-11-18  arm64: Allow hw watchpoint of length 3,5,6 and 7  (Pratyush Anand; 2 files, -0/+40)
Since arm64 can support a watchpoint at any byte offset within a doubleword, support the remaining lengths within that range as well. Signed-off-by: Pratyush Anand <panand@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-11-18  arm64: hw_breakpoint: Handle inexact watchpoint addresses  (Pavel Labath; 1 file, -27/+69)
Arm64 hardware does not always report a watchpoint hit address that matches one of the watchpoints set. It can also report an address "near" the watchpoint if a single instruction accesses both watched and unwatched addresses. There is no straightforward way, short of disassembling the offending instruction, to map that address back to the watchpoint. Previously, when the hardware reported a watchpoint hit on an address that did not match our watchpoint (this happens with instructions which access large chunks of memory, such as "stp"), the process would enter a loop where we would continually resume it (because we did not recognise the watchpoint hit) and it would keep hitting the watchpoint again and again. The tracing process would never get notified of the watchpoint hit. This commit fixes the problem by looking at the watchpoints near the address reported by the hardware. If the address does not exactly match one of the watchpoints we have set, it attributes the hit to the nearest watchpoint we have. This heuristic is a bit dodgy, but I don't think we can do much more, given the hardware limitations. Signed-off-by: Pavel Labath <labath@google.com> [panand: reworked to rebase on his patches] Signed-off-by: Pratyush Anand <panand@redhat.com> [will: use __ffs instead of ffs - 1] Signed-off-by: Will Deacon <will.deacon@arm.com>
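A sketch of the distance computation the heuristic relies on; the byte-address-select mask in ctrl->len encodes which bytes of the doubleword are watched, and __ffs()/__fls() recover the first and last of them:

    static u64 get_distance_from_watchpoint(unsigned long addr, u64 val,
    					struct arch_hw_breakpoint_ctrl *ctrl)
    {
    	u64 wp_low, wp_high;
    	u32 lens, lene;

    	lens = __ffs(ctrl->len);	/* first watched byte */
    	lene = __fls(ctrl->len);	/* last watched byte */

    	wp_low = val + lens;
    	wp_high = val + lene;
    	if (addr < wp_low)
    		return wp_low - addr;
    	else if (addr > wp_high)
    		return addr - wp_high;
    	else
    		return 0;		/* exact hit */
    }

The handler then attributes the hit to the watchpoint with the smallest distance.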
2016-11-18  arm64: Allow hw watchpoint at varied offset from base address  (Pratyush Anand; 3 files, -28/+28)
ARM64 hardware supports watchpoints at any doubleword-aligned address. However, it can select any set of consecutive bytes from offset 0 to 7 within that doubleword. For example, if the base address is programmed as 0x420030 and the byte select is 0x1C, then accesses to 0x420032, 0x420033 and 0x420034 will generate a watchpoint exception. Currently, we do not have such flexibility: we can only program byte, halfword, word and doubleword access exceptions from the base address. This patch adds support to overcome these limitations. Signed-off-by: Pratyush Anand <panand@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
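The encoding behind the example is simple arithmetic; a hypothetical helper, just to illustrate how the byte-select mask is formed:

    /* Watch 'len' consecutive bytes starting 'offset' bytes past a
     * doubleword-aligned base; returns the byte-address-select bits. */
    static u32 watchpoint_bas(unsigned int offset, unsigned int len)
    {
    	return ((1U << len) - 1) << offset;
    }

    /* base 0x420030, bytes 0x420032-0x420034: watchpoint_bas(2, 3) == 0x1C */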
2016-11-18  hw_breakpoint: Allow watchpoint of length 3,5,6 and 7  (Pratyush Anand; 2 files, -0/+8)
We only support breakpoints/watchpoints of length 1, 2, 4 and 8. If we can support other lengths as well, then the user may watch more data with fewer watchpoints (provided the hardware supports it). For example: to watch only the 4th, 5th and 6th bytes from a 64-bit-aligned address, we currently have to use two slots, one watching a halfword at offset 4 and the other a byte at offset 6. With a watchpoint of length 3, a single slot suffices. ARM64 hardware does support such functionality, therefore add these new definitions in the generic layer. Signed-off-by: Pratyush Anand <panand@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
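The generic additions are plain length encodings alongside the existing 1/2/4/8 definitions; a sketch of the uapi change:

    /* include/uapi/linux/hw_breakpoint.h (sketch) */
    #define HW_BREAKPOINT_LEN_3	3
    #define HW_BREAKPOINT_LEN_5	5
    #define HW_BREAKPOINT_LEN_6	6
    #define HW_BREAKPOINT_LEN_7	7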
2016-11-16  arm64: Support systems without FP/ASIMD  (Suzuki K Poulose; 7 files, -4/+65)
The arm64 kernel assumes that FP/ASIMD units are always present and accesses the FP/ASIMD specific registers unconditionally. This could cause problems when they are absent. This patch adds support for handling systems without FP/ASIMD by skipping the register accesses within the kernel. For KVM, we trap accesses to FP/ASIMD and inject an undefined instruction exception into the VM. Callers of the exported kernel_neon_begin_partial() should make sure that FP/ASIMD is supported. Cc: Will Deacon <will.deacon@arm.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> [catalin.marinas@arm.com: add comment on the ARM64_HAS_NO_FPSIMD conflict and the new location] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-16  arm64: Add hypervisor safe helper for checking constant capabilities  (Suzuki K Poulose; 4 files, -21/+15)
The hypervisor may not have full access to the kernel data structures and hence cannot safely use the cpus_have_cap() helper for checking a system capability. Add a safe helper for hypervisors to check a constant system capability, which *doesn't* fall back to checking the bitmap maintained by the kernel. With this, make cpus_have_cap() check only the bitmap, and force constant cap checks to use the new API for quicker checks. Cc: Robert Ritcher <rritcher@cavium.com> Cc: Tirumalesh Chalamarla <tchalamarla@cavium.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
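A sketch of the constant-cap helper, assuming the existing cpu_hwcap_keys static keys; the point is that it never touches the capability bitmap held in kernel memory:

    static inline bool cpus_have_const_cap(int num)
    {
    	if (num >= ARM64_NCAPS)
    		return false;
    	/* static keys are patched into the code at boot, so this is
    	 * safe to evaluate from hyp without access to kernel data */
    	return static_branch_unlikely(&cpu_hwcap_keys[num]);
    }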
2016-11-11  arm64: split thread_info from task stack  (Mark Rutland; 10 files, -54/+73)
This patch moves arm64's struct thread_info from the task stack into task_struct. This protects thread_info from corruption in the case of stack overflows, and makes its address harder to determine if stack addresses are leaked, making a number of attacks more difficult. Precise detection and handling of overflow is left for subsequent patches. Largely, this involves changing code to store the task_struct in sp_el0, and acquire the thread_info from the task struct. Core code now implements current_thread_info(), and as noted in <linux/sched.h> this relies on offsetof(task_struct, thread_info) == 0, enforced by core code. This change means that the 'tsk' register used in entry.S now points to a task_struct, rather than a thread_info as it used to. To make this clear, the TI_* field offsets are renamed to TSK_TI_*, with asm-offsets appropriately updated to account for the structural change. Userspace clobbers sp_el0, and we can no longer restore this from the stack. Instead, the current task is cached in a per-cpu variable that we can safely access from early assembly as interrupts are disabled (and we are thus not preemptible). Both secondary entry and idle are updated to stash the sp and task pointer separately. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Laura Abbott <labbott@redhat.com> Cc: AKASHI Takahiro <takahiro.akashi@linaro.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: James Morse <james.morse@arm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
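Because core code enforces offsetof(task_struct, thread_info) == 0, the generic accessor can be a plain cast; a sketch:

    /* <linux/thread_info.h>, CONFIG_THREAD_INFO_IN_TASK (sketch) */
    #define current_thread_info() ((struct thread_info *)current)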
2016-11-11  arm64: assembler: introduce ldr_this_cpu  (Mark Rutland; 2 files, -5/+16)
Shortly we will want to load a percpu variable in the return from userspace path. We can save an instruction by folding the addition of the percpu offset into the load instruction, and this patch adds a new helper to do so. At the same time, we clean up this_cpu_ptr for consistency. As with {adr,ldr,str}_l, we change the template to take the destination register first, and name this dst. Secondly, we rename the macro to adr_this_cpu, following the scheme of adr_l, and matching the newly added ldr_this_cpu. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Laura Abbott <labbott@redhat.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-11  arm64: make cpu number a percpu variable  (Mark Rutland; 2 files, -1/+15)
In the absence of CONFIG_THREAD_INFO_IN_TASK, core code maintains thread_info::cpu, and low-level architecture code can access this to build raw_smp_processor_id(). With CONFIG_THREAD_INFO_IN_TASK, core code maintains task_struct::cpu, which for reasons of header soup is not accessible to low-level arch code. Instead, we can maintain a percpu variable containing the cpu number. For both the old and new implementation of raw_smp_processor_id(), we read a sysreg into a GPR, add an offset, and load the result. As the offset is now larger, it may not be folded into the load, but otherwise the assembly shouldn't change much. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Laura Abbott <labbott@redhat.com> Cc: James Morse <james.morse@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-11  arm64: smp: prepare for smp_processor_id() rework  (Mark Rutland; 1 file, -3/+4)
Subsequent patches will make smp_processor_id() use a percpu variable. This will make smp_processor_id() dependent on the percpu offset, and thus we cannot use smp_processor_id() to figure out what to initialise the offset to. Prepare for this by initialising the percpu offset based on current::cpu, which will work regardless of how smp_processor_id() is implemented. Also, make this relationship obvious by placing this code together at the start of secondary_start_kernel(). Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Laura Abbott <labbott@redhat.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-11  arm64: move sp_el0 and tpidr_el1 into cpu_suspend_ctx  (Mark Rutland; 4 files, -10/+7)
When returning from idle, we rely on the fact that thread_info lives at the end of the kernel stack, and restore this by masking the saved stack pointer. Subsequent patches will sever the relationship between the stack and thread_info, and to cater for this we must save/restore sp_el0 explicitly, storing it in cpu_suspend_ctx. As cpu_suspend_ctx must be doubleword aligned, this leaves us with an extra slot in cpu_suspend_ctx. We can use this to save/restore tpidr_el1 in the same way, which simplifies the code, avoiding pointer chasing on the restore path (as we no longer need to load thread_info::cpu followed by the relevant slot in __per_cpu_offset based on this). This patch stashes both registers in cpu_suspend_ctx. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Laura Abbott <labbott@redhat.com> Cc: James Morse <james.morse@arm.com> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-11  arm64: prep stack walkers for THREAD_INFO_IN_TASK  (Mark Rutland; 3 files, -6/+24)
When CONFIG_THREAD_INFO_IN_TASK is selected, task stacks may be freed before a task is destroyed. To account for this, the stacks are refcounted, and when manipulating the stack of another task, it is necessary to get/put the stack to ensure it isn't freed and/or re-used while we do so. This patch reworks the arm64 stack walking code to account for this. When CONFIG_THREAD_INFO_IN_TASK is not selected these perform no refcounting, and this should only be a structural change that does not affect behaviour. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Laura Abbott <labbott@redhat.com> Cc: AKASHI Takahiro <takahiro.akashi@linaro.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
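The resulting pattern for walking another task's stack looks roughly like this; try_get_task_stack() and put_task_stack() collapse to no-ops when THREAD_INFO_IN_TASK is not selected:

    if (!try_get_task_stack(tsk))
    	return;		/* the task's stack has already been freed */

    walk_stackframe(tsk, &frame, fn, data);

    put_task_stack(tsk);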
2016-11-11  arm64: unexport walk_stackframe  (Mark Rutland; 1 file, -1/+0)
The walk_stackframe function is architecture-specific, with a varying prototype, and common code should not use it directly. None of its current users can be built as modules. With THREAD_INFO_IN_TASK, users will also need to hold a stack reference before calling it. There's no reason for it to be exported, and it's very easy to misuse, so unexport it for now. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-11  arm64: traps: simplify die() and __die()  (Mark Rutland; 1 file, -7/+6)
In arm64's die and __die routines we pass around a thread_info, and subsequently use this to determine the relevant task_struct, and the end of the thread's stack. Subsequent patches will decouple thread_info from the stack, and this approach will no longer work. To figure out the end of the stack, we can use the new generic end_of_stack() helper. As we only call __die() from die(), and die() always deals with the current task, we can remove the parameter and have both acquire current directly, which also makes it clear that __die can't be called for arbitrary tasks. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Laura Abbott <labbott@redhat.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-11  arm64: factor out current_stack_pointer  (Mark Rutland; 7 files, -5/+17)
We define current_stack_pointer in <asm/thread_info.h>, though other files and headers relying upon it do not have this necessary include, and are thus fragile to changes in the header soup. Subsequent patches will affect the header soup such that directly including <asm/thread_info.h> may result in a circular header include in some of these cases, so we can't simply include <asm/thread_info.h>. Instead, factor current_stack_pointer into its own header, and have all existing users include this explicitly. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Laura Abbott <labbott@redhat.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-11  arm64: asm-offsets: remove unused definitions  (Mark Rutland; 1 file, -2/+0)
Subsequent patches will move the thread_info::{task,cpu} fields, and the current TI_{TASK,CPU} offset definitions are not used anywhere. This patch removes the redundant definitions. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Laura Abbott <labbott@redhat.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-11  arm64: thread_info remove stale items  (Mark Rutland; 1 file, -2/+0)
We have a comment claiming __switch_to() cares about where cpu_context is located relative to cpu_domain in thread_info. However arm64 has never had a thread_info::cpu_domain field, and neither __switch_to nor cpu_switch_to care where the cpu_context field is relative to others. Additionally, the init_thread_info alias is never used anywhere in the kernel, and will shortly become problematic when thread_info is moved into task_struct. This patch removes both. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Laura Abbott <labbott@redhat.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-11  thread_info: include <current.h> for THREAD_INFO_IN_TASK  (Mark Rutland; 1 file, -0/+6)
When CONFIG_THREAD_INFO_IN_TASK is selected, the current_thread_info() macro relies on current having been defined prior to its use. However, not all users of current_thread_info() include <asm/current.h>, and thus current is not guaranteed to be defined. When CONFIG_THREAD_INFO_IN_TASK is not selected, it's possible that get_current() / current are based upon current_thread_info(), and <asm/current.h> includes <asm/thread_info.h>. Thus always including <asm/current.h> would result in circular dependences on some platforms. To ensure both cases work, this patch includes <asm/current.h>, but only when CONFIG_THREAD_INFO_IN_TASK is selected. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> Reviewed-by: Andy Lutomirski <luto@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Kees Cook <keescook@chromium.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-11  thread_info: factor out restart_block  (Mark Rutland; 2 files, -40/+52)
Since commit f56141e3e2d9aabf ("all arches, signal: move restart_block to struct task_struct"), thread_info and restart_block have been logically distinct, yet struct restart_block is still defined in <linux/thread_info.h>. At least one architecture (erroneously) uses restart_block as part of its thread_info, and thus the definition of restart_block must come before the include of <asm/thread_info.h>. Subsequent patches in this series need to shuffle the order of includes and definitions in <linux/thread_info.h>, and will make this ordering fragile. This patch moves the definition of restart_block out to its own header. This serves as generic cleanup, logically separating thread_info and restart_block, and also makes it easier to avoid fragility. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Andy Lutomirski <luto@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Kees Cook <keescook@chromium.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-09  arm64: percpu: kill off final ACCESS_ONCE() uses  (Mark Rutland; 1 file, -8/+8)
For several reasons it is preferable to use {READ,WRITE}_ONCE() rather than ACCESS_ONCE(). For example, these handle aggregate types, result in shorter source code, and better document the intended access (which may be useful for instrumentation features such as the upcoming KTSAN). Over a number of patches, most uses of ACCESS_ONCE() in arch/arm64 have been migrated to {READ,WRITE}_ONCE(). For consistency, and the above reasons, this patch migrates the final remaining uses. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Acked-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
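The conversion is mechanical; for example:

    /* before */
    ACCESS_ONCE(*ptr) = val;
    tmp = ACCESS_ONCE(*ptr);

    /* after: handles aggregate types and documents the access direction */
    WRITE_ONCE(*ptr, val);
    tmp = READ_ONCE(*ptr);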
2016-11-09  arm64: hugetlb: fix the wrong address for several functions  (Huang Shijie; 1 file, -4/+4)
libhugetlbfs hits several failures since the following functions do not use the correct address:

    huge_ptep_get_and_clear()
    huge_ptep_set_access_flags()
    huge_ptep_set_wrprotect()
    huge_ptep_clear_flush()

This patch fixes the wrong address for them. Signed-off-by: Huang Shijie <shijie.huang@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-09  arm64: hugetlb: remove the wrong pmd check in find_num_contig()  (Huang Shijie; 1 file, -12/+0)
find_num_contig() returns 1 when the pmd is not present, which causes a kernel dead loop in the following scenario:

 1) The pmd entry is not present.
 2) A page fault occurs:
        hugetlb_fault() --> hugetlb_no_page() --> set_huge_pte_at()
 3) set_huge_pte_at() only sets the first PMD entry, since find_num_contig() just returns 1 in this case. So the PMD entries are all empty except the first one.
 4) When the kernel accesses the address mapped by the second PMD entry, a new page fault occurs:
        hugetlb_fault() --> huge_ptep_set_access_flags()
    The second PMD entry is still empty now.
 5) When the kernel returns, the access causes a page fault again, and the kernel behaves as in 4) above. We have a dead loop from here on.

The dead loop was caught with the 32M hugetlb page (2M PMD + contiguous bit). This patch removes the wrong pmd check, fixing the dead loop. It also removes the redundant checks for PGD/PUD in find_num_contig(). Acked-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Huang Shijie <shijie.huang@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-09  arm64: Fix typo in add_default_hugepagesz() for 64K pages  (Catalin Marinas; 1 file, -1/+1)
The default hugepage size when 64K pages are enabled is set to 2MB using the contiguous PTE bit. The add_default_hugepagesz(), however, uses CONT_PMD_SHIFT instead of CONT_PTE_SHIFT. There is no functional change since the values are the same. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-07  arm64: fix error: conflicting types for 'kprobe_fault_handler'  (Pratyush Anand; 1 file, -1/+0)
When CONFIG_KPROBES is disabled but CONFIG_UPROBE_EVENT is enabled, we get the following compilation error:

    In file included from .../arch/arm64/kernel/probes/decode-insn.c:20:0:
    .../arch/arm64/include/asm/kprobes.h:52:5: error: conflicting types for 'kprobe_fault_handler'
     int kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr);
         ^~~~~~~~~~~~~~~~~~~~
    In file included from .../arch/arm64/kernel/probes/decode-insn.c:17:0:
    .../include/linux/kprobes.h:398:90: note: previous definition of 'kprobe_fault_handler' was here
     static inline int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
                                                                              ^
    .../scripts/Makefile.build:290: recipe for target 'arch/arm64/kernel/probes/decode-insn.o' failed

<asm/kprobes.h> is already included from <linux/kprobes.h> under #ifdef CONFIG_KPROBES. So, this patch fixes the error by removing the include from decode-insn.c. Reported-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Pratyush Anand <panand@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-07  arm64: Add uprobe support  (Pratyush Anand; 10 files, -2/+277)
This patch adds support for uprobes on the ARM64 architecture. Unit tests for the following have been carried out so far and pass:

 1. Step-able instructions, like sub, ldr, add etc.
 2. Simulation-able instructions, like ret, cbnz, cbz etc.
 3. uretprobe
 4. Reject-able instructions, like sev, wfe etc.
 5. Trapped and aborted xol path
 6. Probe at an unaligned user address
 7. longjump test cases

Currently it does not support aarch32 instruction probing. Signed-off-by: Pratyush Anand <panand@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-07  arm64: introduce mm context flag to keep 32 bit task information  (Pratyush Anand; 2 files, -2/+11)
In some cases, such as uprobe instruction analysis, we need to decide whether the current mm context belongs to a 32-bit or a 64-bit task. This patch introduces an unsigned flags variable in mm_context_t. Currently, we set and clear TIF_32BIT depending on whether an ELF binary load sets the personality for 32 bit or 64 bit respectively. Signed-off-by: Pratyush Anand <panand@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
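A sketch of the addition; the field placement and the flag name MMCF_AARCH32 are assumptions based on the description:

    typedef struct {
    	atomic64_t	id;
    	void		*vdso;
    	unsigned long	flags;		/* new */
    } mm_context_t;

    #define MMCF_AARCH32	0x1	/* mm context loaded a 32-bit binary */

Code such as uprobe instruction analysis can then test mm->context.flags rather than a per-thread flag, which also works when the analysis runs outside the task's own context.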
2016-11-07  arm64: Handle TRAP_BRKPT for user mode as well  (Pratyush Anand; 1 file, -7/+11)
uprobe is registered at break_hook with a unique ESR code. So, when a TRAP_BRKPT occurs, call_break_hook checks whether it was for uprobe. If not, a SIGTRAP is sent to the user. Signed-off-by: Pratyush Anand <panand@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-07  arm64: Handle TRAP_TRACE for user mode as well  (Pratyush Anand; 1 file, -10/+12)
uprobe registers a handler at step_hook. So, single_step_handler now checks for user mode as well if there is a valid hook. Signed-off-by: Pratyush Anand <panand@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-07  arm64: kgdb_step_brk_fn: ignore other's exception  (Pratyush Anand; 1 file, -0/+3)
The ARM64 step exception does not carry any syndrome information, so it is the responsibility of each exception handler to make sure it handles the exception only if it was raised for it. Since kgdb_step_brk_fn() always returns 0, we would have a problem once another step handler is registered as well. This patch fixes kgdb_step_brk_fn() to return an error when the step exception was not meant for kgdb. Signed-off-by: Pratyush Anand <panand@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-07  arm64: kprobe: protect/rename few definitions to be reused by uprobe  (Pratyush Anand; 4 files, -43/+52)
The decode-insn code has to be reused by the arm64 uprobe implementation as well. Therefore, this patch protects some portions of the kprobe code and renames a few others, so that the decode-insn functionality can be reused by uprobe even when CONFIG_KPROBES is not defined. kprobe_opcode_t and struct arch_specific_insn are also defined by linux/kprobes.h when CONFIG_KPROBES is not defined, so protect these definitions in asm/probes.h. linux/kprobes.h already includes asm/kprobes.h; therefore, remove the inclusion of asm/kprobes.h from decode-insn.c. Some definitions, like kprobe_insn and kprobes_handler_t, can be reused by uprobe, so it is better to drop the 'k' from their names. struct arch_specific_insn is specific to kprobe; therefore, introduce a new struct arch_probe_insn, common to both kprobe and uprobe, so that the decode-insn code can be shared. Modify the kprobe code accordingly. The arm_probe_decode_insn() function will be needed by uprobe as well, so make it global. Signed-off-by: Pratyush Anand <panand@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-07  arm64: dump: Add checking for writable and executable pages  (Laura Abbott; 4 files, -0/+94)
Page mappings with full RWX permissions are a security risk. x86 has an option to walk the page tables and dump any bad pages. (See e1a58320a38d ("x86/mm: Warn on W^X mappings")). Add a similar implementation for arm64. Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <labbott@redhat.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [catalin.marinas@arm.com: folded fix for KASan out of bounds from Mark Rutland] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
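The core of such a checker is a predicate on the accumulated prot bits; a sketch, with names borrowed from the arm64 page-table dumper (st tracks the current contiguous range; the field and flag names are assumptions):

    /* Warn about a range that is neither read-only nor execute-never */
    static void note_prot_wx(struct pg_state *st, unsigned long addr)
    {
    	if (!st->check_wx)
    		return;
    	if ((st->current_prot & PTE_RDONLY) == PTE_RDONLY)
    		return;
    	if ((st->current_prot & PTE_PXN) == PTE_PXN)
    		return;

    	WARN_ONCE(1, "arm64/mm: Found insecure W+X mapping at address %p\n",
    		  (void *)st->start_address);
    	st->wx_pages += (addr - st->start_address) / PAGE_SIZE;
    }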
2016-11-07  arm64: dump: Remove max_addr  (Laura Abbott; 1 file, -1/+0)
max_addr was added as part of struct ptdump_info but has never actually been used. Remove it. Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>