path: root/arch/arm64/include
Age    Commit message    Author    Files    Lines
2020-11-30arm64: entry: fix NMI {user, kernel}->kernel transitionsMark Rutland1-0/+2
Exceptions which can be taken at (almost) any time are considered to be NMIs. On arm64 that includes: * SDEI events * GICv3 Pseudo-NMIs * Kernel stack overflows * Unexpected/unhandled exceptions ... but currently debug exceptions (BRKs, breakpoints, watchpoints, single-step) are not considered NMIs. As these can be taken at any time, kernel features (lockdep, RCU, ftrace) may not be in a consistent kernel state. For example, we may take an NMI from the idle code or partway through an entry/exit path. While nmi_enter() and nmi_exit() handle most of this state, notably they don't save/restore the lockdep state across an NMI being taken and handled. When interrupts are enabled and an NMI is taken, lockdep may see interrupts become disabled within the NMI code, but not see interrupts become enabled when returning from the NMI, leaving lockdep believing interrupts are disabled when they are actually enabled. The x86 code handles this in idtentry_{enter,exit}_nmi(), which will shortly be moved to the generic entry code. As we can't use either yet, we copy the x86 approach in arm64-specific helpers. All the NMI entrypoints are marked as noinstr to prevent any instrumentation handling code being invoked before the state has been corrected. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201130115950.22492-11-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
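For reference, the arm64 helpers mirror the x86 NMI sequence described above; a rough sketch (the regs->lockdep_hardirqs field is the pt_regs state added by the preparation patch further down this log, and details may differ from the final code):

noinstr void arm64_enter_nmi(struct pt_regs *regs)
{
	/* Stash the pre-NMI lockdep IRQ state so it can be restored on exit. */
	regs->lockdep_hardirqs = lockdep_hardirqs_enabled();

	__nmi_enter();
	lockdep_hardirqs_off(CALLER_ADDR0);
	lockdep_hardirq_enter();
	rcu_nmi_enter();

	trace_hardirqs_off_finish();
	ftrace_nmi_enter();
}

noinstr void arm64_exit_nmi(struct pt_regs *regs)
{
	bool restore = regs->lockdep_hardirqs;

	ftrace_nmi_exit();
	if (restore) {
		trace_hardirqs_on_prepare();
		lockdep_hardirqs_on_prepare(CALLER_ADDR0);
	}

	rcu_nmi_exit();
	lockdep_hardirq_exit();
	if (restore)
		lockdep_hardirqs_on(CALLER_ADDR0);
	__nmi_exit();
}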
2020-11-30arm64: entry: fix non-NMI kernel<->kernel transitionsMark Rutland1-0/+3
There are periods in kernel mode when RCU is not watching and/or the scheduler tick is disabled, but we can still take exceptions such as interrupts. The arm64 exception handlers do not account for this, and it's possible that RCU is not watching while an exception handler runs. The x86/generic entry code handles this by ensuring that all (non-NMI) kernel exception handlers call irqentry_enter() and irqentry_exit(), which handle RCU, lockdep, and IRQ flag tracing. We can't yet move to the generic entry code, and already handle the user<->kernel transitions elsewhere, so we add new kernel<->kernel transition helpers along the lines of the generic entry code. Since we now track interrupts becoming masked when an exception is taken, local_daif_inherit() is modified to track interrupts becoming re-enabled when the original context is inherited. To balance the entry/exit paths, each handler masks all DAIF exceptions before exit_to_kernel_mode(). Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201130115950.22492-10-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
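A sketch of the entry-side helper, modelled on the generic irqentry_enter() behaviour described above (the exit_rcu field is the pt_regs state added by the preparation patch below; exit_to_kernel_mode() undoes this in reverse, and the exact code may differ):

static void noinstr enter_from_kernel_mode(struct pt_regs *regs)
{
	regs->exit_rcu = false;

	/*
	 * If the exception interrupted a context where RCU was not watching
	 * (e.g. the idle loop), tell RCU to start watching and remember to
	 * undo this when returning to the interrupted context.
	 */
	if (!IS_ENABLED(CONFIG_TINY_RCU) && is_idle_task(current)) {
		lockdep_hardirqs_off(CALLER_ADDR0);
		rcu_irq_enter();
		trace_hardirqs_off_finish();

		regs->exit_rcu = true;
		return;
	}

	lockdep_hardirqs_off(CALLER_ADDR0);
	rcu_irq_enter_check_tick();
	trace_hardirqs_off_finish();
}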
2020-11-30arm64: ptrace: prepare for EL1 irq/rcu trackingMark Rutland1-0/+4
Exceptions from EL1 may be taken when RCU isn't watching (e.g. in idle sequences), or when lockdep's hardirq tracking is transiently out-of-sync with the hardware state (e.g. in the middle of local_irq_enable()). To correctly handle these cases, we'll need to save/restore this state across some exceptions taken from EL1. A series of subsequent patches will update EL1 exception handlers to handle this. In preparation for this, and to avoid dependencies between those patches, this patch adds two new fields to struct pt_regs so that exception handlers can track this state. Note that this is placed in pt_regs as some entry/exit sequences such as el1_irq are invoked from assembly, which makes it very difficult to add a separate structure as with the irqentry_state used by x86. We can separate this once more of the exception logic is moved to C. While the fields only need to be bool, they are both made u64 to keep pt_regs 16-byte aligned. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201130115950.22492-9-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
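The addition itself is small; roughly (field names as used by the later EL1 entry patches, shown as a sketch):

/* appended to struct pt_regs in <asm/ptrace.h> */
	/* Only valid for some EL1 exceptions; u64 rather than bool to keep pt_regs 16-byte aligned. */
	u64 lockdep_hardirqs;
	u64 exit_rcu;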
2020-11-30arm64: entry: fix non-NMI user<->kernel transitionsMark Rutland1-0/+1
When built with PROVE_LOCKING, NO_HZ_FULL, and CONTEXT_TRACKING_FORCE, the kernel will WARN() at boot time that interrupts are enabled when we call context_tracking_user_enter(), despite the DAIF flags indicating that IRQs are masked. The problem is that we're not tracking IRQ flag changes accurately, and so lockdep believes interrupts are enabled when they are not (and vice-versa). We can shuffle things around to make this more accurate. For kernel->user transitions there are a number of constraints we need to consider: 1) When we call __context_tracking_user_enter() HW IRQs must be disabled and lockdep must be up-to-date with this. 2) Userspace should be treated as having IRQs enabled from the PoV of both lockdep and tracing. 3) As context_tracking_user_enter() stops RCU from watching, we cannot use RCU after calling it. 4) IRQ flag tracing and lockdep have state that must be manipulated before RCU is disabled. ... with similar constraints applying for user->kernel transitions, with the ordering reversed. The generic entry code has enter_from_user_mode() and exit_to_user_mode() helpers to handle this. We can't use those directly, so we add arm64 copies for now (without the instrumentation markers which aren't used on arm64). These replace the existing user_exit() and user_exit_irqoff() calls spread throughout handlers, and the exception unmasking is left as-is. Note that: * The accounting for debug exceptions from userspace now happens in el0_dbg() and ret_to_user(), so this is removed from debug_exception_enter() and debug_exception_exit(). As user_exit_irqoff() wakes RCU, the userspace-specific check is removed. * The accounting for syscalls now happens in el0_svc(), el0_svc_compat(), and ret_to_user(), so this is removed from el0_svc_common(). This does not adversely affect the workaround for erratum 1463225, as this does not depend on any of the state tracking. * In ret_to_user() we mask interrupts with local_daif_mask(), and so we need to inform lockdep and tracing. Here a trace_hardirqs_off() is sufficient and safe as we have not yet exited kernel context and RCU is usable. * As PROVE_LOCKING selects TRACE_IRQFLAGS, the ifdeferry in entry.S only needs to check for the latter. * EL0 SError handling will be dealt with in a subsequent patch, as this needs to be treated as an NMI.
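A sketch of the arm64 copies, ordered to satisfy constraints 1-4 above (based on the mainline helpers of that era; details may differ):

asmlinkage void noinstr enter_from_user_mode(void)
{
	/* Lockdep/tracing state must be updated before RCU starts watching. */
	lockdep_hardirqs_off(CALLER_ADDR0);
	CT_WARN_ON(ct_state() != CONTEXT_USER);
	user_exit_irqoff();
	trace_hardirqs_off_finish();
}

asmlinkage void noinstr exit_to_user_mode(void)
{
	/* Tell lockdep/tracing that userspace runs with IRQs enabled... */
	trace_hardirqs_on_prepare();
	lockdep_hardirqs_on_prepare(CALLER_ADDR0);
	/* ...then stop RCU watching, and only afterwards flip the lockdep state. */
	user_enter_irqoff();
	lockdep_hardirqs_on(CALLER_ADDR0);
}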
Prior to this patch, booting an appropriately-configured kernel would result in splats as below:
| DEBUG_LOCKS_WARN_ON(lockdep_hardirqs_enabled())
| WARNING: CPU: 2 PID: 1 at kernel/locking/lockdep.c:5280 check_flags.part.54+0x1dc/0x1f0
| Modules linked in:
| CPU: 2 PID: 1 Comm: init Not tainted 5.10.0-rc3 #3
| Hardware name: linux,dummy-virt (DT)
| pstate: 804003c5 (Nzcv DAIF +PAN -UAO -TCO BTYPE=--)
| pc : check_flags.part.54+0x1dc/0x1f0
| lr : check_flags.part.54+0x1dc/0x1f0
| sp : ffff80001003bd80
| x29: ffff80001003bd80 x28: ffff66ce801e0000
| x27: 00000000ffffffff x26: 00000000000003c0
| x25: 0000000000000000 x24: ffffc31842527258
| x23: ffffc31842491368 x22: ffffc3184282d000
| x21: 0000000000000000 x20: 0000000000000001
| x19: ffffc318432ce000 x18: 0080000000000000
| x17: 0000000000000000 x16: ffffc31840f18a78
| x15: 0000000000000001 x14: ffffc3184285c810
| x13: 0000000000000001 x12: 0000000000000000
| x11: ffffc318415857a0 x10: ffffc318406614c0
| x9 : ffffc318415857a0 x8 : ffffc31841f1d000
| x7 : 647261685f706564 x6 : ffffc3183ff7c66c
| x5 : ffff66ce801e0000 x4 : 0000000000000000
| x3 : ffffc3183fe00000 x2 : ffffc31841500000
| x1 : e956dc24146b3500 x0 : 0000000000000000
| Call trace:
| check_flags.part.54+0x1dc/0x1f0
| lock_is_held_type+0x10c/0x188
| rcu_read_lock_sched_held+0x70/0x98
| __context_tracking_enter+0x310/0x350
| context_tracking_enter.part.3+0x5c/0xc8
| context_tracking_user_enter+0x6c/0x80
| finish_ret_to_user+0x2c/0x13c
Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201130115950.22492-8-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-11-30arm64: entry: move el1 irq/nmi logic to CMark Rutland1-0/+2
In preparation for reworking the EL1 irq/nmi entry code, move the existing logic to C. We no longer need the asm_nmi_enter() and asm_nmi_exit() wrappers, so these are removed. The new C functions are marked noinstr, which prevents all compiler instrumentation, tracing, and kprobes. In subsequent patches we'll want the new C helpers to be called in all cases, so we don't bother wrapping the calls with ifdeferry. Even when the new C functions are stubs the trivial calls are unlikely to have a measurable impact on the IRQ or NMI paths anyway. Prototypes are added to <asm/exception.h> as otherwise (in some configurations) GCC will complain about the lack of a forward declaration. We already do this for existing functions, e.g. enter_from_user_mode(). Otherwise, there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201130115950.22492-7-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
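The prototypes in <asm/exception.h> amount to something like the following (function names assumed from this series):

asmlinkage void enter_el1_irq_or_nmi(struct pt_regs *regs);
asmlinkage void exit_el1_irq_or_nmi(struct pt_regs *regs);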
2020-11-28arm64: Make the Meltdown mitigation state availableMarc Zyngier1-0/+2
Our Meltdown mitigation state isn't exposed outside of cpufeature.c, contrary to the rest of the Spectre mitigation state. As we are going to use it in KVM, expose an arm64_get_meltdown_state() helper which returns the same possible values as arm64_get_spectre_v?_state(). Signed-off-by: Marc Zyngier <maz@kernel.org>
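One plausible shape for the helper, reusing the mitigation_state values from <asm/spectre.h> (a sketch; __meltdown_safe here refers to the existing static tracking in cpufeature.c):

enum mitigation_state arm64_get_meltdown_state(void)
{
	if (__meltdown_safe)
		return SPECTRE_UNAFFECTED;

	/* KPTI in use means the mapping attack is mitigated. */
	if (arm64_kernel_unmapped_at_el0())
		return SPECTRE_MITIGATED;

	return SPECTRE_VULNERABLE;
}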
2020-11-27Merge branch 'kvm-arm64/misc-5.11' into kvmarm-master/nextMarc Zyngier3-5/+8
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-11-27KVM: arm64: Remove unused __extended_idmap_trampoline() prototypeWill Deacon1-1/+0
__extended_idmap_trampoline() was removed a long time ago by 3421e9d88d7a ("arm64: KVM: Simplify HYP init/teardown") so remove the unused function prototype. Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201118194402.2892-4-will@kernel.org
2020-11-27KVM: arm64: Remove kvm_arch_vm_ioctl_check_extension()Will Deacon2-1/+5
kvm_arch_vm_ioctl_check_extension() is only called from kvm_vm_ioctl_check_extension(), so we can inline it and remove the extra function. Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201118194402.2892-3-will@kernel.org
2020-11-27KVM: arm64: Move 'struct kvm_arch_memory_slot' out of uapi/Will Deacon2-3/+3
'struct kvm_arch_memory_slot' isn't part of the user ABI, so move it out of the uapi/ headers in case we start using it in future and accidentally back ourselves into a corner. Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201118194402.2892-2-will@kernel.org
2020-11-27Merge tag 'arm64-fixes' of ↵Linus Torvalds2-16/+20
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux Pull arm64 fixes from Will Deacon: "The main changes are relating to our handling of access/dirty bits, where our low-level page-table helpers could lead to stale young mappings and loss of the dirty bit in some cases (the latter has not been observed in practice, but could happen when clearing "soft-dirty" if we enabled that). These were posted as part of a larger series, but the rest of that is less urgent and needs a v2 which I'll get to shortly. In other news, we've now got a set of fixes to resolve the lockdep/tracing problems that have been plaguing us for a while, but they're still a bit "fresh" and I plan to send them to you next week after we've got some more confidence in them (although initial CI results look good). Summary: - Fix kerneldoc warnings generated by ACPI IORT code - Fix pte_accessible() so that access flag is ignored - Fix missing header #include - Fix loss of software dirty bit across pte_wrprotect() when HW DBM is enabled" * tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: arm64: pgtable: Ensure dirty bit is preserved across pte_wrprotect() arm64: pgtable: Fix pte_accessible() ACPI/IORT: Fix doc warnings in iort.c arm64/fpsimd: add <asm/insn.h> to <asm/kprobes.h> to fix fpsimd build
2020-11-27Merge branch 'kvm-arm64/vector-rework' into kvmarm-master/nextMarc Zyngier5-81/+64
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-11-27Merge branch 'kvm-arm64/pmu-undef' into kvmarm-master/nextMarc Zyngier1-0/+3
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-11-27KVM: arm64: Add kvm_vcpu_has_pmu() helperMarc Zyngier1-0/+3
There are a number of places where we check for the KVM_ARM_VCPU_PMU_V3 feature. Wrap this check into a new kvm_vcpu_has_pmu(), and use it at the existing locations. No functional change. Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
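The helper is a thin wrapper around the vcpu feature bit; roughly (a sketch):

/* <asm/kvm_host.h> (sketch) */
#define kvm_vcpu_has_pmu(vcpu)					\
	(test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))

Call sites then read as if (kvm_vcpu_has_pmu(vcpu)) ... instead of open-coding the test_bit() check.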
2020-11-27Merge branch 'kvm-arm64/host-hvc-table' into kvmarm-master/nextMarc Zyngier3-2/+41
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-11-27Merge branch 'kvm-arm64/copro-no-more' into kvmarm-master/nextMarc Zyngier2-93/+18
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-11-27Merge branch 'kvm-arm64/el2-pc' into kvmarm-master/nextMarc Zyngier2-57/+131
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-11-27KVM: arm64: Simplify __kvm_enable_ssbs()Marc Zyngier2-2/+1
Move the setting of SSBS directly into the HVC handler, using the C helpers rather than the inline assembly code. Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-11-27KVM: arm64: Patch kimage_voffset instead of loading the EL1 valueMarc Zyngier1-0/+22
Directly using the kimage_voffset variable is fine for now, but will become more problematic as we start distrusting EL1. Instead, patch the kimage_voffset into the HYP text, ensuring we don't have to load an untrusted value later on. Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-11-25kasan: arm64: set TCR_EL1.TBID1 when enabledPeter Collingbourne1-0/+1
On hardware supporting pointer authentication, we previously ended up enabling TBI on instruction accesses when tag-based ASAN was enabled, but this was costing us 8 bits of PAC entropy, which was unnecessary since tag-based ASAN does not require TBI on instruction accesses. Get them back by setting TCR_EL1.TBID1. Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Andrey Konovalov <andreyknvl@google.com> Link: https://linux-review.googlesource.com/id/I3dded7824be2e70ea64df0aabab9598d5aebfcc4 Link: https://lore.kernel.org/r/20f64e26fc8a1309caa446fffcb1b4e2fe9e229f.1605952129.git.pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
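Roughly, the change boils down to defining the TBID1 bit and OR-ing it into the KASAN TCR flags (a sketch; the exact header and flag placement may differ):

/* <asm/pgtable-hwdef.h> (sketch) */
#define TCR_TBID1	(UL(1) << 52)	/* Top Byte Ignore applies to data accesses only, TTBR1 */

/* Only ignore the top byte of instruction fetches when SW tags need it. */
#ifdef CONFIG_KASAN_SW_TAGS
#define TCR_KASAN_FLAGS	(TCR_TBI1 | TCR_TBID1)
#else
#define TCR_KASAN_FLAGS	0
#endif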
2020-11-23arm64: expose FAR_EL1 tag bits in siginfoPeter Collingbourne4-5/+30
The kernel currently clears the tag bits (i.e. bits 56-63) in the fault address exposed via siginfo.si_addr and sigcontext.fault_address. However, the tag bits may be needed by tools in order to accurately diagnose memory errors, such as HWASan [1] or future tools based on the Memory Tagging Extension (MTE). Expose these bits via the arch_untagged_si_addr mechanism, so that they are only exposed to signal handlers with the SA_EXPOSE_TAGBITS flag set. [1] http://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://linux-review.googlesource.com/id/Ia8876bad8c798e0a32df7c2ce1256c4771c81446 Link: https://lore.kernel.org/r/0010296597784267472fa13b39f8238d87a72cf8.1605904350.git.pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-23arm64: pgtable: Ensure dirty bit is preserved across pte_wrprotect()Will Deacon1-13/+14
With hardware dirty bit management, calling pte_wrprotect() on a writable, dirty PTE will lose the dirty state and return a read-only, clean entry. Move the logic from ptep_set_wrprotect() into pte_wrprotect() to ensure that the dirty bit is preserved for writable entries, as this is required for soft-dirty bit management if we enable it in the future. Cc: <stable@vger.kernel.org> Fixes: 2f4b829c625e ("arm64: Add support for hardware updates of the access and dirty pte bits") Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20201120143557.6715-3-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
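The resulting helper looks roughly like this (a sketch of the logic described above):

static inline pte_t pte_wrprotect(pte_t pte)
{
	/*
	 * If hardware DBM marked the entry dirty (writable with PTE_RDONLY
	 * clear), latch that into the software PTE_DIRTY bit before making
	 * the entry read-only, so the dirty state is not lost.
	 */
	if (pte_hw_dirty(pte))
		pte = pte_mkdirty(pte);

	pte = clear_pte_bit(pte, __pgprot(PTE_WRITE));
	pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));

	return pte;
}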
2020-11-23arm64: pgtable: Fix pte_accessible()Will Deacon1-3/+4
pte_accessible() is used by ptep_clear_flush() to figure out whether TLB invalidation is necessary when unmapping pages for reclaim. Although our implementation is correct according to the architecture, returning true only for valid, young ptes in the absence of racing page-table modifications, this is in fact flawed due to lazy invalidation of old ptes in ptep_clear_flush_young() where we elide the expensive DSB instruction for completing the TLB invalidation. Rather than penalise the aging path, adjust pte_accessible() to return true for any valid pte, even if the access flag is cleared. Cc: <stable@vger.kernel.org> Fixes: 76c714be0e5e ("arm64: pgtable: implement pte_accessible()") Reported-by: Yu Zhao <yuzhao@google.com> Acked-by: Yu Zhao <yuzhao@google.com> Reviewed-by: Minchan Kim <minchan@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20201120143557.6715-2-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
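The adjusted definition is roughly:

/*
 * Any valid pte may still be cached in the TLB (its access-flag fault or
 * invalidation may have been handled lazily), so it must be flushed.
 */
#define pte_accessible(mm, pte)	\
	(mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid(pte))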
2020-11-23arm64/fpsimd: add <asm/insn.h> to <asm/kprobes.h> to fix fpsimd buildRandy Dunlap1-0/+2
Adding <asm/exception.h> brought in <asm/kprobes.h> which uses <asm/probes.h>, which uses 'pstate_check_t' so the latter needs to #include <asm/insn.h> for this typedef. Fixes this build error: In file included from arch/arm64/include/asm/kprobes.h:24, from arch/arm64/include/asm/exception.h:11, from arch/arm64/kernel/fpsimd.c:35: arch/arm64/include/asm/probes.h:16:2: error: unknown type name 'pstate_check_t' 16 | pstate_check_t *pstate_cc; Fixes: c6b90d5cf637 ("arm64/fpsimd: Fix missing-prototypes in fpsimd.c") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Will Deacon <will@kernel.org> Cc: Tian Tao <tiantao6@hisilicon.com> Link: https://lore.kernel.org/r/20201123044510.9942-1-rdunlap@infradead.org Signed-off-by: Will Deacon <will@kernel.org>
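The fix is simply to pull in the typedef where it is used:

/* <asm/probes.h> */
#include <asm/insn.h>	/* provides pstate_check_t */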
2020-11-23arm64: irqstat: Get rid of duplicated declarationThomas Gleixner1-5/+2
irq_cpustat_t is exactly the same as the asm-generic one. Define ack_bad_irq so the generic header does not emit the generic version of it. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Frederic Weisbecker <frederic@kernel.org> Acked-by: Will Deacon <will@kernel.org> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201113141733.392015387@linutronix.de
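The relevant part of <asm/hardirq.h> then reduces to something like:

#define ack_bad_irq ack_bad_irq
#include <asm-generic/hardirq.h>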
2020-11-20arm64: Enable seccomp architecture trackingKees Cook1-0/+9
To enable seccomp constant action bitmaps, we need to have a static mapping to the audit architecture and system call table size. Add these for arm64. Signed-off-by: Kees Cook <keescook@chromium.org>
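The static mapping amounts to a handful of defines (a sketch; names follow the generic seccomp bitmap support):

/* <asm/seccomp.h> (sketch) */
#define SECCOMP_ARCH_NATIVE		AUDIT_ARCH_AARCH64
#define SECCOMP_ARCH_NATIVE_NR		NR_syscalls
#define SECCOMP_ARCH_NATIVE_NAME	"aarch64"
#ifdef CONFIG_COMPAT
# define SECCOMP_ARCH_COMPAT		AUDIT_ARCH_ARM
# define SECCOMP_ARCH_COMPAT_NR		__NR_compat_syscalls
# define SECCOMP_ARCH_COMPAT_NAME	"arm"
#endif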
2020-11-16KVM: arm64: Remove redundant hyp vectors entryWill Deacon1-1/+1
The hyp vectors entry corresponding to HYP_VECTOR_DIRECT (i.e. when neither Spectre-v2 nor Spectre-v3a are present) is unused, as we can simply dispatch straight to __kvm_hyp_vector in this case. Remove the redundant vector, and massage the logic for resolving a slot to a vectors entry. Reported-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201113113847.21619-11-will@kernel.org
2020-11-16arm64: spectre: Consolidate spectre-v3a detectionWill Deacon1-0/+1
The spectre-v3a mitigation is split between cpu_errata.c and spectre.c, with the former handling detection of the problem and the latter handling enabling of the workaround. Move the detection logic alongside the enabling logic, like we do for the other spectre mitigations. Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Quentin Perret <qperret@google.com> Link: https://lore.kernel.org/r/20201113113847.21619-10-will@kernel.org
2020-11-16arm64: spectre: Rename ARM64_HARDEN_EL2_VECTORS to ARM64_SPECTRE_V3AWill Deacon2-2/+2
Since ARM64_HARDEN_EL2_VECTORS is really a mitigation for Spectre-v3a, rename it accordingly for consistency with the v2 and v4 mitigations. Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Quentin Perret <qperret@google.com> Link: https://lore.kernel.org/r/20201113113847.21619-9-will@kernel.org
2020-11-16KVM: arm64: Allocate hyp vectors staticallyWill Deacon2-7/+34
The EL2 vectors installed when a guest is running point at one of the following configurations for a given CPU: - Straight at __kvm_hyp_vector - A trampoline containing an SMC sequence to mitigate Spectre-v2 and then a direct branch to __kvm_hyp_vector - A dynamically-allocated trampoline which has an indirect branch to __kvm_hyp_vector - A dynamically-allocated trampoline containing an SMC sequence to mitigate Spectre-v2 and then an indirect branch to __kvm_hyp_vector The indirect branches mean that VA randomization at EL2 isn't trivially bypassable using Spectre-v3a (where the vector base is readable by the guest). Rather than populate these vectors dynamically, configure everything statically and use an enumerated type to identify the vector "slot" corresponding to one of the configurations above. This both simplifies the code and makes it much easier to implement at EL2 later on. Signed-off-by: Will Deacon <will@kernel.org> [maz: fixed double call to kvm_init_vector_slots() on nVHE] Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Quentin Perret <qperret@google.com> Link: https://lore.kernel.org/r/20201113113847.21619-8-will@kernel.org
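The slots can then be described with an enumerated type along these lines (a sketch matching the four configurations above):

/* <asm/spectre.h> (sketch) */
enum arm64_hyp_spectre_vector {
	HYP_VECTOR_DIRECT,		/* straight branch to __kvm_hyp_vector */
	HYP_VECTOR_SPECTRE_DIRECT,	/* SMC mitigation, then direct branch */
	HYP_VECTOR_INDIRECT,		/* trampoline with an indirect branch */
	HYP_VECTOR_SPECTRE_INDIRECT,	/* SMC mitigation, then indirect branch */
};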
2020-11-16KVM: arm64: Move BP hardening helpers into spectre.hWill Deacon2-29/+30
The BP hardening helpers are an integral part of the Spectre-v2 mitigation, so move them into asm/spectre.h and inline the arm64_get_bp_hardening_data() function at the same time. Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Quentin Perret <qperret@google.com> Link: https://lore.kernel.org/r/20201113113847.21619-6-will@kernel.org
2020-11-16KVM: arm64: Make BP hardening globals static insteadWill Deacon1-3/+0
Branch predictor hardening of the hyp vectors is partially driven by a couple of global variables ('__kvm_bp_vect_base' and '__kvm_harden_el2_vector_slot'). However, these are only used within a single compilation unit, so internalise them there instead. Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Quentin Perret <qperret@google.com> Link: https://lore.kernel.org/r/20201113113847.21619-5-will@kernel.org
2020-11-16KVM: arm64: Move kvm_get_hyp_vector() out of header fileWill Deacon1-43/+0
kvm_get_hyp_vector() has only one caller, so move it out of kvm_mmu.h and inline it into a new function, cpu_set_hyp_vector(), for setting the vector. Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Quentin Perret <qperret@google.com> Link: https://lore.kernel.org/r/20201113113847.21619-4-will@kernel.org
2020-11-15Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvmLinus Torvalds2-0/+6
Pull kvm fixes from Paolo Bonzini: "Fixes for ARM and x86, the latter especially for old processors without two-dimensional paging (EPT/NPT)" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: kvm: mmu: fix is_tdp_mmu_check when the TDP MMU is not in use KVM: SVM: Update cr3_lm_rsvd_bits for AMD SEV guests KVM: x86: Introduce cr3_lm_rsvd_bits in kvm_vcpu_arch KVM: x86: clflushopt should be treated as a no-op by emulation KVM: arm64: Handle SCXTNUM_ELx traps KVM: arm64: Unify trap handlers injecting an UNDEF KVM: arm64: Allow setting of ID_AA64PFR0_EL1.CSV2 from userspace
2020-11-13arm64: implement CPPC FFH support using AMUsIonela Voinescu1-0/+3
If Activity Monitors (AMUs) are present, two of the counters can be used to implement support for CPPC's (Collaborative Processor Performance Control) delivered and reference performance monitoring functionality using FFH (Functional Fixed Hardware). Given that counters for a certain CPU can only be read from that CPU, while FFH operations can be called from any CPU for any of the CPUs, use smp_call_function_single() to provide the requested values. Therefore, depending on the register addresses, the following values are returned: - 0x0 (DeliveredPerformanceCounterRegister): AMU core counter - 0x1 (ReferencePerformanceCounterRegister): AMU constant counter The use of Activity Monitors is hidden behind the generic cpu_read_{corecnt,constcnt}() functions. Read functionality for these two registers represents the only current FFH support for CPPC. Read operations for other register values or write operation for all registers are unsupported. Therefore, keep CPPC's FFH unsupported if no CPUs have valid AMU frequency counters. For this purpose, the get_cpu_with_amu_feat() is introduced. Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201106125334.21570-4-ionela.voinescu@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
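A sketch of the cross-CPU read (the helper names here are illustrative, not the mainline ones; read_corecnt() is the wrapper introduced in the counter-read patch below):

/* Hypothetical helpers sketching the cross-CPU AMU read used for CPPC FFH. */
static void read_corecnt_on_cpu(void *val)
{
	*(u64 *)val = read_corecnt();	/* delivered (core cycle) counter */
}

static int cppc_ffh_read_delivered(int cpu, u64 *val)
{
	/* AMU counters can only be read on the CPU that owns them. */
	return smp_call_function_single(cpu, read_corecnt_on_cpu, val, 1);
}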
2020-11-13arm64: wrap and generalise counter read functionsIonela Voinescu2-1/+8
In preparation for other uses of Activity Monitors (AMU) cycle counters, place counter read functionality in generic functions that can be reused: read_corecnt() and read_constcnt(). As a result, implement update_freq_counters_refs() to replace init_cpu_freq_invariance_counters() and both initialise and update the per-cpu reference variables. Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201106125334.21570-2-ionela.voinescu@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-13Merge tag 'arm64-fixes' of ↵Linus Torvalds2-0/+6
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux Pull arm64 fixes from Will Deacon: - Spectre/Meltdown safelisting for some Qualcomm KRYO cores - Fix RCU splat when failing to online a CPU due to a feature mismatch - Fix a recently introduced sparse warning in kexec() - Fix handling of CPU erratum 1418040 for late CPUs - Ensure hot-added memory falls within linear-mapped region * tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: arm64: cpu_errata: Apply Erratum 845719 to KRYO2XX Silver arm64: proton-pack: Add KRYO2XX silver CPUs to spectre-v2 safe-list arm64: kpti: Add KRYO2XX gold/silver CPU cores to kpti safelist arm64: Add MIDR value for KRYO2XX gold/silver CPU cores arm64/mm: Validate hotplug range before creating linear mapping arm64: smp: Tell RCU about CPUs that fail to come online arm64: psci: Avoid printing in cpu_psci_cpu_die() arm64: kexec_file: Fix sparse warning arm64: errata: Fix handling of 1418040 with late CPU onlining
2020-11-13Merge tag 'kvmarm-fixes-5.10-3' of ↵Paolo Bonzini2-0/+6
git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD KVM/arm64 fixes for v5.10, take #3 - Allow userspace to downgrade ID_AA64PFR0_EL1.CSV2 - Inject UNDEF on SCXTNUM_ELx access
2020-11-13arm64: Add MIDR value for KRYO2XX gold/silver CPU coresKonrad Dybcio1-0/+4
Add MIDR value for KRYO2XX gold (big) and silver (LITTLE) CPU cores which are used in Qualcomm Technologies, Inc. SoCs. This will be used to identify and apply errata which are applicable for these CPU cores. Signed-off-by: Konrad Dybcio <konrad.dybcio@somainline.org> Link: https://lore.kernel.org/r/20201104232218.198800-2-konrad.dybcio@somainline.org Signed-off-by: Will Deacon <will@kernel.org>
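The additions follow the existing MIDR macro pattern in <asm/cputype.h>; roughly (the 0x800/0x801 part numbers are an assumption here):

#define QCOM_CPU_PART_KRYO_2XX_GOLD	0x800
#define QCOM_CPU_PART_KRYO_2XX_SILVER	0x801

#define MIDR_QCOM_KRYO_2XX_GOLD \
	MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_2XX_GOLD)
#define MIDR_QCOM_KRYO_2XX_SILVER \
	MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_2XX_SILVER)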
2020-11-13KVM: arm64: Handle SCXTNUM_ELx trapsMarc Zyngier1-0/+4
As the kernel never sets HCR_EL2.EnSCXT, accesses to SCXTNUM_ELx will trap to EL2. Let's handle that as gracefully as possible by injecting an UNDEF exception into the guest. This is consistent with the guest's view of ID_AA64PFR0_EL1.CSV2 being at most 1. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201110141308.451654-4-maz@kernel.org
2020-11-13KVM: arm64: Allow setting of ID_AA64PFR0_EL1.CSV2 from userspaceMarc Zyngier1-0/+2
We now expose ID_AA64PFR0_EL1.CSV2=1 to guests running on hosts that are immune to Spectre-v2, but that don't have this field set, most likely because they predate the specification. However, this prevents the migration of guests that have started on a host that doesn't fake this CSV2 setting to one that does, as KVM rejects the write to ID_AA64PFR0_EL2 on the grounds that it isn't what is already there. In order to fix this, allow userspace to set this field as long as this doesn't result in promising more than what is already there (setting CSV2 to 0 is acceptable, but setting it to 1 when it is already set to 0 isn't). Fixes: e1026237f9067 ("KVM: arm64: Set CSV2 for guests on hardware unaffected by Spectre-v2") Reported-by: Peng Liang <liangpeng10@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201110141308.451654-2-maz@kernel.org
2020-11-13Merge tag 'v5.10-rc1' into kvmarm-master/nextMarc Zyngier51-138/+541
Linux 5.10-rc1 Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-11-12arm64: mm: don't assume struct page is always 64 bytesArd Biesheuvel1-3/+5
Commit 8c96400d6a39be7 simplified the page-to-virt and virt-to-page conversions, based on the assumption that struct page is always 64 bytes in size, in which case we can use a single signed shift to perform the conversion (provided that the vmemmap array is placed appropriately in the kernel VA space). Unfortunately, this assumption turns out not to hold, and so we need to revert part of this commit, and go back to an affine transformation. Given that all the quantities involved are compile time constants, this should not make any practical difference. Fixes: 8c96400d6a39 ("arm64: mm: make vmemmap region a projection of the linear region") Reported-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20201110180511.29083-1-ardb@kernel.org Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-10arm64: consistently use reserved_pg_dirMark Rutland5-13/+8
Depending on configuration options and specific code paths, we either use the empty_zero_page or the configuration-dependent reserved_ttbr0 as a reserved value for TTBR{0,1}_EL1. To simplify this code, let's always allocate and use the same reserved_pg_dir, replacing reserved_ttbr0. Note that this is allocated (and hence pre-zeroed), and is also marked as read-only in the kernel Image mapping. Keeping this separate from the empty_zero_page potentially helps with robustness as the empty_zero_page is used in a number of cases where a failure to map it read-only could allow it to become corrupted. The (presently unused) swapper_pg_end symbol is also removed, and comments are added wherever we rely on the offsets between the pre-allocated pg_dirs to keep these cases easily identifiable. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201103102229.8542-1-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-10arm64: kprobes: Remove redundant kprobe_step_ctxMasami Hiramatsu1-7/+0
The kprobe_step_ctx (kcb->ss_ctx) has ss_pending and match_addr, but those are redundant because those can be replaced by KPROBE_HIT_SS and &cur_kprobe->ainsn.api.insn[1] respectively. To simplify the code, remove the kprobe_step_ctx. Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201103134900.337243-2-jean-philippe@linaro.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-10arm64: errata: Fix handling of 1418040 with late CPU onliningWill Deacon1-0/+2
In a surprising turn of events, it transpires that CPU capabilities configured as ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE are never set as the result of late-onlining. Therefore our handling of erratum 1418040 does not get activated if it is not required by any of the boot CPUs, even though we allow late-onlining of an affected CPU. In order to get things working again, replace the cpus_have_const_cap() invocation with an explicit check for the current CPU using this_cpu_has_cap(). Cc: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Cc: Stephen Boyd <swboyd@chromium.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201106114952.10032-1-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
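The fix is essentially a one-line change in the erratum's run-time check (sketched here as before/after fragments):

/* Before: boot-time system-wide capability, never set by late onlining. */
if (!cpus_have_const_cap(ARM64_WORKAROUND_1418040))
	return;

/* After: ask the CPU we are currently running on. */
if (!this_cpu_has_cap(ARM64_WORKAROUND_1418040))
	return;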
2020-11-10KVM: arm64: Drop kvm_coproc.hMarc Zyngier2-38/+17
kvm_coproc.h used to serve as a compatibility layer for the files shared between the 32 and 64 bit ports. Another one bites the dust... Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-11-10KVM: arm64: Drop legacy copro shadow registerMarc Zyngier1-47/+1
Finally remove one of the biggest 32bit legacies: the copro shadow mapping. We won't miss it. Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-11-10KVM: arm64: Map AArch32 cp14 register to AArch64 sysregsMarc Zyngier1-8/+0
Similarly to what has been done on the cp15 front, repaint the debug registers to use their AArch64 counterparts. This results in some simplification as we can remove the 32bit-specific accessors. Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-11-10KVM: arm64: Move AArch32 exceptions over to AArch64 sysregsMarc Zyngier1-1/+0
The use of the AArch32-specific accessors have always been a bit annoying on 64bit, and it is time for a change. Let's move the AArch32 exception injection over to the AArch64 encoding, which requires us to split the two halves of FAR_EL1 into DFAR and IFAR. This enables us to drop the preempt_disable() games on VHE, and to kill the last user of the vcpu_cp15() macro. Signed-off-by: Marc Zyngier <maz@kernel.org>