path: root/arch/arm64/kernel
Age | Commit message | Author | Files | Lines
2020-09-29 | arm64: Rewrite Spectre-v4 mitigation code | Will Deacon | 7 | -335/+394
Rewrite the Spectre-v4 mitigation handling code to follow the same approach as that taken by Spectre-v2. For now, report to KVM that the system is vulnerable (by forcing 'ssbd_state' to ARM64_SSBD_UNKNOWN), as this will be cleared up in subsequent steps. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29 | arm64: Move SSBD prctl() handler alongside other spectre mitigation code | Will Deacon | 3 | -130/+119
As part of the spectre consolidation effort to shift all of the ghosts into their own proton pack, move all of the horrible SSBD prctl() code out of its own 'ssbd.c' file. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29 | arm64: Rename ARM64_SSBD to ARM64_SPECTRE_V4 | Will Deacon | 1 | -1/+1
In a similar manner to the renaming of ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2, rename ARM64_SSBD to ARM64_SPECTRE_V4. This isn't _entirely_ accurate, as we also need to take into account the interaction with SSBS, but that will be taken care of in subsequent patches. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29 | arm64: Treat SSBS as a non-strict system feature | Will Deacon | 1 | -3/+3
If all CPUs discovered during boot have SSBS, then spectre-v4 will be considered to be "mitigated". However, we still allow late CPUs without SSBS to be onlined, albeit with a "SANITY CHECK" warning. This is problematic for userspace because it means that the system can quietly transition to "Vulnerable" at runtime. Avoid this by treating SSBS as a non-strict system feature: if all of the CPUs discovered during boot have SSBS, then late arriving secondaries better have it as well. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29 | arm64: Rewrite Spectre-v2 mitigation code | Will Deacon | 2 | -233/+291
The Spectre-v2 mitigation code is pretty unwieldy and hard to maintain. This is largely due to it being written hastily, without much clue as to how things would pan out, and also because it ends up mixing policy and state in such a way that it is very difficult to figure out what's going on. Rewrite the Spectre-v2 mitigation so that it clearly separates state from policy and follows a more structured approach to handling the mitigation. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29 | arm64: Introduce separate file for spectre mitigations and reporting | Will Deacon | 3 | -7/+33
The spectre mitigation code is spread over a few different files, which makes it both hard to follow, but also hard to remove it should we want to do that in future. Introduce a new file for housing the spectre mitigations, and populate it with the spectre-v1 reporting code to start with. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29 | arm64: Rename ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2 | Will Deacon | 1 | -1/+1
For better or worse, the world knows about "Spectre" and not about "Branch predictor hardening". Rename ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2 as part of moving all of the Spectre mitigations into their own little corner. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29 | KVM: arm64: Simplify install_bp_hardening_cb() | Will Deacon | 1 | -20/+7
Use is_hyp_mode_available() to detect whether or not we need to patch the KVM vectors for branch hardening, which avoids the need to take the vector pointers as parameters. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29 | KVM: arm64: Replace CONFIG_KVM_INDIRECT_VECTORS with CONFIG_RANDOMIZE_BASE | Will Deacon | 1 | -2/+2
The removal of CONFIG_HARDEN_BRANCH_PREDICTOR means that CONFIG_KVM_INDIRECT_VECTORS is synonymous with CONFIG_RANDOMIZE_BASE, so replace it. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29 | arm64: Remove Spectre-related CONFIG_* options | Will Deacon | 4 | -27/+3
The spectre mitigations are too configurable for their own good, leading to confusing logic trying to figure out when we should mitigate and when we shouldn't. Although the plethora of command-line options need to stick around for backwards compatibility, the default-on CONFIG options that depend on EXPERT can be dropped, as the mitigations only do anything if the system is vulnerable, a mitigation is available and the command-line hasn't disabled it. Remove CONFIG_HARDEN_BRANCH_PREDICTOR and CONFIG_ARM64_SSBD in favour of enabling this code unconditionally. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29 | arm64: Run ARCH_WORKAROUND_2 enabling code on all CPUs | Marc Zyngier | 1 | -0/+7
Commit 606f8e7b27bf ("arm64: capabilities: Use linear array for detection and verification") changed the way we deal with per-CPU errata by only calling the .matches() callback until one CPU is found to be affected. At this point, .matches() stops being called, and .cpu_enable() will be called on all CPUs. This breaks the ARCH_WORKAROUND_2 handling, as only a single CPU will be mitigated. In order to address this, forcefully call the .matches() callback from a .cpu_enable() callback, which brings us back to the original behaviour. Fixes: 606f8e7b27bf ("arm64: capabilities: Use linear array for detection and verification") Cc: <stable@vger.kernel.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29 | arm64: cpufeature: Export symbol read_sanitised_ftr_reg() | Jean-Philippe Brucker | 1 | -0/+1
The SMMUv3 driver would like to read the MMFR0 PARANGE field in order to share CPU page tables with devices. Allow the driver to be built as a module by exporting the read_sanitised_ftr_reg() cpufeature symbol. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20200918101852.582559-7-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28 | arm64: perf: Defer irq_work to IPI_IRQ_WORK | Julien Thierry | 1 | -9/+5
When handling events, armv8pmu_handle_irq() calls perf_event_overflow(), and subsequently calls irq_work_run() to handle any work queued by perf_event_overflow(). As perf_event_overflow() raises IPI_IRQ_WORK when queuing the work, this isn't strictly necessary and the work could be handled as part of the IPI_IRQ_WORK handler. In the common case the IPI handler will run immediately after the PMU IRQ handler, and where the PE is heavily loaded with interrupts, other handlers may run first, widening the window where some counters are disabled. In practice this window is unlikely to be a significant issue, and removing the call to irq_work_run() would make the PMU IRQ handler NMI safe in addition to making it simpler, so let's do that. [Alexandru E.: Reworded commit message] Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200924110706.254996-5-alexandru.elisei@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28 | arm64: perf: Remove PMU locking | Julien Thierry | 1 | -28/+0
The PMU is disabled and enabled, and the counters are programmed from contexts where interrupts or preemption is disabled. The functions to toggle the PMU and to program the PMU counters access the registers directly and don't access data modified by the interrupt handler. That, and the fact that they're always called from non-preemptible contexts, means that we don't need to disable interrupts or use a spinlock. [Alexandru E.: Explained why locking is not needed, removed WARN_ONs] Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox) Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200924110706.254996-4-alexandru.elisei@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28 | arm64: perf: Avoid PMXEV* indirection | Mark Rutland | 1 | -14/+85
Currently we access the counter registers and their respective type registers indirectly. This requires us to write to PMSELR, issue an ISB, then access the relevant PMXEV* registers. This is unfortunate, because: * Under virtualization, accessing one register requires two traps to the hypervisor, even though we could access the register directly with a single trap. * We have to issue an ISB which we could otherwise avoid the cost of. * When we use NMIs, the NMI handler will have to save/restore the select register in case the code it preempted was attempting to access a counter or its type register. We can avoid these issues by directly accessing the relevant registers. This patch adds helpers to do so. In armv8pmu_enable_event() we still need the ISB to prevent the PE from reordering the write to PMINTENSET_EL1 register. If the interrupt is enabled before we disable the counter and the new event is configured, we might get an interrupt triggered by the previously programmed event overflowing, but which we wrongly attribute to the event that we are enabling. Execute an ISB after we disable the counter. In the process, remove the comment that refers to the ARMv7 PMU. [Julien T.: Don't inline read/write functions to avoid big code-size increase, remove unused read_pmevtypern function, fix counter index issue.] [Alexandru E.: Removed comment, removed trailing semicolons in macros, added ISB] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox) Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200924110706.254996-3-alexandru.elisei@arm.com Signed-off-by: Will Deacon <will@kernel.org>
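The core of the approach is a generated switch over the counter index, so that each PMEVCNTR<n>_EL0 / PMEVTYPER<n>_EL0 register is accessed directly rather than via a write to PMSELR_EL0, an ISB and the PMXEV* alias. A condensed sketch of that pattern follows; the macro names describe the technique and are not guaranteed to match the patch verbatim.

#define PMEVN_CASE(n, case_macro)			\
	case n: case_macro(n); break

#define PMEVN_SWITCH(x, case_macro)			\
	do {						\
		switch (x) {				\
		PMEVN_CASE(0, case_macro);		\
		PMEVN_CASE(1, case_macro);		\
		/* ... one case per counter, up to 30 ... */	\
		default: WARN(1, "Invalid PMEV* index\n");	\
		}					\
	} while (0)

/* Read PMEVCNTR<n>_EL0 directly: no PMSELR_EL0 write and no ISB needed. */
#define RETURN_READ_PMEVCNTRN(n)			\
	return read_sysreg(pmevcntr##n##_el0)

static unsigned long read_pmevcntrn(int n)
{
	PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
	return 0;
}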
2020-09-28 | arm64: perf: Add missing ISB in armv8pmu_enable_counter() | Alexandru Elisei | 1 | -0/+5
Writes to the PMXEVTYPER_EL0 register are not self-synchronising. In armv8pmu_enable_event(), the PE can reorder configuring the event type after we have enabled the counter and the interrupt. This can lead to an interrupt being asserted because of the previous event type that we were counting using the same counter, not the one that we've just configured. The same rationale applies to writes to the PMINTENSET_EL1 register. The PE can reorder enabling the interrupt at any point in the future after we have enabled the event. Prevent both situations from happening by adding an ISB just before we enable the event counter. Fixes: 030896885ade ("arm64: Performance counters support") Reported-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox) Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200924110706.254996-2-alexandru.elisei@arm.com Signed-off-by: Will Deacon <will@kernel.org>
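Putting this entry and the previous one together, the event-enable path ends up with two ordering points. A rough sketch of armv8pmu_enable_event() after both changes; the helper names describe the steps and are not copied from the source.

static void armv8pmu_enable_event(struct perf_event *event)
{
	/* Stop the counter before reprogramming it. */
	armv8pmu_disable_event_counter(event);
	isb();	/* the counter must be disabled before the new type is written */

	armv8pmu_write_event_type(event);
	armv8pmu_enable_event_irq(event);
	isb();	/* type/interrupt writes must land before counting restarts */

	armv8pmu_enable_event_counter(event);
}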
2020-09-28 | arm64: perf: Add support caps under sysfs | Shaokun Zhang | 1 | -33/+70
ARMv8.4-PMU introduces the PMMIR_EL1 register, and some new PMU events, such as STALL_SLOT, are related to it. Let's add a caps directory to /sys/bus/event_source/devices/armv8_pmuv3_0/ and support slots from the PMMIR_EL1 register in this entry. User programs can get the slots from sysfs directly. /sys/bus/event_source/devices/armv8_pmuv3_0/caps/slots is exposed under sysfs. If both ARMv8.4-PMU and the STALL_SLOT event are implemented, it returns the slots from PMMIR_EL1; otherwise it returns 0. Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/1600754025-53535-1-git-send-email-zhangshaokun@hisilicon.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28 | Merge branch 'irq/ipi-as-irq', remote-tracking branches 'origin/irq/dw' and 'origin/irq/owl' into irq/irqchip-next | Marc Zyngier | 1 | -1/+3
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-09-24 | kbuild: remove cc-option test of -Werror=date-time | Masahiro Yamada | 1 | -1/+1
The minimal compiler versions, GCC 4.9 and Clang 10, support this flag. Here is the godbolt: https://godbolt.org/z/xvjcMa Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Reviewed-by: Nathan Chancellor <natechancellor@gmail.com> Acked-by: Will Deacon <will@kernel.org>
2020-09-24 | kbuild: remove cc-option test of -fno-strict-overflow | Masahiro Yamada | 1 | -1/+1
The minimal compiler versions, GCC 4.9 and Clang 10, support this flag. Here is the godbolt: https://godbolt.org/z/odq8h9 Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Reviewed-by: Nathan Chancellor <natechancellor@gmail.com> Acked-by: Will Deacon <will@kernel.org>
2020-09-24 | kbuild: preprocess module linker script | Masahiro Yamada | 1 | -5/+0
There was a request to preprocess the module linker script like we do for the vmlinux one. (https://lkml.org/lkml/2020/8/21/512) The difference between vmlinux.lds and module.lds is that the latter is needed for external module builds, thus must be cleaned up by 'make mrproper' instead of 'make clean'. Also, it must be created by 'make modules_prepare'. You cannot put it in arch/$(SRCARCH)/kernel/, which is cleaned up by 'make clean'. I moved arch/$(SRCARCH)/kernel/module.lds to arch/$(SRCARCH)/include/asm/module.lds.h, which is included from scripts/module.lds.S. scripts/module.lds is fine because 'make clean' keeps all the build artifacts under scripts/. You can add arch-specific sections in <asm/module.lds.h>. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Tested-by: Jessica Yu <jeyu@kernel.org> Acked-by: Will Deacon <will@kernel.org> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Acked-by: Palmer Dabbelt <palmerdabbelt@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Jessica Yu <jeyu@kernel.org>
2020-09-21 | arm64: Move console stack display code to stacktrace.c | Mark Brown | 2 | -65/+65
Currently the code for displaying a stack trace on the console is located in traps.c rather than stacktrace.c, using the unwinding code that is in stacktrace.c. This can be confusing and make the code hard to find since such output is often referred to as a stack trace which might mislead the unwary. Due to this and since traps.c doesn't interact with this code except for via the public interfaces move the code to stacktrace.c to make it easier to find. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20200921122341.11280-1-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21 | arm64: Run ARCH_WORKAROUND_1 enabling code on all CPUs | Marc Zyngier | 1 | -0/+8
Commit 73f381660959 ("arm64: Advertise mitigation of Spectre-v2, or lack thereof") changed the way we deal with ARCH_WORKAROUND_1, by moving most of the enabling code to the .matches() callback. This has the unfortunate effect that the workaround gets only enabled on the first affected CPU, and no other. In order to address this, forcefully call the .matches() callback from a .cpu_enable() callback, which brings us back to the original behaviour. Fixes: 73f381660959 ("arm64: Advertise mitigation of Spectre-v2, or lack thereof") Cc: <stable@vger.kernel.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21 | arm64: Make use of ARCH_WORKAROUND_1 even when KVM is not enabled | Marc Zyngier | 1 | -2/+5
We seem to be pretending that we don't have any firmware mitigation when KVM is not compiled in, which is not quite expected. Bring back the mitigation in this case. Fixes: 4db61fef16a1 ("arm64: kvm: Modernize __smccc_workaround_1_smc_start annotations") Cc: <stable@vger.kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21 | arm64/sve: Implement a helper to load SVE registers from FPSIMD state | Julien Grall | 1 | -0/+17
In a follow-up patch, we may save the FPSIMD rather than the full SVE state when the state has to be zeroed on return to userspace (e.g. during a syscall). Introduce a helper to load SVE vectors from FPSIMD state and zero the rest of the SVE registers. Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20200828181155.17745-7-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21 | arm64/sve: Implement a helper to flush SVE registers | Julien Grall | 1 | -0/+8
Introduce a new helper that will zero all SVE registers but the first 128 bits of each vector. This will be used by subsequent patches to avoid costly store/manipulate/reload sequences in places like do_sve_acc(). Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20200828181155.17745-6-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21 | arm64/signal: Update the comment in preserve_sve_context | Julien Grall | 1 | -1/+2
The SVE state is saved by fpsimd_signal_preserve_current_state() and not preserve_fpsimd_context(). Update the comment in preserve_sve_context to reflect the current behavior. Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20200828181155.17745-3-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21 | arm64/fpsimd: Update documentation of do_sve_acc | Julien Grall | 1 | -1/+1
fpsimd_restore_current_state() enables and disables the SVE access trap based on TIF_SVE, not task_fpsimd_load(). Update the documentation of do_sve_acc to reflect this behavior. Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20200828181155.17745-2-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18 | arch_topology, arm, arm64: define arch_scale_freq_invariant() | Valentin Schneider | 1 | -0/+7
arch_scale_freq_invariant() is used by schedutil to determine whether the scheduler's load-tracking signals are frequency invariant. Its definition is overridable, though by default it is hardcoded to 'true' if arch_scale_freq_capacity() is defined ('false' otherwise). This behaviour is not overridden on arm, arm64 and other users of the generic arch topology driver, which is somewhat precarious: arch_scale_freq_capacity() will always be defined, yet not all cpufreq drivers are guaranteed to drive the frequency invariance scale factor setting. In other words, the load-tracking signals may very well *not* be frequency invariant. Now that cpufreq can be queried on whether the current driver is driving the Frequency Invariance (FI) scale setting, the current situation can be improved. This combines the query of whether cpufreq supports the setting of the frequency scale factor, with whether all online CPUs are counter-based FI enabled. While cpufreq FI enablement applies at system level, for all CPUs, counter-based FI support could also be used for only a subset of CPUs to set the invariance scale factor. Therefore, if cpufreq-based FI support is present, we consider the system to be invariant. If missing, we require all online CPUs to be counter-based FI enabled in order for the full system to be considered invariant. If the system ends up not being invariant, a new condition is needed in the counter initialization code that disables all scale factor setting based on counters. Precedence of counters over cpufreq use is not important here. The invariant status is only given to the system if all CPUs have at least one method of setting the frequency scale factor. Signed-off-by: Valentin Schneider <valentin.schneider@arm.com> Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
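The resulting decision logic is small. A sketch of how it could look in the generic topology code, assuming cpufreq exposes a cpufreq_supports_freq_invariance() query and that the set of counter-based-FI CPUs is tracked in a cpumask; both of those names are illustrative rather than quoted from the patch.

#include <linux/cpufreq.h>
#include <linux/cpumask.h>

/* CPUs whose frequency scale factor is driven by activity counters. */
static cpumask_var_t counter_fie_cpus;

bool topology_scale_freq_invariant(void)
{
	/* cpufreq-driven frequency invariance covers the whole system... */
	if (cpufreq_supports_freq_invariance())
		return true;

	/* ...otherwise every online CPU must be counter-based FI capable. */
	return cpumask_available(counter_fie_cpus) &&
	       cpumask_subset(cpu_online_mask, counter_fie_cpus);
}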
2020-09-18 | arch_topology, cpufreq: constify arch_* cpumasks | Valentin Schneider | 1 | -1/+1
The passed cpumask arguments to arch_set_freq_scale() and arch_freq_counters_available() are only iterated over, so reflect this in the prototype. This also allows to pass system cpumasks like cpu_online_mask without getting a warning. Signed-off-by: Valentin Schneider <valentin.schneider@arm.com> Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2020-09-18 | arm64: Fix -Wunused-function warning when !CONFIG_HOTPLUG_CPU | YueHaibing | 1 | -1/+3
If CONFIG_HOTPLUG_CPU is n, gcc warns:

arch/arm64/kernel/smp.c:967:13: warning: ‘ipi_teardown’ defined but not used [-Wunused-function]
 static void ipi_teardown(int cpu)
             ^~~~~~~~~~~~

Guard it with #ifdef CONFIG_HOTPLUG_CPU. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200918123318.23764-1-yuehaibing@huawei.com
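The fix is the usual preprocessor guard, sketched here with the function body elided:

#ifdef CONFIG_HOTPLUG_CPU
/* ipi_teardown() is only referenced from the CPU hot-unplug path. */
static void ipi_teardown(int cpu)
{
	/* ... body unchanged ... */
}
#endif	/* CONFIG_HOTPLUG_CPU */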
2020-09-18 | arm64: Improve diagnostics when trapping BRK with FAULT_BRK_IMM | Will Deacon | 2 | -1/+18
When generating instructions at runtime, for example due to kernel text patching or the BPF JIT, we can emit a trapping BRK instruction if we are asked to encode an invalid instruction such as an out-of-range branch. This is indicative of a bug in the caller, and will result in a crash on executing the generated code. Unfortunately, the message from the crash is really unhelpful, and mumbles something about ptrace:

| Unexpected kernel BRK exception at EL1
| Internal error: ptrace BRK handler: f2000100 [#1] SMP

We can do better than this. Install a break handler for FAULT_BRK_IMM, which is the immediate used to encode the "I've been asked to generate an invalid instruction" error, and triage the faulting PC to determine whether or not the failure occurred in the BPF JIT. Link: https://lore.kernel.org/r/20200915141707.GB26439@willie-the-truck Reported-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
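The mechanism is the arm64 kernel break-hook interface: register a hook keyed on the BRK immediate and print something actionable from its handler. A sketch under those assumptions; the handler name and message are illustrative, and the BPF-region check is shown as a hypothetical in_bpf_jit() helper.

static int reserved_fault_handler(struct pt_regs *regs, unsigned int esr)
{
	pr_err("%s generated an invalid instruction at %pS!\n",
	       in_bpf_jit(regs) ? "BPF JIT" : "Kernel text patching",
	       (void *)instruction_pointer(regs));

	/* This is unrecoverable: let the normal die() path take over. */
	return DBG_HOOK_ERROR;
}

static struct break_hook fault_break_hook = {
	.fn  = reserved_fault_handler,
	.imm = FAULT_BRK_IMM,
};

/* register_kernel_break_hook(&fault_break_hook) is then called once at boot. */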
2020-09-18 | arm64/fpsimd: Fix missing-prototypes in fpsimd.c | Tian Tao | 1 | -0/+2
Fix the following warnings.

arch/arm64/kernel/fpsimd.c:935:6: warning: no previous prototype for ‘do_sve_acc’ [-Wmissing-prototypes]
arch/arm64/kernel/fpsimd.c:962:6: warning: no previous prototype for ‘do_fpsimd_acc’ [-Wmissing-prototypes]
arch/arm64/kernel/fpsimd.c:971:6: warning: no previous prototype for ‘do_fpsimd_exc’ [-Wmissing-prototypes]
arch/arm64/kernel/fpsimd.c:1266:6: warning: no previous prototype for ‘kernel_neon_begin’ [-Wmissing-prototypes]
arch/arm64/kernel/fpsimd.c:1292:6: warning: no previous prototype for ‘kernel_neon_end’ [-Wmissing-prototypes]

Signed-off-by: Tian Tao <tiantao6@hisilicon.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/1600157999-14802-1-git-send-email-tiantao6@hisilicon.com Signed-off-by: Will Deacon <will@kernel.org>
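A straightforward fix for this class of warning is to give the handlers visible declarations in a header that fpsimd.c includes. A sketch follows; the exact header locations and the esr parameter type are assumptions based on the arm64 conventions of this era, not quoted from the patch.

/* e.g. in arch/arm64/include/asm/exception.h or <asm/fpsimd.h> */
struct pt_regs;

void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs);
void do_sve_acc(unsigned int esr, struct pt_regs *regs);
void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs);

/* e.g. in <asm/neon.h> */
void kernel_neon_begin(void);
void kernel_neon_end(void);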
2020-09-18 | arm64: stacktrace: Convert to ARCH_STACKWALK | Mark Brown | 1 | -69/+10
Historically architectures have had duplicated code in their stack trace implementations for filtering what gets traced. In order to avoid this duplication some generic code has been provided using a new interface arch_stack_walk(), enabled by selecting ARCH_STACKWALK in Kconfig, which factors all this out into the generic stack trace code. Convert arm64 to use this common infrastructure. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Link: https://lore.kernel.org/r/20200914153409.25097-4-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-18 | arm64: stacktrace: Make stack walk callback consistent with generic code | Mark Brown | 3 | -13/+12
As with the generic arch_stack_walk() code the arm64 stack walk code takes a callback that is called per stack frame. Currently the arm64 code always passes a struct stackframe to the callback and the generic code just passes the pc, however none of the users ever reference anything in the struct other than the pc value. The arm64 code also uses a return type of int while the generic code uses a return type of bool though in both cases the return value is a boolean value and the sense is inverted between the two. In order to reduce code duplication when arm64 is converted to use arch_stack_walk() change the signature and return sense of the arm64 specific callback to match that of the generic code. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Link: https://lore.kernel.org/r/20200914153409.25097-3-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
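Concretely, the change aligns the arm64 callback with the generic consumer type. A sketch of the before/after shapes; the generic typedef is quoted from memory and should be treated as approximate.

struct stackframe;

/* Before: arm64-specific callback, int return, 0 means "keep walking". */
int (*old_fn)(struct stackframe *frame, void *data);

/* After: matches the generic consumer, true means "keep walking". */
typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr);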
2020-09-17 | arm64: paravirt: Initialize steal time when cpu is online | Andrew Jones | 1 | -11/+15
Steal time initialization requires mapping a memory region which invokes a memory allocation. Doing this at CPU starting time results in the following trace when CONFIG_DEBUG_ATOMIC_SLEEP is enabled:

BUG: sleeping function called from invalid context at mm/slab.h:498
in_atomic(): 1, irqs_disabled(): 128, non_block: 0, pid: 0, name: swapper/1
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.9.0-rc5+ #1
Call trace:
 dump_backtrace+0x0/0x208
 show_stack+0x1c/0x28
 dump_stack+0xc4/0x11c
 ___might_sleep+0xf8/0x130
 __might_sleep+0x58/0x90
 slab_pre_alloc_hook.constprop.101+0xd0/0x118
 kmem_cache_alloc_node_trace+0x84/0x270
 __get_vm_area_node+0x88/0x210
 get_vm_area_caller+0x38/0x40
 __ioremap_caller+0x70/0xf8
 ioremap_cache+0x78/0xb0
 memremap+0x9c/0x1a8
 init_stolen_time_cpu+0x54/0xf0
 cpuhp_invoke_callback+0xa8/0x720
 notify_cpu_starting+0xc8/0xd8
 secondary_start_kernel+0x114/0x180
CPU1: Booted secondary processor 0x0000000001 [0x431f0a11]

However we don't need to initialize steal time at CPU starting time. We can simply wait until CPU online time, just sacrificing a bit of accuracy by returning zero for steal time until we know better. While at it, add __init to the functions that are only called by pv_time_init() which is __init. Signed-off-by: Andrew Jones <drjones@redhat.com> Fixes: e0685fa228fd ("arm64: Retrieve stolen time as paravirtualized guest") Cc: stable@vger.kernel.org Reviewed-by: Steven Price <steven.price@arm.com> Link: https://lore.kernel.org/r/20200916154530.40809-1-drjones@redhat.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-09-17 | Merge remote-tracking branch 'origin/irq/ipi-as-irq' into irq/irqchip-next | Marc Zyngier | 2 | -54/+84
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-09-17 | arm64: Remove custom IRQ stat accounting | Marc Zyngier | 2 | -28/+13
Let's switch the arm64 code to the core accounting, which already does everything we need. Reviewed-by: Valentin Schneider <valentin.schneider@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-09-17 | arm64: Kill __smp_cross_call and co | Marc Zyngier | 1 | -31/+7
The old IPI registration interface is now unused on arm64, so let's get rid of it. Reviewed-by: Valentin Schneider <valentin.schneider@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-09-14 | arm64: hibernate: Remove unused including <linux/version.h> | Tian Tao | 1 | -1/+0
Remove the include of <linux/version.h>, which is not needed. Signed-off-by: Tian Tao <tiantao6@hisilicon.com> Link: https://lore.kernel.org/r/1600068522-54499-1-git-send-email-tiantao6@hisilicon.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-14 | arm64/mm: Refactor {pgd, pud, pmd, pte}_ERROR() | Gavin Shan | 1 | -20/+0
The functions __{pgd, pud, pmd, pte}_error() are introduced so that they can be called by {pgd, pud, pmd, pte}_ERROR(). However, some of the functions could never be called when the corresponding page table level isn't enabled. For example, __{pud, pmd}_error() are unused when PUD and PMD are folded into PGD. This removes __{pgd, pud, pmd, pte}_error() and calls pr_err() from {pgd, pud, pmd, pte}_ERROR() directly, similar to what x86/powerpc are doing. With this, the code is also a bit simpler. Signed-off-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20200913234730.23145-1-gshan@redhat.com Signed-off-by: Will Deacon <will@kernel.org>
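The simplified form just reports from the macro itself. A sketch of what one of the four ends up looking like; the exact format string is illustrative.

#define pte_ERROR(e)	\
	pr_err("%s:%d: bad pte %016llx.\n", __FILE__, __LINE__, pte_val(e))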
2020-09-14 | arm64: kprobe: clarify the comment of steppable hint instructions | Amit Daniel Kachhap | 1 | -2/+4
The existing comment about steppable hint instructions is not complete and only describes NOP instructions as steppable. As the function aarch64_insn_is_steppable_hint() allows all white-listed instructions to be probed, the comment is updated to reflect this. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Dave Martin <dave.martin@arm.com> Link: https://lore.kernel.org/r/20200914083656.21428-7-amit.kachhap@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-14 | arm64: kprobe: disable probe of fault prone ptrauth instruction | Amit Daniel Kachhap | 1 | -6/+0
With the addition of the ARMv8.3-FPAC feature, probing the authenticate ptrauth instructions (AUT*) may cause a ptrauth fault exception in case of authentication failure, so they cannot be safely single stepped. Hence the probe of authenticate instructions is disallowed, but the corresponding pac ptrauth instructions (PAC*) are not affected and can still be probed. Also, AUT* instructions do not make sense at function entry points, so most realistic probes would be unaffected by this change. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Dave Martin <dave.martin@arm.com> Link: https://lore.kernel.org/r/20200914083656.21428-6-amit.kachhap@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-14 | arm64: cpufeature: Modify address authentication cpufeature to exact | Amit Daniel Kachhap | 1 | -9/+35
The current address authentication cpufeature levels are set as LOWER_SAFE, which is not compatible with the different configurations added for the Armv8.3 ptrauth enhancements, as the different levels have different behaviour and there is no tunable to enable the lower safe versions. This is rectified by setting those cpufeature types to EXACT. The current cpufeature framework also does not interfere in the booting of non-exact secondary cpus but rather marks them as tainted. As a workaround this is fixed by replacing the generic match handler with a new handler specific to ptrauth. After this change, if there is any variation in ptrauth configurations in secondary cpus from the boot cpu then those mismatched cpus are parked in an infinite loop. The following ptrauth crash log is observed in the Arm fastmodel with simulated mismatched cpus without this fix:

CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64ISAR1_EL1. Boot CPU: 0x11111110211402, CPU4: 0x11111110211102
CPU features: Unsupported CPU feature variation detected.
GICv3: CPU4: found redistributor 100 region 0:0x000000002f180000
CPU4: Booted secondary processor 0x0000000100 [0x410fd0f0]
Unable to handle kernel paging request at virtual address bfff800010dadf3c
Mem abort info:
  ESR = 0x86000004
  EC = 0x21: IABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
[bfff800010dadf3c] address between user and kernel address ranges
Internal error: Oops: 86000004 [#1] PREEMPT SMP
Modules linked in:
CPU: 4 PID: 29 Comm: migration/4 Tainted: G S 5.8.0-rc4-00005-ge658591d66d1-dirty #158
Hardware name: Foundation-v8A (DT)
pstate: 60000089 (nZCv daIf -PAN -UAO BTYPE=--)
pc : 0xbfff800010dadf3c
lr : __schedule+0x2b4/0x5a8
sp : ffff800012043d70
x29: ffff800012043d70 x28: 0080000000000000
x27: ffff800011cbe000 x26: ffff00087ad37580
x25: ffff00087ad37000 x24: ffff800010de7d50
x23: ffff800011674018 x22: 0784800010dae2a8
x21: ffff00087ad37000 x20: ffff00087acb8000
x19: ffff00087f742100 x18: 0000000000000030
x17: 0000000000000000 x16: 0000000000000000
x15: ffff800011ac1000 x14: 00000000000001bd
x13: 0000000000000000 x12: 0000000000000000
x11: 0000000000000000 x10: 71519a147ddfeb82
x9 : 825d5ec0fb246314 x8 : ffff00087ad37dd8
x7 : 0000000000000000 x6 : 00000000fffedb0e
x5 : 00000000ffffffff x4 : 0000000000000000
x3 : 0000000000000028 x2 : ffff80086e11e000
x1 : ffff00087ad37000 x0 : ffff00087acdc600
Call trace:
 0xbfff800010dadf3c
 schedule+0x78/0x110
 schedule_preempt_disabled+0x24/0x40
 __kthread_parkme+0x68/0xd0
 kthread+0x138/0x160
 ret_from_fork+0x10/0x34
Code: bad PC value

After this fix, the mismatched CPU4 is parked as:

CPU features: CPU4: Detected conflict for capability 39 (Address authentication (IMP DEF algorithm)), System: 1, CPU: 0
CPU4: will not boot
CPU4: failed to come online
CPU4: died during early boot

[Suzuki: Introduce new matching function for address authentication] Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200914083656.21428-5-amit.kachhap@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-14 | arm64: ptrauth: Introduce Armv8.3 pointer authentication enhancements | Amit Daniel Kachhap | 2 | -0/+33
Some Armv8.3 Pointer Authentication enhancements have been introduced which are mandatory for Armv8.6 and optional for Armv8.3. These features are, * ARMv8.3-PAuth2 - An enhanced PAC generation logic is added which hardens finding the correct PAC value of the authenticated pointer. * ARMv8.3-FPAC - Fault is generated now when the ptrauth authentication instruction fails in authenticating the PAC present in the address. This is different from earlier case when such failures just adds an error code in the top byte and waits for subsequent load/store to abort. The ptrauth instructions which may cause this fault are autiasp, retaa etc. The above features are now represented by additional configurations for the Address Authentication cpufeature and a new ESR exception class. The userspace fault received in the kernel due to ARMv8.3-FPAC is treated as Illegal instruction and hence signal SIGILL is injected with ILL_ILLOPN as the signal code. Note that this is different from earlier ARMv8.3 ptrauth where signal SIGSEGV is issued due to Pointer authentication failures. The in-kernel PAC fault causes kernel to crash. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20200914083656.21428-4-amit.kachhap@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-14 | arm64: traps: Allow force_signal_inject to pass esr error code | Amit Daniel Kachhap | 2 | -9/+9
Some error signals need to pass a proper ARM ESR error code to userspace to better identify the cause of the signal. So the function force_signal_inject() is extended to pass this as a parameter. The existing code is not affected by this change. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20200914083656.21428-3-amit.kachhap@arm.com Signed-off-by: Will Deacon <will@kernel.org>
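The extension amounts to one extra parameter carrying the ESR value, with existing callers passing 0. A sketch of the new shape and of a caller that actually wants the ESR; the handler name and exact prototype are illustrative.

void force_signal_inject(int signal, int code, unsigned long address,
			 unsigned int err);

/* A caller that has a meaningful ESR to report (e.g. an FPAC fault): */
static void do_ptrauth_fault(struct pt_regs *regs, unsigned int esr)
{
	force_signal_inject(SIGILL, ILL_ILLOPN, regs->pc, esr);
}

/* Existing callers simply pass 0 as the last argument and behave as before. */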
2020-09-14 | arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions | Amit Daniel Kachhap | 2 | -2/+6
Currently the ARMv8.3-PAuth combined branch instructions (braa, retaa etc.) are not simulated for out-of-line execution with a handler. Hence the uprobe of such instructions leads to kernel warnings in a loop as they are not explicitly checked and fall into INSN_GOOD categories. Other combined instructions like LDRAA and LDRBB can be probed. The issue of the combined branch instructions is fixed by adding group definitions of all such instructions and rejecting their probes. The instruction groups added are br_auth(braa, brab, braaz and brabz), blr_auth(blraa, blrab, blraaz and blrabz), ret_auth(retaa and retab) and eret_auth(eretaa and eretab).

Warning log:

WARNING: CPU: 0 PID: 156 at arch/arm64/kernel/probes/uprobes.c:182 uprobe_single_step_handler+0x34/0x50
Modules linked in:
CPU: 0 PID: 156 Comm: func Not tainted 5.9.0-rc3 #188
Hardware name: Foundation-v8A (DT)
pstate: 804003c9 (Nzcv DAIF +PAN -UAO BTYPE=--)
pc : uprobe_single_step_handler+0x34/0x50
lr : single_step_handler+0x70/0xf8
sp : ffff800012af3e30
x29: ffff800012af3e30 x28: ffff000878723b00
x27: 0000000000000000 x26: 0000000000000000
x25: 0000000000000000 x24: 0000000000000000
x23: 0000000060001000 x22: 00000000cb000022
x21: ffff800012065ce8 x20: ffff800012af3ec0
x19: ffff800012068d50 x18: 0000000000000000
x17: 0000000000000000 x16: 0000000000000000
x15: 0000000000000000 x14: 0000000000000000
x13: 0000000000000000 x12: 0000000000000000
x11: 0000000000000000 x10: 0000000000000000
x9 : ffff800010085c90 x8 : 0000000000000000
x7 : 0000000000000000 x6 : ffff80001205a9c8
x5 : ffff80001205a000 x4 : ffff80001233db80
x3 : ffff8000100a7a60 x2 : 0020000000000003
x1 : 0000fffffffff008 x0 : ffff800012af3ec0
Call trace:
 uprobe_single_step_handler+0x34/0x50
 single_step_handler+0x70/0xf8
 do_debug_exception+0xb8/0x130
 el0_sync_handler+0x138/0x1b8
 el0_sync+0x158/0x180

Fixes: 74afda4016a7 ("arm64: compile the kernel with ptrauth return address signing") Fixes: 04ca3204fa09 ("arm64: enable pointer authentication") Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20200914083656.21428-2-amit.kachhap@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-13 | irqchip/gic-v3: Support pseudo-NMIs when SCR_EL3.FIQ == 0 | Alexandru Elisei | 1 | -0/+2
The GIC's internal view of the priority mask register and the assigned interrupt priorities are based on whether GIC security is enabled and whether firmware routes Group 0 interrupts to EL3. At the moment, we support priority masking when ICC_PMR_EL1 and interrupt priorities are either both modified by the GIC, or both left unchanged. Trusted Firmware-A's default interrupt routing model allows Group 0 interrupts to be delivered to the non-secure world (SCR_EL3.FIQ == 0). Unfortunately, this is precisely the case that the GIC driver doesn't support: ICC_PMR_EL1 remains unchanged, but the GIC's view of interrupt priorities is different from the software programmed values. Support pseudo-NMIs when SCR_EL3.FIQ == 0 by using a different value to mask regular interrupts. All the other values remain the same. Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200912153707.667731-3-alexandru.elisei@arm.com
2020-09-13 | arm64: Allow IPIs to be handled as normal interrupts | Marc Zyngier | 1 | -12/+81
In order to deal with IPIs as normal interrupts, let's add a new way to register them with the architecture code. set_smp_ipi_range() takes a range of interrupts, and allows the arch code to request them as if they were normal interrupts. A standard handler is then called by the core IRQ code to deal with the IPI. This means that we don't need to call irq_enter/irq_exit, and that we don't need to deal with set_irq_regs either. So let's move the dispatcher into its own function, and leave handle_IPI() as a compatibility function. On the sending side, let's make use of ipi_send_mask, which already exists for this purpose. One of the major differences is that, in some cases (such as when performing IRQ time accounting on the scheduler IPI), we end up with nested irq_enter()/irq_exit() pairs. Other than the (relatively small) overhead, there should be no consequences to it (these pairs are designed to nest correctly, and the accounting shouldn't be off). Reviewed-by: Valentin Schneider <valentin.schneider@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-09-13 | arm64: Allow CPUs unaffected by ARM erratum 1418040 to come in late | Marc Zyngier | 1 | -2/+6
Now that we allow CPUs affected by erratum 1418040 to come in late, this prevents their unaffected siblings from coming in late (or coming back after a suspend or hotplug-off, which amounts to the same thing). To allow this, we need to add ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU, which amounts to setting .type to ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE. Fixes: bf87bb0881d0 ("arm64: Allow booting of late CPUs affected by erratum 1418040") Reported-by: Matthias Kaehlcke <mka@chromium.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Tested-by: Matthias Kaehlcke <mka@chromium.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200911181611.2073183-1-maz@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
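In cpu_errata.c terms the fix is a one-field change on the erratum entry. A sketch with the MIDR list name abbreviated and other fields elided; only the .type assignment is the part described by this commit.

{
	.desc = "ARM erratum 1418040",
	.capability = ARM64_WORKAROUND_1418040,
	ERRATA_MIDR_RANGE_LIST(erratum_1418040_list),
	/*
	 * "Weak" local-CPU feature: detected per CPU, and late CPUs are
	 * accepted whether or not they are affected.
	 */
	.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
},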