path: root/arch/arm64/kernel
Age  Commit message  (Author, files changed, -lines/+lines)
2019-06-26  arm64: vdso: Fix compilation with clang older than 8  (Vincenzo Frascino, 1 file, -0/+7)
clang versions older than 8 do not support -mcmodel=tiny. Add a check to the vDSO Makefile for arm64 to remove the flag when these versions of the compiler are detected. Reported-by: Qian Cai <cai@lca.pw> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Qian Cai <cai@lca.pw> Cc: linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mips@vger.kernel.org Cc: linux-kselftest@vger.kernel.org Cc: catalin.marinas@arm.com Cc: will.deacon@arm.com Cc: arnd@arndb.de Cc: linux@armlinux.org.uk Cc: ralf@linux-mips.org Cc: paul.burton@mips.com Cc: daniel.lezcano@linaro.org Cc: salyzyn@android.com Cc: pcc@google.com Cc: shuah@kernel.org Cc: 0x7f454c46@gmail.com Cc: linux@rasmusvillemoes.dk Cc: huw@codeweavers.com Cc: sthotton@marvell.com Cc: andre.przywara@arm.com Cc: luto@kernel.org Link: https://lkml.kernel.org/r/20190626113632.9295-1-vincenzo.frascino@arm.com
2019-06-26  arm64/efi: Mark __efistub_stext_offset as an absolute symbol explicitly  (Nathan Chancellor, 1 file, -1/+5)
After r363059 and r363928 in LLVM, a build using ld.lld as the linker with CONFIG_RANDOMIZE_BASE enabled fails like so: ld.lld: error: relocation R_AARCH64_ABS32 cannot be used against symbol __efistub_stext_offset; recompile with -fPIC Fangrui and Peter figured out that ld.lld is incorrectly considering __efistub_stext_offset as a relative symbol because of the order in which symbols are evaluated. _text is treated as an absolute symbol and stext is a relative symbol, making __efistub_stext_offset a relative symbol. Adding ABSOLUTE will force ld.lld to evaluate this expression in the right context and does not change ld.bfd's behavior. ld.lld will need to be fixed but the developers do not see a quick or simple fix without some research (see the linked issue for further explanation). Add this simple workaround so that ld.lld can continue to link kernels. Link: https://github.com/ClangBuiltLinux/linux/issues/561 Link: https://github.com/llvm/llvm-project/commit/025a815d75d2356f2944136269aa5874721ec236 Link: https://github.com/llvm/llvm-project/commit/249fde85832c33f8b06c6b4ac65d1c4b96d23b83 Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Debugged-by: Fangrui Song <maskray@google.com> Debugged-by: Peter Smith <peter.smith@linaro.org> Suggested-by: Fangrui Song <maskray@google.com> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> [will: add comment] Signed-off-by: Will Deacon <will@kernel.org>
2019-06-26  arm64: kaslr: keep modules inside module region when KASAN is enabled  (Ard Biesheuvel, 1 file, -2/+6)
When KASLR and KASAN are both enabled, we keep the modules where they are, and randomize the placement of the kernel so it is within 2 GB of the module region. The reason for this is that putting modules in the vmalloc region (like we normally do when KASLR is enabled) is not possible in this case, given that the entire vmalloc region is already backed by KASAN zero shadow pages, and so allocating dedicated KASAN shadow space as required by loaded modules is not possible. The default module allocation window is set to [_etext - 128MB, _etext] in kaslr.c, which is appropriate for KASLR kernels booted without a seed or with 'nokaslr' on the command line. However, as it turns out, it is not quite correct for the KASAN case, since it still intersects the vmalloc region at the top, where attempts to allocate shadow pages will collide with the KASAN zero shadow pages, causing a WARN() and all kinds of other trouble. So cap the top end to MODULES_END explicitly when running with KASAN. Cc: <stable@vger.kernel.org> # 4.9+ Acked-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
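For illustration, a rough sketch of the clamp this describes (not the actual patch; the symbols are standard arm64 kernel ones, but the exact code in kaslr.c differs):

  static void sketch_clamp_module_range(u64 *base, u64 *end)
  {
          /* Default window: [_etext - 128MB, _etext], as described above. */
          *base = (u64)_etext - SZ_128M;
          *end  = (u64)_etext;

          /*
           * With KASAN, keep the window inside the module region so module
           * shadow allocations never collide with the KASAN zero shadow
           * that backs the vmalloc area.
           */
          if (IS_ENABLED(CONFIG_KASAN))
                  *end = min_t(u64, *end, MODULES_END);
  }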
2019-06-26  arm64: vdso: Remove unnecessary asm-offsets.c definitions  (Catalin Marinas, 1 file, -39/+0)
Since the VDSO code has moved to C from assembly, there is no need to define and maintain the corresponding asm offsets. Fixes: 28b1a824a4f4 ("arm64: vdso: Substitute gettimeofday() with C implementation") Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mips@vger.kernel.org Cc: linux-kselftest@vger.kernel.org Cc: Will Deacon <will.deacon@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Russell King <linux@armlinux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paul.burton@mips.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Mark Salyzyn <salyzyn@android.com> Cc: Peter Collingbourne <pcc@google.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Dmitry Safonov <0x7f454c46@gmail.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Huw Davies <huw@codeweavers.com> Cc: Shijith Thotton <sthotton@marvell.com> Cc: Andre Przywara <andre.przywara@arm.com> Link: https://lkml.kernel.org/r/20190624135812.GC29120@arrakis.emea.arm.com
2019-06-25  arm64: Implement panic_smp_self_stop()  (Aaro Koskinen, 1 file, -6/+13)
Currently arm64 uses the default implementation of panic_smp_self_stop() where the CPU runs in a cpu_relax() loop, unable to receive IPIs anymore. As a result, when two CPUs panic() simultaneously we get "SMP: failed to stop secondary CPUs" warnings and extra delays before a reset, because smp_send_stop() still tries to stop the other panicked CPU. Provide an implementation of panic_smp_self_stop() that is identical to the IPI CPU stop handler, so that the online status of stopped CPUs gets properly updated. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Aaro Koskinen <aaro.koskinen@nokia.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
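A rough sketch of such an implementation, mirroring the IPI stop handler (simplified; not the exact patch):

  void panic_smp_self_stop(void)
  {
          /* Behave like the IPI CPU-stop handler: mark this CPU offline so
           * smp_send_stop() does not wait for it, then park. */
          local_irq_disable();
          set_cpu_online(smp_processor_id(), false);

          while (1)
                  wfe();          /* low-power park, as cpu_park_loop() does */
  }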
2019-06-25  arm64: Improve parking of stopped CPUs  (Jayachandran C, 1 file, -3/+1)
The current code puts the stopped CPUs in a 'yield' instruction loop. A busy loop here is unnecessary; we can use the cpu_park_loop() function to do a wfi/wfe instead. Signed-off-by: Jayachandran C <jnair@caviumnetworks.com> Signed-off-by: Aaro Koskinen <aaro.koskinen@nokia.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-06-25  arm64: Expose FRINT capabilities to userspace  (Mark Brown, 2 files, -0/+2)
ARMv8.5 introduces the FRINT series of instructions for rounding floating point numbers to integers. Provide a capability to userspace in order to allow applications to determine if the system supports these instructions. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
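From userspace, the new capability can be probed through the auxiliary vector, for example (assuming kernel headers that define HWCAP2_FRINT):

  #include <sys/auxv.h>
  #include <asm/hwcap.h>
  #include <stdio.h>

  int main(void)
  {
          unsigned long hwcap2 = getauxval(AT_HWCAP2);

          if (hwcap2 & HWCAP2_FRINT)
                  printf("FRINT rounding instructions supported\n");
          return 0;
  }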
2019-06-25  arm64: Expose ARMv8.5 CondM capability to userspace  (Mark Brown, 2 files, -0/+2)
ARMv8.5 adds new instructions XAFLAG and AXFLAG to translate the representation of the results of floating point comparisons between the native ARM format and an alternative format used by some software. Add a hwcap allowing userspace to determine if they are present. Since we referred to the earlier CondM extensions as FLAGM, call these extensions FLAGM2. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-06-24  arm64/kprobes: set VM_FLUSH_RESET_PERMS on kprobe instruction pages  (Ard Biesheuvel, 1 file, -1/+3)
In order to avoid transient inconsistencies where freed code pages are remapped writable while stale TLB entries still exist on other cores, mark the kprobes text pages with the VM_FLUSH_RESET_PERMS attribute. This instructs the core vmalloc code not to defer the TLB flush when this region is unmapped and returned to the page allocator. Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
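A sketch of the kind of change this describes (close to, but not necessarily identical to, the actual patch):

  void *alloc_insn_page(void)
  {
          /* VM_FLUSH_RESET_PERMS makes vfree() flush the TLB for this
           * mapping immediately instead of deferring it. */
          return __vmalloc_node_range(PAGE_SIZE, 1, VMALLOC_START, VMALLOC_END,
                                      GFP_KERNEL, PAGE_KERNEL_ROX,
                                      VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
                                      __builtin_return_address(0));
  }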
2019-06-24  arm64: module: create module allocations without exec permissions  (Ard Biesheuvel, 1 file, -2/+2)
Now that the core code manages the executable permissions of code regions of modules explicitly, it is no longer necessary to create the module vmalloc regions with RWX permissions, and we can create them with RW- permissions instead, which is preferred from a security perspective. Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-06-24  acpi/arm64: ignore 5.1 FADTs that are reported as 5.0  (Ard Biesheuvel, 1 file, -3/+7)
Some Qualcomm Snapdragon based laptops built to run Microsoft Windows are clearly ACPI 5.1 based, given that that is the first ACPI revision that supports ARM, and introduced the FADT 'arm_boot_flags' field, which has a non-zero field on those systems. So in these cases, infer from the ARM boot flags that the FADT must be 5.1 or later, and treat it as 5.1. Acked-by: Sudeep Holla <sudeep.holla@arm.com> Tested-by: Lee Jones <lee.jones@linaro.org> Reviewed-by: Graeme Gregory <graeme.gregory@linaro.org> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Hanjun Guo <guohanjun@huawei.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
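The inference boils down to a check of roughly this shape (illustrative only; the field names are ACPICA's, and the exact placement in the arm64 ACPI code differs):

  static bool fadt_is_really_5_1(const struct acpi_table_fadt *fadt)
  {
          /* arm_boot_flags only exists from ACPI 5.1 onwards, so a "5.0"
           * table with any of these flags set must really be 5.1+. */
          return fadt->header.revision == 5 &&
                 fadt->minor_revision == 0 &&
                 fadt->arm_boot_flags != 0;
  }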
2019-06-22  arm64: vdso: Enable vDSO compat support  (Vincenzo Frascino, 1 file, -1/+5)
Add vDSO compat support to the arm64 build system. Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Shijith Thotton <sthotton@marvell.com> Tested-by: Andre Przywara <andre.przywara@arm.com> Cc: linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mips@vger.kernel.org Cc: linux-kselftest@vger.kernel.org Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Russell King <linux@armlinux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paul.burton@mips.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Mark Salyzyn <salyzyn@android.com> Cc: Peter Collingbourne <pcc@google.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Dmitry Safonov <0x7f454c46@gmail.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Huw Davies <huw@codeweavers.com> Link: https://lkml.kernel.org/r/20190621095252.32307-16-vincenzo.frascino@arm.com
2019-06-22  arm64: compat: Get sigreturn trampolines from vDSO  (Vincenzo Frascino, 1 file, -0/+26)
When the compat vDSO is enabled, the sigreturn trampolines are no longer available through [sigpage] but through [vdso]. Add the relevant code to enable the feature. Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Shijith Thotton <sthotton@marvell.com> Tested-by: Andre Przywara <andre.przywara@arm.com> Cc: linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mips@vger.kernel.org Cc: linux-kselftest@vger.kernel.org Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Russell King <linux@armlinux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paul.burton@mips.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Mark Salyzyn <salyzyn@android.com> Cc: Peter Collingbourne <pcc@google.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Dmitry Safonov <0x7f454c46@gmail.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Huw Davies <huw@codeweavers.com> Link: https://lkml.kernel.org/r/20190621095252.32307-15-vincenzo.frascino@arm.com
2019-06-22  arm64: compat: VDSO setup for compat layer  (Vincenzo Frascino, 1 file, -2/+88)
If CONFIG_GENERIC_COMPAT_VDSO is enabled, compat vDSO is installed in a compat (32 bit) process instead of sigpage. Add the necessary code to setup the vDSO required pages. Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Shijith Thotton <sthotton@marvell.com> Tested-by: Andre Przywara <andre.przywara@arm.com> Cc: linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mips@vger.kernel.org Cc: linux-kselftest@vger.kernel.org Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Russell King <linux@armlinux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paul.burton@mips.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Mark Salyzyn <salyzyn@android.com> Cc: Peter Collingbourne <pcc@google.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Dmitry Safonov <0x7f454c46@gmail.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Huw Davies <huw@codeweavers.com> Link: https://lkml.kernel.org/r/20190621095252.32307-13-vincenzo.frascino@arm.com
2019-06-22  arm64: vdso: Refactor vDSO code  (Vincenzo Frascino, 1 file, -71/+144)
Most of the code for initializing the vDSOs in arm64 and compat will be shared, hence refactoring of the current code is required to avoid duplication and to simplify maintainability. No functional change. Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Shijith Thotton <sthotton@marvell.com> Tested-by: Andre Przywara <andre.przywara@arm.com> Cc: linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mips@vger.kernel.org Cc: linux-kselftest@vger.kernel.org Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Russell King <linux@armlinux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paul.burton@mips.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Mark Salyzyn <salyzyn@android.com> Cc: Peter Collingbourne <pcc@google.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Dmitry Safonov <0x7f454c46@gmail.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Huw Davies <huw@codeweavers.com> Link: https://lkml.kernel.org/r/20190621095252.32307-12-vincenzo.frascino@arm.com
2019-06-22  arm64: compat: Add vDSO  (Vincenzo Frascino, 7 files, -0/+425)
Provide the arm64 compat (AArch32) vDSO in kernel/vdso32 in a similar way to what happens in kernel/vdso. The compat vDSO leverages on an adaptation of the arm architecture code with few changes: - Use of lib/vdso for gettimeofday - Implement a syscall based fallback - Introduce clock_getres() for the compat library - Implement trampolines - Implement elf note To build the compat vDSO a 32 bit compiler is required and needs to be specified via CONFIG_CROSS_COMPILE_COMPAT_VDSO. The code is not yet enabled as other prerequisites are missing. Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Shijith Thotton <sthotton@marvell.com> Tested-by: Andre Przywara <andre.przywara@arm.com> Cc: linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mips@vger.kernel.org Cc: linux-kselftest@vger.kernel.org Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Russell King <linux@armlinux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paul.burton@mips.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Mark Salyzyn <salyzyn@android.com> Cc: Peter Collingbourne <pcc@google.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Dmitry Safonov <0x7f454c46@gmail.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Huw Davies <huw@codeweavers.com> Link: https://lkml.kernel.org/r/20190621095252.32307-11-vincenzo.frascino@arm.com
2019-06-22  arm64: compat: Generate asm offsets for signals  (Vincenzo Frascino, 1 file, -0/+6)
Update asm-offsets for arm64 to generate the correct offsets for compat signals. They will be useful for the implementation of the compat sigreturn trampolines in vDSO context. Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Shijith Thotton <sthotton@marvell.com> Tested-by: Andre Przywara <andre.przywara@arm.com> Cc: linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mips@vger.kernel.org Cc: linux-kselftest@vger.kernel.org Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Russell King <linux@armlinux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paul.burton@mips.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Mark Salyzyn <salyzyn@android.com> Cc: Peter Collingbourne <pcc@google.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Dmitry Safonov <0x7f454c46@gmail.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Huw Davies <huw@codeweavers.com> Link: https://lkml.kernel.org/r/20190621095252.32307-9-vincenzo.frascino@arm.com
2019-06-22  arm64: compat: Expose signal related structures  (Vincenzo Frascino, 1 file, -46/+0)
The compat signal data structures are required as part of the compat vDSO implementation in order to provide the unwinding information for the sigreturn trampolines. Expose these data structures as part of signal32.h. Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Shijith Thotton <sthotton@marvell.com> Tested-by: Andre Przywara <andre.przywara@arm.com> Cc: linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mips@vger.kernel.org Cc: linux-kselftest@vger.kernel.org Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Russell King <linux@armlinux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paul.burton@mips.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Mark Salyzyn <salyzyn@android.com> Cc: Peter Collingbourne <pcc@google.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Dmitry Safonov <0x7f454c46@gmail.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Huw Davies <huw@codeweavers.com> Link: https://lkml.kernel.org/r/20190621095252.32307-8-vincenzo.frascino@arm.com
2019-06-22  arm64: vdso: Build vDSO with -ffixed-x18  (Peter Collingbourne, 1 file, -1/+1)
The vDSO needs to be built with x18 reserved in order to accommodate userspace platform ABIs built on top of Linux that use the register to carry inter-procedural state, as provided for by the AAPCS. An example of such a platform ABI is the one that will be used by an upcoming version of Android. Although this change is currently a no-op due to the fact that the vDSO is currently implemented in pure assembly on arm64, it is necessary in order to prepare for using the generic C implementation of the vDSO. [ tglx: Massaged changelog ] Signed-off-by: Peter Collingbourne <pcc@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Shijith Thotton <sthotton@marvell.com> Tested-by: Andre Przywara <andre.przywara@arm.com> Cc: linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mips@vger.kernel.org Cc: linux-kselftest@vger.kernel.org Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Russell King <linux@armlinux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paul.burton@mips.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Mark Salyzyn <salyzyn@android.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Dmitry Safonov <0x7f454c46@gmail.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Huw Davies <huw@codeweavers.com> Cc: Mark Salyzyn <salyzyn@google.com> Link: https://lkml.kernel.org/r/20190621095252.32307-6-vincenzo.frascino@arm.com
2019-06-22  arm64: vdso: Substitute gettimeofday() with C implementation  (Vincenzo Frascino, 5 files, -387/+81)
To take advantage of the commonly defined vdso interface for gettimeofday() the architectural code requires an adaptation. Re-implement the gettimeofday VDSO in C in order to use lib/vdso. With the new implementation arm64 gains support for CLOCK_BOOTTIME and CLOCK_TAI. [ tglx: Reformatted the function line breaks ] Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Shijith Thotton <sthotton@marvell.com> Tested-by: Andre Przywara <andre.przywara@arm.com> Cc: linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mips@vger.kernel.org Cc: linux-kselftest@vger.kernel.org Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Russell King <linux@armlinux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paul.burton@mips.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Mark Salyzyn <salyzyn@android.com> Cc: Peter Collingbourne <pcc@google.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Dmitry Safonov <0x7f454c46@gmail.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Huw Davies <huw@codeweavers.com> Link: https://lkml.kernel.org/r/20190621095252.32307-5-vincenzo.frascino@arm.com
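The resulting arm64 vDSO entry points become thin wrappers around the generic lib/vdso helpers, roughly like this (sketch; the argument types follow the generic vDSO code):

  int __kernel_gettimeofday(struct __kernel_old_timeval *tv, struct timezone *tz)
  {
          /* __cvdso_gettimeofday() reads the vDSO data page and falls back
           * to the real syscall when the clock mode requires it. */
          return __cvdso_gettimeofday(tv, tz);
  }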
2019-06-22  arm64: PCI: Preserve firmware configuration when desired  (Benjamin Herrenschmidt, 1 file, -0/+10)
If we must preserve the firmware resource assignments, claim the existing resources rather than reassigning everything. Link: https://lore.kernel.org/r/20190615002359.29577-4-benh@kernel.crashing.org Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [bhelgaas: commit log] Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
2019-06-22  arm64: PCI: Allow resource reallocation if necessary  (Benjamin Herrenschmidt, 1 file, -2/+1)
Call pci_assign_unassigned_root_bus_resources() instead of the simpler: pci_bus_size_bridges(bus); pci_bus_assign_resources(bus); pci_assign_unassigned_root_bus_resources() calls: __pci_bus_size_bridges(bus, add_list); __pci_bus_assign_resources(bus, add_list, &fail_head); so this should be equivalent as long as we're able to assign everything. If we were unable to assign something, previously we did nothing and left it unassigned, but after this patch, we will attempt to do some reallocation. Once we start honoring FW resource allocations, this will bring up the "reallocation" feature which can help making room for SR-IOV when necessary. Link: https://lore.kernel.org/r/20190615002359.29577-1-benh@kernel.crashing.org Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [bhelgaas: commit log] Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2019-06-21  Merge tag 'spdx-5.2-rc6' of …  (Linus Torvalds, 64 files, -604/+64)
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx Pull still more SPDX updates from Greg KH: "Another round of SPDX updates for 5.2-rc6 Here is what I am guessing is going to be the last "big" SPDX update for 5.2. It contains all of the remaining GPLv2 and GPLv2+ updates that were "easy" to determine by pattern matching. The ones after this are going to be a bit more difficult and the people on the spdx list will be discussing them on a case-by-case basis now. Another 5000+ files are fixed up, so our overall totals are: Files checked: 64545 Files with SPDX: 45529 Compared to the 5.1 kernel which was: Files checked: 63848 Files with SPDX: 22576 This is a huge improvement. Also, we deleted another 20000 lines of boilerplate license crud, always nice to see in a diffstat" * tag 'spdx-5.2-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx: (65 commits) treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 507 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 506 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 505 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 504 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 503 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 502 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 501 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 499 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 498 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 497 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 496 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 495 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 491 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 490 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 489 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 488 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 487 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 486 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 485 ...
2019-06-21  arm64: fix kernel stack overflow in kdump capture kernel  (Wei Li, 2 files, -7/+10)
When enabling the ARM64_PSEUDO_NMI feature in a kdump capture kernel, it reports a kernel stack overflow exception:
[ 0.000000] CPU features: detected: IRQ priority masking
[ 0.000000] alternatives: patching kernel code
[ 0.000000] Insufficient stack space to handle exception!
[ 0.000000] ESR: 0x96000044 -- DABT (current EL)
[ 0.000000] FAR: 0x0000000000000040
[ 0.000000] Task stack: [0xffff0000097f0000..0xffff0000097f4000]
[ 0.000000] IRQ stack: [0x0000000000000000..0x0000000000004000]
[ 0.000000] Overflow stack: [0xffff80002b7cf290..0xffff80002b7d0290]
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.19.34-lw+ #3
[ 0.000000] pstate: 400003c5 (nZcv DAIF -PAN -UAO)
[ 0.000000] pc : el1_sync+0x0/0xb8
[ 0.000000] lr : el1_irq+0xb8/0x140
[ 0.000000] sp : 0000000000000040
[ 0.000000] pmr_save: 00000070
[ 0.000000] x29: ffff0000097f3f60 x28: ffff000009806240
[ 0.000000] x27: 0000000080000000 x26: 0000000000004000
[ 0.000000] x25: 0000000000000000 x24: ffff000009329028
[ 0.000000] x23: 0000000040000005 x22: ffff000008095c6c
[ 0.000000] x21: ffff0000097f3f70 x20: 0000000000000070
[ 0.000000] x19: ffff0000097f3e30 x18: ffffffffffffffff
[ 0.000000] x17: 0000000000000000 x16: 0000000000000000
[ 0.000000] x15: ffff0000097f9708 x14: ffff000089a382ef
[ 0.000000] x13: ffff000009a382fd x12: ffff000009824000
[ 0.000000] x11: ffff0000097fb7b0 x10: ffff000008730028
[ 0.000000] x9 : ffff000009440018 x8 : 000000000000000d
[ 0.000000] x7 : 6b20676e69686374 x6 : 000000000000003b
[ 0.000000] x5 : 0000000000000000 x4 : ffff000008093600
[ 0.000000] x3 : 0000000400000008 x2 : 7db2e689fc2b8e00
[ 0.000000] x1 : 0000000000000000 x0 : ffff0000097f3e30
[ 0.000000] Kernel panic - not syncing: kernel stack overflow
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.19.34-lw+ #3
[ 0.000000] Call trace:
[ 0.000000] dump_backtrace+0x0/0x1b8
[ 0.000000] show_stack+0x24/0x30
[ 0.000000] dump_stack+0xa8/0xcc
[ 0.000000] panic+0x134/0x30c
[ 0.000000] __stack_chk_fail+0x0/0x28
[ 0.000000] handle_bad_stack+0xfc/0x108
[ 0.000000] __bad_stack+0x90/0x94
[ 0.000000] el1_sync+0x0/0xb8
[ 0.000000] init_gic_priority_masking+0x4c/0x70
[ 0.000000] smp_prepare_boot_cpu+0x60/0x68
[ 0.000000] start_kernel+0x1e8/0x53c
[ 0.000000] ---[ end Kernel panic - not syncing: kernel stack overflow ]---
The reason is that init_gic_priority_masking() may unmask PSR.I while the irq stacks are not initialized yet. Some "NMI" could unfortunately be raised and will just go into this exception. In this patch, we just write the PMR in smp_prepare_boot_cpu(), and delay unmasking PSR.I until after the irq stacks are initialized in init_IRQ(). Fixes: e79321883842 ("arm64: Switch to PMR masking when starting CPUs") Cc: Will Deacon <will.deacon@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Wei Li <liwei391@huawei.com> [JT: make init_gic_priority_masking() not modify daif, rebase on other priority masking fixes] Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
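The resulting boot-time ordering looks roughly like this (simplified sketch of init_IRQ(); the helper names are the ones used by the arm64 code):

  void __init init_IRQ(void)
  {
          init_irq_stacks();              /* IRQ stacks must exist first */
          irqchip_init();

          /* Only now is it safe to rely on PMR masking and unmask PSR.I. */
          if (system_uses_irq_prio_masking())
                  local_daif_restore(DAIF_PROCCTX_NOIRQ);
  }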
2019-06-21  arm64: Fix incorrect irqflag restore for priority masking  (Julien Thierry, 3 files, -9/+39)
When using IRQ priority masking to disable interrupts, in order to deal with the PSR.I state, local_irq_save() would convert the I bit into a PMR value (GIC_PRIO_IRQOFF). This resulted in local_irq_restore() potentially modifying the value of PMR in undesired locations due to the state of PSR.I upon flag saving [1]. In an attempt to solve this issue in a less hackish manner, introduce a bit (GIC_PRIO_PSR_I_SET) for the PMR values that can represent whether PSR.I is being used to disable interrupts, in which case it takes precedence over the status of interrupt masking via PMR. GIC_PRIO_PSR_I_SET is chosen such that (<pmr_value> | GIC_PRIO_PSR_I_SET) does not mask more interrupts than <pmr_value>, as some sections (e.g. arch_cpu_idle(), interrupt acknowledge path) require PMR not to mask interrupts that could be signaled to the CPU when using only PSR.I. [1] https://www.spinics.net/lists/arm-kernel/msg716956.html Fixes: 4a503217ce37 ("arm64: irqflags: Use ICC_PMR_EL1 for interrupt masking") Cc: <stable@vger.kernel.org> # 5.1.x- Reported-by: Zenghui Yu <yuzenghui@huawei.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Wei Li <liwei391@huawei.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Christoffer Dall <christoffer.dall@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Suzuki K Pouloze <suzuki.poulose@arm.com> Cc: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
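Conceptually, the saved flags end up looking like this (heavily simplified sketch, not the real implementation, which uses runtime alternatives):

  static inline unsigned long sketch_local_save_flags(void)
  {
          unsigned long pmr = read_sysreg_s(SYS_ICC_PMR_EL1);

          /* Record that PSR.I was masking interrupts; that state takes
           * precedence over whatever PMR currently holds. */
          if (read_sysreg(daif) & PSR_I_BIT)
                  pmr |= GIC_PRIO_PSR_I_SET;

          return pmr;
  }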
2019-06-21  arm64: Fix interrupt tracing in the presence of NMIs  (Julien Thierry, 2 files, -11/+50)
In the presence of any form of instrumentation, nmi_enter() should be done before calling any traceable code and any instrumentation code. Currently, nmi_enter() is done in handle_domain_nmi(), which is much too late as instrumentation code might get called before. Move the nmi_enter/exit() calls to the arch IRQ vector handler. On arm64, it is not possible to know if the IRQ vector handler was called because of an NMI before acknowledging the interrupt. However, it is possible to know whether normal interrupts could be taken in the interrupted context (i.e. if taking an NMI in that context could introduce a potential race condition). When interrupting a context with IRQs disabled, call nmi_enter() as soon as possible. In contexts with IRQs enabled, defer this to the interrupt controller, which is in a better position to know if an interrupt taken is an NMI. Fixes: bc3c03ccb464 ("arm64: Enable the support of pseudo-NMIs") Cc: <stable@vger.kernel.org> # 5.1.x- Cc: Will Deacon <will.deacon@arm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-06-21  arm64: Do not enable IRQs for ct_user_exit  (Julien Thierry, 1 file, -2/+2)
For el0_dbg and el0_error, DAIF bits get explicitly cleared before calling ct_user_exit. When context tracking is disabled, DAIF gets set (almost) immediately after. When context tracking is enabled, among the first things done is disabling IRQs. What is actually needed is:
- PSR.D = 0 so the system can be debugged (should already be the case)
- PSR.A = 0 so async errors can be handled during context tracking
Do not clear PSR.I in those two locations. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-06-20  Merge tag 'arm64-fixes' of …  (Linus Torvalds, 1 file, -0/+1)
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Will Deacon: "This is mainly a couple of email address updates to MAINTAINERS, but we've also fixed a UAPI build issue with musl libc and an accidental double-initialisation of our pgd_cache due to a naming conflict with a weak symbol. There are a couple of outstanding issues that have been reported, but it doesn't look like they're new and we're still a long way off from fully debugging them. Summary:
- Fix use of #include in UAPI headers for compatibility with musl libc
- Update email addresses in MAINTAINERS
- Fix initialisation of pgd_cache due to name collision with weak symbol"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64/mm: don't initialize pgd_cache twice
  MAINTAINERS: Update my email address
  arm64/sve: <uapi/asm/ptrace.h> should not depend on <uapi/linux/prctl.h>
  arm64: ssbd: explicitly depend on <linux/prctl.h>
  MAINTAINERS: Update my email address to use @kernel.org
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500  (Thomas Gleixner, 21 files, -89/+21)
Based on 2 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation # extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 4122 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Enrico Weigelt <info@metux.net> Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 452  (Thomas Gleixner, 1 file, -12/+1)
Based on 1 normalized pattern(s): this program is free software void you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http void www gnu org licenses extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 1 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Enrico Weigelt <info@metux.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190604081201.003433009@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234  (Thomas Gleixner, 42 files, -503/+42)
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 503 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Enrico Weigelt <info@metux.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-17  arm64: ssbd: explicitly depend on <linux/prctl.h>  (Anisse Astier, 1 file, -0/+1)
Fix ssbd.c which depends implicitly on asm/ptrace.h including linux/prctl.h (through for example linux/compat.h, then linux/time.h, linux/seqlock.h, linux/spinlock.h and linux/irqflags.h), and uses PR_SPEC* defines. This is an issue since we'll soon be removing the include from asm/ptrace.h. Fixes: 9cdc0108baa8 ("arm64: ssbd: Add prctl interface for per-thread mitigation") Cc: stable@vger.kernel.org Signed-off-by: Anisse Astier <aastier@freebox.fr> Signed-off-by: Will Deacon <will.deacon@arm.com>
2019-06-17  arm64/mm: Correct the cache line size warning with non coherent device  (Masayoshi Mizuma, 1 file, -3/+1)
If the cache line size is greater than ARCH_DMA_MINALIGN (128), a warning is shown and the kernel is tainted with TAINT_CPU_OUT_OF_SPEC. However, this is too broad: as discussed in the thread [1], the CPU cache line size is only a problem for non-coherent devices. Since the coherent flag has already been introduced to struct device, show the warning only if the device is non-coherent and ARCH_DMA_MINALIGN is smaller than the CPU cache line size. [1] https://lore.kernel.org/linux-arm-kernel/20180514145703.celnlobzn3uh5tc2@localhost/ Signed-off-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com> Reviewed-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com> Tested-by: Zhang Lei <zhang.lei@jp.fujitsu.com> [catalin.marinas@arm.com: removed 'if' block for WARN_TAINT] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
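The narrowed check amounts to something like the following (hypothetical helper for illustration; the real test lives in the arm64 DMA setup path):

  static void check_dma_cacheline(struct device *dev, bool coherent)
  {
          /* Only a non-coherent device can be broken by a CWG larger than
           * ARCH_DMA_MINALIGN, so warn just for that case. */
          if (!coherent && cache_line_size() > ARCH_DMA_MINALIGN)
                  dev_warn(dev, "ARCH_DMA_MINALIGN (%d) smaller than CTR_EL0.CWG (%d)\n",
                           ARCH_DMA_MINALIGN, cache_line_size());
  }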
2019-06-14  docs: arm64: convert docs to ReST and rename to .rst  (Mauro Carvalho Chehab, 1 file, -1/+1)
The documentation is in a format that is very close to ReST format. The conversion is actually: - add blank lines in order to identify paragraphs; - fixing tables markups; - adding some lists markups; - marking literal blocks; - adjust some title markups. At its new index.rst, let's add a :orphan: while this is not linked to the main index.rst file, in order to avoid build warnings. Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org> Signed-off-by: Jonathan Corbet <corbet@lwn.net>
2019-06-14  Merge tag 'v5.2-rc4' into mauro  (Jonathan Corbet, 20 files, -124/+142)
We need to pick up post-rc1 changes to various document files so they don't get lost in Mauro's massive RST conversion push.
2019-06-14  Merge tag 'arm64-fixes' of …  (Linus Torvalds, 1 file, -9/+33)
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Will Deacon: "Here are some arm64 fixes for -rc5. The only non-trivial change (in terms of the diffstat) is fixing our SVE ptrace API for big-endian machines, but the majority of this is actually the addition of much-needed comments and updates to the documentation to try to avoid this mess biting us again in future. There are still a couple of small things on the horizon, but nothing major at this point. Summary:
- Fix broken SVE ptrace API when running in a big-endian configuration
- Fix performance regression due to off-by-one in TLBI range checking
- Fix build regression when using Clang"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64/sve: Fix missing SVE/FPSIMD endianness conversions
  arm64: tlbflush: Ensure start/end of address range are aligned to stride
  arm64: Don't unconditionally add -Wno-psabi to KBUILD_CFLAGS
2019-06-13  arm64/sve: Fix missing SVE/FPSIMD endianness conversions  (Dave Martin, 1 file, -9/+33)
The in-memory representation of SVE and FPSIMD registers is different: the FPSIMD V-registers are stored as single 128-bit host-endian values, whereas SVE registers are stored in an endianness-invariant byte order. This means that the two representations differ when running on a big-endian host. But we blindly copy data from one representation to another when converting between the two, resulting in the register contents being unintentionally byteswapped in certain situations. Currently this can be triggered by the first SVE instruction after a syscall, for example (though the potential trigger points may vary in future). So, fix the conversion functions fpsimd_to_sve(), sve_to_fpsimd() and sve_sync_from_fpsimd_zeropad() to swab where appropriate. There is no common swahl128() or swab128() that we could use here. Maybe it would be worth making this generic, but for now add a simple local hack. Since the byte order differences are exposed in ABI, also clarify the documentation. Cc: Alex Bennée <alex.bennee@linaro.org> Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Alan Hayward <alan.hayward@arm.com> Cc: Julien Grall <julien.grall@arm.com> Fixes: bc0ee4760364 ("arm64/sve: Core task context handling") Fixes: 8cd969d28fd2 ("arm64/sve: Signal handling support") Fixes: 43d4da2c45b2 ("arm64/sve: ptrace and ELF coredump support") Signed-off-by: Dave Martin <Dave.Martin@arm.com> [will: Fix typos in comments and docs spotted by Julien] Signed-off-by: Will Deacon <will.deacon@arm.com>
2019-06-08  Merge tag 'spdx-5.2-rc4' of …  (Linus Torvalds, 1 file, -12/+1)
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core Pull yet more SPDX updates from Greg KH: "Another round of SPDX header file fixes for 5.2-rc4 These are all more "GPL-2.0-or-later" or "GPL-2.0-only" tags being added, based on the text in the files. We are slowly chipping away at the 700+ different ways people tried to write the license text. All of these were reviewed on the spdx mailing list by a number of different people. We now have over 60% of the kernel files covered with SPDX tags: $ ./scripts/spdxcheck.py -v 2>&1 | grep Files Files checked: 64533 Files with SPDX: 40392 Files with errors: 0 I think the majority of the "easy" fixups are now done, it's now the start of the longer-tail of crazy variants to wade through" * tag 'spdx-5.2-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (159 commits) treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 450 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 449 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 448 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 446 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 445 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 444 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 443 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 442 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 441 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 440 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 438 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 437 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 436 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 435 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 434 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 433 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 432 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 431 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 430 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 429 ...
2019-06-08  docs: fix broken documentation links  (Mauro Carvalho Chehab, 1 file, -1/+1)
Mostly due to x86 and acpi conversion, several documentation links are still pointing to the old file. Fix them. Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org> Reviewed-by: Wolfram Sang <wsa@the-dreams.de> Reviewed-by: Sven Van Asbroeck <TheSven73@gmail.com> Reviewed-by: Bhupesh Sharma <bhsharma@redhat.com> Acked-by: Mark Brown <broonie@kernel.org> Signed-off-by: Jonathan Corbet <corbet@lwn.net>
2019-06-07  Merge tag 'arm64-fixes' of …  (Linus Torvalds, 1 file, -0/+1)
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Will Deacon: "Another round of mostly-benign fixes, the exception being a boot crash on SVE2-capable CPUs (although I don't know where you'd find such a thing, so maybe it's benign too). We're in the process of resolving some big-endian ptrace breakage, so I'll probably have some more for you next week. Summary:
- Fix boot crash on platforms with SVE2 due to missing register encoding
- Fix architected timer accessors when CONFIG_OPTIMIZE_INLINING=y
- Move cpu_logical_map into smp.h for use by upcoming irqchip drivers
- Trivial typo fix in comment
- Disable some useless, noisy warnings from GCC 9"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: Silence gcc warnings about arch ABI drift
  ARM64: trivial: s/TIF_SECOMP/TIF_SECCOMP/ comment typo fix
  arm64: arch_timer: mark functions as __always_inline
  arm64: smp: Moved cpu_logical_map[] to smp.h
  arm64: cpufeature: Fix missing ZFR0 in __read_sysreg_by_encoding()
2019-06-05  arm64: ptrace: add support for syscall emulation  (Sudeep Holla, 1 file, -1/+5)
Add PTRACE_SYSEMU and PTRACE_SYSEMU_SINGLESTEP support on arm64. We don't need any special handling for PTRACE_SYSEMU_SINGLESTEP. It's quite difficult to generalize handling of PTRACE_SYSEMU across architectures while avoiding calling tracehook_report_syscall_entry twice. Different architectures have different mechanisms to indicate NO_SYSCALL, and trying to generalise adds more code for no gain. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
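From a tracer's point of view, the new request behaves like PTRACE_SYSCALL except that the traced syscall is skipped, e.g. (userspace sketch, error handling omitted):

  #include <sys/ptrace.h>
  #include <sys/wait.h>
  #include <signal.h>
  #include <unistd.h>

  int main(void)
  {
          pid_t pid = fork();

          if (pid == 0) {                         /* tracee */
                  ptrace(PTRACE_TRACEME, 0, 0, 0);
                  raise(SIGSTOP);
                  getpid();                       /* stopped at entry, not executed */
                  _exit(0);
          }

          waitpid(pid, 0, 0);                     /* wait for SIGSTOP */
          ptrace(PTRACE_SYSEMU, pid, 0, 0);       /* run to syscall entry, skip the syscall */
          waitpid(pid, 0, 0);
          /* ...emulate the syscall here, then resume the tracee... */
          ptrace(PTRACE_CONT, pid, 0, 0);
          return 0;
  }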
2019-06-05  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 252  (Thomas Gleixner, 1 file, -12/+1)
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed as is without any warranty of any kind whether express or implied without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 2 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Richard Fontana <rfontana@redhat.com> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190529141332.617181045@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-05  arm64: cpufeature: Fix missing ZFR0 in __read_sysreg_by_encoding()  (Dave Martin, 1 file, -0/+1)
In commit 06a916feca2b ("arm64: Expose SVE2 features for userspace"), new hwcaps are added that are detected via fields in the SVE-specific ID register ID_AA64ZFR0_EL1. In order to check compatibility of secondary cpus with the hwcaps established at boot, the cpufeatures code uses __read_sysreg_by_encoding() to read this ID register based on the sys_reg field of the arm64_elf_hwcaps[] table. This leads to a kernel splat if an hwcap uses an ID register that __read_sysreg_by_encoding() doesn't explicitly handle, as now happens when exercising cpu hotplug on an SVE2-capable platform. So fix it by adding the required case in there. Fixes: 06a916feca2b ("arm64: Expose SVE2 features for userspace") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
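The fix is essentially the missing switch case, shown here expanded (the kernel expresses it with its read_sysreg_case() macro):

          case SYS_ID_AA64ZFR0_EL1:
                  return read_sysreg_s(SYS_ID_AA64ZFR0_EL1);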
2019-06-04  arm64: kernel: use aff3 instead of aff2 in comment  (Liu Song, 1 file, -1/+1)
Should use aff3 instead of aff2 in comment. Signed-off-by: Liu Song <liu.song11@zte.com.cn> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-06-04  arm64/cpufeature: Convert hook_lock to raw_spin_lock_t in cpu_enable_ssbs()  (Julien Grall, 1 file, -3/+3)
cpu_enable_ssbs() is called via stop_machine() as part of the cpu_enable callback. A spin lock is used to ensure the hook is registered before the rest of the callback is executed. On -RT spin_lock() may sleep. However, all the callees in stop_machine() are expected to not sleep. Therefore a raw_spin_lock() is required here. Given this is already done under stop_machine() and the work done under the lock is quite small, the latency should not increase too much. Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
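The conversion is mechanical; a sketch (function body abridged, lock name from the commit subject):

  static DEFINE_RAW_SPINLOCK(hook_lock);      /* was DEFINE_SPINLOCK() */

  static void cpu_enable_ssbs_sketch(void)
  {
          /* raw_spin_lock() never sleeps, so it is safe to take under
           * stop_machine(), even on PREEMPT_RT. */
          raw_spin_lock(&hook_lock);
          /* ...register the emulation hook exactly once... */
          raw_spin_unlock(&hook_lock);
  }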
2019-06-04  arm64: cacheinfo: Update cache_line_size detected from DT or PPTT  (Shaokun Zhang, 1 file, -0/+11)
cache_line_size() is derived from the CTR_EL0.CWG field and is called mostly by I/O device drivers. On some platforms, such as the HiSilicon Kunpeng920 server SoC, the cache line sizes differ between the L1/L2 caches and the L3 cache: the L1 cache line size is 64 bytes and L3 is 128 bytes, but CTR_EL0.CWG misreports this as the L1 cache line size. Report the correct value, which is important for I/O performance: update the cache line size if it is detected from DT or PPTT information. Cc: Will Deacon <will.deacon@arm.com> Cc: Jeremy Linton <jeremy.linton@arm.com> Cc: Zhenfa Qiu <qiuzhenfa@hisilicon.com> Reported-by: Zhenfa Qiu <qiuzhenfa@hisilicon.com> Suggested-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-06-04  arm64/fpsimd: Don't disable softirq when touching FPSIMD/SVE state  (Julien Grall, 1 file, -40/+84)
When the kernel is compiled with CONFIG_KERNEL_MODE_NEON, some parts of the kernel may be able to use FPSIMD/SVE. This is for instance the case for crypto code. Any use of FPSIMD/SVE in the kernel is clearly marked by using the functions kernel_neon_{begin, end}. Furthermore, this can only be used when may_use_simd() returns true. The current implementation of may_use_simd() allows softirq to use FPSIMD/SVE unless it is currently in use (i.e. kernel_neon_busy is true). When in use, softirqs usually fall back to a software method. At the moment, as a softirq may use FPSIMD/SVE, softirqs are disabled when touching the FPSIMD/SVE context. This has the drawback of disabling all softirqs even if they are not using FPSIMD/SVE. Since a softirq is supposed to check may_use_simd() anyway before attempting to use FPSIMD/SVE, there is limited reason to keep softirqs disabled when touching the FPSIMD/SVE context. Instead, we can simply disable preemption and mark the FPSIMD/SVE context as in use by setting the CPU's fpsimd_context_busy flag. Two new helpers {get, put}_cpu_fpsimd_context are introduced to mark the region using the FPSIMD/SVE context, and they are used to replace local_bh_{disable, enable}. The functions kernel_neon_{begin, end} are also re-implemented to use the new helpers. Additionally, double-underscored versions of the helpers are provided to be called when preemption is already disabled. These are only relevant on paths where irqs are disabled anyway, so they are not needed for correctness in the current code. Let's use them anyway though: this marks critical sections clearly and will help to avoid mistakes during future maintenance. The change has been benchmarked on Linux 5.1-rc4 with defconfig.
On Juno2: * hackbench 100 process 1000 (10 times) * 0.7% quicker
On ThunderX 2: * hackbench 1000 process 1000 (20 times) * 3.4% quicker
Reviewed-by: Dave Martin <dave.martin@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
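A simplified sketch of the new helpers (the per-CPU flag name comes from the commit text; the real code carries extra checks and the double-underscored variants):

  static DEFINE_PER_CPU(bool, fpsimd_context_busy);

  static void get_cpu_fpsimd_context(void)
  {
          preempt_disable();
          /* Claim the FPSIMD/SVE context for this CPU. */
          WARN_ON(__this_cpu_xchg(fpsimd_context_busy, true));
  }

  static void put_cpu_fpsimd_context(void)
  {
          /* Release the context claimed by get_cpu_fpsimd_context(). */
          WARN_ON(!__this_cpu_xchg(fpsimd_context_busy, false));
          preempt_enable();
  }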
2019-06-04  arm64/fpsimd: Introduce fpsimd_save_and_flush_cpu_state() and use it  (Julien Grall, 1 file, -4/+13)
The only external user of fpsimd_save() and fpsimd_flush_cpu_state() is the KVM FPSIMD code. A following patch will introduce a mechanism to acquire ownership of the FPSIMD/SVE context for performing context management operations. Rather than having to export the new helpers to get/put the context, we can just introduce a new function to combine fpsimd_save() and fpsimd_flush_cpu_state(). This also has the advantage of removing any external calls to fpsimd_save() and fpsimd_flush_cpu_state(), so they can be made static. Lastly, the new function can also be used in the PM notifier. Reviewed-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-05-31  Merge tag 'spdx-5.2-rc3-1' of …  (Linus Torvalds, 9 files, -74/+9)
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core Pull yet more SPDX updates from Greg KH: "Here is another set of reviewed patches that adds SPDX tags to different kernel files, based on a set of rules that are being used to parse the comments to try to determine that the license of the file is "GPL-2.0-or-later" or "GPL-2.0-only". Only the "obvious" versions of these matches are included here, a number of "non-obvious" variants of text have been found but those have been postponed for later review and analysis. There is also a patch in here to add the proper SPDX header to a bunch of Kbuild files that we have missed in the past due to new files being added and forgetting that Kbuild uses two different file names for Makefiles. This issue was reported by the Kbuild maintainer. These patches have been out for review on the linux-spdx@vger mailing list, and while they were created by automatic tools, they were hand-verified by a bunch of different people, all whom names are on the patches are reviewers" * tag 'spdx-5.2-rc3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (82 commits) treewide: Add SPDX license identifier - Kbuild treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 225 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 224 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 223 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 222 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 221 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 220 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 218 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 217 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 216 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 215 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 214 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 213 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 211 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 210 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 209 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 207 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 206 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 203 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 201 ...
2019-05-31  Merge tag 'arm64-fixes' of …  (Linus Torvalds, 4 files, -20/+46)
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Will Deacon: "The fixes are still trickling in for arm64, but the only really significant one here is actually fixing a regression in the botched module relocation range checking merged for -rc2. Hopefully we've nailed it this time.
- Fix implementation of our set_personality() system call, which wasn't being wrapped properly
- Fix system call function types to keep CFI happy
- Fix siginfo layout when delivering SIGKILL after a kernel fault
- Really fix module relocation range checking"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: use the correct function type for __arm64_sys_ni_syscall
  arm64: use the correct function type in SYSCALL_DEFINE0
  arm64: fix syscall_fn_t type
  signal/arm64: Use force_sig not force_sig_fault for SIGKILL
  arm64/module: revert to unsigned interpretation of ABS16/32 relocations
  arm64: Fix the arm64_personality() syscall wrapper redirection