path: root/arch/arm64/kernel
Age  Commit message  Author  Files  Lines
2020-05-19  hardirq/nmi: Allow nested nmi_enter()  (Peter Zijlstra)  2 files, -18/+4
Since there are already a number of sites (ARM64, PowerPC) that effectively nest nmi_enter(), make the primitive support this before adding even more. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Link: https://lkml.kernel.org/r/20200505134100.864179229@linutronix.de
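As a rough, standalone illustration of what counter-based nesting looks like (the names and printed messages below are placeholders, not the kernel's nmi_enter() internals):

    #include <assert.h>
    #include <stdio.h>

    static int nmi_nesting;    /* hypothetical per-CPU nesting depth */

    static void nmi_enter_sketch(void)
    {
        /* Only the outermost enter performs the real state transition. */
        if (nmi_nesting++ == 0)
            printf("entering NMI context\n");
    }

    static void nmi_exit_sketch(void)
    {
        assert(nmi_nesting > 0);
        if (--nmi_nesting == 0)
            printf("leaving NMI context\n");
    }

    int main(void)
    {
        nmi_enter_sketch();
        nmi_enter_sketch();    /* nested, e.g. a debug exception taken from an NMI handler */
        nmi_exit_sketch();
        nmi_exit_sketch();
        return 0;
    }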
2020-05-18  arm64: stacktrace: Factor out some common code into on_stack()  (Yunfeng Ye)  1 file, -26/+2
There is some common code for stack checking, so factor it out into the function on_stack(). No functional change. Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com> Link: https://lore.kernel.org/r/07b3b0e6-3f58-4fed-07ea-7d17b7508948@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
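A minimal sketch of the kind of bounds check such a helper centralises is shown below; the type and function names are hypothetical, not the arm64 ones:

    #include <stdbool.h>
    #include <stdio.h>

    enum stack_kind { STACK_KIND_TASK, STACK_KIND_IRQ };

    struct stack_bounds_info {
        unsigned long low, high;
        enum stack_kind kind;
    };

    /* True if sp lies within [low, high); optionally report the bounds hit. */
    static bool on_stack_sketch(unsigned long sp, unsigned long low,
                                unsigned long high, enum stack_kind kind,
                                struct stack_bounds_info *info)
    {
        if (!low || sp < low || sp >= high)
            return false;
        if (info) {
            info->low = low;
            info->high = high;
            info->kind = kind;
        }
        return true;
    }

    int main(void)
    {
        struct stack_bounds_info info;

        printf("%d\n", on_stack_sketch(0x1080, 0x1000, 0x2000, STACK_KIND_TASK, &info)); /* 1 */
        printf("%d\n", on_stack_sketch(0x3000, 0x1000, 0x2000, STACK_KIND_TASK, &info)); /* 0 */
        return 0;
    }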
2020-05-18  arm64: Call debug_traps_init() from trap_init() to help early kgdb  (Douglas Anderson)  2 files, -4/+2
A new kgdb feature will soon land (kgdb_earlycon) that lets us run kgdb much earlier. In order for everything to work properly it's important that the break hook is setup by the time we process "kgdbwait". Right now the break hook is setup in debug_traps_init() and that's called from arch_initcall(). That's a bit too late since kgdb_earlycon really needs things to be setup by the time the system calls dbg_late_init(). We could fix this by adding call_break_hook() into early_brk64() and that works fine. However, it's a little ugly. Instead, let's just add a call to debug_traps_init() straight from trap_init(). There's already a documented dependency between trap_init() and debug_traps_init() and this makes the dependency more obvious rather than just relying on a comment. NOTE: this solution isn't early enough to let us select the "ARCH_HAS_EARLY_DEBUG" KConfig option that is introduced by the kgdb_earlycon patch series. That would only be set if we could do breakpoints when early params are parsed. This patch only enables "late early" breakpoints, AKA breakpoints when dbg_late_init() is called. It's expected that this should be fine for most people. It should also be noted that if you crash you can still end up in kgdb earlier than debug_traps_init(). Since you don't need breakpoints to debug a crash that's fine. Suggested-by: Will Deacon <will@kernel.org> Signed-off-by: Douglas Anderson <dianders@chromium.org> Acked-by: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200513160501.1.I0b5edf030cc6ebef6ab4829f8867cdaea42485d8@changeid Signed-off-by: Will Deacon <will@kernel.org>
2020-05-18  arm64: entry-ftrace.S: Update comment to indicate that x18 is live  (Will Deacon)  1 file, -2/+3
The Shadow Call Stack pointer is held in x18, so update the ftrace entry comment to indicate that it cannot be safely clobbered. Reported-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-05-18  scs: Move DEFINE_SCS macro into core code  (Will Deacon)  1 file, -4/+0
Defining static shadow call stacks is not architecture-specific, so move the DEFINE_SCS() macro into the core header file. Tested-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-05-18  scs: Move scs_overflow_check() out of architecture code  (Will Deacon)  2 files, -3/+1
There is nothing architecture-specific about scs_overflow_check() as it's just a trivial wrapper around scs_corrupted(). For parity with task_stack_end_corrupted(), rename scs_corrupted() to task_scs_end_corrupted() and call it from schedule_debug() when CONFIG_SCHED_STACK_END_CHECK is enabled, which better reflects its purpose as a debug feature to catch inadvertent overflow of the SCS. Finally, remove the unused scs_overflow_check() function entirely. This has absolutely no impact on architectures that do not support SCS (currently arm64 only). Tested-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
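A rough standalone sketch of an end-of-shadow-stack canary check of this kind, with made-up sizes, names and magic value (not the kernel's implementation):

    #include <stdbool.h>
    #include <stdio.h>

    #define SCS_SIZE_SKETCH        1024UL
    #define SCS_END_MAGIC_SKETCH   0xdeadbeefUL

    static unsigned long demo_scs[SCS_SIZE_SKETCH / sizeof(unsigned long)];

    static bool scs_end_corrupted_sketch(unsigned long *scs_base)
    {
        /* The last slot never holds a return address; treat it as a canary. */
        unsigned long *magic = scs_base + SCS_SIZE_SKETCH / sizeof(unsigned long) - 1;

        return *magic != SCS_END_MAGIC_SKETCH;
    }

    int main(void)
    {
        unsigned long nslots = SCS_SIZE_SKETCH / sizeof(unsigned long);

        demo_scs[nslots - 1] = SCS_END_MAGIC_SKETCH;
        printf("corrupted? %d\n", scs_end_corrupted_sketch(demo_scs));  /* 0 */
        demo_scs[nslots - 1] = 0;   /* simulate an overflow clobbering the canary */
        printf("corrupted? %d\n", scs_end_corrupted_sketch(demo_scs));  /* 1 */
        return 0;
    }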
2020-05-18  arm64: scs: Use 'scs_sp' register alias for x18  (Will Deacon)  2 files, -6/+6
x18 holds the SCS stack pointer value, so introduce a register alias to make this easier to read in assembly code. Tested-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-05-18  arm64: scs: Store absolute SCS stack pointer value in thread_info  (Will Deacon)  1 file, -1/+1
Storing the SCS information in thread_info as a {base,offset} pair introduces an additional load instruction on the ret-to-user path, since the SCS stack pointer in x18 has to be converted back to an offset by subtracting the base. Replace the offset with the absolute SCS stack pointer value instead and avoid the redundant load. Tested-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-05-16  KVM: arm64: Kill off CONFIG_KVM_ARM_HOST  (Will Deacon)  3 files, -3/+3
CONFIG_KVM_ARM_HOST is just a proxy for CONFIG_KVM, so remove it in favour of the latter. Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Fuad Tabba <tabba@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200505154520.194120-2-tabba@google.com
2020-05-15  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (David S. Miller)  1 file, -0/+1
Move the bpf verifier trace check into the new switch statement in HEAD. Resolve the overlapping changes in hinic, where bug fixes overlap the addition of VF support. Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-15  arm64: scs: Add shadow stacks for SDEI  (Sami Tolvanen)  2 files, -1/+18
This change adds per-CPU shadow call stacks for the SDEI handler. Similarly to how the kernel stacks are handled, we add separate shadow stacks for normal and critical events. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: James Morse <james.morse@arm.com> Tested-by: James Morse <james.morse@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-05-15  arm64: Implement Shadow Call Stack  (Sami Tolvanen)  6 files, -2/+50
This change implements shadow stack switching, initial SCS set-up, and interrupt shadow stacks for arm64. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-05-15  arm64: vdso: Disable Shadow Call Stack  (Sami Tolvanen)  1 file, -1/+1
Shadow stacks are only available in the kernel, so disable SCS instrumentation for the vDSO. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-05-15  arm64: efi: Restore register x18 if it was corrupted  (Sami Tolvanen)  1 file, -1/+10
If we detect a corrupted x18, restore the register before jumping back to potentially SCS instrumented code. This is safe, because the wrapper is called with preemption disabled and a separate shadow stack is used for interrupt handling. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-05-12  arm64: cpufeature: Add "or" to mitigations for multiple errata  (Geert Uytterhoeven)  1 file, -3/+3
Several actions are not mitigations for a single erratum, but for multiple errata. However, printing a line like "CPU features: detected: ARM errata 1165522, 1530923" may give the false impression that all of the listed errata have been detected. This can confuse the user, who may think his Cortex-A55 is suddenly affected by a Cortex-A76 erratum. Add "or" to all descriptions for mitigations for multiple errata, to make it clear that only one or more of the errata printed are applicable, and not necessarily all of them. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/r/20200512145255.5520-1-geert+renesas@glider.be Signed-off-by: Will Deacon <will@kernel.org>
2020-05-11  arm64/crash_core: Export KERNELPACMASK in vmcoreinfo  (Amit Daniel Kachhap)  1 file, -0/+4
Recently the arm64 kernel added support for the Armv8.3-A Pointer Authentication feature. If this feature is enabled in the kernel and the hardware supports address authentication then the return addresses are signed and stored in the stack to prevent ROP-style attacks. The kdump tool will now dump the kernel with signed lr values in the stack. Any user analysis tool for this kernel dump may need the kernel pac mask information in vmcoreinfo to generate the correct return address for stacktrace purposes as well as to resolve the symbol name. This patch is similar to commit ec6e822d1a22d0eef ("arm64: expose user PAC bit positions via ptrace") which exposes pac mask information via ptrace interfaces. The config guard ARM64_PTR_AUTH is removed from asm/compiler.h so macros like ptrauth_kernel_pac_mask can be used unguarded. This config protection is confusing as the pointer authentication feature may be missing at runtime even though this config is present. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/1589202116-18265-1-git-send-email-amit.kachhap@arm.com Signed-off-by: Will Deacon <will@kernel.org>
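As a hedged illustration of how a post-mortem analysis tool might apply an exported mask, here is a standalone sketch; the mask and address values below are invented for the example and are not real KERNELPACMASK contents:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t kernelpacmask = 0x007f000000000000ULL;  /* hypothetical mask */
        uint64_t signed_lr     = 0xffaa800010123456ULL;  /* PAC bits inserted by the CPU */

        /*
         * Kernel virtual addresses have all ones in the bits covered by the
         * mask, so ORing the mask back in recovers the real return address.
         */
        uint64_t real_lr = signed_lr | kernelpacmask;

        printf("signed lr: 0x%016" PRIx64 "\n", signed_lr);
        printf("real   lr: 0x%016" PRIx64 "\n", real_lr);
        return 0;
    }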
2020-05-11  arm64: insn: Fix two bugs in encoding 32-bit logical immediates  (Luke Nelson)  1 file, -7/+7
This patch fixes two issues present in the current function for encoding arm64 logical immediates when using the 32-bit variants of instructions. First, the code does not correctly reject an all-ones 32-bit immediate, and returns an undefined instruction encoding. Second, the code incorrectly rejects some 32-bit immediates that are actually encodable as logical immediates. The root cause is that the code uses a default mask of 64-bit all-ones, even for 32-bit immediates. This causes an issue later on when the default mask is used to fill the top bits of the immediate with ones, shown here:

    /*
     * Pattern: 0..01..10..01..1
     *
     * Fill the unused top bits with ones, and check if
     * the result is a valid immediate (all ones with a
     * contiguous ranges of zeroes).
     */
    imm |= ~mask;
    if (!range_of_ones(~imm))
        return AARCH64_BREAK_FAULT;

To see the problem, consider an immediate of the form 0..01..10..01..1, where the upper 32 bits are zero, such as 0x80000001. The code checks if ~(imm | ~mask) contains a range of ones: the incorrect mask yields 1..10..01..10..0, which fails the check; the correct mask yields 0..01..10..0, which succeeds. The fix for both issues is to generate a correct mask based on the instruction immediate size, and use the mask to check for all-ones, all-zeroes, and values wider than the mask. Currently, arch/arm64/kvm/va_layout.c is the only user of this function, which uses 64-bit immediates and therefore won't trigger these bugs. We tested the new code against llvm-mc with all 1,302 encodable 32-bit logical immediates and all 5,334 encodable 64-bit logical immediates. Fixes: ef3935eeebff ("arm64: insn: Add encoder for bitwise operations using literals") Suggested-by: Will Deacon <will@kernel.org> Co-developed-by: Xi Wang <xi.wang@gmail.com> Signed-off-by: Xi Wang <xi.wang@gmail.com> Signed-off-by: Luke Nelson <luke.r.nels@gmail.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200508181547.24783-2-luke.r.nels@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
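The following standalone program reproduces the effect described above with a simplified range_of_ones(); it is a demonstration of the masking issue only, not the kernel encoder itself:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* True if v is a single contiguous run of ones (and not zero). */
    static bool range_of_ones(uint64_t v)
    {
        if (!v)
            return false;
        while (!(v & 1))        /* shift out trailing zeroes */
            v >>= 1;
        return (v & (v + 1)) == 0;
    }

    static bool check_with_mask(uint32_t imm32, uint64_t mask)
    {
        uint64_t imm = imm32;

        imm |= ~mask;           /* fill the unused top bits with ones */
        return range_of_ones(~imm);
    }

    int main(void)
    {
        uint32_t imm = 0x80000001;  /* 0..01..10..01..1 within 32 bits */

        printf("64-bit mask (buggy):   %s\n",
               check_with_mask(imm, ~0ULL) ? "accepted" : "rejected");
        printf("32-bit mask (correct): %s\n",
               check_with_mask(imm, 0xffffffffULL) ? "accepted" : "rejected");
        return 0;
    }

Run on a 64-bit host, the first call reports "rejected" and the second "accepted", mirroring the incorrect-reject behaviour described in the message.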
2020-05-11  arm64: fix the flush_icache_range arguments in machine_kexec  (Christoph Hellwig)  1 file, -0/+1
The second argument is the end "pointer", not the length. Fixes: d28f6df1305a ("arm64/kexec: Add core kexec support") Cc: <stable@vger.kernel.org> # 4.8.x- Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
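A tiny standalone sketch of the argument convention; flush_icache_range() below is a printing stub, not the arm64 implementation, and the addresses are invented:

    #include <stdio.h>

    static void flush_icache_range(unsigned long start, unsigned long end)
    {
        printf("maintaining [%#lx, %#lx), %lu bytes\n", start, end, end - start);
    }

    int main(void)
    {
        unsigned long buf = 0x40000000UL;
        unsigned long len = 0x1000UL;

        flush_icache_range(buf, buf + len);  /* correct: (start, end) */
        flush_icache_range(buf, len);        /* the bug: a length passed where an end address is expected */
        return 0;
    }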
2020-05-07  arm64: vdso: Map the vDSO text with guarded pages when built for BTI  (Mark Brown)  1 file, -1/+5
The kernel is responsible for mapping the vDSO into userspace processes, including mapping the text section as executable. Handle the mapping of the vDSO for BTI similarly, mapping the text section as guarded pages so the BTI annotations in the vDSO become effective when they are present. This will mean that we can have BTI active for the vDSO in processes that do not otherwise support BTI. This should not be an issue for any expected use of the vDSO. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200506195138.22086-12-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07  arm64: vdso: Force the vDSO to be linked as BTI when built for BTI  (Mark Brown)  1 file, -1/+3
When the kernel and hence vDSO are built with BTI enabled force the linker to link the vDSO as BTI. This will cause the linker to warn if any of the input files do not have the BTI annotation, ensuring that we don't silently fail to provide a vDSO that is built and annotated for BTI when the kernel is being built with BTI. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200506195138.22086-11-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07  arm64: vdso: Annotate for BTI  (Mark Brown)  3 files, -0/+9
Generate BTI annotations for all assembly files included in the 64 bit vDSO. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200506195138.22086-10-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07  arm64: Set GP bit in kernel page tables to enable BTI for the kernel  (Mark Brown)  1 file, -0/+4
Now that the kernel is built with BTI annotations, enable the feature by setting the GP bit in the stage 1 translation tables. This is done based on the features supported by the boot CPU so that we do not need to rewrite the translation tables. To avoid potential issues on big.LITTLE systems where there is a mix of BTI and non-BTI capable CPUs and kernel mode BTI has been enabled, we change BTI to be a _STRICT_BOOT_CPU_FEATURE when we have kernel BTI. This will prevent any CPUs that don't support BTI being started if the boot CPU supports BTI, rather than simply not using BTI as we do when supporting BTI only in userspace. The main concern is the possibility of BTYPE being preserved by a CPU that does not implement BTI when a thread is migrated to it, resulting in an incorrect state which could generate an exception when the thread migrates back to a CPU that does support BTI. If we encounter practical systems which mix BTI and non-BTI CPUs we will need to revisit this implementation. Since we currently do not generate landing pads in the BPF JIT we only map the base kernel text in this way. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200506195138.22086-5-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07  arm64: vdso: Add --eh-frame-hdr to ldflags  (Vincenzo Frascino)  1 file, -1/+1
LLVM's unwinder depends on the .eh_frame_hdr being present for unwinding. However, when compiling Linux with GCC, the section is not present in the vdso library object and when compiling with Clang, it is present, but it has zero length. With GCC the problem was not spotted because libgcc unwinder does not require the .eh_frame_hdr section to be present. Add --eh-frame-hdr to ldflags to correctly generate and populate the section for both GCC and LLVM. Fixes: 28b1a824a4f44 ("arm64: vdso: Substitute gettimeofday() with C implementation") Reported-by: Tamas Zsoldos <tamas.zsoldos@arm.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Tested-by: Tamas Zsoldos <tamas.zsoldos@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200507104049.47834-1-vincenzo.frascino@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (David S. Miller)  1 file, -1/+1
Conflicts were all overlapping changes. Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-05  Merge branches 'for-next/asm' and 'for-next/insn' into for-next/bti  (Will Deacon)  13 files, -92/+105
Merge in dependencies for in-kernel Branch Target Identification support.

* for-next/asm:
  arm64: Disable old style assembly annotations
  arm64: kernel: Convert to modern annotations for assembly functions
  arm64: entry: Refactor and modernise annotation for ret_to_user
  x86/asm: Provide a Kconfig symbol for disabling old assembly annotations
  x86/32: Remove CONFIG_DOUBLEFAULT

* for-next/insn:
  arm64: insn: Report PAC and BTI instructions as skippable
  arm64: insn: Don't assume unrecognized HINTs are skippable
  arm64: insn: Provide a better name for aarch64_insn_is_nop()
  arm64: insn: Add constants for new HINT instruction decode
2020-05-05  Merge branch 'for-next/bti-user' into for-next/bti  (Will Deacon)  8 files, -63/+190
Merge in user support for Branch Target Identification, which narrowly missed the cut for 5.7 after a late ABI concern.

* for-next/bti-user:
  arm64: bti: Document behaviour for dynamically linked binaries
  arm64: elf: Fix allnoconfig kernel build with !ARCH_USE_GNU_PROPERTY
  arm64: BTI: Add Kconfig entry for userspace BTI
  mm: smaps: Report arm64 guarded pages in smaps
  arm64: mm: Display guarded pages in ptdump
  KVM: arm64: BTI: Reset BTYPE when skipping emulated instructions
  arm64: BTI: Reset BTYPE when skipping emulated instructions
  arm64: traps: Shuffle code to eliminate forward declarations
  arm64: unify native/compat instruction skipping
  arm64: BTI: Decode BYTPE bits when printing PSTATE
  arm64: elf: Enable BTI at exec based on ELF program properties
  elf: Allow arch to tweak initial mmap prot flags
  arm64: Basic Branch Target Identification support
  ELF: Add ELF program property parsing support
  ELF: UAPI and Kconfig additions for ELF program properties
2020-05-05  arm64: cpufeature: Extend comment to describe absence of field info  (Will Deacon)  1 file, -0/+5
When a feature register field is omitted from the description of the register, the corresponding bits are treated as STRICT RES0, including for KVM guests. This is subtly different to declaring the field as HIDDEN/STRICT/EXACT/0, so update the comment to call this out. Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  arm64/cpuinfo: Move device_initcall() near cpuinfo_regs_init()  (Anshuman Khandual)  1 file, -2/+2
This moves device_initcall() near cpuinfo_regs_init() making the calling sequence clear. Besides it is a standard practice to have device_initcall() (any __initcall for that matter) just after the function it actually calls. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Brown <broonie@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Suzuki Poulose <suzuki.poulose@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/1588595377-4503-1-git-send-email-anshuman.khandual@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  arm64: insn: Report PAC and BTI instructions as skippable  (Mark Brown)  1 file, -0/+17
The PAC and BTI instructions can be safely skipped so report them as such, allowing them to be probed. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200504131326.18290-5-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  arm64: insn: Don't assume unrecognized HINTs are skippable  (Mark Brown)  1 file, -7/+3
Currently the kernel assumes that any HINT which it does not explicitly recognise is skippable. This is not robust as new instructions may be added which need special handling, and in any case software should only be using explicit NOP instructions for deliberate NOPs. This has the effect of rendering PAC and BTI instructions unprobeable which means that probes can't be inserted on the first instruction of functions built with those features. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200504131326.18290-4-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
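Taken together with the "Report PAC and BTI instructions as skippable" entry above, the net policy becomes an explicit allow-list rather than default-allow. A standalone sketch of that shape, with placeholder names rather than the real instruction encodings:

    #include <stdbool.h>
    #include <stdio.h>

    enum hint_kind_sketch {
        HINT_NOP_SKETCH,
        HINT_PAC_SKETCH,
        HINT_BTI_SKETCH,
        HINT_UNKNOWN_SKETCH,
    };

    static bool hint_is_steppable_sketch(enum hint_kind_sketch kind)
    {
        switch (kind) {
        case HINT_NOP_SKETCH:
        case HINT_PAC_SKETCH:
        case HINT_BTI_SKETCH:
            /* Explicitly known to be safe to step over. */
            return true;
        default:
            /* New or unrecognized hints are no longer assumed safe. */
            return false;
        }
    }

    int main(void)
    {
        printf("nop steppable?     %d\n", hint_is_steppable_sketch(HINT_NOP_SKETCH));      /* 1 */
        printf("unknown steppable? %d\n", hint_is_steppable_sketch(HINT_UNKNOWN_SKETCH));  /* 0 */
        return 0;
    }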
2020-05-04  arm64: insn: Provide a better name for aarch64_insn_is_nop()  (Mark Brown)  2 files, -3/+2
The current aarch64_insn_is_nop() has exactly one caller which uses it solely to identify if the instruction is a HINT that can safely be stepped, requiring us to list things that aren't NOPs and make things more confusing than they need to be. Rename the function to reflect the actual usage and make things more clear. Suggested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200504131326.18290-3-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  arm64: insn: Add constants for new HINT instruction decode  (Mark Brown)  1 file, -1/+1
Add constants for decoding newer instructions defined in the HINT space. Since we are now decoding both the op2 and CRm fields rename the enum as well; this is compatible with what the existing users are doing. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200504131326.18290-2-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  arm64: Unify WORKAROUND_SPECULATIVE_AT_{NVHE,VHE}  (Andrew Scull)  1 file, -14/+11
Errata 1165522, 1319367 and 1530923 each allow TLB entries to be allocated as a result of a speculative AT instruction. In order to avoid mandating VHE on certain affected CPUs, apply the workaround to both the nVHE and the VHE case for all affected CPUs. Signed-off-by: Andrew Scull <ascull@google.com> Acked-by: Will Deacon <will@kernel.org> CC: Marc Zyngier <maz@kernel.org> CC: James Morse <james.morse@arm.com> CC: Suzuki K Poulose <suzuki.poulose@arm.com> CC: Will Deacon <will@kernel.org> CC: Steven Price <steven.price@arm.com> Link: https://lore.kernel.org/r/20200504094858.108917-1-ascull@google.com Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  arm64: kernel: Convert to modern annotations for assembly functions  (Mark Brown)  10 files, -68/+68
In an effort to clarify and simplify the annotation of assembly functions in the kernel new macros have been introduced. These replace ENTRY and ENDPROC and also add a new annotation for static functions which previously had no ENTRY equivalent. Update the annotations in the core kernel code to the new macros. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200501115430.37315-3-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  arm64: entry: Refactor and modernise annotation for ret_to_user  (Mark Brown)  1 file, -13/+14
As part of an effort to clarify and clean up the assembler annotations new macros have been introduced which annotate the start and end of blocks of code in assembler files. Currently ret_to_user has an out of line slow path work_pending placed above the main function which makes annotating the start and end of these blocks of code awkward. Since work_pending is only referenced from within ret_to_user try to make things a bit clearer by moving it after the current ret_to_user and then marking both ret_to_user and work_pending as part of a single ret_to_user code block. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200501115430.37315-2-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-05-01  PCI: Constify struct pci_ecam_ops  (Rob Herring)  1 file, -2/+2
struct pci_ecam_ops is typically DT match table data which is defined to be const. It's also best practice for ops structs to be const. Ideally, we'd make struct pci_ops const as well, but that becomes pretty invasive, so for now we just cast it where needed. Link: https://lore.kernel.org/r/20200409234923.21598-2-robh@kernel.org Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Andrew Murray <amurray@thegoodpenguin.co.uk> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net> Cc: Len Brown <lenb@kernel.org> Cc: Jonathan Chocron <jonnyc@amazon.com> Cc: Zhou Wang <wangzhou1@hisilicon.com> Cc: Robert Richter <rrichter@marvell.com> Cc: Toan Le <toan@os.amperecomputing.com> Cc: Marc Gonzalez <marc.w.gonzalez@free.fr> Cc: Mans Rullgard <mans@mansr.com> Cc: linux-acpi@vger.kernel.org
2020-04-30  arm64: kexec_file: print appropriate variable  (Łukasz Stelmach)  1 file, -3/+3
The value of kbuf->memsz may be different than kbuf->bufsz after calling kexec_add_buffer(). Hence both values should be logged. Fixes: 52b2a8af74360 ("arm64: kexec_file: load initrd and device-tree") Fixes: 3751e728cef29 ("arm64: kexec_file: add crash dump support") Signed-off-by: Łukasz Stelmach <l.stelmach@samsung.com> Cc: AKASHI Takahiro <takahiro.akashi@linaro.org> Cc: James Morse <james.morse@arm.com> Cc: Bhupesh Sharma <bhsharma@redhat.com> Link: https://lore.kernel.org/r/20200430163142.27282-2-l.stelmach@samsung.com Signed-off-by: Will Deacon <will@kernel.org>
2020-04-30  arm64: vdso: Add -fasynchronous-unwind-tables to cflags  (Vincenzo Frascino)  1 file, -1/+1
On arm64 linux gcc uses -fasynchronous-unwind-tables -funwind-tables by default since gcc-8, so now the de facto platform ABI is to allow unwinding from async signal handlers. However on bare metal targets (aarch64-none-elf), and on old gcc, async and sync unwind tables are not enabled by default to avoid runtime memory costs. This means if linux is built with a baremetal toolchain the vdso.so may not have unwind tables which breaks the gcc platform ABI guarantee in userspace. Add -fasynchronous-unwind-tables explicitly to the vgettimeofday.o cflags to address the ABI change. Fixes: 28b1a824a4f4 ("arm64: vdso: Substitute gettimeofday() with C implementation") Cc: Will Deacon <will@kernel.org> Reported-by: Szabolcs Nagy <szabolcs.nagy@arm.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-04-29  arm64: vdso: use consistent 'map' nomenclature  (Mark Rutland)  1 file, -38/+26
The current code doesn't use a consistent naming scheme for structures, enums, or variables, making it harder than necessary to determine the relationship between these. Let's make this easier by consistently using 'map' nomenclature for mappings created in userspace, minimizing redundant comments, and using designated array initializers to tie indices to their respective elements. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200428164921.41641-5-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-04-29  arm64: vdso: use consistent 'abi' nomenclature  (Mark Rutland)  1 file, -35/+34
The current code doesn't use a consistent naming scheme for structures, enums, or variables, making it harder than necessary to determine the relationship between these. Let's make this easier by consistently using 'vdso_abi' nomenclature. The 'vdso_lookup' array is renamed to 'vdso_info' to describe what it contains rather than how it is consumed. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200428164921.41641-4-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-04-29  arm64: vdso: simplify arch_vdso_type ifdeffery  (Mark Rutland)  1 file, -10/+5
Currently we have some ifdeffery to determine the number of elements in enum arch_vdso_type as VDSO_TYPES, rather than the usual pattern of having the enum define this:

| enum foo_type {
|     FOO_TYPE_A,
|     FOO_TYPE_B,
| #ifdef CONFIG_C
|     FOO_TYPE_C,
| #endif
|     NR_FOO_TYPES
| }

... however, given we only use this number to size the vdso_lookup[] array, this is redundant anyway as the compiler can automatically size the array to fit all defined elements. So let's remove the VDSO_TYPES to simplify the code. At the same time, let's use designated initializers for the array elements so that these are guaranteed to be at the expected indices, regardless of how we modify the structure. For clarity the redundant explicit initialization of the enum elements is dropped. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200428164921.41641-3-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
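A standalone illustration of that pattern (the names echo the example above and are not the vDSO code): with designated initializers the compiler sizes the array itself, so no explicit "number of types" enumerator is needed.

    #include <stdio.h>

    enum foo_type { FOO_TYPE_A, FOO_TYPE_B, FOO_TYPE_C };

    static const char *foo_names[] = {
        [FOO_TYPE_A] = "a",
        [FOO_TYPE_B] = "b",
        [FOO_TYPE_C] = "c",
    };

    #define ARRAY_SIZE(a)  (sizeof(a) / sizeof((a)[0]))

    int main(void)
    {
        printf("%zu entries\n", ARRAY_SIZE(foo_names));  /* prints 3 */
        printf("%s\n", foo_names[FOO_TYPE_B]);           /* prints b */
        return 0;
    }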
2020-04-29  arm64: vdso: remove aarch32_vdso_pages[]  (Mark Rutland)  1 file, -7/+12
The aarch32_vdso_pages[] array is unnecessarily confusing. We only ever use the C_VECTORS and C_SIGPAGE slots, and the other slots are unused despite having corresponding mappings (sharing pages with the AArch64 vDSO). Let's make this clearer by using separate variables for the vectors page and the sigreturn page. A subsequent patch will clean up the C_* naming and conflation of pages with mappings. Note that since both the vectors page and sig page are single pages, and the mapping is a single page long, their pages array do not need to be NULL-terminated (and this was not the case with the existing code for the sig page as it was the last entry in the aarch32_vdso_pages array). There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200428164921.41641-2-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28  Merge branch 'work.sysctl' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Daniel Borkmann)  2 files, -3/+2
Pull in Christoph Hellwig's series that changes the sysctl's ->proc_handler methods to take kernel pointers instead. It gets rid of the set_fs address space overrides used by BPF. As per discussion, pull in the feature branch into bpf-next as it relates to BPF sysctl progs. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200427071508.GV23230@ZenIV.linux.org.uk/T/
2020-04-28  efi/libstub/arm64: align PE/COFF sections to segment alignment  (Ard Biesheuvel)  2 files, -2/+3
The arm64 kernel's segment alignment is fixed at 64 KB for any page size, and relocatable kernels are able to fix up any misalignment of the kernel image with respect to the 2 MB section alignment that is mandated by the arm64 boot protocol. Let's increase the PE/COFF section alignment to the same value, so that kernels loaded by the UEFI PE/COFF loader are guaranteed to end up at an address that doesn't require any reallocation to be done if the kernel is relocatable. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20200413155521.24698-6-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28  arm64: vdso: Add '-Bsymbolic' to ldflags  (Vincenzo Frascino)  1 file, -3/+5
Commit 28b1a824a4f44 ("arm64: vdso: Substitute gettimeofday() with C implementation") introduced an unused 'VDSO_LDFLAGS' variable to the vdso Makefile, suggesting that we should be passing '-Bsymbolic' to the linker, as we do when linking the compat vDSO. Although it's not strictly necessary to pass this flag, it would be required if we were to add any internal references to the exported symbols. It's also consistent with how we link the compat vdso so, since there's no real downside from passing it, add '-Bsymbolic' to the ldflags for the native vDSO. Fixes: 28b1a824a4f44 ("arm64: vdso: Substitute gettimeofday() with C implementation") Reported-by: Geoff Levand <geoff@infradead.org> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20200428150854.33130-1-vincenzo.frascino@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28  arm64/kernel: Fix range on invalidating dcache for boot page tables  (Gavin Shan)  2 files, -3/+10
Prior to commit 8eb7e28d4c642c31 ("arm64/mm: move runtime pgds to rodata"), idmap_pg_dir, tramp_pg_dir, reserved_ttbr0, swapper_pg_dir, and init_pg_dir were contiguous at the end of the kernel image. The maintenance at the end of __create_page_tables assumed these were contiguous, and affected everything from the start of idmap_pg_dir to the end of init_pg_dir. That commit moved all but init_pg_dir into the .rodata section, with other data placed between idmap_pg_dir and init_pg_dir, but did not update the maintenance. Hence the maintenance is performed on much more data than necessary (but as the bootloader previously made this clean to the PoC there is no functional problem). As we only alter idmap_pg_dir and init_pg_dir, we only need to perform maintenance for these. As the other dirs are in .rodata, the bootloader will have initialised them as expected and cleaned them to the PoC. The kernel will initialize them as necessary after enabling the MMU. This patch reworks the maintenance to only cover the idmap_pg_dir and init_pg_dir to avoid this unnecessary work. Signed-off-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200427235700.112220-1-gshan@redhat.com Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28  arm64: cpufeature: Add an overview comment for the cpufeature framework  (Will Deacon)  1 file, -0/+50
Now that Suzuki isn't within throwing distance, I thought I'd better add a rough overview comment to cpufeature.c so that it doesn't take me days to remember how it works next time. Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20200421142922.18950-9-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28  arm64: cpufeature: Relax checks for AArch32 support at EL[0-2]  (Will Deacon)  1 file, -7/+3
We don't need to be quite as strict about mismatched AArch32 support, which is good because the friendly hardware folks have been busy mismatching this to their hearts' content. * We don't care about EL2 or EL3 (there are silly comments concerning the latter, so remove those) * EL1 support is gated by the ARM64_HAS_32BIT_EL1 capability and handled gracefully when a mismatch occurs * EL0 support is gated by the ARM64_HAS_32BIT_EL0 capability and handled gracefully when a mismatch occurs Relax the AArch32 checks to FTR_NONSTRICT. Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20200421142922.18950-8-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28  arm64: cpufeature: Relax AArch32 system checks if EL1 is 64-bit only  (Will Deacon)  1 file, -1/+32
If AArch32 is not supported at EL1, the AArch32 feature register fields no longer advertise support for some system features: * ISAR4.SMC * PFR1.{Virt_frac, Sec_frac, Virtualization, Security, ProgMod} In which case, we don't need to emit "SANITY CHECK" failures for all of them. Add logic to relax the strictness of individual feature register fields at runtime and use this for the fields above if 32-bit EL1 is not supported. Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20200421142922.18950-7-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28  arm64: cpufeature: Factor out checking of AArch32 features  (Will Deacon)  1 file, -47/+65
update_cpu_features() is pretty large, so split out the checking of the AArch32 features into a separate function and call it after checking the AArch64 features. Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20200421142922.18950-6-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>