path: root/arch/arm64/kernel
2021-08-02  arm64: unnecessary end 'return;' in void functions  (Jason Wang; 1 file, -2/+0)
A trailing 'return;' at the end of a void function is redundant and can be removed safely. Signed-off-by: Jason Wang <wangborong@cdjrlc.com> Link: https://lore.kernel.org/r/20210726123940.63232-1-wangborong@cdjrlc.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
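A minimal before/after illustration (hypothetical function, not from the patch):

    /* Before: the trailing return is redundant. */
    void report_status(void)
    {
            print_status();
            return;
    }

    /* After: simply fall off the end of the function. */
    void report_status(void)
    {
            print_status();
    }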
2021-07-30  arm64: SSBS/DIT: print SSBS and DIT bit when printing PSTATE  (Lingyan Huang; 1 file, -3/+7)
The current code that prints PSTATE when generating backtraces does not include the SSBS and DIT bits, so add this information. Cc: Vladimir Murzin <vladimir.murzin@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Lingyan Huang <huanglingyan2@huawei.com> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/1626920436-54816-1-git-send-email-zhangshaokun@hisilicon.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-07-30  arm64: cpufeature: Use defined macro instead of magic numbers  (Shaokun Zhang; 1 file, -1/+1)
Use the defined macro instead of magic numbers to simplify the code and make it more readable. Cc: Marc Zyngier <maz@kernel.org> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Link: https://lore.kernel.org/r/1626415089-57584-1-git-send-email-zhangshaokun@hisilicon.com Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-07-28  arm64: avoid double ISB on kernel entry  (Peter Collingbourne; 1 file, -2/+13)
Although an ISB is required in order to make the MTE-related system register update to GCR_EL1 effective, and the same is true for PAC-related updates to SCTLR_EL1 or APIAKey{Hi,Lo}_EL1, we issue two ISBs on machines that support both features while we only need to issue one. To avoid the unnecessary additional ISB, remove the ISBs from the PAC and MTE-specific alternative blocks and add a couple of additional blocks that cause us to only execute one ISB if both features are supported. Signed-off-by: Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/Idee7e8114d5ae5a0b171d06220a0eb4bb015a51c Link: https://lore.kernel.org/r/20210727205439.2557419-1-pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-07-28  arm64: mte: optimize GCR_EL1 modification on kernel entry/exit  (Peter Collingbourne; 1 file, -8/+4)
Accessing GCR_EL1 and issuing an ISB can be expensive on some microarchitectures. Although we must write to GCR_EL1, we can restructure the code to avoid reading from it because the new value can be derived entirely from the exclusion mask, which is already in a GPR. Do so. Signed-off-by: Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/I560a190a74176ca4cc5191dad08f77f6b1577c75 Link: https://lore.kernel.org/r/20210714013638.3995315-1-pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-07-28  arm64: mte: introduce a per-CPU tag checking mode preference  (Peter Collingbourne; 1 file, -2/+63)
Add a per-CPU sysfs node, mte_tcf_preferred, that allows the preferred tag checking mode to be configured. The current possible values are async and sync. Link: https://linux-review.googlesource.com/id/I7493dcd533a2785a1437b16c3f6b50919f840854 Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210727205300.2554659-5-pcc@google.com Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
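A plausible usage, assuming the standard per-CPU sysfs layout (the exact path is illustrative; the node name comes from this patch): echo sync > /sys/devices/system/cpu/cpu0/mte_tcf_preferred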
2021-07-28  arm64: move preemption disablement to prctl handlers  (Peter Collingbourne; 3 files, -22/+17)
In the next patch, we will start reading sctlr_user from mte_update_sctlr_user and subsequently writing a new value based on the task's TCF setting and potentially the per-CPU TCF preference. This means that we need to be careful to disable preemption around any code sequences that read from sctlr_user and subsequently write to sctlr_user and/or SCTLR_EL1, so that we don't end up writing a stale value (based on the previous CPU's TCF preference) to either of them. We currently have four such sequences, in the prctl handlers for PR_SET_TAGGED_ADDR_CTRL and PR_PAC_SET_ENABLED_KEYS, as well as in the task initialization code that resets the prctl settings. Change the prctl handlers to disable preemption in the handlers themselves rather than the functions that they call, and change the task initialization code to call the respective prctl handlers instead of setting sctlr_user directly. As a result of this change, we no longer need the helper function set_task_sctlr_el1, nor does its behavior make sense any more, so remove it. Signed-off-by: Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/Ic0e8a0c00bb47d786c1e8011df0b7fe99bee4bb5 Link: https://lore.kernel.org/r/20210727205300.2554659-4-pcc@google.com Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
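A sketch of the resulting shape of the MTE handler, using helper names mentioned in this series (illustrative, not the verbatim patch):

    long set_mte_ctrl(struct task_struct *task, unsigned long arg)
    {
            /* ... compute task->thread.mte_ctrl from arg ... */
            preempt_disable();
            mte_update_sctlr_user(task);    /* reads and rewrites sctlr_user */
            if (task == current)
                    update_sctlr_el1(task->thread.sctlr_user);
            preempt_enable();
            return 0;
    }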
2021-07-28  arm64: mte: change ASYNC and SYNC TCF settings into bitfields  (Peter Collingbourne; 1 file, -42/+28)
Allow the user program to specify both ASYNC and SYNC TCF modes by repurposing the existing constants as bitfields. This will allow the kernel to select one of the modes on behalf of the user program. With this patch the kernel will always select async mode, but a subsequent patch will make this configurable. Link: https://linux-review.googlesource.com/id/Icc5923c85a8ea284588cc399ae74fd19ec291230 Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210727205300.2554659-3-pcc@google.com Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
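With the TCF settings as bitfields, a userspace request can name both modes and let the kernel choose; a minimal sketch using the prctl ABI constants:

    #include <sys/prctl.h>

    /* Request tagged addresses plus both TCF modes; the kernel picks
     * one of them (always async with this patch, configurable later). */
    prctl(PR_SET_TAGGED_ADDR_CTRL,
          PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC | PR_MTE_TCF_ASYNC,
          0, 0, 0);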
2021-07-28  arm64: mte: rename gcr_user_excl to mte_ctrl  (Peter Collingbourne; 3 files, -9/+11)
We are going to use this field to store more data. To prepare for that, rename it and change the users to rely on the bit position of gcr_user_excl in mte_ctrl. Link: https://linux-review.googlesource.com/id/Ie1fd18e480100655f5d22137f5b22f4f3a9f9e2e Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210727205300.2554659-2-pcc@google.com Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-07-27  arm64: mte: avoid TFSRE0_EL1 related operations unless in async mode  (Peter Collingbourne; 1 file, -9/+19)
There is no reason to touch TFSRE0_EL1 nor issue a DSB unless our task is in asynchronous mode. Since these operations (especially the DSB) may be expensive on certain microarchitectures, only perform them if necessary. Furthermore, stop clearing TFSRE0_EL1 on entry because it will be cleared on exit and it is not necessary to have any particular value in TFSRE0_EL1 between entry and exit. Signed-off-by: Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/Ib353a63e3d0abc2b0b008e96aa2d9692cfc1b815 Link: https://lore.kernel.org/r/20210709023532.2133673-1-pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-07-23  signal: Verify the alignment and size of siginfo_t  (Eric W. Biederman; 2 files, -0/+4)
Update the static assertions about siginfo_t to also describe its alignment and size. While investigating whether it was possible to add a 64-bit field to siginfo_t [1], it became apparent that the alignment of siginfo_t is as much a part of the ABI as the size of the structure: if the alignment changes, a siginfo_t embedded in another structure can move to a different offset, which is not acceptable for an ABI-relevant structure. So document that fact and add static assertions to notify developers if they change the alignment by accident. [1] https://lkml.kernel.org/r/YJEZdhe6JGFNYlum@elver.google.com Acked-by: Marco Elver <elver@google.com> v1: https://lkml.kernel.org/r/20210505141101.11519-4-ebiederm@xmission.co Link: https://lkml.kernel.org/r/875yxaxmyl.fsf_-_@disp2133 Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
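The assertions take this general form (a sketch; the expected values are part of the ABI and architecture-specific):

    static_assert(sizeof(siginfo_t) == 128);
    static_assert(__alignof__(siginfo_t) == 8);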
2021-07-23  arm64: Add compile-time asserts for siginfo_t offsets  (Marco Elver; 2 files, -0/+74)
To help catch ABI breaks at compile-time, add compile-time assertions to verify the siginfo_t layout. Link: https://lkml.kernel.org/r/20210505141101.11519-3-ebiederm@xmission.com Link: https://lkml.kernel.org/r/20210429190734.624918-3-elver@google.com Link: https://lkml.kernel.org/r/87sg0fxx92.fsf_-_@disp2133 Signed-off-by: Marco Elver <elver@google.com> Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
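A sketch of the offset checks, via a helper macro in the spirit of the patch (the offsets shown are illustrative):

    #define CHECK_OFFSET(field, expected) \
            static_assert(offsetof(siginfo_t, field) == (expected))

    CHECK_OFFSET(si_signo, 0x00);
    CHECK_OFFSET(si_errno, 0x04);
    CHECK_OFFSET(si_code,  0x08);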
2021-07-21  arm64: smccc: Save lr before calling __arm_smccc_sve_check()  (Jean-Philippe Brucker; 1 file, -3/+6)
Commit cfa7ff959a78 ("arm64: smccc: Support SMCCC v1.3 SVE register saving hint") added a call to __arm_smccc_sve_check() which clobbers the lr (register x30), causing __arm_smccc_hvc() to return to itself and crash. Save lr on the stack before calling __arm_smccc_sve_check(). Save the frame pointer (x29) to complete the frame record, and adjust the offsets used to access stack parameters. Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Mark Brown <broonie@kernel.org> Fixes: cfa7ff959a78 ("arm64: smccc: Support SMCCC v1.3 SVE register saving hint") Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20210721071834.69130-1-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2021-07-15  arm64: entry: fix KCOV suppression  (Mark Rutland; 1 file, -1/+1)
We suppress KCOV for entry.o rather than entry-common.o. As entry.o is built from entry.S, this is pointless, and permits instrumentation of entry-common.o, which is built from entry-common.c. Fix the Makefile to suppress KCOV for entry-common.o, as we had intended to begin with. I've verified with objdump that this is working as expected. Fixes: bf6fa2c0dda7 ("arm64: entry: don't instrument entry code with KCOV") Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210715123049.9990-1-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
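The fix amounts to pointing the kbuild knob at the right object; a sketch of the Makefile line:

    # entry-common.o is built from entry-common.c, which is the code
    # that actually needs to be kept away from KCOV.
    KCOV_INSTRUMENT_entry-common.o := n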
2021-07-15  arm64: entry: add missing noinstr  (Mark Rutland; 1 file, -1/+1)
We intend that all the early exception handling code is marked as `noinstr`, but we forgot this for __el0_error_handler_common(), which is called before we have completed entry from user mode. If it were instrumented, we could run into problems with RCU, lockdep, etc. Mark it as `noinstr` to prevent this. The few other functions in entry-common.c which do not have `noinstr` are called once we've completed entry, and are safe to instrument. Fixes: bb8e93a287a5 ("arm64: entry: convert SError handlers to C") Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210714172801.16475-1-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
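The marking itself is a single attribute on the definition, roughly (sketch):

    /* noinstr: runs before entry accounting completes, so no
     * instrumentation (KASAN, KCOV, tracing) may be inserted here. */
    static void noinstr __el0_error_handler_common(struct pt_regs *regs)
    {
            /* ... SError handling ... */
    }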
2021-07-15  arm64: mte: fix restoration of GCR_EL1 from suspend  (Mark Rutland; 1 file, -13/+2)
Since commit: bad1e1c663e0a72f ("arm64: mte: switch GCR_EL1 in kernel entry and exit") we saved/restored the user GCR_EL1 value at exception boundaries, and update_gcr_el1_excl() is no longer used for this. However it is used to restore the kernel's GCR_EL1 value when returning from a suspend state. Thus, the comment is misleading (and an ISB is necessary). When restoring the kernel's GCR value, we need an ISB to ensure this is used by subsequent instructions. We don't necessarily get an ISB by other means (e.g. if the kernel is built without support for pointer authentication). As __cpu_setup() initialised GCR_EL1.Exclude to 0xffff, until a context synchronization event, allocation tag 0 may be used rather than the desired set of tags. This patch drops the misleading comment, adds the missing ISB, and for clarity folds update_gcr_el1_excl() into its only user. Fixes: bad1e1c663e0 ("arm64: mte: switch GCR_EL1 in kernel entry and exit") Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210714143843.56537-2-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
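The restore path then reduces to something like the following (a sketch reusing the existing gcr_kernel_excl variable and sysreg helpers):

    /* Restore the kernel's exclusion mask and make the new GCR_EL1
     * value visible to subsequent instructions. */
    sysreg_clear_set_s(SYS_GCR_EL1, SYS_GCR_EL1_EXCL_MASK, gcr_kernel_excl);
    isb();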
2021-07-12  arm64: Add missing header <asm/smp.h> in two files  (Carlos Bilbao; 1 file, -0/+1)
Add the missing header <asm/smp.h> in include/asm/smp_plat.h, as it calls the function cpu_logical_map(). Also include it in kernel/cpufeature.c, which calls the functions cpu_panic_kernel() and cpu_die_early(). Both files call functions defined in this header; including it directly makes the header dependencies less fragile. Signed-off-by: Carlos Bilbao <bilbao@vt.edu> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/4325940.LvFx2qVVIh@iron-maiden Signed-off-by: Will Deacon <will@kernel.org>
2021-07-08  arm64: stacktrace: use %pSb for backtrace printing  (Stephen Boyd; 1 file, -1/+1)
Let's use the new printk format to print the stacktrace entry when printing a backtrace to the kernel logs. This will include any module's build ID[1] in it so that offline/crash debugging can easily locate the debuginfo for a module via something like debuginfod[2]. Link: https://lkml.kernel.org/r/20210511003845.2429846-7-swboyd@chromium.org Link: https://fedoraproject.org/wiki/Releases/FeatureBuildId [1] Link: https://sourceware.org/elfutils/Debuginfod.html [2] Signed-off-by: Stephen Boyd <swboyd@chromium.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Jessica Yu <jeyu@kernel.org> Cc: Evan Green <evgreen@chromium.org> Cc: Hsin-Yi Wang <hsinyi@chromium.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Baoquan He <bhe@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Young <dyoung@redhat.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Sasha Levin <sashal@kernel.org> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vivek Goyal <vgoyal@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
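Usage is a one-specifier change to the format string, e.g. (illustrative):

    /* "%pSb" prints symbol+offset and appends the module build ID. */
    printk("%s %pSb\n", loglvl, (void *)where);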
2021-07-08  arm64: convert to setup_initial_init_mm()  (Kefeng Wang; 1 file, -4/+1)
Use setup_initial_init_mm() helper to simplify code. Link: https://lkml.kernel.org/r/20210608083418.137226-5-wangkefeng.wang@huawei.com Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
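On arm64 the conversion collapses four open-coded init_mm assignments into one call (sketch):

    /* Was: init_mm.start_code = (unsigned long)_stext; and so on
     * for end_code, end_data and brk. */
    setup_initial_init_mm(_stext, _etext, _edata, _end);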
2021-07-08  set_memory: allow querying whether set_direct_map_*() is actually enabled  (Mike Rapoport; 1 file, -0/+1)
On arm64, set_direct_map_*() functions may return 0 without actually changing the linear map. This behaviour can be controlled using kernel parameters, so we need a way to determine at runtime whether calls to set_direct_map_invalid_noflush() and set_direct_map_default_noflush() have any effect. Extend set_memory API with can_set_direct_map() function that allows checking if calling set_direct_map_*() will actually change the page table, replace several occurrences of open coded checks in arm64 with the new function and provide a generic stub for architectures that always modify page tables upon calls to set_direct_map APIs. [arnd@arndb.de: arm64: kfence: fix header inclusion ] Link: https://lkml.kernel.org/r/20210518072034.31572-4-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Acked-by: James Bottomley <James.Bottomley@HansenPartnership.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Christopher Lameter <cl@linux.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Elena Reshetova <elena.reshetova@intel.com> Cc: Hagen Paul Pfeifer <hagen@jauu.net> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Bottomley <jejb@linux.ibm.com> Cc: "Kirill A. Shutemov" <kirill@shutemov.name> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Michael Kerrisk <mtk.manpages@gmail.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Palmer Dabbelt <palmerdabbelt@google.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rick Edgecombe <rick.p.edgecombe@intel.com> Cc: Roman Gushchin <guro@fb.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tycho Andersen <tycho@tycho.ws> Cc: Will Deacon <will@kernel.org> Cc: kernel test robot <lkp@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
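Callers can then guard direct-map manipulation with the query; a minimal sketch (page and addr are assumed to be in scope):

    if (!can_set_direct_map())
            return;         /* set_direct_map_*() would be a no-op here */

    set_direct_map_invalid_noflush(page);
    flush_tlb_kernel_range(addr, addr + PAGE_SIZE);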
2021-07-06  Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm  (Linus Torvalds; 1 file, -6/+1)
Pull ARM development updates from Russell King:
- Make it clear __swp_entry_to_pte() uses PTE_TYPE_FAULT
- Updates for setting vmalloc size via command line, resolving an issue with the 8MiB hole not being properly accounted for, and clean up the code
- ftrace support for module PLTs
- Spelling fixes
- kbuild updates for removing generated files and pattern rules for generating files
- Clang/llvm updates
- Change the way the kernel is mapped, placing it in vmalloc space instead
- Remove arm_pm_restart from arm and aarch64

* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (29 commits)
  ARM: 9098/1: ftrace: MODULE_PLT: Fix build problem without DYNAMIC_FTRACE
  ARM: 9097/1: mmu: Declare section start/end correctly
  ARM: 9096/1: Remove arm_pm_restart()
  ARM: 9095/1: ARM64: Remove arm_pm_restart()
  ARM: 9094/1: Register with kernel restart handler
  ARM: 9093/1: drivers: firmware: psci: Register with kernel restart handler
  ARM: 9092/1: xen: Register with kernel restart handler
  ARM: 9091/1: Revert "mm: qsd8x50: Fix incorrect permission faults"
  ARM: 9090/1: Map the lowmem and kernel separately
  ARM: 9089/1: Define kernel physical section start and end
  ARM: 9088/1: Split KERNEL_OFFSET from PAGE_OFFSET
  ARM: 9087/1: kprobes: test-thumb: fix for LLVM_IAS=1
  ARM: 9086/1: syscalls: use pattern rules to generate syscall headers
  ARM: 9085/1: remove unneeded abi parameter to syscallnr.sh
  ARM: 9084/1: simplify the build rule of mach-types.h
  ARM: 9083/1: uncompress: atags_to_fdt: Spelling s/REturn/Return/
  ARM: 9082/1: [v2] mark prepare_page_table as __init
  ARM: 9079/1: ftrace: Add MODULE_PLTS support
  ARM: 9078/1: Add warn suppress parameter to arm_gen_branch_link()
  ARM: 9077/1: PLT: Move struct plt_entries definition to header
  ...
2021-07-02  Merge branch 'akpm' (patches from Andrew)  (Linus Torvalds; 1 file, -0/+1)
Merge more updates from Andrew Morton: "190 patches. Subsystems affected by this patch series: mm (hugetlb, userfaultfd, vmscan, kconfig, proc, z3fold, zbud, ras, mempolicy, memblock, migration, thp, nommu, kconfig, madvise, memory-hotplug, zswap, zsmalloc, zram, cleanups, kfence, and hmm), procfs, sysctl, misc, core-kernel, lib, lz4, checkpatch, init, kprobes, nilfs2, hfs, signals, exec, kcov, selftests, compress/decompress, and ipc"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (190 commits)
  ipc/util.c: use binary search for max_idx
  ipc/sem.c: use READ_ONCE()/WRITE_ONCE() for use_global_lock
  ipc: use kmalloc for msg_queue and shmid_kernel
  ipc sem: use kvmalloc for sem_undo allocation
  lib/decompressors: remove set but not used variabled 'level'
  selftests/vm/pkeys: exercise x86 XSAVE init state
  selftests/vm/pkeys: refill shadow register after implicit kernel write
  selftests/vm/pkeys: handle negative sys_pkey_alloc() return code
  selftests/vm/pkeys: fix alloc_random_pkey() to make it really, really random
  kcov: add __no_sanitize_coverage to fix noinstr for all architectures
  exec: remove checks in __register_bimfmt()
  x86: signal: don't do sas_ss_reset() until we are certain that sigframe won't be abandoned
  hfsplus: report create_date to kstat.btime
  hfsplus: remove unnecessary oom message
  nilfs2: remove redundant continue statement in a while-loop
  kprobes: remove duplicated strong free_insn_page in x86 and s390
  init: print out unknown kernel parameters
  checkpatch: do not complain about positive return values starting with EPOLL
  checkpatch: improve the indented label test
  checkpatch: scripts/spdxcheck.py now requires python3
  ...
2021-07-01  kernel.h: split out panic and oops helpers  (Andy Shevchenko; 1 file, -0/+1)
kernel.h has been used as a dumping ground for all kinds of stuff for a long time. This is an attempt to start cleaning it up by splitting out the panic and oops helpers. Doing so serves several purposes:
- dropping a dependency in bug.h
- dropping an include loop by moving panic_notifier.h out
- unloading kernel.h of something that has its own domain
At the same time, convert users tree-wide to use the new headers, although for the time being the new header is included back into kernel.h to avoid twisted indirect includes for existing users. [akpm@linux-foundation.org: thread_info.h needs limits.h] [andriy.shevchenko@linux.intel.com: ia64 fix] Link: https://lkml.kernel.org/r/20210520130557.55277-1-andriy.shevchenko@linux.intel.com Link: https://lkml.kernel.org/r/20210511074137.33666-1-andriy.shevchenko@linux.intel.com Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org> Co-developed-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Mike Rapoport <rppt@linux.ibm.com> Acked-by: Corey Minyard <cminyard@mvista.com> Acked-by: Christian Brauner <christian.brauner@ubuntu.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Kees Cook <keescook@chromium.org> Acked-by: Wei Liu <wei.liu@kernel.org> Acked-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Sebastian Reichel <sre@kernel.org> Acked-by: Luis Chamberlain <mcgrof@kernel.org> Acked-by: Stephen Boyd <sboyd@kernel.org> Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Acked-by: Helge Deller <deller@gmx.de> # parisc Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
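After the split, a typical user includes the dedicated header instead of relying on kernel.h; a sketch (the callback name is hypothetical):

    #include <linux/notifier.h>
    #include <linux/panic_notifier.h>

    static int my_panic_cb(struct notifier_block *nb,
                           unsigned long event, void *data)
    {
            /* last-ditch logging, watchdog kicks, etc. */
            return NOTIFY_DONE;
    }

    static struct notifier_block my_panic_nb = {
            .notifier_call = my_panic_cb,
    };

    /* e.g. from an init function: */
    atomic_notifier_chain_register(&panic_notifier_list, &my_panic_nb);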
2021-06-29  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds; 2 files, -5/+15)
Pull kvm updates from Paolo Bonzini: "This covers all architectures (except MIPS) so I don't expect any other feature pull requests this merge window.

ARM:
- Add MTE support in guests, complete with tag save/restore interface
- Reduce the impact of CMOs by moving them in the page-table code
- Allow device block mappings at stage-2
- Reduce the footprint of the vmemmap in protected mode
- Support the vGIC on dumb systems such as the Apple M1
- Add selftest infrastructure to support multiple configuration and apply that to PMU/non-PMU setups
- Add selftests for the debug architecture
- The usual crop of PMU fixes

PPC:
- Support for the H_RPT_INVALIDATE hypercall
- Conversion of Book3S entry/exit to C
- Bug fixes

S390:
- new HW facilities for guests
- make inline assembly more robust with KASAN and co

x86:
- Allow userspace to handle emulation errors (unknown instructions)
- Lazy allocation of the rmap (host physical -> guest physical address)
- Support for virtualizing TSC scaling on VMX machines
- Optimizations to avoid shattering huge pages at the beginning of live migration
- Support for initializing the PDPTRs without loading them from memory
- Many TLB flushing cleanups
- Refuse to load if two-stage paging is available but NX is not (this has been a requirement in practice for over a year)
- A large series that separates the MMU mode (WP/SMAP/SMEP etc.) from CR0/CR4/EFER, using the MMU mode everywhere once it is computed from the CPU registers
- Use PM notifier to notify the guest about host suspend or hibernate
- Support for passing arguments to Hyper-V hypercalls using XMM registers
- Support for Hyper-V TLB flush hypercalls and enlightened MSR bitmap on AMD processors
- Hide Hyper-V hypercalls that are not included in the guest CPUID
- Fixes for live migration of virtual machines that use the Hyper-V "enlightened VMCS" optimization of nested virtualization
- Bugfixes (not many)

Generic:
- Support for retrieving statistics without debugfs
- Cleanups for the KVM selftests API"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (314 commits)
  KVM: x86: rename apic_access_page_done to apic_access_memslot_enabled
  kvm: x86: disable the narrow guest module parameter on unload
  selftests: kvm: Allows userspace to handle emulation errors.
  kvm: x86: Allow userspace to handle emulation errors
  KVM: x86/mmu: Let guest use GBPAGES if supported in hardware and TDP is on
  KVM: x86/mmu: Get CR4.SMEP from MMU, not vCPU, in shadow page fault
  KVM: x86/mmu: Get CR0.WP from MMU, not vCPU, in shadow page fault
  KVM: x86/mmu: Drop redundant rsvd bits reset for nested NPT
  KVM: x86/mmu: Optimize and clean up so called "last nonleaf level" logic
  KVM: x86: Enhance comments for MMU roles and nested transition trickiness
  KVM: x86/mmu: WARN on any reserved SPTE value when making a valid SPTE
  KVM: x86/mmu: Add helpers to do full reserved SPTE checks w/ generic MMU
  KVM: x86/mmu: Use MMU's role to determine PTTYPE
  KVM: x86/mmu: Collapse 32-bit PAE and 64-bit statements for helpers
  KVM: x86/mmu: Add a helper to calculate root from role_regs
  KVM: x86/mmu: Add helper to update paging metadata
  KVM: x86/mmu: Don't update nested guest's paging bitmasks if CR0.PG=0
  KVM: x86/mmu: Consolidate reset_rsvds_bits_mask() calls
  KVM: x86/mmu: Use MMU role_regs to get LA57, and drop vCPU LA57 helper
  KVM: x86/mmu: Get nested MMU's root level from the MMU's role
  ...
2021-06-29  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds; 41 files, -2354/+1209)
Pull arm64 updates from Will Deacon: "There's a reasonable amount here and the juicy details are all below. It's worth noting that the MTE/KASAN changes strayed outside of our usual directories due to core mm changes and some associated changes to some other architectures; Andrew asked for us to carry these [1] rather than take them via the -mm tree.

Summary:
- Optimise SVE switching for CPUs with 128-bit implementations.
- Fix output format from SVE selftest.
- Add support for versions v1.2 and v1.3 of the SMC calling convention.
- Allow Pointer Authentication to be configured independently for kernel and userspace.
- PMU driver cleanups for managing IRQ affinity and exposing event attributes via sysfs.
- KASAN optimisations for both hardware tagging (MTE) and out-of-line software tagging implementations.
- Relax frame record alignment requirements to facilitate 8-byte alignment with KASAN and Clang.
- Cleanup of page-table definitions and removal of unused memory types.
- Reduction of ARCH_DMA_MINALIGN back to 64 bytes.
- Refactoring of our instruction decoding routines and addition of some missing encodings.
- Entry code moved into C and hardened against harmful compiler instrumentation.
- Update booting requirements for the FEAT_HCX feature, added to v8.7 of the architecture.
- Fix resume from idle when pNMI is being used.
- Additional CPU sanity checks for MTE and preparatory changes for systems where not all of the CPUs support 32-bit EL0.
- Update our kernel string routines to the latest Cortex Strings implementation.
- Big cleanup of our cache maintenance routines, which were confusingly named and inconsistent in their implementations.
- Tweak linker flags so that GDB can understand vmlinux when using RELR relocations.
- Boot path cleanups to enable early initialisation of per-cpu operations needed by KCSAN.
- Non-critical fixes and miscellaneous cleanup"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (150 commits)
  arm64: tlb: fix the TTL value of tlb_get_level
  arm64: Restrict undef hook for cpufeature registers
  arm64/mm: Rename ARM64_SWAPPER_USES_SECTION_MAPS
  arm64: insn: avoid circular include dependency
  arm64: smp: Bump debugging information print down to KERN_DEBUG
  drivers/perf: fix the missed ida_simple_remove() in ddr_perf_probe()
  perf/arm-cmn: Fix invalid pointer when access dtc object sharing the same IRQ number
  arm64: suspend: Use cpuidle context helpers in cpu_suspend()
  PSCI: Use cpuidle context helpers in psci_cpu_suspend_enter()
  arm64: Convert cpu_do_idle() to using cpuidle context helpers
  arm64: Add cpuidle context save/restore helpers
  arm64: head: fix code comments in set_cpu_boot_mode_flag
  arm64: mm: drop unused __pa(__idmap_text_start)
  arm64: mm: fix the count comments in compute_indices
  arm64/mm: Fix ttbr0 values stored in struct thread_info for software-pan
  arm64: mm: Pass original fault address to handle_mm_fault()
  arm64/mm: Drop SECTION_[SHIFT|SIZE|MASK]
  arm64/mm: Use CONT_PMD_SHIFT for ARM64_MEMSTART_SHIFT
  arm64/mm: Drop SWAPPER_INIT_MAP_SIZE
  arm64: Conditionally configure PTR_AUTH key of the kernel.
  ...
2021-06-28  Merge tag 'sched-core-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 2 files, -2/+1)
Pull scheduler updates from Ingo Molnar:

- Changes to core scheduling facilities:
  - Add "Core Scheduling" via CONFIG_SCHED_CORE=y, which enables coordinated scheduling across SMT siblings. This is a much requested feature for cloud computing platforms, to allow the flexible utilization of SMT siblings, without exposing untrusted domains to information leaks & side channels, plus to ensure more deterministic computing performance on SMT systems used by heterogeneous workloads. There are new prctls to set core scheduling groups, which allows more flexible management of workloads that can share siblings.
  - Fix task->state access anti-patterns that may result in missed wakeups and rename it to ->__state in the process to catch new abuses.

- Load-balancing changes:
  - Tweak newidle_balance for fair-sched, to improve 'memcache'-like workloads.
  - "Age" (decay) average idle time, to better track & improve workloads such as 'tbench'.
  - Fix & improve energy-aware (EAS) balancing logic & metrics.
  - Fix & improve the uclamp metrics.
  - Fix task migration (taskset) corner case on !CONFIG_CPUSET.
  - Fix RT and deadline utilization tracking across policy changes.
  - Introduce a "burstable" CFS controller via cgroups, which allows bursty CPU-bound workloads to borrow a bit against their future quota to improve overall latencies & batching. Can be tweaked via /sys/fs/cgroup/cpu/<X>/cpu.cfs_burst_us.
  - Rework asymmetric topology/capacity detection & handling.

- Scheduler statistics & tooling:
  - Disable delayacct by default, but add a sysctl to enable it at runtime if tooling needs it. Use static keys and other optimizations to make it more palatable.
  - Use sched_clock() in delayacct, instead of ktime_get_ns().

- Misc cleanups and fixes.

* tag 'sched-core-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (72 commits)
  sched/doc: Update the CPU capacity asymmetry bits
  sched/topology: Rework CPU capacity asymmetry detection
  sched/core: Introduce SD_ASYM_CPUCAPACITY_FULL sched_domain flag
  psi: Fix race between psi_trigger_create/destroy
  sched/fair: Introduce the burstable CFS controller
  sched/uclamp: Fix uclamp_tg_restrict()
  sched/rt: Fix Deadline utilization tracking during policy change
  sched/rt: Fix RT utilization tracking during policy change
  sched: Change task_struct::state
  sched,arch: Remove unused TASK_STATE offsets
  sched,timer: Use __set_current_state()
  sched: Add get_current_state()
  sched,perf,kvm: Fix preemption condition
  sched: Introduce task_is_running()
  sched: Unbreak wakeups
  sched/fair: Age the average idle time
  sched/cpufreq: Consider reduced CPU capacity in energy calculation
  sched/fair: Take thermal pressure into account while estimating energy
  thermal/cpufreq_cooling: Update offline CPUs per-cpu thermal_pressure
  sched/fair: Return early from update_tg_cfs_load() if delta == 0
  ...
2021-06-24  Merge branch 'for-next/sve' into for-next/core  (Will Deacon; 2 files, -9/+19)
Optimise SVE switching for CPUs with 128-bit implementations.

* for-next/sve:
  arm64/sve: Skip flushing Z registers with 128 bit vectors
  arm64/sve: Use the sve_flush macros in sve_load_from_fpsimd_state()
  arm64/sve: Split _sve_flush macro into separate Z and predicate flushes
2021-06-24  Merge branch 'for-next/smccc' into for-next/core  (Will Deacon; 2 files, -0/+92)
Add support for versions v1.2 and v1.3 of the SMC calling convention.

* for-next/smccc:
  arm64: smccc: Support SMCCC v1.3 SVE register saving hint
  arm64: smccc: Add support for SMCCCv1.2 extended input/output registers
2021-06-24  Merge branch 'for-next/ptrauth' into for-next/core  (Will Deacon; 1 file, -0/+4)
Allow Pointer Authentication to be configured independently for kernel and userspace.

* for-next/ptrauth:
  arm64: Conditionally configure PTR_AUTH key of the kernel.
  arm64: Add ARM64_PTR_AUTH_KERNEL config option
2021-06-24  Merge branch 'for-next/perf' into for-next/core  (Will Deacon; 1 file, -5/+35)
PMU driver cleanups for managing IRQ affinity and exposing event attributes via sysfs.

* for-next/perf: (36 commits)
  drivers/perf: fix the missed ida_simple_remove() in ddr_perf_probe()
  perf/arm-cmn: Fix invalid pointer when access dtc object sharing the same IRQ number
  arm64: perf: Simplify EVENT ATTR macro in perf_event.c
  drivers/perf: Simplify EVENT ATTR macro in fsl_imx8_ddr_perf.c
  drivers/perf: Simplify EVENT ATTR macro in xgene_pmu.c
  drivers/perf: Simplify EVENT ATTR macro in qcom_l3_pmu.c
  drivers/perf: Simplify EVENT ATTR macro in qcom_l2_pmu.c
  drivers/perf: Simplify EVENT ATTR macro in SMMU PMU driver
  perf: Add EVENT_ATTR_ID to simplify event attributes
  perf/smmuv3: Don't trample existing events with global filter
  perf/hisi: Constify static attribute_group structs
  perf: qcom: Remove redundant dev_err call in qcom_l3_cache_pmu_probe()
  drivers/perf: hisi: Fix data source control
  arm64: perf: Add more support on caps under sysfs
  perf: qcom_l2_pmu: move to use request_irq by IRQF_NO_AUTOEN flag
  arm_pmu: move to use request_irq by IRQF_NO_AUTOEN flag
  perf: arm_spe: use DEVICE_ATTR_RO macro
  perf: xgene_pmu: use DEVICE_ATTR_RO macro
  perf: qcom: use DEVICE_ATTR_RO macro
  perf: arm_pmu: use DEVICE_ATTR_RO macro
  ...
2021-06-24  Merge branch 'for-next/mm' into for-next/core  (Will Deacon; 3 files, -9/+20)
Lots of cleanup to our various page-table definitions, but also some non-critical fixes and removal of some unnecessary memory types. The most interesting change here is the reduction of ARCH_DMA_MINALIGN back to 64 bytes, since we're not aware of any machines that need a higher value with the way the code is structured (only needed for non-coherent DMA).

* for-next/mm:
  arm64: tlb: fix the TTL value of tlb_get_level
  arm64/mm: Rename ARM64_SWAPPER_USES_SECTION_MAPS
  arm64: head: fix code comments in set_cpu_boot_mode_flag
  arm64: mm: drop unused __pa(__idmap_text_start)
  arm64: mm: fix the count comments in compute_indices
  arm64/mm: Fix ttbr0 values stored in struct thread_info for software-pan
  arm64: mm: Pass original fault address to handle_mm_fault()
  arm64/mm: Drop SECTION_[SHIFT|SIZE|MASK]
  arm64/mm: Use CONT_PMD_SHIFT for ARM64_MEMSTART_SHIFT
  arm64/mm: Drop SWAPPER_INIT_MAP_SIZE
  arm64: mm: decode xFSC in mem_abort_decode()
  arm64: mm: Add is_el1_data_abort() helper
  arm64: cache: Lower ARCH_DMA_MINALIGN to 64 (L1_CACHE_BYTES)
  arm64: mm: Remove unused support for Normal-WT memory type
  arm64: acpi: Map EFI_MEMORY_WT memory as Normal-NC
  arm64: mm: Remove unused support for Device-GRE memory type
  arm64: mm: Use better bitmap_zalloc()
  arm64/mm: Make vmemmap_free() available only with CONFIG_MEMORY_HOTPLUG
  arm64/mm: Remove [PUD|PMD]_TABLE_BIT from [pud|pmd]_bad()
  arm64/mm: Validate CONFIG_PGTABLE_LEVELS
2021-06-24  Merge branch 'for-next/misc' into for-next/core  (Will Deacon; 1 file, -1/+1)
Reduce loglevel of useless print during CPU offlining.

* for-next/misc:
  arm64: smp: Bump debugging information print down to KERN_DEBUG
2021-06-24  Merge branch 'for-next/insn' into for-next/core  (Will Deacon; 10 files, -1709/+267)
Refactoring of our instruction decoding routines and addition of some missing encodings.

* for-next/insn:
  arm64: insn: avoid circular include dependency
  arm64: insn: move AARCH64_INSN_SIZE into <asm/insn.h>
  arm64: insn: decouple patching from insn code
  arm64: insn: Add load/store decoding helpers
  arm64: insn: Add some opcodes to instruction decoder
  arm64: insn: Add barrier encodings
  arm64: insn: Add SVE instruction class
  arm64: Move instruction encoder/decoder under lib/
  arm64: Move aarch32 condition check functions
  arm64: Move patching utilities out of instruction encoding/decoding
2021-06-24  Merge branch 'for-next/entry' into for-next/core  (Will Deacon; 7 files, -421/+388)
The never-ending entry.S refactoring continues, putting us in a much better place with respect to compiler instrumentation whilst moving more of the code into C.

* for-next/entry:
  arm64: idle: don't instrument idle code with KCOV
  arm64: entry: don't instrument entry code with KCOV
  arm64: entry: make NMI entry/exit functions static
  arm64: entry: split SDEI entry
  arm64: entry: split bad stack entry
  arm64: entry: fold el1_inv() into el1h_64_sync_handler()
  arm64: entry: handle all vectors with C
  arm64: entry: template the entry asm functions
  arm64: entry: improve bad_mode()
  arm64: entry: move bad_mode() to entry-common.c
  arm64: entry: consolidate EL1 exception returns
  arm64: entry: organise entry vectors consistently
  arm64: entry: organise entry handlers consistently
  arm64: entry: convert IRQ+FIQ handlers to C
  arm64: entry: add a call_on_irq_stack helper
  arm64: entry: move NMI preempt logic to C
  arm64: entry: move arm64_preempt_schedule_irq to entry-common.c
  arm64: entry: convert SError handlers to C
  arm64: entry: unmask IRQ+FIQ after EL0 handling
  arm64: remove redundant local_daif_mask() in bad_mode()
2021-06-24  Merge branch 'for-next/cpuidle' into for-next/core  (Will Deacon; 2 files, -33/+20)
Fix resume from idle when pNMI is being used.

* for-next/cpuidle:
  arm64: suspend: Use cpuidle context helpers in cpu_suspend()
  PSCI: Use cpuidle context helpers in psci_cpu_suspend_enter()
  arm64: Convert cpu_do_idle() to using cpuidle context helpers
  arm64: Add cpuidle context save/restore helpers
2021-06-24  Merge branch 'for-next/cpufeature' into for-next/core  (Will Deacon; 4 files, -75/+237)
Additional CPU sanity checks for MTE and preparatory changes for systems where not all of the CPUs support 32-bit EL0.

* for-next/cpufeature:
  arm64: Restrict undef hook for cpufeature registers
  arm64: Kill 32-bit applications scheduled on 64-bit-only CPUs
  KVM: arm64: Kill 32-bit vCPUs on systems with mismatched EL0 support
  arm64: Allow mismatched 32-bit EL0 support
  arm64: cpuinfo: Split AArch32 registers out into a separate struct
  arm64: Check if GMID_EL1.BS is the same on all CPUs
  arm64: Change the cpuinfo_arm64 member type for some sysregs to u64
2021-06-24  Merge branch 'for-next/caches' into for-next/core  (Will Deacon; 13 files, -45/+66)
Big cleanup of our cache maintenance routines, which were confusingly named and inconsistent in their implementations.

* for-next/caches:
  arm64: Rename arm64-internal cache maintenance functions
  arm64: Fix cache maintenance function comments
  arm64: sync_icache_aliases to take end parameter instead of size
  arm64: __clean_dcache_area_pou to take end parameter instead of size
  arm64: __clean_dcache_area_pop to take end parameter instead of size
  arm64: __clean_dcache_area_poc to take end parameter instead of size
  arm64: __flush_dcache_area to take end parameter instead of size
  arm64: dcache_by_line_op to take end parameter instead of size
  arm64: __inval_dcache_area to take end parameter instead of size
  arm64: Fix comments to refer to correct function __flush_icache_range
  arm64: Move documentation of dcache_by_line_op
  arm64: assembler: remove user_alt
  arm64: Downgrade flush_icache_range to invalidate
  arm64: Do not enable uaccess for invalidate_icache_range
  arm64: Do not enable uaccess for flush_icache_range
  arm64: Apply errata to swsusp_arch_suspend_exit
  arm64: assembler: add conditional cache fixups
  arm64: assembler: replace `kaddr` with `addr`
2021-06-22  Merge branch kvm-arm64/mmu/mte into kvmarm-master/next  (Marc Zyngier; 2 files, -5/+15)
KVM/arm64 support for MTE, courtesy of Steven Price. It allows the guest to use memory tagging, and offers a new userspace API to save/restore the tags.

* kvm-arm64/mmu/mte:
  KVM: arm64: Document MTE capability and ioctl
  KVM: arm64: Add ioctl to fetch/store tags in a guest
  KVM: arm64: Expose KVM_ARM_CAP_MTE
  KVM: arm64: Save/restore MTE registers
  KVM: arm64: Introduce MTE VM feature
  arm64: mte: Sync tags for pages where PTE is untagged

Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-06-22  KVM: arm64: Save/restore MTE registers  (Steven Price; 1 file, -0/+2)
Define the new system registers that MTE introduces and context switch them. The MTE feature is still hidden from the ID register as it isn't supported in a VM yet. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Steven Price <steven.price@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210621111716.37157-4-steven.price@arm.com
2021-06-22  arm64: Restrict undef hook for cpufeature registers  (Raphael Gault; 1 file, -2/+2)
This commit narrows the mask of the mrs_hook declared in arch/arm64/kernel/cpufeature.c, which emulates feature-register access only. The hook's mask was too large and therefore matched any mrs instruction, even ones unrelated to the emulated registers, which made the PMU emulation inefficient. Signed-off-by: Raphael Gault <raphael.gault@arm.com> Signed-off-by: Rob Herring <robh@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210517180256.2881891-1-robh@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-06-22  arm64: mte: Sync tags for pages where PTE is untagged  (Steven Price; 1 file, -5/+13)
A KVM guest could store tags in a page even if the VMM hasn't mapped the page with PROT_MTE. So when restoring pages from swap we will need to check to see if there are any saved tags even if !pte_tagged(). However don't check pages for which pte_access_permitted() returns false as these will not have been swapped out. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Steven Price <steven.price@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210621111716.37157-2-steven.price@arm.com
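The resulting condition in the arm64 set_pte_at() path looks roughly like this (a sketch; helper signatures follow the kernel of that era):

    if (system_supports_mte() && pte_present(pte) &&
        pte_access_permitted(pte, false) && !pte_special(pte))
            mte_sync_tags(ptep, pte);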
2021-06-18  sched: Introduce task_is_running()  (Peter Zijlstra; 1 file, -1/+1)
Replace a bunch of 'p->state == TASK_RUNNING' with a new helper: task_is_running(p). Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Davidlohr Bueso <dave@stgolabs.net> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210611082838.222401495@infradead.org
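The helper is essentially the following (as introduced; the field itself was renamed to ->__state by a later patch in this series):

    #define task_is_running(task) \
            (READ_ONCE((task)->state) == TASK_RUNNING)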
2021-06-18  arm64: smp: Bump debugging information print down to KERN_DEBUG  (Lee Jones; 1 file, -1/+1)
This sort of information is only generally useful when debugging. No need to have these sprinkled through the kernel log otherwise. Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Lee Jones <lee.jones@linaro.org> Link: https://lore.kernel.org/r/20210617073059.315542-1-lee.jones@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2021-06-17  arm64: suspend: Use cpuidle context helpers in cpu_suspend()  (Marc Zyngier; 1 file, -1/+11)
Use cpuidle context helpers to switch to using DAIF.IF instead of PMR to mask interrupts, ensuring that we suspend with interrupts being able to reach the CPU interface. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Link: https://lore.kernel.org/r/20210615111227.2454465-5-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-06-17  arm64: Convert cpu_do_idle() to using cpuidle context helpers  (Marc Zyngier; 1 file, -32/+9)
Now that we have helpers that are aware of the pseudo-NMI feature, introduce them to cpu_do_idle(). This allows for some nice cleanup. No functional change intended. Tested-by: Valentin Schneider <valentin.schneider@arm.com> Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210615111227.2454465-3-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
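The helpers bracket the idle sequence, roughly (a sketch based on the names in this series):

    struct arm_cpuidle_irq_context context;

    arm_cpuidle_save_irq_context(&context);
    /* ... WFI or firmware-backed suspend call ... */
    arm_cpuidle_restore_irq_context(&context);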
2021-06-15  arm64: head: fix code comments in set_cpu_boot_mode_flag  (Dong Aisheng; 1 file, -1/+1)
Up to here, the CPU boot mode can either be EL1 or EL2. Correct the code comments a bit. Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20210518101405.1048860-5-aisheng.dong@nxp.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-15  arm64: mm: drop unused __pa(__idmap_text_start)  (Dong Aisheng; 1 file, -1/+0)
x5 is not used by the following map_memory; instead, __pa(__idmap_text_start) is stored in x3, which is used later. Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20210518101405.1048860-4-aisheng.dong@nxp.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-15  arm64: mm: fix the count comments in compute_indices  (Dong Aisheng; 1 file, -1/+1)
'count - 1' is confusing and does not match what the code actually does: 'count' represents the number of extra entries required, so there is no need to subtract 1. Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210518101405.1048860-3-aisheng.dong@nxp.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-15  arm64/mm: Fix ttbr0 values stored in struct thread_info for software-pan  (Anshuman Khandual; 1 file, -1/+1)
When using CONFIG_ARM64_SW_TTBR0_PAN, a task's thread_info::ttbr0 must be the TTBR0_EL1 value used to run userspace. With 52-bit PAs, the PA must be packed into the TTBR using phys_to_ttbr(), but we forget to do this in some of the SW PAN code. Thus, if the value is installed into TTBR0_EL1 (as may happen in the uaccess routines), this could result in UNPREDICTABLE behaviour. Since hardware with 52-bit PA support almost certainly has HW PAN, which will be used in preference, this shouldn't be a practical issue, but let's fix this for consistency. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Fixes: 529c4b05a3cb ("arm64: handle 52-bit addresses in TTBR") Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/1623749578-11231-1-git-send-email-anshuman.khandual@arm.com Signed-off-by: Will Deacon <will@kernel.org>
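The fix is to pack the PA through phys_to_ttbr() before stashing it; a sketch of the corrected store:

    /* With 52-bit PAs the address must be in TTBR layout, not a raw PA. */
    u64 ttbr = phys_to_ttbr(virt_to_phys(mm->pgd));

    WRITE_ONCE(task_thread_info(task)->ttbr0, ttbr);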
2021-06-15  arm64: Conditionally configure PTR_AUTH key of the kernel.  (Daniel Kiss; 1 file, -0/+2)
If the kernel is not compiled with CONFIG_ARM64_PTR_AUTH_KERNEL=y, then no PACI/AUTI instructions are expected while the kernel is running, so the kernel's key will not be used. Writes to system registers are expensive, so avoid them when not required. Signed-off-by: Daniel Kiss <daniel.kiss@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210613092632.93591-3-daniel.kiss@arm.com Signed-off-by: Will Deacon <will@kernel.org>