path: root/arch
Age | Commit message | Author | Files | Lines
2021-08-25 | riscv: Keep the riscv Kconfig selects sorted | Jisheng Zhang | 1 | -3/+3

Move three Kconfig selects: ARCH_STACKWALK, ARCH_SUPPORTS_ATOMIC_RMW and
ARCH_SUPPORTS_DEBUG_PAGEALLOC to the right place.

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>

2021-08-25 | riscv: dts: microchip: Add ethernet0 to the aliases node | Bin Meng | 1 | -0/+4

U-Boot expects this alias to be in place in order to fix up the MAC address
of the ethernet node. Note that on the Icicle Kit board, currently only
emac1 is enabled, so it becomes 'ethernet0'.

Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>

2021-08-25 | riscv: dts: microchip: Use 'local-mac-address' for emac1 | Bin Meng | 1 | -1/+1

Per the DT spec, 'local-mac-address' is used to specify the MAC address
that was assigned to the network device, while 'mac-address' is used to
specify the MAC address that was last used by the boot program, and shall
be used only if its value differs from the 'local-mac-address' property
value.

Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: conor dooley <conor.dooley@microchip.com>
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>

2021-08-25 | riscv: Ensure the value of FP registers in the core dump file is up to date | Vincent Chen | 1 | -0/+4

The value of the FP registers in the core dump file comes from
thread.fstate. However, the kernel saves the FP registers to thread.fstate
only before scheduling out the process. If no process switch happens during
exception handling, the kernel gets no chance to save the latest value of
the FP registers to thread.fstate, so the FP register values in the core
dump file may be stale. To solve this problem, force the kernel to save the
FP registers into thread.fstate if the target task_struct is the current
task.

Signed-off-by: Vincent Chen <vincent.chen@sifive.com>
Reviewed-by: Jisheng Zhang <jszhang@kernel.org>
Fixes: b8c8a9590e4f ("RISC-V: Add FP register ptrace support for gdb.")
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>

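A minimal sketch of the fix's shape in the regset read path; the
fstate_save() helper name and surrounding code are written from memory and
should be treated as assumptions, not quoted from the patch:

    /* inside the FP regset read path (sketch): if we are dumping the
     * current task, its live FP registers may be newer than
     * thread.fstate, so spill them first */
    if (target == current)
            fstate_save(current, task_pt_regs(current));    /* assumed helper */

    membuf_write(&to, &target->thread.fstate,
                 sizeof(target->thread.fstate));
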
2021-08-25 | powerpc: Use lwarx/ldarx directly instead of PPC_LWARX/LDARX macros | Christophe Leroy | 5 | -13/+11

Force the eh flag to 0 on PPC32.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1fc81f07cabebb875b963e295408cc3dd38c8d85.1614674882.git.christophe.leroy@csgroup.eu

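A hedged illustration of what "using lwarx directly" means (the macro and
function names below are invented for illustration, not the patch's code):
the mnemonic accepts an optional EH hint operand, and the hint bit exists
only on 64-bit implementations, so 32-bit builds must pass 0.

    /* sketch only: EH hints that the reservation is part of a lock
     * acquisition; the bit is reserved on PPC32, so force it to 0 there */
    #ifdef CONFIG_PPC64
    #define EH_HINT "1"
    #else
    #define EH_HINT "0"
    #endif

    static inline u32 load_and_reserve(u32 *p)
    {
            u32 val;

            asm volatile("lwarx %0,0,%1," EH_HINT
                         : "=r" (val) : "r" (p) : "memory");
            return val;
    }
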
2021-08-25 | powerpc/ptrace: Make user_mode() common to PPC32 and PPC64 | Christophe Leroy | 1 | -4/+0

Today we have:

    #ifdef __powerpc64__
    #define user_mode(regs) ((((regs)->msr) >> MSR_PR_LG) & 0x1)
    #else
    #define user_mode(regs) (((regs)->msr & MSR_PR) != 0)
    #endif

With ppc64_defconfig, we get:

    if (!user_mode(regs))
        14b4:  e9 3e 01 08   ld      r9,264(r30)
        14b8:  71 29 40 00   andi.   r9,r9,16384
        14bc:  41 82 07 a4   beq     1c60 <.emulate_instruction+0x7d0>

Taking the ppc32 definition of user_mode() generates exactly the same code
for ppc64_defconfig. So keep only one version of user_mode(), preferably
the one not using MSR_PR_LG, which should remain internal to reg.h.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/000a28c51808bbd802b505af42d2cb316c2be7d3.1629216000.git.christophe.leroy@csgroup.eu

2021-08-25 | powerpc/syscalls: Simplify do_mmap2() | Christophe Leroy | 1 | -11/+4

When shift is zero the operations remain valid, so no test is needed. The
'ret' variable is unnecessary. Also use IS_ALIGNED() to check the
alignment; it is clearer.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/373ec500f386374bc5735007df3d3869eac47be1.1624618701.git.christophe.leroy@csgroup.eu

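A sketch of the simplified shape, under the assumption that the function
reduces to an alignment check plus ksys_mmap_pgoff() (other checks the real
kernel code performs are omitted here):

    static inline long do_mmap2(unsigned long addr, size_t len,
                                unsigned long prot, unsigned long flags,
                                unsigned long fd, unsigned long off, int shift)
    {
            /* with shift == 0 this is IS_ALIGNED(off, 1), which is always
             * true, so the zero-shift case needs no special test */
            if (!IS_ALIGNED(off, 1 << shift))
                    return -EINVAL;

            return ksys_mmap_pgoff(addr, len, prot, flags, fd, off >> shift);
    }
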
2021-08-25 | powerpc/ptdump: Convert powerpc to GENERIC_PTDUMP | Christophe Leroy | 6 | -144/+47

This patch converts powerpc to the generic PTDUMP implementation.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/03166d569526be70214fe9370a7bad219d2f41c8.1625762907.git.christophe.leroy@csgroup.eu

2021-08-25 | powerpc/ptdump: Reduce level numbers by 1 in note_page() and add p4d level | Christophe Leroy | 4 | -14/+21

Do the same as commit f8f0d0b6fa20 ("mm: ptdump: reduce level numbers by 1
in note_page()") and add the missing p4d level. This aligns powerpc with
the users of generic ptdump.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d76495c574132b197b445a1f133755cca4b912a4.1625762906.git.christophe.leroy@csgroup.eu

2021-08-25 | powerpc/ptdump: Remove unused 'page_size' parameter | Christophe Leroy | 1 | -3/+3

note_page_update_state() doesn't use page_size, so remove it. It could also
be removed from note_page(), but as a following patch will remove all
current users of note_page(), just leave it as is for now.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e2f80d052001155251bfe009c360d0c5d9242c6b.1625762906.git.christophe.leroy@csgroup.eu

2021-08-25 | powerpc/ptdump: Use DEFINE_SHOW_ATTRIBUTE() | Christophe Leroy | 4 | -46/+5

Use DEFINE_SHOW_ATTRIBUTE() instead of open coding open() and the fops.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b864a92693ca8413ef0b19f0c12065c212899b6e.1625762905.git.christophe.leroy@csgroup.eu

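For context, a minimal sketch of the pattern: DEFINE_SHOW_ATTRIBUTE(name)
generates name_open() and name_fops around a name_show() seq_file callback,
replacing the hand-rolled boilerplate. The debugfs file name below is
illustrative, not taken from the patch:

    static int ptdump_show(struct seq_file *m, void *v)
    {
            /* walk the page tables and seq_printf() each range into m */
            return 0;
    }
    DEFINE_SHOW_ATTRIBUTE(ptdump);  /* emits ptdump_open() and ptdump_fops */

    static int __init ptdump_debugfs_init(void)
    {
            /* registration collapses to a one-liner */
            debugfs_create_file("kernel_page_tables", 0400, NULL, NULL,
                                &ptdump_fops);
            return 0;
    }
    device_initcall(ptdump_debugfs_init);
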
2021-08-25 | powerpc: Avoid link stack corruption in misc asm functions | Christophe Leroy | 5 | -10/+10

A bl;mflr sequence is used in several places to get the current code
position. Use bcl 20,31,+4 instead of bl in order to preserve the link
stack. See commit c974809a26a1 ("powerpc/vdso: Avoid link stack corruption
in __get_datapage()") for details.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c6eabb4fb6c156f75d56dcbcc6f243e5ac0fba42.1629791763.git.christophe.leroy@csgroup.eu

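A hedged inline-asm illustration of the idiom (not code from the patch): a
plain bl pushes a return address onto the CPU's link stack predictor with
no matching blr, so later blr predictions go wrong; bcl 20,31 is an
always-taken branch-and-link that hardware special-cases to leave the link
stack alone.

    static inline unsigned long current_code_addr(void)
    {
            unsigned long addr;

            /* branch to the next instruction without touching the link
             * stack predictor, then read the link register */
            asm volatile("bcl 20,31,1f\n"
                         "1:    mflr %0" : "=r" (addr) : : "lr");
            return addr;
    }
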
2021-08-25 | powerpc/booke: Avoid link stack corruption in several places | Christophe Leroy | 6 | -16/+16

Use bcl 20,31,+4 instead of bl in order to preserve the link stack. See
commit c974809a26a1 ("powerpc/vdso: Avoid link stack corruption in
__get_datapage()") for details.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e9fbc285eceb720e6c0e032ef47fe8b05f669b48.1629791751.git.christophe.leroy@csgroup.eu

2021-08-25 | powerpc/32: indirect function call use bctrl rather than blrl in ret_from_kernel_thread | Christophe Leroy | 1 | -2/+2

Copied from commit 89bbe4c798bc ("powerpc/64: indirect function call use
bctrl rather than blrl in ret_from_kernel_thread"):

blrl is not recommended for use as an indirect function call, as it may
corrupt the link stack predictor. This is not a performance critical path,
but it should be fixed for consistency.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/91b1d242525307ceceec7ef6e832bfbacdd4501b.1629436472.git.christophe.leroy@csgroup.eu

2021-08-25 | arm64: kdump: Remove custom linux,usable-memory-range handling | Geert Uytterhoeven | 1 | -35/+0

Remove the architecture-specific code for handling the
"linux,usable-memory-range" property under the "/chosen" node in DT, as
the platform-agnostic FDT core code already takes care of this.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/7356c531c49a24b4a55577bf8e46d93f4d8ae460.1628670468.git.geert+renesas@glider.be

2021-08-25 | arm64: kdump: Remove custom linux,elfcorehdr handling | Geert Uytterhoeven | 1 | -53/+0

Remove the architecture-specific code for handling the "linux,elfcorehdr"
property under the "/chosen" node in DT, as the platform-agnostic handling
in the FDT core code already takes care of this.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/3b8f801f9b92066855e87f3079fafc153ab20f69.1628670468.git.geert+renesas@glider.be

2021-08-25 | riscv: Remove non-standard linux,elfcorehdr handling | Geert Uytterhoeven | 1 | -20/+0

RISC-V uses platform-specific code to locate the ELF core header in
memory. However, this does not conform to the standard "linux,elfcorehdr"
DT bindings, as it relies on a reserved memory node with the
"linux,elfcorehdr" compatible value, instead of on a "linux,elfcorehdr"
property under the "/chosen" node.

The non-compliant code can just be removed, as the standard behavior is
already implemented by platform-agnostic handling in the FDT core code.

Fixes: 5640975003d0234d ("RISC-V: Add crash kernel support")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/41c75d6ee3114ae6304f8afe0051895af91200ee.1628670468.git.geert+renesas@glider.be

2021-08-25 | ARC: mm: move MMU specific bits out of entry code ... | Vineet Gupta | 3 | -5/+11

... to avoid polluting shared entry code (across three ISA variants) with
ISA/MMU specific code.

Cc: Jose Abreu <joabreu@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: mm: move MMU specific bits out of ASID allocator | Vineet Gupta | 3 | -22/+30

And while at it, rewrite the commentary on the ASID allocator.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: mm: non-functional code movement/cleanup | Vineet Gupta | 1 | -14/+16

Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: mm: pmd_populate* to use the canonical set_pmd (and drop pmd_set) | Vineet Gupta | 2 | -10/+10

Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: ioremap: use more commonly used PAGE_KERNEL based uncached flag | Vineet Gupta | 2 | -4/+2

... and remove the one-off uncached definition for ARC.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: mm: Enable STRICT_MM_TYPECHECKS | Vineet Gupta | 2 | -27/+1

In the past I've refrained from doing this (at least twice) due to the
slight code bloat from the ABI implications of pte_t etc. becoming structs.

Per the ARC ABI, functions return structs via memory and not through
register r0, even if the struct would fit in register(s):

 - the caller allocates space on the stack and passes the address as the
   first arg (r0), shifting the rest of the args by one

 - the callee creates the return struct in memory (referenced via r0)

This time around the code actually shrank slightly (due to subtle inlining
heuristic effects), but it is still slightly inefficient due to return
values passed through memory. That, however, seems like a small cost
compared to the maintenance burden, given the impending new MMU support for
page walk etc.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>

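The classic STRICT_MM_TYPECHECKS pattern being enabled, as a sketch:
wrapping the raw value in a one-member struct makes pte_t a distinct type,
so a pte/pgd mix-up becomes a compile error. Per the ABI note above, such a
struct return then travels through memory rather than r0:

    typedef struct {
            unsigned long pte;      /* raw PTE bits */
    } pte_t;

    #define pte_val(x)      ((x).pte)
    #define __pte(x)        ((pte_t) { (x) })
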
2021-08-25 | ARC: mm: Fixes to allow STRICT_MM_TYPECHECKS | Vineet Gupta | 1 | -5/+8

Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: mm: move mmu/cache externs out to setup.h | Vineet Gupta | 3 | -10/+10

Don't pollute mmu.h and cache.h with ARC internal bootlog/setup related
functions. Move them aside to setup.h.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: mm: remove tlb paranoid code | Vineet Gupta | 4 | -99/+0

This was used back in the ARC700 days when the ASID allocator was fragile.
It has not been needed in the last 5 years.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: mm: use SCRATCH_DATA0 register for caching pgdir in ARCv2 only | Vineet Gupta | 7 | -40/+5

The MMU SCRATCH_DATA0 register is intended to cache the task pgd. However,
the ARC700 SMP port has to repurpose it for re-entrant interrupt handling,
while the UP port doesn't. We currently handle these use-cases with a
fabricated #define, which has the usual issues of dependency nesting and
obvious ugliness.

So clean this up: for ARC700 don't use it to cache the pgd (even in UP) and
do the opposite for ARCv2. And while here, switch to the canonical
pgd_offset().

Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: retire MMUv1 and MMUv2 support | Vineet Gupta | 7 | -412/+42

There's no known/active customer using them with the latest kernels anyway.
Removal helps clean up the code and removes the hack for the MMU_VER to
MMU_V[3-4] conversion.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: retire ARC750 support | Vineet Gupta | 1 | -11/+1

There's no known/active customer using it with the latest kernels anyway.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: atomic_cmpxchg/atomic_xchg: implement relaxed variants | Vineet Gupta | 2 | -23/+27

And move them out of cmpxchg.h to the canonical atomic.h.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: cmpxchg/xchg: implement relaxed variants (LLSC config only) | Vineet Gupta | 1 | -9/+2

It only makes sense to do this for the LLSC config.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: cmpxchg/xchg: rewrite as macros to make type safe | Vineet Gupta | 1 | -96/+117

Existing code forces/assumes args to be of type "long", which won't work in
an LP64 regime, so prepare the code for that.

Interestingly, this should be a non-functional change, but I do see some
codegen changes:

| bloat-o-meter vmlinux-cmpxchg-A vmlinux-cmpxchg-B
| add/remove: 0/0 grow/shrink: 17/12 up/down: 218/-150 (68)
|
| Function                       old     new   delta
| rwsem_optimistic_spin          518     550   +32
| rwsem_down_write_slowpath     1244    1274   +30
| __do_sys_perf_event_open      2576    2600   +24
| down_read                      192     200    +8
| __down_read                    192     200    +8
| ...
| task_work_run                  168     148   -20
| dma_fence_chain_walk.part      760     736   -24
| __genradix_ptr_alloc           674     646   -28

Total: Before=6187409, After=6187477, chg +0.00%

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>

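A hedged sketch of the macro-ization idea (the __cmpxchg() helper and its
signature are assumptions, not ARC's actual code): capture the operand type
with __typeof__() instead of forcing everything through long, then dispatch
on sizeof(), so the interface stays correct under LP64.

    #define cmpxchg(ptr, old, new)                                  \
    ({                                                              \
            __typeof__(*(ptr)) _old = (old);                        \
            __typeof__(*(ptr)) _new = (new);                        \
                                                                    \
            /* assumed size-dispatching helper */                   \
            (__typeof__(*(ptr)))__cmpxchg((ptr),                    \
                                          (unsigned long)_old,      \
                                          (unsigned long)_new,      \
                                          sizeof(*(ptr)));          \
    })
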
2021-08-25 | ARC: xchg: !LLSC: remove UP micro-optimization/hack | Vineet Gupta | 1 | -7/+1

It gets in the way of cleaning things up and is a maintenance pain in the
neck!

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: bitops: fls/ffs to take int (vs long) per asm-generic defines | Vineet Gupta | 1 | -2/+2

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: switch to generic bitops | Vineet Gupta | 3 | -198/+2

 - !LLSC now only needs a single spinlock for atomics and bitops (see the
   sketch after this list)

 - Some codegen changes (slight bloat) with generic bitops:

   1. Code increase due to the LD-check-atomic paradigm vs. an
      unconditional atomic (which dirties the cache line even if the bit is
      already set). So despite the increase, generic is the right thing to
      do.

   2. Code decrease (but use of costlier instructions such as DIV vs.
      shift-based math) due to signed arithmetic. This needs to be
      revisited separately.

      arc:
          static inline int test_bit(unsigned int nr, const volatile unsigned long *addr)
                                     ^^^^^^^^^^^^
      generic:
          static inline int test_bit(int nr, const volatile unsigned long *addr)
                                     ^^^

Link: https://lore.kernel.org/r/20180830135749.GA13005@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[vgupta: wrote patch based on Will's poc, analysed codegen diffs]
Signed-off-by: Vineet Gupta <vgupta@kernel.org>

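Why one spinlock suffices, as a hedged sketch: generic bitops are layered
on the atomic ops, so a !LLSC arch that backs its atomics with a lock gets
serialized bitops for free. The lock name and the direct-locking shape
below are illustrative, not ARC's actual code:

    static DEFINE_RAW_SPINLOCK(atomic_ops_lock);    /* assumed name */

    static inline void arch_set_bit(unsigned int nr,
                                    volatile unsigned long *addr)
    {
            unsigned long flags;
            unsigned long mask = BIT_MASK(nr);
            unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);

            /* the same lock serializes all atomics, so the plain RMW
             * below is safe on a !LLSC machine */
            raw_spin_lock_irqsave(&atomic_ops_lock, flags);
            *p |= mask;
            raw_spin_unlock_irqrestore(&atomic_ops_lock, flags);
    }
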
2021-08-25 | ARC: atomics: implement relaxed variants | Vineet Gupta | 2 | -30/+26

The current ARC fetch/return atomics provide fully ordered semantics only,
with 2 full barriers around the operation.

Instead, implement them as relaxed variants without any barriers and rely
on generic code to generate the fully-ordered, acquire and release variants
by adding the appropriate full barriers. This helps elide some extra
barriers in case of acquire/release/relaxed calls.

bloat-o-meter for hsdk defconfig shows codegen improvements, although the
numbers below are inflated due to unrelated inlining heuristic changes:

| bloat-o-meter vmlinux-643babe34fd7-non-relaxed vmlinux-45aa05cb44d7-relaxed
| add/remove: 2/5 grow/shrink: 42/1222 up/down: 4158/-14312 (-10154)
| Function                         old     new   delta
| ..
| sys_renameat                     462     476   +14
| ip_mc_inc_group                  424     436   +12
| do_read_cache_page              1882    1894   +12
| ..
| refcount_dec_and_mutex_lock      254     250    -4
| refcount_dec_and_lock_irqsave    258     254    -4
| refcount_dec_and_lock            254     250    -4
| ..
| tcp_v6_route_req                 246     238    -8
| tcp_v4_destroy_sock              286     278    -8
| tcp_twsk_unique                  352     344    -8

Link: https://lore.kernel.org/r/20180830144344.GW24142@hirez.programming.kicks-ass.net
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>

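The generic fallback the commit leans on, sketched roughly after the shape
of the kernel's atomic fallback generation (treat exact names as
assumptions): given only the relaxed op, the core brackets it with full
barriers to build the fully ordered flavor.

    static __always_inline int
    arch_atomic_fetch_add(int i, atomic_t *v)
    {
            int ret;

            __atomic_pre_full_fence();              /* full barrier before */
            ret = arch_atomic_fetch_add_relaxed(i, v);
            __atomic_post_full_fence();             /* full barrier after */
            return ret;
    }
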
2021-08-25 | ARC: atomic64: LLSC: elide unused atomic_{and,or,xor,andnot}_return | Vineet Gupta | 1 | -0/+6

This is a non-functional change since those wrappers are not used in kernel
sources at all.

Link: http://lists.infradead.org/pipermail/linux-snps-arc/2018-August/004246.html
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: atomic: !LLSC: use int data type consistently | Vineet Gupta | 1 | -2/+2

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: atomic: !LLSC: remove hack in atomic_set() for UP | Vineet Gupta | 1 | -13/+4

!LLSC atomics use a spinlock (SMP) or irq-disable (UP) to implement
critical regions. UP atomic_set(), however, was "cheating" by not doing any
of that while still being functional. Remove this anomaly (primarily as
cleanup for future code improvements), given that this config is not worth
the hassle of special-case code.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: atomics: disintegrate header | Vineet Gupta | 4 | -424/+461

Non-functional change, to ease future additions/removals.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | ARC: export clear_user_page() for modules | Randy Dunlap | 1 | -1/+1

The 0day bot reports a build error:

    ERROR: modpost: "clear_user_page" [drivers/media/v4l2-core/videobuf-dma-sg.ko] undefined!

so export it in arch/arc/ to fix the build error.

In most ARCHes, clear_user_page() is a macro. OTOH, in a few ARCHes it is a
function and needs to be exported. PowerPC exported it in 2004. It looks
like nds32 and nios2 still need to have it exported.

Fixes: 4102b53392d63 ("ARC: [mm] Aliasing VIPT dcache support 2/4")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: kernel test robot <lkp@intel.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: linux-snps-arc@lists.infradead.org
Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-25 | arch/arc/kernel/: fix misspellings using codespell tool | Changcheng Deng | 3 | -3/+3

Some typos were found by the codespell tool:

    ./intc-compact.c:145: prioity ==> priority
    ./smp.c:286: recevier ==> receiver
    ./stacktrace.c:152: prelogue ==> prologue

Fix the typos found by codespell.

Reported-by: Zeal Robot <zealci@zte.com.cn>
Signed-off-by: Changcheng Deng <deng.changcheng@zte.com.cn>
Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-24 | s390/vfio-ap: replace open coded locks for VFIO_GROUP_NOTIFY_SET_KVM notification | Tony Krowiak | 1 | -4/+27

It was pointed out during an unrelated patch review that locks should not
be open coded - i.e., writing the algorithm of a standard lock in a
function instead of using a lock from the standard library. The setting and
testing of a busy flag and sleeping on a wait_event is the same thing a
lock does. The open coded locks are invisible to lockdep, so potential
locking problems are not detected.

This patch removes the open coded locks used during
VFIO_GROUP_NOTIFY_SET_KVM notification. The busy flag and wait queue were
introduced to resolve a possible circular locking dependency reported by
lockdep when starting a secure execution guest configured with AP adapters
and domains. Reversing the order in which the kvm->lock mutex and
matrix_dev->lock mutex are locked resolves the issue reported by lockdep,
thus enabling the removal of the open coded locks.

Signed-off-by: Tony Krowiak <akrowiak@linux.ibm.com>
Acked-by: Halil Pasic <pasic@linux.ibm.com>
Link: https://lore.kernel.org/r/20210823212047.1476436-3-akrowiak@linux.ibm.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>

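To make the anti-pattern concrete, a hedged sketch (names invented, and the
atomicity details the real code handled are glossed over): a busy flag plus
a wait queue re-implements a mutex, but lockdep cannot see it.

    struct dev_state {
            bool busy;
            wait_queue_head_t wq;
            struct mutex lock;
    };

    static void open_coded_lock(struct dev_state *s)        /* before */
    {
            wait_event(s->wq, !s->busy);
            s->busy = true;
    }

    static void open_coded_unlock(struct dev_state *s)
    {
            s->busy = false;
            wake_up_all(&s->wq);
    }

    /* after: plain mutex_lock(&s->lock)/mutex_unlock(&s->lock), which
     * lockdep can track, made possible by fixing the kvm->lock vs
     * matrix_dev->lock ordering */
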
2021-08-24 | s390/vfio-ap: r/w lock for PQAP interception handler function pointer | Tony Krowiak | 3 | -11/+13

The function pointer to the interception handler for the PQAP instruction
can get changed during the interception process. Let's add a semaphore to
struct kvm_s390_crypto to control read/write access to the function pointer
contained therein.

The semaphore must be locked for write access by the vfio_ap device driver
when notified that the KVM pointer has been set or cleared. It must be
locked for read access by the interception framework when the PQAP
instruction is intercepted.

Signed-off-by: Tony Krowiak <akrowiak@linux.ibm.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Link: https://lore.kernel.org/r/20210823212047.1476436-2-akrowiak@linux.ibm.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>

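A sketch of the read side of the scheme; the field names below are guesses
at the layout described in the message, not quoted from the patch:

    static int handle_pqap(struct kvm_vcpu *vcpu)
    {
            int ret = -EOPNOTSUPP;

            /* the interception path dereferences the hook under the
             * semaphore held for read; the vfio_ap driver takes it for
             * write when it sets or clears the pointer */
            down_read(&vcpu->kvm->arch.crypto.pqap_hook_rwsem);
            if (vcpu->kvm->arch.crypto.pqap_hook)
                    ret = vcpu->kvm->arch.crypto.pqap_hook(vcpu);
            up_read(&vcpu->kvm->arch.crypto.pqap_hook_rwsem);

            return ret;
    }
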
2021-08-24 | arm64/sve: Better handle failure to allocate SVE register storage | Mark Brown | 3 | -6/+14

Currently we "handle" failure to allocate the SVE register storage by doing
a BUG_ON() and hoping for the best. This is obviously not great, and the
memory allocation failure will already be loud enough without the BUG_ON().
As the comment says it is a corner case, but let's try to do a bit better:
remove the BUG_ON() and add code to handle the failure in the callers.

For the ptrace and signal code we can return -ENOMEM gracefully, but we
have no real error reporting path available for the SVE access trap, so
instead generate a SIGKILL if the allocation fails there. This at least
means that we won't try to soldier on and end up accessing the nonexistent
state. While SIGKILL is obviously not ideal for userspace, it doesn't allow
any handling, so it minimises the ABI impact, making it easier to improve
the interface later if we come up with a better idea.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210824153417.18371-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2021-08-24 | arm64: head: avoid over-mapping in map_memory | Mark Rutland | 2 | -7/+8

The `compute_indices` and `populate_entries` macros operate on inclusive
bounds, and thus the `map_memory` macro which uses them also operates on
inclusive bounds.

We pass `_end` and `_idmap_text_end` to `map_memory`, but these are
exclusive bounds, and if one of these is sufficiently aligned (as a result
of kernel configuration, physical placement, and KASLR), then:

* In `compute_indices`, the computed `iend` will be in the page/block
  *after* the final byte of the intended mapping.

* In `populate_entries`, an unnecessary entry will be created at the end
  of each level of table. At the leaf level, this entry will map up to
  SWAPPER_BLOCK_SIZE bytes of physical addresses that we did not intend to
  map.

As we may map up to SWAPPER_BLOCK_SIZE bytes more than intended, we may
violate the boot protocol and map physical addresses past the 2MiB-aligned
end address we are permitted to map. As we map these with Normal memory
attributes, this may result in further problems depending on what these
physical addresses correspond to.

The final entry at each level may require an additional table at that
level. As EARLY_ENTRIES() calculates an inclusive bound, we allocate enough
memory for this.

Avoid the extraneous mapping by having map_memory convert the exclusive end
address to an inclusive end address by subtracting one, and do likewise in
EARLY_ENTRIES() when calculating the number of required tables. For
clarity, comments are updated to more clearly document which boundaries the
macros operate on. For consistency with the other macros, the comments in
map_memory are also updated to describe `vstart` and `vend` as virtual
addresses.

Fixes: 0370b31e4845 ("arm64: Extend early page table code to allow for larger kernels")
Cc: <stable@vger.kernel.org> # 4.16.x
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210823101253.55567-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

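The off-by-one in plain C, as a sketch of the arithmetic only (the real
code is assembler macros):

    /* number of entries needed at a level covering [vstart, vend), where
     * each entry spans (1 << shift) bytes */
    static unsigned long entries_needed(unsigned long vstart,
                                        unsigned long vend,
                                        unsigned int shift)
    {
            unsigned long istart = vstart >> shift;
            unsigned long iend = (vend - 1) >> shift; /* exclusive -> inclusive */

            return iend - istart + 1;
    }

    /* using "vend >> shift" instead would, for a block-aligned vend, index
     * one entry past the mapping and map up to a block of unintended
     * physical memory */
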
2021-08-24 | arm64/sve: Add a comment documenting the binutils needed for SVE asm | Mark Brown | 1 | -0/+1

At some point it would be nice to avoid the need to manually encode SVE
instructions; until then, add a note of the binutils version required, to
save looking it up.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210816125024.8112-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2021-08-24 | arm64/sve: Add some comments for sve_save/load_state() | Mark Brown | 1 | -0/+13

The use of macros for the actual function bodies means legibility is always
going to be a bit of a challenge, especially while we can't rely on SVE
support in the toolchain, but this helps a little.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210812201143.35578-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2021-08-24 | Merge 5.14-rc7 into char-misc-next | Greg Kroah-Hartman | 52 | -211/+352

We need the char/misc fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2021-08-24 | Merge 5.14-rc7 into usb-next | Greg Kroah-Hartman | 18 | -59/+112

We need the USB fix in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>