path: root/arch/powerpc
2021-02-11  powerpc/syscall: implement system call entry/exit logic in C for PPC32 (Christophe Leroy; 3 files, -229/+30)
This is a port of the PPC64 syscall entry/exit C logic to PPC32.

Performance-wise on 8xx:
Before: 304 cycles on null_syscall
After:  348 cycles on null_syscall

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a93b08e1275e9d1f0b1c39043d1b827586b2b401.1612796617.git.christophe.leroy@csgroup.eu
2021-02-11  powerpc/32: Always save non volatile GPRs at syscall entry (Christophe Leroy; 4 files, -62/+8)
In preparation for porting syscall entry/exit to C, unconditionally save
the non volatile general purpose registers.

Commit 965dd3ad3076 ("powerpc/64/syscall: Remove non-volatile GPR save
optimisation") provides a detailed explanation.

This increases the number of cycles by 24 on 8xx with the null_syscall
benchmark (280 => 304 cycles).

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/21c08162b83655195fe9ead78ff2cfd28508d023.1612796617.git.christophe.leroy@csgroup.eu
2021-02-11  powerpc/syscall: Change condition to check MSR_RI (Christophe Leroy; 1 file, -3/+4)
In system_call_exception(), MSR_RI also needs to be checked on 8xx.
Only booke and 40x don't have MSR_RI.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/67820fada8dd6a8fe9d7b666f175d4cc9d8de87e.1612796617.git.christophe.leroy@csgroup.eu
2021-02-11  powerpc/syscall: Save r3 in regs->orig_r3 (Christophe Leroy; 2 files, -2/+2)
Save r3 in regs->orig_r3 in system_call_exception().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/9a90805ab6b9101b46daf56470f457a57acd86fc.1612796617.git.christophe.leroy@csgroup.eu
2021-02-11  powerpc/syscall: Use is_compat_task() (Christophe Leroy; 1 file, -1/+3)
Instead of hard-comparing task flags with _TIF_32BIT, use
is_compat_task(). The advantage is that it returns 0 on PPC32 although
_TIF_32BIT is always set there.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c8094662199337a7200fea9f6e1d1f8b1b6d5f69.1612796617.git.christophe.leroy@csgroup.eu
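For illustration, the dispatch this enables in a shared C syscall handler
looks roughly like the following (a hedged sketch, not the exact kernel
diff; f, r0 and r3 follow the conventions of system_call_exception()):

    if (unlikely(is_compat_task())) {
            /* compat task: use the 32-bit syscall table and
               truncate the register-passed arguments to 32 bits */
            f = (void *)compat_sys_call_table[r0];
            r3 &= 0x00000000ffffffffULL;
    } else {
            f = (void *)sys_call_table[r0];
    }

On PPC32, is_compat_task() is a compile-time 0, so the compiler discards
the compat branch entirely, whereas testing _TIF_32BIT directly would
always take it there.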
2021-02-11  powerpc/syscall: Make interrupt.c buildable on PPC32 (Christophe Leroy; 2 files, -9/+26)
To allow building interrupt.c on PPC32, ifdef out PPC64-specific code
or use helpers which are available on both PPC32 and PPC64.

Modify the Makefile to always build interrupt.o.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ba073ad67bd971a88ce331b65d6655523b54c794.1612796617.git.christophe.leroy@csgroup.eu
2021-02-11  powerpc/syscall: Rename syscall_64.c into interrupt.c (Christophe Leroy; 2 files, -1/+1)
syscall_64.c will be reused almost as is for PPC32. As this file also
contains functions to handle other types of interrupts, rename it to
interrupt.c.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/cddc2deaa8f049d3ec419738e69804934919b935.1612796617.git.christophe.leroy@csgroup.eu
2021-02-11  powerpc/irq: Add stub irq_soft_mask_return() for PPC32 (Christophe Leroy; 1 file, -0/+5)
To allow building syscall_64.c smoothly on PPC32, add a stub version of
irq_soft_mask_return().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/9b9f62c5e2e63cc121fd749a923aaaee92ee0da4.1612796617.git.christophe.leroy@csgroup.eu
2021-02-11  powerpc/irq: Rework helpers that manipulate MSR[EE/RI] (Christophe Leroy; 2 files, -24/+52)
In preparation for porting PPC32 to C syscall entry/exit, rewrite the
following helpers as static inline functions and add PPC32 support to
them:

  __hard_irq_enable()
  __hard_irq_disable()
  __hard_EE_RI_disable()
  __hard_RI_enable()

Then use them in the PPC32 versions of arch_local_irq_disable() and
arch_local_irq_enable() to avoid code duplication.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/0e290372a0e7dc2ae657b4a01aec85f8de7fdf77.1612796617.git.christophe.leroy@csgroup.eu
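A sketch of what one of these helpers can look like once PPC32 support
is folded in (simplified; the exact set of cases in the kernel may
differ):

    static inline void __hard_irq_disable(void)
    {
            if (IS_ENABLED(CONFIG_BOOKE) || IS_ENABLED(CONFIG_40x))
                    wrtee(0);                 /* write MSR[EE]=0 directly */
            else if (IS_ENABLED(CONFIG_PPC_8xx))
                    wrtspr(SPRN_EID);         /* 8xx: disable EE via special SPR */
            else if (IS_ENABLED(CONFIG_PPC_BOOK3S_64))
                    __mtmsrd(MSR_RI, 1);      /* keep RI, clear EE */
            else
                    mtmsr(mfmsr() & ~MSR_EE); /* generic MSR read-modify-write */
    }

Because every branch tests an IS_ENABLED() compile-time constant, each
platform compiles down to a single instruction sequence with no runtime
test.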
2021-02-11  powerpc/irq: Add helper to set regs->softe (Christophe Leroy; 1 file, -2/+9)
regs->softe doesn't exist on PPC32. Add an irq_soft_mask_regs_set_state()
helper to set regs->softe. This helper is a no-op on PPC32.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5f37d1177a751fdbca79df461d283850ca3a34a2.1612796617.git.christophe.leroy@csgroup.eu
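The shape of the helper, as a sketch consistent with the description
above:

    #ifdef CONFIG_PPC64
    static inline void irq_soft_mask_regs_set_state(struct pt_regs *regs,
                                                    unsigned long val)
    {
            regs->softe = val;
    }
    #else
    static inline void irq_soft_mask_regs_set_state(struct pt_regs *regs,
                                                    unsigned long val)
    {
            /* regs->softe doesn't exist on PPC32: nothing to do */
    }
    #endif

Callers can then set the soft-mask state unconditionally and let the
PPC32 stub compile away.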
2021-02-11  powerpc/32: Reorder instructions to avoid using CTR in syscall entry (Christophe Leroy; 1 file, -12/+10)
Now that we are using rfi instead of mtmsr to reactivate the MMU, it is
possible to reorder instructions and avoid the need to use CTR for
stashing SRR0.

null_syscall on 8xx is reduced by 3 cycles (283 => 280 cycles).

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8fa13a59f73647e058c95fc7e1c7a98f316bd20a.1612796617.git.christophe.leroy@csgroup.eu
2021-02-11  powerpc/32: On syscall entry, enable instruction translation at the same time as data (Christophe Leroy; 2 files, -22/+11)
On 40x and 8xx, kernel text is pinned. On book3s/32, kernel text is
mapped by BATs. Enable instruction translation at the same time as data
translation; it makes things simpler. MSR_RI can also be set at the same
time because srr0/srr1 are already saved and r1 is set properly.

On booke, translation is always on, so at the end all PPC32 platforms
have translation on early.

This reduces the null_syscall benchmark by 13 cycles on 8xx
(296 => 283 cycles).

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/3fe8891c814103a3549efc1d4e7ffc828bba5993.1612796617.git.christophe.leroy@csgroup.eu
2021-02-11  powerpc/32: Always enable data translation on syscall entry (Christophe Leroy; 2 files, -24/+1)
If the code can use a stack in the vm area, it can also use a stack in
linear space.

Simplify the code by removing the old non-VMAP stack code on PPC32 in
the syscall path. That means data translation is now re-enabled early in
syscall entry in all cases, not only when using VMAP stacks.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/412c6c1786922d991bbb89c2ad2e82cffe8ab112.1612796617.git.christophe.leroy@csgroup.eu
2021-02-11  powerpc/32s: Add missing call to kuep_lock on syscall entry (Christophe Leroy; 1 file, -0/+3)
Userspace execution protection and fast syscall entry were implemented
independently from each other and were both merged in kernel 5.2,
leaving syscall entry without userspace execution protection.

On syscall entry, execution of user space memory must be locked in the
same way as on exception entry.

Fixes: b86fb88855ea ("powerpc/32: implement fast entry for syscalls on non BOOKE")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c65e105b63aaf74f91a14f845bc77192350b84a6.1612796617.git.christophe.leroy@csgroup.eu
2021-02-11  powerpc/compat_sys: swap hi/lo parts of 64-bit syscall args in LE mode (Will Springer; 1 file, -21/+28)
Swap the upper/lower 32 bits of 64-bit compat syscall arguments,
conditioned on endianness. This is modeled after the same functionality
in arch/mips/kernel/linux32.c.

This fixes those compat syscalls on ppc64le when called by 32-bit
little-endian processes.

Tested with `file /bin/bash` (pread64) and `truncate -s 5G test`
(ftruncate64).

Signed-off-by: Will Springer <skirmisher@protonmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2765111.e9J7NaK4W3@sheen
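The core of such a fix is an endian-aware helper that reassembles a
64-bit value from the two 32-bit registers it was split across (a sketch
of the pattern; the mips code referenced above uses the same idea):

    #ifdef __BIG_ENDIAN__
    #define merge_64(high, low)  (((u64)(high) << 32) | (u32)(low))
    #else
    #define merge_64(high, low)  (((u64)(low) << 32) | (u32)(high))
    #endif

    /* e.g. a 64-bit file offset passed in two GPRs by a 32-bit task:
       offset = merge_64(arg_a, arg_b); */

On a little-endian 32-bit ABI the halves arrive in the opposite register
order, which is exactly what the pre-fix code got wrong.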
2021-02-11  powerpc: use kernel endianness in MSR in 32-bit signal handler (Joseph J Allen; 1 file, -1/+2)
This mirrors the behavior in handle_rt_signal32, to obey kernel
endianness rather than assume a 32-bit process is big-endian. Without
this change, any 32-bit little-endian process will SIGILL immediately
upon handling a signal.

Signed-off-by: Joseph J Allen <eerykitty@gmail.com>
Signed-off-by: Will Springer <skirmisher@protonmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2058876.irdbgypaU6@sheen
2021-02-11  powerpc/kexec_file: fix FDT size estimation for kdump kernel (Hari Bathini; 3 files, -1/+37)
On systems with a large amount of memory, loading the kdump kernel
through the kexec_file_load syscall may fail with the below error:

  "Failed to update fdt with linux,drconf-usable-memory property"

This happens because the size estimation for the kdump kernel's FDT does
not account for the additional space needed to set up usable memory
properties. Fix it by accounting for the space needed to include the
linux,usable-memory & linux,drconf-usable-memory properties while
estimating the kdump kernel's FDT size.

Fixes: 6ecd0163d360 ("powerpc/kexec_file: Add appropriate regions for memory reserve map")
Cc: stable@vger.kernel.org # v5.9+
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/161243826811.119001.14083048209224609814.stgit@hbathini
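The kind of accounting involved, as a rough sketch (the function and
helper names here are illustrative, and the exact formula in the kernel
may differ): the extra FDT space grows with the number of usable-memory
entries, each of which is a u64 cell.

    static unsigned int kdump_extra_fdt_size(struct kimage *image)
    {
            u64 usm_entries;

            if (image->type != KEXEC_TYPE_CRASH)
                    return 0;

            /* one entry per LMB of RAM, plus a (base, size) pair
               per LMB of the crashkernel region */
            usm_entries = (memblock_end_of_DRAM() / drmem_lmb_size()) +
                          (2 * (resource_size(&crashk_res) / drmem_lmb_size()));

            return (unsigned int)(usm_entries * sizeof(u64));
    }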
2021-02-11  powerpc/mm: Remove dcache flush from memory remove. (Aneesh Kumar K.V; 2 files, -22/+29)
We added a dcache flush on memory add/remove in commit fb5924fddf9e
("powerpc/mm: Flush cache on memory hot(un)plug") to handle crashes on
GPU hotplug. Instead of doing the dcache flush in the generic memory
add/remove routine, which is used even for regular memory, we should
handle these device-specific flushes in the device driver code.

memtrace did handle this in the driver, and that was removed by commit
7fd6641de28f ("powerpc/powernv/memtrace: Let the arch hotunplug code
flush cache"). This patch reverts that commit.

The dcache flush in memory add was removed by commit ea458effa88e
("powerpc: Don't flush caches when adding memory"), which I don't think
is correct. The reason we require a dcache flush in memtrace is to make
sure we don't have a dirty cache when we remap a pfn as cache inhibited.
We should do that when the memtrace module removes the memory and makes
the pfn available for HTM traces to map it as cache inhibited.

The other device mentioned in commit fb5924fddf9e ("powerpc/mm: Flush
cache on memory hot(un)plug") is the nvlink device with coherent memory.
Support for that was removed in commit 7eb3cf761927 ("powerpc/powernv:
remove unused NPU DMA code") and commit 25b2995a35b6 ("mm: remove
MEMORY_DEVICE_PUBLIC support").

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210203045812.234439-3-aneesh.kumar@linux.ibm.com
2021-02-11  powerpc/mm: Add PG_dcache_clean to indicate dcache clean state (Aneesh Kumar K.V; 5 files, -13/+19)
This just adds a better name for PG_arch_1. No functional change in this
patch.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210203045812.234439-2-aneesh.kumar@linux.ibm.com
2021-02-11  powerpc/mm: Enable compound page check for both THP and HugeTLB (Aneesh Kumar K.V; 3 files, -26/+22)
THP config results in compound pages. Make sure the kernel enables the
PageCompound() check even with CONFIG_HUGETLB_PAGE disabled and
CONFIG_TRANSPARENT_HUGEPAGE enabled. This makes sure we correctly flush
the icache with THP pages.

flush_dcache_icache_page() only matters for platforms that don't support
COHERENT_ICACHE.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210203045812.234439-1-aneesh.kumar@linux.ibm.com
2021-02-11  powerpc/xive: Assign boolean values to a bool variable (Jiapeng Chong; 1 file, -2/+2)
Fix the following coccicheck warnings:

./arch/powerpc/kvm/book3s_xive.c:1856:2-17: WARNING: Assignment of 0/1 to bool variable.
./arch/powerpc/kvm/book3s_xive.c:1854:2-17: WARNING: Assignment of 0/1 to bool variable.

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1612680192-43116-1-git-send-email-jiapeng.chong@linux.alibaba.com
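The fix is the usual coccinelle substitution; with a bool field such as
state->lsi standing in for the members the warning flagged, the change
looks like:

    /* before: integer constants assigned to a bool */
    state->lsi = 1;

    /* after */
    state->lsi = true;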
2021-02-11  powerpc/32: Preserve cr1 in exception prolog stack check to fix build error (Christophe Leroy; 2 files, -7/+1)
THREAD_ALIGN_SHIFT = THREAD_SHIFT + 1 = PAGE_SHIFT + 1. The maximum
PAGE_SHIFT is 18 for 256k pages, so THREAD_ALIGN_SHIFT is 19 at most.

No need to clobber cr1: it can be preserved when moving r1 into the CR
when we check for stack overflow.

This reduces the number of instructions in the Machine Check Exception
prolog and fixes a build failure reported by the kernel test robot on
v5.10 stable when building with RTAS + VMAP_STACK + KVM. That build
failure is due to too many instructions in the prolog, hence not fitting
between 0x200 and 0x300. Although the problem doesn't show up in
mainline, the change is still worthwhile.

Fixes: 98bf2d3f4970 ("powerpc/32s: Fix RTAS machine check with VMAP stack")
Cc: stable@vger.kernel.org
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5ae4d545e3ac58e133d2599e0deb88843cb494fc.1612768623.git.christophe.leroy@csgroup.eu
2021-02-11  powerpc/64s: Remove EXSLB interrupt save area (Nicholas Piggin; 3 files, -8/+1)
SLB faults should not be taken while the PACA save areas are live.
Before C code is called, all memory accesses are instruction fetches
from kernel text or accesses to the PACA and the current stack. All of
these have pinned SLB entries, so they will not take an SLB fault.
Therefore EXSLB is not required.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210208063406.331655-1-npiggin@gmail.com
2021-02-11  powerpc/64s: syscall real mode entry use mtmsrd rather than rfid (Nicholas Piggin; 2 files, -6/+9)
Have the real mode system call entry handler branch to the kernel
0xc000... address and then use mtmsrd to enable the MMU, rather than use
the SRRs and rfid.

Commit 8729c26e675c ("powerpc/64s/exception: Move real to virt switch
into the common handler") implemented this style of real mode entry for
other interrupt handlers, so this brings system calls into line with
them, which is the main motivation for the change.

This tends to be slightly faster due to avoiding the mtspr instructions,
and it also does not clobber the SRR registers, which becomes important
in a subsequent change. The real mode entry points don't tend to be too
important for performance these days, but it is possible for a
hypervisor to run guests in AIL=0 mode for certain reasons.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210208063326.331502-1-npiggin@gmail.com
2021-02-11  powerpc/kuap: Restore AMR after replaying soft interrupts (Alexey Kardashevskiy; 1 file, -1/+26)
Since de78a9c42a79 ("powerpc: Add a framework for Kernel Userspace
Access Protection"), user access helpers call
user_{read|write}_access_{begin|end} when user space access is allowed.

Commit 890274c2dc4c ("powerpc/64s: Implement KUAP for Radix MMU") made
the mentioned helpers program the AMR special register to allow such
access for a short period of time; most of the time the AMR is expected
to block user memory access by the kernel.

Since the code accesses user space memory, unsafe_get_user() calls
might_fault(), which calls arch_local_irq_restore() if either
CONFIG_PROVE_LOCKING or CONFIG_DEBUG_ATOMIC_SLEEP is enabled.
arch_local_irq_restore() then attempts to replay pending soft
interrupts, as KUAP regions have hardware interrupts enabled. If a
pending interrupt happens to do user access (performance interrupts do
that), it enables access for a short period of time, so after returning
from the replay the user access state remains blocked, and if a user
page fault happens, "Bug: Read fault blocked by AMR!" appears and
SIGSEGV is sent.

An example trace:

  Bug: Read fault blocked by AMR!
  WARNING: CPU: 0 PID: 1603 at /home/aik/p/kernel/arch/powerpc/include/asm/book3s/64/kup-radix.h:145
  CPU: 0 PID: 1603 Comm: amr Not tainted 5.10.0-rc6_v5.10-rc6_a+fstn1 #24
  NIP:  c00000000009ece8 LR: c00000000009ece4 CTR: 0000000000000000
  REGS: c00000000dc63560 TRAP: 0700   Not tainted  (5.10.0-rc6_v5.10-rc6_a+fstn1)
  MSR:  8000000000021033 <SF,ME,IR,DR,RI,LE>  CR: 28002888  XER: 20040000
  CFAR: c0000000001fa928 IRQMASK: 1
  GPR00: c00000000009ece4 c00000000dc637f0 c000000002397600 000000000000001f
  GPR04: c0000000020eb318 0000000000000000 c00000000dc63494 0000000000000027
  GPR08: c00000007fe4de68 c00000000dfe9180 0000000000000000 0000000000000001
  GPR12: 0000000000002000 c0000000030a0000 0000000000000000 0000000000000000
  GPR16: 0000000000000000 0000000000000000 0000000000000000 bfffffffffffffff
  GPR20: 0000000000000000 c0000000134a4020 c0000000019c2218 0000000000000fe0
  GPR24: 0000000000000000 0000000000000000 c00000000d106200 0000000040000000
  GPR28: 0000000000000000 0000000000000300 c00000000dc63910 c000000001946730
  NIP __do_page_fault+0xb38/0xde0
  LR  __do_page_fault+0xb34/0xde0
  Call Trace:
    __do_page_fault+0xb34/0xde0 (unreliable)
    handle_page_fault+0x10/0x2c
  --- interrupt: 300 at strncpy_from_user+0x290/0x440
      LR = strncpy_from_user+0x284/0x440
    strncpy_from_user+0x2f0/0x440 (unreliable)
    getname_flags+0x88/0x2c0
    do_sys_openat2+0x2d4/0x5f0
    do_sys_open+0xcc/0x140
    system_call_exception+0x160/0x240
    system_call_common+0xf0/0x27c

To fix it, save/restore the AMR when replaying interrupts, and also
check that the AMR was not already blocking access prior to the replay.

Originally found by syzkaller.

Fixes: 890274c2dc4c ("powerpc/64s: Implement KUAP for Radix MMU")
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Use normal commit citation format and add full oops log to change
 log, move kuap_check_amr() into the restore routine to avoid warnings
 about unreconciled IRQ state]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210202091541.36499-1-aik@ozlabs.ru
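A sketch of the save/restore around the replay (close to the fix
described above, though simplified; get_kuap()/set_kuap() are the
existing Radix AMR accessors):

    static void replay_soft_interrupts_irqrestore(void)
    {
            unsigned long kuap_state = get_kuap();

            /* interrupt handlers must run with user access blocked */
            if (kuap_state != AMR_KUAP_BLOCKED)
                    set_kuap(AMR_KUAP_BLOCKED);

            replay_soft_interrupts();

            /* put the AMR back the way the interrupted code had it */
            if (kuap_state != AMR_KUAP_BLOCKED)
                    set_kuap(kuap_state);
    }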
2021-02-11  powerpc/uaccess: Avoid might_fault() when user access is enabled (Alexey Kardashevskiy; 1 file, -3/+10)
The amount of code executed with enabled user space access (unlocked
KUAP) should be minimal. However, with CONFIG_PROVE_LOCKING or
CONFIG_DEBUG_ATOMIC_SLEEP enabled, might_fault() calls into various
parts of the kernel, and may even end up replaying interrupts, which in
turn may access user space and forget to restore the KUAP state.

The problem places are:

1. strncpy_from_user() (and similar) which unlock KUAP and call
   unsafe_get_user() -> __get_user_allowed() -> __get_user_nocheck()
   with do_allow=false to skip KUAP, as the caller took care of it.
2. __unsafe_put_user_goto(), which is called with unlocked KUAP.

eg:
  WARNING: CPU: 30 PID: 1 at arch/powerpc/include/asm/book3s/64/kup.h:324 arch_local_irq_restore+0x160/0x190
  NIP arch_local_irq_restore+0x160/0x190
  LR  lock_is_held_type+0x140/0x200
  Call Trace:
    0xc00000007f392ff8 (unreliable)
    ___might_sleep+0x180/0x320
    __might_fault+0x50/0xe0
    filldir64+0x2d0/0x5d0
    call_filldir+0xc8/0x180
    ext4_readdir+0x948/0xb40
    iterate_dir+0x1ec/0x240
    sys_getdents64+0x80/0x290
    system_call_exception+0x160/0x280
    system_call_common+0xf0/0x27c

Change __get_user_nocheck() to look at `do_allow` to decide whether to
skip might_fault(). Since strncpy_from_user() etc. call might_fault()
anyway before unlocking KUAP, there should be no visible change.

Drop might_fault() in __unsafe_put_user_goto() as it is only called from
unsafe_put_user(), which already has KUAP unlocked.

Since keeping might_fault() is still desirable for debugging, add calls
to it in user_[read|write]_access_begin(). That also allows us to drop
the is_kernel_addr() test, because there should be no code using
user_[read|write]_access_begin() in order to access a kernel address.

Fixes: de78a9c42a79 ("powerpc: Add a framework for Kernel Userspace Access Protection")
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
[mpe: Combine with related patch from myself, merge change logs]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210204121612.32721-1-aik@ozlabs.ru
2021-02-11  powerpc/uaccess: Simplify unsafe_put_user() implementation (Michael Ellerman; 1 file, -8/+7)
Currently unsafe_put_user() expands to __put_user_goto(), which expands
to __put_user_nocheck_goto().

There are no other uses of __put_user_nocheck_goto(), and although there
are some other uses of __put_user_goto(), those could just use
unsafe_put_user().

Every layer of indirection introduces the possibility that some code is
calling that layer, and makes keeping track of the required semantics at
each point more complicated.

So drop __put_user_goto(), and rename __put_user_nocheck_goto() to
__unsafe_put_user_goto(). The "nocheck" is implied by "unsafe".

Replace the few uses of __put_user_goto() with unsafe_put_user().

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210208135717.2618798-1-mpe@ellerman.id.au
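For context, the intended usage pattern of unsafe_put_user() (a sketch
of the standard uaccess idiom; the function and variable names are
illustrative): the caller opens one user access window, performs several
raw stores, and branches to a label on fault.

    int store_pair(u64 __user *uptr, u64 val0, u64 val1)
    {
            if (!user_write_access_begin(uptr, 2 * sizeof(u64)))
                    return -EFAULT;
            unsafe_put_user(val0, &uptr[0], efault); /* raw store, no per-call KUAP toggle */
            unsafe_put_user(val1, &uptr[1], efault);
            user_write_access_end();
            return 0;
    efault:
            user_write_access_end();
            return -EFAULT;
    }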
2021-02-11  powerpc/amigaone: Make amigaone_discover_phbs() static (Michael Ellerman; 1 file, -1/+1)
It's only used in setup.c, so make it static.

Fixes: 053d58c87029 ("powerpc/amigaone: Move PHB discovery")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210210130804.3190952-3-mpe@ellerman.id.au
2021-02-11  powerpc/mm/64s: Fix no previous prototype warning (Michael Ellerman; 3 files, -2/+6)
As reported by lkp:

  arch/powerpc/mm/book3s64/radix_tlb.c:646:6: warning: no previous
  prototype for function 'exit_lazy_flush_tlb'

Fix it by moving the prototype into the existing header.

Fixes: 032b7f08932c ("powerpc/64s/radix: serialize_against_pte_lookup IPIs trim mm_cpumask")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210210130804.3190952-2-mpe@ellerman.id.au
2021-02-11  powerpc/83xx: Fix build error when CONFIG_PCI=n (Michael Ellerman; 1 file, -1/+1)
As reported by lkp:

  arch/powerpc/platforms/83xx/km83xx.c:183:19: error: 'mpc83xx_setup_pci' undeclared here (not in a function)
    183 |  .discover_phbs = mpc83xx_setup_pci,
        |                   ^~~~~~~~~~~~~~~~~
        |                   mpc83xx_setup_arch

There is a stub defined for the CONFIG_PCI=n case, but now that
mpc83xx_setup_pci() is being assigned to discover_phbs the correct empty
value is NULL.

Fixes: 83f84041ff1c ("powerpc/83xx: Move PHB discovery")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210210130804.3190952-1-mpe@ellerman.id.au
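The shape of the fix, sketched: when a symbol is used as a struct-field
initializer rather than called, the CONFIG_PCI=n stub must be NULL
instead of an empty function-like macro.

    #ifdef CONFIG_PCI
    extern void mpc83xx_setup_pci(void);
    #else
    /* assigned to .discover_phbs, so a do {} while (0) style macro
       no longer compiles; NULL is the correct "no PCI" value */
    #define mpc83xx_setup_pci    NULL
    #endif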
2021-02-11  powerpc: remove interrupt handler functions from the noinstr section (Nicholas Piggin; 3 files, -15/+20)
The allyesconfig ppc64 kernel fails to link with relocations unable to
fit after commit 3a96570ffceb ("powerpc: convert interrupt handlers to
use wrappers"), which is due to the interrupt handler functions being
put into the .noinstr.text section, which the linker script places on
the opposite side of the main .text section from the interrupt entry asm
code which calls the handlers. This results in a lot of linker stubs
that overwhelm the 252-byte sized space we allow for them, or in the
case of BE a .opd relocation link error for some reason.

It's not required to put interrupt handlers in the .noinstr section;
previously they used NOKPROBE_SYMBOL, so take them out and replace that
with a NOKPROBE_SYMBOL in the wrapper macro. Remove the explicit
NOKPROBE_SYMBOL macros in the interrupt handler functions. This makes a
number of interrupt handlers nokprobe that were not prior to the
interrupt wrappers commit, but since that commit they were made nokprobe
due to being in .noinstr.text, so this fix does not change that.

The fixes tag is different to the commit that first exposes the problem
because it is where the wrapper macros were introduced.

Fixes: 8d41fc618ab8 ("powerpc: interrupt handler wrapper functions")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Slightly fix up comment wording]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210211063636.236420-1-npiggin@gmail.com
2021-02-11  powerpc/powernv/pci: Use kzalloc() for phb related allocations (Michael Ellerman; 1 file, -3/+3)
As part of commit fbbefb320214 ("powerpc/pci: Move PHB discovery for
PCI_DN using platforms"), I switched some allocations from
memblock_alloc() to kmalloc(), otherwise memblock would warn that it was
being called after slab init.

However I missed that the code relied on the allocations being zeroed,
without which we could end up crashing:

  pci_bus 0000:00: busn_res: [bus 00-ff] end is updated to ff
  BUG: Unable to handle kernel data access on read at 0x6b6b6b6b6b6b6af7
  Faulting instruction address: 0xc0000000000dbc90
  Oops: Kernel access of bad area, sig: 11 [#1]
  LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA PowerNV
  ...
  NIP pnv_ioda_get_pe_state+0xe0/0x1d0
  LR  pnv_ioda_get_pe_state+0xb4/0x1d0
  Call Trace:
    pnv_ioda_get_pe_state+0xb4/0x1d0 (unreliable)
    pnv_pci_config_check_eeh.isra.9+0x78/0x270
    pnv_pci_read_config+0xf8/0x160
    pci_bus_read_config_dword+0xa4/0x120
    pci_bus_generic_read_dev_vendor_id+0x54/0x270
    pci_scan_single_device+0xb8/0x140
    pci_scan_slot+0x80/0x1b0
    pci_scan_child_bus_extend+0x94/0x490
    pcibios_scan_phb+0x1f8/0x3c0
    pcibios_init+0x8c/0x12c
    do_one_initcall+0x94/0x510
    kernel_init_freeable+0x35c/0x3fc
    kernel_init+0x2c/0x168
    ret_from_kernel_thread+0x5c/0x70

Switch them to kzalloc().

Fixes: fbbefb320214 ("powerpc/pci: Move PHB discovery for PCI_DN using platforms")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210211112749.3410771-1-mpe@ellerman.id.au
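The fix itself is a one-word change per allocation site (sketch; phb
stands for the affected pointer): the 0x6b bytes in the oops are slab
poison read through a field that was never initialized.

    /* before: contents undefined (slab poison under debug) */
    phb = kmalloc(sizeof(*phb), GFP_KERNEL);

    /* after: zeroed, matching the old memblock_alloc() behaviour */
    phb = kzalloc(sizeof(*phb), GFP_KERNEL);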
2021-02-11  KVM: PPC: Book3S HV: Fix host radix SLB optimisation with hash guests (Nicholas Piggin; 1 file, -0/+3)
Commit 68ad28a4cdd4 ("KVM: PPC: Book3S HV: Fix radix guest SLB side
channel") incorrectly removed the radix host instruction patch to skip
re-loading the host SLB entries when exiting from a hash guest. Restore
it.

Fixes: 68ad28a4cdd4 ("KVM: PPC: Book3S HV: Fix radix guest SLB side channel")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2021-02-11  KVM: PPC: Book3S HV: Ensure radix guest has no SLB entries (Paul Mackerras; 1 file, -1/+5)
Commit 68ad28a4cdd4 ("KVM: PPC: Book3S HV: Fix radix guest SLB side
channel") changed the older guest entry path, with the side effect that
vcpu->arch.slb_max no longer gets cleared for a radix guest. This means
that a HPT guest which loads some SLB entries, switches to radix mode,
runs the guest using the old guest entry path (e.g., because the
indep_threads_mode module parameter has been set to false), and then
switches back to HPT mode would now see the old SLB entries being
present, whereas previously it would have seen no SLB entries.

To avoid changing guest-visible behaviour, this adds a store instruction
to clear vcpu->arch.slb_max for a radix guest using the old guest entry
path.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2021-02-11  softirq: Move do_softirq_own_stack() to generic asm header (Thomas Gleixner; 1 file, -0/+1)
To avoid include recursion hell, move the do_softirq_own_stack() related
content into a generic asm header and include it from all places in
arch/ which need the prototype.

This allows architectures to provide an inline implementation of
do_softirq_own_stack() without introducing a lot of #ifdeffery all over
the place.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20210210002513.289960691@linutronix.de
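Such a generic header can look roughly like this (a sketch, assuming the
Kconfig switch from the next entry below is named
HAVE_SOFTIRQ_ON_OWN_STACK):

    /* include/asm-generic/softirq_stack.h, sketched */
    #ifndef __ASM_GENERIC_SOFTIRQ_STACK_H
    #define __ASM_GENERIC_SOFTIRQ_STACK_H

    #ifdef CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK
    void do_softirq_own_stack(void);
    #else
    static inline void do_softirq_own_stack(void)
    {
            __do_softirq();   /* no dedicated stack: run softirqs inline */
    }
    #endif

    #endif

Architectures that select the switch supply their own definition, either
out of line or inline, without any #ifdef at the call sites.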
2021-02-11  softirq: Move __ARCH_HAS_DO_SOFTIRQ to Kconfig (Thomas Gleixner; 2 files, -2/+1)
To prepare for inlining do_softirq_own_stack(), replace
__ARCH_HAS_DO_SOFTIRQ with a Kconfig switch and select it in the
affected architectures.

This allows in the next step to move the function prototype and the
inline stub into a separate asm-generic header file, which is required
to avoid include recursion.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20210210002513.181713427@linutronix.de
2021-02-11  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net (David S. Miller; 8 files, -21/+23)
2021-02-10  crypto: powerpc/sha256 - remove unneeded semicolon (Yang Li; 1 file, -1/+1)
Eliminate the following coccicheck warning:

./arch/powerpc/crypto/sha256-spe-glue.c:132:2-3: Unneeded semicolon

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-02-10  KVM: PPC: Don't always report hash MMU capability for P9 < DD2.2 (Fabiano Rosas; 3 files, -2/+13)
These machines don't support running both MMU types at the same time,
so remove the KVM_CAP_PPC_MMU_HASH_V3 capability when the host is using
Radix MMU.

[paulus@ozlabs.org - added defensive check on
 kvmppc_hv_ops->hash_v3_possible]

Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2021-02-10  KVM: PPC: Book3S HV: Save and restore FSCR in the P9 path (Fabiano Rosas; 1 file, -0/+4)
The Facility Status and Control Register is a privileged SPR that
defines the availability of some features in problem state. Since it can
be written by the guest, we must restore it to the previous host value
after guest exit.

This restoration is currently done by taking the value from
current->thread.fscr, which in the P9 path is not enough anymore because
the guest could context switch the QEMU thread, causing the
guest-current value to be saved into the thread struct.

The above situation manifested when running a QEMU linked against a libc
with System Call Vectored support, which causes scv instructions to be
run by QEMU early during the guest boot (during SLOF), at which point
the FSCR is 0 due to guest entry. After a few scv calls (1 to a couple
hundred), the context switching happens and the QEMU thread runs with
the guest value, resulting in a Facility Unavailable interrupt.

This patch saves and restores the host value of FSCR in the inner guest
entry loop in a way independent of current->thread.fscr. The old way of
doing it is still kept in place because it works for the old entry path.

Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2021-02-10  KVM: PPC: remove unneeded semicolon (Yang Li; 1 file, -1/+1)
Eliminate the following coccicheck warning:

./arch/powerpc/kvm/booke.c:701:2-3: Unneeded semicolon

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2021-02-10  KVM: PPC: Book3S HV: Use POWER9 SLBIA IH=6 variant to clear SLB (Nicholas Piggin; 1 file, -3/+3)
IH=6 may preserve hypervisor real-mode ERAT entries and is the
recommended SLBIA hint for switching partitions.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2021-02-10  KVM: PPC: Book3S HV: No need to clear radix host SLB before loading HPT guest (Nicholas Piggin; 1 file, -1/+5)
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2021-02-10  KVM: PPC: Book3S HV: Fix radix guest SLB side channel (Nicholas Piggin; 1 file, -8/+31)
The slbmte instruction is legal in radix mode, including radix guest
mode. This means radix guests can load the SLB with arbitrary data.

The KVM host does not clear the SLB when exiting a guest if it was a
radix guest, which would allow a rogue radix guest to use the SLB as a
side channel to communicate with other guests.

Fix this by ensuring the SLB is cleared when coming out of a radix
guest. Only the first 4 entries are a concern, because radix guests
always run with LPCR[UPRT]=1, which limits the reach of slbmte. slbia is
not used (except in a non-performance-critical path) because it can
clear cached translations.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2021-02-10  KVM: PPC: Book3S HV: Remove support for running HPT guest on RPT host without mixed mode support (Nicholas Piggin; 5 files, -226/+32)
This reverts much of commit c01015091a770 ("KVM: PPC: Book3S HV: Run HPT
guests on POWER9 radix hosts"), which was required to run HPT guests on
RPT hosts on early POWER9 CPUs without support for "mixed mode", which
meant the host could not run with MMU on while guests were running.

This code has some corner case bugs, e.g., when the guest hits a machine
check or HMI, the primary locks up waiting for secondaries to switch
LPCR to host, which they never do. This could all be fixed in software,
but most CPUs in production have mixed mode support, and those that
don't are believed to be all in installations that don't use this
capability. So simplify things and remove support.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Tested-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2021-02-10  KVM: PPC: Book3S HV: Introduce new capability for 2nd DAWR (Ravi Bangoria; 3 files, -0/+23)
Introduce KVM_CAP_PPC_DAWR1, which can be used by QEMU to query whether
KVM supports the 2nd DAWR or not. The capability is disabled by default,
even when the underlying CPU supports the 2nd DAWR. QEMU needs to check
and enable it manually to use the feature.

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
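From userspace, the check-then-enable sequence looks roughly like this
(a sketch against the raw KVM ioctl interface; error handling omitted):

    struct kvm_enable_cap cap = { .cap = KVM_CAP_PPC_DAWR1 };

    /* query: > 0 when KVM and the CPU support a second DAWR */
    if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_DAWR1) > 0)
            ioctl(vm_fd, KVM_ENABLE_CAP, &cap);   /* off by default: opt in */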
2021-02-10  KVM: PPC: Book3S HV: Add infrastructure to support 2nd DAWR (Ravi Bangoria; 7 files, -1/+87)
KVM code assumes a single DAWR everywhere. Add code to support a 2nd
DAWR. DAWR is a hypervisor resource, and thus the H_SET_MODE hcall is
used to set/unset it. Introduce a new case,
H_SET_MODE_RESOURCE_SET_DAWR1, for the 2nd DAWR.

Also, KVM will support the 2nd DAWR only if CPU_FTR_DAWR1 is set.

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2021-02-10  KVM: PPC: Book3S HV: Rename current DAWR macros and variables (Ravi Bangoria; 5 files, -30/+30)
Power10 is introducing a second DAWR (Data Address Watchpoint Register).
Use real register names (with suffix 0) from the ISA for the current
macros and variables used by kvm.

One exception is KVM_REG_PPC_DAWR. Keep it as it is, because it's uapi,
so changing it will break userspace.

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2021-02-10  KVM: PPC: Book3S HV: Allow nested guest creation when L0 hv_guest_state > L1 (Ravi Bangoria; 2 files, -12/+60)
On powerpc, the L1 hypervisor takes help from L0 using the
H_ENTER_NESTED hcall to load L2 guest state into the cpu. The L1
hypervisor prepares the L2 state in struct hv_guest_state and passes a
pointer to it via the hcall. Using that pointer, L0 reads/writes that
state directly from/to L1 memory. Thus L0 must be aware of the
hv_guest_state layout of L1. Currently it uses the version field to
achieve this: if L0 hv_guest_state.version != L1 hv_guest_state.version,
L0 won't allow the nested kvm guest.

This restriction can be loosened up a bit. L0 can be taught to
understand an older layout of hv_guest_state, if we restrict new members
to being added only at the end. I.e. we can allow a nested guest even
when L0 hv_guest_state.version > L1 hv_guest_state.version. The other
way around is not possible, though.

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
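A sketch of the relaxed check (hv_guest_state_size() is a hypothetical
helper here, mapping each known version to the size of its struct
prefix):

    /* L1 newer than L0: an unknown layout may follow, reject */
    if (l1_version > HV_GUEST_STATE_VERSION)
            return H_P2;

    /* L1 older or equal: copy only the prefix of the struct L1 knows */
    size = hv_guest_state_size(l1_version);
    if (size < 0)
            return H_P2;
    if (kvm_vcpu_read_guest(vcpu, hv_ptr, &l2_hv, size))
            return H_PARAMETER;

Fields beyond the copied prefix keep their defaults, which is why new
members may only ever be appended.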
2021-02-09  KVM: Raise the maximum number of user memslots (Vitaly Kuznetsov; 1 file, -1/+0)
Current KVM_USER_MEM_SLOTS limits are arch specific (512 on Power, 509
on x86, 32 on s390, 16 on MIPS), but they don't really need to be.
Memory slots are allocated dynamically in KVM when added, so the only
real limitation is the 'id_to_index' array, which is 'short'. We don't
have any other KVM_MEM_SLOTS_NUM/KVM_USER_MEM_SLOTS-sized statically
defined structures.

A low KVM_USER_MEM_SLOTS can be a limiting factor for some
configurations. In particular, when QEMU tries to start a Windows guest
with Hyper-V SynIC enabled and e.g. 256 vCPUs, the limit is hit, as
SynIC requires two pages per vCPU and the guest is free to pick any GFN
for each of them; this fragments memslots, as QEMU wants to have a
separate memslot for each of these pages (which are supposed to act as
'overlay' pages).

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210127175731.2020089-3-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
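Since id_to_index is the only hard constraint, the new ceiling can be
expressed directly in terms of the index type (a sketch; the name of the
internal-slots constant varies across architectures and kernel
versions):

    /* id_to_index[] entries are short, so all slots must fit in SHRT_MAX */
    #define KVM_MEM_SLOTS_NUM   SHRT_MAX
    #define KVM_USER_MEM_SLOTS  (KVM_MEM_SLOTS_NUM - KVM_PRIVATE_MEM_SLOTS)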