path: root/arch/s390
Age | Commit message | Author | Files | Lines
2020-09-09 | s390: don't trace preemption in percpu macros | Sven Schnelle | 1 | -14/+14
[ Upstream commit 1196f12a2c960951d02262af25af0bb1775ebcc2 ] Since commit a21ee6055c30 ("lockdep: Change hardirq{s_enabled,_context} to per-cpu variables") the lockdep code itself uses percpu variables. This leads to recursions because the percpu macros are calling preempt_enable() which might call trace_preempt_on(). Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
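A minimal sketch of the kind of change this describes, assuming the s390 percpu helpers wrap their read-modify-write sequences in preemption guards; the macro below is illustrative (the name is hypothetical), not the verbatim patch:

    #include <linux/preempt.h>
    #include <linux/percpu.h>

    /* Illustrative per-cpu read-modify-write helper: the notrace variants
     * keep lockdep's own per-cpu accesses from recursing into the tracer.
     */
    #define this_cpu_add_notrace_sketch(pcp, val)                       \
    do {                                                                \
            preempt_disable_notrace();  /* was: preempt_disable() */    \
            *raw_cpu_ptr(&(pcp)) += (val);                              \
            preempt_enable_notrace();   /* was: preempt_enable() */     \
    } while (0)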
2020-09-03 | s390/numa: set node distance to LOCAL_DISTANCE | Sasha Levin | 3 | -9/+0
[ Upstream commit 535e4fc623fab2e09a0653fc3a3e17f382ad0251 ] The node distance is hardcoded to 0, which causes a trouble for some user-level applications. In particular, "libnuma" expects the distance of a node to itself as LOCAL_DISTANCE. This update removes the offending node distance override. Cc: <stable@vger.kernel.org> # 4.4 Fixes: 3a368f742da1 ("s390/numa: add core infrastructure") Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-08-26 | s390/ptrace: fix storage key handling | Heiko Carstens | 1 | -2/+5
[ Upstream commit fd78c59446b8d050ecf3e0897c5a486c7de7c595 ] The key member of the runtime instrumentation control block contains only the access key, not the complete storage key. Therefore the value must be shifted by four bits. Since existing user space does not necessarily query and set the access key correctly, just ignore the user space provided key and use the correct one. Note: this is only relevant for debugging purposes in case somebody compiles a kernel with a default storage access key set to a value not equal to zero. Fixes: 262832bc5acd ("s390/ptrace: add runtime instrumention register get/set") Reported-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-08-26 | s390/runtime_instrumentation: fix storage key handling | Heiko Carstens | 1 | -1/+1
[ Upstream commit 9eaba29c7985236e16468f4e6a49cc18cf01443e ] The key member of the runtime instrumentation control block contains only the access key, not the complete storage key. Therefore the value must be shifted by four bits. Note: this is only relevant for debugging purposes in case somebody compiles a kernel with a default storage access key set to a value not equal to zero. Fixes: e4b8b3f33fca ("s390: add support for runtime instrumentation") Reported-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
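A hedged sketch of the shift described above; the helper name is hypothetical and the control-block field is assumed from the commit text:

    #include <asm/runtime_instr.h>
    #include <asm/page.h>

    /* PAGE_DEFAULT_KEY is a full storage key with the access key in its
     * upper four bits; the RI control block stores only the 4-bit access
     * key, hence the shift.
     */
    static void set_ri_access_key(struct runtime_instr_cb *cb)
    {
            cb->key = PAGE_DEFAULT_KEY >> 4;    /* was: PAGE_DEFAULT_KEY */
    }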
2020-08-19 | s390/gmap: improve THP splitting | Gerald Schaefer | 1 | -7/+20
commit ba925fa35057a062ac98c3e8138b013ce4ce351c upstream. During s390_enable_sie(), we need to take care of splitting all qemu user process THP mappings. This is currently done with follow_page(FOLL_SPLIT), by simply iterating over all vma ranges, with PAGE_SIZE increment. This logic is sub-optimal and can result in a lot of unnecessary overhead, especially when using qemu and ASAN with large shadow map. Ilya reported significant system slow-down with one CPU busy for a long time and overall unresponsiveness. Fix this by using walk_page_vma() and directly calling split_huge_pmd() only for present pmds, which greatly reduces overhead. Cc: <stable@vger.kernel.org> # v5.4+ Reported-by: Ilya Leoshkevich <iii@linux.ibm.com> Tested-by: Ilya Leoshkevich <iii@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
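A sketch of the page-walk approach described above, assuming the mm_walk_ops callback interface; the callback name follows the commit's description of the idea and may differ from the final code:

    #include <linux/pagewalk.h>
    #include <linux/huge_mm.h>

    /* Split huge pmds once per pmd instead of probing every PAGE_SIZE step
     * with follow_page(FOLL_SPLIT).
     */
    static int thp_split_walk_pmd_entry(pmd_t *pmd, unsigned long addr,
                                        unsigned long next, struct mm_walk *walk)
    {
            split_huge_pmd(walk->vma, pmd, addr);
            return 0;
    }

    static const struct mm_walk_ops thp_split_walk_ops = {
            .pmd_entry = thp_split_walk_pmd_entry,
    };

    /* for each qemu vma: walk_page_vma(vma, &thp_split_walk_ops, NULL); */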
2020-07-16 | s390/maccess: add no DAT mode to kernel_write | Vasily Gorbik | 1 | -5/+9
[ Upstream commit d6df52e9996dcc2062c3d9c9123288468bb95b52 ] To be able to patch kernel code before paging is initialized do plain memcpy if DAT is off. This is required to enable early jump label initialization. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-07-16 | s390: Change s390_kernel_write() return type to match memcpy() | Josh Poimboeuf | 2 | -4/+7
[ Upstream commit cb2cceaefb4c4dc28fc27ff1f1b2d258bfc10353 ] s390_kernel_write()'s function type is almost identical to memcpy(). Change its return type to "void *" so they can be used interchangeably. Cc: linux-s390@vger.kernel.org Cc: heiko.carstens@de.ibm.com Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Acked-by: Joe Lawrence <joe.lawrence@redhat.com> Acked-by: Miroslav Benes <mbenes@suse.cz> Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> # s390 Signed-off-by: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Sasha Levin <sashal@kernel.org>
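The interface change described above, sketched as a prototype (the function presumably returns the destination pointer, mirroring memcpy()):

    #include <linux/types.h>

    /* new prototype, memcpy()-compatible */
    void *s390_kernel_write(void *dst, const void *src, size_t size);

    /* old prototype for comparison:
     * void s390_kernel_write(void *dst, const void *src, size_t size);
     */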
2020-07-16 | s390/mm: fix huge pte soft dirty copying | Janosch Frank | 1 | -1/+1
commit 528a9539348a0234375dfaa1ca5dbbb2f8f8e8d2 upstream. If the pmd is soft dirty we must mark the pte as soft dirty (and not dirty). This fixes some cases for guest migration with huge page backings. Cc: <stable@vger.kernel.org> # 4.8 Fixes: bc29b7ac1d9f ("s390/mm: clean up pte/pmd encoding") Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
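A hedged sketch of the one-line idea described above (helper name hypothetical, bit names from the s390 page-table headers): the segment entry's soft-dirty state must map to the pte's soft-dirty bit, not its dirty bit:

    #include <asm/pgtable.h>

    static inline pte_t huge_rste_to_pte_soft_dirty(pte_t pte, unsigned long rste)
    {
            if (rste & _SEGMENT_ENTRY_SOFT_DIRTY)
                    pte_val(pte) |= _PAGE_SOFT_DIRTY;   /* was: _PAGE_DIRTY */
            return pte;
    }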
2020-07-16 | s390/setup: init jump labels before command line parsing | Vasily Gorbik | 1 | -0/+1
commit 95e61b1b5d6394b53d147c0fcbe2ae70fbe09446 upstream. Command line parameters might set static keys. This is true for s390 at least since commit 6471384af2a6 ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options"). To avoid the following WARN: static_key_enable_cpuslocked(): static key 'init_on_alloc+0x0/0x40' used before call to jump_label_init() call jump_label_init() just before parse_early_param(). jump_label_init() is safe to call multiple times (x86 does that), doesn't do any memory allocations and hence should be safe to call that early. Fixes: 6471384af2a6 ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options") Cc: <stable@vger.kernel.org> # 5.3: d6df52e9996d: s390/maccess: add no DAT mode to kernel_write Cc: <stable@vger.kernel.org> # 5.3 Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
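A sketch of the ordering fix, assuming it sits in the early part of s390's setup code (the wrapper function is hypothetical):

    #include <linux/init.h>
    #include <linux/jump_label.h>

    static void __init early_param_setup(void)
    {
            /* Static keys may be flipped by early parameters such as
             * init_on_alloc=, so initialize jump labels first; calling
             * jump_label_init() more than once is harmless.
             */
            jump_label_init();
            parse_early_param();
    }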
2020-07-16 | s390/kasan: fix early pgm check handler execution | Vasily Gorbik | 1 | -0/+2
[ Upstream commit 998f5bbe3dbdab81c1cfb1aef7c3892f5d24f6c7 ] Currently if early_pgm_check_handler is called it ends up in pgm check loop. The problem is that early_pgm_check_handler is instrumented by KASAN but executed without DAT flag enabled which leads to addressing exception when KASAN checks try to access shadow memory. Fix that by executing early handlers with DAT flag on under KASAN as expected. Reported-and-tested-by: Alexander Egorenkov <egorenar@linux.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-07-16 | KVM: s390: reduce number of IO pins to 1 | Christian Borntraeger | 1 | -4/+4
[ Upstream commit 774911290c589e98e3638e73b24b0a4d4530e97c ] The current number of KVM_IRQCHIP_NUM_PINS results in an order 3 allocation (32kb) for each guest start/restart. This can result in OOM killer activity even with free swap when the memory is fragmented enough: kernel: qemu-system-s39 invoked oom-killer: gfp_mask=0x440dc0(GFP_KERNEL_ACCOUNT|__GFP_COMP|__GFP_ZERO), order=3, oom_score_adj=0 kernel: CPU: 1 PID: 357274 Comm: qemu-system-s39 Kdump: loaded Not tainted 5.4.0-29-generic #33-Ubuntu kernel: Hardware name: IBM 8562 T02 Z06 (LPAR) kernel: Call Trace: kernel: ([<00000001f848fe2a>] show_stack+0x7a/0xc0) kernel: [<00000001f8d3437a>] dump_stack+0x8a/0xc0 kernel: [<00000001f8687032>] dump_header+0x62/0x258 kernel: [<00000001f8686122>] oom_kill_process+0x172/0x180 kernel: [<00000001f8686abe>] out_of_memory+0xee/0x580 kernel: [<00000001f86e66b8>] __alloc_pages_slowpath+0xd18/0xe90 kernel: [<00000001f86e6ad4>] __alloc_pages_nodemask+0x2a4/0x320 kernel: [<00000001f86b1ab4>] kmalloc_order+0x34/0xb0 kernel: [<00000001f86b1b62>] kmalloc_order_trace+0x32/0xe0 kernel: [<00000001f84bb806>] kvm_set_irq_routing+0xa6/0x2e0 kernel: [<00000001f84c99a4>] kvm_arch_vm_ioctl+0x544/0x9e0 kernel: [<00000001f84b8936>] kvm_vm_ioctl+0x396/0x760 kernel: [<00000001f875df66>] do_vfs_ioctl+0x376/0x690 kernel: [<00000001f875e304>] ksys_ioctl+0x84/0xb0 kernel: [<00000001f875e39a>] __s390x_sys_ioctl+0x2a/0x40 kernel: [<00000001f8d55424>] system_call+0xd8/0x2c8 As far as I can tell s390x does not use the iopins as we bail our for anything other than KVM_IRQ_ROUTING_S390_ADAPTER and the chip/pin is only used for KVM_IRQ_ROUTING_IRQCHIP. So let us use a small number to reduce the memory footprint. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Link: https://lore.kernel.org/r/20200617083620.5409-1-borntraeger@de.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-07-09 | s390/debug: avoid kernel warning on too large number of pages | Christian Borntraeger | 1 | -1/+2
[ Upstream commit 827c4913923e0b441ba07ba4cc41e01181102303 ] When specifying insanely large debug buffers a kernel warning is printed. The debug code does handle the error gracefully, though. Instead of duplicating the check let us silence the warning to avoid crashes when panic_on_warn is used. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-30 | s390/vdso: fix vDSO clock_getres() | Vincenzo Frascino | 4 | -6/+8
[ Upstream commit 478237a595120a18e9b52fd2c57a6e8b7a01e411 ] clock_getres in the vDSO library has to preserve the same behaviour of posix_get_hrtimer_res(). In particular, posix_get_hrtimer_res() does: sec = 0; ns = hrtimer_resolution; and hrtimer_resolution depends on the enablement of the high resolution timers that can happen either at compile or at run time. Fix the s390 vdso implementation of clock_getres keeping a copy of hrtimer_resolution in vdso data and using that directly. Link: https://lkml.kernel.org/r/20200324121027.21665-1-vincenzo.frascino@arm.com Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [heiko.carstens@de.ibm.com: use llgf for proper zero extension] Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-30 | s390/vdso: Use $(LD) instead of $(CC) to link vDSO | Nathan Chancellor | 1 | -6/+4
[ Upstream commit 2b2a25845d534ac6d55086e35c033961fdd83a26 ] Currently, the VDSO is being linked through $(CC). This does not match how the rest of the kernel links objects, which is through the $(LD) variable. When clang is built in a default configuration, it first attempts to use the target triple's default linker, which is just ld. However, the user can override this through the CLANG_DEFAULT_LINKER cmake define so that clang uses another linker by default, such as LLVM's own linker, ld.lld. This can be useful to get more optimized links across various different projects. However, this is problematic for the s390 vDSO because ld.lld does not have any s390 emulatiom support: https://github.com/llvm/llvm-project/blob/llvmorg-10.0.1-rc1/lld/ELF/Driver.cpp#L132-L150 Thus, if a user is using a toolchain with ld.lld as the default, they will see an error, even if they have specified ld.bfd through the LD make variable: $ make -j"$(nproc)" -s ARCH=s390 CROSS_COMPILE=s390x-linux-gnu- LLVM=1 \ LD=s390x-linux-gnu-ld \ defconfig arch/s390/kernel/vdso64/ ld.lld: error: unknown emulation: elf64_s390 clang-11: error: linker command failed with exit code 1 (use -v to see invocation) Normally, '-fuse-ld=bfd' could be used to get around this; however, this can be fragile, depending on paths and variable naming. The cleaner solution for the kernel is to take advantage of the fact that $(LD) can be invoked directly, which bypasses the heuristics of $(CC) and respects the user's choice. Similar changes have been done for ARM, ARM64, and MIPS. Link: https://lkml.kernel.org/r/20200602192523.32758-1-natechancellor@gmail.com Link: https://github.com/ClangBuiltLinux/linux/issues/1041 Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> [heiko.carstens@de.ibm.com: add --build-id flag] Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-30 | s390/ptrace: fix setting syscall number | Sven Schnelle | 1 | -1/+30
[ Upstream commit 873e5a763d604c32988c4a78913a8dab3862d2f9 ] When strace wants to update the syscall number, it sets GPR2 to the desired number and updates the GPR via PTRACE_SETREGSET. It doesn't update regs->int_code which would cause the old syscall executed on syscall restart. As we cannot change the ptrace ABI and don't have a field for the interruption code, check whether the tracee is in a syscall and the last instruction was svc. In that case assume that the tracer wants to update the syscall number and copy the GPR2 value to regs->int_code. Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-30 | s390/ptrace: pass invalid syscall numbers to tracing | Sven Schnelle | 2 | -5/+3
[ Upstream commit 00332c16b1604242a56289ff2b26e283dbad0812 ] tracing expects to see invalid syscalls, so pass it through. The syscall path in entry.S checks the syscall number before looking up the handler, so it is still safe. Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-24 | s390: fix syscall_get_error for compat processes | Dmitry V. Levin | 1 | -1/+11
commit b3583fca5fb654af2cfc1c08259abb9728272538 upstream. If both the tracer and the tracee are compat processes, and gprs[2] is assigned a value by __poke_user_compat, then the higher 32 bits of gprs[2] are cleared, IS_ERR_VALUE() always returns false, and syscall_get_error() always returns 0. Fix the implementation by sign-extending the value for compat processes the same way as x86 implementation does. The bug was exposed to user space by commit 201766a20e30f ("ptrace: add PTRACE_GET_SYSCALL_INFO request") and detected by strace test suite. This change fixes strace syscall tampering on s390. Link: https://lkml.kernel.org/r/20200602180051.GA2427@altlinux.org Fixes: 753c4dd6a2fa2 ("[S390] ptrace changes") Cc: Elvira Khabirova <lineprinter@altlinux.org> Cc: stable@vger.kernel.org # v2.6.28+ Signed-off-by: Dmitry V. Levin <ldv@altlinux.org> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
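A sketch of the sign-extension described above, assuming the usual s390 syscall.h layout where the return value lives in gprs[2] and 31-bit tasks carry TIF_31BIT:

    #include <linux/err.h>
    #include <linux/sched.h>
    #include <asm/ptrace.h>

    static inline long syscall_get_error(struct task_struct *task,
                                         struct pt_regs *regs)
    {
            unsigned long error = regs->gprs[2];
    #ifdef CONFIG_COMPAT
            if (test_tsk_thread_flag(task, TIF_31BIT))
                    /* sign-extend the 32-bit value, as x86 does for ia32 */
                    error = (long)(int)error;
    #endif
            return IS_ERR_VALUE(error) ? error : 0;
    }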
2020-06-17 | s390/pci: Log new handle in clp_disable_fh() | Petr Tesarik | 1 | -2/+1
[ Upstream commit e1750a3d9abbea2ece29cac8dc5a6f5bc19c1492 ] After disabling a function, the original handle is logged instead of the disabled handle. Link: https://lkml.kernel.org/r/20200522183922.5253-1-ptesarik@suse.com Fixes: 17cdec960cf7 ("s390/pci: Recover handle in clp_set_pci_fn()") Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Signed-off-by: Petr Tesarik <ptesarik@suse.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-07 | s390/mm: fix set_huge_pte_at() for empty ptes | Gerald Schaefer | 1 | -3/+6
[ Upstream commit ac8372f3b4e41015549b331a4f350224661e7fc6 ] On s390, the layout of normal and large ptes (i.e. pmds/puds) differs. Therefore, set_huge_pte_at() does a conversion from a normal pte to the corresponding large pmd/pud. So, when converting an empty pte, this should result in an empty pmd/pud, which would return true for pmd/pud_none(). However, after conversion we also mark the pmd/pud as large, and therefore present. For empty ptes, this will result in an empty pmd/pud that is also marked as large, and pmd/pud_none() would not return true. There is currently no issue with this behaviour, as set_huge_pte_at() does not seem to be called for empty ptes. It would be valid though, so let's fix this by not marking empty ptes as large in set_huge_pte_at(). This was found by testing a patch from from Anshuman Khandual, which is currently discussed on LKML ("mm/debug: Add more arch page table helper tests"). Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
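A hedged sketch of the behaviour described above (helper name hypothetical): only mark the converted entry as large when the source pte is present, so an empty pte still converts to an empty pmd/pud and pmd/pud_none() stays true:

    #include <asm/pgtable.h>

    static inline unsigned long pte_to_huge_rste(pte_t pte, unsigned long rste)
    {
            if (!pte_none(pte))
                    rste |= _SEGMENT_ENTRY_LARGE;  /* _REGION3_ENTRY_LARGE for puds */
            return rste;
    }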
2020-06-07 | s390/ftrace: save traced function caller | Vasily Gorbik | 1 | -0/+1
[ Upstream commit b4adfe55915d8363e244e42386d69567db1719b9 ] A typical backtrace acquired from ftraced function currently looks like the following (e.g. for "path_openat"): arch_stack_walk+0x15c/0x2d8 stack_trace_save+0x50/0x68 stack_trace_call+0x15a/0x3b8 ftrace_graph_caller+0x0/0x1c 0x3e0007e3c98 <- ftraced function caller (should be do_filp_open+0x7c/0xe8) do_open_execat+0x70/0x1b8 __do_execve_file.isra.0+0x7d8/0x860 __s390x_sys_execve+0x56/0x68 system_call+0xdc/0x2d8 Note random "0x3e0007e3c98" stack value as ftraced function caller. This value causes either imprecise unwinder result or unwinding failure. That "0x3e0007e3c98" comes from r14 of ftraced function stack frame, which it haven't had a chance to initialize since the very first instruction calls ftrace code ("ftrace_caller"). (ftraced function might never save r14 as well). Nevertheless according to s390 ABI any function is called with stack frame allocated for it and r14 contains return address. "ftrace_caller" itself is called with "brasl %r0,ftrace_caller". So, to fix this issue simply always save traced function caller onto ftraced function stack frame. Reported-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-05-27 | s390/kexec_file: fix initrd location for kdump kernel | Philipp Rudo | 1 | -1/+1
commit 70b690547d5ea1a3d135a4cc39cd1e08246d0c3a upstream. initrd_start must not point at the location the initrd is loaded into the crashkernel memory but at the location it will be after the crashkernel memory is swapped with the memory at 0. Fixes: ee337f5469fd ("s390/kexec_file: Add crash support to image loader") Reported-by: Lianbo Jiang <lijiang@redhat.com> Signed-off-by: Philipp Rudo <prudo@linux.ibm.com> Tested-by: Lianbo Jiang <lijiang@redhat.com> Link: https://lore.kernel.org/r/20200512193956.15ae3f23@laptop2-ibm.local Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-05-27 | s390/kaslr: add support for R_390_JMP_SLOT relocation type | Gerald Schaefer | 1 | -0/+1
commit 4c1cbcbd6c56c79de2c07159be4f55386bb0bef2 upstream. With certain kernel configurations, the R_390_JMP_SLOT relocation type might be generated, which is not expected by the KASLR relocation code, and the kernel stops with the message "Unknown relocation type". This was found with a zfcpdump kernel config, where CONFIG_MODULES=n and CONFIG_VFIO=n. In that case, symbol_get() is used on undefined __weak symbols in virt/kvm/vfio.c, which results in the generation of R_390_JMP_SLOT relocation types. Fix this by handling R_390_JMP_SLOT similar to R_390_GLOB_DAT. Fixes: 805bc0bc238f ("s390/kernel: build a relocatable kernel") Cc: <stable@vger.kernel.org> # v5.2+ Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Reviewed-by: Philipp Rudo <prudo@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
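A sketch of the relocation handling described above; the surrounding function is simplified and its name hypothetical, but the point is that R_390_JMP_SLOT carries a 64-bit target address and can share the R_390_GLOB_DAT case:

    #include <linux/elf.h>
    #include <linux/types.h>

    static int apply_rela_sketch(u32 r_type, void *loc, u64 val)
    {
            switch (r_type) {
            case R_390_64:
            case R_390_GLOB_DAT:
            case R_390_JMP_SLOT:    /* added: handled like GLOB_DAT */
                    *(u64 *)loc = val;
                    break;
            default:
                    return 1;       /* "Unknown relocation type" */
            }
            return 0;
    }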
2020-05-27 | s390/pci: Fix s390_mmio_read/write with MIO | Niklas Schnelle | 2 | -4/+219
commit f058599e22d59e594e5aae1dc10560568d8f4a8b upstream. The s390_mmio_read/write syscalls are currently broken when running with MIO. The new pcistb_mio/pcstg_mio/pcilg_mio instructions are executed similiarly to normal load/store instructions and do address translation in the current address space. That means inside the kernel they are aware of mappings into kernel address space while outside the kernel they use user space mappings (usually created through mmap'ing a PCI device file). Now when existing user space applications use the s390_pci_mmio_write and s390_pci_mmio_read syscalls, they pass I/O addresses that are mapped into user space so as to be usable with the new instructions without needing a syscall. Accessing these addresses with the old instructions as done currently leads to a kernel panic. Also, for such a user space mapping there may not exist an equivalent kernel space mapping which means we can't just use the new instructions in kernel space. Instead of replicating user mappings in the kernel which then might collide with other mappings, we can conceptually execute the new instructions as if executed by the user space application using the secondary address space. This even allows us to directly store to the user pointer without the need for copy_to/from_user(). Cc: stable@vger.kernel.org Fixes: 71ba41c9b1d9 ("s390/pci: provide support for MIO instructions") Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Reviewed-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-05-14 | KVM: s390: Remove false WARN_ON_ONCE for the PQAP instruction | Christian Borntraeger | 1 | -1/+3
commit 5615e74f48dcc982655543e979b6c3f3f877e6f6 upstream. In LPAR we will only get an intercept for FC==3 for the PQAP instruction. Running nested under z/VM can result in other intercepts as well as ECA_APIE is an effective bit: If one hypervisor layer has turned this bit off, the end result will be that we will get intercepts for all function codes. Usually the first one will be a query like PQAP(QCI). So the WARN_ON_ONCE is not right. Let us simply remove it. Cc: Pierre Morel <pmorel@linux.ibm.com> Cc: Tony Krowiak <akrowiak@linux.ibm.com> Cc: stable@vger.kernel.org # v5.3+ Fixes: e5282de93105 ("s390: ap: kvm: add PQAP interception for AQIC") Link: https://lore.kernel.org/kvm/20200505083515.2720-1-borntraeger@de.ibm.com Reported-by: Qian Cai <cailca@icloud.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-05-02 | s390/pci: do not set affinity for floating irqs | Niklas Schnelle | 1 | -2/+3
commit 86dbf32da150339ca81509fa2eb84c814b55258b upstream. with the introduction of CPU directed interrupts the kernel parameter pci=force_floating was introduced to fall back to the previous behavior using floating irqs. However we were still setting the affinity in that case, both in __irq_alloc_descs() and via the irq_set_affinity callback in struct irq_chip. For the former only set the affinity in the directed case. The latter is explicitly set in zpci_directed_irq_init() so we can just leave it unset for the floating case. Fixes: e979ce7bced2 ("s390/pci: provide support for CPU directed interrupts") Co-developed-by: Alexander Schmidt <alexs@linux.ibm.com> Signed-off-by: Alexander Schmidt <alexs@linux.ibm.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-05-02 | s390/ftrace: fix potential crashes when switching tracers | Philipp Rudo | 3 | -4/+4
commit 8ebf6da9db1b2a20bb86cc1bee2552e894d03308 upstream. Switching tracers include instruction patching. To prevent that a instruction is patched while it's read the instruction patching is done in stop_machine 'context'. This also means that any function called during stop_machine must not be traced. Thus add 'notrace' to all functions called within stop_machine. Fixes: 1ec2772e0c3c ("s390/diag: add a statistic for diagnose calls") Fixes: 38f2c691a4b3 ("s390: improve wait logic of stop_machine") Fixes: 4ecf0a43e729 ("processor: get rid of cpu_relax_yield") Signed-off-by: Philipp Rudo <prudo@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-04-29 | s390/mm: fix page table upgrade vs 2ndary address mode accesses | Christian Borntraeger | 2 | -2/+18
commit 316ec154810960052d4586b634156c54d0778f74 upstream. A page table upgrade in a kernel section that uses secondary address mode will mess up the kernel instructions as follows: Consider the following scenario: two threads are sharing memory. On CPU1 thread 1 does e.g. strnlen_user(). That gets to old_fs = enable_sacf_uaccess(); len = strnlen_user_srst(src, size); and " la %2,0(%1)\n" " la %3,0(%0,%1)\n" " slgr %0,%0\n" " sacf 256\n" "0: srst %3,%2\n" in strnlen_user_srst(). At that point we are in secondary space mode, control register 1 points to kernel page table and instruction fetching happens via c1, rather than usual c13. Interrupts are not disabled, for obvious reasons. On CPU2 thread 2 does MAP_FIXED mmap(), forcing the upgrade of page table from 3-level to e.g. 4-level one. We'd allocated new top-level table, set it up and now we hit this: notify = 1; spin_unlock_bh(&mm->page_table_lock); } if (notify) on_each_cpu(__crst_table_upgrade, mm, 0); OK, we need to actually change over to use of new page table and we need that to happen in all threads that are currently running. Which happens to include the thread 1. IPI is delivered and we have static void __crst_table_upgrade(void *arg) { struct mm_struct *mm = arg; if (current->active_mm == mm) set_user_asce(mm); __tlb_flush_local(); } run on CPU1. That does static inline void set_user_asce(struct mm_struct *mm) { S390_lowcore.user_asce = mm->context.asce; OK, user page table address updated... __ctl_load(S390_lowcore.user_asce, 1, 1); ... and control register 1 set to it. clear_cpu_flag(CIF_ASCE_PRIMARY); } IPI is run in home space mode, so it's fine - insns are fetched using c13, which always points to kernel page table. But as soon as we return from the interrupt, previous PSW is restored, putting CPU1 back into secondary space mode, at which point we no longer get the kernel instructions from the kernel mapping. The fix is to only fixup the control registers that are currently in use for user processes during the page table update. We must also disable interrupts in enable_sacf_uaccess to synchronize the cr and thread.mm_segment updates against the on_each-cpu. Fixes: 0aaba41b58bc ("s390: remove all code using the access register mode") Cc: stable@vger.kernel.org # 4.15+ Reported-by: Al Viro <viro@zeniv.linux.org.uk> Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> References: CVE-2020-11884 Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-04-29 | KVM: s390: Return last valid slot if approx index is out-of-bounds | Sean Christopherson | 1 | -0/+3
commit 97daa028f3f621adff2c4f7b15fe0874e5b5bd6c upstream. Return the index of the last valid slot from gfn_to_memslot_approx() if its binary search loop yielded an out-of-bounds index. The index can be out-of-bounds if the specified gfn is less than the base of the lowest memslot (which is also the last valid memslot). Note, the sole caller, kvm_s390_get_cmma(), ensures used_slots is non-zero. Fixes: afdad61615cc3 ("KVM: s390: Fix storage attributes migration with memory slots") Cc: stable@vger.kernel.org # 4.19.x: 0774a964ef56: KVM: Fix out of range accesses to memslots Cc: stable@vger.kernel.org # 4.19.x Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200408064059.8957-3-sean.j.christopherson@intel.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
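A sketch of the clamp described above, assuming the memslot array layout used by gfn_to_memslot_approx() (slots sorted by descending base_gfn, so the last used slot has the lowest base); function name marked as a sketch:

    #include <linux/kvm_host.h>

    static int gfn_to_memslot_approx_sketch(struct kvm_memslots *slots, gfn_t gfn)
    {
            struct kvm_memory_slot *memslots = slots->memslots;
            int start = 0, end = slots->used_slots;
            int slot;

            while (start < end) {
                    slot = start + (end - start) / 2;
                    if (gfn >= memslots[slot].base_gfn)
                            end = slot;
                    else
                            start = slot + 1;
            }

            if (start >= slots->used_slots)         /* added bounds check */
                    return slots->used_slots - 1;   /* last valid slot */

            return start;
    }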
2020-04-23 | KVM: s390: vsie: Fix possible race when shadowing region 3 tables | David Hildenbrand | 1 | -0/+1
[ Upstream commit 1493e0f944f3c319d11e067c185c904d01c17ae5 ] We have to properly retry again by returning -EINVAL immediately in case somebody else instantiated the table concurrently. We missed to add the goto in this function only. The code now matches the other, similar shadowing functions. We are overwriting an existing region 2 table entry. All allocated pages are added to the crst_list to be freed later, so they are not lost forever. However, when unshadowing the region 2 table, we wouldn't trigger unshadowing of the original shadowed region 3 table that we replaced. It would get unshadowed when the original region 3 table is modified. As it's not connected to the page table hierarchy anymore, it's not going to get used anymore. However, for a limited time, this page table will stick around, so it's in some sense a temporary memory leak. Identified by manual code inspection. I don't think this classifies as stable material. Fixes: 998f637cc4b9 ("s390/mm: avoid races on region/segment/page table shadowing") Signed-off-by: David Hildenbrand <david@redhat.com> Link: https://lore.kernel.org/r/20200403153050.20569-4-david@redhat.com Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | s390/cpum_sf: Fix wrong page count in error message | Thomas Richter | 1 | -0/+1
[ Upstream commit 4141b6a5e9f171325effc36a22eb92bf961e7a5c ] When perf record -e SF_CYCLES_BASIC_DIAG runs with very high frequency, the samples arrive faster than the perf process can save them to file. Eventually, for longer running processes, this leads to the siutation where the trace buffers allocated by perf slowly fills up. At one point the auxiliary trace buffer is full and the CPU Measurement sampling facility is turned off. Furthermore a warning is printed to the kernel log buffer: cpum_sf: The AUX buffer with 0 pages for the diagnostic-sampling mode is full The number of allocated pages for the auxiliary trace buffer is shown as zero pages. That is wrong. Fix this by saving the number of allocated pages before entering the work loop in the interrupt handler. When the interrupt handler processes the samples, it may detect the buffer full condition and stop sampling, reducing the buffer size to zero. Print the correct value in the error message: cpum_sf: The AUX buffer with 256 pages for the diagnostic-sampling mode is full Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-23 | s390/cpuinfo: fix wrong output when CPU0 is offline | Alexander Gordeev | 1 | -1/+4
[ Upstream commit 872f27103874a73783aeff2aac2b41a489f67d7c ] /proc/cpuinfo should not print information about CPU 0 when it is offline. Fixes: 281eaa8cb67c ("s390/cpuinfo: simplify locking and skip offline cpus early") Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> [heiko.carstens@de.ibm.com: shortened commit message] Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-04-17 | s390/diag: fix display of diagnose call statistics | Michael Mueller | 1 | -1/+1
commit 6c7c851f1b666a8a455678a0b480b9162de86052 upstream. Show the full diag statistic table and not just parts of it. The issue surfaced in a KVM guest with a number of vcpus defined smaller than NR_DIAG_STAT. Fixes: 1ec2772e0c3c ("s390/diag: add a statistic for diagnose calls") Cc: stable@vger.kernel.org Signed-off-by: Michael Mueller <mimu@linux.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-04-17 | KVM: s390: vsie: Fix delivery of addressing exceptions | David Hildenbrand | 1 | -0/+1
commit 4d4cee96fb7a3cc53702a9be8299bf525be4ee98 upstream. Whenever we get an -EFAULT, we failed to read in guest 2 physical address space. Such addressing exceptions are reported via a program intercept to the nested hypervisor. We faked the intercept, we have to return to guest 2. Instead, right now we would be returning -EFAULT from the intercept handler, eventually crashing the VM. the correct thing to do is to return 1 as rc == 1 is the internal representation of "we have to go back into g2". Addressing exceptions can only happen if the g2->g3 page tables reference invalid g2 addresses (say, either a table or the final page is not accessible - so something that basically never happens in sane environments. Identified by manual code inspection. Fixes: a3508fbe9dc6 ("KVM: s390: vsie: initial support for nested virtualization") Cc: <stable@vger.kernel.org> # v4.8+ Signed-off-by: David Hildenbrand <david@redhat.com> Link: https://lore.kernel.org/r/20200403153050.20569-3-david@redhat.com Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> [borntraeger@de.ibm.com: fix patch description] Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-04-17 | KVM: s390: vsie: Fix region 1 ASCE sanity shadow address checks | David Hildenbrand | 1 | -1/+5
commit a1d032a49522cb5368e5dfb945a85899b4c74f65 upstream. In case we have a region 1 the following calculation (31 + ((gmap->asce & _ASCE_TYPE_MASK) >> 2)*11) results in 64. As shifts beyond the size are undefined the compiler is free to use instructions like sllg. sllg will only use 6 bits of the shift value (here 64) resulting in no shift at all. That means that ALL addresses will be rejected. The can result in endless loops, e.g. when prefix cannot get mapped. Fixes: 4be130a08420 ("s390/mm: add shadow gmap support") Tested-by: Janosch Frank <frankja@linux.ibm.com> Reported-by: Janosch Frank <frankja@linux.ibm.com> Cc: <stable@vger.kernel.org> # v4.8+ Signed-off-by: David Hildenbrand <david@redhat.com> Link: https://lore.kernel.org/r/20200403153050.20569-2-david@redhat.com Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> [borntraeger@de.ibm.com: fix patch description, remove WARN_ON_ONCE] Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
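Not the verbatim upstream fix, but a guarded version of the problematic expression that illustrates the undefined-behaviour issue (identifiers as used in the commit text; the helper itself is hypothetical):

    #include <linux/errno.h>
    #include <asm/pgtable.h>

    static int shadow_gaddr_check_sketch(unsigned long asce, unsigned long gaddr)
    {
            unsigned int shift = 31 + ((asce & _ASCE_TYPE_MASK) >> 2) * 11;

            /* For a region 1 ASCE shift is 64; "1UL << 64" is undefined and
             * a compiler may emit sllg, which masks the count to 6 bits and
             * ends up rejecting every address.  A region 1 table covers the
             * whole address space, so skip the limit check in that case.
             */
            if (shift < 64 && gaddr >= (1UL << shift))
                    return -EFAULT;     /* address out of range */
            return 0;
    }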
2020-04-13 | s390: prevent leaking kernel address in BEAR | Sven Schnelle | 9 | -27/+62
commit 0b38b5e1d0e2f361e418e05c179db05bb688bbd6 upstream. When userspace executes a syscall or gets interrupted, BEAR contains a kernel address when returning to userspace. This make it pretty easy to figure out where the kernel is mapped even with KASLR enabled. To fix this, add lpswe to lowcore and always execute it there, so userspace sees only the lowcore address of lpswe. For this we have to extend both critical_cleanup and the SWITCH_ASYNC macro to also check for lpswe addresses in lowcore. Fixes: b2d24b97b2a9 ("s390/kernel: add support for kernel address space layout randomization (KASLR)") Cc: <stable@vger.kernel.org> # v5.2+ Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-03-12 | s390/mm: fix panic in gup_fast on large pud | Gerald Schaefer | 1 | -0/+6
commit 582b4e55403e053d8a48ff687a05174da9cc3fb0 upstream. On s390 there currently is no implementation of pud_write(). That was ok as long as we had our own implementation of get_user_pages_fast() which checked for pud protection by testing the bit directly w/o using pud_write(). The other callers of pud_write() are not reachable on s390. After commit 1a42010cdc26 ("s390/mm: convert to the generic get_user_pages_fast code") we use the generic get_user_pages_fast(), which does call pud_write() in pud_access_permitted() for FOLL_WRITE access on a large pud. Without an s390 specific pud_write(), the generic version is called, which contains a BUG() statement to remind us that we don't have a proper implementation. This results in a kernel panic. Fix this by providing an implementation of pud_write(). Cc: <stable@vger.kernel.org> # 5.2+ Fixes: 1a42010cdc26 ("s390/mm: convert to the generic get_user_pages_fast code") Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
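Roughly what such a pud_write() looks like, sketched under the assumption that large puds use a region-3 WRITE software bit the same way large pmds use the segment WRITE bit:

    #include <asm/pgtable.h>

    #define pud_write pud_write
    static inline int pud_write(pud_t pud)
    {
            return (pud_val(pud) & _REGION3_ENTRY_WRITE) != 0;
    }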
2020-03-12 | s390/pci: Fix unexpected write combine on resource | Niklas Schnelle | 1 | -2/+2
commit df057c914a9c219ac8b8ed22caf7da2f80c1fe26 upstream. In the initial MIO support introduced in commit 71ba41c9b1d9 ("s390/pci: provide support for MIO instructions") zpci_map_resource() and zpci_setup_resources() default to using the mio_wb address as the resource's start address. This means users of the mapping, which includes most drivers, will get write combining on PCI Stores. This may lead to problems when drivers expect write through behavior when not using an explicit ioremap_wc(). Cc: stable@vger.kernel.org Fixes: 71ba41c9b1d9 ("s390/pci: provide support for MIO instructions") Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-03-12 | s390/qdio: fill SL with absolute addresses | Julian Wiedmann | 1 | -1/+1
[ Upstream commit e9091ffd6a0aaced111b5d6ead5eaab5cd7101bc ] As the comment says, sl->sbal holds an absolute address. qeth currently solves this through wild casting, while zfcp doesn't care. Handle this properly in the code that actually builds the SL. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Alexandra Winter <wintera@linux.ibm.com> Reviewed-by: Steffen Maier <maier@linux.ibm.com> [for qdio] Reviewed-by: Benjamin Block <bblock@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-03-12 | s390: make 'install' not depend on vmlinux | Masahiro Yamada | 2 | -2/+2
[ Upstream commit 94e90f727f7424d827256023cace829cad6896f4 ] For the same reason as commit 19514fc665ff ("arm, kbuild: make "make install" not depend on vmlinux"), the install targets should never trigger the rebuild of the kernel. The variable, CONFIGURE, is not set by anyone. Remove it as well. Link: https://lkml.kernel.org/r/20200216144829.27023-1-masahiroy@kernel.org Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-02-28 | s390/mm: Explicitly compare PAGE_DEFAULT_KEY against zero in storage_key_init_range | Nathan Chancellor | 1 | -1/+1
commit 380324734956c64cd060e1db4304f3117ac15809 upstream. Clang warns: In file included from ../arch/s390/purgatory/purgatory.c:10: In file included from ../include/linux/kexec.h:18: In file included from ../include/linux/crash_core.h:6: In file included from ../include/linux/elfcore.h:5: In file included from ../include/linux/user.h:1: In file included from ../arch/s390/include/asm/user.h:11: ../arch/s390/include/asm/page.h:45:6: warning: converting the result of '<<' to a boolean always evaluates to false [-Wtautological-constant-compare] if (PAGE_DEFAULT_KEY) ^ ../arch/s390/include/asm/page.h:23:44: note: expanded from macro 'PAGE_DEFAULT_KEY' #define PAGE_DEFAULT_KEY (PAGE_DEFAULT_ACC << 4) ^ 1 warning generated. Explicitly compare this against zero to silence the warning as it is intended to be used in a boolean context. Fixes: de3fa841e429 ("s390/mm: fix compile for PAGE_DEFAULT_KEY != 0") Link: https://github.com/ClangBuiltLinux/linux/issues/860 Link: https://lkml.kernel.org/r/20200214064207.10381-1-natechancellor@gmail.com Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
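The change is small enough to sketch in full; the wrapper follows the structure quoted in the warning above, with __storage_key_init_range() assumed to be the arch helper that does the real work:

    #define PAGE_DEFAULT_ACC        0
    #define PAGE_DEFAULT_KEY        (PAGE_DEFAULT_ACC << 4)

    static inline void storage_key_init_range(unsigned long start,
                                              unsigned long end)
    {
            if (PAGE_DEFAULT_KEY != 0)      /* was: if (PAGE_DEFAULT_KEY) */
                    __storage_key_init_range(start, end);
    }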
2020-02-28 | s390/kaslr: Fix casts in get_random | Nathan Chancellor | 1 | -1/+1
commit 788d671517b5c81efbed9310ccbadb8cca86a08e upstream. Clang warns: ../arch/s390/boot/kaslr.c:78:25: warning: passing 'char *' to parameter of type 'const u8 *' (aka 'const unsigned char *') converts between pointers to integer types with different sign [-Wpointer-sign] (char *) entropy, (char *) entropy, ^~~~~~~~~~~~~~~~ ../arch/s390/include/asm/cpacf.h:280:28: note: passing argument to parameter 'src' here u8 *dest, const u8 *src, long src_len) ^ 2 warnings generated. Fix the cast to match what else is done in this function. Fixes: b2d24b97b2a9 ("s390/kernel: add support for kernel address space layout randomization (KASLR)") Link: https://github.com/ClangBuiltLinux/linux/issues/862 Link: https://lkml.kernel.org/r/20200208141052.48476-1-natechancellor@gmail.com Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-02-24 | s390/pci: Recover handle in clp_set_pci_fn() | Niklas Schnelle | 3 | -21/+31
[ Upstream commit 17cdec960cf776b20b1fb08c622221babe591d51 ] When we try to recover a PCI function using echo 1 > /sys/bus/pci/devices/<id>/recover or manually with echo 1 > /sys/bus/pci/devices/<id>/remove echo 0 > /sys/bus/pci/slots/<slot>/power echo 1 > /sys/bus/pci/slots/<slot>/power clp_disable_fn() / clp_enable_fn() call clp_set_pci_fn() to first disable and then reenable the function. When the function is already in the requested state we may be left with an invalid function handle. To get a new valid handle we do a clp_list_pci() call. For this we need both the function ID and function handle in clp_set_pci_fn() so pass the zdev and get both. To simplify things also pull setting the refreshed function handle into clp_set_pci_fn() Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Reviewed-by: Peter Oberparleiter <oberpar@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-02-24 | s390/ftrace: generate traced function stack frame | Vasily Gorbik | 1 | -1/+14
[ Upstream commit 45f7a0da600d3c409b5ad8d5ddddacd98ddc8840 ] Currently backtrace from ftraced function does not contain ftraced function itself. e.g. for "path_openat": arch_stack_walk+0x15c/0x2d8 stack_trace_save+0x50/0x68 stack_trace_call+0x15e/0x3d8 ftrace_graph_caller+0x0/0x1c <-- ftrace code do_filp_open+0x7c/0xe8 <-- ftraced function caller do_open_execat+0x76/0x1b8 open_exec+0x52/0x78 load_elf_binary+0x180/0x1160 search_binary_handler+0x8e/0x288 load_script+0x2a8/0x2b8 search_binary_handler+0x8e/0x288 __do_execve_file.isra.39+0x6fa/0xb40 __s390x_sys_execve+0x56/0x68 system_call+0xdc/0x2d8 Ftraced function is expected in the backtrace by ftrace kselftests, which are now failing. It would also be nice to have it for clarity reasons. "ftrace_caller" itself is called without stack frame allocated for it and does not store its caller (ftraced function). Instead it simply allocates a stack frame for "ftrace_trace_function" and sets backchain to point to ftraced function stack frame (which contains ftraced function caller in saved r14). To fix this issue make "ftrace_caller" allocate a stack frame for itself just to store ftraced function for the stack unwinder. As a result backtrace looks like the following: arch_stack_walk+0x15c/0x2d8 stack_trace_save+0x50/0x68 stack_trace_call+0x15e/0x3d8 ftrace_graph_caller+0x0/0x1c <-- ftrace code path_openat+0x6/0xd60 <-- ftraced function do_filp_open+0x7c/0xe8 <-- ftraced function caller do_open_execat+0x76/0x1b8 open_exec+0x52/0x78 load_elf_binary+0x180/0x1160 search_binary_handler+0x8e/0x288 load_script+0x2a8/0x2b8 search_binary_handler+0x8e/0x288 __do_execve_file.isra.39+0x6fa/0xb40 __s390x_sys_execve+0x56/0x68 system_call+0xdc/0x2d8 Reported-by: Sven Schnelle <sven.schnelle@ibm.com> Tested-by: Sven Schnelle <sven.schnelle@ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-02-24 | s390: adjust -mpacked-stack support check for clang 10 | Vasily Gorbik | 1 | -1/+1
[ Upstream commit 253b3c4b2920e07ce9e2b18800b9b65245e2fafa ] clang 10 introduces -mpacked-stack compiler option implementation. At the same time currently it does not support a combination of -mpacked-stack and -mbackchain. This leads to the following build error: clang: error: unsupported option '-mpacked-stack with -mbackchain' for target 's390x-ibm-linux' If/when clang adds support for a combination of -mpacked-stack and -mbackchain it would also require -msoft-float (like gcc does). According to Ulrich Weigand "stack slot assigned to the kernel backchain overlaps the stack slot assigned to the FPR varargs (both are required to be placed immediately after the saved r15 slot if present)." Extend -mpacked-stack compiler option support check to include all 3 options -mpacked-stack -mbackchain -msoft-float which must present to support -mpacked-stack with -mbackchain. Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-02-24 | KVM: s390: ENOTSUPP -> EOPNOTSUPP fixups | Christian Borntraeger | 1 | -3/+3
[ Upstream commit c611990844c28c61ca4b35ff69d3a2ae95ccd486 ] There is no ENOTSUPP for userspace. Reported-by: Julian Wiedmann <jwi@linux.ibm.com> Fixes: 519783935451 ("KVM: s390: introduce ais mode modify function") Fixes: 2c1a48f2e5ed ("KVM: S390: add new group for flic") Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-02-24 | s390/pci: Fix possible deadlock in recover_store() | Niklas Schnelle | 1 | -21/+42
[ Upstream commit 576c75e36c689bec6a940e807bae27291ab0c0de ] With zpci_disable() working, lockdep detected a potential deadlock (lockdep output at the end). The deadlock is between recovering a PCI function via the /sys/bus/pci/devices/<dev>/recover attribute vs powering it off via /sys/bus/pci/slots/<slot>/power. The fix is analogous to the changes in commit 0ee223b2e1f6 ("scsi: core: Avoid that SCSI device removal through sysfs triggers a deadlock") that fixed a potential deadlock on removing a SCSI device via sysfs. [ 204.830107] ====================================================== [ 204.830109] WARNING: possible circular locking dependency detected [ 204.830111] 5.5.0-rc2-06072-gbc03ecc9a672 #6 Tainted: G W [ 204.830112] ------------------------------------------------------ [ 204.830113] bash/1034 is trying to acquire lock: [ 204.830115] 0000000192a1a610 (kn->count#200){++++}, at: kernfs_remove_by_name_ns+0x5c/0xa8 [ 204.830122] but task is already holding lock: [ 204.830123] 00000000c16134a8 (pci_rescan_remove_lock){+.+.}, at: pci_stop_and_remove_bus_device_locked+0x26/0x48 [ 204.830128] which lock already depends on the new lock. [ 204.830129] the existing dependency chain (in reverse order) is: [ 204.830130] -> #1 (pci_rescan_remove_lock){+.+.}: [ 204.830134] validate_chain+0x93a/0xd08 [ 204.830136] __lock_acquire+0x4ae/0x9d0 [ 204.830137] lock_acquire+0x114/0x280 [ 204.830140] __mutex_lock+0xa2/0x960 [ 204.830142] mutex_lock_nested+0x32/0x40 [ 204.830145] recover_store+0x4c/0xa8 [ 204.830147] kernfs_fop_write+0xe6/0x218 [ 204.830151] vfs_write+0xb0/0x1b8 [ 204.830152] ksys_write+0x6c/0xf8 [ 204.830154] system_call+0xd8/0x2d8 [ 204.830155] -> #0 (kn->count#200){++++}: [ 204.830187] check_noncircular+0x1e6/0x240 [ 204.830189] check_prev_add+0xfc/0xdb0 [ 204.830190] validate_chain+0x93a/0xd08 [ 204.830192] __lock_acquire+0x4ae/0x9d0 [ 204.830193] lock_acquire+0x114/0x280 [ 204.830194] __kernfs_remove.part.0+0x2e4/0x360 [ 204.830196] kernfs_remove_by_name_ns+0x5c/0xa8 [ 204.830198] remove_files.isra.0+0x4c/0x98 [ 204.830199] sysfs_remove_group+0x66/0xc8 [ 204.830201] sysfs_remove_groups+0x46/0x68 [ 204.830204] device_remove_attrs+0x52/0x90 [ 204.830207] device_del+0x182/0x418 [ 204.830208] pci_remove_bus_device+0x8a/0x130 [ 204.830210] pci_stop_and_remove_bus_device_locked+0x3a/0x48 [ 204.830212] disable_slot+0x68/0x100 [ 204.830213] power_write_file+0x7c/0x130 [ 204.830215] kernfs_fop_write+0xe6/0x218 [ 204.830217] vfs_write+0xb0/0x1b8 [ 204.830218] ksys_write+0x6c/0xf8 [ 204.830220] system_call+0xd8/0x2d8 [ 204.830221] other info that might help us debug this: [ 204.830223] Possible unsafe locking scenario: [ 204.830224] CPU0 CPU1 [ 204.830225] ---- ---- [ 204.830226] lock(pci_rescan_remove_lock); [ 204.830227] lock(kn->count#200); [ 204.830229] lock(pci_rescan_remove_lock); [ 204.830231] lock(kn->count#200); [ 204.830233] *** DEADLOCK *** [ 204.830234] 4 locks held by bash/1034: [ 204.830235] #0: 00000001b6fbc498 (sb_writers#4){.+.+}, at: vfs_write+0x158/0x1b8 [ 204.830239] #1: 000000018c9f5090 (&of->mutex){+.+.}, at: kernfs_fop_write+0xaa/0x218 [ 204.830242] #2: 00000001f7da0810 (kn->count#235){.+.+}, at: kernfs_fop_write+0xb6/0x218 [ 204.830245] #3: 00000000c16134a8 (pci_rescan_remove_lock){+.+.}, at: pci_stop_and_remove_bus_device_locked+0x26/0x48 [ 204.830248] stack backtrace: [ 204.830250] CPU: 2 PID: 1034 Comm: bash Tainted: G W 5.5.0-rc2-06072-gbc03ecc9a672 #6 [ 204.830252] Hardware name: IBM 8561 T01 703 (LPAR) [ 204.830253] Call Trace: [ 204.830257] [<00000000c05e10c0>] 
show_stack+0x88/0xf0 [ 204.830260] [<00000000c112dca4>] dump_stack+0xa4/0xe0 [ 204.830261] [<00000000c0694c06>] check_noncircular+0x1e6/0x240 [ 204.830263] [<00000000c0695bec>] check_prev_add+0xfc/0xdb0 [ 204.830264] [<00000000c06971da>] validate_chain+0x93a/0xd08 [ 204.830266] [<00000000c06994c6>] __lock_acquire+0x4ae/0x9d0 [ 204.830267] [<00000000c069867c>] lock_acquire+0x114/0x280 [ 204.830269] [<00000000c09ca15c>] __kernfs_remove.part.0+0x2e4/0x360 [ 204.830270] [<00000000c09cb5c4>] kernfs_remove_by_name_ns+0x5c/0xa8 [ 204.830272] [<00000000c09cee14>] remove_files.isra.0+0x4c/0x98 [ 204.830274] [<00000000c09cf2ae>] sysfs_remove_group+0x66/0xc8 [ 204.830276] [<00000000c09cf356>] sysfs_remove_groups+0x46/0x68 [ 204.830278] [<00000000c0e3dfe2>] device_remove_attrs+0x52/0x90 [ 204.830280] [<00000000c0e40382>] device_del+0x182/0x418 [ 204.830281] [<00000000c0dcfd7a>] pci_remove_bus_device+0x8a/0x130 [ 204.830283] [<00000000c0dcfe92>] pci_stop_and_remove_bus_device_locked+0x3a/0x48 [ 204.830285] [<00000000c0de7190>] disable_slot+0x68/0x100 [ 204.830286] [<00000000c0de6514>] power_write_file+0x7c/0x130 [ 204.830288] [<00000000c09cc846>] kernfs_fop_write+0xe6/0x218 [ 204.830290] [<00000000c08f3480>] vfs_write+0xb0/0x1b8 [ 204.830291] [<00000000c08f378c>] ksys_write+0x6c/0xf8 [ 204.830293] [<00000000c1154374>] system_call+0xd8/0x2d8 [ 204.830294] INFO: lockdep is turned off. Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Reviewed-by: Peter Oberparleiter <oberpar@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-02-19 | s390/time: Fix clk type in get_tod_clock | Nathan Chancellor | 1 | -1/+1
commit 0f8a206df7c920150d2aa45574fba0ab7ff6be4f upstream. Clang warns: In file included from ../arch/s390/boot/startup.c:3: In file included from ../include/linux/elf.h:5: In file included from ../arch/s390/include/asm/elf.h:132: In file included from ../include/linux/compat.h:10: In file included from ../include/linux/time.h:74: In file included from ../include/linux/time32.h:13: In file included from ../include/linux/timex.h:65: ../arch/s390/include/asm/timex.h:160:20: warning: passing 'unsigned char [16]' to parameter of type 'char *' converts between pointers to integer types with different sign [-Wpointer-sign] get_tod_clock_ext(clk); ^~~ ../arch/s390/include/asm/timex.h:149:44: note: passing argument to parameter 'clk' here static inline void get_tod_clock_ext(char *clk) ^ Change clk's type to just be char so that it matches what happens in get_tod_clock_ext. Fixes: 57b28f66316d ("[S390] s390_hypfs: Add new attributes") Link: https://github.com/ClangBuiltLinux/linux/issues/861 Link: http://lkml.kernel.org/r/20200208140858.47970-1-natechancellor@gmail.com Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-02-19 | s390/uv: Fix handling of length extensions | Christian Borntraeger | 1 | -1/+2
commit 27dc0700c3be7c681cea03c5230b93d02f623492 upstream. The query parameter block might contain additional information and can be extended in the future. If the size of the block does not suffice we get an error code of rc=0x100. The buffer will contain all information up to the specified size and the hypervisor/guest simply do not need the additional information as they do not know about the new data. That means that we can (and must) accept rc=0x100 as success. Cc: stable@vger.kernel.org Reviewed-by: Cornelia Huck <cohuck@redhat.com> Fixes: 5abb9351dfd9 ("s390/uv: introduce guest side ultravisor code") Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
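A hedged sketch of the tolerant check described above, written as a small predicate on the condition code and returned rc (helper name hypothetical):

    #include <linux/types.h>

    /* cc != 0 normally means failure, but rc == 0x100 only means the query
     * block was too small for newer, optional fields; everything up to the
     * specified size is valid, so treat that case as success too.
     */
    static inline bool uv_query_ok(int cc, u16 rc)
    {
            return cc == 0 || rc == 0x100;
    }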
2020-02-11 | KVM: s390: do not clobber registers during guest reset/store status | Christian Borntraeger | 1 | -4/+2
commit 55680890ea78be0df5e1384989f1be835043c084 upstream. The initial CPU reset clobbers the userspace fpc and the store status ioctl clobbers the guest acrs + fpr. As these calls are only done via ioctl (and not via vcpu_run), no CPU context is loaded, so we can (and must) act directly on the sync regs, not on the thread context. Cc: stable@kernel.org Fixes: e1788bb995be ("KVM: s390: handle floating point registers in the run ioctl not in vcpu_put/load") Fixes: 31d8b8d41a7e ("KVM: s390: handle access registers in the run ioctl not in vcpu_put/load") Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Link: https://lore.kernel.org/r/20200131100205.74720-2-frankja@linux.ibm.com Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-02-11 | s390/mm: fix dynamic pagetable upgrade for hugetlbfs | Gerald Schaefer | 2 | -1/+101
commit 5f490a520bcb393389a4d44bec90afcb332eb112 upstream. Commit ee71d16d22bb ("s390/mm: make TASK_SIZE independent from the number of page table levels") changed the logic of TASK_SIZE and also removed the arch_mmap_check() implementation for s390. This combination has a subtle effect on how get_unmapped_area() for hugetlbfs pages works. It is now possible that a user process establishes a hugetlbfs mapping at an address above 4 TB, without triggering a dynamic pagetable upgrade from 3 to 4 levels. This is because hugetlbfs mappings will not use mm->get_unmapped_area, but rather file->f_op->get_unmapped_area, which currently is the generic implementation of hugetlb_get_unmapped_area() that does not know about s390 dynamic pagetable upgrades, but with the new definition of TASK_SIZE, it will now allow mappings above 4 TB. Subsequent access to such a mapped address above 4 TB will result in a page fault loop, because the CPU cannot translate such a large address with 3 pagetable levels. The fault handler will try to map in a hugepage at the address, but due to the folded pagetable logic it will end up with creating entries in the 3 level pagetable, possibly overwriting existing mappings, and then it all repeats when the access is retried. Apart from the page fault loop, this can have various nasty effects, e.g. kernel panic from one of the BUG_ON() checks in memory management code, or even data loss if an existing mapping gets overwritten. Fix this by implementing HAVE_ARCH_HUGETLB_UNMAPPED_AREA support for s390, providing an s390 version for hugetlb_get_unmapped_area() with pagetable upgrade support similar to arch_get_unmapped_area(), which will then be used instead of the generic version. Fixes: ee71d16d22bb ("s390/mm: make TASK_SIZE independent from the number of page table levels") Cc: <stable@vger.kernel.org> # 4.12+ Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>