path: root/arch
2020-08-21  ARM: 8986/1: hw_breakpoint: Don't invoke overflow handler on uaccess watchpoints  (Will Deacon; 1 file, -5/+22)
commit eec13b42d41b0f3339dcf0c4da43734427c68620 upstream. Unprivileged memory accesses generated by the so-called "translated" instructions (e.g. LDRT) in kernel mode can cause user watchpoints to fire unexpectedly. In such cases, the hw_breakpoint logic will invoke the user overflow handler which will typically raise a SIGTRAP back to the current task. This is futile when returning back to the kernel because (a) the signal won't have been delivered and (b) userspace can't handle the thing anyway. Avoid invoking the user overflow handler for watchpoints triggered by kernel uaccess routines, and instead single-step over the faulting instruction as we would if no overflow handler had been installed. Cc: <stable@vger.kernel.org> Fixes: f81ef4a920c8 ("ARM: 6356/1: hw-breakpoint: add ARM backend for the hw-breakpoint framework") Reported-by: Luis Machado <luis.machado@linaro.org> Tested-by: Luis Machado <luis.machado@linaro.org> Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
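As a rough illustration of the idea described above, and not the actual arch/arm/kernel/hw_breakpoint.c change: only genuine user-mode hits go to the user overflow path, while hits raised by kernel uaccess routines are stepped over. user_mode(), perf_bp_event() and instruction_pointer() are standard kernel helpers; enable_single_step() stands in for the arch-local stepping helper.

    /* Illustration only: report to userspace only when the access really
     * came from user mode; for kernel uaccess hits a SIGTRAP is pointless,
     * so just single-step over the faulting instruction instead. */
    static void watchpoint_report(struct perf_event *wp, struct pt_regs *regs)
    {
            if (user_mode(regs))
                    perf_bp_event(wp, regs);        /* may queue SIGTRAP for the task */
            else
                    enable_single_step(wp, instruction_pointer(regs));
    }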
2020-07-31  parisc: Add atomic64_set_release() define to avoid CPU soft lockups  (John David Anglin; 1 file, -0/+2)
commit be6577af0cef934ccb036445314072e8cb9217b9 upstream. Stalls are quite frequent with recent kernels. I enabled CONFIG_SOFTLOCKUP_DETECTOR and I caught the following stall: watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [cc1:22803] CPU: 0 PID: 22803 Comm: cc1 Not tainted 5.6.17+ #3 Hardware name: 9000/800/rp3440 IAOQ[0]: d_alloc_parallel+0x384/0x688 IAOQ[1]: d_alloc_parallel+0x388/0x688 RP(r2): d_alloc_parallel+0x134/0x688 Backtrace: [<000000004036974c>] __lookup_slow+0xa4/0x200 [<0000000040369fc8>] walk_component+0x288/0x458 [<000000004036a9a0>] path_lookupat+0x88/0x198 [<000000004036e748>] filename_lookup+0xa0/0x168 [<000000004036e95c>] user_path_at_empty+0x64/0x80 [<000000004035d93c>] vfs_statx+0x104/0x158 [<000000004035dfcc>] __do_sys_lstat64+0x44/0x80 [<000000004035e5a0>] sys_lstat64+0x20/0x38 [<0000000040180054>] syscall_exit+0x0/0x14 The code was stuck in this loop in d_alloc_parallel: 4037d414: 0e 00 10 dc ldd 0(r16),ret0 4037d418: c7 fc 5f ed bb,< ret0,1f,4037d414 <d_alloc_parallel+0x384> 4037d41c: 08 00 02 40 nop This is the inner loop of bit_spin_lock which is called by hlist_bl_unlock in d_alloc_parallel: static inline void bit_spin_lock(int bitnum, unsigned long *addr) { /* * Assuming the lock is uncontended, this never enters * the body of the outer loop. If it is contended, then * within the inner loop a non-atomic test is used to * busywait with less bus contention for a good time to * attempt to acquire the lock bit. */ preempt_disable(); #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK) while (unlikely(test_and_set_bit_lock(bitnum, addr))) { preempt_enable(); do { cpu_relax(); } while (test_bit(bitnum, addr)); preempt_disable(); } #endif __acquire(bitlock); } After consideration, I realized that we must be losing bit unlocks. Then, I noticed that we missed defining atomic64_set_release(). Adding this define fixes the stalls in bit operations. Signed-off-by: Dave Anglin <dave.anglin@bell.net> Cc: stable@vger.kernel.org Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
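Going by the description above, the fix amounts to a one-line define; a hedged sketch of what arch/parisc/include/asm/atomic.h gains, routing the release store through the spinlock-protected atomic64_set() instead of the generic plain-store fallback, so stores cannot race with locked read-modify-write updates and lose bit unlocks:

    #define atomic64_set_release(v, i)	atomic64_set((v), (i))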
2020-07-31  x86: math-emu: Fix up 'cmp' insn for clang ias  (Arnd Bergmann; 1 file, -1/+1)
[ Upstream commit 81e96851ea32deb2c921c870eecabf335f598aeb ] The clang integrated assembler requires the 'cmp' instruction to have a length prefix here: arch/x86/math-emu/wm_sqrt.S:212:2: error: ambiguous instructions require an explicit suffix (could be 'cmpb', 'cmpw', or 'cmpl') cmp $0xffffffff,-24(%ebp) ^ Make this a 32-bit comparison, which it was clearly meant to be. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Link: https://lkml.kernel.org/r/20200527135352.1198078-1-arnd@arndb.de Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-07-31  arm64: Use test_tsk_thread_flag() for checking TIF_SINGLESTEP  (Will Deacon; 1 file, -2/+2)
[ Upstream commit 5afc78551bf5d53279036e0bf63314e35631d79f ] Rather than open-code test_tsk_thread_flag() at each callsite, simply replace the couple of offenders with calls to test_tsk_thread_flag() directly. Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
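A hedged before/after illustration of the cleanup; the wrapper function is invented purely to show the two equivalent forms, while test_tsk_thread_flag(), test_ti_thread_flag() and TIF_SINGLESTEP are the real kernel names involved.

    static inline bool task_is_single_stepping(struct task_struct *task)
    {
            /* open-coded form being replaced:
             *   test_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP)
             */
            return test_tsk_thread_flag(task, TIF_SINGLESTEP);
    }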
2020-07-31  xtensa: update *pos in cpuinfo_op.next  (Max Filippov; 1 file, -1/+2)
[ Upstream commit 0d5ab144429e8bd80889b856a44d56ab4a5cd59b ] Increment *pos in the cpuinfo_op.next to fix the following warning triggered by cat /proc/cpuinfo: seq_file: buggy .next function c_next did not update position index Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
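A generic seq_file sketch of the contract the warning is about, not the xtensa c_next() itself: a .next callback must advance *pos even when it has nothing more to return.

    static void *c_next(struct seq_file *m, void *v, loff_t *pos)
    {
            ++*pos;         /* keep the position index in sync, silencing the warning */
            return NULL;    /* single-record file: nothing follows the first entry */
    }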
2020-07-31  xtensa: fix __sync_fetch_and_{and,or}_4 declarations  (Max Filippov; 1 file, -2/+2)
[ Upstream commit 73f9941306d5ce030f3ffc7db425c7b2a798cf8e ] Building xtensa kernel with gcc-10 produces the following warnings: arch/xtensa/kernel/xtensa_ksyms.c:90:15: warning: conflicting types for built-in function ‘__sync_fetch_and_and_4’; expected ‘unsigned int(volatile void *, unsigned int)’ [-Wbuiltin-declaration-mismatch] arch/xtensa/kernel/xtensa_ksyms.c:96:15: warning: conflicting types for built-in function ‘__sync_fetch_and_or_4’; expected ‘unsigned int(volatile void *, unsigned int)’ [-Wbuiltin-declaration-mismatch] Fix declarations of these functions to avoid the warning. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
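Going by the warning text quoted above, the declarations just need to match the builtin prototype gcc expects; a sketch of the corrected declarations:

    unsigned int __sync_fetch_and_and_4(volatile void *p, unsigned int v);
    unsigned int __sync_fetch_and_or_4(volatile void *p, unsigned int v);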
2020-07-22  x86/cpu: Move x86_cache_bits settings  (Suraj Jitindar Singh; 1 file, -1/+1)
This patch is to fix the backport of the upstream patch: cc51e5428ea5 x86/speculation/l1tf: Increase l1tf memory limit for Nehalem+ When this was backported to the 4.9 and 4.14 stable branches the line + c->x86_cache_bits = c->x86_phys_bits; was applied in the wrong place, being added to the identify_cpu_without_cpuid() function instead of the get_cpu_cap() function which it was correctly applied to in the 4.4 backport. This means that x86_cache_bits is not set correctly resulting in the following warning due to the cache bits being left uninitalised (zero). WARNING: CPU: 0 PID: 7566 at arch/x86/kvm/mmu.c:284 kvm_mmu_set_mmio_spte_mask+0x4e/0x60 [kvm Modules linked in: kvm_intel(+) kvm irqbypass ipv6 crc_ccitt binfmt_misc evdev lpc_ich mfd_core ioatdma pcc_cpufreq dca ena acpi_power_meter hwmon acpi_pad button ext4 crc16 mbcache jbd2 fscrypto nvme nvme_core dm_mirror dm_region_hash dm_log dm_mod dax Hardware name: Amazon EC2 i3.metal/Not Specified, BIOS 1.0 10/16/2017 task: ffff88ff77704c00 task.stack: ffffc9000edac000 RIP: 0010:kvm_mmu_set_mmio_spte_mask+0x4e/0x60 [kvm RSP: 0018:ffffc9000edafc60 EFLAGS: 00010206 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00000000ffffff45 RDX: 000000000000002e RSI: 0008000000000001 RDI: 0008000000000001 RBP: ffffffffa036f000 R08: ffffffffffffff80 R09: ffffe8ffffccb3c0 R10: 0000000000000038 R11: 0000000000000000 R12: 0000000000005b80 R13: ffffffffa0370e40 R14: 0000000000000001 R15: ffff88bf7c0927e0 FS: 00007fa316f24740(0000) GS:ffff88bf7f600000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007fa316ea0000 CR3: 0000003f7e986004 CR4: 00000000003606f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: kvm_mmu_module_init+0x166/0x230 [kvm kvm_arch_init+0x5d/0x150 [kvm kvm_init+0x1c/0x2d0 [kvm ? hardware_setup+0x4a6/0x4a6 [kvm_intel vmx_init+0x23/0x6aa [kvm_intel ? hardware_setup+0x4a6/0x4a6 [kvm_intel do_one_initcall+0x3e/0x15d do_init_module+0x5b/0x1e5 load_module+0x19e6/0x1dc0 ? SYSC_init_module+0x13b/0x170 SYSC_init_module+0x13b/0x170 do_syscall_64+0x67/0x110 entry_SYSCALL_64_after_hwframe+0x41/0xa6 RIP: 0033:0x7fa316828f3a RSP: 002b:00007ffc9d65c1f8 EFLAGS: 00000246 ORIG_RAX: 00000000000000af RAX: ffffffffffffffda RBX: 00007fa316b08849 RCX: 00007fa316828f3a RDX: 00007fa316b08849 RSI: 0000000000071328 RDI: 00007fa316e37000 RBP: 0000000000b47e80 R08: 0000000000000003 R09: 0000000000000000 R10: 00007fa316822dba R11: 0000000000000246 R12: 0000000000b46340 R13: 0000000000b464c0 R14: 0000000000000000 R15: 0000000000040000 Code: e9 65 06 00 75 25 48 b8 00 00 00 00 00 00 00 40 48 09 c6 48 09 c7 48 89 35 f8 65 06 00 48 89 3d f9 65 06 00 c3 0f 0b 0f 0b eb d2 <0f> 0b eb d7 0f 1f 40 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 Fixes: 4.9.x ef3d45c95764 x86/speculation/l1tf: Increase l1tf memory limit for Nehalem+ Fixes: 4.14.x ec4034835eaf x86/speculation/l1tf: Increase l1tf memory limit for Nehalem+ Cc: stable@vger.kernel.org # 4.9.x-4.14.x Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com> Reviewed-by: Samuel Mendoza-Jonas <samjonas@amazon.com> Reviewed-by: Frank van der Linden <fllinden@amazon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-22  arm64: ptrace: Override SPSR.SS when single-stepping is enabled  (Will Deacon; 3 files, -6/+20)
commit 3a5a4366cecc25daa300b9a9174f7fdd352b9068 upstream. Luis reports that, when reverse debugging with GDB, single-step does not function as expected on arm64: | I've noticed, under very specific conditions, that a PTRACE_SINGLESTEP | request by GDB won't execute the underlying instruction. As a consequence, | the PC doesn't move, but we return a SIGTRAP just like we would for a | regular successful PTRACE_SINGLESTEP request. The underlying problem is that when the CPU register state is restored as part of a reverse step, the SPSR.SS bit is cleared and so the hardware single-step state can transition to the "active-pending" state, causing an unexpected step exception to be taken immediately if a step operation is attempted. In hindsight, we probably shouldn't have exposed SPSR.SS in the pstate accessible by the GPR regset, but it's a bit late for that now. Instead, simply prevent userspace from configuring the bit to a value which is inconsistent with the TIF_SINGLESTEP state for the task being traced. Cc: <stable@vger.kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Keno Fischer <keno@juliacomputing.com> Link: https://lore.kernel.org/r/1eed6d69-d53d-9657-1fc9-c089be07f98c@linaro.org Reported-by: Luis Machado <luis.machado@linaro.org> Tested-by: Luis Machado <luis.machado@linaro.org> Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-22  MIPS: Fix build for LTS kernel caused by backporting lpj adjustment  (Huacai Chen; 1 file, -9/+4)
Commit ed26aacfb5f71eecb20a ("mips: Add udelay lpj numbers adjustment") has been backported to 4.4~5.4, but the "struct cpufreq_freqs" (and also the cpufreq notifier mechanism) of 4.4~4.19 are different from the upstream kernel. These differences cause build errors, and this patch fixes the build. Cc: Serge Semin <Sergey.Semin@baikalelectronics.ru> Cc: Stable <stable@vger.kernel.org> # 4.4/4.9/4.14/4.19 Signed-off-by: Huacai Chen <chenhc@lemote.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-22  ARM: dts: socfpga: Align L2 cache-controller nodename with dtschema  (Krzysztof Kozlowski; 2 files, -2/+2)
[ Upstream commit d7adfe5ffed9faa05f8926223086b101e14f700d ] Fix dtschema validator warnings like: l2-cache@fffff000: $nodename:0: 'l2-cache@fffff000' does not match '^(cache-controller|cpu)(@[0-9a-f,]+)*$' Fixes: 475dc86d08de ("arm: dts: socfpga: Add a base DTSI for Altera's Arria10 SOC") Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Dinh Nguyen <dinguyen@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-07-22  s390/mm: fix huge pte soft dirty copying  (Janosch Frank; 1 file, -1/+1)
commit 528a9539348a0234375dfaa1ca5dbbb2f8f8e8d2 upstream. If the pmd is soft dirty we must mark the pte as soft dirty (and not dirty). This fixes some cases for guest migration with huge page backings. Cc: <stable@vger.kernel.org> # 4.8 Fixes: bc29b7ac1d9f ("s390/mm: clean up pte/pmd encoding") Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
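A hedged sketch of the copy direction being fixed, written with the generic soft-dirty helpers rather than the actual s390 pmd/pte conversion code; the wrapper function exists only for illustration.

    static pte_t copy_soft_dirty(pmd_t pmd, pte_t pte)
    {
            /* propagate soft dirty; the bug effectively marked the pte dirty instead */
            if (pmd_soft_dirty(pmd))
                    pte = pte_mksoft_dirty(pte);
            return pte;
    }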
2020-07-22  ARC: elf: use right ELF_ARCH  (Vineet Gupta; 1 file, -1/+1)
commit b7faf971081a4e56147f082234bfff55135305cb upstream. Cc: <stable@vger.kernel.org> Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-22  ARC: entry: fix potential EFA clobber when TIF_SYSCALL_TRACE  (Vineet Gupta; 1 file, -11/+5)
commit 00fdec98d9881bf5173af09aebd353ab3b9ac729 upstream. The trap handler for syscall tracing reads EFA (Exception Fault Address), in case strace wants the PC of the trap instruction (EFA is not part of pt_regs as of current code). However, this EFA read is racy as it happens after dropping to pure kernel mode (re-enabling interrupts). A taken interrupt could context-switch, trigger a different task's trap, clobbering EFA for this execution context. Fix this by reading EFA early, before re-enabling interrupts. A slight side benefit is de-duplication of FAKE_RET_FROM_EXCPN in the trap handler. The trap handler is common to both ARCompact and ARCv2 builds too. This just came out of code rework/review and no real problem was reported, but it is clearly a potential problem, especially for strace. Cc: <stable@vger.kernel.org> Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-22  KVM: x86: bit 8 of non-leaf PDPEs is not reserved  (Paolo Bonzini; 1 file, -1/+1)
commit 5ecad245de2ae23dc4e2dbece92f8ccfbaed2fa7 upstream. Bit 8 would be the "global" bit, which does not quite make sense for non-leaf page table entries. Intel ignores it; AMD ignores it in PDEs and PDPEs, but reserves it in PML4Es. Probably, earlier versions of the AMD manual documented it as reserved in PDPEs as well, and that behavior made it into KVM as well as kvm-unit-tests; fix it. Cc: stable@vger.kernel.org Reported-by: Nadav Amit <namit@vmware.com> Fixes: a0c0feb57992 ("KVM: x86: reserve bit 8 of non-leaf PDPEs and PML4Es in 64-bit mode on AMD", 2014-09-03) Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-22  KVM: arm64: Fix definition of PAGE_HYP_DEVICE  (Will Deacon; 1 file, -1/+1)
commit 68cf617309b5f6f3a651165f49f20af1494753ae upstream. PAGE_HYP_DEVICE is intended to encode attribute bits for an EL2 stage-1 pte mapping a device. Unfortunately, it includes PROT_DEVICE_nGnRE which encodes attributes for EL1 stage-1 mappings such as UXN and nG, which are RES0 for EL2, and DBM which is meaningless as TCR_EL2.HD is not set. Fix the definition of PAGE_HYP_DEVICE so that it doesn't set RES0 bits at EL2. Acked-by: Marc Zyngier <maz@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20200708162546.26176-1-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-22  arm64: kgdb: Fix single-step exception handling oops  (Wei Li; 1 file, -1/+1)
[ Upstream commit 8523c006264df65aac7d77284cc69aac46a6f842 ] After entering kdb due to breakpoint, when we execute 'ss' or 'go' (will delay installing breakpoints, do single-step first), it won't work correctly, and it will enter kdb due to oops. It's because the reason gotten in kdb_stub() is not as expected, and it seems that the ex_vector for single-step should be 0, like what arch powerpc/sh/parisc has implemented. Before the patch: Entering kdb (current=0xffff8000119e2dc0, pid 0) on processor 0 due to Keyboard Entry [0]kdb> bp printk Instruction(i) BP #0 at 0xffff8000101486cc (printk) is enabled addr at ffff8000101486cc, hardtype=0 installed=0 [0]kdb> g / # echo h > /proc/sysrq-trigger Entering kdb (current=0xffff0000fa878040, pid 266) on processor 3 due to Breakpoint @ 0xffff8000101486cc [3]kdb> ss Entering kdb (current=0xffff0000fa878040, pid 266) on processor 3 Oops: (null) due to oops @ 0xffff800010082ab8 CPU: 3 PID: 266 Comm: sh Not tainted 5.7.0-rc4-13839-gf0e5ad491718 #6 Hardware name: linux,dummy-virt (DT) pstate: 00000085 (nzcv daIf -PAN -UAO) pc : el1_irq+0x78/0x180 lr : __handle_sysrq+0x80/0x190 sp : ffff800015003bf0 x29: ffff800015003d20 x28: ffff0000fa878040 x27: 0000000000000000 x26: ffff80001126b1f0 x25: ffff800011b6a0d8 x24: 0000000000000000 x23: 0000000080200005 x22: ffff8000101486cc x21: ffff800015003d30 x20: 0000ffffffffffff x19: ffff8000119f2000 x18: 0000000000000000 x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000 x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000 x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000 x8 : ffff800015003e50 x7 : 0000000000000002 x6 : 00000000380b9990 x5 : ffff8000106e99e8 x4 : ffff0000fadd83c0 x3 : 0000ffffffffffff x2 : ffff800011b6a0d8 x1 : ffff800011b6a000 x0 : ffff80001130c9d8 Call trace: el1_irq+0x78/0x180 printk+0x0/0x84 write_sysrq_trigger+0xb0/0x118 proc_reg_write+0xb4/0xe0 __vfs_write+0x18/0x40 vfs_write+0xb0/0x1b8 ksys_write+0x64/0xf0 __arm64_sys_write+0x14/0x20 el0_svc_common.constprop.2+0xb0/0x168 do_el0_svc+0x20/0x98 el0_sync_handler+0xec/0x1a8 el0_sync+0x140/0x180 [3]kdb> After the patch: Entering kdb (current=0xffff8000119e2dc0, pid 0) on processor 0 due to Keyboard Entry [0]kdb> bp printk Instruction(i) BP #0 at 0xffff8000101486cc (printk) is enabled addr at ffff8000101486cc, hardtype=0 installed=0 [0]kdb> g / # echo h > /proc/sysrq-trigger Entering kdb (current=0xffff0000fa852bc0, pid 268) on processor 0 due to Breakpoint @ 0xffff8000101486cc [0]kdb> g Entering kdb (current=0xffff0000fa852bc0, pid 268) on processor 0 due to Breakpoint @ 0xffff8000101486cc [0]kdb> ss Entering kdb (current=0xffff0000fa852bc0, pid 268) on processor 0 due to SS trap @ 0xffff800010082ab8 [0]kdb> Fixes: 44679a4f142b ("arm64: KGDB: Add step debugging support") Signed-off-by: Wei Li <liwei391@huawei.com> Tested-by: Douglas Anderson <dianders@chromium.org> Reviewed-by: Douglas Anderson <dianders@chromium.org> Link: https://lore.kernel.org/r/20200509214159.19680-2-liwei391@huawei.com Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-07-22  ARM: imx6: add missing put_device() call in imx6q_suspend_init()  (yu kuai; 1 file, -4/+6)
[ Upstream commit 4845446036fc9c13f43b54a65c9b757c14f5141b ] if of_find_device_by_node() succeed, imx6q_suspend_init() doesn't have a corresponding put_device(). Thus add a jump target to fix the exception handling for this function implementation. Signed-off-by: yu kuai <yukuai3@huawei.com> Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
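A sketch of the reference-counting pattern the patch introduces. of_find_device_by_node() and put_device() are the real APIs involved; the function name and do_setup() are hypothetical placeholders for the rest of the init path.

    static int example_probe(struct device_node *node)
    {
            struct platform_device *pdev;
            int ret;

            pdev = of_find_device_by_node(node);    /* takes a device reference */
            if (!pdev)
                    return -ENODEV;

            ret = do_setup(pdev);                   /* hypothetical follow-up step */
            if (ret)
                    goto put_dev;                   /* the new jump target */

            return 0;

    put_dev:
            put_device(&pdev->dev);                 /* drop the reference on error */
            return ret;
    }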
2020-07-22  s390/kasan: fix early pgm check handler execution  (Vasily Gorbik; 1 file, -0/+2)
[ Upstream commit 998f5bbe3dbdab81c1cfb1aef7c3892f5d24f6c7 ] Currently if early_pgm_check_handler is called it ends up in pgm check loop. The problem is that early_pgm_check_handler is instrumented by KASAN but executed without DAT flag enabled which leads to addressing exception when KASAN checks try to access shadow memory. Fix that by executing early handlers with DAT flag on under KASAN as expected. Reported-and-tested-by: Alexander Egorenkov <egorenar@linux.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-07-22  KVM: s390: reduce number of IO pins to 1  (Christian Borntraeger; 1 file, -4/+4)
[ Upstream commit 774911290c589e98e3638e73b24b0a4d4530e97c ] The current number of KVM_IRQCHIP_NUM_PINS results in an order 3 allocation (32kb) for each guest start/restart. This can result in OOM killer activity even with free swap when the memory is fragmented enough: kernel: qemu-system-s39 invoked oom-killer: gfp_mask=0x440dc0(GFP_KERNEL_ACCOUNT|__GFP_COMP|__GFP_ZERO), order=3, oom_score_adj=0 kernel: CPU: 1 PID: 357274 Comm: qemu-system-s39 Kdump: loaded Not tainted 5.4.0-29-generic #33-Ubuntu kernel: Hardware name: IBM 8562 T02 Z06 (LPAR) kernel: Call Trace: kernel: ([<00000001f848fe2a>] show_stack+0x7a/0xc0) kernel: [<00000001f8d3437a>] dump_stack+0x8a/0xc0 kernel: [<00000001f8687032>] dump_header+0x62/0x258 kernel: [<00000001f8686122>] oom_kill_process+0x172/0x180 kernel: [<00000001f8686abe>] out_of_memory+0xee/0x580 kernel: [<00000001f86e66b8>] __alloc_pages_slowpath+0xd18/0xe90 kernel: [<00000001f86e6ad4>] __alloc_pages_nodemask+0x2a4/0x320 kernel: [<00000001f86b1ab4>] kmalloc_order+0x34/0xb0 kernel: [<00000001f86b1b62>] kmalloc_order_trace+0x32/0xe0 kernel: [<00000001f84bb806>] kvm_set_irq_routing+0xa6/0x2e0 kernel: [<00000001f84c99a4>] kvm_arch_vm_ioctl+0x544/0x9e0 kernel: [<00000001f84b8936>] kvm_vm_ioctl+0x396/0x760 kernel: [<00000001f875df66>] do_vfs_ioctl+0x376/0x690 kernel: [<00000001f875e304>] ksys_ioctl+0x84/0xb0 kernel: [<00000001f875e39a>] __s390x_sys_ioctl+0x2a/0x40 kernel: [<00000001f8d55424>] system_call+0xd8/0x2c8 As far as I can tell s390x does not use the iopins as we bail our for anything other than KVM_IRQ_ROUTING_S390_ADAPTER and the chip/pin is only used for KVM_IRQ_ROUTING_IRQCHIP. So let us use a small number to reduce the memory footprint. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Link: https://lore.kernel.org/r/20200617083620.5409-1-borntraeger@de.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-07-09  MIPS: Add missing EHB in mtc0 -> mfc0 sequence for DSPen  (Hauke Mehrtens; 1 file, -0/+1)
commit fcec538ef8cca0ad0b84432235dccd9059c8e6f8 upstream. This resolves the hazard between the mtc0 in the change_c0_status() and the mfc0 in configure_exception_vector(). Without resolving this hazard configure_exception_vector() could read an old value and would restore this old value again. This would revert the changes change_c0_status() did. I checked this by printing out the read_c0_status() at the end of per_cpu_trap_init() and the ST0_MX is not set without this patch. The hazard is documented in the MIPS Architecture Reference Manual Vol. III: MIPS32/microMIPS32 Privileged Resource Architecture (MD00088), rev 6.03 table 8.1 which includes: Producer | Consumer | Hazard ----------|----------|---------------------------- mtc0 | mfc0 | any coprocessor 0 register I saw this hazard on an Atheros AR9344 rev 2 SoC with a MIPS 74Kc CPU. There the change_c0_status() function would activate the DSPen by setting ST0_MX in the c0_status register. This was reverted and then the system got a DSP exception when the DSP registers were saved in save_dsp() in the first process switch. The crash looks like this: [ 0.089999] Mount-cache hash table entries: 1024 (order: 0, 4096 bytes, linear) [ 0.097796] Mountpoint-cache hash table entries: 1024 (order: 0, 4096 bytes, linear) [ 0.107070] Kernel panic - not syncing: Unexpected DSP exception [ 0.113470] Rebooting in 1 seconds.. We saw this problem in OpenWrt only on the MIPS 74Kc based Atheros SoCs, not on the 24Kc based SoCs. We only saw it with kernel 5.4 not with kernel 4.19, in addition we had to use GCC 8.4 or 9.X, with GCC 8.3 it did not happen. In the kernel I bisected this problem to commit 9012d011660e ("compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING"), but when this was reverted it also happened after commit 172dcd935c34b ("MIPS: Always allocate exception vector for MIPSr2+"). Commit 0b24cae4d535 ("MIPS: Add missing EHB in mtc0 -> mfc0 sequence.") does similar changes to a different file. I am not sure if there are more places affected by this problem. Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de> Cc: <stable@vger.kernel.org> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-06-30  arm64: perf: Report the PC value in REGS_ABI_32 mode  (Jiping Ma; 1 file, -3/+22)
commit 8dfe804a4031ca6ba3a3efb2048534249b64f3a5 upstream. A 32-bit perf querying the registers of a compat task using REGS_ABI_32 will receive zeroes from w15, when it expects to find the PC. Return the PC value for register dwarf register 15 when returning register values for a compat task to perf. Cc: <stable@vger.kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Jiping Ma <jiping.ma2@windriver.com> Link: https://lore.kernel.org/r/1589165527-188401-1-git-send-email-jiping.ma2@windriver.com [will: Shuffled code and added a comment] Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
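A simplified, hedged sketch of the special case described above in perf_reg_value(); the real function also handles the compat SP/LR slots, and only the dwarf register 15 / PC case is shown here.

    u64 perf_reg_value(struct pt_regs *regs, int idx)
    {
            if (compat_user_mode(regs) && idx == 15)
                    return regs->pc;        /* AArch32 dwarf reg 15 is the PC */

            return regs->regs[idx];
    }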
2020-06-30  KVM: X86: Fix MSR range of APIC registers in X2APIC mode  (Xiaoyao Li; 1 file, -2/+2)
commit bf10bd0be53282183f374af23577b18b5fbf7801 upstream. Only MSR address range 0x800 through 0x8ff is architecturally reserved and dedicated for accessing APIC registers in x2APIC mode. Fixes: 0105d1a52640 ("KVM: x2apic interface to lapic") Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com> Message-Id: <20200616073307.16440-1-xiaoyao.li@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com> Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
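The architectural range quoted above can be captured in a small helper; is_x2apic_msr() is a hypothetical name used purely for illustration of the check being tightened.

    static bool is_x2apic_msr(u32 msr)
    {
            /* only 0x800 through 0x8ff is reserved for x2APIC registers */
            return msr >= 0x800 && msr <= 0x8ff;
    }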
2020-06-30  s390/ptrace: fix setting syscall number  (Sven Schnelle; 1 file, -1/+30)
[ Upstream commit 873e5a763d604c32988c4a78913a8dab3862d2f9 ] When strace wants to update the syscall number, it sets GPR2 to the desired number and updates the GPR via PTRACE_SETREGSET. It doesn't update regs->int_code, which would cause the old syscall to be executed on syscall restart. As we cannot change the ptrace ABI and don't have a field for the interruption code, check whether the tracee is in a syscall and the last instruction was svc. In that case assume that the tracer wants to update the syscall number and copy the GPR2 value to regs->int_code. Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-30  ARM: imx5: add missing put_device() call in imx_suspend_alloc_ocram()  (yu kuai; 1 file, -2/+4)
[ Upstream commit 586745f1598ccf71b0a5a6df2222dee0a865954e ] if of_find_device_by_node() succeed, imx_suspend_alloc_ocram() doesn't have a corresponding put_device(). Thus add a jump target to fix the exception handling for this function implementation. Fixes: 1579c7b9fe01 ("ARM: imx53: Set DDR pins to high impedance when in suspend to RAM.") Signed-off-by: yu kuai <yukuai3@huawei.com> Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-30  fix a braino in "sparc32: fix register window handling in genregs32_[gs]et()"  (Al Viro; 1 file, -2/+7)
[ Upstream commit 9d964e1b82d8182184153b70174f445ea616f053 ] lost npc in PTRACE_SETREGSET, breaking PTRACE_SETREGS as well Fixes: cf51e129b968 "sparc32: fix register window handling in genregs32_[gs]et()" Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-30  kretprobe: Prevent triggering kretprobe from within kprobe_flush_task  (Jiri Olsa; 1 file, -13/+3)
[ Upstream commit 9b38cc704e844e41d9cf74e647bff1d249512cb3 ] Ziqian reported lockup when adding retprobe on _raw_spin_lock_irqsave. My test was also able to trigger lockdep output: ============================================ WARNING: possible recursive locking detected 5.6.0-rc6+ #6 Not tainted -------------------------------------------- sched-messaging/2767 is trying to acquire lock: ffffffff9a492798 (&(kretprobe_table_locks[i].lock)){-.-.}, at: kretprobe_hash_lock+0x52/0xa0 but task is already holding lock: ffffffff9a491a18 (&(kretprobe_table_locks[i].lock)){-.-.}, at: kretprobe_trampoline+0x0/0x50 other info that might help us debug this: Possible unsafe locking scenario: CPU0 ---- lock(&(kretprobe_table_locks[i].lock)); lock(&(kretprobe_table_locks[i].lock)); *** DEADLOCK *** May be due to missing lock nesting notation 1 lock held by sched-messaging/2767: #0: ffffffff9a491a18 (&(kretprobe_table_locks[i].lock)){-.-.}, at: kretprobe_trampoline+0x0/0x50 stack backtrace: CPU: 3 PID: 2767 Comm: sched-messaging Not tainted 5.6.0-rc6+ #6 Call Trace: dump_stack+0x96/0xe0 __lock_acquire.cold.57+0x173/0x2b7 ? native_queued_spin_lock_slowpath+0x42b/0x9e0 ? lockdep_hardirqs_on+0x590/0x590 ? __lock_acquire+0xf63/0x4030 lock_acquire+0x15a/0x3d0 ? kretprobe_hash_lock+0x52/0xa0 _raw_spin_lock_irqsave+0x36/0x70 ? kretprobe_hash_lock+0x52/0xa0 kretprobe_hash_lock+0x52/0xa0 trampoline_handler+0xf8/0x940 ? kprobe_fault_handler+0x380/0x380 ? find_held_lock+0x3a/0x1c0 kretprobe_trampoline+0x25/0x50 ? lock_acquired+0x392/0xbc0 ? _raw_spin_lock_irqsave+0x50/0x70 ? __get_valid_kprobe+0x1f0/0x1f0 ? _raw_spin_unlock_irqrestore+0x3b/0x40 ? finish_task_switch+0x4b9/0x6d0 ? __switch_to_asm+0x34/0x70 ? __switch_to_asm+0x40/0x70 The code within the kretprobe handler checks for probe reentrancy, so we won't trigger any _raw_spin_lock_irqsave probe in there. The problem is in outside kprobe_flush_task, where we call: kprobe_flush_task kretprobe_table_lock raw_spin_lock_irqsave _raw_spin_lock_irqsave where _raw_spin_lock_irqsave triggers the kretprobe and installs kretprobe_trampoline handler on _raw_spin_lock_irqsave return. The kretprobe_trampoline handler is then executed with already locked kretprobe_table_locks, and first thing it does is to lock kretprobe_table_locks ;-) the whole lockup path like: kprobe_flush_task kretprobe_table_lock raw_spin_lock_irqsave _raw_spin_lock_irqsave ---> probe triggered, kretprobe_trampoline installed ---> kretprobe_table_locks locked kretprobe_trampoline trampoline_handler kretprobe_hash_lock(current, &head, &flags); <--- deadlock Adding kprobe_busy_begin/end helpers that mark code with fake probe installed to prevent triggering of another kprobe within this code. Using these helpers in kprobe_flush_task, so the probe recursion protection check is hit and the probe is never set to prevent above lockup. Link: http://lkml.kernel.org/r/158927059835.27680.7011202830041561604.stgit@devnote2 Fixes: ef53d9c5e4da ("kprobes: improve kretprobe scalability with hashed locking") Cc: Ingo Molnar <mingo@kernel.org> Cc: "Gustavo A . R . Silva" <gustavoars@kernel.org> Cc: Anders Roxell <anders.roxell@linaro.org> Cc: "Naveen N . 
Rao" <naveen.n.rao@linux.ibm.com> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: David Miller <davem@davemloft.net> Cc: Ingo Molnar <mingo@elte.hu> Cc: Peter Zijlstra <peterz@infradead.org> Cc: stable@vger.kernel.org Reported-by: "Ziqian SUN (Zamir)" <zsun@redhat.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-30  x86/kprobes: Avoid kretprobe recursion bug  (Masami Hiramatsu; 1 file, -2/+20)
[ Upstream commit b191fa96ea6dc00d331dcc28c1f7db5e075693a0 ] Avoid the kretprobe recursion loop bug by setting a dummy kprobe in the current_kprobe per-CPU variable. This bug was introduced with the asm-coded trampoline code, since previously it used another kprobe for hooking the function return placeholder (which only has a nop) and the trampoline handler was called from that kprobe. This revives the old lost kprobe again. With this fix, we don't see the deadlock anymore, and you can see that all inner-called kretprobes are skipped. event_1 235 0 event_2 19375 19612 The 1st column is the recorded count and the 2nd is the missed count. The above shows (event_1 rec) + (event_2 rec) ~= (event_2 missed) (there is some difference because the counter is racy) Reported-by: Andrea Righi <righi.andrea@gmail.com> Tested-by: Andrea Righi <righi.andrea@gmail.com> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Acked-by: Steven Rostedt <rostedt@goodmis.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Fixes: c9becf58d935 ("[PATCH] kretprobe: kretprobe-booster") Link: http://lkml.kernel.org/r/155094064889.6137.972160690963039.stgit@devbox Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-30  powerpc/kprobes: Fixes for kprobe_lookup_name() on BE  (Naveen N. Rao; 1 file, -1/+2)
[ Upstream commit 30176466e36aadba01e1a630cf42397a3438efa4 ] Fix two issues with kprobes.h on BE which were exposed with the optprobes work: - one, having to do with a missing include for linux/module.h for MODULE_NAME_LEN -- this didn't show up previously since the only users of kprobe_lookup_name were in kprobes.c, which included linux/module.h through other headers, and - two, with a missing const qualifier for a local variable which ends up referring a string literal. Again, this is unique to how kprobe_lookup_name is being invoked in optprobes.c Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-30  s390: fix syscall_get_error for compat processes  (Dmitry V. Levin; 1 file, -1/+11)
commit b3583fca5fb654af2cfc1c08259abb9728272538 upstream. If both the tracer and the tracee are compat processes, and gprs[2] is assigned a value by __poke_user_compat, then the higher 32 bits of gprs[2] are cleared, IS_ERR_VALUE() always returns false, and syscall_get_error() always returns 0. Fix the implementation by sign-extending the value for compat processes the same way as x86 implementation does. The bug was exposed to user space by commit 201766a20e30f ("ptrace: add PTRACE_GET_SYSCALL_INFO request") and detected by strace test suite. This change fixes strace syscall tampering on s390. Link: https://lkml.kernel.org/r/20200602180051.GA2427@altlinux.org Fixes: 753c4dd6a2fa2 ("[S390] ptrace changes") Cc: Elvira Khabirova <lineprinter@altlinux.org> Cc: stable@vger.kernel.org # v2.6.28+ Signed-off-by: Dmitry V. Levin <ldv@altlinux.org> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
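A hedged sketch of the sign-extension described above, modelled on how x86 treats 32-bit tasks; the field and flag names follow the usual s390 conventions but are written from memory rather than copied from the patch.

    static inline long syscall_get_error(struct task_struct *task,
                                         struct pt_regs *regs)
    {
            unsigned long error = regs->gprs[2];

    #ifdef CONFIG_COMPAT
            if (test_tsk_thread_flag(task, TIF_31BIT))
                    error = (long)(int)error;       /* sign-extend the 31-bit value */
    #endif
            return IS_ERR_VALUE(error) ? error : 0;
    }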
2020-06-30  x86/boot/compressed: Relax sed symbol type regex for LLVM ld.lld  (Ard Biesheuvel; 1 file, -1/+1)
commit bc310baf2ba381c648983c7f4748327f17324562 upstream. The final build stage of the x86 kernel captures some symbol addresses from the decompressor binary and copies them into zoffset.h. It uses sed with a regular expression that matches the address, symbol type and symbol name, and mangles the captured addresses and the names of symbols of interest into #define directives that are added to zoffset.h The symbol type is indicated by a single letter, which we match strictly: only letters in the set 'ABCDGRSTVW' are matched, even though the actual symbol type is relevant and therefore ignored. Commit bc7c9d620 ("efi/libstub/x86: Force 'hidden' visibility for extern declarations") made a change to the way external symbol references are classified, resulting in 'startup_32' now being emitted as a hidden symbol. This prevents the use of GOT entries to refer to this symbol via its absolute address, which recent toolchains (including Clang based ones) already avoid by default, making this change a no-op in the majority of cases. However, as it turns out, the LLVM linker classifies such hidden symbols as symbols with static linkage in fully linked ELF binaries, causing tools such as NM to output a lowercase 't' rather than an upper case 'T' for the type of such symbols. Since our sed expression only matches upper case letters for the symbol type, the line describing startup_32 is disregarded, resulting in a build error like the following arch/x86/boot/header.S:568:18: error: symbol 'ZO_startup_32' can not be undefined in a subtraction expression init_size: .long (0x00000000008fd000 - ZO_startup_32 + (((0x0000000001f6361c + ((0x0000000001f6361c >> 8) + 65536) - 0x00000000008c32e5) + 4095) & ~4095)) # kernel initialization size Given that we are only interested in the value of the symbol, let's match any character in the set 'a-zA-Z' instead. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-30  openrisc: Fix issue with argument clobbering for clone/fork  (Stafford Horne; 1 file, -2/+2)
[ Upstream commit 6bd140e14d9aaa734ec37985b8b20a96c0ece948 ] Working on the OpenRISC glibc port I found that sometimes clone was behaving strangely: the TLS data argument sent in r7 was always wrong. Further investigation revealed that the arguments were getting clobbered in the entry code. This patch removes the code that writes to the argument registers; it was likely some old code hanging around. This patch fixes this up for both clone and fork. The fork clobber is harmless but also useless, so remove it. Signed-off-by: Stafford Horne <shorne@gmail.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-30  powerpc/64s/pgtable: fix an undefined behaviour  (Qian Cai; 1 file, -4/+19)
[ Upstream commit c2e929b18cea6cbf71364f22d742d9aad7f4677a ] Booting a power9 server with hash MMU could trigger an undefined behaviour because pud_offset(p4d, 0) will do, 0 >> (PAGE_SHIFT:16 + PTE_INDEX_SIZE:8 + H_PMD_INDEX_SIZE:10) Fix it by converting pud_index() and friends to static inline functions. UBSAN: shift-out-of-bounds in arch/powerpc/mm/ptdump/ptdump.c:282:15 shift exponent 34 is too large for 32-bit type 'int' CPU: 6 PID: 1 Comm: swapper/0 Not tainted 5.6.0-rc4-next-20200303+ #13 Call Trace: dump_stack+0xf4/0x164 (unreliable) ubsan_epilogue+0x18/0x78 __ubsan_handle_shift_out_of_bounds+0x160/0x21c walk_pagetables+0x2cc/0x700 walk_pud at arch/powerpc/mm/ptdump/ptdump.c:282 (inlined by) walk_pagetables at arch/powerpc/mm/ptdump/ptdump.c:311 ptdump_check_wx+0x8c/0xf0 mark_rodata_ro+0x48/0x80 kernel_init+0x74/0x194 ret_from_kernel_thread+0x5c/0x74 Suggested-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Qian Cai <cai@lca.pw> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Link: https://lore.kernel.org/r/20200306044852.3236-1-cai@lca.pw Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-30  powerpc/ps3: Fix kexec shutdown hang  (Geoff Levand; 1 file, -10/+12)
[ Upstream commit 126554465d93b10662742128918a5fc338cda4aa ] The ps3_mm_region_destroy() and ps3_mm_vas_destroy() routines are called very late in the shutdown via kexec's mmu_cleanup_all routine. By the time mmu_cleanup_all runs it is too late to use udbg_printf, and calling it will cause PS3 systems to hang. Remove all debugging statements from ps3_mm_region_destroy() and ps3_mm_vas_destroy() and replace any error reporting with calls to lv1_panic. With this change builds with 'DEBUG' defined will not cause kexec reboots to hang, and builds with 'DEBUG' defined or not will end in lv1_panic if an error is encountered. Signed-off-by: Geoff Levand <geoff@infradead.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/7325c4af2b4c989c19d6a26b90b1fec9c0615ddf.1589049250.git.geoff@infradead.org Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-30  powerpc/pseries/ras: Fix FWNMI_VALID off by one  (Nicholas Piggin; 1 file, -2/+3)
[ Upstream commit deb70f7a35a22dffa55b2c3aac71bc6fb0f486ce ] This was discovered developing qemu fwnmi sreset support. This off-by-one bug means the last 16 bytes of the rtas area can not be used for a 16 byte save area. It's not a serious bug, and QEMU implementation has to retain a workaround for old kernels, but it's good to tighten it. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Acked-by: Mahesh Salgaonkar <mahesh@linux.ibm.com> Link: https://lore.kernel.org/r/20200508043408.886394-7-npiggin@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-30  powerpc/crashkernel: Take "mem=" option into account  (Pingfan Liu; 1 file, -3/+5)
[ Upstream commit be5470e0c285a68dc3afdea965032f5ddc8269d7 ] 'mem=" option is an easy way to put high pressure on memory during some test. Hence after applying the memory limit, instead of total mem, the actual usable memory should be considered when reserving mem for crashkernel. Otherwise the boot up may experience OOM issue. E.g. it would reserve 4G prior to the change and 512M afterward, if passing crashkernel="2G-4G:384M,4G-16G:512M,16G-64G:1G,64G-128G:2G,128G-:4G", and mem=5G on a 256G machine. This issue is powerpc specific because it puts higher priority on fadump and kdump reservation than on "mem=". Referring the following code: if (fadump_reserve_mem() == 0) reserve_crashkernel(); ... /* Ensure that total memory size is page-aligned. */ limit = ALIGN(memory_limit ?: memblock_phys_mem_size(), PAGE_SIZE); memblock_enforce_memory_limit(limit); While on other arches, the effect of "mem=" takes a higher priority and pass through memblock_phys_mem_size() before calling reserve_crashkernel(). Signed-off-by: Pingfan Liu <kernelfans@gmail.com> Reviewed-by: Hari Bathini <hbathini@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1585749644-4148-1-git-send-email-kernelfans@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-30  powerpc/perf/hv-24x7: Fix inconsistent output values incase multiple hv-24x7 events run  (Kajol Jain; 1 file, -10/+0)
[ Upstream commit b4ac18eead28611ff470d0f47a35c4e0ac080d9c ] Commit 2b206ee6b0df ("powerpc/perf/hv-24x7: Display change in counter values") added code to print the _change_ in the counter value rather than the raw value for 24x7 counters. In case of transactions, the event count is set to 0 at the beginning of the transaction. It also sets the event's prev_count to the raw value at the time of initialization. Because of setting the event count to 0, we are seeing some weird behaviour whenever we run multiple 24x7 events at a time. For example: command#: ./perf stat -e "{hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=0/, hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=1/}" -C 0 -I 1000 sleep 100 1.000121704 120 hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=0/ 1.000121704 5 hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=1/ 2.000357733 8 hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=0/ 2.000357733 10 hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=1/ 3.000495215 18,446,744,073,709,551,616 hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=0/ 3.000495215 18,446,744,073,709,551,616 hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=1/ 4.000641884 56 hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=0/ 4.000641884 18,446,744,073,709,551,616 hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=1/ 5.000791887 18,446,744,073,709,551,616 hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=0/ We get these large values when we use -I. As we are setting event_count to 0, in the interval case the overall event_count does not come in incremental order, since we may get a new delta smaller than the previous count. Because of this, when we print intervals we get a negative value, which creates these large values. This patch removes the part where we set event_count to 0 in the function 'h_24x7_event_read'. There won't be much impact as we do set event->hw.prev_count to the raw value at the time of initialization to print the change value. With this patch, on a power9 platform: command#: ./perf stat -e "{hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=0/, hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=1/}" -C 0 -I 1000 sleep 100 1.000117685 93 hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=0/ 1.000117685 1 hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=1/ 2.000349331 98 hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=0/ 2.000349331 2 hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=1/ 3.000495900 131 hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=0/ 3.000495900 4 hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=1/ 4.000645920 204 hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=0/ 4.000645920 61 hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=1/ 4.284169997 22 hv_24x7/PM_MCS01_128B_RD_DISP_PORT01,chip=0/ Suggested-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Signed-off-by: Kajol Jain <kjain@linux.ibm.com> Tested-by: Madhavan Srinivasan <maddy@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200525104308.9814-2-kjain@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-30  ARM: integrator: Add some Kconfig selections  (Linus Walleij; 1 file, -3/+4)
[ Upstream commit d2854bbe5f5c4b4bec8061caf4f2e603d8819446 ] The CMA and DMA_CMA Kconfig options need to be selected by the Integrator in order to produce boot console on some Integrator systems. The REGULATOR and REGULATOR_FIXED_VOLTAGE need to be selected in order to boot the system from an external MMC card when using MMCI/PL181 from the device tree probe path. Select these things directly from the Kconfig so we are sure to be able to bring the systems up with console from any device tree. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-20  ARM: tegra: Correct PL310 Auxiliary Control Register initialization  (Dmitry Osipenko; 1 file, -2/+2)
commit 35509737c8f958944e059d501255a0bf18361ba0 upstream. The PL310 Auxiliary Control Register shouldn't have the "Full line of zero" optimization bit being set before L2 cache is enabled. The L2X0 driver takes care of enabling the optimization by itself. This patch fixes a noisy error message on Tegra20 and Tegra30 telling that cache optimization is erroneously enabled without enabling it for the CPU: L2C-310: enabling full line of zeros but not enabled in Cortex-A9 Cc: <stable@vger.kernel.org> Signed-off-by: Dmitry Osipenko <digetx@gmail.com> Tested-by: Nicolas Chauvet <kwizart@gmail.com> Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-06-20  sparc64: fix misuses of access_process_vm() in genregs32_[sg]et()  (Al Viro; 1 file, -14/+3)
commit 142cd25293f6a7ecbdff4fb0af17de6438d46433 upstream. We do need access_process_vm() to access the target's reg_window. However, access to caller's memory (storing the result in genregs32_get(), fetching the new values in case of genregs32_set()) should be done by normal uaccess primitives. Fixes: ad4f95764040 ([SPARC64]: Fix user accesses in regset code.) Cc: stable@kernel.org Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-06-20  sparc32: fix register window handling in genregs32_[gs]et()  (Al Viro; 1 file, -130/+98)
commit cf51e129b96847f969bfb8af1ee1516a01a70b39 upstream. It needs access_process_vm() if the traced process does not share mm with the caller. Solution is similar to what sparc64 does. Note that genregs32_set() is only ever called with pos being 0 or 32 * sizeof(u32) (the latter - as part of PTRACE_SETREGS handling). Cc: stable@kernel.org Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-06-20  MIPS: Fix IRQ tracing when call handle_fpe() and handle_msa_fpe()  (YuanJunQing; 1 file, -3/+3)
[ Upstream commit 31e1b3efa802f97a17628dde280006c4cee4ce5e ] Register "a1" is not saved in this function. When CONFIG_TRACE_IRQFLAGS is enabled, the TRACE_IRQS_OFF macro will call trace_hardirqs_off(), and this may change register "a1". The changed register "a1" will then be sent as an argument to do_fpe() and do_msa_fpe(). Signed-off-by: YuanJunQing <yuanjunqing66@163.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-20  m68k: mac: Don't call via_flush_cache() on Mac IIfx  (Finn Thain; 3 files, -20/+8)
[ Upstream commit bcc44f6b74106b31f0b0408b70305a40360d63b7 ] There is no VIA2 chip on the Mac IIfx, so don't call via_flush_cache(). This avoids a boot crash which appeared in v5.4. printk: console [ttyS0] enabled printk: bootconsole [debug0] disabled printk: bootconsole [debug0] disabled Calibrating delay loop... 9.61 BogoMIPS (lpj=48064) pid_max: default: 32768 minimum: 301 Mount-cache hash table entries: 1024 (order: 0, 4096 bytes, linear) Mountpoint-cache hash table entries: 1024 (order: 0, 4096 bytes, linear) devtmpfs: initialized random: get_random_u32 called from bucket_table_alloc.isra.27+0x68/0x194 with crng_init=0 clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns futex hash table entries: 256 (order: -1, 3072 bytes, linear) NET: Registered protocol family 16 Data read fault at 0x00000000 in Super Data (pc=0x8a6a) BAD KERNEL BUSERR Oops: 00000000 Modules linked in: PC: [<00008a6a>] via_flush_cache+0x12/0x2c SR: 2700 SP: 01c1fe3c a2: 01c24000 d0: 00001119 d1: 0000000c d2: 00012000 d3: 0000000f d4: 01c06840 d5: 00033b92 a0: 00000000 a1: 00000000 Process swapper (pid: 1, task=01c24000) Frame format=B ssw=0755 isc=0200 isb=fff7 daddr=00000000 dobuf=01c1fed0 baddr=00008a6e dibuf=0000004e ver=f Stack from 01c1fec4: 01c1fed0 00007d7e 00010080 01c1fedc 0000792e 00000001 01c1fef4 00006b40 01c80000 00040000 00000006 00000003 01c1ff1c 004a545e 004ff200 00040000 00000000 00000003 01c06840 00033b92 004a5410 004b6c88 01c1ff84 000021e2 00000073 00000003 01c06840 00033b92 0038507a 004bb094 004b6ca8 004b6c88 004b6ca4 004b6c88 000021ae 00020002 00000000 01c0685d 00000000 01c1ffb4 0049f938 00409c85 01c06840 0045bd40 00000073 00000002 00000002 00000000 Call Trace: [<00007d7e>] mac_cache_card_flush+0x12/0x1c [<00010080>] fix_dnrm+0x2/0x18 [<0000792e>] cache_push+0x46/0x5a [<00006b40>] arch_dma_prep_coherent+0x60/0x6e [<00040000>] switched_to_dl+0x76/0xd0 [<004a545e>] dma_atomic_pool_init+0x4e/0x188 [<00040000>] switched_to_dl+0x76/0xd0 [<00033b92>] parse_args+0x0/0x370 [<004a5410>] dma_atomic_pool_init+0x0/0x188 [<000021e2>] do_one_initcall+0x34/0x1be [<00033b92>] parse_args+0x0/0x370 [<0038507a>] strcpy+0x0/0x1e [<000021ae>] do_one_initcall+0x0/0x1be [<00020002>] do_proc_dointvec_conv+0x54/0x74 [<0049f938>] kernel_init_freeable+0x126/0x190 [<0049f94c>] kernel_init_freeable+0x13a/0x190 [<004a5410>] dma_atomic_pool_init+0x0/0x188 [<00041798>] complete+0x0/0x3c [<000b9b0c>] kfree+0x0/0x20a [<0038df98>] schedule+0x0/0xd0 [<0038d604>] kernel_init+0x0/0xda [<0038d610>] kernel_init+0xc/0xda [<0038d604>] kernel_init+0x0/0xda [<00002d38>] ret_from_kernel_thread+0xc/0x14 Code: 0000 2079 0048 10da 2279 0048 10c8 d3c8 <1011> 0200 fff7 1280 d1f9 0048 10c8 1010 0000 0008 1080 4e5e 4e75 4e56 0000 2039 Disabling lock debugging due to kernel taint Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b Thanks to Stan Johnson for capturing the console log and running git bisect. Git bisect said commit 8e3a68fb55e0 ("dma-mapping: make dma_atomic_pool_init self-contained") is the first "bad" commit. I don't know why. Perhaps mach_l2_flush first became reachable with that commit. 
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Reported-and-tested-by: Stan Johnson <userm57@yahoo.com> Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Cc: Joshua Thompson <funaho@jurai.org> Link: https://lore.kernel.org/r/b8bbeef197d6b3898e82ed0d231ad08f575a4b34.1589949122.git.fthain@telegraphics.com.au Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-20  x86/mm: Stop printing BRK addresses  (Arvind Sankar; 1 file, -2/+0)
[ Upstream commit 67d631b7c05eff955ccff4139327f0f92a5117e5 ] This currently leaks kernel physical addresses into userspace. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Kees Cook <keescook@chromium.org> Acked-by: Dave Hansen <dave.hansen@intel.com> Link: https://lkml.kernel.org/r/20200229231120.1147527-1-nivedita@alum.mit.edu Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-20  mips: Add udelay lpj numbers adjustment  (Serge Semin; 1 file, -0/+70)
[ Upstream commit ed26aacfb5f71eecb20a51c4467da440cb719d66 ] Loops-per-jiffy is a special number which represents the number of noop-loop cycles per CPU-scheduler quantum (jiffy). Aside from the CPU-specific implementation, it depends on the CPU frequency. So when a platform has the CPU frequency fixed, we have no problem and the current udelay interface will work just fine. But as soon as a CPU-freq driver is enabled and the core frequency changes, we'll end up with distorted udelays. In order to fix this we have to adjust the per-CPU udelay_val (the same as the global loops_per_jiffy) number accordingly. This can be done in the CPU-freq transition event handler. We subscribe to that event in the MIPS arch time-initialization method. Co-developed-by: Alexey Malahov <Alexey.Malahov@baikalelectronics.ru> Signed-off-by: Alexey Malahov <Alexey.Malahov@baikalelectronics.ru> Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Paul Burton <paulburton@kernel.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Rob Herring <robh+dt@kernel.org> Cc: devicetree@vger.kernel.org Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
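A simplified sketch of the mechanism described above: a cpufreq transition notifier that rescales the per-CPU udelay_val when the core clock changes. The exact struct cpufreq_freqs layout differs between kernel versions (which is what the "MIPS: Fix build for LTS kernel" entry above is about), and updating every online CPU is a simplification, so take this as an outline rather than the exact patch:

    static int lpj_cpufreq_notifier(struct notifier_block *nb,
                                    unsigned long action, void *data)
    {
            struct cpufreq_freqs *freqs = data;
            int cpu;

            if (action == CPUFREQ_POSTCHANGE)
                    for_each_online_cpu(cpu)
                            cpu_data[cpu].udelay_val =
                                    cpufreq_scale(cpu_data[cpu].udelay_val,
                                                  freqs->old, freqs->new);

            return NOTIFY_OK;
    }

    static struct notifier_block lpj_nb = {
            .notifier_call = lpj_cpufreq_notifier,
    };

    /* registered from the arch time init with:
     * cpufreq_register_notifier(&lpj_nb, CPUFREQ_TRANSITION_NOTIFIER);
     */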
2020-06-20  x86/boot: Correct relocation destination on old linkers  (Arvind Sankar; 2 files, -2/+4)
[ Upstream commit 5214028dd89e49ba27007c3ee475279e584261f0 ] For the 32-bit kernel, as described in 6d92bc9d483a ("x86/build: Build compressed x86 kernels as PIE"), pre-2.26 binutils generates R_386_32 relocations in PIE mode. Since the startup code does not perform relocation, any reloc entry with R_386_32 will remain as 0 in the executing code. Commit 974f221c84b0 ("x86/boot: Move compressed kernel to the end of the decompression buffer") added a new symbol _end but did not mark it hidden, which doesn't give the correct offset on older linkers. This causes the compressed kernel to be copied beyond the end of the decompression buffer, rather than flush against it. This region of memory may be reserved or already allocated for other purposes by the bootloader. Mark _end as hidden to fix. This changes the relocation from R_386_32 to R_386_RELATIVE even on the pre-2.26 binutils. For 64-bit, this is not strictly necessary, as the 64-bit kernel is only built as PIE if the linker supports -z noreloc-overflow, which implies binutils-2.27+, but for consistency, mark _end as hidden here too. The below illustrates the before/after impact of the patch using binutils-2.25 and gcc-4.6.4 (locally compiled from source) and QEMU. Disassembly before patch: 48: 8b 86 60 02 00 00 mov 0x260(%esi),%eax 4e: 2d 00 00 00 00 sub $0x0,%eax 4f: R_386_32 _end Disassembly after patch: 48: 8b 86 60 02 00 00 mov 0x260(%esi),%eax 4e: 2d 00 f0 76 00 sub $0x76f000,%eax 4f: R_386_RELATIVE *ABS* Dump from extract_kernel before patch: early console in extract_kernel input_data: 0x0207c098 <--- this is at output + init_size input_len: 0x0074fef1 output: 0x01000000 output_len: 0x00fa63d0 kernel_total_size: 0x0107c000 needed_size: 0x0107c000 Dump from extract_kernel after patch: early console in extract_kernel input_data: 0x0190d098 <--- this is at output + init_size - _end input_len: 0x0074fef1 output: 0x01000000 output_len: 0x00fa63d0 kernel_total_size: 0x0107c000 needed_size: 0x0107c000 Fixes: 974f221c84b0 ("x86/boot: Move compressed kernel to the end of the decompression buffer") Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200207214926.3564079-1-nivedita@alum.mit.edu Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-20  mips: cm: Fix an invalid error code of INTVN_*_ERR  (Serge Semin; 1 file, -3/+3)
[ Upstream commit 8a0efb8b101665a843205eab3d67ab09cb2d9a8d ] Commit 3885c2b463f6 ("MIPS: CM: Add support for reporting CM cache errors") adds the cm2_causes[] array mapping error type IDs to pointers to short description strings. There is a mistake in the table: according to the MIPS32 manual, CM2_ERROR_TYPE = {17,18} correspond to INTVN_WR_ERR and INTVN_RD_ERR, while the table claims they have codes {0x17,0x18}. This is obviously a hex-dec copy-paste bug. Moreover, codes {0x18 - 0x1a} indicate L2 ECC errors. Fixes: 3885c2b463f6 ("MIPS: CM: Add support for reporting CM cache errors") Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Cc: Alexey Malahov <Alexey.Malahov@baikalelectronics.ru> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Paul Burton <paulburton@kernel.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Rob Herring <robh+dt@kernel.org> Cc: linux-pm@vger.kernel.org Cc: devicetree@vger.kernel.org Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-20  MIPS: Truncate link address into 32bit for 32bit kernel  (Jiaxun Yang; 3 files, -3/+14)
[ Upstream commit ff487d41036035376e47972c7c522490b839ab37 ] LLD fails to link vmlinux with a 64bit load address for a 32bit ELF, while bfd silently truncates the 64bit address to 32bit. To fix the LLD build, we should truncate the load address provided by the platform to 32bit for a 32bit kernel. Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com> Link: https://github.com/ClangBuiltLinux/linux/issues/786 Link: https://sourceware.org/bugzilla/show_bug.cgi?id=25784 Reviewed-by: Fangrui Song <maskray@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Tested-by: Nathan Chancellor <natechancellor@gmail.com> Cc: Maciej W. Rozycki <macro@linux-mips.org> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-20  powerpc/spufs: fix copy_to_user while atomic  (Jeremy Kerr; 1 file, -38/+75)
[ Upstream commit 88413a6bfbbe2f648df399b62f85c934460b7a4d ] Currently, we may perform a copy_to_user (through simple_read_from_buffer()) while holding a context's register_lock, while accessing the context save area. This change uses a temporary buffer for the context save area data, which we then pass to simple_read_from_buffer. Includes changes from Christoph Hellwig <hch@lst.de>. Fixes: bf1ab978be23 ("[POWERPC] coredump: Add SPU elf notes to coredump.") Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Reviewed-by: Arnd Bergmann <arnd@arndb.de> [hch: renamed to function to avoid ___-prefixes] Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Sasha Levin <sashal@kernel.org>
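A hedged sketch of the pattern the fix adopts: snapshot the locked save area into a temporary buffer, drop the lock, and only then perform the user copy via simple_read_from_buffer(). The function name and the ctx->csa.register_lock field are illustrative rather than copied from the patch.

    static ssize_t read_locked_area(struct spu_context *ctx, char __user *buf,
                                    size_t len, loff_t *pos,
                                    const void *area, size_t area_size)
    {
            void *tmp;
            ssize_t ret;

            tmp = kmalloc(area_size, GFP_KERNEL);
            if (!tmp)
                    return -ENOMEM;

            spin_lock(&ctx->csa.register_lock);     /* illustrative lock name */
            memcpy(tmp, area, area_size);           /* no user faults while atomic */
            spin_unlock(&ctx->csa.register_lock);

            ret = simple_read_from_buffer(buf, len, pos, tmp, area_size);
            kfree(tmp);
            return ret;
    }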
2020-06-20  MIPS: Make sparse_init() using top-down allocation  (Tiezhu Yang; 1 file, -0/+10)
[ Upstream commit 269b3a9ac538c4ae87f84be640b9fa89914a2489 ] In the current code, if CONFIG_SWIOTLB is set, when failed to get IO TLB memory from the low pages by plat_swiotlb_setup(), it may lead to the boot process failed with kernel panic. (1) On the Loongson and SiByte platform arch/mips/loongson64/dma.c arch/mips/sibyte/common/dma.c void __init plat_swiotlb_setup(void) { swiotlb_init(1); } kernel/dma/swiotlb.c void __init swiotlb_init(int verbose) { ... vstart = memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE); if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, verbose)) return; ... pr_warn("Cannot allocate buffer"); no_iotlb_memory = true; } phys_addr_t swiotlb_tbl_map_single() { ... if (no_iotlb_memory) panic("Can not allocate SWIOTLB buffer earlier ..."); ... } (2) On the Cavium OCTEON platform arch/mips/cavium-octeon/dma-octeon.c void __init plat_swiotlb_setup(void) { ... octeon_swiotlb = memblock_alloc_low(swiotlbsize, PAGE_SIZE); if (!octeon_swiotlb) panic("%s: Failed to allocate %zu bytes align=%lx\n", __func__, swiotlbsize, PAGE_SIZE); ... } Because IO_TLB_DEFAULT_SIZE is 64M, if the rest size of low memory is less than 64M when call plat_swiotlb_setup(), we can easily reproduce the panic case. In order to reduce the possibility of kernel panic when failed to get IO TLB memory under CONFIG_SWIOTLB, it is better to allocate low memory as small as possible before plat_swiotlb_setup(), so make sparse_init() using top-down allocation. Reported-by: Juxin Gao <gaojuxin@loongson.cn> Co-developed-by: Juxin Gao <gaojuxin@loongson.cn> Signed-off-by: Juxin Gao <gaojuxin@loongson.cn> Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-20  ARM: 8978/1: mm: make act_mm() respect THREAD_SIZE  (Linus Walleij; 1 file, -1/+2)
[ Upstream commit e1de94380af588bdf6ad6f0cc1f75004c35bc096 ] Recent work with KASan exposed the following hard-coded bitmask in arch/arm/mm/proc-macros.S: bic rd, sp, #8128 bic rd, rd, #63 This forms the bitmask 0x1FFF, which coincides with (PAGE_SIZE << THREAD_SIZE_ORDER) - 1; this code was assuming that THREAD_SIZE is always 8K (8192). As KASan was increasing THREAD_SIZE_ORDER to 2, I ran into this bug. Fix it with this little one-liner suggested by Ard: bic rd, sp, #(THREAD_SIZE - 1) & ~63 where THREAD_SIZE is defined using THREAD_SIZE_ORDER. We also have to include <linux/const.h> since THREAD_SIZE expands to use the _AC() macro. Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Florian Fainelli <f.fainelli@gmail.com> Suggested-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Sasha Levin <sashal@kernel.org>