path: root/arch
2021-04-10  bpf, x86: Validate computation of branch displacements for x86-32  (Piotr Krysiuk; 1 file, +10/-1)

commit 26f55a59dc65ff77cd1c4b37991e26497fc68049 upstream.

The branch displacement logic in the BPF JIT compilers for x86 assumes that, for any generated branch instruction, the distance cannot increase between optimization passes. But this assumption can be violated due to how the distances are computed. Specifically, whenever a backward branch is processed in do_jit(), the distance is computed by subtracting the positions in the machine code from different optimization passes. This is because part of addrs[] is already updated for the current optimization pass, before the branch instruction is visited. And so the optimizer can expand blocks of machine code in some cases.

This can confuse the optimizer logic, where it assumes that a fixed point has been reached for all machine code blocks once the total program size stops changing. And then the JIT compiler can output abnormal machine code containing incorrect branch displacements.

To mitigate this issue, we assert that a fixed point is reached while populating the output image. This rejects any problematic programs. The issue affects both x86-32 and x86-64. We mitigate separately to ease backporting.

Signed-off-by: Piotr Krysiuk <piotras@gmail.com>
Reviewed-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
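A condensed sketch of where that assertion sits in the JIT's pass loop. This is illustrative, not the upstream diff: the loop bound and the alloc_image() helper and out_free label are placeholders, and error handling is reduced to rejecting the program.

  for (pass = 0; pass < MAX_PASSES; pass++) {
  	proglen = do_jit(prog, addrs, image, oldproglen, &ctx);

  	if (image) {
  		/*
  		 * Fixed-point assertion: while populating the output
  		 * image, the layout must no longer move. If it did,
  		 * the already-emitted branch displacements are stale,
  		 * so reject the program rather than return bad code.
  		 */
  		if (proglen != oldproglen) {
  			prog = orig_prog;
  			goto out_free;
  		}
  		break;
  	}

  	/* Only allocate the image once the size has stabilized. */
  	if (proglen == oldproglen)
  		image = alloc_image(proglen);	/* hypothetical helper */

  	oldproglen = proglen;
  }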
2021-04-10  bpf, x86: Validate computation of branch displacements for x86-64  (Piotr Krysiuk; 1 file, +10/-1)

commit e4d4d456436bfb2fe412ee2cd489f7658449b098 upstream.

The branch displacement logic in the BPF JIT compilers for x86 assumes that, for any generated branch instruction, the distance cannot increase between optimization passes. But this assumption can be violated due to how the distances are computed. Specifically, whenever a backward branch is processed in do_jit(), the distance is computed by subtracting the positions in the machine code from different optimization passes. This is because part of addrs[] is already updated for the current optimization pass, before the branch instruction is visited. And so the optimizer can expand blocks of machine code in some cases.

This can confuse the optimizer logic, where it assumes that a fixed point has been reached for all machine code blocks once the total program size stops changing. And then the JIT compiler can output abnormal machine code containing incorrect branch displacements.

To mitigate this issue, we assert that a fixed point is reached while populating the output image. This rejects any problematic programs. The issue affects both x86-32 and x86-64. We mitigate separately to ease backporting.

Signed-off-by: Piotr Krysiuk <piotras@gmail.com>
Reviewed-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-10  ia64: fix format strings for err_inject  (Sergei Trofimovich; 1 file, +11/-11)

[ Upstream commit 95d44a470a6814207d52dd6312203b0f4ef12710 ]

Fix warning with %lx / u64 mismatch:

  arch/ia64/kernel/err_inject.c: In function 'show_resources':
  arch/ia64/kernel/err_inject.c:62:22: warning: format '%lx' expects argument of type 'long unsigned int', but argument 3 has type 'u64' {aka 'long long unsigned int'}
     62 |  return sprintf(buf, "%lx", name[cpu]); \
        |                      ^~~~~~~

Link: https://lkml.kernel.org/r/20210313104312.1548232-1-slyfox@gentoo.org
Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
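The mismatch itself is mechanical; one way to silence it is shown below (an illustration of the pattern only; the upstream hunk may instead adjust the types rather than the specifier):

  /* name[cpu] is a u64 ('unsigned long long' with int-ll64 types), so
   * %lx is wrong; %llx matches, as would an explicit cast. */
  return sprintf(buf, "%llx", name[cpu]);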
2021-04-10  ia64: mca: allocate early mca with GFP_ATOMIC  (Sergei Trofimovich; 1 file, +1/-1)

[ Upstream commit f2a419cf495f95cac49ea289318b833477e1a0e2 ]

The sleep warning happens at early boot, right at secondary CPU activation:

  smp: Bringing up secondary CPUs ...
  BUG: sleeping function called from invalid context at mm/page_alloc.c:4942
  in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 0, name: swapper/1
  CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.12.0-rc2-00007-g79e228d0b611-dirty #99
  ..
  Call Trace:
   show_stack+0x90/0xc0
   dump_stack+0x150/0x1c0
   ___might_sleep+0x1c0/0x2a0
   __might_sleep+0xa0/0x160
   __alloc_pages_nodemask+0x1a0/0x600
   alloc_page_interleave+0x30/0x1c0
   alloc_pages_current+0x2c0/0x340
   __get_free_pages+0x30/0xa0
   ia64_mca_cpu_init+0x2d0/0x3a0
   cpu_init+0x8b0/0x1440
   start_secondary+0x60/0x700
   start_ap+0x750/0x780
  Fixed BSP b0 value from CPU 1

As I understand it, interrupts are not enabled yet and the system has plenty of memory, so there is little chance to actually sleep; switching to GFP_ATOMIC should be a no-op.

Link: https://lkml.kernel.org/r/20210315085045.204414-1-slyfox@gentoo.org
Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
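The fix is a single flag change; roughly (variable names approximated from ia64_mca_cpu_init()):

  /* Early secondary-CPU bringup runs with IRQs disabled, so this
   * allocation must not sleep; GFP_ATOMIC makes that explicit. */
  data = (void *)__get_free_pages(GFP_ATOMIC, get_order(sz));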
2021-04-10  arm64: kernel: disable CNP on Carmel  (Rich Wiley; 4 files, +24/-2)

[ Upstream commit 20109a859a9b514eb10c22b8a14b5704ffe93897 ]

On NVIDIA Carmel cores, CNP behaves differently than it does on standard ARM cores. On Carmel, if two cores have CNP enabled and share an L2 TLB entry created by core0 for a specific ASID, a non-shareable TLBI from core1 may still see the shared entry. On standard ARM cores, that TLBI will invalidate the shared entry as well.

This causes issues with patchsets that attempt to do local TLBIs based on cpumasks instead of broadcast TLBIs. Avoid these issues by disabling CNP support for NVIDIA Carmel cores.

Signed-off-by: Rich Wiley <rwiley@nvidia.com>
Link: https://lore.kernel.org/r/20210324002809.30271-1-rwiley@nvidia.com
[will: Fix pre-existing whitespace issue]
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-04-10  x86/build: Turn off -fcf-protection for realmode targets  (Arnd Bergmann; 1 file, +1/-1)

[ Upstream commit 9fcb51c14da2953de585c5c6e50697b8a6e91a7b ]

The new Ubuntu GCC packages turn on -fcf-protection globally, which causes a build failure in the x86 realmode code:

  cc1: error: ‘-fcf-protection’ is not compatible with this target

Turn it off explicitly on compilers that understand this option.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20210323124846.1584944-1-arnd@kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-04-10  bpf, x86: Use kvmalloc_array instead of kmalloc_array in bpf_jit_comp  (Yonghong Song; 1 file, +2/-2)

[ Upstream commit de920fc64cbaa031f947e9be964bda05fd090380 ]

x86 bpf_jit_comp.c used kmalloc_array to store jited addresses for each bpf insn. With a large bpf program, we have seen the following allocation failures in our production server:

  page allocation failure: order:5, mode:0x40cc0(GFP_KERNEL|__GFP_COMP), nodemask=(null),cpuset=/,mems_allowed=0
  Call Trace:
   dump_stack+0x50/0x70
   warn_alloc.cold.120+0x72/0xd2
   ? __alloc_pages_direct_compact+0x157/0x160
   __alloc_pages_slowpath+0xcdb/0xd00
   ? get_page_from_freelist+0xe44/0x1600
   ? vunmap_page_range+0x1ba/0x340
   __alloc_pages_nodemask+0x2c9/0x320
   kmalloc_order+0x18/0x80
   kmalloc_order_trace+0x1d/0xa0
   bpf_int_jit_compile+0x1e2/0x484
   ? kmalloc_order_trace+0x1d/0xa0
   bpf_prog_select_runtime+0xc3/0x150
   bpf_prog_load+0x480/0x720
   ? __mod_memcg_lruvec_state+0x21/0x100
   __do_sys_bpf+0xc31/0x2040
   ? close_pdeo+0x86/0xe0
   do_syscall_64+0x42/0x110
   entry_SYSCALL_64_after_hwframe+0x44/0xa9
  RIP: 0033:0x7f2f300f7fa9
  Code: Bad RIP value.

Dumped assembly:

  ffffffff810b6d70 <bpf_int_jit_compile>:
  ; {
  ffffffff810b6d70: e8 eb a5 b4 00    callq 0xffffffff81c01360 <__fentry__>
  ffffffff810b6d75: 41 57             pushq %r15
  ...
  ffffffff810b6f39: e9 72 fe ff ff    jmp 0xffffffff810b6db0 <bpf_int_jit_compile+0x40>
  ; addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
  ffffffff810b6f3e: 8b 45 0c          movl 12(%rbp), %eax
  ; return __kmalloc(bytes, flags);
  ffffffff810b6f41: be c0 0c 00 00    movl $3264, %esi
  ; addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
  ffffffff810b6f46: 8d 78 01          leal 1(%rax), %edi
  ; if (unlikely(check_mul_overflow(n, size, &bytes)))
  ffffffff810b6f49: 48 c1 e7 02       shlq $2, %rdi
  ; return __kmalloc(bytes, flags);
  ffffffff810b6f4d: e8 8e 0c 1d 00    callq 0xffffffff81287be0 <__kmalloc>
  ; if (!addrs) {
  ffffffff810b6f52: 48 85 c0          testq %rax, %rax

Change kmalloc_array() to kvmalloc_array() to avoid potential allocation errors for big bpf programs.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210309015647.3657852-1-yhs@fb.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
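The resulting allocation pattern, in context: kvmalloc_array() tries kmalloc first and transparently falls back to vmalloc for large sizes, and kvfree() releases either flavor.

  addrs = kvmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
  if (!addrs) {
  	prog = orig_prog;
  	goto out_addrs;
  }

  /* ... JIT passes run here ... */

  kvfree(addrs);	/* frees both kmalloc- and vmalloc-backed buffers */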
2021-04-10  ARM: dts: am33xx: add aliases for mmc interfaces  (Mans Rullgard; 1 file, +3/-0)

[ Upstream commit 9bbce32a20d6a72c767a7f85fd6127babd1410ac ]

Without DT aliases, the numbering of mmc interfaces is unpredictable. Adding them makes it possible to refer to devices consistently. The popular suggestion to use UUIDs obviously doesn't work with a blank device fresh from the factory. See commit fa2d0aa96941 ("mmc: core: Allow setting slot index via device tree alias") for more discussion.

Signed-off-by: Mans Rullgard <mans@mansr.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-04-07  riscv: evaluate put_user() arg before enabling user access  (Ben Dooks; 1 file, +5/-2)

commit 285a76bb2cf51b0c74c634f2aaccdb93e1f2a359 upstream.

The <asm/uaccess.h> header has a problem with put_user(a, ptr) if the 'a' is not a simple variable, such as a function. This can lead to the compiler producing code as so:

  1: enable_user_access()
  2: evaluate 'a' into register 'r'
  3: put 'r' to 'ptr'
  4: disable_user_access()

The issue is that 'a' is now being evaluated with the user memory protections disabled. So we try and force the evaluation by assigning 'x' to __val at the start, and hoping the compiler barriers in enable_user_access() do the job of ordering step 2 before step 1.

This has shown up in a bug where 'a' sleeps and thus schedules out and loses the SR_SUM flag. This isn't sufficient to fully fix, but should reduce the window of opportunity. The first instance of this we found is in schedule_tail(), where the code does:

  $ less -N kernel/sched/core.c
  4263 if (current->set_child_tid)
  4264         put_user(task_pid_vnr(current), current->set_child_tid);

Here, task_pid_vnr(current) is called within the block that has enabled the user memory access. This can be made worse with KASAN, which makes task_pid_vnr() a rather large call with plenty of opportunity to sleep.

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Reported-by: syzbot+e74b94fe601ab9552d69@syzkaller.appspotmail.com
Suggested-by: Arnd Bergman <arnd@arndb.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

--
Changes since v1:
 - fixed formatting and updated the patch description with more info
Changes since v2:
 - fixed commenting on __put_user() (schwab@linux-m68k.org)
Change since v3:
 - fixed RFC in patch title. Should be ready to merge.

Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
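A simplified sketch of the macro shape after the change: the argument is evaluated into a local before user access is enabled, so its side effects (including sleeping) happen outside the SR_SUM window. This paraphrases the idea rather than the real riscv internals; __put_user_raw() is an illustrative name.

  #define put_user(x, ptr)						\
  ({									\
  	__typeof__(*(ptr)) __val = (x);	/* evaluate 'x' first ...  */	\
  	long __err;							\
  									\
  	enable_user_access();		/* ... then open the window */	\
  	__err = __put_user_raw(__val, (ptr));				\
  	disable_user_access();						\
  	__err;								\
  })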
2021-04-07  powerpc/mm/book3s64: Use the correct storage key value when calling H_PROTECT  (Aneesh Kumar K.V; 1 file, +2/-1)

[ Upstream commit 53f1d31708f6240e4615b0927df31f182e389e2f ]

H_PROTECT expects the flag value to include flags:
AVPN, pp0, pp1, pp2, key0-key4, Noexec, CMO Option flags

This patch updates hpte_updatepp() to fetch the storage key value from the linux page table and use the same in the H_PROTECT hcall. native_hpte_updatepp() is not updated because the kernel doesn't clear the existing storage key value there. The kernel also doesn't use the hpte_updatepp() callback for updating storage keys.

This fixes the below kernel crash observed with KUAP enabled:

  BUG: Unable to handle kernel data access on write at 0xc009fffffc440000
  Faulting instruction address: 0xc0000000000b7030
  Key fault AMR: 0xfcffffffffffffff IAMR: 0xc0000077bc498100
  Found HPTE: v = 0x40070adbb6fffc05 r = 0x1ffffffffff1194
  Oops: Kernel access of bad area, sig: 11 [#1]
  LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
  ...
  CFAR: c000000000010100 DAR: c009fffffc440000 DSISR: 02200000 IRQMASK: 0
  ...
  NIP memset+0x68/0x104
  LR  pcpu_alloc+0x54c/0xb50
  Call Trace:
   pcpu_alloc+0x55c/0xb50 (unreliable)
   blk_stat_alloc_callback+0x94/0x150
   blk_mq_init_allocated_queue+0x64/0x560
   blk_mq_init_queue+0x54/0xb0
   scsi_mq_alloc_queue+0x30/0xa0
   scsi_alloc_sdev+0x1cc/0x300
   scsi_probe_and_add_lun+0xb50/0x1020
   __scsi_scan_target+0x17c/0x790
   scsi_scan_channel+0x90/0xe0
   scsi_scan_host_selected+0x148/0x1f0
   do_scan_async+0x2c/0x2a0
   async_run_entry_fn+0x78/0x220
   process_one_work+0x264/0x540
   worker_thread+0xa8/0x600
   kthread+0x190/0x1a0
   ret_from_kernel_thread+0x5c/0x6c

With KUAP enabled the kernel uses storage key 3 for all its translations. But as shown by the debug print, in this specific case we have the hash page table entry created with key value 0:

  Found HPTE: v = 0x40070adbb6fffc05 r = 0x1ffffffffff1194

and DSISR indicates a key fault.

This can happen due to a parallel fault on the same EA by different CPUs:

  CPU 0                               CPU 1
  fault on X
  H_PAGE_BUSY set
                                      fault on X
  finish fault handling and
  clear H_PAGE_BUSY
                                      check for H_PAGE_BUSY
                                      continue with fault handling.

This implies CPU 1 will end up calling hpte_updatepp() for address X, and the kernel updates the hash pte entry with key 0.

Fixes: d94b827e89dc ("powerpc/book3s64/kuap: Use Key 3 for kernel mapping with hash translation")
Reported-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Debugged-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210326070755.304625-1-aneesh.kumar@linux.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-04-07  powerpc/pseries/mobility: handle premature return from H_JOIN  (Nathan Lynch; 1 file, +25/-1)

[ Upstream commit 274cb1ca2e7ce02cab56f5f4c61a74aeb566f931 ]

The pseries join/suspend sequence in its current form was written with the assumption that it was the only user of H_PROD and that it needn't handle spurious successful returns from H_JOIN. That's wrong; powerpc's paravirt spinlock code uses H_PROD, and CPUs entering do_join() can be woken prematurely from H_JOIN with a status of H_SUCCESS as a result. This causes all CPUs to exit the sequence early, preventing suspend from occurring at all.

Add a 'done' boolean flag to the pseries_suspend_info struct, and have the waking thread set it before waking the other threads. Threads which receive H_SUCCESS from H_JOIN retry if the 'done' flag is still unset, as sketched below.

Fixes: 9327dc0aeef3 ("powerpc/pseries/mobility: use stop_machine for join/suspend")
Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210315080045.460331-3-nathanl@linux.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
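An illustrative fragment of that retry logic (field names follow the commit text; the helpers marked "condensed" stand in for the surrounding do_join() code):

  retry:
  	hvrc = plpar_hcall_norets(H_JOIN);
  	switch (hvrc) {
  	case H_CONTINUE:
  		/*
  		 * All other threads are joined; this thread performs
  		 * the suspend, then marks it done before waking
  		 * (H_PROD'ing) the joined threads.
  		 */
  		do_suspend_work();		/* condensed */
  		WRITE_ONCE(info->done, true);
  		prod_joined_threads();		/* condensed */
  		break;
  	case H_SUCCESS:
  		/*
  		 * Woken from H_JOIN -- but paravirt spinlocks also use
  		 * H_PROD, so the wakeup may be spurious. Only proceed
  		 * if the waking thread really completed the suspend.
  		 */
  		if (!READ_ONCE(info->done))
  			goto retry;
  		break;
  	}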
2021-04-07  powerpc/pseries/mobility: use struct for shared state  (Nathan Lynch; 1 file, +19/-3)

[ Upstream commit e834df6cfc71d8e5ce2c27a0184145ea125c3f0f ]

The atomic_t counter is the only shared state for the join/suspend sequence so far, but that will change. Contain it in a struct (pseries_suspend_info), and document its intended use. No functional change.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210315080045.460331-2-nathanl@linux.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-04-07  s390/vdso: fix tod_steering_delta type  (Heiko Carstens; 1 file, +1/-1)

commit b24bacd67ffddd9192c4745500fd6f73dbfe565e upstream.

The s390 specific vdso function __arch_get_hw_counter() is supposed to consider tod clock steering. If a tod clock steering event happens and the tod clock is set to a new value, __arch_get_hw_counter() will not return the real tod clock value but slowly drift it from the old delta until the returned value finally matches the real tod clock value again.

Unfortunately the type of tod_steering_delta is unsigned while it is supposed to be signed. Whether tod_steering_delta is negative or positive determines the direction in which the vdso code drifts the clock value. The worst case is now that, instead of drifting the clock slowly, it jumps in the opposite direction by a factor of two.

Fix this by simply making tod_steering_delta signed.

Fixes: 4bff8cb54502 ("s390: convert to GENERIC_VDSO")
Cc: <stable@vger.kernel.org> # 5.10
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
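The fix is a one-word type change; shown as before/after for emphasis:

  /* before: an unsigned delta cannot represent steering in the negative
   * direction, so the vdso math wrapped and jumped the wrong way */
  static u64 tod_steering_delta;

  /* after: signed, matching how the steering code interprets the value */
  static s64 tod_steering_delta;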
2021-04-07  s390/vdso: copy tod_steering_delta value to vdso_data page  (Heiko Carstens; 1 file, +1/-0)

commit 72bbc226ed2ef0a46c165a482861fff00dd6d4e1 upstream.

When converting the vdso assembler code to C it was forgotten to actually copy the tod_steering_delta value to the vdso_data page, which in turn means that tod clock steering will not work correctly. Fix this by simply copying the value whenever it is updated.

Fixes: 4bff8cb54502 ("s390: convert to GENERIC_VDSO")
Cc: <stable@vger.kernel.org> # 5.10
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-07  KVM: SVM: ensure that EFER.SVME is set when running nested guest or on nested vmexit  (Paolo Bonzini; 1 file, +17/-1)

commit 3c346c0c60ab06a021d1c0884a0ef494bc4ee3a7 upstream.

Fixing nested_vmcb_check_save to avoid all TOC/TOU races is a bit harder in released kernels, so do the bare minimum by avoiding that EFER.SVME is cleared. This is problematic because svm_set_efer frees the data structures for nested virtualization if EFER.SVME is cleared.

Also check that EFER.SVME remains set after a nested vmexit; clearing it could happen if the bit is zero in the save area that is passed to KVM_SET_NESTED_STATE (the save area of the nested state corresponds to the nested hypervisor's state and is restored on the next nested vmexit).

Cc: stable@vger.kernel.org
Fixes: 2fcf4876ada ("KVM: nSVM: implement on demand allocation of the nested state")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-07  KVM: SVM: load control fields from VMCB12 before checking them  (Paolo Bonzini; 1 file, +6/-4)

commit a58d9166a756a0f4a6618e4f593232593d6df134 upstream.

Avoid races between check and use of the nested VMCB controls. This for example ensures that the VMRUN intercept is always reflected to the nested hypervisor, instead of being processed by the host. Without this patch, it is possible to end up with svm->nested.hsave pointing to the MSR permission bitmap for nested guests.

This bug is CVE-2021-29657.

Reported-by: Felix Wilhelm <fwilhelm@google.com>
Cc: stable@vger.kernel.org
Fixes: 2fcf4876ada ("KVM: nSVM: implement on demand allocation of the nested state")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-07  xtensa: move coprocessor_flush to the .text section  (Max Filippov; 1 file, +33/-31)

commit ab5eb336411f18fd449a1fb37d36a55ec422603f upstream.

coprocessor_flush is not a part of the fast exception handlers, but it uses parts of the fast coprocessor handling code, which is why it's in the same source file. It uses the call0 opcode to invoke those parts, so there are no limitations on their relative location, but the rest of the code calls coprocessor_flush with call8, and that doesn't work when vectors are placed in a different gigabyte-aligned area than the rest of the kernel.

Move coprocessor_flush from the .exception.text section to .text so that it's reachable from the rest of the kernel with call8.

Cc: stable@vger.kernel.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-07  xtensa: fix uaccess-related livelock in do_page_fault  (Max Filippov; 1 file, +4/-1)

commit 7b9acbb6aad4f54623dcd4bd4b1a60fe0c727b09 upstream.

If a uaccess (e.g. get_user()) triggers a fault and there's a fault signal pending, the handler will return to the uaccess without having performed a uaccess fault fixup, and so the CPU will immediately execute the uaccess instruction again, whereupon it will livelock bouncing between that instruction and the fault handler.

https://lore.kernel.org/lkml/20210121123140.GD48431@C02TD0UTHF1T.local/

Cc: stable@vger.kernel.org
Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
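A sketch of the shape of the fix in xtensa's do_page_fault(), mirroring the equivalent fixes on other architectures (condensed; the exact upstream hunk may differ):

  	if (fault_signal_pending(fault, regs)) {
  		/* A kernel-mode access must run the exception fixup so
  		 * the uaccess returns -EFAULT instead of re-faulting. */
  		if (!user_mode(regs))
  			bad_page_fault(regs, address, SIGKILL);
  		return;
  	}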
2021-04-07  ACPI: processor: Fix CPU0 wakeup in acpi_idle_play_dead()  (Vitaly Kuznetsov; 2 files, +2/-1)

commit 8cdddd182bd7befae6af49c5fd612893f55d6ccb upstream.

Commit 496121c02127 ("ACPI: processor: idle: Allow probing on platforms with one ACPI C-state") broke CPU0 hotplug on certain systems, e.g. I'm observing the following on AWS Nitro (e.g. r5b.xlarge, but other instance types are affected as well):

  # echo 0 > /sys/devices/system/cpu/cpu0/online
  # echo 1 > /sys/devices/system/cpu/cpu0/online
  <10 seconds delay>
  -bash: echo: write error: Input/output error

In fact, the above mentioned commit only revealed the problem and did not introduce it. On x86, an NMI is used to wake up a CPU, and the hlt_play_dead()/mwait_play_dead() loops are prepared to handle it:

  /*
   * If NMI wants to wake up CPU0, start CPU0.
   */
  if (wakeup_cpu0())
          start_cpu0();

cpuidle_play_dead() -> acpi_idle_play_dead() (which is now being called on systems where it wasn't called before the above mentioned commit) serves the same purpose but it doesn't have a path for CPU0. What happens now on wakeup is:

 - NMI is sent to CPU0
 - wakeup_cpu0_nmi() works as expected
 - we get back to the while (1) loop in acpi_idle_play_dead()
 - safe_halt() puts CPU0 to sleep again.

The straightforward/minimal fix is to add the special handling for CPU0 on x86, and that's what the patch is doing.

Fixes: 496121c02127 ("ACPI: processor: idle: Allow probing on platforms with one ACPI C-state")
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: 5.10+ <stable@vger.kernel.org> # 5.10+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
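A condensed sketch of the patched loop; cond_wakeup_cpu0() names the helper the patch introduces (checking wakeup_cpu0() and calling start_cpu0() where appropriate), with surrounding details elided:

  	while (1) {
  		if (cx->entry_method == ACPI_CSTATE_HALT)
  			safe_halt();
  		else if (cx->entry_method == ACPI_CSTATE_SYSTEMIO)
  			inb(cx->address);	/* condensed */
  		else
  			return -ENODEV;

  		/* If an NMI asked CPU0 to come back online, actually
  		 * restart it instead of halting again. */
  		cond_wakeup_cpu0();
  	}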
2021-04-07  ACPI: tables: x86: Reserve memory occupied by ACPI tables  (Rafael J. Wysocki; 2 files, +15/-18)

commit 1a1c130ab7575498eed5bcf7220037ae09cd1f8a upstream.

The following problem has been reported by George Kennedy:

  Since commit 7fef431be9c9 ("mm/page_alloc: place pages to tail in __free_pages_core()") the following use after free occurs intermittently when ACPI tables are accessed.

  BUG: KASAN: use-after-free in ibft_init+0x134/0xc49
  Read of size 4 at addr ffff8880be453004 by task swapper/0/1
  CPU: 3 PID: 1 Comm: swapper/0 Not tainted 5.12.0-rc1-7a7fd0d #1
  Call Trace:
   dump_stack+0xf6/0x158
   print_address_description.constprop.9+0x41/0x60
   kasan_report.cold.14+0x7b/0xd4
   __asan_report_load_n_noabort+0xf/0x20
   ibft_init+0x134/0xc49
   do_one_initcall+0xc4/0x3e0
   kernel_init_freeable+0x5af/0x66b
   kernel_init+0x16/0x1d0
   ret_from_fork+0x22/0x30

  ACPI tables mapped via kmap() do not have their mapped pages reserved and the pages can be "stolen" by the buddy allocator.

Apparently, on the affected system, the ACPI table in question is not located in "reserved" memory, like ACPI NVS or ACPI Data, that will not be used by the buddy allocator, so the memory occupied by that table has to be explicitly reserved to prevent the buddy allocator from using it.

In order to address this problem, rearrange the initialization of the ACPI tables on x86 to locate the initial tables earlier and reserve the memory occupied by them. The other architectures using ACPI should not be affected by this change.

Link: https://lore.kernel.org/linux-acpi/1614802160-29362-1-git-send-email-george.kennedy@oracle.com/
Reported-by: George Kennedy <george.kennedy@oracle.com>
Tested-by: George Kennedy <george.kennedy@oracle.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: 5.10+ <stable@vger.kernel.org> # 5.10+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-07  arm64: mm: correct the inside linear map range during hotplug check  (Pavel Tatashin; 1 file, +18/-2)

[ Upstream commit ee7febce051945be28ad86d16a15886f878204de ]

Memory hotplug may fail on systems with CONFIG_RANDOMIZE_BASE because the linear map range is not checked correctly.

The start physical address that the linear map covers can actually be at the end of the range because of randomization. Check for that, and if so, reduce it to 0.

This can be verified on QEMU by setting kaslr-seed to ~0ul:

  memstart_offset_seed = 0xffff
  START: __pa(_PAGE_OFFSET(vabits_actual)) = ffff9000c0000000
  END:   __pa(PAGE_END - 1)                =     1000bfffffff

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Fixes: 58284a901b42 ("arm64/mm: Validate hotplug range before creating linear mapping")
Tested-by: Tyler Hicks <tyhicks@linux.microsoft.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20210216150351.129018-2-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
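A sketch of the corrected range check, modeled on the hotplug validation added by 58284a901b42 (simplified; not the literal upstream hunk):

  static bool inside_linear_region(u64 start, u64 size)
  {
  	u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
  	u64 end_linear_pa = __pa(PAGE_END - 1);

  	/*
  	 * With KASLR the linear map's "start" can land physically
  	 * above its end; the usable window is then [0, end].
  	 */
  	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) &&
  	    start_linear_pa > end_linear_pa)
  		start_linear_pa = 0;

  	return start >= start_linear_pa &&
  	       (start + size - 1) <= end_linear_pa;
  }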
2021-03-30  Revert "xen: fix p2m size in dom0 for disabled memory hotplug case"  (Roger Pau Monne; 3 files, +14/-17)

commit af44a387e743ab7aa39d3fb5e29c0a973cf91bdc upstream.

This partially reverts commit 882213990d32 ("xen: fix p2m size in dom0 for disabled memory hotplug case").

There's no need to special case XEN_UNPOPULATED_ALLOC anymore in order to correctly size the p2m. The generic memory hotplug option has already been tied together with the Xen hotplug limit, so enabling memory hotplug should already trigger a properly sized p2m on Xen PV.

Note that XEN_UNPOPULATED_ALLOC depends on ZONE_DEVICE, which pulls in MEMORY_HOTPLUG.

Leave the check added to __set_phys_to_machine and the adjusted comment about EXTRA_MEM_RATIO.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: https://lore.kernel.org/r/20210324122424.58685-3-roger.pau@citrix.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[boris: fixed formatting issues]
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2021-03-30  x86/mem_encrypt: Correct physical address calculation in __set_clr_pte_enc()  (Isaku Yamahata; 1 file, +1/-1)

commit 8249d17d3194eac064a8ca5bc5ca0abc86feecde upstream.

The pfn variable contains the page frame number as returned by the pXX_pfn() functions, shifted to the right by PAGE_SHIFT to remove the page bits. After page protection computations are done to it, it gets shifted back to the physical address using page_level_shift().

That is wrong, of course, because that function determines the shift length based on the level of the page in the page table, but in all the cases it was shifted by PAGE_SHIFT before. Therefore, shift it back using PAGE_SHIFT to get the correct physical address.

[ bp: Rewrite commit message. ]

Fixes: dfaaec9033b8 ("x86: Add support for changing memory encryption attribute in early boot")
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/81abbae1657053eccc535c16151f63cd049dcb97.1616098294.git.isaku.yamahata@intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
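The essence of the fix, shown as before/after inside __set_clr_pte_enc():

  /* pfn came from pXX_pfn(), i.e. it was produced by shifting right by
   * PAGE_SHIFT, independent of the page-table level. */
  pa = pfn << page_level_shift(level);	/* before: wrong for 2M/1G pages */
  pa = pfn << PAGE_SHIFT;			/* after: undoes the original shift */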
2021-03-30  bpf: Use NOP_ATOMIC5 instead of emit_nops(&prog, 5) for BPF_TRAMP_F_CALL_ORIG  (Stanislav Fomichev; 1 file, +2/-1)

[ Upstream commit b9082970478009b778aa9b22d5561eef35b53b63 ]

__bpf_arch_text_poke does rewrite only for an atomic nop5; emit_nops(xxx, 5) emits a non-atomic one, which breaks fentry/fexit with k8 atomics:

  P6_NOP5 == P6_NOP5_ATOMIC (0f1f440000 == 0f1f440000)
  K8_NOP5 != K8_NOP5_ATOMIC (6666906690 != 6666666690)

Can be reproduced by doing "ideal_nops = k8_nops" in arch_init_ideal_nops() and running the fexit_bpf2bpf selftest.

Fixes: e21aa341785c ("bpf: Fix fexit trampoline.")
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210320000001.915366-1-sdf@google.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-03-30  bpf: Fix fexit trampoline.  (Alexei Starovoitov; 1 file, +22/-4)

[ Upstream commit e21aa341785c679dd409c8cb71f864c00fe6c463 ]

The fexit/fmod_ret programs can be attached to kernel functions that can sleep. The synchronize_rcu_tasks() will not wait for such tasks to complete. In such case the trampoline image will be freed and when the task wakes up the return IP will point to freed memory, causing the crash.

Solve this by adding percpu_ref_get/put for the duration of the trampoline, and separate the trampoline's life time from its image's.

The "half page" optimization has to be removed, since the first_half->second_half->first_half transition cannot be guaranteed to complete in deterministic time. Every trampoline update becomes a new image. The image with fmod_ret or fexit progs will be freed via percpu_ref_kill and call_rcu_tasks. Together they will wait for the original function and trampoline asm to complete. The trampoline is patched from nop to jmp to skip fexit progs. They are freed independently from the trampoline.

The image with fentry progs only will be freed via call_rcu_tasks_trace+call_rcu_tasks, which will wait for both sleepable and non-sleepable progs to complete.

Fixes: fec56f5890d9 ("bpf: Introduce BPF trampoline")
Reported-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Paul E. McKenney <paulmck@kernel.org> # for RCU
Link: https://lore.kernel.org/bpf/20210316210007.38949-1-alexei.starovoitov@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-03-30  xen/x86: make XEN_BALLOON_MEMORY_HOTPLUG_LIMIT depend on MEMORY_HOTPLUG  (Roger Pau Monne; 1 file, +2/-2)

[ Upstream commit 2b514ec72706a31bea0c3b97e622b81535b5323a ]

The Xen memory hotplug limit should depend on the generic memory hotplug option, rather than the Xen balloon configuration. It's possible to have a kernel with generic memory hotplug enabled, but without Xen balloon enabled, at which point memory hotplug won't work correctly due to the size limitation of the p2m.

Rename the option to XEN_MEMORY_HOTPLUG_LIMIT since it's no longer tied to ballooning.

Fixes: 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated memory")
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: https://lore.kernel.org/r/20210324122424.58685-2-roger.pau@citrix.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-03-30  arm64: kdump: update ppos when reading elfcorehdr  (Pavel Tatashin; 1 file, +2/-0)

[ Upstream commit 141f8202cfa4192c3af79b6cbd68e7760bb01b5a ]

The ppos points to a position in the old kernel memory (and in the case of arm64 in the crash kernel, since elfcorehdr is passed as a segment). The function should update ppos by the amount that was read. Only by accident is this bug not exposed today, and other platforms update this value properly. So, fix it in the arm64 version of elfcorehdr_read() as well.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Fixes: e62aaeac426a ("arm64: kdump: provide /proc/vmcore file")
Reviewed-by: Tyler Hicks <tyhicks@linux.microsoft.com>
Link: https://lore.kernel.org/r/20210319205054.743368-1-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
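The fixed function is tiny; this is essentially arch/arm64/kernel/crash_dump.c with the added ppos update:

  ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
  {
  	memcpy(buf, phys_to_virt((phys_addr_t)*ppos), count);
  	*ppos += count;		/* the previously missing update */

  	return count;
  }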
2021-03-30  ARM: dts: imx6ull: fix ubi filesystem mount failed  (dillon min; 1 file, +1/-0)

[ Upstream commit e4817a1b6b77db538bc0141c3b138f2df803ce87 ]

For the NAND ECC layout, there is a dependency between the old kernel's NAND driver settings and the current one: if the old kernel used 4-bit ECC, we should use 4-bit ECC in the new kernel as well, or else we run into the following errors when mounting the filesystem. So enable fsl,use-minimum-ecc from the device tree to fix this mismatch:

  [    9.449265] ubi0: scanning is finished
  [    9.463968] ubi0 warning: ubi_io_read: error -74 (ECC error) while reading 22528 bytes from PEB 513:4096, read only 22528 bytes, retry
  [    9.486940] ubi0 warning: ubi_io_read: error -74 (ECC error) while reading 22528 bytes from PEB 513:4096, read only 22528 bytes, retry
  [    9.509906] ubi0 warning: ubi_io_read: error -74 (ECC error) while reading 22528 bytes from PEB 513:4096, read only 22528 bytes, retry
  [    9.532845] ubi0 error: ubi_io_read: error -74 (ECC error) while reading 22528 bytes from PEB 513:4096, read 22528 bytes

Fixes: f9ecf10cb88c ("ARM: dts: imx6ull: add MYiR MYS-6ULX SBC")
Signed-off-by: dillon min <dillon.minfei@gmail.com>
Reviewed-by: Fabio Estevam <festevam@gmail.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-03-30  ARM: OMAP2+: Fix smartreflex init regression after dropping legacy data  (Tony Lindgren; 1 file, +58/-17)

[ Upstream commit fbfa463be8dc7957ee4f81556e9e1ea2a951807d ]

When I dropped legacy data for omap4 and dra7 smartreflex in favor of device tree based data, it seems I only tested for the "SmartReflex Class3 initialized" line in dmesg. I missed the fact that there is also omap_devinit_smartreflex() that happens later, and now it produces an error on boot: "No Voltage table for the corresponding vdd. Cannot create debugfs entries for n-values".

This happens as we no longer have the smartreflex instance legacy data, and have not yet moved completely to device tree based booting for the driver. Let's fix the issue by changing the smartreflex init to use names. This should all eventually go away in favor of doing the init in the driver based on the devicetree compatible value.

Note that dra7xx_init_early() is not calling any voltage domain init like omap54xx_voltagedomains_init(), or a dra7 specific voltagedomains init. This means that on dra7 smartreflex is still not fully initialized, and it also seems to be missing the related devicetree nodes.

Fixes: a6b1e717e942 ("ARM: OMAP2+: Drop legacy platform data for omap4 smartreflex")
Fixes: e54740b4afe8 ("ARM: OMAP2+: Drop legacy platform data for dra7 smartreflex")
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-03-30  ARM: dts: at91-sama5d27_som1: fix phy address to 7  (Claudiu Beznea; 1 file, +2/-2)

commit 221c3a09ddf70a0a51715e6c2878d8305e95c558 upstream.

Fix the phy address to 7 for the Ethernet PHY on SAMA5D27 SOM1. No connection is established if phy address 0 is used.

The board uses the 24-pin version of the KSZ8081RNA part; its pin 16 (REFCLK), acting as PHYAD bit [2], has a weak internal pull-down. But at reset that pin is connected to PD09 of the MPU, which has an internal pull-up, forming PHYAD[2:0] = 7.

Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Fixes: 2f61929eb10a ("ARM: dts: at91: at91-sama5d27_som1: fix PHY ID")
Cc: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Cc: <stable@vger.kernel.org> # 4.14+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-03-30  ARM: dts: at91: sam9x60: fix mux-mask to match product's datasheet  (Nicolas Ferre; 2 files, +9/-8)

commit 2c69c8a1736eace8de491d480e6e577a27c2087c upstream.

Fix the whole mux-mask table according to the datasheet for the sam9x60 product. Too many pin functions were disabled, leading to misunderstandings when enabling more peripherals or taking this table as an example for another board. Take advantage of this fix to move the mux-mask into the SoC file where it belongs, and use lower case letters for hex numbers like everywhere else in the file.

Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Fixes: 1e5f532c2737 ("ARM: dts: at91: sam9x60: add device tree for soc and board")
Cc: <stable@vger.kernel.org> # 5.6+
Cc: Sandeep Sheriker Mallikarjun <sandeepsheriker.mallikarjun@microchip.com>
Reviewed-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Link: https://lore.kernel.org/r/20210310152006.15018-1-nicolas.ferre@microchip.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-03-30  ARM: dts: at91: sam9x60: fix mux-mask for PA7 so it can be set to A, B and C  (Federico Pellegrin; 1 file, +1/-1)

commit 664979bba8169d775959452def968d1a7c03901f upstream.

According to the datasheet, PA7 can be set to either function A, B or C (see table 6-2 of DS60001579D). The previous value would permit configuring only function C.

Signed-off-by: Federico Pellegrin <fede@evolware.org>
Fixes: 1e5f532c2737 ("ARM: dts: at91: sam9x60: add device tree for soc and board")
Cc: <stable@vger.kernel.org> # 5.6+
Cc: Sandeep Sheriker Mallikarjun <sandeepsheriker.mallikarjun@microchip.com>
Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-03-30  arm64: dts: ls1043a: mark crypto engine dma coherent  (Horia Geantă; 1 file, +1/-0)

commit 4fb3a074755b7737c4081cffe0ccfa08c2f2d29d upstream.

The crypto engine (CAAM) on the LS1043A platform is configured HW-coherent, so mark the DT node accordingly.

Lack of a "dma-coherent" property for an IP that is configured HW-coherent can lead to problems, similar to what has been reported for LS1046A.

Cc: <stable@vger.kernel.org> # v4.8+
Fixes: 63dac35b58f4 ("arm64: dts: ls1043a: add crypto node")
Link: https://lore.kernel.org/linux-crypto/fe6faa24-d8f7-d18f-adfa-44fa0caa1598@arm.com
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Acked-by: Li Yang <leoyang.li@nxp.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-03-30  arm64: dts: ls1012a: mark crypto engine dma coherent  (Horia Geantă; 1 file, +1/-0)

commit ba8da03fa7dff59d9400250aebd38f94cde3cb0f upstream.

The crypto engine (CAAM) on the LS1012A platform is configured HW-coherent, so mark the DT node accordingly.

Lack of a "dma-coherent" property for an IP that is configured HW-coherent can lead to problems, similar to what has been reported for LS1046A.

Cc: <stable@vger.kernel.org> # v4.12+
Fixes: 85b85c569507 ("arm64: dts: ls1012a: add crypto node")
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Acked-by: Li Yang <leoyang.li@nxp.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-03-30  arm64: dts: ls1046a: mark crypto engine dma coherent  (Horia Geantă; 1 file, +1/-0)

commit 9c3a16f88385e671b63a0de7b82b85e604a80f42 upstream.

The crypto engine (CAAM) on the LS1046A platform is configured HW-coherent, so mark the DT node accordingly.

As reported by Greg and Sascha, and explained by Robin, lack of a "dma-coherent" property for an IP that is configured HW-coherent can lead to problems, e.g. on v5.11:

> kernel BUG at drivers/crypto/caam/jr.c:247!
> Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
> Modules linked in:
> CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.11.0-20210225-3-00039-g434215968816-dirty #12
> Hardware name: TQ TQMLS1046A SoM on Arkona AT1130 (C300) board (DT)
> pstate: 60000005 (nZCv daif -PAN -UAO -TCO BTYPE=--)
> pc : caam_jr_dequeue+0x98/0x57c
> lr : caam_jr_dequeue+0x98/0x57c
> sp : ffff800010003d50
> x29: ffff800010003d50 x28: ffff8000118d4000
> x27: ffff8000118d4328 x26: 00000000000001f0
> x25: ffff0008022be480 x24: ffff0008022c6410
> x23: 00000000000001f1 x22: ffff8000118d4329
> x21: 0000000000004d80 x20: 00000000000001f1
> x19: 0000000000000001 x18: 0000000000000020
> x17: 0000000000000000 x16: 0000000000000015
> x15: ffff800011690230 x14: 2e2e2e2e2e2e2e2e
> x13: 2e2e2e2e2e2e2020 x12: 3030303030303030
> x11: ffff800011700a38 x10: 00000000fffff000
> x9 : ffff8000100ada30 x8 : ffff8000116a8a38
> x7 : 0000000000000001 x6 : 0000000000000000
> x5 : 0000000000000000 x4 : 0000000000000000
> x3 : 00000000ffffffff x2 : 0000000000000000
> x1 : 0000000000000000 x0 : 0000000000001800
> Call trace:
>  caam_jr_dequeue+0x98/0x57c
>  tasklet_action_common.constprop.0+0x164/0x18c
>  tasklet_action+0x44/0x54
>  __do_softirq+0x160/0x454
>  __irq_exit_rcu+0x164/0x16c
>  irq_exit+0x1c/0x30
>  __handle_domain_irq+0xc0/0x13c
>  gic_handle_irq+0x5c/0xf0
>  el1_irq+0xb4/0x180
>  arch_cpu_idle+0x18/0x30
>  default_idle_call+0x3c/0x1c0
>  do_idle+0x23c/0x274
>  cpu_startup_entry+0x34/0x70
>  rest_init+0xdc/0xec
>  arch_call_rest_init+0x1c/0x28
>  start_kernel+0x4ac/0x4e4
> Code: 91392021 912c2000 d377d8c6 97f24d96 (d4210000)

Cc: <stable@vger.kernel.org> # v4.10+
Fixes: 8126d88162a5 ("arm64: dts: add QorIQ LS1046A SoC support")
Link: https://lore.kernel.org/linux-crypto/fe6faa24-d8f7-d18f-adfa-44fa0caa1598@arm.com
Reported-by: Greg Ungerer <gerg@kernel.org>
Reported-by: Sascha Hauer <s.hauer@pengutronix.de>
Tested-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Acked-by: Greg Ungerer <gerg@kernel.org>
Acked-by: Li Yang <leoyang.li@nxp.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-03-30  arm64: stacktrace: don't trace arch_stack_walk()  (Mark Rutland; 1 file, +5/-4)

commit c607ab4f916d4d5259072eca34055d3f5a795c21 upstream.

We recently converted arm64 to use arch_stack_walk() in commit:

  5fc57df2f6fd ("arm64: stacktrace: Convert to ARCH_STACKWALK")

The core stacktrace code expects that (when tracing the current task) arch_stack_walk() starts a trace at its caller, and does not include itself in the trace. However, arm64's arch_stack_walk() includes itself, and so traces include one more entry than callers expect. The core stacktrace code which calls arch_stack_walk() tries to skip a number of entries to prevent itself appearing in a trace, and the additional entry prevents skipping one of the core stacktrace functions, leaving this in the trace unexpectedly.

We can fix this by having arm64's arch_stack_walk() begin the trace with its caller. The first value returned by the trace will be __builtin_return_address(0), i.e. the caller of arch_stack_walk(). The first frame record to be unwound will be __builtin_frame_address(1), i.e. the caller's frame record. To prevent surprises, arch_stack_walk() is also marked noinline.

While __builtin_frame_address(1) is not safe in portable code, local GCC developers have confirmed that it is safe on arm64. To find the caller's frame record, the builtin can safely dereference the current function's frame record or (in theory) could stash the original FP into another GPR at function entry time, neither of which are problematic.

Prior to this patch, the tracing code would unexpectedly show up in traces of the current task, e.g.

| # cat /proc/self/stack
| [<0>] stack_trace_save_tsk+0x98/0x100
| [<0>] proc_pid_stack+0xb4/0x130
| [<0>] proc_single_show+0x60/0x110
| [<0>] seq_read_iter+0x230/0x4d0
| [<0>] seq_read+0xdc/0x130
| [<0>] vfs_read+0xac/0x1e0
| [<0>] ksys_read+0x6c/0xfc
| [<0>] __arm64_sys_read+0x20/0x30
| [<0>] el0_svc_common.constprop.0+0x60/0x120
| [<0>] do_el0_svc+0x24/0x90
| [<0>] el0_svc+0x2c/0x54
| [<0>] el0_sync_handler+0x1a4/0x1b0
| [<0>] el0_sync+0x170/0x180

After this patch, the tracing code will not show up in such traces:

| # cat /proc/self/stack
| [<0>] proc_pid_stack+0xb4/0x130
| [<0>] proc_single_show+0x60/0x110
| [<0>] seq_read_iter+0x230/0x4d0
| [<0>] seq_read+0xdc/0x130
| [<0>] vfs_read+0xac/0x1e0
| [<0>] ksys_read+0x6c/0xfc
| [<0>] __arm64_sys_read+0x20/0x30
| [<0>] el0_svc_common.constprop.0+0x60/0x120
| [<0>] do_el0_svc+0x24/0x90
| [<0>] el0_svc+0x2c/0x54
| [<0>] el0_sync_handler+0x1a4/0x1b0
| [<0>] el0_sync+0x170/0x180

Erring on the side of caution, I've given this a spin with a bunch of toolchains, verifying the output of /proc/self/stack and checking that the assembly looked sound. For GCC (where we require version 5.1.0 or later) I tested with the kernel.org crosstool binaries for versions 5.5.0, 6.4.0, 6.5.0, 7.3.0, 7.5.0, 8.1.0, 8.3.0, 8.4.0, 9.2.0, and 10.1.0. For clang (where we require version 10.0.1 or later) I tested with the llvm.org binary releases of 11.0.0 and 11.0.1.

Fixes: 5fc57df2f6fd ("arm64: stacktrace: Convert to ARCH_STACKWALK")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chen Jun <chenjun102@huawei.com>
Cc: Marco Elver <elver@google.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: <stable@vger.kernel.org> # 5.10.x
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210319184106.5688-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
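The resulting function, lightly condensed; the key changes are starting a current-task trace at __builtin_return_address(0)/__builtin_frame_address(1) and marking the function noinline:

  noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
  				      void *cookie, struct task_struct *task,
  				      struct pt_regs *regs)
  {
  	struct stackframe frame;

  	if (regs)
  		start_backtrace(&frame, regs->regs[29], regs->pc);
  	else if (task == current)
  		/* Begin at our caller, not at arch_stack_walk() itself. */
  		start_backtrace(&frame,
  				(unsigned long)__builtin_frame_address(1),
  				(unsigned long)__builtin_return_address(0));
  	else
  		start_backtrace(&frame, thread_saved_fp(task),
  				thread_saved_pc(task));

  	walk_stackframe(task, &frame, consume_entry, cookie);
  }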
2021-03-30  KVM: x86: Protect userspace MSR filter with SRCU, and set atomically-ish  (Sean Christopherson; 2 files, +74/-49)

[ Upstream commit b318e8decf6b9ef1bcf4ca06fae6d6a2cb5d5c5c ]

Fix a plethora of issues with MSR filtering by installing the resulting filter as an atomic bundle instead of updating the live filter one range at a time. The KVM_X86_SET_MSR_FILTER ioctl() isn't truly atomic, as the hardware MSR bitmaps won't be updated until the next VM-Enter, but the relevant software struct is atomically updated, which is what KVM really needs.

Similar to the approach used for modifying memslots, make arch.msr_filter a SRCU-protected pointer, do all the work configuring the new filter outside of kvm->lock, and then acquire kvm->lock only when the new filter has been vetted and created. That way vCPU readers either see the old filter or the new filter in their entirety, not some half-baked state.

Yuan Yao pointed out a use-after-free in kvm_msr_allowed() due to a TOCTOU bug [*], but that's just the tip of the iceberg...

 - Nothing is __rcu annotated, making it nigh impossible to audit the code for correctness.
 - kvm_add_msr_filter() has an unpaired smp_wmb(). Violation of kernel coding style aside, the lack of an smp_rmb() anywhere casts all code into doubt.
 - kvm_clear_msr_filter() has a double free TOCTOU bug, as it grabs count before taking the lock.
 - kvm_clear_msr_filter() also has a memory leak due to the same TOCTOU bug.

The entire approach of updating the live filter is also flawed. While installing a new filter is inherently racy if vCPUs are running, fixing the above issues also makes it trivial to ensure certain behavior is deterministic, e.g. KVM can provide deterministic behavior for MSRs with identical settings in the old and new filters. An atomic update of the filter also prevents KVM from getting into a half-baked state, e.g. if installing a filter fails, the existing approach would leave the filter in a half-baked state, having already committed whatever bits of the filter were already processed.

[*] https://lkml.kernel.org/r/20210312083157.25403-1-yaoyuan0329os@gmail.com

Fixes: 1a155254ff93 ("KVM: x86: Introduce MSR filtering")
Cc: stable@vger.kernel.org
Cc: Alexander Graf <graf@amazon.com>
Reported-by: Yuan Yao <yaoyuan0329os@gmail.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210316184436.2544875-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-03-30  static_call: Allow module use without exposing static_call_key  (Josh Poimboeuf; 1 file, +7/-0)

[ Upstream commit 73f44fe19d359635a607e8e8daa0da4001c1cfc2 ]

When exporting static_call_key with EXPORT_STATIC_CALL*(), the module can use static_call_update() to change the function called. This is not desirable in general.

Not exporting static_call_key however also disallows usage of static_call(), since objtool needs the key to construct the static_call_site.

Solve this by allowing objtool to create the static_call_site using the trampoline address when it builds a module and cannot find the static_call_key symbol. The module loader will then try and map the trampoline back to a key before it constructs the normal sites list.

Doing this requires a trampoline -> key association, so add another magic section that keeps those.

Originally-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210127231837.ifddpn7rhwdaepiu@treble
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-03-30  ia64: fix ptrace(PTRACE_SYSCALL_INFO_EXIT) sign  (Sergei Trofimovich; 1 file, +1/-1)

[ Upstream commit 61bf318eac2c13356f7bd1c6a05421ef504ccc8a ]

In https://bugs.gentoo.org/769614 Dmitry noticed that `ptrace(PTRACE_GET_SYSCALL_INFO)` does not return the error sign properly. The bug is in the mismatch between the get/set helpers:

  static inline long syscall_get_error(struct task_struct *task,
                                       struct pt_regs *regs)
  {
          return regs->r10 == -1 ? regs->r8 : 0;
  }

  static inline long syscall_get_return_value(struct task_struct *task,
                                              struct pt_regs *regs)
  {
          return regs->r8;
  }

  static inline void syscall_set_return_value(struct task_struct *task,
                                              struct pt_regs *regs,
                                              int error, long val)
  {
          if (error) {
                  /* error < 0, but ia64 uses > 0 return value */
                  regs->r8 = -error;
                  regs->r10 = -1;
          } else {
                  regs->r8 = val;
                  regs->r10 = 0;
          }
  }

Tested on v5.10 on an rx3600 machine (ia64 9040 CPU).

Link: https://lkml.kernel.org/r/20210221002554.333076-2-slyfox@gentoo.org
Link: https://bugs.gentoo.org/769614
Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org>
Reported-by: Dmitry V. Levin <ldv@altlinux.org>
Reviewed-by: Dmitry V. Levin <ldv@altlinux.org>
Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
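The mismatch above makes the fix clear: syscall_set_return_value() stores a positive errno in r8 with r10 == -1, so syscall_get_error() must negate it on the way out. Roughly:

  static inline long syscall_get_error(struct task_struct *task,
  				     struct pt_regs *regs)
  {
  	/* r8 holds the positive errno; the generic ptrace layer
  	 * expects a negative error value. */
  	return regs->r10 == -1 ? -regs->r8 : 0;
  }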
2021-03-30  ia64: fix ia64_syscall_get_set_arguments() for break-based syscalls  (Sergei Trofimovich; 1 file, +18/-6)

[ Upstream commit 0ceb1ace4a2778e34a5414e5349712ae4dc41d85 ]

In https://bugs.gentoo.org/769614 Dmitry noticed that `ptrace(PTRACE_GET_SYSCALL_INFO)` does not work for syscalls called via glibc's syscall() wrapper.

ia64 has two ways to call syscalls from userspace: via `break` and via `eps` instructions. The difference is in stack layout:

 1. `eps` creates a simple stack frame: no locals, in{0..7} == out{0..8}
 2. `break` uses the userspace stack frame: there may be locals (glibc provides one), in{0..7} == out{0..8}

Both work fine in the syscall handling code itself. But `ptrace(PTRACE_GET_SYSCALL_INFO)` uses the unwind mechanism to re-extract syscall arguments, and it does not account for locals.

The change always skips locals registers. It should not change the `eps` path, as the kernel's handler already enforces locals=0, and it fixes `break`.

Tested on v5.10 on an rx3600 machine (ia64 9040 CPU).

Link: https://lkml.kernel.org/r/20210221002554.333076-1-slyfox@gentoo.org
Link: https://bugs.gentoo.org/769614
Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org>
Reported-by: Dmitry V. Levin <ldv@altlinux.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-03-30  sparc64: Fix opcode filtering in handling of no fault loads  (Rob Gardner; 1 file, +6/-7)

[ Upstream commit e5e8b80d352ec999d2bba3ea584f541c83f4ca3f ]

is_no_fault_exception() has two bugs which were discovered via random opcode testing with stress-ng. Both are caused by improper filtering of opcodes.

The first bug can be triggered by a floating point store with a no-fault ASI, for instance "sta %f0, [%g0] #ASI_PNF", opcode C1A01040. The code first tests op3[5] (0x1000000), which denotes a floating point instruction, and then tests op3[2] (0x200000), which denotes a store instruction. But these bits are not mutually exclusive, and the above mentioned opcode has both bits set. The intent is to filter out stores, so the test for stores must be done first in order to have any effect.

The second bug can be triggered by a floating point load with one of the invalid ASI values 0x8e or 0x8f, which pass this check in is_no_fault_exception():

  if ((asi & 0xf2) == ASI_PNF)

An example instruction is "ldqa [%l7 + %o7] #ASI 0x8f, %f38", opcode CF95D1EF. ASI values greater than 0x8b (ASI_SNFL) are fatal in handle_ldf_stq(), and is_no_fault_exception() must not allow these invalid ASI values to make it that far.

In both of these cases, handle_ldf_stq() reacts by calling sun4v_data_access_exception() or spitfire_data_access_exception(), which call is_no_fault_exception(), resulting in an infinite recursion.

Signed-off-by: Rob Gardner <rob.gardner@oracle.com>
Tested-by: Anatoly Pugachev <matorola@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-03-30  powerpc/4xx: Fix build errors from mfdcr()  (Michael Ellerman; 1 file, +4/-4)

[ Upstream commit eead089311f4d935ab5d1d8fbb0c42ad44699ada ]

lkp reported a build error in fsp2.o:

  CC      arch/powerpc/platforms/44x/fsp2.o
  {standard input}:577: Error: unsupported relocation against base

Which comes from:

  pr_err("GESR0: 0x%08x\n", mfdcr(base + PLB4OPB_GESR0));

Where our mfdcr() macro is stringifying "base + PLB4OPB_GESR0", and passing that to the assembler, which obviously doesn't work.

The mfdcr() macro already checks that the argument is constant using __builtin_constant_p(), and if not calls the out-of-line version of mfdcr(). But in this case GCC is smart enough to notice that "base + PLB4OPB_GESR0" will be constant, even though it's not something we can immediately stringify into a register number.

Segher pointed out that passing the register number to the inline asm as a constant would be better, and in fact it fixes the build error, presumably because it gives GCC a chance to resolve the value. While we're at it, change mtdcr() similarly.

Reported-by: kernel test robot <lkp@intel.com>
Suggested-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Feng Tang <feng.tang@intel.com>
Link: https://lore.kernel.org/r/20210218123058.748882-1-mpe@ellerman.id.au
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-03-24  x86/apic/of: Fix CPU devicetree-node lookups  (Johan Hovold; 1 file, +5/-0)

commit dd926880da8dbbe409e709c1d3c1620729a94732 upstream.

Architectures that describe the CPU topology in devicetree and do not have an identity mapping between physical and logical CPU ids must override the default implementation of arch_match_cpu_phys_id().

Failing to do so breaks CPU devicetree-node lookups using of_get_cpu_node() and of_cpu_device_node_get() which several drivers rely on. It also causes the CPU struct devices exported through sysfs to point to the wrong devicetree nodes.

On x86, CPUs are described in devicetree using their APIC ids and those do not generally coincide with the logical ids, even if CPU0 typically uses APIC id 0. Add the missing implementation of arch_match_cpu_phys_id() so that CPU-node lookups work also with SMP.

Apart from fixing the broken sysfs devicetree-node links this likely does not affect current users of mainline kernels on x86.

Fixes: 4e07db9c8db8 ("x86/devicetree: Use CPU description from Device Tree")
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20210312092033.26317-1-johan@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
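A sketch of the required override; the exact APIC-id accessor in the patch may differ from the cpu_physical_id() helper used here:

  /* Match a devicetree CPU node's physical id (the APIC id) against a
   * logical cpu number, instead of the default identity comparison. */
  bool arch_match_cpu_phys_id(int cpu, u64 phys_id)
  {
  	return phys_id == cpu_physical_id(cpu);
  }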
2021-03-24  x86: Introduce TS_COMPAT_RESTART to fix get_nr_restart_syscall()  (Oleg Nesterov; 2 files, +14/-24)

commit 8c150ba2fb5995c84a7a43848250d444a3329a7d upstream.

The comment in get_nr_restart_syscall() says:

  * The problem is that we can get here when ptrace pokes
  * syscall-like values into regs even if we're not in a syscall
  * at all.

Yes, but if not in a syscall then the status & (TS_COMPAT|TS_I386_REGS_POKED) check below can't really help:

 - TS_COMPAT can't be set
 - TS_I386_REGS_POKED is only set if regs->orig_ax was changed by a 32bit debugger; and even in this case get_nr_restart_syscall() is only correct if the tracee is 32bit too.

Suppose that a 64bit debugger plays with a 32bit tracee and

 * Tracee calls sleep(2)	// TS_COMPAT is set
 * User interrupts the tracee by CTRL-C after 1 sec and does "(gdb) call func()"
 * gdb saves the regs by PTRACE_GETREGS
 * does PTRACE_SETREGS to set %rip='func' and %orig_rax=-1
 * PTRACE_CONT		// TS_COMPAT is cleared
 * func() hits int3
 * Debugger catches SIGTRAP
 * Restore original regs by PTRACE_SETREGS
 * PTRACE_CONT

get_nr_restart_syscall() wrongly returns __NR_restart_syscall==219, the tracee calls ia32_sys_call_table[219] == sys_madvise.

Add the sticky TS_COMPAT_RESTART flag which survives after return to user mode. It's going to be removed in the next step again by storing the information in the restart block. As a further cleanup it might be possible to remove also TS_I386_REGS_POKED with that.

Test-case:

  $ cvs -d :pserver:anoncvs:anoncvs@sourceware.org:/cvs/systemtap co ptrace-tests
  $ gcc -o erestartsys-trap-debuggee ptrace-tests/tests/erestartsys-trap-debuggee.c --m32
  $ gcc -o erestartsys-trap-debugger ptrace-tests/tests/erestartsys-trap-debugger.c -lutil
  $ ./erestartsys-trap-debugger
  Unexpected: retval 1, errno 22
  erestartsys-trap-debugger: ptrace-tests/tests/erestartsys-trap-debugger.c:421

Fixes: 609c19a385c8 ("x86/ptrace: Stop setting TS_COMPAT in ptrace code")
Reported-by: Jan Kratochvil <jan.kratochvil@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20210201174709.GA17895@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
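With the sticky flag, the decision roughly becomes the following (condensed from the patched arch/x86/kernel/signal.c; treat it as a sketch):

  static unsigned long get_nr_restart_syscall(const struct pt_regs *regs)
  {
  #ifdef CONFIG_IA32_EMULATION
  	/* TS_COMPAT_RESTART survives the return to user mode, unlike
  	 * TS_COMPAT, so it is still valid at restart time. */
  	if (current_thread_info()->status & TS_COMPAT_RESTART)
  		return __NR_ia32_restart_syscall;
  #endif
  #ifdef CONFIG_X86_X32_ABI
  	return __NR_restart_syscall | (regs->orig_ax & __X32_SYSCALL_BIT);
  #else
  	return __NR_restart_syscall;
  #endif
  }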
2021-03-24x86: Move TS_COMPAT back to asm/thread_info.hOleg Nesterov2-9/+9
commit 66c1b6d74cd7035e85c426f0af4aede19e805c8a upstream. Move TS_COMPAT back to asm/thread_info.h, close to TS_I386_REGS_POKED. It was moved to asm/processor.h by b9d989c7218a ("x86/asm: Move the thread_info::status field to thread_struct"), then later 37a8f7c38339 ("x86/asm: Move 'status' from thread_struct to thread_info") moved the 'status' field back, but TS_COMPAT was forgotten. Preparatory patch to fix the COMPAT case for get_nr_restart_syscall(). Fixes: 609c19a385c8 ("x86/ptrace: Stop setting TS_COMPAT in ptrace code") Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20210201174649.GA17880@redhat.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
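The end state is simply the two flags side by side; an illustrative sketch (values shown only to make the move concrete, not taken from this patch):

	/* in asm/thread_info.h, after this patch */
	#define TS_COMPAT		0x0002	/* 32bit syscall active (64BIT) */
	#define TS_I386_REGS_POKED	0x0004	/* regs poked by 32bit ptracer */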
2021-03-24x86/ioapic: Ignore IRQ2 againThomas Gleixner1-0/+10
commit a501b048a95b79e1e34f03cac3c87ff1e9f229ad upstream. Vitaly ran into an issue with hotplugging CPU0 on an Amazon instance where the matrix allocator claimed to be out of vectors. He analyzed it down to the point that IRQ2, the PIC cascade interrupt, which is never supposed to be routed to the IO/APIC, ended up having an interrupt vector assigned which got moved during unplug of CPU0. The underlying issue is that IRQ2 for various reasons (see commit af174783b925 ("x86: I/O APIC: Never configure IRQ2") for details) is treated as a reserved system vector by the vector core code and is not accounted as a regular vector. The Amazon BIOS has a routing entry from pin 2 to IRQ2, which causes the IO/APIC setup to claim that interrupt, and the claim is granted by the vector domain because there is no sanity check. As a consequence the allocation counter of CPU0 underflows, which causes a subsequent unplug to fail with: [ ... ] CPU 0 has 4294967295 vectors, 589 available. Cannot disable CPU There is another sanity check missing in the matrix allocator, but the underlying root cause is that the IO/APIC code lost the IRQ2 ignore logic during the conversion to irqdomains. For almost six years nobody complained about this wreckage, which might indicate that the requirement could be lifted. But on any system which actually has a PIC, IRQ2 is unusable by design, so a routing entry has no effect and the interrupt cannot be connected to a device anyway. Due to that, and for historically biased paranoia reasons, restore the IRQ2 ignore logic and treat it as nonexistent despite a routing entry claiming otherwise. Fixes: d32932d02e18 ("x86/irq: Convert IOAPIC to use hierarchical irqdomain interfaces") Reported-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Vitaly Kuznetsov <vkuznets@redhat.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20210318192819.636943062@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
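A deliberately simplified, hypothetical sketch of the restored rule; the real patch hooks the IO/APIC irqdomain pin-mapping path, but the effect is the same: any routing entry claiming IRQ2 is refused, so the vector domain never grants it.

	/* Sketch: IRQ2 is the PIC cascade and must never be wired
	 * through the IO/APIC, whatever the BIOS routing tables claim.
	 * Refusing the mapping means no vector is ever allocated. */
	if (irq == 2)
		return -EINVAL;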
2021-03-24perf/x86/intel: Fix unchecked MSR access error caused by VLBR_EVENTKan Liang1-0/+3
commit 2dc0572f2cef87425147658698dce2600b799bd3 upstream. On a Haswell machine, the perf_fuzzer managed to trigger this message: [117248.075892] unchecked MSR access error: WRMSR to 0x3f1 (tried to write 0x0400000000000000) at rIP: 0xffffffff8106e4f4 (native_write_msr+0x4/0x20) [117248.089957] Call Trace: [117248.092685] intel_pmu_pebs_enable_all+0x31/0x40 [117248.097737] intel_pmu_enable_all+0xa/0x10 [117248.102210] __perf_event_task_sched_in+0x2df/0x2f0 [117248.107511] finish_task_switch.isra.0+0x15f/0x280 [117248.112765] schedule_tail+0xc/0x40 [117248.116562] ret_from_fork+0x8/0x30 A fake event called VLBR_EVENT may use bit 58 of the PEBS_ENABLE MSR if precise_ip is set. Bit 58 is reserved by the HW, and accessing it causes the unchecked MSR access error. The fake event doesn't support PEBS, so the case should be rejected. Fixes: 097e4311cda9 ("perf/x86: Add constraint to create guest LBR event without hw counter") Reported-by: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/1615555298-140216-2-git-send-email-kan.liang@linux.intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
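The rejection amounts to a small check in the precise_ip setup path; a hedged sketch using the constants from the LBR code (placement approximate):

	/* Sketch: the fake VLBR event is a guest-LBR placeholder with no
	 * PEBS support; refuse precise_ip so bit 58 of PEBS_ENABLE is
	 * never written. */
	if (event->attr.precise_ip &&
	    (event->attr.config & INTEL_ARCH_EVENT_MASK) == INTEL_FIXED_VLBR_EVENT)
		return -EINVAL;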
2021-03-24perf/x86/intel: Fix a crash caused by zero PEBS statusKan Liang1-1/+1
commit d88d05a9e0b6d9356e97129d4ff9942d765f46ea upstream. A repeatable crash can be triggered by the perf_fuzzer on some Haswell systems. https://lore.kernel.org/lkml/7170d3b-c17f-1ded-52aa-cc6d9ae999f4@maine.edu/ For some old CPUs (HSW and earlier), the PEBS status in a PEBS record may be mistakenly set to 0. To minimize the impact of the defect, commit 01330d7288e0 ("perf/x86: Allow zero PEBS status with only single active event") was introduced to try to avoid dropping the PEBS record in some cases. It adds a check in intel_pmu_drain_pebs_nhm() and updates the local pebs_status accordingly. However, it doesn't correct the PEBS status in the PEBS record itself, which may trigger the crash, especially for large PEBS. It's possible that all the PEBS records in a large PEBS have a PEBS status of 0. If so, the first get_next_pebs_record_by_bit() in __intel_pmu_pebs_event() returns NULL, leaving 'at' == NULL. Since it's a large PEBS, the 'count' parameter must be > 1, so the second get_next_pebs_record_by_bit() will crash. Besides the local pebs_status, correct the PEBS status in the PEBS record as well. Fixes: 01330d7288e0 ("perf/x86: Allow zero PEBS status with only single active event") Reported-by: Vince Weaver <vincent.weaver@maine.edu> Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/1615555298-140216-1-git-send-email-kan.liang@linux.intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
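In essence the fix is a one-liner in the drain loop's single-active-event fallback; a hedged sketch (variable names as in the surrounding drain code):

	/* Sketch: when HW reports a zero status and exactly one PEBS
	 * event is active, attribute the record to that event and also
	 * patch the record itself so get_next_pebs_record_by_bit()
	 * finds it later. */
	if (p->status == 0 && cpuc->pebs_enabled &&
	    !(cpuc->pebs_enabled & (cpuc->pebs_enabled - 1)))
		pebs_status = p->status = cpuc->pebs_enabled;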
2021-03-24powerpc: Force inlining of cpu_has_feature() to avoid build failureChristophe Leroy1-2/+2
commit eed5fae00593ab9d261a0c1ffc1bdb786a87a55a upstream. The code relies on constant folding of cpu_has_feature() based on possible and always true values as defined per CPU_FTRS_ALWAYS and CPU_FTRS_POSSIBLE. A build failure is encountered with, for instance, book3e_all_defconfig on kisskb in the AMDGPU driver, which uses cpu_has_feature(CPU_FTR_VSX_COMP) to decide whether to call kernel_enable_vsx() or not. The failure is due to cpu_has_feature() not being inlined with that configuration with gcc 4.9. In the same way as commit acdad8fb4a15 ("powerpc: Force inlining of mmu_has_feature to fix build failure"), force inlining of cpu_has_feature(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/b231dfa040ce4cc37f702f5c3a595fdeabfe0462.1615378209.git.christophe.leroy@csgroup.eu Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
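The change itself is mechanical; a hedged, simplified sketch of the inline helper after the patch (the real one also uses static keys for the non-constant-folded path):

	/* Sketch: __always_inline guarantees the CPU_FTRS_ALWAYS /
	 * CPU_FTRS_POSSIBLE constant folding that callers rely on,
	 * even at -Os with older compilers. */
	static __always_inline bool cpu_has_feature(unsigned long feature)
	{
		if (CPU_FTRS_ALWAYS & feature)
			return true;
		if (!(CPU_FTRS_POSSIBLE & feature))
			return false;
		return early_cpu_has_feature(feature); /* simplified fallback */
	}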
2021-03-24powerpc/vdso32: Add missing _restgpr_31_x to fix build failureChristophe Leroy1-0/+11
[ Upstream commit 08c18b63d9656e0389087d1956d2b37fd7019172 ] With some defconfigs including CONFIG_CC_OPTIMIZE_FOR_SIZE (for instance mvme5100_defconfig and ps3_defconfig), gcc 5 generates a call to _restgpr_31_x. Until recently it went unnoticed, but commit 42ed6d56ade2 ("powerpc/vdso: Block R_PPC_REL24 relocations") made it rise to the surface. Provide that function (copied from lib/crtsavres.S) in gettimeofday.S. Fixes: ab037dd87a2f ("powerpc/vdso: Switch VDSO to generic C implementation.") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/a7aa198a88bcd33c6e35e99f70f86c7b7f2f9440.1615270757.git.christophe.leroy@csgroup.eu Signed-off-by: Sasha Levin <sashal@kernel.org>