path: root/arch
Age | Commit message | Author | Files | Lines
2023-08-03 | arm64/sme: Set new vector length before reallocating | Mark Brown | 1 | -2/+2
commit 05d881b85b48c7ac6a7c92ce00aa916c4a84d052 upstream. As part of fixing the allocation of the buffer for SVE state when changing SME vector length we introduced an immediate reallocation of the SVE state, this is also done when changing the SVE vector length for consistency. Unfortunately this reallocation is done prior to writing the new vector length to the task struct, meaning the allocation is done with the old vector length and can lead to memory corruption due to an undersized buffer being used. Move the update of the vector length before the allocation to ensure that the new vector length is taken into account. For some reason this isn't triggering any problems when running tests on the arm64 fixes branch (even after repeated tries) but is triggering issues very often after merge into mainline. Fixes: d4d5be94a878 ("arm64/fpsimd: Ensure SME storage is allocated after SVE VL changes") Signed-off-by: Mark Brown <broonie@kernel.org> Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20230726-arm64-fix-sme-fix-v1-1-7752ec58af27@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
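The ordering described above, as a minimal C sketch; task_set_vl() and sve_alloc() name the kind of helpers involved, and everything else is elided:

  /* Sketch of the corrected ordering: the buggy code reallocated
   * first, sizing the buffer with the stale VL still in the task
   * struct. */
  static int vec_set_vector_length(struct task_struct *task,
                                   enum vec_type type, unsigned long vl)
  {
          task_set_vl(task, type, vl);    /* write the new VL first */
          sve_alloc(task, true);          /* then reallocate SVE state */
          return 0;
  }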
2023-08-03 | LoongArch: BPF: Enable bpf_probe_read{, str}() on LoongArch | Chenguang Zhao | 1 | -0/+1
commit de0e30bee86d0f99c696a1fea34474e556a946ec upstream. Currently nettrace does not work on LoongArch due to missing bpf_probe_read{,str}() support, with the error message:

  ERROR: failed to load kprobe-based eBPF
  ERROR: failed to load kprobe-based bpf

According to commit 0ebeea8ca8a4d1d ("bpf: Restrict bpf_probe_read{, str}() only to archs where they work"), we only need to select CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE to add said support, because LoongArch does have non-overlapping address ranges for kernel and userspace. Cc: stable@vger.kernel.org # 6.1 Signed-off-by: Chenguang Zhao <zhaochenguang@kylinos.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-03 | LoongArch: BPF: Fix check condition to call lu32id in move_imm() | Tiezhu Yang | 1 | -1/+1
commit 4eece7e6de94d833c8aeed2f438faf487cbf94ff upstream. As the code comment says, the initial aim is to reduce one instruction in some corner cases, if bit[51:31] is all 0 or all 1, no need to call lu32id. That is to say, it should call lu32id only if bit[51:31] is not all 0 and not all 1. The current code always call lu32id, the result is right but the logic is unexpected and wrong, fix it. Cc: stable@vger.kernel.org # 6.1 Fixes: 5dc615520c4d ("LoongArch: Add BPF JIT support") Reported-by: Colin King (gmail) <colin.i.king@gmail.com> Closes: https://lore.kernel.org/all/bcf97046-e336-712a-ac68-7fd194f2953e@gmail.com/ Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
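The corrected predicate, sketched in C; emit_lu32id() is a hypothetical emitter standing in for the real instruction encoding:

  /* lu32id is needed only when bit[51:31] is neither all-0 nor
   * all-1, i.e. when the value does not sign-extend from bit 31. */
  u64 imm_51_31 = ((u64)imm >> 31) & 0x1fffff;

  if (imm_51_31 != 0 && imm_51_31 != 0x1fffff)
          emit_lu32id(ctx, rd, (imm >> 32) & 0xfffff);  /* hypothetical */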
2023-08-03 | LoongArch: Fix return value underflow in exception path | WANG Rui | 2 | -2/+4
commit e66d511fc92201ba481392e54896f1aeadfcf0e9 upstream. This patch fixes an underflow issue in the return value within the exception path, specifically at .Llt8 when the remaining length is less than 8 bytes. Cc: stable@vger.kernel.org Fixes: 8941e93ca590 ("LoongArch: Optimize memory ops (memset/memcpy/memmove)") Reported-by: Weihao Li <liweihao@loongson.cn> Signed-off-by: WANG Rui <wangrui@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-03 | Revert "um: Use swap() to make code cleaner" | Andy Shevchenko | 1 | -3/+4
commit dddfa05eb58076ad60f9a66e7155a5b3502b2dd5 upstream. This reverts commit 9b0da3f22307af693be80f5d3a89dc4c7f360a85. The sigio.c is clearly user space code which is handled by arch/um/scripts/Makefile.rules (see USER_OBJS rule). The above mentioned commit simply broke this agreement, we may not use Linux kernel internal headers in them without thorough thinking. Hence, revert the wrong commit. Link: https://lkml.kernel.org/r/20230724143131.30090-1-andriy.shevchenko@linux.intel.com Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202307212304.cH79zJp1-lkp@intel.com/ Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com> Cc: Herve Codina <herve.codina@bootlin.com> Cc: Jason A. Donenfeld <Jason@zx2c4.com> Cc: Johannes Berg <johannes@sipsolutions.net> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Richard Weinberger <richard@nod.at> Cc: Yang Guang <yang.guang5@zte.com.cn> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-03 | x86/cpu: Enable STIBP on AMD if Automatic IBRS is enabled | Kim Phillips | 1 | -6/+9
commit fd470a8beed88440b160d690344fbae05a0b9b1b upstream. Unlike Intel's Enhanced IBRS feature, AMD's Automatic IBRS does not provide protection to processes running at CPL3/user mode, see section "Extended Feature Enable Register (EFER)" in the APM v2 at https://bugzilla.kernel.org/attachment.cgi?id=304652 Explicitly enable STIBP to protect against cross-thread CPL3 branch target injections on systems with Automatic IBRS enabled. Also update the relevant documentation. Fixes: e7862eda309e ("x86/cpu: Support AMD Automatic IBRS") Reported-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Kim Phillips <kim.phillips@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230720194727.67022-1-kim.phillips@amd.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-03 | x86/MCE/AMD: Decrement threshold_bank refcount when removing threshold blocks | Yazen Ghannam | 1 | -2/+2
commit 3ba2e83334bed2b1980b59734e6e84dfaf96026c upstream. AMD systems from Family 10h to 16h share MCA bank 4 across multiple CPUs. Therefore, the threshold_bank structure for bank 4, and its threshold_block structures, will be initialized once at boot time. And the kobject for the shared bank will be added to each of the CPUs that share it. Furthermore, the threshold_blocks for the shared bank will be added again to the bank's kobject. These additions will increase the refcount for the bank's kobject. For example, a shared bank with two blocks and shared across two CPUs will be set up like this:

  CPU0 init
    bank create and add; bank refcount = 1; threshold_create_bank()
    block 0 init and add; bank refcount = 2; allocate_threshold_blocks()
    block 1 init and add; bank refcount = 3; allocate_threshold_blocks()
  CPU1 init
    bank add; bank refcount = 3; threshold_create_bank()
    block 0 add; bank refcount = 4; __threshold_add_blocks()
    block 1 add; bank refcount = 5; __threshold_add_blocks()

Currently in threshold_remove_bank(), if the bank is shared then __threshold_remove_blocks() is called. Here the shared bank's kobject and the bank's blocks' kobjects are deleted. This is done on the first call even while the structures are still shared. Subsequent calls from other CPUs that share the structures will attempt to delete the kobjects. During kobject_del(), kobject->sd is removed. If the kobject is not part of a kset with default_groups, then subsequent kobject_del() calls seem safe even with kobject->sd == NULL. Originally, the AMD MCA thresholding structures did not use default_groups. And so the above behavior was not apparent. However, a recent change implemented default_groups for the thresholding structures. Therefore, kobject_del() will go down the sysfs_remove_groups() code path. In this case, the first kobject_del() may succeed and remove kobject->sd. But subsequent kobject_del() calls will give a WARNing in kernfs_remove_by_name_ns() since kobject->sd == NULL. Use kobject_put() on the shared bank's kobject when "removing" blocks. This decrements the bank's refcount while keeping kobjects enabled until the bank is no longer shared. At that point, kobject_put() will be called on the blocks, which drives their refcount to 0, deletes them, and also decrements the bank's refcount. And finally kobject_put() will be called on the bank, driving its refcount to 0 and deleting it. The same example as above:

  CPU1 shutdown
    bank is shared; bank refcount = 5; threshold_remove_bank()
    block 0 put parent bank; bank refcount = 4; __threshold_remove_blocks()
    block 1 put parent bank; bank refcount = 3; __threshold_remove_blocks()
  CPU0 shutdown
    bank is no longer shared; bank refcount = 3; threshold_remove_bank()
    block 0 put block; bank refcount = 2; deallocate_threshold_blocks()
    block 1 put block; bank refcount = 1; deallocate_threshold_blocks()
    put bank; bank refcount = 0; threshold_remove_bank()

Fixes: 7f99cb5e6039 ("x86/CPU/AMD: Use default_groups in kobj_type") Reported-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Mikulas Patocka <mpatocka@redhat.com> Cc: <stable@kernel.org> Link: https://lore.kernel.org/r/alpine.LRH.2.02.2205301145540.25840@file01.intranet.prod.int.rdu2.redhat.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
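A sketch of the resulting teardown path; the structure mirrors the description above rather than quoting the upstream diff:

  /* Drop this CPU's references instead of deleting kobjects that
   * other CPUs still share; deletion happens only when the refcount
   * finally reaches zero. */
  static void __threshold_remove_blocks(struct threshold_bank *b)
  {
          struct threshold_block *pos = NULL;
          struct threshold_block *tmp = NULL;

          kobject_put(b->kobj);                   /* was kobject_del() */

          list_for_each_entry_safe(pos, tmp, &b->blocks->miscj, miscj)
                  kobject_put(b->kobj);           /* one ref per added block */
  }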
2023-08-03 | KVM: x86: Disallow KVM_SET_SREGS{2} if incoming CR0 is invalid | Sean Christopherson | 5 | -20/+52
commit 26a0652cb453c72f6aab0974bc4939e9b14f886b upstream. Reject KVM_SET_SREGS{2} with -EINVAL if the incoming CR0 is invalid, e.g. due to setting bits 63:32, illegal combinations, or to a value that isn't allowed in VMX (non-)root mode. The VMX checks in particular are "fun" as failure to disallow Real Mode for an L2 that is configured with unrestricted guest disabled, when KVM itself has unrestricted guest enabled, will result in KVM forcing VM86 mode to virtual Real Mode for L2, but then fail to unwind the related metadata when synthesizing a nested VM-Exit back to L1 (which has unrestricted guest enabled). Opportunistically fix a benign typo in the prototype for is_valid_cr4(). Cc: stable@vger.kernel.org Reported-by: syzbot+5feef0b9ee9c8e9e5689@syzkaller.appspotmail.com Closes: https://lore.kernel.org/all/000000000000f316b705fdf6e2b4@google.com Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20230613203037.1968489-2-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
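A sketch of the added gate; the listed conditions illustrate the "invalid CR0" cases named above, with the vendor (VMX) checks left out:

  /* Reject illegal CR0 before any vCPU state is committed. */
  static bool kvm_is_valid_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
  {
          if (cr0 & 0xffffffff00000000UL)         /* bits 63:32 set */
                  return false;
          if ((cr0 & X86_CR0_NW) && !(cr0 & X86_CR0_CD))
                  return false;                   /* illegal combination */
          if ((cr0 & X86_CR0_PG) && !(cr0 & X86_CR0_PE))
                  return false;                   /* paging without PE */
          return true;    /* vendor code adds (non-)root-mode checks */
  }

  /* KVM_SET_SREGS{2} then bails out early: */
  if (!kvm_is_valid_cr0(vcpu, sregs->cr0))
          return -EINVAL;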
2023-08-03 | KVM: VMX: Don't fudge CR0 and CR4 for restricted L2 guest | Sean Christopherson | 1 | -4/+9
commit c4abd7352023aa96114915a0bb2b88016a425cda upstream. Stuff CR0 and/or CR4 to be compliant with a restricted guest if and only if KVM itself is not configured to utilize unrestricted guests, i.e. don't stuff CR0/CR4 for a restricted L2 that is running as the guest of an unrestricted L1. Any attempt to VM-Enter a restricted guest with invalid CR0/CR4 values should fail, i.e. in a nested scenario, KVM (as L0) should never observe a restricted L2 with incompatible CR0/CR4, since nested VM-Enter from L1 should have failed. And if KVM does observe an active, restricted L2 with incompatible state, e.g. due to a KVM bug, fudging CR0/CR4 instead of letting VM-Enter fail does more harm than good, as KVM will often neglect to undo the side effects, e.g. won't clear rmode.vm86_active on nested VM-Exit, and thus the damage can easily spill over to L1. On the other hand, letting VM-Enter fail due to bad guest state is more likely to contain the damage to L2 as KVM relies on hardware to perform most guest state consistency checks, i.e. KVM needs to be able to reflect a failed nested VM-Enter into L1 irrespective of (un)restricted guest behavior. Cc: Jim Mattson <jmattson@google.com> Cc: stable@vger.kernel.org Fixes: bddd82d19e2e ("KVM: nVMX: KVM needs to unset "unrestricted guest" VM-execution control in vmcs02 if vmcs12 doesn't set it") Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20230613203037.1968489-3-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-03 | x86/traps: Fix load_unaligned_zeropad() handling for shared TDX memory | Kirill A. Shutemov | 1 | -7/+11
[ Upstream commit 9f9116406120638b4d8db3831ffbc430dd2e1e95 ] Commit c4e34dd99f2e ("x86: simplify load_unaligned_zeropad() implementation") changes how exceptions around load_unaligned_zeropad() handled. The kernel now uses the fault_address in fixup_exception() to verify the address calculations for the load_unaligned_zeropad(). It works fine for #PF, but breaks on #VE since no fault address is passed down to fixup_exception(). Propagating ve_info.gla down to fixup_exception() resolves the issue. See commit 1e7769653b06 ("x86/tdx: Handle load_unaligned_zeropad() page-cross to a shared page") for more context. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Reported-by: Michael Kelley <mikelley@microsoft.com> Fixes: c4e34dd99f2e ("x86: simplify load_unaligned_zeropad() implementation") Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-08-03 | s390/mm: fix per vma lock fault handling | Sven Schnelle | 1 | -0/+2
[ Upstream commit 7686762d1ed092db4d120e29b565712c969dc075 ] With per-vma locks, handle_mm_fault() may return non-fatal error flags. In this case the code should reset the fault flags before returning. Fixes: e06f47a16573 ("s390/mm: try VMA lock-based page fault handling first") Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
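Sketched against the generic per-VMA fault pattern, assuming the s390 handler follows it:

  fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
  if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
          vma_end_read(vma);
  if (!(fault & VM_FAULT_RETRY)) {
          /* The fix: non-fatal flags must not reach callers that
           * treat any non-zero value as an error. */
          if (likely(!(fault & VM_FAULT_ERROR)))
                  fault = 0;
          goto out;
  }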
2023-08-03 | KVM: s390: pv: fix index value of replaced ASCE | Claudio Imbrenda | 1 | -0/+1
[ Upstream commit c2fceb59bbda16468bda82b002383bff59de89ab ] The index field of the struct page corresponding to a guest ASCE should be 0. When replacing the ASCE in s390_replace_asce(), the index of the new ASCE should also be set to 0. Having the wrong index might lead to the wrong addresses being passed around when notifying pte invalidations, and eventually to validity intercepts (VM crash) if the prefix gets unmapped and the notifier gets called with the wrong address. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Fixes: faa2f72cb356 ("KVM: s390: pv: leak the topmost page table when destroy fails") Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Message-ID: <20230705111937.33472-3-imbrenda@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-08-03 | KVM: s390: pv: simplify shutdown and fix race | Claudio Imbrenda | 1 | -2/+6
[ Upstream commit 5ff92181577a89ed12ad4e0e5813751faf16a139 ] Simplify the shutdown of non-protected VMs. There is no need to do complex manipulations of the counter if it was zero. This also fixes a very rare race which caused pages to be torn down from the address space with a non-zero counter even on older machines that don't support the UVC instruction, causing a crash. Reported-by: Marc Hartmayer <mhartmay@linux.ibm.com> Fixes: fb491d5500a7 ("KVM: s390: pv: asynchronous destroy for reboot") Reviewed-by: Nico Boehr <nrb@linux.ibm.com> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Message-ID: <20230705111937.33472-2-imbrenda@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-08-03 | powerpc/pseries/vas: Hold mmap_mutex after mmap lock during window close | Haren Myneni | 1 | -2/+7
[ Upstream commit b59c9dc4d9d47b3c4572d826603fde507055b656 ] Commit 8ef7b9e1765a ("powerpc/pseries/vas: Close windows with DLPAR core removal") unmaps the window paste address and issues an HCALL to close the window in the hypervisor for migration or DLPAR core removal events. It therefore holds mmap_mutex and then the mmap lock before unmapping the paste address. But if user space mmaps the paste address at the same time as a migration event, coproc_mmap() is called while holding the mmap lock, which can deadlock when coproc_mmap() then tries to acquire mmap_mutex:

  t1: mmap() of window              t2: Migration event
      paste address
  do_mmap2()                        migration_store()
   ksys_mmap_pgoff()                 pseries_migrate_partition()
    vm_mmap_pgoff()                   vas_migration_handler()
     Acquire mmap lock                 reconfig_close_windows()
     do_mmap()                          lock mmap_mutex
      mmap_region()                     Acquire mmap lock
       call_mmap()                      //Wait for mmap lock
        coproc_mmap()                     unmap vma
         lock mmap_mutex                  update window status
         //wait for mmap_mutex          Release mmap lock
         mmap vma                       unlock mmap_mutex
         update window status
         unlock mmap_mutex
         ...
     Release mmap lock

Fix this deadlock by taking the mmap lock first, before mmap_mutex, in reconfig_close_windows(). Fixes: 8ef7b9e1765a ("powerpc/pseries/vas: Close windows with DLPAR core removal") Signed-off-by: Haren Myneni <haren@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230716100506.7833-1-haren@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
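The resulting lock order, sketched; the struct and field names follow the pseries VAS code as described, with the window teardown details elided:

  static void reconfig_close_windows(struct vas_user_win_ref *task_ref)
  {
          /* Same order coproc_mmap() observes: mmap lock first,
           * then the driver's mmap_mutex. */
          mmap_write_lock(task_ref->mm);
          mutex_lock(&task_ref->mmap_mutex);

          /* unmap the paste address, update window status */

          mutex_unlock(&task_ref->mmap_mutex);
          mmap_write_unlock(task_ref->mm);
  }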
2023-08-03 | KVM: arm64: Handle kvm_arm_init failure correctly in finalize_pkvm | Sudeep Holla | 3 | -2/+10
[ Upstream commit fa729bc7c9c8c17a2481358c841ef8ca920485d3 ] Currently there is no synchronisation between the finalize_pkvm() and kvm_arm_init() initcalls. finalize_pkvm() proceeds happily even if kvm_arm_init() fails, resulting in the following warning on all CPUs and eventually a HYP panic:

  | kvm [1]: IPA Size Limit: 48 bits
  | kvm [1]: Failed to init hyp memory protection
  | kvm [1]: error initializing Hyp mode: -22
  |
  | <snip>
  |
  | WARNING: CPU: 0 PID: 0 at arch/arm64/kvm/pkvm.c:226 _kvm_host_prot_finalize+0x30/0x50
  | Modules linked in:
  | CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.4.0 #237
  | Hardware name: FVP Base RevC (DT)
  | pstate: 634020c5 (nZCv daIF +PAN -UAO +TCO +DIT -SSBS BTYPE=--)
  | pc : _kvm_host_prot_finalize+0x30/0x50
  | lr : __flush_smp_call_function_queue+0xd8/0x230
  |
  | Call trace:
  | _kvm_host_prot_finalize+0x3c/0x50
  | on_each_cpu_cond_mask+0x3c/0x6c
  | pkvm_drop_host_privileges+0x4c/0x78
  | finalize_pkvm+0x3c/0x5c
  | do_one_initcall+0xcc/0x240
  | do_initcall_level+0x8c/0xac
  | do_initcalls+0x54/0x94
  | do_basic_setup+0x1c/0x28
  | kernel_init_freeable+0x100/0x16c
  | kernel_init+0x20/0x1a0
  | ret_from_fork+0x10/0x20
  | Failed to finalize Hyp protection: -22
  | dtb=fvp-base-revc.dtb
  | kvm [95]: nVHE hyp BUG at: arch/arm64/kvm/hyp/nvhe/mem_protect.c:540!
  | kvm [95]: nVHE call trace:
  | kvm [95]: [<ffff800081052984>] __kvm_nvhe_hyp_panic+0xac/0xf8
  | kvm [95]: [<ffff800081059644>] __kvm_nvhe_handle_host_mem_abort+0x1a0/0x2ac
  | kvm [95]: [<ffff80008105511c>] __kvm_nvhe_handle_trap+0x4c/0x160
  | kvm [95]: [<ffff8000810540fc>] __kvm_nvhe___skip_pauth_save+0x4/0x4
  | kvm [95]: ---[ end nVHE call trace ]---
  | kvm [95]: Hyp Offset: 0xfffe8db00ffa0000
  | Kernel panic - not syncing: HYP panic:
  | PS:a34023c9 PC:0000f250710b973c ESR:00000000f2000800
  | FAR:ffff000800cb00d0 HPFAR:000000000880cb00 PAR:0000000000000000
  | VCPU:0000000000000000
  | CPU: 3 PID: 95 Comm: kworker/u16:2 Tainted: G W 6.4.0 #237
  | Hardware name: FVP Base RevC (DT)
  | Workqueue: rpciod rpc_async_schedule
  | Call trace:
  | dump_backtrace+0xec/0x108
  | show_stack+0x18/0x2c
  | dump_stack_lvl+0x50/0x68
  | dump_stack+0x18/0x24
  | panic+0x138/0x33c
  | nvhe_hyp_panic_handler+0x100/0x184
  | new_slab+0x23c/0x54c
  | ___slab_alloc+0x3e4/0x770
  | kmem_cache_alloc_node+0x1f0/0x278
  | __alloc_skb+0xdc/0x294
  | tcp_stream_alloc_skb+0x2c/0xf0
  | tcp_sendmsg_locked+0x3d0/0xda4
  | tcp_sendmsg+0x38/0x5c
  | inet_sendmsg+0x44/0x60
  | sock_sendmsg+0x1c/0x34
  | xprt_sock_sendmsg+0xdc/0x274
  | xs_tcp_send_request+0x1ac/0x28c
  | xprt_transmit+0xcc/0x300
  | call_transmit+0x78/0x90
  | __rpc_execute+0x114/0x3d8
  | rpc_async_schedule+0x28/0x48
  | process_one_work+0x1d8/0x314
  | worker_thread+0x248/0x474
  | kthread+0xfc/0x184
  | ret_from_fork+0x10/0x20
  | SMP: stopping secondary CPUs
  | Kernel Offset: 0x57c5cb460000 from 0xffff800080000000
  | PHYS_OFFSET: 0x80000000
  | CPU features: 0x00000000,1035b7a3,ccfe773f
  | Memory Limit: none
  | ---[ end Kernel panic - not syncing: HYP panic:
  | PS:a34023c9 PC:0000f250710b973c ESR:00000000f2000800
  | FAR:ffff000800cb00d0 HPFAR:000000000880cb00 PAR:0000000000000000
  | VCPU:0000000000000000 ]---

Fix it by checking for the successful initialisation of kvm_arm_init() in finalize_pkvm() before proceeding any further. Fixes: 87727ba2bb05 ("KVM: arm64: Ensure CPU PMU probes before pKVM host de-privilege") Cc: Will Deacon <will@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Oliver Upton <oliver.upton@linux.dev> Cc: James Morse <james.morse@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20230704193243.3300506-1-sudeep.holla@arm.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Sasha Levin <sashal@kernel.org>
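A sketch of the added gate; kvm_arm_initialised stands for the flag the fix introduces to publish kvm_arm_init()'s success:

  static bool kvm_arm_initialised;   /* set at the end of kvm_arm_init() */

  static int __init finalize_pkvm(void)
  {
          int ret;

          if (!is_protected_kvm_enabled() || !kvm_arm_initialised)
                  return 0;       /* init failed: do not de-privilege */

          ret = pkvm_drop_host_privileges();
          if (ret)
                  pr_err("Failed to finalize Hyp protection: %d\n", ret);
          return ret;
  }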
2023-07-27 | bpf, arm64: Fix BTI type used for freplace attached functions | Alexander Duyck | 1 | -1/+7
[ Upstream commit a3f25d614bc73b45e8f02adc6769876dfd16ca84 ] When running an freplace-attached bpf program on an arm64 system we were seeing the following issue:

  Unhandled 64-bit el1h sync exception on CPU47, ESR 0x0000000036000003 -- BTI

After a bit of work to track it down I determined that what appeared to be happening is that the 'bti c' at the start of the program was somehow being reached after a 'br' instruction. Further digging pointed me toward the fact that the function was attached via freplace. This in turn led me to build_plt, which I believe is invoking the long jump that is triggering this error. To resolve it we can replace the 'bti c' with 'bti jc' and add a comment explaining why this has to be modified as such. Fixes: b2ad54e1533e ("bpf, arm64: Implement bpf_arch_text_poke() for arm64") Signed-off-by: Alexander Duyck <alexanderduyck@fb.com> Acked-by: Xu Kuohai <xukuohai@huawei.com> Link: https://lore.kernel.org/r/168926677665.316237.9953845318337455525.stgit@ahduyck-xeon-server.home.arpa Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
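The change, sketched; emit_bti() and the A64_BTI_* constants follow the arm64 JIT naming implied above, with the rest of the prologue elided:

  /* The prologue can now be reached two ways: via 'blr' (a normal
   * indirect call) and via 'br' (the freplace long jump), so the
   * BTI landing pad must accept both. */
  emit_bti(A64_BTI_JC, ctx);      /* was: emit_bti(A64_BTI_C, ctx) */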
2023-07-27 | arm64: Fix HFGxTR_EL2 field naming | Marc Zyngier | 1 | -6/+6
[ Upstream commit 55b87b74996383230586f4f9f801ae304c70e649 ] The HFGxTR_EL2 fields do not always follow the naming described in the spec, nor do they match the name of the register they trap in the rest of the kernel. It is a bit sad that they were written by hand despite the availability of a machine readable version... Fixes: cc077e7facbe ("arm64/sysreg: Convert HFG[RW]TR_EL2 to automatic generation") Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: Mark Brown <broonie@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.org> Cc: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230703130416.1495307-1-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-07-27 | arm64: mm: fix VA-range sanity check | Mark Rutland | 1 | -2/+2
[ Upstream commit ab9b4008092c86dc12497af155a0901cc1156999 ] Both create_mapping_noalloc() and update_mapping_prot() sanity-check their 'virt' parameter, but the check itself doesn't make much sense. The condition used today appears to be a historical accident. The sanity-check condition:

  if ((virt >= PAGE_END) && (virt < VMALLOC_START)) {
          [ ... warning here ... ]
          return;
  }

... can only be true for the KASAN shadow region or the module region, and there's no reason to exclude these specifically for creating and updating mappings. When arm64 support was first upstreamed in commit c1cc1552616d0f35 ("arm64: MMU initialisation"), the condition was:

  if (virt < VMALLOC_START) {
          [ ... warning here ... ]
          return;
  }

At the time, VMALLOC_START was the lowest kernel address, and this was checking whether 'virt' would be translated via TTBR1. Subsequently in commit 14c127c957c1c607 ("arm64: mm: Flip kernel VA space"), the condition was changed to:

  if ((virt >= VA_START) && (virt < VMALLOC_START)) {
          [ ... warning here ... ]
          return;
  }

This appears to have been a thinko. The commit moved the linear map to the bottom of the kernel address space, with VMALLOC_START being at the halfway point. The old condition would warn for changes to the linear map below this, and at the time VA_START was the end of the linear map. Subsequently we cleaned up the naming of VA_START in commit 77ad4ce69321abbe ("arm64: memory: rename VA_START to PAGE_END"), keeping the erroneous condition as:

  if ((virt >= PAGE_END) && (virt < VMALLOC_START)) {
          [ ... warning here ... ]
          return;
  }

Correct the condition to check against the start of the TTBR1 address space, which is currently PAGE_OFFSET. This simplifies the logic, and more clearly matches the "outside kernel range" message in the warning. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Steve Capper <steve.capper@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://lore.kernel.org/r/20230615102628.1052103-1-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
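The corrected check, as a sketch of the change described (warning text illustrative):

  /* Warn and bail for anything below the TTBR1 address space,
   * i.e. any address not mapped via the kernel tables. */
  if (virt < PAGE_OFFSET) {
          pr_warn("BUG: not updating mapping for 0x%016lx - outside kernel range\n",
                  virt);
          return;
  }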
2023-07-27 | arm64: set __exception_irq_entry with __irq_entry as a default | Youngmin Nam | 1 | -5/+0
[ Upstream commit f6794950f0e5ba37e3bbedda4d6ab0aad7395dd3 ] filter_irq_stacks() is supposed to cut irq-entry-related entries from its call stack, and in_irqentry_text(), which is called by filter_irq_stacks(), uses the __irqentry_text_start/end symbols to find irq entries in the call stack. But it doesn't work correctly because, without CONFIG_FUNCTION_GRAPH_TRACER, the arm64 kernel doesn't place gic_handle_irq, the arm64 irq entry point, between __irqentry_text_start and __irqentry_text_end, as we discussed in the link below: https://lore.kernel.org/all/CACT4Y+aReMGLYua2rCLHgFpS9io5cZC04Q8GLs-uNmrn1ezxYQ@mail.gmail.com/#t This problem can produce unintentionally deep call stack entries, especially with KASAN enabled, as below:

  [ 2479.383395]I[0:launcher-loader: 1719] Stack depot reached limit capacity
  [ 2479.383538]I[0:launcher-loader: 1719] WARNING: CPU: 0 PID: 1719 at lib/stackdepot.c:129 __stack_depot_save+0x464/0x46c
  [ 2479.385693]I[0:launcher-loader: 1719] pstate: 624000c5 (nZCv daIF +PAN -UAO +TCO -DIT -SSBS BTYPE=--)
  [ 2479.385724]I[0:launcher-loader: 1719] pc : __stack_depot_save+0x464/0x46c
  [ 2479.385751]I[0:launcher-loader: 1719] lr : __stack_depot_save+0x460/0x46c
  [ 2479.385774]I[0:launcher-loader: 1719] sp : ffffffc0080073c0
  [ 2479.385793]I[0:launcher-loader: 1719] x29: ffffffc0080073e0 x28: ffffffd00b78a000 x27: 0000000000000000
  [ 2479.385839]I[0:launcher-loader: 1719] x26: 000000000004d1dd x25: ffffff891474f000 x24: 00000000ca64d1dd
  [ 2479.385882]I[0:launcher-loader: 1719] x23: 0000000000000200 x22: 0000000000000220 x21: 0000000000000040
  [ 2479.385925]I[0:launcher-loader: 1719] x20: ffffffc008007440 x19: 0000000000000000 x18: 0000000000000000
  [ 2479.385969]I[0:launcher-loader: 1719] x17: 2065726568207475 x16: 000000000000005e x15: 2d2d2d2d2d2d2d20
  [ 2479.386013]I[0:launcher-loader: 1719] x14: 5d39313731203a72 x13: 00000000002f6b30 x12: 00000000002f6af8
  [ 2479.386057]I[0:launcher-loader: 1719] x11: 00000000ffffffff x10: ffffffb90aacf000 x9 : e8a74a6c16008800
  [ 2479.386101]I[0:launcher-loader: 1719] x8 : e8a74a6c16008800 x7 : 00000000002f6b30 x6 : 00000000002f6af8
  [ 2479.386145]I[0:launcher-loader: 1719] x5 : ffffffc0080070c8 x4 : ffffffd00b192380 x3 : ffffffd0092b313c
  [ 2479.386189]I[0:launcher-loader: 1719] x2 : 0000000000000001 x1 : 0000000000000004 x0 : 0000000000000022
  [ 2479.386231]I[0:launcher-loader: 1719] Call trace:
  [ 2479.386248]I[0:launcher-loader: 1719] __stack_depot_save+0x464/0x46c
  [ 2479.386273]I[0:launcher-loader: 1719] kasan_save_stack+0x58/0x70
  [ 2479.386303]I[0:launcher-loader: 1719] save_stack_info+0x34/0x138
  [ 2479.386331]I[0:launcher-loader: 1719] kasan_save_free_info+0x18/0x24
  [ 2479.386358]I[0:launcher-loader: 1719] ____kasan_slab_free+0x16c/0x170
  [ 2479.386385]I[0:launcher-loader: 1719] __kasan_slab_free+0x10/0x20
  [ 2479.386410]I[0:launcher-loader: 1719] kmem_cache_free+0x238/0x53c
  [ 2479.386435]I[0:launcher-loader: 1719] mempool_free_slab+0x1c/0x28
  [ 2479.386460]I[0:launcher-loader: 1719] mempool_free+0x7c/0x1a0
  [ 2479.386484]I[0:launcher-loader: 1719] bvec_free+0x34/0x80
  [ 2479.386514]I[0:launcher-loader: 1719] bio_free+0x60/0x98
  [ 2479.386540]I[0:launcher-loader: 1719] bio_put+0x50/0x21c
  [ 2479.386567]I[0:launcher-loader: 1719] f2fs_write_end_io+0x4ac/0x4d0
  [ 2479.386594]I[0:launcher-loader: 1719] bio_endio+0x2dc/0x300
  [ 2479.386622]I[0:launcher-loader: 1719] __dm_io_complete+0x324/0x37c
  [ 2479.386650]I[0:launcher-loader: 1719] dm_io_dec_pending+0x60/0xa4
  [ 2479.386676]I[0:launcher-loader: 1719] clone_endio+0xf8/0x2f0
  [ 2479.386700]I[0:launcher-loader: 1719] bio_endio+0x2dc/0x300
  [ 2479.386727]I[0:launcher-loader: 1719] blk_update_request+0x258/0x63c
  [ 2479.386754]I[0:launcher-loader: 1719] scsi_end_request+0x50/0x304
  [ 2479.386782]I[0:launcher-loader: 1719] scsi_io_completion+0x88/0x160
  [ 2479.386808]I[0:launcher-loader: 1719] scsi_finish_command+0x17c/0x194
  [ 2479.386833]I[0:launcher-loader: 1719] scsi_complete+0xcc/0x158
  [ 2479.386859]I[0:launcher-loader: 1719] blk_mq_complete_request+0x4c/0x5c
  [ 2479.386885]I[0:launcher-loader: 1719] scsi_done_internal+0xf4/0x1e0
  [ 2479.386910]I[0:launcher-loader: 1719] scsi_done+0x14/0x20
  [ 2479.386935]I[0:launcher-loader: 1719] ufshcd_compl_one_cqe+0x578/0x71c
  [ 2479.386963]I[0:launcher-loader: 1719] ufshcd_mcq_poll_cqe_nolock+0xc8/0x150
  [ 2479.386991]I[0:launcher-loader: 1719] ufshcd_intr+0x868/0xc0c
  [ 2479.387017]I[0:launcher-loader: 1719] __handle_irq_event_percpu+0xd0/0x348
  [ 2479.387044]I[0:launcher-loader: 1719] handle_irq_event_percpu+0x24/0x74
  [ 2479.387068]I[0:launcher-loader: 1719] handle_irq_event+0x74/0xe0
  [ 2479.387091]I[0:launcher-loader: 1719] handle_fasteoi_irq+0x174/0x240
  [ 2479.387118]I[0:launcher-loader: 1719] handle_irq_desc+0x7c/0x2c0
  [ 2479.387147]I[0:launcher-loader: 1719] generic_handle_domain_irq+0x1c/0x28
  [ 2479.387174]I[0:launcher-loader: 1719] gic_handle_irq+0x64/0x158
  [ 2479.387204]I[0:launcher-loader: 1719] call_on_irq_stack+0x2c/0x54
  [ 2479.387231]I[0:launcher-loader: 1719] do_interrupt_handler+0x70/0xa0
  [ 2479.387258]I[0:launcher-loader: 1719] el1_interrupt+0x34/0x68
  [ 2479.387283]I[0:launcher-loader: 1719] el1h_64_irq_handler+0x18/0x24
  [ 2479.387308]I[0:launcher-loader: 1719] el1h_64_irq+0x68/0x6c
  [ 2479.387332]I[0:launcher-loader: 1719] blk_attempt_bio_merge+0x8/0x170
  [ 2479.387356]I[0:launcher-loader: 1719] blk_mq_attempt_bio_merge+0x78/0x98
  [ 2479.387383]I[0:launcher-loader: 1719] blk_mq_submit_bio+0x324/0xa40
  [ 2479.387409]I[0:launcher-loader: 1719] __submit_bio+0x104/0x138
  [ 2479.387436]I[0:launcher-loader: 1719] submit_bio_noacct_nocheck+0x1d0/0x4a0
  [ 2479.387462]I[0:launcher-loader: 1719] submit_bio_noacct+0x618/0x804
  [ 2479.387487]I[0:launcher-loader: 1719] submit_bio+0x164/0x180
  [ 2479.387511]I[0:launcher-loader: 1719] f2fs_submit_read_bio+0xe4/0x1c4
  [ 2479.387537]I[0:launcher-loader: 1719] f2fs_mpage_readpages+0x888/0xa4c
  [ 2479.387563]I[0:launcher-loader: 1719] f2fs_readahead+0xd4/0x19c
  [ 2479.387587]I[0:launcher-loader: 1719] read_pages+0xb0/0x4ac
  [ 2479.387614]I[0:launcher-loader: 1719] page_cache_ra_unbounded+0x238/0x288
  [ 2479.387642]I[0:launcher-loader: 1719] do_page_cache_ra+0x60/0x6c
  [ 2479.387669]I[0:launcher-loader: 1719] page_cache_ra_order+0x318/0x364
  [ 2479.387695]I[0:launcher-loader: 1719] ondemand_readahead+0x30c/0x3d8
  [ 2479.387722]I[0:launcher-loader: 1719] page_cache_sync_ra+0xb4/0xc8
  [ 2479.387749]I[0:launcher-loader: 1719] filemap_read+0x268/0xd24
  [ 2479.387777]I[0:launcher-loader: 1719] f2fs_file_read_iter+0x1a0/0x62c
  [ 2479.387806]I[0:launcher-loader: 1719] vfs_read+0x258/0x34c
  [ 2479.387831]I[0:launcher-loader: 1719] ksys_pread64+0x8c/0xd0
  [ 2479.387857]I[0:launcher-loader: 1719] __arm64_sys_pread64+0x48/0x54
  [ 2479.387881]I[0:launcher-loader: 1719] invoke_syscall+0x58/0x158
  [ 2479.387909]I[0:launcher-loader: 1719] el0_svc_common+0xf0/0x134
  [ 2479.387935]I[0:launcher-loader: 1719] do_el0_svc+0x44/0x114
  [ 2479.387961]I[0:launcher-loader: 1719] el0_svc+0x2c/0x80
  [ 2479.387985]I[0:launcher-loader: 1719] el0t_64_sync_handler+0x48/0x114
  [ 2479.388010]I[0:launcher-loader: 1719] el0t_64_sync+0x190/0x194
  [ 2479.388038]I[0:launcher-loader: 1719] Kernel panic - not syncing: kernel: panic_on_warn set ...

So let's set __exception_irq_entry with __irq_entry as a default. Applying this patch, we can see gic_handle_irq is included in System.map as below.

* Before

  ffffffc008010000 T __do_softirq
  ffffffc008010000 T __irqentry_text_end
  ffffffc008010000 T __irqentry_text_start
  ffffffc008010000 T __softirqentry_text_start
  ffffffc008010000 T _stext
  ffffffc00801066c T __softirqentry_text_end
  ffffffc008010670 T __entry_text_start

* After

  ffffffc008010000 T __irqentry_text_start
  ffffffc008010000 T _stext
  ffffffc008010000 t gic_handle_irq
  ffffffc00801013c t gic_handle_irq
  ffffffc008010294 T __irqentry_text_end
  ffffffc008010298 T __do_softirq
  ffffffc008010298 T __softirqentry_text_start
  ffffffc008010904 T __softirqentry_text_end
  ffffffc008010908 T __entry_text_start

Signed-off-by: Youngmin Nam <youngmin.nam@samsung.com> Signed-off-by: SEO HOYOUNG <hy50.seo@samsung.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20230424010436.779733-1-youngmin.nam@samsung.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-07-27 | MIPS: dec: prom: Address -Warray-bounds warning | Gustavo A. R. Silva | 1 | -1/+1
[ Upstream commit 7b191b9b55df2a844bd32d1d380f47a7df1c2896 ] Zero-length arrays are deprecated, and we are replacing them with flexible array members instead. So, replace zero-length array with flexible-array member in struct memmap. Address the following warning found after building (with GCC-13) mips64 with decstation_64_defconfig:

  In function 'rex_setup_memory_region',
      inlined from 'prom_meminit' at arch/mips/dec/prom/memory.c:91:3:
  arch/mips/dec/prom/memory.c:72:31: error: array subscript i is outside array bounds of 'unsigned char[0]' [-Werror=array-bounds=]
     72 |                 if (bm->bitmap[i] == 0xff)
        |                     ~~~~~~~~~~^~~
  In file included from arch/mips/dec/prom/memory.c:16:
  ./arch/mips/include/asm/dec/prom.h: In function 'prom_meminit':
  ./arch/mips/include/asm/dec/prom.h:73:23: note: while referencing 'bitmap'
     73 |         unsigned char bitmap[0];

This helps with the ongoing efforts to globally enable -Warray-bounds. This results in no differences in binary output. Link: https://github.com/KSPP/linux/issues/79 Link: https://github.com/KSPP/linux/issues/323 Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
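The one-line change, sketched; the field layout follows the warning quoted above:

  typedef struct {
          int pagesize;
          unsigned char bitmap[];         /* was: bitmap[0] */
  } memmap;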
2023-07-27 | KVM: arm64: vgic-v4: Make the doorbell request robust w.r.t preemption | Marc Zyngier | 4 | -5/+12
commit b321c31c9b7b309dcde5e8854b741c8e6a9a05f0 upstream. Xiang reports that VMs occasionally fail to boot on GICv4.1 systems when running a preemptible kernel, as it is possible that a vCPU is blocked without requesting a doorbell interrupt. The issue is that any preemption that occurs between vgic_v4_put() and schedule() on the block path will mark the vPE as nonresident and *not* request a doorbell irq. This occurs because when the vcpu thread is resumed on its way to block, vcpu_load() will make the vPE resident again. Once the vcpu actually blocks, we don't request a doorbell anymore, and the vcpu won't be woken up on interrupt delivery. Fix it by tracking that we're entering WFI, and key the doorbell request on that flag. This allows us not to make the vPE resident when going through a preempt/schedule cycle, meaning we don't lose any state. Cc: stable@vger.kernel.org Fixes: 8e01d9a396e6 ("KVM: arm64: vgic-v4: Move the GICv4 residency flow to be driven by vcpu_load/put") Reported-by: Xiang Chen <chenxiang66@hisilicon.com> Suggested-by: Zenghui Yu <yuzenghui@huawei.com> Tested-by: Xiang Chen <chenxiang66@hisilicon.com> Co-developed-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Zenghui Yu <yuzenghui@huawei.com> Link: https://lore.kernel.org/r/20230713070657.3873244-1-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-27 | KVM: arm64: Disable preemption in kvm_arch_hardware_enable() | Marc Zyngier | 1 | -1/+12
commit 970dee09b230895fe2230d2b32ad05a2826818c6 upstream. Since 0bf50497f03b ("KVM: Drop kvm_count_lock and instead protect kvm_usage_count with kvm_lock"), hotplugging back a CPU whilst a guest is running results in a number of ugly splats as most of this code expects to run with preemption disabled, which isn't the case anymore. While the context is preemptable, it isn't migratable, which should be enough. But we have plenty of preemptible() checks all over the place, and our per-CPU accessors also disable preemption. Since this affects released versions, let's do the easy fix first, disabling preemption in kvm_arch_hardware_enable(). We can always revisit this with a more invasive fix in the future. Fixes: 0bf50497f03b ("KVM: Drop kvm_count_lock and instead protect kvm_usage_count with kvm_lock") Reported-by: Kristina Martsenko <kristina.martsenko@arm.com> Tested-by: Kristina Martsenko <kristina.martsenko@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/aeab7562-2d39-e78e-93b1-4711f8cc3fa5@arm.com Cc: stable@vger.kernel.org # v6.3, v6.4 Link: https://lore.kernel.org/r/20230703163548.1498943-1-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
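The easy fix, sketched; the per-CPU flag name follows the arm64 code described above, with the actual enable sequence elided:

  int kvm_arch_hardware_enable(void)
  {
          /* Preemptible-but-non-migratable is not enough for the
           * per-CPU accessors and preemptible() checks below. */
          preempt_disable();

          if (!__this_cpu_read(kvm_arm_hardware_enabled)) {
                  /* enable hyp mode, timers, vgic, PMU on this CPU */
                  __this_cpu_write(kvm_arm_hardware_enabled, 1);
          }

          preempt_enable();
          return 0;
  }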
2023-07-27 | KVM: arm64: Correctly handle page aging notifiers for unaligned memslot | Oliver Upton | 3 | -36/+55
commit df6556adf27b7372cfcd97e1c0afb0d516c8279f upstream. Userspace is allowed to select any PAGE_SIZE aligned hva to back guest memory. This is even the case with hugepages, although it is a rather suboptimal configuration as PTE level mappings are used at stage-2. The arm64 page aging handlers have an assumption that the specified range is exactly one page/block of memory, which in the aforementioned case is not necessarily true. All together this leads to the WARN() in kvm_age_gfn() firing. However, the WARN is only part of the issue as the table walkers visit at most a single leaf PTE. For hugepage-backed memory in a memslot that isn't hugepage-aligned, page aging entirely misses accesses to the hugepage beyond the first page in the memslot. Add a new walker dedicated to handling page aging MMU notifiers capable of walking a range of PTEs. Convert kvm(_test)_age_gfn() over to the new walker and drop the WARN that caught the issue in the first place. The implementation of this walker was inspired by the test_clear_young() implementation by Yu Zhao [*], but repurposed to address a bug in the existing aging implementation. Cc: stable@vger.kernel.org # v5.15 Fixes: 056aad67f836 ("kvm: arm/arm64: Rework gpa callback handlers") Link: https://lore.kernel.org/kvmarm/20230526234435.662652-6-yuzhao@google.com/ Co-developed-by: Yu Zhao <yuzhao@google.com> Signed-off-by: Yu Zhao <yuzhao@google.com> Reported-by: Reiji Watanabe <reijiw@google.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Shaoqin Huang <shahuang@redhat.com> Link: https://lore.kernel.org/r/20230627235405.4069823-1-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-27 | KVM: arm64: timers: Use CNTHCTL_EL2 when setting non-CNTKCTL_EL1 bits | Marc Zyngier | 1 | -3/+3
commit fe769e6c1f80f542d6f4e7f7c8c6bf20c1307f99 upstream. It recently appeared that, when running VHE, there is a notable difference between using CNTKCTL_EL1 and CNTHCTL_EL2, despite what the architecture documents:

  - When accessed from EL2, bits [19:18] and [16:10] of CNTKCTL_EL1 have the same assignment as CNTHCTL_EL2
  - When accessed from EL1, bits [19:18] and [16:10] are RES0

It is all OK, until you factor in NV, where the EL2 guest runs at EL1. In this configuration, CNTKCTL_EL1 doesn't trap, nor ends up in the VNCR page. This means that any write from the guest affecting CNTHCTL_EL2 using CNTKCTL_EL1 ends up losing some state. Not good. The fix is obvious: don't use CNTKCTL_EL1 if you want to change bits that are not part of the EL1 definition of CNTKCTL_EL1, and use CNTHCTL_EL2 instead. This doesn't change anything for a bare-metal OS, and fixes it when running under NV. The NV hypervisor will itself have to work harder to merge the two accessors. Note that there is a pending update to the architecture to address this issue by making the affected bits UNKNOWN when CNTKCTL_EL1 is used from EL2 with VHE enabled. Fixes: c605ee245097 ("KVM: arm64: timers: Allow physical offset without CNTPOFF_EL2") Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: stable@vger.kernel.org # v6.4 Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/20230627140557.544885-1-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-27 | arm64/fpsimd: Ensure SME storage is allocated after SVE VL changes | Mark Brown | 1 | -8/+25
commit d4d5be94a87872421ea2569044092535aff0b886 upstream. When we reconfigure the SVE vector length we discard the backing storage for the SVE vectors and then reallocate on next SVE use, leaving the SME specific state alone. This means that we do not enable SME traps if they were already disabled. That means that userspace code can enter streaming mode without trapping, putting the task in a state where if we try to save the state of the task we will fault. Since the ABI does not specify that changing the SVE vector length disturbs SME state, and since SVE code may not be aware of SME code in the process, we shouldn't simply discard any ZA state. Instead immediately reallocate the storage for SVE, and disable SME if we change the SVE vector length while there is no SME state active. Disabling SME traps on SVE vector length changes would make the overall code more complex since we would have a state where we have valid SME state stored but might get a SME trap. Fixes: 9e4ab6c89109 ("arm64/sme: Implement vector length configuration prctl()s") Reported-by: David Spickett <David.Spickett@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230720-arm64-fix-sve-sme-vl-change-v2-1-8eea06b82d57@kernel.org Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-27 | ia64: mmap: Consider pgoff when searching for free mapping | Helge Deller | 1 | -1/+1
commit 07e981137f17e5275b6fa5fd0c28b0ddb4519702 upstream. IA64 is the only architecture which does not consider the pgoff value when searching for a possible free memory region with vm_unmapped_area(). Adding this seems to have no negative side effect on IA64, so add it now to make IA64 consistent with all other architectures. Cc: stable@vger.kernel.org # 6.4 Signed-off-by: Helge Deller <deller@gmx.de> Tested-by: matoro <matoro_mailinglist_kernel@matoro.tk> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: linux-ia64@vger.kernel.org Link: https://lore.kernel.org/r/20230721152432.196382-3-deller@gmx.de Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-27 | io_uring: Fix io_uring mmap() by using architecture-provided get_unmapped_area() | Helge Deller | 1 | -5/+10
commit 32832a407a7178eec3215fad9b1a3298c14b0d69 upstream. The io_uring testcase is broken on IA-64 since commit d808459b2e31 ("io_uring: Adjust mapping wrt architecture aliasing requirements"). The reason is that this commit introduced its own architecture-independent get_unmapped_area() search algorithm, which on IA-64 finds a memory region outside of the regular memory region used for shared userspace mappings, one which can't be used on that platform due to aliasing. To avoid similar problems on IA-64 and other platforms in the future, it's better to switch back to the architecture-provided get_unmapped_area() function and adjust the needed input parameters before the call. Besides fixing the issue, the function now becomes easier to understand and maintain. This patch has been successfully tested with the io_uring testcase on physical x86-64, ppc64le, IA-64 and PA-RISC machines. On PA-RISC the LTP mmap testcases did not report any regressions. Cc: stable@vger.kernel.org # 6.4 Signed-off-by: Helge Deller <deller@gmx.de> Reported-by: matoro <matoro_mailinglist_kernel@matoro.tk> Fixes: d808459b2e31 ("io_uring: Adjust mapping wrt architecture aliasing requirements") Link: https://lore.kernel.org/r/20230721152432.196382-2-deller@gmx.de Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
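The core of the approach, sketched against the 6.4-era mm API; parameter adjustment is simplified away:

  static unsigned long io_uring_mmu_get_unmapped_area(struct file *filp,
                  unsigned long addr, unsigned long len,
                  unsigned long pgoff, unsigned long flags)
  {
          /* Adjust the inputs for a shared-mapping-style placement,
           * then let the architecture handle cache aliasing (PA-RISC,
           * IA-64, ...) instead of searching ourselves. */
          return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags);
  }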
2023-07-24 | x86/cpu/amd: Add a Zenbleed fix | Borislav Petkov (AMD) | 5 | -0/+66
commit 522b1d69219d8f083173819fde04f994aa051a98 upstream. Add a fix for the Zen2 VZEROUPPER data corruption bug where under certain circumstances executing VZEROUPPER can cause register corruption or leak data. The optimal fix is through microcode but in the case the proper microcode revision has not been applied, enable a fallback fix using a chicken bit. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
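The fallback path, sketched; the MSR and bit names follow the fix's naming, while cpu_has_zen2_erratum() and good_rev_for_model() are hypothetical stand-ins for the erratum and microcode-revision checks:

  static void zenbleed_check(struct cpuinfo_x86 *c)
  {
          if (!cpu_has_zen2_erratum(c))           /* hypothetical check */
                  return;

          /* No fixed microcode loaded: set the chicken bit that
           * disables the buggy behavior as a fallback. */
          if (c->microcode < good_rev_for_model(c))
                  msr_set_bit(MSR_AMD64_DE_CFG,
                              MSR_AMD64_DE_CFG_ZEN2_FP_BACKUP_FIX_BIT);
  }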
2023-07-24 | x86/cpu/amd: Move the errata checking functionality up | Borislav Petkov (AMD) | 1 | -72/+67
commit 8b6f687743dacce83dbb0c7cfacf88bab00f808a upstream. Avoid new and remove old forward declarations. No functional changes. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-23 | MIPS: kvm: Fix build error with KVM_MIPS_DEBUG_COP0_COUNTERS enabled | Thomas Bogendoerfer | 1 | -2/+2
commit 3a6dbb691782e88e07e5c70b327495dbd58a2e7f upstream. Commit e4de20576986 ("MIPS: KVM: Fix NULL pointer dereference") missed converting one place accessing cop0 registers, which results in a build error, if KVM_MIPS_DEBUG_COP0_COUNTERS is enabled. Fixes: e4de20576986 ("MIPS: KVM: Fix NULL pointer dereference") Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-23 | perf/x86: Fix lockdep warning in for_each_sibling_event() on SPR | Namhyung Kim | 1 | -0/+7
commit 27c68c216ee1f1b086e789a64486e6511e380b8a upstream. On SPR, the load latency event needs an auxiliary event in the same group to work properly. There's a check in intel_pmu_hw_config() for this, which iterates sibling events to find a mem-loads-aux event. The for_each_sibling_event() has a lockdep assert to make sure it either disabled hardirqs or holds leader->ctx->mutex. This works well if the given event has a separate leader event, since perf_try_init_event() grabs leader->ctx->mutex to protect the sibling list. But it can cause a problem when the event itself is a leader, since the event is not initialized yet and there's no ctx for the event. Actually I got a lockdep warning when I ran the below command on SPR, but I guess it could also be a NULL pointer dereference:

  $ perf record -d -e cpu/mem-loads/uP true

The code path to the warning is:

  sys_perf_event_open()
    perf_event_alloc()
      perf_init_event()
        perf_try_init_event()
          x86_pmu_event_init()
            hsw_hw_config()
              intel_pmu_hw_config()
                for_each_sibling_event()
                  lockdep_assert_event_ctx()

We don't need for_each_sibling_event() when it's a standalone event. Let's return the error code directly. Fixes: f3c0eba28704 ("perf: Add a few assertions") Reported-by: Greg Thelen <gthelen@google.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20230704181516.3293665-1-namhyung@kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
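The early-out, sketched; requires_mem_loads_aux_event() is a hypothetical predicate for "this is an SPR load-latency event", and the -ENODATA return mirrors the existing no-aux-event error path:

  if (requires_mem_loads_aux_event(event)) {      /* hypothetical */
          struct perf_event *leader = event->group_leader;
          struct perf_event *sibling;
          bool found = false;

          /* A standalone leader has no ctx yet and cannot carry the
           * mem-loads-aux sibling: fail before walking the list. */
          if (leader == event)
                  return -ENODATA;

          for_each_sibling_event(sibling, leader)
                  found |= is_mem_loads_aux_event(sibling);
          if (!found)
                  return -ENODATA;
  }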
2023-07-23 | xtensa: ISS: fix call to split_if_spec | Max Filippov | 1 | -1/+1
commit bc8d5916541fa19ca5bc598eb51a5f78eb891a36 upstream. split_if_spec expects a NULL-pointer as an end marker for the argument list, but tuntap_probe never supplied that terminating NULL. As a result incorrectly formatted interface specification string may cause a crash because of the random memory access. Fix that by adding NULL terminator to the split_if_spec argument list. Cc: stable@vger.kernel.org Fixes: 7282bee78798 ("[PATCH] xtensa: Architecture support for Tensilica Xtensa Part 8") Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
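The shape of the bug, sketched; the argument names are illustrative of the tuntap_probe() call described:

  /* split_if_spec() scans its varargs until a NULL sentinel: */
  static char *split_if_spec(char *str, ...);

  /* ... so the caller must supply the terminator itself: */
  str = split_if_spec(str, &mac_str, &dev_name, NULL);  /* NULL was missing */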
2023-07-23 | s390/decompressor: fix misaligned symbol build error | Heiko Carstens | 1 | -0/+1
commit 938f0c35d7d93a822ab9c9728e3205e8e57409d0 upstream. Nathan Chancellor reported a kernel build error on Fedora 39:

  $ clang --version | head -1
  clang version 16.0.5 (Fedora 16.0.5-1.fc39)
  $ s390x-linux-gnu-ld --version | head -1
  GNU ld version 2.40-1.fc39
  $ make -skj"$(nproc)" ARCH=s390 CC=clang CROSS_COMPILE=s390x-linux-gnu- olddefconfig all
  s390x-linux-gnu-ld: arch/s390/boot/startup.o(.text+0x5b4): misaligned symbol `_decompressor_end' (0x35b0f) for relocation R_390_PC32DBL
  make[3]: *** [.../arch/s390/boot/Makefile:78: arch/s390/boot/vmlinux] Error 1

It turned out that the problem with misaligned symbols on s390 was fixed with commit 80ddf5ce1c92 ("s390: always build relocatable kernel") for the kernel image, but did not take into account that the decompressor uses its own set of CFLAGS, which come without -fPIE. Add the -fPIE flag also to the decompressor CFLAGS to fix this. Reported-by: Nathan Chancellor <nathan@kernel.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Reported-by: CKI <cki-project@redhat.com> Suggested-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com> Link: https://github.com/ClangBuiltLinux/linux/issues/1747 Link: https://lore.kernel.org/32935.123062114500601371@us-mta-9.us.mimecast.lan/ Link: https://lore.kernel.org/r/20230622125508.1068457-1-hca@linux.ibm.com Cc: <stable@vger.kernel.org> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-23 | arm64: errata: Mitigate Ampere1 erratum AC03_CPU_38 at stage-2 | Oliver Upton | 4 | -3/+38
commit 6df696cd9bc1ceed0e92e36908f88bbd16d18255 upstream. AmpereOne has an erratum in its implementation of FEAT_HAFDBS that required disabling the feature on the design. This was done by reporting the feature as not implemented in the ID register, although the corresponding control bits were not actually RES0. This does not align well with the requirements of the architecture, which mandates these bits be RES0 if HAFDBS isn't implemented. The kernel's use of stage-1 is unaffected, as the HA and HD bits are only set if HAFDBS is detected in the ID register. KVM, on the other hand, relies on the RES0 behavior at stage-2 to use the same value for VTCR_EL2 on any cpu in the system. Mitigate the non-RES0 behavior by leaving VTCR_EL2.HA clear on affected systems. Cc: stable@vger.kernel.org Cc: D Scott Phillips <scott@os.amperecomputing.com> Cc: Darren Hart <darren@os.amperecomputing.com> Acked-by: D Scott Phillips <scott@os.amperecomputing.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20230609220104.1836988-2-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
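The mitigation, sketched; the capability and bit names follow the erratum description above, with the surrounding VTCR_EL2 setup elided:

  /* Never enable stage-2 hardware Access-flag updates on affected
   * parts, so VTCR_EL2 stays uniform across all CPUs. */
  if (!cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38))
          vtcr |= VTCR_EL2_HA;            /* FEAT_HAFDBS HA bit */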
2023-07-23 | arm64: dts: ti: k3-j721s2: Fix wkup pinmux range | Sinthu Raja | 3 | -60/+87
commit 6bc829ceea4158c7aeb3a9e73d5c52634d78fb6f upstream. The WKUP_PADCONFIG register region in J721S2 has multiple non-addressable regions; accordingly, split the existing wkup_pmx region as follows to avoid the non-addressable regions and include the rest of the valid WKUP_PADCONFIG registers. Also update references to old nodes with new ones.

  wkup_pmx0 -> 13 pins (WKUP_PADCONFIG 0 - 12)
  wkup_pmx1 -> 11 pins (WKUP_PADCONFIG 14 - 24)
  wkup_pmx2 -> 72 pins (WKUP_PADCONFIG 26 - 97)
  wkup_pmx3 -> 1 pin (WKUP_PADCONFIG 100)

Fixes: b8545f9d3a54 ("arm64: dts: ti: Add initial support for J721S2 SoC") Cc: <stable@vger.kernel.org> # 6.3 Signed-off-by: Sinthu Raja <sinthu.raja@ti.com> Signed-off-by: Thejasvi Konduru <t-konduru@ti.com> Signed-off-by: Nishanth Menon <nm@ti.com> Reviewed-by: Udit Kumar <u-kumar1@ti.com> Link: https://lore.kernel.org/r/20230602153554.1571128-2-nm@ti.com Signed-off-by: Vignesh Raghavendra <vigneshr@ti.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-23 | arm64: dts: mt7986: use size of reserved partition for bl2 | Frank Wunderlich | 1 | -6/+1
commit 7afe7b5969329175ac4f55a6b9c13ba4f6dc267e upstream. To store uncompressed bl2 more space is required than partition is actually defined. There is currently no known usage of this reserved partition. Openwrt uses same partition layout. We added same change to u-boot with commit d7bb1099 [1]. [1] https://source.denx.de/u-boot/u-boot/-/commit/d7bb109900c1ca754a0198b9afb50e3161ffc21e Cc: stable@vger.kernel.org Fixes: 8e01fb15b815 ("arm64: dts: mt7986: add Bananapi R3") Signed-off-by: Frank Wunderlich <frank-w@public-files.de> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Reviewed-by: Daniel Golle <daniel@makrotopia.org> Link: https://lore.kernel.org/r/20230528113343.7649-1-linux@fw-web.de Signed-off-by: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-23MIPS: KVM: Fix NULL pointer dereferenceHuacai Chen5-36/+36
commit e4de2057698636c0ee709e545d19b169d2069fa3 upstream. After commit 45c7e8af4a5e3f0bea4ac209 ("MIPS: Remove KVM_TE support") we get a NULL pointer dereference when creating a KVM guest: [ 146.243409] Starting KVM with MIPS VZ extensions [ 149.849151] CPU 3 Unable to handle kernel paging request at virtual address 0000000000000300, epc == ffffffffc06356ec, ra == ffffffffc063568c [ 149.849177] Oops[#1]: [ 149.849182] CPU: 3 PID: 2265 Comm: qemu-system-mip Not tainted 6.4.0-rc3+ #1671 [ 149.849188] Hardware name: THTF CX TL630 Series/THTF-LS3A4000-7A1000-ML4A, BIOS KL4.1F.TF.D.166.201225.R 12/25/2020 [ 149.849192] $ 0 : 0000000000000000 000000007400cce0 0000000000400004 ffffffff8119c740 [ 149.849209] $ 4 : 000000007400cce1 000000007400cce1 0000000000000000 0000000000000000 [ 149.849221] $ 8 : 000000240058bb36 ffffffff81421ac0 0000000000000000 0000000000400dc0 [ 149.849233] $12 : 9800000102a07cc8 ffffffff80e40e38 0000000000000001 0000000000400dc0 [ 149.849245] $16 : 0000000000000000 9800000106cd0000 9800000106cd0000 9800000100cce000 [ 149.849257] $20 : ffffffffc0632b28 ffffffffc05b31b0 9800000100ccca00 0000000000400000 [ 149.849269] $24 : 9800000106cd09ce ffffffff802f69d0 [ 149.849281] $28 : 9800000102a04000 9800000102a07cd0 98000001106a8000 ffffffffc063568c [ 149.849293] Hi : 00000335b2111e66 [ 149.849295] Lo : 6668d90061ae0ae9 [ 149.849298] epc : ffffffffc06356ec kvm_vz_vcpu_setup+0xc4/0x328 [kvm] [ 149.849324] ra : ffffffffc063568c kvm_vz_vcpu_setup+0x64/0x328 [kvm] [ 149.849336] Status: 7400cce3 KX SX UX KERNEL EXL IE [ 149.849351] Cause : 1000000c (ExcCode 03) [ 149.849354] BadVA : 0000000000000300 [ 149.849357] PrId : 0014c004 (ICT Loongson-3) [ 149.849360] Modules linked in: kvm nfnetlink_queue nfnetlink_log nfnetlink fuse sha256_generic libsha256 cfg80211 rfkill binfmt_misc vfat fat snd_hda_codec_hdmi input_leds led_class snd_hda_intel snd_intel_dspcfg snd_hda_codec snd_hda_core snd_pcm snd_timer snd serio_raw xhci_pci radeon drm_suballoc_helper drm_display_helper xhci_hcd ip_tables x_tables [ 149.849432] Process qemu-system-mip (pid: 2265, threadinfo=00000000ae2982d2, task=0000000038e09ad4, tls=000000ffeba16030) [ 149.849439] Stack : 9800000000000003 9800000100ccca00 9800000100ccc000 ffffffffc062cef4 [ 149.849453] 9800000102a07d18 c89b63a7ab338e00 0000000000000000 ffffffff811a0000 [ 149.849465] 0000000000000000 9800000106cd0000 ffffffff80e59938 98000001106a8920 [ 149.849476] ffffffff80e57f30 ffffffffc062854c ffffffff811a0000 9800000102bf4240 [ 149.849488] ffffffffc05b0000 ffffffff80e3a798 000000ff78000000 000000ff78000010 [ 149.849500] 0000000000000255 98000001021f7de0 98000001023f0078 ffffffff81434000 [ 149.849511] 0000000000000000 0000000000000000 9800000102ae0000 980000025e92ae28 [ 149.849523] 0000000000000000 c89b63a7ab338e00 0000000000000001 ffffffff8119dce0 [ 149.849535] 000000ff78000010 ffffffff804f3d3c 9800000102a07eb0 0000000000000255 [ 149.849546] 0000000000000000 ffffffff8049460c 000000ff78000010 0000000000000255 [ 149.849558] ... [ 149.849565] Call Trace: [ 149.849567] [<ffffffffc06356ec>] kvm_vz_vcpu_setup+0xc4/0x328 [kvm] [ 149.849586] [<ffffffffc062cef4>] kvm_arch_vcpu_create+0x184/0x228 [kvm] [ 149.849605] [<ffffffffc062854c>] kvm_vm_ioctl+0x64c/0xf28 [kvm] [ 149.849623] [<ffffffff805209c0>] sys_ioctl+0xc8/0x118 [ 149.849631] [<ffffffff80219eb0>] syscall_common+0x34/0x58 The root cause is the deletion of kvm_mips_commpage_init() leaves vcpu ->arch.cop0 NULL. So fix it by making cop0 from a pointer to an embedded object. 
Fixes: 45c7e8af4a5e3f0bea4ac209 ("MIPS: Remove KVM_TE support") Cc: stable@vger.kernel.org Reported-by: Yu Zhao <yuzhao@google.com> Suggested-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
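A minimal sketch of the conversion described above; struct and field names are assumed for illustration rather than copied from the MIPS KVM headers:
    /* Before: cop0 was a pointer that kvm_mips_commpage_init() used to
     * set up; with that function gone it stayed NULL and was then
     * dereferenced in kvm_vz_vcpu_setup() (BadVA 0x300 is consistent
     * with an offset into the missing object). */
    struct kvm_vcpu_arch {
            struct mips_coproc *cop0;       /* NULL after the removal */
    };
    /* After: embed the object so no separate allocation is needed. */
    struct kvm_vcpu_arch {
            struct mips_coproc cop0;        /* always valid */
    };
    /* Users change from vcpu->arch.cop0 to &vcpu->arch.cop0. */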
2023-07-23MIPS: Loongson: Fix build error when make modules_installHuacai Chen1-7/+3
commit 531b3d1195d096f14e030c4b01ec3a53b80276bf upstream. After commit 0e96ea5c3eb5904e5dc2f ("MIPS: Loongson64: Clean up use of cc-ifversion") we get a build error when running make modules_install:
cc1: error: '-mloongson-mmi' must be used with '-mhard-float'
The reason is that when running make modules_install, 'call cc-option' doesn't work in the $(KBUILD_CFLAGS) used by 'CHECKFLAGS', so -mno-loongson-mmi is not applied and -march=loongson3a enables MMI instructions. To be precise, the error message comes from the CHECKFLAGS invocation of $(CC), but it has no impact on the final result of make modules_install; it is purely a cosmetic issue. The error occurs because cc-option is defined in scripts/Makefile.compiler, which is not included in Makefile when running 'make modules_install', as install targets are not supposed to require the compiler; see commit 805b2e1d427aab4b ("kbuild: include Makefile.compiler only when compiler is needed"). As a result, the check for '-mno-loongson-mmi' just never happens. Fix this by partially reverting to the old logic, using 'call cc-option' to conditionally apply -march=loongson3a and -march=mips64r2. Loongson-2E/2F was also broken by commit 13ceb48bc19c563e05f4 ("MIPS: Loongson2ef: Remove unnecessary {as,cc}-option calls"), so fix it as well. Fixes: 13ceb48bc19c563e05f4 ("MIPS: Loongson2ef: Remove unnecessary {as,cc}-option calls") Fixes: 0e96ea5c3eb5904e5dc2 ("MIPS: Loongson64: Clean up use of cc-ifversion") Cc: stable@vger.kernel.org Cc: Feiyang Chen <chenfeiyang@loongson.cn> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Reviewed-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-23MIPS: Loongson: Fix cpu_probe_loongson() againHuacai Chen1-6/+3
commit 65fee014dc41a774bcd94896f3fb380bc39d8dda upstream. Commit 7db5e9e9e5e6c10d7d ("MIPS: loongson64: fix FTLB configuration") moved decode_configs() from the beginning of cpu_probe_loongson() to the end in order to fix the FTLB configuration. However, it breaks the CPUCFG decoding, because decode_configs() uses "c->options = xxxx" rather than "c->options |= xxxx", so all information obtained from CPUCFG by decode_cpucfg() is lost. This causes an error when creating a KVM guest on Loongson-3A4000:
Exception Code: 4 not handled @ PC: 0000000087ad5981, inst: 0xcb7a1898
BadVaddr: 0x0 Status: 0x0
Fix this by moving the c->cputype setting to the beginning and moving decode_configs() after that. Fixes: 7db5e9e9e5e6c10d7d ("MIPS: loongson64: fix FTLB configuration") Cc: stable@vger.kernel.org Cc: Huang Pei <huangpei@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
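A hedged sketch of the ordering fix, with the function body heavily simplified:
    static void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
    {
            c->cputype = CPU_LOONGSON64;    /* 1. set cputype first      */
            decode_configs(c);              /* 2. c->options = ...       */
            decode_cpucfg(c);               /* 3. c->options |= ..., so  */
                                            /*    CPUCFG bits survive    */
    }
    /* The broken ordering ran decode_configs() last; its plain
     * assignment to c->options wiped every bit decode_cpucfg() had set. */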
2023-07-23MIPS: cpu-features: Use boot_cpu_type for CPU type based featuresJiaxun Yang1-2/+2
commit 5487a7b60695a92cf998350e4beac17144c91fcd upstream. Some CPU feature macros were using current_cpu_type to mark feature availability. However, current_cpu_type uses smp_processor_id, which is prohibited in preemptible context. Since those features are all uniform across all CPUs in an SMP system, use boot_cpu_type instead of current_cpu_type to fix this on preemptible kernels. Cc: stable@vger.kernel.org Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
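An illustration of the class of change; the feature macro name is hypothetical:
    /* Unsafe: current_cpu_type() reads the current CPU's cpuinfo via
     * smp_processor_id(), which must not be called while preemptible. */
    #define cpu_has_hypothetical_feature \
            (current_cpu_type() == CPU_LOONGSON64)
    /* Safe: the feature is uniform across CPUs, so query the boot CPU. */
    #define cpu_has_hypothetical_feature \
            (boot_cpu_type() == CPU_LOONGSON64)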
2023-07-23powerpc/64s: Fix native_hpte_remove() to be irq-safeMichael Ellerman1-4/+9
commit 8bbe9fee5848371d4af101be445303cac8d880c5 upstream. Lockdep warns that the use of the hpte_lock in native_hpte_remove() is not safe against an IRQ coming in:
================================
WARNING: inconsistent lock state
6.4.0-rc2-g0c54f4d30ecc #1 Not tainted
--------------------------------
inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-W} usage.
qemu-system-ppc/93865 [HC0[0]:SC0[0]:HE1:SE1] takes:
c0000000021f5180 (hpte_lock){+.?.}-{0:0}, at: native_lock_hpte+0x8/0xd0
{IN-SOFTIRQ-W} state was registered at:
  lock_acquire+0x134/0x3f0
  native_lock_hpte+0x44/0xd0
  native_hpte_insert+0xd4/0x2a0
  __hash_page_64K+0x218/0x4f0
  hash_page_mm+0x464/0x840
  do_hash_fault+0x11c/0x260
  data_access_common_virt+0x210/0x220
  __ip_select_ident+0x140/0x150
  ...
  net_rx_action+0x3bc/0x440
  __do_softirq+0x180/0x534
  ...
  sys_sendmmsg+0x34/0x50
  system_call_exception+0x128/0x320
  system_call_common+0x160/0x2e4
...
Possible unsafe locking scenario:
      CPU0
      ----
 lock(hpte_lock);
 <Interrupt>
   lock(hpte_lock);
*** DEADLOCK ***
...
Call Trace:
  dump_stack_lvl+0x98/0xe0 (unreliable)
  print_usage_bug.part.0+0x250/0x278
  mark_lock+0xc9c/0xd30
  __lock_acquire+0x440/0x1ca0
  lock_acquire+0x134/0x3f0
  native_lock_hpte+0x44/0xd0
  native_hpte_remove+0xb0/0x190
  kvmppc_mmu_map_page+0x650/0x698 [kvm_pr]
  kvmppc_handle_pagefault+0x534/0x6e8 [kvm_pr]
  kvmppc_handle_exit_pr+0x6d8/0xe90 [kvm_pr]
  after_sprg3_load+0x80/0x90 [kvm_pr]
  kvmppc_vcpu_run_pr+0x108/0x270 [kvm_pr]
  kvmppc_vcpu_run+0x34/0x48 [kvm]
  kvm_arch_vcpu_ioctl_run+0x340/0x470 [kvm]
  kvm_vcpu_ioctl+0x338/0x8b8 [kvm]
  sys_ioctl+0x7c4/0x13e0
  system_call_exception+0x128/0x320
  system_call_common+0x160/0x2e4
I suspect kvm_pr is the only caller that doesn't already have IRQs disabled, which is why this hasn't been reported previously. Fix it by disabling IRQs in native_hpte_remove(). Fixes: 35159b5717fa ("powerpc/64s: make HPTE lock and native_tlbie_lock irq-safe") Cc: stable@vger.kernel.org # v6.1+ Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230517123033.18430-1-mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
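A simplified sketch of the fix, assuming the usual save/restore pattern; the real function also scans the HPTE group and uses the bit-lock helpers shown in the trace:
    static long native_hpte_remove(unsigned long hpte_group)
    {
            unsigned long flags;
            long slot = -1;

            local_irq_save(flags);  /* keep IRQs out while hpte_lock */
            /* ... native_lock_hpte() / scan / invalidate / unlock ... */
            local_irq_restore(flags);       /* ... may be taken */

            return slot;
    }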
2023-07-23powerpc/security: Fix Speculation_Store_Bypass reporting on Power10Michael Ellerman1-18/+19
commit 5bcedc5931e7bd6928a2d8207078d4cb476b3b55 upstream. Nageswara reported that /proc/self/status was showing "vulnerable" for the Speculation_Store_Bypass feature on Power10, e.g.:
$ grep Speculation_Store_Bypass: /proc/self/status
Speculation_Store_Bypass: vulnerable
But at the same time the sysfs files, and lscpu, were showing "Not affected". This turns out to simply be a bug in the reporting of the Speculation_Store_Bypass, aka. PR_SPEC_STORE_BYPASS, case. When SEC_FTR_STF_BARRIER was added, so that firmware could communicate the vulnerability was not present, the code in ssb_prctl_get() was not updated to check the new flag. So add the check for SEC_FTR_STF_BARRIER being disabled. Rather than adding the new check to the existing if block and expanding the comment to cover both cases, rewrite the three cases to be separate so they can be commented separately for clarity. Fixes: 84ed26fd00c5 ("powerpc/security: Add a security feature for STF barrier") Cc: stable@vger.kernel.org # v5.14+ Reported-by: Nageswara R Sastry <rnsastry@linux.ibm.com> Tested-by: Nageswara R Sastry <rnsastry@linux.ibm.com> Reviewed-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230517074945.53188-1-mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
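A sketch of the rewritten reporting logic. The control flow and return values are illustrative; only ssb_prctl_get() and SEC_FTR_STF_BARRIER are taken from the message above:
    static int ssb_prctl_get(struct task_struct *task)
    {
            /* Case 1: no STF barrier flush types are known at all. */
            if (stf_enabled_flush_types == STF_BARRIER_NONE)
                    return PR_SPEC_NOT_AFFECTED;

            /* Case 2 (the missing check): firmware cleared the
             * security feature flag, i.e. not vulnerable. */
            if (!security_ftr_enabled(SEC_FTR_STF_BARRIER))
                    return PR_SPEC_NOT_AFFECTED;

            /* Case 3: otherwise report the mitigation state. */
            return PR_SPEC_ENABLE;
    }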
2023-07-23powerpc: Fail build if using recordmcount with binutils v2.37Naveen N Rao1-0/+8
commit 25ea739ea1d4d3de41acc4f4eb2d1a97eee0eb75 upstream. binutils v2.37 drops unused section symbols, which prevents recordmcount from capturing mcount locations in sections that have no non-weak symbols. This results in a build failure with a message such as:
Cannot find symbol for section 12: .text.perf_callchain_kernel.
kernel/events/callchain.o: failed
The change to binutils was reverted for v2.38, so this behavior is specific to binutils v2.37: https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=c09c8b42021180eee9495bd50d8b35e683d3901b Objtool is able to cope with such sections, so this issue is specific to recordmcount. Fail the build and print a warning if binutils v2.37 is detected and if we are using recordmcount. Cc: stable@vger.kernel.org Suggested-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Naveen N Rao <naveen@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230530061436.56925-1-naveen@kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-23kasan: use internal prototypes matching gcc-13 builtinsArnd Bergmann2-2/+2
commit bb6e04a173f06e51819a4bb512e127dfbc50dcfa upstream. gcc-13 warns about function definitions for builtin interfaces that have a different prototype, e.g.:
In file included from kasan_test.c:31:
kasan.h:574:6: error: conflicting types for built-in function '__asan_register_globals'; expected 'void(void *, long int)' [-Werror=builtin-declaration-mismatch]
574 | void __asan_register_globals(struct kasan_global *globals, size_t size);
kasan.h:577:6: error: conflicting types for built-in function '__asan_alloca_poison'; expected 'void(void *, long int)' [-Werror=builtin-declaration-mismatch]
577 | void __asan_alloca_poison(unsigned long addr, size_t size);
kasan.h:580:6: error: conflicting types for built-in function '__asan_load1'; expected 'void(void *)' [-Werror=builtin-declaration-mismatch]
580 | void __asan_load1(unsigned long addr);
kasan.h:581:6: error: conflicting types for built-in function '__asan_store1'; expected 'void(void *)' [-Werror=builtin-declaration-mismatch]
581 | void __asan_store1(unsigned long addr);
kasan.h:643:6: error: conflicting types for built-in function '__hwasan_tag_memory'; expected 'void(void *, unsigned char, long int)' [-Werror=builtin-declaration-mismatch]
643 | void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size);
The two problems are:
- Addresses are passed as 'unsigned long' in the kernel, but gcc-13 expects a 'void *'.
- Sizes are meant to use a signed ssize_t rather than size_t.
Change all the prototypes to match these. Using 'void *' consistently for addresses gets rid of a couple of type casts, so push that down to the leaf functions where possible. This now passes all randconfig builds on arm, arm64 and x86, but I have not tested it on the other architectures that support kasan, since they tend to fail randconfig builds in other ways. This might fail if any of the 32-bit architectures expect a 'long' instead of 'int' for the size argument. The __asan_allocas_unpoison() function prototype is somewhat weird, since it uses a pointer for 'stack_top' and a size_t for 'stack_bottom'. This looks like it is meant to be 'addr' and 'size' like the others, but the implementation clearly treats them as 'top' and 'bottom'. Link: https://lkml.kernel.org/r/20230509145735.9263-2-arnd@kernel.org Signed-off-by: Arnd Bergmann <arnd@arndb.de> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Marco Elver <elver@google.com> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
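The shape of the change, shown on two of the prototypes quoted above; the second "after" signature is inferred from the gcc-13 expectation 'void(void *, long int)':
    /* Before: kernel-style types that clash with the gcc-13 builtins. */
    void __asan_load1(unsigned long addr);
    void __asan_register_globals(struct kasan_global *globals, size_t size);
    /* After: 'void *' addresses and signed sizes, matching gcc-13. */
    void __asan_load1(void *addr);
    void __asan_register_globals(void *globals, ssize_t size);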
2023-07-23riscv: mm: fix truncation warning on RV32Jisheng Zhang1-1/+1
[ Upstream commit b690e266dae2f85f4dfea21fa6a05e3500a51054 ] lkp reports the sparse warning below when building for RV32:
arch/riscv/mm/init.c:1204:48: sparse: warning: cast truncates bits from constant value (100000000 becomes 0)
IMO, the reason we didn't see this truncation bug in the real world is that "0" means MEMBLOCK_ALLOC_ACCESSIBLE in memblock, and there's no RV32 HW with more than 4GB memory. Fix it anyway to make sparse happy. Fixes: decf89f86ecd ("riscv: try to allocate crashkern region from 32bit addressible memory") Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202306080034.SLiCiOMn-lkp@intel.com/ Link: https://lore.kernel.org/r/20230709171036.1906-1-jszhang@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
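A self-contained illustration of the truncation the warning points at; the variable is hypothetical, the real expression lives in arch/riscv/mm/init.c:
    /* On RV32, phys_addr_t is 32 bits, so the 33-bit constant
     * 0x100000000 (4 GiB) is silently truncated to 0 by this cast: */
    phys_addr_t limit = (phys_addr_t)0x100000000ULL;    /* becomes 0 */
    /* memblock interprets a 0 limit as MEMBLOCK_ALLOC_ACCESSIBLE,
     * which is why the bug never bit on real sub-4GB RV32 hardware. */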
2023-07-23riscv, bpf: Fix inconsistent JIT image generationBjörn Töpel2-9/+16
[ Upstream commit c56fb2aab23505bb7160d06097c8de100b82b851 ] In order to generate the prologue and epilogue, the BPF JIT needs to know which registers are clobbered. Therefore, during the pre-final passes, the prologue is generated after the body of the program (body-prologue-epilogue). Then, in the final pass, a proper prologue-body-epilogue JITted image is generated. This scheme has worked most of the time. However, for some large programs with many jumps, e.g. the test_kmod.sh BPF selftest with hardening enabled (blinding constants), it has been shown to be incorrect: in the final pass, when the proper prologue-body-epilogue is generated, the image has not converged, which leads to a final image with incorrect jump offsets. The following is an excerpt from an incorrect image:
| ...
|    3b8: 00c50663  beq  a0,a2,3c4 <.text+0x3c4>
|    3bc: 0020e317  auipc  t1,0x20e
|    3c0: 49630067  jalr  zero,1174(t1) # 20e852 <.text+0x20e852>
| ...
| 20e84c: 8796  c.mv  a5,t0
| 20e84e: 6422  c.ldsp  s0,8(sp) # Epilogue start
| 20e850: 6141  c.addi16sp  sp,16
| 20e852: 853e  c.mv  a0,a5 # Incorrect jump target
| 20e854: 8082  c.jr  ra
The image has shrunk, and the epilogue offset is incorrect in the final pass. Correct the problem by always generating proper prologue-body-epilogue output, which means that the first pass will only generate the body to track which registers are touched. Fixes: 2353ecc6f91f ("bpf, riscv: add BPF JIT for RV64G") Signed-off-by: Björn Töpel <bjorn@rivosinc.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20230710074131.19596-1-bjorn@kernel.org Signed-off-by: Sasha Levin <sashal@kernel.org>
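In pseudo-C, the pass structure before and after the fix; this is a sketch of the scheme described above with hypothetical helper names, not the JIT's actual code:
    /* Before: pre-final passes emitted body first, then prologue and
     * epilogue, and only the final pass switched to the real layout -
     * on a not-yet-converged image the jump offsets went stale. */
    for (pass = 0; pass < max_passes; pass++)
            emit_body_then_prologue_epilogue(ctx);
    emit_prologue_body_epilogue(ctx);       /* final, possibly wrong */

    /* After: only the very first pass emits the body alone (to learn
     * the clobbered register set); every later pass emits the proper
     * prologue-body-epilogue layout, so the image can converge. */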
2023-07-23openrisc: Union fpcsr and oldmask in sigcontext to unbreak userspace ABIStafford Horne2-4/+6
[ Upstream commit dceaafd668812115037fc13a1893d068b7b880f5 ] With commit 27267655c531 ("openrisc: Support floating point user api") I added an entry to struct sigcontext which caused an unwanted change to the userspace ABI. To fix this we use the previously unused oldmask field space for the floating point fpcsr state. We do this with a union to restore the pre-v6.4 kernel ABI while keeping API compatibility. This does mean that if some code somewhere sets oldmask in an OpenRISC-specific userspace sighandler, it would end up setting the floating point register status, but I think that's unlikely, as oldmask was never functional before. Fixes: 27267655c531 ("openrisc: Support floating point user api") Reported-by: Szabolcs Nagy <nsz@port70.net> Closes: https://lore.kernel.org/openrisc/20230626213840.GA1236108@port70.net/ Signed-off-by: Stafford Horne <shorne@gmail.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
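A sketch of the union layout described above; see arch/openrisc/include/uapi/asm/sigcontext.h for the authoritative definition:
    struct sigcontext {
            struct user_regs_struct regs;   /* general registers */
            union {
                    unsigned long fpcsr;    /* FP status, added for v6.4 */
                    unsigned long oldmask;  /* legacy slot, never
                                             * functional before */
            };
    };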
2023-07-19sh: hd64461: Handle virq offset for offchip IRQ base and HD64461 IRQArtur Rojek2-3/+3
commit 7c28a35e19fafa1d3b367bcd3ec4021427a9397b upstream. A recent change to start counting SuperH IRQ #s from 16 breaks support for the Hitachi HD64461 companion chip. Shift the offchip IRQ base and the HD64461 IRQ # by 16 in order to accommodate the new virq numbering rules. Fixes: a8ac2961148e ("sh: Avoid using IRQ0 on SH3 and SH4") Signed-off-by: Artur Rojek <contact@artur-rojek.eu> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Link: https://lore.kernel.org/r/20230710233132.69734-1-contact@artur-rojek.eu Signed-off-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
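An illustration of the +16 shift shared by this fix and the two cascaded-demux fixes that follow; the base value and helper name are hypothetical:
    /* IRQs 0-15 are now reserved on SH3/SH4, so offchip bases move up: */
    #define OFFCHIP_IRQ_BASE        (64 + 16)   /* was 64 (hypothetical) */
    #define HD64461_IRQBASE         OFFCHIP_IRQ_BASE

    /* Cascaded demux handlers apply the same offset when translating a
     * hardware event code into a Linux virq: */
    unsigned int virq = hw_event_to_virq(hw_evt) + 16;  /* hypothetical */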
2023-07-19sh: mach-dreamcast: Handle virq offset in cascaded IRQ demuxGeert Uytterhoeven1-3/+3
commit 3d20f7a6eb76afdf9d4ad9cb864c2e2da9c38e1f upstream. Take into account the virq offset when translating cascaded interrupts. Fixes: a8ac2961148e8c72 ("sh: Avoid using IRQ0 on SH3 and SH4") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Link: https://lore.kernel.org/r/7d0cb246c9f1cd24bb1f637ec5cb67e799a4c3b8.1688908227.git.geert+renesas@glider.be Signed-off-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-19sh: mach-highlander: Handle virq offset in cascaded IRL demuxGeert Uytterhoeven1-2/+2
commit a2601b8d8f077368c6d113b4d496559415c6d495 upstream. Take into account the virq offset when translating cascaded IRL interrupts. Fixes: a8ac2961148e8c72 ("sh: Avoid using IRQ0 on SH3 and SH4") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Link: https://lore.kernel.org/r/4fcb0d08a2b372431c41e04312742dc9e41e1be4.1688908186.git.geert+renesas@glider.be Signed-off-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>