path: root/arch
2022-08-21  arm64: kexec_file: use more system keyrings to verify kernel image signature (Coiby Xu; 1 file, -10/+1)
commit 0d519cadf75184a24313568e7f489a7fc9b1be3b upstream. Currently, when loading a kernel image via the kexec_file_load() system call, arm64 can only use the .builtin_trusted_keys keyring to verify a signature whereas x86 can use three more keyrings, i.e. the .secondary_trusted_keys, .machine and .platform keyrings. For example, one resulting problem is that kexec'ing a kernel image would be rejected with the error "Lockdown: kexec: kexec of unsigned images is restricted; see man kernel_lockdown.7". This patch set enables arm64 to make use of the same keyrings as x86 to verify the signature of the kexec'ed kernel image. Fixes: 732b7b93d849 ("arm64: kexec_file: add kernel signature verification support") Cc: stable@vger.kernel.org # 105e10e2cf1c: kexec_file: drop weak attribute from functions Cc: stable@vger.kernel.org # 34d5960af253: kexec: clean up arch_kexec_kernel_verify_sig Cc: stable@vger.kernel.org # 83b7bb2d49ae: kexec, KEYS: make the code in bzImage64_verify_sig generic Acked-by: Baoquan He <bhe@redhat.com> Cc: kexec@lists.infradead.org Cc: keyrings@vger.kernel.org Cc: linux-security-module@vger.kernel.org Co-developed-by: Michal Suchanek <msuchanek@suse.de> Signed-off-by: Michal Suchanek <msuchanek@suse.de> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Coiby Xu <coxu@redhat.com> Signed-off-by: Mimi Zohar <zohar@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-08-21  kexec, KEYS: make the code in bzImage64_verify_sig generic (Coiby Xu; 1 file, -19/+1)
commit c903dae8941deb55043ee46ded29e84e97cd84bb upstream. commit 278311e417be ("kexec, KEYS: Make use of platform keyring for signature verify") adds platform keyring support on x86 kexec but not arm64. The code in bzImage64_verify_sig uses the keys on the .builtin_trusted_keys, .machine, if configured and enabled, .secondary_trusted_keys, also if configured, and .platform keyrings to verify the signed kernel image as a PE file. Cc: kexec@lists.infradead.org Cc: keyrings@vger.kernel.org Cc: linux-security-module@vger.kernel.org Reviewed-by: Michal Suchanek <msuchanek@suse.de> Signed-off-by: Coiby Xu <coxu@redhat.com> Signed-off-by: Mimi Zohar <zohar@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
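A hedged sketch of what the generalized PE verification path looks like after this change (helper name and keyring constants as described in the commit message; the exact upstream signature may differ): try the built-in/secondary keyrings first, then fall back to the platform keyring.

  /* Sketch only: verify a PE-signed kexec kernel against the secondary
   * (which chains to the built-in) keyring, falling back to the platform
   * keyring when the key is not found there. */
  int kexec_kernel_verify_pe_sig(const char *kernel, unsigned long kernel_len)
  {
          int ret;

          ret = verify_pefile_signature(kernel, kernel_len,
                                        VERIFY_USE_SECONDARY_KEYRING,
                                        VERIFYING_KEXEC_PE_SIGNATURE);
          if (ret == -ENOKEY && IS_ENABLED(CONFIG_INTEGRITY_PLATFORM_KEYRING))
                  ret = verify_pefile_signature(kernel, kernel_len,
                                                VERIFY_USE_PLATFORM_KEYRING,
                                                VERIFYING_KEXEC_PE_SIGNATURE);
          return ret;
  }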
2022-08-17  powerpc/kexec: Fix build failure from uninitialised variable (Russell Currey; 1 file, -5/+5)
commit 83ee9f23763a432a4077bf20624ee35de87bce99 upstream. clang 14 won't build because ret is uninitialised and can be returned if both prop and fdtprop are NULL. Drop the ret variable and return an error in that failure case. Fixes: b1fc44eaa9ba ("pseries/iommu/ddw: Fix kdump to work in absence of ibm,dma-window") Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220810054331.373761-1-ruscur@russell.cc Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-08-17  Revert "s390/smp: enforce lowcore protection on CPU restart" (Alexander Gordeev; 1 file, -1/+1)
commit 953503751a426413ea8aee2299ae3ee971b70d9b upstream. This reverts commit 6f5c672d17f583b081e283927f5040f726c54598. This breaks normal crash dump when CPU0 is offline. Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-08-17  powerpc: Fix eh field when calling lwarx on PPC32 (Christophe Leroy; 1 file, -6/+9)
commit 18db466a9a306406dab3b134014d9f6ed642471c upstream. Commit 9401f4e46cf6 ("powerpc: Use lwarx/ldarx directly instead of PPC_LWARX/LDARX macros") properly handled the eh field of lwarx in asm/bitops.h but failed to clear it for PPC32 in asm/simple_spinlock.h. So, do as in arch_atomic_try_cmpxchg_lock(): set it to 1 if PPC64 but set it to 0 if PPC32. For that, use IS_ENABLED(CONFIG_PPC64), which returns 1 when CONFIG_PPC64 is set and 0 otherwise. Fixes: 9401f4e46cf6 ("powerpc: Use lwarx/ldarx directly instead of PPC_LWARX/LDARX macros") Cc: stable@vger.kernel.org # v5.15+ Reported-by: Pali Rohár <pali@kernel.org> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Tested-by: Pali Rohár <pali@kernel.org> Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org> [mpe: Use symbolic names, use 'n' constraint per Segher] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/a1176e19e627dd6a1b8d24c6c457a8ab874b7d12.1659430931.git.christophe.leroy@csgroup.eu Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
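An illustrative fragment of the approach (not the exact simple_spinlock.h hunk; 'lock' and 'tmp' are placeholders): the EH hint must be a constant, so it is supplied as an immediate through an "n" constraint.

  /* Sketch: IS_ENABLED(CONFIG_PPC64) is 1 on PPC64 and 0 on PPC32, so the
   * EH hint is emitted only where the ISA defines it. */
  unsigned long tmp;

  asm volatile(
  "1:     lwarx   %0,0,%1,%[eh]\n"        /* load-reserve with EH hint */
          : "=&r" (tmp)
          : "r" (&lock->slock), [eh] "n" (IS_ENABLED(CONFIG_PPC64))
          : "memory");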
2022-08-17  KVM: nVMX: Attempt to load PERF_GLOBAL_CTRL on nVMX xfer iff it exists (Sean Christopherson; 1 file, -1/+3)
[ Upstream commit 4496a6f9b45e8cd83343ad86a3984d614e22cf54 ] Attempt to load PERF_GLOBAL_CTRL during nested VM-Enter/VM-Exit if and only if the MSR exists (according to the guest vCPU model). KVM has very misguided handling of VM_{ENTRY,EXIT}_LOAD_IA32_PERF_GLOBAL_CTRL and attempts to force the nVMX MSR settings to match the vPMU model, i.e. to hide/expose the control based on whether or not the MSR exists from the guest's perspective. KVM's modifications fail to handle the scenario where the vPMU is hidden from the guest _after_ being exposed to the guest, e.g. by userspace doing multiple KVM_SET_CPUID2 calls, which is allowed if done before any KVM_RUN. nested_vmx_pmu_refresh() is called if and only if there's a recognized vPMU, i.e. KVM will leave the bits in the allow state and then ultimately reject the MSR load and WARN. KVM should not force the VMX MSRs in the first place. KVM taking control of the MSRs was a misguided attempt at mimicking what commit 5f76f6f5ff96 ("KVM: nVMX: Do not expose MPX VMX controls when guest MPX disabled", 2018-10-01) did for MPX. However, the MPX commit was a workaround for another KVM bug and not something that should be imitated (and it should never have been done in the first place). In other words, KVM's ABI _should_ be that userspace has full control over the MSRs, at which point triggering the WARN that loading the MSR must not fail is trivial. The intent of the WARN is still valid; KVM has consistency checks to ensure that vmcs12->{guest,host}_ia32_perf_global_ctrl is valid. The problem is that '0' must be considered a valid value at all times, and so the simple/obvious solution is to just not actually load the MSR when it does not exist. It is userspace's responsibility to provide a sane vCPU model, i.e. KVM is well within its ABI and Intel's VMX architecture to skip the loads if the MSR does not exist. Fixes: 03a8871add95 ("KVM: nVMX: Expose load IA32_PERF_GLOBAL_CTRL VM-{Entry,Exit} control") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220722224409.1336532-5-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
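A hedged sketch of the resulting VM-Enter logic (using the helper from the companion patch below; the exact upstream error handling may differ): attempt the load only when the vCPU model actually has the MSR, and keep the WARN for the case where a load of an existing MSR fails.

  /* Sketch: skip the PERF_GLOBAL_CTRL load entirely when the vPMU model
   * has no such MSR; a failed load for an existing MSR is a KVM bug. */
  if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL) &&
      intel_pmu_has_perf_global_ctrl(vcpu_to_pmu(vcpu)) &&
      WARN_ON_ONCE(kvm_set_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL,
                               vmcs12->guest_ia32_perf_global_ctrl)))
          return -EINVAL;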
2022-08-17  KVM: VMX: Add helper to check if the guest PMU has PERF_GLOBAL_CTRL (Sean Christopherson; 2 files, -2/+14)
[ Upstream commit b663f0b5f3d665c261256d1f76e98f077c6e56af ] Add a helper to check if the guest PMU has PERF_GLOBAL_CTRL, which is unintuitive _and_ diverges from Intel's architecturally defined behavior. Even worse, KVM currently implements the check using two different (but equivalent) checks, _and_ there has been at least one attempt to add a _third_ flavor. Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220722224409.1336532-4-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
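A sketch of what such a helper reduces to, assuming the architectural rule that PERF_GLOBAL_CTRL exists from PMU version 2 onwards (the actual kernel helper may differ in placement):

  /* Sketch: one canonical place for the "does the guest PMU have
   * PERF_GLOBAL_CTRL?" question, replacing two equivalent open-coded checks. */
  static inline bool intel_pmu_has_perf_global_ctrl(struct kvm_pmu *pmu)
  {
          return pmu->version > 1;
  }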
2022-08-17  KVM: x86/pmu: Ignore pmu->global_ctrl check if vPMU doesn't support global_ctrl (Like Xu; 1 file, -0/+3)
[ Upstream commit 98defd2e17803263f49548fea930cfc974d505aa ] MSR_CORE_PERF_GLOBAL_CTRL is introduced as part of Architecture PMU V2, as indicated by Intel SDM 19.2.2 and the intel_is_valid_msr() function. So in the absence of global_ctrl support, all PMCs are enabled as AMD does. Signed-off-by: Like Xu <likexu@tencent.com> Message-Id: <20220509102204.62389-1-likexu@tencent.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  KVM: VMX: Mark all PERF_GLOBAL_(OVF)_CTRL bits reserved if there's no vPMU (Sean Christopherson; 1 file, -0/+2)
[ Upstream commit 93255bf92939d948bc86d81c6bb70bb0fecc5db1 ] Mark all MSR_CORE_PERF_GLOBAL_CTRL and MSR_CORE_PERF_GLOBAL_OVF_CTRL bits as reserved if there is no guest vPMU. The nVMX VM-Entry consistency checks do not check for a valid vPMU prior to consuming the masks via kvm_valid_perf_global_ctrl(), i.e. may incorrectly allow a non-zero mask to be loaded via VM-Enter or VM-Exit (well, attempted to be loaded, the actual MSR load will be rejected by intel_is_valid_msr()). Fixes: f5132b01386b ("KVM: Expose a version 2 architectural PMU to a guests") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220722224409.1336532-3-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  KVM: x86/pmu: Introduce the ctrl_mask value for fixed counter (Like Xu; 2 files, -1/+6)
[ Upstream commit 2c985527dd8d283e786ad7a67e532ef7f6f00fac ] The mask value of the fixed counter control register should be dynamically adjusted with the number of fixed counters. This patch introduces a variable that includes the reserved bits of fixed counter control registers. This is a generic code refactoring. Co-developed-by: Luwei Kang <luwei.kang@intel.com> Signed-off-by: Luwei Kang <luwei.kang@intel.com> Signed-off-by: Like Xu <like.xu@linux.intel.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Message-Id: <20220411101946.20262-6-likexu@tencent.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
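A hedged sketch of how such a mask can be built (field layout per the SDM; the exact bit pattern and variable name are assumptions): each fixed counter owns a 4-bit field in MSR_CORE_PERF_FIXED_CTR_CTRL, so the defined bits are cleared only for counters that actually exist.

  /* Sketch: start with everything reserved, then unreserve the defined
   * control bits (OS, USR, PMI here) for each present fixed counter. */
  pmu->fixed_ctr_ctrl_mask = ~0ull;
  for (i = 0; i < pmu->nr_arch_fixed_counters; i++)
          pmu->fixed_ctr_ctrl_mask &= ~(0xbull << (i * 4));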
2022-08-17  s390/unwind: fix fgraph return address recovery (Sumanth Korikkar; 1 file, -1/+1)
[ Upstream commit ded466e1806686794b403ebf031133bbaca76bb2 ] When HAVE_FUNCTION_GRAPH_RET_ADDR_PTR is defined, the return address to the fgraph caller is recovered by tagging it along with the stack pointer of ftrace stack. This makes the stack unwinding more reliable. When the fgraph return address is modified to return_to_handler, ftrace_graph_ret_addr tries to restore it to the original value using tagged stack pointer. Fix this by passing tagged sp to ftrace_graph_ret_addr. Fixes: d81675b60d09 ("s390/unwind: recover kretprobe modified return address in stacktrace") Cc: <stable@vger.kernel.org> # 5.18 Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  powerpc/powernv/kvm: Use darn for H_RANDOM on Power9 (Jason A. Donenfeld; 3 files, -36/+12)
[ Upstream commit 7ef3d06f1bc4a5e62273726f3dc2bd258ae1c71f ] The existing logic in KVM to support guests calling H_RANDOM only works on Power8, because it looks for an RNG in the device tree, but on Power9 we just use darn. In addition the existing code needs to work in real mode, so we have the special cased powernv_get_random_real_mode() to deal with that. Instead just have KVM call ppc_md.get_random_seed(), and do the real mode check inside of there, that way we use whatever RNG is available, including darn on Power9. Fixes: e928e9cb3601 ("KVM: PPC: Book3S HV: Add fast real-mode H_RANDOM implementation.") Cc: stable@vger.kernel.org # v4.1+ Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Tested-by: Sachin Sant <sachinp@linux.ibm.com> [mpe: Rebase on previous commit, update change log appropriately] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220727143219.2684192-2-mpe@ellerman.id.au Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  kexec, KEYS, s390: Make use of built-in and secondary keyring for signature verification (Michal Suchanek; 1 file, -5/+13)
[ Upstream commit 0828c4a39be57768b8788e8cbd0d84683ea757e5 ] commit e23a8020ce4e ("s390/kexec_file: Signature verification prototype") adds support for KEXEC_SIG verification with keys from platform keyring but the built-in keys and secondary keyring are not used. Add support for the built-in keys and secondary keyring as x86 does. Fixes: e23a8020ce4e ("s390/kexec_file: Signature verification prototype") Cc: stable@vger.kernel.org Cc: Philipp Rudo <prudo@linux.ibm.com> Cc: kexec@lists.infradead.org Cc: keyrings@vger.kernel.org Cc: linux-security-module@vger.kernel.org Signed-off-by: Michal Suchanek <msuchanek@suse.de> Reviewed-by: "Lee, Chun-Yi" <jlee@suse.com> Acked-by: Baoquan He <bhe@redhat.com> Signed-off-by: Coiby Xu <coxu@redhat.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Mimi Zohar <zohar@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  kexec_file: drop weak attribute from functions (Naveen N. Rao; 4 files, -1/+21)
[ Upstream commit 65d9a9a60fd71be964effb2e94747a6acb6e7015 ] As requested (http://lkml.kernel.org/r/87ee0q7b92.fsf@email.froward.int.ebiederm.org), this series converts weak functions in kexec to use the #ifdef approach. Quoting the 3e35142ef99fe ("kexec_file: drop weak attribute from arch_kexec_apply_relocations[_add]") changelog:

  : Since commit d1bcae833b32f1 ("ELF: Don't generate unused section symbols")
  : [1], binutils (v2.36+) started dropping section symbols that it thought
  : were unused. This isn't an issue in general, but with kexec_file.c, gcc
  : is placing kexec_arch_apply_relocations[_add] into a separate
  : .text.unlikely section and the section symbol ".text.unlikely" is being
  : dropped. Due to this, recordmcount is unable to find a non-weak symbol in
  : .text.unlikely to generate a relocation record against.

This patch (of 2): Drop the __weak attribute from functions in kexec_file.c:
- arch_kexec_kernel_image_probe()
- arch_kimage_file_post_load_cleanup()
- arch_kexec_kernel_image_load()
- arch_kexec_locate_mem_hole()
- arch_kexec_kernel_verify_sig()

arch_kexec_kernel_image_load() calls into kexec_image_load_default(), so drop the static attribute for the latter. arch_kexec_kernel_verify_sig() is not overridden by any architecture, so drop the __weak attribute. Link: https://lkml.kernel.org/r/cover.1656659357.git.naveen.n.rao@linux.vnet.ibm.com Link: https://lkml.kernel.org/r/2cd7ca1fe4d6bb6ca38e3283c717878388ed6788.1656659357.git.naveen.n.rao@linux.vnet.ibm.com Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Suggested-by: Eric Biederman <ebiederm@xmission.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Mimi Zohar <zohar@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
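An illustrative sketch of the #ifdef pattern the series adopts (simplified; the real hunks touch several hooks): the generic header supplies a static inline fallback that an architecture overrides by defining a macro of the same name, instead of relying on a link-time __weak symbol.

  /* Sketch: arch headers that implement the hook also do
   * '#define arch_kexec_locate_mem_hole arch_kexec_locate_mem_hole',
   * which suppresses this generic fallback. */
  #ifndef arch_kexec_locate_mem_hole
  static inline int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf)
  {
          return kexec_locate_mem_hole(kbuf);
  }
  #endif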
2022-08-17  KVM: x86: Signal #GP, not -EPERM, on bad WRMSR(MCi_CTL/STATUS) (Sean Christopherson; 1 file, -2/+2)
[ Upstream commit 2368048bf5c2ec4b604ac3431564071e89a0bc71 ] Return '1', not '-1', when handling an illegal WRMSR to a MCi_CTL or MCi_STATUS MSR. The behavior of "all zeros" or "all ones" for CTL MSRs is architectural, as is the "only zeros" behavior for STATUS MSRs. I.e. the intent is to inject a #GP, not exit to userspace due to an unhandled emulation case. Returning '-1' gets interpreted as -EPERM up the stack and effectively kills the guest. Fixes: 890ca9aefa78 ("KVM: Add MCE support") Fixes: 9ffd986c6e4e ("KVM: X86: #GP when guest attempts to write MCi_STATUS register w/o 0") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <seanjc@google.com> Reviewed-by: Jim Mattson <jmattson@google.com> Link: https://lore.kernel.org/r/20220512222716.4112548-2-seanjc@google.com Signed-off-by: Sasha Levin <sashal@kernel.org>
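A minimal sketch of the fix for the CTL case (condition illustrative; the real code handles several MSR ranges): returning 1 makes the common WRMSR path inject #GP into the guest, whereas -1 bubbles up as -EPERM and kills it.

  /* Sketch: only "all zeros" or "all ones" are architecturally valid. */
  if (data != 0 && data != ~(u64)0)
          return 1;       /* was: return -1; */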
2022-08-17  KVM: set_msr_mce: Permit guests to ignore single-bit ECC errors (Lev Kujawski; 1 file, -2/+5)
[ Upstream commit 0471a7bd1bca2a47a5f378f2222c5cf39ce94152 ] Certain guest operating systems (e.g., UNIXWARE) clear bit 0 of MC1_CTL to ignore single-bit ECC data errors. Single-bit ECC data errors are always correctable and thus are safe to ignore because they are informational in nature rather than signaling a loss of data integrity. Prior to this patch, these guests would crash upon writing MC1_CTL, with resultant error messages like the following:

  error: kvm run failed Operation not permitted
  EAX=fffffffe EBX=fffffffe ECX=00000404 EDX=ffffffff
  ESI=ffffffff EDI=00000001 EBP=fffdaba4 ESP=fffdab20
  EIP=c01333a5 EFL=00000246 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
  ES =0108 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
  CS =0100 00000000 ffffffff 00c09b00 DPL=0 CS32 [-RA]
  SS =0108 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
  DS =0108 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
  FS =0000 00000000 ffffffff 00c00000
  GS =0000 00000000 ffffffff 00c00000
  LDT=0118 c1026390 00000047 00008200 DPL=0 LDT
  TR =0110 ffff5af0 00000067 00008b00 DPL=0 TSS32-busy
  GDT= ffff5020 000002cf
  IDT= ffff52f0 000007ff
  CR0=8001003b CR2=00000000 CR3=0100a000 CR4=00000230
  DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
  DR6=ffff0ff0 DR7=00000400
  EFER=0000000000000000
  Code=08 89 01 89 51 04 c3 8b 4c 24 08 8b 01 8b 51 04 8b 4c 24 04 <0f> 30 c3 f7 05 a4 6d ff ff 10 00 00 00 74 03 0f 31 c3 33 c0 33 d2 c3 8d 74 26 00 0f 31 c3

Signed-off-by: Lev Kujawski <lkujaw@member.fsf.org> Message-Id: <20220521081511.187388-1-lkujaw@member.fsf.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  crypto: blake2s - remove shash module (Jason A. Donenfeld; 6 files, -161/+4)
[ Upstream commit 2d16803c562ecc644803d42ba98a8e0aef9c014e ] BLAKE2s has no currently known use as an shash. Just remove all of this unnecessary plumbing. Removing this shash was something we talked about back when we were making BLAKE2s a built-in, but I simply never got around to doing it. So this completes that project. Importantly, this fixes a bug in which the lib code depends on crypto_simd_disabled_for_test, causing linker errors. Also add more alignment tests to the selftests and compare SIMD and non-SIMD compression functions, to make up for what we lose from testmgr.c. Reported-by: gaochao <gaochao49@huawei.com> Cc: Eric Biggers <ebiggers@kernel.org> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: stable@vger.kernel.org Fixes: 6048fdcc5f26 ("lib/crypto: blake2s: include as built-in") Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  x86/olpc: fix 'logical not is only applied to the left hand side' (Alexander Lobakin; 1 file, -1/+1)
commit 3a2ba42cbd0b669ce3837ba400905f93dd06c79f upstream. The bitops compile-time optimization series revealed one more problem in olpc-xo1-sci.c:send_ebook_state(), resulting in GCC warnings:

  arch/x86/platform/olpc/olpc-xo1-sci.c: In function 'send_ebook_state':
  arch/x86/platform/olpc/olpc-xo1-sci.c:83:63: warning: logical not is only applied to the left hand side of comparison [-Wlogical-not-parentheses]
     83 |   if (!!test_bit(SW_TABLET_MODE, ebook_switch_idev->sw) == state)
        |                                                         ^~
  arch/x86/platform/olpc/olpc-xo1-sci.c:83:13: note: add parentheses around left hand side expression to silence this warning

Despite this code working as intended, this redundant double negation of a boolean value, together with comparing to `char` with no explicit conversion to bool, makes compilers think the author made some unintentional logical mistakes here. Make it the other way around and negate the char instead to silence the warnings. Fixes: d2aa37411b8e ("x86/olpc/xo1/sci: Produce wakeup events for buttons and switches") Cc: stable@vger.kernel.org # 3.5+ Reported-by: Guenter Roeck <linux@roeck-us.net> Reported-by: kernel test robot <lkp@intel.com> Reviewed-and-tested-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com> Signed-off-by: Yury Norov <yury.norov@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
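The warning-free form described by the commit, moving the double negation to the char operand so both sides of the comparison are boolean (surrounding context per the quoted warning):

  /* Both operands are now 0/1, so -Wlogical-not-parentheses stays quiet. */
  if (test_bit(SW_TABLET_MODE, ebook_switch_idev->sw) == !!state)
          return; /* state unchanged, nothing to report */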
2022-08-17  x86/kprobes: Update kcb status flag after singlestepping (Masami Hiramatsu (Google); 1 file, -7/+11)
commit dec8784c9088b131a1523f582c2194cfc8107dc0 upstream. Fix kprobes to update kcb (kprobes control block) status flag to KPROBE_HIT_SSDONE even if the kp->post_handler is not set. This bug may cause a kernel panic if another INT3 user runs right after kprobes because kprobe_int3_handler() misunderstands the INT3 is kprobe's single stepping INT3. Fixes: 6256e668b7af ("x86/kprobes: Use int3 instead of debug trap for single-step") Reported-by: Daniel Müller <deso@posteo.net> Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Daniel Müller <deso@posteo.net> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/all/20220727210136.jjgc3lpqeq42yr3m@muellerd-fedora-PC2BDTX9 Link: https://lore.kernel.org/r/165942025658.342061.12452378391879093249.stgit@devnote2 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-08-17  ftrace/x86: Add back ftrace_expected assignment (Steven Rostedt (Google); 1 file, -0/+1)
commit ac6c1b2ca77e722a1e5d651f12f437f2f237e658 upstream. When a ftrace_bug happens (where ftrace fails to modify a location) it is helpful to have what was at that location as well as what was expected to be there. But with the conversion to text_poke() the variable that assigns the expected for debugging was dropped. Unfortunately, I noticed this when I needed it. Add it back. Link: https://lkml.kernel.org/r/20220726101851.069d2e70@gandalf.local.home Cc: "x86@kernel.org" <x86@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: stable@vger.kernel.org Fixes: 768ae4406a5c ("x86/ftrace: Use text_poke()") Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-08-17  x86/bugs: Enable STIBP for IBPB mitigated RETBleed (Kim Phillips; 1 file, -4/+6)
commit e6cfcdda8cbe81eaf821c897369a65fec987b404 upstream. AMD's "Technical Guidance for Mitigating Branch Type Confusion, Rev. 1.0 2022-07-12" whitepaper, under section 6.1.2 "IBPB On Privileged Mode Entry / SMT Safety" says: Similar to the Jmp2Ret mitigation, if the code on the sibling thread cannot be trusted, software should set STIBP to 1 or disable SMT to ensure SMT safety when using this mitigation. So, like already being done for retbleed=unret, and now also for retbleed=ibpb, force STIBP on machines that have it, and report its SMT vulnerability status accordingly. [ bp: Remove the "we" and remove "[AMD]" applicability parameter which doesn't work here. ] Fixes: 3ebc17006888 ("x86/bugs: Add retbleed=ibpb") Signed-off-by: Kim Phillips <kim.phillips@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: stable@vger.kernel.org # 5.10, 5.15, 5.19 Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537 Link: https://lore.kernel.org/r/20220804192201.439596-1-kim.phillips@amd.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-08-17  x86/entry: Build thunk_$(BITS) only if CONFIG_PREEMPTION=y (Andrea Righi; 4 files, -8/+4)
[ Upstream commit de979c83574abf6e78f3fa65b716515c91b2613d ] With CONFIG_PREEMPTION disabled, arch/x86/entry/thunk_$(BITS).o becomes an empty object file. With some old versions of binutils (i.e., 2.35.90.20210113-1ubuntu1) the GNU assembler doesn't generate a symbol table for empty object files and objtool fails with the following error when a valid symbol table cannot be found: arch/x86/entry/thunk_64.o: warning: objtool: missing symbol table To prevent this from happening, build thunk_$(BITS).o only if CONFIG_PREEMPTION is enabled. BugLink: https://bugs.launchpad.net/bugs/1911359 Fixes: 320100a5ffe5 ("x86/entry: Remove the TRACE_IRQS cruft") Signed-off-by: Andrea Righi <andrea.righi@canonical.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/Ys/Ke7EWjcX+ZlXO@arighi-desktop Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  x86/numa: Use cpumask_available instead of hardcoded NULL check (Siddh Raman Pant; 1 file, -2/+2)
[ Upstream commit 625395c4a0f4775e0fe00f616888d2e6c1ba49db ] GCC-12 started triggering a new warning:

  arch/x86/mm/numa.c: In function ‘cpumask_of_node’:
  arch/x86/mm/numa.c:916:39: warning: the comparison will always evaluate as ‘false’ for the address of ‘node_to_cpumask_map’ will never be NULL [-Waddress]
    916 |   if (node_to_cpumask_map[node] == NULL) {
        |                                 ^~

node_to_cpumask_map is of type cpumask_var_t[]. When CONFIG_CPUMASK_OFFSTACK is set, cpumask_var_t is typedef'd to a pointer for dynamic allocation, else to an array of one element. The "wicked game" can be checked on line 700 of include/linux/cpumask.h. The original code in debug_cpumask_set_cpu() and cpumask_of_node() were probably written by the original authors with CONFIG_CPUMASK_OFFSTACK=y (i.e. dynamic allocation) in mind, checking if the cpumask was available via a direct NULL check. When CONFIG_CPUMASK_OFFSTACK is not set, GCC gives the above warning while compiling the kernel. Fix that by using cpumask_available(), which does the NULL check when CONFIG_CPUMASK_OFFSTACK is set, otherwise returns true. Use it wherever such checks are made. Conditional definitions of cpumask_available() can be found along with the definition of cpumask_var_t. Check the cpumask.h reference mentioned above. Fixes: c032ef60d1aa ("cpumask: convert node_to_cpumask_map[] to cpumask_var_t") Fixes: de2d9445f162 ("x86: Unify node_to_cpumask_map handling between 32 and 64bit") Signed-off-by: Siddh Raman Pant <code@siddh.me> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20220731160913.632092-1-code@siddh.me Signed-off-by: Sasha Levin <sashal@kernel.org>
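A hedged sketch of the fixed check in cpumask_of_node() (warning text as used in the mainline function is assumed): cpumask_available() compiles to a real NULL check only when CONFIG_CPUMASK_OFFSTACK=y and is constant true otherwise, which silences -Waddress without changing behaviour.

  if (!cpumask_available(node_to_cpumask_map[node])) {
          pr_warn("cpumask_of_node(%d): no node_to_cpumask_map!\n", node);
          dump_stack();
          return cpu_none_mask;
  }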
2022-08-17  powerpc/64e: Fix kexec build error (Michael Ellerman; 1 file, -0/+1)
[ Upstream commit 4cfa6ff24a9744ba484521c38bea613134fbfcb3 ] When building ppc64_book3e_allmodconfig the build fails with:

  arch/powerpc/kexec/file_load_64.c:1063:14: error: implicit declaration of function ‘firmware_has_feature’
   1063 |   if (!firmware_has_feature(FW_FEATURE_LPAR))
        |        ^~~~~~~~~~~~~~~~~~~~

Add a direct include of asm/firmware.h to fix the error. Fixes: b1fc44eaa9ba ("pseries/iommu/ddw: Fix kdump to work in absence of ibm,dma-window") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220803063152.1249270-1-mpe@ellerman.id.au Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  powerpc/pci: Fix PHB numbering when using opal-phbid (Michael Ellerman; 1 file, -4/+6)
[ Upstream commit f4b39e88b42d13366b831270306326b5c20971ca ] The recent change to the PHB numbering logic has a logic error in the handling of "ibm,opal-phbid". When an "ibm,opal-phbid" property is present, &prop is written to and ret is set to zero. The following call to of_alias_get_id() is skipped because ret == 0. But then the if (ret >= 0) is true, and the body of that if statement sets prop = ret which throws away the value that was just read from "ibm,opal-phbid". Fix the logic by only doing the ret >= 0 check in the of_alias_get_id() case. Fixes: 0fe1e96fef0a ("powerpc/pci: Prefer PCI domain assignment via DT 'linux,pci-domain' and alias") Reviewed-by: Pali Rohár <pali@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220802105723.1055178-1-mpe@ellerman.id.au Signed-off-by: Sasha Levin <sashal@kernel.org>
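A hedged sketch of the corrected flow (variable names follow the commit message; the actual function differs in surrounding detail): the alias is consulted, and prop overwritten, only when "ibm,opal-phbid" was absent.

  ret = of_property_read_u64(dn, "ibm,opal-phbid", &prop);
  if (ret) {
          /* No OPAL property: fall back to the DT "pci" alias. */
          ret = of_alias_get_id(dn, "pci");
          if (ret >= 0)
                  prop = ret;     /* the ret >= 0 check now lives here */
  }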
2022-08-17  x86/bus_lock: Don't assume the init value of DEBUGCTLMSR.BUS_LOCK_DETECT to be zero (Chenyi Qiang; 1 file, -13/+14)
[ Upstream commit ffa6482e461ff550325356ae705b79e256702ea9 ] It's possible that this kernel has been kexec'd from a kernel that enabled bus lock detection, or (hypothetically) BIOS/firmware has set DEBUGCTLMSR_BUS_LOCK_DETECT. Disable bus lock detection explicitly if not wanted. Fixes: ebb1064e7c2e ("x86/traps: Handle #DB for bus lock") Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Tony Luck <tony.luck@intel.com> Link: https://lore.kernel.org/r/20220802033206.21333-1-chenyi.qiang@intel.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  powerpc/cell/axon_msi: Fix refcount leak in setup_msi_msg_address (Miaoqian Lin; 1 file, -0/+1)
[ Upstream commit df5d4b616ee76abc97e5bd348e22659c2b095b1c ] of_get_next_parent() returns a node pointer with refcount incremented, so we should use of_node_put() on it when it is no longer needed. Add the missing of_node_put() in the error path to avoid a refcount leak. Fixes: ce21b3c9648a ("[CELL] add support for MSI on Axon-based Cell systems") Signed-off-by: Miaoqian Lin <linmq006@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220605065129.63906-1-linmq006@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  powerpc/xive: Fix refcount leak in xive_get_max_prio (Miaoqian Lin; 1 file, -0/+1)
[ Upstream commit 255b650cbec6849443ce2e0cdd187fd5e61c218c ] of_find_node_by_path() returns a node pointer with refcount incremented, so we should use of_node_put() on it when done. Add the missing of_node_put() to avoid a refcount leak. Fixes: eac1e731b59e ("powerpc/xive: guest exploitation of the XIVE interrupt controller") Signed-off-by: Miaoqian Lin <linmq006@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220605053225.56125-1-linmq006@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  powerpc/spufs: Fix refcount leak in spufs_init_isolated_loader (Miaoqian Lin; 1 file, -0/+1)
[ Upstream commit 6ac059dacffa8ab2f7798f20e4bd3333890c541c ] of_find_node_by_path() returns a remote device node pointer with refcount incremented, so we should use of_node_put() on it when done. Add the missing of_node_put() to avoid a refcount leak. Fixes: 0afacde3df4c ("[POWERPC] spufs: allow isolated mode apps by starting the SPE loader") Signed-off-by: Miaoqian Lin <linmq006@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220603121543.22884-1-linmq006@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
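The three refcount fixes above share one shape; an illustrative sketch (node path and error handling are hypothetical, not from any of the patched functions):

  struct device_node *dn;

  dn = of_find_node_by_path("/some/node");        /* refcount is incremented */
  if (!dn)
          return -ENODEV;
  /* ... use dn ... */
  of_node_put(dn);        /* the added call: drop the reference on every exit path */
  return 0;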
2022-08-17  s390/smp: enforce lowcore protection on CPU restart (Alexander Gordeev; 1 file, -1/+1)
[ Upstream commit 6f5c672d17f583b081e283927f5040f726c54598 ] As a result of commit 915fea04f932 ("s390/smp: enable DAT before CPU restart callback is called") the low-address protection bit gets mistakenly unset in the control register 0 save area of the absolute zero memory. That area is used when a manual PSW restart happens to hit an offline CPU. In this case the low-address protection for that CPU will be dropped. Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Fixes: 915fea04f932 ("s390/smp: enable DAT before CPU restart callback is called") Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  powerpc/pci: Prefer PCI domain assignment via DT 'linux,pci-domain' and alias (Pali Rohár; 1 file, -8/+19)
[ Upstream commit 0fe1e96fef0a5c53b4c0d1500d356f3906000f81 ] Other Linux architectures use the DT property 'linux,pci-domain' for specifying a fixed PCI domain of a PCI controller specified in the Device-Tree. And a lot of Freescale powerpc boards define a numbered pci alias in the Device-Tree for every PCIe controller, whose number specifies the preferred PCI domain. So prefer usage of the DT property 'linux,pci-domain' (via function of_get_pci_domain_nr()) and the DT pci alias (via function of_alias_get_id()) on the powerpc architecture for assigning a PCI domain to a PCI controller. Fixes: 63a72284b159 ("powerpc/pci: Assign fixed PHB number based on device-tree properties") Signed-off-by: Pali Rohár <pali@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220706102148.5060-2-pali@kernel.org Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  powerpc/iommu: Fix iommu_table_in_use for a small default DMA window case (Alexey Kardashevskiy; 1 file, -0/+5)
[ Upstream commit d80f6de9d601c30b53c17f00cb7cfe3169f2ddad ] The existing iommu_table_in_use() helper checks if the kernel is using any TCEs. There are some reserved TCEs: 1) the very first one, if the DMA window starts from 0, to avoid having a zero but still valid DMA handle; 2) it_reserved_start..it_reserved_end, to exclude the MMIO32 window in case the default window spans across that - this is the default for the first DMA window on PowerNV. When 1) is the case and 2) is not, the helper does not skip 1) and returns the wrong status. This only seems to occur when passing through a PCI device to a nested guest (not something we support really well), so it has not been seen before. This fixes the bug by adding a special case for no MMIO32 reservation. Fixes: 3c33066a2190 ("powerpc/kernel/iommu: Add new iommu_table_in_use() helper") Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220714081119.3714605-1-aik@ozlabs.ru Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  pseries/iommu/ddw: Fix kdump to work in absence of ibm,dma-window (Alexey Kardashevskiy; 2 files, -41/+102)
[ Upstream commit b1fc44eaa9ba31e28c4125d6b9205a3582b47b5d ] The pseries platform uses a 32bit default DMA window (always 4K pages) and an optional 64bit DMA window available via DDW ("Dynamic DMA Windows"), 64K or 2M pages. For ages the default one was not removed and a huge window was created in addition. Things changed with SRIOV-enabled PowerVM which creates a default-and-bigger DMA window in 64bit space (still using 4K pages) for IOV VFs so certain OSes do not need to use the DDW API in order to utilize all available TCE budget. Linux on the other hand removes the default window and creates a bigger one (with more TCEs or/and a bigger page size - 64K/2M) in a bid to map the entire RAM, and if the new window size is smaller than that - it still uses this new bigger window. The result is that the default window is removed but the "ibm,dma-window" property is not. When kdump is invoked, the existing code tries reusing the existing 64bit DMA window whose location and parameters are stored in the device tree, but this fails as the new property does not make it to the kdump device tree blob. So the code falls back to the default window which does not exist anymore although the device tree says that it does. The result of that is that PCI devices become unusable and cannot be used for kdumping. This preserves the DMA64 and DIRECT64 properties in the device tree blob for the crash kernel. Since the crash kernel setup is done after device drivers are loaded and probed, the proper DMA config is stored at least for boot time devices. Because the DDW window is optional and the code configures the default window first, the existing code creates an IOMMU table descriptor for the non-existing default DMA window. It is harmless for kdump as it does not touch the actual window (only reads what is mapped and marks those IO pages as used) but it is bad for kexec which clears it thinking it is a smaller default window rather than a bigger DDW window. This removes the "ibm,dma-window" property from the device tree after a bigger window is created and the crash kernel setup picks it up. Fixes: 381ceda88c4c ("powerpc/pseries/iommu: Make use of DDW for indirect mapping") Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Acked-by: Hari Bathini <hbathini@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220629060614.1680476-1-aik@ozlabs.ru Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  powerpc: fix typos in comments (Julia Lawall; 83 files, -104/+104)
[ Upstream commit 1fd02f6605b855b4af2883f29a2abc88bdf17857 ] Various spelling mistakes in comments. Detected with the help of Coccinelle. Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr> Reviewed-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220430185654.5855-1-Julia.Lawall@inria.fr Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  powerpc/32: Do not allow selection of e5500 or e6500 CPUs on PPC32 (Christophe Leroy; 1 file, -2/+2)
[ Upstream commit 9be013b2a9ecb29b5168e4b9db0e48ed53acf37c ] Commit 0e00a8c9fd92 ("powerpc: Allow CPU selection also on PPC32") enlarged the CPU selection logic to PPC32 by removing the dependency on PPC64, and failed to keep that dependency for E5500_CPU and E6500_CPU. Fortunately that went unnoticed because -mcpu=8540 will override the -mcpu=e500mc64 or -mcpu=e6500 as they come earlier, but that's fragile and may not be right in the future. Add back the PPC64 dependency on E5500_CPU and E6500_CPU. Fixes: 0e00a8c9fd92 ("powerpc: Allow CPU selection also on PPC32") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/8abab4888da69ff78b73a56f64d9678a7bf684e9.1657549153.git.christophe.leroy@csgroup.eu Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  powerpc/32s: Fix boot failure with KASAN + SMP + JUMP_LABEL_FEATURE_CHECK_DEBUG (Christophe Leroy; 1 file, -1/+1)
[ Upstream commit 6042a1652d643d1d34fa89bb314cb102960c0800 ] Since commit 4291d085b0b0 ("powerpc/32s: Make pte_update() non atomic on 603 core"), pte_update() has been using mmu_has_feature(MMU_FTR_HPTE_TABLE) to avoid a useless atomic operation on 603 cores. When kasan_early_init() sets up the early zero shadow, it uses __set_pte_at(). On book3s/32, __set_pte_at() calls pte_update() when CONFIG_SMP is selected in order to ensure the preservation of _PAGE_HASHPTE in case of concurrent update of the PTE. But that's too early for mmu_has_feature(), so when CONFIG_JUMP_LABEL_FEATURE_CHECK_DEBUG is selected, mmu_has_feature() calls printk(). That's too early to call printk() because KASAN early zero shadow page is not set up yet. It leads to a deadlock. However, when kasan_early_init() is called, there is only one CPU running and no risk of concurrent PTE update. So __set_pte_at() can be called with the 'percpu' flag. With that flag set, the PTE is written directly instead of being written via pte_update(). Fixes: 4291d085b0b0 ("powerpc/32s: Make pte_update() non atomic on 603 core") Reported-by: Erhard Furtner <erhard_f@mailbox.org> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/2ee707512b8b212b079b877f4ceb525a1606a3fb.1656655567.git.christophe.leroy@csgroup.eu Signed-off-by: Sasha Levin <sashal@kernel.org>
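A minimal sketch of the fix described above ('va', 'ptep' and 'pte' are placeholders, not the exact kasan_early_init() arguments): at this point only one CPU runs, so the 'percpu' flag makes __set_pte_at() write the PTE directly and skip the pte_update() path that would call mmu_has_feature() too early.

  __set_pte_at(&init_mm, va, ptep, pte, 1);       /* was: ..., 0); */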
2022-08-17  powerpc/32: Call mmu_mark_initmem_nx() regardless of data block mapping. (Christophe Leroy; 2 files, -5/+5)
[ Upstream commit 980bbf7ca72012d317617fcdbfabe8708e4cef29 ] mark_initmem_nx() calls either mmu_mark_initmem_nx() or set_memory_attr() based on the return of v_block_mapped(_sinittext). But we can now handle text and data independently, so that text may be mapped by block even when data is mapped by pages. On the 8xx for instance, at startup 32Mbytes of memory are pinned in the TLB. So the pinned entries need to go away for sinittext. In the next patch a BAT will be set to also cover sinittext on book3s/32, so it will also be needed to call mmu_mark_initmem_nx() even when data above sinittext is not mapped with BATs. As this is highly dependent on the platform, call mmu_mark_initmem_nx() regardless of data block mapping. Then the platform will know what to do. Modify the 8xx mmu_mark_initmem_nx() so that the inittext mapping is modified only when pagealloc debug and kfence are not active; otherwise inittext is mapped with standard pages. And don't do anything on kernel text, which is already mapped with PAGE_KERNEL_TEXT. Fixes: da1adea07576 ("powerpc/8xx: Allow STRICT_KERNEL_RwX with pinned TLB") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/db3fc14f3bfa6215b0786ef58a6e2bc1e1f964d7.1655202804.git.christophe.leroy@csgroup.eu Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  s390/crash: fix incorrect number of bytes to copy to user space (Alexander Gordeev; 1 file, -1/+1)
[ Upstream commit f6749da17a34eb08c9665f072ce7c812ff68aad2 ] The number of bytes in a chunk is correctly calculated, but instead the total number of bytes is passed to copy_to_user_real() function. Reported-by: Matthew Wilcox <willy@infradead.org> Fixes: df9694c7975f ("s390/dump: streamline oldmem copy functions") Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  riscv: spinwait: Fix hartid variable type (Sunil V L; 1 file, -2/+2)
[ Upstream commit c029e487e7c00e5594a4ae946952605db34e359b ] The hartid variable is of type int but compared with ULONG_MAX (INVALID_HARTID). This issue is fixed by changing the hartid variable type to unsigned long. Fixes: c78f94f35cf6 ("RISC-V: Use __cpu_up_stack/task_pointer only for spinwait method") Signed-off-by: Sunil V L <sunilvl@ventanamicro.com> Reviewed-by: Atish Patra <atishp@rivosinc.com> Link: https://lore.kernel.org/r/20220527051743.2829940-3-sunilvl@ventanamicro.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  MIPS: Fixed __debug_virt_addr_valid() (Florian Fainelli; 1 file, -10/+4)
[ Upstream commit 8a2b456665d1e797123669581524cbb095fb003b ] It is permissible for kernel code to call virt_to_phys() against virtual addresses that are in KSEG0 or KSEG1 and we need to be dealing with both types. Rewrite the test condition to ensure that the kernel virtual addresses are above PAGE_OFFSET which they must be, and below KSEG2 where the non-linear mapping starts. For EVA, there is not much that we can do given the linear address range that is offered, so just return any virtual address as being valid. Finally, when HIGHMEM is not enabled, all virtual addresses are assumed to be valid as well. Fixes: dfad83cb7193 ("MIPS: Add support for CONFIG_DEBUG_VIRTUAL") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Serge Semin <fancer.lancer@gmail.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
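A hedged sketch of the rewritten range check (macro spelling per asm/addrspace.h is an assumption): linear kernel virtual addresses must lie in [PAGE_OFFSET, KSEG2), which accepts both the KSEG0 and KSEG1 aliases.

  /* Sketch: reject anything below the linear map or at/above the
   * non-linear KSEG2 region. */
  if ((unsigned long)x < PAGE_OFFSET || (unsigned long)x >= KSEG2)
          return false;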
2022-08-17  MIPS: vdso: Utilize __pa() for gic_pfn (Florian Fainelli; 1 file, -1/+1)
[ Upstream commit 8baa65126e19af5ee9f3c07e7bb53da41c39e4b1 ] The GIC user offset is mapped into every process' virtual address and is therefore part of the hot-path of arch_setup_additional_pages(). Utilize __pa() such that we are more optimal even when CONFIG_DEBUG_VIRTUAL is enabled, and while at it utilize PFN_DOWN() instead of open-coding the right shift by PAGE_SHIFT. Reported-by: Greg Ungerer <gerg@kernel.org> Suggested-by: Serge Semin <fancer.lancer@gmail.com> Fixes: dfad83cb7193 ("MIPS: Add support for CONFIG_DEBUG_VIRTUAL") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Acked-by: Greg Ungerer <gerg@kernel.org> Tested-by: Greg Ungerer <gerg@kernel.org> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
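The described one-liner, approximately (the previous form shown in the comment is a reconstruction, not a verbatim quote of the old code):

  gic_pfn = PFN_DOWN(__pa(mips_gic_base));
  /* roughly replaces: gic_pfn = virt_to_phys(mips_gic_base) >> PAGE_SHIFT; */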
2022-08-17  MIPS: Loongson64: Fix section mismatch warning (Tiezhu Yang; 1 file, -1/+0)
[ Upstream commit 08472f6ebdc23334ad11dcd761d2d52c32897793 ] prom_init_numa_memory() is annotated __init and not used by any module, thus don't export it. Remove the not needed EXPORT_SYMBOL for prom_init_numa_memory() to fix the following section mismatch warning:

  LD      vmlinux.o
  MODPOST vmlinux.symvers
  WARNING: modpost: vmlinux.o(___ksymtab+prom_init_numa_memory+0x0): Section mismatch in reference from the variable __ksymtab_prom_init_numa_memory to the function .init.text:prom_init_numa_memory()
  The symbol prom_init_numa_memory is exported and annotated __init
  Fix this by removing the __init annotation of prom_init_numa_memory or drop the export.

This is built on Linux 5.19-rc4. Fixes: 6fbde6b492df ("MIPS: Loongson64: Move files to the top-level directory") Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Reviewed-by: Huacai Chen <chenhuacai@kernel.org> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  powerpc/perf: Optimize clearing the pending PMI and remove WARN_ON for PMI check in power_pmu_disable (Athira Rajeev; 1 file, -20/+15)
[ Upstream commit 890005a7d98f7452cfe86dcfb2aeeb7df01132ce ] commit 2c9ac51b850d ("powerpc/perf: Fix PMU callbacks to clear pending PMI before resetting an overflown PMC") added a new function "pmi_irq_pending" in hw_irq.h. This function checks if there is a PMI marked as pending in the Paca (PACA_IRQ_PMI). It is used in power_pmu_disable in a WARN_ON. The intention is to warn if there is a PMI pending but no counter is found overflown. During some perf runs, the warning below is hit:

  WARNING: CPU: 36 PID: 0 at arch/powerpc/perf/core-book3s.c:1332 power_pmu_disable+0x25c/0x2c0
  Modules linked in:
  -----
  NIP [c000000000141c3c] power_pmu_disable+0x25c/0x2c0
  LR [c000000000141c8c] power_pmu_disable+0x2ac/0x2c0
  Call Trace:
  [c000000baffcfb90] [c000000000141c8c] power_pmu_disable+0x2ac/0x2c0 (unreliable)
  [c000000baffcfc10] [c0000000003e2f8c] perf_pmu_disable+0x4c/0x60
  [c000000baffcfc30] [c0000000003e3344] group_sched_out.part.124+0x44/0x100
  [c000000baffcfc80] [c0000000003e353c] __perf_event_disable+0x13c/0x240
  [c000000baffcfcd0] [c0000000003dd334] event_function+0xc4/0x140
  [c000000baffcfd20] [c0000000003d855c] remote_function+0x7c/0xa0
  [c000000baffcfd50] [c00000000026c394] flush_smp_call_function_queue+0xd4/0x300
  [c000000baffcfde0] [c000000000065b24] smp_ipi_demux_relaxed+0xa4/0x100
  [c000000baffcfe20] [c0000000000cb2b0] xive_muxed_ipi_action+0x20/0x40
  [c000000baffcfe40] [c000000000207c3c] __handle_irq_event_percpu+0x8c/0x250
  [c000000baffcfee0] [c000000000207e2c] handle_irq_event_percpu+0x2c/0xa0
  [c000000baffcff10] [c000000000210a04] handle_percpu_irq+0x84/0xc0
  [c000000baffcff40] [c000000000205f14] generic_handle_irq+0x54/0x80
  [c000000baffcff60] [c000000000015740] __do_irq+0x90/0x1d0
  [c000000baffcff90] [c000000000016990] __do_IRQ+0xc0/0x140
  [c0000009732f3940] [c000000bafceaca8] 0xc000000bafceaca8
  [c0000009732f39d0] [c000000000016b78] do_IRQ+0x168/0x1c0
  [c0000009732f3a00] [c0000000000090c8] hardware_interrupt_common_virt+0x218/0x220

This means that there is no PMC overflown among the active events in the PMU, but there is a PMI pending in the Paca. The function "any_pmc_overflown" checks the PMCs on active events in cpuhw->n_events. Code snippet:

  if (any_pmc_overflown(cpuhw))
          clear_pmi_irq_pending();
  else
          WARN_ON(pmi_irq_pending());

Here the overflown PMC is not from an active event. Example: when we do perf record, cycles and instructions run on PMC6 and PMC5 respectively by default. It can happen that the overflowed event is currently not active and the pending PMI is for the inactive event. Debug logs from trace_printk:

  any_pmc_overflown: idx is 5: pmc value is 0xd9a
  power_pmu_disable: PMC1: 0x0, PMC2: 0x0, PMC3: 0x0, PMC4: 0x0, PMC5: 0xd9a, PMC6: 0x80002011

Here the active PMC (from idx) is PMC5, but the overflown PMC is PMC6 (0x80002011). When we handle the PMI interrupt for such cases, if the overflown PMC is from an inactive event, it will be ignored. Reference: commit bc09c219b2e6 ("powerpc/perf: Fix finding overflowed PMC in interrupt"). The patch makes two changes:

1) Removal of the warning (WARN_ON(pmi_irq_pending())). We were printing a warning if no PMC was found overflown among the active PMU events but a PMI was pending in the Paca. This can happen when the overflown PMC is not among the active PMCs: an inactive event could have caused the overflow, so the warning is not needed. To know whether a pending PMI is from an inactive event, we would need to loop through all PMCs, which would cause more SPR reads via mfspr and more overhead on context switch. Also, the existing function perf_event_interrupt already ignores overflown PMIs when they come from an inactive PMC.

2) Optimization in clearing the pending PMI. Currently we check for any active PMC overflown before clearing the pending PMI in the Paca, which costs an additional SPR read. From point 1 we know that a PMI pending in the Paca for an inactive event is going to be ignored during replay anyway. Hence, if there is a pending PMI in the Paca, just clear it irrespective of whether a PMC overflowed or not.

In summary, remove the any_pmc_overflown check entirely in power_pmu_disable: if there is a pending PMI in the Paca, clear it, since we are in pmu_disable. There could be cases where a PMI is pending because of an inactive PMC (which will be ignored when replayed), so the WARN_ON could give a false warning; hence it is removed. Fixes: 2c9ac51b850d ("powerpc/perf: Fix PMU callbacks to clear pending PMI before resetting an overflown PMC") Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220522142256.24699-1-atrajeev@linux.vnet.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
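A sketch of the simplified logic in power_pmu_disable() after both changes, per the summary above:

  /* Clear any pending PMI unconditionally; a PMI from an inactive PMC
   * would be ignored on replay anyway, so no overflow check is needed. */
  if (pmi_irq_pending())
          clear_pmi_irq_pending();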
2022-08-17  KVM: nVMX: Set UMIP bit CR4_FIXED1 MSR when emulating UMIP (Sean Christopherson; 1 file, -0/+3)
[ Upstream commit a910b5ab6b250a88fff1866bf708642d83317466 ] Make UMIP an "allowed-1" bit CR4_FIXED1 MSR when KVM is emulating UMIP. KVM emulates UMIP for both L1 and L2, and so should enumerate that L2 is allowed to have CR4.UMIP=1. Not setting the bit doesn't immediately break nVMX, as KVM does set/clear the bit in CR4_FIXED1 in response to a guest CPUID update, i.e. KVM will correctly (dis)allow nested VM-Entry based on whether or not UMIP is exposed to L1. That said, KVM should enumerate the bit as being allowed from time zero, e.g. userspace will see the wrong value if the MSR is read before CPUID is written. Fixes: 0367f205a3b7 ("KVM: vmx: add support for emulating UMIP") Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220607213604.3346000-12-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
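A hedged sketch of the added enumeration (the surrounding msrs structure is assumed from context): advertise CR4.UMIP as allowed-1 from time zero whenever KVM emulates UMIP, rather than waiting for a guest CPUID update.

  if (vmx_umip_emulated())
          msrs->cr4_fixed1 |= X86_CR4_UMIP;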
2022-08-17  um: random: Don't initialise hwrng struct with zero (Christopher Obbard; 1 file, -1/+1)
[ Upstream commit 9e70cbd11b03889c92462cf52edb2bd023c798fa ] Initialising the hwrng struct with zeros causes a compile-time sparse warning: $ ARCH=um make -j10 W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ... CHECK arch/um/drivers/random.c arch/um/drivers/random.c:31:31: sparse: warning: Using plain integer as NULL pointer Fix the warning by not initialising the hwrng struct with zeros as it is initialised anyway during module init. Fixes: 72d3e093afae ("um: random: Register random as hwrng-core device") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Christopher Obbard <chris.obbard@collabora.com> Signed-off-by: Richard Weinberger <richard@nod.at> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  xtensa: iss: fix handling error cases in iss_net_configure() (Yang Yingliang; 1 file, -17/+15)
[ Upstream commit 628ccfc8f5f79dd548319408fcc53949fe97b258 ] The 'pdev' and 'netdev' need to be released in error cases of iss_net_configure(). Change the return type of iss_net_configure() to void, because it's not used. Fixes: 7282bee78798 ("[PATCH] xtensa: Architecture support for Tensilica Xtensa Part 8") Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  xtensa: iss/network: provide release() callback (Max Filippov; 1 file, -0/+10)
[ Upstream commit 8864fb8359682912ee99235db7db916733a1fd7b ] Provide release() callback for the platform device embedded into struct iss_net_private and registered in the iss_net_configure so that platform_device_unregister could be called for it. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  KVM: s390: pv: leak the topmost page table when destroy fails (Claudio Imbrenda; 3 files, -3/+94)
[ Upstream commit faa2f72cb3569256480c5540d242c84e99965160 ] Each secure guest must have a unique ASCE (address space control element); we must avoid that new guests use the same page for their ASCE, to avoid errors. Since the ASCE mostly consists of the address of the topmost page table (plus some flags), we must not return that memory to the pool unless the ASCE is no longer in use. Only a successful Destroy Secure Configuration UVC will make the ASCE reusable again. If the Destroy Configuration UVC fails, the ASCE cannot be reused for a secure guest (either for the ASCE or for other memory areas). To avoid a collision, it must not be used again. This is a permanent error and the page becomes in practice unusable, so we set it aside and leak it. On failure we already leak other memory that belongs to the ultravisor (i.e. the variable and base storage for a guest) and not leaking the topmost page table was an oversight. This error (and thus the leakage) should not happen unless the hardware is broken or KVM has some unknown serious bug. Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Fixes: 29b40f105ec8d55 ("KVM: s390: protvirt: Add initial vm and cpu lifecycle handling") Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Link: https://lore.kernel.org/r/20220628135619.32410-2-imbrenda@linux.ibm.com Message-Id: <20220628135619.32410-2-imbrenda@linux.ibm.com> Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  KVM: arm64: Don't return from void function (Quentin Perret; 2 files, -2/+2)
[ Upstream commit 1c3ace2b8b3995d3213c5e2d2aca01a0577a3b0f ] Although harmless, the return statement in kvm_unexpected_el2_exception is rather confusing as the function itself has a void return type. The C standard is also pretty clear that "A return statement with an expression shall not appear in a function whose return type is void". Given that this return statement does not seem to add any actual value, let's not pointlessly violate the standard. Build-tested with GCC 10 and CLANG 13 for good measure, the disassembled code is identical with or without the return statement. Fixes: e9ee186bb735 ("KVM: arm64: Add kvm_extable for vaxorcism code") Signed-off-by: Quentin Perret <qperret@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220705142310.3847918-1-qperret@google.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17  KVM: x86: Fix errant brace in KVM capability handling (Ben Gardon; 1 file, -1/+1)
[ Upstream commit 1c4dc57328bf218e999951824dce75c6125c4f3c ] The braces around the KVM_CAP_XSAVE2 block also surround the KVM_CAP_PMU_CAPABILITY block, likely the result of a merge issue. Simply move the curly brace back to where it belongs. Fixes: ba7bb663f5547 ("KVM: x86: Provide per VM capability for disabling PMU virtualization") Reviewed-by: David Matlack <dmatlack@google.com> Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Ben Gardon <bgardon@google.com> Message-Id: <20220613212523.3436117-8-bgardon@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>