path: root/arch/x86/kvm
2025-02-28  KVM: x86: Snapshot the host's DEBUGCTL after disabling IRQs  (Sean Christopherson, 1 file, -1/+2)
Snapshot the host's DEBUGCTL after disabling IRQs, as perf can toggle debugctl bits from IRQ context, e.g. when enabling/disabling events via smp_call_function_single(). Taking the snapshot (long) before IRQs are disabled could result in KVM effectively clobbering DEBUGCTL due to using a stale snapshot. Cc: stable@vger.kernel.org Reviewed-and-tested-by: Ravi Bangoria <ravi.bangoria@amd.com> Link: https://lore.kernel.org/r/20250227222411.3490595-6-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
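The ordering is the whole point: the DEBUGCTL read has to land inside the IRQs-disabled window so perf cannot flip bits between the snapshot and VMRUN. A minimal ordering sketch, not the actual KVM code (run_guest() and the host_debugctl field are illustrative names):

```c
#include <linux/irqflags.h>
#include <asm/debugreg.h>

/*
 * Sketch of the intended ordering: read DEBUGCTL only after IRQs are
 * disabled, as perf may toggle DEBUGCTL bits from IRQ context and would
 * otherwise invalidate the snapshot before VMRUN/VMLAUNCH.
 */
static void enter_guest_sketch(struct kvm_vcpu *vcpu)
{
	local_irq_disable();

	/* Snapshot with IRQs off; stays valid until local_irq_enable(). */
	vcpu->arch.host_debugctl = get_debugctlmsr();

	run_guest(vcpu);		/* hypothetical: the VMRUN/VMLAUNCH path */

	/* Restore the still-valid snapshot before re-enabling IRQs. */
	update_debugctlmsr(vcpu->arch.host_debugctl);

	local_irq_enable();
}
```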
2025-02-28  KVM: SVM: Manually context switch DEBUGCTL if LBR virtualization is disabled  (Sean Christopherson, 1 file, -0/+14)
Manually load the guest's DEBUGCTL prior to VMRUN (and restore the host's value on #VMEXIT) if it diverges from the host's value and LBR virtualization is disabled, as hardware only context switches DEBUGCTL if LBR virtualization is fully enabled. Running the guest with the host's value has likely been mildly problematic for quite some time, e.g. it will result in undesirable behavior if BTF diverges (with the caveat that KVM now suppresses guest BTF due to lack of support). But the bug became fatal with the introduction of Bus Lock Trap ("Detect" in kernel parlance) support for AMD (commit 408eb7417a92 ("x86/bus_lock: Add support for AMD")), as a bus lock in the guest will trigger an unexpected #DB.

Note, suppressing the bus lock #DB, i.e. simply resuming the guest without injecting a #DB, is not an option. It wouldn't address the general issue with DEBUGCTL, e.g. for things like BTF, and there are other guest-visible side effects if BusLockTrap is left enabled. If BusLockTrap is disabled, then DR6.BLD is reserved-to-1; any attempts to clear it by software are ignored. But if BusLockTrap is enabled, software can clear DR6.BLD:

  Software enables bus lock trap by setting DebugCtl MSR[BLCKDB] (bit 2)
  to 1. When bus lock trap is enabled, ... The processor indicates that
  this #DB was caused by a bus lock by clearing DR6[BLD] (bit 11). DR6[11]
  previously had been defined to be always 1.

and clearing DR6.BLD is "sticky" in that it's not set (i.e. lowered) by other #DBs:

  All other #DB exceptions leave DR6[BLD] unmodified

E.g. leaving BusLockTrap enabled can confuse a legacy guest that writes '0' to reset DR6.

Reported-by: rangemachine@gmail.com
Reported-by: whanos@sergal.fun
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=219787
Closes: https://lore.kernel.org/all/bug-219787-28872@https.bugzilla.kernel.org%2F
Cc: Ravi Bangoria <ravi.bangoria@amd.com>
Cc: stable@vger.kernel.org
Reviewed-and-tested-by: Ravi Bangoria <ravi.bangoria@amd.com>
Link: https://lore.kernel.org/r/20250227222411.3490595-5-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
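Since hardware only swaps DEBUGCTL when LBR virtualization is on, the swap has to be done by hand around VMRUN when the values diverge. An illustrative sketch only (lbrv_enabled() and run_vmrun() are assumed names, not KVM symbols):

```c
/*
 * Swap DEBUGCTL around VMRUN by hand when hardware won't, i.e. when LBR
 * virtualization is disabled and the guest's value differs from the host's.
 */
static void svm_run_with_guest_debugctl(struct vcpu_svm *svm, u64 host_debugctl)
{
	u64 guest_debugctl = svm->vmcb->save.dbgctl;
	bool swap = !lbrv_enabled(svm) && guest_debugctl != host_debugctl;

	if (swap)
		update_debugctlmsr(guest_debugctl);	/* load guest value pre-VMRUN */

	run_vmrun(svm);

	if (swap)
		update_debugctlmsr(host_debugctl);	/* restore host value on #VMEXIT */
}
```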
2025-02-28  KVM: x86: Snapshot the host's DEBUGCTL in common x86  (Sean Christopherson, 3 files, -8/+3)
Move KVM's snapshot of DEBUGCTL to kvm_vcpu_arch and take the snapshot in common x86, so that SVM can also use the snapshot. Opportunistically change the field to a u64. While bits 63:32 are reserved on AMD, not mentioned at all in Intel's SDM, and managed as an "unsigned long" by the kernel, DEBUGCTL is an MSR and therefore a 64-bit value. Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com> Cc: stable@vger.kernel.org Reviewed-and-tested-by: Ravi Bangoria <ravi.bangoria@amd.com> Link: https://lore.kernel.org/r/20250227222411.3490595-4-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-28  KVM: SVM: Suppress DEBUGCTL.BTF on AMD  (Sean Christopherson, 2 files, -1/+10)
Mark BTF as reserved in DEBUGCTL on AMD, as KVM doesn't actually support BTF, and fully enabling BTF virtualization is non-trivial due to interactions with the emulator, guest_debug, #DB interception, nested SVM, etc. Don't inject #GP if the guest attempts to set BTF, as there's no way to communicate lack of support to the guest, and instead suppress the flag and treat the WRMSR as (partially) unsupported. In short, make KVM behave the same on AMD and Intel (VMX already squashes BTF). Note, due to other bugs in KVM's handling of DEBUGCTL, the only way BTF has "worked" in any capacity is if the guest simultaneously enables LBRs. Reported-by: Ravi Bangoria <ravi.bangoria@amd.com> Cc: stable@vger.kernel.org Reviewed-and-tested-by: Ravi Bangoria <ravi.bangoria@amd.com> Link: https://lore.kernel.org/r/20250227222411.3490595-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-28  KVM: SVM: Drop DEBUGCTL[5:2] from guest's effective value  (Sean Christopherson, 2 files, -1/+13)
Drop bits 5:2 from the guest's effective DEBUGCTL value, as AMD changed the architectural behavior of the bits and broke backwards compatibility. On CPUs without BusLockTrap (or at least, in APMs from before ~2023), bits 5:2 controlled the behavior of external pins:

  Performance-Monitoring/Breakpoint Pin-Control (PBi)—Bits 5:2, read/write.
  Software uses these bits to control the type of information reported by
  the four external performance-monitoring/breakpoint pins on the processor.
  When a PBi bit is cleared to 0, the corresponding external pin (BPi)
  reports performance-monitor information. When a PBi bit is set to 1, the
  corresponding external pin (BPi) reports breakpoint information.

With the introduction of BusLockTrap, presumably to be compatible with Intel CPUs, AMD redefined bit 2 to be BLCKDB:

  Bus Lock #DB Trap (BLCKDB)—Bit 2, read/write. Software sets this bit to
  enable generation of a #DB trap following successful execution of a bus
  lock when CPL is > 0.

and redefined bits 5:3 (and bit 6) as "6:3 Reserved MBZ".

Ideally, KVM would treat bits 5:2 as reserved. Defer that change to a feature cleanup to avoid breaking existing guests in LTS kernels. For now, drop the bits to retain backwards compatibility (of a sort). Note, dropping bits 5:2 is still a guest-visible change, e.g. if the guest is enabling LBRs *and* the legacy PBi bits, then the state of the PBi bits is visible to the guest, whereas now the guest will always see '0'.

Reported-by: Ravi Bangoria <ravi.bangoria@amd.com>
Cc: stable@vger.kernel.org
Reviewed-and-tested-by: Ravi Bangoria <ravi.bangoria@amd.com>
Link: https://lore.kernel.org/r/20250227222411.3490595-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
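The net effect described above is a simple mask of bits 5:2 when computing the value the guest runs with. A small standalone sketch of that masking (the macro name is an assumption, not KVM's define):

```c
#include <stdint.h>
#include <stdio.h>

/* Bits 5:2 of DEBUGCTL: legacy PBi pin controls, redefined by BusLockTrap. */
#define DEBUGCTL_LEGACY_PB_MASK	(0xfULL << 2)

/* Effective value the guest would run with, per the change above. */
static uint64_t effective_debugctl(uint64_t guest_val)
{
	return guest_val & ~DEBUGCTL_LEGACY_PB_MASK;
}

int main(void)
{
	/* Guest sets LBR (bit 0) and a legacy PBi bit (bit 3): bit 3 is dropped. */
	printf("0x%llx\n", (unsigned long long)effective_debugctl(0x9)); /* prints 0x1 */
	return 0;
}
```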
2025-02-28  KVM: SVM: Inject #GP if memory operand for INVPCID is non-canonical  (Sean Christopherson, 1 file, -0/+11)
Inject a #GP if the memory operand received by INVPCID is non-canonical. The APM clearly states that the intercept takes priority over all #GP checks except the CPL0 restriction. Of course, that begs the question of how the CPU generates a linear address in the first place. Tracing confirms that EXITINFO1 does hold a linear address, at least for 64-bit mode guests (hooray GS prefix). Unfortunately, the APM says absolutely nothing about the EXITINFO fields for INVPCID intercepts, so it's not at all clear what's supposed to happen. Add a FIXME to call out that KVM still does the wrong thing for 32-bit guests, and if the stack segment is used for the memory operand. Cc: Babu Moger <babu.moger@amd.com> Cc: Jim Mattson <jmattson@google.com> Fixes: 4407a797e941 ("KVM: SVM: Enable INVPCID feature on AMD") Link: https://lore.kernel.org/r/20250224174522.2363400-1-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
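For reference, "canonical" means bits 63:vaddr_bits sign-extend bit vaddr_bits-1. A standalone sketch of the check that would precede the #GP injection (KVM's real helper also accounts for LA57 and the per-vCPU address width):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Relies on arithmetic right shift of signed values, as x86 compilers
 * provide; this mirrors the usual kernel-style canonicality test.
 */
static bool is_noncanonical(uint64_t la, unsigned int vaddr_bits)
{
	unsigned int shift = 64 - vaddr_bits;

	return (uint64_t)(((int64_t)(la << shift)) >> shift) != la;
}

int main(void)
{
	printf("%d\n", is_noncanonical(0x0000800000000000ULL, 48)); /* 1 -> inject #GP */
	printf("%d\n", is_noncanonical(0xffff800000000000ULL, 48)); /* 0 -> proceed   */
	return 0;
}
```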
2025-02-28  KVM: VMX: Reject KVM_RUN if userspace forces emulation during nested VM-Enter  (Sean Christopherson, 1 file, -4/+28)
Extend KVM's restrictions on userspace forcing "emulation required" at odd times to cover stuffing invalid guest state while a nested run is pending. Clobbering guest state while KVM is in the middle of emulating VM-Enter is nonsensical, as it puts the vCPU into an architecturally impossible state, and will trip KVM's sanity check that guards against KVM bugs, e.g. helps detect missed VMX consistency checks.

  WARNING: CPU: 3 PID: 6336 at arch/x86/kvm/vmx/vmx.c:6480 __vmx_handle_exit arch/x86/kvm/vmx/vmx.c:6480 [inline]
  WARNING: CPU: 3 PID: 6336 at arch/x86/kvm/vmx/vmx.c:6480 vmx_handle_exit+0x40f/0x1f70 arch/x86/kvm/vmx/vmx.c:6637
  Modules linked in:
  CPU: 3 UID: 0 PID: 6336 Comm: syz.0.73 Not tainted 6.13.0-rc1-syzkaller-00316-gb5f217084ab3 #0
  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
  RIP: 0010:__vmx_handle_exit arch/x86/kvm/vmx/vmx.c:6480 [inline]
  RIP: 0010:vmx_handle_exit+0x40f/0x1f70 arch/x86/kvm/vmx/vmx.c:6637
   <TASK>
   vcpu_enter_guest arch/x86/kvm/x86.c:11081 [inline]
   vcpu_run+0x3047/0x4f50 arch/x86/kvm/x86.c:11242
   kvm_arch_vcpu_ioctl_run+0x44a/0x1740 arch/x86/kvm/x86.c:11560
   kvm_vcpu_ioctl+0x6ce/0x1520 virt/kvm/kvm_main.c:4340
   vfs_ioctl fs/ioctl.c:51 [inline]
   __do_sys_ioctl fs/ioctl.c:906 [inline]
   __se_sys_ioctl fs/ioctl.c:892 [inline]
   __x64_sys_ioctl+0x190/0x200 fs/ioctl.c:892
   do_syscall_x64 arch/x86/entry/common.c:52 [inline]
   do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
   entry_SYSCALL_64_after_hwframe+0x77/0x7f
   </TASK>

Reported-by: syzbot+ac0bc3a70282b4d586cc@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/67598fb9.050a0220.17f54a.003b.GAE@google.com
Debugged-by: James Houghton <jthoughton@google.com>
Tested-by: James Houghton <jthoughton@google.com>
Link: https://lore.kernel.org/r/20250224171409.2348647-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-28  KVM: SVM: Set RFLAGS.IF=1 in C code, to get VMRUN out of the STI shadow  (Sean Christopherson, 2 files, -9/+15)
Enable/disable local IRQs, i.e. set/clear RFLAGS.IF, in the common svm_vcpu_enter_exit() just after/before guest_state_{enter,exit}_irqoff() so that VMRUN is not executed in an STI shadow. AMD CPUs have a quirk (some would say "bug"), where the STI shadow bleeds into the guest's intr_state field if a #VMEXIT occurs during injection of an event, i.e. if the VMRUN doesn't complete before the subsequent #VMEXIT. The spurious "interrupts masked" state is relatively benign, as it only occurs during event injection and is transient. Because KVM is already injecting an event, the guest can't be in HLT, and if KVM is querying IRQ blocking for injection, then KVM would need to force an immediate exit anyways since injecting multiple events is impossible. However, because KVM copies int_state verbatim from vmcb02 to vmcb12, the spurious STI shadow is visible to L1 when running a nested VM, which can trip sanity checks, e.g. in VMware's VMM. Hoist the STI+CLI all the way to C code, as the aforementioned calls to guest_state_{enter,exit}_irqoff() already inform lockdep that IRQs are enabled/disabled, and taking a fault on VMRUN with RFLAGS.IF=1 is already possible. I.e. if there's kernel code that is confused by running with RFLAGS.IF=1, then it's already a problem. In practice, since GIF=0 also blocks NMIs, the only change in exposure to non-KVM code (relative to surrounding VMRUN with STI+CLI) is exception handling code, and except for the kvm_rebooting=1 case, all exceptions in the core VM-Enter/VM-Exit path are fatal. Use the "raw" variants to enable/disable IRQs to avoid tracing in the "no instrumentation" code; the guest state helpers also take care of tracing IRQ state. Opportunistically document why KVM needs to do STI in the first place. Reported-by: Doug Covelli <doug.covelli@broadcom.com> Closes: https://lore.kernel.org/all/CADH9ctBs1YPmE4aCfGPNBwA10cA8RuAk2gO7542DjMZgs4uzJQ@mail.gmail.com Fixes: f14eec0a3203 ("KVM: SVM: move more vmentry code to assembly") Cc: stable@vger.kernel.org Reviewed-by: Jim Mattson <jmattson@google.com> Link: https://lore.kernel.org/r/20250224165442.2338294-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
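A sketch of the resulting ordering, with the IRQ toggles in C bracketing the assembly entry point so VMRUN never sits in an STI shadow (function names are illustrative, not the exact KVM symbols):

```c
#include <linux/context_tracking.h>
#include <linux/irqflags.h>

static void svm_vcpu_enter_exit_sketch(struct kvm_vcpu *vcpu)
{
	guest_state_enter_irqoff();

	/*
	 * VMRUN with RFLAGS.IF=1 is fine: GIF=0 still blocks IRQs/NMIs, and
	 * lockdep already believes IRQs are "enabled" at this point.  Use the
	 * raw variants to avoid tracing in noinstr territory.
	 */
	raw_local_irq_enable();

	__svm_vcpu_run_asm(vcpu);	/* hypothetical asm VMRUN entry point */

	raw_local_irq_disable();

	guest_state_exit_irqoff();
}
```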
2025-02-28  KVM: x86/tdp_mmu: Remove tdp_mmu_for_each_pte()  (Nikolay Borisov, 1 file, -6/+3)
That macro acts as a different name for for_each_tdp_pte(); apart from adding cognitive load, it doesn't bring any value. Let's remove it. Signed-off-by: Nikolay Borisov <nik.borisov@suse.com> Link: https://lore.kernel.org/r/20250226074131.312565-1-nik.borisov@suse.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-28  KVM: nVMX: Decouple EPT RWX bits from EPT Violation protection bits  (Sean Christopherson, 2 files, -3/+2)
Define independent macros for the RWX protection bits that are enumerated via EXIT_QUALIFICATION for EPT Violations, and tie them to the RWX bits in EPT entries via compile-time asserts. Piggybacking the EPTE defines works for now, but it creates holes in the EPT_VIOLATION_xxx macros and will cause headaches if/when KVM emulates Mode-Based Execution (MBEC), or any other feature that introduces additional protection information. Opportunistically rename EPT_VIOLATION_RWX_MASK to EPT_VIOLATION_PROT_MASK so that it doesn't become stale if/when MBEC support is added. No functional change intended. Cc: Jon Kohler <jon@nutanix.com> Cc: Nikolay Borisov <nik.borisov@suse.com> Reviewed-by: Nikolay Borisov <nik.borisov@suse.com> Link: https://lore.kernel.org/r/20250227000705.3199706-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-27  KVM: nVMX: Always use IBPB to properly virtualize IBRS  (Yosry Ahmed, 1 file, -2/+1)
On synthesized nested VM-exits in VMX, an IBPB is performed if IBRS is advertised to the guest to properly provide separate prediction domains for L1 and L2. However, this is currently conditional on X86_FEATURE_USE_IBPB, which depends on the host spectre_v2_user mitigation. In short, if spectre_v2_user=no, IBRS is not virtualized correctly and L1 becomes susceptible to attacks from L2. Fix this by performing the IBPB regardless of X86_FEATURE_USE_IBPB. Fixes: 2e7eab81425a ("KVM: VMX: Execute IBPB on emulated VM-exit when guest has IBRS") Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Jim Mattson <jmattson@google.com> Acked-by: Josh Poimboeuf <jpoimboe@kernel.org> Acked-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20250227012712.3193063-6-yosry.ahmed@linux.dev
2025-02-27  x86/bugs: Use a static branch to guard IBPB on vCPU switch  (Yosry Ahmed, 2 files, -2/+2)
Instead of using X86_FEATURE_USE_IBPB to guard the IBPB execution in KVM when a new vCPU is loaded, introduce a static branch, similar to switch_mm_*_ibpb. This makes it obvious in spectre_v2_user_select_mitigation() what exactly is being toggled, instead of the unclear X86_FEATURE_USE_IBPB (which will be shortly removed). It also provides more fine-grained control, making it simpler to change/add paths that control the IBPB in the vCPU switch path without affecting other IBPBs. Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Josh Poimboeuf <jpoimboe@kernel.org> Acked-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20250227012712.3193063-5-yosry.ahmed@linux.dev
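A hedged sketch of the pattern being described: a static key toggled once at mitigation-selection time and checked in the vCPU-load path (names here are illustrative, not the exact symbols added by the patch):

```c
#include <linux/jump_label.h>
#include <asm/nospec-branch.h>

DEFINE_STATIC_KEY_FALSE(switch_vcpu_ibpb);

/* Called from spectre_v2_user_select_mitigation(), roughly. */
static void enable_vcpu_switch_ibpb(void)
{
	static_branch_enable(&switch_vcpu_ibpb);
}

/* Called when KVM loads a different vCPU on a physical CPU, roughly. */
static void maybe_ibpb_on_vcpu_load(void)
{
	if (static_branch_likely(&switch_vcpu_ibpb))
		indirect_branch_prediction_barrier();
}
```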
2025-02-27  x86/bugs: Move the X86_FEATURE_USE_IBPB check into callers  (Yosry Ahmed, 3 files, -3/+6)
indirect_branch_prediction_barrier() only performs the MSR write if X86_FEATURE_USE_IBPB is set, using alternative_msr_write(). In preparation for removing X86_FEATURE_USE_IBPB, move the feature check into the callers so that they can be addressed one-by-one, and use X86_FEATURE_IBPB instead to guard the MSR write. Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Josh Poimboeuf <jpoimboe@kernel.org> Acked-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20250227012712.3193063-2-yosry.ahmed@linux.dev
2025-02-26  KVM: Drop kvm_arch_sync_events() now that all implementations are nops  (Sean Christopherson, 1 file, -5/+0)
Remove kvm_arch_sync_events() now that x86 no longer uses it (no other arch has ever used it). No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Acked-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Bibo Mao <maobibo@loongson.cn> Message-ID: <20250224235542.2562848-8-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-02-26  KVM: x86: Fold guts of kvm_arch_sync_events() into kvm_arch_pre_destroy_vm()  (Sean Christopherson, 1 file, -3/+12)
Fold the guts of kvm_arch_sync_events() into kvm_arch_pre_destroy_vm(), as the kvmclock and PIT background workers only need to be stopped before destroying vCPUs (to avoid accessing vCPUs as they are being freed); it's a-ok for them to be running while the VM is visible on the global vm_list. Note, the PIT also needs to be stopped before IRQ routing is freed (because KVM's IRQ routing is garbage and assumes there is always non-NULL routing). Opportunistically add comments to explain why KVM stops/frees certain assets early. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-ID: <20250224235542.2562848-7-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-02-26  KVM: x86: Unload MMUs during vCPU destruction, not before  (Sean Christopherson, 1 file, -12/+3)
When destroying a VM, unload a vCPU's MMUs as part of normal vCPU freeing, instead of as a separate preparatory action. Unloading MMUs ahead of time is a holdover from commit 7b53aa565084 ("KVM: Fix vcpu freeing for guest smp"), which "fixed" a rather egregious flaw where KVM would attempt to free *all* MMU pages when destroying a vCPU. At the time, KVM would spin on all MMU pages in a VM when freeing a single vCPU, and so would hang due to the way KVM pins and zaps root pages (roots are invalidated but not freed if they are pinned by a vCPU).

  static void free_mmu_pages(struct kvm_vcpu *vcpu)
  {
          struct kvm_mmu_page *page;

          while (!list_empty(&vcpu->kvm->active_mmu_pages)) {
                  page = container_of(vcpu->kvm->active_mmu_pages.next,
                                      struct kvm_mmu_page, link);
                  kvm_mmu_zap_page(vcpu->kvm, page);
          }
          free_page((unsigned long)vcpu->mmu.pae_root);
  }

Now that KVM doesn't try to free all MMU pages when destroying a single vCPU, there's no need to unpin roots prior to destroying a vCPU. Note! While KVM mostly destroys all MMUs before calling kvm_arch_destroy_vm() (see commit f00be0cae4e6 ("KVM: MMU: do not free active mmu pages in free_mmu_pages()")), unpinning MMU roots during vCPU destruction will unfortunately trigger remote TLB flushes, i.e. will try to send requests to all vCPUs. Happily, thanks to commit 27592ae8dbe4 ("KVM: Move wiping of the kvm->vcpus array to common code"), that's a non-issue as freed vCPUs are naturally skipped by xa_for_each_range(), i.e. by kvm_for_each_vcpu(). Prior to that commit, KVM x86 rather stupidly freed vCPUs one-by-one, and _then_ nullified them, one-by-one. I.e. triggering a VM-wide request would hit a use-after-free.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-ID: <20250224235542.2562848-6-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-02-26  KVM: x86: Don't load/put vCPU when unloading its MMU during teardown  (Sean Christopherson, 1 file, -8/+1)
Don't load (and then put) a vCPU when unloading its MMU during VM destruction, as nothing in kvm_mmu_unload() accesses vCPU state beyond the root page/address of each MMU, i.e. can't possibly need to run with the vCPU loaded. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-ID: <20250224235542.2562848-5-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-02-26  x86/bugs: KVM: Add support for SRSO_MSR_FIX  (Borislav Petkov, 1 file, -0/+6)
Add support for CPUID Fn8000_0021_EAX[31] (SRSO_MSR_FIX). If this bit is 1, it indicates that software may use MSR BP_CFG[BpSpecReduce] to mitigate SRSO. Enable BpSpecReduce to mitigate SRSO across guest/host boundaries. Switch back to enabling the bit when virtualization is enabled and to clearing the bit when virtualization is disabled, because using an MSR slot would clear the bit when the guest is exited and any training the guest has done would potentially influence the host kernel when execution enters the kernel and hasn't VMRUN the guest yet. More detail on the public thread in Link below. Co-developed-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20241202120416.6054-1-bp@kernel.org
2025-02-26  KVM: nVMX: Process events on nested VM-Exit if injectable IRQ or NMI is pending  (Sean Christopherson, 1 file, -0/+11)
Process pending events on nested VM-Exit if the vCPU has an injectable IRQ or NMI, as the event may have become pending while L2 was active, i.e. may not be tracked in the context of vmcs01. E.g. if L1 has passed its APIC through to L2 and an IRQ arrives while L2 is active, then KVM needs to request an IRQ window prior to running L1, otherwise delivery of the IRQ will be delayed until KVM happens to process events for some other reason. The missed failure is detected by vmx_apic_passthrough_tpr_threshold_test in KVM-Unit-Tests, but has effectively been masked due to a flaw in KVM's PIC emulation that causes KVM to make spurious KVM_REQ_EVENT requests (and apparently no one ever ran the test with split IRQ chips). Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <seanjc@google.com> Message-ID: <20250224235542.2562848-3-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-02-26  KVM: x86: Free vCPUs before freeing VM state  (Sean Christopherson, 1 file, -1/+1)
Free vCPUs before freeing any VM state, as both SVM and VMX may access VM state when "freeing" a vCPU that is currently "in" L2, i.e. that needs to be kicked out of nested guest mode. Commit 6fcee03df6a1 ("KVM: x86: avoid loading a vCPU after .vm_destroy was called") partially fixed the issue, but for unknown reasons only moved the MMU unloading before VM destruction. Complete the change, and free all vCPU state prior to destroying VM state, as nVMX accesses even more state than nSVM. In addition to the AVIC, KVM can hit a use-after-free on MSR filters:

  kvm_msr_allowed+0x4c/0xd0
  __kvm_set_msr+0x12d/0x1e0
  kvm_set_msr+0x19/0x40
  load_vmcs12_host_state+0x2d8/0x6e0 [kvm_intel]
  nested_vmx_vmexit+0x715/0xbd0 [kvm_intel]
  nested_vmx_free_vcpu+0x33/0x50 [kvm_intel]
  vmx_free_vcpu+0x54/0xc0 [kvm_intel]
  kvm_arch_vcpu_destroy+0x28/0xf0
  kvm_vcpu_destroy+0x12/0x50
  kvm_arch_destroy_vm+0x12c/0x1c0
  kvm_put_kvm+0x263/0x3c0
  kvm_vm_release+0x21/0x30

and an upcoming fix to process injectable interrupts on nested VM-Exit will access the PIC:

  BUG: kernel NULL pointer dereference, address: 0000000000000090
  #PF: supervisor read access in kernel mode
  #PF: error_code(0x0000) - not-present page
  CPU: 23 UID: 1000 PID: 2658 Comm: kvm-nx-lpage-re
  RIP: 0010:kvm_cpu_has_extint+0x2f/0x60 [kvm]
  Call Trace:
   <TASK>
   kvm_cpu_has_injectable_intr+0xe/0x60 [kvm]
   nested_vmx_vmexit+0x2d7/0xdf0 [kvm_intel]
   nested_vmx_free_vcpu+0x40/0x50 [kvm_intel]
   vmx_vcpu_free+0x2d/0x80 [kvm_intel]
   kvm_arch_vcpu_destroy+0x2d/0x130 [kvm]
   kvm_destroy_vcpus+0x8a/0x100 [kvm]
   kvm_arch_destroy_vm+0xa7/0x1d0 [kvm]
   kvm_destroy_vm+0x172/0x300 [kvm]
   kvm_vcpu_release+0x31/0x50 [kvm]

Inarguably, both nSVM and nVMX need to be fixed, but punt on those cleanups for the moment. Conceptually, vCPUs should be freed before VM state. Assets like the I/O APIC and PIC _must_ be allocated before vCPUs are created, so it stands to reason that they must be freed _after_ vCPUs are destroyed.

Reported-by: Aaron Lewis <aaronlewis@google.com>
Closes: https://lore.kernel.org/all/20240703175618.2304869-2-aaronlewis@google.com
Cc: Jim Mattson <jmattson@google.com>
Cc: Yan Zhao <yan.y.zhao@intel.com>
Cc: Rick P Edgecombe <rick.p.edgecombe@intel.com>
Cc: Kai Huang <kai.huang@intel.com>
Cc: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-ID: <20250224235542.2562848-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-02-26  KVM: SVM: Add Idle HLT intercept support  (Manali Shukla, 1 file, -3/+8)
Add support for "Idle HLT" interception on AMD CPUs, and enable Idle HLT interception instead of "normal" HLT interception for all VMs for which HLT-exiting is enabled. Idle HLT provides a mild performance boost for all VM types, by avoiding a VM-Exit in the scenario where KVM would immediately "wake" and resume the vCPU. Idle HLT makes HLT-exiting conditional on the vCPU not having a valid, unmasked interrupt. Specifically, a VM-Exit occurs on execution of HLT if and only if there are no pending V_IRQ or V_NMI events. Note, Idle HLT is a replacement for full HLT interception, i.e. enabling HLT interception would result in all HLT instructions causing unconditional VM-Exits. Per the APM:

  When both HLT and Idle HLT intercepts are active at the same time, the
  HLT intercept takes priority. This intercept occurs only if a virtual
  interrupt is not pending (V_INTR or V_NMI).

For KVM's use of V_IRQ (also called V_INTR in the APM) to detect interrupt windows, the net effect of enabling Idle HLT is that, if a virtual interrupt is pending and unmasked at the time of HLT, the vCPU will take a V_IRQ intercept instead of a HLT intercept. When AVIC is enabled, Idle HLT works as intended: the vCPU continues unimpeded and services the pending virtual interrupt. Note, the APM's description of V_IRQ interaction with AVIC is quite confusing, and requires piecing together implied behavior. Per the APM, when AVIC is enabled, V_IRQ *from the VMCB* is ignored:

  When AVIC mode is enabled for a virtual processor, the V_IRQ, V_INTR_PRIO,
  V_INTR_VECTOR, and V_IGN_TPR fields in the VMCB are ignored.

Which seems to contradict the behavior of Idle HLT:

  This intercept occurs only if a virtual interrupt is not pending (V_INTR
  or V_NMI).

What's not explicitly stated is that hardware's internal copy of V_IRQ (and related fields) *are* still active, i.e. are presumably used to cache information from the virtual APIC. Handle Idle HLT exits as if they were normal HLT exits, e.g. don't try to optimize the handling under the assumption that there isn't a pending IRQ. Irrespective of AVIC, Idle HLT is inherently racy with respect to the vIRR, as KVM can set vIRR bits asynchronously. No changes are required to support KVM's use of Idle HLT while running L2. In fact, supporting Idle HLT is actually a bug fix to some extent. If L1 wants to intercept HLT, recalc_intercepts() will enable HLT interception in vmcb02 and forward the intercept to L1 as normal. But if L1 does not want to intercept HLT, then KVM will run L2 with Idle HLT enabled and HLT interception disabled. If a V_IRQ or V_NMI for L2 becomes pending and L2 executes HLT, then use of Idle HLT will do the right thing, i.e. not #VMEXIT and instead deliver the virtual event. KVM currently doesn't handle this scenario correctly, e.g. doesn't check V_IRQ or V_NMI in vmcs02 as part of kvm_vcpu_has_events(). Do not expose Idle HLT to L1 at this time, as supporting nested Idle HLT is more complex than just enumerating the feature, e.g. requires KVM to handle the aforementioned scenarios of V_IRQ and V_NMI at the time of exit.

Signed-off-by: Manali Shukla <Manali.Shukla@amd.com>
Reviewed-by: Nikunj A Dadhania <nikunj@amd.com>
Link: https://bugzilla.kernel.org/attachment.cgi?id=306250
Link: https://lore.kernel.org/r/20250128124812.7324-3-manali.shukla@amd.com
[sean: rewrite changelog, drop nested "support"]
Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-26  KVM: SVM: Provide helpers to set the error code  (Melody Wang, 3 files, -23/+49)
Provide helpers to set the error code when converting VMGEXIT SW_EXITINFO1 and SW_EXITINFO2 codes from plain numbers to proper defines. Add comments for better code readability. No functionality changed. Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Melody Wang <huibo.wang@amd.com> Link: https://lore.kernel.org/r/20250225213937.2471419-3-huibo.wang@amd.com [sean: tweak comments, fix formatting goofs] Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-26  KVM: SVM: Convert plain error code numbers to defines  (Melody Wang, 2 files, -7/+7)
Convert VMGEXIT SW_EXITINFO1 codes from plain numbers to proper defines. Opportunistically update the comment for the malformed input "sub-error" codes to state that they are defined by the GHCB, and to capture the relationship to the malformed input response. No functional change intended. Signed-off-by: Melody Wang <huibo.wang@amd.com> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Pavan Kumar Paluri <papaluri@amd.com> Link: https://lore.kernel.org/r/20250225213937.2471419-2-huibo.wang@amd.com [sean: update comments] Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-25  KVM: VMX: Pass XFD_ERR as pseudo-payload when injecting #NM  (Sean Christopherson, 1 file, -4/+10)
Pass XFD_ERR via KVM's exception payload mechanism when injecting an #NM after interception so that XFD_ERR can be propagated to FRED's event_data field without needing a dedicated field (which would need to be migrated). For non-FRED vCPUs, this is a glorified NOP as kvm_deliver_exception_payload() will simply do nothing (which is desirable and correct). Signed-off-by: Xin Li (Intel) <xin@zytor.com> Tested-by: Shan Kang <shan.kang@intel.com> Link: https://lore.kernel.org/r/20241001050110.3643764-15-xin@zytor.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-25  KVM: VMX: Don't modify guest XFD_ERR if CR0.TS=1  (Sean Christopherson, 1 file, -8/+8)
Don't update the guest's XFD_ERR MSR if CR0.TS is set; per the SDM, XFD_ERR is not modified if CR0.TS=1. Although it's not explicitly stated in the SDM, conceptually it makes sense the CR0.TS check would be done prior to the XFD_ERR check, e.g. CR0.TS=1 blocks all SIMD state, whereas XFD blocks only XTILE state.

  Device-not-available exceptions that are not due to XFD - those resulting
  from setting CR0.TS to 1 - do not modify the IA32_XFD_ERR MSR.

Opportunistically update the comment to call out that XFD_ERR is updated before the VM-Exit check occurs. Nothing in the SDM explicitly calls out this behavior, but logically it must be the behavior, otherwise reading XFD_ERR in handle_nm_fault_irqoff() would return stale data, i.e. the to-be-delivered XFD_ERR value would need to be saved in EXIT_QUALIFICATION, a la DR6 for #DB and CR2 for #PF, so that software could capture the guest value.

Fixes: ec5be88ab29f ("kvm: x86: Intercept #NM for saving IA32_XFD_ERR")
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Link: https://lore.kernel.org/r/20241001050110.3643764-3-xin@zytor.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
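A sketch of the guest-visible rule on the intercepted-#NM path, with the caveat that this is illustrative rather than the literal patch: only propagate XFD_ERR into the guest's view when the #NM can be an XFD fault, i.e. when CR0.TS is clear.

```c
static void handle_nm_fault_sketch(struct kvm_vcpu *vcpu)
{
	/* Per the SDM: #NMs taken with CR0.TS=1 do not modify IA32_XFD_ERR. */
	if (kvm_is_cr0_bit_set(vcpu, X86_CR0_TS))
		return;

	/* Capture the guest's XFD_ERR before the host can clobber it. */
	rdmsrl(MSR_IA32_XFD_ERR, vcpu->arch.guest_fpu.xfd_err);
}
```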
2025-02-25  KVM: x86: Use a dedicated flow for queueing re-injected exceptions  (Sean Christopherson, 3 files, -59/+61)
Open code the filling of vcpu->arch.exception in kvm_requeue_exception() instead of bouncing through kvm_multiple_exception(), as re-injection doesn't actually share that much code with "normal" injection, e.g. the VM-Exit interception check, payload delivery, and nested exception code is all bypassed as those flows only apply during initial injection. When FRED comes along, the special casing will only get worse, as FRED explicitly tracks nested exceptions and essentially delivers the payload on the stack frame, i.e. re-injection will need more inputs, and normal injection will have yet more code that needs to be bypassed when KVM is re-injecting an exception. No functional change intended. Signed-off-by: Xin Li (Intel) <xin@zytor.com> Tested-by: Shan Kang <shan.kang@intel.com> Link: https://lore.kernel.org/r/20241001050110.3643764-2-xin@zytor.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-25  KVM: x86: Rename and invert async #PF's send_user_only flag to send_always  (Sean Christopherson, 1 file, -2/+2)
Rename send_user_only to avoid "user", because KVM's ABI is to not inject page faults into CPL0, whereas "user" in x86 is specifically CPL3. Invert the polarity to keep the naming simple and unambiguous. E.g. while KVM often refers to CPL0 as "kernel", that terminology isn't ubiquitous, and "send_kernel" could be misconstrued as "send only to kernel". Link: https://lore.kernel.org/r/20250215010609.1199982-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
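The polarity flip boils down to a simple predicate. A standalone sketch (names are illustrative) of the delivery decision after the rename:

```c
#include <stdbool.h>
#include <stdio.h>

/*
 * Async #PFs are delivered either unconditionally (guest opted in via
 * SEND_ALWAYS), or only when the vCPU is not running at CPL0.
 */
static bool can_deliver_async_pf(bool send_always, int cpl)
{
	if (send_always)
		return true;

	/* KVM's ABI: never inject async #PFs into CPL0 by default. */
	return cpl != 0;
}

int main(void)
{
	printf("%d %d\n",
	       can_deliver_async_pf(false, 0),	/* 0: CPL0, not opted in */
	       can_deliver_async_pf(true, 0));	/* 1: opted in */
	return 0;
}
```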
2025-02-25  KVM: x86: Don't inject PV async #PF if SEND_ALWAYS=0 and guest state is protected  (Sean Christopherson, 1 file, -1/+1)
Don't inject PV async #PFs into guests with protected register state, i.e. SEV-ES and SEV-SNP guests, unless the guest has opted in to receiving #PFs at CPL0. For protected guests, the actual CPL of the guest is unknown. Note, no sane CoCo guest should enable PV async #PF, but the current state of Linux-as-a-CoCo-guest isn't entirely sane. Fixes: add5e2f04541 ("KVM: SVM: Add support for the SEV-ES VMSA") Link: https://lore.kernel.org/r/20250215010609.1199982-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-25  KVM: x86: Update Xen TSC leaves during CPUID emulation  (Fred Griffoul, 5 files, -27/+29)
The Xen emulation in KVM modifies certain CPUID leaves to expose TSC information to the guest. Previously, these CPUID leaves were updated whenever guest time changed, but this conflicts with KVM_SET_CPUID/KVM_SET_CPUID2 ioctls which reject changes to CPUID entries on running vCPUs. Fix this by updating the TSC information directly in the CPUID emulation handler instead of modifying the vCPU's CPUID entries. Signed-off-by: Fred Griffoul <fgriffo@amazon.co.uk> Reviewed-by: Paul Durrant <paul@xen.org> Reviewed-by: David Woodhouse <dwmw@amazon.co.uk> Link: https://lore.kernel.org/r/20250124150539.69975-1-fgriffo@amazon.co.uk Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-24  KVM: nVMX: Synthesize nested VM-Exit for supported emulation intercepts  (Sean Christopherson, 1 file, -14/+56)
When emulating an instruction on behalf of L2 that L1 wants to intercept, generate a nested VM-Exit instead of injecting a #UD into L2. Now that (most of) the necessary information is available, synthesizing a VM-Exit isn't terribly difficult. Punt on decoding the ModR/M for descriptor table exits for now. There is no evidence that any hypervisor intercepts descriptor table accesses *and* uses the EXIT_QUALIFICATION to expedite emulation, i.e. it's not worth delaying basic support for. To avoid doing more harm than good, e.g. by putting L2 into an infinite loop or effectively corrupting its code stream, inject #UD if the instruction length is nonsensical. Link: https://lore.kernel.org/r/20250201015518.689704-11-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-24  KVM: nVMX: Allow the caller to provide instruction length on nested VM-Exit  (Sean Christopherson, 2 files, -7/+27)
Rework the nested VM-Exit helper to take the instruction length as a parameter, and convert nested_vmx_vmexit() into a "default" wrapper that grabs the length from vmcs02 as appropriate. This will allow KVM to set the correct instruction length when synthesizing a nested VM-Exit when emulating an instruction that L1 wants to intercept. No functional change intended, as the path to prepare_vmcs12()'s reading of vmcs02.VM_EXIT_INSTRUCTION_LEN is gated on the same set of conditions as the VMREAD in the new nested_vmx_vmexit(). Link: https://lore.kernel.org/r/20250201015518.689704-10-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-24  KVM: x86: Add a #define for the architectural max instruction length  (Sean Christopherson, 3 files, -9/+11)
Add a #define to capture x86's architecturally defined max instruction length instead of open coding the literal in a variety of places. No functional change intended. Link: https://lore.kernel.org/r/20250201015518.689704-9-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-24  KVM: x86: Plumb the emulator's starting RIP into nested intercept checks  (Sean Christopherson, 2 files, -0/+2)
When checking for intercept when emulating an instruction on behalf of L2, pass the emulator's view of the RIP of the instruction being emulated to vendor code. Unlike SVM, which communicates the next RIP on VM-Exit, VMX communicates the length of the instruction that generated the VM-Exit, i.e. requires the current and next RIPs. Note, unless userspace modifies RIP during a userspace exit that requires completion, kvm_rip_read() will contain the same information. Pass the emulator's view largely out of paranoia, and because there is no meaningful cost in doing so. Link: https://lore.kernel.org/r/20250201015518.689704-8-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-24  KVM: x86: Plumb the src/dst operand types through to .check_intercept()  (Sean Christopherson, 2 files, -0/+4)
When checking for intercept when emulating an instruction on behalf of L2, forward the source and destination operand types to vendor code so that VMX can synthesize the correct EXIT_QUALIFICATION for port I/O VM-Exits. Link: https://lore.kernel.org/r/20250201015518.689704-7-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-24  KVM: nVMX: Consolidate missing X86EMUL_INTERCEPTED logic in L2 emulation  (Sean Christopherson, 1 file, -11/+7)
Refactor the handling of port I/O interception checks when emulating on behalf of L2 in anticipation of synthesizing a nested VM-Exit to L1 instead of injecting a #UD into L2. No functional change intended. Link: https://lore.kernel.org/r/20250201015518.689704-6-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-24  KVM: nVMX: Emulate HLT in L2 if it's not intercepted  (Sean Christopherson, 1 file, -0/+5)
Extend VMX's nested intercept logic for emulated instructions to handle HLT interception, primarily for testing purposes. Failure to allow emulation of HLT isn't all that interesting, as emulating HLT while L2 is active either requires forced emulation (and no #UD intercept in L1), TLB games in the guest to coerce KVM into emulating the wrong instruction, or a bug elsewhere in KVM. E.g. without commit 47ef3ef843c0 ("KVM: VMX: Handle event vectoring error in check_emulate_instruction()"), KVM can end up trying to emulate HLT if RIP happens to point at a HLT when a vectored event arrives with L2's IDT pointing at emulated MMIO. Note, vmx_check_intercept() is still broken when L1 wants to intercept an instruction, as KVM injects a #UD instead of synthesizing a nested VM-Exit. That issue extends far beyond HLT, punt on it for now. Link: https://lore.kernel.org/r/20250201015518.689704-5-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-24  KVM: nVMX: Allow emulating RDPID on behalf of L2  (Sean Christopherson, 1 file, -6/+7)
Return X86EMUL_CONTINUE instead of X86EMUL_UNHANDLEABLE when emulating RDPID on behalf of L2 and L1 _does_ expose RDPID/RDTSCP to L2. When RDPID emulation was added by commit fb6d4d340e05 ("KVM: x86: emulate RDPID"), KVM incorrectly allowed emulation by default. Commit 07721feee46b ("KVM: nVMX: Don't emulate instructions in guest mode") fixed that flaw, but missed that RDPID emulation was relying on the common return path to allow emulation on behalf of L2. Fixes: 07721feee46b ("KVM: nVMX: Don't emulate instructions in guest mode") Link: https://lore.kernel.org/r/20250201015518.689704-4-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-24  KVM: nSVM: Pass next RIP, not current RIP, for nested VM-Exit on emulation  (Sean Christopherson, 1 file, -1/+1)
Set "next_rip" in the emulation interception info passed to vendor code using the emulator context's "_eip", not "eip". "eip" holds RIP from the start of emulation, i.e. the RIP of the instruction that's being emulated, whereas _eip tracks the context's current position in decoding the code stream, which at the time of the intercept checks is effectively the RIP of the next instruction. Passing the current RIP as next_rip causes SVM to stuff the wrong value into vmcb12->control.next_rip if a nested VM-Exit is generated, i.e. if L1 wants to intercept the instruction, and could result in L1 putting L2 into an infinite loop due to restarting L2 with the same RIP over and over. Fixes: 8a76d7f25f8f ("KVM: x86: Add x86 callback for intercept check") Link: https://lore.kernel.org/r/20250201015518.689704-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-24  KVM: nVMX: Check PAUSE_EXITING, not BUS_LOCK_DETECTION, on PAUSE emulation  (Sean Christopherson, 1 file, -1/+1)
When emulating PAUSE on behalf of L2, check for interception in vmcs12 by looking at primary execution controls, not secondary execution controls. Checking for PAUSE_EXITING in secondary execution controls effectively results in KVM looking for BUS_LOCK_DETECTION, which KVM doesn't expose to L1, i.e. is always off in vmcs12, and ultimately results in KVM failing to "intercept" PAUSE. Because KVM doesn't handle interception during emulation correctly on VMX, i.e. the "fixed" code is still quite broken, and not intercepting PAUSE is relatively benign, for all intents and purposes the bug means that L2 gets to live when it would otherwise get an unexpected #UD. Fixes: 4984563823f0 ("KVM: nVMX: Emulate NOPs in L2, and PAUSE if it's not intercepted") Link: https://lore.kernel.org/r/20250201015518.689704-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-24  KVM: x86/xen: Move kvm_xen_hvm_config field into kvm_xen  (Sean Christopherson, 3 files, -14/+14)
Now that all KVM usage of the Xen HVM config information is buried behind CONFIG_KVM_XEN=y, move the per-VM kvm_xen_hvm_config field out of kvm_arch and into kvm_xen. No functional change intended. Reviewed-by: David Woodhouse <dwmw@amazon.co.uk> Reviewed-by: Paul Durrant <paul@xen.org> Link: https://lore.kernel.org/r/20250215011437.1203084-6-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-24  KVM: x86/xen: Consult kvm_xen_enabled when checking for Xen MSR writes  (Sean Christopherson, 1 file, -0/+3)
Query kvm_xen_enabled when detecting writes to the Xen hypercall page MSR so that the check is optimized away in the likely scenario that Xen isn't enabled for the VM. Deliberately open code the check instead of using kvm_xen_msr_enabled() in order to avoid a double load of xen_hvm_config.msr (which is admittedly rather pointless given the widespread lack of READ_ONCE() usage on the plethora of vCPU-scoped accesses to kvm->arch.xen state). No functional change intended. Reviewed-by: David Woodhouse <dwmw@amazon.co.uk> Reviewed-by: Paul Durrant <paul@xen.org> Link: https://lore.kernel.org/r/20250215011437.1203084-4-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-24  KVM: x86/xen: Add an #ifdef'd helper to detect writes to Xen MSR  (Sean Christopherson, 2 files, -1/+11)
Add a helper to detect writes to the Xen hypercall page MSR, and provide a stub for CONFIG_KVM_XEN=n to optimize out the check for kernels built without Xen support. Reviewed-by: Paul Durrant <paul@xen.org> Reviewed-by: David Woodhouse <dwmw@amazon.co.uk> Link: https://lore.kernel.org/r/20250215011437.1203084-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
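Taken together with the previous entry, the pattern looks roughly like the sketch below. The helper name, the field path, and the use of the kvm_xen_enabled key are assumptions based on the surrounding changelog text, not the actual patch:

```c
#ifdef CONFIG_KVM_XEN
static inline bool kvm_xen_is_hypercall_page_msr(struct kvm *kvm, u32 msr)
{
	/* Likely case: Xen isn't enabled for the VM, so skip the compare. */
	if (!static_branch_unlikely(&kvm_xen_enabled.key))
		return false;

	return msr && msr == kvm->arch.xen.hvm_config.msr;	/* field path assumed */
}
#else
static inline bool kvm_xen_is_hypercall_page_msr(struct kvm *kvm, u32 msr)
{
	/* CONFIG_KVM_XEN=n: constant false, the caller's check compiles away. */
	return false;
}
#endif
```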
2025-02-24  KVM: x86/xen: Restrict hypercall MSR to unofficial synthetic range  (Sean Christopherson, 1 file, -0/+9)
Reject userspace attempts to set the Xen hypercall page MSR to an index outside of the "standard" virtualization range [0x40000000, 0x4fffffff], as KVM is not equipped to handle collisions with real MSRs, e.g. KVM doesn't update MSR interception, conflicts with VMCS/VMCB fields, special case writes in KVM, etc. While the MSR index isn't strictly ABI, i.e. can theoretically float to any value, in practice no known VMM sets the MSR index to anything other than 0x40000000 or 0x40000200. Cc: Joao Martins <joao.m.martins@oracle.com> Reviewed-by: David Woodhouse <dwmw@amazon.co.uk> Reviewed-by: Paul Durrant <paul@xen.org> Link: https://lore.kernel.org/r/20250215011437.1203084-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
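The accepted window is just the synthetic/virtualization MSR range quoted in the changelog. A standalone sketch of the range check that would gate the userspace-supplied index:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* The "standard" virtualization MSR range cited above. */
#define SYNTHETIC_MSR_MIN 0x40000000u
#define SYNTHETIC_MSR_MAX 0x4fffffffu

static bool xen_msr_index_ok(uint32_t msr)
{
	return msr >= SYNTHETIC_MSR_MIN && msr <= SYNTHETIC_MSR_MAX;
}

int main(void)
{
	printf("%d %d %d\n",
	       xen_msr_index_ok(0x40000000),	/* 1: typical Xen index          */
	       xen_msr_index_ok(0x40000200),	/* 1: Hyper-V-offset Xen index   */
	       xen_msr_index_ok(0xc0000080));	/* 0: EFER, a real hardware MSR  */
	return 0;
}
```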
2025-02-18  KVM: x86: Switch to use hrtimer_setup()  (Nam Cao, 5 files, -12/+8)
hrtimer_setup() takes the callback function pointer as argument and initializes the timer completely. Replace hrtimer_init() and the open coded initialization of hrtimer::function with the new setup mechanism. Patch was created by using Coccinelle. Signed-off-by: Nam Cao <namcao@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/all/5051cfe7ed48ef9913bf2583eeca6795cb53d6ae.1738746821.git.namcao@linutronix.de
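A sketch of the conversion pattern; the timer and callback here are illustrative, not a specific KVM timer:

```c
#include <linux/hrtimer.h>

/*
 * Old two-step form:
 *	hrtimer_init(&timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
 *	timer.function = callback;
 *
 * New form: the callback is handed to hrtimer_setup() at init time.
 */
static void init_timer_sketch(struct hrtimer *timer,
			      enum hrtimer_restart (*callback)(struct hrtimer *))
{
	hrtimer_setup(timer, callback, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
}
```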
2025-02-15  Merge tag 'kvm-x86-fixes-6.14-rcN' of https://github.com/kvm-x86/linux into HEAD  (Paolo Bonzini, 8 files, -18/+28)
KVM fixes for 6.14 part 1

 - Reject Hyper-V SEND_IPI hypercalls if the local APIC isn't being emulated by KVM to fix a NULL pointer dereference.

 - Enter guest mode (L2) from KVM's perspective before initializing the vCPU's nested NPT MMU so that the MMU is properly tagged for L2, not L1.

 - Load the guest's DR6 outside of the innermost .vcpu_run() loop, as the guest's value may be stale if a VM-Exit is handled in the fastpath.
2025-02-15  KVM: SVM: Ensure PSP module is initialized if KVM module is built-in  (Sean Christopherson, 1 file, -0/+10)
The kernel's initcall infrastructure lacks the ability to express dependencies between initcalls, whereas the modules infrastructure automatically handles dependencies via symbol loading. Ensure the PSP SEV driver is initialized before proceeding in sev_hardware_setup() if KVM is built-in as the dependency isn't handled by the initcall infrastructure. Signed-off-by: Sean Christopherson <seanjc@google.com> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com> Message-ID: <f78ddb64087df27e7bcb1ae0ab53f55aa0804fab.1739226950.git.ashish.kalra@amd.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-02-14  KVM: x86/mmu: Walk rmaps (shadow MMU) without holding mmu_lock when aging gfns  (Sean Christopherson, 1 file, -33/+39)
Convert the shadow MMU to use per-rmap locking instead of the per-VM mmu_lock to protect rmaps when aging SPTEs. When A/D bits are enabled, it is safe to simply clear the Accessed bits, i.e. KVM just needs to ensure the parent page table isn't freed. The less obvious case is marking SPTEs for access tracking in the non-A/D case (for EPT only). Because aging a gfn means making the SPTE not-present, KVM needs to play nice with the case where the CPU has TLB entries for a SPTE that is not-present in memory. For example, when doing dirty tracking, if KVM encounters a non-present shadow accessed SPTE, KVM must know to do a TLB invalidation. Fortunately, KVM already provides (and relies upon) the necessary functionality. E.g. KVM doesn't flush TLBs when aging pages (even in the clear_flush_young() case), and when harvesting dirty bitmaps, KVM flushes based on the dirty bitmaps, not on SPTEs. Co-developed-by: James Houghton <jthoughton@google.com> Signed-off-by: James Houghton <jthoughton@google.com> Link: https://lore.kernel.org/r/20250204004038.1680123-12-jthoughton@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-14  KVM: x86/mmu: Add support for lockless walks of rmap SPTEs  (Sean Christopherson, 1 file, -41/+92)
Add a lockless version of for_each_rmap_spte(), which is pretty much the same as the normal version, except that it doesn't BUG() the host if a non-present SPTE is encountered. When mmu_lock is held, it should be impossible for a different task to zap a SPTE, _and_ zapped SPTEs must be removed from their rmap chain prior to dropping mmu_lock. Thus, the normal walker BUG()s if a non-present SPTE is encountered as something is wildly broken. When walking rmaps without holding mmu_lock, the SPTEs pointed at by the rmap chain can be zapped/dropped, and so a lockless walk can observe a non-present SPTE if it runs concurrently with a different operation that is zapping SPTEs. Signed-off-by: James Houghton <jthoughton@google.com> Link: https://lore.kernel.org/r/20250204004038.1680123-11-jthoughton@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-02-14  KVM: x86/mmu: Add infrastructure to allow walking rmaps outside of mmu_lock  (Sean Christopherson, 1 file, -11/+99)
Steal another bit from rmap entries (which are word aligned pointers, i.e. have 2 free bits on 32-bit KVM, and 3 free bits on 64-bit KVM), and use the bit to implement a *very* rudimentary per-rmap spinlock. The only anticipated usage of the lock outside of mmu_lock is for aging gfns, and collisions between aging and other MMU rmap operations are quite rare, e.g. unless userspace is being silly and aging a tiny range over and over in a tight loop, time between contention when aging an actively running VM is O(seconds). In short, a more sophisticated locking scheme shouldn't be necessary. Note, the lock only protects the rmap structure itself; SPTEs that are pointed at by a locked rmap can still be modified and zapped by another task (KVM drops/zaps SPTEs before deleting the rmap entries). Co-developed-by: James Houghton <jthoughton@google.com> Signed-off-by: James Houghton <jthoughton@google.com> Link: https://lore.kernel.org/r/20250204004038.1680123-10-jthoughton@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
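A standalone sketch of the "steal a low bit as a lock" idea using C11 atomics; the bit position and names are illustrative and this is not KVM's implementation (the real code also needs cpu_relax()/backoff while spinning and kernel atomics):

```c
#include <stdatomic.h>
#include <stdio.h>

/* rmap values are word-aligned pointers, so a low bit is free for a lock. */
#define RMAP_LOCKED 0x2ul

static unsigned long rmap_lock(_Atomic unsigned long *rmap_val)
{
	unsigned long old;

	/* Spin until the LOCKED bit is observed clear and we manage to set it. */
	do {
		old = atomic_load_explicit(rmap_val, memory_order_relaxed) & ~RMAP_LOCKED;
	} while (!atomic_compare_exchange_weak_explicit(rmap_val, &old,
							old | RMAP_LOCKED,
							memory_order_acquire,
							memory_order_relaxed));
	return old;	/* the unlocked value, for the caller to walk */
}

static void rmap_unlock(_Atomic unsigned long *rmap_val, unsigned long new_val)
{
	/* Publish the (possibly updated) value and clear the LOCKED bit. */
	atomic_store_explicit(rmap_val, new_val & ~RMAP_LOCKED, memory_order_release);
}

int main(void)
{
	_Atomic unsigned long rmap = 0x1000;	/* a word-aligned "pointer" value */
	unsigned long val = rmap_lock(&rmap);

	/* ... walk or modify the chain described by 'val' ... */
	rmap_unlock(&rmap, val);
	printf("0x%lx\n", (unsigned long)rmap);
	return 0;
}
```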
2025-02-14  KVM: x86/mmu: Refactor low level rmap helpers to prep for walking w/o mmu_lock  (Sean Christopherson, 1 file, -33/+50)
Refactor the pte_list and rmap code to always read and write rmap_head->val exactly once, e.g. by collecting changes in a local variable and then propagating those changes back to rmap_head->val as appropriate. This will allow implementing a per-rmap rwlock (of sorts) by adding a LOCKED bit into the rmap value alongside the MANY bit. Signed-off-by: James Houghton <jthoughton@google.com> Acked-by: Yu Zhao <yuzhao@google.com> Reviewed-by: James Houghton <jthoughton@google.com> Link: https://lore.kernel.org/r/20250204004038.1680123-9-jthoughton@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>