path: root/arch/x86
Age  Commit message  Author  Files  Lines
2024-12-19  KVM: x86: Drop now-redundant MAXPHYADDR and GPA rsvd bits from vCPU creation  Sean Christopherson  1  -3/+0
Drop the manual initialization of maxphyaddr and reserved_gpa_bits during vCPU creation now that kvm_arch_vcpu_create() unconditionally invokes kvm_vcpu_after_set_cpuid(), which handles all such CPUID caching. None of the helpers between the existing code in kvm_arch_vcpu_create() and the call to kvm_vcpu_after_set_cpuid() consume maxphyaddr or reserved_gpa_bits (though auditing vmx_vcpu_create() and svm_vcpu_create() isn't exactly easy). Link: https://lore.kernel.org/r/20241128013424.4096668-13-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-19  KVM: x86/pmu: Drop now-redundant refresh() during init()  Sean Christopherson  1  -1/+0
Drop the manual kvm_pmu_refresh() from kvm_pmu_init() now that kvm_arch_vcpu_create() performs the refresh via kvm_vcpu_after_set_cpuid(). Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com> Link: https://lore.kernel.org/r/20241128013424.4096668-12-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-19  KVM: x86: Move __kvm_is_valid_cr4() definition to x86.h  Sean Christopherson  2  -10/+5
Let vendor code inline __kvm_is_valid_cr4() now that x86.c's cr4_reserved_bits no longer exists, as keeping cr4_reserved_bits local to x86.c was the only reason for "hiding" the definition of __kvm_is_valid_cr4(). No functional change intended. Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com> Link: https://lore.kernel.org/r/20241128013424.4096668-11-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-19  KVM: x86: Account for KVM-reserved CR4 bits when passing through CR4 on VMX  Sean Christopherson  2  -11/+5
Drop x86.c's local pre-computed cr4_reserved bits and instead fold KVM's reserved bits into the guest's reserved bits. This fixes a bug where VMX's set_cr4_guest_host_mask() fails to account for KVM-reserved bits when deciding which bits can be passed through to the guest. In most cases, letting the guest directly write reserved CR4 bits is ok, i.e. attempting to set the bit(s) will still #GP, but not if a feature is available in hardware but explicitly disabled by the host, e.g. if FSGSBASE support is disabled via "nofsgsbase". Note, the extra overhead of computing host reserved bits every time userspace sets guest CPUID is negligible. The feature bits that are queried are packed nicely into a handful of words, and so checking and setting each reserved bit costs in the neighborhood of ~5 cycles, i.e. the total cost will be in the noise even if the number of checked CR4 bits doubles over the next few years. In other words, x86 will run out of CR4 bits long before the overhead becomes problematic. Note #2, __cr4_reserved_bits() starts from CR4_RESERVED_BITS, which is why the existing __kvm_cpu_cap_has() processing doesn't explicitly OR in CR4_RESERVED_BITS (and why the new code doesn't do so either). Fixes: 2ed41aa631fc ("KVM: VMX: Intercept guest reserved CR4 bits to inject #GP fault") Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com> Reviewed-by: Chao Gao <chao.gao@intel.com> Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com> Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com> Link: https://lore.kernel.org/r/20241128013424.4096668-6-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
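The reserved-bit folding described above boils down to starting from an architectural mask and OR-ing in a bit for every feature the host or guest doesn't have. A minimal sketch of that pattern follows; the helper names and the base mask are illustrative, not KVM's actual code:

    #include <stdint.h>
    #include <stdbool.h>

    #define X86_CR4_FSGSBASE    (1ULL << 16)
    #define X86_CR4_PCIDE       (1ULL << 17)
    #define CR4_ALWAYS_RESERVED 0xffffffff00000000ULL  /* illustrative base mask */

    /* Bits KVM itself won't expose, e.g. FSGSBASE disabled via "nofsgsbase". */
    uint64_t kvm_reserved_cr4(bool host_has_fsgsbase)
    {
        uint64_t reserved = CR4_ALWAYS_RESERVED;

        if (!host_has_fsgsbase)
            reserved |= X86_CR4_FSGSBASE;
        return reserved;
    }

    /* Guest reserved bits = KVM-reserved bits + bits for CPUID-disabled features;
     * only the complement of this mask may be passed through to the guest. */
    uint64_t guest_reserved_cr4(bool host_has_fsgsbase, bool guest_has_pcid)
    {
        uint64_t reserved = kvm_reserved_cr4(host_has_fsgsbase);

        if (!guest_has_pcid)
            reserved |= X86_CR4_PCIDE;
        return reserved;
    }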
2024-12-19  KVM: x86: Explicitly do runtime CPUID updates "after" initial setup  Sean Christopherson  1  -2/+16
Explicitly perform runtime CPUID adjustments as part of the "after set CPUID" flow to guard against bugs where KVM consumes stale vCPU/CPUID state during kvm_update_cpuid_runtime(). E.g. see commit 4736d85f0d18 ("KVM: x86: Use actual kvm_cpuid.base for clearing KVM_FEATURE_PV_UNHALT"). Whacking each mole individually is not sustainable or robust, e.g. while the aforementioned commit fixed KVM's PV features, the same issue lurks for Xen and Hyper-V features; Xen and Hyper-V simply don't have any runtime features (though spoiler alert, neither should KVM). Updating runtime features in the "full" path will also simplify adding a snapshot of the guest's capabilities, i.e. of caching the intersection of guest CPUID and kvm_cpu_caps (modulo a few edge cases). Link: https://lore.kernel.org/r/20241128013424.4096668-5-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-19  KVM: x86: Do all post-set CPUID processing during vCPU creation  Sean Christopherson  3  -1/+3
During vCPU creation, process KVM's default, empty CPUID as if userspace set an empty CPUID to ensure consistent and correct behavior with respect to guest CPUID. E.g. if userspace never sets guest CPUID, KVM will never configure cr4_guest_rsvd_bits, and thus create divergent, incorrect, guest-visible behavior due to letting the guest set any KVM-supported CR4 bits despite the features not being allowed per guest CPUID. Note! This changes KVM's ABI, as lack of full CPUID processing allowed userspace to stuff garbage vCPU state, e.g. userspace could set CR4 to a guest-unsupported value via KVM_SET_SREGS. But it's extremely unlikely that this is a breaking change, as KVM already has many flows that require userspace to set guest CPUID before loading vCPU state. E.g. multiple MSR flows consult guest CPUID on host writes, and KVM_SET_SREGS itself already relies on guest CPUID being up-to-date, as KVM's validity check on CR3 consumes CPUID.0x7.1 (for LAM) and CPUID.0x80000008 (for MAXPHYADDR). Furthermore, the plan is to commit to enforcing guest CPUID for userspace writes to MSRs, at which point bypassing sregs CPUID checks is even more nonsensical. Link: https://lore.kernel.org/r/20241128013424.4096668-4-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-19  KVM: x86: Limit use of F() and SF() to kvm_cpu_cap_{mask,init_kvm_defined}()  Sean Christopherson  1  -14/+17
Define and undefine the F() and SF() macros precisely around kvm_set_cpu_caps() to make it all but impossible to use the macros outside of kvm_cpu_cap_{mask,init_kvm_defined}(). Currently, F() is a simple passthrough, but SF() is actively dangerous as it checks that the scattered feature is supported by the host kernel. And usage outside of the aforementioned helpers will run afoul of future changes to harden KVM's CPUID management. Opportunistically switch to feature_bit() when stuffing LA57 based on raw hardware support. No functional change intended. Link: https://lore.kernel.org/r/20241128013424.4096668-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-19  KVM: x86: Use feature_bit() to clear CONSTANT_TSC when emulating CPUID  Sean Christopherson  1  -1/+1
When clearing CONSTANT_TSC during CPUID emulation due to a Hyper-V quirk, use feature_bit() instead of SF() to ensure the bit is actually cleared. SF() evaluates to zero if the _host_ doesn't support the feature. I.e. KVM could keep the bit set if userspace advertised CONSTANT_TSC despite it not being supported in hardware. Note, translating from a scattered feature to the hardware version is done by __feature_translate(), not SF(). The sole purpose of SF() is to check kernel support for the scattered feature, *before* translation. Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Link: https://lore.kernel.org/r/20241128013424.4096668-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-18  Merge tag 'hyperv-fixes-signed-20241217' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux  Linus Torvalds  1  -0/+58
Pull hyperv fixes from Wei Liu:
- Various fixes to Hyper-V tools in the kernel tree (Dexuan Cui, Olaf Hering, Vitaly Kuznetsov)
- Fix a bug in the Hyper-V TSC page based sched_clock() (Naman Jain)
- Two bug fixes in the Hyper-V utility functions (Michael Kelley)
- Convert open-coded timeouts to secs_to_jiffies() in Hyper-V drivers (Easwar Hariharan)
* tag 'hyperv-fixes-signed-20241217' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux:
  tools/hv: reduce resource usage in hv_kvp_daemon
  tools/hv: add a .gitignore file
  tools/hv: reduce resouce usage in hv_get_dns_info helper
  hv/hv_kvp_daemon: Pass NIC name to hv_get_dns_info as well
  Drivers: hv: util: Avoid accessing a ringbuffer not initialized yet
  Drivers: hv: util: Don't force error code to ENODEV in util_probe()
  tools/hv: terminate fcopy daemon if read from uio fails
  drivers: hv: Convert open-coded timeouts to secs_to_jiffies()
  tools: hv: change permissions of NetworkManager configuration file
  x86/hyperv: Fix hv tsc page based sched_clock for hibernation
  tools: hv: Fix a complier warning in the fcopy uio daemon
2024-12-18  x86/cpu: Make all all CPUID leaf names consistent  Dave Hansen  10  -37/+37
The leaf names are not consistent. Give them all a CPUID_LEAF_ prefix for consistency and vertical alignment. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Acked-by: Dave Jiang <dave.jiang@intel.com> # for ioatdma bits Link: https://lore.kernel.org/all/20241213205040.7B0C3241%40davehans-spike.ostc.intel.com
2024-12-18  x86/fpu: Remove unnecessary CPUID level check  Dave Hansen  1  -5/+0
The CPUID level dependency table will entirely zap X86_FEATURE_XSAVE if the CPUID level is too low. This code is unreachable. Kill it. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Chang S. Bae <chang.seok.bae@intel.com> Link: https://lore.kernel.org/all/20241213205038.6E71F9A4%40davehans-spike.ostc.intel.com
2024-12-18  x86/fpu: Move CPUID leaf definitions to common code  Dave Hansen  4  -5/+4
Move the XSAVE-related CPUID leaf definitions to common code. Then, use the new definition to remove the last magic number from the CPUID level dependency table. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/all/20241213205037.43C57CDE%40davehans-spike.ostc.intel.com
2024-12-18  x86/tsc: Remove CPUID "frequency" leaf magic numbers.  Dave Hansen  2  -6/+7
All the code that reads the CPUID frequency information leaf hard-codes a magic number. Give it a symbolic name and use it. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/all/20241213205036.4397658F%40davehans-spike.ostc.intel.com
2024-12-18  x86/tsc: Move away from TSC leaf magic numbers  Dave Hansen  1  -6/+5
The TSC code has a bunch of hard-coded references to leaf 0x15. Change them over to the symbolic name. Also zap the 'ART_CPUID_LEAF' definition. It was a duplicate of 'CPUID_TSC_LEAF'. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20241213205034.B79D6224%40davehans-spike.ostc.intel.com
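For context on what this leaf carries, here is a small userspace sketch that reads the TSC/crystal-clock leaf (0x15) with GCC's <cpuid.h>; it is generic CPUID usage, not the kernel's tsc.c code:

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* Leaf 0x15: EAX = TSC/crystal ratio denominator, EBX = numerator,
         * ECX = crystal clock frequency in Hz (0 if not enumerated). */
        if (!__get_cpuid(0x15, &eax, &ebx, &ecx, &edx) || !eax || !ebx)
            return 1;

        if (ecx)
            printf("TSC frequency ~= %llu Hz\n",
                   (unsigned long long)ecx * ebx / eax);
        else
            printf("TSC/crystal ratio = %u/%u, crystal Hz not enumerated\n",
                   ebx, eax);
        return 0;
    }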
2024-12-18  x86/cpu: Move TSC CPUID leaf definition  Dave Hansen  3  -3/+2
Prepare to use the TSC CPUID leaf definition more widely by moving it to the common header. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/all/20241213205033.68799E53%40davehans-spike.ostc.intel.com
2024-12-18  x86/cpu: Refresh DCA leaf reading code  Dave Hansen  2  -2/+3
The DCA leaf number is also hard-coded in the CPUID level dependency table. Move its definition to common code and use it. While at it, fix up the naming and types in the probe code. All CPUID data is provided in 32-bit registers, not 'unsigned long'. Also stop referring to "level_9". Move away from test_bit() because the type is no longer an 'unsigned long'. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/all/20241213205032.476A30FE%40davehans-spike.ostc.intel.com
2024-12-18  x86/cpu: Remove unnecessary MwAIT leaf checks  Dave Hansen  2  -5/+0
The CPUID leaf dependency checker will remove X86_FEATURE_MWAIT if the CPUID level is below the required level (CPUID_MWAIT_LEAF). Thus, if you check X86_FEATURE_MWAIT you do not need to also check the CPUID level. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20241213205030.9B42B458%40davehans-spike.ostc.intel.com
2024-12-18  x86/cpu: Use MWAIT leaf definition  Dave Hansen  1  -1/+2
The leaf-to-feature dependency array uses hard-coded leaf numbers. Use the new common header definition for the MWAIT leaf. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/all/20241213205029.5B055D6E%40davehans-spike.ostc.intel.com
2024-12-18  x86/cpu: Move MWAIT leaf definition to common header  Dave Hansen  7  -1/+7
Begin constructing a common place to keep all CPUID leaf definitions. Move CPUID_MWAIT_LEAF to the CPUID header and include it where needed. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/all/20241213205028.EE94D02A%40davehans-spike.ostc.intel.com
2024-12-18  x86/cpu: Remove 'x86_cpu_desc' infrastructure  Dave Hansen  2  -66/+0
All the users of 'x86_cpu_desc' are gone. Zap it from the tree. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20241213185133.AF0BF2BC%40davehans-spike.ostc.intel.com
2024-12-18  x86/cpu: Move AMD erratum 1386 table over to 'x86_cpu_id'  Dave Hansen  1  -5/+4
The AMD erratum 1386 detection code uses an old-style 'x86_cpu_desc' table. Replace it with 'x86_cpu_id' so the old style can be removed. I did not create a new helper macro here. The new table is certainly more noisy than the old and it can be improved on. But I was hesitant to create a new macro just for a single site that is only two ugly lines in the end. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20241213185132.07555E1D%40davehans-spike.ostc.intel.com
2024-12-18  x86/cpu: Replace PEBS use of 'x86_cpu_desc' use with 'x86_cpu_id'  Dave Hansen  1  -36/+26
The 'x86_cpu_desc' and 'x86_cpu_id' structures are very similar. Reduce duplicate infrastructure by moving the few users of 'x86_cpu_desc' to the much more common variant. The existing X86_MATCH_VFM_STEPS() helper matches ranges of steppings. Instead of introducing a single-stepping match function which could get confusing when paired with the range, just use the stepping min/max match helper and use min==max. Note that this makes the table more vertically compact because multiple entries like this:
  INTEL_CPU_DESC(INTEL_SKYLAKE_X, 4, 0x00000000),
  INTEL_CPU_DESC(INTEL_SKYLAKE_X, 5, 0x00000000),
  INTEL_CPU_DESC(INTEL_SKYLAKE_X, 6, 0x00000000),
  INTEL_CPU_DESC(INTEL_SKYLAKE_X, 7, 0x00000000),
can be consolidated down to a single stepping range. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20241213185131.8B610039%40davehans-spike.ostc.intel.com
2024-12-18  x86/cpu: Expose only stepping min/max interface  Dave Hansen  3  -55/+56
The x86_match_cpu() infrastructure can match CPU steppings. Since there are only 16 possible steppings, the matching infrastructure goes all out and stores the stepping match as a bitmap. That means it can match any possible steppings in a single list entry. Fun. But it exposes this bitmap to each of the X86_MATCH_*() helpers when none of them really need a bitmap. It makes up for this by exporting a helper (X86_STEPPINGS()) which converts a contiguous stepping range into the bitmap which every single user leverages.
Instead of a bitmap, have the main helper for this sort of thing (X86_MATCH_VFM_STEPS()) just take a stepping range. This ends up actually being even more compact than before. Leave the helper in place (renamed to __X86_STEPPINGS()) to make it more clear what is going on instead of just having a random GENMASK() in the middle of an already complicated macro.
One oddity that I hit was this macro:
  X86_MATCH_VFM_STEPS(vfm, X86_STEPPING_MIN, max_stepping, issues)
It *could* have been converted over to take a min/max stepping value for each entry. But that would have been a bit too verbose and would prevent the one oddball in the list (INTEL_COMETLAKE_L stepping 0) from sticking out. Instead, just have it take a *maximum* stepping and imply that the match is from 0=>max_stepping. This is functional for all the cases now and also retains the nice property of having INTEL_COMETLAKE_L stepping 0 stick out like a sore thumb.
skx_cpuids[] is goofy. It uses the stepping match but encodes all possible steppings. Just use a normal, non-stepping match helper.
Suggested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20241213185129.65527B2A%40davehans-spike.ostc.intel.com
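A toy illustration of the two encodings being compared, with made-up macro and function names (the real helpers live in the x86 cpu_device_id infrastructure):

    #include <assert.h>

    /* A contiguous stepping range [min, max] folded into a 16-bit bitmap,
     * one bit per stepping -- what an X86_STEPPINGS()-style helper produces. */
    #define STEPPING_RANGE_MASK(min, max) \
            ((((1u << ((max) - (min) + 1)) - 1) << (min)) & 0xffffu)

    static int match_bitmap(unsigned int stepping, unsigned int mask)
    {
        return (mask >> stepping) & 1;
    }

    /* The min/max form the commit switches to: no bitmap needed. */
    static int match_range(unsigned int stepping, unsigned int min, unsigned int max)
    {
        return stepping >= min && stepping <= max;
    }

    int main(void)
    {
        unsigned int mask = STEPPING_RANGE_MASK(4, 7);  /* steppings 4..7 */

        for (unsigned int s = 0; s < 16; s++)
            assert(match_bitmap(s, mask) == match_range(s, 4, 7));
        return 0;
    }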
2024-12-18  x86/cpu: Introduce new microcode matching helper  Dave Hansen  2  -0/+12
The 'x86_cpu_id' and 'x86_cpu_desc' structures are very similar and need to be consolidated. There is a microcode version matching function for 'x86_cpu_desc' but not 'x86_cpu_id'. Create one for 'x86_cpu_id'. This essentially just leverages the x86_cpu_id->driver_data field to replace the less generic x86_cpu_desc->x86_microcode_rev field. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20241213185128.8F24EEFC%40davehans-spike.ostc.intel.com
2024-12-17  perf/x86/intel/ds: Add PEBS format 6  Kan Liang  1  -0/+1
The only difference between formats 5 and 6 is the new counters snapshotting group; without the following counters snapshotting enabling patches, it's impossible to utilize the feature in a PEBS record. It's safe to share the same code path with format 5. Add format 6, so the end user can at least utilize the legacy PEBS features. Fixes: a932aa0e868f ("perf/x86: Add Lunar Lake and Arrow Lake support") Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20241216204505.748363-1-kan.liang@linux.intel.com
2024-12-17  perf/x86/intel/uncore: Add Clearwater Forest support  Kan Liang  1  -0/+1
From the perspective of the uncore PMU, the Clearwater Forest is the same as the previous Sierra Forest. The only difference is the event list, which will be supported in the perf tool later. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20241211161146.235253-1-kan.liang@linux.intel.com
2024-12-17  x86/mm: Convert unreachable() to BUG()  Peter Zijlstra  1  -1/+1
Commit 2190966fbc14 ("x86: Convert unreachable() to BUG()") missed one. And after commit 06e24745985c ("objtool: Remove annotate_{,un}reachable()") the invalid use of unreachable() (rightfully) triggers warnings:
  vmlinux.o: warning: objtool: page_fault_oops() falls through to next function is_prefetch()
Fixes: 2190966fbc14 ("x86: Convert unreachable() to BUG()") Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20241216093215.GD12338@noisy.programming.kicks-ass.net
2024-12-17  Merge tag 'xsa465+xsa466-6.13-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip  Linus Torvalds  15  -92/+278
Pull xen fixes from Juergen Gross:
"Fix xen netfront crash (XSA-465) and avoid using the hypercall page that doesn't do speculation mitigations (XSA-466)"
* tag 'xsa465+xsa466-6.13-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
  x86/xen: remove hypercall page
  x86/xen: use new hypercall functions instead of hypercall page
  x86/xen: add central hypercall functions
  x86/xen: don't do PV iret hypercall through hypercall page
  x86/static-call: provide a way to do very early static-call updates
  objtool/x86: allow syscall instruction
  x86: make get_cpu_vendor() accessible from Xen code
  xen/netfront: fix crash when removing device
2024-12-17  x86/xen: remove hypercall page  Juergen Gross  7  -52/+1
The hypercall page is no longer needed. It can be removed, as from the Xen perspective it is optional. But, from Linux's perspective, it removes naked RET instructions that escape the speculative protections that Call Depth Tracking and/or Untrain Ret are trying to achieve. This is part of XSA-466 / CVE-2024-53241. Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
2024-12-17  x86/xen: use new hypercall functions instead of hypercall page  Juergen Gross  1  -13/+20
Call the Xen hypervisor via the new xen_hypercall_func static-call instead of the hypercall page. This is part of XSA-466 / CVE-2024-53241. Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Juergen Gross <jgross@suse.com> Co-developed-by: Peter Zijlstra <peterz@infradead.org> Co-developed-by: Josh Poimboeuf <jpoimboe@redhat.com>
2024-12-17  x86/xen: add central hypercall functions  Juergen Gross  7  -1/+190
Add generic hypercall functions usable for all normal (i.e. not iret) hypercalls. Depending on the guest type and the processor vendor, different functions need to be used because the instruction for entering the hypervisor differs:
- PV guests need to use syscall
- HVM/PVH guests on Intel need to use vmcall
- HVM/PVH guests on AMD and Hygon need to use vmmcall
As PVH guests need to issue hypercalls very early during boot, there is a 4th hypercall function needed for HVM/PVH which can be used on Intel and AMD processors. It will check the vendor type and then set the Intel or AMD specific function to use via static_call(). This is part of XSA-466 / CVE-2024-53241. Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Juergen Gross <jgross@suse.com> Co-developed-by: Peter Zijlstra <peterz@infradead.org>
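A rough userspace illustration of the vendor-based selection described above, using a plain function pointer instead of the kernel's static_call() machinery; only the "which instruction would be used" decision is modeled here:

    #include <stdio.h>
    #include <string.h>
    #include <cpuid.h>

    /* PV guests use syscall; HVM/PVH guests use vmcall on Intel and
     * vmmcall on AMD/Hygon. */
    static const char *(*xen_hypercall_insn)(void);

    static const char *use_vmcall(void)  { return "vmcall";  }
    static const char *use_vmmcall(void) { return "vmmcall"; }

    static void pick_hvm_hypercall_insn(void)
    {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13] = { 0 };

        __get_cpuid(0, &eax, &ebx, &ecx, &edx);
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);

        if (!strcmp(vendor, "AuthenticAMD") || !strcmp(vendor, "HygonGenuine"))
            xen_hypercall_insn = use_vmmcall;
        else
            xen_hypercall_insn = use_vmcall;
    }

    int main(void)
    {
        pick_hvm_hypercall_insn();
        printf("HVM/PVH hypercalls here would use: %s\n", xen_hypercall_insn());
        return 0;
    }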
2024-12-17  KVM: SVM: Remove redundant TLB flush on guest CR4.PGE change  Sean Christopherson  1  -5/+0
Drop SVM's direct TLB flush when CR4.PGE is toggled and NPT is enabled, as KVM already guarantees TLBs are flushed appropriately. For the call from cr_trap(), kvm_post_set_cr4() requests TLB_FLUSH_GUEST (which is a superset of TLB_FLUSH_CURRENT) when CR4.PGE is toggled, regardless of whether or not KVM is using TDP. The calls from nested_vmcb02_prepare_save() and nested_svm_vmexit() are checking guest (L2) vs. host (L1) CR4, and so a flush is unnecessary as L2 is defined to use a different ASID (from L1's perspective). Lastly, the call from svm_set_cr0() passes in the current CR4 value, i.e. can't toggle PGE. Link: https://lore.kernel.org/r/20241127235312.4048445-1-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-17  KVM: SVM: Macrofy SEV=n versions of sev_xxx_guest()  Sean Christopherson  1  -12/+5
Define sev_{,es_,snp_}guest() as "false" when SEV is disabled via Kconfig, i.e. when CONFIG_KVM_AMD_SEV=n. Despite the helpers being __always_inline, gcc-12 is somehow incapable of realizing that the return value is a compile-time constant and generates sub-optimal code. Opportunistically clump the paths together to reduce the amount of ifdeffery. No functional change intended. Link: https://lore.kernel.org/r/20241127234659.4046347-1-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
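The shape of the change, sketched outside of KVM (struct kvm is left opaque and the config symbol is the one named above; this is the pattern, not the exact sev.h hunk):

    #include <stdbool.h>

    struct kvm;  /* opaque here; the real definition is KVM-internal */

    #ifdef CONFIG_KVM_AMD_SEV
    bool sev_guest(struct kvm *kvm);
    bool sev_es_guest(struct kvm *kvm);
    bool sev_snp_guest(struct kvm *kvm);
    #else
    /* Literal compile-time constants: unlike an inline returning false,
     * the compiler has no room to fail to treat these as constant. */
    #define sev_guest(kvm)     false
    #define sev_es_guest(kvm)  false
    #define sev_snp_guest(kvm) false
    #endif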
2024-12-17  KVM/x86: add comment to kvm_mmu_do_page_fault()  Juergen Gross  1  -0/+4
At first glance it isn't obvious why calling kvm_tdp_page_fault() in kvm_mmu_do_page_fault() is special cased, as the general case of using an indirect call would result in kvm_tdp_page_fault() being called anyway. Add a comment to explain the reason. Signed-off-by: Juergen Gross <jgross@suse.com> Link: https://lore.kernel.org/r/20241108161416.28552-1-jgross@suse.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-17  KVM: x86: Plumb in the vCPU to kvm_x86_ops.hwapic_isr_update()  Sean Christopherson  4  -9/+8
Pass the target vCPU to the hwapic_isr_update() vendor hook so that VMX can defer the update until after nested VM-Exit if an EOI for L1's vAPIC occurs while L2 is active. Note, commit d39850f57d21 ("KVM: x86: Drop @vcpu parameter from kvm_x86_ops.hwapic_isr_update()") removed the parameter with the justification that doing so "allows for a decent amount of (future) cleanup in the APIC code", but it's not at all clear what cleanup was intended, or if it was ever realized. No functional change intended. Cc: stable@vger.kernel.org Reviewed-by: Chao Gao <chao.gao@intel.com> Tested-by: Chao Gao <chao.gao@intel.com> Link: https://lore.kernel.org/r/20241128000010.4051275-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-12-15  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  Linus Torvalds  3  -5/+29
Pull kvm fixes from Paolo Bonzini:
"ARM64:
- Fix confusion with implicitly-shifted MDCR_EL2 masks breaking SPE/TRBE initialization
- Align nested page table walker with the intended memory attribute combining rules of the architecture
- Prevent userspace from constraining the advertised ASID width, avoiding horrors of guest TLBIs not matching the intended context in hardware
- Don't leak references on LPIs when insertion into the translation cache fails
RISC-V:
- Replace csr_write() with csr_set() for HVIEN PMU overflow bit
x86:
- Cache CPUID.0xD XSTATE offsets+sizes during module init
  On Intel's Emerald Rapids CPUID costs hundreds of cycles and there are a lot of leaves under 0xD. Getting rid of the CPUIDs during nested VM-Enter and VM-Exit is planned for the next release, for now just cache them: even on Skylake that is 40% faster"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: x86: Cache CPUID.0xD XSTATE offsets+sizes during module init
  RISC-V: KVM: Fix csr_write -> csr_set for HVIEN PMU overflow bit
  KVM: arm64: vgic-its: Add error handling in vgic_its_cache_translation
  KVM: arm64: Do not allow ID_AA64MMFR0_EL1.ASIDbits to be overridden
  KVM: arm64: Fix S1/S2 combination when FWB==1 and S2 has Device memory type
  arm64: Fix usage of new shifted MDCR_EL2 values
2024-12-14  x86/sev: Add full support for a segmented RMP table  Tom Lendacky  3  -24/+245
A segmented RMP table allows for improved locality of reference between the memory protected by the RMP and the RMP entries themselves. Add support to detect and initialize a segmented RMP table with multiple segments as configured by the system BIOS. While the RMPREAD instruction will be used to read an RMP entry in a segmented RMP, initialization and debugging capabilities will require the mapping of the segments.
The RMP_CFG MSR indicates if segmented RMP support is enabled and, if enabled, the amount of memory that an RMP segment covers. When segmented RMP support is enabled, the RMP_BASE MSR points to the start of the RMP bookkeeping area, which is 16K in size. The RMP Segment Table (RST) is located immediately after the bookkeeping area and is 4K in size. The RST contains up to 512 8-byte entries that identify the location of the RMP segment and amount of memory mapped by the segment (which must be less than or equal to the configured segment size).
The physical address that is covered by a segment is based on the segment size and the index of the segment in the RST. The RMP entry for a physical address is based on the offset within the segment. For example, if the segment size is 64GB (0x1000000000 or 1 << 36), then physical address 0x9000800000 is RST entry 9 (0x9000800000 >> 36) and RST entry 9 covers physical memory 0x9000000000 to 0x9FFFFFFFFF. The RMP entry index within the RMP segment is the physical address AND-ed with the segment mask, 64GB - 1 (0xFFFFFFFFF), and then right-shifted 12 bits or PHYS_PFN(0x9000800000 & 0xFFFFFFFFF), which is 0x800.
CPUID 0x80000025_EBX[9:0] describes the number of RMP segments that can be cached by the hardware. Additionally, if CPUID 0x80000025_EBX[10] is set, then the number of actual RMP segments defined cannot exceed the number of RMP segments that can be cached and can be used as a maximum RST index.
[ bp: Unify printk hex format specifiers. ]
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Nikunj A Dadhania <nikunj@amd.com> Reviewed-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> Link: https://lore.kernel.org/r/02afd0ffd097a19cb6e5fb1bb76eb110496c5b11.1734101742.git.thomas.lendacky@amd.com
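The worked example above, written out as a few lines of C (the macro names are illustrative; 64GB segments are assumed, as in the example):

    #include <stdio.h>
    #include <stdint.h>

    #define SEGMENT_SHIFT 36                        /* 64GB segment size */
    #define SEGMENT_SIZE  (1ULL << SEGMENT_SHIFT)
    #define SEGMENT_MASK  (SEGMENT_SIZE - 1)        /* 0xFFFFFFFFF */

    int main(void)
    {
        uint64_t paddr = 0x9000800000ULL;

        uint64_t rst_index   = paddr >> SEGMENT_SHIFT;        /* RST entry */
        uint64_t entry_index = (paddr & SEGMENT_MASK) >> 12;  /* PHYS_PFN of offset */

        /* Prints "RST entry 9, RMP entry index 0x800 within the segment" */
        printf("RST entry %llu, RMP entry index 0x%llx within the segment\n",
               (unsigned long long)rst_index, (unsigned long long)entry_index);
        return 0;
    }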
2024-12-14  x86/sev: Treat the contiguous RMP table as a single RMP segment  Tom Lendacky  1  -19/+180
In preparation for support of a segmented RMP table, treat the contiguous RMP table as a segmented RMP table with a single segment covering all of memory. By treating a contiguous RMP table as a single segment, much of the code that initializes and accesses the RMP can be re-used. Segmented RMP tables can have up to 512 segment entries. Each segment will have metadata associated with it to identify the segment location, the segment size, etc. The segment data and the physical address are used to determine the index of the segment within the table and then the RMP entry within the segment. For an actual segmented RMP table environment, much of the segment information will come from a configuration MSR. For the contiguous RMP, though, much of the information will be statically defined. [ bp: Touchups, explain array_index_nospec() usage. ] Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Nikunj A Dadhania <nikunj@amd.com> Reviewed-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> Link: https://lore.kernel.org/r/8c40fbc9c5217f0d79b37cf861eff03ab0330bef.1733172653.git.thomas.lendacky@amd.com
2024-12-14  x86/sev: Map only the RMP table entries instead of the full RMP range  Tom Lendacky  1  -5/+31
In preparation for support of a segmented RMP table, map only the RMP table entries. The RMP bookkeeping area is only ever accessed when first enabling SNP and does not need to remain mapped. To accomplish this, split the initialization of the RMP bookkeeping area and the initialization of the RMP entry area. The RMP bookkeeping area will be mapped only while it is being initialized. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Nikunj A Dadhania <nikunj@amd.com> Reviewed-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> Reviewed-by: Ashish Kalra <ashish.kalra@amd.com> Link: https://lore.kernel.org/r/22f179998d319834f49c13a8c01187fbf0fd308d.1733172653.git.thomas.lendacky@amd.com
2024-12-14  x86/sev: Move the SNP probe routine out of the way  Tom Lendacky  1  -30/+30
To make patch review easier for the segmented RMP support, move the SNP probe function out from in between the initialization-related routines. No functional change. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Nikunj A Dadhania <nikunj@amd.com> Reviewed-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> Link: https://lore.kernel.org/r/6c2975bbf132d567dd12e1435be1d18c0bf9131c.1733172653.git.thomas.lendacky@amd.com
2024-12-14  x86/sev: Require the RMPREAD instruction after Zen4  Tom Lendacky  1  -2/+7
Limit usage of the non-architectural RMP format to Zen3/Zen4 processors. The RMPREAD instruction, with architectural defined output, is available and should be used for RMP access beyond Zen4. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Nikunj A Dadhania <nikunj@amd.com> Reviewed-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> Reviewed-by: Ashish Kalra <ashish.kalra@amd.com> Link: https://lore.kernel.org/r/5be0093e091778a151266ea853352f62f838eb99.1733172653.git.thomas.lendacky@amd.com
2024-12-14  crypto: x86 - remove assignments of 0 to cra_alignmask  Eric Biggers  5  -5/+0
Struct fields are zero by default, so these lines of code have no effect. Remove them to reduce the number of matches that are found when grepping for cra_alignmask. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
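The C rule being relied on, in miniature (the struct below is a stand-in, not crypto_alg): fields omitted from a designated initializer of an object with static storage duration are zero, so an explicit '.cra_alignmask = 0' adds nothing.

    #include <stdio.h>

    struct alg {
        const char  *name;
        unsigned int cra_alignmask;
        unsigned int priority;
    };

    /* No '.cra_alignmask = 0' needed: the omitted field is already zero. */
    static struct alg demo_alg = {
        .name     = "sha256-demo",
        .priority = 300,
    };

    int main(void)
    {
        printf("%s: alignmask=%u\n", demo_alg.name, demo_alg.cra_alignmask);
        return 0;
    }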
2024-12-14  x86/sev: Add support for the RMPREAD instruction  Tom Lendacky  2  -0/+13
The RMPREAD instruction returns an architecture defined format of an RMP table entry. This is the preferred method for examining RMP entries. The instruction is advertised in CPUID 0x8000001f_EAX[21]. Use this instruction when available. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Nikunj A Dadhania <nikunj@amd.com> Reviewed-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> Reviewed-by: Ashish Kalra <ashish.kalra@amd.com> Link: https://lore.kernel.org/r/72c734ac8b324bbc0c839b2c093a11af4a8881fa.1733172653.git.thomas.lendacky@amd.com
2024-12-14  x86/sev: Prepare for using the RMPREAD instruction to access the RMP  Tom Lendacky  1  -38/+94
The RMPREAD instruction returns an architecture defined format of an RMP entry. This is the preferred method for examining RMP entries. In preparation for using the RMPREAD instruction, convert the existing code that directly accesses the RMP to map the raw RMP information into the architecture defined format. RMPREAD output returns a status bit for the 2MB region status. If the input page address is 2MB aligned and any other pages within the 2MB region are assigned, then 2MB region status will be set to 1. Otherwise, the 2MB region status will be set to 0. For systems that do not support RMPREAD, calculating this value would require looping over all of the RMP table entries within that range until one is found with the assigned bit set. Since this bit is not defined in the current format, and so not used today, do not incur the overhead associated with calculating it. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Nikunj A Dadhania <nikunj@amd.com> Reviewed-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> Link: https://lore.kernel.org/r/da49d5af1eb7f9039f35f14a32ca091efb2dd818.1733172653.git.thomas.lendacky@amd.com
2024-12-13  KVM: x86: Cache CPUID.0xD XSTATE offsets+sizes during module init  Sean Christopherson  3  -5/+29
Snapshot the output of CPUID.0xD.[1..n] during kvm.ko initialization to avoid the overhead of CPUID during runtime. The offset, size, and metadata for CPUID.0xD.[1..n] sub-leaves do not depend on XCR0 or XSS values, i.e. are constant for a given CPU, and thus can be cached during module load.
On Intel's Emerald Rapids, CPUID is *wildly* expensive, to the point where recomputing XSAVE offsets and sizes results in a 4x increase in latency of nested VM-Enter and VM-Exit (nested transitions can trigger xstate_required_size() multiple times per transition), relative to using cached values. The issue is easily visible by running `perf top` while triggering nested transitions: kvm_update_cpuid_runtime() shows up at a whopping 50%.
As measured via RDTSC from L2 (using KVM-Unit-Test's CPUID VM-Exit test and a slightly modified L1 KVM to handle CPUID in the fastpath), a nested roundtrip to emulate CPUID on Skylake (SKX), Icelake (ICX), and Emerald Rapids (EMR) takes:
  SKX 11650
  ICX 22350
  EMR 28850
Using cached values, the latency drops to:
  SKX  6850
  ICX  9000
  EMR  7900
The underlying issue is that CPUID itself is slow on ICX, and comically slow on EMR. The problem is exacerbated on CPUs which support XSAVES and/or XSAVEC, as KVM invokes xstate_required_size() twice on each runtime CPUID update, and because there are more supported XSAVE features (CPUID for supported XSAVE feature sub-leaves is significantly slower).
SKX:
  CPUID.0xD.2  = 348 cycles
  CPUID.0xD.3  = 400 cycles
  CPUID.0xD.4  = 276 cycles
  CPUID.0xD.5  = 236 cycles
  <other sub-leaves are similar>
EMR:
  CPUID.0xD.2  = 1138 cycles
  CPUID.0xD.3  = 1362 cycles
  CPUID.0xD.4  = 1068 cycles
  CPUID.0xD.5  = 910 cycles
  CPUID.0xD.6  = 914 cycles
  CPUID.0xD.7  = 1350 cycles
  CPUID.0xD.8  = 734 cycles
  CPUID.0xD.9  = 766 cycles
  CPUID.0xD.10 = 732 cycles
  CPUID.0xD.11 = 718 cycles
  CPUID.0xD.12 = 734 cycles
  CPUID.0xD.13 = 1700 cycles
  CPUID.0xD.14 = 1126 cycles
  CPUID.0xD.15 = 898 cycles
  CPUID.0xD.16 = 716 cycles
  CPUID.0xD.17 = 748 cycles
  CPUID.0xD.18 = 776 cycles
Note, updating runtime CPUID information multiple times per nested transition is itself a flaw, especially since CPUID is a mandatory intercept on both Intel and AMD. E.g. KVM doesn't need to ensure emulated CPUID state is up-to-date while running L2. That flaw will be fixed in a future patch, as deferring runtime CPUID updates is more subtle than it appears at first glance, the benefits aren't super critical to have once the XSAVE issue is resolved, and caching CPUID output is desirable even if KVM's updates are deferred.
Cc: Jim Mattson <jmattson@google.com> Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <seanjc@google.com> Message-ID: <20241211013302.1347853-2-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
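A userspace sketch of the caching idea: read CPUID.0xD.[2..n] once at startup and answer later size/offset queries from the snapshot. The array names and bounds are illustrative, not KVM's:

    #include <stdio.h>
    #include <cpuid.h>

    #define XSTATE_MAX_COMPONENTS 19

    /* CPUID.(EAX=0xD, ECX=i) for i >= 2: EAX = size of state component i,
     * EBX = offset of component i from the start of the XSAVE area. */
    static unsigned int xstate_size[XSTATE_MAX_COMPONENTS];
    static unsigned int xstate_offset[XSTATE_MAX_COMPONENTS];

    static void snapshot_xstate_layout(void)
    {
        unsigned int eax, ebx, ecx, edx;

        for (unsigned int i = 2; i < XSTATE_MAX_COMPONENTS; i++) {
            if (!__get_cpuid_count(0xD, i, &eax, &ebx, &ecx, &edx))
                break;
            xstate_size[i]   = eax;
            xstate_offset[i] = ebx;
        }
    }

    int main(void)
    {
        snapshot_xstate_layout();
        /* Component 2 is AVX (YMM high halves) on CPUs that enumerate it. */
        printf("xstate[2]: size=%u offset=%u\n", xstate_size[2], xstate_offset[2]);
        return 0;
    }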
2024-12-13  x86/xen: don't do PV iret hypercall through hypercall page  Juergen Gross  1  -9/+18
Instead of jumping to the Xen hypercall page for doing the iret hypercall, directly code the required sequence in xen-asm.S. This is done in preparation of no longer using hypercall page at all, as it has shown to cause problems with speculation mitigations. This is part of XSA-466 / CVE-2024-53241. Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
2024-12-13  x86/static-call: provide a way to do very early static-call updates  Juergen Gross  3  -3/+27
Add static_call_update_early() for updating static-call targets in very early boot. This will be needed for support of Xen guest type specific hypercall functions. This is part of XSA-466 / CVE-2024-53241. Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Juergen Gross <jgross@suse.com> Co-developed-by: Peter Zijlstra <peterz@infradead.org> Co-developed-by: Josh Poimboeuf <jpoimboe@redhat.com>
2024-12-13  x86: make get_cpu_vendor() accessible from Xen code  Juergen Gross  2  -16/+24
In order to be able to differentiate between AMD and Intel based systems for very early hypercalls without having to rely on the Xen hypercall page, make get_cpu_vendor() non-static. Refactor early_cpu_init() for the same reason by splitting out the loop initializing cpu_devs() into an externally callable function. This is part of XSA-466 / CVE-2024-53241. Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Juergen Gross <jgross@suse.com>
2024-12-12  x86/resctrl: Add write option to "mba_MBps_event" file  Tony Luck  3  -1/+44
The "mba_MBps" mount option provides an alternate method to control memory bandwidth. Instead of specifying allowable bandwidth as a percentage of maximum possible, the user provides a MiB/s limit value. There is a file in each CTRL_MON group directory that shows the event currently in use. Allow writing that file to choose a different event. A user can choose any of the memory bandwidth monitoring events listed in /sys/fs/resctrl/info/L3_mon/mon_features independently for each CTRL_MON group by writing to each of the "mba_MBps_event" files. Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Tested-by: Babu Moger <babu.moger@amd.com> Link: https://lore.kernel.org/r/20241206163148.83828-8-tony.luck@intel.com
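A usage sketch: assuming the resctrl filesystem is mounted with the mba_MBps option and a CTRL_MON group named "g1" exists, switching that group's feedback-loop input to the local-bandwidth event could look like this (the group name and the chosen event name are assumptions for the example):

    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/fs/resctrl/g1/mba_MBps_event";  /* example group */
        FILE *f = fopen(path, "w");

        if (!f) {
            perror(path);
            return 1;
        }
        /* The event name must be one listed in /sys/fs/resctrl/info/L3_mon/mon_features. */
        fprintf(f, "mbm_local_bytes\n");
        return fclose(f) ? 1 : 0;
    }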
2024-12-12  x86/resctrl: Add "mba_MBps_event" file to CTRL_MON directories  Tony Luck  3  -0/+42
The "mba_MBps" mount option provides an alternate method to control memory bandwidth. Instead of specifying allowable bandwidth as a percentage of maximum possible, the user provides a MiB/s limit value. In preparation to allow the user to pick the memory bandwidth monitoring event used as input to the feedback loop, provide a file in each CTRL_MON group directory that shows the event currently in use. Note that this file is only visible when the "mba_MBps" mount option is in use. Suggested-by: Reinette Chatre <reinette.chatre@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Tested-by: Babu Moger <babu.moger@amd.com> Link: https://lore.kernel.org/r/20241206163148.83828-7-tony.luck@intel.com