| author | Sean Christopherson <seanjc@google.com> | 2025-11-14 02:14:19 +0300 |
|---|---|---|
| committer | Sean Christopherson <seanjc@google.com> | 2026-03-03 23:23:26 +0300 |
| commit | 3b7a320e491c87c6d25928f6798c2efeef2be0e8 (patch) | |
| tree | d691fc72bb5bc9fd86b36bd1c14dc46c552fad8b | |
| parent | c65106af8393fe45524b256d7836317a8b3f2c09 (diff) | |
| download | linux-3b7a320e491c87c6d25928f6798c2efeef2be0e8.tar.xz | |
KVM: SVM: Skip OSVW variable updates if current CPU's errata are a subset
Elide the OSVW variable updates if the current CPU's set of errata is a
subset of the errata tracked in the global values, i.e. if no update is
needed. There's no danger of under-reporting errata due to bailing early
as KVM is purely reducing the set of "known fixed" errata. I.e. a racing
update on a different CPU with _more_ errata doesn't change anything if
the current CPU has the same or fewer errata relative to the status quo.
If another CPU is writing osvw_len, then "len" is guaranteed to be larger
than the new osvw_len and so the osvw_len update would be skipped anyway.
If another CPU is setting new bits in osvw_status, then "status" is
guaranteed to be a subset of the new osvw_status and the bitwise-OR would
be an effective nop anyway.
Link: https://patch.msgid.link/20251113231420.1695919-5-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
| -rw-r--r-- | arch/x86/kvm/svm/svm.c | 22 |
1 file changed, 10 insertions, 12 deletions
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 7fccb3d72e18..5c4328f42604 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -424,7 +424,6 @@ static void svm_init_osvw(struct kvm_vcpu *vcpu)
 static void svm_init_os_visible_workarounds(void)
 {
 	u64 len, status;
-	int err;
 
 	/*
 	 * Get OS-Visible Workarounds (OSVW) bits.
@@ -453,20 +452,19 @@ static void svm_init_os_visible_workarounds(void)
 		return;
 	}
 
-	err = native_read_msr_safe(MSR_AMD64_OSVW_ID_LENGTH, &len);
-	if (!err)
-		err = native_read_msr_safe(MSR_AMD64_OSVW_STATUS, &status);
+	if (native_read_msr_safe(MSR_AMD64_OSVW_ID_LENGTH, &len) ||
+	    native_read_msr_safe(MSR_AMD64_OSVW_STATUS, &status))
+		len = status = 0;
+
+	if (status == READ_ONCE(osvw_status) && len >= READ_ONCE(osvw_len))
+		return;
 
 	guard(spinlock)(&osvw_lock);
 
-	if (err) {
-		osvw_status = osvw_len = 0;
-	} else {
-		if (len < osvw_len)
-			osvw_len = len;
-		osvw_status |= status;
-		osvw_status &= (1ULL << osvw_len) - 1;
-	}
+	if (len < osvw_len)
+		osvw_len = len;
+	osvw_status |= status;
+	osvw_status &= (1ULL << osvw_len) - 1;
 }
 
 static bool __kvm_is_svm_supported(void)
