author | Sean Christopherson <seanjc@google.com> | 2025-02-12 23:58:39 +0300
committer | Sean Christopherson <seanjc@google.com> | 2025-02-14 18:16:51 +0300
commit | 928c54b1c4caf981afbf060a1417d4255f758513 (patch)
tree | 66348b0aa5ab06e7d7b274df756ac269aa726b47
parent | 61d65f2dc766c70673d45a4b787f49317384642c (diff)
download | linux-928c54b1c4caf981afbf060a1417d4255f758513.tar.xz
KVM: x86/mmu: Always update A/D-disabled SPTEs atomically
In anticipation of aging SPTEs outside of mmu_lock, force A/D-disabled
SPTEs to be updated atomically, as aging A/D-disabled SPTEs will mark them
for access-tracking outside of mmu_lock. Coupled with restoring
access-tracked SPTEs in the fast page fault handler, the end result is
that A/D-disabled SPTEs will be volatile at all times.
Reviewed-by: James Houghton <jthoughton@google.com>
Link: https://lore.kernel.org/all/Z60bhK96JnKIgqZQ@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
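As a minimal illustration of the race motivating this change (not kernel code; all names and bit positions below are hypothetical), the C11 sketch shows how a non-atomic read-modify-write of a volatile SPTE can silently discard a concurrent Accessed-bit clear by an ager, while a cmpxchg loop detects the concurrent update and retries:

/*
 * Illustrative sketch only, not KVM's implementation. FAKE_SPTE_*
 * values are invented for the demo.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define FAKE_SPTE_ACCESSED (1ULL << 5)   /* hypothetical Accessed bit */
#define FAKE_SPTE_WRITABLE (1ULL << 1)   /* hypothetical Writable bit */

static _Atomic uint64_t fake_spte = FAKE_SPTE_ACCESSED | FAKE_SPTE_WRITABLE;

/*
 * Non-atomic read-modify-write: if an aging thread clears
 * FAKE_SPTE_ACCESSED between the load and the store, that clear is
 * overwritten and lost.
 */
static void write_protect_nonatomic(void)
{
	uint64_t old = atomic_load(&fake_spte);
	/* ... an ager may clear FAKE_SPTE_ACCESSED right here ... */
	atomic_store(&fake_spte, old & ~FAKE_SPTE_WRITABLE);
}

/*
 * Atomic variant: the cmpxchg fails if the SPTE changed since the load,
 * so a concurrent Accessed-bit clear is never clobbered; 'old' is
 * refreshed on failure and the update is retried.
 */
static void write_protect_atomic(void)
{
	uint64_t old = atomic_load(&fake_spte);
	while (!atomic_compare_exchange_weak(&fake_spte, &old,
					     old & ~FAKE_SPTE_WRITABLE))
		;	/* retry with the refreshed value of 'old' */
}

int main(void)
{
	write_protect_nonatomic();	/* racy variant, shown for contrast */
	write_protect_atomic();		/* safe variant */
	printf("spte = %#llx\n", (unsigned long long)atomic_load(&fake_spte));
	return 0;
}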
-rw-r--r-- | arch/x86/kvm/mmu/spte.c | 10
1 file changed, 6 insertions, 4 deletions
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 9663ba867178..0f9f47b4ab0e 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -141,8 +141,11 @@ bool spte_needs_atomic_update(u64 spte)
 	if (!is_writable_pte(spte) && is_mmu_writable_spte(spte))
 		return true;
 
-	/* Access-tracked SPTEs can be restored by KVM's fast page fault handler. */
-	if (is_access_track_spte(spte))
+	/*
+	 * A/D-disabled SPTEs can be access-tracked by aging, and access-tracked
+	 * SPTEs can be restored by KVM's fast page fault handler.
+	 */
+	if (!spte_ad_enabled(spte))
 		return true;
 
 	/*
@@ -151,8 +154,7 @@ bool spte_needs_atomic_update(u64 spte)
 	 * invalidate TLBs when aging SPTEs, and so it's safe to clobber the
 	 * Accessed bit (and rare in practice).
 	 */
-	return spte_ad_enabled(spte) && is_writable_pte(spte) &&
-	       !(spte & shadow_dirty_mask);
+	return is_writable_pte(spte) && !(spte & shadow_dirty_mask);
 }
 
 bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
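For context, a hypothetical sketch of how a write path might consume a predicate like spte_needs_atomic_update(): plain stores when the SPTE is known to be stable, xchg when a concurrent hardware or software update must be captured rather than overwritten. The helper names and the reduced predicate below are illustrative, not KVM's actual API; the stand-in keeps only the two cases this patch touches.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical bit layout, for illustration only. */
#define SPTE_WRITABLE	(1ULL << 1)
#define SPTE_DIRTY	(1ULL << 9)
#define SPTE_AD_ENABLED	(1ULL << 60)

/*
 * Stand-in for the kernel's spte_needs_atomic_update(), reduced to the
 * logic after this patch: A/D-disabled SPTEs are always volatile, and
 * A/D-enabled SPTEs are volatile only while hardware may still set the
 * Dirty bit (writable but not yet dirty).
 */
static bool spte_needs_atomic_update(uint64_t spte)
{
	if (!(spte & SPTE_AD_ENABLED))
		return true;

	return (spte & SPTE_WRITABLE) && !(spte & SPTE_DIRTY);
}

static uint64_t write_spte(_Atomic uint64_t *sptep, uint64_t old_spte,
			   uint64_t new_spte)
{
	/*
	 * xchg returns whatever was actually in the SPTE, so a concurrent
	 * update (e.g. aging marking an A/D-disabled SPTE for
	 * access-tracking) is observed by the caller instead of being
	 * silently overwritten.
	 */
	if (spte_needs_atomic_update(old_spte))
		return atomic_exchange(sptep, new_spte);

	atomic_store_explicit(sptep, new_spte, memory_order_relaxed);
	return old_spte;
}

int main(void)
{
	/* A/D-disabled SPTE: always takes the atomic path. */
	_Atomic uint64_t spte = SPTE_WRITABLE;

	uint64_t old = write_spte(&spte, SPTE_WRITABLE, 0);
	printf("old spte = %#llx\n", (unsigned long long)old);
	return 0;
}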