author | Sean Christopherson <sean.j.christopherson@intel.com> | 2020-03-21 00:28:28 +0300 |
---|---|---|
committer | Paolo Bonzini <pbonzini@redhat.com> | 2020-04-21 16:12:57 +0300 |
commit | 71fe70130d88729abc0e658d3202618c340d2e71 (patch) | |
tree | 536ae8214dac28b2a57c809d899df8a8c4f3bc1e /arch | |
parent | 4a632ac6ca66fb29b94a16495624c58f4d313f2f (diff) | |
download | linux-71fe70130d88729abc0e658d3202618c340d2e71.tar.xz | |
KVM: x86/mmu: Add module param to force TLB flush on root reuse
Add a module param, flush_on_reuse, to override skip_tlb_flush and
skip_mmu_sync when performing a so-called "fast cr3 switch", i.e. when
reusing a cached root. The primary motivation for the control is to
provide a fallback mechanism in the event that TLB flushing and/or MMU
sync bugs are exposed/introduced by upcoming changes to stop
unconditionally flushing on nested VMX transitions.
Suggested-by: Jim Mattson <jmattson@google.com>
Suggested-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200320212833.3507-33-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
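Because the new parameter is registered with mode 0644, it should also be adjustable at runtime. The sketch below is not part of this patch: it assumes that mmu.c is built into kvm.ko and that the bool param is therefore exposed as /sys/module/kvm/parameters/flush_on_reuse and writable by root, which is the usual behaviour for 0644 module params but is not stated in the commit itself.

/*
 * Minimal userspace sketch (illustrative, not from this patch): flip the
 * kvm.flush_on_reuse module parameter via sysfs. Assumes the param is
 * surfaced at /sys/module/kvm/parameters/flush_on_reuse.
 */
#include <stdio.h>

#define PARAM_PATH "/sys/module/kvm/parameters/flush_on_reuse"

static int set_flush_on_reuse(int enable)
{
	FILE *f = fopen(PARAM_PATH, "w");

	if (!f) {
		perror("fopen " PARAM_PATH);
		return -1;
	}
	/* Kernel bool params accept "Y"/"N" (and "1"/"0"). */
	fputs(enable ? "Y" : "N", f);
	return fclose(f) ? -1 : 0;
}

int main(void)
{
	/* Force a TLB flush and MMU sync on every cached-root reuse. */
	return set_flush_on_reuse(1) ? 1 : 0;
}

Setting the parameter at load time (e.g. kvm.flush_on_reuse=Y on the kernel command line, or via modprobe) should be equivalent; either way the override forces the sync/flush requests even when the fast cr3 switch path would otherwise skip them.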
Diffstat (limited to 'arch')
-rw-r--r-- | arch/x86/kvm/mmu/mmu.c | 7 |
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 97d1e9b80b8e..c98948459414 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -78,6 +78,9 @@ module_param_cb(nx_huge_pages_recovery_ratio, &nx_huge_pages_recovery_ratio_ops,
 		&nx_huge_pages_recovery_ratio, 0644);
 __MODULE_PARM_TYPE(nx_huge_pages_recovery_ratio, "uint");
 
+static bool __read_mostly force_flush_and_sync_on_reuse;
+module_param_named(flush_on_reuse, force_flush_and_sync_on_reuse, bool, 0644);
+
 /*
  * When setting this variable to true it enables Two-Dimensional-Paging
  * where the hardware walks 2 page tables:
@@ -4318,9 +4321,9 @@ static void __kvm_mmu_new_cr3(struct kvm_vcpu *vcpu, gpa_t new_cr3,
 	 */
 	kvm_make_request(KVM_REQ_LOAD_MMU_PGD, vcpu);
 
-	if (!skip_mmu_sync)
+	if (!skip_mmu_sync || force_flush_and_sync_on_reuse)
 		kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
-	if (!skip_tlb_flush)
+	if (!skip_tlb_flush || force_flush_and_sync_on_reuse)
 		kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
 
 	/*