author | Longpeng(Mike) <longpeng2@huawei.com> | 2017-08-08 07:05:33 +0300 |
---|---|---|
committer | Paolo Bonzini <pbonzini@redhat.com> | 2017-08-08 11:57:43 +0300 |
commit | de63ad4cf4973462953c29c363f3cfa7117c2b2d | |
tree | b269d3a1c04045d9f4f7005f8004f7da082bf0be /arch/x86/kvm/svm.c | |
parent | 199b5763d329b43c88f6ad539db8a6c6b42f8edb | |
KVM: X86: implement the logic for spinlock optimization
get_cpl() requires vcpu_load(), so we must cache the result (whether the
vcpu was preempted while its CPL was 0) in kvm_vcpu_arch.
Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
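
The svm.c hunk in this patch can call svm_get_cpl() directly because it runs at VM-exit time, when the vcpu is loaded. The caching the message refers to matters on the consumer side: when the directed-yield code later asks whether some *other* vcpu was preempted in kernel mode, that vcpu is not loaded, so its CPL has to have been recorded earlier. Below is a minimal sketch of that idea; the field name preempted_in_kernel, the placement in kvm_arch_vcpu_put(), and the accessor body are assumptions for illustration, since the x86.c side of the commit is outside the diffstat shown here.

```c
/*
 * Illustrative sketch only (assumed names/placement); the actual x86.c
 * change is not part of the svm.c diffstat below.  The point: record,
 * while the vcpu is still loaded, whether it was preempted at CPL 0,
 * so that later queries do not need vcpu_load().
 */
void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
{
	if (vcpu->preempted)
		vcpu->arch.preempted_in_kernel =
			kvm_x86_ops->get_cpl(vcpu) == 0;

	/* ... existing state save and teardown ... */
}

/* Generic code can then query the cached value without loading the vcpu. */
bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
{
	return vcpu->arch.preempted_in_kernel;
}
```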
Diffstat (limited to 'arch/x86/kvm/svm.c')
-rw-r--r-- | arch/x86/kvm/svm.c | 5 |
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 0cc486fd9871..1fa9ee5660f4 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3749,7 +3749,10 @@ static int interrupt_window_interception(struct vcpu_svm *svm)
 
 static int pause_interception(struct vcpu_svm *svm)
 {
-	kvm_vcpu_on_spin(&svm->vcpu, false);
+	struct kvm_vcpu *vcpu = &svm->vcpu;
+	bool in_kernel = (svm_get_cpl(vcpu) == 0);
+
+	kvm_vcpu_on_spin(vcpu, in_kernel);
 	return 1;
 }
 
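
For context on how the in_kernel hint is consumed: the second argument of kvm_vcpu_on_spin() tells the generic directed-yield logic in virt/kvm/kvm_main.c whether the spinning vcpu was in guest kernel mode, so it can prefer yield candidates that were themselves preempted in kernel mode (the likely holders of the contended kernel spinlock). The sketch below is a simplified illustration of that filtering, not the real kvm_vcpu_on_spin() body; pick_yield_candidate() is a hypothetical helper, and the real code adds round-robin scanning, eligibility heuristics, and the actual yield_to() call.

```c
/*
 * Simplified illustration of the candidate filtering that the in_kernel
 * hint enables; pick_yield_candidate() is hypothetical, not a kernel API.
 */
static struct kvm_vcpu *pick_yield_candidate(struct kvm *kvm,
					     struct kvm_vcpu *me,
					     bool yield_to_kernel_mode)
{
	struct kvm_vcpu *vcpu;
	int i;

	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (vcpu == me || !READ_ONCE(vcpu->preempted))
			continue;
		/*
		 * A vcpu spinning in guest kernel mode is probably waiting
		 * on a kernel spinlock, so skip candidates that were running
		 * in user mode when they were preempted.
		 */
		if (yield_to_kernel_mode && !kvm_arch_vcpu_in_kernel(vcpu))
			continue;
		return vcpu;
	}

	return NULL;
}
```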