author:     Paolo Bonzini <pbonzini@redhat.com>    2017-12-20 15:55:39 +0300
committer:  Radim Krčmář <rkrcmar@redhat.com>      2018-01-16 18:50:13 +0300
commit:     74a497fae754bd970ad0573c26d0a3c5e784fa0c (patch)
tree:       2b502c8817fe37b9d021fdd9c306dc1f2e678237 /arch/x86/kvm/vmx_shadow_fields.h
parent:     c9e9deae76b8fbbd48e7357247689016bb0d6242 (diff)
download:   linux-74a497fae754bd970ad0573c26d0a3c5e784fa0c.tar.xz
KVM: nVMX: track dirty state of non-shadowed VMCS fields
VMCS12 fields that are not handled through shadow VMCS are rarely
written, and thus they are also almost constant in the vmcs02. We can
thus optimize prepare_vmcs02 by skipping all the work for non-shadowed
fields in the common case.
This patch introduces the (pretty simple) tracking infrastructure; the
next patches will move work to prepare_vmcs02_full and save a few hundred
clock cycles per VMRESUME on a Haswell Xeon E5 system:
                                    before    after
cpuid                                14159    13869
vmcall                               15290    14951
inl_from_kernel                      17703    17447
outl_to_kernel                       16011    14692
self_ipi_sti_nop                     16763    15825
self_ipi_tpr_sti_nop                 17341    15935
wr_tsc_adjust_msr                    14510    14264
rd_tsc_adjust_msr                    15018    14311
mmio-wildcard-eventfd:pci-mem        16381    14947
mmio-datamatch-eventfd:pci-mem       18620    17858
portio-wildcard-eventfd:pci-io       15121    14769
portio-datamatch-eventfd:pci-io      15761    14831
(average savings 748, stdev 460).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
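
For illustration, the dirty-tracking idea described above can be condensed into a small, self-contained C sketch. Everything below is a toy stand-in rather than KVM code: the struct, the is_shadowed_field() helper, the vmcs12 array and the dirty_vmcs12 flag name are assumptions made for the example; only the prepare_vmcs02()/prepare_vmcs02_full() split and the notion of tracking writes to non-shadowed VMCS12 fields come from this patch.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for the nested-virtualization state; not the KVM struct. */
struct nested_state {
	bool dirty_vmcs12;              /* a non-shadowed field was written */
	uint64_t vmcs12[64];            /* toy stand-in for struct vmcs12   */
};

/* Stand-in for the shadow-field list kept in vmx_shadow_fields.h. */
static bool is_shadowed_field(unsigned int field)
{
	return field < 8;               /* pretend the first 8 are shadowed */
}

/* VMWRITE emulation: only non-shadowed fields mark vmcs12 as dirty. */
static void vmcs12_write(struct nested_state *n, unsigned int field,
			 uint64_t value)
{
	n->vmcs12[field] = value;
	if (!is_shadowed_field(field))
		n->dirty_vmcs12 = true;
}

/* VMRESUME path: the expensive full sync runs only when needed. */
static void prepare_vmcs02(struct nested_state *n)
{
	if (n->dirty_vmcs12) {
		printf("prepare_vmcs02_full: copy rarely-written fields\n");
		n->dirty_vmcs12 = false;
	}
	printf("prepare_vmcs02: copy shadowed fields (every entry)\n");
}

int main(void)
{
	struct nested_state n = { .dirty_vmcs12 = true };  /* first entry */

	prepare_vmcs02(&n);          /* full sync on the first entry      */
	vmcs12_write(&n, 3, 1);      /* shadowed field: stays clean       */
	prepare_vmcs02(&n);          /* fast path, full sync skipped      */
	vmcs12_write(&n, 42, 2);     /* non-shadowed field: marks dirty   */
	prepare_vmcs02(&n);          /* full sync runs again              */
	return 0;
}

On the common fast path only the shadowed, frequently-written fields are copied on each nested entry, which is where the per-VMRESUME cycle savings in the table above come from.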
Diffstat (limited to 'arch/x86/kvm/vmx_shadow_fields.h')
-rw-r--r--  arch/x86/kvm/vmx_shadow_fields.h | 6
1 file changed, 6 insertions, 0 deletions
diff --git a/arch/x86/kvm/vmx_shadow_fields.h b/arch/x86/kvm/vmx_shadow_fields.h
index 31d7a15338ac..cd0c75f6d037 100644
--- a/arch/x86/kvm/vmx_shadow_fields.h
+++ b/arch/x86/kvm/vmx_shadow_fields.h
@@ -17,6 +17,12 @@
  * (e.g. force a sync if VM_INSTRUCTION_ERROR is modified
  * by nested_vmx_failValid)
  *
+ * When adding or removing fields here, note that shadowed
+ * fields must always be synced by prepare_vmcs02, not just
+ * prepare_vmcs02_full.
+ */
+
+/*
  * Keeping the fields ordered by size is an attempt at improving
  * branch prediction in vmcs_read_any and vmcs_write_any.
  */
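
The file touched by this hunk is the list of shadowed VMCS12 fields. As a side note, below is a hedged sketch of the X-macro pattern such a list is typically consumed with; the SHADOW_FIELD_RO()/SHADOW_FIELD_RW() entry style follows the file's convention, but the field names and the consuming table are invented for the example and are not the kernel's actual list or code.

#include <stdio.h>

/* Toy stand-in for the entries kept in vmx_shadow_fields.h. */
#define SHADOW_FIELD_LIST			\
	SHADOW_FIELD_RO(GUEST_PHYSICAL_ADDRESS)	\
	SHADOW_FIELD_RW(GUEST_RIP)		\
	SHADOW_FIELD_RW(VM_ENTRY_INTR_INFO_FIELD)

enum field_access { FIELD_RO, FIELD_RW };

struct shadow_field {
	const char *name;
	enum field_access access;
};

/* Expand the same list into a table describing the shadowed fields. */
static const struct shadow_field shadow_fields[] = {
#define SHADOW_FIELD_RO(x) { #x, FIELD_RO },
#define SHADOW_FIELD_RW(x) { #x, FIELD_RW },
	SHADOW_FIELD_LIST
#undef SHADOW_FIELD_RO
#undef SHADOW_FIELD_RW
};

int main(void)
{
	/* Fields in this table are synced by prepare_vmcs02 on every entry; */
	/* everything else is deferred to prepare_vmcs02_full.               */
	for (size_t i = 0; i < sizeof(shadow_fields) / sizeof(shadow_fields[0]); i++)
		printf("%s (%s)\n", shadow_fields[i].name,
		       shadow_fields[i].access == FIELD_RW ? "rw" : "ro");
	return 0;
}

Keeping the list in a single header lets the read, write and sync paths all expand the same entries, which is why the added comment warns that shadowed fields must always be handled by prepare_vmcs02 itself.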