author | Paolo Bonzini <pbonzini@redhat.com> | 2022-02-11 14:50:11 +0300
committer | Paolo Bonzini <pbonzini@redhat.com> | 2022-04-29 19:49:53 +0300
commit | e5ed0fb01004f93ddf8a0c632cbc8a3f1ea5b518 (patch)
tree | fd59eecd3558d9126ae32361880eb8900946031c /arch/x86/kvm/mmu/paging_tmpl.h
parent | b89805082adf1f502b1f0b993063e938e1bcb098 (diff)
download | linux-e5ed0fb01004f93ddf8a0c632cbc8a3f1ea5b518.tar.xz
KVM: x86/mmu: split cpu_role from mmu_role
Snapshot the state of the processor registers that govern page walk into
a new field of struct kvm_mmu. This is a more natural representation
than having it *mostly* in mmu_role but not exclusively; the delta
right now is represented in other fields, such as root_level.
The nested MMU now has only the CPU role; and in fact the new function
kvm_calc_cpu_role is analogous to the previous kvm_calc_nested_mmu_role,
except that it has role.base.direct equal to !CR0.PG. For a walk-only
MMU, "direct" has no meaning, but we set it to !CR0.PG so that
role.ext.cr0_pg can go away in a future patch.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
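As a loose, self-contained illustration of what "snapshotting the page-walk registers into a role" means, here is a simplified sketch in plain C. The struct and helper names below are invented stand-ins, not the kernel's kvm_calc_cpu_role() or struct kvm_mmu_role_regs; only the key behaviour from the commit message is reproduced, namely that base.direct mirrors !CR0.PG even for a walk-only MMU.

```c
#include <stdbool.h>

/* Stand-in for the register state that governs the guest page walk
 * (simplified; not the kernel's struct kvm_mmu_role_regs). */
struct walk_regs_sketch {
	bool cr0_pg;    /* paging enabled */
	bool cr4_pse;   /* 4MB pages with 32-bit paging */
	bool cr4_pae;   /* PAE paging */
	bool efer_lma;  /* long mode active */
};

/* Stand-in for the new CPU role: a snapshot of the walk-relevant state. */
struct cpu_role_sketch {
	struct {
		bool direct;            /* mirrors !CR0.PG */
		unsigned int level;     /* root level of the guest walk */
	} base;
	struct {
		bool cr4_pse;
		bool cr4_pae;
	} ext;
};

static struct cpu_role_sketch
calc_cpu_role_sketch(const struct walk_regs_sketch *regs)
{
	struct cpu_role_sketch role = { 0 };

	/* "direct" has no meaning for a walk-only MMU, but setting it to
	 * !CR0.PG is what lets ext.cr0_pg go away in a later patch. */
	role.base.direct = !regs->cr0_pg;
	if (!regs->cr0_pg)
		return role;            /* no paging: nothing else matters */

	role.base.level = regs->efer_lma ? 4 : (regs->cr4_pae ? 3 : 2);
	role.ext.cr4_pse = regs->cr4_pse;
	role.ext.cr4_pae = regs->cr4_pae;
	return role;
}
```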
Diffstat (limited to 'arch/x86/kvm/mmu/paging_tmpl.h')
-rw-r--r-- | arch/x86/kvm/mmu/paging_tmpl.h | 2
1 file changed, 1 insertion, 1 deletion
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 07a7832f96cb..56544c542d05 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -281,7 +281,7 @@ static inline bool FNAME(is_last_gpte)(struct kvm_mmu *mmu,
 	 * is not reserved and does not indicate a large page at this level,
 	 * so clear PT_PAGE_SIZE_MASK in gpte if that is the case.
 	 */
-	gpte &= level - (PT32_ROOT_LEVEL + mmu->mmu_role.ext.cr4_pse);
+	gpte &= level - (PT32_ROOT_LEVEL + mmu->cpu_role.ext.cr4_pse);
 #endif
 	/*
 	 * PG_LEVEL_4K always terminates.  The RHS has bit 7 set
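For context on the line the hunk touches: the unsigned subtraction produces an all-ones mask except when level equals PT32_ROOT_LEVEL (2 in KVM) and CR4.PSE is clear, in which case it is zero and bit 7 (PS) of the gpte gets cleared, since bit 7 is ignored rather than meaning "large page" in that configuration. The standalone snippet below demonstrates that arithmetic; mask_pse() and the test values are invented for illustration, not kernel code.

```c
#include <assert.h>
#include <stdio.h>

#define PT32_ROOT_LEVEL   2u          /* root level of 32-bit paging in KVM */
#define PT_PAGE_SIZE_MASK (1u << 7)   /* bit 7: PS (page size) */

/* The masking idea from is_last_gpte(): the mask is 0 only when
 * level == PT32_ROOT_LEVEL and CR4.PSE is clear, i.e. exactly the case
 * where bit 7 of the gpte does not indicate a large page. */
static unsigned int mask_pse(unsigned int gpte, unsigned int level,
			     unsigned int cr4_pse)
{
	return gpte & (level - (PT32_ROOT_LEVEL + cr4_pse));
}

int main(void)
{
	unsigned int gpte = PT_PAGE_SIZE_MASK;   /* PDE with bit 7 set */

	/* CR4.PSE=0 at the PDE level: bit 7 is ignored, so it is cleared. */
	assert((mask_pse(gpte, 2, 0) & PT_PAGE_SIZE_MASK) == 0);

	/* CR4.PSE=1 at the PDE level: bit 7 really means a 4MB page. */
	assert((mask_pse(gpte, 2, 1) & PT_PAGE_SIZE_MASK) != 0);

	/* At the PTE level the subtraction wraps to a huge unsigned value,
	 * leaving the gpte untouched. */
	assert((mask_pse(gpte, 1, 0) & PT_PAGE_SIZE_MASK) != 0);

	printf("mask trick behaves as described\n");
	return 0;
}
```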