author:    Paul Mackerras <paulus@samba.org>  2013-09-20 08:52:51 +0400
committer: Alexander Graf <agraf@suse.de>     2013-10-17 16:49:35 +0400
commit:    93b159b466bdc9753bba5c3c51b40d7ddbbcc07c (patch)
tree:      723a5f54132c2f44e25cbc8ea8b365c3940a5e10 /arch/powerpc/kvm/book3s_64_mmu.c
parent:    4f6c11db10159e362b0100d41b35bf6d731eb4e2 (diff)
download:  linux-93b159b466bdc9753bba5c3c51b40d7ddbbcc07c.tar.xz
KVM: PPC: Book3S PR: Better handling of host-side read-only pages
Currently we request write access to all pages that get mapped into the
guest, even if the guest is only loading from the page. This reduces
the effectiveness of KSM because it means that we unshare every page we
access. Also, we always set the changed (C) bit in the guest HPTE if
it allows writing, even for a guest load.
This fixes both these problems. We pass an 'iswrite' flag to the
mmu.xlate() functions and to kvmppc_mmu_map_page() to indicate whether
the access is a load or a store. The mmu.xlate() functions now only
set C for stores. kvmppc_gfn_to_pfn() now calls gfn_to_pfn_prot()
instead of gfn_to_pfn() so that it can indicate whether we need write
access to the page, and get back a 'writable' flag to indicate whether
the page is writable or not. If that 'writable' flag is clear, we then
make the host HPTE read-only even if the guest HPTE allowed writing.
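
For illustration, the helper below is a minimal sketch of the gfn_to_pfn_prot() pattern described above. The name kvmppc_gfn_to_pfn() comes from the text, but the body here is reduced to the essential call; the in-tree helper may handle additional special cases that are omitted.

```c
#include <linux/kvm_host.h>

/*
 * Minimal sketch: ask the host for write access only when the guest
 * access is a store, and learn via *writable whether the resulting
 * host mapping may actually be written.  The real helper may do more.
 */
static inline pfn_t kvmppc_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
				      bool writing, bool *writable)
{
	return gfn_to_pfn_prot(vcpu->kvm, gfn, writing, writable);
}
```

When *writable comes back false, the mapping code makes the host HPTE read-only even though the guest HPTE permits writes, which is what lets KSM-merged pages stay shared until the guest actually stores to them.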
This means that we can get a protection fault when the guest writes to a
page that it has mapped read-write but which is read-only on the host
side (perhaps due to KSM having merged the page). Thus we now call
kvmppc_handle_pagefault() for protection faults as well as HPTE not found
faults. In kvmppc_handle_pagefault(), if the access was allowed by the
guest HPTE and we thus need to install a new host HPTE, we then need to
remove the old host HPTE if there is one. This is done with a new
function, kvmppc_mmu_unmap_page(), which uses kvmppc_mmu_pte_vflush() to
find and remove the old host HPTE.
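
A minimal sketch of such an unmap helper, assuming the kvmppc_mmu_pte_vflush() interface named above; the vpage mask is illustrative and the real per-MMU implementations may differ.

```c
/*
 * Sketch: evict any existing host HPTE for this guest translation by
 * flushing shadow PTEs whose virtual page matches pte->vpage.  The
 * all-ones mask here is illustrative only.
 */
void kvmppc_mmu_unmap_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte)
{
	kvmppc_mmu_pte_vflush(vcpu, pte->vpage, 0xfffffffffULL);
}
```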
Since the memslot-related functions require the KVM SRCU read lock to
be held, this adds srcu_read_lock/unlock pairs around the calls to
kvmppc_handle_pagefault().
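
A hedged sketch of that call-site pattern follows; the wrapper shape and argument names are assumptions, and only the srcu_read_lock()/srcu_read_unlock() bracketing is the point.

```c
/* Sketch of bracketing the fault handler with the KVM SRCU read lock;
 * kvmppc_handle_pagefault() walks memslots, which requires it. */
static int handle_fault_locked(struct kvm_run *run, struct kvm_vcpu *vcpu,
			       ulong eaddr, int exit_nr)
{
	int idx, r;

	idx = srcu_read_lock(&vcpu->kvm->srcu);
	r = kvmppc_handle_pagefault(run, vcpu, eaddr, exit_nr);
	srcu_read_unlock(&vcpu->kvm->srcu, idx);

	return r;
}
```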
Finally, this changes kvmppc_mmu_book3s_32_xlate_pte() to not ignore
guest HPTEs that don't permit access, and to return -EPERM for accesses
that are not permitted by the page protections.
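
The 32-bit hunk is not part of this diffstat; as a rough sketch under that caveat, the permission check described above would look something like the 64-bit one in the diff below.

```c
	/* Sketch of the -EPERM behaviour described for
	 * kvmppc_mmu_book3s_32_xlate_pte(); the real 32-bit hunk may
	 * structure this differently. */
	if (!pte->may_read || (iswrite && !pte->may_write))
		return -EPERM;
```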
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Diffstat (limited to 'arch/powerpc/kvm/book3s_64_mmu.c')
-rw-r--r--  arch/powerpc/kvm/book3s_64_mmu.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index c110145522e6..83da1f868fd5 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -206,7 +206,8 @@ static int decode_pagesize(struct kvmppc_slb *slbe, u64 r)
 }
 
 static int kvmppc_mmu_book3s_64_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
-				      struct kvmppc_pte *gpte, bool data)
+				      struct kvmppc_pte *gpte, bool data,
+				      bool iswrite)
 {
 	struct kvmppc_slb *slbe;
 	hva_t ptegp;
@@ -345,8 +346,8 @@ do_second:
 			r |= HPTE_R_R;
 			put_user(r >> 8, addr + 6);
 		}
-		if (data && gpte->may_write && !(r & HPTE_R_C)) {
-			/* Set the dirty flag -- XXX even if not writing */
+		if (iswrite && gpte->may_write && !(r & HPTE_R_C)) {
+			/* Set the dirty flag */
 			/* Use a single byte write */
 			char __user *addr = (char __user *) &pteg[i+1];
 			r |= HPTE_R_C;
@@ -355,7 +356,7 @@ do_second:
 
 	mutex_unlock(&vcpu->kvm->arch.hpt_mutex);
 
-	if (!gpte->may_read)
+	if (!gpte->may_read || (iswrite && !gpte->may_write))
 		return -EPERM;
 
 	return 0;