author		Paul Mackerras <paulus@ozlabs.org>	2020-05-28 03:48:18 +0300
committer	Paul Mackerras <paulus@ozlabs.org>	2020-05-28 03:48:18 +0300
commit		3d89c2ef24f0772b7806ed289bef482b24083de6
tree		bea4b52aeb9a98926a963f6d6d1f577370e879ec /arch/powerpc/kvm/book3s_64_mmu_radix.c
parent		e3326ae3d59e443a379367c6936941d6ab55d316
KVM: PPC: Book3S HV: Remove user-triggerable WARN_ON
Although in general we do not expect valid PTEs to be found in
kvmppc_create_pte when we are inserting a large page mapping, there
is one situation where this can occur. That is when dirty page
logging is turned off for a memslot while the VM is running.
Because the new memslots are installed before the old memslot is
flushed in kvmppc_core_commit_memory_region_hv(), there is a
window where a hypervisor page fault can try to install a 2MB
(or 1GB) page where there are already small page mappings which
were installed while dirty page logging was enabled and which
have not yet been flushed.
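As an illustration of the window (not part of the commit), here is a
minimal userspace sketch using pthreads. It is not kernel code: every
name in it is invented, fault_thread stands in for the hypervisor page
fault path, and main() stands in for the memslot commit path.

/* Hypothetical model of the window described above; invented names,
 * userspace pthreads, not kernel code.  Build with: gcc -pthread. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_SMALL 32	/* small mappings covering one large page */

static atomic_bool dirty_logging = true;	/* memslot flag */
static atomic_int  small_pte[NR_SMALL];		/* nonzero => valid small mapping */

/* Stands in for the hypervisor page fault path. */
static void *fault_thread(void *arg)
{
	int leftovers = 0, i;

	(void)arg;
	/* Wait until the fault path observes dirty logging switched off... */
	while (atomic_load(&dirty_logging))
		;
	/* ...then try to install a large mapping over the range.  This is
	 * the point where valid small PTEs can legitimately still be
	 * present, so warning about them would be wrong. */
	for (i = 0; i < NR_SMALL; i++)
		if (atomic_load(&small_pte[i]))
			leftovers++;
	printf("fault path saw %d leftover small PTEs in the window\n",
	       leftovers);
	return NULL;
}

int main(void)
{
	pthread_t t;
	int i;

	/* Small mappings installed while dirty logging was enabled. */
	for (i = 0; i < NR_SMALL; i++)
		atomic_store(&small_pte[i], 1);

	pthread_create(&t, NULL, fault_thread, NULL);

	/* Commit path: the new memslot (logging off) becomes visible... */
	atomic_store(&dirty_logging, false);
	/* ...before the old memslot's small mappings are flushed. */
	for (i = 0; i < NR_SMALL; i++)
		atomic_store(&small_pte[i], 0);

	pthread_join(t, NULL);
	return 0;
}

Depending on scheduling, the fault thread may observe anywhere from 0 to
NR_SMALL leftover small PTEs, which is exactly why the kernel cannot
treat finding them as a hard error.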
Since valid PTEs can legitimately be found by kvmppc_unmap_free_pte
in a situation that userspace can trigger, just remove the
WARN_ON_ONCE; it is undesirable for userspace to be able to trigger
a kernel warning.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Diffstat (limited to 'arch/powerpc/kvm/book3s_64_mmu_radix.c')
 arch/powerpc/kvm/book3s_64_mmu_radix.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 97b45eaa7014..bc3f795252fd 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -429,9 +429,13 @@ void kvmppc_unmap_pte(struct kvm *kvm, pte_t *pte, unsigned long gpa,
  * Callers are responsible for flushing the PWC.
  *
  * When page tables are being unmapped/freed as part of page fault path
- * (full == false), ptes are not expected. There is code to unmap them
- * and emit a warning if encountered, but there may already be data
- * corruption due to the unexpected mappings.
+ * (full == false), valid ptes are generally not expected; however, there
+ * is one situation where they arise, which is when dirty page logging is
+ * turned off for a memslot while the VM is running. The new memslot
+ * becomes visible to page faults before the memslot commit function
+ * gets to flush the memslot, which can lead to a 2MB page mapping being
+ * installed for a guest physical address where there are already 64kB
+ * (or 4kB) mappings (of sub-pages of the same 2MB page).
  */
 static void kvmppc_unmap_free_pte(struct kvm *kvm, pte_t *pte, bool full,
 				  unsigned int lpid)
@@ -445,7 +449,6 @@ static void kvmppc_unmap_free_pte(struct kvm *kvm, pte_t *pte, bool full,
 	for (it = 0; it < PTRS_PER_PTE; ++it, ++p) {
 		if (pte_val(*p) == 0)
 			continue;
-		WARN_ON_ONCE(1);
 		kvmppc_unmap_pte(kvm, p, pte_pfn(*p) << PAGE_SHIFT,
 				 PAGE_SHIFT, NULL, lpid);
 	}