author:    Paolo Bonzini <pbonzini@redhat.com>  2021-02-05 13:07:11 +0300
committer: Paolo Bonzini <pbonzini@redhat.com>  2021-02-09 15:05:44 +0300
commit:    9fd6dad1261a541b3f5fa7dc5b152222306e6702
tree:      07a393402e4e9d938a094731c816ca3b54da8711 /virt
parent:    897218ff7cf19290ec2d69652ce673d8ed6fedeb
download:  linux-9fd6dad1261a541b3f5fa7dc5b152222306e6702.tar.xz
mm: provide a saner PTE walking API for modules
Currently, the follow_pfn function is exported for modules but
follow_pte is not. However, follow_pfn is very easy to misuse,
because it does not provide protections (so most of its callers
assume the page is writable!) and because it returns after having
already unlocked the page table lock.
Provide instead a simplified version of follow_pte that does
not have the pmdpp and range arguments. The older version
survives as follow_invalidate_pte() for use by fs/dax.c.
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
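For reference, a sketch of the two entry points the message describes, reconstructed from the description above; parameter names are illustrative, and the real prototypes should be checked in include/linux/mm.h and mm/memory.c as of this commit:

```c
/*
 * Sketch only: simplified walker exported for modules; the pmdpp and
 * range arguments are gone, the caller gets the mapped PTE and the
 * page-table lock back and must drop both itself.
 */
int follow_pte(struct mm_struct *mm, unsigned long address,
	       pte_t **ptepp, spinlock_t **ptlp);

/* Older, full-featured variant that survives for fs/dax.c. */
int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
			  struct mmu_notifier_range *range,
			  pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
```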
Diffstat (limited to 'virt')
-rw-r--r-- | virt/kvm/kvm_main.c | 4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 48ccdf4e3d04..ee4ac2618ec5 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1911,7 +1911,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 	spinlock_t *ptl;
 	int r;
 
-	r = follow_pte(vma->vm_mm, addr, NULL, &ptep, NULL, &ptl);
+	r = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
 	if (r) {
 		/*
 		 * get_user_pages fails for VM_IO and VM_PFNMAP vmas and does
@@ -1926,7 +1926,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 		if (r)
 			return r;
 
-		r = follow_pte(vma->vm_mm, addr, NULL, &ptep, NULL, &ptl);
+		r = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
 		if (r)
 			return r;
 	}
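Purely to illustrate the calling convention visible in the hunks above, a minimal sketch (not the actual KVM code; the helper name and error handling are made up): the PTE value is sampled while the page-table lock returned through ptl is still held, and the caller releases the lock itself with pte_unmap_unlock(), which is what makes this harder to misuse than follow_pfn().

```c
/* Hypothetical helper, for illustration only. */
static int sample_remapped_pfn(struct vm_area_struct *vma,
			       unsigned long addr, kvm_pfn_t *pfn)
{
	pte_t *ptep;
	spinlock_t *ptl;
	int r;

	r = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
	if (r)
		return r;

	*pfn = pte_pfn(*ptep);		/* value read under the lock */
	pte_unmap_unlock(ptep, ptl);	/* caller drops the lock itself */
	return 0;
}
```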