path: root/drivers/iommu/amd/io_pgtable.c
2022-09-07  iommu/amd/io-pgtable: Implement unmap_pages io_pgtable_ops callback  (Vasant Hegde, 1 file, -8/+9)
Implement the io_pgtable_ops->unmap_pages() callback for the AMD driver and deprecate the io_pgtable_ops->unmap() callback. Also, if fetch_pte() returns NULL, return from unmap_pages() instead of trying to continue unmapping the remaining pages.

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Link: https://lore.kernel.org/r/20220825063939.8360-3-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
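The unmap_pages() callback works on a run of pgcount pages of size pgsize rather than a single range, and the early return on a NULL fetch_pte() is what stops the loop at a hole in the table. A rough sketch of that shape, where fetch_pte_sketch() and clear_pte_sketch() are illustrative stand-ins for the driver's internal helpers, not its real code:

    #include <linux/io-pgtable.h>   /* struct io_pgtable_ops */
    #include <linux/iommu.h>        /* struct iommu_iotlb_gather */

    /* Stand-ins, bodies elided: return the leaf PTE covering 'iova' and the
     * size it maps (or NULL if unmapped), and clear a leaf PTE. */
    static u64 *fetch_pte_sketch(struct io_pgtable_ops *ops, unsigned long iova,
                                 unsigned long *page_size);
    static void clear_pte_sketch(u64 *pte);

    static size_t unmap_pages_sketch(struct io_pgtable_ops *ops,
                                     unsigned long iova, size_t pgsize,
                                     size_t pgcount,
                                     struct iommu_iotlb_gather *gather)
    {
        size_t unmapped = 0;

        while (unmapped < pgsize * pgcount) {
            unsigned long unmap_size;
            u64 *pte = fetch_pte_sketch(ops, iova, &unmap_size);

            if (!pte)               /* hole in the table: stop here */
                return unmapped;

            clear_pte_sketch(pte);
            iova += unmap_size;
            unmapped += unmap_size;
        }

        return unmapped;
    }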
2022-09-07  iommu/amd/io-pgtable: Implement map_pages io_pgtable_ops callback  (Vasant Hegde, 1 file, -25/+34)
Implement the io_pgtable_ops->map_pages() callback for the AMD driver and deprecate the io_pgtable_ops->map() callback.

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Link: https://lore.kernel.org/r/20220825063939.8360-2-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
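map_pages() is the batched counterpart: it maps pgcount same-sized pages in one call and reports how much it managed via *mapped, so the core can unwind a partial failure. A hedged sketch of that loop, with map_one_page_sketch() standing in for the driver's single-page path:

    #include <linux/io-pgtable.h>   /* struct io_pgtable_ops */

    /* Stand-in for the driver's single-page map helper; body elided. */
    static int map_one_page_sketch(struct io_pgtable_ops *ops, unsigned long iova,
                                   phys_addr_t paddr, size_t pgsize, int prot,
                                   gfp_t gfp);

    static int map_pages_sketch(struct io_pgtable_ops *ops, unsigned long iova,
                                phys_addr_t paddr, size_t pgsize, size_t pgcount,
                                int prot, gfp_t gfp, size_t *mapped)
    {
        size_t done = 0;
        int ret = 0;

        while (pgcount--) {
            ret = map_one_page_sketch(ops, iova + done, paddr + done,
                                      pgsize, prot, gfp);
            if (ret)
                break;
            done += pgsize;
        }

        /* Tell the caller how far we got so it can unwind on failure. */
        if (mapped)
            *mapped = done;
        return ret;
    }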
2022-06-23  iommu/amd: Use try_cmpxchg64 in alloc_pte and free_clear_pte  (Uros Bizjak, 1 file, -4/+2)
Use try_cmpxchg64() instead of the cmpxchg64(*ptr, old, new) != old pattern in alloc_pte() and free_clear_pte(). cmpxchg reports success via the ZF flag, so this change saves a compare after the cmpxchg (and the related move instruction in front of it). Also remove the racy explicit assignment to pteval when cmpxchg fails; try_cmpxchg does this implicitly, and atomically, from *pte.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220525145416.10816-1-ubizjak@gmail.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
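The difference is purely in the calling convention: try_cmpxchg64() returns a bool and, on failure, writes the value it actually found back into the 'old' variable, so the caller neither compares the result nor reloads *pte by hand. A small before/after sketch of the pattern (the helper names are made up for illustration):

    #include <linux/atomic.h>       /* cmpxchg64(), try_cmpxchg64() */
    #include <linux/types.h>

    /* Install 'new' into an empty (zero) PTE slot, cmpxchg64() style:
     * the caller compares the returned value itself. */
    static bool install_pte_old_style(u64 *pte, u64 new)
    {
        u64 old = 0;

        return cmpxchg64(pte, old, new) == old;
    }

    /* Same thing with try_cmpxchg64(): success comes back as a bool
     * (from the ZF flag on x86), and on failure 'old' is refreshed
     * atomically from *pte, so no separate racy reload is needed. */
    static bool install_pte_new_style(u64 *pte, u64 new)
    {
        u64 old = 0;

        return try_cmpxchg64(pte, &old, new);
    }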
2022-02-14  iommu/amd: Fix I/O page table memory leak  (Suravee Suthikulpanit, 1 file, -6/+6)
The current logic updates the I/O page table mode for the domain before calling the logic that frees the memory used for the page table. This results in an IOMMU page table memory leak, which can be observed when launching a VM with pass-through devices. Fix by freeing the memory used for the page table before updating the mode.

Cc: Joerg Roedel <joro@8bytes.org>
Reported-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Tested-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Fixes: e42ba0633064 ("iommu/amd: Restructure code for freeing page table")
Link: https://lore.kernel.org/all/20220118194720.urjgi73b7c3tq2o6@oracle.com/
Link: https://lore.kernel.org/r/20220210154745.11524-1-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
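The bug is an ordering problem: the freeing code walks the table according to the recorded mode (depth), so resetting the mode first makes the walk see an empty table and leak everything below the root. A hedged sketch of the corrected order, with made-up structure and helper names:

    #include <linux/types.h>

    struct pgtable_sketch {
        u64 *root;      /* top-level table */
        int  mode;      /* how many levels the table currently has */
    };

    /* Stand-in for the recursive walker; body elided. */
    static void free_all_levels_sketch(u64 *root, int mode);

    static void destroy_pgtable_sketch(struct pgtable_sketch *pgtable)
    {
        /* Free the page-table pages first, while 'mode' still
         * describes how deep the table is ... */
        free_all_levels_sketch(pgtable->root, pgtable->mode);

        /* ... and only then reset the root and mode. The other order
         * is what leaked the pages. */
        pgtable->root = NULL;
        pgtable->mode = 0;
    }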
2021-12-20  iommu/amd: Use put_pages_list  (Matthew Wilcox (Oracle), 1 file, -32/+18)
page->freelist is for the use of slab. We already have the ability to free a list of pages in the core mm, but it requires the use of a list_head and for the pages to be chained together through page->lru. Switch the AMD IOMMU code over to using put_pages_list().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
[rm: split from original patch, cosmetic tweaks]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/73af128f651aaa1f38f69e586c66765a88ad2de0.1639753638.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
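put_pages_list() takes an ordinary list_head of pages chained through page->lru and releases them all in one call, which is what lets the driver drop its private page->freelist chaining. A minimal sketch of the pattern, assuming the pages come from alloc_page():

    #include <linux/gfp.h>
    #include <linux/list.h>
    #include <linux/mm.h>           /* put_pages_list() */

    static void gather_and_free_sketch(void)
    {
        LIST_HEAD(freelist);
        struct page *page;
        int i;

        /* Collect pages on a private list via page->lru ... */
        for (i = 0; i < 4; i++) {
            page = alloc_page(GFP_KERNEL);
            if (!page)
                break;
            list_add_tail(&page->lru, &freelist);
        }

        /* ... then hand the whole list back to the core mm at once. */
        put_pages_list(&freelist);
    }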
2021-12-20  iommu/amd: Simplify pagetable freeing  (Robin Murphy, 1 file, -48/+34)
For reasons unclear, pagetable freeing is an effectively recursive method implemented via an elaborate system of templated functions that turns out to account for 25% of the object file size. Implementing it using regular straightforward recursion makes the code simpler, and seems like a good thing to do before we work on it further. As part of that, also fix the types to avoid all the needless casting back and forth, which just gets in the way.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/d3d00c9f3fa0df4756b867072c201e6e82f9ce39.1639753638.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
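Written as plain recursion, the freeing routine descends into every present, non-leaf entry and then frees the page backing the current table on the way back up. A hedged sketch of that shape (the PTE accessors are illustrative stand-ins, not the driver's):

    #include <linux/mm.h>
    #include <linux/types.h>

    #define PTES_PER_TABLE_SKETCH  512      /* one 4K table of 8-byte entries */

    /* Stand-ins, bodies elided: presence test, leaf/huge-page test, and
     * conversion of a table PTE to the virtual address of the next level. */
    static bool pte_present_sketch(u64 pte);
    static bool pte_is_leaf_sketch(u64 pte);
    static u64 *pte_next_table_sketch(u64 pte);

    static void free_pgtable_sketch(u64 *pt, int level)
    {
        int i;

        for (i = 0; i < PTES_PER_TABLE_SKETCH; i++) {
            u64 pte = pt[i];

            if (!pte_present_sketch(pte) || pte_is_leaf_sketch(pte))
                continue;

            /* Recurse into the next-level table first ... */
            if (level > 1)
                free_pgtable_sketch(pte_next_table_sketch(pte), level - 1);
        }

        /* ... then free the page holding this table. */
        free_page((unsigned long)pt);
    }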
2021-07-26  iommu: Streamline iommu_iova_to_phys()  (Robin Murphy, 1 file, -3/+0)
If people are going to insist on calling iommu_iova_to_phys() pointlessly and expecting it to work, we can at least do ourselves a favour by handling those cases in the core code, rather than repeatedly across an inconsistent handful of drivers. Since all the existing drivers implement the internal callback, and any future ones are likely to want to work with iommu-dma, which relies on iova_to_phys a fair bit, we may as well remove that currently-redundant check as well and consider it mandatory.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/f564f3f6ff731b898ff7a898919bf871c2c7745a.1626354264.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
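In other words, the generic wrapper handles the trivial cases itself and otherwise calls straight into the driver callback, which is now assumed to exist. Roughly this shape (a sketch of the idea, not a copy of the actual core code):

    #include <linux/iommu.h>

    static phys_addr_t iova_to_phys_sketch(struct iommu_domain *domain,
                                           dma_addr_t iova)
    {
        /* Identity domains translate 1:1; no driver help needed. */
        if (domain->type == IOMMU_DOMAIN_IDENTITY)
            return iova;

        /* Every driver is expected to implement the callback, so the
         * old "callback may be NULL" check is gone. */
        return domain->ops->iova_to_phys(domain, iova);
    }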
2021-03-04  iommu/amd: Fix sleeping in atomic in increase_address_space()  (Andrey Ryabinin, 1 file, -4/+6)
increase_address_space() calls get_zeroed_page(gfp) under a spin_lock with interrupts disabled. The gfp flags passed to increase_address_space() may allow sleeping, so it comes to this:

  BUG: sleeping function called from invalid context at mm/page_alloc.c:4342
  in_atomic(): 1, irqs_disabled(): 1, pid: 21555, name: epdcbbf1qnhbsd8
  Call Trace:
   dump_stack+0x66/0x8b
   ___might_sleep+0xec/0x110
   __alloc_pages_nodemask+0x104/0x300
   get_zeroed_page+0x15/0x40
   iommu_map_page+0xdd/0x3e0
   amd_iommu_map+0x50/0x70
   iommu_map+0x106/0x220
   vfio_iommu_type1_ioctl+0x76e/0x950 [vfio_iommu_type1]
   do_vfs_ioctl+0xa3/0x6f0
   ksys_ioctl+0x66/0x70
   __x64_sys_ioctl+0x16/0x20
   do_syscall_64+0x4e/0x100
   entry_SYSCALL_64_after_hwframe+0x44/0xa9

Fix this by moving get_zeroed_page() out of the spin_lock/unlock section.

Fixes: 754265bcab ("iommu/amd: Fix race in increase_address_space()")
Signed-off-by: Andrey Ryabinin <arbn@yandex-team.com>
Acked-by: Will Deacon <will@kernel.org>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20210217143004.19165-1-arbn@yandex-team.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
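The fix follows the standard pattern for this class of bug: do the sleeping allocation before taking the lock, then under the lock either install the page or, if someone else got there first, throw it away afterwards. A hedged sketch of that pattern (lock, flag and helper names are made up):

    #include <linux/gfp.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(sketch_lock);

    static int grow_table_sketch(gfp_t gfp, bool *needs_growing)
    {
        unsigned long flags;
        u64 *new_root;

        /* May sleep, so allocate before disabling interrupts. */
        new_root = (u64 *)get_zeroed_page(gfp);
        if (!new_root)
            return -ENOMEM;

        spin_lock_irqsave(&sketch_lock, flags);
        if (*needs_growing) {
            /* install new_root as the new top level here */
            *needs_growing = false;
            new_root = NULL;        /* ownership handed over */
        }
        spin_unlock_irqrestore(&sketch_lock, flags);

        /* Someone else grew the table first: drop the unused page. */
        if (new_root)
            free_page((unsigned long)new_root);
        return 0;
    }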
2021-01-28  iommu/amd: Introduce iommu_v1_map_page and iommu_v1_unmap_page  (Suravee Suthikulpanit, 1 file, -13/+12)
These implement map and unmap for the AMD IOMMU v1 pagetable, which will be used by the IO page table framework. Also clean up unused extern function declarations.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-13-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
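In the io-pgtable framework these become the per-format page-table operations that the core calls through. At the time of this change the hookup looked roughly like the sketch below (the v1_*_sketch functions are illustrative stand-ins); the ->map/->unmap hooks themselves were later superseded by ->map_pages/->unmap_pages, as the 2022 entries at the top of this log show:

    #include <linux/io-pgtable.h>

    /* Illustrative v1 implementations, declared only to show the wiring. */
    static int v1_map_sketch(struct io_pgtable_ops *ops, unsigned long iova,
                             phys_addr_t paddr, size_t size, int prot,
                             gfp_t gfp);
    static size_t v1_unmap_sketch(struct io_pgtable_ops *ops, unsigned long iova,
                                  size_t size, struct iommu_iotlb_gather *gather);
    static phys_addr_t v1_iova_to_phys_sketch(struct io_pgtable_ops *ops,
                                              unsigned long iova);

    /* Called from the format's alloc hook to fill in the ops the core uses. */
    static void v1_wire_up_ops_sketch(struct io_pgtable_ops *ops)
    {
        ops->map          = v1_map_sketch;
        ops->unmap        = v1_unmap_sketch;
        ops->iova_to_phys = v1_iova_to_phys_sketch;
    }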
2021-01-28  iommu/amd: Introduce iommu_v1_iova_to_phys  (Suravee Suthikulpanit, 1 file, -0/+22)
This implements iova_to_phys for the AMD IOMMU v1 pagetable, which will be used by the IO page table framework.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-12-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
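iova_to_phys is essentially a software walk of the same table the hardware walks: find the leaf PTE covering the IOVA, then combine the frame address from the PTE with the offset bits of the IOVA. A hedged sketch (fetch_pte_sketch() and the present-bit test are illustrative, not the driver's exact code):

    #include <linux/io-pgtable.h>

    /* Stand-in, body elided: returns the leaf PTE covering 'iova' and the
     * size of the region it maps, or NULL if nothing is mapped there. */
    static u64 *fetch_pte_sketch(struct io_pgtable_ops *ops, unsigned long iova,
                                 unsigned long *page_size);

    static phys_addr_t iova_to_phys_sketch(struct io_pgtable_ops *ops,
                                           unsigned long iova)
    {
        unsigned long offset_mask, pte_pgsize;
        u64 *pte;

        pte = fetch_pte_sketch(ops, iova, &pte_pgsize);
        if (!pte || !(*pte & 1))        /* bit 0 as the present bit, here */
            return 0;

        /* Frame bits come from the PTE, offset bits from the IOVA. */
        offset_mask = pte_pgsize - 1;
        return (*pte & ~(u64)offset_mask & ((1ULL << 52) - 1)) |
               (iova & offset_mask);
    }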
2021-01-28  iommu/amd: Refactor fetch_pte to use struct amd_io_pgtable  (Suravee Suthikulpanit, 1 file, -6/+7)
Refactor fetch_pte() to take struct amd_io_pgtable, which simplifies the function. There is no functional change.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-11-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/amd: Rename variables to be consistent with struct io_pgtable_ops  (Suravee Suthikulpanit, 1 file, -16/+15)
There is no functional change.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-10-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/amd: Remove amd_iommu_domain_get_pgtable  (Suravee Suthikulpanit, 1 file, -24/+12)
Since the IO page table root and mode parameters have been moved into struct amd_io_pgtable, this function is no longer needed. Therefore, remove it along with struct domain_pgtable.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-9-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/amd: Restructure code for freeing page table  (Suravee Suthikulpanit, 1 file, -17/+24)
Consolidate the freeing logic into the v1_free_pgtable() helper function, which is called from the IO page table framework.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-8-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/amd: Move IO page table related functions  (Suravee Suthikulpanit, 1 file, -0/+473)
Prepare for migrating to the IO page table framework. There is no functional change.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-7-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/amd: Prepare for generic IO page table framework  (Suravee Suthikulpanit, 1 file, -0/+69)
Add the initial hook-up code to implement the generic IO page table framework.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-3-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
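Hooking into the framework means providing a struct io_pgtable_init_fns with alloc/free callbacks for the new format; the io-pgtable core keeps a table of these, indexed by format, and hands callers a matching io_pgtable_ops via alloc_io_pgtable_ops(). A minimal sketch of that registration, with hypothetical names and the alloc/free bodies elided:

    #include <linux/io-pgtable.h>

    /* Illustrative alloc/free hooks for the format; bodies elided. */
    static struct io_pgtable *v1_alloc_pgtable_sketch(struct io_pgtable_cfg *cfg,
                                                      void *cookie);
    static void v1_free_pgtable_sketch(struct io_pgtable *iop);

    /* What the io-pgtable core looks up for this page-table format. */
    struct io_pgtable_init_fns v1_init_fns_sketch = {
        .alloc = v1_alloc_pgtable_sketch,
        .free  = v1_free_pgtable_sketch,
    };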