path: root/include/linux/iommu.h
Age  Commit message  Author  Files  Lines
2021-02-12  Merge branches 'arm/renesas', 'arm/smmu', 'x86/amd', 'x86/vt-d' and 'core' into next  (Joerg Roedel, 1 file, -16/+5)
2021-02-02  iommu: Check dev->iommu in dev_iommu_priv_get() before dereferencing it  (Joerg Roedel, 1 file, -1/+4)
dev_iommu_priv_get() needs a check similar to the one in dev_iommu_fwspec_get() to make sure no NULL pointer is dereferenced. Fixes: 05a0542b456e1 ("iommu/amd: Store dev_data as device iommu private data") Cc: stable@vger.kernel.org # v5.8+ Link: https://lore.kernel.org/r/20210202145419.29143-1-joro@8bytes.org Reference: https://bugzilla.kernel.org/show_bug.cgi?id=211241 Signed-off-by: Joerg Roedel <jroedel@suse.de>
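For illustration, a minimal sketch of the guarded accessor this fix describes (treat it as an illustration of the idea, not a verbatim copy of the patch):

    static inline void *dev_iommu_priv_get(struct device *dev)
    {
            /* dev->iommu may not have been allocated for this device yet */
            if (!dev->iommu)
                    return NULL;

            return dev->iommu->priv;
    }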
2021-01-28  iommu: use the __iommu_attach_device() directly for deferred attach  (Lianbo Jiang, 1 file, -0/+1)
Currently, domain attach can be deferred from the iommu driver to the device driver. When the iommu initializes, the devices on the bus are scanned and the default groups are allocated, so some devices end up in the same group, for example:

    [ 3.859417] pci 0000:01:00.0: Adding to iommu group 16
    [ 3.864572] pci 0000:01:00.1: Adding to iommu group 16
    [ 3.869738] pci 0000:02:00.0: Adding to iommu group 17
    [ 3.874892] pci 0000:02:00.1: Adding to iommu group 17

But the attach path refuses to attach when a group contains more than one device and returns an error, which conflicts with deferred attaching. On the affected system two devices share a group, for example:

    [ 9.627014] iommu_group_device_count(): device name[0]:0000:01:00.0
    [ 9.633545] iommu_group_device_count(): device name[1]:0000:01:00.1
    ...
    [ 10.255609] iommu_group_device_count(): device name[0]:0000:02:00.0
    [ 10.262144] iommu_group_device_count(): device name[1]:0000:02:00.1

This caused the tg3 driver to fail when it calls dma_alloc_coherent() to allocate coherent memory in tg3_test_dma():

    [ 9.660310] tg3 0000:01:00.0: DMA engine test failed, aborting
    [ 9.754085] tg3: probe of 0000:01:00.0 failed with error -12
    [ 9.997512] tg3 0000:01:00.1: DMA engine test failed, aborting
    [ 10.043053] tg3: probe of 0000:01:00.1 failed with error -12
    [ 10.288905] tg3 0000:02:00.0: DMA engine test failed, aborting
    [ 10.334070] tg3: probe of 0000:02:00.0 failed with error -12
    [ 10.578303] tg3 0000:02:00.1: DMA engine test failed, aborting
    [ 10.622629] tg3: probe of 0000:02:00.1 failed with error -12

Similar failures occur in other drivers such as bnxt_en, and the problem can be reproduced easily in a kdump kernel when SME is active.

Move the handling currently in iommu_dma_deferred_attach() into the iommu core code so that it can call __iommu_attach_device() directly instead of iommu_attach_device(); the external interface iommu_attach_device() is not suitable for handling this situation.

Signed-off-by: Lianbo Jiang <lijiang@redhat.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20210126115337.20068-3-lijiang@redhat.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
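A rough sketch of what such a core-side helper can look like (an illustration of the idea under the assumption that the driver exposes an is_attach_deferred() check, not the exact merged code):

    int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
    {
            const struct iommu_ops *ops = domain->ops;

            /* Only attach now if the driver deferred it at probe time. */
            if (unlikely(ops->is_attach_deferred &&
                         ops->is_attach_deferred(domain, dev)))
                    return __iommu_attach_device(domain, dev);

            return 0;
    }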
2021-01-27  iommu: Switch gather->end to the inclusive end  (Yong Wu, 1 file, -2/+2)
Currently gather->end is an "unsigned long" which may overflow on 32-bit architectures in the corner case 0xfff00000 + 0x100000 (iova + size). Although this doesn't affect the size (end - start), it does affect the check "gather->end < end". This patch changes "end" to the real, inclusive end address (end = start + size - 1) and correspondingly updates the length to "end - start + 1". Fixes: a7d20dc19d9e ("iommu: Introduce struct iommu_iotlb_gather for batching TLB flushes") Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210107122909.16317-5-yong.wu@mediatek.com Signed-off-by: Will Deacon <will@kernel.org>
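Illustrative arithmetic for the corner case above, assuming a 32-bit unsigned long (a sketch, not code from the patch):

    static void gather_range_example(void)
    {
            unsigned long iova = 0xfff00000UL, size = 0x100000UL;

            unsigned long end_excl = iova + size;         /* wraps to 0 on 32-bit */
            unsigned long end_incl = iova + size - 1;     /* 0xffffffff, no overflow */
            unsigned long length   = end_incl - iova + 1; /* recovers 0x100000 */

            (void)end_excl; (void)end_incl; (void)length;
    }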
2021-01-27  iommu: Add iova and size as parameters in iotlb_sync_map  (Yong Wu, 1 file, -1/+2)
iotlb_sync_map allows IOMMU drivers to sync the TLB after completing the whole mapping. This patch adds iova and size as parameters to it, so that the IOMMU driver can flush the TLB for the whole range in one go after the iova mapping, improving performance. Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210107122909.16317-3-yong.wu@mediatek.com Signed-off-by: Will Deacon <will@kernel.org>
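A sketch of a driver-side implementation with the new parameters; my_flush_tlb_range() is a hypothetical driver helper, not part of the API:

    static void my_iotlb_sync_map(struct iommu_domain *domain,
                                  unsigned long iova, size_t size)
    {
            /* Flush the whole just-mapped range once instead of per page. */
            my_flush_tlb_range(domain, iova, size);
    }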
2021-01-27  iommu: Delete iommu_dev_has_feature()  (John Garry, 1 file, -7/+0)
Function iommu_dev_has_feature() has never been referenced in the tree, and there does not appear to be anything coming soon to use it, so delete it. Signed-off-by: John Garry <john.garry@huawei.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/1609940111-28563-7-git-send-email-john.garry@huawei.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-27  iommu: Delete iommu_domain_window_disable()  (John Garry, 1 file, -6/+0)
Function iommu_domain_window_disable() is not referenced in the tree, so delete it. Signed-off-by: John Garry <john.garry@huawei.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/1609940111-28563-6-git-send-email-john.garry@huawei.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-12-08  Merge branch 'for-next/iommu/vt-d' into for-next/iommu/core  (Will Deacon, 1 file, -0/+1)
Intel VT-D updates for 5.11. The main thing here is converting the code over to the iommu-dma API, which required some improvements to the core code to preserve existing functionality.

* for-next/iommu/vt-d:
  iommu/vt-d: Avoid GFP_ATOMIC where it is not needed
  iommu/vt-d: Remove set but not used variable
  iommu/vt-d: Cleanup after converting to dma-iommu ops
  iommu/vt-d: Convert intel iommu driver to the iommu ops
  iommu/vt-d: Update domain geometry in iommu_ops.at(de)tach_dev
  iommu: Add quirk for Intel graphic devices in map_sg
  iommu: Allow the dma-iommu api to use bounce buffers
  iommu: Add iommu_dma_free_cpu_cached_iovas()
  iommu: Handle freelists when using deferred flushing in iommu drivers
  iommu/vt-d: include conditionally on CONFIG_INTEL_IOMMU_SVM
2020-11-25  iommu/io-pgtable: Add a domain attribute for pagetable configuration  (Sai Prakash Ranjan, 1 file, -0/+1)
Add a new iommu domain attribute, DOMAIN_ATTR_IO_PGTABLE_CFG, for pagetable configuration. Initially it will be used to set quirks such as the system cache (aka last-level cache), so that client drivers like a GPU can request the right attributes for caching the hardware pagetables in the system cache; later it can be extended to carry other page table configuration data. Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Link: https://lore.kernel.org/r/9190aa16f378fc0a7f8e57b2b9f60b033e7eeb4f.1606287059.git.saiprakash.ranjan@codeaurora.org Signed-off-by: Will Deacon <will@kernel.org>
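A rough sketch of how a client driver might use the new attribute before attaching; the quirk constant shown is illustrative (the concrete values live in the io-pgtable code), and the function is a hypothetical example, not code from the patch:

    static int my_gpu_setup_domain(struct iommu_domain *domain, struct device *dev)
    {
            struct io_pgtable_domain_attr pgtbl_cfg = {
                    .quirks = IO_PGTABLE_QUIRK_ARM_OUTER_WBWA,  /* illustrative quirk */
            };
            int ret;

            ret = iommu_domain_set_attr(domain, DOMAIN_ATTR_IO_PGTABLE_CFG,
                                        &pgtbl_cfg);
            if (ret)
                    return ret;

            return iommu_attach_device(domain, dev);
    }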
2020-11-25  iommu: Handle freelists when using deferred flushing in iommu drivers  (Tom Murphy, 1 file, -0/+1)
Allow the iommu_unmap_fast to return newly freed page table pages and pass the freelist to queue_iova in the dma-iommu ops path. This is useful for iommu drivers (in this case the intel iommu driver) which need to wait for the ioTLB to be flushed before newly free/unmapped page table pages can be freed. This way we can still batch ioTLB free operations and handle the freelists. Signed-off-by: Tom Murphy <murphyt7@tcd.ie> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Tested-by: Logan Gunthorpe <logang@deltatee.com> Link: https://lore.kernel.org/r/20201124082057.2614359-2-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
2020-10-14  Merge tag 'iommu-updates-v5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu  (Linus Torvalds, 1 file, -17/+28)
Pull iommu updates from Joerg Roedel:

 - ARM-SMMU Updates from Will:
    - Continued SVM enablement, where page-table is shared with CPU
    - Groundwork to support integrated SMMU with Adreno GPU
    - Allow disabling of MSI-based polling on the kernel command-line
    - Minor driver fixes and cleanups (octal permissions, error messages, ...)

 - Secure Nested Paging Support for AMD IOMMU. The IOMMU will fault when a device tries DMA on memory owned by a guest. This needs new fault-types as well as a rewrite of the IOMMU memory semaphore for command completions.

 - Allow broken Intel IOMMUs (wrong address widths reported) to still be used for interrupt remapping.

 - IOMMU UAPI updates for supporting vSVA, where the IOMMU can access address spaces of processes running in a VM.

 - Support for the MT8167 IOMMU in the Mediatek IOMMU driver.

 - Device-tree updates for the Renesas driver to support r8a7742.

 - Several smaller fixes and cleanups all over the place.

* tag 'iommu-updates-v5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (57 commits)
  iommu/vt-d: Gracefully handle DMAR units with no supported address widths
  iommu/vt-d: Check UAPI data processed by IOMMU core
  iommu/uapi: Handle data and argsz filled by users
  iommu/uapi: Rename uapi functions
  iommu/uapi: Use named union for user data
  iommu/uapi: Add argsz for user filled data
  docs: IOMMU user API
  iommu/qcom: add missing put_device() call in qcom_iommu_of_xlate()
  iommu/arm-smmu-v3: Add SVA device feature
  iommu/arm-smmu-v3: Check for SVA features
  iommu/arm-smmu-v3: Seize private ASID
  iommu/arm-smmu-v3: Share process page tables
  iommu/arm-smmu-v3: Move definitions to a header
  iommu/io-pgtable-arm: Move some definitions to a header
  iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer
  iommu/amd: Re-purpose Exclusion range registers to support SNP CWWB
  iommu/amd: Add support for RMP_PAGE_FAULT and RMP_HW_ERR
  iommu/amd: Use 4K page for completion wait write-back semaphore
  iommu/tegra-smmu: Allow to group clients in same swgroup
  iommu/tegra-smmu: Fix iova->phys translation
  ...
2020-10-01  iommu/uapi: Handle data and argsz filled by users  (Jacob Pan, 1 file, -9/+19)
IOMMU user APIs are responsible for processing user data. This patch changes the interface such that user pointers can be passed into IOMMU code directly. Separate kernel APIs without user pointers are introduced for in-kernel users of the UAPI functionality. IOMMU UAPI data has a user filled argsz field which indicates the data length of the structure. User data is not trusted, argsz must be validated based on the current kernel data size, mandatory data size, and feature flags. User data may also be extended, resulting in possible argsz increase. Backward compatibility is ensured based on size and flags (or the functional equivalent fields) checking. This patch adds sanity checks in the IOMMU layer. In addition to argsz, reserved/unused fields in padding, flags, and version are also checked. Details are documented in Documentation/userspace-api/iommu.rst Signed-off-by: Liu Yi L <yi.l.liu@intel.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/1601051567-54787-6-git-send-email-jacob.jun.pan@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
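A sketch of the argsz-handling pattern described above, using a hypothetical UAPI struct and function names rather than the exact kernel symbols:

    struct example_uapi_data {
            __u32 argsz;    /* filled by user: size of the struct it passed */
            __u32 flags;
            __u64 payload[4];
    };

    static int example_copy_from_user(struct example_uapi_data *kdata,
                                      void __user *udata)
    {
            u32 minsz = offsetofend(struct example_uapi_data, flags);

            /* Fetch argsz (and flags) first, then validate it. */
            if (copy_from_user(kdata, udata, minsz))
                    return -EFAULT;
            if (kdata->argsz < minsz || kdata->argsz > sizeof(*kdata))
                    return -EINVAL;

            /* Copy the rest, but never more than the current kernel size. */
            return copy_struct_from_user(kdata, sizeof(*kdata), udata,
                                         kdata->argsz);
    }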
2020-10-01  iommu/uapi: Rename uapi functions  (Jacob Pan, 1 file, -15/+16)
User APIs such as iommu_sva_unbind_gpasid() may also be used by the kernel. Since we introduced a user pointer to the UAPI functions, in-kernel callers cannot share the same APIs. In-kernel callers are also trusted, so there is no need to validate the data. We plan to have two flavors of the same API functions: one called through ioctls, carrying a user pointer, and one called directly with valid IOMMU UAPI structs. To differentiate both, let's rename existing functions with an iommu_uapi_ prefix. Suggested-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/1601051567-54787-5-git-send-email-jacob.jun.pan@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-17  drm, iommu: Change type of pasid to u32  (Fenghua Yu, 1 file, -5/+5)
PASID is defined as a few different types in iommu including "int", "u32", and "unsigned int". To be consistent and to match with uapi definitions, define PASID and its variations (e.g. max PASID) as "u32". "u32" is also shorter and a little more explicit than "unsigned int". No PASID type change in uapi although it defines PASID as __u64 in some places. Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Acked-by: Felix Kuehling <Felix.Kuehling@amd.com> Acked-by: Joerg Roedel <jroedel@suse.de> Link: https://lkml.kernel.org/r/1600187413-163670-2-git-send-email-fenghua.yu@intel.com
2020-09-04  iommu: Rename iommu_tlb_* functions to iommu_iotlb_*  (Tom Murphy, 1 file, -5/+5)
To keep naming consistent we should stick with *iotlb*. This patch renames a few remaining functions. Signed-off-by: Tom Murphy <murphyt7@tcd.ie> Link: https://lore.kernel.org/r/20200817210051.13546-1-murphyt7@tcd.ie Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-07-29  Merge branches 'arm/renesas', 'arm/qcom', 'arm/mediatek', 'arm/omap', 'arm/exynos', 'arm/smmu', 'ppc/pamu', 'x86/vt-d', 'x86/amd' and 'core' into next  (Joerg Roedel, 1 file, -22/+16)
2020-07-08  iommu: Remove unused IOMMU_SYS_CACHE_ONLY flag  (Will Deacon, 1 file, -6/+0)
The IOMMU_SYS_CACHE_ONLY flag was never exposed via the DMA API and has no in-tree users. Remove it. Cc: Robin Murphy <robin.murphy@arm.com> Cc: "Isaac J. Manjarres" <isaacm@codeaurora.org> Cc: Joerg Roedel <joro@8bytes.org> Cc: Rob Clark <robdclark@gmail.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-06-30  iommu: Move sg_table wrapper out of CONFIG_IOMMU_SUPPORT  (Marek Szyprowski, 1 file, -16/+16)
Move the recently added sg_table wrapper out of CONFIG_IOMMU_SUPPORT to let the client code compile also when IOMMU support is disabled. Fixes: 48530d9fab0d ("iommu: add generic helper for mapping sgtable objects") Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20200630081756.18526-1-m.szyprowski@samsung.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-06-08  Merge tag 'iommu-updates-v5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu  (Linus Torvalds, 1 file, -47/+16)
Pull iommu updates from Joerg Roedel:
 "A big part of this is a change in how devices get connected to IOMMUs in the core code. It contains the change from the old add_device() / remove_device() to the new probe_device() / release_device() call-backs.

  As a result functionality that was previously in the IOMMU drivers has been moved to the IOMMU core code, including IOMMU group allocation for each device. The reason for this change was to get more robust allocation of default domains for the iommu groups.

  A couple of fixes were necessary after this was merged into the IOMMU tree, but there are no known bugs left. The last fix is applied on-top of the merge commit for the topic branches.

  Other than that change, we have:

   - Removal of the driver private domain handling in the Intel VT-d driver. This was fragile code and I am glad it is gone now.

   - More Intel VT-d updates from Lu Baolu:
      - Nested Shared Virtual Addressing (SVA) support to the Intel VT-d driver
      - Replacement of the Intel SVM interfaces to the common IOMMU SVA API
      - SVA Page Request draining support

   - ARM-SMMU Updates from Will:
      - Avoid mapping reserved MMIO space on SMMUv3, so that it can be claimed by the PMU driver
      - Use xarray to manage ASIDs on SMMUv3
      - Reword confusing shutdown message
      - DT compatible string updates
      - Allow implementations to override the default domain type

   - A new IOMMU driver for the Allwinner Sun50i platform

   - Support for ATS gets disabled for untrusted devices (like Thunderbolt devices). This includes a PCI patch, acked by Bjorn.

   - Some cleanups to the AMD IOMMU driver to make more use of IOMMU core features.

   - Unification of some printk formats in the Intel and AMD IOMMU drivers and in the IOVA code.

   - Updates for DT bindings

   - A number of smaller fixes and cleanups."

* tag 'iommu-updates-v5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (109 commits)
  iommu: Check for deferred attach in iommu_group_do_dma_attach()
  iommu/amd: Remove redundant devid checks
  iommu/amd: Store dev_data as device iommu private data
  iommu/amd: Merge private header files
  iommu/amd: Remove PD_DMA_OPS_MASK
  iommu/amd: Consolidate domain allocation/freeing
  iommu/amd: Free page-table in protection_domain_free()
  iommu/amd: Allocate page-table in protection_domain_init()
  iommu/amd: Let free_pagetable() not rely on domain->pt_root
  iommu/amd: Unexport get_dev_data()
  iommu/vt-d: Fix compile warning
  iommu/vt-d: Remove real DMA lookup in find_domain
  iommu/vt-d: Allocate domain info for real DMA sub-devices
  iommu/vt-d: Only clear real DMA device's context entries
  iommu: Remove iommu_sva_ops::mm_exit()
  uacce: Remove mm_exit() op
  iommu/sun50i: Constify sun50i_iommu_ops
  iommu/hyper-v: Constify hyperv_ir_domain_ops
  iommu/vt-d: Use pci_ats_supported()
  iommu/arm-smmu-v3: Use pci_ats_supported()
  ...
2020-06-02  Merge branches 'arm/msm', 'arm/allwinner', 'arm/smmu', 'x86/vt-d', 'hyper-v', 'core' and 'x86/amd' into next  (Joerg Roedel, 1 file, -47/+16)
2020-05-29  iommu: Remove iommu_sva_ops::mm_exit()  (Jean-Philippe Brucker, 1 file, -30/+0)
After binding a device to an mm, device drivers currently need to register a mm_exit handler. This function is called when the mm exits, to gracefully stop DMA targeting the address space and flush page faults to the IOMMU.

This is deemed too complex for the MMU release() notifier, which may be triggered by any mmput() invocation, from about 120 callsites [1]. The upcoming SVA module has an example of such complexity: the I/O Page Fault handler would need to call mmput_async() instead of mmput() after handling an IOPF, to avoid triggering the release() notifier which would in turn drain the IOPF queue and lock up.

Another concern is the DMA stop function taking too long, up to several minutes [2]. For some mmput() callers this may disturb other users. For example, if the OOM killer picks the mm bound to a device as the victim and that mm's memory is locked, if the release() takes too long, it might choose additional innocent victims to kill.

To simplify the MMU release notifier, don't forward the notification to device drivers. Since they don't stop DMA on mm exit anymore, the PASID lifetime is extended:

(1) The device driver calls bind(). A PASID is allocated. Here any DMA fault is handled by mm, and on error we don't print anything to dmesg. Userspace can easily trigger errors by issuing DMA on unmapped buffers.

(2) exit_mmap(), for example the process took a SIGKILL. This step doesn't happen during normal operations. Remove the pgd from the PASID table, since the page tables are about to be freed. Invalidate the IOTLBs. Here the device may still perform DMA on the address space. Incoming transactions are aborted but faults aren't printed out. ATS Translation Requests return Successful Translation Completions with R=W=0. PRI Page Requests return with Invalid Request.

(3) The device driver stops DMA, possibly following release of a fd, and calls unbind(). PASID table is cleared, IOTLB invalidated if necessary. The page fault queues are drained, and the PASID is freed. If DMA for that PASID is still running here, something went seriously wrong and errors should be reported.

For now remove iommu_sva_ops entirely. We might need to re-introduce them at some point, for example to notify device drivers of unhandled IOPF.

[1] https://lore.kernel.org/linux-iommu/20200306174239.GM31668@ziepe.ca/
[2] https://lore.kernel.org/linux-iommu/4d68da96-0ad5-b412-5987-2f7a6aa796c3@amd.com/

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Acked-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Acked-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20200423125329.782066-3-jean-philippe@linaro.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-05-15  iommu: Remove functions that support private domain  (Sai Praneeth Prakhya, 1 file, -12/+0)
After moving iommu_group setup to iommu core code [1][2] and removing private domain support in vt-d [3], there are no users for functions such as iommu_request_dm_for_dev(), iommu_request_dma_domain_for_dev() and request_default_domain_for_dev(). So, remove these functions. [1] commit dce8d6964ebd ("iommu/amd: Convert to probe/release_device() call-backs") [2] commit e5d1841f18b2 ("iommu/vt-d: Convert to probe/release_device() call-backs") [3] commit 327d5b2fee91 ("iommu/vt-d: Allow 32bit devices to uses DMA domain") Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20200513224721.20504-1-sai.praneeth.prakhya@intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-05-13  iommu: add generic helper for mapping sgtable objects  (Marek Szyprowski, 1 file, -0/+16)
struct sg_table is a common structure used for describing a memory buffer. It consists of a scatterlist with memory pages and DMA addresses (sgl entry), as well as the number of scatterlist entries: CPU pages (orig_nents entry) and DMA mapped pages (nents entry). It turned out that it was a common mistake to misuse the nents and orig_nents entries, calling mapping functions with a wrong number of entries. To avoid such issues, let's introduce a common wrapper operating directly on the struct sg_table objects, which takes care of the proper use of the nents and orig_nents entries. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Joerg Roedel <jroedel@suse.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
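The wrapper boils down to something like the following sketch: always feed orig_nents (the CPU-page entry count) into the IOMMU mapping path:

    static inline size_t iommu_map_sgtable(struct iommu_domain *domain,
                                           unsigned long iova,
                                           struct sg_table *sgt, int prot)
    {
            /* orig_nents, not nents: iommu_map_sg() walks CPU pages */
            return iommu_map_sg(domain, iova, sgt->sgl, sgt->orig_nents, prot);
    }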
2020-05-05  iommu: Unexport iommu_group_get_for_dev()  (Joerg Roedel, 1 file, -1/+0)
The function is now only used in IOMMU core code and shouldn't be used outside of it anyway, so remove the export for it. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20200429133712.31431-35-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-05-05  iommu: Remove add_device()/remove_device() code-paths  (Joerg Roedel, 1 file, -4/+0)
All drivers are converted to use the probe/release_device() call-backs, so the add_device/remove_device() pointers are unused and the code using them can be removed. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20200429133712.31431-33-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-05-05  iommu: Export bus_iommu_probe() and make it safe for re-probing  (Joerg Roedel, 1 file, -0/+1)
Add a check to the bus_iommu_probe() call-path to make sure it ignores devices which have already been successfully probed. Then export the bus_iommu_probe() function so it can be used by IOMMU drivers. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20200429133712.31431-14-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-05-05  iommu: Add probe_device() and release_device() call-backs  (Joerg Roedel, 1 file, -0/+9)
Add call-backs to 'struct iommu_ops' as an alternative to the add_device() and remove_device() call-backs, which will be removed when all drivers are converted. The new call-backs will not setup IOMMU groups and domains anymore, so also add a probe_finalize() call-back where the IOMMU driver can do per-device setup work which requires the device to be set up with a group and a domain. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20200429133712.31431-8-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
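Roughly, the added members of struct iommu_ops look like this (an abridged sketch, other members elided):

    struct iommu_ops {
            /* ... other call-backs elided ... */
            struct iommu_device *(*probe_device)(struct device *dev);
            void (*release_device)(struct device *dev);
            /* runs after the device has been given a group and a domain */
            void (*probe_finalize)(struct device *dev);
    };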
2020-05-05  iommu: Add def_domain_type() callback in iommu_ops  (Sai Praneeth Prakhya, 1 file, -0/+6)
Some devices are required to use a specific type (identity or dma) of default domain when they are used with a vendor iommu. When the system-level default domain type is different from it, the vendor iommu driver has to request a new default domain with iommu_request_dma_domain_for_dev() or iommu_request_dm_for_dev() in the add_dev() callback. Unfortunately, these two helpers only work when the group hasn't been assigned to any other devices; hence, some vendor iommu drivers have to use a private domain if they fail to request a new default one. This adds a def_domain_type() callback to iommu_ops, so that the generic IOMMU layer can be made aware of any special default-domain requirement of a device. Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> [ jroedel@suse.de: Added iommu_get_def_domain_type() function and use it to allocate the default domain ] Co-developed-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20200429133712.31431-3-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
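A sketch of a driver-side implementation: force an identity (passthrough) default domain for a class of devices. my_device_needs_passthrough() is a hypothetical predicate, not part of the API:

    static int my_def_domain_type(struct device *dev)
    {
            if (my_device_needs_passthrough(dev))
                    return IOMMU_DOMAIN_IDENTITY;

            return 0;       /* no preference, fall back to the system default */
    }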
2020-03-27  iommu: Move fwspec->iommu_priv to struct dev_iommu  (Joerg Roedel, 1 file, -3/+4)
Move the pointer for iommu private data from struct iommu_fwspec to struct dev_iommu. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Will Deacon <will@kernel.org> # arm-smmu Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20200326150841.10083-17-joro@8bytes.org
2020-03-27  iommu: Introduce accessors for iommu private data  (Joerg Roedel, 1 file, -0/+10)
Add dev_iommu_priv_get/set() functions to access per-device iommu private data. This makes it easier to move the pointer to a different location. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Will Deacon <will@kernel.org> # arm-smmu Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20200326150841.10083-9-joro@8bytes.org
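A short usage sketch of the accessor pair; struct my_master stands in for a driver's private per-device data and is hypothetical:

    static int my_init_master(struct device *dev)
    {
            struct my_master *master = kzalloc(sizeof(*master), GFP_KERNEL);

            if (!master)
                    return -ENOMEM;

            dev_iommu_priv_set(dev, master);        /* stash private data */
            return 0;
    }

    static void my_teardown_master(struct device *dev)
    {
            kfree(dev_iommu_priv_get(dev));         /* retrieve and free */
            dev_iommu_priv_set(dev, NULL);
    }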
2020-03-27  iommu: Move iommu_fwspec to struct dev_iommu  (Joerg Roedel, 1 file, -4/+8)
Move the iommu_fwspec pointer in struct device into struct dev_iommu. This is a step in the effort to reduce the iommu related pointers in struct device to one. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Will Deacon <will@kernel.org> # arm-smmu Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lore.kernel.org/r/20200326150841.10083-7-joro@8bytes.org
2020-03-27  iommu: Rename struct iommu_param to dev_iommu  (Joerg Roedel, 1 file, -2/+2)
The term dev_iommu aligns better with other existing structures and their accessor functions. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Will Deacon <will@kernel.org> # arm-smmu Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lore.kernel.org/r/20200326150841.10083-6-joro@8bytes.org
2020-03-27  iommu: Define dev_iommu_fwspec_get() for !CONFIG_IOMMU_API  (Joerg Roedel, 1 file, -0/+4)
There are users outside of the IOMMU code that need to call that function. Define it for !CONFIG_IOMMU_API too so that compilation does not break. Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20200326150841.10083-2-joro@8bytes.org
2020-02-28  iommu: Use C99 flexible array in fwspec  (Robin Murphy, 1 file, -1/+1)
Although the 1-element array was a typical pre-C99 way to implement variable-length structures, and indeed is a fundamental construct in the APIs of certain other popular platforms, there's no good reason for it here (and in particular the sizeof() trick is far too "clever" for its own good). We can just as easily implement iommu_fwspec's preallocation behaviour using a standard flexible array member, so let's make it look the way most readers would expect. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
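An abridged sketch of the result (unrelated fields elided); with a flexible array member, the preallocation can use struct_size() instead of the old sizeof() trick:

    struct iommu_fwspec {
            /* ... other fields elided ... */
            unsigned int    num_ids;
            u32             ids[];          /* was: u32 ids[1]; */
    };

    /* Growing the trailing array now sizes the allocation with struct_size(): */
    static struct iommu_fwspec *fwspec_grow(struct iommu_fwspec *fwspec,
                                            unsigned int num_ids)
    {
            return krealloc(fwspec, struct_size(fwspec, ids, num_ids),
                            GFP_KERNEL);
    }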
2020-01-24  Merge branches 'iommu/fixes', 'arm/smmu', 'x86/amd', 'x86/vt-d' and 'core' into next  (Joerg Roedel, 1 file, -3/+16)
2020-01-15  iommu/arm-smmu-v3: Parse PASID devicetree property of platform devices  (Jean-Philippe Brucker, 1 file, -0/+2)
For platform devices that support SubstreamID (SSID), firmware provides the number of supported SSID bits. Restrict it to what the SMMU supports and cache it into master->ssid_bits, which will also be used for PCI PASID. Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-10  drivers/iommu: Initialise module 'owner' field in iommu_device_set_ops()  (Will Deacon, 1 file, -2/+9)
Requiring each IOMMU driver to initialise the 'owner' field of their 'struct iommu_ops' is error-prone and easily forgotten. Follow the example set by PCI and USB by assigning THIS_MODULE automatically when registering the ops structure with IOMMU core. Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Suggested-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Will Deacon <will@kernel.org>
2019-12-23  iommu: Implement generic_iommu_put_resv_regions()  (Thierry Reding, 1 file, -0/+2)
Implement a generic function for removing reserved regions. This can be used by drivers that don't do anything fancy with these regions other than allocating memory for them. Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23  drivers/iommu: Take a ref to the IOMMU driver prior to ->add_device()  (Will Deacon, 1 file, -1/+3)
To avoid accidental removal of an active IOMMU driver module, take a reference to the driver module in 'iommu_probe_device()' immediately prior to invoking the '->add_device()' callback and hold it until after the device has been removed by '->remove_device()'. Suggested-by: Joerg Roedel <joro@8bytes.org> Signed-off-by: Will Deacon <will@kernel.org> Tested-by: John Garry <john.garry@huawei.com> # smmu v3 Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-12  Merge branches 'iommu/fixes', 'arm/qcom', 'arm/renesas', 'arm/rockchip', 'arm/mediatek', 'arm/tegra', 'arm/smmu', 'x86/amd', 'x86/vt-d', 'virtio' and 'core' into next  (Joerg Roedel, 1 file, -5/+60)
2019-11-07  iommu/io-pgtable-arm: Rename IOMMU_QCOM_SYS_CACHE and improve doc  (Will Deacon, 1 file, -4/+4)
The 'IOMMU_QCOM_SYS_CACHE' IOMMU protection flag is exposed to all users of the IOMMU API. Despite its name, the idea behind it isn't especially tied to Qualcomm implementations and could conceivably be used by other systems. Rename it to 'IOMMU_SYS_CACHE_ONLY' and update the comment to describe a bit better the idea behind it. Cc: Robin Murphy <robin.murphy@arm.com> Cc: "Isaac J. Manjarres" <isaacm@codeaurora.org> Signed-off-by: Will Deacon <will@kernel.org>
2019-10-15  iommu: Introduce guest PASID bind function  (Jacob Pan, 1 file, -0/+22)
Guest shared virtual address (SVA) may require host to shadow guest PASID tables. Guest PASID can also be allocated from the host via enlightened interfaces. In this case, guest needs to bind the guest mm, i.e. cr3 in guest physical address, to the actual PASID table in the host IOMMU. Nesting will be turned on such that guest virtual address can go through a two level translation:
- 1st level translates GVA to GPA
- 2nd level translates GPA to HPA
This patch introduces APIs to bind guest PASID data to the assigned device entry in the physical IOMMU. See the diagram below for usage explanation.

[ASCII diagram, flattened in this view: the guest's vIOMMU PASID entry (first level only, owned by the guest process mm) is shadowed into the host pIOMMU PASID entry after a PASID cache flush and, if needed, a guest-PASID to host-PASID conversion; the host binds the first level for GVA->GPA and sets the second level to GPA->HPA for nested translation.]

Where:
- FL = First level/stage one page tables
- SL = Second level/stage two page tables
- GP = Guest PASID
- HP = Host PASID
* Conversion needed if non-identity GP-HP mapping option is chosen.

Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Liu Yi L <yi.l.liu@intel.com> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.com> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15  iommu: Introduce cache_invalidate API  (Yi L Liu, 1 file, -0/+14)
In any virtualization use case, when the first translation stage is "owned" by the guest OS, the host IOMMU driver has no knowledge of caching structure updates unless the guest invalidation activities are trapped by the virtualizer and passed down to the host. Since the invalidation data can be obtained from user space and will be written into the physical IOMMU, we must allow security checks at various layers. Therefore, a generic invalidation data format is proposed here; model-specific IOMMU drivers need to convert it into their own format. Signed-off-by: Yi L Liu <yi.l.liu@intel.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.com> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15  iommu: Add gfp parameter to iommu_ops::map  (Tom Murphy, 1 file, -1/+20)
Add a gfp_t parameter to the iommu_ops::map function. Remove the needless locking in the AMD iommu driver. The iommu_ops::map function (or the iommu_map function which calls it) was always supposed to be sleepable (according to Joerg's comment in this thread: https://lore.kernel.org/patchwork/patch/977520/ ) and so should probably have had a "might_sleep()" since it was written. However, currently the dma-iommu api can call iommu_map in an atomic context, which it shouldn't do. This doesn't cause any problems because any iommu driver which uses the dma-iommu api uses GFP_ATOMIC in its iommu_ops::map function. But doing this wastes the memory allocator's atomic pools. Signed-off-by: Tom Murphy <murphyt7@tcd.ie> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
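The resulting prototypes look roughly like this sketch; sleepable callers keep using iommu_map(), while atomic-context callers get an _atomic variant that passes GFP_ATOMIC down:

    struct iommu_ops {
            /* ... other call-backs elided ... */
            int (*map)(struct iommu_domain *domain, unsigned long iova,
                       phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
    };

    /* Callers pick the flavour matching their context: */
    int iommu_map(struct iommu_domain *domain, unsigned long iova,
                  phys_addr_t paddr, size_t size, int prot);
    int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova,
                         phys_addr_t paddr, size_t size, int prot);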
2019-08-23  iommu: Add helpers to set/get default domain type  (Joerg Roedel, 1 file, -0/+16)
Add a couple of functions to allow changing the default domain type from architecture code, and a function for iommu drivers to query whether the default domain is passthrough. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-07-29  iommu: Pass struct iommu_iotlb_gather to ->unmap() and ->iotlb_sync()  (Will Deacon, 1 file, -3/+4)
To allow IOMMU drivers to batch up TLB flushing operations and postpone them until ->iotlb_sync() is called, extend the prototypes for the ->unmap() and ->iotlb_sync() IOMMU ops callbacks to take a pointer to the current iommu_iotlb_gather structure. All affected IOMMU drivers are updated, but there should be no functional change since the extra parameter is ignored for now. Signed-off-by: Will Deacon <will@kernel.org>
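The extended prototypes look roughly like this (abridged sketch):

    struct iommu_ops {
            /* ... other call-backs elided ... */
            size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
                            size_t size, struct iommu_iotlb_gather *iotlb_gather);
            void (*iotlb_sync)(struct iommu_domain *domain,
                               struct iommu_iotlb_gather *iotlb_gather);
    };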
2019-07-24  iommu: Introduce iommu_iotlb_gather_add_page()  (Will Deacon, 1 file, -0/+31)
Introduce a helper function for drivers to use when updating an iommu_iotlb_gather structure in response to an ->unmap() call, rather than having to open-code the logic in every page-table implementation. Signed-off-by: Will Deacon <will@kernel.org>
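A sketch of the helper's logic as introduced here: merge contiguous, same-granule pages into the pending gather range, and sync first if the new page doesn't fit. (At this point in the history the flush helper was still called iommu_tlb_sync(), see the 2020-09-04 rename above, and gather->end was still exclusive, see the 2021-01-27 change above.)

    static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain,
                                                   struct iommu_iotlb_gather *gather,
                                                   unsigned long iova, size_t size)
    {
            unsigned long start = iova, end = start + size;

            /*
             * If the new page is disjoint from the current range or is mapped
             * at a different granularity, flush what has been gathered so far
             * so the structure can be rewritten for the new granule.
             */
            if (gather->pgsize != size ||
                end < gather->start || start > gather->end) {
                    if (gather->pgsize)
                            iommu_tlb_sync(domain, gather);
                    gather->pgsize = size;
            }

            if (gather->end < end)
                    gather->end = end;
            if (gather->start > start)
                    gather->start = start;
    }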
2019-07-24  iommu: Introduce struct iommu_iotlb_gather for batching TLB flushes  (Will Deacon, 1 file, -4/+39)
To permit batching of TLB flushes across multiple calls to the IOMMU driver's ->unmap() implementation, introduce a new structure for tracking the address range to be flushed and the granularity at which the flushing is required. This is hooked into the IOMMU API and its callers are updated to make use of the new structure. Subsequent patches will plumb this into the IOMMU drivers as well, but for now the gathering information is ignored. Signed-off-by: Will Deacon <will@kernel.org>
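A sketch of the tracking structure and its initialiser as described above (gather->end is exclusive here; it becomes inclusive in the 2021-01-27 change above):

    struct iommu_iotlb_gather {
            unsigned long   start;
            unsigned long   end;            /* exclusive at this point */
            size_t          pgsize;         /* granule of the pending flushes */
    };

    static inline void iommu_iotlb_gather_init(struct iommu_iotlb_gather *gather)
    {
            /* start = ULONG_MAX marks the gather as empty */
            *gather = (struct iommu_iotlb_gather) {
                    .start  = ULONG_MAX,
            };
    }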
2019-07-24  iommu: Remove empty iommu_tlb_range_add() callback from iommu_ops  (Will Deacon, 1 file, -15/+0)
Commit add02cfdc9bc ("iommu: Introduce Interface for IOMMU TLB Flushing") added three new TLB flushing operations to the IOMMU API so that the underlying driver operations can be batched when unmapping large regions of IO virtual address space. However, the ->iotlb_range_add() callback has not been implemented by any IOMMU drivers (amd_iommu.c implements it as an empty function, which incurs the overhead of an indirect branch). Instead, drivers either flush the entire IOTLB in the ->iotlb_sync() callback or perform the necessary invalidation during ->unmap(). Attempting to implement ->iotlb_range_add() for arm-smmu-v3.c revealed two major issues: 1. The page size used to map the region in the page-table is not known, and so it is not generally possible to issue TLB flushes in the most efficient manner. 2. The only mutable state passed to the callback is a pointer to the iommu_domain, which can be accessed concurrently and therefore requires expensive synchronisation to keep track of the outstanding flushes. Remove the callback entirely in preparation for extending ->unmap() and ->iotlb_sync() to update a token on the caller's stack. Signed-off-by: Will Deacon <will@kernel.org>
2019-07-04  Merge branches 'x86/vt-d', 'x86/amd', 'arm/smmu', 'arm/omap', 'generic-dma-ops' and 'core' into next  (Joerg Roedel, 1 file, -13/+94)