path: root/drivers/iommu
Age | Commit message | Author | Files | Lines
2021-01-29 | iommu/ipmmu-vmsa: Allow SDHI devices | Yoshihiro Shimoda | 1 | -0/+4

    Add SDHI devices into devices_allowlist.

    Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
    Link: https://lore.kernel.org/r/1611838980-4940-3-git-send-email-yoshihiro.shimoda.uh@renesas.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-29 | iommu/ipmmu-vmsa: Refactor ipmmu_of_xlate() | Yoshihiro Shimoda | 1 | -31/+18

    Refactor ipmmu_of_xlate() to improve readability/scalability.

    Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
    Link: https://lore.kernel.org/r/1611838980-4940-2-git-send-email-yoshihiro.shimoda.uh@renesas.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-29 | iommu/vt-d: Use INVALID response code instead of FAILURE | Lu Baolu | 1 | -4/+1

    The VT-d IOMMU returns RESPONSE_FAILURE for a page request in the
    following cases:

    - when it gets a Page_Request with no PASID;
    - when it gets a Page_Request with a PASID that is not in use for this
      device.

    This is allowed by the spec, but the IOMMU driver doesn't support such
    cases today. When the device receives RESPONSE_FAILURE, it puts its
    state machine into the HALT state. If we then try to unload the driver,
    it hangs, since the device doesn't send any outbound transactions to
    the host while the driver is trying to clean things up; the only
    possible responses would be for invalidation requests. Use
    RESPONSE_INVALID instead for now, so that the device state machine
    doesn't enter the HALT state.

    Suggested-by: Ashok Raj <ashok.raj@intel.com>
    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Link: https://lore.kernel.org/r/20210126080730.2232859-3-baolu.lu@linux.intel.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
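    A hedged sketch of the behavioural change in the page-request handler;
    the bad_req condition is a hypothetical stand-in for the malformed-
    request checks, while the QI_RESP_* codes are the driver's response
    macros:

        /*
         * Malformed request or unknown PASID: answer with INVALID so the
         * device drops that request but keeps its PRI state machine
         * running, instead of FAILURE, which halts it.
         */
        if (bad_req)
        	result = QI_RESP_INVALID;	/* was: QI_RESP_FAILURE */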
2021-01-29 | iommu/vt-d: Clear PRQ overflow only when PRQ is empty | Lu Baolu | 1 | -2/+11

    It is incorrect to always clear PRO (Page Request Overflow) when it is
    set without first checking whether the overflow condition has been
    cleared. The current code assumes that if an overflow condition occurs,
    it must have been cleared by the earlier loop. However, since the code
    runs in a threaded context, the overflow condition could occur even
    after setting the head to the tail under some extreme condition. To be
    safe, we should read both head and tail again when seeing a pending
    PRO, and only clear PRO after all pending page requests have been
    handled.

    Suggested-by: Kevin Tian <kevin.tian@intel.com>
    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Link: https://lore.kernel.org/linux-iommu/MWHPR11MB18862D2EA5BD432BF22D99A48CA09@MWHPR11MB1886.namprd11.prod.outlook.com/
    Link: https://lore.kernel.org/r/20210126080730.2232859-2-baolu.lu@linux.intel.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
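    A hedged sketch of the fixed sequence, using the VT-d page-request
    register definitions (DMAR_PRS_REG, DMAR_PQH_REG, DMAR_PQT_REG); the
    surrounding handler code is elided:

        if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO) {
        	/* Re-read head/tail: requests may have arrived meanwhile. */
        	head = dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK;
        	tail = dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK;
        	if (head == tail) {
        		/* Queue confirmed empty: now safe to clear PRO. */
        		writel(DMA_PRS_PRO, iommu->reg + DMAR_PRS_REG);
        	}
        }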
2021-01-29 | iommu/io-pgtable: Remove TLBI_ON_MAP quirk | Robin Murphy | 1 | -8/+1

    IO_PGTABLE_QUIRK_TLBI_ON_MAP is now fully superseded by the core API's
    iotlb_sync_map callback.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Link: https://lore.kernel.org/r/5abb80bba3a7c371d5ffb7e59c05586deddb9a91.1611764372.git.robin.murphy@arm.com
    [will: Remove unused 'iop' local variable from arm_v7s_map()]
    Signed-off-by: Will Deacon <will@kernel.org>
2021-01-28 | iommu/msm: Hook up iotlb_sync_map | Robin Murphy | 1 | -1/+9

    The core API can now accommodate invalidate-on-map style behaviour in a
    single efficient call, so hook that up instead of having io-pgtable do
    it piecemeal.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Link: https://lore.kernel.org/r/e95223a0abf129230a0bec6743f837075f0a2fcb.1611764372.git.robin.murphy@arm.com
    Signed-off-by: Will Deacon <will@kernel.org>
2021-01-28 | iommu/amd: Adopt IO page table framework for AMD IOMMU v1 page table | Suravee Suthikulpanit | 3 | -12/+39

    Switch to using the IO page table framework for the AMD IOMMU v1 page
    table.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20201215073705.123786-14-suravee.suthikulpanit@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28 | iommu/amd: Introduce iommu_v1_map_page and iommu_v1_unmap_page | Suravee Suthikulpanit | 3 | -31/+20

    These implement map and unmap for the AMD IOMMU v1 pagetable, which
    will be used by the IO pagetable framework. Also clean up unused extern
    function declarations.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20201215073705.123786-13-suravee.suthikulpanit@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28 | iommu/amd: Introduce iommu_v1_iova_to_phys | Suravee Suthikulpanit | 2 | -15/+23

    This implements iova_to_phys for the AMD IOMMU v1 pagetable, which will
    be used by the IO page table framework.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20201215073705.123786-12-suravee.suthikulpanit@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28 | iommu/amd: Refactor fetch_pte to use struct amd_io_pgtable | Suravee Suthikulpanit | 3 | -8/+11

    Refactor fetch_pte to take struct amd_io_pgtable, which simplifies the
    function. There is no functional change.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20201215073705.123786-11-suravee.suthikulpanit@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28 | iommu/amd: Rename variables to be consistent with struct io_pgtable_ops | Suravee Suthikulpanit | 1 | -16/+15

    There is no functional change.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20201215073705.123786-10-suravee.suthikulpanit@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28 | iommu/amd: Remove amd_iommu_domain_get_pgtable | Suravee Suthikulpanit | 4 | -61/+19

    Since the IO page table root and mode parameters have been moved into
    struct amd_io_pgtable, the function is no longer needed. Therefore,
    remove it along with struct domain_pgtable.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20201215073705.123786-9-suravee.suthikulpanit@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28 | iommu/amd: Restructure code for freeing page table | Suravee Suthikulpanit | 3 | -35/+28

    Consolidate the logic into the v1_free_pgtable helper function, which
    is called from the IO page table framework.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20201215073705.123786-8-suravee.suthikulpanit@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28 | iommu/amd: Move IO page table related functions | Suravee Suthikulpanit | 3 | -474/+493

    Prepare to migrate to the IO page table framework. There is no
    functional change.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20201215073705.123786-7-suravee.suthikulpanit@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28 | iommu/amd: Declare functions as extern | Suravee Suthikulpanit | 2 | -20/+22

    Declare functions as extern and move the declarations to a header file
    so that they can be included across multiple files. There is no
    functional change.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20201215073705.123786-6-suravee.suthikulpanit@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28 | iommu/amd: Convert to using amd_io_pgtable | Suravee Suthikulpanit | 2 | -15/+11

    Make use of the new struct amd_io_pgtable in preparation to remove
    struct domain_pgtable.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20201215073705.123786-5-suravee.suthikulpanit@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28 | iommu/amd: Move pt_root to struct amd_io_pgtable | Suravee Suthikulpanit | 3 | -3/+3

    Move pt_root to struct amd_io_pgtable to better organize the data
    structure, since it contains IO page table related information.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20201215073705.123786-4-suravee.suthikulpanit@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28 | iommu/amd: Prepare for generic IO page table framework | Suravee Suthikulpanit | 6 | -11/+109

    Add initial hook-up code to implement the generic IO page table
    framework.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20201215073705.123786-3-suravee.suthikulpanit@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28 | iommu/vt-d: Do not use flush-queue when caching-mode is on | Nadav Amit | 1 | -1/+31

    When an Intel IOMMU is virtualized and a physical device is passed
    through to the VM, changes to the virtual IOMMU need to be propagated
    to the physical IOMMU. The hypervisor therefore needs to monitor PTE
    mappings in the IOMMU page-tables. Intel specifications provide a
    "caching-mode" capability that a virtual IOMMU uses to report that the
    IOMMU is virtualized and a TLB flush is needed after mapping to allow
    the hypervisor to propagate virtual IOMMU mappings to the physical
    IOMMU. To the best of my knowledge, no real physical IOMMU reports
    "caching-mode" as turned on.

    Synchronizing the virtual and the physical IOMMU tables is expensive if
    the hypervisor is unaware which PTEs have changed, as the hypervisor is
    then required to walk all the virtualized tables and look for changes.
    Consequently, domain flushes are much more expensive than page-specific
    flushes on virtualized IOMMUs with passthrough devices. The kernel
    therefore exploited the "caching-mode" indication to avoid domain
    flushing and use page-specific flushing in virtualized environments.
    See commit 78d5f0f500e6 ("intel-iommu: Avoid global flushes with
    caching mode.")

    This behavior changed after commit 13cf01744608 ("iommu/vt-d: Make use
    of iova deferred flushing"). Now, when batched TLB flushing is used
    (the default), full domain TLB flushes are performed frequently,
    requiring the hypervisor to perform expensive synchronization between
    the virtual TLB and the physical one.

    Getting batched TLB flushes to use page-specific invalidations again in
    such circumstances is not easy, since the TLB invalidation scheme
    assumes that "full" domain TLB flushes are performed for scalability.
    Disable batched TLB flushes when caching-mode is on, as the performance
    benefit from using batched TLB invalidations is likely to be much
    smaller than the overhead of the virtual-to-physical IOMMU page-table
    synchronization.

    Fixes: 13cf01744608 ("iommu/vt-d: Make use of iova deferred flushing")
    Signed-off-by: Nadav Amit <namit@vmware.com>
    Cc: David Woodhouse <dwmw2@infradead.org>
    Cc: Lu Baolu <baolu.lu@linux.intel.com>
    Cc: Joerg Roedel <joro@8bytes.org>
    Cc: Will Deacon <will@kernel.org>
    Cc: stable@vger.kernel.org
    Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
    Link: https://lore.kernel.org/r/20210127175317.1600473-1-namit@vmware.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
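    A hedged sketch of the check, assuming the VT-d driver's
    intel_iommu_strict knob and the cap_caching_mode() capability helper of
    that era:

        /*
         * Caching mode set means the IOMMU is virtualized; domain-wide
         * flushes force the hypervisor to rescan the whole page table, so
         * fall back to strict, page-selective invalidation.
         */
        if (!intel_iommu_strict && cap_caching_mode(iommu->cap)) {
        	pr_warn("IOMMU batching disabled due to virtualization\n");
        	intel_iommu_strict = 1;
        }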
2021-01-28 | iommu: use the __iommu_attach_device() directly for deferred attach | Lianbo Jiang | 2 | -15/+13

    Currently, domain attach is allowed to be deferred from the iommu
    driver to the device driver, and when the iommu initializes, the
    devices on the bus are scanned and the default groups are allocated.
    Because of this, several devices can be added to the same group, as
    below:

        [    3.859417] pci 0000:01:00.0: Adding to iommu group 16
        [    3.864572] pci 0000:01:00.1: Adding to iommu group 16
        [    3.869738] pci 0000:02:00.0: Adding to iommu group 17
        [    3.874892] pci 0000:02:00.1: Adding to iommu group 17

    But when attaching these devices, a group is not allowed to have more
    than one device; otherwise an error is returned. This conflicts with
    deferred attaching. Unfortunately, on my system there are two devices
    in the same group, for example:

        [    9.627014] iommu_group_device_count(): device name[0]:0000:01:00.0
        [    9.633545] iommu_group_device_count(): device name[1]:0000:01:00.1
        ...
        [   10.255609] iommu_group_device_count(): device name[0]:0000:02:00.0
        [   10.262144] iommu_group_device_count(): device name[1]:0000:02:00.1

    This causes the tg3 driver to fail when it calls dma_alloc_coherent()
    to allocate coherent memory in tg3_test_dma():

        [    9.660310] tg3 0000:01:00.0: DMA engine test failed, aborting
        [    9.754085] tg3: probe of 0000:01:00.0 failed with error -12
        [    9.997512] tg3 0000:01:00.1: DMA engine test failed, aborting
        [   10.043053] tg3: probe of 0000:01:00.1 failed with error -12
        [   10.288905] tg3 0000:02:00.0: DMA engine test failed, aborting
        [   10.334070] tg3: probe of 0000:02:00.0 failed with error -12
        [   10.578303] tg3 0000:02:00.1: DMA engine test failed, aborting
        [   10.622629] tg3: probe of 0000:02:00.1 failed with error -12

    Similar situations also occur in other drivers, such as bnxt_en. This
    can be reproduced easily in a kdump kernel when SME is active.

    Move the handling currently in iommu_dma_deferred_attach() into the
    iommu core code so that it can call __iommu_attach_device() directly
    instead of iommu_attach_device(). The external interface
    iommu_attach_device() is not suitable for handling this situation.

    Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Link: https://lore.kernel.org/r/20210126115337.20068-3-lijiang@redhat.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
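    A hedged sketch of the core helper this introduces, assuming the
    is_attach_deferred() hook that struct iommu_ops carried at the time:

        int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
        {
        	const struct iommu_ops *ops = domain->ops;

        	/* Bypass iommu_attach_device()'s one-device-per-group check. */
        	if (unlikely(ops->is_attach_deferred &&
        		     ops->is_attach_deferred(domain, dev)))
        		return __iommu_attach_device(domain, dev);

        	return 0;
        }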
2021-01-28 | dma-iommu: use static-key to minimize the impact in the fast-path | Lianbo Jiang | 1 | -6/+11

    Move the is_kdump_kernel() check out of iommu_dma_deferred_attach() and
    into iommu_dma_init(), and use a static key in the fast path to
    minimize the impact in the normal case.

    Co-developed-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Link: https://lore.kernel.org/r/20210126115337.20068-2-lijiang@redhat.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
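    A hedged sketch of the pattern (the key name reflects the patch's
    intent; details of the surrounding dma-iommu code may differ):

        static DEFINE_STATIC_KEY_FALSE(iommu_deferred_attach_enabled);

        static int iommu_dma_init(void)
        {
        	/* Decide once at boot instead of per mapping operation. */
        	if (is_kdump_kernel())
        		static_branch_enable(&iommu_deferred_attach_enabled);

        	return iova_cache_get();
        }
        arch_initcall(iommu_dma_init);

        /* Fast path: one patched branch instead of an is_kdump_kernel() call. */
        if (static_branch_unlikely(&iommu_deferred_attach_enabled))
        	ret = iommu_deferred_attach(dev, domain);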
2021-01-28 | iommu/vt-d: Correctly check addr alignment in qi_flush_dev_iotlb_pasid() | Lu Baolu | 1 | -1/+1

    An incorrect address mask is being used in qi_flush_dev_iotlb_pasid()
    to check the address alignment. This leads to a lot of spurious kernel
    warnings:

        [  485.837093] DMAR: Invalidate non-aligned address 7f76f47f9000, order 0
        [  485.837098] DMAR: Invalidate non-aligned address 7f76f47f9000, order 0
        [  492.494145] qi_flush_dev_iotlb_pasid: 5734 callbacks suppressed
        [  492.494147] DMAR: Invalidate non-aligned address 7f7728800000, order 11
        [  492.508965] DMAR: Invalidate non-aligned address 7f7728800000, order 11

    Fix it by checking the alignment in the right way.

    Fixes: 288d08e780088 ("iommu/vt-d: Handle non-page aligned address")
    Reported-and-tested-by: Guo Kaijie <Kaijie.Guo@intel.com>
    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Cc: Liu Yi L <yi.l.liu@intel.com>
    Link: https://lore.kernel.org/r/20210119043500.1539596-1-baolu.lu@linux.intel.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
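    A hedged before/after sketch of the shape of the fix (ALIGN() and
    IS_ALIGNED() are the kernel's standard macros):

        /* ALIGN() rounds addr up to the boundary, so it is almost never 0
         * and the warning misfires: */
        if (WARN_ON_ONCE(!ALIGN(addr, align)))		/* buggy check */

        /* IS_ALIGNED() is the actual alignment predicate: */
        if (WARN_ON_ONCE(!IS_ALIGNED(addr, align)))	/* fixed check */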
2021-01-28 | iommu/amd: Use IVHD EFR for early initialization of IOMMU features | Suravee Suthikulpanit | 3 | -7/+60

    The IOMMU Extended Feature Register (EFR) is used to communicate the
    supported features of each IOMMU to the IOMMU driver. It is normally
    read from the PCI MMIO register offset 0x30 and used by the
    iommu_feature() helper function. However, there are certain scenarios
    where the information is needed prior to PCI initialization, and
    iommu_feature() is then used prematurely, without warning. This has
    caused incorrect initialization of the IOMMU, as in commit 6d39bdee238f
    ("iommu/amd: Enforce 4k mapping for certain IOMMU data structures").

    The EFR is also available in the IVHD header, which the driver can read
    prior to PCI initialization, so default to using the IVHD EFR instead.

    Fixes: 6d39bdee238f ("iommu/amd: Enforce 4k mapping for certain IOMMU data structures")
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Tested-by: Brijesh Singh <brijesh.singh@amd.com>
    Reviewed-by: Robert Richter <rrichter@amd.com>
    Link: https://lore.kernel.org/r/20210120135002.2682-1-suravee.suthikulpanit@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28 | iommu/vt-d: Preset Access/Dirty bits for IOVA over FL | Lu Baolu | 1 | -2/+12

    The Access/Dirty bits in a first-level page table entry are set
    whenever the entry is used for address translation or a write
    permission is successfully translated. This is always true when using
    the first-level page table for kernel IOVA. Instead of wasting hardware
    cycles on updating these bits, it's better to set them up front.

    Suggested-by: Ashok Raj <ashok.raj@intel.com>
    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Link: https://lore.kernel.org/r/20210115004202.953965-1-baolu.lu@linux.intel.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
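    A hedged sketch of the idea, assuming the driver's first-level PTE bit
    macros (DMA_FL_PTE_ACCESS, DMA_FL_PTE_DIRTY) and its
    domain_use_first_level() helper; the exact attribute plumbing in the
    mapping path may differ:

        /*
         * Kernel IOVA over first level: preset A/D so hardware never has
         * to set them itself during translation.
         */
        if (domain_use_first_level(domain))
        	prot |= DMA_FL_PTE_ACCESS | DMA_FL_PTE_DIRTY;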
2021-01-28 | iommu/vt-d: Add qi_submit trace event | Lu Baolu | 1 | -0/+3

    This adds a new trace event to track submissions of requests to the
    invalidation queue. The event provides information like:

    - IOMMU name
    - Invalidation type
    - Descriptor raw data

    A sample output:

        | qi_submit: iotlb_inv dmar1: 0x100e2 0x0 0x0 0x0
        | qi_submit: dev_tlb_inv dmar1: 0x1000000003 0x7ffffffffffff001 0x0 0x0
        | qi_submit: iotlb_inv dmar2: 0x800f2 0xf9a00005 0x0 0x0

    This will be helpful for queued-invalidation-related debugging.

    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Link: https://lore.kernel.org/r/20210114090400.736104-1-baolu.lu@linux.intel.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28 | iommu/vt-d: Consolidate duplicate cache invalidation code | Lu Baolu | 2 | -62/+11

    The PASID-based IOTLB and devTLB invalidation code is duplicated in
    several places. Consolidate it by using common helpers.

    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Link: https://lore.kernel.org/r/20210114085021.717041-1-baolu.lu@linux.intel.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-27 | iommu/mediatek: Remove the tlb-ops for v7s | Yong Wu | 1 | -23/+4

    We already use the TLB operations from the iommu framework, so the TLB
    operations for v7s can be removed. Correspondingly, switch the
    parameter "cookie" to the internal structure.

    Signed-off-by: Yong Wu <yong.wu@mediatek.com>
    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/20210107122909.16317-8-yong.wu@mediatek.com
    Signed-off-by: Will Deacon <will@kernel.org>
2021-01-27 | iommu/mediatek: Gather iova in iommu_unmap to achieve tlb sync once | Yong Wu | 1 | -3/+5

    In the current iommu_unmap, the code is:

        iommu_iotlb_gather_init(&iotlb_gather);
        ret = __iommu_unmap(domain, iova, size, &iotlb_gather);
        iommu_iotlb_sync(domain, &iotlb_gather);

    We can gather the whole iova range in __iommu_unmap and then do the TLB
    synchronization in iommu_iotlb_sync. This patch implements that: gather
    the range in mtk_iommu_unmap, then have iommu_iotlb_sync perform the
    TLB synchronization for the gathered iova range. We don't call
    iommu_iotlb_gather_add_page, since our TLB synchronization is
    independent of the granule size. In this way, gather->start can no
    longer be ULONG_MAX, so remove that check. The goal is to do the TLB
    synchronization *once* per iommu_unmap.

    Signed-off-by: Yong Wu <yong.wu@mediatek.com>
    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/20210107122909.16317-7-yong.wu@mediatek.com
    Signed-off-by: Will Deacon <will@kernel.org>
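    A hedged sketch of the gathering step in mtk_iommu_unmap, assuming the
    mtk_iommu_domain layout of the time; the fields on iommu_iotlb_gather
    belong to the core API:

        static size_t mtk_iommu_unmap(struct iommu_domain *domain,
        			      unsigned long iova, size_t size,
        			      struct iommu_iotlb_gather *gather)
        {
        	struct mtk_iommu_domain *dom = to_mtk_domain(domain);
        	unsigned long end = iova + size - 1;

        	/* Widen the gathered range; sync happens later, once. */
        	if (gather->start > iova)
        		gather->start = iova;
        	if (gather->end < end)
        		gather->end = end;

        	return dom->iop->unmap(dom->iop, iova, size, gather);
        }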
2021-01-27 | iommu: Switch gather->end to the inclusive end | Yong Wu | 3 | -3/+3

    Currently gather->end is an "unsigned long", which may overflow on
    32-bit architectures in the corner case iova + size = 0xfff00000 +
    0x100000. Although this doesn't affect the size (end - start), it
    breaks the check "gather->end < end".

    This patch changes "end" to the real, inclusive end address
    (end = start + size - 1). Correspondingly, update the length to
    "end - start + 1".

    Fixes: a7d20dc19d9e ("iommu: Introduce struct iommu_iotlb_gather for batching TLB flushes")
    Signed-off-by: Yong Wu <yong.wu@mediatek.com>
    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/20210107122909.16317-5-yong.wu@mediatek.com
    Signed-off-by: Will Deacon <will@kernel.org>
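    A hedged sketch of the inclusive-end convention inside
    iommu_iotlb_gather_add_page() (include/linux/iommu.h):

        unsigned long start = iova, end = start + size - 1;

        /* end never wraps: 0xfff00000 + 0x100000 - 1 = 0xffffffff. */
        if (gather->start > start)
        	gather->start = start;
        if (gather->end < end)
        	gather->end = end;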
2021-01-27 | iommu/mediatek: Add iotlb_sync_map to sync whole the iova range | Yong Wu | 1 | -1/+9

    Remove IO_PGTABLE_QUIRK_TLBI_ON_MAP to avoid a TLB sync for each small
    chunk of memory; use the new iotlb_sync_map to sync the TLB once for
    the whole iova range of an iommu_map.

    Signed-off-by: Yong Wu <yong.wu@mediatek.com>
    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/20210107122909.16317-4-yong.wu@mediatek.com
    Signed-off-by: Will Deacon <will@kernel.org>
2021-01-27 | iommu: Add iova and size as parameters in iotlb_sync_map | Yong Wu | 2 | -4/+7

    iotlb_sync_map allows IOMMU drivers to sync the TLB after completing
    the whole mapping. This patch adds iova and size as parameters so that
    the IOMMU driver can flush the whole range at once after the iova
    mapping, improving performance.

    Signed-off-by: Yong Wu <yong.wu@mediatek.com>
    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/20210107122909.16317-3-yong.wu@mediatek.com
    Signed-off-by: Will Deacon <will@kernel.org>
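    A hedged sketch of the resulting callback shape in struct iommu_ops
    after this patch:

        /* Flush the just-mapped range in one go, not chunk by chunk. */
        void (*iotlb_sync_map)(struct iommu_domain *domain,
        		       unsigned long iova, size_t size);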
2021-01-27 | iommu: Move iotlb_sync_map out from __iommu_map | Yong Wu | 1 | -5/+18

    At the end of __iommu_map, iotlb_sync_map is always called. This patch
    moves iotlb_sync_map out of __iommu_map, since it is unnecessary to
    call it for each sg segment, especially as iotlb_sync_map currently
    flushes the whole TLB. Add a little helper, _iommu_map, for this.

    Signed-off-by: Yong Wu <yong.wu@mediatek.com>
    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/20210107122909.16317-2-yong.wu@mediatek.com
    Signed-off-by: Will Deacon <will@kernel.org>
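    A hedged sketch of the helper: map the full range first, then sync
    once. At this point in the series the callback still takes only the
    domain; the iova/size parameters are added by the patch above:

        static int _iommu_map(struct iommu_domain *domain, unsigned long iova,
        		      phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
        {
        	const struct iommu_ops *ops = domain->ops;
        	int ret;

        	ret = __iommu_map(domain, iova, paddr, size, prot, gfp);
        	if (ret == 0 && ops->iotlb_sync_map)
        		ops->iotlb_sync_map(domain);

        	return ret;
        }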
2021-01-27 | iommu/amd: Re-define amd_iommu_domain_encode_pgtable as inline | Suravee Suthikulpanit | 2 | -10/+13

    Move the function to a header file to allow inclusion in other files.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20201215073705.123786-2-suravee.suthikulpanit@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-27 | iommu/amd: remove h from printk format specifier | Tom Rix | 1 | -1/+1

    See Documentation/core-api/printk-formats.rst and commit cbacb5ab0aa0
    ("docs: printk-formats: Stop encouraging use of unnecessary %h[xudi]
    and %hh[xudi]"): standard integer promotion is already done, and %hx
    and %hhx are useless, so do not encourage the use of %h[xudi] or
    %hh[xudi].

    Signed-off-by: Tom Rix <trix@redhat.com>
    Link: https://lore.kernel.org/r/20201215213021.2090698-1-trix@redhat.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
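    A small illustration (the variable and value are hypothetical): the u8
    argument is promoted to int before printk sees it, so the length
    modifier adds nothing:

        u8 flags = 0xab;
        printk(KERN_INFO "flags: %#x\n", flags);	/* preferred    */
        printk(KERN_INFO "flags: %#hhx\n", flags);	/* discouraged  */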
2021-01-27 | iommu/amd: Use DEFINE_SPINLOCK() for spinlock | Zheng Yongjun | 1 | -3/+1

    A spinlock can be initialized statically with DEFINE_SPINLOCK() rather
    than by explicitly calling spin_lock_init().

    Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
    Link: https://lore.kernel.org/r/20201228135112.28621-1-zhengyongjun3@huawei.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
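    A minimal before/after illustration (the lock name is hypothetical):

        /* Before: declaration plus a runtime init call. */
        static spinlock_t demo_lock;
        spin_lock_init(&demo_lock);

        /* After: fully initialized at compile time. */
        static DEFINE_SPINLOCK(demo_lock);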
2021-01-27 | iommu/amd: Remove unnecessary assignment | Adrian Huang | 1 | -3/+2

    The local variables are assigned after they are declared, so there is
    no need to assign an initial value at declaration. There is also no
    need to set the local variable 'ivrs_base' to NULL after invoking
    acpi_put_table().

    Signed-off-by: Adrian Huang <ahuang12@lenovo.com>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/20201210021330.2022-1-adrianhuang0701@gmail.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-27 | iommu: Delete iommu_dev_has_feature() | John Garry | 1 | -11/+0

    Function iommu_dev_has_feature() has never been referenced in the tree,
    and there does not appear to be anything coming soon to use it, so
    delete it.

    Signed-off-by: John Garry <john.garry@huawei.com>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/1609940111-28563-7-git-send-email-john.garry@huawei.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-27 | iommu: Delete iommu_domain_window_disable() | John Garry | 1 | -9/+0

    Function iommu_domain_window_disable() is not referenced in the tree,
    so delete it.

    Signed-off-by: John Garry <john.garry@huawei.com>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/1609940111-28563-6-git-send-email-john.garry@huawei.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-27 | iommu: Stop exporting iommu_map_sg_atomic() | John Garry | 1 | -1/+0

    Function iommu_map_sg_atomic() is only referenced in dma-iommu.c, which
    can only be built-in, so stop exporting it.

    Signed-off-by: John Garry <john.garry@huawei.com>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/1609940111-28563-5-git-send-email-john.garry@huawei.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-27 | iova: Stop exporting some more functions | John Garry | 1 | -3/+0

    The following functions are not referenced outside dma-iommu.c (and
    iova.c), which can only be built-in:

    - init_iova_flush_queue()
    - free_iova_fast()
    - queue_iova()
    - alloc_iova_fast()

    So stop exporting them.

    Signed-off-by: John Garry <john.garry@huawei.com>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/1609940111-28563-4-git-send-email-john.garry@huawei.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-27 | iova: Delete copy_reserved_iova() | John Garry | 1 | -30/+0

    Since commit c588072bba6b ("iommu/vt-d: Convert intel iommu driver to
    the iommu ops"), function copy_reserved_iova() is not referenced, so
    delete it.

    Signed-off-by: John Garry <john.garry@huawei.com>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/1609940111-28563-3-git-send-email-john.garry@huawei.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-27 | iova: Make has_iova_flush_queue() private | John Garry | 1 | -1/+1

    Function has_iova_flush_queue() has no users outside iova.c, so make it
    private.

    Signed-off-by: John Garry <john.garry@huawei.com>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/1609940111-28563-2-git-send-email-john.garry@huawei.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-26 | iommu/arm-smmu-qcom: Fix mask extraction for bootloader programmed SMRs | Isaac J. Manjarres | 1 | -0/+2

    When extracting the mask for an SMR that was programmed by the
    bootloader, the SMR's valid bit is also extracted and treated as part
    of the mask, which is not correct. Consider the scenario where an SMMU
    master whose context is determined by a bootloader-programmed SMR is
    removed (omitting parts of device/driver core):

        iommu_release_device()
          -> arm_smmu_release_device()
            -> arm_smmu_master_free_smes()
              -> arm_smmu_free_sme()    /* Assume that the SME is now free */
              -> arm_smmu_write_sme()
                -> arm_smmu_write_smr() /* Construct SMR value using mask and SID */

    Since the valid bit was considered part of the mask, the SMR will be
    programmed as valid. Fix the SMR mask extraction step for
    bootloader-programmed SMRs by masking out the valid bit when we know
    we're already working with a valid SMR.

    Fixes: 07a7f2caaa5a ("iommu/arm-smmu-qcom: Read back stream mappings")
    Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
    Cc: stable@vger.kernel.org
    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Link: https://lore.kernel.org/r/1611611545-19055-1-git-send-email-isaacm@codeaurora.org
    Signed-off-by: Will Deacon <will@kernel.org>
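    A hedged sketch of the fix in the Qualcomm cfg_probe path, using the
    SMR field macros from arm-smmu.h; the key line is clearing the valid
    bit before extracting the mask, since the mask field overlaps bit 31:

        u32 smr = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_SMR(i));

        if (FIELD_GET(ARM_SMMU_SMR_VALID, smr)) {
        	/* The valid bit aliases the top bit of the mask field. */
        	smr &= ~ARM_SMMU_SMR_VALID;
        	smmu->smrs[i].id = FIELD_GET(ARM_SMMU_SMR_ID, smr);
        	smmu->smrs[i].mask = FIELD_GET(ARM_SMMU_SMR_MASK, smr);
        	smmu->smrs[i].valid = true;
        }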
2021-01-22 | iommu/arm-smmu-v3: Add support for VHE | Jean-Philippe Brucker | 2 | -7/+28

    The ARMv8.1 extensions added Virtualization Host Extensions (VHE),
    which allow running a host kernel at EL2. When using normal DMA, device
    and CPU address spaces are dissociated and do not need to implement the
    same capabilities, so VHE hasn't been used in the SMMU until now.

    With shared address spaces, however, ASIDs are shared between MMU and
    SMMU, and broadcast TLB invalidations issued by a CPU are taken into
    account by the SMMU. TLB entries on both sides need to have an
    identical exception level in order to be cleared with a single
    invalidation.

    When the CPU is using VHE, enable VHE in the SMMU for all STEs. Normal
    DMA mappings will need to use TLBI_EL2 commands instead of TLBI_NH, but
    shouldn't be otherwise affected by this change.

    Acked-by: Will Deacon <will@kernel.org>
    Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
    Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
    Link: https://lore.kernel.org/r/20210122151054.2833521-4-jean-philippe@linaro.org
    Signed-off-by: Will Deacon <will@kernel.org>
2021-01-22 | iommu/arm-smmu-v3: Make BTM optional for SVA | Jean-Philippe Brucker | 3 | -3/+25

    When BTM isn't supported by the SMMU, send invalidations on the command
    queue.

    Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
    Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
    Link: https://lore.kernel.org/r/20210122151054.2833521-3-jean-philippe@linaro.org
    Signed-off-by: Will Deacon <will@kernel.org>
2021-01-22 | iommu/arm-smmu-v3: Split arm_smmu_tlb_inv_range() | Jean-Philippe Brucker | 1 | -27/+36

    Extract some of the cmd initialization and the ATC invalidation from
    arm_smmu_tlb_inv_range(), to allow an MMU notifier to invalidate a VA
    range by ASID.

    Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
    Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
    Link: https://lore.kernel.org/r/20210122151054.2833521-2-jean-philippe@linaro.org
    Signed-off-by: Will Deacon <will@kernel.org>
2021-01-22 | iommu/arm-smmu-v3: Use DEFINE_RES_MEM() to simplify code | Zhen Lei | 1 | -5/+1

    No functional change.

    Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
    Link: https://lore.kernel.org/r/20210122131448.1167-1-thunder.leizhen@huawei.com
    Signed-off-by: Will Deacon <will@kernel.org>
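    A minimal before/after illustration of the helper from linux/ioport.h
    (the variable name, address, and size are hypothetical):

        /* Before: open-coded struct resource initialization. */
        struct resource res = {
        	.start	= ioaddr,
        	.end	= ioaddr + SZ_64K - 1,
        	.flags	= IORESOURCE_MEM,
        };

        /* After: the same resource in one line. */
        struct resource res = DEFINE_RES_MEM(ioaddr, SZ_64K);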
2021-01-22 | iommu/arm-smmu-v3: Remove the page 1 fixup | Robin Murphy | 2 | -30/+20

    Since we now keep track of page 1 via a separate pointer that already
    encapsulates aliasing to page 0 as necessary, we can remove the clunky
    fixup routine and simply use the relevant bases directly. The current
    architecture spec (IHI0070D.a) defines SMMU_{EVENTQ,PRIQ}_{PROD,CONS}
    as offsets relative to page 1, so the cleanup represents a little bit
    of convergence as well as just lines of code saved.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Link: https://lore.kernel.org/r/08d9bda570bb5681f11a2f250a31be9ef763b8c5.1611238182.git.robin.murphy@arm.com
    Signed-off-by: Will Deacon <will@kernel.org>
2021-01-22 | iommu: arm-smmu-impl: Add SM8350 qcom iommu implementation | Vinod Koul | 1 | -0/+1

    Add the SM8350 qcom iommu implementation to the qcom_smmu_impl_of_match
    table, which brings in iommu support for the SM8350 SoC.

    Signed-off-by: Vinod Koul <vkoul@kernel.org>
    Link: https://lore.kernel.org/r/20210115090322.2287538-2-vkoul@kernel.org
    Signed-off-by: Will Deacon <will@kernel.org>
2021-01-22 | iommu/arm-smmu-qcom: Add Qualcomm SC8180X impl | Bjorn Andersson | 1 | -0/+2

    The primary SMMU found in the Qualcomm SC8180X platform needs to use
    the Qualcomm implementation, so add a specific compatible for this.

    Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
    Link: https://lore.kernel.org/r/20210121014005.1612382-2-bjorn.andersson@linaro.org
    Signed-off-by: Will Deacon <will@kernel.org>