path: root/drivers/iommu
Age  Commit message  Author  Files  Lines
2017-10-18  iommu/amd: Finish TLB flush in amd_iommu_unmap()  Joerg Roedel  1  -0/+1
commit ce76353f169a6471542d999baf3d29b121dce9c0 upstream. The function only sends the flush command to the IOMMU(s), but does not wait for its completion when it returns. Fix that. Fixes: 601367d76bd1 ('x86/amd-iommu: Remove iommu_flush_domain function') Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-08  iommu/io-pgtable-arm: Check for leaf entry before dereferencing it  Oleksandr Tyshchenko  1  -1/+5
[ Upstream commit ed46e66cc1b3d684042f92dfa2ab15ee917b4cac ] Do a check for already installed leaf entry at the current level before dereferencing it in order to avoid walking the page table down with wrong pointer to the next level. Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com> CC: Will Deacon <will.deacon@arm.com> CC: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-05  iommu/amd: Fix incorrect error handling in amd_iommu_bind_pasid()  Pan Bian  1  -1/+1
commit 73dbd4a4230216b6a5540a362edceae0c9b4876b upstream. In function amd_iommu_bind_pasid(), the control flow jumps to label out_free when pasid_state->mm or mm is NULL, and mmput(mm) is called there. mmput() dereferences mm without validating it, which results in a NULL pointer dereference. This patch fixes the bug. Signed-off-by: Pan Bian <bianpan2016@163.com> Fixes: f0aac63b873b ('iommu/amd: Don't hold a reference to mm_struct') Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-05  iommu: Handle default domain attach failure  Robin Murphy  1  -13/+24
commit 797a8b4d768c58caac58ee3e8cb36a164d1b7751 upstream. We wouldn't normally expect ops->attach_dev() to fail, but on IOMMUs with limited hardware resources, or generally misconfigured systems, it is certainly possible. We report failure correctly from the external iommu_attach_device() interface, but do not do so in iommu_group_add() when attaching to the default domain. The result of failure there is that the device, group and domain all get left in a broken, part-configured state which leads to weird errors and misbehaviour down the line when IOMMU API calls sort-of-but-don't-quite work. Check the return value of __iommu_attach_device() on the default domain, and refactor the error handling paths to cope with its failure and clean up correctly in such cases. Fixes: e39cb8a3aa98 ("iommu: Make sure a device is always attached to a domain") Reported-by: Punit Agrawal <punit.agrawal@arm.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-05  iommu/vt-d: Don't over-free page table directories  David Dillow  1  -1/+1
commit f7116e115acdd74bc75a4daf6492b11d43505125 upstream. dma_pte_free_level() recurses down the IOMMU page tables and frees directory pages that are entirely contained in the given PFN range. Unfortunately, it incorrectly calculates the starting address covered by the PTE under consideration, which can lead to it clearing an entry that is still in use. This occurs if we have a scatterlist with an entry that has a length greater than 1026 MB and is aligned to 2 MB for both the IOMMU and physical addresses. For example, if __domain_mapping() is asked to map a two-entry scatterlist with 2 MB and 1028 MB segments to PFN 0xffff80000, and dma_pte_free_pagetable() is later asked to free PFNs from 0xffff80200 to 0xffffc05ff, it will also incorrectly clear the PFNs from 0xffff80000 to 0xffff801ff because of this issue. The current code will set level_pfn to 0xffff80200, and 0xffff80200-0xffffc01ff fits inside the range being cleared. Properly setting the level_pfn for the current level under consideration catches that this PTE is outside of the range being cleared. This patch also changes the value passed into dma_pte_free_level() when it recurses. This only affects the first PTE of the range being cleared, and is handled by the existing code that ensures we start our cursor no lower than start_pfn. This was found when using dma_map_sg() to map large chunks of contiguous memory, which immediately led to faults on the first access of the erroneously-deleted mappings. Fixes: 3269ee0bd668 ("intel-iommu: Fix leaks in pagetable freeing") Reviewed-by: Benjamin Serebrin <serebrin@google.com> Signed-off-by: David Dillow <dillow@google.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
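To illustrate the arithmetic described above, here is a small standalone C sketch (not the driver code) of the level_pfn calculation: the PFN range covered by the PTE at a given level starts at the cursor PFN rounded down to that level's size, and a directory page may only be freed if that whole range lies inside [start_pfn, last_pfn]. The stride constant and the isolated check are illustrative only; with the commit's example numbers, the properly aligned level_pfn (0xffff80000) falls below start_pfn, so the directory covering 0xffff80000-0xffff801ff is kept.

/* Standalone sketch, assuming a 9-bit stride per level and 4 KiB base pages
 * as in VT-d; the values mirror the example in the message above. */
#include <stdint.h>
#include <stdio.h>

#define LEVEL_STRIDE 9

static uint64_t level_size(int level)          /* PFNs covered by one PTE */
{
        return 1ULL << ((level - 1) * LEVEL_STRIDE);
}

static uint64_t level_mask(int level)
{
        return ~(level_size(level) - 1);
}

int main(void)
{
        uint64_t start_pfn = 0xffff80200ULL, last_pfn = 0xffffc05ffULL;
        uint64_t pfn = start_pfn;              /* cursor */
        int level = 3;                         /* 1 GiB regions: 0x40000 PFNs per PTE */

        /* Start of the PFN range covered by the PTE that contains 'pfn'. */
        uint64_t level_pfn = pfn & level_mask(level);

        /* Only free the directory if its whole range is inside the request. */
        int can_free = level_pfn >= start_pfn &&
                       level_pfn + level_size(level) - 1 <= last_pfn;

        printf("level_pfn=%#llx can_free=%d\n",
               (unsigned long long)level_pfn, can_free);
        return 0;
}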
2017-05-25  iommu/vt-d: Flush the IOTLB to get rid of the initial kdump mappings  KarimAllah Ahmed  1  -1/+4
commit f73a7eee900e95404b61408a23a1df5c5811704c upstream. Ever since commit 091d42e43d ("iommu/vt-d: Copy translation tables from old kernel") the kdump kernel copies the IOMMU context tables from the previous kernel. Each device's mappings will be destroyed once the driver for the respective device takes over. This unfortunately breaks the workflow of mapping and unmapping a new context to the IOMMU. The mapping function assumes that either: 1) Unmapping did the proper IOMMU flushing and it only ever flushes if the IOMMU unit supports caching invalid entries. 2) The system just booted and the initialization code took care of flushing all IOMMU caches. This assumption is not true for the kdump kernel since the context tables have been copied from the previous kernel and translations could have been cached ever since. So make sure to flush the IOTLB as well when we destroy these old copied mappings. Cc: Joerg Roedel <joro@8bytes.org> Cc: David Woodhouse <dwmw2@infradead.org> Cc: David Woodhouse <dwmw@amazon.co.uk> Cc: Anthony Liguori <aliguori@amazon.com> Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de> Acked-by: David Woodhouse <dwmw@amazon.co.uk> Fixes: 091d42e43d ("iommu/vt-d: Copy translation tables from old kernel") Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-03-30  iommu/vt-d: Fix NULL pointer dereference in device_to_iommu  Koos Vriezen  1  -1/+1
commit 5003ae1e735e6bfe4679d9bed6846274f322e77e upstream. The function device_to_iommu() in the Intel VT-d driver lacks a NULL-ptr check, resulting in this oops at boot on some platforms: BUG: unable to handle kernel NULL pointer dereference at 00000000000007ab IP: [<ffffffff8132234a>] device_to_iommu+0x11a/0x1a0 PGD 0 [...] Call Trace: ? find_or_alloc_domain.constprop.29+0x1a/0x300 ? dw_dma_probe+0x561/0x580 [dw_dmac_core] ? __get_valid_domain_for_dev+0x39/0x120 ? __intel_map_single+0x138/0x180 ? intel_alloc_coherent+0xb6/0x120 ? sst_hsw_dsp_init+0x173/0x420 [snd_soc_sst_haswell_pcm] ? mutex_lock+0x9/0x30 ? kernfs_add_one+0xdb/0x130 ? devres_add+0x19/0x60 ? hsw_pcm_dev_probe+0x46/0xd0 [snd_soc_sst_haswell_pcm] ? platform_drv_probe+0x30/0x90 ? driver_probe_device+0x1ed/0x2b0 ? __driver_attach+0x8f/0xa0 ? driver_probe_device+0x2b0/0x2b0 ? bus_for_each_dev+0x55/0x90 ? bus_add_driver+0x110/0x210 ? 0xffffffffa11ea000 ? driver_register+0x52/0xc0 ? 0xffffffffa11ea000 ? do_one_initcall+0x32/0x130 ? free_vmap_area_noflush+0x37/0x70 ? kmem_cache_alloc+0x88/0xd0 ? do_init_module+0x51/0x1c4 ? load_module+0x1ee9/0x2430 ? show_taint+0x20/0x20 ? kernel_read_file+0xfd/0x190 ? SyS_finit_module+0xa3/0xb0 ? do_syscall_64+0x4a/0xb0 ? entry_SYSCALL64_slow_path+0x25/0x25 Code: 78 ff ff ff 4d 85 c0 74 ee 49 8b 5a 10 0f b6 9b e0 00 00 00 41 38 98 e0 00 00 00 77 da 0f b6 eb 49 39 a8 88 00 00 00 72 ce eb 8f <41> f6 82 ab 07 00 00 04 0f 85 76 ff ff ff 0f b6 4d 08 88 0e 49 RIP [<ffffffff8132234a>] device_to_iommu+0x11a/0x1a0 RSP <ffffc90001457a78> CR2: 00000000000007ab ---[ end trace 16f974b6d58d0aad ]--- Add the missing pointer check. Fixes: 1c387188c60f53b338c20eee32db055dfe022a9b ("iommu/vt-d: Fix IOMMU lookup for SR-IOV Virtual Functions") Signed-off-by: Koos Vriezen <koos.vriezen@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-03-12  iommu/vt-d: Tylersburg isoch identity map check is done too late.  Ashok Raj  1  -1/+2
commit 21e722c4c8377b5bc82ad058fed12165af739c1b upstream. The check to set the identity map for Tylersburg is done too late. It needs to be done before the check for the identity_map domain is done. To: Joerg Roedel <joro@8bytes.org> To: David Woodhouse <dwmw2@infradead.org> Cc: iommu@lists.linux-foundation.org Cc: linux-kernel@vger.kernel.org Cc: Ashok Raj <ashok.raj@intel.com> Fixes: 86080ccc22 ("iommu/vt-d: Allocate si_domain in init_dmars()") Signed-off-by: Ashok Raj <ashok.raj@intel.com> Reported-by: Yunhong Jiang <yunhong.jiang@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-12  iommu/vt-d: Flush old iommu caches for kdump when the device gets context mapped  Xunlei Pang  1  -0/+19
commit aec0e86172a79eb5e44aff1055bb953fe4d47c59 upstream. We met the DMAR fault both on hpsa P420i and P421 SmartArray controllers under kdump; it can be steadily reproduced on several different machines, and the dmesg log looks like: HP HPSA Driver (v 3.4.16-0) hpsa 0000:02:00.0: using doorbell to reset controller hpsa 0000:02:00.0: board ready after hard reset. hpsa 0000:02:00.0: Waiting for controller to respond to no-op DMAR: Setting identity map for device 0000:02:00.0 [0xe8000 - 0xe8fff] DMAR: Setting identity map for device 0000:02:00.0 [0xf4000 - 0xf4fff] DMAR: Setting identity map for device 0000:02:00.0 [0xbdf6e000 - 0xbdf6efff] DMAR: Setting identity map for device 0000:02:00.0 [0xbdf6f000 - 0xbdf7efff] DMAR: Setting identity map for device 0000:02:00.0 [0xbdf7f000 - 0xbdf82fff] DMAR: Setting identity map for device 0000:02:00.0 [0xbdf83000 - 0xbdf84fff] DMAR: DRHD: handling fault status reg 2 DMAR: [DMA Read] Request device [02:00.0] fault addr fffff000 [fault reason 06] PTE Read access is not set hpsa 0000:02:00.0: controller message 03:00 timed out hpsa 0000:02:00.0: no-op failed; re-trying After some debugging, we found that the fault addr is from DMA initiated at the driver probe stage after reset (not in-flight DMA), and the corresponding pte entry value is correct; the fault is likely due to the old iommu caches of the in-flight DMA before it. Thus we need to flush the old cache after context mapping is set up for the device, where the device is supposed to finish reset at its driver probe stage and no in-flight DMA exists hereafter. I'm not sure if the hardware is responsible for invalidating all the related caches allocated in the iommu hardware before, but that seems not to be the case for hpsa; actually, many device drivers have problems properly resetting their hardware. Anyway, flushing (again) by software in the kdump kernel when the device gets context mapped, which is a quite infrequent operation, does little harm. With this patch, the problematic machine can survive the kdump tests. CC: Myron Stowe <myron.stowe@gmail.com> CC: Joseph Szczypek <jszczype@redhat.com> CC: Don Brace <don.brace@microsemi.com> CC: Baoquan He <bhe@redhat.com> CC: Dave Young <dyoung@redhat.com> Fixes: 091d42e43d21 ("iommu/vt-d: Copy translation tables from old kernel") Fixes: dbcd861f252d ("iommu/vt-d: Do not re-use domain-ids from the old kernel") Fixes: cf484d0e6939 ("iommu/vt-d: Mark copied context entries") Signed-off-by: Xunlei Pang <xlpang@redhat.com> Tested-by: Don Brace <don.brace@microsemi.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-12  iommu/vt-d: Fix pasid table size encoding  Jacob Pan  1  -1/+22
commit 65ca7f5f7d1cdde6c25172fe6107cd16902f826f upstream. Different encodings are used to represent supported PASID bits and number of PASID table entries. The current code assigns ecap_pss directly to the extended context table entry PTS, which is wrong and could result in writing non-zero bits to the reserved fields. IOMMU fault reason 11 will be reported when reserved bits are nonzero. This patch converts ecap_pss to the extended context entry PTS encoding based on VT-d spec chapter 9.4, as follows: - number of PASID bits = ecap_pss + 1 - number of PASID table entries = 2^(pts + 5) The software-assigned pasid_max limit is also respected to match the allocation limit of the PASID table. cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> cc: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Tested-by: Mika Kuoppala <mika.kuoppala@intel.com> Fixes: 2f26e0a9c9860 ('iommu/vt-d: Add basic SVM PASID support') Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
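The two encodings quoted above can be illustrated with a short standalone C sketch (the helper and the example values are hypothetical, not the driver's code): the capability reports ecap_pss, the hardware supports 2^(pss+1) PASIDs, and the extended context entry wants the smallest pts such that 2^(pts+5) covers the table actually allocated, capped by the software pasid_max limit.

#include <stdio.h>

/* Smallest pts such that 2^(pts + 5) >= entries (illustrative helper). */
static unsigned int pasid_entries_to_pts(unsigned long entries)
{
        unsigned int pts = 0;

        while ((1UL << (pts + 5)) < entries)
                pts++;
        return pts;
}

int main(void)
{
        unsigned int ecap_pss = 19;                      /* example capability value */
        unsigned long pasid_max = 0x20000;               /* software-assigned limit */
        unsigned long supported = 1UL << (ecap_pss + 1); /* 2^(pss+1) PASIDs */
        unsigned long entries = supported < pasid_max ? supported : pasid_max;
        unsigned int pts = pasid_entries_to_pts(entries);

        printf("pss=%u -> %lu entries -> pts=%u (table covers %lu)\n",
               ecap_pss, entries, pts, 1UL << (pts + 5));
        return 0;
}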
2017-01-12  iommu/amd: Fix the left value check of cmd buffer  Huang Rui  1  -1/+1
commit 432abf68a79332282329286d190e21fe3ac02a31 upstream. The generic command buffer entry is 128 bits (16 bytes), so the tail and head pointer offsets should be 16-byte aligned and advance by 0x10 per command. When the command buffer is full, head = (tail + 0x10) % CMD_BUFFER_SIZE. So when the remaining space in the command buffer can hold only two commands, we should issue one additional COMPLETE_WAIT and wait for all older commands to complete; the remaining space then grows as the IOMMU fetches from the command buffer. The left-space check should therefore be left <= 0x20 (two commands). Signed-off-by: Huang Rui <ray.huang@amd.com> Fixes: ac0ea6e92b222 ('x86/amd-iommu: Improve handling of full command buffer') Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
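The ring-buffer arithmetic above can be sketched in a few lines of standalone C (an illustration only, not the driver code): each command occupies 0x10 bytes, the left space is computed modulo CMD_BUFFER_SIZE from the head and the next tail, and the producer waits while left <= 0x20, i.e. while only two slots remain.

#include <stdint.h>
#include <stdio.h>

#define CMD_BUFFER_SIZE 8192U     /* bytes; example size */
#define CMD_ENTRY_SIZE  0x10U     /* one command */

static uint32_t left_space(uint32_t head, uint32_t tail)
{
        uint32_t next_tail = (tail + CMD_ENTRY_SIZE) % CMD_BUFFER_SIZE;

        return (head - next_tail) % CMD_BUFFER_SIZE;
}

int main(void)
{
        uint32_t head = 0x40, tail = 0x20;

        while (left_space(head, tail) <= 0x20) {
                /* The driver would queue a COMPLETE_WAIT here and wait for
                 * the IOMMU to fetch older commands, which advances head. */
                printf("left=%u: wait for the IOMMU to drain the buffer\n",
                       left_space(head, tail));
                head = (head + CMD_ENTRY_SIZE) % CMD_BUFFER_SIZE;  /* simulate */
        }
        printf("left=%u: enough room to enqueue\n", left_space(head, tail));
        return 0;
}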
2017-01-12  iommu/amd: Missing error code in amd_iommu_init_device()  Dan Carpenter  1  -1/+3
commit 24c790fbf5d8f54c8c82979db11edea8855b74bf upstream. We should set "ret" to -EINVAL if iommu_group_get() fails. Fixes: 55c99a4dc50f ("iommu/amd: Use iommu_attach_group()") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-12-02  iommu/vt-d: Fix IOMMU lookup for SR-IOV Virtual Functions  Ashok Raj  2  -1/+16
commit 1c387188c60f53b338c20eee32db055dfe022a9b upstream. The VT-d specification (§8.3.3) says: ‘Virtual Functions’ of a ‘Physical Function’ are under the scope of the same remapping unit as the ‘Physical Function’. The BIOS is not required to list all the possible VFs in the scope tables, and arguably *shouldn't* make any attempt to do so, since there could be a huge number of them. This has been broken basically for ever — the VF is never going to match against a specific unit's scope, so it ends up being assigned to the INCLUDE_ALL IOMMU. That was always actually correct by coincidence, but now that we're looking at Root-Complex integrated devices with SR-IOV support, it's going to start being wrong. Fix it to simply use pci_physfn() before doing the lookup for PCI devices. Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com> Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: David Woodhouse <dwmw2@infradead.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-12-02  iommu/vt-d: Fix PASID table allocation  David Woodhouse  1  -11/+17
commit 910170442944e1f8674fd5ddbeeb8ccd1877ea98 upstream. Somehow I ended up with an off-by-three error in calculating the size of the PASID and PASID State tables, which triggers allocations failures as those tables unfortunately have to be physically contiguous. In fact, even the *correct* maximum size of 8MiB is problematic and is wont to lead to allocation failures. Since I have extracted a promise that this *will* be fixed in hardware, I'm happy to limit it on the current hardware to a maximum of 0x20000 PASIDs, which gives us 1MiB tables — still not ideal, but better than before. Reported by Mika Kuoppala <mika.kuoppala@linux.intel.com> and also by Xunlei Pang <xlpang@redhat.com> who submitted a simpler patch to fix only the allocation (and not the free) to the "correct" limit... which was still problematic. Signed-off-by: David Woodhouse <dwmw2@infradead.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
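As a back-of-envelope check of the numbers above, assuming 8 bytes per PASID table entry (an assumption for illustration, not stated in the message), 0x20000 entries come to 1 MiB of physically contiguous memory:

#include <stdio.h>

int main(void)
{
        unsigned long max_pasids = 0x20000;   /* limit mentioned above */
        unsigned long entry_size = 8;         /* bytes per entry, assumed */
        unsigned long table_bytes = max_pasids * entry_size;
        unsigned int order = 0;

        while ((4096UL << order) < table_bytes)
                order++;                      /* order of a 4 KiB-page allocation */

        printf("PASID table: %lu KiB, order-%u allocation\n",
               table_bytes / 1024, order);
        return 0;
}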
2016-11-18  iommu/vt-d: Fix dead-locks in disable_dmar_iommu() path  Joerg Roedel  1  -2/+12
commit bea64033dd7b5fb6296eda8266acab6364ce1554 upstream. It turns out that the disable_dmar_iommu() code-path tried to get the device_domain_lock recursively, which will dead-lock when this code runs on dmar removal. Fix both code-paths that could lead to the dead-lock. Fixes: 55d940430ab9 ('iommu/vt-d: Get rid of domain->iommu_lock') Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-11-18  iommu/amd: Free domain id when free a domain of struct dma_ops_domain  Baoquan He  1  -0/+3
commit c3db901c54466a9c135d1e6e95fec452e8a42666 upstream. The current code missed freeing the domain id when freeing a domain of struct dma_ops_domain. Signed-off-by: Baoquan He <bhe@redhat.com> Fixes: ec487d1a110a ('x86, AMD IOMMU: add domain allocation and deallocation functions') Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-09-30  Add braces to avoid "ambiguous ‘else’" compiler warnings  Linus Torvalds  2  -2/+4
commit 194dc870a5890e855ecffb30f3b80ba7c88f96d6 upstream. Some of our "for_each_xyz()" macro constructs make gcc unhappy about lack of braces around if-statements inside or outside the loop, because the loop construct itself has a "if-then-else" statement inside of it. The resulting warnings look something like this: drivers/gpu/drm/i915/i915_debugfs.c: In function ‘i915_dump_lrc’: drivers/gpu/drm/i915/i915_debugfs.c:2103:6: warning: suggest explicit braces to avoid ambiguous ‘else’ [-Wparentheses] if (ctx != dev_priv->kernel_context) ^ even if the code itself is fine. Since the warning is fairly easy to avoid by adding a braces around the if-statement near the for_each_xyz() construct, do so, rather than disabling the otherwise potentially useful warning. (The if-then-else statements used in the "for_each_xyz()" constructs are designed to be inherently safe even with no braces, but in this case it's quite understandable that gcc isn't really able to tell that). This finally leaves the standard "allmodconfig" build with just a handful of remaining warnings, so new and valid warnings hopefully will stand out. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
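A self-contained illustration of the pattern described above (the macro here is a toy stand-in for the kernel's for_each_xyz() constructs, not kernel code): the macro's expansion ends in an if/else, so an un-braced if/else written around it makes gcc -Wparentheses suggest explicit braces, and adding the braces, as this commit does, silences the warning without changing behaviour.

#include <stdio.h>

/* Toy macro whose expansion ends in an if/else, like for_each_xyz(). */
#define for_each_even(i, n) \
        for ((i) = 0; (i) < (n); (i)++) \
                if ((i) % 2 != 0) { } else

int main(void)
{
        int i, hits = 0, want_even = 1;

        /* Without these braces, the trailing 'else' below would look
         * ambiguous to gcc and trigger the warning quoted above. */
        if (want_even) {
                for_each_even(i, 10)
                        hits++;
        } else {
                hits = -1;
        }

        printf("hits=%d\n", hits);
        return 0;
}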
2016-09-07  iommu/arm-smmu: Don't BUG() if we find aborting STEs with disable_bypass  Will Deacon  1  -0/+3
commit 5bc0a11664e17e9f9551983f5b660bd48b57483c upstream. The disable_bypass cmdline option changes the SMMUv3 driver to put down faulting stream table entries by default, as opposed to bypassing transactions from unconfigured devices. In this mode of operation, it is entirely expected to see aborting entries in the stream table if and when we come to installing a valid translation, so don't trigger a BUG() as a result of misdiagnosing these entries as stream table corruption. Fixes: 48ec83bcbcf5 ("iommu/arm-smmu: Add initial driver support for ARM SMMUv3 devices") Tested-by: Robin Murphy <robin.murphy@arm.com> Reported-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-09-07  iommu/arm-smmu: Fix CMDQ error handling  Will Deacon  1  -2/+2
commit aea2037e0d3e23c3be1498feae29f71ca997d9e6 upstream. In the unlikely event of a global command queue error, the ARM SMMUv3 driver attempts to convert the problematic command into a CMD_SYNC and resume the command queue. Unfortunately, this code is pretty badly broken: 1. It uses the index into the error string table as the CMDQ index, so we probably read the wrong entry out of the queue 2. The arguments to queue_write are the wrong way round, so we end up writing from the queue onto the stack. These happily cancel out, so the kernel is likely to stay alive, but the command queue will probably fault again when we resume. This patch fixes the error handling code to use the correct queue index and write back the CMD_SYNC to the faulting entry. Fixes: 48ec83bcbcf5 ("iommu/arm-smmu: Add initial driver support for ARM SMMUv3 devices") Reported-by: Diwakar Subraveti <Diwakar.Subraveti@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-09-07  iommu/dma: Don't put uninitialised IOVA domains  Robin Murphy  1  -1/+2
commit 3ec60043f7c02e1f79e4a90045ff2d2e80042941 upstream. Due to the limitations of having to wait until we see a device's DMA restrictions before we know how we want an IOVA domain initialised, there is a window for error if a DMA ops domain is allocated but later freed without ever being used. In that case, init_iova_domain() was never called, so calling put_iova_domain() from iommu_put_dma_cookie() ends up trying to take an uninitialised lock and crashing. Make things robust by skipping the call unless the IOVA domain actually has been initialised, as we probably should have done from the start. Fixes: 0db2e5d18f76 ("iommu: Implement common IOMMU ops for DMA mapping") Reported-by: Nate Watterson <nwatters@codeaurora.org> Reviewed-by: Nate Watterson <nwatters@codeaurora.org> Tested-by: Nate Watterson <nwatters@codeaurora.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Tested-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
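A standalone sketch of the guard described above (the types and field names are simplified stand-ins for the iommu-dma cookie, not the kernel's definitions): teardown only calls the put path when the IOVA domain was actually initialised, here modelled by a non-zero granule.

#include <stdio.h>
#include <stdlib.h>

struct iova_domain {
        unsigned long granule;          /* non-zero once init_iova_domain() ran */
};

struct dma_cookie {
        struct iova_domain iovad;
};

static void put_dma_cookie(struct dma_cookie *cookie)
{
        if (!cookie)
                return;
        /* Skip the put when init never happened, so we never touch an
         * uninitialised lock inside the IOVA domain. */
        if (cookie->iovad.granule)
                printf("put_iova_domain()\n");
        free(cookie);
}

int main(void)
{
        struct dma_cookie *never_used = calloc(1, sizeof(*never_used));

        put_dma_cookie(never_used);     /* freed without touching IOVA state */
        return 0;
}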
2016-08-20  iommu/amd: Update Alias-DTE in update_device_table()  Joerg Roedel  1  -1/+8
commit 3254de6bf74fe94c197c9f819fe62a3a3c36f073 upstream. Not doing so might cause IO-Page-Faults when a device uses an alias request-id and the alias-dte is left in a lower page-mode which does not cover the address allocated from the iova-allocator. Fixes: 492667dacc0a ('x86/amd-iommu: Remove amd_iommu_pd_table') Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-08-20  iommu/amd: Init unity mappings only for dma_ops domains  Joerg Roedel  1  -2/+4
commit b548e786ce47017107765bbeb0f100202525ea83 upstream. The default domain for a device might also be identity-mapped. In this case the kernel would crash when unity mappings are defined for the device. Fix that by making sure the domain is a dma_ops domain. Fixes: 0bb6e243d7fb ('iommu/amd: Support IOMMU_DOMAIN_DMA type allocation') Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-08-20  iommu/amd: Handle IOMMU_DOMAIN_DMA in ops->domain_free call-back  Joerg Roedel  1  -8/+17
commit cda7005ba2cbd0744fea343dd5b2aa637eba5b9e upstream. This domain type is not yet handled in the iommu_ops->domain_free() call-back. Fix that. Fixes: 0bb6e243d7fb ('iommu/amd: Support IOMMU_DOMAIN_DMA type allocation') Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-08-20  iommu/vt-d: Return error code in domain_context_mapping_one()  Wei Yang  1  -1/+1
commit 5c365d18a73d3979db37006eaacefc0008869c0f upstream. In 'commit <55d940430ab9> ("iommu/vt-d: Get rid of domain->iommu_lock")', the error handling path is changed a little, which makes the function always return 0. This patch fixes this. Signed-off-by: Wei Yang <richard.weiyang@gmail.com> Fixes: 55d940430ab9 ('iommu/vt-d: Get rid of domain->iommu_lock') Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-08-20  iommu/exynos: Suppress unbinding to prevent system failure  Marek Szyprowski  1  -0/+1
commit b54b874fbaf5e024723e50dfb035a9916d6752b4 upstream. Removal of the IOMMU driver cannot be done reliably, so the Exynos IOMMU driver doesn't support this operation. It is essential for system operation, so it makes sense to prevent unbinding by disabling the bind/unbind sysfs feature for the SYSMMU controller driver, to avoid a kernel oops or trashing memory caused by such an operation. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-07-27  iommu/amd: Fix unity mapping initialization race  Joerg Roedel  1  -2/+12
commit 522e5cb76d0663c88f96b6a8301451c8efa37207 upstream. There is a race condition in the AMD IOMMU init code that causes requested unity mappings to be blocked by the IOMMU for a short period of time. This results in boot failures and IO_PAGE_FAULTs on some machines. Fix this by making sure the unity mappings are installed before all other DMA is blocked. Fixes: aafd8ba0ca74 ('iommu/amd: Implement add_device and remove_device') Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-07-27  iommu/vt-d: Enable QI on all IOMMUs before setting root entry  Joerg Roedel  1  -5/+12
commit a4c34ff1c029e90e7d5f8dd8d29b0a93b31c3cb2 upstream. This seems to be required on some X58 chipsets on systems with more than one IOMMU. QI does not work until it is enabled on all IOMMUs in the system. Reported-by: Dheeraj CVR <cvr.dheeraj@gmail.com> Tested-by: Dheeraj CVR <cvr.dheeraj@gmail.com> Fixes: 5f0a7f7614a9 ('iommu/vt-d: Make root entry visible for hardware right after allocation') Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-07-27  iommu/arm-smmu: Wire up map_sg for arm-smmu-v3  Jean-Philippe Brucker  1  -0/+1
commit 9aeb26cfc2abc96be42b9df2d0f2dc5d805084ff upstream. The map_sg callback is missing from arm_smmu_ops, but is required by iommu.h. Similarly to most other IOMMU drivers, connect it to default_iommu_map_sg. Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-05-05  iommu/dma: Restore scatterlist offsets correctly  Robin Murphy  1  -2/+2
commit 07b48ac4bbe527e68cfc555f2b2b206908437141 upstream. With the change to stashing just the IOVA-page-aligned remainder of the CPU-page offset rather than the whole thing, the failure path in __invalidate_sg() also needs tweaking to account for that in the case of differing page sizes where the two offsets may not be equivalent. Similarly in __finalise_sg(), lest the architecture-specific wrappers later get the wrong address for cache maintenance on sync or unmap. Fixes: 164afb1d85b8 ("iommu/dma: Use correct offset in map_sg") Reported-by: Magnus Damm <damm+renesas@opensource.se> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-05-05  iommu/amd: Fix checking of pci dma aliases  Joerg Roedel  1  -11/+76
commit e3156048346c28c695f5cf9db67a8cf88c90f947 upstream. Commit 61289cb ('iommu/amd: Remove old alias handling code') removed the old alias handling code from the AMD IOMMU driver because this is now handled by the IOMMU core code. But this also removed the handling of PCI aliases, which is not handled by the core code. This caused issues with PCI devices that have hidden PCIe-to-PCI bridges that rewrite the request-id. Fix this bug by re-introducing some of the removed functions from commit 61289cbaf6c8 and adding an alias field to 'struct iommu_dev_data'. This field carries the return value of the get_alias() function, and the code uses it instead of the amd_iommu_alias_table[] array. Fixes: 61289cbaf6c8 ('iommu/amd: Remove old alias handling code') Tested-by: Tomasz Golinski <tomaszg@math.uwb.edu.pl> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-04-20  iommu: Don't overwrite domain pointer when there is no default_domain  Joerg Roedel  1  -1/+2
commit eebb8034a5be8c2177cbf07ca2ecd2ff8a058958 upstream. IOMMU drivers that do not support default domains, but make use of the group->domain pointer, can get that pointer overwritten with NULL on device add/remove. Make sure this can't happen by only overwriting the domain pointer when it is NULL. Fixes: 1228236de5f9 ('iommu: Move default domain allocation to iommu_group_get_for_dev()') Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-03-10  iommu/vt-d: Use BUS_NOTIFY_REMOVED_DEVICE in hotplug path  Joerg Roedel  2  -4/+5
commit e6a8c9b337eed56eb481e1b4dd2180c25a1e5310 upstream. In the PCI hotplug path of the Intel IOMMU driver, replace the usage of the BUS_NOTIFY_DEL_DEVICE notifier, which is executed before the driver is unbound from the device, with BUS_NOTIFY_REMOVED_DEVICE, which runs after that. This fixes a kernel BUG being triggered in the VT-d code when the device driver tries to unmap DMA buffers and the VT-d driver already destroyed all mappings. Reported-by: Stefani Seibold <stefani@seibold.net> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-03-10  iommu/amd: Fix boot warning when device 00:00.0 is not iommu covered  Suravee Suthikulpanit  1  -12/+22
commit 38e45d02ea9f194b89d6bf41e52ccafc8e2c2b47 upstream. The setup code for the performance counters in the AMD IOMMU driver tests whether the counters can be written. It tries to set up a counter for device 00:00.0, which fails on systems where this particular device is not covered by the IOMMU. Fix this by not relying on device 00:00.0 but only on the IOMMU being present. Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-03-10  iommu/amd: Apply workaround for ATS write permission check  Jay Cornwall  1  -0/+29
commit 358875fd52ab8f00f66328cbf1a1d2486f265829 upstream. The AMD Family 15h Models 30h-3Fh (Kaveri) BIOS and Kernel Developer's Guide omitted part of the BIOS IOMMU L2 register setup specification. Without this setup the IOMMU L2 does not fully respect write permissions when handling an ATS translation request. The IOMMU L2 will set PTE dirty bit when handling an ATS translation with write permission request, even when PTE RW bit is clear. This may occur by direct translation (which would cause a PPR) or by prefetch request from the ATC. This is observed in practice when the IOMMU L2 modifies a PTE which maps a pagecache page. The ext4 filesystem driver BUGs when asked to writeback these (non-modified) pages. Enable ATS write permission check in the Kaveri IOMMU L2 if BIOS has not. Signed-off-by: Jay Cornwall <jay@jcornwall.me> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-02-25  iommu/vt-d: Clear PPR bit to ensure we get more page request interrupts  David Woodhouse  1  -0/+4
commit 46924008273ed03bd11dbb32136e3da4cfe056e1 upstream. According to the VT-d specification we need to clear the PPR bit in the Page Request Status register when handling page requests, or the hardware won't generate any more interrupts. This wasn't actually necessary on SKL/KBL (which may well be the subject of a hardware erratum, although it's harmless enough). But other implementations do appear to get it right, and we only ever get one interrupt unless we clear the PPR bit. Reported-by: CQ Tang <cq.tang@intel.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-02-25  iommu/vt-d: Fix 64-bit accesses to 32-bit DMAR_GSTS_REG  CQ Tang  2  -2/+2
commit fda3bec12d0979aae3f02ee645913d66fbc8a26e upstream. This is a 32-bit register. Apparently harmless on real hardware, but causing justified warnings in simulation. Signed-off-by: CQ Tang <cq.tang@intel.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-02-25  iommu/vt-d: Fix mm refcounting to hold mm_count not mm_users  David Woodhouse  1  -6/+27
commit e57e58bd390a6843db58560bf7b8341665d2e058 upstream. Holding mm_users works OK for graphics, which was the first user of SVM with VT-d. However, it works less well for other devices, where we actually do a mmap() from the file descriptor to which the SVM PASID state is tied. In this case on process exit we end up with a recursive reference count: - The MM remains alive until the file is closed and the driver's release() call ends up unbinding the PASID. - The VMA corresponding to the mmap() remains intact until the MM is destroyed. - Thus the file isn't closed, even when exit_files() runs, because the VMA is still holding a reference to it. And the MM remains alive… To address this issue, we *stop* holding mm_users while the PASID is bound. We already hold mm_count by virtue of the MMU notifier, and that can be made to be sufficient. It means that for a period during process exit, the fun part of mmput() has happened and exit_mmap() has been called so the MM is basically defunct. But the PGD still exists and the PASID is still bound to it. During this period, we have to be very careful — exit_mmap() doesn't use mm->mmap_sem because it doesn't expect anyone else to be touching the MM (quite reasonably, since mm_users is zero). So we also need to fix the fault handler to just report failure if mm_users is already zero, and to temporarily bump mm_users while handling any faults. Additionally, exit_mmap() calls mmu_notifier_release() *before* it tears down the page tables, which is too early for us to flush the IOTLB for this PASID. And __mmu_notifier_release() removes every notifier from the list, so when exit_mmap() finally *does* tear down the mappings and clear the page tables, we don't get notified. So we work around this by clearing the PASID table entry in our MMU notifier release() callback. That way, the hardware *can't* get any pages back from the page tables before they get cleared. Hardware designers have confirmed that the resulting 'PASID not present' faults should be handled just as gracefully as 'page not present' faults, the important criterion being that they don't perturb the operation for any *other* PASID in the system. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-02-25  iommu/amd: Correct the wrong setting of alias DTE in do_attach  Baoquan He  1  -1/+1
commit 9b1a12d29109234d2b9718d04d4d404b7da4e794 upstream. In the commit below, the alias DTE is set when its peripheral is setting its DTE. However, there is a bug in that code which sets the alias DTE wrongly; correct it in this patch. commit e25bfb56ea7f046b71414e02f80f620deb5c6362 Author: Joerg Roedel <jroedel@suse.de> Date: Tue Oct 20 17:33:38 2015 +0200 iommu/amd: Set alias DTE in do_attach/do_detach Signed-off-by: Baoquan He <bhe@redhat.com> Tested-by: Mark Hounschell <markh@compro.net> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-02-25  iommu/vt-d: Don't skip PCI devices when disabling IOTLB  Jeremy McNicoll  1  -1/+1
commit da972fb13bc5a1baad450c11f9182e4cd0a091f6 upstream. Fix a simple typo when disabling IOTLB on PCI(e) devices. Fixes: b16d0cb9e2fc ("iommu/vt-d: Always enable PASID/PRI PCI capabilities before ATS") Signed-off-by: Jeremy McNicoll <jmcnicol@redhat.com> Reviewed-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-02-17  iommu/io-pgtable-arm: Ensure we free the final level on teardown  Will Deacon  1  -5/+6
commit 12c2ab09571e8aae3a87da2a4a452632a5fac1e5 upstream. When tearing down page tables, we return early for the final level since we know that we won't have any table pointers to follow. Unfortunately, this also means that we forget to free the final level, so we end up leaking memory. Fix the issue by always freeing the current level, but just don't bother to iterate over the ptes if we're at the final level. Reported-by: Zhang Bo <zhangbo_a@xiaomi.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-01-07  iommu/dma: Use correct offset in map_sg  Robin Murphy  1  -1/+1
When mapping a non-page-aligned scatterlist entry, we copy the original offset to the output DMA address before aligning it to hand off to iommu_map_sg(), then later adding the IOVA page address portion to get the final mapped address. However, when the IOVA page size is smaller than the CPU page size, it is the offset within the IOVA page we want, not that within the CPU page, which can easily be larger than an IOVA page and thus result in an incorrect final address. Fix the bug by taking only the IOVA-aligned part of the offset as the basis of the DMA address, not the whole thing. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
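The offset handling described above reduces to a small piece of arithmetic; the standalone sketch below uses illustrative sizes (64 KiB CPU pages, 4 KiB IOVA granule) and is not the driver code: only the offset within the IOVA page may be added to the returned DMA address.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t iova_page = 4096;        /* IOVA granule, smaller than the CPU page */
        uint64_t sg_offset = 0x5100;      /* buffer offset within its 64 KiB CPU page */
        uint64_t iova_base = 0xf0000000;  /* page-aligned IOVA chosen by the allocator */

        uint64_t wrong = iova_base + sg_offset;                     /* pre-fix  */
        uint64_t right = iova_base + (sg_offset & (iova_page - 1)); /* post-fix */

        printf("wrong=%#llx right=%#llx\n",
               (unsigned long long)wrong, (unsigned long long)right);
        return 0;
}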
2015-12-28  iommu/ipmmu-vmsa: Don't truncate ttbr if LPAE is not enabled  Geert Uytterhoeven  1  -1/+1
If CONFIG_PHYS_ADDR_T_64BIT=n: drivers/iommu/ipmmu-vmsa.c: In function 'ipmmu_domain_init_context': drivers/iommu/ipmmu-vmsa.c:434:2: warning: right shift count >= width of type ipmmu_ctx_write(domain, IMTTUBR0, ttbr >> 32); ^ As io_pgtable_cfg.arm_lpae_s1_cfg.ttbr[] is an array of u64s, assigning it to a phys_addr_t may truncate it. Make ttbr u64 to fix this. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Joerg Roedel <jroedel@suse.de>
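A minimal sketch of the truncation issue above (illustrative, not the driver code): keeping the table address in a u64 lets the upper half be written to the upper register even when phys_addr_t is only 32 bits wide.

#include <stdint.h>
#include <stdio.h>

static void write_ttbr(uint64_t ttbr)
{
        uint32_t lo = (uint32_t)ttbr;          /* lower register, e.g. IMTTLBR0 */
        uint32_t hi = (uint32_t)(ttbr >> 32);  /* upper register, e.g. IMTTUBR0 */

        printf("lo=%#x hi=%#x\n", lo, hi);
}

int main(void)
{
        uint64_t ttbr = 0x1c0000000ULL;        /* a table placed above 4 GiB */

        write_ttbr(ttbr);
        return 0;
}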
2015-12-28  iommu/dma: Avoid unlikely high-order allocations  Robin Murphy  1  -2/+4
Doug reports that the equivalent page allocator on 32-bit ARM exhibits particularly pathological behaviour under memory pressure when fragmentation is high, where allocating a 4MB buffer takes tens of seconds and the number of calls to alloc_pages() is over 9000![1] We can drastically improve that situation without losing the other benefits of high-order allocations when they would succeed, by assuming memory pressure is relatively constant over the course of an allocation, and not retrying allocations at orders we know to have failed before. This way, the best-case behaviour remains unchanged, and in the worst case we should see at most a dozen or so (MAX_ORDER - 1) failed attempts before falling back to single pages for the remainder of the buffer. [1]:http://lists.infradead.org/pipermail/linux-arm-kernel/2015-December/394660.html Reported-by: Douglas Anderson <dianders@chromium.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
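The retry strategy described above can be sketched as follows (alloc_order() is a stand-in for alloc_pages() that simply pretends large orders fail under fragmentation; this illustrates the idea, not the driver's allocator):

#include <stdbool.h>
#include <stdio.h>

#define MAX_ORDER 11

static bool alloc_order(unsigned int order)
{
        return order <= 3;      /* pretend anything above 2^3 pages fails */
}

int main(void)
{
        unsigned int remaining = 1024;          /* pages still needed */
        unsigned int order = MAX_ORDER - 1;
        unsigned int attempts = 0;

        while (remaining) {
                while ((1U << order) > remaining)
                        order--;                /* never allocate more than needed */
                attempts++;
                if (!alloc_order(order)) {
                        if (!order)
                                break;
                        order--;                /* remember the failure, never retry it */
                        continue;
                }
                remaining -= 1U << order;       /* "allocation" succeeded */
        }
        printf("buffer built with %u calls, final order %u\n", attempts, order);
        return 0;
}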
2015-12-28  iommu/dma: Add some missing #includes  Robin Murphy  1  -0/+3
dma-iommu.c was naughtily relying on an implicit transitive #include of linux/vmalloc.h, which is apparently not present on some architectures. Add that, plus a couple more headers for other functions which are used similarly. Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-18  Merge tag 'iommu-fixes-v4.4-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu  Linus Torvalds  2  -2/+38
Pull IOMMU fixes from Joerg Roedel: "Two similar fixes for the Intel and AMD IOMMU drivers to add proper access checks before calling handle_mm_fault" * tag 'iommu-fixes-v4.4-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: iommu/vt-d: Do access checks before calling handle_mm_fault() iommu/amd: Do proper access checking before calling handle_mm_fault()
2015-12-15  Revert "scatterlist: use sg_phys()"  Dan Williams  2  -3/+3
commit db0fa0cb0157 "scatterlist: use sg_phys()" did replacements of the form: phys_addr_t phys = page_to_phys(sg_page(s)); phys_addr_t phys = sg_phys(s) & PAGE_MASK; However, this breaks platforms where sizeof(phys_addr_t) > sizeof(unsigned long). Revert for 4.3 and 4.4 to make room for a combined helper in 4.5. Cc: <stable@vger.kernel.org> Cc: Jens Axboe <axboe@fb.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Russell King <linux@arm.linux.org.uk> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Fixes: db0fa0cb0157 ("scatterlist: use sg_phys()") Suggested-by: Joerg Roedel <joro@8bytes.org> Reported-by: Vitaly Lavrov <vel21ripn@gmail.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
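The breakage described above comes down to integer widths; the standalone sketch below models a 32-bit machine with a 64-bit phys_addr_t (as with ARM LPAE) and is illustrative only: PAGE_MASK computed as a 32-bit unsigned long zero-extends, so the masking wipes out the upper address bits that page_to_phys(sg_page(s)) would have preserved.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t phys = 0x100040abcULL;                 /* physical address above 4 GiB */
        uint32_t page_mask_32 = ~(uint32_t)(4096 - 1);  /* PAGE_MASK as a 32-bit long */

        uint64_t broken  = phys & page_mask_32;          /* upper 32 bits lost */
        uint64_t correct = phys & ~(uint64_t)(4096 - 1); /* full-width mask */

        printf("broken=%#llx correct=%#llx\n",
               (unsigned long long)broken, (unsigned long long)correct);
        return 0;
}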
2015-12-14  iommu/vt-d: Do access checks before calling handle_mm_fault()  Joerg Roedel  1  -0/+20
Not doing so is a bug and might trigger a BUG_ON in handle_mm_fault(). So add the proper permission checks before calling into mm code. Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org> Acked-By: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-14  iommu/amd: Do proper access checking before calling handle_mm_fault()  Joerg Roedel  1  -2/+18
The handle_mm_fault function expects the caller to do the access checks. Not doing so and calling the function with wrong permissions is a bug (caught by a BUG_ON). So fix this bug by adding proper access checking to the io page-fault code in the AMD IOMMUv2 driver. Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org> Acked-By: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
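A standalone sketch of the kind of access check added here (the flag values and structures are simplified stand-ins, not the kernel's): derive the required permission from the faulting request and refuse to call the fault handler when the VMA does not grant it.

#include <stdbool.h>
#include <stdio.h>

#define VM_READ  0x1UL
#define VM_WRITE 0x2UL
#define VM_EXEC  0x4UL

struct vma   { unsigned long vm_flags; };
struct fault { bool write; bool exec; };

static bool access_error(const struct vma *vma, const struct fault *f)
{
        unsigned long required = f->exec ? VM_EXEC :
                                 f->write ? VM_WRITE : VM_READ;

        return (vma->vm_flags & required) != required;
}

int main(void)
{
        struct vma vma = { .vm_flags = VM_READ };        /* read-only mapping */
        struct fault write_fault = { .write = true, .exec = false };

        if (access_error(&vma, &write_fault))
                printf("invalid access: fail the request, skip handle_mm_fault()\n");
        else
                printf("access ok: call handle_mm_fault()\n");
        return 0;
}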
2015-11-09  s390/pci_dma: handle dma table failures  Sebastian Ott  1  -2/+21
We use lazy allocation for translation table entries but don't handle allocation (and other) failures during translation table updates. Handle these failures and undo translation table updates when it's meaningful. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2015-11-07  mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd  Mel Gorman  2  -2/+2
__GFP_WAIT has been used to identify atomic context in callers that hold spinlocks or are in interrupts. They are expected to be high priority and have access to one of two watermarks lower than "min" which can be referred to as the "atomic reserve". __GFP_HIGH users get access to the first lower watermark and can be called the "high priority reserve". Over time, callers had a requirement to not block when fallback options were available. Some have abused __GFP_WAIT, leading to a situation where an optimistic allocation with a fallback option can access atomic reserves. This patch uses __GFP_ATOMIC to identify callers that are truly atomic, cannot sleep and have no alternative. High priority users continue to use __GFP_HIGH. __GFP_DIRECT_RECLAIM identifies callers that can sleep and are willing to enter direct reclaim. __GFP_KSWAPD_RECLAIM identifies callers that want to wake kswapd for background reclaim. __GFP_WAIT is redefined as a caller that is willing to enter direct reclaim and wake kswapd for background reclaim. This patch then converts a number of sites:
o __GFP_ATOMIC is used by callers that are high priority and have memory pools for those requests. GFP_ATOMIC uses this flag.
o Callers that have a limited mempool to guarantee forward progress clear __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall into this category where kswapd will still be woken but atomic reserves are not used as there is a one-entry mempool to guarantee progress.
o Callers that are checking if they are non-blocking should use the helper gfpflags_allow_blocking() where possible. This is because checking for __GFP_WAIT as was done historically now can trigger false positives. Some exceptions like dm-crypt.c exist where the code intent is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to flag manipulations.
o Callers that built their own GFP flags instead of starting with GFP_KERNEL and friends now also need to specify __GFP_KSWAPD_RECLAIM.
The first key hazard to watch out for is callers that removed __GFP_WAIT and were depending on access to atomic reserves for inconspicuous reasons. In some cases it may be appropriate for them to use __GFP_HIGH. The second key hazard is callers that assembled their own combination of GFP flags instead of starting with something like GFP_KERNEL. They may now wish to specify __GFP_KSWAPD_RECLAIM. It's almost certainly harmless if it's missed in most cases as other activity will wake kswapd. Signed-off-by: Mel Gorman <mgorman@techsingularity.net> Acked-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Christoph Lameter <cl@linux.com> Cc: David Rientjes <rientjes@google.com> Cc: Vitaly Wool <vitalywool@gmail.com> Cc: Rik van Riel <riel@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
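The helper mentioned above can be illustrated with a tiny standalone sketch (the flag values here are arbitrary stand-ins, not the kernel's): after this change, "may this allocation sleep?" is answered by testing __GFP_DIRECT_RECLAIM rather than __GFP_WAIT.

#include <stdbool.h>
#include <stdio.h>

#define __GFP_DIRECT_RECLAIM 0x01u
#define __GFP_KSWAPD_RECLAIM 0x02u
#define __GFP_ATOMIC         0x04u

#define GFP_KERNEL (__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)
#define GFP_ATOMIC (__GFP_ATOMIC | __GFP_KSWAPD_RECLAIM)

static bool gfpflags_allow_blocking(unsigned int gfp_flags)
{
        return gfp_flags & __GFP_DIRECT_RECLAIM;
}

int main(void)
{
        printf("GFP_KERNEL may sleep: %d\n", gfpflags_allow_blocking(GFP_KERNEL));
        printf("GFP_ATOMIC may sleep: %d\n", gfpflags_allow_blocking(GFP_ATOMIC));
        return 0;
}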