path: root/drivers/xen/swiotlb-xen.c
2022-05-11  swiotlb-xen: fix DMA_ATTR_NO_KERNEL_MAPPING on arm  (Christoph Hellwig; 1 file, -65/+34)

swiotlb-xen uses very different ways to allocate coherent memory on x86
vs arm: on the former it allocates memory from the page allocator, while
on the latter it reuses the dma-direct allocator, which handles the
complexities of non-coherent DMA on arm platforms.

Unfortunately the complexity of trying to deal with the two cases in the
swiotlb-xen.c code led to a bug in the handling of
DMA_ATTR_NO_KERNEL_MAPPING on arm. With the DMA_ATTR_NO_KERNEL_MAPPING
flag the coherent memory allocator does not actually allocate coherent
memory, but just a DMA handle for some memory that is DMA addressable by
the device but does not have to have a kernel mapping. Thus
dereferencing the return value will lead to kernel crashes and memory
corruption.

Fix this by using the dma-direct allocator directly for arm, which works
perfectly fine because on arm swiotlb-xen is only used when the domain
is 1:1 mapped, and then simplify the remaining code to only cater for
the x86 case with a DMA-coherent device.

Reported-by: Rahul Singh <Rahul.Singh@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Rahul Singh <rahul.singh@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Rahul Singh <rahul.singh@arm.com>
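The crash mode described above follows directly from the contract of
DMA_ATTR_NO_KERNEL_MAPPING in the generic DMA API. A minimal,
hypothetical driver sketch (the function, 'dev' and 'size' are
illustrative, not from the patch) showing why the returned value must be
treated as opaque:

    #include <linux/dma-mapping.h>

    static int example_alloc_no_mapping(struct device *dev, size_t size)
    {
            dma_addr_t handle;
            void *cookie;

            cookie = dma_alloc_attrs(dev, size, &handle, GFP_KERNEL,
                                     DMA_ATTR_NO_KERNEL_MAPPING);
            if (!cookie)
                    return -ENOMEM;

            /* 'cookie' is an opaque token: it may only be passed back
             * to dma_free_attrs(), never dereferenced. Dereferencing it
             * is exactly the bug the arm swiotlb-xen path had. */

            dma_free_attrs(dev, size, cookie, handle,
                           DMA_ATTR_NO_KERNEL_MAPPING);
            return 0;
    }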
2022-04-18  swiotlb: merge swiotlb-xen initialization into swiotlb  (Christoph Hellwig; 1 file, -127/+1)

Reuse the generic swiotlb initialization for xen-swiotlb. For ARM/ARM64
this works trivially, while for x86 xen_swiotlb_fixup needs to be passed
as the remap argument to swiotlb_init_remap/swiotlb_init_late.

Note that the lower bound of the swiotlb size is changed to the smaller
IO_TLB_MIN_SLABS-based value with this patch, but that is fine as the
2MB value used in Xen before was just an optimization and is not the
hard lower bound.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2022-04-18  swiotlb: make the swiotlb_init interface more useful  (Christoph Hellwig; 1 file, -2/+2)

Pass a boolean flag to indicate if swiotlb needs to be enabled based on
the addressing needs, and replace the verbose argument with a set of
flags, including one to force enable bounce buffering.

Note that this patch removes the possibility to force xen-swiotlb use
with the swiotlb=force parameter on the command line on x86 (arm and
arm64 never supported that), but this interface will be restored
shortly.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
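A hedged sketch of what an architecture call site looks like after this
change (the SWIOTLB_VERBOSE/SWIOTLB_FORCE flag names match the 5.18-era
interface; the setup function and its arguments are hypothetical):

    #include <linux/swiotlb.h>

    void __init example_arch_swiotlb_setup(bool ram_beyond_dma_limit,
                                           bool force_bounce)
    {
            /* first argument: enable swiotlb if the addressing needs
             * demand it; second argument: behaviour flags, e.g. force
             * bounce buffering unconditionally */
            swiotlb_init(ram_beyond_dma_limit,
                         force_bounce ? SWIOTLB_FORCE | SWIOTLB_VERBOSE
                                      : SWIOTLB_VERBOSE);
    }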
2022-04-18  swiotlb: simplify swiotlb_max_segment  (Christoph Hellwig; 1 file, -2/+0)

Remove the bogus Xen override that was usually larger than the actual
size and just calculate the value on demand. Note that
swiotlb_max_segment still doesn't make sense as an interface and should
eventually be removed.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2021-11-07  Merge branch 'akpm' (patches from Andrew)  (Linus Torvalds; 1 file, -1/+1)

Merge misc updates from Andrew Morton:
 "257 patches.

  Subsystems affected by this patch series: scripts, ocfs2, vfs, and mm
  (slab-generic, slab, slub, kconfig, dax, kasan, debug, pagecache,
  gup, swap, memcg, pagemap, mprotect, mremap, iomap, tracing, vmalloc,
  pagealloc, memory-failure, hugetlb, userfaultfd, vmscan, tools,
  memblock, oom-kill, hugetlbfs, migration, thp, readahead, nommu, ksm,
  vmstat, madvise, memory-hotplug, rmap, zsmalloc, highmem, zram,
  cleanups, kfence, and damon)"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (257 commits)
  mm/damon: remove return value from before_terminate callback
  mm/damon: fix a few spelling mistakes in comments and a pr_debug message
  mm/damon: simplify stop mechanism
  Docs/admin-guide/mm/pagemap: wordsmith page flags descriptions
  Docs/admin-guide/mm/damon/start: simplify the content
  Docs/admin-guide/mm/damon/start: fix a wrong link
  Docs/admin-guide/mm/damon/start: fix wrong example commands
  mm/damon/dbgfs: add adaptive_targets list check before enable monitor_on
  mm/damon: remove unnecessary variable initialization
  Documentation/admin-guide/mm/damon: add a document for DAMON_RECLAIM
  mm/damon: introduce DAMON-based Reclamation (DAMON_RECLAIM)
  selftests/damon: support watermarks
  mm/damon/dbgfs: support watermarks
  mm/damon/schemes: activate schemes based on a watermarks mechanism
  tools/selftests/damon: update for regions prioritization of schemes
  mm/damon/dbgfs: support prioritization weights
  mm/damon/vaddr,paddr: support pageout prioritization
  mm/damon/schemes: prioritize regions within the quotas
  mm/damon/selftests: support schemes quotas
  mm/damon/dbgfs: support quotas of schemes
  ...
2021-11-06  memblock: use memblock_free for freeing virtual pointers  (Mike Rapoport; 1 file, -1/+1)

Rename memblock_free_ptr() to memblock_free() and use memblock_free()
when freeing a virtual pointer, so that memblock_free() will be a
counterpart of memblock_alloc().

The callers are updated with the below semantic patch and manual
addition of (void *) casting to pointers that are represented by
unsigned long variables.

@@
identifier vaddr;
expression size;
@@
(
- memblock_phys_free(__pa(vaddr), size);
+ memblock_free(vaddr, size);
|
- memblock_free_ptr(vaddr, size);
+ memblock_free(vaddr, size);
)

[sfr@canb.auug.org.au: fixup]
Link: https://lkml.kernel.org/r/20211018192940.3d1d532f@canb.auug.org.au
Link: https://lkml.kernel.org/r/20210930185031.18648-7-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Juergen Gross <jgross@suse.com>
Cc: Shahab Vahedi <Shahab.Vahedi@synopsys.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-11-06  memblock: rename memblock_free to memblock_phys_free  (Mike Rapoport; 1 file, -1/+1)

Since memblock_free() operates on a physical range, make its name
reflect it and rename it to memblock_phys_free(), so it will be a
logical counterpart to memblock_phys_alloc().

The callers are updated with the below semantic patch:

@@
expression addr;
expression size;
@@
- memblock_free(addr, size);
+ memblock_phys_free(addr, size);

Link: https://lkml.kernel.org/r/20210930185031.18648-6-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Juergen Gross <jgross@suse.com>
Cc: Shahab Vahedi <Shahab.Vahedi@synopsys.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
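Taken together, the two renames above pair each allocator with a
matching free. An illustrative sketch of the resulting call sites (the
function and buffer names are hypothetical):

    #include <linux/memblock.h>
    #include <linux/sizes.h>

    void __init example_memblock_usage(void)
    {
            /* virtual-pointer pair: memblock_alloc()/memblock_free() */
            void *vbuf = memblock_alloc(SZ_4K, SMP_CACHE_BYTES);

            if (vbuf)
                    memblock_free(vbuf, SZ_4K);

            /* physical-range pair:
             * memblock_phys_alloc()/memblock_phys_free() */
            phys_addr_t pbuf = memblock_phys_alloc(SZ_4K, SMP_CACHE_BYTES);

            if (pbuf)
                    memblock_phys_free(pbuf, SZ_4K);
    }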
2021-09-29  swiotlb: Support aligned swiotlb buffers  (David Stevens; 1 file, -1/+1)

Add an argument to swiotlb_tbl_map_single that specifies the desired
alignment of the allocated buffer. This is used by dma-iommu to ensure
the buffer is aligned to the iova granule size when using swiotlb with
untrusted sub-granule mappings. This addresses an issue where adjacent
slots could be exposed to the untrusted device if
IO_TLB_SIZE < iova granule < PAGE_SIZE.

Signed-off-by: David Stevens <stevensd@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210929023300.335969-7-stevensd@google.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
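A hedged sketch of how dma-iommu can use the new alignment argument
(modelled on the 5.16-era code; the wrapper function is hypothetical and
the exact parameter order may differ between kernel versions):

    #include <linux/swiotlb.h>
    #include <linux/iova.h>

    /* bounce an untrusted sub-granule mapping, forcing the swiotlb slot
     * to be aligned to the IOVA granule so the device cannot observe
     * data in an adjacent slot */
    static phys_addr_t example_bounce_aligned(struct device *dev,
                    struct iova_domain *iovad, phys_addr_t orig_phys,
                    size_t size, enum dma_data_direction dir,
                    unsigned long attrs)
    {
            size_t aligned_size = iova_align(iovad, size);

            return swiotlb_tbl_map_single(dev, orig_phys, size,
                                          aligned_size,
                                          iova_mask(iovad), /* align mask */
                                          dir, attrs);
    }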
2021-09-26  Merge tag 'for-linus-5.15b-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip  (Linus Torvalds; 1 file, -3/+4)

Pull xen fixes from Juergen Gross:
 "Some minor cleanups and fixes of some theoretical bugs, as well as a
  fix of a bug introduced in 5.15-rc1"

* tag 'for-linus-5.15b-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
  xen/x86: fix PV trap handling on secondary processors
  xen/balloon: fix balloon kthread freezing
  swiotlb-xen: this is PV-only on x86
  xen/pci-swiotlb: reduce visibility of symbols
  PCI: only build xen-pcifront in PV-enabled environments
  swiotlb-xen: ensure to issue well-formed XENMEM_exchange requests
  Xen/gntdev: don't ignore kernel unmapping error
  xen/x86: drop redundant zeroing from cpu_initialize_context()
2021-09-20  swiotlb-xen: ensure to issue well-formed XENMEM_exchange requests  (Jan Beulich; 1 file, -3/+4)

While the hypervisor hasn't been enforcing this, we are still better off
avoiding the issuing of requests with GFNs not aligned to the requested
order. Instead of also altering the value in the call to panic(), drop
it there, as it is static and hence easy to determine without being part
of the panic message.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Link: https://lore.kernel.org/r/7b3998e3-1233-4e5a-89ec-d740e77eb166@suse.com
Signed-off-by: Juergen Gross <jgross@suse.com>
2021-09-17  Merge tag 'for-linus-5.15b-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip  (Linus Torvalds; 1 file, -20/+17)

Pull xen fixes from Juergen Gross:

 - The first hunk of a Xen swiotlb fixup series fixing multiple minor
   issues and doing some small cleanups

 - Some further Xen related fixes avoiding WARN() splats when running
   as Xen guests or dom0

 - A Kconfig fix allowing the pvcalls frontend to be built as a module

* tag 'for-linus-5.15b-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
  swiotlb-xen: drop DEFAULT_NSLABS
  swiotlb-xen: arrange to have buffer info logged
  swiotlb-xen: drop leftover __ref
  swiotlb-xen: limit init retries
  swiotlb-xen: suppress certain init retries
  swiotlb-xen: maintain slab count properly
  swiotlb-xen: fix late init retry
  swiotlb-xen: avoid double free
  xen/pvcalls: backend can be a module
  xen: fix usage of pmd_populate in mremap for pv guests
  xen: reset legacy rtc flag for PV domU
  PM: base: power: don't try to use non-existing RTC for storing data
  xen/balloon: use a kernel thread instead a workqueue
2021-09-15  swiotlb-xen: drop DEFAULT_NSLABS  (Jan Beulich; 1 file, -2/+0)

It was introduced by 4035b43da6da ("xen-swiotlb: remove xen_set_nslabs")
and then not removed by 2d29960af0be ("swiotlb: dynamically allocate
io_tlb_default_mem").

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/15259326-209a-1d11-338c-5018dc38abe8@suse.com
Signed-off-by: Juergen Gross <jgross@suse.com>
2021-09-15  swiotlb-xen: arrange to have buffer info logged  (Jan Beulich; 1 file, -1/+1)

I consider it unhelpful that the address and size of the buffer aren't
put in the log file; it makes diagnosing issues needlessly harder. The
majority of callers of swiotlb_init() also pass 1 for the "verbose"
parameter.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/2e3c8e68-36b2-4ae9-b829-bf7f75d39d47@suse.com
Signed-off-by: Juergen Gross <jgross@suse.com>
2021-09-15  swiotlb-xen: drop leftover __ref  (Jan Beulich; 1 file, -1/+1)

Commit a98f565462f0 ("xen-swiotlb: split xen_swiotlb_init") should not
only have added __init to the split-off function, but also should have
dropped __ref from the one left.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/7cd163e1-fe13-270b-384c-2708e8273d34@suse.com
Signed-off-by: Juergen Gross <jgross@suse.com>
2021-09-15  swiotlb-xen: limit init retries  (Jan Beulich; 1 file, -2/+2)

Due to the use of max(1024, ...) there's no point retrying (and issuing
bogus log messages) when the number of slabs is already no larger than
this minimum value.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/984fa426-2b7b-4b77-5ce8-766619575b7f@suse.com
Signed-off-by: Juergen Gross <jgross@suse.com>
2021-09-15  swiotlb-xen: suppress certain init retries  (Jan Beulich; 1 file, -1/+2)

Only on the second of the paths leading to xen_swiotlb_init()'s "error"
label is it useful to retry the allocation; the first one has already
iterated through all possible order values.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/56477481-87da-4962-9661-5e1b277efde0@suse.com
Signed-off-by: Juergen Gross <jgross@suse.com>
2021-09-15  swiotlb-xen: maintain slab count properly  (Jan Beulich; 1 file, -10/+9)

Generic swiotlb code makes sure to keep the slab count a multiple of the
number of slabs per segment. Yet even without checking whether any such
assumption is made elsewhere, it is easy to see that xen_swiotlb_fixup()
might alter unrelated memory when calling xen_create_contiguous_region()
for the last segment when that's not a full one: the function acts on
full order-N regions, not individual pages.

Align the slab count suitably when halving it for a retry. Add a build
time check and a runtime one. Replace the no longer useful local
variable "slabs" by an "order" one calculated just once, outside of the
loop. Re-use "order" for calculating "dma_bits", and change the type of
the latter as well as that of "i" while touching this anyway.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/dc054cb0-bec4-4db0-fc06-c9fc957b6e66@suse.com
Signed-off-by: Juergen Gross <jgross@suse.com>
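The key line of the fix is the retry path. A hedged sketch, shown only
to illustrate keeping the count segment-aligned (the exact expression in
the patch may differ):

    /* when a contiguous-region exchange fails, halve the slab count for
     * the retry, but keep it a multiple of IO_TLB_SEGSIZE so the last
     * segment handed to xen_create_contiguous_region() remains a full
     * order-N region */
    nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);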
2021-09-15  swiotlb-xen: fix late init retry  (Jan Beulich; 1 file, -2/+2)

The commit referenced below removed the assignment of "bytes" from
xen_swiotlb_init() without - like done for xen_swiotlb_init_early() -
adding an assignment on the retry path, thus leading to excessively
sized allocations upon retries.

Fixes: 2d29960af0be ("swiotlb: dynamically allocate io_tlb_default_mem")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: stable@vger.kernel.org
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/778299d6-9cfd-1c13-026e-25ee5d14ecb3@suse.com
Signed-off-by: Juergen Gross <jgross@suse.com>
2021-09-15  swiotlb-xen: avoid double free  (Jan Beulich; 1 file, -1/+0)

Of the two paths leading to the "error" label in xen_swiotlb_init(), one
didn't allocate anything, while the other had already freed what was
allocated.

Fixes: b82776005369 ("xen/swiotlb: Use the swiotlb_late_init_with_tbl to init Xen-SWIOTLB late when PV PCI is used")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: stable@vger.kernel.org
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/ce9c2adb-8a52-6293-982a-0d6ece943ac6@suse.com
Signed-off-by: Juergen Gross <jgross@suse.com>
2021-09-03  Merge branch 'stable/for-linus-5.15' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb  (Linus Torvalds; 1 file, -4/+4)

Pull swiotlb updates from Konrad Rzeszutek Wilk:
 "A new feature called restricted DMA pools. It allows SWIOTLB to
  utilize per-device (or per-platform) allocated memory pools instead
  of using the global one.

  The first big user of this is ARM Confidential Computing where the
  memory for DMA operations can be set per platform"

* 'stable/for-linus-5.15' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb: (23 commits)
  swiotlb: use depends on for DMA_RESTRICTED_POOL
  of: restricted dma: Don't fail device probe on rmem init failure
  of: Move of_dma_set_restricted_buffer() into device.c
  powerpc/svm: Don't issue ultracalls if !mem_encrypt_active()
  s390/pv: fix the forcing of the swiotlb
  swiotlb: Free tbl memory in swiotlb_exit()
  swiotlb: Emit diagnostic in swiotlb_exit()
  swiotlb: Convert io_default_tlb_mem to static allocation
  of: Return success from of_dma_set_restricted_buffer() when !OF_ADDRESS
  swiotlb: add overflow checks to swiotlb_bounce
  swiotlb: fix implicit debugfs declarations
  of: Add plumbing for restricted DMA pool
  dt-bindings: of: Add restricted DMA pool
  swiotlb: Add restricted DMA pool initialization
  swiotlb: Add restricted DMA alloc/free support
  swiotlb: Refactor swiotlb_tbl_unmap_single
  swiotlb: Move alloc_size to swiotlb_find_slots
  swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing
  swiotlb: Update is_swiotlb_active to add a struct device argument
  swiotlb: Update is_swiotlb_buffer to add a struct device argument
  ...
2021-08-09  xen: swiotlb: return error code from xen_swiotlb_map_sg()  (Martin Oliveira; 1 file, -1/+1)

The .map_sg() op now expects an error code instead of zero on failure.

xen_swiotlb_map_sg() may only fail if xen_swiotlb_map_page() fails, but
xen_swiotlb_map_page() only supports returning errors as
DMA_MAPPING_ERROR. So coalesce all errors into -EIO, per the
documentation for dma_map_sgtable().

Signed-off-by: Martin Oliveira <martin.oliveira@eideticom.com>
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
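A hedged sketch of the updated contract, condensed from the code of that
era (xen_swiotlb_map_page and xen_swiotlb_unmap_sg are the file's
existing helpers; details may differ from the actual patch):

    #include <linux/dma-mapping.h>
    #include <linux/scatterlist.h>

    static int xen_swiotlb_map_sg(struct device *dev,
                    struct scatterlist *sgl, int nelems,
                    enum dma_data_direction dir, unsigned long attrs)
    {
            struct scatterlist *sg;
            int i;

            for_each_sg(sgl, sg, nelems, i) {
                    sg->dma_address = xen_swiotlb_map_page(dev,
                                    sg_page(sg), sg->offset, sg->length,
                                    dir, attrs);
                    if (sg->dma_address == DMA_MAPPING_ERROR)
                            goto out_unmap;
                    sg_dma_len(sg) = sg->length;
            }
            return nelems;

    out_unmap:
            xen_swiotlb_unmap_sg(dev, sgl, i, dir,
                                 attrs | DMA_ATTR_SKIP_CPU_SYNC);
            sg_dma_len(sgl) = 0;
            return -EIO;    /* coalesce all failures into -EIO */
    }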
2021-07-24  swiotlb: Convert io_default_tlb_mem to static allocation  (Will Deacon; 1 file, -2/+2)

Since commit 69031f500865 ("swiotlb: Set dev->dma_io_tlb_mem to the
swiotlb pool used"), 'struct device' may hold a copy of the global
'io_default_tlb_mem' pointer if the device is using swiotlb for DMA. A
subsequent call to swiotlb_exit() will therefore leave dangling pointers
behind in these device structures, resulting in KASAN splats such as:

| BUG: KASAN: use-after-free in __iommu_dma_unmap_swiotlb+0x64/0xb0
| Read of size 8 at addr ffff8881d7830000 by task swapper/0/0
|
| CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.12.0-rc3-debug #1
| Hardware name: HP HP Desktop M01-F1xxx/87D6, BIOS F.12 12/17/2020
| Call Trace:
|  <IRQ>
|  dump_stack+0x9c/0xcf
|  print_address_description.constprop.0+0x18/0x130
|  kasan_report.cold+0x7f/0x111
|  __iommu_dma_unmap_swiotlb+0x64/0xb0
|  nvme_pci_complete_rq+0x73/0x130
|  blk_complete_reqs+0x6f/0x80
|  __do_softirq+0xfc/0x3be

Convert 'io_default_tlb_mem' to a static structure, so that the
per-device pointers remain valid after swiotlb_exit() has been invoked.
All users are updated to reference the static structure directly, using
the 'nslabs' field to determine whether swiotlb has been initialised.
The 'slots' array is still allocated dynamically and referenced via a
pointer rather than a flexible array member.

Cc: Claire Chang <tientzu@chromium.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Fixes: 69031f500865 ("swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used")
Reported-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad@kernel.org>
2021-07-14  swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing  (Claire Chang; 1 file, -1/+1)

Propagate the swiotlb_force into io_tlb_default_mem->force_bounce and
use it to determine whether to bounce the data or not. This will be
useful later to allow for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
[v2: Includes Will's fix]
2021-07-14  swiotlb: Update is_swiotlb_buffer to add a struct device argument  (Claire Chang; 1 file, -1/+1)

Update is_swiotlb_buffer to add a struct device argument. This will be
useful later to allow for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
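A hedged sketch of the per-device form this restricted-DMA series moves
towards (field names follow the 5.14-era struct io_tlb_mem; other
versions differ):

    static inline bool is_swiotlb_buffer(struct device *dev,
                                         phys_addr_t paddr)
    {
            struct io_tlb_mem *mem = dev->dma_io_tlb_mem;

            /* the device-local pool pointer is what later makes
             * per-device (restricted) pools possible */
            return mem && paddr >= mem->start && paddr < mem->end;
    }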
2021-05-14  xen/swiotlb: check if the swiotlb has already been initialized  (Stefano Stabellini; 1 file, -0/+5)

xen_swiotlb_init calls swiotlb_late_init_with_tbl, which fails with
-ENOMEM if the swiotlb has already been initialized.

Add an explicit io_tlb_default_mem != NULL check at the beginning of
xen_swiotlb_init. If the swiotlb is already initialized, print a warning
and return -EEXIST.

On x86, the error propagates. On ARM, we don't actually need a special
swiotlb buffer (yet); any buffer would do. So ignore the error and
continue.

CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210512201823.1963-3-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
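A sketch of the guard this adds (io_tlb_default_mem was still a
dynamically allocated pointer at this point, per the commit below; the
exact message text is illustrative):

    /* at the top of xen_swiotlb_init(): bail out early if the generic
     * swiotlb has already been set up, instead of failing later with
     * -ENOMEM */
    if (io_tlb_default_mem != NULL) {
            pr_warn("swiotlb buffer already initialized\n");
            return -EEXIST;
    }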
2021-03-19  swiotlb: dynamically allocate io_tlb_default_mem  (Christoph Hellwig; 1 file, -14/+8)

Instead of allocating ->list and ->orig_addr separately, just do one
dynamic allocation for the actual io_tlb_mem structure. This simplifies
a lot of the initialization code, and also allows just checking
io_tlb_default_mem to see if swiotlb is in use.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-03-19  swiotlb: move global variables into a new io_tlb_mem structure  (Claire Chang; 1 file, -1/+1)

Added a new struct, io_tlb_mem, as the IO TLB memory pool descriptor and
moved relevant global variables into that struct. This will be useful
later to allow for restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
[hch: rebased]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-03-17  xen-swiotlb: remove the unused size argument from xen_swiotlb_fixup  (Christoph Hellwig; 1 file, -4/+3)

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-03-17  xen-swiotlb: split xen_swiotlb_init  (Christoph Hellwig; 1 file, -54/+70)

Split xen_swiotlb_init into a normal and an early case. That makes both
much simpler and more readable, and also allows marking the early code
as __init and x86-only.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-03-17  swiotlb: lift the double initialization protection from xen-swiotlb  (Christoph Hellwig; 1 file, -7/+0)

Lift the double initialization protection from xen-swiotlb to the core
code to avoid exposing too many swiotlb internals. Also upgrade the
check to a warning, as it should not happen.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-03-17  xen-swiotlb: remove xen_io_tlb_start and xen_io_tlb_nslabs  (Christoph Hellwig; 1 file, -32/+25)

The xen_io_tlb_start and xen_io_tlb_nslabs variables are now only used
in xen_swiotlb_init, so replace them with local variables.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-03-17  xen-swiotlb: remove xen_set_nslabs  (Christoph Hellwig; 1 file, -12/+7)

The xen_set_nslabs function is a little weird: it has just one caller,
that caller passes a global variable as the argument, which is then
overridden in the function and a derivative of it returned. Just add a
cpp symbol for the default size using a readable constant and open code
the remaining three lines in the caller.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-03-17  xen-swiotlb: use io_tlb_end in xen_swiotlb_dma_supported  (Christoph Hellwig; 1 file, -8/+2)

Use the existing variable that holds the physical address for
xen_io_tlb_end to simplify xen_swiotlb_dma_supported a bit, and remove
the otherwise unused xen_io_tlb_end variable and the xen_virt_to_bus
helper.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-03-17  xen-swiotlb: use is_swiotlb_buffer in is_xen_swiotlb_buffer  (Christoph Hellwig; 1 file, -4/+2)

Use is_swiotlb_buffer to check if a physical address is a swiotlb
buffer. This works because xen-swiotlb does use the same buffer as the
main swiotlb code, and xen_io_tlb_{start,end} are just the addresses for
it that went through phys_to_virt.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
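A hedged sketch of the simplified check after this change (modelled on
the 5.12-era code; note that is_swiotlb_buffer() did not yet take a
struct device argument at this point, as that arrives in the commits
listed above):

    static int is_xen_swiotlb_buffer(struct device *dev,
                                     dma_addr_t dma_addr)
    {
            unsigned long bfn = XEN_PFN_DOWN(dma_to_phys(dev, dma_addr));
            phys_addr_t paddr = (phys_addr_t)bfn_to_local_pfn(bfn)
                                << XEN_PAGE_SHIFT;

            /* addresses outside our domain can alias addresses within
             * it, so only check those inside the domain */
            if (pfn_valid(PFN_DOWN(paddr)))
                    return is_swiotlb_buffer(paddr);
            return 0;
    }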
2021-03-17  swiotlb: split swiotlb_tbl_sync_single  (Christoph Hellwig; 1 file, -2/+2)

Split swiotlb_tbl_sync_single into two separate functions for the
to-device and to-cpu synchronization.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-03-17  swiotlb: remove the alloc_size parameter to swiotlb_tbl_unmap_single  (Christoph Hellwig; 1 file, -2/+2)

Now that swiotlb remembers the allocation size, there is no need to pass
it back to swiotlb_tbl_unmap_single.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2020-11-02  swiotlb: remove the tbl_dma_addr argument to swiotlb_tbl_map_single  (Christoph Hellwig; 1 file, -2/+1)

The tbl_dma_addr argument is used to check the DMA boundary for the
allocations, and thus needs to be a dma_addr_t. swiotlb-xen instead
passed a physical address, which could lead to incorrect results for
strange offsets. Fix this by removing the parameter entirely and
hard-coding the DMA address for io_tlb_start instead.

Fixes: 91ffe4ad534a ("swiotlb-xen: introduce phys_to_dma/dma_to_phys translations")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2020-10-06  dma-mapping: merge <linux/dma-noncoherent.h> into <linux/dma-map-ops.h>  (Christoph Hellwig; 1 file, -2/+1)

Move more nitty-gritty DMA implementation details into the common
internal header.

Signed-off-by: Christoph Hellwig <hch@lst.de>
2020-10-06  dma-mapping: split <linux/dma-mapping.h>  (Christoph Hellwig; 1 file, -0/+1)

Split out all the bits that are purely for dma_map_ops implementations
and related code into a new <linux/dma-map-ops.h> header, so that they
don't get pulled into all the drivers. That also means the architecture
specific <asm/dma-mapping.h> is not pulled in by <linux/dma-mapping.h>
any more, which leads to missing includes that were previously pulled in
by the x86 or arm versions in a few not overly portable drivers.

Signed-off-by: Christoph Hellwig <hch@lst.de>
2020-09-25  dma-mapping: add a new dma_alloc_pages API  (Christoph Hellwig; 1 file, -0/+2)

This API is the equivalent of alloc_pages, except that the returned
memory is guaranteed to be DMA addressable by the passed-in device. The
implementation will also be used to provide a more sensible replacement
for the DMA_ATTR_NON_CONSISTENT flag.

Additionally dma_alloc_noncoherent is switched over to use
dma_alloc_pages as its backend.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> (MIPS part)
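A minimal sketch of the new API as merged (the wrapper function is
hypothetical; the dma_alloc_pages/dma_free_pages signatures match the
5.10-era interface):

    #include <linux/dma-mapping.h>

    static int example_dma_pages(struct device *dev, size_t size)
    {
            dma_addr_t dma_handle;
            struct page *page;

            /* like alloc_pages(), but the memory is guaranteed to be
             * DMA addressable by 'dev' */
            page = dma_alloc_pages(dev, size, &dma_handle,
                                   DMA_BIDIRECTIONAL, GFP_KERNEL);
            if (!page)
                    return -ENOMEM;

            /* ... use page_address(page) on the CPU side and
             * dma_handle on the device side ... */

            dma_free_pages(dev, size, page, dma_handle,
                           DMA_BIDIRECTIONAL);
            return 0;
    }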
2020-08-04  xen/arm: introduce phys/dma translations in xen_dma_sync_for_*  (Stefano Stabellini; 1 file, -8/+24)

xen_dma_sync_for_cpu, xen_dma_sync_for_device and xen_arch_need_swiotlb
are getting called passing dma addresses. On some platforms dma
addresses could be different from physical addresses. Before doing any
operations on these addresses we need to convert them back to physical
addresses using dma_to_phys.

Move the arch_sync_dma_for_cpu and arch_sync_dma_for_device calls from
xen_dma_sync_for_cpu/device to swiotlb-xen.c, and add a call to
dma_to_phys to do address translations there.

dma_cache_maint is fixed by the next patch.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Acked-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20200710223427.6897-10-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
2020-08-04  swiotlb-xen: introduce phys_to_dma/dma_to_phys translations  (Stefano Stabellini; 1 file, -21/+32)

With some devices, physical addresses are different from dma addresses.
To be able to deal with these cases, we need to call phys_to_dma on
physical addresses (including machine addresses in Xen terminology)
before returning them from xen_swiotlb_alloc_coherent and
xen_swiotlb_map_page.

We also need to convert dma addresses back to physical addresses using
dma_to_phys in xen_swiotlb_free_coherent and xen_swiotlb_unmap_page if
we want to do any operations on them. Call dma_to_phys in
is_xen_swiotlb_buffer.

Introduce xen_phys_to_dma and call phys_to_dma in its implementation.
Introduce xen_dma_to_phys and call dma_to_phys in its implementation.
Call xen_phys_to_dma/xen_dma_to_phys instead of
xen_phys_to_bus/xen_bus_to_phys throughout swiotlb-xen.c.

Everything is taken care of by these changes except for
xen_swiotlb_alloc_coherent and xen_swiotlb_free_coherent, which need a
few explicit phys_to_dma/dma_to_phys calls.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20200710223427.6897-9-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
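A hedged sketch of the two helpers this commit introduces, composed per
the commit text (xen_phys_to_bus/xen_bus_to_phys are the pre-existing
Xen machine-address translators; the exact bodies may differ):

    static inline dma_addr_t xen_phys_to_dma(struct device *dev,
                                             phys_addr_t paddr)
    {
            /* physical -> Xen machine address -> device dma address */
            return phys_to_dma(dev, xen_phys_to_bus(dev, paddr));
    }

    static inline phys_addr_t xen_dma_to_phys(struct device *dev,
                                              dma_addr_t dma_addr)
    {
            /* device dma address -> Xen machine address -> physical */
            return xen_bus_to_phys(dev, dma_to_phys(dev, dma_addr));
    }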
2020-08-04  swiotlb-xen: remove XEN_PFN_PHYS  (Stefano Stabellini; 1 file, -6/+1)

XEN_PFN_PHYS is only used in one place in swiotlb-xen, making things
more complex than they need to be. Remove the definition of XEN_PFN_PHYS
and open code the cast in the one place where it is needed.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20200710223427.6897-8-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
2020-08-04  swiotlb-xen: add struct device * parameter to is_xen_swiotlb_buffer  (Stefano Stabellini; 1 file, -4/+4)

No functional changes. The parameter is unused in this patch, but will
be used by the next patches.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Link: https://lore.kernel.org/r/20200710223427.6897-7-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
2020-08-04  swiotlb-xen: add struct device * parameter to xen_dma_sync_for_device  (Stefano Stabellini; 1 file, -2/+2)

No functional changes. The parameter is unused in this patch, but will
be used by the next patches.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Link: https://lore.kernel.org/r/20200710223427.6897-6-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
2020-08-04  swiotlb-xen: add struct device * parameter to xen_dma_sync_for_cpu  (Stefano Stabellini; 1 file, -2/+2)

No functional changes. The parameter is unused in this patch, but will
be used by the next patches.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Link: https://lore.kernel.org/r/20200710223427.6897-5-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
2020-08-04  swiotlb-xen: add struct device * parameter to xen_bus_to_phys  (Stefano Stabellini; 1 file, -5/+5)

No functional changes. The parameter is unused in this patch, but will
be used by the next patches.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Link: https://lore.kernel.org/r/20200710223427.6897-4-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
2020-08-04  swiotlb-xen: add struct device * parameter to xen_phys_to_bus  (Stefano Stabellini; 1 file, -7/+7)

No functional changes. The parameter is unused in this patch, but will
be used by the next patches.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Link: https://lore.kernel.org/r/20200710223427.6897-3-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
2020-08-04  swiotlb-xen: remove start_dma_addr  (Stefano Stabellini; 1 file, -5/+2)

It is not strictly needed; call virt_to_phys on xen_io_tlb_start
instead. It will be useful not to have a start_dma_addr around with the
next patches.

Note that virt_to_phys is not the same as xen_virt_to_bus, but it is
actually used to be compared against __pa(xen_io_tlb_start) as passed to
swiotlb_init_with_tbl, so virt_to_phys is actually what we want.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Link: https://lore.kernel.org/r/20200710223427.6897-2-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
2020-08-04  swiotlb-xen: use vmalloc_to_page on vmalloc virt addresses  (Boris Ostrovsky; 1 file, -1/+7)

xen_alloc_coherent_pages might return pages for which virt_to_phys and
virt_to_page don't work, e.g. ioremap'ed pages. So in
xen_swiotlb_free_coherent we can't assume that virt_to_page works.
Instead add an is_vmalloc_addr check and use vmalloc_to_page on vmalloc
virt addresses.

This patch fixes the following crash at boot on RPi4 (the underlying
issue is not RPi4 specific):
https://marc.info/?l=xen-devel&m=158862573216800

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Link: https://lore.kernel.org/r/20200710223427.6897-1-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
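The fix boils down to picking the right virt-to-page translation. A
minimal sketch of the logic (the helper name is hypothetical; the real
change is open-coded in xen_swiotlb_free_coherent):

    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    static struct page *example_coherent_to_page(void *vaddr)
    {
            /* ioremap'ed/vmalloc'ed memory is not in the linear map,
             * so virt_to_page() would produce garbage for it */
            if (is_vmalloc_addr(vaddr))
                    return vmalloc_to_page(vaddr);
            return virt_to_page(vaddr);
    }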