path: root/kernel/dma
Age | Commit message | Author | Files | Lines
2021-10-20 | Merge tag 'dma-mapping-5.15-2' of git://git.infradead.org/users/hch/dma-mapping | Linus Torvalds | 3 files, -36/+48

Pull dma-mapping fixes from Christoph Hellwig:

 - fix more dma-debug fallout (Gerald Schaefer, Hamza Mahfooz)
 - fix a kerneldoc warning (Logan Gunthorpe)

* tag 'dma-mapping-5.15-2' of git://git.infradead.org/users/hch/dma-mapping:
  dma-debug: teach add_dma_entry() about DMA_ATTR_SKIP_CPU_SYNC
  dma-debug: fix sg checks in debug_dma_map_sg()
  dma-mapping: fix the kerneldoc for dma_map_sgtable()
2021-10-18 | dma-debug: teach add_dma_entry() about DMA_ATTR_SKIP_CPU_SYNC | Hamza Mahfooz | 3 files, -24/+36

Mapping something twice should be possible as long as DMA_ATTR_SKIP_CPU_SYNC is passed to the second (and any subsequent) mapping operation that maps the same memory. So, don't issue a warning if that condition is met in add_dma_entry().

Signed-off-by: Hamza Mahfooz <someguy@effective-light.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
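For illustration, a minimal sketch of the now-tolerated pattern; the driver function below is hypothetical (dev, page and size are assumed to come from the caller), only the DMA-API calls are real:

  #include <linux/dma-mapping.h>

  /* The second mapping of the same page no longer trips dma-debug,
   * because it skips the CPU sync and so cannot corrupt data that the
   * device already owns. */
  static int map_same_page_twice(struct device *dev, struct page *page,
                                 size_t size)
  {
          dma_addr_t first, second;

          first = dma_map_page_attrs(dev, page, 0, size, DMA_TO_DEVICE, 0);
          if (dma_mapping_error(dev, first))
                  return -ENOMEM;

          second = dma_map_page_attrs(dev, page, 0, size, DMA_TO_DEVICE,
                                      DMA_ATTR_SKIP_CPU_SYNC);
          if (dma_mapping_error(dev, second)) {
                  dma_unmap_page_attrs(dev, first, size, DMA_TO_DEVICE, 0);
                  return -ENOMEM;
          }

          /* ... hardware uses both mappings ... */

          dma_unmap_page_attrs(dev, second, size, DMA_TO_DEVICE,
                               DMA_ATTR_SKIP_CPU_SYNC);
          dma_unmap_page_attrs(dev, first, size, DMA_TO_DEVICE, 0);
          return 0;
  }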
2021-10-11 | dma-debug: fix sg checks in debug_dma_map_sg() | Gerald Schaefer | 1 file, -6/+6

The following warning occurred sporadically on s390:

  DMA-API: nvme 0006:00:00.0: device driver maps memory from kernel text or rodata [addr=0000000048cc5e2f] [len=131072]
  WARNING: CPU: 4 PID: 825 at kernel/dma/debug.c:1083 check_for_illegal_area+0xa8/0x138

It is a false-positive warning, due to broken logic in debug_dma_map_sg(). check_for_illegal_area() checks for overlap of sg elements with kernel text or rodata, but it is called with sg_dma_len(s) instead of s->length as parameter. After the call to ->map_sg(), sg_dma_len() will contain the length of possibly combined sg elements in the DMA address space, not the individual sg element length, which would be s->length. The check will then use the physical start address of an sg element and add the DMA length for the overlap check, which can result in a false warning, because the DMA length can be larger than the actual single sg element length.

In addition, the call to check_for_illegal_area() happens in the iteration over mapped_ents, which will not include all individual sg elements if any of them were combined in ->map_sg().

Fix this by using s->length instead of sg_dma_len(s). Also put the call to check_for_illegal_area() in a separate loop, iterating over all the individual sg elements ("nents" instead of "mapped_ents"). While at it, as suggested by Robin Murphy, also move check_for_stack() inside the new loop, as it is similarly concerned with validating the individual sg elements.

Link: https://lore.kernel.org/lkml/20210705185252.4074653-1-gerald.schaefer@linux.ibm.com
Fixes: 884d05970bfb ("dma-debug: use sg_dma_len accessor")
Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
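A condensed sketch of the fixed loop structure (check_for_stack() and check_for_illegal_area() are internal helpers in kernel/dma/debug.c; the second loop body is schematic):

  void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
                        int nents, int mapped_ents, int direction)
  {
          struct scatterlist *s;
          int i;

          /* Validate each individual CPU-side element: iterate over
           * nents, not mapped_ents, and use s->length, not sg_dma_len(s). */
          for_each_sg(sg, s, nents, i) {
                  check_for_stack(dev, sg_page(s), s->offset);
                  if (!PageHighMem(sg_page(s)))
                          check_for_illegal_area(dev, sg_virt(s), s->length);
          }

          /* Track the (possibly combined) DMA-side entries as before. */
          for_each_sg(sg, s, mapped_ents, i) {
                  /* ... allocate and register a dma_debug_entry ... */
          }
  }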
2021-10-11 | dma-mapping: fix the kerneldoc for dma_map_sgtable() | Logan Gunthorpe | 1 file, -6/+6

htmldocs began producing the following warnings:

  kernel/dma/mapping.c:256: WARNING: Definition list ends without a blank line; unexpected unindent.
  kernel/dma/mapping.c:257: WARNING: Bullet list ends without a blank line; unexpected unindent.

Reformatting the list without hyphens fixes the warnings and produces both readable text and HTML output.

Fixes: fffe3cc8c219 ("dma-mapping: allow map_sg() ops to return negative error codes")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
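An illustrative (not verbatim) kerneldoc fragment showing the style that keeps Sphinx happy: indented error-code entries without hyphen bullets, separated from the surrounding text by blank lines:

  /**
   * dma_map_sgtable - Map the given buffer for DMA
   * @dev:	The device for which to perform the DMA operation
   * @sgt:	The sg_table object describing the buffer
   * @dir:	DMA direction
   * @attrs:	Optional DMA attributes for the map operation
   *
   * Returns 0 on success or a negative error code on error. The following
   * error codes are supported when the sg_table is used as an input:
   *
   *   -EINVAL	An invalid argument, unaligned access or other error
   *		in usage. Will not succeed if retried.
   *   -ENOMEM	Insufficient resources (like memory or IOVA space) to
   *		complete the mapping. May succeed if retried.
   *   -EIO	Legacy error code with an unknown meaning, e.g. this is
   *		returned if a lower level call returned DMA_MAPPING_ERROR.
   */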
2021-09-17 | Merge tag 'dma-mapping-5.15-1' of git://git.infradead.org/users/hch/dma-mapping | Linus Torvalds | 2 files, -2/+4

Pull dma-mapping fixes from Christoph Hellwig:

 - page align size in sparc32 arch_dma_alloc (Andreas Larsson)
 - tone down a new dma-debug message (Hamza Mahfooz)
 - fix the kerneldoc for dma_map_sg_attrs (me)

* tag 'dma-mapping-5.15-1' of git://git.infradead.org/users/hch/dma-mapping:
  sparc32: page align size in arch_dma_alloc
  dma-debug: prevent an error message from causing runtime problems
  dma-mapping: fix the kerneldoc for dma_map_sg_attrs
2021-09-13 | dma-debug: prevent an error message from causing runtime problems | Hamza Mahfooz | 1 file, -1/+2

For some drivers that use the DMA API, this error message can be reached several million times per second, spamming the kernel's printk buffer and driving CPU usage up to 100% (so it should be rate limited). However, since there is at least one driver in mainline that suffers from the error condition, it is more useful to err_printk() here instead of just rate limiting the error message, in the hope that this makes it easier to spot other drivers that suffer from the same issue.

Link: https://lkml.kernel.org/r/fd67fbac-64bf-f0ea-01e1-5938ccfab9d0@arm.com
Reported-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Hamza Mahfooz <someguy@effective-light.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-09-06 | dma-mapping: fix the kerneldoc for dma_map_sg_attrs | Christoph Hellwig | 1 file, -1/+2

Add the missing description for the nents parameter, and fix a trivial misalignment.

Fixes: fffe3cc8c219 ("dma-mapping: allow map_sg() ops to return negative error codes")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-09-03 | Merge branch 'stable/for-linus-5.15' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb | Linus Torvalds | 4 files, -111/+319

Pull swiotlb updates from Konrad Rzeszutek Wilk:
 "A new feature called restricted DMA pools. It allows SWIOTLB to
  utilize per-device (or per-platform) allocated memory pools instead
  of using the global one.

  The first big user of this is ARM Confidential Computing where the
  memory for DMA operations can be set per platform"

* 'stable/for-linus-5.15' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb: (23 commits)
  swiotlb: use depends on for DMA_RESTRICTED_POOL
  of: restricted dma: Don't fail device probe on rmem init failure
  of: Move of_dma_set_restricted_buffer() into device.c
  powerpc/svm: Don't issue ultracalls if !mem_encrypt_active()
  s390/pv: fix the forcing of the swiotlb
  swiotlb: Free tbl memory in swiotlb_exit()
  swiotlb: Emit diagnostic in swiotlb_exit()
  swiotlb: Convert io_default_tlb_mem to static allocation
  of: Return success from of_dma_set_restricted_buffer() when !OF_ADDRESS
  swiotlb: add overflow checks to swiotlb_bounce
  swiotlb: fix implicit debugfs declarations
  of: Add plumbing for restricted DMA pool
  dt-bindings: of: Add restricted DMA pool
  swiotlb: Add restricted DMA pool initialization
  swiotlb: Add restricted DMA alloc/free support
  swiotlb: Refactor swiotlb_tbl_unmap_single
  swiotlb: Move alloc_size to swiotlb_find_slots
  swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing
  swiotlb: Update is_swiotlb_active to add a struct device argument
  swiotlb: Update is_swiotlb_buffer to add a struct device argument
  ...
2021-08-31 | swiotlb: use depends on for DMA_RESTRICTED_POOL | Claire Chang | 1 file, -2/+1

Use "depends on" instead of "select" for DMA_RESTRICTED_POOL; otherwise it makes SWIOTLB user-configurable and causes compile errors on some architectures (e.g. mips). A Kconfig sketch of the change follows.

Fixes: 0b84e4f8b793 ("swiotlb: Add restricted DMA pool initialization")
Acked-by: Will Deacon <will@kernel.org>
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Claire Chang <tientzu@chromium.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
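The distinction matters because "select" forces a symbol on regardless of that symbol's own dependencies. A hedged Kconfig sketch of the shape of the fix (the exact prompt text and dependency list in the tree may differ):

  config DMA_RESTRICTED_POOL
  	bool "DMA Restricted Pool"
  	# Before: "select SWIOTLB", which forced SWIOTLB on even for
  	# architectures that cannot build it. After: depend on it instead,
  	# so the option only appears where SWIOTLB is already available.
  	depends on OF && OF_RESERVED_MEM && SWIOTLB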
2021-08-19 | dma-mapping: make the global coherent pool conditional | Christoph Hellwig | 1 file, -22/+27

Only build the code to support the global coherent pool if support for it is enabled.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Dillon Min <dillon.minfei@gmail.com>
2021-08-18 | dma-mapping: add a dma_init_global_coherent helper | Christoph Hellwig | 1 file, -18/+14

Add a new helper to initialize the global coherent pool. This both cleans up the existing initialization, which indirects through the reserved_mem_ops that are normally only used for struct device, and also allows using the global pool for non-devicetree architectures.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Dillon Min <dillon.minfei@gmail.com>
2021-08-18 | dma-mapping: simplify dma_init_coherent_memory | Christoph Hellwig | 1 file, -45/+33

Return the allocated dma_coherent_mem structure, set the use_dma_pfn_offset, and print the failure warning inside of dma_init_coherent_memory instead of leaving that to the callers.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Dillon Min <dillon.minfei@gmail.com>
2021-08-18 | dma-mapping: allow using the global coherent pool for !ARM | Christoph Hellwig | 1 file, -0/+2

Switch an ifdef so that the global coherent pool is initialized for any architecture that selects the DMA_GLOBAL_POOL symbol instead of hardcoding ARM.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Dillon Min <dillon.minfei@gmail.com>
2021-08-18 | dma-direct: add support for dma_coherent_default_memory | Christoph Hellwig | 2 files, -0/+19

Add an option to allocate uncached memory for dma_alloc_coherent from the global dma_coherent_default_memory. This will allow moving arm-nommu (and eventually other platforms) to use generic code for allocating uncached memory from a pre-populated pool.

Note that this is a different pool from the one that platforms that can remap at runtime use for GFP_ATOMIC allocations for now, although there might be opportunities to eventually end up with a common codebase for the two use cases.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Dillon Min <dillon.minfei@gmail.com>
2021-08-14 | dma-mapping: return an unsigned int from dma_map_sg{,_attrs} | Christoph Hellwig | 1 file, -1/+1

These can only return 0 for failure or the number of entries, so turn the return value into an unsigned int.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
2021-08-09 | dma-mapping: disallow .map_sg operations from returning zero on error | Logan Gunthorpe | 1 file, -3/+1

Now that all the .map_sg operations have been converted to returning proper error codes, drop the code to handle a zero return value and add a warning if zero is returned.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-08-09 | dma-mapping: return error code from dma_dummy_map_sg() | Martin Oliveira | 1 file, -1/+1

The .map_sg() op now expects an error code instead of zero on failure. The only errno to return is -EINVAL in the case when DMA is not supported.

Signed-off-by: Martin Oliveira <martin.oliveira@eideticom.com>
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-08-09 | dma-direct: return appropriate error code from dma_direct_map_sg() | Logan Gunthorpe | 1 file, -1/+1

Now that the map_sg() op expects error codes instead of returning zero on error, convert dma_direct_map_sg() to return an error code. Per the documentation for dma_map_sgtable(), -EIO is returned for a DMA_MAPPING_ERROR with unknown cause.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-08-09 | dma-mapping: allow map_sg() ops to return negative error codes | Logan Gunthorpe | 1 file, -8/+74

Allow dma_map_sgtable() to pass errors from the map_sg() ops. This will be required for returning appropriate error codes when mapping P2PDMA memory.

Introduce __dma_map_sg_attrs(), which will return the raw error code from the map_sg operation (whether it be negative or zero). Then add a dma_map_sg_attrs() wrapper to convert any negative errors to zero to satisfy the existing calling convention.

dma_map_sgtable() defines three error codes that .map_sg implementations are allowed to return: -EINVAL, -ENOMEM and -EIO. The last of these is a generic return for cases that are passing DMA_MAPPING_ERROR through. dma_map_sgtable() will convert a zero error return from old map_sg() ops into a -EIO return, and will return any negative errors as reported.

This allows map_sg implementations to start returning multiple negative error codes. Legacy map_sg implementations can continue to return zero until they are all converted.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
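A condensed sketch of the resulting call paths, close to but not verbatim from kernel/dma/mapping.c (the ops dispatch is simplified to a direct ->map_sg call):

  #include <linux/dma-map-ops.h>
  #include <linux/dma-mapping.h>
  #include <linux/scatterlist.h>

  static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
                                int nents, enum dma_data_direction dir,
                                unsigned long attrs)
  {
          const struct dma_map_ops *ops = get_dma_ops(dev);

          /* Raw result: > 0 mapped entries, < 0 errno, 0 legacy failure. */
          return ops->map_sg(dev, sg, nents, dir, attrs);
  }

  /* Legacy convention: any failure is reported as 0. */
  unsigned int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
                                int nents, enum dma_data_direction dir,
                                unsigned long attrs)
  {
          int ret = __dma_map_sg_attrs(dev, sg, nents, dir, attrs);

          return ret < 0 ? 0 : ret;
  }

  /* New convention: 0 on success, -EINVAL/-ENOMEM/-EIO on failure. */
  int dma_map_sgtable(struct device *dev, struct sg_table *sgt,
                      enum dma_data_direction dir, unsigned long attrs)
  {
          int nents = __dma_map_sg_attrs(dev, sgt->sgl, sgt->orig_nents,
                                         dir, attrs);

          if (nents == 0)
                  return -EIO;    /* unconverted legacy .map_sg op */
          if (nents < 0)
                  return nents;
          sgt->nents = nents;
          return 0;
  }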
2021-08-09 | dma-debug: fix debugfs initialization order | Anthony Iliopoulos | 1 file, -3/+4

Due to link order, dma_debug_init is called before debugfs has a chance to initialize (via debugfs_init which also happens in the core initcall stage), so the directories for dma-debug are never created.

Decouple dma_debug_fs_init from dma_debug_init and defer its init until core_initcall_sync (after debugfs has been initialized) while letting dma-debug initialization occur as soon as possible to catch any early mappings, as suggested in [1].

[1] https://lore.kernel.org/linux-iommu/YIgGa6yF%2Fadg8OSN@kroah.com/

Fixes: 15b28bbcd567 ("dma-debug: move initialization to common code")
Signed-off-by: Anthony Iliopoulos <ailiop@suse.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
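A minimal sketch of the deferral (the directory name and file set follow kernel/dma/debug.c, abbreviated here to a single file):

  #include <linux/debugfs.h>
  #include <linux/init.h>

  static bool global_disable;    /* dma-debug's master switch */

  /* Runs at core_initcall_sync time, i.e. after debugfs_init() has run
   * in the core initcall stage, so directory creation can succeed. */
  static int __init dma_debug_fs_init(void)
  {
          struct dentry *dentry = debugfs_create_dir("dma-api", NULL);

          debugfs_create_bool("disabled", 0444, dentry, &global_disable);
          /* ... the remaining dma-api/ files ... */
          return 0;
  }
  core_initcall_sync(dma_debug_fs_init);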
2021-08-09 | dma-debug: use memory_intersects() directly | Kefeng Wang | 1 file, -12/+2

There is already a memory_intersects() helper in sections.h; use memory_intersects() directly instead of the private overlap() helper.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
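A hedged sketch of the simplification; err_printk() is a dma-debug-internal macro and the message is abbreviated, while _stext/_etext and __start_rodata/__end_rodata come from <asm/sections.h>:

  #include <asm/sections.h>

  static void check_for_illegal_area(struct device *dev, void *addr,
                                     unsigned long len)
  {
          /* memory_intersects(begin, end, virt, size) replaces the
           * private overlap() helper dma-debug used to carry. */
          if (memory_intersects(_stext, _etext, addr, len) ||
              memory_intersects(__start_rodata, __end_rodata, addr, len))
                  err_printk(dev, NULL,
                             "device driver maps memory from kernel text or rodata [addr=%p] [len=%lu]\n",
                             addr, len);
  }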
2021-07-24 | swiotlb: Free tbl memory in swiotlb_exit() | Will Deacon | 1 file, -6/+15

Although swiotlb_exit() frees the 'slots' metadata array referenced by 'io_tlb_default_mem', it leaves the underlying buffer pages allocated despite no longer being usable.

Extend swiotlb_exit() to free the buffer pages as well as the slots array.

Cc: Claire Chang <tientzu@chromium.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad@kernel.org>
2021-07-24 | swiotlb: Emit diagnostic in swiotlb_exit() | Will Deacon | 1 file, -0/+1

A recent debugging session would have been made a little bit easier if we had noticed sooner that swiotlb_exit() was being called during boot.

Add a simple diagnostic message to swiotlb_exit() to complement the one from swiotlb_print_info() during initialisation.

Cc: Claire Chang <tientzu@chromium.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20210705190352.GA19461@willie-the-truck
Suggested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad@kernel.org>
2021-07-24 | swiotlb: Convert io_default_tlb_mem to static allocation | Will Deacon | 1 file, -30/+36

Since commit 69031f500865 ("swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used"), 'struct device' may hold a copy of the global 'io_default_tlb_mem' pointer if the device is using swiotlb for DMA. A subsequent call to swiotlb_exit() will therefore leave dangling pointers behind in these device structures, resulting in KASAN splats such as:

  | BUG: KASAN: use-after-free in __iommu_dma_unmap_swiotlb+0x64/0xb0
  | Read of size 8 at addr ffff8881d7830000 by task swapper/0/0
  |
  | CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.12.0-rc3-debug #1
  | Hardware name: HP HP Desktop M01-F1xxx/87D6, BIOS F.12 12/17/2020
  | Call Trace:
  |  <IRQ>
  |  dump_stack+0x9c/0xcf
  |  print_address_description.constprop.0+0x18/0x130
  |  kasan_report.cold+0x7f/0x111
  |  __iommu_dma_unmap_swiotlb+0x64/0xb0
  |  nvme_pci_complete_rq+0x73/0x130
  |  blk_complete_reqs+0x6f/0x80
  |  __do_softirq+0xfc/0x3be

Convert 'io_default_tlb_mem' to a static structure, so that the per-device pointers remain valid after swiotlb_exit() has been invoked. All users are updated to reference the static structure directly, using the 'nslabs' field to determine whether swiotlb has been initialised. The 'slots' array is still allocated dynamically and referenced via a pointer rather than a flexible array member.

Cc: Claire Chang <tientzu@chromium.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Fixes: 69031f500865 ("swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used")
Reported-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad@kernel.org>
2021-07-16 | dma-mapping: handle vmalloc addresses in dma_common_{mmap,get_sgtable} | Roman Skakun | 1 file, -2/+10

xen-swiotlb can use vmalloc backed addresses for dma coherent allocations and uses the common helpers. Properly handle them to unbreak Xen on ARM platforms.

Fixes: 1b65c4e5a9af ("swiotlb-xen: use xen_alloc/free_coherent_pages")
Signed-off-by: Roman Skakun <roman_skakun@epam.com>
Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
[hch: split the patch, renamed the helpers]
Signed-off-by: Christoph Hellwig <hch@lst.de>
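A sketch of the helper this boils down to; the name follows the bracketed note about renamed helpers, and the exact name in the tree may differ:

  #include <linux/mm.h>
  #include <linux/vmalloc.h>

  /* Resolve the page backing a coherent allocation, which may live in
   * lowmem (virt_to_page) or, for xen-swiotlb on ARM, in vmalloc space. */
  static struct page *dma_common_vaddr_to_page(void *cpu_addr)
  {
          if (is_vmalloc_addr(cpu_addr))
                  return vmalloc_to_page(cpu_addr);
          return virt_to_page(cpu_addr);
  }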
2021-07-14 | swiotlb: add overflow checks to swiotlb_bounce | Dominique Martinet | 1 file, -3/+17

This is a follow-up on 5f89468e2f06 ("swiotlb: manipulate orig_addr when tlb_addr has offset") which fixed unaligned dma mappings, making sure the following overflows are caught:

- offset of the start of the slot within the device bigger than the requested address' offset; in other words, if the base address given in swiotlb_tbl_map_single to create the mapping (orig_addr) was after the requested address for the sync (tlb_offset) in the same block:

    |------------------------------------------| block
               <----------------------------> mapped part of the block
               ^
               orig_addr
            ^
            invalid tlb_addr for sync

- the resulting offset was bigger than the allocation size; this one could happen if the mapping was not until the end, e.g.:

    |------------------------------------------| block
         <---------------------> mapped part of the block
         ^                            ^
         orig_addr                    invalid tlb_addr

Both should never happen, so print a warning and bail out without trying to adjust the sizes/offsets: the first case could try to sync from orig_addr to whatever is left of the requested size, but the latter really has nothing to sync there...

Signed-off-by: Dominique Martinet <dominique.martinet@atmark-techno.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Bumyong Lee <bumyong.lee@samsung.com>
Cc: Chanho Park <chanho61.park@samsung.com>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad@kernel.org>
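A hedged sketch of the two bail-outs inside swiotlb_bounce(), where dev, orig_addr, tlb_addr, size and alloc_size are its locals; the warning text is approximate, and swiotlb_align_offset() is the slot-offset helper from the earlier offset fix:

          unsigned int orig_addr_offset = swiotlb_align_offset(dev, orig_addr);
          unsigned int tlb_offset = tlb_addr & (IO_TLB_SIZE - 1);

          /* Case 1: the sync starts before the mapped part of the slot. */
          if (tlb_offset < orig_addr_offset) {
                  dev_WARN_ONCE(dev, 1,
                                "Access before mapping start detected. orig offset %u, requested offset %u.\n",
                                orig_addr_offset, tlb_offset);
                  return;
          }

          /* Case 2: the sync extends past the end of the mapping. */
          tlb_offset -= orig_addr_offset;
          if (tlb_offset > alloc_size) {
                  dev_WARN_ONCE(dev, 1,
                                "Buffer overflow detected. Allocation size: %zu. Mapping size: %zu+%u.\n",
                                alloc_size, size, tlb_offset);
                  return;
          }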
2021-07-14 | swiotlb: fix implicit debugfs declarations | Claire Chang | 1 file, -5/+16

Factor out the debugfs bits from rmem_swiotlb_device_init() into a separate rmem_swiotlb_debugfs_init() to fix the implicit debugfs declarations.

Fixes: 461021875c50 ("swiotlb: Add restricted DMA pool initialization")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Claire Chang <tientzu@chromium.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-07-14 | swiotlb: Add restricted DMA pool initialization | Claire Chang | 2 files, -0/+90

Add the initialization function to create restricted DMA pools from matching reserved-memory nodes.

Regardless of swiotlb setting, the restricted DMA pool is preferred if available.

The restricted DMA pools provide a basic level of protection against the DMA overwriting buffer contents at unexpected times. However, to protect against general data leakage and system memory corruption, the system needs to provide a way to lock down the memory access, e.g., MPU.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-07-14 | swiotlb: Add restricted DMA alloc/free support | Claire Chang | 2 files, -14/+73

Add the functions swiotlb_{alloc,free} and is_swiotlb_for_alloc to support memory allocation from the restricted DMA pool. The restricted DMA pool is preferred if available.

Note that since coherent allocation needs remapping, one must set up another device coherent pool by shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic coherent allocation.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-07-14 | swiotlb: Refactor swiotlb_tbl_unmap_single | Claire Chang | 1 file, -15/+20

Add a new function, swiotlb_release_slots, to make the code reusable for supporting different bounce buffer pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-07-14 | swiotlb: Move alloc_size to swiotlb_find_slots | Claire Chang | 1 file, -8/+9

Rename find_slots to swiotlb_find_slots and move the maintenance of alloc_size to it for better code reusability later.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-07-14 | swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing | Claire Chang | 3 files, -2/+6

Propagate the swiotlb_force setting into io_tlb_default_mem->force_bounce and use it to determine whether to bounce the data or not. This will be useful later to allow for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
[v2: Includes Will's fix]
2021-07-14 | swiotlb: Update is_swiotlb_active to add a struct device argument | Claire Chang | 2 files, -3/+3

Update is_swiotlb_active to add a struct device argument. This will be useful later to allow for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-07-14 | swiotlb: Update is_swiotlb_buffer to add a struct device argument | Claire Chang | 2 files, -6/+6

Update is_swiotlb_buffer to add a struct device argument. This will be useful later to allow for different pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-07-14 | swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used | Claire Chang | 1 file, -4/+4

Always have the pointer to the swiotlb pool used in struct device. This could help simplify the code for other pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-07-14 | swiotlb: Refactor swiotlb_create_debugfs | Claire Chang | 1 file, -7/+14

Split the debugfs creation to make the code reusable for supporting different bounce buffer pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-07-14 | swiotlb: Refactor swiotlb init functions | Claire Chang | 1 file, -25/+25

Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct initialization to make the code reusable.

Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-07-02 | Merge tag 'dma-mapping-5.14' of git://git.infradead.org/users/hch/dma-mapping | Linus Torvalds | 2 files, -5/+3

Pull dma-mapping updates from Christoph Hellwig:

 - a trivial whitespace fix (Zhen Lei)
 - report -EEXIST errors in add_dma_entry (Hamza Mahfooz)

* tag 'dma-mapping-5.14' of git://git.infradead.org/users/hch/dma-mapping:
  dma-debug: report -EEXIST errors in add_dma_entry
  dma-mapping: remove a trailing space
2021-06-23 | Merge branch 'stable/for-linus-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb | Linus Torvalds | 1 file, -8/+15

Pull swiotlb fix from Konrad Rzeszutek Wilk:
 "A fix for the regression in the DMA operations where the offset was
  ignored and corruptions would appear.

  Going forward there will be cleanups to make the offset and alignment
  logic clearer, and better test cases to help with this"

* 'stable/for-linus-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb:
  swiotlb: manipulate orig_addr when tlb_addr has offset
2021-06-22 | dma-debug: report -EEXIST errors in add_dma_entry | Hamza Mahfooz | 1 file, -4/+2

Since overlapping mappings are not supported by the DMA API, we should report an error if active_cacheline_insert returns -EEXIST.

Suggested-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Hamza Mahfooz <someguy@effective-light.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
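A condensed sketch of the resulting error handling in add_dma_entry() (hash-list bookkeeping elided; message text approximate):

  static void add_dma_entry(struct dma_debug_entry *entry)
  {
          int rc;

          /* ... insert entry into the dma-debug hash list ... */

          rc = active_cacheline_insert(entry);
          if (rc == -ENOMEM) {
                  pr_err("cacheline tracking ENOMEM, dma-debug disabled\n");
                  global_disable = true;
          } else if (rc == -EEXIST) {
                  /* Overlapping mappings are not supported by the DMA API. */
                  err_printk(entry->dev, entry,
                             "cacheline tracking EEXIST, overlapping mappings aren't supported\n");
          }
  }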
2021-06-22 | dma-mapping: remove a trailing space | Zhen Lei | 1 file, -1/+1

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-06-21 | swiotlb: manipulate orig_addr when tlb_addr has offset | Bumyong Lee | 1 file, -8/+15

If a driver wants to sync part of a range with an offset, swiotlb_tbl_sync_single() copies from the orig_addr base to tlb_addr plus the offset, and ends up with a data mismatch. The offset logic was removed in "swiotlb: don't modify orig_addr in swiotlb_tbl_sync_single", but it has to be added back in. From Linus's email:

  "That commit removed the offset calculation entirely, because the old
   (unsigned long)tlb_addr & (IO_TLB_SIZE - 1) was wrong. But instead of
   removing it, I think it should have just been fixed to
   (tlb_addr - mem->start) & (IO_TLB_SIZE - 1) instead. That way the
   slot offset always matches the slot index calculation."

(Unfortunately that variant broke NVMe.)

The use-case that drivers are hitting is as follows:

1. Get a dma_addr_t from dma_map_single():

     dma_addr_t tlb_addr = dma_map_single(dev, vaddr, vsize, DMA_TO_DEVICE);

     |<---------------vsize------------->|
     +-----------------------------------+
     |                                   | original buffer
     +-----------------------------------+
     vaddr

     swiotlb_align_offset
     |<----->|<---------------vsize------------->|
     +-------+-----------------------------------+
     |       |                                   | swiotlb buffer
     +-------+-----------------------------------+
     tlb_addr

2. Do something.

3. Sync the dma_addr_t through dma_sync_single_for_device():

     dma_sync_single_for_device(dev, tlb_addr + offset, size, DMA_TO_DEVICE);

Error case: data is copied to the original buffer, but starting from the base address instead of base address + offset:

     swiotlb_align_offset
     |<----->|<- offset ->|<- size ->|
     +-------+-----------------------------------+
     |       |            |##########|           | swiotlb buffer
     +-------+-----------------------------------+
     tlb_addr

     |<- size ->|
     +-----------------------------------+
     |##########|                        | original buffer
     +-----------------------------------+
     vaddr

The fix is to copy the data to the original buffer while taking the offset into account, like so:

     swiotlb_align_offset
     |<----->|<- offset ->|<- size ->|
     +-------+-----------------------------------+
     |       |            |##########|           | swiotlb buffer
     +-------+-----------------------------------+
     tlb_addr

     |<- offset ->|<- size ->|
     +-----------------------------------+
     |            |##########|           | original buffer
     +-----------------------------------+
     vaddr

[Linus's fix, which made more sense as it created a symmetry, would break NVMe. The reason is that

     unsigned int offset = (tlb_addr - mem->start) & (IO_TLB_SIZE - 1);

would come up with the proper offset but would lose the alignment (which this patch preserves).]

Fixes: 16fc3cef33a0 ("swiotlb: don't modify orig_addr in swiotlb_tbl_sync_single")
Signed-off-by: Bumyong Lee <bumyong.lee@samsung.com>
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reported-by: Dominique MARTINET <dominique.martinet@atmark-techno.com>
Reported-by: Horia Geantă <horia.geanta@nxp.com>
Tested-by: Horia Geantă <horia.geanta@nxp.com>
CC: stable@vger.kernel.org
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
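A condensed sketch of the resulting offset handling; swiotlb_align_offset() is the helper this change revolves around, the sync function body is schematic, and IO_TLB_SIZE and dma_get_min_align_mask() are existing kernel definitions:

  /* Offset of the mapping within its IO_TLB_SIZE slot, respecting the
   * device's minimum DMA alignment mask. */
  static unsigned int swiotlb_align_offset(struct device *dev, u64 addr)
  {
          return addr & dma_get_min_align_mask(dev) & (IO_TLB_SIZE - 1);
  }

  static void swiotlb_sync_sketch(struct device *dev, phys_addr_t orig_addr,
                                  phys_addr_t tlb_addr, size_t alloc_size)
  {
          /* Shift orig_addr by the same intra-slot offset the caller
           * applied to tlb_addr, so a partial sync copies the matching
           * bytes rather than the start of the buffer. */
          unsigned int tlb_offset = (tlb_addr & (IO_TLB_SIZE - 1)) -
                                    swiotlb_align_offset(dev, orig_addr);

          orig_addr += tlb_offset;
          alloc_size -= tlb_offset;

          /* ... memcpy() between phys_to_virt(orig_addr) and the bounce
           * buffer at tlb_addr for the remaining alloc_size ... */
  }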
2021-05-04 | Merge branch 'stable/for-linus-5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb | Linus Torvalds | 3 files, -334/+200

Pull swiotlb updates from Konrad Rzeszutek Wilk:
 "Christoph Hellwig has taken a cleaver and trimmed off the not-needed
  code and nicely folded duplicate code in the generic framework. This
  lays the groundwork for more work to add extra DMA-backend-ish in
  the future. Along with that some bug-fixes to make this a nice
  working package"

* 'stable/for-linus-5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb:
  swiotlb: don't override user specified size in swiotlb_adjust_size
  swiotlb: Fix the type of index
  swiotlb: Make SWIOTLB_NO_FORCE perform no allocation
  ARM: Qualify enabling of swiotlb_init()
  swiotlb: remove swiotlb_nr_tbl
  swiotlb: dynamically allocate io_tlb_default_mem
  swiotlb: move global variables into a new io_tlb_mem structure
  xen-swiotlb: remove the unused size argument from xen_swiotlb_fixup
  xen-swiotlb: split xen_swiotlb_init
  swiotlb: lift the double initialization protection from xen-swiotlb
  xen-swiotlb: remove xen_io_tlb_start and xen_io_tlb_nslabs
  xen-swiotlb: remove xen_set_nslabs
  xen-swiotlb: use io_tlb_end in xen_swiotlb_dma_supported
  xen-swiotlb: use is_swiotlb_buffer in is_xen_swiotlb_buffer
  swiotlb: split swiotlb_tbl_sync_single
  swiotlb: move orig addr and size validation into swiotlb_bounce
  swiotlb: remove the alloc_size parameter to swiotlb_tbl_unmap_single
  powerpc/svm: stop using io_tlb_start
2021-05-04 | Merge tag 'dma-mapping-5.13' of git://git.infradead.org/users/hch/dma-mapping | Linus Torvalds | 2 files, -18/+153

Pull dma-mapping updates from Christoph Hellwig:

 - add a new dma_alloc_noncontiguous API (me, Ricardo Ribalda)
 - fix a copyright notice (Hao Fang)
 - add an unlikely annotation to dma_mapping_error (Heiner Kallweit)
 - remove a pointless empty line (Wang Qing)
 - add support for multi-pages map/unmap benchmarking (Xiang Chen)

* tag 'dma-mapping-5.13' of git://git.infradead.org/users/hch/dma-mapping:
  dma-mapping: add unlikely hint to error path in dma_mapping_error
  dma-mapping: benchmark: Add support for multi-pages map/unmap
  dma-mapping: benchmark: use the correct HiSilicon copyright
  dma-mapping: remove a pointless empty line in dma_alloc_coherent
  media: uvcvideo: Use dma_alloc_noncontiguous API
  dma-iommu: implement ->alloc_noncontiguous
  dma-iommu: refactor iommu_dma_alloc_remap
  dma-mapping: add a dma_alloc_noncontiguous API
  dma-mapping: refactor dma_{alloc,free}_pages
  dma-mapping: add a dma_mmap_pages helper
2021-04-30 | kernel/dma: remove unnecessary unmap_kernel_range | Nicholas Piggin | 1 file, -1/+0

vunmap will remove ptes.

Link: https://lkml.kernel.org/r/20210322021806.892164-3-npiggin@gmail.com
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Cédric Le Goater <clg@kaod.org>
Cc: Uladzislau Rezki <urezki@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-04-29 | swiotlb: don't override user specified size in swiotlb_adjust_size | Christoph Hellwig | 1 file, -0/+2

If the user already specified a swiotlb size on the command line, swiotlb_adjust_size should not overwrite it.

Fixes: 2cbc2776efe4 ("swiotlb: remove swiotlb_nr_tbl")
Reported-by: Tom Lendacky <thomas.lendacky@amd.com>
Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
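A sketch of the guard; the constants follow kernel/dma/swiotlb.c, and this is a hedged reconstruction rather than a verbatim diff:

  void __init swiotlb_adjust_size(unsigned long size)
  {
          /* If a swiotlb= size was given on the command line,
           * default_nslabs no longer holds the compile-time default, so
           * the architecture's adjustment must not override the user. */
          if (default_nslabs != IO_TLB_DEFAULT_SIZE >> IO_TLB_SHIFT)
                  return;

          size = ALIGN(size, IO_TLB_SIZE);
          default_nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
          pr_info("SWIOTLB bounce buffer size adjusted to %luMB", size >> 20);
  }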
2021-04-27 | swiotlb: Fix the type of index | Claire Chang | 1 file, -1/+2

Fix the type of index from unsigned int to int, since find_slots() might return -1.

Fixes: 26a7e094783d ("swiotlb: refactor swiotlb_tbl_map_single")
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Claire Chang <tientzu@chromium.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad@kernel.org>
2021-04-02 | dma-mapping: benchmark: Add support for multi-pages map/unmap | Xiang Chen | 1 file, -7/+14

Currently the dma-map benchmark supports mapping/unmapping only one page at a time, but there are other scenarios that need multi-page map/unmap support. For multi-page interfaces such as dma_alloc_coherent() and dma_map_sg(), the time spent on a multi-page map/unmap is not simply the single-page time * npages (it is not linear): a block descriptor may be used instead of page descriptors when the size allows it (e.g. 2M/1G), and a single TLB invalidation command can invalidate multiple pages instead of issuing one per page when RIL is enabled (which shortens the unmap time). So it is necessary to add support for multi-page map/unmap.

Add a parameter "-g" to support multi-page map/unmap.

Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
Acked-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-04-02 | dma-mapping: benchmark: use the correct HiSilicon copyright | Hao Fang | 1 file, -1/+1

s/Hisilicon/HiSilicon/g. It should use capital S, according to https://www.hisilicon.com/en/terms-of-use.

Signed-off-by: Hao Fang <fanghao11@huawei.com>
Acked-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-04-01 | swiotlb: Make SWIOTLB_NO_FORCE perform no allocation | Florian Fainelli | 1 file, -4/+14

When SWIOTLB_NO_FORCE is used, there should really be no allocation of the default_nslabs slabs at all, since we are not going to use them. If a platform somehow set swiotlb_force to SWIOTLB_NO_FORCE and a later call to swiotlb_init() were made, we would still proceed with allocating the default SWIOTLB size (64MB), whereas if swiotlb=noforce was set on the kernel command line we would have only allocated 2KB.

This would be inconsistent, and since initializing default_nslabs to 1 was only intended to allocate the minimum amount of memory possible, simply remove that minimal allocation entirely.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
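A hedged sketch of the early bail-out in swiotlb_init(); the allocation details are abbreviated and the exact control flow in the tree may differ:

  void __init swiotlb_init(int verbose)
  {
          size_t bytes = PAGE_ALIGN(default_nslabs << IO_TLB_SHIFT);
          void *tlb;

          /* swiotlb=noforce now means: allocate nothing at all, instead
           * of a token one-slab pool. */
          if (swiotlb_force == SWIOTLB_NO_FORCE)
                  return;

          tlb = memblock_alloc_low(bytes, PAGE_SIZE);
          if (!tlb || swiotlb_init_with_tbl(tlb, default_nslabs, verbose))
                  pr_warn("%s: failed to allocate tlb structure\n", __func__);
  }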