author	Christoph Hellwig <hch@lst.de>	2020-02-03 20:11:10 +0300
committer	Christoph Hellwig <hch@lst.de>	2020-02-05 20:50:55 +0300
commit	91ef26f914171cf753330f13724fd9142b5b1640 (patch)
tree	67c2f70a79ebbe6ba1fcc802d13f4d24b06daf43 /include/linux/swiotlb.h
parent	8c8c5a4994a306c217fd061cbfc5903399fd4c1c (diff)
download	linux-91ef26f914171cf753330f13724fd9142b5b1640.tar.xz
dma-direct: relax addressability checks in dma_direct_supported
dma_direct_supported tries to find the minimum addressable bitmask based on the end pfn and optional magic that architectures can use to communicate the size of the magic ZONE_DMA that can be used for bounce buffering. But between the DMA offsets that can change per device (or sometimes even per region), the fact that ZONE_DMA isn't even guaranteed to cover the lowest addresses, and the lack of proper interfaces to the MM code, this fails for at least one arm subarchitecture.

As all the legacy DMA implementations have supported 32-bit DMA masks, and 32-bit masks are guaranteed to always work by the API contract (using bounce buffers if needed), we can short-circuit the complicated check and always return true without breaking existing assumptions. Hopefully we can properly clean up the interaction with the arch-defined zones and the bootmem allocator eventually.

Fixes: ad3c7b18c5b3 ("arm: use swiotlb for bounce buffering on LPAE configs")
Reported-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
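A minimal sketch of the relaxed check described above, assuming the helpers already used in kernel/dma/direct.c (zone_dma_bits, __phys_to_dma, DMA_BIT_MASK); this illustrates the idea rather than reproducing the exact upstream diff:

	/* Sketch only -- not the verbatim patched function. */
	int dma_direct_supported(struct device *dev, u64 mask)
	{
		u64 min_mask = IS_ENABLED(CONFIG_ZONE_DMA) ?
				DMA_BIT_MASK(zone_dma_bits) : DMA_BIT_MASK(32);

		/*
		 * 32-bit masks are guaranteed to work by the DMA API
		 * contract (falling back to swiotlb bounce buffering if
		 * needed), so any mask covering at least 32 bits can be
		 * accepted outright, sidestepping the fragile zone math.
		 */
		if (mask >= DMA_BIT_MASK(32))
			return 1;

		/*
		 * Smaller masks still go through the zone-based check;
		 * compare against the bus address so per-device DMA
		 * offsets are taken into account.
		 */
		return mask >= __phys_to_dma(dev, min_mask);
	}

The key design point is that only masks narrower than 32 bits still depend on the architecture's ZONE_DMA layout; everything else is answered by the API contract alone.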
Diffstat (limited to 'include/linux/swiotlb.h')
0 files changed, 0 insertions, 0 deletions