author    Christoph Hellwig <hch@lst.de>  2019-09-02 11:44:19 +0300
committer Christoph Hellwig <hch@lst.de>  2019-09-11 13:43:16 +0300
commit    8e23c82c68635a8859820bc6feb0fb3798aed943
tree      7e42707e498229b72b89d6f782eef6ddb7051f5e  /arch/arm/mm
parent    78406ff566ecd72021928217908ca255406bd914
download  linux-8e23c82c68635a8859820bc6feb0fb3798aed943.tar.xz
xen/arm: use dma-noncoherent.h calls for xen-swiotlb cache maintenance
Copy the arm64 code that uses the dma-direct/swiotlb helpers for DMA on
non-coherent devices.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Diffstat (limited to 'arch/arm/mm')
-rw-r--r--  arch/arm/mm/dma-mapping.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index d98898893140..143d7c79b4cb 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1105,10 +1105,6 @@ static const struct dma_map_ops *arm_get_dma_map_ops(bool coherent)
 	 * 32-bit DMA.
 	 * Use the generic dma-direct / swiotlb ops code in that case, as that
 	 * handles bounce buffering for us.
-	 *
-	 * Note: this checks CONFIG_ARM_LPAE instead of CONFIG_SWIOTLB as the
-	 * latter is also selected by the Xen code, but that code for now relies
-	 * on non-NULL dev_dma_ops. To be cleaned up later.
 	 */
 	if (IS_ENABLED(CONFIG_ARM_LPAE))
 		return NULL;
@@ -2318,10 +2314,8 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 	set_dma_ops(dev, dma_ops);

 #ifdef CONFIG_XEN
-	if (xen_initial_domain()) {
-		dev->archdata.dev_dma_ops = dev->dma_ops;
+	if (xen_initial_domain())
 		dev->dma_ops = xen_dma_ops;
-	}
 #endif
 	dev->archdata.dma_ops_setup = true;
 }