path: root/arch/arm/include/asm/xen/page-coherent.h
Age | Commit message | Author | Files | Lines
2022-05-11 | swiotlb-xen: fix DMA_ATTR_NO_KERNEL_MAPPING on arm | Christoph Hellwig | 1 | -2/+0
swiotlb-xen uses very different ways to allocate coherent memory on x86 vs arm. On the former it allocates memory from the page allocator, while on the latter it reuses the dma-direct allocator, which handles the complexities of non-coherent DMA on arm platforms.

Unfortunately the complexity of trying to deal with the two cases in the swiotlb-xen.c code led to a bug in the handling of DMA_ATTR_NO_KERNEL_MAPPING on arm. With the DMA_ATTR_NO_KERNEL_MAPPING flag the coherent memory allocator does not actually allocate coherent memory, but just a DMA handle for some memory that is DMA addressable by the device but does not have to have a kernel mapping. Dereferencing the return value therefore leads to kernel crashes and memory corruption.

Fix this by using the dma-direct allocator directly for arm, which works perfectly fine because on arm swiotlb-xen is only used when the domain is 1:1 mapped, and then simplify the remaining code to only cater for the x86 case with DMA coherent devices.

Reported-by: Rahul Singh <Rahul.Singh@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Rahul Singh <rahul.singh@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Rahul Singh <rahul.singh@arm.com>

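As a rough illustration of the allocation contract described above (a sketch, not code from this patch; the helper name is made up), the value returned for a DMA_ATTR_NO_KERNEL_MAPPING allocation is an opaque cookie that may only be handed back to dma_free_attrs(), never dereferenced:

#include <linux/dma-mapping.h>

/*
 * Sketch: with DMA_ATTR_NO_KERNEL_MAPPING the "cpu address" returned by
 * dma_alloc_attrs() need not have a kernel mapping behind it.  It must be
 * treated as an opaque cookie; dereferencing it (as the arm swiotlb-xen
 * path effectively did) can crash the kernel or corrupt memory.
 */
static void *alloc_dma_cookie(struct device *dev, size_t size,
			      dma_addr_t *handle)
{
	return dma_alloc_attrs(dev, size, handle, GFP_KERNEL,
			       DMA_ATTR_NO_KERNEL_MAPPING);
}
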
2019-09-11 | xen/arm: consolidate page-coherent.h | Christoph Hellwig | 1 | -75/+0
Shared the duplicate arm/arm64 code in include/xen/arm/page-coherent.h.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

2019-09-11 | xen/arm: use dma-noncoherent.h calls for xen-swiotlb cache maintenance | Christoph Hellwig | 1 | -45/+27
Copy the arm64 code that uses the dma-direct/swiotlb helpers for DMA non-coherent devices.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

2019-01-24 | arm64/xen: fix xen-swiotlb cache flushing | Christoph Hellwig | 1 | -0/+94
Xen-swiotlb hooks into the arm/arm64 arch code through a copy of the DMA mapping operations stored in the struct device arch data. Switching arm64 to use direct calls for the merged DMA direct / swiotlb code broke this scheme. Replace the indirect calls with direct calls in xen-swiotlb as well to fix this problem.

Fixes: 356da6d0cde3 ("dma-mapping: bypass indirect calls for dma-direct")
Reported-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

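The dispatch that broke the old scheme can be pictured roughly as follows; this is a simplified sketch of the post-356da6d0cde3 DMA mapping core (the function name is illustrative, and the exact placement of dma_direct_map_page's declaration is assumed): devices without a dma_map_ops pointer take the dma-direct/swiotlb path via direct calls, so a stashed copy of the old indirect ops is never consulted.

#include <linux/dma-mapping.h>
#include <linux/dma-direct.h>

/* Simplified sketch of the merged dma-direct/swiotlb dispatch. */
static dma_addr_t sketch_map_page(struct device *dev, struct page *page,
				  unsigned long offset, size_t size,
				  enum dma_data_direction dir,
				  unsigned long attrs)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	if (!ops)	/* dma-direct (and swiotlb) path: no indirect call */
		return dma_direct_map_page(dev, page, offset, size, dir, attrs);
	return ops->map_page(dev, page, offset, size, dir, attrs);
}
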
2016-12-02 | arm/arm64: xen: Move shared architecture headers to include/xen/arm | Marc Zyngier | 1 | -98/+1
ARM and arm64 Xen ports share a number of headers, leading to packaging issues when these headers need to be exported, as it breaks the reasonable requirement that an architecture port has self-contained headers.

Fix the issue by moving the 5 header files to include/xen/arm, and keep local placeholders to include the relevant files.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

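After the move, the architecture-local header is reduced to a thin placeholder that pulls in the shared copy; roughly (a sketch of the pattern, not a verbatim quote of the resulting file):

/* arch/arm/include/asm/xen/page-coherent.h, reduced to a placeholder */
#ifndef _ASM_ARM_XEN_PAGE_COHERENT_H
#define _ASM_ARM_XEN_PAGE_COHERENT_H

#include <xen/arm/page-coherent.h>

#endif /* _ASM_ARM_XEN_PAGE_COHERENT_H */
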
2016-08-04 | dma-mapping: use unsigned long for dma_attrs | Krzysztof Kozlowski | 1 | -10/+6
The dma-mapping core and the implementations do not change the DMA attributes passed by pointer. Thus the pointer can point to const data. However the attributes do not have to be a bitfield; instead an unsigned long will do fine:

1. This is just simpler, both in terms of reading the code and setting attributes. Instead of initializing local attributes on the stack and passing a pointer to it to dma_set_attr(), just set the bits.

2. It brings safety and checking for const correctness because the attributes are passed by value.

Semantic patches for this change (at least most of them):

virtual patch
virtual context

@r@
identifier f, attrs;
@@

f(...,
- struct dma_attrs *attrs
+ unsigned long attrs
, ...)
{
...
}

@@
identifier r.f;
@@

f(...,
- NULL
+ 0
)

and

// Options: --all-includes

virtual patch
virtual context

@r@
identifier f, attrs;
type t;
@@

t f(..., struct dma_attrs *attrs);

@@
identifier r.f;
@@

f(...,
- NULL
+ 0
)

Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no>
Acked-by: Mark Salter <msalter@redhat.com> [c6x]
Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> [cris]
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> [drm]
Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
Acked-by: Fabien Dessenne <fabien.dessenne@st.com> [bdisp]
Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> [vb2-core]
Acked-by: David Vrabel <david.vrabel@citrix.com> [xen]
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [xen swiotlb]
Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon]
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no> [avr32]
Acked-by: Vineet Gupta <vgupta@synopsys.com> [arc]
Acked-by: Robin Murphy <robin.murphy@arm.com> [arm64 and dma-iommu]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

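At a typical call site the conversion looks roughly like this (an illustrative before/after, not taken from the patch; the old-style helpers come from the pre-conversion <linux/dma-attrs.h>):

/* Before: attributes carried in a struct built on the stack. */
struct dma_attrs attrs;

init_dma_attrs(&attrs);
dma_set_attr(DMA_ATTR_SKIP_CPU_SYNC, &attrs);
handle = dma_map_single_attrs(dev, buf, size, DMA_TO_DEVICE, &attrs);

/* After: attributes are plain bits in an unsigned long. */
handle = dma_map_single_attrs(dev, buf, size, DMA_TO_DEVICE,
			      DMA_ATTR_SKIP_CPU_SYNC);
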
2016-02-08 | xen/arm: correctly handle DMA mapping of compound pages | Ian Campbell | 1 | -7/+14
Currently xen_dma_map_page concludes that DMA to anything other than the head page of a compound page must be foreign, since the PFN it checks is that of the head. Fix the check to instead consider the whole of a compound page to be local if the PFN of the head passes the 1:1 check. We can never see a compound page which is a mixture of foreign and local sub-pages.

The comment already correctly described the intention, but fix up the spelling and some grammar.

This fixes the various SSH protocol errors which we have been seeing on the cubietrucks in our automated test infrastructure. This has been broken since commit 3567258d281b ("xen/arm: use hypercall to flush caches in map_page"), which was in v3.19-rc1.

NB arch/arm64/.../xen/page-coherent.h also includes this file.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: xen-devel@lists.xenproject.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: stable@vger.kernel.org # v3.19+

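The corrected locality test can be sketched like this (simplified; the helper name is made up and the exact PFN macros in the real patch differ): dev_addr may point into any sub-page, but the page is treated as local as long as it falls inside the compound page whose head passes the 1:1 check.

#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/pfn.h>

/* Sketch: 'page' is the compound head; every sub-page covered by it is
 * local if the head is, since a compound page is never a mix of local
 * and foreign memory. */
static bool sketch_page_is_local(struct page *page, dma_addr_t dev_addr)
{
	unsigned long page_pfn = page_to_pfn(page);
	unsigned long dev_pfn = PFN_DOWN(dev_addr);
	unsigned long nr_pfns = 1UL << compound_order(page);

	return dev_pfn >= page_pfn && dev_pfn - page_pfn < nr_pfns;
}
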
2015-10-23 | xen/swiotlb: Add support for 64KB page granularity | Julien Grall | 1 | -9/+17
Swiotlb is used on ARM64 to support DMA on platforms where devices are not protected by an SMMU. Furthermore it's only enabled for DOM0.

While Xen always uses 4KB page granularity in the stage-2 page table, Linux ARM64 may use either 4KB or 64KB. This means that a Linux page can span multiple Xen pages.

The swiotlb code has to validate that the buffer used for DMA is physically contiguous in memory. As a Linux page can't be shared between local memory and foreign pages by design (the balloon code always removes a Linux page in its entirety), the changes in the code are very minimal because we only need to check the first Xen PFN.

Note that it may be possible to optimize the function check_page_physically_contiguous to avoid looping over every Xen PFN for local memory, although I will leave this optimization for a follow-up.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>

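The granularity relationship behind this change can be sketched as follows (based on the constants in include/xen/page.h; presented here as an illustration, not a quote of the patch): Xen granules are always 4KB, so a 64KB Linux page covers 16 Xen PFNs, and a local-vs-foreign decision taken for the first Xen PFN holds for the whole Linux page.

/* Xen always uses 4KB granules, independent of the Linux page size. */
#define XEN_PAGE_SHIFT		12
#define XEN_PAGE_SIZE		(1UL << XEN_PAGE_SHIFT)
#define XEN_PFN_PER_PAGE	(PAGE_SIZE / XEN_PAGE_SIZE)	/* 16 with 64KB pages */

/* A DMA buffer must be physically contiguous at Xen-page granularity;
 * for local memory it is enough to check the first Xen PFN, because a
 * Linux page is entirely local or entirely foreign. */
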
2014-12-04 | xen/arm: use hypercall to flush caches in map_page | Stefano Stabellini | 1 | -1/+12
In xen_dma_map_page, if the page is a local page, call the native map_page dma_ops. If the page is foreign, call __xen_dma_map_page, which issues any required cache maintenance operations via hypercall.

The reason for doing this is that the native dma_ops map_page could allocate buffers that need to be freed: if the page is foreign we don't call the native unmap_page dma_ops function, which would result in a memory leak.

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

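The resulting shape of the hook is roughly the following (a sketch of the dispatch described above, struct dma_attrs era, with the function name prefixed to mark it as illustrative); the simple pfn == mfn test shown here is the one later refined by the 2016 compound-page fix above.

/* Sketch: local pages take the native path, foreign pages take the
 * hypercall-backed helper, so nothing the native unmap would have to
 * free is ever allocated for a foreign page. */
static inline void sketch_xen_dma_map_page(struct device *hwdev,
		struct page *page, dma_addr_t dev_addr, unsigned long offset,
		size_t size, enum dma_data_direction dir,
		struct dma_attrs *attrs)
{
	bool local = PFN_DOWN(dev_addr) == page_to_pfn(page);

	if (local)
		/* local page: the native arm dma_ops do the right thing */
		__generic_dma_ops(hwdev)->map_page(hwdev, page, offset, size,
						   dir, attrs);
	else
		/* foreign page: cache maintenance via hypercall */
		__xen_dma_map_page(hwdev, page, dev_addr, offset, size, dir,
				   attrs);
}
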
2014-12-04 | xen: add a dma_addr_t dev_addr argument to xen_dma_map_page | Stefano Stabellini | 1 | -2/+2
dev_addr is the machine address of the page. The new parameter can be used by the ARM and ARM64 implementations of xen_dma_map_page to find out if the page is a local page (pfn == mfn) or a foreign page (pfn != mfn).

dev_addr could be retrieved again from the physical address, using pfn_to_mfn, but it requires accessing an rbtree. Since we already have the dev_addr in our hands at the call site there is no need to get the mfn twice.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

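The interface change itself is small; a sketch of the old and new prototypes (illustrative, not a verbatim quote of the header):

/* Before: only the struct page was passed down. */
void xen_dma_map_page(struct device *hwdev, struct page *page,
		      unsigned long offset, size_t size,
		      enum dma_data_direction dir, struct dma_attrs *attrs);

/* After: the machine address is passed as well, so the arm code can test
 * pfn == mfn with a simple compare instead of an rbtree lookup through
 * pfn_to_mfn(). */
void xen_dma_map_page(struct device *hwdev, struct page *page,
		      dma_addr_t dev_addr, unsigned long offset, size_t size,
		      enum dma_data_direction dir, struct dma_attrs *attrs);
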
2014-12-04 | xen/arm: if(pfn_valid(pfn)) call native dma_ops | Stefano Stabellini | 1 | -6/+43
Remove code duplication in mm32.c by calling the native dma_ops if the page is a local page (not a foreign page). Use a simple pfn_valid(pfn) check to figure out if the page is local, exploiting the fact that dom0 is mapped 1:1, so pfn_valid always returns false when called on a foreign mfn.

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

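The locality test boils down to a single check; a minimal sketch (the helper name is made up):

#include <linux/dma-mapping.h>
#include <linux/mmzone.h>
#include <linux/pfn.h>

/* Sketch: dom0 is mapped 1:1, so a local machine frame is also a valid
 * Linux pfn, while a foreign mfn is not; pfn_valid() therefore doubles
 * as a local-vs-foreign test. */
static bool sketch_dma_addr_is_local(dma_addr_t handle)
{
	return pfn_valid(PFN_DOWN(handle));
}
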
2014-09-11 | xen/arm: reimplement xen_dma_unmap_page & friends | Stefano Stabellini | 1 | -18/+7
xen_dma_unmap_page, xen_dma_sync_single_for_cpu and xen_dma_sync_single_for_device are currently implemented by calling into the corresponding generic ARM implementation of these functions. In order to do this, the dma_addr_t handle, which on Xen is a machine address, first needs to be translated into a physical address. The operation is expensive and inaccurate, given that a single machine address can correspond to multiple physical addresses in one domain, because the same page can be granted multiple times by the frontend.

To avoid this problem, we introduce a Xen specific implementation of xen_dma_unmap_page, xen_dma_sync_single_for_cpu and xen_dma_sync_single_for_device that can operate on machine addresses directly.

The new implementation relies on the fact that the hypervisor creates a second p2m mapping of any grant pages at physical address == machine address of the page for dom0. Therefore we can access memory at physical address == dma_addr_t handle and perform the cache flushing there. Some cache maintenance operations require a virtual address. Instead of using ioremap_cache, which is not safe in interrupt context, we allocate a per-cpu PAGE_KERNEL scratch page and manually update its pte.

arm64 doesn't need cache maintenance operations on unmap for now.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Tested-by: Denis Schneider <v1ne2go@gmail.com>

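The per-cpu scratch-page trick mentioned above can be sketched roughly as follows (a conceptual sketch only: the variable and helper names are made up, the caller is assumed to have preemption disabled, and the real patch's details differ): point the scratch pte at the machine frame behind the handle, flush the local TLB entry, and the foreign page becomes addressable for cache maintenance without ioremap_cache().

#include <linux/percpu.h>
#include <linux/mm.h>
#include <asm/pgtable.h>
#include <asm/tlbflush.h>

static DEFINE_PER_CPU(unsigned long, scratch_virt);	/* PAGE_KERNEL VA */
static DEFINE_PER_CPU(pte_t *, scratch_ptep);		/* its kernel pte */

/* Sketch: temporarily map the machine frame behind 'handle' through this
 * CPU's scratch slot so arm cache maintenance that needs a VA can run. */
static void *sketch_map_machine_frame(dma_addr_t handle)
{
	unsigned long virt = this_cpu_read(scratch_virt);
	pte_t *ptep = this_cpu_read(scratch_ptep);

	set_pte_ext(ptep, pfn_pte(handle >> PAGE_SHIFT, PAGE_KERNEL), 0);
	local_flush_tlb_kernel_page(virt);
	return (void *)(virt + (handle & ~PAGE_MASK));
}
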
2013-10-25 | xen: introduce xen_dma_map/unmap_page and xen_dma_sync_single_for_cpu/device | Stefano Stabellini | 1 | -0/+28
Introduce xen_dma_map_page, xen_dma_unmap_page, xen_dma_sync_single_for_cpu and xen_dma_sync_single_for_device. They have empty implementations on x86 and ia64, but call the corresponding platform dma_ops functions on arm and arm64.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Changes in v9:
- xen_dma_map_page returns void, avoid page_to_phys.

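Roughly, the two flavours of one of these hooks look like this (a sketch; the two definitions live in different per-arch headers, and the exact bodies are assumptions based on the commit description):

/* x86/ia64 flavour: DMA is cache-coherent, nothing to do. */
static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
		size_t size, enum dma_data_direction dir,
		struct dma_attrs *attrs) { }

/* arm/arm64 flavour: forward to the platform dma_ops so the required
 * cache maintenance is performed. */
static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
		size_t size, enum dma_data_direction dir,
		struct dma_attrs *attrs)
{
	if (__generic_dma_ops(hwdev)->unmap_page)
		__generic_dma_ops(hwdev)->unmap_page(hwdev, handle, size, dir,
						     attrs);
}
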
2013-10-09 | xen: introduce xen_alloc/free_coherent_pages | Stefano Stabellini | 1 | -0/+22
xen_swiotlb_alloc_coherent needs to allocate a coherent buffer for cpu and devices. On native x86 it is sufficient to call __get_free_pages in order to get a coherent buffer, while on ARM (and potentially ARM64) we need to call the native dma_ops->alloc implementation.

Introduce xen_alloc_coherent_pages to abstract the arch specific buffer allocation.

Similarly introduce xen_free_coherent_pages to free a coherent buffer: on x86 it is simply a call to free_pages, while on ARM and ARM64 it is arm_dma_ops.free.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Changes in v7:
- rename __get_dma_ops to __generic_dma_ops;
- call __generic_dma_ops(hwdev)->alloc/free on arm64 too.

Changes in v6:
- call __get_dma_ops to get the native dma_ops pointer on arm.

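A sketch of the two allocator flavours described above (illustrative, based on the commit description rather than quoted from the patch; the two definitions live in different per-arch headers):

/* x86 flavour: all memory is DMA-coherent, the page allocator is enough. */
static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
		dma_addr_t *dma_handle, gfp_t flags, struct dma_attrs *attrs)
{
	void *vstart = (void *)__get_free_pages(flags, get_order(size));

	*dma_handle = virt_to_phys(vstart);
	return vstart;
}

/* arm/arm64 flavour: go through the native dma_ops alloc hook so that
 * non-coherent platforms get properly remapped/uncached memory. */
static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
		dma_addr_t *dma_handle, gfp_t flags, struct dma_attrs *attrs)
{
	return __generic_dma_ops(hwdev)->alloc(hwdev, size, dma_handle, flags,
					       attrs);
}
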