ISA_DMA_THRESHOLD has been unused by non-arch code, so let's now get
rid of it from ARM by replacing it with arm_dma_zone_mask. Move
dma_supported() and dma_set_mask() out of line, and have
dma_supported() check this new variable instead.
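A minimal sketch of how that check might look; only arm_dma_zone_mask
and dma_supported() are named above, and the exact comparison is an
assumption:

    int dma_supported(struct device *dev, u64 mask)
    {
            /*
             * Sketch: a device can only be supported if its DMA mask
             * covers at least the DMA zone (assumed semantics).
             */
            if (mask < (u64)arm_dma_zone_mask)
                    return 0;
            return 1;
    }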
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Add pud_offset() et al. between the pgd and pmd code in preparation for
using pgtable-nopud.h rather than 4level-fixup.h.
This incorporates a fix from Jamie Iles <jamie@jamieiles.com> for
uaccess_with_memcpy.c.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
The kerneldoc for this function is at odds with the DMA-API
document, which takes precedence, so fix the kerneldoc to match.
Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Conflicts:
arch/arm/Kconfig
arch/arm/common/Makefile
arch/arm/kernel/Makefile
arch/arm/kernel/smp.c
---
Conflicts:
arch/arm/kernel/entry-armv.S
arch/arm/mm/ioremap.c
---
Add ARM support for the DMA debug infrastructure, which allows
DMA API usage to be debugged.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Replace the page_to_dma() and dma_to_page() macros with their PFN
equivalents. This allows us to map parts of memory which do not have
a struct page allocated to them to bus addresses. This will be used
internally by dma_alloc_coherent()/dma_alloc_writecombine().
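For illustration, the PFN-based conversions could be shaped like this;
a sketch only, with __pfn_to_bus() and __bus_to_pfn() standing in for
the per-platform translation hooks:

    static inline dma_addr_t pfn_to_dma(struct device *dev, unsigned long pfn)
    {
            /* No struct page needed; works for any RAM PFN */
            return (dma_addr_t)__pfn_to_bus(pfn);
    }

    static inline unsigned long dma_to_pfn(struct device *dev, dma_addr_t addr)
    {
            return __bus_to_pfn(addr);
    }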
Build tested on Versatile, OMAP1, IOP13xx and KS8695.
Tested-by: Janusz Krzysztofik <jkrzyszt@tis.icnet.pl>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Since commit 3e4d3af501 "mm: stack based kmap_atomic()", it is no longer
necessary to carry an ad hoc version of kmap_atomic() added in commit
7e5a69e83b "ARM: 6007/1: fix highmem with VIPT cache and DMA" to cope
with reentrancy.
In fact, it is now actively wrong to rely on fixed kmap type indices
(namely KM_L1_CACHE), as kmap_atomic() now ignores them entirely and a
concurrent instance of it may reuse any slot for any purpose.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
---
An off-by-one bug meant that the DMA coherent allocator was aligning
to one more bit than it should (for example, aligning a 4 KiB request
to 8 KiB), causing it to run out of available memory quicker. Fix this.
Reported-by: Petr Štetiar <ynezz@true.cz>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
There are places in Linux where writes to newly allocated page cache
pages happen without a subsequent call to flush_dcache_page() (several
PIO drivers including USB HCD). This patch changes the meaning of
PG_arch_1 to be PG_dcache_clean and always flushes the D-cache for a
newly mapped page in update_mmu_cache().
The patch also sets the PG_arch_1 bit in the DMA cache maintenance
function to avoid additional cache flushing in update_mmu_cache().
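A sketch of the resulting logic in update_mmu_cache(); the helper names
are assumptions and error handling is trimmed:

    void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
                          pte_t *ptep)
    {
            unsigned long pfn = pte_pfn(*ptep);
            struct page *page;

            if (!pfn_valid(pfn))
                    return;

            page = pfn_to_page(pfn);
            /* Flush only if the D-cache isn't already known to be clean */
            if (!test_and_set_bit(PG_dcache_clean, &page->flags))
                    __flush_dcache_page(page_mapping(page), page);
    }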
Tested-by: Rabin Vincent <rabin.vincent@stericsson.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Dave Hylands reports:
| We've observed a problem with dma_alloc_writecombine when the system
| is under heavy load (heavy bus traffic). We've managed to reduce the
| problem to the following snippet, which is run from a kthread in a
| continuous loop:
|
| void *virtAddr;
| dma_addr_t physAddr;
| unsigned char *tmp;  /* declaration implied by the snippet below */
| unsigned int numBytes = 256;
|
| for (;;) {
| virtAddr = dma_alloc_writecombine(NULL,
| numBytes, &physAddr, GFP_KERNEL);
| if (virtAddr == NULL) {
| printk(KERN_ERR "Running out of memory\n");
| break;
| }
|
| /* access DMA memory allocated */
| tmp = virtAddr;
| *tmp = 0x77;
|
| /* free DMA memory */
| dma_free_writecombine(NULL,
| numBytes, virtAddr, physAddr);
|
| ...sleep here...
| }
|
| By itself, the code will run forever with no issues. However, as we
| increase our bus traffic (typically using DMA) then the *tmp = 0x77
| line will eventually cause a page fault. If we add a small delay (a
| few microseconds) before the *tmp = 0x77, then we don't see a page
| fault, even under heavy load.
A dsb() is required after modifying the PTE entries to ensure that they
will always be visible. Add this dsb().
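A minimal sketch of the fix, assuming a remap loop along the lines of
the one the coherent allocator uses:

    set_pte_ext(pte, mk_pte(page, prot), 0);  /* write the new PTE */
    dsb();  /* drain writes so the PTE is visible before any access */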
Reported-by: Dave Hylands <dhylands@gmail.com>
Tested-by: Dave Hylands <dhylands@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
The DMA coherent remap area is used to provide an uncached mapping
of memory for coherency with DMA engines. Currently, we look for
any free hole which our allocation will fit in with page alignment.
However, this can lead to fragmentation of the area, and allows small
allocations to cross L1 entry boundaries. This is undesirable as we
want to move towards allocating sections of memory.
Align allocations according to the size, limiting the alignment between
the page and section sizes.
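Illustratively, deriving the alignment from the size might look like
this (a sketch; the clamping constant is an assumption):

    unsigned int bit = fls(size - 1);  /* e.g. an 8K request gives bit 13 */

    if (bit > SECTION_SHIFT)
            bit = SECTION_SHIFT;       /* never align beyond a 1MB section */
    align = 1 << bit;                  /* so an 8K request gets 8K alignment */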
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
This macro is not defined when !CONFIG_MMU so this patch moves the
CONSISTENT_* definitions to the CONFIG_MMU section.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
The VIVT cache of a highmem page is always flushed before the page
is unmapped. This cache flush is explicit through flush_cache_kmaps()
in flush_all_zero_pkmaps(), or through __cpuc_flush_dcache_area() in
kunmap_atomic(). There is also an implicit flush of those highmem pages
that were part of a process that just terminated, making those pages
free, as the whole VIVT cache has to be flushed on every task switch.
Hence
unmapped highmem pages need no cache maintenance in that case.
However unmapped pages may still be cached with a VIPT cache because the
cache is tagged with physical addresses. There is no need for a whole
cache flush during task switching for that reason, and despite the
explicit cache flushes in flush_all_zero_pkmaps() and kunmap_atomic(),
some highmem pages that were mapped in user space end up still cached
even when they become unmapped.
So, we do have to perform cache maintenance on those unmapped highmem
pages in the context of DMA when using a VIPT cache. Unfortunately,
it is not possible to perform that cache maintenance using physical
addresses as all the L1 cache maintenance coprocessor functions accept
virtual addresses only. Therefore we have no choice but to set up a
temporary virtual mapping for that purpose.
And of course the explicit cache flushing when unmapping a highmem page
on a system with a VIPT cache can now go, which should increase
performance.
While at it, because the code in __flush_dcache_page() has to be modified
anyway, let's also make sure the mapped highmem pages are pinned with
kmap_high_get() for the duration of the cache maintenance operation.
Because kunmap() does unmap highmem pages lazily, it was reported by
Gary King <GKing@nvidia.com> that those pages ended up being unmapped
during cache maintenance on SMP causing segmentation faults.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
include cleanup: Update gfp.h and slab.h includes to prepare for
breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include
those headers directly instead of assuming availability. As this
conversion needs to touch a large number of source files, the
following script is used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following:
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there, i.e. if only gfp is used,
gfp.h; if slab is used, slab.h.
* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surrounding. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition, while adding it to an implementation .h
or embedding .c file was more appropriate for others. This step added
inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as a bisection point.
Given the fact that I had only a couple of failures from the build
tests in step 7, I'm fairly confident about the coverage of this
conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers, which should be discoverable easily on most builds of the
specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
---
Adds the DMA area to the 'virtual memory map' startup message.
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
ARMv6 and ARMv7 CPUs can perform speculative prefetching, which makes
DMA cache coherency handling slightly more interesting. Rather than
being able to rely upon the CPU not accessing the DMA buffer until DMA
has completed, we now must expect that the cache could be loaded with
possibly stale data from the DMA buffer.
Where DMA involves data being transferred to the device, we clean the
cache before handing it over for DMA, otherwise we invalidate the buffer
to get rid of potential writebacks. On DMA completion, if data was
transferred from the device, we invalidate the buffer to get rid of
any stale speculative prefetches.
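In outline, the ownership transitions map onto cache operations like
this; a sketch where the range-operation names are placeholders, not
the real API:

    /* CPU -> device: before the transfer starts */
    if (dir == DMA_FROM_DEVICE)
            invalidate_range(start, size);  /* avoid writebacks over DMA data */
    else
            clean_range(start, size);       /* push CPU writes out to RAM */

    /* device -> CPU: after the transfer completes */
    if (dir != DMA_TO_DEVICE)
            invalidate_range(start, size);  /* drop stale speculative fetches */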
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
dma_cache_maint_contiguous is now simple enough to live inside
dma_cache_maint_page, so move it there.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
The DMA API has the notion of buffer ownership; make it explicit in the
ARM implementation of this API. This gives us a set of hooks to allow
us to deal with CPU cache issues arising from non-cache coherent DMA.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-By: Jamie Iles <jamie@jamieiles.com>
---
On ARMv7, it is invalid to map the same physical address multiple times
with different memory types. Since system RAM is already mapped as
'memory', subsequent remapping of it must retain this attribute.
However, the DMA coherent remapping maps it as "strongly ordered". Fix
this by introducing
'pgprot_dmacoherent()' which provides the necessary page table bits for
DMA mappings.
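A sketch of what such a definition could look like, with the L_PTE_MT_*
names standing in (as assumptions) for the memory-type page table bits:

    /*
     * Keep the 'memory' type for coherent DMA mappings, but make it
     * non-cacheable, rather than remapping RAM as strongly ordered.
     */
    #define pgprot_dmacoherent(prot) \
            __pgprot_modify(prot, L_PTE_MT_MASK, L_PTE_MT_BUFFERABLE)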
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
Setting and clearing the reserved page bit on DMA coherent memory is
unnecessary; x86 doesn't do it, and ALSA doesn't require it anymore.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
---
This entirely separates the DMA coherent buffer remapping code from
the allocation code, and gets rid of the duplicate copy in the !MMU
section.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
---
The IXP23xx support added a special case for coherent architectures
inside dma_alloc_coherent(). This is a subset of what goes on in
__dma_alloc(), and there is no reason why dma_alloc_writecombine()
should not be given the same treatment (except, maybe, that IXP23xx
doesn't use it).
We can better deal with this by moving the arch_is_coherent() test
inside __dma_alloc() and killing the code duplication.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
---
No point wrapping the contents of this function with #ifdef CONFIG_MMU
when we can place it and the core_initcall() entirely within the
existing conditional block.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
---
We effectively have three implementations of dma_free_coherent() mixed
up in the code: the incoherent MMU, coherent MMU and noMMU versions.
The coherent MMU and noMMU versions are actually functionally identical.
The incoherent MMU version is almost the same, but with the additional
step of unmapping the secondary mapping.
Separate out this additional step into __dma_free_remap() and simplify
the resulting dma_free_coherent() code.
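Schematically, the simplified flow could read as follows; a sketch in
which __dma_free_remap() is named above, while __dma_free_buffer() and
dma_to_page() are assumed helpers:

    void dma_free_coherent(struct device *dev, size_t size,
                           void *cpu_addr, dma_addr_t handle)
    {
            WARN_ON(irqs_disabled());

            /* Incoherent MMU case only: tear down the secondary mapping */
            if (!arch_is_coherent())
                    __dma_free_remap(cpu_addr, size);

            /* Common to all three variants */
            __dma_free_buffer(dma_to_page(dev, handle), size);
    }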
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
---
The nommu version of dma_alloc_coherent was using kmalloc/kfree to manage
the memory. dma_alloc_coherent() is expected to work with a granularity
of a page, so this is wrong. Fix it by using the helper functions now
provided.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
---
The coherent-architecture version of dma_alloc_coherent() was using
kmalloc/kfree to manage the memory. dma_alloc_coherent() is expected
to work with a
granularity of a page, so this is wrong. Fix it by using the helper
functions now provided.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
---
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
---
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
---
We were using GFP_DMA for masks other than 0xffffffff, which is
wrong when some masks are initialized to 0xffffffffffffffff.
This caused such masks to obtain memory from the precious DMA
pool.
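The guard could be as simple as the following sketch, where
get_coherent_dma_mask() is an assumed helper; the point is the 32-bit
boundary check:

    u64 mask = get_coherent_dma_mask(dev);

    /* Only dip into the scarce DMA zone when the mask requires it */
    if (mask < 0xffffffffULL)
            gfp |= GFP_DMA;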
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
This is a helper to be used by the DMA mapping API to handle cache
maintenance for memory identified by a page structure instead of a
virtual address. Those pages may or may not be highmem pages, and
when they're highmem pages, they may or may not be virtually mapped.
When they're not mapped then there is no L1 cache to worry about. But
even in that case the L2 cache must be processed since unmapped highmem
pages can still be L2 cached.
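A sketch of the helper's shape; kmap_high_get() pins an existing
highmem mapping, while 'op' and 'outer_op' are placeholder callbacks
for the L1 and L2 range operations:

    static void dma_cache_maint_page(struct page *page, unsigned long off,
                                     size_t size,
                                     void (*op)(const void *, size_t),
                                     void (*outer_op)(phys_addr_t, size_t))
    {
            if (!PageHighMem(page)) {
                    /* Lowmem is always mapped: maintain L1 directly */
                    op(page_address(page) + off, size);
            } else {
                    void *vaddr = kmap_high_get(page);  /* pin if mapped */
                    if (vaddr) {
                            op(vaddr + off, size);
                            kunmap_high(page);
                    }
                    /* Unmapped highmem: nothing in L1 to maintain */
            }
            /* L2 is physically tagged, so process it in all cases */
            outer_op(page_to_phys(page) + off, size);
    }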
Signed-off-by: Nicolas Pitre <nico@marvell.com>
---
The current use of these macros works well when the conversion is
entirely linear. In this case, we can be assured that the following
holds true:
__va(p + s) - s = __va(p)
However, this is not always the case, especially when there is a
non-linear conversion (e.g., when there is a 3.5GB hole in memory).
In this case, if 's' is the size of the region (e.g., PAGE_SIZE) and
'p' is the final page, the above is most definitely not true.
So, we must ensure that __va() and __pa() are only used with valid
kernel direct mapped RAM addresses. This patch tweaks the code
to achieve this.
Tested-by: Charles Moschel <fred99@carolina.rr.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Rename ARM's struct vm_region so that I can introduce my own global version
for NOMMU. It's feasible that the ARM version may wish to use my global one
instead.
The NOMMU vm_region struct defines areas of the physical memory map that are
under mmap. This may include chunks of RAM or regions of memory mapped
devices, such as flash. It is also used to retain copies of file content so
that shareable private memory mappings of files can be made. As such, it may
be compatible with what is described in the banner comment for ARM's vm_region
struct.
Signed-off-by: David Howells <dhowells@redhat.com>
---
As per the dma_unmap_* calls, we don't touch the cache when a DMA
buffer transitions from device to CPU ownership. Presently, no
problems have been identified with speculative cache prefetching,
which in itself is a new feature in later architectures. We may
have to revisit the DMA API later for these architectures anyway.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
No point having two of these; dma_map_page() can do all the work
for us.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Update the ARM DMA scatter gather APIs for the scatterlist changes.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>