author     Haavard Skinnemoen <hskinnemoen@atmel.com>  2007-06-11 19:17:14 +0400
committer  Haavard Skinnemoen <hskinnemoen@atmel.com>  2007-06-14 20:30:50 +0400
commit     093d0faf57e59feee224217273f944e10e4e3562 (patch)
tree       afecbd1b0ee1ca66fad714a9accadd8ce9efd897 /include/asm-avr32
parent     2fdfe8d9a2687718b07a35196b89fbf48ba0c82f (diff)
download   linux-093d0faf57e59feee224217273f944e10e4e3562.tar.xz
[AVR32] Define ARCH_KMALLOC_MINALIGN to L1_CACHE_BYTES
This allows SLUB debugging to be used without fear of messing up DMA
transfers. SPI is one example that easily breaks without this patch.
Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Diffstat (limited to 'include/asm-avr32')
-rw-r--r--  include/asm-avr32/cache.h  9
1 file changed, 9 insertions, 0 deletions
diff --git a/include/asm-avr32/cache.h b/include/asm-avr32/cache.h
index dabb955f3c00..d3cf35ab11ab 100644
--- a/include/asm-avr32/cache.h
+++ b/include/asm-avr32/cache.h
@@ -4,6 +4,15 @@
 #define L1_CACHE_SHIFT 5
 #define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
 
+/*
+ * Memory returned by kmalloc() may be used for DMA, so we must make
+ * sure that all such allocations are cache aligned. Otherwise,
+ * unrelated code may cause parts of the buffer to be read into the
+ * cache before the transfer is done, causing old data to be seen by
+ * the CPU.
+ */
+#define ARCH_KMALLOC_MINALIGN L1_CACHE_BYTES
+
 #ifndef __ASSEMBLER__
 struct cache_info {
 	unsigned int ways;