| author | Paul Burton <paul.burton@imgtec.com> | 2016-08-02 13:40:57 +0300 |
|---|---|---|
| committer | Ralf Baechle <ralf@linux-mips.org> | 2016-08-02 15:01:02 +0300 |
| commit | 0d8d83d0447deb526c3125250eb391b5d76a3472 (patch) | |
| tree | 8302cb9bc4fb6b6d735116d857b329eff9e65345 /arch | |
| parent | 5c315e3984291b5bbf1bb873040182ead0637160 (diff) | |
| download | linux-0d8d83d0447deb526c3125250eb391b5d76a3472.tar.xz | |
MIPS: Use CPHYSADDR to implement mips32 __pa
Use CPHYSADDR to implement the __pa macro converting from a virtual to a
physical address for MIPS32, much as is already done for MIPS64 (though
without the complication of having both compatibility & XKPHYS
segments).
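
As background, here is a minimal user-space sketch (not part of the patch) of what CPHYSADDR-style masking does on MIPS32: kseg0 (base 0x80000000) and kseg1 (base 0xa0000000) are both fixed windows onto the low 512MB of physical memory, so dropping the upper address bits recovers the same physical address from either. The macro name CPHYSADDR_SKETCH and the example addresses are illustrative; the 0x1fffffff mask mirrors the kernel's CPHYSADDR definition at the time of this commit.

```c
#include <stdint.h>
#include <stdio.h>

/* Keep the low 29 bits, which strips the kseg0 (0x80000000) or
 * kseg1 (0xa0000000) segment base from a MIPS32 virtual address. */
#define CPHYSADDR_SKETCH(a)	((uint32_t)(a) & 0x1fffffff)

int main(void)
{
	uint32_t kseg0_va = 0x80001000;	/* cached mapping of PA 0x1000 */
	uint32_t kseg1_va = 0xa0001000;	/* uncached mapping of PA 0x1000 */

	/* Both virtual addresses resolve to the same physical address. */
	printf("%#x -> %#x\n", (unsigned)kseg0_va,
	       (unsigned)CPHYSADDR_SKETCH(kseg0_va));
	printf("%#x -> %#x\n", (unsigned)kseg1_va,
	       (unsigned)CPHYSADDR_SKETCH(kseg1_va));
	return 0;
}
```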
This allows __pa to work regardless of whether the address being
translated is in kseg0 or kseg1, unlike the previous subtraction-based
approach, which only worked for addresses in kseg0. Working for kseg1
addresses is important if __pa is used on addresses allocated by
dma_alloc_coherent, which on systems with non-coherent I/O returns
addresses in kseg1. If such an address is then used with
dma_map_single_attrs it is passed to virt_to_page, which in turn calls
virt_to_phys, which is a wrapper around __pa. The result is that we end
up with a physical address 0x20000000 bytes (i.e. the size of kseg0)
too high.
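
A worked example of that overshoot, as a small stand-alone C sketch (not part of the patch), assuming the common MIPS32 values PAGE_OFFSET = 0x80000000 and PHYS_OFFSET = 0; the variable names and the example address are illustrative:

```c
#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint32_t kseg1_va = 0xa0004000;	/* uncached alias of physical 0x4000 */

	/* Old subtraction-based __pa: only valid for kseg0 addresses. */
	uint32_t old_pa = kseg1_va - 0x80000000;	/* 0x20004000: wrong */

	/* New mask-based __pa (what CPHYSADDR does): valid for kseg0 & kseg1. */
	uint32_t new_pa = kseg1_va & 0x1fffffff;	/* 0x00004000: right */

	/* The old result is exactly 0x20000000 (the size of kseg0) too high. */
	assert(old_pa - new_pa == 0x20000000);
	return 0;
}
```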
In addition to providing consistency with MIPS64 & fixing the kseg1
case above, this has the added bonus of generating smaller code for
systems implementing MIPS32r2 & beyond, where a single ext instruction
can extract the physical address rather than needing to load an
immediate into a temporary register & subtract it. This results in
~1.3KB of savings for a boston_defconfig kernel adjusted to set
CONFIG_32BIT=y.
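
A rough illustration of that code-size point (pa_mask and pa_sub are hypothetical helpers, not kernel code, and the exact instruction sequences depend on the compiler):

```c
#include <stdint.h>

/* Mask-based translation: on MIPS32r2 and later a compiler can typically
 * emit a single "ext" (extract bits 0..28) for this, with no constant to
 * load. */
uint32_t pa_mask(uint32_t va)
{
	return va & 0x1fffffff;
}

/* Subtraction-based translation: 0x80000000 does not fit in a 16-bit
 * immediate, so it generally costs a "lui" into a temporary register
 * followed by a subtract, i.e. an extra instruction per call site. */
uint32_t pa_sub(uint32_t va)
{
	return va - 0x80000000;
}
```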
This patch does not change the EVA case, which may or may not have
similar issues around handling both cached & uncached addresses, but
that is beyond the scope of this patch.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13836/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Diffstat (limited to 'arch')
-rw-r--r-- | arch/mips/include/asm/page.h | 38 |
1 file changed, 28 insertions, 10 deletions
diff --git a/arch/mips/include/asm/page.h b/arch/mips/include/asm/page.h
index 74cb004c2868..ea0cd9773914 100644
--- a/arch/mips/include/asm/page.h
+++ b/arch/mips/include/asm/page.h
@@ -162,16 +162,34 @@ typedef struct { unsigned long pgprot; } pgprot_t;
 /*
  * __pa()/__va() should be used only during mem init.
  */
-#ifdef CONFIG_64BIT
-#define __pa(x)								\
-({									\
-	unsigned long __x = (unsigned long)(x);				\
-	__x < CKSEG0 ? XPHYSADDR(__x) : CPHYSADDR(__x);			\
-})
-#else
-#define __pa(x)								\
-	((unsigned long)(x) - PAGE_OFFSET + PHYS_OFFSET)
-#endif
+static inline unsigned long ___pa(unsigned long x)
+{
+	if (config_enabled(CONFIG_64BIT)) {
+		/*
+		 * For MIPS64 the virtual address may either be in one of
+		 * the compatibility segements ckseg0 or ckseg1, or it may
+		 * be in xkphys.
+		 */
+		return x < CKSEG0 ? XPHYSADDR(x) : CPHYSADDR(x);
+	}
+
+	if (!config_enabled(CONFIG_EVA)) {
+		/*
+		 * We're using the standard MIPS32 legacy memory map, ie.
+		 * the address x is going to be in kseg0 or kseg1. We can
+		 * handle either case by masking out the desired bits using
+		 * CPHYSADDR.
+		 */
+		return CPHYSADDR(x);
+	}
+
+	/*
+	 * EVA is in use so the memory map could be anything, making it not
+	 * safe to just mask out bits.
+	 */
+	return x - PAGE_OFFSET + PHYS_OFFSET;
+}
+#define __pa(x)		___pa((unsigned long)(x))
 #define __va(x)	((void *)((unsigned long)(x) + PAGE_OFFSET - PHYS_OFFSET))

 #include <asm/io.h>
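
One detail of the new code worth noting: config_enabled() expands to a compile-time constant 1 or 0, so the untaken branches in ___pa() are discarded by the compiler and the function still reduces to a single expression, much like the old #ifdef. A minimal sketch of that pattern (not kernel code), with MY_CONFIG_64BIT standing in for the Kconfig-generated constant:

```c
#define MY_CONFIG_64BIT	0	/* stand-in for a Kconfig-generated constant */

static inline unsigned long demo_pa(unsigned long x)
{
	if (MY_CONFIG_64BIT)
		return x;		/* dead code, removed at compile time */

	return x & 0x1fffffff;		/* the only code that survives */
}
```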