author	Jungseok Lee <jungseoklee85@gmail.com>	2014-12-02 20:49:24 +0300
committer	Will Deacon <will.deacon@arm.com>	2014-12-03 13:19:35 +0300
commit	e4f88d833bec29b8e6fadc1b2488f0c6370935e1 (patch)
tree	e4062286dd04734147b5901d3d1e86bd7cacdb1c /arch/arm64/include/asm/cache.h
parent	a2d25a5391ca219f196f9fee7b535c40d201c6bf (diff)
download	linux-e4f88d833bec29b8e6fadc1b2488f0c6370935e1.tar.xz
arm64: Implement support for read-mostly sections
By grouping data that is mostly read together, we can avoid unnecessary cache line bouncing. Other architectures, such as ARM and x86, have adopted the same idea.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Diffstat (limited to 'arch/arm64/include/asm/cache.h')
-rw-r--r--  arch/arm64/include/asm/cache.h | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
index 88cc05b5f3ac..bde449936e2f 100644
--- a/arch/arm64/include/asm/cache.h
+++ b/arch/arm64/include/asm/cache.h
@@ -32,6 +32,8 @@
 
 #ifndef __ASSEMBLY__
 
+#define __read_mostly __attribute__((__section__(".data..read_mostly")))
+
 static inline int cache_line_size(void)
 {
 	u32 cwg = cache_type_cwg();
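
Note: the hunk above only adds the arm64 definition of __read_mostly. As a minimal, hypothetical sketch of how kernel code uses this annotation (the variable name and value below are invented for illustration and are not part of this commit):

#include <linux/cache.h>	/* includes asm/cache.h, which now defines __read_mostly */

/*
 * Hypothetical tunable: written once during initialisation, read on
 * every hot-path call. __read_mostly places it in the
 * .data..read_mostly section, so frequently written data elsewhere
 * cannot bounce the cache line holding this value between CPUs.
 */
static unsigned int my_poll_interval_ms __read_mostly = 100;

The .data..read_mostly input section is expected to be gathered by the generic linker script macro READ_MOSTLY_DATA() (include/asm-generic/vmlinux.lds.h), so all such variables end up packed onto cache lines that are rarely written and therefore rarely invalidated on other CPUs.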