path: root/arch/arm/mach-omap2/omap-secure.c
author		R Sricharan <r.sricharan@ti.com>	2012-09-12 11:14:13 +0400
committer	Tony Lindgren <tony@atomide.com>	2012-10-09 01:04:50 +0400
commit		7a2852908e37e20be065e7765806daf1df077496 (patch)
tree		5f54ff51f0f643448e8a7a79ed52fd59b1ff7bd3 /arch/arm/mach-omap2/omap-secure.c
parent		9d7d6e363b06934221b81a859d509844c97380df (diff)
download	linux-7a2852908e37e20be065e7765806daf1df077496.tar.xz
ARM: OMAP2+: Round off the carve out memory requested to section_size
memblock_steal tries to reserve physical memory during boot. When the
requested size is not aligned on the section size, the remaining memory
available for lowmem becomes unaligned on the section boundary. There is
an issue with this, which is discussed in the thread below:

https://lkml.org/lkml/2012/6/28/112

The final conclusion from the thread seems to be to align the
memblock_steal calls on the SECTION boundary. The issue shows up when
LPAE is enabled, where the section size is 2MB.

Boot tested this on OMAP5 evm with and without LPAE.

Signed-off-by: R Sricharan <r.sricharan@ti.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
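For illustration only (not part of the original commit): a minimal standalone
sketch of the round-up that the kernel's ALIGN() macro performs. ALIGN_UP here
mirrors ALIGN() for power-of-two alignments, SECTION_SIZE_LPAE stands in for
the 2MB section size used when LPAE is enabled, and the 2.5MB carve-out size
is hypothetical.

/*
 * Standalone sketch of the ALIGN() round-up discussed above.
 * ALIGN_UP mirrors the kernel's ALIGN() for power-of-two alignments;
 * the 2.5MB secure RAM carve-out size below is hypothetical.
 */
#include <stdio.h>

#define ALIGN_UP(x, a)		(((x) + ((a) - 1)) & ~((a) - 1))
#define SZ_1M			0x00100000UL
#define SECTION_SIZE_LPAE	0x00200000UL	/* 2MB sections with LPAE */

int main(void)
{
	unsigned long size = 0x00280000UL;	/* hypothetical 2.5MB carve-out */

	/* 1MB rounding gives 0x00300000 (3MB), which is not a 2MB multiple... */
	printf("ALIGN to SZ_1M        : 0x%08lx\n", ALIGN_UP(size, SZ_1M));

	/*
	 * ...while section-size rounding gives 0x00400000 (4MB), so the
	 * lowmem boundary stays section-aligned under LPAE.
	 */
	printf("ALIGN to SECTION_SIZE : 0x%08lx\n", ALIGN_UP(size, SECTION_SIZE_LPAE));

	return 0;
}

Since the carve-out is taken away from the memory available for lowmem, a size
rounded only to 1MB can leave the lowmem boundary in the middle of a 2MB
section, which is exactly the problem described in the thread above.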
Diffstat (limited to 'arch/arm/mach-omap2/omap-secure.c')
-rw-r--r--	arch/arm/mach-omap2/omap-secure.c	4
1 file changed, 2 insertions, 2 deletions
diff --git a/arch/arm/mach-omap2/omap-secure.c b/arch/arm/mach-omap2/omap-secure.c
index a004cb9acf52..e089e4d1ae38 100644
--- a/arch/arm/mach-omap2/omap-secure.c
+++ b/arch/arm/mach-omap2/omap-secure.c
@@ -61,8 +61,8 @@ int __init omap_secure_ram_reserve_memblock(void)
 {
 	u32 size = OMAP_SECURE_RAM_STORAGE;
 
-	size = ALIGN(size, SZ_1M);
-	omap_secure_memblock_base = arm_memblock_steal(size, SZ_1M);
+	size = ALIGN(size, SECTION_SIZE);
+	omap_secure_memblock_base = arm_memblock_steal(size, SECTION_SIZE);
 
 	return 0;
 }