author    Dave P Martin <Dave.Martin@arm.com>  2015-06-16 19:38:47 +0300
committer Jiri Slaby <jslaby@suse.cz>          2015-07-30 15:10:48 +0300
commit    60b519808996e04b5bf19cdf42f28f9b3689f252 (patch)
tree      ca937829ffa311b45e9f9d3ffa53c97090d215e8
parent    4dd03eff07b66352a307bb3eeb7d3973ec34ecbe (diff)
arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP
commit b9bcc919931611498e856eae9bf66337330d04cc upstream.

The memmap freeing code in free_unused_memmap() computes the end of each memblock by adding the memblock size onto the base. However, if SPARSEMEM is enabled then the value (start) used for the base may already have been rounded downwards to work out which memmap entries to free after the previous memblock.

This may cause memmap entries that are in use to get freed.

In general, you're not likely to hit this problem unless there are at least 2 memblocks and one of them is not aligned to a sparsemem section boundary. Note that carve-outs can increase the number of memblocks by splitting the regions listed in the device tree.

This problem doesn't occur with SPARSEMEM_VMEMMAP, because the vmemmap code deals with freeing the unused regions of the memmap instead of requiring the arch code to do it.

This patch gets the memblock base out of the memblock directly when computing the block end address to ensure the correct value is used.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
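For context (the hunk below changes only the last statement of the per-memblock loop): the sketch that follows paraphrases free_unused_memmap() from this kernel generation, abbreviated for illustration rather than copied verbatim. PAGES_PER_SECTION, MAX_ORDER_NR_PAGES, __phys_to_pfn() and for_each_memblock() are the real kernel macros/helpers of that era, and free_memmap() is the local helper in arch/arm64/mm/init.c; the surrounding details are condensed.

static void __init free_unused_memmap(void)
{
	unsigned long start, prev_end = 0;
	struct memblock_region *reg;

	for_each_memblock(memory, reg) {
		start = __phys_to_pfn(reg->base);

#ifdef CONFIG_SPARSEMEM
		/*
		 * With SPARSEMEM, 'start' may be clamped DOWN here so that
		 * memmap entries of a partially used previous section are
		 * not freed.  From this point on, 'start' no longer equals
		 * the pfn of reg->base.
		 */
		start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
#endif
		/* Free the memmap covering the gap since the previous block. */
		if (prev_end && prev_end < start)
			free_memmap(prev_end, start);

		/*
		 * Old code: start + __phys_to_pfn(reg->size) undershoots the
		 * real block end whenever 'start' was clamped down above, so
		 * in-use memmap entries beyond the too-small prev_end could
		 * later be freed.
		 *
		 * Fix: derive the end pfn from the memblock itself.
		 */
		prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
				 MAX_ORDER_NR_PAGES);
	}
}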
-rw-r--r--	arch/arm64/mm/init.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index de2de5db628d..cfe3ad835d16 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -253,7 +253,7 @@ static void __init free_unused_memmap(void)
* memmap entries are valid from the bank end aligned to
* MAX_ORDER_NR_PAGES.
*/
- prev_end = ALIGN(start + __phys_to_pfn(reg->size),
+ prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
MAX_ORDER_NR_PAGES);
}
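To make the effect of the one-line change concrete, take some hypothetical numbers (assuming 4 KiB pages and 1 GiB sparsemem sections, i.e. PAGES_PER_SECTION = 0x40000): suppose the previous block ended at pfn 0x50000 and the current block spans pfns 0x90000-0xB0000, so its base is not section-aligned. The SPARSEMEM clamp gives start = min(0x90000, ALIGN(0x50000, 0x40000)) = 0x80000. The old computation then yields prev_end = 0x80000 + 0x20000 = 0xA0000, which is 0x10000 pages short of the real end, so the memmap for pfns 0xA0000-0xB0000 could later be freed while still in use. Deriving the end from reg->base + reg->size gives the correct 0xB0000.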