author		Bjorn Andersson <bjorn.andersson@linaro.org>	2019-04-18 07:29:29 +0300
committer	Catalin Marinas <catalin.marinas@arm.com>	2019-04-23 12:56:24 +0300
commit		d4d18e3ec6091843f607e8929a56723e28f393a6 (patch)
tree		ebe9dab3dcc03344ca8f47f3b00e6c37242bde02 /arch
parent		085b7755808aa11f78ab9377257e1dad2e6fa4bb (diff)
download	linux-d4d18e3ec6091843f607e8929a56723e28f393a6.tar.xz
arm64: mm: Ensure tail of unaligned initrd is reserved
In the event that the start address of the initrd is not page aligned but its
size is, base + size will not cover the entire initrd image and there is a
chance that the kernel will corrupt the tail of the image.
By aligning the end of the initrd to a page boundary and then subtracting the
page-aligned start address, the memblock reservation will cover all pages that
contain the initrd.
Fixes: c756c592e442 ("arm64: Utilize phys_initrd_start/phys_initrd_size")
Cc: stable@vger.kernel.org
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Diffstat (limited to 'arch')
-rw-r--r--	arch/arm64/mm/init.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 6bc135042f5e..7cae155e81a5 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -363,7 +363,7 @@ void __init arm64_memblock_init(void)
 		 * Otherwise, this is a no-op
 		 */
 		u64 base = phys_initrd_start & PAGE_MASK;
-		u64 size = PAGE_ALIGN(phys_initrd_size);
+		u64 size = PAGE_ALIGN(phys_initrd_start + phys_initrd_size) - base;
 
 		/*
 		 * We can only add back the initrd memory if we don't end up
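
To illustrate the arithmetic the patch fixes, below is a minimal user-space
sketch (not kernel code) that reimplements PAGE_MASK/PAGE_ALIGN locally and
compares the old and new size calculations. The example values (an initrd
starting 0x200 bytes into a page with a page-aligned size) are made up for
demonstration and are not taken from the commit.

/*
 * Minimal sketch of the old vs. new memblock reservation size calculation
 * for an unaligned initrd. PAGE_* macros are reimplemented here so the
 * example is self-contained; all addresses are hypothetical.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE	4096ULL
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & PAGE_MASK)

int main(void)
{
	/* initrd starts 0x200 bytes into a page; size is already page aligned */
	uint64_t phys_initrd_start = 0x80000200ULL;
	uint64_t phys_initrd_size  = 0x1000ULL;

	uint64_t base       = phys_initrd_start & PAGE_MASK;
	uint64_t old_size   = PAGE_ALIGN(phys_initrd_size);
	uint64_t new_size   = PAGE_ALIGN(phys_initrd_start + phys_initrd_size) - base;
	uint64_t initrd_end = phys_initrd_start + phys_initrd_size;

	printf("initrd spans     [%#llx, %#llx)\n",
	       (unsigned long long)phys_initrd_start,
	       (unsigned long long)initrd_end);
	printf("old reservation  [%#llx, %#llx)  %s\n",
	       (unsigned long long)base,
	       (unsigned long long)(base + old_size),
	       base + old_size < initrd_end ? "tail NOT covered" : "covered");
	printf("new reservation  [%#llx, %#llx)  %s\n",
	       (unsigned long long)base,
	       (unsigned long long)(base + new_size),
	       base + new_size < initrd_end ? "tail NOT covered" : "covered");
	return 0;
}

With these example values the old reservation ends at 0x80001000 while the
initrd extends to 0x80001200, leaving the final 0x200 bytes unreserved; the
new calculation rounds the end of the initrd up to 0x80002000 before
subtracting the page-aligned base, so the tail is covered.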