author		Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>	2017-04-18 10:01:27 +0300
committer	Michael Ellerman <mpe@ellerman.id.au>			2017-04-19 13:00:19 +0300
commit		321f7d29e5163d7f1c9c0b705acc45bd1be34aa6
tree		ccb80ac79e28a7b4fad3a2187dad26d571686e6e	/arch/powerpc/mm
parent		41e20d959e5919c70058369323cefa57428b7aaf
powerpc/mmap: Any hint > 128TB searches the full VA space
As part of the new large address space support, processes start out life with a
128TB virtual address space. However, when calling mmap() a process can pass a
hint address, and if that hint is above 128TB the kernel will use the full 512TB
address space to try to satisfy the mmap() request.
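For illustration, here is a minimal userspace sketch (not part of the patch;
the hint value and mapping size are arbitrary) showing how a hint above 128TB
opts a mapping into the larger address space:

	#include <stdio.h>
	#include <sys/mman.h>

	int main(void)
	{
		/* 128TB is 1UL << 47; this hint (256TB) is above it, so the
		 * kernel may search the full 512TB address space. */
		void *hint = (void *)(1UL << 48);
		void *p = mmap(hint, 1UL << 16, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		printf("mapped at %p\n", p);
		return 0;
	}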
Currently we have a check that the hint is > 128TB and < 512TB (TASK_SIZE),
which was added as an optimisation to avoid updating addr_limit unnecessarily
and also to avoid calling slice_flush_segments() on all CPUs more than
necessary.
However, this has the user-visible side effect that an mmap() hint above 512TB
does not search the full address space unless a preceding mmap() used a hint
value > 128TB && < 512TB.
So fix it to treat any hint above 128TB as a hint to search the full address
space. Instead of checking the hint against TASK_SIZE, we check whether
addr_limit is already == TASK_SIZE.
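To make the old and new behaviour concrete, here is a small standalone sketch
(illustrative variable names, not the kernel code; 128TB = 1UL << 47 and
TASK_SIZE = 512TB = 1UL << 49 on these systems) tracing both checks for a hint
above 512TB:

	#include <stdio.h>

	int main(void)
	{
		unsigned long task_size  = 1UL << 49;	/* TASK_SIZE: 512TB */
		unsigned long addr_limit = 1UL << 47;	/* default limit: 128TB */
		unsigned long hint       = 1UL << 60;	/* mmap() hint above 512TB */

		/* Old check: "hint < task_size" is false for hints >= 512TB,
		 * so the limit is never raised. */
		if (hint > addr_limit && hint < task_size)
			addr_limit = task_size;		/* not reached */
		printf("old check: limit = %lu TB\n", addr_limit >> 40);

		/* New check: any hint above the limit raises it; comparing
		 * addr_limit against task_size only skips redundant updates
		 * (and redundant cross-CPU segment flushes on the slice path). */
		if (hint > addr_limit && addr_limit != task_size)
			addr_limit = task_size;		/* taken */
		printf("new check: limit = %lu TB\n", addr_limit >> 40);
		return 0;
	}

With the old condition the limit stays at 128TB for this hint; with the new
one the same hint raises it to 512TB.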
This also brings the ABI in line with what is proposed on x86, i.e. that a hint
address above 128TB, up to and including (2^64)-1, is an indication to search
the full address space.
Fixes: f4ea6dcb08ea2c ("powerpc/mm: Enable mappings above 128TB")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Diffstat (limited to 'arch/powerpc/mm')
-rw-r--r--	arch/powerpc/mm/mmap.c	| 6 ++++--
-rw-r--r--	arch/powerpc/mm/slice.c	| 3 ++-
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
index 106a86406c77..82fc5762f971 100644
--- a/arch/powerpc/mm/mmap.c
+++ b/arch/powerpc/mm/mmap.c
@@ -97,7 +97,8 @@ radix__arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	struct vm_area_struct *vma;
 	struct vm_unmapped_area_info info;
 
-	if (unlikely(addr > mm->context.addr_limit && addr < TASK_SIZE))
+	if (unlikely(addr > mm->context.addr_limit &&
+		     mm->context.addr_limit != TASK_SIZE))
 		mm->context.addr_limit = TASK_SIZE;
 
 	if (len > mm->task_size - mmap_min_addr)
@@ -139,7 +140,8 @@ radix__arch_get_unmapped_area_topdown(struct file *filp,
 	unsigned long addr = addr0;
 	struct vm_unmapped_area_info info;
 
-	if (unlikely(addr > mm->context.addr_limit && addr < TASK_SIZE))
+	if (unlikely(addr > mm->context.addr_limit &&
+		     mm->context.addr_limit != TASK_SIZE))
 		mm->context.addr_limit = TASK_SIZE;
 
 	/* requested length too big for entire address space */
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index ade66c3ecdce..966b9fccfa66 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -419,7 +419,8 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
 	/*
 	 * Check if we need to expland slice area.
 	 */
-	if (unlikely(addr > mm->context.addr_limit && addr < TASK_SIZE)) {
+	if (unlikely(addr > mm->context.addr_limit &&
+		     mm->context.addr_limit != TASK_SIZE)) {
 		mm->context.addr_limit = TASK_SIZE;
 		on_each_cpu(slice_flush_segments, mm, 1);
 	}