| author | Jann Horn <jannh@google.com> | 2019-10-18 23:56:30 +0300 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2019-11-14 06:44:47 +0300 |
| commit | a7a74d7ff55a0c657bc46238b050460b9eacea95 (patch) | |
| tree | 37be4c7f15bfa4535a2aa0a3a1a28ee58d10627f /drivers/android | |
| parent | 8eb52a1ee37aafd9b796713aa0b3ab9cbc455be3 (diff) | |
| download | linux-a7a74d7ff55a0c657bc46238b050460b9eacea95.tar.xz | |

binder: Prevent repeated use of ->mmap() via NULL mapping

binder_alloc_mmap_handler() attempts to detect the use of ->mmap() on a
binder_proc whose binder_alloc has already been initialized by checking
whether alloc->buffer is non-zero.
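As a quick illustration of why this check is fragile, here is a minimal user-space model of the pre-fix guard (a toy sketch; `toy_alloc` and `toy_mmap_handler` are invented names, not kernel code). Treating the stored start address as the "already mapped" flag fails exactly when the mapping starts at address 0:

```c
#include <stdio.h>

/* Toy model of the pre-fix guard: "already mapped" is inferred from
 * the stored mapping address, so a mapping that starts at address 0
 * is indistinguishable from "not mapped yet". */
struct toy_alloc {
        void *buffer;           /* userspace start address of the mapping */
};

static int toy_mmap_handler(struct toy_alloc *a, void *vm_start)
{
        if (a->buffer)          /* pre-fix check: address != NULL */
                return -1;      /* -EBUSY: "already mapped" */
        a->buffer = vm_start;   /* one-time setup runs here */
        return 0;
}

int main(void)
{
        struct toy_alloc a = { 0 };

        /* First call, mapping placed at address 0: accepted. */
        toy_mmap_handler(&a, (void *)0);
        /* Second call: a.buffer is still NULL, so the "one-time"
         * setup runs again -- the double init this patch prevents. */
        if (toy_mmap_handler(&a, (void *)0) == 0)
                printf("guard bypassed: handler ran twice\n");
        return 0;
}
```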
Before commit 880211667b20 ("binder: remove kernel vm_area for buffer
space"), alloc->buffer was a kernel mapping address, which is always
non-zero, but since that commit, it is a userspace mapping address.

A sufficiently privileged user can map /dev/binder at NULL, tricking
binder_alloc_mmap_handler() into assuming that the binder_proc has not been
mapped yet. This leads to memory unsafety.

Luckily, no context on Android has such privileges, and on a typical Linux
desktop system, you need to be root to do that.
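For concreteness, a rough sketch of the trigger described above (hypothetical and untested; it assumes a root caller, `vm.mmap_min_addr` set to 0, and no LSM policy blocking page-zero mappings). The first mapping is forced to address 0, so the pre-fix check sees `alloc->buffer == NULL` and lets a second `mmap()` through:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
        int fd = open("/dev/binder", O_RDWR);

        if (fd < 0)
                return 1;

        /* MAP_FIXED at address 0 only succeeds for a sufficiently
         * privileged caller with vm.mmap_min_addr lowered to 0;
         * binder rejects writable mappings, hence PROT_READ. */
        void *first = mmap((void *)0, 4096, PROT_READ,
                           MAP_PRIVATE | MAP_FIXED, fd, 0);

        if (first == MAP_FAILED)
                return 1;

        /* Before the fix, alloc->buffer == NULL after the first call,
         * so this second mmap() re-ran the one-time setup instead of
         * failing with -EBUSY. */
        void *second = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0);

        printf("first=%p second=%p\n", first, second);
        return 0;
}
```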
Fix it by using the mapping size instead of the mapping address to
distinguish the mapped case. A valid VMA can't have size zero.
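Condensed from the patch below, the new guard keys off `alloc->buffer_size`. Note two supporting changes visible in the diff: the size is now assigned under `binder_alloc_mmap_lock`, making the check-and-claim atomic, and it is reset to 0 on the later error paths so a failed `mmap()` does not leave the proc looking mapped:

```c
        mutex_lock(&binder_alloc_mmap_lock);
        if (alloc->buffer_size) {       /* nonzero size <=> already mapped */
                ret = -EBUSY;
                failure_string = "already mapped";
                goto err_already_mapped;
        }
        /* vm_end > vm_start for any valid VMA, so this is never 0. */
        alloc->buffer_size = min_t(unsigned long, vma->vm_end - vma->vm_start,
                                   SZ_4M);
        mutex_unlock(&binder_alloc_mmap_lock);
```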
Fixes: 880211667b20 ("binder: remove kernel vm_area for buffer space")
Cc: stable@vger.kernel.org
Signed-off-by: Jann Horn <jannh@google.com>
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Link: https://lore.kernel.org/r/20191018205631.248274-2-jannh@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'drivers/android')
| -rw-r--r-- | drivers/android/binder_alloc.c | 11 |

1 file changed, 6 insertions, 5 deletions
```diff
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index d718ed0537be..1f73d12409e3 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -680,17 +680,17 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 	struct binder_buffer *buffer;
 
 	mutex_lock(&binder_alloc_mmap_lock);
-	if (alloc->buffer) {
+	if (alloc->buffer_size) {
 		ret = -EBUSY;
 		failure_string = "already mapped";
 		goto err_already_mapped;
 	}
+	alloc->buffer_size = min_t(unsigned long, vma->vm_end - vma->vm_start,
+				   SZ_4M);
+	mutex_unlock(&binder_alloc_mmap_lock);
 
 	alloc->buffer = (void __user *)vma->vm_start;
-	mutex_unlock(&binder_alloc_mmap_lock);
 
-	alloc->buffer_size = min_t(unsigned long, vma->vm_end - vma->vm_start,
-				   SZ_4M);
 	alloc->pages = kcalloc(alloc->buffer_size / PAGE_SIZE,
 			       sizeof(alloc->pages[0]),
 			       GFP_KERNEL);
@@ -721,8 +721,9 @@ err_alloc_buf_struct_failed:
 	kfree(alloc->pages);
 	alloc->pages = NULL;
 err_alloc_pages_failed:
-	mutex_lock(&binder_alloc_mmap_lock);
 	alloc->buffer = NULL;
+	mutex_lock(&binder_alloc_mmap_lock);
+	alloc->buffer_size = 0;
 err_already_mapped:
 	mutex_unlock(&binder_alloc_mmap_lock);
 	binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
```