author     Jens Axboe <axboe@kernel.dk>  2024-05-29 18:38:38 +0300
committer  Jens Axboe <axboe@kernel.dk>  2024-05-29 18:53:14 +0300
commit     06fe9b1df1086b42718d632aa57e8f7cd1a66a21 (patch)
tree       2b76323364eae2e8177d8418ad204e114e7c2e3a /io_uring/memmap.c
parent     1613e604df0cd359cf2a7fbd9be7a0bcfacfabd0 (diff)
io_uring: don't attempt to mmap larger than what the user asks for
If IORING_FEAT_SINGLE_MMAP is ignored, as can happen if an application
uses an ancient liburing or does the setup manually, then three mmap calls
are required to map the ring into userspace. The kernel will still have
collapsed the mappings; however, userspace may ask to map them
individually. If so, we should not use the full number of ring pages, as
it may exceed the partial mapping. Doing so yields -EFAULT from
vm_insert_pages(), since we would pass in more pages than the application
asked for.
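For reference, a minimal sketch of that legacy per-region setup as done
from userspace (error handling omitted; the helper name is mine, the field
usage follows the io_uring_setup(2) documented layout):

	#include <stddef.h>
	#include <sys/mman.h>
	#include <linux/io_uring.h>

	/* Legacy (pre-IORING_FEAT_SINGLE_MMAP) setup: each ring region is
	 * mapped on its own, so each VMA can be smaller than the collapsed
	 * ring allocation held by the kernel. */
	static void map_rings_legacy(int ring_fd, struct io_uring_params *p,
				     void **sq_ring, void **cq_ring, void **sqes)
	{
		size_t sq_sz = p->sq_off.array + p->sq_entries * sizeof(unsigned);
		size_t cq_sz = p->cq_off.cqes + p->cq_entries * sizeof(struct io_uring_cqe);

		*sq_ring = mmap(NULL, sq_sz, PROT_READ | PROT_WRITE,
				MAP_SHARED | MAP_POPULATE, ring_fd, IORING_OFF_SQ_RING);
		*cq_ring = mmap(NULL, cq_sz, PROT_READ | PROT_WRITE,
				MAP_SHARED | MAP_POPULATE, ring_fd, IORING_OFF_CQ_RING);
		*sqes = mmap(NULL, p->sq_entries * sizeof(struct io_uring_sqe),
			     PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
			     ring_fd, IORING_OFF_SQES);
	}

Each of those three mappings can cover fewer pages than the full collapsed
ring allocation, which is exactly the case the patch below handles.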
Cap the number of pages for the particular mapping operation to match what
the application asked for.
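The arithmetic is the same round-up-and-clamp used in the hunk below; as a
standalone illustration (not the kernel code itself, names hypothetical):

	#include <stddef.h>

	/* Pages covered by a mapping of 'sz' bytes, rounded up to whole
	 * pages and capped at the full ring page count. */
	static unsigned int pages_for_mapping(size_t sz, size_t page_size,
					      unsigned int n_ring_pages)
	{
		unsigned int vma_pages = (sz + page_size - 1) / page_size;

		return vma_pages < n_ring_pages ? vma_pages : n_ring_pages;
	}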
Reported-by: Lucas Mülling <lmulling@proton.me>
Link: https://github.com/axboe/liburing/issues/1157
Fixes: 3ab1db3c6039 ("io_uring: get rid of remap_pfn_range() for mapping rings/sqes")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'io_uring/memmap.c')
-rw-r--r--  io_uring/memmap.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/io_uring/memmap.c b/io_uring/memmap.c
index 4785d6af5fee..a0f32a255fd1 100644
--- a/io_uring/memmap.c
+++ b/io_uring/memmap.c
@@ -244,6 +244,7 @@ __cold int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
 	struct io_ring_ctx *ctx = file->private_data;
 	size_t sz = vma->vm_end - vma->vm_start;
 	long offset = vma->vm_pgoff << PAGE_SHIFT;
+	unsigned int npages;
 	void *ptr;
 
 	ptr = io_uring_validate_mmap_request(file, vma->vm_pgoff, sz);
@@ -253,8 +254,8 @@ __cold int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
 	switch (offset & IORING_OFF_MMAP_MASK) {
 	case IORING_OFF_SQ_RING:
 	case IORING_OFF_CQ_RING:
-		return io_uring_mmap_pages(ctx, vma, ctx->ring_pages,
-					   ctx->n_ring_pages);
+		npages = min(ctx->n_ring_pages, (sz + PAGE_SIZE - 1) >> PAGE_SHIFT);
+		return io_uring_mmap_pages(ctx, vma, ctx->ring_pages, npages);
 	case IORING_OFF_SQES:
 		return io_uring_mmap_pages(ctx, vma, ctx->sqe_pages,
 					   ctx->n_sqe_pages);