| author | Linus Torvalds <torvalds@linux-foundation.org> | 2026-04-15 22:59:16 +0300 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2026-04-15 22:59:16 +0300 |
| commit | 334fbe734e687404f346eba7d5d96ed2b44d35ab (patch) | |
| tree | 65d5c8f4de18335209b2529146e6b06960a48b43 /drivers | |
| parent | 5bdb4078e1efba9650c03753616866192d680718 (diff) | |
| parent | 3bac01168982ec3e3bf87efdc1807c7933590a85 (diff) | |
| download | linux-334fbe734e687404f346eba7d5d96ed2b44d35ab.tar.xz | |
Merge tag 'mm-stable-2026-04-13-21-45' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull MM updates from Andrew Morton:
- "maple_tree: Replace big node with maple copy" (Liam Howlett)
Mainly preparatory work for ongoing development, but it does reduce
stack usage and is an improvement.
- "mm, swap: swap table phase III: remove swap_map" (Kairui Song)
Offers memory savings by removing the static swap_map. It also yields
some CPU savings and implements several cleanups.
- "mm: memfd_luo: preserve file seals" (Pratyush Yadav)
Preserve file seals in LUO's memfd code
- "mm: zswap: add per-memcg stat for incompressible pages" (Jiayuan
Chen)
Add additional userspace stats reporting to zswap
- "arch, mm: consolidate empty_zero_page" (Mike Rapoport)
Some cleanups for our handling of ZERO_PAGE() and zero_pfn
- "mm/kmemleak: Improve scan_should_stop() implementation" (Zhongqiu
Han)
A robustness improvement and some cleanups in the kmemleak code
- "Improve khugepaged scan logic" (Vernon Yang)
Improve khugepaged scan logic and reduce CPU consumption by
prioritizing scanning tasks that access memory frequently
- "Make KHO Stateless" (Jason Miu)
Simplify Kexec Handover by transitioning KHO from an xarray-based
metadata tracking system with serialization to a radix tree data
structure that can be passed directly to the next kernel
- "mm: vmscan: add PID and cgroup ID to vmscan tracepoints" (Thomas
Ballasi and Steven Rostedt)
Enhance vmscan's tracepointing
- "mm: arch/shstk: Common shadow stack mapping helper and
VM_NOHUGEPAGE" (Catalin Marinas)
Cleanup for the shadow stack code: remove per-arch code in favour of
a generic implementation
- "Fix KASAN support for KHO restored vmalloc regions" (Pasha Tatashin)
Fix a WARN() which can be emitted when KHO restores a vmalloc area
- "mm: Remove stray references to pagevec" (Tal Zussman)
Several cleanups, mainly updating references to "struct pagevec",
which became folio_batch three years ago
- "mm: Eliminate fake head pages from vmemmap optimization" (Kiryl
Shutsemau)
Simplify the HugeTLB vmemmap optimization (HVO) by changing how tail
pages encode their relationship to the head page
- "mm/damon/core: improve DAMOS quota efficiency for core layer
filters" (SeongJae Park)
Improve two problematic behaviors of DAMOS that make it less
efficient when core layer filters are used
- "mm/damon: strictly respect min_nr_regions" (SeongJae Park)
Improve DAMON usability by extending the treatment of the
min_nr_regions user-settable parameter
- "mm/page_alloc: pcp locking cleanup" (Vlastimil Babka)
The proper fix for a previously hotfixed SMP=n issue. Code
simplifications and cleanups ensued
- "mm: cleanups around unmapping / zapping" (David Hildenbrand)
A bunch of cleanups around unmapping and zapping. Mostly
simplifications, code movements, documentation and renaming of
zapping functions
- "support batched checking of the young flag for MGLRU" (Baolin Wang)
Batched checking of the young flag for MGLRU. It's partly cleanups; one
benchmark shows large performance benefits for arm64
- "memcg: obj stock and slab stat caching cleanups" (Johannes Weiner)
memcg cleanup and robustness improvements
- "Allow order zero pages in page reporting" (Yuvraj Sakshith)
Enhance free page reporting - it presently and undesirably excludes
order-0 pages when reporting free memory.
- "mm: vma flag tweaks" (Lorenzo Stoakes)
Cleanup work following from the recent conversion of the VMA flags to
a bitmap
- "mm/damon: add optional debugging-purpose sanity checks" (SeongJae
Park)
Add some more developer-facing debug checks into DAMON core
- "mm/damon: test and document power-of-2 min_region_sz requirement"
(SeongJae Park)
Add an additional DAMON kunit test and make some adjustments to
addr_unit parameter handling
- "mm/damon/core: make passed_sample_intervals comparisons
overflow-safe" (SeongJae Park)
Fix a hard-to-hit time overflow issue in DAMON core
- "mm/damon: improve/fixup/update ratio calculation, test and
documentation" (SeongJae Park)
A batch of misc/minor improvements and fixups for DAMON
- "mm: move vma_(kernel|mmu)_pagesize() out of hugetlb.c" (David
Hildenbrand)
Fix a possible issue with dax-device when CONFIG_HUGETLB=n. Some code
movement was required.
- "zram: recompression cleanups and tweaks" (Sergey Senozhatsky)
A somewhat random mix of fixups, recompression cleanups and
improvements in the zram code
- "mm/damon: support multiple goal-based quota tuning algorithms"
(SeongJae Park)
Extend DAMOS quotas goal auto-tuning to support multiple tuning
algorithms that users can select
- "mm: thp: reduce unnecessary start_stop_khugepaged()" (Breno Leitao)
Fix the khugepaged sysfs handling so we no longer spam the logs with
reams of junk when starting/stopping khugepaged
- "mm: improve map count checks" (Lorenzo Stoakes)
Provide some cleanups and slight fixes in the mremap, mmap and vma
code
- "mm/damon: support addr_unit on default monitoring targets for
modules" (SeongJae Park)
Extend the use of DAMON core's addr_unit tunable
- "mm: khugepaged cleanups and mTHP prerequisites" (Nico Pache)
Cleanups to khugepaged, which also serve as a base for Nico's planned khugepaged
mTHP support
- "mm: memory hot(un)plug and SPARSEMEM cleanups" (David Hildenbrand)
Code movement and cleanups in the memhotplug and sparsemem code
- "mm: remove CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE and cleanup
CONFIG_MIGRATION" (David Hildenbrand)
Rationalize some memhotplug Kconfig support
- "change young flag check functions to return bool" (Baolin Wang)
Cleanups to change all young flag check functions to return bool
- "mm/damon/sysfs: fix memory leak and NULL dereference issues" (Josh
Law and SeongJae Park)
Fix a few potential DAMON bugs
- "mm/vma: convert vm_flags_t to vma_flags_t in vma code" (Lorenzo
Stoakes)
Convert a lot of the existing use of the legacy vm_flags_t data type
to the new vma_flags_t type which replaces it. Mainly in the vma
code.
- "mm: expand mmap_prepare functionality and usage" (Lorenzo Stoakes)
Expand the mmap_prepare functionality, which is intended to replace
the deprecated f_op->mmap hook which has been the source of bugs and
security issues for some time. Cleanups, documentation, extension of
mmap_prepare into filesystem drivers
- "mm/huge_memory: refactor zap_huge_pmd()" (Lorenzo Stoakes)
Simplify and clean up zap_huge_pmd(). Additional cleanups around
vm_normal_folio_pmd() and the softleaf functionality are performed.
* tag 'mm-stable-2026-04-13-21-45' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (369 commits)
mm: fix deferred split queue races during migration
mm/khugepaged: fix issue with tracking lock
mm/huge_memory: add and use has_deposited_pgtable()
mm/huge_memory: add and use normal_or_softleaf_folio_pmd()
mm: add softleaf_is_valid_pmd_entry(), pmd_to_softleaf_folio()
mm/huge_memory: separate out the folio part of zap_huge_pmd()
mm/huge_memory: use mm instead of tlb->mm
mm/huge_memory: remove unnecessary sanity checks
mm/huge_memory: deduplicate zap deposited table call
mm/huge_memory: remove unnecessary VM_BUG_ON_PAGE()
mm/huge_memory: add a common exit path to zap_huge_pmd()
mm/huge_memory: handle buggy PMD entry in zap_huge_pmd()
mm/huge_memory: have zap_huge_pmd return a boolean, add kdoc
mm/huge: avoid big else branch in zap_huge_pmd()
mm/huge_memory: simplify vma_is_specal_huge()
mm: on remap assert that input range within the proposed VMA
mm: add mmap_action_map_kernel_pages[_full]()
uio: replace deprecated mmap hook with mmap_prepare in uio_info
drivers: hv: vmbus: replace deprecated mmap hook with mmap_prepare
mm: allow handling of stacked mmap_prepare hooks in more drivers
...
Diffstat (limited to 'drivers')
33 files changed, 317 insertions, 279 deletions
diff --git a/drivers/android/binder/page_range.rs b/drivers/android/binder/page_range.rs index 60e20fcf7c94..e54a90e62402 100644 --- a/drivers/android/binder/page_range.rs +++ b/drivers/android/binder/page_range.rs @@ -132,7 +132,7 @@ pub(crate) struct ShrinkablePageRange { pid: Pid, /// The mm for the relevant process. mm: ARef<Mm>, - /// Used to synchronize calls to `vm_insert_page` and `zap_page_range_single`. + /// Used to synchronize calls to `vm_insert_page` and `zap_vma_range`. #[pin] mm_lock: Mutex<()>, /// Spinlock protecting changes to pages. @@ -764,7 +764,7 @@ unsafe extern "C" fn rust_shrink_free_page( if let Some(unchecked_vma) = mmap_read.vma_lookup(vma_addr) { if let Some(vma) = check_vma(unchecked_vma, range_ptr) { let user_page_addr = vma_addr + (page_index << PAGE_SHIFT); - vma.zap_page_range_single(user_page_addr, PAGE_SIZE); + vma.zap_vma_range(user_page_addr, PAGE_SIZE); } } diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c index 241f16a9b63d..e4488ad86a65 100644 --- a/drivers/android/binder_alloc.c +++ b/drivers/android/binder_alloc.c @@ -1185,7 +1185,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item, if (vma) { trace_binder_unmap_user_start(alloc, index); - zap_page_range_single(vma, page_addr, PAGE_SIZE, NULL); + zap_vma_range(vma, page_addr, PAGE_SIZE); trace_binder_unmap_user_end(alloc, index); } diff --git a/drivers/base/memory.c b/drivers/base/memory.c index 0d6ccc7cdf05..f806a683b767 100644 --- a/drivers/base/memory.c +++ b/drivers/base/memory.c @@ -452,7 +452,7 @@ static ssize_t phys_device_show(struct device *dev, static int print_allowed_zone(char *buf, int len, int nid, struct memory_group *group, unsigned long start_pfn, unsigned long nr_pages, - int online_type, struct zone *default_zone) + enum mmop online_type, struct zone *default_zone) { struct zone *zone; diff --git a/drivers/block/zram/backend_lz4.c b/drivers/block/zram/backend_lz4.c index 04e186614760..c449d511ba86 100644 --- 
a/drivers/block/zram/backend_lz4.c +++ b/drivers/block/zram/backend_lz4.c @@ -14,13 +14,38 @@ struct lz4_ctx { static void lz4_release_params(struct zcomp_params *params) { + LZ4_stream_t *dict_stream = params->drv_data; + + params->drv_data = NULL; + if (!dict_stream) + return; + + kfree(dict_stream); } static int lz4_setup_params(struct zcomp_params *params) { + LZ4_stream_t *dict_stream; + int ret; + if (params->level == ZCOMP_PARAM_NOT_SET) params->level = LZ4_ACCELERATION_DEFAULT; + if (!params->dict || !params->dict_sz) + return 0; + + dict_stream = kzalloc_obj(*dict_stream, GFP_KERNEL); + if (!dict_stream) + return -ENOMEM; + + ret = LZ4_loadDict(dict_stream, + params->dict, params->dict_sz); + if (ret != params->dict_sz) { + kfree(dict_stream); + return -EINVAL; + } + params->drv_data = dict_stream; + return 0; } @@ -79,9 +104,7 @@ static int lz4_compress(struct zcomp_params *params, struct zcomp_ctx *ctx, zctx->mem); } else { /* Cstrm needs to be reset */ - ret = LZ4_loadDict(zctx->cstrm, params->dict, params->dict_sz); - if (ret != params->dict_sz) - return -EINVAL; + memcpy(zctx->cstrm, params->drv_data, sizeof(*zctx->cstrm)); ret = LZ4_compress_fast_continue(zctx->cstrm, req->src, req->dst, req->src_len, req->dst_len, params->level); diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c index a771a8ecc540..974c4691887e 100644 --- a/drivers/block/zram/zcomp.c +++ b/drivers/block/zram/zcomp.c @@ -84,9 +84,14 @@ static const struct zcomp_ops *lookup_backend_ops(const char *comp) return backends[i]; } -bool zcomp_available_algorithm(const char *comp) +const char *zcomp_lookup_backend_name(const char *comp) { - return lookup_backend_ops(comp) != NULL; + const struct zcomp_ops *backend = lookup_backend_ops(comp); + + if (backend) + return backend->name; + + return NULL; } /* show available compressors */ diff --git a/drivers/block/zram/zcomp.h b/drivers/block/zram/zcomp.h index eacfd3f7d61d..81a0f3f6ff48 100644 --- a/drivers/block/zram/zcomp.h 
+++ b/drivers/block/zram/zcomp.h @@ -80,7 +80,7 @@ struct zcomp { int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node); int zcomp_cpu_dead(unsigned int cpu, struct hlist_node *node); ssize_t zcomp_available_show(const char *comp, char *buf, ssize_t at); -bool zcomp_available_algorithm(const char *comp); +const char *zcomp_lookup_backend_name(const char *comp); struct zcomp *zcomp_create(const char *alg, struct zcomp_params *params); void zcomp_destroy(struct zcomp *comp); diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c index af679375b193..c2afd1c34f4a 100644 --- a/drivers/block/zram/zram_drv.c +++ b/drivers/block/zram/zram_drv.c @@ -1196,9 +1196,9 @@ static int parse_mode(char *val, u32 *mode) return 0; } -static int scan_slots_for_writeback(struct zram *zram, u32 mode, - unsigned long lo, unsigned long hi, - struct zram_pp_ctl *ctl) +static void scan_slots_for_writeback(struct zram *zram, u32 mode, + unsigned long lo, unsigned long hi, + struct zram_pp_ctl *ctl) { u32 index = lo; @@ -1230,8 +1230,6 @@ next: break; index++; } - - return 0; } static ssize_t writeback_store(struct device *dev, @@ -1429,21 +1427,21 @@ static void zram_async_read_endio(struct bio *bio) queue_work(system_highpri_wq, &req->work); } -static void read_from_bdev_async(struct zram *zram, struct page *page, - u32 index, unsigned long blk_idx, - struct bio *parent) +static int read_from_bdev_async(struct zram *zram, struct page *page, + u32 index, unsigned long blk_idx, + struct bio *parent) { struct zram_rb_req *req; struct bio *bio; req = kmalloc_obj(*req, GFP_NOIO); if (!req) - return; + return -ENOMEM; bio = bio_alloc(zram->bdev, 1, parent->bi_opf, GFP_NOIO); if (!bio) { kfree(req); - return; + return -ENOMEM; } req->zram = zram; @@ -1459,6 +1457,8 @@ static void read_from_bdev_async(struct zram *zram, struct page *page, __bio_add_page(bio, page, PAGE_SIZE, 0); bio_inc_remaining(parent); submit_bio(bio); + + return 0; } static void 
zram_sync_read(struct work_struct *w) @@ -1507,8 +1507,7 @@ static int read_from_bdev(struct zram *zram, struct page *page, u32 index, return -EIO; return read_from_bdev_sync(zram, page, index, blk_idx); } - read_from_bdev_async(zram, page, index, blk_idx, parent); - return 0; + return read_from_bdev_async(zram, page, index, blk_idx, parent); } #else static inline void reset_bdev(struct zram *zram) {}; @@ -1619,45 +1618,62 @@ static void zram_debugfs_register(struct zram *zram) {}; static void zram_debugfs_unregister(struct zram *zram) {}; #endif -static void comp_algorithm_set(struct zram *zram, u32 prio, const char *alg) +/* Only algo parameter given, lookup by algo name */ +static int lookup_algo_priority(struct zram *zram, const char *algo, + u32 min_prio) { - /* Do not free statically defined compression algorithms */ - if (zram->comp_algs[prio] != default_compressor) - kfree(zram->comp_algs[prio]); + s32 prio; + + for (prio = min_prio; prio < ZRAM_MAX_COMPS; prio++) { + if (!zram->comp_algs[prio]) + continue; + if (!strcmp(zram->comp_algs[prio], algo)) + return prio; + } + + return -EINVAL; +} + +/* Both algo and priority parameters given, validate them */ +static int validate_algo_priority(struct zram *zram, const char *algo, u32 prio) +{ + if (prio >= ZRAM_MAX_COMPS) + return -EINVAL; + /* No algo at given priority */ + if (!zram->comp_algs[prio]) + return -EINVAL; + /* A different algo at given priority */ + if (strcmp(zram->comp_algs[prio], algo)) + return -EINVAL; + return 0; +} + +static void comp_algorithm_set(struct zram *zram, u32 prio, const char *alg) +{ zram->comp_algs[prio] = alg; } static int __comp_algorithm_store(struct zram *zram, u32 prio, const char *buf) { - char *compressor; + const char *alg; size_t sz; sz = strlen(buf); if (sz >= ZRAM_MAX_ALGO_NAME_SZ) return -E2BIG; - compressor = kstrdup(buf, GFP_KERNEL); - if (!compressor) - return -ENOMEM; - - /* ignore trailing newline */ - if (sz > 0 && compressor[sz - 1] == '\n') - compressor[sz 
- 1] = 0x00; - - if (!zcomp_available_algorithm(compressor)) { - kfree(compressor); + alg = zcomp_lookup_backend_name(buf); + if (!alg) return -EINVAL; - } guard(rwsem_write)(&zram->dev_lock); if (init_done(zram)) { - kfree(compressor); pr_info("Can't change algorithm for initialized device\n"); return -EBUSY; } - comp_algorithm_set(zram, prio, compressor); + comp_algorithm_set(zram, prio, alg); return 0; } @@ -1705,6 +1721,7 @@ static ssize_t algorithm_params_store(struct device *dev, char *args, *param, *val, *algo = NULL, *dict_path = NULL; struct deflate_params deflate_params; struct zram *zram = dev_to_zram(dev); + bool prio_param = false; int ret; deflate_params.winbits = ZCOMP_PARAM_NOT_SET; @@ -1717,6 +1734,7 @@ static ssize_t algorithm_params_store(struct device *dev, return -EINVAL; if (!strcmp(param, "priority")) { + prio_param = true; ret = kstrtoint(val, 10, &prio); if (ret) return ret; @@ -1748,24 +1766,26 @@ static ssize_t algorithm_params_store(struct device *dev, } } - /* Lookup priority by algorithm name */ - if (algo) { - s32 p; + guard(rwsem_write)(&zram->dev_lock); + if (init_done(zram)) + return -EBUSY; - prio = -EINVAL; - for (p = ZRAM_PRIMARY_COMP; p < ZRAM_MAX_COMPS; p++) { - if (!zram->comp_algs[p]) - continue; + if (prio_param) { + if (prio < ZRAM_PRIMARY_COMP || prio >= ZRAM_MAX_COMPS) + return -EINVAL; + } - if (!strcmp(zram->comp_algs[p], algo)) { - prio = p; - break; - } - } + if (algo && prio_param) { + ret = validate_algo_priority(zram, algo, prio); + if (ret) + return ret; } - if (prio < ZRAM_PRIMARY_COMP || prio >= ZRAM_MAX_COMPS) - return -EINVAL; + if (algo && !prio_param) { + prio = lookup_algo_priority(zram, algo, ZRAM_PRIMARY_COMP); + if (prio < 0) + return -EINVAL; + } ret = comp_params_store(zram, prio, level, dict_path, &deflate_params); return ret ? 
ret : len; @@ -2334,8 +2354,20 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, #define RECOMPRESS_IDLE (1 << 0) #define RECOMPRESS_HUGE (1 << 1) -static int scan_slots_for_recompress(struct zram *zram, u32 mode, u32 prio_max, - struct zram_pp_ctl *ctl) +static bool highest_priority_algorithm(struct zram *zram, u32 prio) +{ + u32 p; + + for (p = prio + 1; p < ZRAM_MAX_COMPS; p++) { + if (zram->comp_algs[p]) + return false; + } + + return true; +} + +static void scan_slots_for_recompress(struct zram *zram, u32 mode, u32 prio, + struct zram_pp_ctl *ctl) { unsigned long nr_pages = zram->disksize >> PAGE_SHIFT; unsigned long index; @@ -2360,8 +2392,8 @@ static int scan_slots_for_recompress(struct zram *zram, u32 mode, u32 prio_max, test_slot_flag(zram, index, ZRAM_INCOMPRESSIBLE)) goto next; - /* Already compressed with same of higher priority */ - if (get_slot_comp_priority(zram, index) + 1 >= prio_max) + /* Already compressed with same or higher priority */ + if (get_slot_comp_priority(zram, index) >= prio) goto next; ok = place_pp_slot(zram, ctl, index); @@ -2370,8 +2402,6 @@ next: if (!ok) break; } - - return 0; } /* @@ -2382,8 +2412,7 @@ next: * Corresponding ZRAM slot should be locked. 
*/ static int recompress_slot(struct zram *zram, u32 index, struct page *page, - u64 *num_recomp_pages, u32 threshold, u32 prio, - u32 prio_max) + u64 *num_recomp_pages, u32 threshold, u32 prio) { struct zcomp_strm *zstrm = NULL; unsigned long handle_old; @@ -2417,51 +2446,10 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page, */ clear_slot_flag(zram, index, ZRAM_IDLE); - class_index_old = zs_lookup_class_index(zram->mem_pool, comp_len_old); - - prio = max(prio, get_slot_comp_priority(zram, index) + 1); - /* - * Recompression slots scan should not select slots that are - * already compressed with a higher priority algorithm, but - * just in case - */ - if (prio >= prio_max) - return 0; - - /* - * Iterate the secondary comp algorithms list (in order of priority) - * and try to recompress the page. - */ - for (; prio < prio_max; prio++) { - if (!zram->comps[prio]) - continue; - - zstrm = zcomp_stream_get(zram->comps[prio]); - src = kmap_local_page(page); - ret = zcomp_compress(zram->comps[prio], zstrm, - src, &comp_len_new); - kunmap_local(src); - - if (ret) { - zcomp_stream_put(zstrm); - zstrm = NULL; - break; - } - - class_index_new = zs_lookup_class_index(zram->mem_pool, - comp_len_new); - - /* Continue until we make progress */ - if (class_index_new >= class_index_old || - (threshold && comp_len_new >= threshold)) { - zcomp_stream_put(zstrm); - zstrm = NULL; - continue; - } - - /* Recompression was successful so break out */ - break; - } + zstrm = zcomp_stream_get(zram->comps[prio]); + src = kmap_local_page(page); + ret = zcomp_compress(zram->comps[prio], zstrm, src, &comp_len_new); + kunmap_local(src); /* * Decrement the limit (if set) on pages we can recompress, even @@ -2472,21 +2460,27 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page, if (*num_recomp_pages) *num_recomp_pages -= 1; - /* Compression error */ - if (ret) + if (ret) { + zcomp_stream_put(zstrm); return ret; + } + + class_index_old = 
zs_lookup_class_index(zram->mem_pool, comp_len_old); + class_index_new = zs_lookup_class_index(zram->mem_pool, comp_len_new); + + if (class_index_new >= class_index_old || + (threshold && comp_len_new >= threshold)) { + zcomp_stream_put(zstrm); - if (!zstrm) { /* * Secondary algorithms failed to re-compress the page * in a way that would save memory. * - * Mark the object incompressible if the max-priority - * algorithm couldn't re-compress it. + * Mark the object incompressible if the max-priority (the + * last configured one) algorithm couldn't re-compress it. */ - if (prio < zram->num_active_comps) - return 0; - set_slot_flag(zram, index, ZRAM_INCOMPRESSIBLE); + if (highest_priority_algorithm(zram, prio)) + set_slot_flag(zram, index, ZRAM_INCOMPRESSIBLE); return 0; } @@ -2531,15 +2525,13 @@ static ssize_t recompress_store(struct device *dev, char *args, *param, *val, *algo = NULL; u64 num_recomp_pages = ULLONG_MAX; struct zram_pp_ctl *ctl = NULL; - struct zram_pp_slot *pps; + s32 prio = ZRAM_SECONDARY_COMP; u32 mode = 0, threshold = 0; - u32 prio, prio_max; + struct zram_pp_slot *pps; struct page *page = NULL; + bool prio_param = false; ssize_t ret; - prio = ZRAM_SECONDARY_COMP; - prio_max = zram->num_active_comps; - args = skip_spaces(buf); while (*args) { args = next_arg(args, ¶m, &val); @@ -2585,14 +2577,10 @@ static ssize_t recompress_store(struct device *dev, } if (!strcmp(param, "priority")) { - ret = kstrtouint(val, 10, &prio); + prio_param = true; + ret = kstrtoint(val, 10, &prio); if (ret) return ret; - - if (prio == ZRAM_PRIMARY_COMP) - prio = ZRAM_SECONDARY_COMP; - - prio_max = prio + 1; continue; } } @@ -2604,32 +2592,26 @@ static ssize_t recompress_store(struct device *dev, if (!init_done(zram)) return -EINVAL; - if (algo) { - bool found = false; - - for (; prio < ZRAM_MAX_COMPS; prio++) { - if (!zram->comp_algs[prio]) - continue; - - if (!strcmp(zram->comp_algs[prio], algo)) { - prio_max = prio + 1; - found = true; - break; - } - } + if 
(prio_param) { + if (prio < ZRAM_SECONDARY_COMP || prio >= ZRAM_MAX_COMPS) + return -EINVAL; + } - if (!found) { - ret = -EINVAL; - goto out; - } + if (algo && prio_param) { + ret = validate_algo_priority(zram, algo, prio); + if (ret) + return ret; } - prio_max = min(prio_max, (u32)zram->num_active_comps); - if (prio >= prio_max) { - ret = -EINVAL; - goto out; + if (algo && !prio_param) { + prio = lookup_algo_priority(zram, algo, ZRAM_SECONDARY_COMP); + if (prio < 0) + return -EINVAL; } + if (!zram->comps[prio]) + return -EINVAL; + page = alloc_page(GFP_KERNEL); if (!page) { ret = -ENOMEM; @@ -2642,7 +2624,7 @@ static ssize_t recompress_store(struct device *dev, goto out; } - scan_slots_for_recompress(zram, mode, prio_max, ctl); + scan_slots_for_recompress(zram, mode, prio, ctl); ret = len; while ((pps = select_pp_slot(ctl))) { @@ -2656,8 +2638,7 @@ static ssize_t recompress_store(struct device *dev, goto next; err = recompress_slot(zram, pps->index, page, - &num_recomp_pages, threshold, - prio, prio_max); + &num_recomp_pages, threshold, prio); next: slot_unlock(zram, pps->index); release_pp_slot(zram, pps); @@ -2837,15 +2818,10 @@ static void zram_destroy_comps(struct zram *zram) if (!comp) continue; zcomp_destroy(comp); - zram->num_active_comps--; } - for (prio = ZRAM_PRIMARY_COMP; prio < ZRAM_MAX_COMPS; prio++) { - /* Do not free statically defined compression algorithms */ - if (zram->comp_algs[prio] != default_compressor) - kfree(zram->comp_algs[prio]); + for (prio = ZRAM_PRIMARY_COMP; prio < ZRAM_MAX_COMPS; prio++) zram->comp_algs[prio] = NULL; - } zram_comp_params_reset(zram); } @@ -2906,7 +2882,6 @@ static ssize_t disksize_store(struct device *dev, struct device_attribute *attr, } zram->comps[prio] = comp; - zram->num_active_comps++; } zram->disksize = disksize; set_capacity_and_notify(zram->disk, zram->disksize >> SECTOR_SHIFT); diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h index f0de8f8218f5..08d1774c15db 100644 --- 
a/drivers/block/zram/zram_drv.h +++ b/drivers/block/zram/zram_drv.h @@ -125,7 +125,6 @@ struct zram { */ u64 disksize; /* bytes */ const char *comp_algs[ZRAM_MAX_COMPS]; - s8 num_active_comps; /* * zram is claimed so open request will be failed */ diff --git a/drivers/char/hpet.c b/drivers/char/hpet.c index 60dd09a56f50..8f128cc40147 100644 --- a/drivers/char/hpet.c +++ b/drivers/char/hpet.c @@ -354,8 +354,9 @@ static __init int hpet_mmap_enable(char *str) } __setup("hpet_mmap=", hpet_mmap_enable); -static int hpet_mmap(struct file *file, struct vm_area_struct *vma) +static int hpet_mmap_prepare(struct vm_area_desc *desc) { + struct file *file = desc->file; struct hpet_dev *devp; unsigned long addr; @@ -368,11 +369,12 @@ static int hpet_mmap(struct file *file, struct vm_area_struct *vma) if (addr & (PAGE_SIZE - 1)) return -ENOSYS; - vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); - return vm_iomap_memory(vma, addr, PAGE_SIZE); + desc->page_prot = pgprot_noncached(desc->page_prot); + mmap_action_simple_ioremap(desc, addr, PAGE_SIZE); + return 0; } #else -static int hpet_mmap(struct file *file, struct vm_area_struct *vma) +static int hpet_mmap_prepare(struct vm_area_desc *desc) { return -ENOSYS; } @@ -710,7 +712,7 @@ static const struct file_operations hpet_fops = { .open = hpet_open, .release = hpet_release, .fasync = hpet_fasync, - .mmap = hpet_mmap, + .mmap_prepare = hpet_mmap_prepare, }; static int hpet_is_known(struct hpet_data *hdp) diff --git a/drivers/char/mem.c b/drivers/char/mem.c index cca4529431f8..5fd421e48c04 100644 --- a/drivers/char/mem.c +++ b/drivers/char/mem.c @@ -520,7 +520,7 @@ static int mmap_zero_prepare(struct vm_area_desc *desc) #ifndef CONFIG_MMU return -ENOSYS; #endif - if (vma_desc_test_flags(desc, VMA_SHARED_BIT)) + if (vma_desc_test(desc, VMA_SHARED_BIT)) return shmem_zero_setup_desc(desc); desc->action.success_hook = mmap_zero_private_success; diff --git a/drivers/comedi/comedi_fops.c b/drivers/comedi/comedi_fops.c index 
0df9f4636fb6..c09bbe04be6c 100644 --- a/drivers/comedi/comedi_fops.c +++ b/drivers/comedi/comedi_fops.c @@ -2590,7 +2590,7 @@ static int comedi_mmap(struct file *file, struct vm_area_struct *vma) * remap_pfn_range() because we call remap_pfn_range() in a loop. */ if (retval) - zap_vma_ptes(vma, vma->vm_start, size); + zap_special_vma_range(vma, vma->vm_start, size); #endif if (retval == 0) { diff --git a/drivers/dax/device.c b/drivers/dax/device.c index 528e81240c4d..381021c2e031 100644 --- a/drivers/dax/device.c +++ b/drivers/dax/device.c @@ -24,7 +24,7 @@ static int __check_vma(struct dev_dax *dev_dax, vma_flags_t flags, return -ENXIO; /* prevent private mappings from being established */ - if (!vma_flags_test(&flags, VMA_MAYSHARE_BIT)) { + if (!vma_flags_test_any(&flags, VMA_MAYSHARE_BIT)) { dev_info_ratelimited(dev, "%s: %s: fail, attempted private mapping\n", current->comm, func); diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index 0377a5fd402d..d6424267260b 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -38,7 +38,7 @@ #include <linux/mman.h> #include <linux/module.h> #include <linux/pagemap.h> -#include <linux/pagevec.h> +#include <linux/folio_batch.h> #include <linux/sched/mm.h> #include <linux/shmem_fs.h> #include <linux/slab.h> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index 720a9ad39aa2..06543ae60706 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -3,7 +3,7 @@ * Copyright © 2014-2016 Intel Corporation */ -#include <linux/pagevec.h> +#include <linux/folio_batch.h> #include <linux/shmem_fs.h> #include <linux/swap.h> #include <linux/uio.h> diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h index 9d3a3ad567a0..b54ee4f25af1 100644 --- a/drivers/gpu/drm/i915/gt/intel_gtt.h +++ b/drivers/gpu/drm/i915/gt/intel_gtt.h @@ -19,7 +19,7 @@ #include <linux/io-mapping.h> 
 #include <linux/kref.h>
 #include <linux/mm.h>
-#include <linux/pagevec.h>
+#include <linux/folio_batch.h>
 #include <linux/scatterlist.h>
 #include <linux/workqueue.h>
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 0469c4467f2b..5353a9c66694 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -31,7 +31,7 @@
 #include <linux/debugfs.h>
 #include <linux/highmem.h>
 #include <linux/nmi.h>
-#include <linux/pagevec.h>
+#include <linux/folio_batch.h>
 #include <linux/scatterlist.h>
 #include <linux/string_helpers.h>
 #include <linux/utsname.h>
diff --git a/drivers/gpu/drm/i915/i915_mm.c b/drivers/gpu/drm/i915/i915_mm.c
index c33bd3d83069..fd89e7c7d8d6 100644
--- a/drivers/gpu/drm/i915/i915_mm.c
+++ b/drivers/gpu/drm/i915/i915_mm.c
@@ -108,7 +108,7 @@ int remap_io_mapping(struct vm_area_struct *vma,
 	err = apply_to_page_range(r.mm, addr, size, remap_pfn, &r);
 	if (unlikely(err)) {
-		zap_vma_ptes(vma, addr, (r.pfn - pfn) << PAGE_SHIFT);
+		zap_special_vma_range(vma, addr, (r.pfn - pfn) << PAGE_SHIFT);
 		return err;
 	}
 
@@ -156,7 +156,7 @@ int remap_io_sg(struct vm_area_struct *vma,
 	err = apply_to_page_range(r.mm, addr, size, remap_sg, &r);
 	if (unlikely(err)) {
-		zap_vma_ptes(vma, addr, r.pfn << PAGE_SHIFT);
+		zap_special_vma_range(vma, addr, r.pfn << PAGE_SHIFT);
 		return err;
 	}
 
diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
index a848400a59a2..9a55f5c43307 100644
--- a/drivers/hv/hv_balloon.c
+++ b/drivers/hv/hv_balloon.c
@@ -1663,7 +1663,7 @@ static void enable_page_reporting(void)
 	 * We let the page_reporting_order parameter decide the order
 	 * in the page_reporting code
 	 */
-	dm_device.pr_dev_info.order = 0;
+	dm_device.pr_dev_info.order = PAGE_REPORTING_ORDER_UNSPECIFIED;
 	ret = page_reporting_register(&dm_device.pr_dev_info);
 	if (ret < 0) {
 		dm_device.pr_dev_info.report = NULL;
diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
index 7bd8f8486e85..31f576464f18 100644
--- a/drivers/hv/hyperv_vmbus.h
+++ b/drivers/hv/hyperv_vmbus.h
@@ -545,8 +545,8 @@ static inline int hv_debug_add_dev_dir(struct hv_device *dev)
 
 /* Create and remove sysfs entry for memory mapped ring buffers for a channel */
 int hv_create_ring_sysfs(struct vmbus_channel *channel,
-			 int (*hv_mmap_ring_buffer)(struct vmbus_channel *channel,
-						    struct vm_area_struct *vma));
+			 int (*hv_mmap_prepare_ring_buffer)(struct vmbus_channel *channel,
+							    struct vm_area_desc *desc));
 int hv_remove_ring_sysfs(struct vmbus_channel *channel);
 
 #endif /* _HYPERV_VMBUS_H */
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index e7ac79e2fb49..f0d0803d1e16 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -1948,12 +1948,19 @@ static int hv_mmap_ring_buffer_wrapper(struct file *filp, struct kobject *kobj,
 					struct vm_area_struct *vma)
 {
 	struct vmbus_channel *channel = container_of(kobj, struct vmbus_channel, kobj);
+	struct vm_area_desc desc;
+	int err;
 
 	/*
-	 * hv_(create|remove)_ring_sysfs implementation ensures that mmap_ring_buffer
-	 * is not NULL.
+	 * hv_(create|remove)_ring_sysfs implementation ensures that
+	 * mmap_prepare_ring_buffer is not NULL.
 	 */
-	return channel->mmap_ring_buffer(channel, vma);
+	compat_set_desc_from_vma(&desc, filp, vma);
+	err = channel->mmap_prepare_ring_buffer(channel, &desc);
+	if (err)
+		return err;
+
+	return __compat_vma_mmap(&desc, vma);
 }
 
 static struct bin_attribute chan_attr_ring_buffer = {
@@ -2045,13 +2052,13 @@ static const struct kobj_type vmbus_chan_ktype = {
 /**
  * hv_create_ring_sysfs() - create "ring" sysfs entry corresponding to ring buffers for a channel.
  * @channel: Pointer to vmbus_channel structure
- * @hv_mmap_ring_buffer: function pointer for initializing the function to be called on mmap of
+ * @hv_mmap_prepare_ring_buffer: function pointer for initializing the function to be called on mmap
  *		channel's "ring" sysfs node, which is for the ring buffer of that channel.
  *		Function pointer is of below type:
- *		int (*hv_mmap_ring_buffer)(struct vmbus_channel *channel,
- *					   struct vm_area_struct *vma))
- *		This has a pointer to the channel and a pointer to vm_area_struct,
- *		used for mmap, as arguments.
+ *		int (*hv_mmap_prepare_ring_buffer)(struct vmbus_channel *channel,
+ *						   struct vm_area_desc *desc))
+ *		This has a pointer to the channel and a pointer to vm_area_desc,
+ *		used for mmap_prepare, as arguments.
  *
  * Sysfs node for ring buffer of a channel is created along with other fields, however its
  * visibility is disabled by default. Sysfs creation needs to be controlled when the use-case
@@ -2068,12 +2075,12 @@ static const struct kobj_type vmbus_chan_ktype = {
  * Returns 0 on success or error code on failure.
  */
 int hv_create_ring_sysfs(struct vmbus_channel *channel,
-			 int (*hv_mmap_ring_buffer)(struct vmbus_channel *channel,
-						    struct vm_area_struct *vma))
+			 int (*hv_mmap_prepare_ring_buffer)(struct vmbus_channel *channel,
+							    struct vm_area_desc *desc))
 {
 	struct kobject *kobj = &channel->kobj;
 
-	channel->mmap_ring_buffer = hv_mmap_ring_buffer;
+	channel->mmap_prepare_ring_buffer = hv_mmap_prepare_ring_buffer;
 	channel->ring_sysfs_visible = true;
 
 	return sysfs_update_group(kobj, &vmbus_chan_group);
@@ -2095,7 +2102,7 @@ int hv_remove_ring_sysfs(struct vmbus_channel *channel)
 	channel->ring_sysfs_visible = false;
 	ret = sysfs_update_group(kobj, &vmbus_chan_group);
 
-	channel->mmap_ring_buffer = NULL;
+	channel->mmap_prepare_ring_buffer = NULL;
 	return ret;
 }
 EXPORT_SYMBOL_GPL(hv_remove_ring_sysfs);
diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c
index 37584e786bb5..f48c6a8a0654 100644
--- a/drivers/hwtracing/stm/core.c
+++ b/drivers/hwtracing/stm/core.c
@@ -666,6 +666,16 @@ static ssize_t stm_char_write(struct file *file, const char __user *buf,
 	return count;
 }
 
+static int stm_mmap_mapped(unsigned long start, unsigned long end, pgoff_t pgoff,
+			   const struct file *file, void **vm_private_data)
+{
+	struct stm_file *stmf = file->private_data;
+	struct stm_device *stm = stmf->stm;
+
+	pm_runtime_get_sync(&stm->dev);
+	return 0;
+}
+
 static void stm_mmap_open(struct vm_area_struct *vma)
 {
 	struct stm_file *stmf = vma->vm_file->private_data;
@@ -684,12 +694,14 @@ static void stm_mmap_close(struct vm_area_struct *vma)
 }
 
 static const struct vm_operations_struct stm_mmap_vmops = {
+	.mapped = stm_mmap_mapped,
 	.open = stm_mmap_open,
 	.close = stm_mmap_close,
 };
 
-static int stm_char_mmap(struct file *file, struct vm_area_struct *vma)
+static int stm_char_mmap_prepare(struct vm_area_desc *desc)
 {
+	struct file *file = desc->file;
 	struct stm_file *stmf = file->private_data;
 	struct stm_device *stm = stmf->stm;
 	unsigned long size, phys;
@@ -697,10 +709,10 @@ static int stm_char_mmap(struct file *file, struct vm_area_struct *vma)
 	if (!stm->data->mmio_addr)
 		return -EOPNOTSUPP;
 
-	if (vma->vm_pgoff)
+	if (desc->pgoff)
 		return -EINVAL;
 
-	size = vma->vm_end - vma->vm_start;
+	size = vma_desc_size(desc);
 
 	if (stmf->output.nr_chans * stm->data->sw_mmiosz != size)
 		return -EINVAL;
@@ -712,13 +724,12 @@ static int stm_char_mmap(struct file *file, struct vm_area_struct *vma)
 	if (!phys)
 		return -EINVAL;
 
-	pm_runtime_get_sync(&stm->dev);
-
-	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
-	vm_flags_set(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &stm_mmap_vmops;
-	vm_iomap_memory(vma, phys, size);
+	desc->page_prot = pgprot_noncached(desc->page_prot);
+	vma_desc_set_flags(desc, VMA_IO_BIT, VMA_DONTEXPAND_BIT,
+			   VMA_DONTDUMP_BIT);
+	desc->vm_ops = &stm_mmap_vmops;
+	mmap_action_simple_ioremap(desc, phys, size);
 
 	return 0;
 }
@@ -836,7 +847,7 @@ static const struct file_operations stm_fops = {
 	.open		= stm_char_open,
 	.release	= stm_char_release,
 	.write		= stm_char_write,
-	.mmap		= stm_char_mmap,
+	.mmap_prepare	= stm_char_mmap_prepare,
 	.unlocked_ioctl	= stm_char_ioctl,
 	.compat_ioctl	= compat_ptr_ioctl,
 };
diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
index 7b68967a6301..f5837da47299 100644
--- a/drivers/infiniband/core/uverbs_main.c
+++ b/drivers/infiniband/core/uverbs_main.c
@@ -756,7 +756,7 @@ out_zap:
 	 * point, so zap it.
 	 */
 	vma->vm_private_data = NULL;
-	zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
+	zap_special_vma_range(vma, vma->vm_start, vma->vm_end - vma->vm_start);
 }
 
 static void rdma_umap_close(struct vm_area_struct *vma)
@@ -782,7 +782,7 @@ static void rdma_umap_close(struct vm_area_struct *vma)
 }
 
 /*
- * Once the zap_vma_ptes has been called touches to the VMA will come here and
+ * Once the zap_special_vma_range has been called touches to the VMA will come here and
  * we return a dummy writable zero page for all the pfns.
  */
 static vm_fault_t rdma_umap_fault(struct vm_fault *vmf)
@@ -878,7 +878,7 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
 			continue;
 
 		list_del_init(&priv->list);
-		zap_vma_ptes(vma, vma->vm_start,
+		zap_special_vma_range(vma, vma->vm_start,
 			     vma->vm_end - vma->vm_start);
 
 		if (priv->entry) {
diff --git a/drivers/misc/open-dice.c b/drivers/misc/open-dice.c
index 24c29e0f00ef..45060fb4ea27 100644
--- a/drivers/misc/open-dice.c
+++ b/drivers/misc/open-dice.c
@@ -86,29 +86,32 @@ static ssize_t open_dice_write(struct file *filp, const char __user *ptr,
 /*
  * Creates a mapping of the reserved memory region in user address space.
  */
-static int open_dice_mmap(struct file *filp, struct vm_area_struct *vma)
+static int open_dice_mmap_prepare(struct vm_area_desc *desc)
 {
+	struct file *filp = desc->file;
 	struct open_dice_drvdata *drvdata = to_open_dice_drvdata(filp);
 
-	if (vma->vm_flags & VM_MAYSHARE) {
+	if (vma_desc_test(desc, VMA_MAYSHARE_BIT)) {
 		/* Do not allow userspace to modify the underlying data. */
-		if (vma->vm_flags & VM_WRITE)
+		if (vma_desc_test(desc, VMA_WRITE_BIT))
 			return -EPERM;
 
 		/* Ensure userspace cannot acquire VM_WRITE later. */
-		vm_flags_clear(vma, VM_MAYWRITE);
+		vma_desc_clear_flags(desc, VMA_MAYWRITE_BIT);
 	}
 
 	/* Create write-combine mapping so all clients observe a wipe. */
-	vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
-	vm_flags_set(vma, VM_DONTCOPY | VM_DONTDUMP);
-	return vm_iomap_memory(vma, drvdata->rmem->base, drvdata->rmem->size);
+	desc->page_prot = pgprot_writecombine(desc->page_prot);
+	vma_desc_set_flags(desc, VMA_DONTCOPY_BIT, VMA_DONTDUMP_BIT);
+	mmap_action_simple_ioremap(desc, drvdata->rmem->base,
+				   drvdata->rmem->size);
+	return 0;
 }
 
 static const struct file_operations open_dice_fops = {
 	.owner = THIS_MODULE,
 	.read = open_dice_read,
 	.write = open_dice_write,
-	.mmap = open_dice_mmap,
+	.mmap_prepare = open_dice_mmap_prepare,
 };
 
 static int __init open_dice_probe(struct platform_device *pdev)
diff --git a/drivers/misc/sgi-gru/grumain.c b/drivers/misc/sgi-gru/grumain.c
index 8d749f345246..278b76cbd281 100644
--- a/drivers/misc/sgi-gru/grumain.c
+++ b/drivers/misc/sgi-gru/grumain.c
@@ -542,7 +542,7 @@ void gru_unload_context(struct gru_thread_state *gts, int savestate)
 	int ctxnum = gts->ts_ctxnum;
 
 	if (!is_kernel_context(gts))
-		zap_vma_ptes(gts->ts_vma, UGRUADDR(gts), GRU_GSEG_PAGESIZE);
+		zap_special_vma_range(gts->ts_vma, UGRUADDR(gts), GRU_GSEG_PAGESIZE);
 	cch = get_cch(gru->gs_gru_base_vaddr, ctxnum);
 	gru_dbg(grudev, "gts %p, cbrmap 0x%lx, dsrmap 0x%lx\n",
diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
index 55a43682c567..bf01e6ac7293 100644
--- a/drivers/mtd/mtdchar.c
+++ b/drivers/mtd/mtdchar.c
@@ -1376,27 +1376,12 @@ static unsigned mtdchar_mmap_capabilities(struct file *file)
 /*
  * set up a mapping for shared memory segments
  */
-static int mtdchar_mmap(struct file *file, struct vm_area_struct *vma)
+static int mtdchar_mmap_prepare(struct vm_area_desc *desc)
 {
 #ifdef CONFIG_MMU
-	struct mtd_file_info *mfi = file->private_data;
-	struct mtd_info *mtd = mfi->mtd;
-	struct map_info *map = mtd->priv;
-
-	/* This is broken because it assumes the MTD device is map-based
-	   and that mtd->priv is a valid struct map_info. It should be
-	   replaced with something that uses the mtd_get_unmapped_area()
-	   operation properly. */
-	if (0 /*mtd->type == MTD_RAM || mtd->type == MTD_ROM*/) {
-#ifdef pgprot_noncached
-		if (file->f_flags & O_DSYNC || map->phys >= __pa(high_memory))
-			vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
-#endif
-		return vm_iomap_memory(vma, map->phys, map->size);
-	}
 	return -ENODEV;
 #else
-	return vma->vm_flags & VM_SHARED ? 0 : -EACCES;
+	return vma_desc_test(desc, VMA_SHARED_BIT) ? 0 : -EACCES;
 #endif
 }
 
@@ -1411,7 +1396,7 @@ static const struct file_operations mtd_fops = {
 #endif
 	.open		= mtdchar_open,
 	.release	= mtdchar_close,
-	.mmap		= mtdchar_mmap,
+	.mmap_prepare	= mtdchar_mmap_prepare,
 #ifndef CONFIG_MMU
 	.get_unmapped_area = mtdchar_get_unmapped_area,
 	.mmap_capabilities = mtdchar_mmap_capabilities,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index d3bab198c99c..190b8b66b3ce 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -708,7 +708,7 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
 			xdpi = mlx5e_xdpi_fifo_pop(xdpi_fifo);
 			page = xdpi.page.page;
 
-			/* No need to check page_pool_page_is_pp() as we
+			/* No need to check PageNetpp() as we
 			 * know this is a page_pool page.
 			 */
 			page_pool_recycle_direct(pp_page_to_nmdesc(page)->pp,
diff --git a/drivers/staging/vme_user/vme.c b/drivers/staging/vme_user/vme.c
index f10a00c05f12..7220aba7b919 100644
--- a/drivers/staging/vme_user/vme.c
+++ b/drivers/staging/vme_user/vme.c
@@ -735,9 +735,9 @@ unsigned int vme_master_rmw(struct vme_resource *resource, unsigned int mask,
 EXPORT_SYMBOL(vme_master_rmw);
 
 /**
- * vme_master_mmap - Mmap region of VME master window.
+ * vme_master_mmap_prepare - Mmap region of VME master window.
  * @resource: Pointer to VME master resource.
- * @vma: Pointer to definition of user mapping.
+ * @desc: Pointer to descriptor of user mapping.
  *
  * Memory map a region of the VME master window into user space.
  *
@@ -745,12 +745,13 @@ EXPORT_SYMBOL(vme_master_rmw);
  * resource or -EFAULT if map exceeds window size. Other generic mmap
  * errors may also be returned.
  */
-int vme_master_mmap(struct vme_resource *resource, struct vm_area_struct *vma)
+int vme_master_mmap_prepare(struct vme_resource *resource,
+			    struct vm_area_desc *desc)
 {
+	const unsigned long vma_size = vma_desc_size(desc);
 	struct vme_bridge *bridge = find_bridge(resource);
 	struct vme_master_resource *image;
 	phys_addr_t phys_addr;
-	unsigned long vma_size;
 
 	if (resource->type != VME_MASTER) {
 		dev_err(bridge->parent, "Not a master resource\n");
@@ -758,19 +759,18 @@ int vme_master_mmap(struct vme_resource *resource, struct vm_area_struct *vma)
 	}
 
 	image = list_entry(resource->entry, struct vme_master_resource, list);
-	phys_addr = image->bus_resource.start + (vma->vm_pgoff << PAGE_SHIFT);
-	vma_size = vma->vm_end - vma->vm_start;
+	phys_addr = image->bus_resource.start + (desc->pgoff << PAGE_SHIFT);
 
 	if (phys_addr + vma_size > image->bus_resource.end + 1) {
 		dev_err(bridge->parent, "Map size cannot exceed the window size\n");
 		return -EFAULT;
 	}
 
-	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
-
-	return vm_iomap_memory(vma, phys_addr, vma->vm_end - vma->vm_start);
+	desc->page_prot = pgprot_noncached(desc->page_prot);
+	mmap_action_simple_ioremap(desc, phys_addr, vma_size);
+	return 0;
 }
-EXPORT_SYMBOL(vme_master_mmap);
+EXPORT_SYMBOL(vme_master_mmap_prepare);
 
 /**
  * vme_master_free - Free VME master window
diff --git a/drivers/staging/vme_user/vme.h b/drivers/staging/vme_user/vme.h
index 797e9940fdd1..b6413605ea49 100644
--- a/drivers/staging/vme_user/vme.h
+++ b/drivers/staging/vme_user/vme.h
@@ -151,7 +151,7 @@ ssize_t vme_master_read(struct vme_resource *resource, void *buf, size_t count,
 ssize_t vme_master_write(struct vme_resource *resource, void *buf, size_t count,
			 loff_t offset);
 unsigned int vme_master_rmw(struct vme_resource *resource, unsigned int mask,
			    unsigned int compare, unsigned int swap, loff_t offset);
-int vme_master_mmap(struct vme_resource *resource, struct vm_area_struct *vma);
+int vme_master_mmap_prepare(struct vme_resource *resource, struct vm_area_desc *desc);
 void vme_master_free(struct vme_resource *resource);
 
 struct vme_resource *vme_dma_request(struct vme_dev *vdev, u32 route);
diff --git a/drivers/staging/vme_user/vme_user.c b/drivers/staging/vme_user/vme_user.c
index d95dd7d9190a..11e25c2f6b0a 100644
--- a/drivers/staging/vme_user/vme_user.c
+++ b/drivers/staging/vme_user/vme_user.c
@@ -446,24 +446,14 @@ static void vme_user_vm_close(struct vm_area_struct *vma)
 	kfree(vma_priv);
 }
 
-static const struct vm_operations_struct vme_user_vm_ops = {
-	.open = vme_user_vm_open,
-	.close = vme_user_vm_close,
-};
-
-static int vme_user_master_mmap(unsigned int minor, struct vm_area_struct *vma)
+static int vme_user_vm_mapped(unsigned long start, unsigned long end, pgoff_t pgoff,
+			      const struct file *file, void **vm_private_data)
 {
-	int err;
+	const unsigned int minor = iminor(file_inode(file));
 	struct vme_user_vma_priv *vma_priv;
 
 	mutex_lock(&image[minor].mutex);
 
-	err = vme_master_mmap(image[minor].resource, vma);
-	if (err) {
-		mutex_unlock(&image[minor].mutex);
-		return err;
-	}
-
 	vma_priv = kmalloc_obj(*vma_priv);
 	if (!vma_priv) {
 		mutex_unlock(&image[minor].mutex);
@@ -472,22 +462,41 @@ static int vme_user_master_mmap(unsigned int minor, struct vm_area_struct *vma)
 	vma_priv->minor = minor;
 	refcount_set(&vma_priv->refcnt, 1);
 
-	vma->vm_ops = &vme_user_vm_ops;
-	vma->vm_private_data = vma_priv;
-
+	*vm_private_data = vma_priv;
 	image[minor].mmap_count++;
 
 	mutex_unlock(&image[minor].mutex);
 
 	return 0;
 }
 
-static int vme_user_mmap(struct file *file, struct vm_area_struct *vma)
+static const struct vm_operations_struct vme_user_vm_ops = {
+	.mapped = vme_user_vm_mapped,
+	.open = vme_user_vm_open,
+	.close = vme_user_vm_close,
+};
+
+static int vme_user_master_mmap_prepare(unsigned int minor,
+					struct vm_area_desc *desc)
+{
+	int err;
+
+	mutex_lock(&image[minor].mutex);
+
+	err = vme_master_mmap_prepare(image[minor].resource, desc);
+	if (!err)
+		desc->vm_ops = &vme_user_vm_ops;
+
+	mutex_unlock(&image[minor].mutex);
+	return err;
+}
+
+static int vme_user_mmap_prepare(struct vm_area_desc *desc)
 {
-	unsigned int minor = iminor(file_inode(file));
+	const struct file *file = desc->file;
+	const unsigned int minor = iminor(file_inode(file));
 
 	if (type[minor] == MASTER_MINOR)
-		return vme_user_master_mmap(minor, vma);
+		return vme_user_master_mmap_prepare(minor, desc);
 
 	return -ENODEV;
 }
@@ -498,7 +507,7 @@ static const struct file_operations vme_user_fops = {
 	.llseek = vme_user_llseek,
 	.unlocked_ioctl = vme_user_unlocked_ioctl,
 	.compat_ioctl = compat_ptr_ioctl,
-	.mmap = vme_user_mmap,
+	.mmap_prepare = vme_user_mmap_prepare,
 };
 
 static int vme_user_match(struct vme_dev *vdev)
diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index af95531ddd35..edc2afd5f4ee 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -1860,6 +1860,17 @@ static struct page *tcmu_try_get_data_page(struct tcmu_dev *udev, uint32_t dpi)
 	return NULL;
 }
 
+static int tcmu_vma_mapped(unsigned long start, unsigned long end, pgoff_t pgoff,
+			   const struct file *file, void **vm_private_data)
+{
+	struct tcmu_dev *udev = *vm_private_data;
+
+	pr_debug("vma_mapped\n");
+
+	kref_get(&udev->kref);
+	return 0;
+}
+
 static void tcmu_vma_open(struct vm_area_struct *vma)
 {
 	struct tcmu_dev *udev = vma->vm_private_data;
@@ -1919,26 +1930,25 @@ static vm_fault_t tcmu_vma_fault(struct vm_fault *vmf)
 }
 
 static const struct vm_operations_struct tcmu_vm_ops = {
+	.mapped = tcmu_vma_mapped,
 	.open = tcmu_vma_open,
 	.close = tcmu_vma_close,
 	.fault = tcmu_vma_fault,
 };
 
-static int tcmu_mmap(struct uio_info *info, struct vm_area_struct *vma)
+static int tcmu_mmap_prepare(struct uio_info *info, struct vm_area_desc *desc)
 {
 	struct tcmu_dev *udev = container_of(info, struct tcmu_dev, uio_info);
 
-	vm_flags_set(vma, VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &tcmu_vm_ops;
+	vma_desc_set_flags(desc, VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT);
+	desc->vm_ops = &tcmu_vm_ops;
 
-	vma->vm_private_data = udev;
+	desc->private_data = udev;
 
 	/* Ensure the mmap is exactly the right size */
-	if (vma_pages(vma) != udev->mmap_pages)
+	if (vma_desc_pages(desc) != udev->mmap_pages)
 		return -EINVAL;
 
-	tcmu_vma_open(vma);
-
 	return 0;
 }
 
@@ -2253,7 +2263,7 @@ static int tcmu_configure_device(struct se_device *dev)
 	info->irqcontrol = tcmu_irqcontrol;
 
 	info->irq = UIO_IRQ_CUSTOM;
-	info->mmap = tcmu_mmap;
+	info->mmap_prepare = tcmu_mmap_prepare;
 	info->open = tcmu_open;
 	info->release = tcmu_release;
diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
index 5a4998e2caf8..1e4ade78ed84 100644
--- a/drivers/uio/uio.c
+++ b/drivers/uio/uio.c
@@ -850,8 +850,14 @@ static int uio_mmap(struct file *filep, struct vm_area_struct *vma)
 		goto out;
 	}
 
-	if (idev->info->mmap) {
-		ret = idev->info->mmap(idev->info, vma);
+	if (idev->info->mmap_prepare) {
+		struct vm_area_desc desc;
+
+		compat_set_desc_from_vma(&desc, filep, vma);
+		ret = idev->info->mmap_prepare(idev->info, &desc);
+		if (ret)
+			goto out;
+		ret = __compat_vma_mmap(&desc, vma);
 		goto out;
 	}
 
diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c
index 3f8e2e27697f..29ec2d15ada8 100644
--- a/drivers/uio/uio_hv_generic.c
+++ b/drivers/uio/uio_hv_generic.c
@@ -154,15 +154,16 @@ static void hv_uio_rescind(struct vmbus_channel *channel)
  * The ring buffer is allocated as contiguous memory by vmbus_open
  */
 static int
-hv_uio_ring_mmap(struct vmbus_channel *channel, struct vm_area_struct *vma)
+hv_uio_ring_mmap_prepare(struct vmbus_channel *channel, struct vm_area_desc *desc)
 {
 	void *ring_buffer = page_address(channel->ringbuffer_page);
 
 	if (channel->state != CHANNEL_OPENED_STATE)
 		return -ENODEV;
 
-	return vm_iomap_memory(vma, virt_to_phys(ring_buffer),
-			       channel->ringbuffer_pagecount << PAGE_SHIFT);
+	mmap_action_simple_ioremap(desc, virt_to_phys(ring_buffer),
+				   channel->ringbuffer_pagecount << PAGE_SHIFT);
+	return 0;
 }
 
 /* Callback from VMBUS subsystem when new channel created. */
@@ -183,7 +184,7 @@ hv_uio_new_channel(struct vmbus_channel *new_sc)
 	}
 
 	set_channel_read_mode(new_sc, HV_CALL_ISR);
-	ret = hv_create_ring_sysfs(new_sc, hv_uio_ring_mmap);
+	ret = hv_create_ring_sysfs(new_sc, hv_uio_ring_mmap_prepare);
 	if (ret) {
 		dev_err(device, "sysfs create ring bin file failed; %d\n", ret);
 		vmbus_close(new_sc);
@@ -366,7 +367,7 @@ hv_uio_probe(struct hv_device *dev,
 	 * or decoupled from uio_hv_generic probe. Userspace programs can make use of inotify
 	 * APIs to make sure that ring is created.
	 */
-	hv_create_ring_sysfs(channel, hv_uio_ring_mmap);
+	hv_create_ring_sysfs(channel, hv_uio_ring_mmap_prepare);
 
 	hv_set_drvdata(dev, pdata);
diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index d1fbc8fe8470..f6c2dff33f8a 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -369,13 +369,13 @@ static inline unsigned int update_balloon_vm_stats(struct virtio_balloon *vb)
 	update_stat(vb, idx++, VIRTIO_BALLOON_S_ALLOC_STALL, stall);
 
 	update_stat(vb, idx++, VIRTIO_BALLOON_S_ASYNC_SCAN,
-		    pages_to_bytes(events[PGSCAN_KSWAPD]));
+		    pages_to_bytes(global_node_page_state(PGSCAN_KSWAPD)));
 	update_stat(vb, idx++, VIRTIO_BALLOON_S_DIRECT_SCAN,
-		    pages_to_bytes(events[PGSCAN_DIRECT]));
+		    pages_to_bytes(global_node_page_state(PGSCAN_DIRECT)));
 	update_stat(vb, idx++, VIRTIO_BALLOON_S_ASYNC_RECLAIM,
-		    pages_to_bytes(events[PGSTEAL_KSWAPD]));
+		    pages_to_bytes(global_node_page_state(PGSTEAL_KSWAPD)));
 	update_stat(vb, idx++, VIRTIO_BALLOON_S_DIRECT_RECLAIM,
-		    pages_to_bytes(events[PGSTEAL_DIRECT]));
+		    pages_to_bytes(global_node_page_state(PGSTEAL_DIRECT)));
 #ifdef CONFIG_HUGETLB_PAGE
 	update_stat(vb, idx++, VIRTIO_BALLOON_S_HTLB_PGALLOC,
@@ -1022,6 +1022,8 @@ static int virtballoon_probe(struct virtio_device *vdev)
 		goto out_unregister_oom;
 	}
 
+	vb->pr_dev_info.order = PAGE_REPORTING_ORDER_UNSPECIFIED;
+
 	/*
	 * The default page reporting order is @pageblock_order, which
	 * corresponds to 512MB in size on ARM64 when 64KB base page
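A recurring detail in the hunks above is the offset/size arithmetic carried over from the old `vma->vm_pgoff`/`vma->vm_end - vma->vm_start` fields to the new `desc->pgoff`/`vma_desc_size(desc)` accessors; the window-bounds check in `vme_master_mmap_prepare()` is the clearest instance. Below is a minimal userspace sketch of just that arithmetic — `struct window` and `check_map()` are hypothetical stand-ins for illustration, not kernel API:

```c
#include <assert.h>

#define PAGE_SHIFT 12

/* Hypothetical stand-in for the bus_resource window [start, end]
 * (end is the last valid byte address, as in struct resource). */
struct window {
	unsigned long start;
	unsigned long end;
};

/* Mirrors the check in the vme_master_mmap_prepare() hunk:
 * a mapping of vma_size bytes at page offset pgoff must fit
 * entirely inside the window, else -EFAULT (-14). */
static int check_map(const struct window *w, unsigned long pgoff,
		     unsigned long vma_size)
{
	unsigned long phys_addr = w->start + (pgoff << PAGE_SHIFT);

	if (phys_addr + vma_size > w->end + 1)
		return -14; /* -EFAULT */
	return 0;
}
```

Note that the bound compares against `end + 1` because `end` is inclusive: a mapping that exactly fills the window is still accepted.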
