path: root/mm
Age | Commit message | Author | Files | Lines
2022-10-04 | swapfile: convert __try_to_reclaim_swap() to use a folio | Matthew Wilcox (Oracle) | 1 | -11/+11
Saves five calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-36-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
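Nearly all of the conversions in this series follow the same pattern: resolve the head page once with page_folio() and then stay with folio operations, instead of letting every PageFoo()/put_page() call repeat the compound_head() lookup. A rough, purely illustrative sketch (the function and logic below are hypothetical, not the actual mm/swapfile.c code):

    /* Before: each page flag test hides a compound_head() call. */
    static int reclaim_one(struct page *page)
    {
            if (PageWriteback(page) || !PageSwapCache(page))
                    return 0;
            return try_to_free_swap(page);
    }

    /* After: look up the folio once, then use folio_* helpers. */
    static int reclaim_one(struct page *page)
    {
            struct folio *folio = page_folio(page);

            if (folio_test_writeback(folio) || !folio_test_swapcache(folio))
                    return 0;
            return folio_free_swap(folio);
    }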
2022-10-04 | swapfile: convert try_to_unuse() to use a folio | Matthew Wilcox (Oracle) | 1 | -11/+11
Saves five calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-35-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | shmem: remove shmem_getpage() | Matthew Wilcox (Oracle) | 1 | -14/+1
With all callers removed, remove this wrapper function. The flags are now mysteriously called SGP, but I think we can live with that.
Link: https://lkml.kernel.org/r/20220902194653.1739778-34-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | userfaultfd: convert mcontinue_atomic_pte() to use a folio | Matthew Wilcox (Oracle) | 1 | -6/+8
shmem_getpage() is being replaced by shmem_get_folio(), so use a folio throughout this function. Saves several calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-33-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | khugepaged: call shmem_get_folio() | Matthew Wilcox (Oracle) | 2 | -4/+7
shmem_getpage() is being removed, so call its replacement and find the precise page ourselves.
Link: https://lkml.kernel.org/r/20220902194653.1739778-32-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | shmem: convert shmem_get_link() to use a folio | Matthew Wilcox (Oracle) | 1 | -16/+17
Symlinks will never use a large folio, but using the folio API removes a lot of unnecessary folio->page->folio conversions.
Link: https://lkml.kernel.org/r/20220902194653.1739778-31-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | shmem: convert shmem_symlink() to use a folio | Matthew Wilcox (Oracle) | 1 | -7/+7
While symlinks will always be < PAGE_SIZE, using the folio APIs gets rid of unnecessary calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-30-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | shmem: convert shmem_fallocate() to use a folio | Matthew Wilcox (Oracle) | 1 | -19/+17
Call shmem_get_folio() and use the folio APIs instead of the page APIs. Saves several calls to compound_head() and removes assumptions about the size of a large folio.
Link: https://lkml.kernel.org/r/20220902194653.1739778-29-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | shmem: convert shmem_file_read_iter() to use shmem_get_folio() | Matthew Wilcox (Oracle) | 1 | -9/+11
Use a folio throughout, saving five calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-28-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | shmem: convert shmem_write_begin() to use shmem_get_folio() | Matthew Wilcox (Oracle) | 1 | -3/+5
Use a folio throughout this function, saving a couple of calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-27-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | shmem: convert shmem_get_partial_folio() to use shmem_get_folio() | Matthew Wilcox (Oracle) | 1 | -5/+4
Get rid of an unnecessary folio->page->folio conversion.
Link: https://lkml.kernel.org/r/20220902194653.1739778-26-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | shmem: add shmem_get_folio() | Matthew Wilcox (Oracle) | 1 | -13/+10
With no remaining callers of shmem_getpage_gfp(), add shmem_get_folio() and reimplement shmem_getpage() as a call to shmem_get_folio().
Link: https://lkml.kernel.org/r/20220902194653.1739778-25-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
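The commit message describes shmem_getpage() being reimplemented on top of shmem_get_folio(); a minimal sketch of what such a compatibility wrapper can look like (illustrative only, the real mm/shmem.c code may differ in detail):

    int shmem_getpage(struct inode *inode, pgoff_t index,
                      struct page **pagep, enum sgp_type sgp)
    {
            struct folio *folio = NULL;
            int ret = shmem_get_folio(inode, index, &folio, sgp);

            if (folio)
                    *pagep = folio_file_page(folio, index);
            else
                    *pagep = NULL;
            return ret;
    }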
2022-10-04 | shmem: convert shmem_read_mapping_page_gfp() to use shmem_get_folio_gfp() | Matthew Wilcox (Oracle) | 1 | -3/+5
Saves a couple of calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-24-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | shmem: convert shmem_fault() to use shmem_get_folio_gfp() | Matthew Wilcox (Oracle) | 1 | -1/+4
No particular advantage for this function, but necessary to remove shmem_getpage_gfp().
[hughd@google.com: fix crash]
Link: https://lkml.kernel.org/r/7693a84-bdc2-27b5-2695-d0fe8566571f@google.com
Link: https://lkml.kernel.org/r/20220902194653.1739778-23-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | shmem: convert shmem_getpage_gfp() to shmem_get_folio_gfp() | Matthew Wilcox (Oracle) | 1 | -29/+41
Add a shmem_getpage_gfp() wrapper for compatibility with current users.
Link: https://lkml.kernel.org/r/20220902194653.1739778-22-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | shmem: eliminate struct page from shmem_swapin_folio() | Matthew Wilcox (Oracle) | 1 | -8/+8
Convert shmem_swapin() to return a folio and use swap_cache_get_folio(), removing all uses of struct page in this function.
Link: https://lkml.kernel.org/r/20220902194653.1739778-21-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | swap: add swap_cache_get_folio() | Matthew Wilcox (Oracle) | 2 | -11/+29
Convert lookup_swap_cache() into swap_cache_get_folio() and add a lookup_swap_cache() wrapper around it.
[akpm@linux-foundation.org: add CONFIG_SWAP=n stub for swap_cache_get_folio()]
Link: https://lkml.kernel.org/r/20220902194653.1739778-20-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
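A sketch of the lookup_swap_cache() compatibility wrapper described above, assuming swap_cache_get_folio() keeps the old argument list (details may differ from the real mm/swap_state.c):

    struct page *lookup_swap_cache(swp_entry_t entry,
                                   struct vm_area_struct *vma,
                                   unsigned long addr)
    {
            struct folio *folio = swap_cache_get_folio(entry, vma, addr);

            if (!folio)
                    return NULL;
            return folio_file_page(folio, swp_offset(entry));
    }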
2022-10-04 | shmem: convert shmem_replace_page() to shmem_replace_folio() | Matthew Wilcox (Oracle) | 1 | -5/+4
The caller has a folio, so convert the calling convention and rename the function.
Link: https://lkml.kernel.org/r/20220902194653.1739778-19-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | shmem: convert shmem_mfill_atomic_pte() to use a folio | Matthew Wilcox (Oracle) | 1 | -26/+19
Assert that this is a single-page folio, as there are several assumptions in here that it's exactly PAGE_SIZE bytes large. Saves several calls to compound_head() and removes the last caller of shmem_alloc_page().
Link: https://lkml.kernel.org/r/20220902194653.1739778-18-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
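The single-page assertion mentioned above is the usual guard for code that still assumes PAGE_SIZE; roughly (sketch only, the exact macro used in the patch may differ):

    VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
    /* everything below may assume the folio is exactly PAGE_SIZE */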
2022-10-04 | memcg: convert mem_cgroup_swapin_charge_page() to mem_cgroup_swapin_charge_folio() | Matthew Wilcox (Oracle) | 3 | -9/+8
All callers now have a folio, so pass it in here and remove an unnecessary call to page_folio().
Link: https://lkml.kernel.org/r/20220902194653.1739778-17-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | mm: convert do_swap_page()'s swapcache variable to a folio | Matthew Wilcox (Oracle) | 1 | -16/+15
The 'swapcache' variable is used to track whether the page is from the swapcache or not. It can do this equally well by being the folio of the page rather than the page itself, and this saves a number of calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-16-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | mm: convert do_swap_page() to use a folio | Matthew Wilcox (Oracle) | 1 | -24/+33
Removes quite a lot of calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-15-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | mm/swap: convert put_swap_page() to put_swap_folio() | Matthew Wilcox (Oracle) | 5 | -8/+8
With all callers now using a folio, we can convert this function.
Link: https://lkml.kernel.org/r/20220902194653.1739778-14-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | mm/swap: convert add_to_swap_cache() to take a folio | Matthew Wilcox (Oracle) | 3 | -20/+20
With all callers using folios, we can convert add_to_swap_cache() to take a folio and use it throughout.
Link: https://lkml.kernel.org/r/20220902194653.1739778-13-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | mm/swap: convert __read_swap_cache_async() to use a folio | Matthew Wilcox (Oracle) | 1 | -19/+19
Remove a few hidden (and one visible) calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-12-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | mm/swapfile: convert try_to_free_swap() to folio_free_swap() | Matthew Wilcox (Oracle) | 3 | -15/+26
Add kernel-doc for folio_free_swap() and make it return bool. Add a try_to_free_swap() compatibility wrapper.
Link: https://lkml.kernel.org/r/20220902194653.1739778-11-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
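Given the description, the compatibility wrapper presumably reduces to a one-liner along these lines (a sketch, not necessarily the literal patch):

    bool try_to_free_swap(struct page *page)
    {
            return folio_free_swap(page_folio(page));
    }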
2022-10-04 | mm/swapfile: remove page_swapcount() | Matthew Wilcox (Oracle) | 1 | -33/+13
By restructuring folio_swapped(), it can use swap_swapcount() instead of page_swapcount(). It's even a little more efficient.
Link: https://lkml.kernel.org/r/20220902194653.1739778-10-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | shmem: convert shmem_replace_page() to use folios throughout | Matthew Wilcox (Oracle) | 1 | -35/+32
Introduce folio_set_swap_entry() to abstract how both folio->private and swp_entry_t work. Use swap_address_space() directly instead of indirecting through folio_mapping(). Include an assertion that the old folio is not large, as we only allocate a single-page folio to replace it. Use folio_put_refs() instead of calling folio_put() twice.
Link: https://lkml.kernel.org/r/20220902194653.1739778-9-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
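A sketch of what a folio_set_swap_entry() helper of this kind looks like: it hides the fact that the swap entry is stashed in folio->private (assumed layout for illustration; the real helper lives in the swap headers):

    static inline void folio_set_swap_entry(struct folio *folio, swp_entry_t entry)
    {
            /* a swp_entry_t is just an unsigned long, kept in ->private */
            folio->private = (void *)entry.val;
    }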
2022-10-04 | shmem: convert shmem_delete_from_page_cache() to take a folio | Matthew Wilcox (Oracle) | 1 | -12/+11
Remove the assertion that the page is not Compound, as this function now handles large folios correctly.
Link: https://lkml.kernel.org/r/20220902194653.1739778-8-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | shmem: convert shmem_writepage() to use a folio throughout | Matthew Wilcox (Oracle) | 1 | -23/+24
Even though we will split any large folio that comes in, write the code to handle large folios so as to not leave a trap for whoever tries to handle large folios in the swap cache.
Link: https://lkml.kernel.org/r/20220902194653.1739778-7-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | mm: add folio_add_lru_vma() | Matthew Wilcox (Oracle) | 2 | -10/+15
Convert lru_cache_add_inactive_or_unevictable() to folio_add_lru_vma() and add a compatibility wrapper.
Link: https://lkml.kernel.org/r/20220902194653.1739778-6-willy@infradead.org
Signed-off-by: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
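The compatibility wrapper described above can be sketched as (illustrative; the actual mm/swap.c code may differ slightly):

    void lru_cache_add_inactive_or_unevictable(struct page *page,
                                               struct vm_area_struct *vma)
    {
            folio_add_lru_vma(page_folio(page), vma);
    }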
2022-10-04 | mm: add split_folio() | Matthew Wilcox (Oracle) | 2 | -2/+2
This wrapper removes a need to use split_huge_page(&folio->page). Convert two callers.
Link: https://lkml.kernel.org/r/20220902194653.1739778-5-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
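Per the commit message, split_folio() is just sugar over split_huge_page(&folio->page); a sketch:

    static inline int split_folio(struct folio *folio)
    {
            return split_huge_page(&folio->page);
    }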
2022-10-04 | mm/vmscan: fix a lot of comments | Matthew Wilcox (Oracle) | 1 | -133/+130
Patch series "MM folio changes for 6.1", v2.

My focus this round has been on shmem. I believe it is now fully converted to folios. Of course, shmem interacts with a lot of the swap cache and other parts of the kernel, so there are patches all over the MM.

This patch series survives a round of xfstests on tmpfs, which is nice, but hardly an exhaustive test. Hugh was nice enough to run a round of tests on it and found a bug which is fixed in this edition.

This patch (of 57):

A lot of comments mention pages when they should say folios. Fix them up.

[akpm@linux-foundation.org: fixups for mglru additions]
Link: https://lkml.kernel.org/r/20220902194653.1739778-1-willy@infradead.org
Link: https://lkml.kernel.org/r/20220902194653.1739778-2-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | ksm: convert to use common struct mm_slot | Qi Zheng | 1 | -76/+56
Convert to use common struct mm_slot, no functional change.
Link: https://lkml.kernel.org/r/20220831031951.43152-8-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | ksm: convert ksm_mm_slot.link to ksm_mm_slot.hash | Qi Zheng | 1 | -7/+7
In order to use common struct mm_slot, convert ksm_mm_slot.link to ksm_mm_slot.hash in advance, no functional change.
Link: https://lkml.kernel.org/r/20220831031951.43152-7-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | ksm: convert ksm_mm_slot.mm_list to ksm_mm_slot.mm_node | Qi Zheng | 1 | -20/+20
In order to use common struct mm_slot, convert ksm_mm_slot.mm_list to ksm_mm_slot.mm_node in advance, no functional change.
Link: https://lkml.kernel.org/r/20220831031951.43152-6-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | ksm: add the ksm prefix to the names of the ksm private structures | Qi Zheng | 1 | -108/+108
In order to prevent the names of ksm's private structures from clashing with the name of the common structure used in subsequent patches, prefix their names with ksm in advance.
Link: https://lkml.kernel.org/r/20220831031951.43152-5-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | mm: thp: convert to use common struct mm_slot | Qi Zheng | 1 | -71/+52
Rename private struct mm_slot to struct khugepaged_mm_slot and convert to use common struct mm_slot with no functional change.
[zhengqi.arch@bytedance.com: fix build error with CONFIG_SHMEM disabled]
Link: https://lkml.kernel.org/r/639fa8d5-8e5b-2333-69dc-40ed46219364@bytedance.com
Link: https://lkml.kernel.org/r/20220831031951.43152-3-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-04 | mm: introduce common struct mm_slot | Qi Zheng | 1 | -0/+55
Patch series "add common struct mm_slot and use it in THP and KSM", v2.

At present, both the THP and KSM modules have similar mm_slot structures for organizing and recording the information required for scanning mm, and each defines the following, exactly identical, operation functions:

 - alloc_mm_slot
 - free_mm_slot
 - get_mm_slot
 - insert_to_mm_slots_hash

In order to de-duplicate this code, this patchset introduces a common struct mm_slot and lets THP and KSM use it.

This patch (of 7):

This patch introduces the common struct mm_slot; subsequent patches will let THP and KSM use it.

Link: https://lkml.kernel.org/r/20220831031951.43152-1-zhengqi.arch@bytedance.com
Link: https://lkml.kernel.org/r/20220831031951.43152-2-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-01 | Merge tag 'mm-hotfixes-stable-2022-09-30' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm | Linus Torvalds | 2 | -1/+5
Pull more hotfixes from Andrew Morton:
 "One MAINTAINERS update, two MM fixes, both cc:stable"

The previous pull wasn't fated to be the last one..

* tag 'mm-hotfixes-stable-2022-09-30' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
  damon/sysfs: fix possible memleak on damon_sysfs_add_target
  mm: fix BUG splat with kvmalloc + GFP_ATOMIC
  MAINTAINERS: drop entry to removed file in ARM/RISCPC ARCHITECTURE
2022-10-01 | damon/sysfs: fix possible memleak on damon_sysfs_add_target | Levi Yun | 1 | -1/+1
When damon_sysfs_add_target() couldn't find a proper task, the newly allocated damon_target structure was not yet registered, so damon_sysfs_destroy_targets() had no way to free it. Fix this possible memory leak by calling damon_add_target() as soon as the new target is allocated.
Link: https://lkml.kernel.org/r/20220926160611.48536-1-sj@kernel.org
Fixes: a61ea561c871 ("mm/damon/sysfs: link DAMON for virtual address spaces monitoring")
Signed-off-by: Levi Yun <ppbuk5246@gmail.com>
Signed-off-by: SeongJae Park <sj@kernel.org>
Reviewed-by: SeongJae Park <sj@kernel.org>
Cc: <stable@vger.kernel.org> [5.17.x]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-01 | mm: fix BUG splat with kvmalloc + GFP_ATOMIC | Florian Westphal | 1 | -0/+4
Martin Zaharinov reports a BUG with the 5.19.10 kernel:

    kernel BUG at mm/vmalloc.c:2437!
    invalid opcode: 0000 [#1] SMP
    CPU: 28 PID: 0 Comm: swapper/28 Tainted: G W O 5.19.9 #1
    [..]
    RIP: 0010:__get_vm_area_node+0x120/0x130
     __vmalloc_node_range+0x96/0x1e0
     kvmalloc_node+0x92/0xb0
     bucket_table_alloc.isra.0+0x47/0x140
     rhashtable_try_insert+0x3a4/0x440
     rhashtable_insert_slow+0x1b/0x30
    [..]

bucket_table_alloc uses kvzalloc(GFP_ATOMIC). If kmalloc fails, this now falls through to vmalloc and hits code paths that assume GFP_KERNEL.

Link: https://lkml.kernel.org/r/20220926151650.15293-1-fw@strlen.de
Fixes: a421ef303008 ("mm: allow !GFP_KERNEL allocations for kvmalloc")
Signed-off-by: Florian Westphal <fw@strlen.de>
Suggested-by: Michal Hocko <mhocko@suse.com>
Link: https://lore.kernel.org/linux-mm/Yy3MS2uhSgjF47dy@pc636/T/#t
Acked-by: Michal Hocko <mhocko@suse.com>
Reported-by: Martin Zaharinov <micron10@gmail.com>
Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
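The fix amounts to refusing the vmalloc fallback for allocations that are not allowed to sleep; a sketch of such a guard in kvmalloc_node() (the exact placement and wording of the real patch may differ):

    /* vmalloc cannot honour non-blocking requests such as GFP_ATOMIC,
     * so fail the kvmalloc instead of tripping a BUG deeper down. */
    if (!gfpflags_allow_blocking(flags))
            return NULL;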
2022-09-30 | Merge branch 'slab/for-6.1/slub_validation_locking' into slab/for-next | Vlastimil Babka | 1 | -11/+14
A fix for a regression in slub_debug caches that could cause slab page leaks and subsequent warnings on cache shutdown, by Feng Tang.
2022-09-30 | mm/slub: fix a slab missed to be freed problem | Feng Tang | 1 | -11/+14
When enabling kasan and kfence's in-kernel kunit tests with slub_debug on, it caught a problem (in the linux-next tree):

    ------------[ cut here ]------------
    kmem_cache_destroy test: Slab cache still has objects when called from test_exit+0x1a/0x30
    WARNING: CPU: 3 PID: 240 at mm/slab_common.c:492 kmem_cache_destroy+0x16c/0x170
    Modules linked in:
    CPU: 3 PID: 240 Comm: kunit_try_catch Tainted: G B N 6.0.0-rc7-next-20220929 #52
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
    RIP: 0010:kmem_cache_destroy+0x16c/0x170
    Code: 41 5c 41 5d e9 a5 04 0b 00 c3 cc cc cc cc 48 8b 55 60 48 8b 4c 24 20 48 c7 c6 40 37 d2 82 48 c7 c7 e8 a0 33 83 e8 4e d7 14 01 <0f> 0b eb a7 41 56 41 89 d6 41 55 49 89 f5 41 54 49 89 fc 55 48 89
    RSP: 0000:ffff88800775fea0 EFLAGS: 00010282
    RAX: 0000000000000000 RBX: ffffffff83bdec48 RCX: 0000000000000000
    RDX: 0000000000000001 RSI: 1ffff11000eebf9e RDI: ffffed1000eebfc6
    RBP: ffff88804362fa00 R08: ffffffff81182e58 R09: ffff88800775fbdf
    R10: ffffed1000eebf7b R11: 0000000000000001 R12: 000000008c800d00
    R13: ffff888005e78040 R14: 0000000000000000 R15: ffff888005cdfad0
    FS: 0000000000000000(0000) GS:ffff88807ed00000(0000) knlGS:0000000000000000
    CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 0000000000000000 CR3: 000000000360e001 CR4: 0000000000370ee0
    DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    Call Trace:
     <TASK>
     test_exit+0x1a/0x30
     kunit_try_run_case+0xad/0xc0
     kunit_generic_run_threadfn_adapter+0x26/0x50
     kthread+0x17b/0x1b0

It was bisected to commit c7323a5ad078 ("mm/slub: restrict sysfs validation to debug caches and make it safe").

The problem is inside free_debug_processing(): under certain circumstances the slab can be removed from the partial list but not freed by discard_slab(), and thus n->nr_slabs is not decreased accordingly. During shutdown, this non-zero n->nr_slabs is detected and reported.

Specifically, the problem is that there are two checks for detecting a full partial list by comparing n->nr_partial >= s->min_partial, where the latter check is affected by remove_partial() decreasing n->nr_partial between the checks. Reorganize the code so there is a single check upfront.

Link: https://lore.kernel.org/all/20220930100730.250248-1-feng.tang@intel.com/
Fixes: c7323a5ad078 ("mm/slub: restrict sysfs validation to debug caches and make it safe")
Signed-off-by: Feng Tang <feng.tang@intel.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2022-09-29 | kfence: use better stack hash seed | Jason A. Donenfeld | 1 | -1/+1
As of the prior commit, the RNG will have incorporated both a cycle counter value and RDRAND, in addition to various other environmental noise. Therefore, using get_random_u32() will supply a stronger seed than simply using random_get_entropy(). N.B.: random_get_entropy() should be considered an internal API of random.c and not generally consumed.
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Marco Elver <elver@google.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
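The change itself is a one-line seed swap; roughly (the variable name below is assumed for illustration, not quoted from the patch):

    stack_hash_seed = get_random_u32();   /* was: (u32)random_get_entropy() */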
2022-09-29 | Merge branch 'slab/for-6.1/kmalloc_size_roundup' into slab/for-next | Vlastimil Babka | 2 | -3/+48
The first two patches from a series by Kees Cook [1] that introduce kmalloc_size_roundup(). This will allow merging of per-subsystem patches using the new function and ultimately stop (ab)using ksize() in a way that causes ongoing trouble for debugging functionality and static checkers.

[1] https://lore.kernel.org/all/20220923202822.2667581-1-keescook@chromium.org/

--
Resolved a conflict between a modification of the mm/slab.c __ksize() comment and a commit that unifies the __ksize() implementation into mm/slab_common.c.
2022-09-29 | Merge branch 'slab/for-6.1/slub_debug_waste' into slab/for-next | Vlastimil Babka | 2 | -38/+119
A patch from Feng Tang that enhances the existing debugfs alloc_traces file for kmalloc caches with information about how much space is wasted by allocations that need less space than the particular kmalloc cache provides.
2022-09-29 | Merge branch 'slab/for-6.1/trivial' into slab/for-next | Vlastimil Babka | 1 | -3/+6
Additional cleanup by Chao Yu removing a BUG_ON() in create_unique_id().
2022-09-29 | slab: Introduce kmalloc_size_roundup() | Kees Cook | 3 | -3/+40
In the effort to help the compiler reason about buffer sizes, the __alloc_size attribute was added to allocators. This improves the scope of the compiler's ability to apply CONFIG_UBSAN_BOUNDS and (in the near future) CONFIG_FORTIFY_SOURCE. For most allocations, this works well, as the vast majority of callers are not expecting to use more memory than what they asked for.

There is, however, one common exception to this: anticipatory resizing of kmalloc allocations. These cases all use ksize() to determine the actual bucket size of a given allocation (e.g. 128 when 126 was asked for). This comes in two styles in the kernel:

1) An allocation has been determined to be too small, and needs to be resized. Instead of the caller choosing its own next best size, it wants to minimize the number of calls to krealloc(), so it just uses ksize() plus some additional bytes, forcing the realloc into the next bucket size, from which it can learn how large it is now. For example:

    data = krealloc(data, ksize(data) + 1, gfp);
    data_len = ksize(data);

2) The minimum size of an allocation is calculated, but since it may grow in the future, just use all the space available in the chosen bucket immediately, to avoid needing to reallocate later. A good example of this is skbuff's allocators:

    data = kmalloc_reserve(size, gfp_mask, node, &pfmemalloc);
    ...
    /* kmalloc(size) might give us more room than requested.
     * Put skb_shared_info exactly at the end of allocated zone,
     * to allow max possible filling before reallocation.
     */
    osize = ksize(data);
    size = SKB_WITH_OVERHEAD(osize);

In both cases, the "how much was actually allocated?" question is answered _after_ the allocation, where the compiler hinting is not in an easy place to make the association any more. This mismatch between the compiler's view of the buffer length and the code's intention about how much it is going to actually use has already caused problems[1].

It is possible to fix this by reordering the use of the "actual size" information. We can serve the needs of users of ksize() and still have accurate buffer length hinting for the compiler by doing the bucket size calculation _before_ the allocation. Code can instead ask "how large an allocation would I get for a given size?". Introduce kmalloc_size_roundup(), to serve this function so we can start replacing the "anticipatory resizing" uses of ksize().

[1] https://github.com/ClangBuiltLinux/linux/issues/1599
    https://github.com/KSPP/linux/issues/183

[ vbabka@suse.cz: add SLOB version ]
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
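A sketch of the intended usage pattern: round the size up front, allocate exactly that, and keep using the rounded size rather than asking ksize() afterwards (illustrative, using only the API introduced here):

    size_t bytes = kmalloc_size_roundup(count);
    void *data = kmalloc(bytes, GFP_KERNEL);

    if (data)
            /* 'bytes' is the usable length; no later ksize(data) needed */
            memset(data, 0, bytes);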
2022-09-29 | slab: Remove __malloc attribute from realloc functions | Kees Cook | 1 | -2/+2
The __malloc attribute should not be applied to "realloc" functions, as the returned pointer may alias the storage of the prior pointer. Instead of splitting __malloc from __alloc_size, which would be a huge amount of churn, just create __realloc_size for the few cases where it is needed.

Thanks to Geert Uytterhoeven <geert@linux-m68k.org> for reporting build failures with gcc-8 in an earlier version which tried to remove the #ifdef. While the "alloc_size" attribute is available on all GCC versions, I forgot that it gets disabled explicitly by the kernel in GCC < 9.1 due to misbehaviors. Add a note to the compiler_attributes.h entry for it.

Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Marco Elver <elver@google.com>
Cc: linux-mm@kvack.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
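A sketch of the idea: keep the size hint but drop the no-alias (__malloc) implication for realloc-style functions. The macro spelling below is an assumption for illustration, not a quote of the real headers:

    /* size hint only, without __malloc */
    #define __realloc_size(x, ...)  __alloc_size__(x, ## __VA_ARGS__)

    void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags)
                    __realloc_size(2);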