| author | Kaitao Cheng <chengkaitao@kylinos.cn> | 2026-03-21 15:08:47 +0300 |
|---|---|---|
| committer | Andrew Morton <akpm@linux-foundation.org> | 2026-04-05 23:53:34 +0300 |
| commit | c4a9439a5a372c6c0eb7cd2bc9dbb2494699e98d | |
| tree | 181aa437367091c01d9b29cfe9b1e39cb2d7126f | |
| parent | 4fb61d95ad21c3b6f1c09f357ff49d70abb0535e | |
mm: mark early-init static variables with __meminitdata
Static variables defined inside __meminit functions should also be marked
with __meminitdata, so that their storage is placed in the .init.data
section and reclaimed with free_initmem(), thereby reducing permanent .bss
memory usage when CONFIG_MEMORY_HOTPLUG is disabled.
Link: https://lkml.kernel.org/r/20260321120847.8159-1-pilgrimtao@gmail.com
Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@kernel.org>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
| -rw-r--r-- | mm/mm_init.c | 2 |
| -rw-r--r-- | mm/sparse-vmemmap.c | 2 |
2 files changed, 2 insertions, 2 deletions
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 4324b93ccebd..79f93f2a90cf 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -812,7 +812,7 @@ void __meminit reserve_bootmem_region(phys_addr_t start,
 static bool __meminit
 overlap_memmap_init(unsigned long zone, unsigned long *pfn)
 {
-	static struct memblock_region *r;
+	static struct memblock_region *r __meminitdata;
 
 	if (mirrored_kernelcore && zone == ZONE_MOVABLE) {
 		if (!r || *pfn >= memblock_region_memory_end_pfn(r)) {
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 24a37676cecb..6eadb9d116e4 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -62,7 +62,7 @@ void * __meminit vmemmap_alloc_block(unsigned long size, int node)
 	if (slab_is_available()) {
 		gfp_t gfp_mask = GFP_KERNEL|__GFP_RETRY_MAYFAIL|__GFP_NOWARN;
 		int order = get_order(size);
-		static bool warned;
+		static bool warned __meminitdata;
 		struct page *page;
 
 		page = alloc_pages_node(node, gfp_mask, order);
