author | Vlastimil Babka <vbabka@suse.cz> | 2020-12-15 06:10:56 +0300
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2020-12-15 23:13:43 +0300
commit | 7612921f2376d51d020ae2f06ffb7da40422b75b (patch)
tree | 9ad69648279abc90b408aa470e03a77da5daffb8 /mm/memory_hotplug.c
parent | 952eaf815925f106eb6b68346b3458a68bb18ec1 (diff)
download | linux-7612921f2376d51d020ae2f06ffb7da40422b75b.tar.xz
mm, page_alloc: move draining pcplists to page isolation users
Currently, pcplists are drained during set_migratetype_isolate(), which
means once per pageblock processed by start_isolate_page_range(). This is
somewhat wasteful. Moreover, the callers might need different guarantees,
and the draining is currently prone to races: it does not guarantee that
no page from an isolated pageblock ends up on a pcplist after the drain.

Better guarantees are added by later patches, which require explicit
actions by the page isolation users that need them. Thus it makes sense
to also move the current, imperfect draining to the callers as a
preparation step.
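A minimal sketch of the caller-side pattern this prepares for (not the
literal patch: the helper name isolate_and_drain() is hypothetical, while
start_isolate_page_range(), MEMORY_OFFLINE and drain_all_pages() match the
in-tree API of this era):

```c
#include <linux/gfp.h>
#include <linux/page-isolation.h>

/* Hypothetical helper showing the pattern, not actual kernel code. */
static int isolate_and_drain(struct zone *zone,
			     unsigned long start_pfn, unsigned long end_pfn)
{
	int ret;

	/* Mark all pageblocks in the range MIGRATE_ISOLATE first. */
	ret = start_isolate_page_range(start_pfn, end_pfn,
				       MIGRATE_MOVABLE, MEMORY_OFFLINE);
	if (ret)
		return ret;

	/*
	 * Then drain pcplists once for the whole range, instead of
	 * once per pageblock from within set_migratetype_isolate().
	 * Pages freed to a pcplist before their pageblock was marked
	 * isolated are flushed back to the buddy allocator here.
	 */
	drain_all_pages(zone);
	return 0;
}
```

A single drain per range is cheaper than a drain per pageblock, and with
the drain in the caller, each page isolation user can later add whichever
stronger guarantee it actually needs.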
Link: https://lkml.kernel.org/r/20201111092812.11329-7-vbabka@suse.cz
Suggested-by: David Hildenbrand <david@redhat.com>
Suggested-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/memory_hotplug.c')
-rw-r--r-- | mm/memory_hotplug.c | 11
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 41c62295292b..3c494ab0d075 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1500,6 +1500,8 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 		goto failed_removal;
 	}
 
+	drain_all_pages(zone);
+
 	arg.start_pfn = start_pfn;
 	arg.nr_pages = nr_pages;
 	node_states_check_changes_offline(nr_pages, zone, &arg);
@@ -1550,11 +1552,10 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 	}
 
 	/*
-	 * per-cpu pages are drained in start_isolate_page_range, but if
-	 * there are still pages that are not free, make sure that we
-	 * drain again, because when we isolated range we might
-	 * have raced with another thread that was adding pages to pcp
-	 * list.
+	 * per-cpu pages are drained after start_isolate_page_range, but
+	 * if there are still pages that are not free, make sure that we
+	 * drain again, because when we isolated range we might have
+	 * raced with another thread that was adding pages to pcp list.
 	 *
 	 * Forward progress should be still guaranteed because
 	 * pages on the pcp list can only belong to MOVABLE_ZONE
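For context, the retry behaviour the updated comment refers to looks
roughly like this (hand-condensed from the offline_pages() loop of this
era; migration, signal handling and error paths omitted, so treat it as a
sketch rather than verbatim kernel code):

```c
	do {
		/* ... migrate movable pages out of [start_pfn, end_pfn) ... */

		/*
		 * The earlier drain was racy: another CPU may have freed
		 * pages onto a pcplist while the range was being isolated,
		 * so if the range is not fully isolated yet, drain again
		 * and retry. Forward progress is still guaranteed because
		 * such pcplist pages can only belong to MOVABLE_ZONE.
		 */
		ret = test_pages_isolated(start_pfn, end_pfn, MEMORY_OFFLINE);
		if (ret)
			drain_all_pages(zone);
	} while (ret);
```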