author     Mel Gorman <mgorman@techsingularity.net>           2022-11-18 13:17:13 +0300
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>    2023-10-10 23:00:36 +0300
commit     570786ac6f04b6958b2224254a5138f619f334a8 (patch)
tree       5e8c2847e648c19ebaa806e5cf3fa6cf8bf2a4fa
parent     939189aedfac6b7b8dfd6eb02f70b251c51a8163 (diff)
download   linux-570786ac6f04b6958b2224254a5138f619f334a8.tar.xz
mm/page_alloc: always remove pages from temporary list
[ Upstream commit c3e58a70425ac6ddaae1529c8146e88b4f7252bb ]
Patch series "Leave IRQs enabled for per-cpu page allocations", v3.
This patch (of 2):
free_unref_page_list() has neglected to remove pages properly from the
list of pages to free since forever. It works by coincidence because
list_add() happened to do the right thing when pages were only ever
added to the PCP lists. However, a later patch adds pages to either the
PCP list or the zone list, but only one of those paths properly deletes
the page from the temporary list, leading to list corruption and a
subsequent failure. As a preparation patch, always delete the pages from
one list properly before adding them to another. On its own this fixes
nothing, and it adds a fractional amount of overhead, but it is critical
to the next patch.
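To make the failure mode concrete, here is a minimal userspace sketch
(an illustration, not kernel code): the list_add()/list_del() helpers
below imitate the semantics of the kernel's <linux/list.h>, and a bare
list node stands in for a page's page->lru linkage. Adding a node to a
second list without deleting it from the first leaves the first list's
head pointing at a node that no longer points back, which is exactly the
corruption the later patch would trigger:

#include <stdio.h>

/* Imitation of the kernel's intrusive doubly-linked list (<linux/list.h>). */
struct list_head {
	struct list_head *next, *prev;
};

static void list_init(struct list_head *h)
{
	h->next = h->prev = h;
}

/* Insert @n after @h. Note: does NOT unlink @n from its current list. */
static void list_add(struct list_head *n, struct list_head *h)
{
	n->next = h->next;
	n->prev = h;
	h->next->prev = n;
	h->next = n;
}

/* Unlink @n from whatever list it is on. */
static void list_del(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

int main(void)
{
	struct list_head tmp_list, pcp_list, node;

	/* Buggy pattern: add to a second list without deleting first. */
	list_init(&tmp_list);
	list_init(&pcp_list);
	list_add(&node, &tmp_list);
	list_add(&node, &pcp_list);
	printf("tmp_list head still points at node: %s\n",
	       tmp_list.next == &node ? "yes (corrupt)" : "no");

	/* Fixed pattern, as in this patch: list_del() before list_add(). */
	list_init(&tmp_list);
	list_init(&pcp_list);
	list_add(&node, &tmp_list);
	list_del(&node);
	list_add(&node, &pcp_list);
	printf("tmp_list empty after move: %s\n",
	       tmp_list.next == &tmp_list ? "yes" : "no");
	return 0;
}

Built with any C compiler, the first report shows the temporary list's
stale head pointer; the delete-then-add sequence at the end mirrors what
this patch makes free_unref_page_list() do before a page reaches either
the PCP or zone list.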
Link: https://lkml.kernel.org/r/20221118101714.19590-1-mgorman@techsingularity.net
Link: https://lkml.kernel.org/r/20221118101714.19590-2-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Reported-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Yu Zhao <yuzhao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Stable-dep-of: 7b086755fb8c ("mm: page_alloc: fix CMA and HIGHATOMIC landing on the wrong buddy list")
Signed-off-by: Sasha Levin <sashal@kernel.org>
-rw-r--r--  mm/page_alloc.c  2
1 file changed, 2 insertions, 0 deletions
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 69668817fed3..d94ac6d87bc9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3547,6 +3547,8 @@ void free_unref_page_list(struct list_head *list)
 	list_for_each_entry_safe(page, next, list, lru) {
 		struct zone *zone = page_zone(page);
 
+		list_del(&page->lru);
+
 		/* Different zone, different pcp lock. */
 		if (zone != locked_zone) {
 			if (pcp)