authorJohannes Weiner <hannes@cmpxchg.org>2017-05-04 00:52:03 +0300
committerLinus Torvalds <torvalds@linux-foundation.org>2017-05-04 01:52:08 +0300
commita2d7f8e461881394167bafb616112a96f5f567d0 (patch)
tree58f850860e0ab32b19592050d082d7d359329c3d
parent15038d0de9eca33f66bc1fed4243914906e04de4 (diff)
downloadlinux-a2d7f8e461881394167bafb616112a96f5f567d0.tar.xz
mm: don't avoid high-priority reclaim on unreclaimable nodes
Commit 246e87a93934 ("memcg: fix get_scan_count() for small targets") sought to avoid high reclaim priorities for kswapd by forcing it to scan a minimum amount of pages when lru_pages >> priority yielded nothing.

Commit b95a2f2d486d ("mm: vmscan: convert global reclaim to per-memcg LRU lists"), due to switching global reclaim to a round-robin scheme over all cgroups, had to restrict this forceful behavior to unreclaimable zones in order to prevent massive overreclaim with many cgroups.

The latter patch effectively neutered the behavior completely for all but extreme memory pressure. But in those situations we might as well drop the reclaimers to lower priority levels. Remove the check.

Link: http://lkml.kernel.org/r/20170228214007.5621-6-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Jia He <hejianet@gmail.com>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-rw-r--r--	mm/vmscan.c	19
1 file changed, 5 insertions, 14 deletions
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 014d0d181be0..2fd50ca88016 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2130,22 +2130,13 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
int pass;
/*
- * If the zone or memcg is small, nr[l] can be 0. This
- * results in no scanning on this priority and a potential
- * priority drop. Global direct reclaim can go to the next
- * zone and tends to have no problems. Global kswapd is for
- * zone balancing and it needs to scan a minimum amount. When
+ * If the zone or memcg is small, nr[l] can be 0. When
* reclaiming for a memcg, a priority drop can cause high
- * latencies, so it's better to scan a minimum amount there as
- * well.
+ * latencies, so it's better to scan a minimum amount. When a
+ * cgroup has already been deleted, scrape out the remaining
+ * cache forcefully to get rid of the lingering state.
*/
- if (current_is_kswapd()) {
- if (!pgdat_reclaimable(pgdat))
- force_scan = true;
- if (!mem_cgroup_online(memcg))
- force_scan = true;
- }
- if (!global_reclaim(sc))
+ if (!global_reclaim(sc) || !mem_cgroup_online(memcg))
force_scan = true;
/* If we have no swap space, do not bother scanning anon pages. */