author		Mel Gorman <mgorman@techsingularity.net>	2016-07-29 01:47:31 +0300
committer	Linus Torvalds <torvalds@linux-foundation.org>	2016-07-29 02:07:41 +0300
commit		5a1c84b404a7176b8b36e2a0041b6f0adb3151a3 (patch)
tree		ff98e242c5d4d3a24ca49f6ddc707028aeb938f9 /mm/vmscan.c
parent		bb4cc2bea6df7854d629bff114ca03237cc718d6 (diff)
download	linux-5a1c84b404a7176b8b36e2a0041b6f0adb3151a3.tar.xz
mm: remove reclaim and compaction retry approximations
If per-zone LRU accounting is available then there is no point
approximating whether reclaim and compaction should retry based on pgdat
statistics. This is effectively a revert of "mm, vmstat: remove zone
and node double accounting by approximating retries" with the difference
that inactive/active stats are still available. This preserves the
history of why the approximation was tried and why it had to be
reverted to handle OOM kills on 32-bit systems.
Link: http://lkml.kernel.org/r/1469110261-7365-4-git-send-email-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
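The commit message above turns on a simple point: a pgdat-wide statistic can only estimate how many reclaimable pages sit in any one zone, and on 32-bit systems a small lowmem zone whose composition differs from the node average gets badly misjudged. Below is a minimal userspace sketch (not kernel code) of that failure mode; the struct fields and the even-spread scaling heuristic are illustrative assumptions, not the exact logic of the reverted patch.

/*
 * Illustrative userspace sketch: why exact per-zone LRU counters beat
 * a node-level approximation. All fields and the scaling heuristic
 * below are assumptions for illustration only.
 */
#include <stdio.h>

struct zone_sample {
	unsigned long managed_pages;	/* pages managed by this zone */
	unsigned long lru_pages;	/* exact per-zone LRU count */
};

int main(void)
{
	/* A node with a small lowmem-like zone and a large Normal zone. */
	struct zone_sample zones[2] = {
		{ .managed_pages = 16384,  .lru_pages = 15000 },
		{ .managed_pages = 245760, .lru_pages = 20000 },
	};
	unsigned long node_managed = 0, node_lru = 0;

	for (int i = 0; i < 2; i++) {
		node_managed += zones[i].managed_pages;
		node_lru += zones[i].lru_pages;
	}

	for (int i = 0; i < 2; i++) {
		/* pgdat-based guess: assume LRU pages are spread evenly
		 * across the node in proportion to zone size. */
		unsigned long approx =
			node_lru * zones[i].managed_pages / node_managed;
		printf("zone %d: approx %lu vs exact %lu reclaimable\n",
		       i, approx, zones[i].lru_pages);
	}
	return 0;
}

For the small zone the even-spread guess yields roughly 2187 pages against an exact count of 15000, so a retry decision based on the approximation can conclude a mostly-reclaimable lowmem zone is hopeless (or the reverse), which is consistent with the 32-bit OOM kills that forced the revert.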
Diffstat (limited to 'mm/vmscan.c')
-rw-r--r--	mm/vmscan.c	18
1 file changed, 18 insertions(+), 0 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 222d5403dd4b..134381a20099 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -194,6 +194,24 @@ static bool sane_reclaim(struct scan_control *sc)
 }
 #endif
 
+/*
+ * This misses isolated pages which are not accounted for to save counters.
+ * As the data only determines if reclaim or compaction continues, it is
+ * not expected that isolated pages will be a dominating factor.
+ */
+unsigned long zone_reclaimable_pages(struct zone *zone)
+{
+	unsigned long nr;
+
+	nr = zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_FILE) +
+		zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_FILE);
+	if (get_nr_swap_pages() > 0)
+		nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
+			zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);
+
+	return nr;
+}
+
 unsigned long pgdat_reclaimable_pages(struct pglist_data *pgdat)
 {
 	unsigned long nr;
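A note on the _snapshot accessors used above: per-zone vmstat counters are kept as a global atomic plus unflushed per-cpu deltas, so a plain read can lag reality; the snapshot variant folds the deltas back in, which matters when the result gates reclaim/compaction retry decisions. The following is a sketch modeled on zone_page_state_snapshot() in include/linux/vmstat.h from this era; treat the field names as approximate rather than authoritative.

/*
 * Sketch of zone_page_state_snapshot(), modeled on
 * include/linux/vmstat.h circa v4.8; field names are recalled
 * from that era and may not match the current tree exactly.
 */
static inline unsigned long zone_page_state_snapshot(struct zone *zone,
					enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);
#ifdef CONFIG_SMP
	int cpu;

	/* Fold in per-cpu deltas that have not been flushed yet. */
	for_each_online_cpu(cpu)
		x += per_cpu_ptr(zone->pageset, cpu)->vm_stat_diff[item];

	/* Deltas can transiently drive the sum negative; clamp it. */
	if (x < 0)
		x = 0;
#endif
	return x;
}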