author		Mel Gorman <mgorman@suse.de>	2012-11-19 14:59:15 +0400
committer	Mel Gorman <mgorman@suse.de>	2012-12-11 18:42:51 +0400
commit		e14808b49f55e0e1135da5e4a154a540dd9f3662 (patch)
tree		d66708455dcc1b6e2e15937d732ab12c121e623a /include/linux
parent		a8f6077213d285ca08dbf6d4a67470787388138b (diff)
download	linux-e14808b49f55e0e1135da5e4a154a540dd9f3662.tar.xz
mm: numa: Rate limit setting of pte_numa if node is saturated
If there are a large number of NUMA hinting faults and all of them
are resulting in migrations, it may indicate that memory is just
bouncing uselessly around. NUMA balancing cost is likely exceeding
any benefit from locality. Rate limit the PTE updates if the node
is migration rate-limited. As noted in the comments, this distorts
the NUMA faulting statistics.
Signed-off-by: Mel Gorman <mgorman@suse.de>
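
The helper declared below lets the NUMA PTE scanner ask whether a node has
already hit its migration rate limit before it marks more PTEs as pte_numa.
The scanner-side check lands outside include/linux and is therefore not
visible in this diffstat; the fragment below is only a sketch of how the gate
described above could be applied (the wrapper function name is made up for
illustration):

#include <linux/migrate.h>
#include <linux/topology.h>

/* Hypothetical illustration of the gate, not the committed scanner code. */
static void numa_scan_pass_sketch(void)
{
	/*
	 * Skip the pte_numa update pass while the local node is
	 * rate-limited: the hinting faults it would generate could not
	 * be serviced by migration anyway. As the changelog notes, this
	 * distorts the NUMA faulting statistics.
	 */
	if (migrate_ratelimited(numa_node_id()))
		return;

	/* ... otherwise continue setting pte_numa on the working set ... */
}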
Diffstat (limited to 'include/linux')
-rw-r--r--	include/linux/migrate.h	6
1 file changed, 6 insertions, 0 deletions
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index f0d0313eea6f..91556889adac 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -77,11 +77,17 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
 
 #ifdef CONFIG_NUMA_BALANCING
 extern int migrate_misplaced_page(struct page *page, int node);
+extern int migrate_misplaced_page(struct page *page, int node);
+extern bool migrate_ratelimited(int node);
 #else
 static inline int migrate_misplaced_page(struct page *page, int node)
 {
 	return -EAGAIN; /* can't migrate now */
 }
+static inline bool migrate_ratelimited(int node)
+{
+	return false;
+}
 #endif /* CONFIG_NUMA_BALANCING */
 
 #endif /* _LINUX_MIGRATE_H */
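
The migrate_ratelimited() implementation itself lives in mm/migrate.c and is
outside the scope of this diffstat. Assuming the per-node window and page
counter added by the parent rate-limiting patch (numabalancing_migrate_next_window,
numabalancing_migrate_nr_pages) and its ratelimit_pages threshold, a plausible
sketch of the helper is:

#include <linux/migrate.h>
#include <linux/mmzone.h>
#include <linux/jiffies.h>

/* Per-window migration quota from the parent patch (assumed name). */
extern unsigned int ratelimit_pages;

/* Sketch only: reports whether @node is currently migration rate-limited. */
bool migrate_ratelimited(int node)
{
	pg_data_t *pgdat = NODE_DATA(node);

	/* The previous rate-limit window has expired, so nothing throttles us. */
	if (time_after(jiffies, pgdat->numabalancing_migrate_next_window))
		return false;

	/* Saturated once the window's migration quota has been consumed. */
	if (pgdat->numabalancing_migrate_nr_pages < ratelimit_pages)
		return false;

	return true;
}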