author    | Mel Gorman <mgorman@suse.de>             | 2012-11-15 13:01:14 +0400
committer | Mel Gorman <mgorman@suse.de>             | 2012-12-11 18:42:51 +0400
commit    | fb003b80daa0dead5b87f4e2e4fb8da68b110ff2 (patch)
tree      | aa9c3694cb21e61774471e88bd22918c5746c706 /kernel
parent    | e14808b49f55e0e1135da5e4a154a540dd9f3662 (diff)
download  | linux-fb003b80daa0dead5b87f4e2e4fb8da68b110ff2.tar.xz
sched: numa: Slowly increase the scanning period as NUMA faults are handled
Currently the rate of scanning for an address space is controlled
by the individual tasks. The next scan is simply determined by
2*p->numa_scan_period.
The 2*p->numa_scan_period is arbitrary and never changes. At this point
there is still no proper policy that decides whether a task or process is
properly placed; scanning simply continues on the assumption that the
next NUMA fault will place it correctly. As it is assumed that pages will
get properly placed over time, increase the scan window each time a fault
is incurred. This is a big assumption, as noted in the comments.
It should be noted that changing to p->numa_scan_period will increase
system CPU usage because now the scanning rate has effectively doubled.
If that is a problem then the min_rate should be made 200ms instead of
restoring the 2* logic.
Signed-off-by: Mel Gorman <mgorman@suse.de>
Diffstat (limited to 'kernel')

-rw-r--r--	kernel/sched/fair.c	11
1 files changed, 10 insertions(+), 1 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 37e895a941ab..dd18087fd369 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -812,6 +812,15 @@ void task_numa_fault(int node, int pages)

 	/* FIXME: Allocate task-specific structure for placement policy here */

+	/*
+	 * Assume that as faults occur that pages are getting properly placed
+	 * and fewer NUMA hints are required. Note that this is a big
+	 * assumption, it assumes processes reach a steady state with no
+	 * further phase changes.
+	 */
+	p->numa_scan_period = min(sysctl_numa_balancing_scan_period_max,
+			p->numa_scan_period + jiffies_to_msecs(2));
+
 	task_numa_placement(p);
 }

@@ -858,7 +867,7 @@ void task_numa_work(struct callback_head *work)
 	if (p->numa_scan_period == 0)
 		p->numa_scan_period = sysctl_numa_balancing_scan_period_min;

-	next_scan = now + 2*msecs_to_jiffies(p->numa_scan_period);
+	next_scan = now + msecs_to_jiffies(p->numa_scan_period);

 	if (cmpxchg(&mm->numa_next_scan, migrate, next_scan) != migrate)
 		return;