author | Peter Zijlstra <peterz@infradead.org> | 2017-04-10 14:20:45 +0300
committer | Ingo Molnar <mingo@kernel.org> | 2017-04-14 11:26:36 +0300
commit | bb0bd044e65c2bf0f26b29613fcc441dfdeedf14 (patch)
tree | 1248b06e789e378c26652059d9efeda6951085c8 /kernel/sched
parent | 3841cdc31099fe3b84c93903c63e3d60348c0ea1 (diff)
download | linux-bb0bd044e65c2bf0f26b29613fcc441dfdeedf14.tar.xz
sched/fair: Increase PELT accuracy for small tasks
We truncate (and lose) the lower 10 bits of runtime in
___update_load_avg(), which means there is a consistent bias towards
under-accounting tasks. This is especially significant for small tasks.

Cure this by only forwarding last_update_time to the point we have
actually accounted for, leaving the remainder for the next update.
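For illustration (hypothetical numbers, not from the patch): a task
whose average is updated every 1500ns has each delta truncated to a
single 1024ns period, so roughly a third of its runtime is never
accounted; carrying the 476ns remainder into the next delta lets a
later update pick it up.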
Reported-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched')
-rw-r--r-- | kernel/sched/fair.c | 3
1 file changed, 2 insertions(+), 1 deletion(-)
```diff
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d43e9ac9c3c5..1e3b99a9ab69 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2915,7 +2915,8 @@ ___update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 	delta >>= 10;
 	if (!delta)
 		return 0;
-	sa->last_update_time = now;
+
+	sa->last_update_time += delta << 10;
 
 	/*
 	 * Now we know we crossed measurement unit boundaries. The *_avg
```
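To make the effect concrete, here is a minimal userspace sketch, not
kernel code: it keeps the 1024ns accounting unit from
___update_load_avg() but uses a hypothetical workload (an update every
1500ns), and contrasts the old policy of snapping last_update_time to
now against the new policy of advancing it only by the accounted
delta << 10:

```c
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
	uint64_t now = 0, last_old = 0, last_new = 0;
	uint64_t acc_old = 0, acc_new = 0;	/* accounted 1024ns periods */

	/* Hypothetical small task: its average is updated every 1500ns. */
	for (int i = 0; i < 1000; i++) {
		uint64_t d;

		now += 1500;

		/* Old policy: forward to 'now', discarding the low 10 bits. */
		d = (now - last_old) >> 10;
		acc_old += d;
		last_old = now;

		/* New policy: forward only by what was actually accounted,
		 * leaving the sub-1024ns remainder for the next delta. */
		d = (now - last_new) >> 10;
		acc_new += d;
		last_new += d << 10;
	}

	/* True runtime: 1000 * 1500ns = 1500000ns, i.e. ~1464 periods. */
	printf("old: %" PRIu64 " periods, new: %" PRIu64 " periods\n",
	       acc_old, acc_new);
	return 0;
}
```

Under these assumed numbers, the old policy accounts 1000 periods
against a true floor(1500000 / 1024) = 1464, i.e. roughly a third of
the runtime is lost to truncation, while the remainder-carrying policy
converges to the true figure.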