| author | Waiman Long <Waiman.Long@hpe.com> | 2015-12-02 21:41:50 +0300 |
|---|---|---|
| committer | Ingo Molnar <mingo@kernel.org> | 2015-12-04 12:34:49 +0300 |
| commit | aa0b7ae06387d40a988ce16a189082dee6e570bc (patch) | |
| tree | cd18162e482e88149c28a3657393fc4d45262aff /kernel/sched/fair.c | |
| parent | b0367629acf62a78404c467cd09df447c2fea804 (diff) | |
| download | linux-aa0b7ae06387d40a988ce16a189082dee6e570bc.tar.xz | |
sched/fair: Disable the task group load_avg update for the root_task_group
Currently, the update_tg_load_avg() function attempts to update the
tg's load_avg value whenever the load changes, even for the
root_task_group, whose load_avg value is never used. This patch
disables the load_avg update when the given task group is the
root_task_group.
When running a Java benchmark with noautogroup and a 4.3 kernel on a
16-socket IvyBridge-EX system, the amount of CPU time (as reported by
perf) consumed by task_tick_fair(), which includes update_tg_load_avg(),
decreased from 0.71% to 0.22%, a more than 3X reduction. The Max-jOPs
results also increased slightly, from 983015 to 986449.
Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Ben Segall <bsegall@google.com>
Cc: Douglas Hatch <doug.hatch@hpe.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Paul Turner <pjt@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hpe.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Yuyang Du <yuyang.du@intel.com>
Link: http://lkml.kernel.org/r/1449081710-20185-4-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched/fair.c')
 kernel/sched/fair.c | 6 ++++++
 1 file changed, 6 insertions(+), 0 deletions(-)
```diff
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4b0e8b8700fd..1093873dcd0f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2709,6 +2709,12 @@ static inline void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
 {
 	long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
 
+	/*
+	 * No need to update load_avg for root_task_group as it is not used.
+	 */
+	if (cfs_rq->tg == &root_task_group)
+		return;
+
 	if (force || abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
 		atomic_long_add(delta, &cfs_rq->tg->load_avg);
 		cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
```
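For readers outside the kernel tree, below is a minimal user-space sketch in plain C11 of the two ideas the patched function combines: an early return that skips the shared-counter update when the group is the root (whose aggregate is never read), and the 1/64 threshold that rate-limits writes to the shared counter. The struct layouts, field names, and the main() driver here are simplified stand-ins invented for illustration, not the scheduler's real API; only the names update_tg_load_avg, root_task_group, load_avg, and the 1/64 threshold come from the patch itself.

```c
/*
 * Minimal user-space sketch of the pattern this patch applies: skip the
 * shared-counter update entirely when the group is the root, since its
 * aggregate is never consumed. Simplified stand-ins for the kernel's
 * struct task_group / struct cfs_rq, not the real scheduler types.
 */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct task_group {
	atomic_long load_avg;		/* shared, cross-CPU aggregate */
};

struct cfs_rq {
	struct task_group *tg;		/* owning task group */
	long avg_load_avg;		/* this CPU's current load average */
	long tg_load_avg_contrib;	/* what we last added to tg->load_avg */
};

static struct task_group root_task_group;	/* aggregate never read */

static void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
{
	long delta = cfs_rq->avg_load_avg - cfs_rq->tg_load_avg_contrib;

	/*
	 * The root group's load_avg is written but never read, so skip
	 * the atomic read-modify-write (a cache-line bounce on large
	 * multi-socket systems) that would keep it up to date.
	 */
	if (cfs_rq->tg == &root_task_group)
		return;

	/*
	 * Only propagate when the change exceeds 1/64 of the last
	 * contribution, rate-limiting traffic on the shared counter.
	 */
	if (force || labs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
		atomic_fetch_add(&cfs_rq->tg->load_avg, delta);
		cfs_rq->tg_load_avg_contrib = cfs_rq->avg_load_avg;
	}
}

int main(void)
{
	struct task_group tg = { .load_avg = 0 };
	struct cfs_rq on_root = { .tg = &root_task_group, .avg_load_avg = 100 };
	struct cfs_rq on_tg   = { .tg = &tg, .avg_load_avg = 100 };

	update_tg_load_avg(&on_root, 0);	/* early return, no update */
	update_tg_load_avg(&on_tg, 0);		/* delta 100 > 0/64: propagated */

	printf("root aggregate: %ld (untouched)\n",
	       atomic_load(&root_task_group.load_avg));
	printf("tg aggregate:   %ld\n", atomic_load(&tg.load_avg));
	return 0;
}
```

The design point the sketch tries to make visible: the guard costs one pointer compare per call, whereas the update it avoids is an atomic add on a counter shared by every CPU, which is exactly the kind of cross-socket cache-line traffic the benchmark numbers above reflect.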