author     Chengming Zhou <zhouchengming@bytedance.com>  2022-08-18 15:47:59 +0300
committer  Peter Zijlstra <peterz@infradead.org>         2022-08-23 12:01:18 +0300
commit     5d6da83c44af70ede7bfd0fd6d1ef8a3b3e0402c (patch)
tree       e6924adb72d878b590c9464b4338411688aa8252 /kernel/sched/fair.c
parent     39c4261191bf05e7eb310f852980a6d0afe5582a (diff)
download   linux-5d6da83c44af70ede7bfd0fd6d1ef8a3b3e0402c.tar.xz
sched/fair: Reset sched_avg last_update_time before set_task_rq()
set_task_rq() -> set_task_rq_fair() will try to synchronize the blocked
task's sched_avg on migration, which is not needed for an already
detached task.

task_change_group_fair() detaches the task's sched_avg from the previous
cfs_rq first, so reset sched_avg last_update_time before set_task_rq()
to avoid that redundant synchronization.
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lore.kernel.org/r/20220818124805.601-4-zhouchengming@bytedance.com
Diffstat (limited to 'kernel/sched/fair.c')
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2c0eb2a4e341..e4c0929a6e71 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11660,12 +11660,12 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 static void task_change_group_fair(struct task_struct *p)
 {
 	detach_task_cfs_rq(p);
-	set_task_rq(p, task_cpu(p));
 
 #ifdef CONFIG_SMP
 	/* Tell se's cfs_rq has been changed -- migrated */
 	p->se.avg.last_update_time = 0;
 #endif
+	set_task_rq(p, task_cpu(p));
 	attach_task_cfs_rq(p);
 }