author		Peter Zijlstra <peterz@infradead.org>	2020-09-25 17:42:31 +0300
committer	Peter Zijlstra <peterz@infradead.org>	2020-11-10 20:38:59 +0300
commit		120455c514f7321981c907a01c543b05aff3f254 (patch)
tree		fbc0bfe58c8e457bea81218250965b0c70abffe7 /kernel/sched/rt.c
parent		1cf12e08bc4d50a76b80c42a3109c53d8794a0c9 (diff)
sched: Fix hotplug vs CPU bandwidth control

Since we now migrate tasks away before DYING, we should also move the
bandwidth unthrottle earlier; otherwise we can gain tasks from an
unthrottle after we already expect all tasks to be gone.

Also, it looks like the RT balancers don't respect cpu_active() and
instead rely in part on rq->online; make that reliance complete. This
too requires calling set_rq_offline() earlier to match the cpu_active()
semantics. (The bigger patch would be to convert RT to cpu_active()
entirely.)

Since set_rq_online() is called from sched_cpu_activate(), place
set_rq_offline() in sched_cpu_deactivate().
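
A rough way to picture that pairing is the self-contained userspace
sketch below. It is not the kernel implementation: every toy_* name,
the flag layout and the priority values are invented for illustration;
only set_rq_online()/set_rq_offline(), sched_cpu_activate()/
sched_cpu_deactivate() and need_pull_rt_task() refer to the real
kernel functions named in this commit.

#include <stdbool.h>
#include <stdio.h>

struct toy_rq {
	bool online;
	int  highest_prio;	/* lower value means higher RT priority */
};

static void toy_set_rq_online(struct toy_rq *rq)  { rq->online = true;  }
static void toy_set_rq_offline(struct toy_rq *rq) { rq->online = false; }

/* Mirrors set_rq_online() being called from sched_cpu_activate(). */
static void toy_cpu_activate(struct toy_rq *rq)
{
	toy_set_rq_online(rq);
}

/*
 * Mirrors set_rq_offline() moving into sched_cpu_deactivate(): the rq is
 * marked offline before tasks are migrated away, so anything that runs
 * later (unthrottle, RT pull) sees online == false.
 */
static void toy_cpu_deactivate(struct toy_rq *rq)
{
	toy_set_rq_offline(rq);
	/* migration and the rest of the hotplug machinery would follow */
}

/* Same shape as the patched need_pull_rt_task(): never pull onto an offline rq. */
static bool toy_need_pull_rt_task(const struct toy_rq *rq, int prev_prio)
{
	return rq->online && rq->highest_prio > prev_prio;
}

int main(void)
{
	struct toy_rq rq = { .online = false, .highest_prio = 90 };

	toy_cpu_activate(&rq);
	printf("active CPU, pull? %d\n", toy_need_pull_rt_task(&rq, 50));	/* 1 */

	toy_cpu_deactivate(&rq);
	printf("dying CPU, pull? %d\n", toy_need_pull_rt_task(&rq, 50));	/* 0 */
	return 0;
}
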
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Link: https://lkml.kernel.org/r/20201023102346.639538965@infradead.org
Diffstat (limited to 'kernel/sched/rt.c')
-rw-r--r--	kernel/sched/rt.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 49ec096a8aa1..40a46639f78a 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -265,7 +265,7 @@ static void pull_rt_task(struct rq *this_rq);
 static inline bool need_pull_rt_task(struct rq *rq, struct task_struct *prev)
 {
 	/* Try to pull RT tasks here if we lower this rq's prio */
-	return rq->rt.highest_prio.curr > prev->prio;
+	return rq->online && rq->rt.highest_prio.curr > prev->prio;
 }
 
 static inline int rt_overloaded(struct rq *rq)
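
The rq->online check added above makes need_pull_rt_task() refuse to
pull RT tasks onto a runqueue that has already been marked offline;
with set_rq_offline() now running from sched_cpu_deactivate(), a CPU on
its way down stops trying to pull work onto itself.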