author		Tejun Heo <tj@kernel.org>	2024-10-11 00:41:44 +0300
committer	Tejun Heo <tj@kernel.org>	2024-10-11 00:41:44 +0300
commit		3fdb9ebcec10a91e7825b95840c5a627dabcbca7 (patch)
tree		496d5031d95122310ae4ca61c848fa731c991c19 /kernel
parent		54baa7ac0cebe53a03ba3083905021f92d2420db (diff)
download	linux-3fdb9ebcec10a91e7825b95840c5a627dabcbca7.tar.xz
sched_ext: Start schedulers with consistent p->scx.slice values
The disable path caps p->scx.slice to SCX_SLICE_DFL. As the field is already
being ignored at this stage during disable, the only effect this has is that
when the next BPF scheduler is loaded, it won't see unreasonable left-over
slices. Ultimately, this shouldn't matter, but it's better to start in a
known state. Drop p->scx.slice capping from the disable path and instead
reset it to SCX_SLICE_DFL in the enable path.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: David Vernet <void@manifault.com>
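
As a rough illustration of the behavioral difference described in the message above, here is a minimal standalone C sketch (not kernel code; the 20ms value for SCX_SLICE_DFL and the sample left-over slice are assumptions for illustration) contrasting the old disable-path capping with the new enable-path reset:

/*
 * Minimal standalone sketch (not kernel code) contrasting the two behaviors.
 * SCX_SLICE_DFL is assumed to be 20ms in nanoseconds here; the left-over
 * slice value is made up for illustration.
 */
#include <stdio.h>
#include <stdint.h>

#define SCX_SLICE_DFL	(20ULL * 1000 * 1000)	/* assumed default slice: 20ms in ns */

static uint64_t min_u64(uint64_t a, uint64_t b)
{
	return a < b ? a : b;
}

int main(void)
{
	/* hypothetical slice left behind by the previously loaded scheduler */
	uint64_t leftover = 3ULL * 1000 * 1000;	/* 3ms */

	/* old behavior: the disable path merely capped the left-over slice */
	uint64_t old_start = min_u64(leftover, SCX_SLICE_DFL);

	/* new behavior: the enable path resets the slice unconditionally */
	uint64_t new_start = SCX_SLICE_DFL;

	printf("old start slice: %llu ns (depends on left-over value)\n",
	       (unsigned long long)old_start);
	printf("new start slice: %llu ns (always SCX_SLICE_DFL)\n",
	       (unsigned long long)new_start);
	return 0;
}

In either case the newly loaded BPF scheduler is free to set its own slice values once it starts dispatching tasks; the point of the change is simply that every task begins from the same known default rather than from whatever the previous scheduler left behind.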
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/sched/ext.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 2cb304b37014..4e56230e6e4a 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -4473,7 +4473,6 @@ static void scx_ops_disable_workfn(struct kthread_work *work)
 
 		sched_deq_and_put_task(p, DEQUEUE_SAVE | DEQUEUE_MOVE, &ctx);
 
-		p->scx.slice = min_t(u64, p->scx.slice, SCX_SLICE_DFL);
 		__setscheduler_prio(p, p->prio);
 		check_class_changing(task_rq(p), p, old_class);
 
@@ -5190,6 +5189,7 @@ static int scx_ops_enable(struct sched_ext_ops *ops, struct bpf_link *link)
 
 		sched_deq_and_put_task(p, DEQUEUE_SAVE | DEQUEUE_MOVE, &ctx);
 
+		p->scx.slice = SCX_SLICE_DFL;
 		__setscheduler_prio(p, p->prio);
 		check_class_changing(task_rq(p), p, old_class);