| author | Andrea Righi <arighi@nvidia.com> | 2026-02-18 11:32:15 +0300 |
|---|---|---|
| committer | Tejun Heo <tj@kernel.org> | 2026-02-23 23:00:53 +0300 |
| commit | b75aaea24c9fc776e5bd14df38147270a3c00450 | |
| tree | 6330d3c839a5229bd7defabe1af1d46787ec8980 | |
| parent | 6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f | |
| download | linux-b75aaea24c9fc776e5bd14df38147270a3c00450.tar.xz | |
sched_ext: Properly mark SCX-internal migrations via sticky_cpu
Reposition the setting and clearing of sticky_cpu to better define the
scope of SCX-internal migrations.
This ensures @sticky_cpu is set for the entire duration of an internal
migration (from dequeue through enqueue), making it a reliable indicator
that an SCX-internal migration is in progress. The dequeue and enqueue
paths can then use @sticky_cpu to identify internal migrations and skip
BPF scheduler notifications accordingly.
This prepares for a later commit fixing the ops.dequeue() semantics.
No functional change intended.
Signed-off-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
    kernel/sched/ext.c | 13 ++++++++-----
    1 file changed, 8 insertions(+), 5 deletions(-)
```diff
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 62b1f3ac5630..87397781c1bf 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -1476,9 +1476,6 @@ static void enqueue_task_scx(struct rq *rq, struct task_struct *p, int enq_flags
 	enq_flags |= rq->scx.extra_enq_flags;
 
-	if (sticky_cpu >= 0)
-		p->scx.sticky_cpu = -1;
-
 	/*
 	 * Restoring a running task will be immediately followed by
 	 * set_next_task_scx() which expects the task to not be on the BPF
@@ -1509,6 +1506,9 @@ static void enqueue_task_scx(struct rq *rq, struct task_struct *p, int enq_flags
 		dl_server_start(&rq->ext_server);
 
 	do_enqueue_task(rq, p, enq_flags, sticky_cpu);
+
+	if (sticky_cpu >= 0)
+		p->scx.sticky_cpu = -1;
 out:
 	rq->scx.flags &= ~SCX_RQ_IN_WAKEUP;
@@ -1670,10 +1670,13 @@ static void move_remote_task_to_local_dsq(struct task_struct *p, u64 enq_flags,
 {
 	lockdep_assert_rq_held(src_rq);
 
-	/* the following marks @p MIGRATING which excludes dequeue */
+	/*
+	 * Set sticky_cpu before deactivate_task() to properly mark the
+	 * beginning of an SCX-internal migration.
+	 */
+	p->scx.sticky_cpu = cpu_of(dst_rq);
 	deactivate_task(src_rq, p, 0);
 	set_task_cpu(p, cpu_of(dst_rq));
-	p->scx.sticky_cpu = cpu_of(dst_rq);
 
 	raw_spin_rq_unlock(src_rq);
 	raw_spin_rq_lock(dst_rq);
```
