author    | Peter Zijlstra <peterz@infradead.org> | 2020-09-25 16:45:11 +0300
committer | Peter Zijlstra <peterz@infradead.org> | 2020-11-10 20:38:58 +0300
commit    | 06249738a41a70f2201a148866899f84cbebc45e (patch)
tree      | 984a912bd2359989bfc427489f0a633b5b8fa579 /kernel/workqueue.c
parent    | f2469a1fb43f85d243ce72638367fb6e15c33491 (diff)
download  | linux-06249738a41a70f2201a148866899f84cbebc45e.tar.xz
workqueue: Manually break affinity on hotplug
Don't rely on the scheduler to force break affinity for us -- it will
stop doing that for per-cpu-kthreads.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Acked-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Link: https://lkml.kernel.org/r/20201023102346.464718669@infradead.org
Diffstat (limited to 'kernel/workqueue.c')
-rw-r--r-- | kernel/workqueue.c | 4
1 file changed, 4 insertions, 0 deletions
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 437935e7a199..c71da2a59e12 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4908,6 +4908,10 @@ static void unbind_workers(int cpu)
 		pool->flags |= POOL_DISASSOCIATED;
 
 		raw_spin_unlock_irq(&pool->lock);
+
+		for_each_pool_worker(worker, pool)
+			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_active_mask) < 0);
+
 		mutex_unlock(&wq_pool_attach_mutex);
 
 		/*
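For context, here is a minimal sketch of the same pattern applied by a hypothetical driver that keeps its own per-CPU kthreads. It is an illustration only, not code from this patch: `my_percpu_thread`, `my_cpu_offline()` and the "mydrv:offline" state name are made up, while set_cpus_allowed_ptr(), cpu_active_mask and cpuhp_setup_state_nocalls() are the kernel APIs the hunk above relies on.

```c
/*
 * Illustrative sketch only (not part of this patch): a hypothetical driver
 * that runs one kthread per CPU rebinds those threads itself when a CPU is
 * unplugged, mirroring what the hunk above does for workqueue pool workers.
 */
#include <linux/bug.h>
#include <linux/cpuhotplug.h>
#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/sched.h>

/* One kthread per CPU, created elsewhere (e.g. via kthread_create_on_cpu()). */
static struct task_struct *my_percpu_thread[NR_CPUS];

static int my_cpu_offline(unsigned int cpu)
{
	struct task_struct *tsk = my_percpu_thread[cpu];

	/*
	 * The scheduler will no longer break affinity for per-cpu kthreads,
	 * so push the thread onto any still-active CPU ourselves.
	 */
	if (tsk)
		WARN_ON_ONCE(set_cpus_allowed_ptr(tsk, cpu_active_mask) < 0);

	return 0;
}

static int __init mydrv_init(void)
{
	int ret;

	/* Run my_cpu_offline() as the teardown step when a CPU goes down. */
	ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "mydrv:offline",
					NULL, my_cpu_offline);
	return ret < 0 ? ret : 0;
}
module_init(mydrv_init);
MODULE_LICENSE("GPL");
```

As in the hunk above, the sketch only warns if set_cpus_allowed_ptr() fails rather than trying to recover: the rebind is expected to succeed whenever at least one CPU remains active.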