author    | Peter Zijlstra <peterz@infradead.org>           | 2015-06-24 05:03:45 +0300
committer | Paul E. McKenney <paulmck@linux.vnet.ibm.com>   | 2015-07-18 00:58:45 +0300
commit    | c190c3b16c0f56ff338df12df53c03859155951b
tree      | c8fde7650b167ccd0bfa9a159d55cf47d1540e85
parent    | 75c27f119b6475d95374bdad872c6938b5c26196
rcu: Switch synchronize_sched_expedited() to stop_one_cpu()
synchronize_sched_expedited() currently invokes try_stop_cpus(),
which schedules the stopper kthreads on each online non-idle CPU,
and waits until all those kthreads are running before letting any
of them stop. This is disastrous for real-time workloads, which
get hit with a preemption that is as long as the longest scheduling
latency on any CPU, including any non-realtime housekeeping CPUs.
This commit therefore switches to using stop_one_cpu() on each CPU
in turn. This avoids inflicting the worst-case scheduling latency
on the worst-case CPU onto all other CPUs, and also simplifies the
code a little bit.
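The latency argument above can be illustrated with a toy model (plain Python, not kernel code; the per-CPU latency numbers are invented for illustration only):

```python
# Toy model (not kernel code): how long each CPU is held by the stopper
# under the two strategies described above. latency_ms holds made-up
# delays before each CPU's stopper kthread starts running; the last CPU
# stands in for a slow non-realtime housekeeping CPU.
latency_ms = [1, 2, 1, 50]

# try_stop_cpus(): no stopper may proceed until all stoppers are
# running, so every CPU is held for the worst latency on any CPU.
gang_stop = [max(latency_ms)] * len(latency_ms)

# stop_one_cpu() on each CPU in turn: a CPU is held only for its own
# stopper's latency; the slow CPU no longer penalizes the others.
one_at_a_time = list(latency_ms)

print(gang_stop)      # -> [50, 50, 50, 50]
print(one_at_a_time)  # -> [1, 2, 1, 50]
```

This is only a model of the scheduling-latency effect, not of the grace-period machinery itself.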
Follow-up commits will simplify the counter-snapshotting algorithm
and convert a number of the counters that are now protected by the
new ->expedited_mutex to non-atomic.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
[ paulmck: Kept stop_one_cpu(), dropped disabling of "guardrails". ]
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>