author | Frederic Weisbecker <frederic@kernel.org> | 2022-06-08 17:40:32 +0300
committer | Paul E. McKenney <paulmck@kernel.org> | 2022-07-05 23:32:59 +0300
commit | 564506495ca96a6e66d077d3d5b9f02d4b9b0f45 (patch)
tree | 3efd8622bf23512e649d8e154f82b77109091502 /kernel/rcu
parent | 95e04f48ec0a634e2f221081f5fa1a904755f326 (diff)
download | linux-564506495ca96a6e66d077d3d5b9f02d4b9b0f45.tar.xz
rcu/context-tracking: Move deferred nocb resched to context tracking
To prepare for migrating the RCU eqs accounting code to context tracking,
split the last-resort deferred nocb resched out of rcu_user_enter() and
move it into a separate call performed by context tracking.
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Cc: Uladzislau Rezki <uladzislau.rezki@sony.com>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Nicolas Saenz Julienne <nsaenz@kernel.org>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Cc: Yu Liao <liaoyu15@huawei.com>
Cc: Phil Auld <pauld@redhat.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Alex Belits <abelits@marvell.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Tested-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Diffstat (limited to 'kernel/rcu')
-rw-r--r-- | kernel/rcu/tree.c | 15
1 file changed, 2 insertions(+), 13 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 006939b29e82..8c0c3490532e 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -681,7 +681,7 @@ static DEFINE_PER_CPU(struct irq_work, late_wakeup_work) =
  * last resort is to fire a local irq_work that will trigger a reschedule once IRQs
  * get re-enabled again.
  */
-noinstr static void rcu_irq_work_resched(void)
+noinstr void rcu_irq_work_resched(void)
 {
 	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
 
@@ -697,10 +697,7 @@ noinstr static void rcu_irq_work_resched(void)
 	}
 	instrumentation_end();
 }
-
-#else
-static inline void rcu_irq_work_resched(void) { }
-#endif
+#endif /* #if !defined(CONFIG_GENERIC_ENTRY) || !defined(CONFIG_KVM_XFER_TO_GUEST_WORK) */
 
 /**
  * rcu_user_enter - inform RCU that we are resuming userspace.
@@ -715,14 +712,6 @@ static inline void rcu_irq_work_resched(void) { }
  */
 noinstr void rcu_user_enter(void)
 {
-	lockdep_assert_irqs_disabled();
-
-	/*
-	 * Other than generic entry implementation, we may be past the last
-	 * rescheduling opportunity in the entry code. Trigger a self IPI
-	 * that will fire and reschedule once we resume in user/guest mode.
-	 */
-	rcu_irq_work_resched();
 	rcu_eqs_enter(true);
 }
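
The diffstat above is limited to kernel/rcu, so the companion change on the context-tracking side is not shown here. As a rough sketch only, assuming the user/guest entry path in kernel/context_tracking.c (named __ct_user_enter() here for illustration) now arms the last-resort self-IPI itself just before entering the RCU extended quiescent state, the caller side would look something like this:

/*
 * Illustrative sketch, not the actual kernel/context_tracking.c hunk (which
 * falls outside the kernel/rcu diffstat above). The function name and the
 * elided surroundings are assumptions; rcu_irq_work_resched() and
 * rcu_user_enter() are the real calls visible in the diff.
 */
static void __ct_user_enter(enum ctx_state state)
{
	/* ... tracing and vtime accounting elided ... */

	/*
	 * We may already be past the last rescheduling opportunity in the
	 * entry code, so fire the last-resort irq_work here: it forces a
	 * reschedule once IRQs are re-enabled in user/guest mode, since
	 * rcu_user_enter() no longer does this on its own.
	 */
	rcu_irq_work_resched();
	rcu_user_enter();

	/* ... context tracking state update elided ... */
}

The ordering is the point of the patch: the irq_work must be queued before rcu_user_enter() puts the CPU into an extended quiescent state, just as the code removed from rcu_user_enter() used to guarantee.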