| author | Paul E. McKenney <paulmck@kernel.org> | 2025-01-10 00:19:42 +0300 |
|---|---|---|
| committer | Boqun Feng <boqun.feng@gmail.com> | 2025-02-05 18:12:05 +0300 |
| commit | c4020620528e4e22a051900654a70dcff0ab218d | |
| tree | 2ed55f6aa4118d7282a65113aa5282c37ef6315c /include/linux/srcutree.h | |
| parent | 443971156cebfc54a5f5c37a5702c13a2bb06755 | |
| download | linux-c4020620528e4e22a051900654a70dcff0ab218d.tar.xz | |
srcu: Add SRCU-fast readers
This commit adds srcu_read_{,un}lock_fast(), which is similar
to srcu_read_{,un}lock_lite(), but avoids the array-indexing and
pointer-following overhead. On a microbenchmark featuring tight
loops around empty readers, this results in about a 20% speedup
compared to RCU Tasks Trace on my x86 laptop.
Please note that SRCU-fast has drawbacks compared to RCU Tasks
Trace, including:
o Lack of CPU stall warnings.
o SRCU-fast readers permitted only where rcu_is_watching().
o A pointer-sized return value from srcu_read_lock_fast() must
be passed to the corresponding srcu_read_unlock_fast().
o In the absence of readers, a synchronize_srcu() having _fast()
readers will incur the latency of at least two normal RCU grace
periods.
o RCU Tasks Trace priority boosting could be easily added.
Boosting SRCU readers is more difficult.
SRCU-fast also has a drawback compared to SRCU-lite, namely that the
return value from srcu_read_lock_fast() is a 64-bit pointer, while
that from srcu_read_lock_lite() is only a 32-bit int.
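To make the calling convention concrete, here is a minimal reader-side
sketch. It assumes the srcu_read_lock_fast()/srcu_read_unlock_fast()
wrappers named in the first paragraph; my_srcu and my_reader are
illustrative names, not part of this patch:

```c
#include <linux/srcu.h>

DEFINE_SRCU(my_srcu);			/* Illustrative srcu_struct. */

static void my_reader(void)
{
	struct srcu_ctr __percpu *scp;

	/* The _fast() cookie is a pointer, unlike _lite()'s int index. */
	scp = srcu_read_lock_fast(&my_srcu);
	/* ... read-side critical section: access SRCU-protected data ... */
	srcu_read_unlock_fast(&my_srcu, scp);	/* Pass back the same cookie. */
}
```

Handing the pointer back directly is what lets the unlock path skip the
array indexing and pointer following mentioned above.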
[ paulmck: Apply feedback from Akira Yokosawa. ]
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: <bpf@vger.kernel.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Diffstat (limited to 'include/linux/srcutree.h')
| -rw-r--r-- | include/linux/srcutree.h | 38 |
1 file changed, 38 insertions, 0 deletions
```diff
diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index ef3065c0cadc..bdc467efce3a 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -228,6 +228,44 @@ static inline struct srcu_ctr __percpu *__srcu_ctr_to_ptr(struct srcu_struct *ss
 
 /*
  * Counts the new reader in the appropriate per-CPU element of the
+ * srcu_struct. Returns a pointer that must be passed to the matching
+ * srcu_read_unlock_fast().
+ *
+ * Note that this_cpu_inc() is an RCU read-side critical section either
+ * because it disables interrupts, because it is a single instruction,
+ * or because it is a read-modify-write atomic operation, depending on
+ * the whims of the architecture.
+ */
+static inline struct srcu_ctr __percpu *__srcu_read_lock_fast(struct srcu_struct *ssp)
+{
+	struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp);
+
+	RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_read_lock_fast().");
+	this_cpu_inc(scp->srcu_locks.counter); /* Y */
+	barrier(); /* Avoid leaking the critical section. */
+	return scp;
+}
+
+/*
+ * Removes the count for the old reader from the appropriate
+ * per-CPU element of the srcu_struct. Note that this may well be a
+ * different CPU than that which was incremented by the corresponding
+ * srcu_read_lock_fast(), but it must be within the same task.
+ *
+ * Note that this_cpu_inc() is an RCU read-side critical section either
+ * because it disables interrupts, because it is a single instruction,
+ * or because it is a read-modify-write atomic operation, depending on
+ * the whims of the architecture.
+ */
+static inline void __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
+{
+	barrier(); /* Avoid leaking the critical section. */
+	this_cpu_inc(scp->srcu_unlocks.counter); /* Z */
+	RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_read_unlock_fast().");
+}
+
+/*
+ * Counts the new reader in the appropriate per-CPU element of the
  * srcu_struct. Returns an index that must be passed to the matching
  * srcu_read_unlock_lite().
  *
```
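To relate the /* Y */ and /* Z */ increments above to grace-period
detection, here is a deliberately simplified sketch of a counter check.
It is not the kernel's actual code in kernel/rcu/srcutree.c (which also
handles counter wrap and uses stronger ordering), it assumes the
srcu_locks/srcu_unlocks fields are atomic_long_t as the ->counter
accesses in the patch suggest, and srcu_fast_readers_drained() is a
hypothetical helper name:

```c
/* Simplified illustration only, not the in-tree grace-period logic. */
static bool srcu_fast_readers_drained(struct srcu_ctr __percpu *scp)
{
	unsigned long locks = 0, unlocks = 0;
	int cpu;

	for_each_possible_cpu(cpu) {
		struct srcu_ctr *c = per_cpu_ptr(scp, cpu);

		unlocks += atomic_long_read(&c->srcu_unlocks);	/* Sums of Z. */
	}
	smp_mb();	/* Read unlocks before locks to avoid missing a reader. */
	for_each_possible_cpu(cpu) {
		struct srcu_ctr *c = per_cpu_ptr(scp, cpu);

		locks += atomic_long_read(&c->srcu_locks);	/* Sums of Y. */
	}
	return locks == unlocks;	/* Equal sums: no reader still holds scp. */
}
```

Only when the lock and unlock sums match for the counter pair being
drained can the grace period treat all pre-existing _fast() readers as
complete.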
