Diffstat (limited to 'Documentation/locking/locktypes.rst')

 Documentation/locking/locktypes.rst | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+), 0 deletions(-)
diff --git a/Documentation/locking/locktypes.rst b/Documentation/locking/locktypes.rst
index 80c914f6eae7..37b6a5670c2f 100644
--- a/Documentation/locking/locktypes.rst
+++ b/Documentation/locking/locktypes.rst
@@ -204,6 +204,27 @@ per-CPU data structures on a non PREEMPT_RT kernel.
 
 local_lock is not suitable to protect against preemption or interrupts on a
 PREEMPT_RT kernel due to the PREEMPT_RT specific spinlock_t semantics.
 
+CPU local scope and bottom-half
+-------------------------------
+
+Per-CPU variables that are accessed only in softirq context should not rely
+on the assumption that this context is implicitly protected because it is
+non-preemptible. On a PREEMPT_RT kernel, softirq context is preemptible, and
+synchronizing every bottom-half-disabled section via the implicit context
+results in an implicit per-CPU "big kernel lock".
+
+A local_lock_t together with local_lock_nested_bh() and
+local_unlock_nested_bh() as the locking operations helps to identify the
+locking scope.
+
+When lockdep is enabled, these functions verify that the data structure is
+accessed in softirq context.
+Unlike local_lock(), local_lock_nested_bh() does not disable preemption and
+adds no overhead when used without lockdep.
+
+On a PREEMPT_RT kernel, local_lock_t behaves as a real lock and
+local_lock_nested_bh() serializes access to the data structure, which allows
+removal of the serialization via local_bh_disable().
 
 raw_spinlock_t and spinlock_t
 =============================
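The pattern the added documentation describes might be sketched as follows. This is an illustrative, non-compilable kernel-style fragment, not code from the patch: the `struct bh_stats`, `bh_stats` per-CPU variable, and `bh_stats_inc()` names are hypothetical; only `local_lock_t`, `INIT_LOCAL_LOCK()`, `DEFINE_PER_CPU()`, and `local_lock_nested_bh()`/`local_unlock_nested_bh()` are real kernel APIs.

```c
/* Illustrative sketch only -- names below are hypothetical, not from the patch. */
#include <linux/local_lock.h>
#include <linux/percpu.h>

struct bh_stats {
	local_lock_t	lock;	/* documents the locking scope */
	u64		packets;
};

static DEFINE_PER_CPU(struct bh_stats, bh_stats) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

/*
 * Caller already runs in softirq context (e.g. NAPI poll), so bottom
 * halves are disabled. On a non-PREEMPT_RT kernel with lockdep off the
 * lock operations compile away; with lockdep they verify softirq
 * context; on PREEMPT_RT the local_lock_t is a real lock and provides
 * the serialization instead of local_bh_disable().
 */
static void bh_stats_inc(void)
{
	local_lock_nested_bh(&bh_stats.lock);
	this_cpu_ptr(&bh_stats)->packets++;
	local_unlock_nested_bh(&bh_stats.lock);
}
```

The point of scoping the lock to the data structure, rather than relying on the bottom-half-disabled section, is that each per-CPU structure gets its own clearly identified critical section instead of sharing one implicit per-CPU lock.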