Diffstat (limited to 'Documentation/RCU')
-rw-r--r--   Documentation/RCU/listRCU.rst    | 10
-rw-r--r--   Documentation/RCU/rcubarrier.rst |  5
-rw-r--r--   Documentation/RCU/stallwarn.rst  |  7
-rw-r--r--   Documentation/RCU/whatisRCU.rst  | 13
4 files changed, 27 insertions(+), 8 deletions(-)
diff --git a/Documentation/RCU/listRCU.rst b/Documentation/RCU/listRCU.rst
index ed5c9d8c9afe..d8bb98623c12 100644
--- a/Documentation/RCU/listRCU.rst
+++ b/Documentation/RCU/listRCU.rst
@@ -334,7 +334,7 @@ If the system-call audit module were to ever need to reject stale data, one way
 to accomplish this would be to add a ``deleted`` flag and a ``lock`` spinlock to the
 ``audit_entry`` structure, and modify audit_filter_task() as follows::
 
-	static enum audit_state audit_filter_task(struct task_struct *tsk)
+	static struct audit_entry *audit_filter_task(struct task_struct *tsk, char **key)
 	{
 		struct audit_entry *e;
 		enum audit_state   state;
@@ -346,16 +346,18 @@ to accomplish this would be to add a ``deleted`` flag and a ``lock`` spinlock to
 				if (e->deleted) {
 					spin_unlock(&e->lock);
 					rcu_read_unlock();
-					return AUDIT_BUILD_CONTEXT;
+					return NULL;
 				}
 				rcu_read_unlock();
 				if (state == AUDIT_STATE_RECORD)
 					*key = kstrdup(e->rule.filterkey, GFP_ATOMIC);
-				return state;
+				/* As long as e->lock is held, e is valid and
+				 * its value is not stale */
+				return e;
 			}
 		}
 		rcu_read_unlock();
-		return AUDIT_BUILD_CONTEXT;
+		return NULL;
 	}
 
 The ``audit_del_rule()`` function would need to set the ``deleted`` flag under the
diff --git a/Documentation/RCU/rcubarrier.rst b/Documentation/RCU/rcubarrier.rst
index 6da7f66da2a8..12a7b059654f 100644
--- a/Documentation/RCU/rcubarrier.rst
+++ b/Documentation/RCU/rcubarrier.rst
@@ -329,10 +329,7 @@ Answer:
 	was first added back in 2005.  This is because on_each_cpu()
 	disables preemption, which acted as an RCU read-side critical
 	section, thus preventing CPU 0's grace period from completing
-	until on_each_cpu() had dealt with all of the CPUs.  However,
-	with the advent of preemptible RCU, rcu_barrier() no longer
-	waited on nonpreemptible regions of code in preemptible kernels,
-	that being the job of the new rcu_barrier_sched() function.
+	until on_each_cpu() had dealt with all of the CPUs.
 
 	However, with the RCU flavor consolidation around v4.20, this
 	possibility was once again ruled out, because the consolidated
diff --git a/Documentation/RCU/stallwarn.rst b/Documentation/RCU/stallwarn.rst
index 30080ff6f406..d1ccd6039a8c 100644
--- a/Documentation/RCU/stallwarn.rst
+++ b/Documentation/RCU/stallwarn.rst
@@ -96,6 +96,13 @@ warnings:
 	the ``rcu_.*timer wakeup didn't happen for`` console-log message,
 	which will include additional debugging information.
 
+-	A timer issue causes time to appear to jump forward, so that RCU
+	believes that the RCU CPU stall-warning timeout has been exceeded
+	when in fact much less time has passed.  This could be due to
+	timer hardware bugs, timer driver bugs, or even corruption of
+	the "jiffies" global variable.  These sorts of timer hardware
+	and driver bugs are not uncommon when testing new hardware.
+
 -	A low-level kernel issue that either fails to invoke one of the
 	variants of rcu_eqs_enter(true), rcu_eqs_exit(true), ct_idle_enter(),
 	ct_idle_exit(), ct_irq_enter(), or ct_irq_exit() on the one
diff --git a/Documentation/RCU/whatisRCU.rst b/Documentation/RCU/whatisRCU.rst
index 1ef5784c1b84..be2eb6be16ec 100644
--- a/Documentation/RCU/whatisRCU.rst
+++ b/Documentation/RCU/whatisRCU.rst
@@ -15,6 +15,9 @@ to start learning about RCU:
 |	2014 Big API Table		https://lwn.net/Articles/609973/
 | 6.	The RCU API, 2019 Edition	https://lwn.net/Articles/777036/
 |	2019 Big API Table		https://lwn.net/Articles/777165/
+| 7.	The RCU API, 2024 Edition	https://lwn.net/Articles/988638/
+|	2024 Background Information	https://lwn.net/Articles/988641/
+|	2024 Big API Table		https://lwn.net/Articles/988666/
 
 For those preferring video:
 
@@ -971,6 +974,16 @@ unfortunately any spinlock in a ``SLAB_TYPESAFE_BY_RCU`` object must be
 initialized after each and every call to kmem_cache_alloc(), which renders
 reference-free spinlock acquisition completely unsafe.  Therefore, when
 using ``SLAB_TYPESAFE_BY_RCU``, make proper use of a reference counter.
+If using refcount_t, the specialized refcount_{add|inc}_not_zero_acquire()
+and refcount_set_release() APIs should be used to ensure correct operation
+ordering when verifying object identity and when initializing newly
+allocated objects.  Acquire fence in refcount_{add|inc}_not_zero_acquire()
+ensures that identity checks happen *after* reference count is taken.
+refcount_set_release() should be called after a newly allocated object is
+fully initialized and release fence ensures that new values are visible
+*before* refcount can be successfully taken by other users.  Once
+refcount_set_release() is called, the object should be considered visible
+by other tasks.
 (Those willing to initialize their locks in a kmem_cache constructor
 may also use locking, including cache-friendly sequence locking.)
 
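As a companion to the listRCU.rst change above, a caller of the modified
audit_filter_task() might look roughly like the following sketch.  The
do_something_with() consumer and the surrounding variables are purely
illustrative; the only behavior taken from the example itself is that a
non-NULL return comes back with ``e->lock`` still held, so the caller must
drop that lock once it is done with the entry::

	/* Hypothetical caller; do_something_with() is illustrative only. */
	char *key = NULL;
	struct audit_entry *e;

	e = audit_filter_task(tsk, &key);
	if (e) {
		/* e->lock is still held, so e cannot be freed or marked
		 * deleted while its fields are consulted here. */
		do_something_with(e, key);
		spin_unlock(&e->lock);
	}
	kfree(key);	/* kfree(NULL) is a no-op if no key was recorded */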
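The truncated context at the end of that hunk refers to audit_del_rule()
setting the ``deleted`` flag under the spinlock.  A minimal sketch of that
deletion path, assuming the audit_compare_rule() and audit_free_rule()
helpers used elsewhere in listRCU.rst and assuming the caller already holds
whatever lock serializes updates to the list, could look like this::

	static inline int audit_del_rule(struct audit_rule *rule,
					 struct list_head *list)
	{
		struct audit_entry *e;

		/* This is the only deletion routine, so no _rcu iterator. */
		list_for_each_entry(e, list, list) {
			if (!audit_compare_rule(rule, &e->rule)) {
				/* Mark the entry stale under its lock before
				 * unlinking it, so a reader holding e->lock
				 * never treats a deleted entry as valid. */
				spin_lock(&e->lock);
				e->deleted = 1;
				spin_unlock(&e->lock);
				list_del_rcu(&e->list);
				call_rcu(&e->rcu, audit_free_rule);
				return 0;
			}
		}
		return -EFAULT;		/* No matching rule */
	}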
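The whatisRCU.rst addition about ``SLAB_TYPESAFE_BY_RCU`` and refcount_t is
easier to picture with a sketch.  Only the refcount_set_release() and
refcount_inc_not_zero_acquire() calls come from the text above; struct foo,
foo_cache, foo_hash(), foo_put(), and the single ``key`` field used as the
identity check are assumptions made up for illustration::

	struct foo {
		refcount_t ref;
		int key;
		struct hlist_node node;
	};

	/* Allocation side: publish only after full initialization. */
	static struct foo *foo_alloc(int key)
	{
		struct foo *p = kmem_cache_alloc(foo_cache, GFP_KERNEL);

		if (!p)
			return NULL;
		p->key = key;
		/* Release fence: the initialization above is visible before
		 * any reader can successfully take a reference. */
		refcount_set_release(&p->ref, 1);
		hlist_add_head_rcu(&p->node, foo_hash(key));
		return p;
	}

	/* Lookup side: take the reference first, then recheck identity,
	 * because a SLAB_TYPESAFE_BY_RCU object may have been freed and
	 * reallocated for some other key in the meantime. */
	static struct foo *foo_lookup(int key)
	{
		struct foo *p;

		rcu_read_lock();
		hlist_for_each_entry_rcu(p, foo_hash(key), node) {
			if (!refcount_inc_not_zero_acquire(&p->ref))
				continue;
			/* Acquire fence: this identity check cannot be
			 * reordered before the reference acquisition. */
			if (p->key == key) {
				rcu_read_unlock();
				return p;
			}
			foo_put(p);	/* drop the reference just taken */
		}
		rcu_read_unlock();
		return NULL;
	}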