author      Peter Zijlstra <peterz@infradead.org>   2015-10-23 15:32:34 +0300
committer   Ingo Molnar <mingo@kernel.org>          2015-11-23 11:37:52 +0300
commit      69e51e92a394088fc3266ed5136903074b44f3c4 (patch)
tree        aa82af638e05ee38ad4a3c0b8c5c36ae2f8365a5  /include/linux/wait.h
parent      38c6ade2dd4dcc3bca06c981e2a1b91289046177 (diff)
sched/wait: Document waitqueue_active()
Kosuke reports that there were a fair number of buggy
waitqueue_active() users, and this function deserves a big comment in
order to avoid growing more.
Reported-by: Kosuke Tatsukawa <tatsu@ab.jp.nec.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
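
For context, the bug class referenced above typically looks like the
sketch below. This is an illustrative reconstruction, not code from the
report; the names wq, done and buggy_waker() are made up. The waker
tests waitqueue_active() with no barrier after setting the condition,
so the wakeup can be lost:

#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(wq);
static bool done;

static void buggy_waker(void)
{
	done = true;
	/*
	 * BUG: no smp_mb() between the store to `done' and the
	 * waitqueue_active() load; the CPU may hoist the load above
	 * the store, see an empty wait list while the waiter still
	 * sees done == false, and skip the wakeup entirely.
	 */
	if (waitqueue_active(&wq))
		wake_up(&wq);
}

The comment added by this patch (below) documents the smp_mb() needed
to close exactly this race.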
Diffstat (limited to 'include/linux/wait.h')
-rw-r--r--  include/linux/wait.h | 30 ++++++++++++++++++++++++++++++
1 file changed, 30 insertions(+), 0 deletions(-)
diff --git a/include/linux/wait.h b/include/linux/wait.h
index 1e1bf9f963a9..f3bac30587f7 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -102,6 +102,36 @@ init_waitqueue_func_entry(wait_queue_t *q, wait_queue_func_t func)
 	q->func = func;
 }
 
+/**
+ * waitqueue_active -- locklessly test for waiters on the queue
+ * @q: the waitqueue to test for waiters
+ *
+ * returns true if the wait list is not empty
+ *
+ * NOTE: this function is lockless and requires care, incorrect usage _will_
+ * lead to sporadic and non-obvious failure.
+ *
+ * Use either while holding wait_queue_head_t::lock or when used for wakeups
+ * with an extra smp_mb() like:
+ *
+ *      CPU0 - waker                    CPU1 - waiter
+ *
+ *                                      for (;;) {
+ *      @cond = true;                     prepare_to_wait(&wq, &wait, state);
+ *      smp_mb();                         // smp_mb() from set_current_state()
+ *      if (waitqueue_active(wq))         if (@cond)
+ *        wake_up(wq);                      break;
+ *                                        schedule();
+ *                                      }
+ *                                      finish_wait(&wq, &wait);
+ *
+ * Because without the explicit smp_mb() it's possible for the
+ * waitqueue_active() load to get hoisted over the @cond store such that we'll
+ * observe an empty wait list while the waiter might not observe @cond.
+ *
+ * Also note that this 'optimization' trades a spin_lock() for an smp_mb(),
+ * which (when the lock is uncontended) are of roughly equal cost.
+ */
 static inline int waitqueue_active(wait_queue_head_t *q)
 {
 	return !list_empty(&q->task_list);
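
Spelled out as compilable kernel C, the correct pairing documented
above looks roughly like this. It is a minimal sketch, not code from
the patch: wq, cond, waker() and waiter() are illustrative names, and
TASK_UNINTERRUPTIBLE stands in for the `state' placeholder in the
comment's diagram:

#include <linux/wait.h>
#include <linux/sched.h>

static DECLARE_WAIT_QUEUE_HEAD(wq);
static bool cond;

static void waker(void)
{
	cond = true;
	smp_mb();	/* pairs with set_current_state() in prepare_to_wait() */
	if (waitqueue_active(&wq))
		wake_up(&wq);
}

static void waiter(void)
{
	DEFINE_WAIT(wait);

	for (;;) {
		/* queues us and sets the task state, with a barrier */
		prepare_to_wait(&wq, &wait, TASK_UNINTERRUPTIBLE);
		if (cond)
			break;
		schedule();
	}
	finish_wait(&wq, &wait);
}

The barrier ordering guarantees that either the waker observes the
waiter already on the list (and issues the wakeup), or the waiter's
re-check after prepare_to_wait() observes cond == true (and never
sleeps). As the new comment notes, the waitqueue_active() check merely
trades the spin_lock() inside wake_up() for an smp_mb(), which are of
roughly equal cost when the lock is uncontended.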