path: root/net
author    Peter Zijlstra <peterz@infradead.org>    2016-05-20 19:04:36 +0300
committer Linus Torvalds <torvalds@linux-foundation.org>    2016-05-21 05:30:32 +0300
commit    54cf809b9512be95f53ed4a5e3b631d1ac42f0fa (patch)
tree      eeeeb69b689c041b08741bd97ea23872020c48d3 /net
parent    b99a9e8776ca837344c6b64d518483fc5d5eefb4 (diff)
download  linux-54cf809b9512be95f53ed4a5e3b631d1ac42f0fa.tar.xz
locking,qspinlock: Fix spin_is_locked() and spin_unlock_wait()
Similar to commits:

  51d7d5205d33 ("powerpc: Add smp_mb() to arch_spin_is_locked()")
  d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")

qspinlock suffers from the fact that the _Q_LOCKED_VAL store is unordered inside the ACQUIRE of the lock.

And while this is not a problem for the regular mutually exclusive critical section usage of spinlocks, it breaks creative locking like:

  CPU0                      CPU1

  spin_lock(A)              spin_lock(B)
  spin_unlock_wait(B)       if (!spin_is_locked(A))
  do_something()              do_something()

In that both CPUs can end up running do_something() at the same time, because our _Q_LOCKED_VAL store can drop past the spin_unlock_wait() / spin_is_locked() loads (even on x86!!).

To avoid making the normal case slower, add smp_mb()s to the less used spin_unlock_wait() / spin_is_locked() side of things to avoid this problem.

Reported-and-tested-by: Davidlohr Bueso <dave@stgolabs.net>
Reported-by: Giovanni Gherdovich <ggherdovich@suse.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: stable@vger.kernel.org # v4.2 and later
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
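For illustration, here is a minimal C sketch of the shape of this fix, not the patch itself: it assumes the generic qspinlock layout (an atomic lock->val word and the _Q_LOCKED_MASK bit) and omits the pending/tail handling that the real include/asm-generic/qspinlock.h and kernel/locking/qspinlock.c code must deal with. The point is only where the full barrier sits relative to the loads:

  /*
   * Sketch only: simplified queued_spin_is_locked() /
   * queued_spin_unlock_wait() showing the added smp_mb().
   */
  static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
  {
          /*
           * Order our own prior _Q_LOCKED_VAL store from spin_lock()
           * against the lock->val load below; without a full barrier
           * that store can be reordered past the load, even on x86.
           */
          smp_mb();
          return atomic_read(&lock->val) & _Q_LOCKED_MASK;
  }

  static __always_inline void queued_spin_unlock_wait(struct qspinlock *lock)
  {
          smp_mb();       /* same ordering requirement as above */
          while (atomic_read(&lock->val) & _Q_LOCKED_MASK)
                  cpu_relax();
  }

With the barrier in place, in the creative-locking example above at least one CPU must observe the other CPU's lock as held, so both can no longer enter do_something() concurrently; the cost is confined to the rarely used spin_unlock_wait() / spin_is_locked() paths rather than the spin_lock() fast path.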
Diffstat (limited to 'net')
0 files changed, 0 insertions, 0 deletions