| author | Nicholas Piggin <npiggin@gmail.com> | 2020-07-24 16:14:22 +0300 |
|---|---|---|
| committer | Michael Ellerman <mpe@ellerman.id.au> | 2020-07-26 17:01:29 +0300 |
| commit | 2f6560e652dfdbdb59df28b45a3458bf36d3c580 | |
| tree | eb923dd64c883cf1cbd977111baf6a146ad55e23 /arch/powerpc/include/asm/qspinlock.h | |
| parent | 20c0e8269e9d515e677670902c7e1cc0209d6ad9 | |
| download | linux-2f6560e652dfdbdb59df28b45a3458bf36d3c580.tar.xz | |
powerpc/qspinlock: Optimised atomic_try_cmpxchg_lock() that adds the lock hint
This brings the behaviour of the uncontended fast path back to roughly
equivalent to simple spinlocks -- a single atomic op with lock hint.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Waiman Long <longman@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200724131423.1362108-6-npiggin@gmail.com
Diffstat (limited to 'arch/powerpc/include/asm/qspinlock.h')
| -rw-r--r-- | arch/powerpc/include/asm/qspinlock.h | 2 |
|---|---|---|

1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
index f5066f00a08c..b752d34517b3 100644
--- a/arch/powerpc/include/asm/qspinlock.h
+++ b/arch/powerpc/include/asm/qspinlock.h
@@ -37,7 +37,7 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)
 {
 	u32 val = 0;

-	if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL)))
+	if (likely(atomic_try_cmpxchg_lock(&lock->val, &val, _Q_LOCKED_VAL)))
 		return;
 	queued_spin_lock_slowpath(lock, val);
```