author    Kumar Kartikeya Dwivedi <memxor@gmail.com>  2026-01-22 14:59:11 +0300
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2026-02-27 02:00:47 +0300
commit    df570284cb3be52a70e54e89de62ad2d39d67299 (patch)
tree      bb3516c16e98103dd6d38e69ab7e13b211270308 /include/asm-generic
parent    074c1c58698bd53b6d600b382c5f91c72f5abcaf (diff)
download  linux-df570284cb3be52a70e54e89de62ad2d39d67299.tar.xz
rqspinlock: Fix TAS fallback lock entry creation
[ Upstream commit 82f3b142c99cf44c7b1e70b7720169c646b9760f ]

The TAS fallback can be invoked directly when queued spin locks are
disabled, and through the slow path when paravirt is enabled for queued
spin locks. In the latter case, the res_spin_lock macro will attempt the
fast path and already hold the entry when entering the slow path. This
will lead to creation of extraneous entries that are not released, which
may cause false positives for deadlock detection.

Fix this by always preceding invocation of the TAS fallback in every
case with the grabbing of the held lock entry, and add a comment to
make note of this.

Fixes: c9102a68c070 ("rqspinlock: Add a test-and-set fallback")
Reported-by: Amery Hung <ameryhung@gmail.com>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Tested-by: Amery Hung <ameryhung@gmail.com>
Link: https://lore.kernel.org/r/20260122115911.3668985-1-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Diffstat (limited to 'include/asm-generic')
-rw-r--r--  include/asm-generic/rqspinlock.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/asm-generic/rqspinlock.h b/include/asm-generic/rqspinlock.h
index 0f2dcbbfee2f..5c5cf2f7fc39 100644
--- a/include/asm-generic/rqspinlock.h
+++ b/include/asm-generic/rqspinlock.h
@@ -191,7 +191,7 @@ static __always_inline int res_spin_lock(rqspinlock_t *lock)
#else
-#define res_spin_lock(lock) resilient_tas_spin_lock(lock)
+#define res_spin_lock(lock) ({ grab_held_lock_entry(lock); resilient_tas_spin_lock(lock); })
#endif /* CONFIG_QUEUED_SPINLOCKS */