author		Peter Zijlstra <peterz@infradead.org>	2020-01-25 00:13:03 +0300
committer	Thomas Gleixner <tglx@linutronix.de>	2020-06-11 09:03:24 +0300
commit		37f8173dd84936ea78000ed1cad24f8b18d48ebb (patch)
tree		0b715066a7f5c16a71988e176627c46b61481b3c /scripts/atomic/fallbacks/add_unless
parent		765dcd209947e7b3666c08fb109ab8b879f7a471 (diff)
download	linux-37f8173dd84936ea78000ed1cad24f8b18d48ebb.tar.xz
locking/atomics: Flip fallbacks and instrumentation
Currently instrumentation of atomic primitives is done at the architecture level, while composites or fallbacks are provided at the generic level.

The result is that there are no uninstrumented variants of the fallbacks. Since there is now a need for such variants to isolate text poke from any form of instrumentation, invert this ordering.

Doing this means moving the instrumentation into the generic code as well as having (for now) two variants of the fallbacks.

Notes:

 - the various *cond_read* primitives are not proper fallbacks and got
   moved into linux/atomic.c. No arch_ variants are generated because
   the base primitives smp_cond_load*() are instrumented.

 - once all architectures are moved over to arch_atomic_, one of the
   fallback variants can be removed and some 2300 lines reclaimed.

 - atomic_{read,set}*() are no longer double-instrumented.

Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lkml.kernel.org/r/20200505134058.769149955@linutronix.de
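As a hedged illustration of the new layering (not part of this diff): after the flip, the generated fallbacks carry the arch_ prefix and stay uninstrumented, while a generic wrapper adds the instrumentation call on top. The instrument_atomic_write() helper and the exact wrapper body below are assumptions modelled on include/asm-generic/atomic-instrumented.h of this era; treat it as a sketch, not the generated code verbatim:

#include <linux/instrumented.h>	/* for instrument_atomic_write() */

/* Generic, instrumented entry point (sketch): */
static __always_inline bool
atomic_add_unless(atomic_t *v, int a, int u)
{
	/* Instrumentation now happens once, at the generic level. */
	instrument_atomic_write(v, sizeof(*v));
	/* The arch_ primitive (or its fallback) is uninstrumented. */
	return arch_atomic_add_unless(v, a, u);
}

This is what lets callers that must avoid instrumentation (such as the text poke path mentioned above) use the arch_ variants directly.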
Diffstat (limited to 'scripts/atomic/fallbacks/add_unless')
-rwxr-xr-x	scripts/atomic/fallbacks/add_unless	6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/scripts/atomic/fallbacks/add_unless b/scripts/atomic/fallbacks/add_unless
index daf87a04c850..2ff598a3f9ec 100755
--- a/scripts/atomic/fallbacks/add_unless
+++ b/scripts/atomic/fallbacks/add_unless
@@ -1,6 +1,6 @@
 cat << EOF
 /**
- * ${atomic}_add_unless - add unless the number is already a given value
+ * ${arch}${atomic}_add_unless - add unless the number is already a given value
  * @v: pointer of type ${atomic}_t
  * @a: the amount to add to v...
  * @u: ...unless v is equal to u.
@@ -9,8 +9,8 @@ cat << EOF
  * Returns true if the addition was done.
  */
 static __always_inline bool
-${atomic}_add_unless(${atomic}_t *v, ${int} a, ${int} u)
+${arch}${atomic}_add_unless(${atomic}_t *v, ${int} a, ${int} u)
 {
-	return ${atomic}_fetch_add_unless(v, a, u) != u;
+	return ${arch}${atomic}_fetch_add_unless(v, a, u) != u;
 }
 EOF
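For reference, instantiating the updated template with ${arch} = "arch_", ${atomic} = "atomic" and ${int} = "int" (one plausible set of values; other instantiations produce the atomic64_ variants) now yields an uninstrumented fallback along the lines of:

static __always_inline bool
arch_atomic_add_unless(atomic_t *v, int a, int u)
{
	/* Build the conditional add from the fetch_add_unless primitive. */
	return arch_atomic_fetch_add_unless(v, a, u) != u;
}

Before this patch the same template emitted the name atomic_add_unless(), which is exactly why no uninstrumented variant of the fallback existed.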