author:    Marco Elver <elver@google.com>          2021-11-30 14:44:23 +0300
committer: Paul E. McKenney <paulmck@kernel.org>   2021-12-10 03:42:27 +0300
commit:    2505a51ac6f249956735e0a369e2404f96eebef0
tree:      edaea77c1a0eb71bd8e157b369c8f927e682f6ad /include/asm-generic
parent:    f948666de517cf8ebef7cb2c9b2d669dec4bfe2e
locking/barriers, kcsan: Support generic instrumentation
Thus far only smp_*() barriers had been defined by asm-generic/barrier.h
based on __smp_*() barriers, because the !SMP case is usually generic.
With the introduction of instrumentation, it also makes sense to have
asm-generic/barrier.h assist in the definition of instrumented versions
of mb(), rmb(), wmb(), dma_rmb(), and dma_wmb().
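
As a rough illustration (not part of this patch; the arch name and the x86-style asm below are placeholders chosen only for the example), an architecture opts in by defining just the __ prefixed primitives in its own asm/barrier.h before pulling in the generic header:

/* Hypothetical arch/foo/include/asm/barrier.h fragment, illustration only. */
#define __mb()	__asm__ __volatile__("mfence" ::: "memory")
#define __rmb()	__asm__ __volatile__("lfence" ::: "memory")
#define __wmb()	__asm__ __volatile__("sfence" ::: "memory")

#include <asm-generic/barrier.h>	/* now supplies the instrumented mb(), rmb(), wmb() */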
Because there is no requirement to distinguish the !SMP case, the
definitions can be simpler: we can avoid providing fallbacks for the
__ prefixed cases and only check `defined(__<barrier>)` before defining
the KCSAN-instrumented versions.
This also allows the compiler to complain if an architecture
accidentally defines both the normal and the __ prefixed variant.
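
For example (again only a sketch with hypothetical arch code), keeping a direct mb() definition alongside __mb makes asm-generic/barrier.h emit a second, different mb() definition, and the preprocessor reports the redefinition:

/* Hypothetical mistake: both the plain and the __ prefixed variant are defined. */
#define __mb()	__asm__ __volatile__("mfence" ::: "memory")
#define mb()	__asm__ __volatile__("mfence" ::: "memory")

#include <asm-generic/barrier.h>
/*
 * Because __mb is defined, the generic header also emits
 * "#define mb() do { kcsan_mb(); __mb(); } while (0)", which differs from
 * the mb() above, so the build warns that the mb macro is redefined.
 */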
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Diffstat (limited to 'include/asm-generic')
-rw-r--r--   include/asm-generic/barrier.h | 25
1 file changed, 25 insertions, 0 deletions
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 27a9c9edfef6..02c4339c8eeb 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -22,6 +22,31 @@
 #endif

 /*
+ * Architectures that want generic instrumentation can define __ prefixed
+ * variants of all barriers.
+ */
+
+#ifdef __mb
+#define mb() do { kcsan_mb(); __mb(); } while (0)
+#endif
+
+#ifdef __rmb
+#define rmb() do { kcsan_rmb(); __rmb(); } while (0)
+#endif
+
+#ifdef __wmb
+#define wmb() do { kcsan_wmb(); __wmb(); } while (0)
+#endif
+
+#ifdef __dma_rmb
+#define dma_rmb() do { kcsan_rmb(); __dma_rmb(); } while (0)
+#endif
+
+#ifdef __dma_wmb
+#define dma_wmb() do { kcsan_wmb(); __dma_wmb(); } while (0)
+#endif
+
+/*
  * Force strict CPU ordering. And yes, this is required on UP too when we're
  * talking to devices.
  *