| author | Peter Zijlstra <peterz@infradead.org> | 2014-03-13 22:00:35 +0400 |
|---|---|---|
| committer | Ingo Molnar <mingo@kernel.org> | 2014-04-18 16:20:46 +0400 |
| commit | d00a569284b1340c16fe2c148099e077ea09ebc9 (patch) | |
| tree | 5e44a663929cdd1288989150ca9a7304a29cabd9 /arch/x86/include/asm/atomic.h | |
| parent | ce3609f93445846f7b5a5b4bacb236a9bdc35216 (diff) | |
| download | linux-d00a569284b1340c16fe2c148099e077ea09ebc9.tar.xz | |
arch,x86: Convert smp_mb__*()
x86 is strongly ordered and all its atomic ops imply a full barrier.
Implement the two new primitives as the old ones were.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/n/tip-knswsr5mldkr0w1lrdxvc81w@git.kernel.org
Cc: Dave Jones <davej@redhat.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Michel Lespinasse <walken@google.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
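A minimal sketch of what the commit message describes, assuming the x86 definitions of the two new primitives land in <asm/barrier.h> (the diffstat below is limited to atomic.h, so only the removals and the new include are visible here): because every x86 atomic RMW operation already implies a full memory barrier, the new generic primitives only need to be compiler barriers.

```c
/* Sketch based on the commit message, not the atomic.h hunk shown below:
 * on x86 the new generic barrier primitives can be plain compiler
 * barriers, exactly like the old *_atomic_dec()/*_atomic_inc() variants. */
#define smp_mb__before_atomic()	barrier()
#define smp_mb__after_atomic()	barrier()
```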
Diffstat (limited to 'arch/x86/include/asm/atomic.h')
-rw-r--r-- | arch/x86/include/asm/atomic.h | 7 |
1 file changed, 1 insertion, 6 deletions
diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index b17f4f48ecd7..6dd1c7dd0473 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -7,6 +7,7 @@
 #include <asm/alternative.h>
 #include <asm/cmpxchg.h>
 #include <asm/rmwcc.h>
+#include <asm/barrier.h>
 
 /*
  * Atomic operations that C can't guarantee us.  Useful for
@@ -243,12 +244,6 @@ static inline void atomic_or_long(unsigned long *v1, unsigned long v2)
 		: : "r" ((unsigned)(mask)), "m" (*(addr))	\
 		: "memory")
 
-/* Atomic operations are already serializing on x86 */
-#define smp_mb__before_atomic_dec()	barrier()
-#define smp_mb__after_atomic_dec()	barrier()
-#define smp_mb__before_atomic_inc()	barrier()
-#define smp_mb__after_atomic_inc()	barrier()
-
 #ifdef CONFIG_X86_32
 # include <asm/atomic64_32.h>
 #else
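For illustration, a hypothetical caller pairing the renamed barriers with an atomic op; the function and variable names are made up, but the primitives are the ones this series introduces to replace the removed macros.

```c
#include <linux/atomic.h>

/* Illustrative caller only: order prior accesses before the decrement and
 * the decrement before later accesses.  On x86 both barrier calls expand
 * to barrier(), since atomic_dec() already implies a full barrier. */
static void drop_ref(atomic_t *refcnt)
{
	smp_mb__before_atomic();	/* was smp_mb__before_atomic_dec() */
	atomic_dec(refcnt);
	smp_mb__after_atomic();		/* was smp_mb__after_atomic_dec() */
}
```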