| author | Arnaldo Carvalho de Melo <acme@redhat.com> | 2015-05-07 00:35:20 +0300 |
|---|---|---|
| committer | Arnaldo Carvalho de Melo <acme@redhat.com> | 2015-05-08 22:05:03 +0300 |
| commit | e43a19c9c2c30cf88ffafb8390a4c53400b2467e (patch) | |
| tree | 8afdb88cae36616a387cb038ebfab91589c639cf /tools/arch | |
| parent | 361c564eeff4b78f1303b86e8e8f07fc547bd2c9 (diff) | |
| download | linux-e43a19c9c2c30cf88ffafb8390a4c53400b2467e.tar.xz | |
perf tools: Move powerpc barrier.h stuff to tools/arch/powerpc/include/asm/barrier.h
We will need it for atomic.h, so move it from the ad-hoc tools/perf/
place to a tools/ subset of the kernel arch/ hierarchy.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-pk6f5x9vh8k2ebzhh9uj5wo2@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Diffstat (limited to 'tools/arch')
-rw-r--r-- | tools/arch/powerpc/include/asm/barrier.h | 29 |
1 file changed, 29 insertions, 0 deletions
diff --git a/tools/arch/powerpc/include/asm/barrier.h b/tools/arch/powerpc/include/asm/barrier.h
new file mode 100644
index 000000000000..b23aee8e6d90
--- /dev/null
+++ b/tools/arch/powerpc/include/asm/barrier.h
@@ -0,0 +1,29 @@
+/*
+ * Copied from the kernel sources:
+ *
+ * Copyright (C) 1999 Cort Dougan <cort@cs.nmt.edu>
+ */
+#ifndef _TOOLS_LINUX_ASM_POWERPC_BARRIER_H
+#define _TOOLS_LINUX_ASM_POWERPC_BARRIER_H
+
+/*
+ * Memory barrier.
+ * The sync instruction guarantees that all memory accesses initiated
+ * by this processor have been performed (with respect to all other
+ * mechanisms that access memory). The eieio instruction is a barrier
+ * providing an ordering (separately) for (a) cacheable stores and (b)
+ * loads and stores to non-cacheable memory (e.g. I/O devices).
+ *
+ * mb() prevents loads and stores being reordered across this point.
+ * rmb() prevents loads being reordered across this point.
+ * wmb() prevents stores being reordered across this point.
+ *
+ * *mb() variants without smp_ prefix must order all types of memory
+ * operations with one another. sync is the only instruction sufficient
+ * to do this.
+ */
+#define mb() __asm__ __volatile__ ("sync" : : : "memory")
+#define rmb() __asm__ __volatile__ ("sync" : : : "memory")
+#define wmb() __asm__ __volatile__ ("sync" : : : "memory")
+
+#endif /* _TOOLS_LINUX_ASM_POWERPC_BARRIER_H */
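The comment block in the new header states what mb(), rmb() and wmb() guarantee, but not how callers pair them. Below is a minimal sketch, not part of this patch, of the classic producer/consumer pairing these macros support; produce(), consume() and the ready flag are hypothetical names, and the barrier definitions are copied from the header above, so the sync asm only assembles on powerpc.

```c
/*
 * Minimal single-producer/single-consumer sketch (illustration only,
 * not part of this commit).  Barrier definitions copied from the
 * tools/arch/powerpc barrier.h added above; powerpc-only asm.
 */
#define rmb() __asm__ __volatile__ ("sync" : : : "memory")
#define wmb() __asm__ __volatile__ ("sync" : : : "memory")

static int data;           /* payload written by the producer */
static volatile int ready; /* flag guarding the payload */

/* Producer: make the data visible before publishing the flag. */
static void produce(int value)
{
	data = value;
	wmb();		/* order the data store before the flag store */
	ready = 1;
}

/* Consumer: observe the flag before reading the data it guards. */
static int consume(void)
{
	while (!ready)
		;	/* spin until the producer publishes */
	rmb();		/* order the flag load before the data load */
	return data;
}
```

Note that on powerpc all three macros expand to the heavyweight sync instruction, so the rmb()/wmb() distinction here buys nothing locally; it matters for portability to architectures in tools/arch/ whose headers provide cheaper load-only or store-only barriers.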