author    Mark Rutland <mark.rutland@arm.com>    2019-02-11 16:20:35 +0300
committer Ingo Molnar <mingo@kernel.org>         2019-02-13 10:07:31 +0300
commit    0cf264b3133dce56a60ca8b4335d1f76fe26870a (patch)
tree      9afbe4db4847974c5f357fde4fd76f92aaa37b01 /include/linux/atomic-fallback.h
parent    b14e77f89aca1c2763f65dc274b5837a185ab13f (diff)
download  linux-0cf264b3133dce56a60ca8b4335d1f76fe26870a.tar.xz
locking/atomics: Check atomic headers with sha1sum
We currently check the atomic headers at build time to ensure they haven't been modified directly, and these checks require regenerating the headers in full. As this takes a few seconds, even when parallelized, it is too slow to run for every kernel build.

Instead, we can generate a hash of each header as we generate it, which we can cheaply check at build time (~0.16s for all headers).

This patch does so, updating headers with their hashes using the new gen-atomics.sh script. As some users apparently build the kernel without coreutils, and thus lack sha1sum, the checks are skipped in this case. Presumably, most developers have a working coreutils installation.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: anders.roxell@linaro.org
Cc: linux-kernel@vger.kernel.org
Cc: naresh.kamboju@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
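For reference, a minimal sketch of what such a build-time check could look like, based only on what the commit message describes. The header path and warning text are placeholder assumptions, not the verbatim kernel script:

    #!/bin/sh
    # Sketch of a cheap build-time hash check; path and messages are
    # assumptions for illustration.

    header=include/linux/atomic-fallback.h

    # Skip the check entirely when coreutils' sha1sum is unavailable,
    # as the commit message describes.
    if ! echo '' | sha1sum - >/dev/null 2>&1; then
        printf "sha1sum not available, skipping atomic header checks.\n"
        exit 0
    fi

    # The recorded digest is the trailing "// <sha1>" comment line.
    oldsum="$(tail -n 1 "${header}")"
    oldsum="${oldsum#// }"

    # Re-hash everything except that trailing line (head -n -1 is GNU
    # coreutils, which this check already depends on) and compare.
    newsum="$(head -n -1 "${header}" | sha1sum)"
    newsum="${newsum%% *}"

    if [ "${oldsum}" != "${newsum}" ]; then
        printf "warning: %s has been modified directly\n" "${header}"
    fi

Because only sha1sum runs over each header, the check avoids the few-seconds cost of regenerating the headers in full.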
Diffstat (limited to 'include/linux/atomic-fallback.h')
-rw-r--r--  include/linux/atomic-fallback.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/linux/atomic-fallback.h b/include/linux/atomic-fallback.h
index 1c02c0112fbb..a7d240e465c0 100644
--- a/include/linux/atomic-fallback.h
+++ b/include/linux/atomic-fallback.h
@@ -2292,3 +2292,4 @@ atomic64_dec_if_positive(atomic64_t *v)
#define atomic64_cond_read_relaxed(v, c) smp_cond_load_relaxed(&(v)->counter, (c))
#endif /* _LINUX_ATOMIC_FALLBACK_H */
+// 25de4a2804d70f57e994fe3b419148658bb5378a
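The trailing comment added by this hunk is the SHA-1 digest of the generated header content above it. A hypothetical sketch of how the generation step might append it (the loop and path are assumptions for illustration, not the actual gen-atomics.sh logic):

    #!/bin/sh
    # Hypothetical sketch: append a SHA-1 of each generated header as a
    # trailing comment. The header list is an assumption for illustration.
    for header in include/linux/atomic-fallback.h; do
        # Hash the content first, then append, so the digest covers
        # everything above the comment line.
        sum="$(sha1sum "${header}")"
        sum="${sum%% *}"    # keep only the hex digest
        printf "// %s\n" "${sum}" >> "${header}"
    done

Appending the digest after hashing means the build-time check can simply strip the last line, re-hash, and compare.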