| author | Peter Zijlstra <peterz@infradead.org> | 2015-05-27 04:39:36 +0300 |
|---|---|---|
| committer | Rusty Russell <rusty@rustcorp.com.au> | 2015-05-28 05:02:06 +0300 |
| commit | 7fc26327b75685f37f58d64bdb061460f834f80d (patch) | |
| tree | 69fecbbe48ac91608e88987c0bd0c8e5cebfa1b5 | /include/linux |
| parent | 0a04b0166929405cd833c1cc40f99e862b965ddc (diff) | |
| download | linux-7fc26327b75685f37f58d64bdb061460f834f80d.tar.xz | |
seqlock: Introduce raw_read_seqcount_latch()
Because with latches there is a strict data dependency on the seq load,
we can avoid the rmb in favour of a read_barrier_depends.
Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
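
For context: `lockless_dereference()` lives in `<linux/compiler.h>` (hence the new include in the diff below) and is roughly a `READ_ONCE()` followed by `smp_read_barrier_depends()`, which is a no-op on every architecture except Alpha. Because the index computation that follows depends on the loaded sequence value, that dependency already provides the needed ordering, so the full `smp_rmb()` can be dropped. A simplified sketch of the idea, not the kernel's exact macro definition:

```c
/*
 * Simplified sketch of what lockless_dereference() boils down to.
 * The value is loaded exactly once, and smp_read_barrier_depends()
 * (a no-op on everything but Alpha) orders later accesses that
 * depend on that value -- here, the data[seq & 1] lookup -- without
 * needing the heavier smp_rmb().
 */
#define lockless_dereference_sketch(p)			\
({							\
	typeof(p) ___val = READ_ONCE(p);		\
	smp_read_barrier_depends();			\
	(___val);					\
})
```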
Diffstat (limited to 'include/linux')
-rw-r--r-- | include/linux/seqlock.h | 9 |
1 file changed, 7 insertions, 2 deletions
```diff
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 1c0cf3102fdc..890c7ef709d5 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -35,6 +35,7 @@
 #include <linux/spinlock.h>
 #include <linux/preempt.h>
 #include <linux/lockdep.h>
+#include <linux/compiler.h>
 #include <asm/processor.h>
 
 /*
@@ -233,6 +234,11 @@ static inline void raw_write_seqcount_end(seqcount_t *s)
 	s->sequence++;
 }
 
+static inline int raw_read_seqcount_latch(seqcount_t *s)
+{
+	return lockless_dereference(s->sequence);
+}
+
 /**
  * raw_write_seqcount_latch - redirect readers to even/odd copy
  * @s: pointer to seqcount_t
@@ -284,8 +290,7 @@ static inline void raw_write_seqcount_end(seqcount_t *s)
  *	unsigned seq, idx;
  *
  *	do {
- *		seq = latch->seq;
- *		smp_rmb();
+ *		seq = lockless_dereference(latch->seq);
  *
  *		idx = seq & 0x01;
  *		entry = data_query(latch->data[idx], ...);
```
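
For illustration, here is how the new helper slots into the even/odd latch pattern documented in seqlock.h. This is a sketch paraphrasing that comment block, not code from the patch: `struct latch_struct`, `struct data`, `struct entry`, `data_modify()` and `data_query()` are placeholders, while `raw_write_seqcount_latch()`, `raw_read_seqcount_latch()` and `read_seqcount_retry()` are the real seqlock.h helpers.

```c
struct latch_struct {
	seqcount_t	seq;
	struct data	data[2];	/* even/odd copies */
};

/* Writer: flip readers to the other copy, then update the stale one. */
void latch_modify(struct latch_struct *latch, ...)
{
	raw_write_seqcount_latch(&latch->seq);
	data_modify(latch->data[0], ...);

	raw_write_seqcount_latch(&latch->seq);
	data_modify(latch->data[1], ...);
}

/*
 * Reader: the dependency from the seq load to data[idx] orders the
 * query, so no explicit barrier is needed before it; the trailing
 * read_seqcount_retry() supplies the closing smp_rmb().
 */
struct entry *latch_query(struct latch_struct *latch, ...)
{
	struct entry *entry;
	unsigned seq, idx;

	do {
		seq = raw_read_seqcount_latch(&latch->seq);

		idx = seq & 0x01;
		entry = data_query(latch->data[idx], ...);
	} while (read_seqcount_retry(&latch->seq, seq));

	return entry;
}
```

The writer never blocks readers: each `raw_write_seqcount_latch()` bumps the sequence so that new readers pick the copy that is not being modified, and a reader that raced with the flip simply retries.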