field | value | date |
---|---|---|
author | Peter Zijlstra <peterz@infradead.org> | 2016-02-24 20:45:45 +0300 |
committer | Ingo Molnar <mingo@kernel.org> | 2016-02-25 10:42:34 +0300 |
commit | 9107c89e269d2738019861bb518e3d59bef01781 (patch) | |
tree | 052f40c95151a2e752d50e03088c30eb5077b1dd /include/linux/perf_event.h | |
parent | a69b0ca4ac3bf5427b571f11cbf33f0a32b728d5 (diff) | |
download | linux-9107c89e269d2738019861bb518e3d59bef01781.tar.xz | |
perf: Fix race between event install and jump_labels
perf_install_in_context() relies upon the context switch hooks to have
scheduled in events when the IPI misses its target -- after all, if
the task has moved from the CPU (or wasn't running at all), it will
have to context switch to run elsewhere.
This however doesn't appear to be happening.
It is possible for the IPI to not happen (task wasn't running) only to
later observe the task running with an inactive context.
The only possible explanation is that the context switch hooks are not
called. Therefore put in a sync_sched() after toggling the jump_label
to guarantee all CPUs will have them enabled before we install an
event.
A simple if (0->1) sync_sched() will not in fact work, because any
further increment can race and complete before the sync_sched().
Therefore we must jump through some hoops.
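For illustration, the "hoops" amount to letting only the thread that performs the 0->1 transition flip the key and wait for the sync, while later enablers may only bypass that slow path once the sync has finished. A rough sketch of the pattern (the real change lives in kernel/events/core.c, outside this header diff; the names perf_sched_count, perf_sched_mutex and account_sched_events() are assumed here purely for the example):

#include <linux/atomic.h>
#include <linux/mutex.h>
#include <linux/jump_label.h>
#include <linux/rcupdate.h>
#include <linux/perf_event.h>	/* extern struct static_key_false perf_sched_events */

static atomic_t perf_sched_count;		/* illustrative */
static DEFINE_MUTEX(perf_sched_mutex);		/* illustrative */

static void account_sched_events(void)		/* hypothetical helper */
{
	/* Fast path: someone already enabled the key *and* finished the sync. */
	if (atomic_inc_not_zero(&perf_sched_count))
		return;

	mutex_lock(&perf_sched_mutex);
	if (!atomic_read(&perf_sched_count)) {
		static_branch_enable(&perf_sched_events);
		/*
		 * Wait until every CPU is guaranteed to observe the enabled
		 * key, i.e. to call the sched hooks, before any event gets
		 * installed.
		 */
		synchronize_sched();
	}
	/*
	 * Only now may further enablers take the atomic_inc_not_zero()
	 * shortcut above; a plain "if (count++ == 0) synchronize_sched();"
	 * would let a second enabler proceed before the sync completed.
	 */
	atomic_inc(&perf_sched_count);
	mutex_unlock(&perf_sched_mutex);
}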
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dvyukov@google.com
Cc: eranian@google.com
Cc: oleg@redhat.com
Cc: panand@redhat.com
Cc: sasha.levin@oracle.com
Cc: vince@deater.net
Link: http://lkml.kernel.org/r/20160224174947.980211985@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'include/linux/perf_event.h')
-rw-r--r-- | include/linux/perf_event.h | 6 |
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 39156619e108..f5c5a3fa2c81 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -906,7 +906,7 @@ perf_sw_event_sched(u32 event_id, u64 nr, u64 addr)
 	}
 }
 
-extern struct static_key_deferred perf_sched_events;
+extern struct static_key_false perf_sched_events;
 
 static __always_inline bool
 perf_sw_migrate_enabled(void)
@@ -925,7 +925,7 @@ static inline void perf_event_task_migrate(struct task_struct *task)
 static inline void perf_event_task_sched_in(struct task_struct *prev,
 					    struct task_struct *task)
 {
-	if (static_key_false(&perf_sched_events.key))
+	if (static_branch_unlikely(&perf_sched_events))
 		__perf_event_task_sched_in(prev, task);
 
 	if (perf_sw_migrate_enabled() && task->sched_migrated) {
@@ -942,7 +942,7 @@ static inline void perf_event_task_sched_out(struct task_struct *prev,
 					     struct task_struct *next)
 {
 	perf_sw_event_sched(PERF_COUNT_SW_CONTEXT_SWITCHES, 1, 0);
 
-	if (static_key_false(&perf_sched_events.key))
+	if (static_branch_unlikely(&perf_sched_events))
 		__perf_event_task_sched_out(prev, next);
 }
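On the header side, the patch simply converts perf_sched_events from a deferred static key to the newer static_key_false type and switches the call sites from static_key_false(&key.key) to static_branch_unlikely(&key). A minimal, self-contained sketch of that API (illustrative only, with a made-up key name; not part of the patch):

#include <linux/jump_label.h>

/* The key starts out disabled; the branch below is patched to a no-op. */
static DEFINE_STATIC_KEY_FALSE(my_feature_key);

static void do_extra_accounting(void)
{
	/* ... expensive work only wanted while the feature is on ... */
}

static inline void hot_path(void)
{
	/* Compiles to a patched jump/nop rather than a load-and-test. */
	if (static_branch_unlikely(&my_feature_key))
		do_extra_accounting();
}

static void feature_enable(void)
{
	static_branch_enable(&my_feature_key);	/* rewrites the branch at runtime */
}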