author     Stephane Eranian <eranian@google.com>    2019-05-10 00:45:56 +0300
committer  Ingo Molnar <mingo@kernel.org>           2019-05-10 09:04:17 +0300
commit     6b89d4c1ae8596a8c9240f169ef108704de373f2 (patch)
tree       89627cd17081c3e937c3e9487ceb95cb85922d30 /arch/x86
parent     4abf1ee16e25ba97bc9e04ddc64e0cd2a1bc41a8 (diff)
download   linux-6b89d4c1ae8596a8c9240f169ef108704de373f2.tar.xz
perf/x86/intel: Fix INTEL_FLAGS_EVENT_CONSTRAINT* masking
On Intel Westmere, a command line such as:

$ perf record -e cpu/event=0xc4,umask=0x2,name=br_inst_retired.near_call/p ....

was failing, even though this event + umask combination does support PEBS.
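For reference, this is the raw PMU config that command line encodes, as a minimal sketch assuming the standard Intel EVENTSEL bit layout (encode_event() is a hypothetical helper for illustration, not a perf API):

#include <stdint.h>
#include <stdio.h>

/* Intel EVENTSEL layout: event select in bits 0-7, unit mask in bits 8-15. */
#define EVENTSEL_EVENT 0x000000FFULL
#define EVENTSEL_UMASK 0x0000FF00ULL

/* Hypothetical helper: build the raw config from event code and umask. */
static uint64_t encode_event(uint64_t event, uint64_t umask)
{
	return (event & EVENTSEL_EVENT) | ((umask << 8) & EVENTSEL_UMASK);
}

int main(void)
{
	/* event=0xc4, umask=0x2 -> config 0x02c4 (br_inst_retired.near_call) */
	printf("config = 0x%04llx\n",
	       (unsigned long long)encode_event(0xc4, 0x2));
	return 0;
}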
It turns out this is due to a bug in the PEBS event constraint table for
Westmere. All forms of BR_INST_RETIRED.* support PEBS, so the constraint
mask should ignore the umask. The name of the macro INTEL_FLAGS_EVENT_CONSTRAINT()
hints that this is the case, but it was not: that macro was checking both the
event code and the event umask, and was therefore only matching on 0x00c4.
There are dedicated code+umask macros; they all contain *UEVENT*.
This patch fixes the issue by checking only the event code in the mask.
Both the single and the range versions are modified.
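To see why the wide mask broke this event, here is a minimal, self-contained C sketch of the constraint match the kernel performs. The mask constants mirror arch/x86; constraint_match() is simplified from the kernel's logic, and X86_ALL_EVENT_FLAGS is omitted for brevity:

#include <stdint.h>
#include <stdio.h>

#define ARCH_PERFMON_EVENTSEL_EVENT 0x000000FFULL /* event code bits only */
#define ARCH_PERFMON_EVENTSEL_UMASK 0x0000FF00ULL
#define INTEL_ARCH_EVENT_MASK \
	(ARCH_PERFMON_EVENTSEL_UMASK | ARCH_PERFMON_EVENTSEL_EVENT)

struct event_constraint {
	uint64_t code;  /* expected config bits */
	uint64_t cmask; /* which config bits must match */
};

/* The match test: the constraint applies if the masked config equals code. */
static int constraint_match(const struct event_constraint *c, uint64_t config)
{
	return (config & c->cmask) == c->code;
}

int main(void)
{
	uint64_t config = 0x02c4; /* event=0xc4, umask=0x2 */

	/* Before the fix: the mask covers the umask field too, so only
	 * config 0x00c4 (umask=0) can ever match the 0xc4 constraint. */
	struct event_constraint before = { 0xc4, INTEL_ARCH_EVENT_MASK };
	/* After the fix: the mask covers the event code only, so any
	 * umask of event 0xc4 matches, as the constraint table intended. */
	struct event_constraint after  = { 0xc4, ARCH_PERFMON_EVENTSEL_EVENT };

	printf("before: %d, after: %d\n",
	       constraint_match(&before, config),  /* 0: PEBS wrongly denied */
	       constraint_match(&after, config));  /* 1: PEBS allowed */
	return 0;
}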
Signed-off-by: Stephane Eranian <eranian@google.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: kan.liang@intel.com
Link: http://lkml.kernel.org/r/20190509214556.123493-1-eranian@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch/x86')
-rw-r--r--  arch/x86/events/perf_event.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 07fc84bb85c1..a6ac2f4f76fc 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -394,10 +394,10 @@ struct cpu_hw_events {
 
 /* Event constraint, but match on all event flags too. */
 #define INTEL_FLAGS_EVENT_CONSTRAINT(c, n) \
-	EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK|X86_ALL_EVENT_FLAGS)
+	EVENT_CONSTRAINT(c, n, ARCH_PERFMON_EVENTSEL_EVENT|X86_ALL_EVENT_FLAGS)
 
 #define INTEL_FLAGS_EVENT_CONSTRAINT_RANGE(c, e, n)			\
-	EVENT_CONSTRAINT_RANGE(c, e, n, INTEL_ARCH_EVENT_MASK|X86_ALL_EVENT_FLAGS)
+	EVENT_CONSTRAINT_RANGE(c, e, n, ARCH_PERFMON_EVENTSEL_EVENT|X86_ALL_EVENT_FLAGS)
 
 /* Check only flags, but allow all event/umask */
 #define INTEL_ALL_EVENT_CONSTRAINT(code, n)	\