author		Namhyung Kim <namhyung@kernel.org>	2022-09-12 08:53:14 +0300
committer	Arnaldo Carvalho de Melo <acme@redhat.com>	2022-10-04 14:55:22 +0300
commit		c1da8dd5c11dabd50b1578c6b43d73c7bbc28963 (patch)
tree		f86057657e90c8ce137aaffa5417d2ccbf216be1 /tools/perf/util/bpf_lock_contention.c
parent		96532a83ee8e30035e584b046c859adb001a3b8d (diff)
download	linux-c1da8dd5c11dabd50b1578c6b43d73c7bbc28963.tar.xz
perf lock contention: Skip stack trace from BPF
Currently it collects stack traces up to the max size and then skips
entries afterwards, because there is no way to control how many entries
perf skips when collecting callchains.  But BPF can do the skipping at
collection time, using bpf_get_stackid() with a flag.

Say we have max-stack=4 and stack-skip=2; we get these stack traces.
    Before:                     After:
   .---> +---+ <--.            .---> +---+ <--.
   |     |   |    |            |     |   |    |
   |     +---+  usable         |     +---+    |
  max    |   |    |           max    |   |    |
 stack   +---+ <--'          stack   +---+  usable
   |     | X |                 |     |   |    |
   |     +---+  skip           |     +---+    |
   |     | X |                 |     |   |    |
   `---> +---+                 `---> +---+ <--'   <=== collection
                                     | X |
                                     +---+  skip
                                     | X |
                                     +---+
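
For reference, here is a minimal BPF-side sketch of the mechanism described
above: bpf_get_stackid() takes the number of frames to drop in the low bits
of its flags argument (BPF_F_SKIP_FIELD_MASK), so the skipped entries never
occupy space in the stack trace map.  This is not the actual perf lock
contention BPF program; the tracepoint, map sizing, and names below are
illustrative only.

/* Illustrative sketch only -- not tools/perf/util/bpf_skel/lock_contention.bpf.c. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

#define MAX_STACK	4	/* matches the max-stack=4 example above */

struct {
	__uint(type, BPF_MAP_TYPE_STACK_TRACE);
	__uint(max_entries, 1024);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, MAX_STACK * sizeof(__u64));
} stacks SEC(".maps");

/* written by user space before attach, like stack_skip in this patch */
int stack_skip;

SEC("tp_btf/contention_begin")
int on_contention_begin(u64 *ctx)
{
	/*
	 * The low bits of the flags argument (BPF_F_SKIP_FIELD_MASK) tell
	 * the kernel how many of the innermost frames to drop before
	 * writing the trace into the stack map.
	 */
	long stack_id = bpf_get_stackid(ctx, &stacks,
					BPF_F_FAST_STACK_CMP |
					(stack_skip & BPF_F_SKIP_FIELD_MASK));

	if (stack_id < 0)
		return 0;	/* no stack could be collected for this event */

	/* ... use stack_id as the key for per-stack contention stats ... */
	return 0;
}

char LICENSE[] SEC("license") = "Dual BSD/GPL";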
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Song Liu <songliubraving@fb.com>
Cc: bpf@vger.kernel.org
Link: https://lore.kernel.org/r/20220912055314.744552-5-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Diffstat (limited to 'tools/perf/util/bpf_lock_contention.c')
-rw-r--r--	tools/perf/util/bpf_lock_contention.c	7
1 files changed, 4 insertions, 3 deletions
diff --git a/tools/perf/util/bpf_lock_contention.c b/tools/perf/util/bpf_lock_contention.c
index ef5323c78ffc..efe5b9968e77 100644
--- a/tools/perf/util/bpf_lock_contention.c
+++ b/tools/perf/util/bpf_lock_contention.c
@@ -93,6 +93,8 @@ int lock_contention_prepare(struct lock_contention *con)
 		bpf_map_update_elem(fd, &pid, &val, BPF_ANY);
 	}
 
+	skel->bss->stack_skip = con->stack_skip;
+
 	lock_contention_bpf__attach(skel);
 	return 0;
 }
@@ -127,7 +129,7 @@ int lock_contention_read(struct lock_contention *con)
 	while (!bpf_map_get_next_key(fd, &prev_key, &key)) {
 		struct map *kmap;
 		struct symbol *sym;
-		int idx;
+		int idx = 0;
 
 		bpf_map_lookup_elem(fd, &key, &data);
 		st = zalloc(sizeof(*st));
@@ -146,8 +148,7 @@ int lock_contention_read(struct lock_contention *con)
 
 		bpf_map_lookup_elem(stack, &key, stack_trace);
 
-		/* skip BPF + lock internal functions */
-		idx = con->stack_skip;
+		/* skip lock internal functions */
 		while (is_lock_function(machine, stack_trace[idx]) &&
 		       idx < con->max_stack - 1)
 			idx++;
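
For context, the first hunk above writes con->stack_skip into the BPF
object's .bss through the generated skeleton.  Below is a minimal user-space
sketch of that skeleton pattern; the prepare() wrapper and its error handling
are illustrative, not perf's actual code, and it assumes a skeleton generated
from lock_contention.bpf.c.

/* Sketch of the libbpf skeleton pattern relied on by the hunk above. */
#include "lock_contention.skel.h"	/* generated by 'bpftool gen skeleton' */

static struct lock_contention_bpf *skel;

static int prepare(int stack_skip)
{
	skel = lock_contention_bpf__open_and_load();
	if (!skel)
		return -1;

	/* any global in the BPF object's .bss is a plain struct field here;
	 * it must be set before the programs attach, as in the patch */
	skel->bss->stack_skip = stack_skip;

	if (lock_contention_bpf__attach(skel)) {
		lock_contention_bpf__destroy(skel);
		skel = NULL;
		return -1;
	}
	return 0;
}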