author     Namhyung Kim <namhyung@kernel.org>              2022-03-14 21:20:41 +0300
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org> 2022-04-08 15:24:14 +0300
commit     398ac11f4425d1e52aaf0d05d4fc90524e1a5b5e (patch)
tree       26a412a78bcb6c88f01106c84849d4e4ea97d584 /tools/include/asm-generic/unaligned.h
parent     21b6b8d43d87708c88ae34be67bb6cbba7bc60a0 (diff)
download   linux-398ac11f4425d1e52aaf0d05d4fc90524e1a5b5e.tar.xz

bpf: Adjust BPF stack helper functions to accommodate skip > 0

commit ee2a098851bfbe8bcdd964c0121f4246f00ff41e upstream.

Let's say that the caller has storage for num_elem stack frames. Then,
the BPF stack helper functions walk the stack for only num_elem frames.
This means that if skip > 0, the caller ends up with only
'num_elem - skip' useful frames.

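To make the effect concrete, consider a minimal tracing program that
asks for up to 8 frames while skipping the first 2. The program, map-free
layout, section name, and attach point below are illustrative
assumptions; bpf_get_stack() and BPF_F_SKIP_FIELD_MASK are the actual
helper and flag mask involved:

```c
/* Sketch only: with storage for 8 frames and skip = 2, the pre-fix
 * behavior yields at most 8 - 2 = 6 frames; after the fix, all 8
 * slots can be filled.
 */
#include <linux/bpf.h>
#include <linux/ptrace.h>
#include <bpf/bpf_helpers.h>

#define NUM_ELEM 8

SEC("kprobe/do_nanosleep")	/* attach point chosen arbitrarily */
int show_skip(struct pt_regs *ctx)
{
	__u64 frames[NUM_ELEM];
	/* The low 8 bits of flags (BPF_F_SKIP_FIELD_MASK) hold the
	 * number of frames to skip.
	 */
	long ret = bpf_get_stack(ctx, frames, sizeof(frames),
				 2 & BPF_F_SKIP_FIELD_MASK);

	if (ret < 0)
		return 0;
	/* ret is the copied length in bytes; each frame is 8 bytes. */
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```
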
This is because the helpers set init_nr in the perf_callchain_entry to
the end of the buffer so that only num_elem entries are saved. I believe
this was done because the perf callchain code used to unwind stack
frames until it reached the global max size (sysctl_perf_event_max_stack).

However, the perf callchain code now has perf_callchain_entry_ctx.max_stack
to limit the iteration locally. This allows the init_nr handling in the
BPF callstack entries to be dropped and removes the confusion with the
perf_event's __PERF_SAMPLE_CALLCHAIN_EARLY, which sets init_nr to 0.

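In kernel terms, the change boils down to dropping the init_nr offset in
favor of a locally clamped walk depth. A rough before/after sketch of
the stackmap call follows; the variable names and exact clamp are
inferred from the description above, not necessarily the literal diff:

```c
/* Before: point init_nr at the tail of a global-max-sized buffer so
 * that only num_elem entries get stored.
 */
init_nr = sysctl_perf_event_max_stack - num_elem;
trace = get_perf_callchain(regs, init_nr, kernel, user,
			   sysctl_perf_event_max_stack, false, false);

/* After: start at entry 0 and walk only as deep as actually needed,
 * letting perf_callchain_entry_ctx.max_stack cap the iteration.
 */
max_depth = num_elem + skip;
if (max_depth > sysctl_perf_event_max_stack)
	max_depth = sysctl_perf_event_max_stack;
trace = get_perf_callchain(regs, 0, kernel, user, max_depth,
			   false, false);
```
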
Also change the comment on bpf_get_stack() in the header file to be
more explicit about what the return value means.

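For reference, the clarified return-value description for
bpf_get_stack() in include/uapi/linux/bpf.h is along these lines (the
exact phrasing should be treated as approximate):

```c
 * Return
 *	The non-negative copied *buf* length equal to or less than
 *	*size* on success, or a negative error in case of failure.
```
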
Fixes: c195651e565a ("bpf: add bpf_get_stack helper")
Based-on-patch-by: Eugene Loh <eugene.loh@oracle.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/30a7b5d5-6726-1cc2-eaee-8da2828a9a9c@oracle.com
Link: https://lore.kernel.org/bpf/20220314182042.71025-1-namhyung@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>