author    Alexei Starovoitov <ast@kernel.org>  2022-09-03 00:10:46 +0300
committer Daniel Borkmann <daniel@iogearbox.net>  2022-09-05 16:33:05 +0300
commit    89dc8d0c38e0df27e580876a1681a55c686a51ff
tree      bc6607523f6ca1f49178bc1a7d5af506a53b1047 /samples/bpf/hbm_edt_kern.c
parent    37521bffdd2d1efcb1dbdfd3ee89584c8943421c
download  linux-89dc8d0c38e0df27e580876a1681a55c686a51ff.tar.xz
samples/bpf: Reduce syscall overhead in map_perf_test.
Make map_perf_test for preallocated and non-preallocated hash maps
spend more time inside the bpf program, so that performance analysis
focuses on the speed of the update/lookup/delete operations performed
by the bpf program rather than on syscall entry/exit cost.
It makes 'perf report' of bpf_mem_alloc look like:
11.76% map_perf_test [k] _raw_spin_lock_irqsave
11.26% map_perf_test [k] htab_map_update_elem
9.70% map_perf_test [k] _raw_spin_lock
9.47% map_perf_test [k] htab_map_delete_elem
8.57% map_perf_test [k] memcpy_erms
5.58% map_perf_test [k] alloc_htab_elem
4.09% map_perf_test [k] __htab_map_lookup_elem
3.44% map_perf_test [k] syscall_exit_to_user_mode
3.13% map_perf_test [k] lookup_nulls_elem_raw
3.05% map_perf_test [k] migrate_enable
3.04% map_perf_test [k] memcmp
2.67% map_perf_test [k] unit_free
2.39% map_perf_test [k] lookup_elem_raw
Reduce default iteration count as well to make 'map_perf_test' quick enough
even on debug kernels.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-5-alexei.starovoitov@gmail.com