path: root/tools
Age | Commit message | Author | Files, lines changed
2025-06-12 | selftests/coredump: fix build | Christian Brauner | 2 files, -13/+6
Fix various warnings in the selftest build. Link: https://lore.kernel.org/20250603-work-coredump-socket-protocol-v2-2-05a5f0c18ecc@kernel.org Acked-by: Lennart Poettering <lennart@poettering.net> Reviewed-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com> Reviewed-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-06-12 | selftests/mm: skip failed memfd setups in gup_longterm | Mark Brown | 1 file, -1/+6
Unlike the other cases gup_longterm's memfd tests previously skipped the test when failing to set up the file descriptor to test. Restore this behavior to avoid hitting failures when hugetlb isn't configured. Link: https://lkml.kernel.org/r/20250605-selftest-mm-gup-longterm-tweaks-v1-1-2fae34b05958@kernel.org Fixes: 66bce7afbaca ("selftests/mm: fix test result reporting in gup_longterm") Signed-off-by: Mark Brown <broonie@kernel.org> Reported-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Closes: https://lkml.kernel.org/r/a76fc252-0fe3-4d4b-a9a1-4a2895c2680d@lucifer.local Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Tested-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Acked-by: David Hildenbrand <david@redhat.com> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
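For illustration, a minimal sketch of the skip-on-setup-failure behaviour this restores, assuming the usual kselftest helpers; the real gup_longterm code is structured differently, and MFD_HUGETLB here only stands in for "hugetlb isn't configured":

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <unistd.h>
    #include "../kselftest.h"   /* hypothetical relative include path */

    static void run_memfd_case(void)
    {
            int fd = memfd_create("gup_longterm_test", MFD_HUGETLB);

            if (fd < 0) {
                    /* cannot set up the fd to test: skip, do not fail */
                    ksft_test_result_skip("memfd_create() failed\n");
                    return;
            }
            /* ... mmap fd and exercise long-term GUP pinning here ... */
            close(fd);
            ksft_test_result_pass("hugetlb memfd case\n");
    }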
2025-06-12 | Merge tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf | Linus Torvalds | 2 files, -4/+4
Pull BPF fixes from Alexei Starovoitov - Fix libbpf backward compatibility (Andrii Nakryiko) - Add Stanislav Fomichev as bpf/net reviewer - Fix resolve_btfid build when cross compiling (Suleiman Souhlal) * tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf: MAINTAINERS: Add myself as bpf networking reviewer tools/resolve_btfids: Fix build when cross compiling kernel with clang. libbpf: Handle unsupported mmap-based /sys/kernel/btf/vmlinux correctly
2025-06-12 | selftests: net: add test case for NAT46 looping back dst | Jakub Kicinski | 2 files, -0/+16
Simple test for a crash involving multicast loopback and a stale dst. Reuse the existing NAT46 program. Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/20250610001245.1981782-2-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-11 | perf thread: Ensure comm_lock held for comm_list | Ian Rogers | 3 files, -8/+29
Add thread safety annotations for comm_list and add locking for two instances where the list is accessed without the lock held (in contradiction to ____thread__set_comm()). Signed-off-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250529192206.971199-1-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-11 | selftests/vsock: add initial vmtest.sh for vsock | Bobby Eshleman | 5 files, -0/+618
This commit introduces a new vmtest.sh runner for vsock. It uses virtme-ng/qemu to run tests in a VM. The tests validate G2H, H2G, and loopback. The testing tools from tools/testing/vsock/ are reused. Currently, only vsock_test is used. VMCI and hyperv support is included in the config file to be built with the -b option, though not used in the tests. Only tested on x86. To run: $ make -C tools/testing/selftests TARGETS=vsock $ tools/testing/selftests/vsock/vmtest.sh or $ make -C tools/testing/selftests TARGETS=vsock run_tests Example runs (after make -C tools/testing/selftests TARGETS=vsock): $ ./tools/testing/selftests/vsock/vmtest.sh 1..3 ok 0 vm_server_host_client ok 1 vm_client_host_server ok 2 vm_loopback SUMMARY: PASS=3 SKIP=0 FAIL=0 Log: /tmp/vsock_vmtest_m7DI.log $ ./tools/testing/selftests/vsock/vmtest.sh vm_loopback 1..1 ok 0 vm_loopback SUMMARY: PASS=1 SKIP=0 FAIL=0 Log: /tmp/vsock_vmtest_a1IO.log $ mkdir -p ~/scratch $ make -C tools/testing/selftests install TARGETS=vsock INSTALL_PATH=~/scratch [... omitted ...] $ cd ~/scratch $ ./run_kselftest.sh TAP version 13 1..1 # timeout set to 300 # selftests: vsock: vmtest.sh # 1..3 # ok 0 vm_server_host_client # ok 1 vm_client_host_server # ok 2 vm_loopback # SUMMARY: PASS=3 SKIP=0 FAIL=0 # Log: /tmp/vsock_vmtest_svEl.log ok 1 selftests: vsock: vmtest.sh Future work can include vsock_diag_test. Because vsock requires a VM to test anything other than loopback, this patch adds vmtest.sh as a kselftest itself. This is different than other systems that have a "vmtest.sh", where it is used as a utility script to spin up a VM to run the selftests as a guest (but isn't hooked into kselftest). Signed-off-by: Bobby Eshleman <bobbyeshleman@gmail.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Link: https://patch.msgid.link/20250609-vsock-vmtest-v10-1-7f37198e1cd4@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-11 | selftests/bpf: Fix cgroup_mprog_ordering failure due to uninitialized variable | Yonghong Song | 1 file, -1/+1
On arm64, the cgroup_mprog_ordering selftest failed in a test_progs run when built with the clang compiler. The reason is that the socklen_t optlen variable is not initialized. In the kernel function do_ip_getsockopt(), we have: if (copy_from_sockptr(&len, optlen, sizeof(int))) return -EFAULT; if (len < 0) return -EINVAL; The 'len' variable above ends up negative and hence the test failed. But the test is okay on x86_64. I checked the x86_64 asm code and didn't see explicit initialization of 'optlen', but its value is 0 so the kernel didn't return an error. This is pure luck. Fix the bug by initializing the 'optlen' variable properly. Fixes: e422d5f118e4 ("selftests/bpf: Add two selftests for mprog API based cgroup progs") Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20250611162103.1623692-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
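As a generic illustration of the fix pattern (not the actual selftest hunk): getsockopt()'s optlen argument is read by the kernel before being updated, so it must be initialized to the buffer size:

    #include <netinet/in.h>
    #include <sys/socket.h>

    static int read_ip_tos(int sock_fd, int *tos)
    {
            /* in/out argument: the kernel first copies optlen in and rejects
             * a negative length, so it must not be left uninitialized */
            socklen_t optlen = sizeof(*tos);

            return getsockopt(sock_fd, IPPROTO_IP, IP_TOS, tos, &optlen);
    }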
2025-06-11 | Merge tag 'kvmarm-fixes-6.16-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD | Paolo Bonzini | 1 file, -14/+25
KVM/arm64 fixes for 6.16, take #2 - Rework of system register accessors for system registers that are directly written to memory, so that sanitisation of the in-memory value happens at the correct time (after the read, or before the write). For convenience, RMW-style accessors are also provided. - Multiple fixes for the so-called "arch-timer-edge-cases" selftest, which was always broken.
2025-06-11 | selftests/bpf: Add test to cover ktls with bpf_msg_pop_data | Jiayuan Chen | 2 files, -0/+95
The selftest can reproduce an issue where using bpf_msg_pop_data() in ktls causes errors on the receiving end. Signed-off-by: Jiayuan Chen <jiayuan.chen@linux.dev> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20250609020910.397930-3-jiayuan.chen@linux.dev
2025-06-11 | selftests/net: packetdrill: more xfail changes | Jakub Kicinski | 1 file, -0/+4
Most of the packetdrill tests have not flaked once last week. Add the few which did to the XFAIL list. Acked-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/20250610000001.1970934-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-11 | net: stop napi kthreads when THREADED napi is disabled | Samiullah Khawaja | 1 file, -2/+36
Once THREADED napi is disabled, the napi kthread should also be stopped. Keeping the kthread intact after disabling THREADED napi makes the PID of this kthread show up in the netlink 'napi-get' output and in ps -ef. This is discussed in the patch below: https://lore.kernel.org/all/20250502191548.559cc416@kernel.org The NAPI kthread should stop only if: - There is no pending napi poll scheduled for this thread. - No new napi poll is scheduled for this thread while it has stopped. - ____napi_schedule can correctly fall back to the softirq for napi polling. Since napi_schedule_prep provides mutual exclusion over the STATE_SCHED bit, it is safe to unset STATE_THREADED when SCHED_THREADED is set or the SCHED bit is not set. SCHED_THREADED being set means that SCHED is already set and the kthread owns this napi. To disable threaded napi, unset the STATE_THREADED bit safely if SCHED_THREADED is set or SCHED is unset. Once STATE_THREADED is unset safely, wait for the kthread to unset the SCHED_THREADED bit so it is safe to stop the kthread. Add a new test in nl_netdev to verify this behaviour. Tested: ./tools/testing/selftests/net/nl_netdev.py TAP version 13 1..6 ok 1 nl_netdev.empty_check ok 2 nl_netdev.lo_check ok 3 nl_netdev.page_pool_check ok 4 nl_netdev.napi_list_check ok 5 nl_netdev.dev_set_threaded ok 6 nl_netdev.nsim_rxq_reset_down # Totals: pass:6 fail:0 xfail:0 xpass:0 skip:0 error:0 Ran neper for 300 seconds and did enable/disable of threaded napi in a loop continuously. Signed-off-by: Samiullah Khawaja <skhawaja@google.com> Link: https://patch.msgid.link/20250609173015.3851695-1-skhawaja@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
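A rough, hedged sketch of the disable sequence described above (kernel-side pseudo-C, not the actual net/core/dev.c diff; the state bit names are the existing NAPI ones):

    /* Sketch only: clear STATE_THREADED when it is safe to do so, wait for
     * the kthread to drop SCHED_THREADED, then stop it. */
    static void napi_stop_kthread_sketch(struct napi_struct *napi)
    {
            unsigned long val, new;

            for (;;) {
                    val = READ_ONCE(napi->state);
                    if ((val & NAPIF_STATE_SCHED) &&
                        !(val & NAPIF_STATE_SCHED_THREADED)) {
                            msleep(20);     /* softirq owns this poll; retry */
                            continue;
                    }
                    new = val & ~NAPIF_STATE_THREADED;
                    if (try_cmpxchg(&napi->state, &val, new))
                            break;
            }

            /* let the kthread finish the poll it may still own */
            while (test_bit(NAPI_STATE_SCHED_THREADED, &napi->state))
                    msleep(1);

            kthread_stop(napi->thread);
            napi->thread = NULL;
    }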
2025-06-11 | selftests: netconsole: Add support for basic netconsole target format | Breno Leitao | 2 files, -22/+52
Extend the netconsole selftest to validate both basic and extended target formats. The basic format is a simpler variant that doesn't support userdata or release functionality. The test now validates that netconsole works correctly in both configurations, improving test coverage for different netconsole deployment scenarios. Signed-off-by: Breno Leitao <leitao@debian.org> Link: https://patch.msgid.link/20250609-netcons_ext-v3-4-5336fa670326@debian.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-11 | selftests: netconsole: Do not exit from inside the validation function | Breno Leitao | 2 files, -1/+2
Remove the exit call from validate_result() function and move the test exit logic to the main script. This allows the function to be reused in scenarios where the test needs to continue execution after validation, rather than terminating immediately. The validate_result() function should focus on validation logic only, while the calling script maintains control over program flow and exit conditions. This change improves code modularity and prepares for potential future enhancements where multiple validations might be needed in a single test run. Signed-off-by: Breno Leitao <leitao@debian.org> Link: https://patch.msgid.link/20250609-netcons_ext-v3-3-5336fa670326@debian.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-10 | tools/resolve_btfids: Fix build when cross compiling kernel with clang. | Suleiman Souhlal | 1 file, -1/+1
When cross compiling the kernel with clang, we need to override CLANG_CROSS_FLAGS when preparing the step libraries. Prior to commit d1d096312176 ("tools: fix annoying "mkdir -p ..." logs when building tools in parallel"), MAKEFLAGS would have been set to a value that wouldn't set a value for CLANG_CROSS_FLAGS, hiding the fact that we weren't properly overriding it. Fixes: 56a2df7615fa ("tools/resolve_btfids: Compile resolve_btfids as host program") Signed-off-by: Suleiman Souhlal <suleiman@google.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/bpf/20250606074538.1608546-1-suleiman@google.com
2025-06-10 | selftests/nolibc: make stackprotector probing more robust | Thomas Weißschuh | 1 file, -1/+4
nolibc only supports symbol-based stackprotectors, based on the global variable __stack_chk_guard. Support for this differs between architectures and toolchains: some use the symbol mode by default, some require a flag to enable it, and some don't support it at all. Previously, the nolibc test Makefile required the availability of "-mstack-protector-guard=global" to enable stackprotectors. While this flag makes sure that the correct mode is available, it doesn't work where the correct mode is the only supported one and the flag is therefore not implemented. Switch to a more dynamic probing mechanism. This correctly enables stack protectors for mips, loongarch and m68k. Acked-by: Willy Tarreau <w@1wt.eu> Link: https://lore.kernel.org/r/20250609-nolibc-stackprotector-robust-v1-1-a1cfc92a568a@weissschuh.net Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
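For context, what "symbol-based" means here, as a hedged sketch: in the global-guard mode the compiler references a plain symbol instead of a TLS slot, so the C library has to provide that symbol plus the failure hook (the names below are the standard GCC/clang ABI, not the exact nolibc code):

    /* guard value the compiler compares the stack canary against */
    unsigned long __stack_chk_guard;

    /* called when a smashed stack is detected */
    __attribute__((noreturn))
    void __stack_chk_fail(void)
    {
            exit(127);      /* nolibc's real handler also reports the failure */
    }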
2025-06-10 | bpf: adjust path to trace_output sample eBPF program | Tobias Klauser | 1 file, -1/+1
The sample file was renamed from trace_output_kern.c to trace_output.bpf.c in commit d4fffba4d04b ("samples/bpf: Change _kern suffix to .bpf with syscall tracing program"). Adjust the path in the documentation comment for bpf_perf_event_output. Signed-off-by: Tobias Klauser <tklauser@distanz.ch> Link: https://lore.kernel.org/r/20250610140756.16332-1-tklauser@distanz.ch Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-06-10 | perf: Fix libjvmti.c sign compare error | Yuzhuo Jing | 1 file, -2/+2
Fix the compile errors when compiling with -Werror=sign-compare. This is a follow-up patch to a previous patch series for a separate issue. Link: https://lore.kernel.org/lkml/aC9lXhPFcs5fkHWH@x1/ Signed-off-by: Yuzhuo Jing <yuzhuo@google.com> Reviewed-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250604173632.2362759-1-yuzhuo@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-10 | perf script: perf script tests fails with segfault | Aditya Bodkhe | 2 files, -4/+9
perf script tests fail with a segmentation fault as below: 92: perf script tests: --- start --- test child forked, pid 103769 DB test [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.012 MB /tmp/perf-test-script.7rbftEpOzX/perf.data (9 samples) ] /usr/libexec/perf-core/tests/shell/script.sh: line 35: 103780 Segmentation fault (core dumped) perf script -i "${perfdatafile}" -s "${db_test}" --- Cleaning up --- ---- end(-1) ---- 92: perf script tests : FAILED! The backtrace pointed to: #0 0x0000000010247dd0 in maps.machine () #1 0x00000000101d178c in db_export.sample () #2 0x00000000103412c8 in python_process_event () #3 0x000000001004eb28 in process_sample_event () #4 0x000000001024fcd0 in machines.deliver_event () #5 0x000000001025005c in perf_session.deliver_event () #6 0x00000000102568b0 in __ordered_events__flush.part.0 () #7 0x0000000010251618 in perf_session.process_events () #8 0x0000000010053620 in cmd_script () #9 0x00000000100b5a28 in run_builtin () #10 0x00000000100b5f94 in handle_internal_command () #11 0x0000000010011114 in main () Further investigation reveals that this occurs in the `perf script tests` because they use the `db_test.py` script. This script sets `perf_db_export_mode = True`. With `perf_db_export_mode` enabled, if a sample originates from a hypervisor, perf doesn't set maps for the "[H]" sample in the code. Consequently, `al->maps` remains NULL when `maps__machine(al->maps)` is called from `db_export__sample`. As al->maps can be NULL in the case of hypervisor samples, use thread->maps, because even for a hypervisor sample the machine should exist. If we don't have a machine for some reason, return -1 to avoid the segmentation fault. Reported-by: Disha Goel <disgoel@linux.ibm.com> Signed-off-by: Aditya Bodkhe <aditya.b1@linux.ibm.com> Reviewed-by: Adrian Hunter <adrian.hunter@intel.com> Tested-by: Disha Goel <disgoel@linux.ibm.com> Link: https://lore.kernel.org/r/20250429065132.36839-1-adityab1@linux.ibm.com Suggested-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
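A hedged sketch of the guard described above (the actual change lives in perf's db-export path and may differ in detail):

    /* Hypervisor samples carry no al->maps, so resolve the machine via the
     * thread's maps and bail out softly if it still cannot be found. */
    struct maps *maps = al->maps ?: thread__maps(al->thread);
    struct machine *machine = maps ? maps__machine(maps) : NULL;

    if (!machine)
            return -1;      /* avoid dereferencing NULL and segfaulting */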
2025-06-10 | tools/power turbostat: regression fix: --show C1E% | Len Brown | 1 file, -4/+0
The new default idle counter groupings broke "--show C1E%" (or any other C-state %). Also delete a stray debug printf from the same offending commit. Reported-by: Zhang Rui <rui.zhang@intel.com> Fixes: ec4acd3166d8 ("tools/power turbostat: disable "cpuidle" invocation counters, by default") Signed-off-by: Len Brown <len.brown@intel.com>
2025-06-10 | selftests/bpf: Add test for Spectre v1 mitigation | Luis Gerhorst | 1 file, -0/+57
This is based on the gadget from the description of commit 9183671af6db ("bpf: Fix leakage under speculation on mispredicted branches"). Signed-off-by: Luis Gerhorst <luis.gerhorst@fau.de> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20250603212814.338867-1-luis.gerhorst@fau.de Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-06-10 | bpf: Fall back to nospec for Spectre v1 | Luis Gerhorst | 9 files, -50/+109
This implements the core of the series and causes the verifier to fall back to mitigating Spectre v1 using speculation barriers. The approach was presented at LPC'24 [1] and RAID'24 [2]. If we find any forbidden behavior on a speculative path, we insert a nospec (e.g., lfence speculation barrier on x86) before the instruction and stop verifying the path. While verifying a speculative path, we can furthermore stop verification of that path whenever we encounter a nospec instruction. A minimal example program would look as follows: A = true B = true if A goto e f() if B goto e unsafe() e: exit There are the following speculative and non-speculative paths (`cur->speculative` and `speculative` referring to the value of the push_stack() parameters): - A = true - B = true - if A goto e - A && !cur->speculative && !speculative - exit - !A && !cur->speculative && speculative - f() - if B goto e - B && cur->speculative && !speculative - exit - !B && cur->speculative && speculative - unsafe() If f() contains any unsafe behavior under Spectre v1 and the unsafe behavior matches `state->speculative && error_recoverable_with_nospec(err)`, do_check() will now add a nospec before f() instead of rejecting the program: A = true B = true if A goto e nospec f() if B goto e unsafe() e: exit Alternatively, the algorithm also takes advantage of nospec instructions inserted for other reasons (e.g., Spectre v4). Taking the program above as an example, speculative path exploration can stop before f() if a nospec was inserted there because of Spectre v4 sanitization. In this example, all instructions after the nospec are dead code (and with the nospec they are also dead code speculatively). For this, it relies on the fact that speculation barriers generally prevent all later instructions from executing if the speculation was not correct: * On Intel x86_64, lfence acts as full speculation barrier, not only as a load fence [3]: An LFENCE instruction or a serializing instruction will ensure that no later instructions execute, even speculatively, until all prior instructions complete locally. [...] Inserting an LFENCE instruction after a bounds check prevents later operations from executing before the bound check completes. This was experimentally confirmed in [4]. * On AMD x86_64, lfence is dispatch-serializing [5] (requires MSR C001_1029[1] to be set if the MSR is supported, this happens in init_amd()). AMD further specifies "A dispatch serializing instruction forces the processor to retire the serializing instruction and all previous instructions before the next instruction is executed" [8]. As dispatch is not specific to memory loads or branches, lfence therefore also affects all instructions there. Also, if retiring a branch means it's PC change becomes architectural (should be), this means any "wrong" speculation is aborted as required for this series. * ARM's SB speculation barrier instruction also affects "any instruction that appears later in the program order than the barrier" [6]. * PowerPC's barrier also affects all subsequent instructions [7]: [...] executing an ori R31,R31,0 instruction ensures that all instructions preceding the ori R31,R31,0 instruction have completed before the ori R31,R31,0 instruction completes, and that no subsequent instructions are initiated, even out-of-order, until after the ori R31,R31,0 instruction completes. 
The ori R31,R31,0 instruction may complete before storage accesses associated with instructions preceding the ori R31,R31,0 instruction have been performed Regarding the example, this implies that `if B goto e` will not execute before `if A goto e` completes. Once `if A goto e` completes, the CPU should find that the speculation was wrong and continue with `exit`. If there is any other path that leads to `if B goto e` (and therefore `unsafe()`) without going through `if A goto e`, then a nospec will still be needed there. However, this patch assumes this other path will be explored separately and therefore be discovered by the verifier even if the exploration discussed here stops at the nospec. This patch furthermore has the unfortunate consequence that Spectre v1 mitigations now only support architectures which implement BPF_NOSPEC. Before this commit, Spectre v1 mitigations prevented exploits by rejecting the programs on all architectures. Because some JITs do not implement BPF_NOSPEC, this patch therefore may regress unpriv BPF's security to a limited extent: * The regression is limited to systems vulnerable to Spectre v1, have unprivileged BPF enabled, and do NOT emit insns for BPF_NOSPEC. The latter is not the case for x86 64- and 32-bit, arm64, and powerpc 64-bit and they are therefore not affected by the regression. According to commit a6f6a95f2580 ("LoongArch, bpf: Fix jit to skip speculation barrier opcode"), LoongArch is not vulnerable to Spectre v1 and therefore also not affected by the regression. * To the best of my knowledge this regression may therefore only affect MIPS. This is deemed acceptable because unpriv BPF is still disabled there by default. As stated in a previous commit, BPF_NOSPEC could be implemented for MIPS based on GCC's speculation_barrier implementation. * It is unclear which other architectures (besides x86 64- and 32-bit, ARM64, PowerPC 64-bit, LoongArch, and MIPS) supported by the kernel are vulnerable to Spectre v1. Also, it is not clear if barriers are available on these architectures. Implementing BPF_NOSPEC on these architectures therefore is non-trivial. Searching GCC and the kernel for speculation barrier implementations for these architectures yielded no result. * If any of those regressed systems is also vulnerable to Spectre v4, the system was already vulnerable to Spectre v4 attacks based on unpriv BPF before this patch and the impact is therefore further limited. As an alternative to regressing security, one could still reject programs if the architecture does not emit BPF_NOSPEC (e.g., by removing the empty BPF_NOSPEC-case from all JITs except for LoongArch where it appears justified). However, this will cause rejections on these archs that are likely unfounded in the vast majority of cases. In the tests, some are now successful where we previously had a false-positive (i.e., rejection). Change them to reflect where the nospec should be inserted (using __xlated_unpriv) and modify the error message if the nospec is able to mitigate a problem that previously shadowed another problem (in that case __xlated_unpriv does not work, therefore just add a comment). Define SPEC_V1 to avoid duplicating this ifdef whenever we check for nospec insns using __xlated_unpriv, define it here once. This also improves readability. PowerPC can probably also be added here. However, omit it for now because the BPF CI currently does not include a test. 
Limit it to EPERM, EACCES, and EINVAL (and not everything except for EFAULT and ENOMEM) as it already has the desired effect for most real-world programs. Briefly went through all the occurrences of EPERM, EINVAL, and EACCES in verifier.c to validate that catching them like this makes sense. Thanks to Dustin for their help in checking the vendor documentation. [1] https://lpc.events/event/18/contributions/1954/ ("Mitigating Spectre-PHT using Speculation Barriers in Linux eBPF") [2] https://arxiv.org/pdf/2405.00078 ("VeriFence: Lightweight and Precise Spectre Defenses for Untrusted Linux Kernel Extensions") [3] https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/runtime-speculative-side-channel-mitigations.html ("Managed Runtime Speculative Execution Side Channel Mitigations") [4] https://dl.acm.org/doi/pdf/10.1145/3359789.3359837 ("Speculator: a tool to analyze speculative execution attacks and mitigations" - Section 4.6 "Stopping Speculative Execution") [5] https://www.amd.com/content/dam/amd/en/documents/processor-tech-docs/programmer-references/software-techniques-for-managing-speculation.pdf ("White Paper - SOFTWARE TECHNIQUES FOR MANAGING SPECULATION ON AMD PROCESSORS - REVISION 5.09.23") [6] https://developer.arm.com/documentation/ddi0597/2020-12/Base-Instructions/SB--Speculation-Barrier- ("SB - Speculation Barrier - Arm Armv8-A A32/T32 Instruction Set Architecture (2020-12)") [7] https://wiki.raptorcs.com/w/images/5/5f/OPF_PowerISA_v3.1C.pdf ("Power ISA™ - Version 3.1C - May 26, 2024 - Section 9.2.1 of Book III") [8] https://www.amd.com/content/dam/amd/en/documents/processor-tech-docs/programmer-references/40332.pdf ("AMD64 Architecture Programmer’s Manual Volumes 1–5 - Revision 4.08 - April 2024 - 7.6.4 Serializing Instructions") Signed-off-by: Luis Gerhorst <luis.gerhorst@fau.de> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Acked-by: Henriette Herzog <henriette.herzog@rub.de> Cc: Dustin Nguyen <nguyen@cs.fau.de> Cc: Maximilian Ott <ott@cs.fau.de> Cc: Milan Stephan <milan.stephan@fau.de> Link: https://lore.kernel.org/r/20250603212428.338473-1-luis.gerhorst@fau.de Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-06-10 | bpftool: Display cookie for tracing link probe | Tao Chen | 1 file, -0/+3
Display cookie for tracing link probe, in plain mode: #bpftool link 5: tracing prog 34 prog_type tracing attach_type trace_fentry target_obj_id 1 target_btf_id 60355 cookie 4503599627370496 pids test_progs(176) And in json mode: #bpftool link -j | jq { "id": 5, "type": "tracing", "prog_id": 34, "prog_type": "tracing", "attach_type": "trace_fentry", "target_obj_id": 1, "target_btf_id": 60355, "cookie": 4503599627370496, "pids": [ { "pid": 176, "comm": "test_progs" } ] } Signed-off-by: Tao Chen <chen.dylane@linux.dev> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20250606165818.3394397-3-chen.dylane@linux.dev
2025-06-10 | selftests/bpf: Add cookies check for tracing fill_link_info test | Tao Chen | 1 file, -1/+23
Adding tests for getting cookie with fill_link_info for tracing. Signed-off-by: Tao Chen <chen.dylane@linux.dev> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20250606165818.3394397-2-chen.dylane@linux.dev
2025-06-10 | bpf: Add cookie to tracing bpf_link_info | Tao Chen | 1 file, -0/+2
bpf_tramp_link includes cookie info, we can add it in bpf_link_info. Signed-off-by: Tao Chen <chen.dylane@linux.dev> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20250606165818.3394397-1-chen.dylane@linux.dev
2025-06-10 | selftests/bpf: Add test cases with CONST_PTR_TO_MAP null checks | Ihor Solodrai | 1 file, -0/+118
A test requires the following to happen: * CONST_PTR_TO_MAP value is checked for null * the code in the null branch fails verification Add test cases: * direct global map_ptr comparison to null * lookup inner map, then two checks (the first transforms map_value_or_null into map_ptr) * lookup inner map, spill-fill it, then check for null * use an array of ringbufs to recreate a common coding pattern [1] [1] https://lore.kernel.org/bpf/CAEf4BzZNU0gX_sQ8k8JaLe1e+Veth3Rk=4x7MDhv=hQxvO8EDw@mail.gmail.com/ Suggested-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Ihor Solodrai <isolodrai@meta.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20250609183024.359974-4-isolodrai@meta.com
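The "array of ringbufs" pattern referenced in [1], as a hedged BPF C sketch (map and program names are illustrative, not the actual selftest):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct ringbuf_map {
            __uint(type, BPF_MAP_TYPE_RINGBUF);
            __uint(max_entries, 4096);
    } rb0 SEC(".maps");

    struct {
            __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
            __uint(max_entries, 4);
            __type(key, __u32);
            __array(values, struct ringbuf_map);
    } ringbufs SEC(".maps") = {
            .values = { [0] = &rb0 },
    };

    SEC("tp/syscalls/sys_enter_getpgid")
    int use_inner_ringbuf(void *ctx)
    {
            __u32 key = bpf_get_smp_processor_id() & 3;
            void *rb = bpf_map_lookup_elem(&ringbufs, &key);

            /* the inner-map lookup yields a map-pointer-or-null; the verifier
             * must see this check before the pointer is used */
            if (!rb)
                    return 0;

            bpf_ringbuf_output(rb, &key, sizeof(key), 0);
            return 0;
    }

    char _license[] SEC("license") = "GPL";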
2025-06-10 | selftests/bpf: Add cmp_map_pointer_with_const test | Ihor Solodrai | 1 file, -1/+16
Add a test for CONST_PTR_TO_MAP comparison with a non-0 constant. A BPF program with this code must not pass verification in unpriv. Signed-off-by: Ihor Solodrai <isolodrai@meta.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20250609183024.359974-3-isolodrai@meta.com
2025-06-10 | bpf: Make reg_not_null() true for CONST_PTR_TO_MAP | Ihor Solodrai | 1 file, -1/+1
When reg->type is CONST_PTR_TO_MAP, it cannot be null. However, the verifier explores the branches under rX == 0 in check_cond_jmp_op() even if reg->type is CONST_PTR_TO_MAP, because it was not checked for in reg_not_null(). Fix this by adding CONST_PTR_TO_MAP to the set of types that are considered non-nullable in reg_not_null(). An old "unpriv: cmp map pointer with zero" selftest fails with this change, because the early out now correctly triggers in check_cond_jmp_op(), making the verification pass. In practice the verifier may allow pointer-to-null comparison in unpriv, since in many cases the relevant branch and comparison op are removed as dead code. So change the expected test result to __success_unpriv. Signed-off-by: Ihor Solodrai <isolodrai@meta.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20250609183024.359974-2-isolodrai@meta.com
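A simplified, hedged sketch of the described change (the real verifier helper covers more pointer types than shown here):

    /* kernel/bpf/verifier.c, simplified: registers of these types can never
     * be NULL, so check_cond_jmp_op() need not explore the == 0 branch. */
    static bool reg_not_null(const struct bpf_reg_state *reg)
    {
            enum bpf_reg_type type = base_type(reg->type);

            if (type_may_be_null(reg->type))
                    return false;

            return type == PTR_TO_SOCKET ||
                   type == PTR_TO_MAP_VALUE ||
                   type == PTR_TO_MEM ||
                   type == CONST_PTR_TO_MAP;   /* the addition described above */
    }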
2025-06-10 | selftests/bpf: Add two selftests for mprog API based cgroup progs | Yonghong Song | 3 files, -0/+724
Two tests are added: - cgroup_mprog_opts, which mimics tc_opts.c ([1]). Both prog and link attach are tested. Some negative tests are also included. - cgroup_mprog_ordering, which actually runs the program with some mprog API flags. [1] https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/prog_tests/tc_opts.c Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20250606163156.2429955-1-yonghong.song@linux.dev
2025-06-10 | selftests/bpf: Move some tc_helpers.h functions to test_progs.h | Yonghong Song | 2 files, -28/+28
Move static inline functions id_from_prog_fd() and id_from_link_fd() from prog_tests/tc_helpers.h to test_progs.h so these two functions can be reused for later cgroup mprog selftests. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20250606163151.2429325-1-yonghong.song@linux.dev
2025-06-10 | libbpf: Support link-based cgroup attach with options | Yonghong Song | 5 files, -0/+93
Currently libbpf supports bpf_program__attach_cgroup() with the signature: LIBBPF_API struct bpf_link * bpf_program__attach_cgroup(const struct bpf_program *prog, int cgroup_fd); To support mprog-style attachment, additional fields like flags, relative_{fd,id} and expected_revision are needed. Add a new API: LIBBPF_API struct bpf_link * bpf_program__attach_cgroup_opts(const struct bpf_program *prog, int cgroup_fd, const struct bpf_cgroup_opts *opts); where bpf_cgroup_opts contains all the fields above. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20250606163146.2429212-1-yonghong.song@linux.dev
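A hypothetical usage sketch of the new API (the flag and field names follow the commit message and the mprog UAPI; the surrounding function, program, and fd names are made up):

    #include <bpf/libbpf.h>

    /* Attach 'prog' to the cgroup, ordered before the program that is
     * already attached via 'other_prog_fd'. */
    static struct bpf_link *attach_before(struct bpf_program *prog,
                                          int cgroup_fd, int other_prog_fd)
    {
            LIBBPF_OPTS(bpf_cgroup_opts, opts,
                    .flags = BPF_F_BEFORE,
                    .relative_fd = other_prog_fd,
            );

            return bpf_program__attach_cgroup_opts(prog, cgroup_fd, &opts);
    }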
2025-06-10 | bpf: Implement mprog API on top of existing cgroup progs | Yonghong Song | 1 file, -0/+7
Current cgroup prog ordering is appending at attachment time. This is not ideal. In some cases, users want specific ordering at a particular cgroup level. To address this, the existing mprog API seems an ideal solution with supporting BPF_F_BEFORE and BPF_F_AFTER flags. But there are a few obstacles to directly use kernel mprog interface. Currently cgroup bpf progs already support prog attach/detach/replace and link-based attach/detach/replace. For example, in struct bpf_prog_array_item, the cgroup_storage field needs to be together with bpf prog. But the mprog API struct bpf_mprog_fp only has bpf_prog as the member, which makes it difficult to use kernel mprog interface. In another case, the current cgroup prog detach tries to use the same flag as in attach. This is different from mprog kernel interface which uses flags passed from user space. So to avoid modifying existing behavior, I made the following changes to support mprog API for cgroup progs: - The support is for prog list at cgroup level. Cross-level prog list (a.k.a. effective prog list) is not supported. - Previously, BPF_F_PREORDER is supported only for prog attach, now BPF_F_PREORDER is also supported by link-based attach. - For attach, BPF_F_BEFORE/BPF_F_AFTER/BPF_F_ID/BPF_F_LINK is supported similar to kernel mprog but with different implementation. - For detach and replace, use the existing implementation. - For attach, detach and replace, the revision for a particular prog list, associated with a particular attach type, will be updated by increasing count by 1. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20250606163141.2428937-1-yonghong.song@linux.dev
2025-06-09 | perf test trace: Change the regex pattern in the struct test | Howard Chu | 1 file, -1/+1
Ian mentioned a reliably occurring failure in the trace_btf_general test where he obtained trace output of: sleep/279619 clock_nanosleep(0, 0, {1,1,}, 0x7ffcd47b6450) = 0 But the regex pattern used for verification is "^sleep/[0-9]+ clock_nanosleep\(0, 0, \{1,\}, ..." This led to a mismatch. The reason is that different sleep commands use different timespec data to call clock_nanosleep; on my machine, the value of tv_nsec is 0. ~~~ $ sudo /tmp/perf/perf trace -e clock_nanosleep -- sleep 1 0.000 (1000.196 ms): sleep/54261 clock_nanosleep(rqtp: { .tv_sec: 1, .tv_nsec: 0 }, rmtp: 0x7ffe13529550) = 0 ~~~ While Ian had this trace log: ~~~ $ sudo /tmp/perf/perf trace -e clock_nanosleep -- sleep 1 0.000 (1000.208 ms): sleep/1710732 clock_nanosleep(rqtp: { .tv_sec: 1, .tv_nsec: 1 }, rmtp: 0x7ffc091f4090) = 0 ~~~ Because sleep's behavior of setting 'tv_nsec' is not certain, and tv_sec is most definitely 1, this patch relaxes the key regex pattern to '\{1,.*\}' for a better chance of matching. Signed-off-by: Howard Chu <howardchu95@gmail.com> Tested-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20250528191148.89118-7-howardchu95@gmail.com Reported-by: Ian Rogers <irogers@google.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-09 | perf test trace: Use --sort-events in BTF general tests | Howard Chu | 1 file, -3/+3
Without the '--sort-events' flag, perf trace doesn't receive and process events based on their arrival time, thus PERF_RECORD_COMM event that assigns the correct comm to a PID, may be delivered and processed after regular samples, causing trace outputs not having a 'comm', e.g. 'mv', instead, having the default PID placeholder, e.g. ':14514'. Hopefully this answers Namhyung's question in [1]. You can simply justify the statement with this diff: [2]. Now, simply run this command multiple times: $ touch /tmp/file1 && sudo /tmp/perf trace -e renameat* -- mv /tmp/file1 /tmp/file2 && rm -f /tmp/file2 And you should see two types of results: $ touch /tmp/file1 && sudo /tmp/perf trace -e renameat* -- mv /tmp/file1 /tmp/file2 && rm -f /tmp/file2 [debug] deliver [debug] machine__process_comm_event [OVERRIDE] old :1221169 new mv str mv [debug] deliver [debug] deliver [debug] deliver [debug] deliver [debug] deliver [debug] deliver [debug] deliver [debug] deliver [debug] deliver [debug] deliver 0.000 ( 0.013 ms): mv/1221169 renameat2(olddfd: CWD, oldname: "/tmp/file1", newdfd: CWD, newname: "/tmp/file2", flags: NOREPLACE) = 0 [debug] deliver $ touch /tmp/file1 && sudo /tmp/perf trace -e renameat* -- mv /tmp/file1 /tmp/file2 && rm -f /tmp/file2 [debug] deliver [debug] deliver [debug] deliver [debug] deliver [debug] deliver [debug] deliver [debug] deliver 0.000 ( 0.014 ms): :1221398/1221398 renameat2(olddfd: CWD, oldname: "/tmp/file1", newdfd: CWD, newname: "/tmp/file2", flags: NOREPLACE) = 0 [debug] deliver [debug] deliver [debug] machine__process_comm_event [OVERRIDE] old :1221398 new mv str mv [debug] deliver [debug] deliver [debug] deliver Anyway, use --sort-events in BTF general tests to avoid :PID, a comm is preferred. [1]: https://lore.kernel.org/linux-perf-users/Z_AeswETE5xLcPT8@google.com/ [2]: https://gist.githubusercontent.com/Sberm/6b72b2a1cf1c62244f1f996481769baf/raw/529667bd74a2e7e1953bbd4be545bf875da8a3e7/unsorted.patch Signed-off-by: Howard Chu <howardchu95@gmail.com> Tested-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20250528191148.89118-6-howardchu95@gmail.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-09 | perf test trace: Remove set -e for BTF general tests | Howard Chu | 1 file, -10/+9
Remove set -e and print error messages in BTF general tests. Before: $ sudo /tmp/perf test btf -vv 108: perf trace BTF general tests: 108: perf trace BTF general tests : Running --- start --- test child forked, pid 889299 Checking if vmlinux BTF exists Testing perf trace's string augmentation String augmentation test failed ---- end(-1) ---- 108: perf trace BTF general tests : FAILED! After: $ sudo /tmp/perf test btf -vv 108: perf trace BTF general tests: 108: perf trace BTF general tests : Running --- start --- test child forked, pid 886551 Checking if vmlinux BTF exists Testing perf trace's string augmentation String augmentation test failed, output: :886566/886566 renameat2(CWD, "/tmp/file1_RcMa", CWD, "/tmp/file2_RcMa", NOREPLACE) = 0---- end(-1) ---- 108: perf trace BTF general tests : FAILED! Signed-off-by: Howard Chu <howardchu95@gmail.com> Tested-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20250528191148.89118-5-howardchu95@gmail.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-09 | perf test trace: Stop tracing hrtimer_setup event in trace enum test | Howard Chu | 1 file, -1/+1
The event 'timer:hrtimer_setup' is relatively new, for older kernels, perf trace enum tests won't run as the event 'timer:hrtimer_setup' cannot be found. It was originally called 'timer:hrtimer_init', before being renamed in: commit 244132c4e577 ("tracing/timers: Rename the hrtimer_init event to hrtimer_setup") Using timer:hrtimer_start should be enough for current testing, and hopefully 'start' won't be renamed in the future. Before: $ sudo /tmp/perf test enum -vv 107: perf trace enum augmentation tests: 107: perf trace enum augmentation tests : Running --- start --- test child forked, pid 786187 Checking if vmlinux exists Tracing syscall landlock_add_rule Tracing non-syscall tracepoint timer:hrtimer_setup,timer:hrtimer_start [tracepoint failure] Failed to trace timer:hrtimer_setup,timer:hrtimer_start tracepoint, output: event syntax error: 'timer:hrtimer_setup,timer:hrtimer_start' \___ unknown tracepoint Error: File /sys/kernel/tracing//events/timer/hrtimer_setup not found. Hint: Perhaps this kernel misses some CONFIG_ setting to enable this feature?. Run 'perf list' for a list of valid events Usage: perf trace [<options>] [<command>] or: perf trace [<options>] -- <command> [<options>] or: perf trace record [<options>] [<command>] or: perf trace record [<options>] -- <command> [<options>] -e, --event <event> event/syscall selector. use 'perf list' to list available events ---- end(-1) ---- 107: perf trace enum augmentation tests : FAILED! After: $ sudo /tmp/perf test enum -vv 107: perf trace enum augmentation tests: 107: perf trace enum augmentation tests : Running --- start --- test child forked, pid 808547 Checking if vmlinux exists Tracing syscall landlock_add_rule Tracing non-syscall tracepoint timer:hrtimer_start ---- end(0) ---- 107: perf trace enum augmentation tests : Ok Signed-off-by: Howard Chu <howardchu95@gmail.com> Tested-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20250528191148.89118-4-howardchu95@gmail.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-09 | perf test trace: Remove set -e and print trace test's error messages | Howard Chu | 1 file, -6/+7
Currently perf test utilizes the set -e option in shell that exit immediately if a command exits with a non-zero status, this prevents further error handling and introduces ambiguity. This patch removes set -e and prints the error message after invoking perf trace during perf tests. In my case, the command that exits with a non-zero status is perf trace instead of grep, because it can't find the 'timer:hrtimer_setup' tracepoint, see below. Before: $ sudo /tmp/perf test enum -vv 107: perf trace enum augmentation tests: 107: perf trace enum augmentation tests : Running --- start --- test child forked, pid 783533 Checking if vmlinux exists Tracing syscall landlock_add_rule Tracing non-syscall tracepoint syscall ---- end(-1) ---- 107: perf trace enum augmentation tests : FAILED! After: $ sudo /tmp/perf test enum -vv 107: perf trace enum augmentation tests: 107: perf trace enum augmentation tests : Running --- start --- test child forked, pid 851658 Checking if vmlinux exists Tracing syscall landlock_add_rule Tracing non-syscall tracepoint timer:hrtimer_setup,timer:hrtimer_start [tracepoint failure] Failed to trace tracepoint timer:hrtimer_setup,timer:hrtimer_start, output: event syntax error: 'timer:hrtimer_setup,timer:hrtimer_start' \___ unknown tracepoint Error: File /sys/kernel/tracing//events/timer/hrtimer_setup not found. Hint: Perhaps this kernel misses some CONFIG_ setting to enable this feature?. Run 'perf list' for a list of valid events Usage: perf trace [<options>] [<command>] or: perf trace [<options>] -- <command> [<options>] or: perf trace record [<options>] [<command>] or: perf trace record [<options>] -- <command> [<options>] -e, --event <event> event/syscall selector. use 'perf list' to list available events---- end(-1) ---- 107: perf trace enum augmentation tests : FAILED! Signed-off-by: Howard Chu <howardchu95@gmail.com> Tested-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20250528191148.89118-3-howardchu95@gmail.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-09 | perf test trace: Use shell's -f flag to check if vmlinux exists | Howard Chu | 1 file, -1/+1
To match the style of the existing codebase, no functional changes were applied. Signed-off-by: Howard Chu <howardchu95@gmail.com> Tested-by: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20250528191148.89118-2-howardchu95@gmail.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-09 | perf trace: Remove --map-dump documentation | Howard Chu | 1 file, -8/+0
The --map-dump option was removed in 5e6da6be3082 ("perf trace: Migrate BPF augmentation to use a skeleton"), this patch removes its remaining documentation. Fixes: 5e6da6be3082 ("perf trace: Migrate BPF augmentation to use a skeleton") Signed-off-by: Howard Chu <howardchu95@gmail.com> Link: https://lore.kernel.org/r/20250601173252.717780-1-howardchu95@gmail.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-09 | tools/build: Remove some unused libbpf pre-1.0 feature test logic | Ian Rogers | 2 files, -27/+0
Commit 76a97cf2e169 ("perf build: Remove libbpf pre-1.0 feature tests") removed the libbpf feature test logic used by perf in favor of using LIBBPF_MAJOR_VERSION. Remove some build targets that should have been removed as part of that clean up. Fixes: 76a97cf2e169 ("perf build: Remove libbpf pre-1.0 feature tests") Signed-off-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250603221358.2562167-1-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-09 | perf thread_map: Remove uid options | Ian Rogers | 12 files, -48/+20
Now that the target doesn't have a uid and it is handled through BPF filters, remove the uid options from thread_map creation. Tidy up the functions used in tests to avoid passing unused arguments. Signed-off-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250604174545.2853620-11-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-09 | perf target: Remove uid from target | Ian Rogers | 13 files, -73/+6
Gathering threads with a uid by scanning /proc is inherently racy leading to perf_event_open failures that quit perf. All users of the functionality now use BPF filters, so remove uid and uid_str from target. Signed-off-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250604174545.2853620-10-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-09 | perf bench evlist-open-close: Switch user option to use BPF filter | Ian Rogers | 1 file, -15/+21
Finding user processes by scanning /proc is inherently racy and results in perf_event_open failures. Use a BPF filter to drop samples where the uid doesn't match. Signed-off-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250604174545.2853620-9-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-09 | perf trace: Switch user option to use BPF filter | Ian Rogers | 1 file, -9/+17
Finding user processes by scanning /proc is inherently racy and results in perf_event_open failures. Use a BPF filter to drop samples where the uid doesn't match. Ensure adding the BPF filter forces system-wide. Signed-off-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250604174545.2853620-8-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-09 | perf top: Switch user option to use BPF filter | Ian Rogers | 3 files, -12/+15
Finding user processes by scanning /proc is inherently racy and results in perf_event_open failures. Use a BPF filter to drop samples where the uid doesn't match. Signed-off-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250604174545.2853620-7-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-09 | perf tests record: Add basic uid filtering test | Ian Rogers | 1 file, -0/+26
Based on the system-wide test, with changes around how failure is handled, as BPF permissions are a bigger issue than perf event paranoia. Signed-off-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250604174545.2853620-6-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-09 | perf record: Switch user option to use BPF filter | Ian Rogers | 1 file, -11/+16
Finding user processes by scanning /proc is inherently racy and results in perf_event_open failures. Use a BPF filter to drop samples where the uid doesn't match. Ensure adding the BPF filter forces system-wide. Signed-off-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250604174545.2853620-5-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-09 | perf parse-events: Add parse_uid_filter helper | Ian Rogers | 2 files, -1/+34
Add parse_uid_filter as a helper to parse_filter that constructs a uid filter string. As uid filters don't work with tracepoint filters, add an is_possible_tp_filter function so the tracepoint filter isn't attempted for tracepoint evsels. Signed-off-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250604174545.2853620-4-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-09 | perf target: Separate parse_uid into its own function | Ian Rogers | 2 files, -11/+14
Allow parse_uid to be called without a struct target. Rather than have two errors, remove TARGET_ERRNO__USER_NOT_FOUND and use TARGET_ERRNO__INVALID_UID as the handling is identical. Signed-off-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250604174545.2853620-3-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-09 | perf parse-events filter: Use evsel__find_pmu | Ian Rogers | 1 file, -10/+4
Rather than manually scanning PMUs, use evsel__find_pmu that can use the PMU set during event parsing. Signed-off-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250604174545.2853620-2-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-06-09 | perf bpf-filter: Improve error messages | Namhyung Kim | 4 files, -1/+36
The BPF filter needs libbpf/BPF-skeleton support and root privilege. Add error messages to help users understand the problem easily. When it's not built with BPF support (make BUILD_BPF_SKEL=0): $ sudo perf record -e cycles --filter "pid != 0" true Error: BPF filter is requested but perf is not built with BPF. Please make sure to build with libbpf and BPF skeleton. Usage: perf record [<options>] [<command>] or: perf record [<options>] -- <command> [<options>] --filter <filter> event filter When it supports BPF but runs without root or CAP_BPF (note that it also checks pinned BPF filters): $ perf record -e cycles --filter "pid != 0" -o /dev/null true Error: BPF filter only works for users with the CAP_BPF capability! Please run 'perf record --setup-filter pin' as root first. Usage: perf record [<options>] [<command>] or: perf record [<options>] -- <command> [<options>] --filter <filter> event filter Reviewed-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250604174835.1852481-1-namhyung@kernel.org Signed-off-by: Namhyung Kim <namhyung@kernel.org>