path: root/tools
Age | Commit message | Author | Files | Lines
2022-07-27 | selftests/bpf: Copy over libbpf configs | Daniel Müller | 4 | -0/+471
This change integrates libbpf maintained configurations and black/white lists [0] into the repository, co-located with the BPF selftests themselves. We minimize the kernel configurations to keep future updates as small as possible [1]. Furthermore, we make both kernel configurations build on top of the existing configuration tools/testing/selftests/bpf/config (to be concatenated before build). Lastly, we replaced the terms blacklist & whitelist with denylist and allowlist, respectively. [0] https://github.com/libbpf/libbpf/tree/20f03302350a4143825cedcbd210c4d7112c1898/travis-ci/vmtest/configs [1] https://lore.kernel.org/bpf/20220712212124.3180314-1-deso@posteo.net/T/#m30a53648352ed494e556ac003042a9ad0a8f98c6 Signed-off-by: Daniel Müller <deso@posteo.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Mykola Lysenko <mykolal@fb.com> Link: https://lore.kernel.org/bpf/20220727001156.3553701-3-deso@posteo.net
2022-07-27 | selftests/bpf: Sort configuration | Daniel Müller | 1 | -50/+49
This change makes sure to sort the existing minimal kernel configuration containing options required for running BPF selftests alphabetically. Doing so will make it easier to diff it against other configurations, which in turn helps with maintaining disjunct config files that build on top of each other. It also helped identify that CONFIG_IPV6_GRE was set twice; one of the occurrences is removed. Lastly, we change NET_CLS_BPF from 'm' to 'y'. Having this option as 'm' will cause failures of the btf_skc_cls_ingress selftest. Signed-off-by: Daniel Müller <deso@posteo.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Mykola Lysenko <mykolal@fb.com> Link: https://lore.kernel.org/bpf/20220727001156.3553701-2-deso@posteo.net
2022-07-27 | perf bpf: Remove undefined behavior from bpf_perf_object__next() | Ian Rogers | 1 | -11/+7
bpf_perf_object__next() folded the last element in the list test with the empty list test. However, this meant that offsets were computed against null and that a struct list_head was compared against a 'struct bpf_perf_object'. Working around this with clang's undefined behavior sanitizer required -fno-sanitize=null and -fno-sanitize=object-size. Remove the undefined behavior by using the regular Linux list APIs and handling the starting case separately from the end testing case. Looking at uses like bpf_perf_object__for_each(), as the constant NULL or non-NULL argument can be constant propagated, the code is no less efficient. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andrii Nakryiko <andrii@kernel.org> Cc: Christy Lee <christylee@fb.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Miaoqian Lin <linmq006@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Tom Rix <trix@redhat.com> Cc: bpf@vger.kernel.org Cc: llvm@lists.linux.dev Link: https://lore.kernel.org/r/20220726220921.2567761-1-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-07-27 | perf symbol: Skip symbols if SHF_ALLOC flag is not set | Leo Yan | 1 | -0/+11
Some symbols are observed with the 'st_value' field zeroed. E.g. libc.so.6 in Ubuntu contains a symbol '__evoke_link_warning_getwd' which resides in the '.gnu.warning.getwd' section. Unlike normal sections, such sections are used for linker warnings when a file calls deprecated functions; they are not part of the memory image, so the symbols in these sections should be dropped. This patch checks the section attribute SHF_ALLOC bit; if the bit is not set, the symbol is skipped to avoid spurious ones. Suggested-by: Fangrui Song <maskray@google.com> Signed-off-by: Leo Yan <leo.yan@linaro.org> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Chang Rui <changruinj@gmail.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220724060013.171050-3-leo.yan@linaro.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
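For illustration only (not the actual perf code, helper name made up), a libelf sketch of the kind of SHF_ALLOC check described above:

  #include <gelf.h>
  #include <stdbool.h>

  /* True if the symbol's section is part of the memory image (SHF_ALLOC);
   * symbols in non-allocated sections such as .gnu.warning.* can then be
   * skipped by the caller. */
  static bool sym_section_is_alloc(Elf *elf, const GElf_Sym *sym)
  {
          Elf_Scn *scn = elf_getscn(elf, sym->st_shndx);
          GElf_Shdr shdr;

          if (!scn || !gelf_getshdr(scn, &shdr))
                  return false;
          return (shdr.sh_flags & SHF_ALLOC) != 0;
  }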
2022-07-27 | perf symbol: Correct address for bss symbols | Leo Yan | 1 | -4/+41
When using 'perf mem' and 'perf c2c', an issue is observed where the tool reports the wrong offset for global data symbols. This is a common issue on both x86 and Arm64 platforms.

Let's see an example. For a test program, below is the disassembly of its .bss section, dumped with objdump:

  ...
  Disassembly of section .bss:

  0000000000004040 <completed.0>:
    ...
  0000000000004080 <buf1>:
    ...
  00000000000040c0 <buf2>:
    ...
  0000000000004100 <thread>:
    ...

First we used 'perf mem record' to run the test program and then used 'perf --debug verbose=4 mem report' to observe the symbol info for the 'buf1' and 'buf2' structures:

  # ./perf mem record -e ldlat-loads,ldlat-stores -- false_sharing.exe 8
  # ./perf --debug verbose=4 mem report
  ...
  dso__load_sym_internal: adjusting symbol: st_value: 0x40c0 sh_addr: 0x4040 sh_offset: 0x3028
  symbol__new: buf2 0x30a8-0x30e8
  ...
  dso__load_sym_internal: adjusting symbol: st_value: 0x4080 sh_addr: 0x4040 sh_offset: 0x3028
  symbol__new: buf1 0x3068-0x30a8
  ...

The perf tool relies on libelf to parse symbols. In executable and shared object files, 'st_value' holds a virtual address; 'sh_addr' is the address at which the section's first byte should reside in memory, and 'sh_offset' is the byte offset from the beginning of the file to the first byte in the section. The perf tool uses the formula below to convert a symbol's memory address to a file address:

  file_address = st_value - sh_addr + sh_offset
                     ^
                     ` memory address

We can see the final adjusted address ranges for buf1 and buf2 are [0x30a8-0x30e8) and [0x3068-0x30a8) respectively. This is apparently incorrect: in the code, the structures 'buf1' and 'buf2' carry a compiler attribute requesting 64-byte alignment. The problem happens for 'sh_offset': libelf returns it as 0x3028, which is not 64-byte aligned. Combined with the disassembly, it's likely libelf doesn't respect the alignment of the .bss section and therefore doesn't return an aligned value for 'sh_offset'.

As suggested by Fangrui Song, the ELF file contains a program header with PT_LOAD segments, and the fields p_vaddr and p_offset of those segments describe the execution layout. A better choice for converting a memory address to a file address is the formula:

  file_address = st_value - p_vaddr + p_offset

This patch introduces elf_read_program_header(), which returns the program header based on the passed 'st_value', then uses the formula above to calculate the symbol file address; the debugging log is updated accordingly. After applying the change:

  # ./perf --debug verbose=4 mem report
  ...
  dso__load_sym_internal: adjusting symbol: st_value: 0x40c0 p_vaddr: 0x3d28 p_offset: 0x2d28
  symbol__new: buf2 0x30c0-0x3100
  ...
  dso__load_sym_internal: adjusting symbol: st_value: 0x4080 p_vaddr: 0x3d28 p_offset: 0x2d28
  symbol__new: buf1 0x3080-0x30c0
  ...

Fixes: f17e04afaff84b5c ("perf report: Fix ELF symbol parsing")
Reported-by: Chang Rui <changruinj@gmail.com>
Suggested-by: Fangrui Song <maskray@google.com>
Signed-off-by: Leo Yan <leo.yan@linaro.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220724060013.171050-2-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
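As an illustration of the p_vaddr/p_offset formula above, a hedged libelf sketch (not the elf_read_program_header() helper from the patch; names and error handling are simplified):

  #include <gelf.h>

  /* Find the PT_LOAD segment covering st_value and apply
   *   file_address = st_value - p_vaddr + p_offset
   * Returns -1 if no loadable segment contains the address. */
  static long long sym_file_address(Elf *elf, unsigned long long st_value)
  {
          size_t i, phdrnum;
          GElf_Phdr phdr;

          if (elf_getphdrnum(elf, &phdrnum))
                  return -1;

          for (i = 0; i < phdrnum; i++) {
                  if (!gelf_getphdr(elf, i, &phdr))
                          return -1;
                  if (phdr.p_type == PT_LOAD &&
                      st_value >= phdr.p_vaddr &&
                      st_value < phdr.p_vaddr + phdr.p_memsz)
                          return st_value - phdr.p_vaddr + phdr.p_offset;
          }
          return -1;
  }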
2022-07-27 | perf scripts python: Let script to be python2 compliant | Leo Yan | 1 | -16/+18
The mainline kernel can be used with relatively old distros, e.g. RHEL 7. That distro hasn't upgraded from python2 to python3, which causes a build error because the python script is not python2 compliant. To fix the build failure, this patch changes the python f-string format to the traditional string format. Fixes: 12fdd6c009da0d02 ("perf scripts python: Support Arm CoreSight trace data disassembly") Reported-by: Akemi Yagi <toracat@elrepo.org> Signed-off-by: Leo Yan <leo.yan@linaro.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: ElRepo <contact@elrepo.org> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220725104220.1106663-1-leo.yan@linaro.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-07-27 | tools headers cpufeatures: Sync with the kernel sources | Arnaldo Carvalho de Melo | 1 | -0/+1
To pick the changes from:

  28a99e95f55c6185 ("x86/amd: Use IBPB for firmware calls")

This only causes these perf files to be rebuilt:

  CC      /tmp/build/perf/bench/mem-memcpy-x86-64-asm.o
  CC      /tmp/build/perf/bench/mem-memset-x86-64-asm.o

And addresses this perf build warning:

  Warning: Kernel ABI header at 'tools/arch/x86/include/asm/cpufeatures.h' differs from latest version at 'arch/x86/include/asm/cpufeatures.h'
  diff -u tools/arch/x86/include/asm/cpufeatures.h arch/x86/include/asm/cpufeatures.h

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/lkml/Yt6oWce9UDAmBAtX@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-07-27 | selftests: net: Fix typo 'the the' in comment | Slark Xiao | 1 | -1/+1
Replace 'the the' with 'the' in the comment. Signed-off-by: Slark Xiao <slark_xiao@163.com> Link: https://lore.kernel.org/r/20220725020124.5760-1-slark_xiao@163.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-07-27 | selftests/vm: fix va_128TBswitch.sh permissions | Adam Sindelar | 1 | -0/+0
Restore the +x bit to va_128TBswitch.sh, which got dropped from the previous patch, somehow. Link: https://lkml.kernel.org/r/20220708090646.34927-1-adam@wowsignal.io Fixes: 1afd01d43efc3 ("selftests/vm: Only run 128TBswitch with 5-level paging") Signed-off-by: Adam Sindelar <adam@wowsignal.io> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-07-26 | selftests: mlxsw: Check line card info on activated line card | Jiri Pirko | 1 | -0/+24
Once line card is activated, check the FW version and PSID are exposed. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-07-26 | selftests: mlxsw: Check line card info on provisioned line card | Jiri Pirko | 1 | -0/+30
Once line card is provisioned, check if HW revision and INI version are exposed on associated nested auxiliary device. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-07-26 | selftests/bpf: Attach to socketcall() in test_probe_user | Ilya Leoshkevich | 2 | -13/+51
test_probe_user fails on architectures where libc uses socketcall(SYS_CONNECT) instead of connect(). Fix by attaching to socketcall as well. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/bpf/20220726134008.256968-3-iii@linux.ibm.com
2022-07-26 | libbpf: Extend BPF_KSYSCALL documentation | Ilya Leoshkevich | 1 | -4/+11
Explicitly list known quirks. Mention that socket-related syscalls can be invoked via socketcall(). Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/bpf/20220726134008.256968-2-iii@linux.ibm.com
2022-07-26 | selftests/bpf: Don't assign outer source IP to host | Paul Chaignon | 2 | -11/+86
The previous commit fixed a bug in the bpf_skb_set_tunnel_key helper to avoid dropping packets whose outer source IP address isn't assigned to a host interface. This commit changes the corresponding selftest to not assign the outer source IP address to an interface.

Not assigning the source IP to an interface causes two issues in the existing test:

1. The ARP requests will fail for that IP address so we need to add the ARP entry manually.

2. The encapsulated ICMP echo reply traffic will not reach the VXLAN device. It will be dropped by the stack before, because the outer destination IP is unknown.

To solve 2., we have two choices. Either we perform decapsulation ourselves in a BPF program attached at veth1 (the base device for the VXLAN device), or we switch the outer destination address when we receive the packet at veth1, such that the stack properly demultiplexes it to the VXLAN device afterward.

This commit implements the second approach, where we switch the outer destination address from the unassigned IP address to the assigned one, only for VXLAN traffic ingressing veth1. Then, at the vxlan device, the BPF program that checks the output of bpf_skb_get_tunnel_key needs to be updated as the expected local IP address is now the unassigned one.

Signed-off-by: Paul Chaignon <paul@isovalent.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/4addde76eaf3477a58975bef15ed2788c44e5f55.1658759380.git.paul@isovalent.com
2022-07-25 | selftests/io_uring: test zerocopy send | Pavel Begunkov | 3 | -0/+737
Add selftests for io_uring zerocopy sends and io_uring's notification infrastructure. It's largely influenced by msg_zerocopy and uses it on the receive side. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/03d5ec78061cf52db420f88ed0b48eb8f47ce9f7.1657643355.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-07-25 | selftests/kprobe: Update test for no event name syntax error | Masami Hiramatsu (Google) | 1 | -0/+1
The commit 208003254c32 ("selftests/kprobe: Do not test for GRP/ without event failures") removed a syntax which no longer causes a syntax error (the NO_EVENT_NAME error with GRP/). However, there is another case (the NO_EVENT_NAME error without GRP/) which causes the same error. This adds a test for that case. Link: https://lkml.kernel.org/r/165812790993.1377963.9762767354560397298.stgit@devnote2 Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-07-25 | selftests/kprobe: Do not test for GRP/ without event failures | Steven Rostedt (Google) | 1 | -1/+0
A new feature is added where kprobes (and other probes) do not need to explicitly state the event name when creating a probe. The event name will come from what is being attached. That is:

  # echo 'p:foo/ vfs_read' > kprobe_events

will no longer error, but instead create an event:

  # cat kprobe_events
  p:foo/p_vfs_read_0 vfs_read

This should not be tested as an error case anymore. Remove it from the selftest, as now this feature "breaks" the selftest since it no longer fails as expected.

Link: https://lore.kernel.org/all/1656296348-16111-1-git-send-email-quic_linyyuan@quicinc.com/
Link: https://lkml.kernel.org/r/20220712161707.6dc08a14@gandalf.local.home
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-07-25 | selftests/ftrace: Add test case for GRP/ only input | Linyu Yuan | 2 | -1/+15
Add kprobe and eprobe event test for new GRP/ only format. Link: https://lore.kernel.org/all/1656296348-16111-5-git-send-email-quic_linyyuan@quicinc.com/ Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Reviewed-by: Tom Zanussi <zanussi@kernel.org> Signed-off-by: Linyu Yuan <quic_linyyuan@quicinc.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-07-23 | Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm | Linus Torvalds | 2 | -4/+6
Pull kvm fixes from Paolo Bonzini:

 - Check for invalid flags to KVM_CAP_X86_USER_SPACE_MSR

 - Fix use of sched_setaffinity in selftests

 - Sync kernel headers to tools

 - Fix KVM_STATS_UNIT_MAX

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: x86: Protect the unused bits in MSR exiting flags
  tools headers UAPI: Sync linux/kvm.h with the kernel sources
  KVM: selftests: Fix target thread to be migrated in rseq_test
  KVM: stats: Fix value for KVM_STATS_UNIT_MAX for boolean stats
2022-07-23 | Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next | Jakub Kicinski | 32 | -226/+926
Daniel Borkmann says:

====================
bpf-next 2022-07-22

We've added 73 non-merge commits during the last 12 day(s) which contain a total of 88 files changed, 3458 insertions(+), 860 deletions(-).

The main changes are:

1) Implement BPF trampoline for arm64 JIT, from Xu Kuohai.
2) Add ksyscall/kretsyscall section support to libbpf to simplify tracing kernel syscalls through kprobe mechanism, from Andrii Nakryiko.
3) Allow for livepatch (KLP) and BPF trampolines to attach to the same kernel function, from Song Liu & Jiri Olsa.
4) Add new kfunc infrastructure for netfilter's CT e.g. to insert and change entries, from Kumar Kartikeya Dwivedi & Lorenzo Bianconi.
5) Add a ksym BPF iterator to allow for more flexible and efficient interactions with kernel symbols, from Alan Maguire.
6) Bug fixes in libbpf e.g. for uprobe binary path resolution, from Dan Carpenter.
7) Fix BPF subprog function names in stack traces, from Alexei Starovoitov.
8) libbpf support for writing custom perf event readers, from Jon Doron.
9) Switch to use SPDX tag for BPF helper man page, from Alejandro Colomar.
10) Fix xsk send-only sockets when in busy poll mode, from Maciej Fijalkowski.
11) Reparent BPF maps and their charging on memcg offlining, from Roman Gushchin.
12) Multiple follow-up fixes around BPF lsm cgroup infra, from Stanislav Fomichev.
13) Use bootstrap version of bpftool where possible to speed up builds, from Pu Lehui.
14) Cleanup BPF verifier's check_func_arg() handling, from Joanne Koong.
15) Make non-prealloced BPF map allocations low priority to play better with memcg limits, from Yafang Shao.
16) Fix BPF test runner to reject zero-length data for skbs, from Zhengchao Shao.
17) Various smaller cleanups and improvements all over the place.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (73 commits)
  bpf: Simplify bpf_prog_pack_[size|mask]
  bpf: Support bpf_trampoline on functions with IPMODIFY (e.g. livepatch)
  bpf, x64: Allow to use caller address from stack
  ftrace: Allow IPMODIFY and DIRECT ops on the same function
  ftrace: Add modify_ftrace_direct_multi_nolock
  bpf/selftests: Fix couldn't retrieve pinned program in xdp veth test
  bpf: Fix build error in case of !CONFIG_DEBUG_INFO_BTF
  selftests/bpf: Fix test_verifier failed test in unprivileged mode
  selftests/bpf: Add negative tests for new nf_conntrack kfuncs
  selftests/bpf: Add tests for new nf_conntrack kfuncs
  selftests/bpf: Add verifier tests for trusted kfunc args
  net: netfilter: Add kfuncs to set and change CT status
  net: netfilter: Add kfuncs to set and change CT timeout
  net: netfilter: Add kfuncs to allocate and insert CT
  net: netfilter: Deduplicate code in bpf_{xdp,skb}_ct_lookup
  bpf: Add documentation for kfuncs
  bpf: Add support for forcing kfunc args to be trusted
  bpf: Switch to new kfunc flags infrastructure
  tools/resolve_btfids: Add support for 8-byte BTF sets
  bpf: Introduce 8-byte BTF set
  ...
====================

Link: https://lore.kernel.org/r/20220722221218.29943-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-07-22 | bpf/selftests: Fix couldn't retrieve pinned program in xdp veth test | Jie2x Zhou | 1 | -3/+3
Before change:

  selftests: bpf: test_xdp_veth.sh
  Couldn't retrieve pinned program '/sys/fs/bpf/test_xdp_veth/progs/redirect_map_0': No such file or directory
  selftests: xdp_veth [SKIP]
  ok 20 selftests: bpf: test_xdp_veth.sh # SKIP

After change:

  PING 10.1.1.33 (10.1.1.33) 56(84) bytes of data.
  64 bytes from 10.1.1.33: icmp_seq=1 ttl=64 time=0.320 ms
  --- 10.1.1.33 ping statistics ---
  1 packets transmitted, 1 received, 0% packet loss, time 0ms
  rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms
  selftests: xdp_veth [PASS]

For the test case, the following can be found:

  ls /sys/fs/bpf/test_xdp_veth/progs/redirect_map_0
  ls: cannot access '/sys/fs/bpf/test_xdp_veth/progs/redirect_map_0': No such file or directory
  ls /sys/fs/bpf/test_xdp_veth/progs/
  xdp_redirect_map_0  xdp_redirect_map_1  xdp_redirect_map_2

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Jie2x Zhou <jie2x.zhou@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220719082430.9916-1-jie2x.zhou@intel.com
2022-07-22 | uapi: asm-generic: fcntl: Fix typo 'the the' in comment | Slark Xiao | 1 | -1/+1
Replace 'the the' with 'the' in the comment. Signed-off-by: Slark Xiao <slark_xiao@163.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2022-07-22 | ping: support ipv6 ping socket flow labels | Alan Brady | 2 | -15/+76
Ping sockets don't appear to make any attempt to preserve flow labels created and set by userspace using IPV6_FLOWINFO_SEND. Instead they are clobbered by autolabels (if enabled) or zero. Grab the flowlabel out of the msghdr similar to how rawv6_sendmsg does it and move the memset up so it doesn't get zeroed after. Signed-off-by: Alan Brady <alan.brady@intel.com> Tested-by: Gurucharan <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
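A hedged userspace sketch of what this enables: placing a flow label in sin6_flowinfo and asking the kernel to honour it via IPV6_FLOWINFO_SEND on an ICMPv6 ping socket. The fallback #define and the helper name are assumptions, and depending on sysctl policy the label may also have to be obtained via IPV6_FLOWLABEL_MGR first:

  #include <arpa/inet.h>
  #include <netinet/icmp6.h>
  #include <netinet/in.h>
  #include <sys/socket.h>

  #ifndef IPV6_FLOWINFO_SEND
  #define IPV6_FLOWINFO_SEND 33   /* from linux/in6.h */
  #endif

  /* Send one echo request on an already-created ICMPv6 ping socket with a
   * caller-chosen 20-bit flow label carried in sin6_flowinfo. */
  static int ping_with_flowlabel(int fd, const struct in6_addr *dst)
  {
          int on = 1;
          struct sockaddr_in6 sa = { .sin6_family = AF_INET6 };
          struct icmp6_hdr req = { .icmp6_type = ICMP6_ECHO_REQUEST };

          if (setsockopt(fd, IPPROTO_IPV6, IPV6_FLOWINFO_SEND,
                         &on, sizeof(on)))
                  return -1;

          sa.sin6_addr = *dst;
          sa.sin6_flowinfo = htonl(0x12345);  /* low 20 bits = flow label */

          if (sendto(fd, &req, sizeof(req), 0,
                     (struct sockaddr *)&sa, sizeof(sa)) < 0)
                  return -1;
          return 0;
  }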
2022-07-22 | selftests/bpf: Fix test_verifier failed test in unprivileged mode | Kumar Kartikeya Dwivedi | 1 | -0/+1
Loading the BTF won't be permitted without privileges, hence only test for privileged mode by setting the prog type. This makes the test_verifier show 0 failures when unprivileged BPF is enabled. Fixes: 41188e9e9def ("selftest/bpf: Test for use-after-free bug fix in inline_bpf_loop") Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20220721134245.2450-14-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-07-22 | selftests/bpf: Add negative tests for new nf_conntrack kfuncs | Kumar Kartikeya Dwivedi | 2 | -1/+189
Test cases we care about and ensure improper usage is caught and rejected by the verifier. Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20220721134245.2450-13-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-07-22 | selftests/bpf: Add tests for new nf_conntrack kfuncs | Lorenzo Bianconi | 2 | -12/+81
Introduce selftests for the following kfunc helpers:

- bpf_xdp_ct_alloc
- bpf_skb_ct_alloc
- bpf_ct_insert_entry
- bpf_ct_set_timeout
- bpf_ct_change_timeout
- bpf_ct_set_status
- bpf_ct_change_status

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220721134245.2450-12-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-07-22 | selftests/bpf: Add verifier tests for trusted kfunc args | Kumar Kartikeya Dwivedi | 1 | -0/+53
Make sure verifier rejects the bad cases and ensure the good case keeps working. The selftests make use of the bpf_kfunc_call_test_ref kfunc added in the previous patch only for verification. Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20220721134245.2450-11-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-07-22 | bpf: Switch to new kfunc flags infrastructure | Kumar Kartikeya Dwivedi | 1 | -5/+5
Instead of populating multiple sets to indicate some attribute and then researching the same BTF ID in them, prepare a single unified BTF set which indicates whether a kfunc is allowed to be called, and also its attributes if any at the same time. Now, only one call is needed to perform the lookup for both kfunc availability and its attributes. Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20220721134245.2450-4-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-07-22 | tools/resolve_btfids: Add support for 8-byte BTF sets | Kumar Kartikeya Dwivedi | 1 | -6/+34
A flag is a 4-byte symbol that may follow a BTF ID in a set8. This is used in the kernel to tag kfuncs in BTF sets with certain flags. Add support to adjust the sorting code so that it passes size as 8 bytes for 8-byte BTF sets. Cc: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20220721134245.2450-3-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
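For reference, an illustrative (not verbatim) layout of the 8-byte BTF set described above:

  #include <stdint.h>

  /* Each element of an 8-byte BTF set is a BTF ID plus a 4-byte flags
   * word, so the sort in resolve_btfids has to treat elements as 8 bytes
   * wide instead of 4. */
  struct btf_id_set8_example {
          uint32_t cnt;
          uint32_t flags;
          struct {
                  uint32_t id;
                  uint32_t flags;
          } pairs[];
  };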
2022-07-22 | selftests: tls: add a test for timeo vs lock | Jakub Kicinski | 1 | -0/+32
Add a test for recv timeout. Place it in the tls_err group, so it only runs for TLS 1.2 and 1.3 but not for every AEAD out there. Link: https://lore.kernel.org/r/20220720203701.2179034-2-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-07-21 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | Jakub Kicinski | 6 | -20/+47
No conflicts. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-07-21 | libbpf: Fix str_has_sfx()'s return value | Dan Carpenter | 1 | -3/+3
The return from strcmp() is inverted so it wrongly returns true instead of false and vice versa. Fixes: a1c9d61b19cb ("libbpf: Improve library identification for uprobe binary path resolution") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Cc: Alan Maguire <alan.maguire@oracle.com> Link: https://lore.kernel.org/bpf/YtZ+/dAA195d99ak@kili
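A sketch of what a correct suffix check looks like once the inverted strcmp() result is fixed; illustrative, not necessarily the exact libbpf code:

  #include <stdbool.h>
  #include <string.h>

  /* strcmp() returns 0 on a match, so the result must be compared against
   * zero rather than returned directly. */
  static bool str_has_sfx(const char *str, const char *sfx)
  {
          size_t str_len = strlen(str);
          size_t sfx_len = strlen(sfx);

          if (sfx_len > str_len)
                  return false;
          return strcmp(str + str_len - sfx_len, sfx) == 0;
  }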
2022-07-21 | libbpf: Fix sign expansion bug in btf_dump_get_enum_value() | Dan Carpenter | 1 | -1/+1
The code here is supposed to take a signed int and store it in a signed long long. Unfortunately, the way that the type promotion works with this conditional statement is that it takes a signed int, type promotes it to a __u32, and then stores that as a signed long long. The result is never negative.

This is from static analysis, but I made a little test program just to test it before I sent the patch:

  #include <stdio.h>

  int main(void)
  {
          unsigned long long src = -1ULL;
          signed long long dst1, dst2;
          int is_signed = 1;

          dst1 = is_signed ? *(int *)&src : *(unsigned int *)0;
          dst2 = is_signed ? (signed long long)*(int *)&src : *(unsigned int *)0;

          printf("%lld\n", dst1);
          printf("%lld\n", dst2);

          return 0;
  }

Fixes: d90ec262b35b ("libbpf: Add enum64 support for btf_dump")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/YtZ+LpgPADm7BeEd@kili
2022-07-21 | selftests: net: af_unix: Fix a build error of unix_connect.c. | Kuniyuki Iwashima | 1 | -2/+1
This patch fixes a build error reported in the link. [0]

  unix_connect.c: In function ‘unix_connect_test’:
  unix_connect.c:115:55: error: expected identifier before ‘(’ token
   #define offsetof(type, member) ((size_t)&((type *)0)->(member))
                                                          ^
  unix_connect.c:128:12: note: in expansion of macro ‘offsetof’
    addrlen = offsetof(struct sockaddr_un, sun_path) + variant->len;
              ^~~~~~~~

We can fix this by removing () around member, but checkpatch will complain about it, and the root cause of the build failure is that I followed the warning and fixed this in the v2 -> v3 change of the blamed commit. [1]

  CHECK: Macro argument 'member' may be better as '(member)' to avoid precedence issues
  #33: FILE: tools/testing/selftests/net/af_unix/unix_connect.c:115:
  +#define offsetof(type, member) ((size_t)&((type *)0)->member)

To avoid this warning, let's use offsetof() defined in stddef.h instead.

[0]: https://lore.kernel.org/linux-mm/202207182205.FrkMeDZT-lkp@intel.com/
[1]: https://lore.kernel.org/netdev/20220702154818.66761-1-kuniyu@amazon.com/

Fixes: e95ab1d85289 ("selftests: net: af_unix: Test connect() with different netns.")
Reported-by: kernel test robot <lkp@intel.com>
Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://lore.kernel.org/r/20220720005750.16600-1-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
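A minimal sketch of the suggested fix, using the offsetof() from stddef.h; the helper name and the 'len' parameter are made up for illustration:

  #include <stddef.h>
  #include <sys/socket.h>
  #include <sys/un.h>

  /* Address length for an AF_UNIX address whose sun_path is 'len' bytes,
   * computed with the standard offsetof() instead of a local macro. */
  static socklen_t unix_addrlen(size_t len)
  {
          return offsetof(struct sockaddr_un, sun_path) + len;
  }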
2022-07-20 | selftests: gpio: fix include path to kernel headers for out of tree builds | Kent Gibson | 1 | -1/+1
When building selftests out of the kernel tree, the gpio.h include path is incorrect and the build falls back to the system includes, which may be outdated. Add KHDR_INCLUDES to the CFLAGS to pick up the gpio.h from the build tree. Fixes: 4f4d0af7b2d9 ("selftests: gpio: restore CFLAGS options") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Kent Gibson <warthog618@gmail.com> Signed-off-by: Bartosz Golaszewski <brgl@bgdev.pl>
2022-07-20 | tools: Fixed MIPS builds due to struct flock re-definition | Florian Fainelli | 1 | -1/+10
Building perf for MIPS failed after 9f79b8b72339 ("uapi: simplify __ARCH_FLOCK{,64}_PAD a little") with the following error:

    CC      /home/fainelli/work/buildroot/output/bmips/build/linux-custom/tools/perf/trace/beauty/fcntl.o
  In file included from ../../../../host/mipsel-buildroot-linux-gnu/sysroot/usr/include/asm/fcntl.h:77,
                   from ../include/uapi/linux/fcntl.h:5,
                   from trace/beauty/fcntl.c:10:
  ../include/uapi/asm-generic/fcntl.h:188:8: error: redefinition of 'struct flock'
   struct flock {
          ^~~~~
  In file included from ../include/uapi/linux/fcntl.h:5,
                   from trace/beauty/fcntl.c:10:
  ../../../../host/mipsel-buildroot-linux-gnu/sysroot/usr/include/asm/fcntl.h:63:8: note: originally defined here
   struct flock {
          ^~~~~

This is due to the local copy under tools/include/uapi/asm-generic/fcntl.h including the toolchain's kernel headers, which already define 'struct flock' and define HAVE_ARCH_STRUCT_FLOCK so that future inclusions can decide whether re-defining 'struct flock' is appropriate or not. Make sure we do not re-define 'struct flock' when HAVE_ARCH_STRUCT_FLOCK is already defined.

Fixes: 9f79b8b72339 ("uapi: simplify __ARCH_FLOCK{,64}_PAD a little")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
[arnd: sync with include/uapi/asm-generic/fcntl.h as well]
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
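A simplified sketch of the guard described above (field types are reduced to plain C types for brevity; the real header uses __kernel_* types and the __ARCH_FLOCK_PAD macro):

  /* Only provide the generic definition when the architecture header has
   * not already defined the structure and set HAVE_ARCH_STRUCT_FLOCK. */
  #ifndef HAVE_ARCH_STRUCT_FLOCK
  struct flock {
          short   l_type;
          short   l_whence;
          long    l_start;
          long    l_len;
          int     l_pid;
  };
  #endif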
2022-07-19 | libbpf: fix an snprintf() overflow check | Dan Carpenter | 1 | -1/+1
The snprintf() function returns the number of bytes it *would* have copied if there were enough space. So it can return > the sizeof(gen->attach_target). Fixes: 67234743736a ("libbpf: Generate loader program out of BPF ELF file.") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/r/YtZ+oAySqIhFl6/J@kili Signed-off-by: Alexei Starovoitov <ast@kernel.org>
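The correct pattern for detecting snprintf() truncation, as a hedged sketch (function and buffer names are illustrative, not the actual libbpf code); the same pattern applies to the selftest fix in the next entry:

  #include <stdio.h>

  /* snprintf() reports the length the output would have had, so both a
   * negative return and a value >= the buffer size mean failure. */
  static int format_name(char *buf, size_t buf_sz, const char *name)
  {
          int n = snprintf(buf, buf_sz, "%s", name);

          if (n < 0 || (size_t)n >= buf_sz)
                  return -1;      /* error or truncated */
          return 0;
  }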
2022-07-19 | selftests/bpf: fix a test for snprintf() overflow | Dan Carpenter | 1 | -1/+1
The snprintf() function returns the number of bytes which *would* have been copied if there were space. In other words, it can be > sizeof(pin_path). Fixes: c0fa1b6c3efc ("bpf: btf: Add BTF tests") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/r/YtZ+aD/tZMkgOUw+@kili Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-07-19 | selftests/bpf: test eager BPF ringbuf size adjustment logic | Andrii Nakryiko | 1 | -0/+11
Add test validating that libbpf adjusts (and reflects adjusted) ringbuf size early, before bpf_object is loaded. Also make sure we can't successfully resize ringbuf map after bpf_object is loaded. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/r/20220715230952.2219271-2-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-07-19 | libbpf: make RINGBUF map size adjustments more eagerly | Andrii Nakryiko | 1 | -35/+42
Make libbpf adjust RINGBUF map size (rounding it up to closest power-of-2 of page_size) more eagerly: during open phase when initializing the map and on explicit calls to bpf_map__set_max_entries(). Such approach allows user to check actual size of BPF ringbuf even before it's created in the kernel, but also it prevents various edge case scenarios where BPF ringbuf size can get out of sync with what it would be in kernel. One of them (reported in [0]) is during an attempt to pin/reuse BPF ringbuf. Move adjust_ringbuf_sz() helper closer to its first actual use. The implementation of the helper is unchanged. Also make detection of whether bpf_object is already loaded more robust by checking obj->loaded explicitly, given that map->fd can be < 0 even if bpf_object is already loaded due to ability to disable map creation with bpf_map__set_autocreate(map, false). [0] Closes: https://github.com/libbpf/libbpf/pull/530 Fixes: 0087a681fa8c ("libbpf: Automatically fix up BPF_MAP_TYPE_RINGBUF size, if necessary") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/r/20220715230952.2219271-1-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
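A hedged libbpf usage sketch showing the effect: the adjusted ringbuf size becomes visible right after open, before load. The object and map names are made up:

  #include <bpf/libbpf.h>
  #include <stdio.h>

  int show_ringbuf_size(void)
  {
          struct bpf_object *obj = bpf_object__open_file("prog.bpf.o", NULL);
          struct bpf_map *map;
          int err = -1;

          if (!obj)
                  return -1;

          map = bpf_object__find_map_by_name(obj, "rb");
          if (map) {
                  /* Resize request is applied (and rounded up) right away,
                   * before bpf_object__load() is ever called. */
                  err = bpf_map__set_max_entries(map, 1000 * 1000);
                  printf("adjusted max_entries: %u\n",
                         bpf_map__max_entries(map));
          }
          bpf_object__close(obj);
          return err;
  }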
2022-07-19 | bpf: fix bpf_skb_pull_data documentation | Joanne Koong | 1 | -1/+2
Fix documentation for bpf_skb_pull_data() helper for when len == 0. Fixes: fa15601ab31e ("bpf: add documentation for eBPF helpers (33-41)") Signed-off-by: Joanne Koong <joannelkoong@gmail.com> Acked-by: Quentin Monnet <quentin@isovalent.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/r/20220715193800.3940070-1-joannelkoong@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-07-19 | libbpf: fallback to tracefs mount point if debugfs is not mounted | Andrii Nakryiko | 1 | -21/+40
Teach libbpf to fallback to tracefs mount point (/sys/kernel/tracing) if debugfs (/sys/kernel/debug/tracing) isn't mounted. Acked-by: Yonghong Song <yhs@fb.com> Suggested-by: Connor O'Brien <connoro@google.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20220715185736.898848-1-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
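A hedged sketch of such a fallback (not libbpf's actual implementation): probe the debugfs-based path first and fall back to the standalone tracefs mount:

  #include <stddef.h>
  #include <sys/vfs.h>

  #ifndef TRACEFS_MAGIC
  #define TRACEFS_MAGIC 0x74726163
  #endif

  /* Prefer the debugfs-based location, fall back to plain tracefs. */
  static const char *tracefs_path(void)
  {
          static const char * const candidates[] = {
                  "/sys/kernel/debug/tracing",
                  "/sys/kernel/tracing",
          };
          struct statfs st;

          for (size_t i = 0; i < 2; i++)
                  if (!statfs(candidates[i], &st) &&
                      st.f_type == TRACEFS_MAGIC)
                          return candidates[i];
          return NULL;
  }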
2022-07-19 | selftests/bpf: validate .bss section bigger than 8MB is possible now | Andrii Nakryiko | 2 | -0/+6
Add a simple big 16MB array and validate access to the very last byte of it to make sure that kernel supports > KMALLOC_MAX_SIZE value_size for BPF array maps (which are backing .bss in this case). Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20220715053146.1291891-5-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-07-19 | selftests/bpf: use BPF_KSYSCALL and SEC("ksyscall") in selftests | Andrii Nakryiko | 3 | -32/+16
Convert a few selftests that used plain SEC("kprobe") with an arch-specific syscall wrapper prefix to ksyscall/kretsyscall and the corresponding BPF_KSYSCALL macro. test_probe_user.c especially benefits from this simplification. Tested-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20220714070755.3235561-6-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-07-19 | libbpf: add ksyscall/kretsyscall sections support for syscall kprobes | Andrii Nakryiko | 4 | -9/+157
Add SEC("ksyscall")/SEC("ksyscall/<syscall_name>") and corresponding kretsyscall variants (for return kprobes) to allow users to kprobe syscall functions in kernel. These special sections allow to ignore complexities and differences between kernel versions and host architectures when it comes to syscall wrapper and corresponding __<arch>_sys_<syscall> vs __se_sys_<syscall> differences, depending on whether host kernel has CONFIG_ARCH_HAS_SYSCALL_WRAPPER (though libbpf itself doesn't rely on /proc/config.gz for detecting this, see BPF_KSYSCALL patch for how it's done internally). Combined with the use of BPF_KSYSCALL() macro, this allows to just specify intended syscall name and expected input arguments and leave dealing with all the variations to libbpf. In addition to SEC("ksyscall+") and SEC("kretsyscall+") add bpf_program__attach_ksyscall() API which allows to specify syscall name at runtime and provide associated BPF cookie value. At the moment SEC("ksyscall") and bpf_program__attach_ksyscall() do not handle all the calling convention quirks for mmap(), clone() and compat syscalls. It also only attaches to "native" syscall interfaces. If host system supports compat syscalls or defines 32-bit syscalls in 64-bit kernel, such syscall interfaces won't be attached to by libbpf. These limitations may or may not change in the future. Therefore it is recommended to use SEC("kprobe") for these syscalls or if working with compat and 32-bit interfaces is required. Tested-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20220714070755.3235561-5-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-07-19 | libbpf: improve BPF_KPROBE_SYSCALL macro and rename it to BPF_KSYSCALL | Andrii Nakryiko | 2 | -13/+40
Improve BPF_KPROBE_SYSCALL (and rename it to shorter BPF_KSYSCALL to match libbpf's SEC("ksyscall") section name, added in next patch) to use __kconfig variable to determine how to properly fetch syscall arguments. Instead of relying on hard-coded knowledge of whether kernel's architecture uses syscall wrapper or not (which only reflects the latest kernel versions, but is not necessarily true for older kernels and won't necessarily hold for later kernel versions on some particular host architecture), determine this at runtime by attempting to create perf_event (with fallback to kprobe event creation through tracefs on legacy kernels, just like kprobe attachment code is doing) for kernel function that would correspond to bpf() syscall on a system that has CONFIG_ARCH_HAS_SYSCALL_WRAPPER set (e.g., for x86-64 it would try '__x64_sys_bpf'). If host kernel uses syscall wrapper, syscall kernel function's first argument is a pointer to struct pt_regs that then contains syscall arguments. In such case we need to use bpf_probe_read_kernel() to fetch actual arguments (which we do through BPF_CORE_READ() macro) from inner pt_regs. But if the kernel doesn't use syscall wrapper approach, input arguments can be read from struct pt_regs directly with no probe reading. All this feature detection is done without requiring /proc/config.gz existence and parsing, and BPF-side helper code uses newly added LINUX_HAS_SYSCALL_WRAPPER virtual __kconfig extern to keep in sync with user-side feature detection of libbpf. BPF_KSYSCALL() macro can be used both with SEC("kprobe") programs that define syscall function explicitly (e.g., SEC("kprobe/__x64_sys_bpf")) and SEC("ksyscall") program added in the next patch (which are the same kprobe program with added benefit of libbpf determining correct kernel function name automatically). Kretprobe and kretsyscall (added in next patch) programs don't need BPF_KSYSCALL as they don't provide access to input arguments. Normal BPF_KRETPROBE is completely sufficient and is recommended. Tested-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20220714070755.3235561-4-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
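A hedged BPF-side sketch combining SEC("ksyscall/...") with BPF_KSYSCALL(); vmlinux.h is assumed to be generated with bpftool, and the traced syscall (unlink) is just an example:

  // SPDX-License-Identifier: GPL-2.0
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char LICENSE[] SEC("license") = "GPL";

  /* BPF_KSYSCALL hides whether the kernel uses the syscall wrapper and
   * fetches unlink()'s pathname argument appropriately in both cases. */
  SEC("ksyscall/unlink")
  int BPF_KSYSCALL(handle_unlink, const char *pathname)
  {
          char buf[64] = {};

          bpf_probe_read_user_str(buf, sizeof(buf), pathname);
          bpf_printk("unlink(%s)", buf);
          return 0;
  }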
2022-07-19 | selftests/bpf: add test of __weak unknown virtual __kconfig extern | Andrii Nakryiko | 2 | -10/+10
Exercise libbpf's logic for unknown __weak virtual __kconfig externs. USDT selftests already exercise a non-weak known virtual extern (LINUX_HAS_BPF_COOKIE), so there is no need to add explicit tests for it. Tested-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20220714070755.3235561-3-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-07-19 | libbpf: generalize virtual __kconfig externs and use it for USDT | Andrii Nakryiko | 2 | -45/+66
Libbpf currently supports a single virtual __kconfig extern: LINUX_KERNEL_VERSION. LINUX_KERNEL_VERSION isn't coming from /proc/config.gz and is instead custom-filled by libbpf. This patch generalizes this approach to support more such virtual __kconfig externs. One such extern added in this patch is LINUX_HAS_BPF_COOKIE, which is used for BPF-side USDT supporting code in usdt.bpf.h instead of using a CO-RE-based enum detection approach for detecting the bpf_get_attach_cookie() BPF helper. This allows removing an otherwise unneeded CO-RE dependency and keeps user-space and BPF-side parts of libbpf's USDT support strictly in sync in terms of their feature detection.

We'll use a similar approach for syscall wrapper detection for the BPF_KSYSCALL() BPF-side macro in a follow-up patch.

Generally, libbpf currently reserves the CONFIG_ prefix for Kconfig values and LINUX_ for virtual libbpf-backed externs. In the future we might extend the set of prefixes that are supported. This can be done without any breaking changes, as currently any __kconfig extern with an unrecognized name is rejected.

For LINUX_xxx externs we support the normal "weak rule": if libbpf doesn't recognize a given LINUX_xxx extern but that extern is marked as __weak, it is not rejected and defaults to zero. This follows the CONFIG_xxx handling logic and will allow BPF applications to opportunistically use newer libbpf virtual externs without breaking on older libbpf versions unnecessarily.

Tested-by: Alan Maguire <alan.maguire@oracle.com>
Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220714070755.3235561-2-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
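A hedged BPF-side sketch of the extern handling described above; LINUX_HAS_SOME_FUTURE_FEATURE is a hypothetical name used only to illustrate the __weak rule:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  /* Known virtual extern, filled in by libbpf at load time. */
  extern bool LINUX_HAS_BPF_COOKIE __kconfig;

  /* Hypothetical future extern: __weak, so libbpf versions that do not
   * recognize it leave it at zero instead of rejecting the program. */
  extern bool LINUX_HAS_SOME_FUTURE_FEATURE __kconfig __weak;

  static __always_inline __u64 attach_cookie(void *ctx)
  {
          return LINUX_HAS_BPF_COOKIE ? bpf_get_attach_cookie(ctx) : 0;
  }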
2022-07-19 | tools headers UAPI: Sync linux/kvm.h with the kernel sources | Paolo Bonzini | 1 | -1/+2
Silence this perf build warning:

  Warning: Kernel ABI header at 'tools/include/uapi/linux/kvm.h' differs from latest version at 'include/uapi/linux/kvm.h'
  diff -u tools/include/uapi/linux/kvm.h include/uapi/linux/kvm.h

Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-07-19 | KVM: selftests: Fix target thread to be migrated in rseq_test | Gavin Shan | 1 | -3/+5
In rseq_test, there are two threads, the vCPU thread and the migration worker. Unfortunately, the test passes the wrong PID to sched_setaffinity() in the migration worker. It forces migration on the migration worker itself, because a zeroed PID represents the calling thread. This means the vCPU thread is never forced to migrate and can migrate at any time, which eventually leads to failure as the following logs show.

  host# uname -r
  5.19.0-rc6-gavin+
  host# cat /proc/cpuinfo | grep processor | tail -n 1
  processor   : 223
  host# pwd
  /home/gavin/sandbox/linux.main/tools/testing/selftests/kvm
  host# for i in `seq 1 100`; do echo "--------> $i"; ./rseq_test; done
  --------> 1
  --------> 2
  --------> 3
  --------> 4
  --------> 5
  --------> 6
  ==== Test Assertion Failure ====
    rseq_test.c:265: rseq_cpu == cpu
    pid=3925 tid=3925 errno=4 - Interrupted system call
       1  0x0000000000401963: main at rseq_test.c:265 (discriminator 2)
       2  0x0000ffffb044affb: ?? ??:0
       3  0x0000ffffb044b0c7: ?? ??:0
       4  0x0000000000401a6f: _start at ??:?
    rseq CPU = 4, sched CPU = 27

Fix the issue by passing the correct parameter, the TID of the vCPU thread, to sched_setaffinity() in the migration worker.

Fixes: 61e52f1630f5 ("KVM: selftests: Add a test for KVM_RUN+rseq to detect task migration bugs")
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Message-Id: <20220719020830.3479482-1-gshan@redhat.com>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
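A hedged sketch of the shape of the fix (not the actual selftest code): the vCPU thread publishes its TID and the migration worker targets that TID instead of 0:

  #define _GNU_SOURCE
  #include <sched.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static pid_t vcpu_tid;

  /* Called from the vCPU thread so the worker knows which thread to move. */
  static void publish_vcpu_tid(void)
  {
          vcpu_tid = syscall(SYS_gettid);
  }

  /* Called from the migration worker: target the vCPU thread's TID, not 0
   * (0 would mean "the calling thread", i.e. the worker itself). */
  static int migrate_vcpu_to(int cpu)
  {
          cpu_set_t cpus;

          CPU_ZERO(&cpus);
          CPU_SET(cpu, &cpus);
          return sched_setaffinity(vcpu_tid, sizeof(cpus), &cpus);
  }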