Age | Commit message | Author | Files, lines (-/+)
2021-12-02 | libbpf: Use CO-RE in the kernel in light skeleton. | Alexei Starovoitov | 3 files, -33/+120
Without lskel, the CO-RE relocations are processed by libbpf before any other work is done. Instead, when lskel is needed, remember relocations as RELO_CORE kind. Then, when the loader prog is generated for a given bpf program, pass the CO-RE relos of that program to the gen loader via bpf_gen__record_relo_core(). The gen loader remembers them as-is and later passes them as-is into the kernel. The normal libbpf flow is to process CO-RE relos early, before call relos happen. In the gen_loader case the CO-RE relos have to be added to the other relos so they are copied together when a bpf static function is appended in different places to other main bpf progs. During the copy, append_subprog_relos() adjusts insn_idx for normal relos and for the RELO_CORE kind too. When that is done, each struct reloc_desc has the correct relos for its specific main prog. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-10-alexei.starovoitov@gmail.com
2021-12-02 | bpf: Add bpf_core_add_cands() and wire it into bpf_core_apply_relo_insn(). | Alexei Starovoitov | 1 file, -1/+345
Given a BPF program's BTF root type name, perform the following steps:

  - search the vmlinux candidate cache;
  - if (present in cache && candidate list >= 1) return the candidate list;
  - do a linear search through kernel BTFs for possible candidates;
  - regardless of the number of candidates found, populate the vmlinux cache;
  - if (candidate list >= 1) return the candidate list;
  - search the module candidate cache;
  - if (present in cache) return the candidate list (even if the list is empty);
  - do a linear search through the BTFs of all kernel modules, collecting candidates from all of them;
  - regardless of the number of candidates found, populate the module cache;
  - return the candidate list.

Then wire the result into bpf_core_apply_relo_insn(). When a BPF program tries to CO-RE relocate a type that doesn't exist in either vmlinux BTF or module BTFs, these steps perform 2 cache lookups on a cache hit. Note the cache doesn't prevent abuse by a program that has lots of relocations that cannot be resolved; hence cond_resched(). CO-RE in the kernel requires CAP_BPF, since BTF loading requires it. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-9-alexei.starovoitov@gmail.com
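A rough sketch of that lookup order (helper and type names below are invented for illustration and mirror the steps above; the real implementation lives around bpf_core_add_cands() in kernel/bpf/btf.c):

  /* Illustrative only: these are not the actual kernel symbols. */
  static struct bpf_core_cand_list *find_cands(const char *root_name)
  {
  	struct bpf_core_cand_list *cands;

  	cands = vmlinux_cache_lookup(root_name);    /* step 1: vmlinux cache */
  	if (cands && cands->len >= 1)
  		return cands;
  	cands = scan_btf(vmlinux_btf(), root_name); /* linear scan of vmlinux BTF */
  	vmlinux_cache_store(root_name, cands);      /* cache even empty results */
  	if (cands->len >= 1)
  		return cands;
  	cands = module_cache_lookup(root_name);     /* step 2: module cache */
  	if (cands)
  		return cands;                       /* a hit counts even if empty */
  	cands = scan_all_module_btfs(root_name);    /* cond_resched() between BTFs */
  	module_cache_store(root_name, cands);
  	return cands;
  }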
2021-12-02 | libbpf: Cleanup struct bpf_core_cand. | Andrii Nakryiko | 2 files, -15/+17
Remove two redundant fields from struct bpf_core_cand. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-8-alexei.starovoitov@gmail.com
2021-12-02 | bpf: Adjust BTF log size limit. | Alexei Starovoitov | 1 file, -1/+1
Make the BTF log size limit the same as the verifier log size limit. Otherwise tools that progressively increase log size and use the same log for BTF loading and program loading will hit a hard-to-debug EINVAL. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-7-alexei.starovoitov@gmail.com
2021-12-02 | bpf: Pass a set of bpf_core_relo-s to prog_load command. | Alexei Starovoitov | 7 files, -56/+207
struct bpf_core_relo is generated by LLVM and processed by libbpf. It's a de-facto uapi. With CO-RE in the kernel, struct bpf_core_relo becomes uapi de-jure. Add the ability to pass a set of 'struct bpf_core_relo' to the prog_load command and let the kernel perform CO-RE relocations. Note that struct bpf_line_info and struct bpf_func_info have the same layout when passed from LLVM to libbpf and from libbpf to the kernel, except that their "insn_off" field means "byte offset" when LLVM generates it; libbpf then converts it to an "insn index" to pass to the kernel. The struct bpf_core_relo's "insn_off" field is always a "byte offset". Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-6-alexei.starovoitov@gmail.com
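For reference, a sketch of the uapi struct as I understand it landed (check include/uapi/linux/bpf.h for the authoritative definition):

  struct bpf_core_relo {
  	__u32 insn_off;        /* byte offset of the instruction to patch */
  	__u32 type_id;         /* BTF type ID of the root (local) type */
  	__u32 access_str_off;  /* offset of the access spec string, e.g. "0:1:0:5" */
  	enum bpf_core_relo_kind kind;
  };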
2021-12-02 | bpf: Define enum bpf_core_relo_kind as uapi. | Alexei Starovoitov | 5 files, -60/+82
enum bpf_core_relo_kind is generated by LLVM and processed by libbpf. It's a de-facto uapi. With CO-RE in the kernel, the bpf_core_relo_kind values become uapi de-jure. Also rename them with a BPF_CORE_ prefix to distinguish them from conflicting names in bpf_core_read.h. The enums bpf_field_info_kind, bpf_type_id_kind, bpf_type_info_kind, and bpf_enum_value_kind pass different values from the bpf program into LLVM. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-5-alexei.starovoitov@gmail.com
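For reference, a sketch of the renamed enum as I understand it landed in uapi (values should be verified against include/uapi/linux/bpf.h):

  enum bpf_core_relo_kind {
  	BPF_CORE_FIELD_BYTE_OFFSET = 0,  /* field byte offset */
  	BPF_CORE_FIELD_BYTE_SIZE = 1,    /* field size in bytes */
  	BPF_CORE_FIELD_EXISTS = 2,       /* field existence in target kernel */
  	BPF_CORE_FIELD_SIGNED = 3,       /* field signedness */
  	BPF_CORE_FIELD_LSHIFT_U64 = 4,   /* bitfield-specific left bitshift */
  	BPF_CORE_FIELD_RSHIFT_U64 = 5,   /* bitfield-specific right bitshift */
  	BPF_CORE_TYPE_ID_LOCAL = 6,      /* type ID in local BPF object */
  	BPF_CORE_TYPE_ID_TARGET = 7,     /* type ID in target kernel */
  	BPF_CORE_TYPE_EXISTS = 8,        /* type existence in target kernel */
  	BPF_CORE_TYPE_SIZE = 9,          /* type size in bytes */
  	BPF_CORE_ENUMVAL_EXISTS = 10,    /* enum value existence */
  	BPF_CORE_ENUMVAL_VALUE = 11,     /* enum value integer value */
  };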
2021-12-02 | bpf: Prepare relo_core.c for kernel duty. | Alexei Starovoitov | 4 files, -11/+176
Make relo_core.c compile for both the kernel and user-space libbpf. Note the patch reduces BPF_CORE_SPEC_MAX_LEN from 64 to 32. This is the maximum number of nested structs and arrays. For example:

  struct sample {
      int a;
      struct {
          int b[10];
      };
  };

  struct sample *s = ...;
  int *y = &s->b[5];

This field access is encoded as "0:1:0:5" and the spec len is 4. A follow-up patch might bump it back to 64. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-4-alexei.starovoitov@gmail.com
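Decoding that spec step by step (annotation added here for clarity; each element indexes one access step):

  /* access spec "0:1:0:5" for int *y = &s->b[5], spec len 4:
   *   0 - initial dereference of the struct sample pointer (s[0])
   *   1 - member #1 of struct sample: the anonymous struct
   *   0 - member #0 of the anonymous struct: b
   *   5 - index 5 within the b array
   */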
2021-12-02 | bpf: Rename btf_member accessors. | Alexei Starovoitov | 4 files, -19/+19
Rename btf_member_bit_offset() and btf_member_bitfield_size() to avoid conflicts with similarly named helpers in libbpf's btf.h. Rename the kernel helpers, since libbpf helpers are part of uapi. Suggested-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-3-alexei.starovoitov@gmail.com
2021-12-02 | libbpf: Replace btf__type_by_id() with btf_type_by_id(). | Alexei Starovoitov | 3 files, -13/+10
To prepare relo_core.c to be compiled in both the kernel and user space, replace btf__type_by_id() with btf_type_by_id(). In libbpf, btf__type_by_id() and btf_type_by_id() have different behavior: bpf_core_apply_relo_insn() needs the behavior of the uapi btf__type_by_id() vs the internal btf_type_by_id(), but the type_id range check is already done in bpf_core_apply_relo(), so it's safe to replace it everywhere. The kernel btf_type_by_id() does the check anyway; it doesn't hurt. Suggested-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-2-alexei.starovoitov@gmail.com
2021-12-02 | samples: bpf: Fix conflicting types in fds_example | Alexander Lobakin | 1 file, -7/+2
Fix the following samples/bpf build error that appeared after the introduction of bpf_map_create() in libbpf:

  CC  samples/bpf/fds_example.o
  samples/bpf/fds_example.c:49:12: error: static declaration of 'bpf_map_create' follows non-static declaration
  static int bpf_map_create(void)
             ^
  samples/bpf/libbpf/include/bpf/bpf.h:55:16: note: previous declaration is here
  LIBBPF_API int bpf_map_create(enum bpf_map_type map_type,
                 ^
  samples/bpf/fds_example.c:82:23: error: too few arguments to function call, expected 6, have 0
          fd = bpf_map_create();
               ~~~~~~~~~~~~~~ ^
  samples/bpf/libbpf/include/bpf/bpf.h:55:16: note: 'bpf_map_create' declared here
  LIBBPF_API int bpf_map_create(enum bpf_map_type map_type,
                 ^
  2 errors generated.

fds_example by accident had a static function with the same name. It's not worth separating a single call into its own function, so just embed it. Fixes: 992c4225419a ("libbpf: Unify low-level map creation APIs w/ new bpf_map_create()") Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20211201164931.47357-1-alexandr.lobakin@intel.com
2021-12-01 | bpf: Clean-up bpf_verifier_vlog() for BPF_LOG_KERNEL log level | Hou Tao | 1 file, -4/+6
An extra newline is output for bpf_log() with the BPF_LOG_KERNEL level, as shown below:

  [   52.095704] BPF:The function test_3 has 12 arguments. Too many.
  [   52.095704]
  [   52.096896] Error in parsing func ptr test_3 in struct bpf_dummy_ops

All bpf_log() calls end with a newline, but not all btf_verifier_log() calls do, so check whether the log message has a trailing newline and add one if not. Also, there is no need to calculate the remaining userspace buffer size for kernel log output, or to truncate the output with '\0' (vscnprintf() already does that), so only do these for userspace log output. Signed-off-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20211201073458.2731595-2-houtao1@huawei.com
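A sketch of the resulting BPF_LOG_KERNEL path (approximate; see the actual patch for the exact code):

  if (log->level == BPF_LOG_KERNEL) {
  	bool newline = n > 0 && log->kbuf[n - 1] == '\n';

  	/* vscnprintf() already NUL-terminated kbuf; just append a
  	 * newline when the message doesn't end with one */
  	pr_err("BPF: %s%s", log->kbuf, newline ? "" : "\n");
  	return;
  }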
2021-12-01 | Merge branch 'Apply suggestions for typeless/weak ksym series' | Andrii Nakryiko | 2 files, -20/+24
Kumar Kartikeya says: ==================== Three commits addressing comments for the typeless/weak ksym set. No functional change intended. Hopefully this is simpler to read for kfunc as well. ==================== Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
2021-12-01 | libbpf: Avoid reload of imm for weak, unresolved, repeating ksym | Kumar Kartikeya Dwivedi | 1 file, -3/+2
Alexei pointed out that we can use BPF_REG_0, which already contains imm from the move_blob2blob computation. Note that we now compare the second insn's imm, but this should not matter, since both will be zeroed out for the error case of the insn populated earlier. Suggested-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211122235733.634914-4-memxor@gmail.com
2021-12-01 | libbpf: Avoid double stores for success/failure case of ksym relocations | Kumar Kartikeya Dwivedi | 1 file, -16/+21
Instead, jump directly to the success-case stores when ret >= 0; otherwise do the default 0-value store and jump over the success case. This is better in terms of readability. Readjust the code for kfunc relocation as well to follow a similar pattern, which also leads to easier-to-follow code. Suggested-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211122235733.634914-3-memxor@gmail.com
2021-12-01 | bpf: Change bpf_kallsyms_lookup_name size type to ARG_CONST_SIZE_OR_ZERO | Kumar Kartikeya Dwivedi | 1 file, -1/+1
Andrii mentioned in [0] that switching to ARG_CONST_SIZE_OR_ZERO lets users avoid having to prove that the string size at runtime is not zero, and helps with not having to suppress clang optimizations. [0]: https://lore.kernel.org/bpf/CAEf4BzZa_vhXB3c8atNcTS6=krQvC25H7K7c3WWZhM=27ro=Wg@mail.gmail.com Suggested-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211122235733.634914-2-memxor@gmail.com
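A usage sketch from the BPF side (a BPF_PROG_TYPE_SYSCALL program; the helper signature follows the uapi docs, the program layout is illustrative):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  const char sym[] = "bpf_prog_put";
  __u64 addr;

  SEC("syscall")
  int lookup(void *ctx)
  {
  	/* with ARG_CONST_SIZE_OR_ZERO the verifier accepts a size it
  	 * can't prove non-zero at this call site */
  	return bpf_kallsyms_lookup_name(sym, sizeof(sym), 0, &addr);
  }

  char _license[] SEC("license") = "GPL";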
2021-11-30 | Merge branch 'Add bpf_loop helper' | Alexei Starovoitov | 20 files, -39/+771
Joanne Koong says: ==================== This patchset adds a new helper, bpf_loop. One of the complexities of using for loops in bpf programs is that the verifier needs to ensure that in every possibility of the loop logic, the loop will always terminate. As such, there is a limit on how many iterations the loop can do. The bpf_loop helper moves the loop logic into the kernel and can thereby guarantee that the loop will always terminate. The bpf_loop helper simplifies a lot of the complexity the verifier needs to check, as well as removes the constraint on the number of loops able to be run. From the test results, we see that using bpf_loop in place of the traditional for loop led to a decrease in verification time and number of bpf instructions by ~99%. The benchmark results show that as the number of iterations increases, the overhead per iteration decreases.

The high-level overview of the patches:

  Patch 1 - kernel-side + API changes for adding bpf_loop
  Patch 2 - tests
  Patch 3 - use bpf_loop in strobemeta + pyperf600 and measure verifier performance
  Patch 4 - benchmark for throughput + latency of bpf_loop call

v3 -> v4:
  ~ Address nits: use usleep for triggering bpf programs, fix copyright style

v2 -> v3:
  ~ Rerun benchmarks on physical machine, update results
  ~ Propagate original error codes in the verifier

v1 -> v2:
  ~ Change helper name to bpf_loop (instead of bpf_for_each)
  ~ Set max nr_loops (~8 million loops) for bpf_loop call
  ~ Split tests + strobemeta/pyperf600 changes into two patches
  ~ Add new ops_report_final helper for outputting throughput and latency
==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2021-11-30 | selftest/bpf/benchs: Add bpf_loop benchmark | Joanne Koong | 7 files, -1/+203
Add a benchmark to measure the throughput and latency of the bpf_loop call. Testing this on my dev machine on 1 thread, the data is as follows:

  nr_loops: 10
  bpf_loop - throughput: 198.519 ± 0.155 M ops/s, latency: 5.037 ns/op
  nr_loops: 100
  bpf_loop - throughput: 247.448 ± 0.305 M ops/s, latency: 4.041 ns/op
  nr_loops: 500
  bpf_loop - throughput: 260.839 ± 0.380 M ops/s, latency: 3.834 ns/op
  nr_loops: 1000
  bpf_loop - throughput: 262.806 ± 0.629 M ops/s, latency: 3.805 ns/op
  nr_loops: 5000
  bpf_loop - throughput: 264.211 ± 1.508 M ops/s, latency: 3.785 ns/op
  nr_loops: 10000
  bpf_loop - throughput: 265.366 ± 3.054 M ops/s, latency: 3.768 ns/op
  nr_loops: 50000
  bpf_loop - throughput: 235.986 ± 20.205 M ops/s, latency: 4.238 ns/op
  nr_loops: 100000
  bpf_loop - throughput: 264.482 ± 0.279 M ops/s, latency: 3.781 ns/op
  nr_loops: 500000
  bpf_loop - throughput: 309.773 ± 87.713 M ops/s, latency: 3.228 ns/op
  nr_loops: 1000000
  bpf_loop - throughput: 262.818 ± 4.143 M ops/s, latency: 3.805 ns/op

From this data, we can see that the latency per loop decreases as the number of loops increases. On this particular machine, each loop had an overhead of about ~4 ns, and we were able to run ~250 million loops per second. Signed-off-by: Joanne Koong <joannekoong@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211130030622.4131246-5-joannekoong@fb.com
2021-11-30 | selftests/bpf: Measure bpf_loop verifier performance | Joanne Koong | 5 files, -4/+169
This patch tests bpf_loop in pyperf and strobemeta, and measures the verifier performance of replacing the traditional for loop with bpf_loop. The results are as follows:

~strobemeta~

  Baseline:
    verification time 6808200 usec, stack depth 496
    processed 554252 insns (limit 1000000) max_states_per_insn 16 total_states 15878 peak_states 13489 mark_read 3110
    #192 verif_scale_strobemeta:OK (unrolled loop)

  Using bpf_loop:
    verification time 31589 usec, stack depth 96+400
    processed 1513 insns (limit 1000000) max_states_per_insn 2 total_states 106 peak_states 106 mark_read 60
    #193 verif_scale_strobemeta_bpf_loop:OK

~pyperf600~

  Baseline:
    verification time 29702486 usec, stack depth 368
    processed 626838 insns (limit 1000000) max_states_per_insn 7 total_states 30368 peak_states 30279 mark_read 748
    #182 verif_scale_pyperf600:OK (unrolled loop)

  Using bpf_loop:
    verification time 148488 usec, stack depth 320+40
    processed 10518 insns (limit 1000000) max_states_per_insn 10 total_states 705 peak_states 517 mark_read 38
    #183 verif_scale_pyperf600_bpf_loop:OK

Using the bpf_loop helper led to approximately a 99% decrease in the verification time and in the number of instructions. Signed-off-by: Joanne Koong <joannekoong@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211130030622.4131246-4-joannekoong@fb.com
2021-11-30 | selftests/bpf: Add bpf_loop test | Joanne Koong | 2 files, -0/+257
Add test for bpf_loop testing a variety of cases: various nr_loops, null callback ctx, invalid flags, nested callbacks. Signed-off-by: Joanne Koong <joannekoong@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211130030622.4131246-3-joannekoong@fb.com
2021-11-30 | bpf: Add bpf_loop helper | Joanne Koong | 6 files, -34/+142
This patch adds the kernel-side and API changes for a new helper function, bpf_loop:

  long bpf_loop(u32 nr_loops, void *callback_fn, void *callback_ctx, u64 flags);

where

  long (*callback_fn)(u32 index, void *ctx);

bpf_loop invokes the "callback_fn" nr_loops times or until the callback_fn returns 1. The callback_fn can only return 0 or 1, and this is enforced by the verifier. The callback_fn index is zero-indexed.

A few things to please note:
  ~ The "u64 flags" parameter is currently unused but is included in case a future use case for it arises.
  ~ In the kernel-side implementation of bpf_loop (kernel/bpf/bpf_iter.c), bpf_callback_t is used as the callback function cast.
  ~ A program can have nested bpf_loop calls, but the program must still adhere to the verifier constraint on its stack depth (the stack depth cannot exceed MAX_BPF_STACK).
  ~ Recursive callback_fns do not pass the verifier, due to the call stack for these being too deep.
  ~ The next patch will include the tests and benchmark.

Signed-off-by: Joanne Koong <joannekoong@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211130030622.4131246-2-joannekoong@fb.com
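A minimal usage sketch (the attach point and names are illustrative; the helper and callback signatures are as described above):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  struct loop_ctx {
  	long sum;
  };

  static long add_idx(u32 index, void *ctx)
  {
  	struct loop_ctx *lc = ctx;

  	lc->sum += index;
  	return 0;	/* 0 = keep looping, 1 = break */
  }

  SEC("tracepoint/syscalls/sys_enter_getpid")
  int run_loop(void *ctx)
  {
  	struct loop_ctx lc = { .sum = 0 };

  	/* invoke add_idx() 10 times; flags must currently be 0 */
  	bpf_loop(10, add_idx, &lc, 0);
  	return 0;
  }

  char _license[] SEC("license") = "GPL";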
2021-11-30 | bpf, docs: Split general purpose eBPF documentation out of filter.rst | Christoph Hellwig | 4 files, -990/+1008
filter.rst starts out documenting classic BPF and then spills into introducing and documenting eBPF. Move the eBPF documentation into two new files under Documentation/bpf/, for the instruction set and the verifier, and link to the BPF documentation from filter.rst. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211119163215.971383-6-hch@lst.de
2021-11-30 | bpf, docs: Move handling of maps to Documentation/bpf/maps.rst | Christoph Hellwig | 2 files, -44/+46
Move the general maps documentation into the maps.rst file from the overall networking filter documentation and add a link instead. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211119163215.971383-5-hch@lst.de
2021-11-30 | bpf, docs: Prune all references to "internal BPF" | Christoph Hellwig | 6 files, -21/+20
The eBPF name has completely taken over from "internal BPF" in general usage for the actual eBPF representation, or BPF for any general in-kernel use. Prune all remaining references to "internal BPF". Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211119163215.971383-4-hch@lst.de
2021-11-30 | bpf: Remove a redundant comment on bpf_prog_free | Christoph Hellwig | 1 file, -1/+0
The comment stating that the prog_free helper frees the program is not exactly useful, so just remove it. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211119163215.971383-3-hch@lst.de
2021-11-30 | x86, bpf: Cleanup the top of file header in bpf_jit_comp.c | Christoph Hellwig | 1 file, -2/+2
Don't bother mentioning the file name as it is implied, and remove the reference to internal BPF. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211119163215.971383-2-hch@lst.de
2021-11-30 | libbpf: Remove duplicate assignments | Mehrdad Arshad Rad | 1 file, -1/+0
Remove a duplicate assignment: the same assignment is already done where load_attr.attach_btf_id is initialized. Signed-off-by: Mehrdad Arshad Rad <arshad.rad@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211128193337.10628-1-arshad.rad@gmail.com
2021-11-29 | libbpf: Silence uninitialized warning/error in btf_dump_dump_type_data | Alan Maguire | 1 file, -1/+1
When compiling libbpf with gcc 4.8.5, we see:

  CC       staticobjs/btf_dump.o
  btf_dump.c: In function ‘btf_dump_dump_type_data.isra.24’:
  btf_dump.c:2296:5: error: ‘err’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
     if (err < 0)
     ^
  cc1: all warnings being treated as errors
  make: *** [staticobjs/btf_dump.o] Error 1

While gcc 4.8.5 is too old to build the upstream kernel, it's possible it could be used to build a standalone libbpf, which suffers from the same problem. Silence the error by initializing 'err' to 0. The warning/error seems to be a false positive, since err is set early in the function. Regardless, we shouldn't prevent libbpf from building because of this. Fixes: 920d16af9b42 ("libbpf: BTF dumper support for typed data") Signed-off-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1638180040-8037-1-git-send-email-alan.maguire@oracle.com
2021-11-29 | Merge branch 'Support static initialization of BPF_MAP_TYPE_PROG_ARRAY' | Andrii Nakryiko | 3 files, -33/+192
Hengqi Chen says: ==================== Make libbpf support static initialization of BPF_MAP_TYPE_PROG_ARRAY with a syntax similar to map-in-map initialization:

  SEC("socket")
  int tailcall_1(void *ctx)
  {
      return 0;
  }

  struct {
      __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
      __uint(max_entries, 2);
      __uint(key_size, sizeof(__u32));
      __array(values, int (void *));
  } prog_array_init SEC(".maps") = {
      .values = {
          [1] = (void *)&tailcall_1,
      },
  };

v1->v2:
  - Add stricter checks on relos collect, some renamings (Andrii)
  - Update selftest to check tailcall result (Andrii)
==================== Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
2021-11-29 | selftests/bpf: Test BPF_MAP_TYPE_PROG_ARRAY static initialization | Hengqi Chen | 2 files, -0/+71
Add testcase for BPF_MAP_TYPE_PROG_ARRAY static initialization. Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211128141633.502339-3-hengqi.chen@gmail.com
2021-11-29 | libbpf: Support static initialization of BPF_MAP_TYPE_PROG_ARRAY | Hengqi Chen | 1 file, -33/+121
Support static initialization of BPF_MAP_TYPE_PROG_ARRAY with a syntax similar to map-in-map initialization ([0]):

  SEC("socket")
  int tailcall_1(void *ctx)
  {
      return 0;
  }

  struct {
      __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
      __uint(max_entries, 2);
      __uint(key_size, sizeof(__u32));
      __array(values, int (void *));
  } prog_array_init SEC(".maps") = {
      .values = {
          [1] = (void *)&tailcall_1,
      },
  };

Here's the relevant part of the libbpf debug log showing what's going on with prog-array initialization:

  libbpf: sec '.relsocket': collecting relocation for section(3) 'socket'
  libbpf: sec '.relsocket': relo #0: insn #2 against 'prog_array_init'
  libbpf: prog 'entry': found map 0 (prog_array_init, sec 4, off 0) for insn #0
  libbpf: .maps relo #0: for 3 value 0 rel->r_offset 32 name 53 ('tailcall_1')
  libbpf: .maps relo #0: map 'prog_array_init' slot [1] points to prog 'tailcall_1'
  libbpf: map 'prog_array_init': created successfully, fd=5
  libbpf: map 'prog_array_init': slot [1] set to prog 'tailcall_1' fd=6

[0] Closes: https://github.com/libbpf/libbpf/issues/354 Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211128141633.502339-2-hengqi.chen@gmail.com
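A sketch of how an entry program might then use the statically initialized map (the entry program body here is illustrative, continuing the example above):

  SEC("socket")
  int entry(void *ctx)
  {
  	/* jump to tailcall_1, wired into slot [1] at load time */
  	bpf_tail_call(ctx, &prog_array_init, 1);
  	return 0;	/* reached only if the tail call fails */
  }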
2021-11-27 | bpf, mips: Fix build errors about __NR_bpf undeclared | Tiezhu Yang | 3 files, -0/+22
Add the __NR_bpf definitions to fix the following build errors for mips:

  $ cd tools/bpf/bpftool
  $ make
  [...]
  bpf.c:54:4: error: #error __NR_bpf not defined. libbpf does not support your arch.
   # error __NR_bpf not defined. libbpf does not support your arch.
     ^~~~~
  bpf.c: In function ‘sys_bpf’:
  bpf.c:66:17: error: ‘__NR_bpf’ undeclared (first use in this function); did you mean ‘__NR_brk’?
    return syscall(__NR_bpf, cmd, attr, size);
                   ^~~~~~~~
                   __NR_brk
  [...]
  In file included from gen_loader.c:15:0:
  skel_internal.h: In function ‘skel_sys_bpf’:
  skel_internal.h:53:17: error: ‘__NR_bpf’ undeclared (first use in this function); did you mean ‘__NR_brk’?
    return syscall(__NR_bpf, cmd, attr, size);
                   ^~~~~~~~
                   __NR_brk

We can see the following generated definitions:

  $ grep -r "#define __NR_bpf" arch/mips
  arch/mips/include/generated/uapi/asm/unistd_o32.h:#define __NR_bpf (__NR_Linux + 355)
  arch/mips/include/generated/uapi/asm/unistd_n64.h:#define __NR_bpf (__NR_Linux + 315)
  arch/mips/include/generated/uapi/asm/unistd_n32.h:#define __NR_bpf (__NR_Linux + 319)

__NR_Linux is defined in arch/mips/include/uapi/asm/unistd.h:

  $ grep -r "#define __NR_Linux" arch/mips
  arch/mips/include/uapi/asm/unistd.h:#define __NR_Linux 4000
  arch/mips/include/uapi/asm/unistd.h:#define __NR_Linux 5000
  arch/mips/include/uapi/asm/unistd.h:#define __NR_Linux 6000

That is to say, __NR_bpf is: 4000 + 355 = 4355 for mips o32, 6000 + 319 = 6319 for mips n32, 5000 + 315 = 5315 for mips n64. So use the GCC pre-defined macros _ABIO32, _ABIN32 and _ABI64 [1] to define the corresponding __NR_bpf, as sketched below. This patch is similar to commit bad1926dd2f6 ("bpf, s390: fix build for libbpf and selftest suite"). [1] https://gcc.gnu.org/git/?p=gcc.git;a=blob;f=gcc/config/mips/mips.h#l549 Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/1637804167-8323-1-git-send-email-yangtiezhu@loongson.cn
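The gist of the fix, following the arithmetic above (a sketch; see the patch for the exact placement in bpf.c and skel_internal.h):

  #ifndef __NR_bpf
  # if defined(__mips__) && defined(_ABIO32)
  #  define __NR_bpf 4355
  # elif defined(__mips__) && defined(_ABIN32)
  #  define __NR_bpf 6319
  # elif defined(__mips__) && defined(_ABI64)
  #  define __NR_bpf 5315
  # endif
  #endif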
2021-11-26 | selftests/bpf: Fix misaligned accesses in xdp and xdp_bpf2bpf tests | Andrii Nakryiko | 2 files, -8/+9
Similar to the previous patch, copy the necessary struct into a local stack variable before checking its fields. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211124002325.1737739-14-andrii@kernel.org
2021-11-26 | selftests/bpf: Fix misaligned memory accesses in xdp_bonding test | Andrii Nakryiko | 1 file, -16/+20
Construct packet buffer explicitly for each packet to avoid unaligned memory accesses. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211124002325.1737739-13-andrii@kernel.org
2021-11-26 | selftests/bpf: Prevent out-of-bounds stack access in test_bpffs | Andrii Nakryiko | 1 file, -1/+3
buf can be non-zero-terminated, leading strstr() to access data beyond the intended buf[] array. Fix by forcing zero termination. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211124002325.1737739-12-andrii@kernel.org
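The shape of the fix (a sketch; variable names are illustrative):

  len = read(fd, buf, sizeof(buf) - 1);
  if (len < 0)
  	goto cleanup;
  buf[len] = '\0';	/* force termination so strstr() stays in bounds */
  if (!strstr(buf, "expected_substring"))
  	goto cleanup;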
2021-11-26 | selftests/bpf: Fix misaligned memory access in queue_stack_map test | Andrii Nakryiko | 1 file, -5/+7
Copy over iphdr into a local variable before accessing its fields. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211124002325.1737739-11-andrii@kernel.org
2021-11-26 | selftests/bpf: Prevent misaligned memory access in get_stack_raw_tp test | Andrii Nakryiko | 1 file, -4/+10
Perfbuf doesn't guarantee 8-byte alignment of the data the way the BPF ringbuf does, so struct get_stack_trace_t can arrive improperly aligned for subsequent u64 accesses. The easiest fix is to just copy the data locally. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211124002325.1737739-10-andrii@kernel.org
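A sketch of the local-copy pattern in a perfbuf sample handler (names illustrative):

  static void on_sample(void *ctx, int cpu, void *data, __u32 size)
  {
  	struct get_stack_trace_t e;

  	/* data may not be 8-byte aligned; the local copy is */
  	memcpy(&e, data, sizeof(e));
  	/* ... the u64 fields of e can now be read safely ... */
  }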
2021-11-26 | selftests/bpf: Fix possible NULL passed to memcpy() with zero size | Andrii Nakryiko | 1 file, -1/+2
Prevent sanitizer from complaining about passing NULL into memcpy(), even if it happens with zero size. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211124002325.1737739-9-andrii@kernel.org
2021-11-26 | selftests/bpf: Fix UBSan complaint about signed __int128 overflow | Andrii Nakryiko | 1 file, -1/+1
The test uses an __int128 variable as unsigned, and the highest-order bit can be set to 1 after a bit shift. Use unsigned __int128 explicitly and prevent UBSan from complaining. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211124002325.1737739-8-andrii@kernel.org
2021-11-26 | libbpf: Fix using invalidated memory in bpf_linker | Andrii Nakryiko | 1 file, -1/+4
add_dst_sec() can invalidate bpf_linker's section index, making the dst_symtab pointer point into unallocated memory. Reinitialize the dst_symtab pointer on each iteration to make sure it's always valid. Fixes: faf6ed321cf6 ("libbpf: Add BPF static linker APIs") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211124002325.1737739-7-andrii@kernel.org
2021-11-26 | libbpf: Fix glob_syms memory leak in bpf_linker | Andrii Nakryiko | 1 file, -0/+1
The glob_syms array wasn't freed on bpf_linker__free(). Fix that. Fixes: a46349227cd8 ("libbpf: Add linker extern resolution support for functions and global variables") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211124002325.1737739-6-andrii@kernel.org
2021-11-26 | libbpf: Don't call libc APIs with NULL pointers | Andrii Nakryiko | 1 file, -3/+7
Sanitizer complains about qsort(), bsearch(), and memcpy() being called with NULL pointer. This can only happen when the associated number of elements is zero, so no harm should be done. But still prevent this from happening to keep sanitizer runs clean from extra noise. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211124002325.1737739-5-andrii@kernel.org
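The pattern of the fix (a sketch): guard each call so libc never sees a NULL pointer, even with a zero count:

  if (n_elems)
  	qsort(elems, n_elems, elem_sz, cmp_fn);
  if (src && n_bytes)
  	memcpy(dst, src, n_bytes);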
2021-11-26 | libbpf: Fix potential misaligned memory access in btf_ext__new() | Andrii Nakryiko | 2 files, -6/+6
Perform a memory copy before we do the sanity checks of btf_ext_hdr. This prevents misaligned memory access if raw btf_ext data is not 4-byte aligned ([0]). While at it, also add missing const qualifier. [0] Closes: https://github.com/libbpf/libbpf/issues/391 Fixes: 2993e0515bb4 ("tools/bpf: add support to read .BTF.ext sections") Reported-by: Evgeny Vereshchagin <evvers@ya.ru> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211124002325.1737739-3-andrii@kernel.org
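A sketch of the reordering (names and error handling approximate):

  struct btf_ext_header hdr;

  if (size < sizeof(hdr))
  	return NULL;
  /* copy first: raw data may be unaligned and must not be
   * dereferenced as a struct directly */
  memcpy(&hdr, data, sizeof(hdr));
  if (hdr.magic != BTF_MAGIC)
  	return NULL;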
2021-11-26 | tools/resolve_btf_ids: Close ELF file on error | Andrii Nakryiko | 1 file, -2/+3
Fix one case where we don't do explicit clean up. Fixes: fbbb68de80a4 ("bpf: Add resolve_btfids tool to resolve BTF IDs in ELF object") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211124002325.1737739-2-andrii@kernel.org
2021-11-26 | selftests/bpf: Migrate selftests to bpf_map_create() | Andrii Nakryiko | 21 files, -256/+201
The conversion is straightforward for most cases. In a few cases tests are using mutable map_flags and attribute structs, but bpf_map_create_opts can be used in a similar fashion, so there were no problems. Just lots of repetitive conversions. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211124193233.3115996-5-andrii@kernel.org
2021-11-26 | libbpf: Prevent deprecation warnings in xsk.c | Andrii Nakryiko | 1 file, -0/+5
xsk.c uses its own APIs that are marked for deprecation internally. Given xsk.c and xsk.h will be gone in libbpf 1.0, there is no reason to do a public vs internal function split just to avoid deprecation warnings. So just add a pragma to silence deprecation warnings (until the code is removed completely). Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211124193233.3115996-4-andrii@kernel.org
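The silencing approach boils down to one directive near the top of xsk.c (a sketch; upstream may scope it differently):

  /* xsk.c deliberately calls its own deprecated APIs until the file is
   * removed in libbpf 1.0; mute the self-inflicted warnings */
  #pragma GCC diagnostic ignored "-Wdeprecated-declarations"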
2021-11-26 | libbpf: Use bpf_map_create() consistently internally | Andrii Nakryiko | 4 files, -51/+25
Remove all the remaining uses of to-be-deprecated bpf_create_map*() APIs. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211124193233.3115996-3-andrii@kernel.org
2021-11-26 | libbpf: Unify low-level map creation APIs w/ new bpf_map_create() | Andrii Nakryiko | 7 files, -153/+126
Mark the entire zoo of low-level map creation APIs for deprecation in libbpf 0.7 ([0]) and introduce a new bpf_map_create() API that is OPTS-based (and thus future-proof) and matches the BPF_MAP_CREATE command name. While at it, ensure that gen_loader sends the map_extra field. Also remove the now-unneeded btf_key_type_id/btf_value_type_id logic that libbpf was doing anyway. [0] Closes: https://github.com/libbpf/libbpf/issues/282 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211124193233.3115996-2-andrii@kernel.org
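A usage sketch of the new OPTS-based API (the map name, sizes, and opts fields shown here are assumptions for illustration):

  #include <bpf/bpf.h>

  LIBBPF_OPTS(bpf_map_create_opts, opts,
  	.map_flags = BPF_F_NO_PREALLOC,
  );

  int fd = bpf_map_create(BPF_MAP_TYPE_HASH, "my_map",
  			sizeof(__u32), sizeof(__u64), 1024, &opts);
  if (fd < 0)
  	/* handle error: fd is a negative error code */;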
2021-11-26 | selftests/bpf: Mix legacy (maps) and modern (vars) BPF in one test | Andrii Nakryiko | 2 files, -0/+138
Add a selftest that combines two BPF programs within a single BPF object file, such that one of the programs uses global variables but can be skipped at runtime on old kernels that don't support global data. The other BPF program is written with the goal of being runnable on very old kernels and relies only on explicitly accessed BPF maps. Such a test, run against old kernels (e.g., libbpf CI will run it against a 4.9 kernel that doesn't support global data), allows testing the approach and ensures that libbpf doesn't make unnecessary assumptions about required kernel features. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211123200105.387855-2-andrii@kernel.org
2021-11-26 | libbpf: Load global data maps lazily on legacy kernels | Andrii Nakryiko | 1 file, -4/+30
Load global data maps lazily if the kernel is too old to support global data. Make sure that programs are still correct by detecting whether any of the to-be-loaded programs have a relocation against any such map. This solves the issue ([0]) with bpf_printk() and Clang generating unnecessary and unreferenced .rodata.strX.Y sections, but it also goes further along the CO-RE lines, allowing a BPF object in which some code can work on very old kernels and relies only on explicit BPF maps, while other BPF programs can enjoy global variable support. If such programs are correctly set not to load at runtime on old kernels, bpf_object will now load and function correctly. [0] https://lore.kernel.org/bpf/CAK-59YFPU3qO+_pXWOH+c1LSA=8WA1yabJZfREjOEXNHAqgXNg@mail.gmail.com/ Fixes: aed659170a31 ("libbpf: Support multiple .rodata.* and .data.* BPF maps") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211123200105.387855-1-andrii@kernel.org
2021-11-23 | selftests/bpf: Fix trivial typo | Drew Fustini | 1 file, -1/+1
Fix trivial typo in comment from 'oveflow' to 'overflow'. Reported-by: Gustavo A. R. Silva <gustavoars@kernel.org> Signed-off-by: Drew Fustini <dfustini@baylibre.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211122070528.837806-1-dfustini@baylibre.com