path: root/tools/lib/bpf/skel_internal.h
author		Daniel Borkmann <daniel@iogearbox.net>	2021-05-19 16:27:32 +0300
committer	Daniel Borkmann <daniel@iogearbox.net>	2021-05-19 16:27:42 +0300
commit		3d431677687652808a9c9cf9e6c59dbeaf4a1f4d (patch)
tree		a5e9ec92213d4e4e9d4450ef578d09b57e9343e4 /tools/lib/bpf/skel_internal.h
parent		fa7b83bf3b156c767f3e4a25bbf3817b08f3ff8e (diff)
parent		1a532eb28df3e45ae2d403d66c0e98295688ae10 (diff)
Merge branch 'bpf-loader-progs'
Alexei Starovoitov says:

====================
v5->v6:
- fixed an issue found by bpf CI. The light skeleton generation was doing
  a dry run of loading the program, where all actual sys_bpf syscalls were
  replaced by calls into gen_loader. It turned out that the search for a
  valid vmlinux_btf was not stubbed out, which was causing light skeleton
  generation to fail on older kernels.
- significantly reduced verbosity of gen_loader.c.
- an example trace_printk.lskel.h generated out of progs/trace_printk.c:
  https://gist.github.com/4ast/774ea58f8286abac6aa8e3bf3bf3b903

v4->v5:
- addressed a bunch of minor comments from Andrii.
- the main difference is that lskel is now more robust in case of errors
  and a bit cleaner looking.

v3->v4:
- cleaned up closing of temporary FDs in case an intermediate sys_bpf
  fails during execution of the loader program.
- added support for rodata in the skeleton.
- enforce bpf_prog_type_syscall to be sleepable, since it needs
  bpf_copy_from_user to populate the rodata map.
- converted the trace_printk test to use lskel to test rodata access.
- various small bug fixes.

v2->v3: Addressed comments from Andrii and John.
- added support for setting max_entries after signature verification and
  used it in the ringbuf test, since ringbuf's max_entries has to be
  updated after skeleton open() and before load(). See patch 20.
- bpf_btf_find_by_name_kind doesn't take btf_fd anymore. Because of that,
  removed attach_prog_fd from bpf_prog_desc in lskel. Both features are to
  be added later.
- cleaned up closing of fd==0 during loader gen by resetting fds back to -1.
- converted loader gen to use memset(&attr, 0, cmd_specific_attr_size);
  would love to see this optimization in the rest of libbpf.
- fixed a memory leak during loader gen in case of ENOMEM.
- support for the fd_array kernel feature is added in patch 9 to get
  exhaustive testing across all selftests, and then partially reverted in
  patch 15 to keep old-style map_fd patching tested as well.
- since fentry_test/fexit_test were extended with re-attach, had to add
  support for a per-program attach method in lskel and use it in the tests.
- cleaned up closing of FDs in lskel in case of partial failures.
- fixed numerous small nits.

v1->v2: Addressed comments from Al, Yonghong and Andrii.
- documented the sys_close fdget/fdput requirement and non-recursion check.
- reduced internal API leaks between libbpf and bpftool. Now
  bpf_object__gen_loader() is the only new libbpf API, with minimal fields.
- fixed the light skeleton __destroy() method to munmap and close maps
  and progs.
- refactored bpf_btf_find_by_name_kind to return btf_id | (btf_obj_fd << 32).
- refactored the use of bpf_btf_find_by_name_kind from the loader prog.
- moved auto-gen-like code into skel_internal.h, which is used by *.lskel.h.
  It has a minimal static inline bpf_load_and_run() method used by lskel.
- added an lskel.h example in patch 15.
- replaced union bpf_map_prog_desc with struct bpf_map_desc and
  struct bpf_prog_desc.
- removed mark_feat_supported and added a patch to pass 'obj' into
  kernel_supports.
- added proper tracking of temporary FDs in the loader prog and their
  cleanup via bpf_sys_close.
- renamed gen_trace.c to gen_loader.c to better align the naming throughout.
- expanded the number of available helpers in the new prog type.
- added support for raw_tp attaching in lskel. lskel supports tracing and
  raw_tp progs now. It correctly loads all networking prog types too, but
  the __attach() method is tbd.
- converted progs/test_ksyms_module.c to lskel.
- minor feedback fixes all over.
The description of the v1 set is still valid:

This is a first step towards signed bpf programs, and the third approach
of that kind. The first approach was to bring libbpf into the kernel as a
user-mode-driver. The second approach was to invent a new file format and
let the kernel execute that format as a sequence of syscalls that create
maps and load programs. This third approach uses a new type of bpf program
instead of inventing a file format. The 1st and 2nd approaches had too
many downsides compared to this 3rd one and were discarded after months
of work.

To make it work, the following new concepts are introduced:

1. syscall bpf program type
A kind of bpf program that can do sys_bpf and sys_close syscalls.
It can only execute in user context. (A minimal hand-written sketch of
such a program follows this message.)

2. FD array or FD index
Traditionally BPF instructions are patched with FDs, which means that maps
have to be created first and the instructions modified afterwards; that
breaks signature verification if the program is signed. Instead of
patching each instruction with an FD, patch it with an index into an array
of FDs. That makes the program signature stable if it uses maps.

3. loader program that is generated as an "strace of libbpf"
When libbpf is loading bpf_file.o, it does a bunch of sys_bpf() syscalls
to load BTF, create maps, populate maps and finally load programs. Instead
of actually doing the syscalls, generate a trace of what libbpf would have
done and represent it as the "loader program". The "loader program"
consists of a single map and a single bpf program that does those
syscalls. Executing such a "loader program" via the bpf_prog_test_run()
command will replay the sequence of syscalls that libbpf would have done,
which results in the same maps being created and programs loaded as
specified in the elf file. The "loader program" removes libelf and the
majority of the libbpf dependency from the program loading process.

4. light skeleton
Instead of embedding the whole elf file into the skeleton and using libbpf
to parse it later, generate a loader program and embed it into a "light
skeleton". Such a skeleton can load the same set of elf files, but it
doesn't need libbpf and libelf to do that. It only needs a few sys_bpf
wrappers.

Future steps:
- support CO-RE in the kernel. This patch set is already too big, so that
  critical feature is left for the next step.
- generate the light skeleton in golang to allow such users to use BTF and
  all the other features provided by libbpf.
- generate a light skeleton for the kernel, so that bpf programs can be
  embedded in a kernel module. The UMD usage in bpf_preload will be
  replaced with such a skeleton, so bpf_preload would become a standard
  kernel module without a user space dependency.
- finally, do the signing of the loader program.

The patches are a work in progress with a few rough edges.
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
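To make the "syscall bpf program type" above concrete, here is a minimal
hand-written sketch of such a program, loosely modeled on the selftests
added in this series. The program and map names are illustrative only;
real loader programs are emitted by gen_loader.c rather than written by
hand, and only the bpf_sys_bpf()/bpf_sys_close() helpers and the
SEC("syscall") program type come from this patch set.

/* Hypothetical example of a BPF_PROG_TYPE_SYSCALL program. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "Dual BSD/GPL";

SEC("syscall")
int loader_prog(void *ctx)
{
	/* attr for a single sys_bpf(BPF_MAP_CREATE); a generated loader
	 * prog would read its attrs out of the single loader map instead.
	 */
	static union bpf_attr map_attr = {
		.map_type    = BPF_MAP_TYPE_ARRAY,
		.key_size    = 4,
		.value_size  = 8,
		.max_entries = 1,
	};
	int map_fd;

	/* bpf_sys_bpf() issues sys_bpf from program context... */
	map_fd = bpf_sys_bpf(BPF_MAP_CREATE, &map_attr, sizeof(map_attr));
	if (map_fd < 0)
		return map_fd;

	/* ...and bpf_sys_close() cleans up temporary FDs. */
	bpf_sys_close(map_fd);
	return 0;
}

Such a program only runs in user context via bpf_prog_test_run(), which is
exactly how bpf_load_and_run() in the header below executes the generated
loader program.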
Diffstat (limited to 'tools/lib/bpf/skel_internal.h')
-rw-r--r--	tools/lib/bpf/skel_internal.h	123
1 file changed, 123 insertions, 0 deletions
diff --git a/tools/lib/bpf/skel_internal.h b/tools/lib/bpf/skel_internal.h
new file mode 100644
index 000000000000..12a126b452c1
--- /dev/null
+++ b/tools/lib/bpf/skel_internal.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
+/* Copyright (c) 2021 Facebook */
+#ifndef __SKEL_INTERNAL_H
+#define __SKEL_INTERNAL_H
+
+#include <unistd.h>
+#include <sys/syscall.h>
+#include <sys/mman.h>
+
+/* This file is a base header for auto-generated *.lskel.h files.
+ * Its contents will change and may become part of auto-generation in the future.
+ *
+ * The layout of bpf_[map|prog]_desc and bpf_loader_ctx is feature dependent
+ * and will change from one version of libbpf to another, depending on the
+ * features requested during loader program generation.
+ */
+struct bpf_map_desc {
+	union {
+		/* input for the loader prog */
+		struct {
+			__aligned_u64 initial_value;
+			__u32 max_entries;
+		};
+		/* output of the loader prog */
+		struct {
+			int map_fd;
+		};
+	};
+};
+struct bpf_prog_desc {
+	int prog_fd;
+};
+
+struct bpf_loader_ctx {
+	size_t sz;
+	__u32 log_level;
+	__u32 log_size;
+	__u64 log_buf;
+};
+
+struct bpf_load_and_run_opts {
+	struct bpf_loader_ctx *ctx;
+	const void *data;
+	const void *insns;
+	__u32 data_sz;
+	__u32 insns_sz;
+	const char *errstr;
+};
+
+static inline int skel_sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr,
+			       unsigned int size)
+{
+	return syscall(__NR_bpf, cmd, attr, size);
+}
+
+static inline int skel_closenz(int fd)
+{
+	if (fd > 0)
+		return close(fd);
+	return -EINVAL;
+}
+
+static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
+{
+	int map_fd = -1, prog_fd = -1, key = 0, err;
+	union bpf_attr attr;
+
+	map_fd = bpf_create_map_name(BPF_MAP_TYPE_ARRAY, "__loader.map", 4,
+				     opts->data_sz, 1, 0);
+	if (map_fd < 0) {
+		opts->errstr = "failed to create loader map";
+		err = -errno;
+		goto out;
+	}
+
+	err = bpf_map_update_elem(map_fd, &key, opts->data, 0);
+	if (err < 0) {
+		opts->errstr = "failed to update loader map";
+		err = -errno;
+		goto out;
+	}
+
+	memset(&attr, 0, sizeof(attr));
+	attr.prog_type = BPF_PROG_TYPE_SYSCALL;
+	attr.insns = (long) opts->insns;
+	attr.insn_cnt = opts->insns_sz / sizeof(struct bpf_insn);
+	attr.license = (long) "Dual BSD/GPL";
+	memcpy(attr.prog_name, "__loader.prog", sizeof("__loader.prog"));
+	attr.fd_array = (long) &map_fd;
+	attr.log_level = opts->ctx->log_level;
+	attr.log_size = opts->ctx->log_size;
+	attr.log_buf = opts->ctx->log_buf;
+	attr.prog_flags = BPF_F_SLEEPABLE;
+	prog_fd = skel_sys_bpf(BPF_PROG_LOAD, &attr, sizeof(attr));
+	if (prog_fd < 0) {
+		opts->errstr = "failed to load loader prog";
+		err = -errno;
+		goto out;
+	}
+
+	memset(&attr, 0, sizeof(attr));
+	attr.test.prog_fd = prog_fd;
+	attr.test.ctx_in = (long) opts->ctx;
+	attr.test.ctx_size_in = opts->ctx->sz;
+	err = skel_sys_bpf(BPF_PROG_TEST_RUN, &attr, sizeof(attr));
+	if (err < 0 || (int)attr.test.retval < 0) {
+		opts->errstr = "failed to execute loader prog";
+		if (err < 0)
+			err = -errno;
+		else
+			err = (int)attr.test.retval;
+		goto out;
+	}
+	err = 0;
+out:
+	if (map_fd >= 0)
+		close(map_fd);
+	if (prog_fd >= 0)
+		close(prog_fd);
+	return err;
+}
+
+#endif
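
For context on how this header is consumed, below is a rough sketch of
what a generated *.lskel.h boils down to. The struct layout, names and
placeholder blobs here are hypothetical, not bpftool's actual output; in a
real light skeleton the instruction and data arrays are emitted from the
elf file at generation time.

/* Hypothetical hand-rolled equivalent of a generated light skeleton. */
#include <errno.h>
#include <string.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>
#include <bpf/skel_internal.h>

struct example_lskel {
	struct bpf_loader_ctx ctx;	/* must come first: passed as ctx_in */
	struct bpf_map_desc rodata;	/* loader prog writes map_fd here */
	struct bpf_prog_desc main_prog;	/* loader prog writes prog_fd here */
};

/* Placeholders: a real lskel embeds the actual bytes generated from
 * the elf file. */
static const struct bpf_insn loader_insns[1];
static const char loader_data[4];

static inline int example_lskel__load(struct example_lskel *skel)
{
	struct bpf_load_and_run_opts opts = {};

	/* The whole blob (ctx + descriptors) is handed to the loader prog
	 * via ctx_in/ctx_out, so the prog can write the FDs back by offset.
	 */
	skel->ctx.sz = sizeof(*skel);
	opts.ctx = &skel->ctx;
	opts.data = loader_data;
	opts.data_sz = sizeof(loader_data);
	opts.insns = loader_insns;
	opts.insns_sz = sizeof(loader_insns);

	return bpf_load_and_run(&opts);
}

After a successful call, the caller reads skel->main_prog.prog_fd and
skel->rodata.map_fd directly; real generated skeletons layer open(),
attach() and destroy() helpers on top of this.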