author | David S. Miller <davem@davemloft.net> | 2021-09-28 15:52:46 +0300
committer | David S. Miller <davem@davemloft.net> | 2021-09-28 15:52:46 +0300
commit | 4ccb9f03fee7b20484187ba7e25a7b9b79fe63d5 (patch)
tree | 3e6fd2e4a67b6e2be99331518f3c7c541367ad3d /include/linux
parent | c894b51e2a23c8c00acb3cea5045c5b70691e790 (diff)
parent | ced185824c89b60e65b5a2606954c098320cdfb8 (diff)
download | linux-4ccb9f03fee7b20484187ba7e25a7b9b79fe63d5.tar.xz
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf
Daniel Borkmann says:
====================
pull-request: bpf 2021-09-28
The following pull-request contains BPF updates for your *net* tree.
We've added 10 non-merge commits during the last 14 day(s) which contain
a total of 11 files changed, 139 insertions(+), 53 deletions(-).
The main changes are:
1) Fix MIPS JIT jump code emission for too large offsets, from Piotr Krysiuk.
2) Fix x86 JIT atomic/fetch emission when dst reg maps to rax, from Johan Almbladh.
3) Fix cgroup_sk_alloc corner case when called from interrupt, from Daniel Borkmann.
4) Fix segfault in libbpf's linker for objects without BTF, from Kumar Kartikeya Dwivedi.
5) Fix bpf_jit_charge_modmem for applications with CAP_BPF, from Lorenz Bauer.
6) Fix return value handling for struct_ops BPF programs, from Hou Tao.
7) Various fixes to BPF selftests, from Jiri Benc.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include/linux')
-rw-r--r-- | include/linux/bpf.h | 3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index f4c16f19f83e..020a7d5bf470 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -578,11 +578,12 @@ struct btf_func_model {
  * programs only. Should not be used with normal calls and indirect calls.
  */
 #define BPF_TRAMP_F_SKIP_FRAME BIT(2)
-
 /* Store IP address of the caller on the trampoline stack,
  * so it's available for trampoline's programs.
  */
 #define BPF_TRAMP_F_IP_ARG BIT(3)
+/* Return the return value of fentry prog. Only used by bpf_struct_ops. */
+#define BPF_TRAMP_F_RET_FENTRY_RET BIT(4)
 
 /* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
  * bytes on x86. Pick a number to fit into BPF_IMAGE_SIZE / 2
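For context, a minimal sketch of how a trampoline flags word built from these BIT() definitions is typically consumed. The function name sketch_emit_trampoline_tail and its body are hypothetical illustrations, not the kernel's actual JIT code; only the flag definitions and the bit-test pattern come from the header hunk above.

```c
#include <linux/bits.h>		/* BIT() */
#include <linux/types.h>	/* u32 */

/* Flag values as they appear in include/linux/bpf.h after this hunk. */
#define BPF_TRAMP_F_SKIP_FRAME		BIT(2)
#define BPF_TRAMP_F_IP_ARG		BIT(3)
#define BPF_TRAMP_F_RET_FENTRY_RET	BIT(4)

/*
 * Hypothetical sketch: an arch-specific trampoline builder receives a u32
 * flags word and tests individual bits to decide what code to emit.
 */
static void sketch_emit_trampoline_tail(u32 flags)
{
	if (flags & BPF_TRAMP_F_RET_FENTRY_RET) {
		/* bpf_struct_ops case: emit code that propagates the fentry
		 * program's return value back to the trampoline's caller
		 * instead of discarding it.
		 */
	}

	if (flags & BPF_TRAMP_F_IP_ARG) {
		/* The caller's IP was stored on the trampoline stack, so the
		 * attached programs can read it.
		 */
	}
}
```

The new BPF_TRAMP_F_RET_FENTRY_RET bit is the mechanism behind item 6 above: with it set, the trampoline generated for bpf_struct_ops reports the fentry program's return value rather than dropping it.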