| author | Daniel Borkmann <daniel@iogearbox.net> | 2017-12-17 22:34:37 +0300 |
|---|---|---|
| committer | Daniel Borkmann <daniel@iogearbox.net> | 2017-12-17 22:34:37 +0300 |
| commit | ef9fde06a259f5da660ada63214addf8cd86a7b9 | |
| tree | 8b0d109f49281f68709343f72c5c3c89549ab9af /include/linux/filter.h | |
| parent | 0bce7c9a607f1dbf8d83dd2865e1657096dbce59 | |
| parent | 28ab173e96b3971842414bf88eb02eca6ea3f018 | |
Merge branch 'bpf-to-bpf-function-calls'
Alexei Starovoitov says:
====================
First of all, a huge thank you to Daniel, John, Jakub, Edward and others who
reviewed multiple iterations of this patch set over the past several months,
and to Dave and others who gave critical feedback during netconf/netdev.
The patch set is solid and we thought through numerous corner cases,
but it's not the end. More follow-ups with code reorganization and features to come.
TLDR: Allow arbitrary function calls from one bpf function to another bpf function.
Since the beginning of bpf, all bpf programs were represented as a single function,
and program authors were forced to use always_inline for all functions
in their C code. That caused llvm to unnecessarily inflate the code size
and forced developers to move code into header files with little code reuse.
With a bit of additional complexity, the verifier is taught to recognize
arbitrary function calls from one bpf function to another, as long as
all of the functions are presented to the verifier as a single bpf program.
Extended program layout:
```
..
r1 = ..      // arg1
r2 = ..      // arg2
call pc+1    // function call pc-relative
exit
.. = r1      // access arg1
.. = r2      // access arg2
..
call pc+20   // second level of function call
...
```
This allows for better optimized code and finally makes it possible to introduce
core bpf libraries that can be reused across different projects,
since programs are no longer limited to a single elf file.
With function calls, bpf can be compiled into multiple .o files.
This patch set is the first step. It detects programs that contain
multiple functions and checks that calls between them are valid.
It splits the sequence of bpf instructions (one program) into a set
of bpf functions that call each other. Calls are allowed only to
known functions. Since all functions are presented to the verifier
at once, conceptually this is 'static linking'.
Future plans:
- introduce BPF_PROG_TYPE_LIBRARY and allow a set of bpf functions
to be loaded into the kernel that can be later linked to other
programs with concrete program types. Aka 'dynamic linking'.
- introduce a function pointer type and indirect calls to allow
bpf functions to call other dynamically loaded bpf functions while
the caller bpf function is already executing. Aka 'runtime linking'.
This will be a more generic and more flexible alternative
to bpf_tail_calls.
FAQ:
Q: Do the interpreter and JIT changes mean that a new instruction is introduced?
A: No. The call instruction technically stays the same. It can now call
both kernel helpers and other bpf functions.
The calling convention stays the same as well.
From the uapi point of view, the call insn got a new 'relocation', BPF_PSEUDO_CALL,
similar to the BPF_PSEUDO_MAP_FD 'relocation' of the bpf_ldimm64 insn.
Q: What had to change on the LLVM side?
A: A trivial LLVM patch to allow such calls was applied to the upcoming 6.0 release:
https://reviews.llvm.org/rL318614
along with a few bugfixes.
Make sure to build the latest llvm to have bpf_call support.
More details in the patches.
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Diffstat (limited to 'include/linux/filter.h')
| -rw-r--r-- | include/linux/filter.h | 13 |
|---|---|---|

1 file changed, 11 insertions(+), 2 deletions(-)
```diff
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 5feb441d3dd9..e872b4ebaa57 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -58,6 +58,9 @@ struct bpf_prog_aux;
 /* unused opcode to mark special call to bpf_tail_call() helper */
 #define BPF_TAIL_CALL	0xf0
 
+/* unused opcode to mark call to interpreter with arguments */
+#define BPF_CALL_ARGS	0xe0
+
 /* As per nm, we expose JITed images as text (code) section for
  * kallsyms. That way, tools like perf can find it to match
  * addresses.
@@ -455,10 +458,13 @@ struct bpf_binary_header {
 struct bpf_prog {
 	u16			pages;		/* Number of allocated pages */
 	u16			jited:1,	/* Is our filter JIT'ed? */
+				jit_requested:1,/* archs need to JIT the prog */
 				locked:1,	/* Program image locked? */
 				gpl_compatible:1, /* Is filter GPL compatible? */
 				cb_access:1,	/* Is control block accessed? */
 				dst_needed:1,	/* Do we need dst entry? */
+				blinded:1,	/* Was blinded */
+				is_func:1,	/* program is a bpf function */
 				kprobe_override:1; /* Do we override a kprobe? */
 	enum bpf_prog_type	type;		/* Type of BPF program */
 	u32			len;		/* Number of filter blocks */
@@ -710,6 +716,9 @@ bool sk_filter_charge(struct sock *sk, struct sk_filter *fp);
 void sk_filter_uncharge(struct sock *sk, struct sk_filter *fp);
 
 u64 __bpf_call_base(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
+#define __bpf_call_base_args \
+	((u64 (*)(u64, u64, u64, u64, u64, const struct bpf_insn *)) \
+	 __bpf_call_base)
 
 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog);
 void bpf_jit_compile(struct bpf_prog *prog);
@@ -798,7 +807,7 @@ static inline bool bpf_prog_ebpf_jited(const struct bpf_prog *fp)
 	return fp->jited && bpf_jit_is_ebpf();
 }
 
-static inline bool bpf_jit_blinding_enabled(void)
+static inline bool bpf_jit_blinding_enabled(struct bpf_prog *prog)
 {
 	/* These are the prerequisites, should someone ever have the
 	 * idea to call blinding outside of them, we make sure to
@@ -806,7 +815,7 @@ static inline bool bpf_jit_blinding_enabled(void)
 	 */
 	if (!bpf_jit_is_ebpf())
 		return false;
-	if (!bpf_jit_enable)
+	if (!prog->jit_requested)
 		return false;
 	if (!bpf_jit_harden)
 		return false;
```
