| author | Alexei Starovoitov <ast@kernel.org> | 2020-01-25 18:12:41 +0300 |
|---|---|---|
| committer | Alexei Starovoitov <ast@kernel.org> | 2020-01-25 18:12:46 +0300 |
| commit | e9f02a8027675e3957d463d7f8422d79fa90f2ba (patch) | |
| tree | 79f53b2c1eeaef4807494ba3de30fb2192ff3edc /include | |
| parent | 35b9211c0a2427e8f39e534f442f43804fc8d5ca (diff) | |
| parent | d633d57902a510debd4ec5b7a374a009c8c2d620 (diff) | |
| download | linux-e9f02a8027675e3957d463d7f8422d79fa90f2ba.tar.xz | |
Merge branch 'trampoline-fixes'
Jiri Olsa says:
====================
hi,
sending 2 fixes: they fix kernel support for loading
trampoline programs from bcc/bpftrace and allow unwinding
through the trampoline/dispatcher code.
The original RFC post is at [1].
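For readers unfamiliar with the unwinding problem: JITed trampoline and
dispatcher images live in pages that the kernel's normal text checks do not
know about, so a stack unwinder that hits a return address inside such a page
gives up. Below is a small compilable user-space sketch of roughly the check
this series makes possible; the three predicates are stubs standing in for the
kernel's checks, and text_address_for_unwind() is a made-up name used only for
illustration.

```c
#include <stdbool.h>
#include <stdio.h>

/* Stubs standing in for the kernel's text checks. */
static bool is_kernel_text(unsigned long addr)       { (void)addr; return false; }
static bool is_bpf_text_address(unsigned long addr)  { (void)addr; return false; }
static bool is_bpf_image_address(unsigned long addr) { (void)addr; return false; }

/*
 * Roughly what the unwinder needs answered: addresses inside JITed BPF
 * programs *and* inside trampoline/dispatcher image pages must count as
 * kernel text, otherwise the stack walk stops at the first such frame.
 */
static bool text_address_for_unwind(unsigned long addr)
{
	if (is_kernel_text(addr))
		return true;
	if (is_bpf_text_address(addr))		/* JITed BPF programs */
		return true;
	return is_bpf_image_address(addr);	/* trampoline/dispatcher images */
}

int main(void)
{
	printf("0xffffffffa0000000 is text: %d\n",
	       text_address_for_unwind(0xffffffffa0000000UL));
	return 0;
}
```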
perf bench numbers showing the speedup of running klockstat.py
on trampolines (kfuncs) instead of kprobes:
Without tracing:
$ perf bench sched messaging -l 50000
...
Total time: 18.571 [sec]
With current kprobe tracing:
$ perf bench sched messaging -l 50000
...
Total time: 183.395 [sec]
With kfunc tracing:
$ perf bench sched messaging -l 50000
...
Total time: 39.773 [sec]
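The gap between the two traced runs comes from how the probe reaches the BPF
program: a kprobe goes through the generic kprobe machinery and hands the
program a pt_regs, while a kfunc (fentry) program is called directly from a
JITed BPF trampoline with typed arguments. A minimal sketch of the two attach
types is below; it is not from this series, and the traced function
(__kmalloc), counter names and program names are illustrative only. It assumes
a BTF-enabled kernel, a generated vmlinux.h and the libbpf headers.

```c
// SPDX-License-Identifier: GPL-2.0
/* Sketch only: contrasts kprobe vs fentry (kfunc) attachment. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

__u64 kprobe_hits;	/* global counters live in the program's .bss map */
__u64 fentry_hits;

/* kprobe path: enters via the kprobe machinery, arguments via pt_regs */
SEC("kprobe/__kmalloc")
int BPF_KPROBE(kmalloc_kprobe)
{
	__sync_fetch_and_add(&kprobe_hits, 1);
	return 0;
}

/* kfunc path: called straight from a JITed BPF trampoline, typed arguments */
SEC("fentry/__kmalloc")
int BPF_PROG(kmalloc_fentry, size_t size, gfp_t flags)
{
	__sync_fetch_and_add(&fentry_hits, 1);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

klockstat.py attaches many such probes to hot locking functions, so the
per-probe entry cost is what dominates the perf bench numbers above.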
v4 changes:
- rebased on latest bpf-next/master
- removed the image tree mutex and used trampoline_mutex instead
- checked directly for the string pointer in patch 1 [Alexei]
- skipped the helpers patches, as they are no longer needed [Alexei]
v3 changes:
- added ack from John Fastabend for patch 1
- moved is_bpf_image_address out of the is_bpf_text_address call [David]
v2 changes:
- made the unwind work for the dispatcher as well
- added test for allowed trampolines count
- used raw tp pt_regs nest-arrays for trampoline helpers
thanks,
jirka
[1] https://lore.kernel.org/netdev/20191229143740.29143-1-jolsa@kernel.org/
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Diffstat (limited to 'include')
| -rw-r--r-- | include/linux/bpf.h | 12 |
1 file changed, 11 insertions, 1 deletion
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index a9687861fd7e..8e9ad3943cd9 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -525,7 +525,6 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key);
 int bpf_trampoline_link_prog(struct bpf_prog *prog);
 int bpf_trampoline_unlink_prog(struct bpf_prog *prog);
 void bpf_trampoline_put(struct bpf_trampoline *tr);
-void *bpf_jit_alloc_exec_page(void);
 #define BPF_DISPATCHER_INIT(name) {			\
 	.mutex = __MUTEX_INITIALIZER(name.mutex),	\
 	.func = &name##func,				\
@@ -557,6 +556,13 @@ void *bpf_jit_alloc_exec_page(void);
 #define BPF_DISPATCHER_PTR(name) (&name)
 void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
 				struct bpf_prog *to);
+struct bpf_image {
+	struct latch_tree_node tnode;
+	unsigned char data[];
+};
+#define BPF_IMAGE_SIZE (PAGE_SIZE - sizeof(struct bpf_image))
+bool is_bpf_image_address(unsigned long address);
+void *bpf_image_alloc(void);
 #else
 static inline struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 {
@@ -578,6 +584,10 @@ static inline void bpf_trampoline_put(struct bpf_trampoline *tr) {}
 static inline void bpf_dispatcher_change_prog(struct bpf_dispatcher *d,
 					      struct bpf_prog *from,
 					      struct bpf_prog *to) {}
+static inline bool is_bpf_image_address(unsigned long address)
+{
+	return false;
+}
 #endif
 
 struct bpf_func_info_aux {
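The new struct bpf_image places a latch-tree node at the start of each image
page so the page can be found again from any address inside it, and
BPF_IMAGE_SIZE is simply what is left of the page for the JITed code. A small
user-space sketch of that layout arithmetic follows; the kernel's struct
latch_tree_node is stood in by a dummy so it compiles outside the kernel, and
PAGE_SIZE is assumed to be 4096.

```c
#include <stddef.h>
#include <stdio.h>

#define PAGE_SIZE 4096

struct latch_tree_node { void *pad[6]; };	/* stand-in, not the kernel type */

struct bpf_image {
	struct latch_tree_node tnode;	/* node in the image lookup tree */
	unsigned char data[];		/* JITed trampoline/dispatcher code */
};

#define BPF_IMAGE_SIZE (PAGE_SIZE - sizeof(struct bpf_image))

int main(void)
{
	/* One page = lookup header + BPF_IMAGE_SIZE bytes of code. */
	printf("header: %zu bytes, code area: %zu bytes\n",
	       sizeof(struct bpf_image), (size_t)BPF_IMAGE_SIZE);
	return 0;
}
```

As the declarations above suggest, bpf_image_alloc() hands callers the data[]
area of such a page and records tnode in a lookup tree, while
is_bpf_image_address() reports whether an address falls inside one of those
pages; that lookup is what lets the unwinder walk through trampoline and
dispatcher frames.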
