author | Alexei Starovoitov <ast@kernel.org> | 2020-02-25 03:12:21 +0300 |
---|---|---|
committer | Alexei Starovoitov <ast@kernel.org> | 2020-02-25 03:20:10 +0300 |
commit | 80a836c2506b2b249a9934fbe373eb7a4a98db86 (patch) | |
tree | 38d6b44dc8654900cfb0e735cd8d027ce00d4e53 /kernel/bpf/stackmap.c | |
parent | 8eece07c011f88da0ccf4127fca9a4e4faaf58ae (diff) | |
parent | 099bfaa731ec347d3f16a463ae53b88a1700c0af (diff) | |
download | linux-80a836c2506b2b249a9934fbe373eb7a4a98db86.tar.xz | |
Merge branch 'BPF_and_RT'
Thomas Gleixner says:
====================
This is the third version of the BPF/RT patch set which makes both coexist
nicely. The long explanation can be found in the cover letter of the V1
submission:
https://lore.kernel.org/r/20200214133917.304937432@linutronix.de
V2 is here:
https://lore.kernel.org/r/20200220204517.863202864@linutronix.de
The following changes vs. V2 have been made:
- Rebased to bpf-next, adjusted to the lock changes in the hashmap code.
- Split the preallocation enforcement patch for instrumentation type BPF
programs into two pieces:
1) Emit a one-time warning on !RT kernels when any instrumentation type
BPF program uses run-time allocation, and also emit a corresponding
warning in the verifier log, but allow the program to run for the
sake of backward compatibility. After a grace period this should be
enforced.
2) On RT, reject such programs because the memory allocator cannot
be called from truly atomic contexts (an illustrative sketch of this
check follows the sign-off below).
- Fixed the fallout from V2 as reported by Alexei and 0-day
- Removed the redundant preempt_disable() from trace_call_bpf()
- Removed the unused export of trace_call_bpf()
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
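To make the two-step policy above concrete, here is a rough C sketch of the kind of check it describes. This is not the code merged by this series: is_instrumentation_prog(), map_uses_runtime_alloc() and the message strings are made-up placeholders, and verbose() stands in for the verifier's internal log helper.

```c
/*
 * Illustrative only -- not the verifier code merged here. Placeholder
 * names: is_instrumentation_prog(), map_uses_runtime_alloc(); verbose()
 * stands in for the verifier's internal log helper.
 */
static int check_prealloc_policy(struct bpf_verifier_env *env,
				 struct bpf_map *map)
{
	if (!is_instrumentation_prog(env->prog) || !map_uses_runtime_alloc(map))
		return 0;

	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
		/* Step 2): on RT the allocator cannot run in truly atomic context. */
		verbose(env, "instrumentation programs can only use preallocated hash maps\n");
		return -EINVAL;
	}

	/* Step 1): on !RT warn once system-wide, note it in the log, but allow it. */
	WARN_ONCE(1, "instrumentation BPF program uses run-time allocation\n");
	verbose(env, "run-time allocated hash maps are unsafe; switch to preallocated maps\n");
	return 0;
}
```

Because the condition is an IS_ENABLED() test rather than an #ifdef, both the warn-only path and the reject path are always compiled and type-checked, whichever config is built.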
Diffstat (limited to 'kernel/bpf/stackmap.c')
-rw-r--r-- | kernel/bpf/stackmap.c | 18 |
1 file changed, 15 insertions, 3 deletions
```diff
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 3f958b90d914..db76339fe358 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -40,6 +40,9 @@ static void do_up_read(struct irq_work *entry)
 {
 	struct stack_map_irq_work *work;
 
+	if (WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_RT)))
+		return;
+
 	work = container_of(entry, struct stack_map_irq_work, irq_work);
 	up_read_non_owner(work->sem);
 	work->sem = NULL;
@@ -288,10 +291,19 @@ static void stack_map_get_build_id_offset(struct bpf_stack_build_id *id_offs,
 	struct stack_map_irq_work *work = NULL;
 
 	if (irqs_disabled()) {
-		work = this_cpu_ptr(&up_read_work);
-		if (atomic_read(&work->irq_work.flags) & IRQ_WORK_BUSY)
-			/* cannot queue more up_read, fallback */
+		if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {
+			work = this_cpu_ptr(&up_read_work);
+			if (atomic_read(&work->irq_work.flags) & IRQ_WORK_BUSY) {
+				/* cannot queue more up_read, fallback */
+				irq_work_busy = true;
+			}
+		} else {
+			/*
+			 * PREEMPT_RT does not allow to trylock mmap sem in
+			 * interrupt disabled context. Force the fallback code.
+			 */
 			irq_work_busy = true;
+		}
 	}
 
 	/*
```
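As the new comment in the hunk states, PREEMPT_RT does not allow trylocking mmap sem from an interrupts-disabled context, so the change forces the pre-existing fallback path there and adds a WARN_ON_ONCE in do_up_read(), which should never run on RT. The reason the patch uses IS_ENABLED() rather than #ifdef is that the config test evaluates to a compile-time constant: the compiler drops the dead branch while still parsing and type-checking both sides. Below is a small, self-contained userspace sketch of that pattern; the PREEMPT_RT define is a stand-in for CONFIG_PREEMPT_RT, and can_defer_to_irq_work() is a made-up helper mirroring the fallback decision, not a kernel function.

```c
/*
 * Standalone sketch (userspace, not kernel code) of the IS_ENABLED()
 * pattern used in the hunk above: the config test is a compile-time
 * constant, so both branches stay visible to the compiler while the
 * dead one is eliminated. CONFIG_PREEMPT_RT is stood in for by a
 * hypothetical -DPREEMPT_RT build flag.
 */
#include <stdbool.h>
#include <stdio.h>

#ifdef PREEMPT_RT
#define RT_ENABLED 1
#else
#define RT_ENABLED 0
#endif

static bool can_defer_to_irq_work(bool irq_work_queue_busy)
{
	if (RT_ENABLED) {
		/* RT: never queue the up_read to irq_work, force the fallback. */
		return false;
	}
	/* !RT: the irq_work path is usable unless it is already busy. */
	return !irq_work_queue_busy;
}

int main(void)
{
	printf("defer to irq_work: %s\n",
	       can_defer_to_irq_work(false) ? "yes" : "no");
	return 0;
}
```

Build it plain (gcc sketch.c) for the !RT behaviour, or with gcc -DPREEMPT_RT sketch.c to see the forced fallback.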