| author | Ilya Leoshkevich <iii@linux.ibm.com> | 2025-08-21 14:25:55 +0300 |
|---|---|---|
| committer | Alexei Starovoitov <ast@kernel.org> | 2025-08-27 02:51:52 +0300 |
| commit | b8efa810c1db201dece99e4113229e5ecc00db5c | |
| tree | 5ce5dda3a2a5c069e01f8d76c2a91a889a8996ac /arch/s390/net/bpf_jit_comp.c | |
| parent | d0f27ff27c048a3bdc49877255a7e7f8b49f5603 | |
| download | linux-b8efa810c1db201dece99e4113229e5ecc00db5c.tar.xz | |
s390/bpf: Add s390 JIT support for timed may_goto
The verifier provides an architecture-independent implementation of the
may_goto instruction, which is what s390x currently uses. This generic
implementation has a downside: there is no way to prevent progs that use
it from running for a very long time.
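
As a concrete illustration (not part of this patch): may_goto is normally not
written by hand but enters a prog via the can_loop macro from the BPF
selftests' bpf_experimental.h, which expands to the may_goto
pseudo-instruction. With the generic implementation, the verifier bounds only
the iteration count, not the elapsed time:

```c
/* Illustrative sketch only -- not from this patch. Assumes clang's BPF
 * target, libbpf, and the bpf_experimental.h header shipped with the
 * kernel's BPF selftests, whose can_loop macro expands to may_goto.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"

SEC("tc")
int looping_prog(struct __sk_buff *skb)
{
	long sum = 0;

	/* The verifier cannot bound this loop statically; can_loop makes
	 * it verifiable by capping the iteration count, but the generic
	 * may_goto puts no cap on wall-clock runtime. */
	for (int i = 0; i < skb->len && can_loop; i++)
		sum += i;

	return sum & 0xff;
}

char LICENSE[] SEC("license") = "GPL";
```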
The solution to this problem is an alternative timed implementation,
which requires architecture-specific bits. Its availability is signaled
to the verifier by bpf_jit_supports_timed_may_goto() returning true.
The verifier then emits a call to arch_bpf_timed_may_goto() using a
non-standard calling convention. This function must act as a trampoline
for bpf_check_timed_may_goto().
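
For reference, bpf_check_timed_may_goto() is generic code in kernel/bpf/core.c
and is roughly shaped like the sketch below. This is a paraphrase of the
series that introduced timed may_goto, not verbatim kernel source; the
BPF_MAX_TIMED_LOOPS refill and the quarter-second slice come from that series,
not from this patch:

```c
/* Paraphrased sketch of the generic helper; see kernel/bpf/core.c for
 * the authoritative version. The trampoline's only job is to get here
 * and back without clobbering the JITed code's registers. */
u64 bpf_check_timed_may_goto(struct bpf_timed_may_goto *p)
{
	u64 time = ktime_get_mono_fast_ns();

	/* First call for this stack frame: stamp it and refill the count. */
	if (!p->timestamp) {
		p->timestamp = time;
		return BPF_MAX_TIMED_LOOPS;
	}
	/* Time slice exhausted: return 0 so the prog takes the goto. */
	if (time - p->timestamp >= (NSEC_PER_SEC / 4))
		return 0;
	/* Still inside the slice: refill the count. */
	return BPF_MAX_TIMED_LOOPS;
}
```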
Implement bpf_jit_supports_timed_may_goto(), account for the special
calling convention in the BPF_CALL implementation, and implement
arch_bpf_timed_may_goto().
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Link: https://lore.kernel.org/r/20250821113339.292434-2-iii@linux.ibm.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Diffstat (limited to 'arch/s390/net/bpf_jit_comp.c')
| mode | path | lines |
|---|---|---|
| -rw-r--r-- | arch/s390/net/bpf_jit_comp.c | 25 |

1 file changed, 21 insertions, 4 deletions
```diff
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index fd45f03a213c..8b57d8532f36 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -1806,10 +1806,22 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 			}
 		}
 
-		/* brasl %r14,func */
-		EMIT6_PCREL_RILB_PTR(0xc0050000, REG_14, (void *)func);
-		/* lgr %b0,%r2: load return value into %b0 */
-		EMIT4(0xb9040000, BPF_REG_0, REG_2);
+		if ((void *)func == arch_bpf_timed_may_goto) {
+			/*
+			 * arch_bpf_timed_may_goto() has a special ABI: the
+			 * parameters are in BPF_REG_AX and BPF_REG_10; the
+			 * return value is in BPF_REG_AX; and all GPRs except
+			 * REG_W0, REG_W1, and BPF_REG_AX are callee-saved.
+			 */
+
+			/* brasl %r0,func */
+			EMIT6_PCREL_RILB_PTR(0xc0050000, REG_0, (void *)func);
+		} else {
+			/* brasl %r14,func */
+			EMIT6_PCREL_RILB_PTR(0xc0050000, REG_14, (void *)func);
+			/* lgr %b0,%r2: load return value into %b0 */
+			EMIT4(0xb9040000, BPF_REG_0, REG_2);
+		}
 
 		/*
 		 * Copy the potentially updated tail call counter back.
@@ -2993,3 +3005,8 @@ void arch_bpf_stack_walk(bool (*consume_fn)(void *, u64, u64, u64),
 		prev_addr = addr;
 	}
 }
+
+bool bpf_jit_supports_timed_may_goto(void)
+{
+	return true;
+}
```
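
Two details of the new JIT path are worth spelling out. First, the call is
emitted with %r0 rather than %r14 as the link register: REG_W0 is one of the
three registers the special ABI lets the callee clobber, so the usual link
register stays untouched and the copy-back of the return value into BPF_REG_0
can be skipped. Second, the trampoline itself (added elsewhere in this commit,
outside bpf_jit_comp.c) has to be hand-written assembly to honor the
callee-saved contract. A plain-C model of its behavior, with the parameter
convention inferred from the equivalent x86 trampoline rather than taken from
this diff, would look like:

```c
/* Conceptual C model of arch_bpf_timed_may_goto() -- NOT the real s390
 * implementation, which must be assembly so that every GPR other than
 * REG_W0, REG_W1, and BPF_REG_AX stays untouched. The parameter
 * convention (offset in BPF_REG_AX, frame pointer in BPF_REG_10) is an
 * assumption carried over from the x86 trampoline. */
u64 arch_bpf_timed_may_goto_model(long bpf_reg_ax, char *bpf_reg_10)
{
	/* BPF_REG_AX holds the offset of the count/timestamp bookkeeping
	 * slot relative to the BPF frame pointer in BPF_REG_10. */
	struct bpf_timed_may_goto *p =
		(struct bpf_timed_may_goto *)(bpf_reg_10 + bpf_reg_ax);

	/* Forward to the generic check; the refreshed loop count travels
	 * back to the JITed code in BPF_REG_AX. */
	return bpf_check_timed_may_goto(p);
}
```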
