|
Commit 316580b69d0a ("u64_stats: provide u64_stats_t type")
fixed possible load/store tearing on 64bit arches.
For instance, the following C code
stats->nsecs += sched_clock() - start;
could legitimately be implemented by a compiler like this,
confusing concurrent readers:
stats->nsecs += sched_clock();
// arbitrary delay
stats->nsecs -= start;
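As a hedged illustration (not the actual patch), the pattern the series moves to
looks roughly like the following, with made-up struct and helper names;
u64_stats_add()/u64_stats_read() use tear-free accesses so the compiler cannot
split the update as shown above:

#include <linux/u64_stats_sync.h>
#include <linux/sched/clock.h>

struct prog_stats {
	u64_stats_t nsecs;
	u64_stats_t cnt;
	struct u64_stats_sync syncp;
};

static void account_run(struct prog_stats *stats, u64 start)
{
	u64_stats_update_begin(&stats->syncp);
	u64_stats_inc(&stats->cnt);
	u64_stats_add(&stats->nsecs, sched_clock() - start);
	u64_stats_update_end(&stats->syncp);
}

static u64 read_nsecs(struct prog_stats *stats)
{
	unsigned int seq;
	u64 nsecs;

	do {	/* the retry loop matters on 32bit; it compiles away on 64bit */
		seq = u64_stats_fetch_begin(&stats->syncp);
		nsecs = u64_stats_read(&stats->nsecs);
	} while (u64_stats_fetch_retry(&stats->syncp, seq));
	return nsecs;
}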
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211026214133.3114279-4-eric.dumazet@gmail.com
|
|
It seems update_prog_stats() suffers from the same issue fixed
in the prior patch: as it can run while interrupts are enabled, it
could be re-entered and the u64_stats syncp could be mangled.
Fixes: fec56f5890d9 ("bpf: Introduce BPF trampoline")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211026214133.3114279-3-eric.dumazet@gmail.com
|
|
__bpf_prog_run() can run from non-IRQ contexts, meaning
it could be re-entered if interrupted.
This calls for the IRQ-safe variant of u64_stats_update_{begin|end},
or we risk a deadlock.
Fortunately, this patch is a nop on 64bit arches.
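A hedged sketch of the IRQ-safe variant being called for, reusing the illustrative
prog_stats struct from the u64_stats_t sketch earlier (not the actual patch):

static void account_run_irqsafe(struct prog_stats *stats, u64 start)
{
	unsigned long flags;

	/* irqsave/irqrestore protect the syncp seqcount against re-entry from
	 * an interrupt; on 64bit arches syncp is empty, so this stays a nop */
	flags = u64_stats_update_begin_irqsave(&stats->syncp);
	u64_stats_inc(&stats->cnt);
	u64_stats_add(&stats->nsecs, sched_clock() - start);
	u64_stats_update_end_irqrestore(&stats->syncp, flags);
}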
syzbot report:
WARNING: inconsistent lock state
5.12.0-rc3-syzkaller #0 Not tainted
--------------------------------
inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-W} usage.
udevd/4013 [HC0[0]:SC0[0]:HE1:SE1] takes:
ff7c9dec (&(&pstats->syncp)->seq){+.?.}-{0:0}, at: sk_filter include/linux/filter.h:867 [inline]
ff7c9dec (&(&pstats->syncp)->seq){+.?.}-{0:0}, at: do_one_broadcast net/netlink/af_netlink.c:1468 [inline]
ff7c9dec (&(&pstats->syncp)->seq){+.?.}-{0:0}, at: netlink_broadcast_filtered+0x27c/0x4fc net/netlink/af_netlink.c:1520
{IN-SOFTIRQ-W} state was registered at:
lock_acquire.part.0+0xf0/0x41c kernel/locking/lockdep.c:5510
lock_acquire+0x6c/0x74 kernel/locking/lockdep.c:5483
do_write_seqcount_begin_nested include/linux/seqlock.h:520 [inline]
do_write_seqcount_begin include/linux/seqlock.h:545 [inline]
u64_stats_update_begin include/linux/u64_stats_sync.h:129 [inline]
bpf_prog_run_pin_on_cpu include/linux/filter.h:624 [inline]
bpf_prog_run_clear_cb+0x1bc/0x270 include/linux/filter.h:755
run_filter+0xa0/0x17c net/packet/af_packet.c:2031
packet_rcv+0xc0/0x3e0 net/packet/af_packet.c:2104
dev_queue_xmit_nit+0x2bc/0x39c net/core/dev.c:2387
xmit_one net/core/dev.c:3588 [inline]
dev_hard_start_xmit+0x94/0x518 net/core/dev.c:3609
sch_direct_xmit+0x11c/0x1f0 net/sched/sch_generic.c:313
qdisc_restart net/sched/sch_generic.c:376 [inline]
__qdisc_run+0x194/0x7f8 net/sched/sch_generic.c:384
qdisc_run include/net/pkt_sched.h:136 [inline]
qdisc_run include/net/pkt_sched.h:128 [inline]
__dev_xmit_skb net/core/dev.c:3795 [inline]
__dev_queue_xmit+0x65c/0xf84 net/core/dev.c:4150
dev_queue_xmit+0x14/0x18 net/core/dev.c:4215
neigh_resolve_output net/core/neighbour.c:1491 [inline]
neigh_resolve_output+0x170/0x228 net/core/neighbour.c:1471
neigh_output include/net/neighbour.h:510 [inline]
ip6_finish_output2+0x2e4/0x9fc net/ipv6/ip6_output.c:117
__ip6_finish_output net/ipv6/ip6_output.c:182 [inline]
__ip6_finish_output+0x164/0x3f8 net/ipv6/ip6_output.c:161
ip6_finish_output+0x2c/0xb0 net/ipv6/ip6_output.c:192
NF_HOOK_COND include/linux/netfilter.h:290 [inline]
ip6_output+0x74/0x294 net/ipv6/ip6_output.c:215
dst_output include/net/dst.h:448 [inline]
NF_HOOK include/linux/netfilter.h:301 [inline]
NF_HOOK include/linux/netfilter.h:295 [inline]
mld_sendpack+0x2a8/0x7e4 net/ipv6/mcast.c:1679
mld_send_cr net/ipv6/mcast.c:1975 [inline]
mld_ifc_timer_expire+0x1e8/0x494 net/ipv6/mcast.c:2474
call_timer_fn+0xd0/0x570 kernel/time/timer.c:1431
expire_timers kernel/time/timer.c:1476 [inline]
__run_timers kernel/time/timer.c:1745 [inline]
run_timer_softirq+0x2e4/0x384 kernel/time/timer.c:1758
__do_softirq+0x204/0x7ac kernel/softirq.c:345
do_softirq_own_stack include/asm-generic/softirq_stack.h:10 [inline]
invoke_softirq kernel/softirq.c:228 [inline]
__irq_exit_rcu+0x1d8/0x200 kernel/softirq.c:422
irq_exit+0x10/0x3c kernel/softirq.c:446
__handle_domain_irq+0xb4/0x120 kernel/irq/irqdesc.c:692
handle_domain_irq include/linux/irqdesc.h:176 [inline]
gic_handle_irq+0x84/0xac drivers/irqchip/irq-gic.c:370
__irq_svc+0x5c/0x94 arch/arm/kernel/entry-armv.S:205
debug_smp_processor_id+0x0/0x24 lib/smp_processor_id.c:53
rcu_read_lock_held_common kernel/rcu/update.c:108 [inline]
rcu_read_lock_sched_held+0x24/0x7c kernel/rcu/update.c:123
trace_lock_acquire+0x24c/0x278 include/trace/events/lock.h:13
lock_acquire+0x3c/0x74 kernel/locking/lockdep.c:5481
rcu_lock_acquire include/linux/rcupdate.h:267 [inline]
rcu_read_lock include/linux/rcupdate.h:656 [inline]
avc_has_perm_noaudit+0x6c/0x260 security/selinux/avc.c:1150
selinux_inode_permission+0x140/0x220 security/selinux/hooks.c:3141
security_inode_permission+0x44/0x60 security/security.c:1268
inode_permission.part.0+0x5c/0x13c fs/namei.c:521
inode_permission fs/namei.c:494 [inline]
may_lookup fs/namei.c:1652 [inline]
link_path_walk.part.0+0xd4/0x38c fs/namei.c:2208
link_path_walk fs/namei.c:2189 [inline]
path_lookupat+0x3c/0x1b8 fs/namei.c:2419
filename_lookup+0xa8/0x1a4 fs/namei.c:2453
user_path_at_empty+0x74/0x90 fs/namei.c:2733
do_readlinkat+0x5c/0x12c fs/stat.c:417
__do_sys_readlink fs/stat.c:450 [inline]
sys_readlink+0x24/0x28 fs/stat.c:447
ret_fast_syscall+0x0/0x2c arch/arm/mm/proc-v7.S:64
0x7eaa4974
irq event stamp: 298277
hardirqs last enabled at (298277): [<802000d0>] no_work_pending+0x4/0x34
hardirqs last disabled at (298276): [<8020c9b8>] do_work_pending+0x9c/0x648 arch/arm/kernel/signal.c:676
softirqs last enabled at (298216): [<8020167c>] __do_softirq+0x584/0x7ac kernel/softirq.c:372
softirqs last disabled at (298201): [<8024dff4>] do_softirq_own_stack include/asm-generic/softirq_stack.h:10 [inline]
softirqs last disabled at (298201): [<8024dff4>] invoke_softirq kernel/softirq.c:228 [inline]
softirqs last disabled at (298201): [<8024dff4>] __irq_exit_rcu+0x1d8/0x200 kernel/softirq.c:422
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&(&pstats->syncp)->seq);
<Interrupt>
lock(&(&pstats->syncp)->seq);
*** DEADLOCK ***
1 lock held by udevd/4013:
#0: 82b09c5c (rcu_read_lock){....}-{1:2}, at: sk_filter_trim_cap+0x54/0x434 net/core/filter.c:139
stack backtrace:
CPU: 1 PID: 4013 Comm: udevd Not tainted 5.12.0-rc3-syzkaller #0
Hardware name: ARM-Versatile Express
Backtrace:
[<81802550>] (dump_backtrace) from [<818027c4>] (show_stack+0x18/0x1c arch/arm/kernel/traps.c:252)
r7:00000080 r6:600d0093 r5:00000000 r4:82b58344
[<818027ac>] (show_stack) from [<81809e98>] (__dump_stack lib/dump_stack.c:79 [inline])
[<818027ac>] (show_stack) from [<81809e98>] (dump_stack+0xb8/0xe8 lib/dump_stack.c:120)
[<81809de0>] (dump_stack) from [<81804a00>] (print_usage_bug.part.0+0x228/0x230 kernel/locking/lockdep.c:3806)
r7:86bcb768 r6:81a0326c r5:830f96a8 r4:86bcb0c0
[<818047d8>] (print_usage_bug.part.0) from [<802bb1b8>] (print_usage_bug kernel/locking/lockdep.c:3776 [inline])
[<818047d8>] (print_usage_bug.part.0) from [<802bb1b8>] (valid_state kernel/locking/lockdep.c:3818 [inline])
[<818047d8>] (print_usage_bug.part.0) from [<802bb1b8>] (mark_lock_irq kernel/locking/lockdep.c:4021 [inline])
[<818047d8>] (print_usage_bug.part.0) from [<802bb1b8>] (mark_lock.part.0+0xc34/0x136c kernel/locking/lockdep.c:4478)
r10:83278fe8 r9:82c6d748 r8:00000000 r7:82c6d2d4 r6:00000004 r5:86bcb768
r4:00000006
[<802ba584>] (mark_lock.part.0) from [<802bc644>] (mark_lock kernel/locking/lockdep.c:4442 [inline])
[<802ba584>] (mark_lock.part.0) from [<802bc644>] (mark_usage kernel/locking/lockdep.c:4391 [inline])
[<802ba584>] (mark_lock.part.0) from [<802bc644>] (__lock_acquire+0x9bc/0x3318 kernel/locking/lockdep.c:4854)
r10:86bcb768 r9:86bcb0c0 r8:00000001 r7:00040000 r6:0000075a r5:830f96a8
r4:00000000
[<802bbc88>] (__lock_acquire) from [<802bfb90>] (lock_acquire.part.0+0xf0/0x41c kernel/locking/lockdep.c:5510)
r10:00000000 r9:600d0013 r8:00000000 r7:00000000 r6:828a2680 r5:828a2680
r4:861e5bc8
[<802bfaa0>] (lock_acquire.part.0) from [<802bff28>] (lock_acquire+0x6c/0x74 kernel/locking/lockdep.c:5483)
r10:8146137c r9:00000000 r8:00000001 r7:00000000 r6:00000000 r5:00000000
r4:ff7c9dec
[<802bfebc>] (lock_acquire) from [<81381eb4>] (do_write_seqcount_begin_nested include/linux/seqlock.h:520 [inline])
[<802bfebc>] (lock_acquire) from [<81381eb4>] (do_write_seqcount_begin include/linux/seqlock.h:545 [inline])
[<802bfebc>] (lock_acquire) from [<81381eb4>] (u64_stats_update_begin include/linux/u64_stats_sync.h:129 [inline])
[<802bfebc>] (lock_acquire) from [<81381eb4>] (__bpf_prog_run_save_cb include/linux/filter.h:727 [inline])
[<802bfebc>] (lock_acquire) from [<81381eb4>] (bpf_prog_run_save_cb include/linux/filter.h:741 [inline])
[<802bfebc>] (lock_acquire) from [<81381eb4>] (sk_filter_trim_cap+0x26c/0x434 net/core/filter.c:149)
r10:a4095dd0 r9:ff7c9dd0 r8:e44be000 r7:8146137c r6:00000001 r5:8611ba80
r4:00000000
[<81381c48>] (sk_filter_trim_cap) from [<8146137c>] (sk_filter include/linux/filter.h:867 [inline])
[<81381c48>] (sk_filter_trim_cap) from [<8146137c>] (do_one_broadcast net/netlink/af_netlink.c:1468 [inline])
[<81381c48>] (sk_filter_trim_cap) from [<8146137c>] (netlink_broadcast_filtered+0x27c/0x4fc net/netlink/af_netlink.c:1520)
r10:00000001 r9:833d6b1c r8:00000000 r7:8572f864 r6:8611ba80 r5:8698d800
r4:8572f800
[<81461100>] (netlink_broadcast_filtered) from [<81463e60>] (netlink_broadcast net/netlink/af_netlink.c:1544 [inline])
[<81461100>] (netlink_broadcast_filtered) from [<81463e60>] (netlink_sendmsg+0x3d0/0x478 net/netlink/af_netlink.c:1925)
r10:00000000 r9:00000002 r8:8698d800 r7:000000b7 r6:8611b900 r5:861e5f50
r4:86aa3000
[<81463a90>] (netlink_sendmsg) from [<81321f54>] (sock_sendmsg_nosec net/socket.c:654 [inline])
[<81463a90>] (netlink_sendmsg) from [<81321f54>] (sock_sendmsg+0x3c/0x4c net/socket.c:674)
r10:00000000 r9:861e5dd4 r8:00000000 r7:86570000 r6:00000000 r5:86570000
r4:861e5f50
[<81321f18>] (sock_sendmsg) from [<813234d0>] (____sys_sendmsg+0x230/0x29c net/socket.c:2350)
r5:00000040 r4:861e5f50
[<813232a0>] (____sys_sendmsg) from [<8132549c>] (___sys_sendmsg+0xac/0xe4 net/socket.c:2404)
r10:00000128 r9:861e4000 r8:00000000 r7:00000000 r6:86570000 r5:861e5f50
r4:00000000
[<813253f0>] (___sys_sendmsg) from [<81325684>] (__sys_sendmsg net/socket.c:2433 [inline])
[<813253f0>] (___sys_sendmsg) from [<81325684>] (__do_sys_sendmsg net/socket.c:2442 [inline])
[<813253f0>] (___sys_sendmsg) from [<81325684>] (sys_sendmsg+0x58/0xa0 net/socket.c:2440)
r8:80200224 r7:00000128 r6:00000000 r5:7eaa541c r4:86570000
[<8132562c>] (sys_sendmsg) from [<80200060>] (ret_fast_syscall+0x0/0x2c arch/arm/mm/proc-v7.S:64)
Exception stack(0x861e5fa8 to 0x861e5ff0)
5fa0: 00000000 00000000 0000000c 7eaa541c 00000000 00000000
5fc0: 00000000 00000000 76fbf840 00000128 00000000 0000008f 7eaa541c 000563f8
5fe0: 00056110 7eaa53e0 00036cec 76c9bf44
r6:76fbf840 r5:00000000 r4:00000000
Fixes: 492ecee892c2 ("bpf: enable program stats")
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211026214133.3114279-2-eric.dumazet@gmail.com
|
|
Add a flag to `enum libbpf_strict_mode' to disable the global
`bpf_objects_list', preventing race conditions when concurrent threads
call bpf_object__open() or bpf_object__close().
bpf_object__next() will return NULL if this option is set.
Callers may achieve the same workflow by tracking bpf_objects in
application code.
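As a hedged usage sketch (the flag name LIBBPF_STRICT_NO_OBJECT_LIST and the
object file name are assumptions based on the description above):

#include <stdio.h>
#include <bpf/libbpf.h>
#include <bpf/libbpf_legacy.h>

int main(void)
{
	struct bpf_object *obj;

	/* opt out of the global bpf_objects_list before any open/close calls */
	libbpf_set_strict_mode(LIBBPF_STRICT_NO_OBJECT_LIST);

	obj = bpf_object__open("prog.bpf.o");	/* illustrative object file */
	if (libbpf_get_error(obj))
		return 1;

	/* with the flag set, the global iterator yields nothing */
	if (!bpf_object__next(NULL))
		printf("no globally tracked objects; track them in the application\n");

	bpf_object__close(obj);
	return 0;
}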
[0] Closes: https://github.com/libbpf/libbpf/issues/293
Signed-off-by: Joe Burton <jevburton@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211026223528.413950-1-jevburton.kernel@gmail.com
|
|
Functions in modules can appear in /proc/kallsyms in random order.
ffffffffa02608a0 t bpf_testmod_loop_test
ffffffffa02600c0 t __traceiter_bpf_testmod_test_writable_bare
ffffffffa0263b60 d __tracepoint_bpf_testmod_test_write_bare
ffffffffa02608c0 T bpf_testmod_test_read
ffffffffa0260d08 t __SCT__tp_func_bpf_testmod_test_writable_bare
ffffffffa0263300 d __SCK__tp_func_bpf_testmod_test_read
ffffffffa0260680 T bpf_testmod_test_write
ffffffffa0260860 t bpf_testmod_test_mod_kfunc
Therefore, we cannot reliably use kallsyms_find_next() to find the end of
a function. Replace it with a simple guess (start + 128). This is good
enough for this test.
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022234814.318457-1-songliubraving@fb.com
|
|
Skipping the second half of the test is not enough to silence the warning
in dmesg. Skip the whole test until we can either properly silence the
warning in the kernel or fix LBR snapshot for VMs.
Fixes: 025bd7c753aa ("selftests/bpf: Add test for bpf_get_branch_snapshot")
Fixes: aa67fdb46436 ("selftests/bpf: Skip the second half of get_branch_snapshot in vm")
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211026000733.477714-1-songliubraving@fb.com
|
|
Ilya Leoshkevich says:
====================
v2: https://lore.kernel.org/bpf/20211025131214.731972-1-iii@linux.ibm.com/
v2 -> v3: Split the fix from the cleanup (Daniel).
v1: https://lore.kernel.org/bpf/20211021234653.643302-1-iii@linux.ibm.com/
v1 -> v2: Drop bpf_core_calc_field_relo() restructuring, split the
__BYTE_ORDER__ change (Andrii).
Hi,
this series fixes test failures in core_reloc on s390.
Patch 1 fixes an endianness bug with __BYTE_ORDER vs __BYTE_ORDER__.
Patches 2-5 make the rest of the code consistent in that respect.
Patch 6 fixes an endianness issue in test_core_reloc_mods.
Best regards,
Ilya
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
|
|
This is the same as commit d164dd9a5c08 ("selftests/bpf: Fix
test_core_autosize on big-endian machines"), but for
test_core_reloc_mods.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211026010831.748682-7-iii@linux.ibm.com
|
|
Use the compiler-defined __BYTE_ORDER__ instead of the libc-defined
__BYTE_ORDER for consistency.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211026010831.748682-6-iii@linux.ibm.com
|
|
Use the compiler-defined __BYTE_ORDER__ instead of the libc-defined
__BYTE_ORDER for consistency.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211026010831.748682-5-iii@linux.ibm.com
|
|
Use the compiler-defined __BYTE_ORDER__ instead of the libc-defined
__BYTE_ORDER for consistency.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211026010831.748682-4-iii@linux.ibm.com
|
|
Use the compiler-defined __BYTE_ORDER__ instead of the libc-defined
__BYTE_ORDER for consistency.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211026010831.748682-3-iii@linux.ibm.com
|
|
__BYTE_ORDER is supposed to be defined by the libc, and __BYTE_ORDER__
by the compiler. bpf_core_read.h checks __BYTE_ORDER == __LITTLE_ENDIAN,
which is true if neither is defined, leading to incorrect behavior on
big-endian hosts when libc headers are not included, which is often the
case.
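A small standalone illustration of the pitfall (not the libbpf code): in a
preprocessor #if, undefined identifiers evaluate to 0, so when no libc header is
included both sides of the old comparison are 0 and the little-endian branch is
taken even on big-endian hosts:

/* build with no libc headers included, as is typical for BPF objects */
#if __BYTE_ORDER == __LITTLE_ENDIAN		/* 0 == 0 if neither macro is defined */
const char old_check[] = "old check: little-endian branch (taken even on s390)";
#else
const char old_check[] = "old check: big-endian branch";
#endif

#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__	/* compiler-provided, always defined */
const char new_check[] = "new check: little-endian";
#else
const char new_check[] = "new check: big-endian";
#endif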
Fixes: ee26dade0e3b ("libbpf: Add support for relocatable bitfields")
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211026010831.748682-2-iii@linux.ibm.com
|
|
Andrii Nakryiko says:
====================
Add libbpf APIs to access BPF program instructions, both before and after
libbpf processing (i.e., before and after bpf_object__load()). This allows
inspecting what's going on with BPF program assembly instructions as libbpf
performs its processing magic.
But in more practical terms, this allows no-brainer BPF program cloning,
which is something you need when working with fentry/fexit BPF programs to
be able to attach the same BPF program code to multiple kernel functions.
Currently, the kernel needs multiple copies of BPF programs, each loaded
with its own target BTF ID. retsnoop is one such example that previously
had to rely on the bpf_program__set_prep() API to hijack program
instructions ([0] for before and after).
Speaking of the bpf_program__set_prep() API and the whole concept of
multiple-instance BPF programs in libbpf, all of that is scheduled for
deprecation in v0.7. It doesn't work well, it's cumbersome, and it will
become more broken as libbpf adds more functionality, so deprecate it and
remove it in libbpf 1.0. It doesn't seem to be used by anyone anyway
(except for that retsnoop hack, which is now much cleaner with the new
APIs, as can be seen in [0]).
[0] https://github.com/anakryiko/retsnoop/pull/1
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
The name of the API doesn't clearly convey that this size is in number
of bytes (a separate comment was needed to make this clear in libbpf.h).
Further, measuring the size of a BPF program in bytes is not exactly the
best fit, because BPF programs always consist of 8-byte instructions. As
such, bpf_program__insn_cnt() is a better alternative in pretty much any
imaginable case.
So schedule bpf_program__size() for deprecation starting from v0.7; it
will be removed in libbpf 1.0.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211025224531.1088894-5-andrii@kernel.org
|
|
Schedule deprecation of a set of APIs that are related to multi-instance
bpf_programs:
- bpf_program__set_prep() ([0]);
- bpf_program__{set,unset}_instance() ([1]);
- bpf_program__nth_fd().
These APIs are obscure, very niche, and don't seem to be used much in
practice. bpf_program__set_prep() is pretty useless for anything but the
simplest BPF programs, as it doesn't allow adjusting BPF program load
attributes, among other things. In short, it has already bitrotted and
will bitrot some more if not removed.
With the bpf_program__insns() API, which gives access to the post-processed
BPF program instructions of any given entry-point BPF program, it's now
possible to make whatever adjustments were possible with the set_prep()
API before, and more. Given any such use case is automatically an advanced
one, requiring users to stick to the low-level bpf_prog_load() APIs and
manage their own prog FDs is reasonable.
[0] Closes: https://github.com/libbpf/libbpf/issues/299
[1] Closes: https://github.com/libbpf/libbpf/issues/300
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211025224531.1088894-4-andrii@kernel.org
|
|
Add APIs providing read-only access to bpf_program BPF instructions ([0]).
This is useful for diagnostic purposes, but it also allows cleaner support
for cloning BPF programs after libbpf has done all the FD resolution and
CO-RE relocations, subprog instruction appending, etc. Currently, cloning
a BPF program is possible only by hijacking the half-broken
bpf_program__set_prep() API, which doesn't really work well for anything
but the most primitive programs. For instance, the set_prep() API doesn't
allow adjusting the BPF program load parameters which are necessary for
loading fentry/fexit BPF programs (the case where BPF program cloning is
a necessity when doing some sort of mass-attachment functionality).
Given the bpf_program__set_prep() API is set to be deprecated, having
a cleaner alternative is a must. libbpf internally already keeps track
of a linear array of struct bpf_insn, so it's not hard to expose it. The
only gotcha is that libbpf previously freed the instructions array at
bpf_object load time, which would make this API much less useful overall,
because a lot of changes to the instructions are made by libbpf between
bpf_object__open() and bpf_object__load().
So this patch makes libbpf hold onto the prog->insns array even after BPF
program loading. I think this is a small price for added functionality
and improved introspection of BPF program code.
See the retsnoop PR ([1]) for how it can be used in practice and the code
savings compared to relying on bpf_program__set_prep().
[0] Closes: https://github.com/libbpf/libbpf/issues/298
[1] https://github.com/anakryiko/retsnoop/pull/1
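A hedged sketch of using the new accessors for introspection (object path,
program name, and error handling are illustrative only):

#include <errno.h>
#include <stdio.h>
#include <linux/bpf.h>
#include <bpf/libbpf.h>

static int dump_insns(const char *obj_path, const char *prog_name)
{
	const struct bpf_insn *insns;
	struct bpf_program *prog;
	struct bpf_object *obj;
	size_t i, cnt;
	int err;

	obj = bpf_object__open(obj_path);
	err = libbpf_get_error(obj);
	if (err)
		return err;

	err = bpf_object__load(obj);	/* insns below are post-relocation */
	if (err)
		goto out;

	prog = bpf_object__find_program_by_name(obj, prog_name);
	if (!prog) {
		err = -ENOENT;
		goto out;
	}

	insns = bpf_program__insns(prog);
	cnt = bpf_program__insn_cnt(prog);
	for (i = 0; i < cnt; i++)
		printf("%3zu: code=0x%02x dst=r%d src=r%d off=%d imm=%d\n",
		       i, insns[i].code, insns[i].dst_reg, insns[i].src_reg,
		       insns[i].off, insns[i].imm);
out:
	bpf_object__close(obj);
	return err;
}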
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211025224531.1088894-3-andrii@kernel.org
|
|
Fix the instruction index validity check, which had an off-by-one error.
Fixes: 3ee4f5335511 ("libbpf: Split bpf_core_apply_relo() into bpf_program independent helper.")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211025224531.1088894-2-andrii@kernel.org
|
|
Quentin Monnet says:
====================
When listing BPF objects, bpftool can print a number of properties about
items holding references to these objects. For example, it can show pinned
paths for BPF programs, maps, and links; or programs and maps using a given
BTF object; or the names and PIDs of processes referencing BPF objects. To
collect this information, bpftool uses hash maps (to be clear: the data
structures, inside bpftool - we are not talking of BPF maps). It uses the
implementation available from the kernel, and picks it up from
tools/include/linux/hashtable.h.
This patchset converts bpftool's hash maps to a distinct implementation
instead, the one coming with libbpf. The main motivation for this change is
that it should ease the path towards a potential out-of-tree mirror for
bpftool, like the one libbpf already has. Although it's not perfect to
depend on libbpf's internal components, bpftool is intimately tied with the
library anyway, and this looks better than depending too much on (non-UAPI)
kernel headers.
The first two patches contain preparatory work on the Makefile and on the
initialisation of the hash maps for collecting pinned paths for objects.
Then the transition is split into several steps, one for each kind of
properties for which the collection is backed by hash maps.
v2:
- Move hashmap cleanup for pinned paths for links from do_detach() to
do_show().
- Handle errors on hashmap__append() (in three of the patches).
- Rename bpftool_hash_fn() and bpftool_equal_fn() as hash_fn_for_key_id()
and equal_fn_for_key_id(), respectively.
- Add curly braces for hashmap__for_each_key_entry() { } in
show_btf_plain() and show_btf_json(), where the flow was difficult to
read.
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
|
|
In order to show PIDs and names for processes holding references to BPF
programs, maps, links, or BTF objects, bpftool creates hash maps to
store all relevant information. This commit is part of a set that
transitions from the kernel's hash map implementation to the one coming
with libbpf.
The motivation is to make bpftool less dependent on kernel headers, to
ease the path to a potential out-of-tree mirror, like libbpf has.
This is the third and final step of the transition, in which we convert
the hash maps used for storing the information about the processes
holding references to BPF objects (programs, maps, links, BTF), and at
last we drop the inclusion of tools/include/linux/hashtable.h.
Note: Checkpatch complains about the use of __weak declarations, and the
missing empty lines after the bunch of empty function declarations when
compiling without the BPF skeletons (none of these were introduced in
this patch). We want to keep things as they are, and the reports should
be safe to ignore.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211023205154.6710-6-quentin@isovalent.com
|
|
In order to show BPF programs and maps using BTF objects when the latter
are being listed, bpftool creates hash maps to store all relevant items.
This commit is part of a set that transitions from the kernel's hash map
implementation to the one coming with libbpf.
The motivation is to make bpftool less dependent on kernel headers, to
ease the path to a potential out-of-tree mirror, like libbpf has.
This commit focuses on the two hash maps used by bpftool when listing
BTF objects to store references to programs and maps, and converts them
to libbpf's implementation.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211023205154.6710-5-quentin@isovalent.com
|
|
In order to show pinned paths for BPF programs, maps, or links when
listing them with the "-f" option, bpftool creates hash maps to store
all relevant paths under the bpffs. So far, it would rely on the
kernel implementation (from tools/include/linux/hashtable.h).
We can make bpftool rely on libbpf's implementation instead. The
motivation is to make bpftool less dependent on kernel headers, to ease
the path to a potential out-of-tree mirror, like libbpf has.
This commit is the first step of the conversion: the hash maps for
pinned paths for programs, maps, and links are converted to libbpf's
hashmap.{c,h}. Other hash maps used for the PIDs of processes holding
references to BPF objects are left unchanged for now. On the build side,
this requires adding a dependency on a second header internal to libbpf,
and making it a dependency for the bootstrap bpftool version as well.
The rest of the changes are a rather straightforward conversion.
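A hedged sketch of what the converted bookkeeping looks like with libbpf's
hashmap (helper names follow the cover letter's hash_fn_for_key_id /
equal_fn_for_key_id; the exact bpftool code and hashmap.h signatures may differ
slightly between versions):

#include <stdio.h>
#include <stdbool.h>
#include <string.h>
#include <linux/types.h>
#include <bpf/hashmap.h>	/* libbpf internal header installed by the Makefile change */

static size_t hash_fn_for_key_id(const void *key, void *ctx)
{
	return (size_t)key;		/* the u32 object id is packed into the key pointer */
}

static bool equal_fn_for_key_id(const void *k1, const void *k2, void *ctx)
{
	return k1 == k2;
}

static struct hashmap *pinned_paths_new(void)
{
	return hashmap__new(hash_fn_for_key_id, equal_fn_for_key_id, NULL);
}

static int remember_pinned_path(struct hashmap *map, __u32 id, const char *path)
{
	/* append, not set: one object id may be pinned at several paths */
	return hashmap__append(map, (void *)(size_t)id, strdup(path));
}

static void show_pinned_paths(struct hashmap *map, __u32 id)
{
	struct hashmap_entry *entry;

	hashmap__for_each_key_entry(map, entry, (void *)(size_t)id)
		printf("\tpinned %s\n", (const char *)entry->value);
}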
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211023205154.6710-4-quentin@isovalent.com
|
|
BPF programs, maps, and links can all be listed with their pinned paths
by bpftool when the "-f" option is provided. To do so, bpftool builds
hash maps containing all pinned paths for each kind of object.
These three hash maps are always initialised in main.c and exposed
through main.h. There appears to be no particular reason to do so: we can
just as well make them static to the files that need them (prog.c,
map.c, and link.c respectively), and initialise them only when we want
to show objects and the "-f" switch is provided.
This may prevent unnecessary memory allocations if the implementation of
the hash maps were to change in the future.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211023205154.6710-3-quentin@isovalent.com
|
|
The dependency is only useful to make sure that the $(LIBBPF_HDRS_DIR)
directory is created before we try to install the required libbpf
internal header locally. Let's create this directory properly instead.
This is in preparation for making $(LIBBPF_INTERNAL_HDRS) a dependency
of the bootstrap bpftool version, in which case we want no dependency on
$(LIBBPF).
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211023205154.6710-2-quentin@isovalent.com
|
|
Andrii Nakryiko says:
====================
Reduce the amount of waiting time when running test_progs in parallel mode (-j)
by splitting the bpf_verif_scale selftests into multiple tests. Previously they
were structured as a single test with multiple subtests, but subtests are not
easily parallelizable with test_progs' infra. Also, in practice each scale
subtest is really an independent test with nothing shared across the subtests.
This patch set changes how test_progs test discovery works. Now it is possible
to define multiple tests within a single source code file. One of the patches
also marks tc_redirect selftests as serial, because it's extremely harmful to
the test system when run in parallel mode.
====================
Acked-by: Yucong Sun <sunyucong@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Instead of using subtests in the bpf_verif_scale selftest, turn each scale
sub-test into its own test. Each subtest is completely independent and
just reuses a bit of common test running logic, so the conversion is
trivial. For convenience, keep all of the BPF verifier scale tests in one
file.
This conversion shaves off a significant amount of time when running
test_progs in parallel mode. E.g., just running scale tests (-t verif_scale):
BEFORE
======
Summary: 24/0 PASSED, 0 SKIPPED, 0 FAILED
real 0m22.894s
user 0m0.012s
sys 0m22.797s
AFTER
=====
Summary: 24/0 PASSED, 0 SKIPPED, 0 FAILED
real 0m12.044s
user 0m0.024s
sys 0m27.869s
A ten-second saving right there. test_progs -j is not yet ready to be
turned on by default, unfortunately, and some tests fail almost every
time, but this is a good improvement nevertheless. Ignoring a few
failures, here are the sequential vs parallel run times when running all
tests now:
SEQUENTIAL
==========
Summary: 206/953 PASSED, 4 SKIPPED, 0 FAILED
real 1m5.625s
user 0m4.211s
sys 0m31.650s
PARALLEL
========
Summary: 204/952 PASSED, 4 SKIPPED, 2 FAILED
real 0m35.550s
user 0m4.998s
sys 0m39.890s
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022223228.99920-5-andrii@kernel.org
|
|
The tc_redirect selftest seems to cause a lot of harm to kprobe/tracepoint
selftests. Yucong mentioned before that it does manipulate sysfs, which
might be the reason. So let's mark it as serial, though ideally it would
be less intrusive on the system under test.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022223228.99920-4-andrii@kernel.org
|
|
Revamp how test discovery works for test_progs and allow multiple test
entries per file. Any global void function with no arguments and a
serial_test_ or test_ prefix is considered a test.
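A hedged sketch of what this means for a file under prog_tests/ (test names are
made up):

#include <test_progs.h>

/* discovered automatically: global, void, no args, test_ prefix (parallel-safe) */
void test_my_feature(void)
{
	ASSERT_EQ(1 + 1, 2, "sanity");
}

/* the serial_test_ prefix forces this one to run serially under -j */
void serial_test_my_sysfs_feature(void)
{
	ASSERT_OK(0, "noop");
}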
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022223228.99920-3-andrii@kernel.org
|
|
Ensure that all test entry points are global void functions with no
input arguments. Mark a few subtest entry points as static.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022223228.99920-2-andrii@kernel.org
|
|
The original code assumed a fixed and correct BTF header length. That's
not always the case, though, so fix this bug with a proper additional
check, and use the actual header length instead of
sizeof(struct btf_header) in sanity checks.
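A hedged sketch of the kind of check described (not the actual libbpf code):
validate hdr_len itself and use it, rather than assuming it equals
sizeof(struct btf_header), before trusting the section offsets:

#include <stddef.h>
#include <stdbool.h>
#include <linux/btf.h>

static bool btf_header_sane(const void *data, size_t data_sz)
{
	const struct btf_header *hdr = data;

	if (data_sz < sizeof(*hdr))
		return false;
	if (hdr->magic != BTF_MAGIC)	/* 0xeB9F */
		return false;
	if (hdr->hdr_len < sizeof(*hdr) || hdr->hdr_len > data_sz)
		return false;
	/* sections start at data + hdr->hdr_len, not data + sizeof(*hdr) */
	return true;
}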
Fixes: 8a138aed4a80 ("bpf: btf: Add BTF support to libbpf")
Reported-by: Evgeny Vereshchagin <evvers@ya.ru>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211023003157.726961-2-andrii@kernel.org
|
|
btf_header's str_off+str_len or type_off+type_len can overflow as they
are u32s. This will lead to bypassing the sanity checks during BTF
parsing, resulting in crashes afterwards. Fix by using 64-bit signed
integers for comparison.
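A minimal standalone illustration of the wrap-around and of the widened
comparison the fix relies on (not the actual libbpf code):

#include <stdint.h>
#include <stdbool.h>

static bool section_fits_u32(uint32_t off, uint32_t len, uint32_t data_sz)
{
	return off + len <= data_sz;	/* 0xffffff00 + 0x200 wraps to 0x100: check passes */
}

static bool section_fits_s64(uint32_t off, uint32_t len, uint32_t data_sz)
{
	return (int64_t)off + len <= (int64_t)data_sz;	/* no wrap-around */
}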
Fixes: d8123624506c ("libbpf: Fix BTF data layout checks and allow empty BTF")
Reported-by: Evgeny Vereshchagin <evvers@ya.ru>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211023003157.726961-1-andrii@kernel.org
|
|
Yonghong Song says:
====================
The latest upstream llvm-project added support for btf_decl_tag attributes
on typedef declarations ([1], [2]). Similar to the other btf_decl_tag cases
(func/func_param/global_var/struct/union/field), a btf_decl_tag on a typedef
declaration can carry information from kernel source through the clang
compiler to dwarf/BTF, for bpf verification or other use cases.
This patch set adds kernel support for BTF_KIND_DECL_TAG on typedef
declarations (Patch 1). Additional selftests are added to cover unit
testing, dedup, and bpf program usage of btf_decl_tag with typedefs
(Patches 2, 3 and 4). The btf documentation is updated to include the
BTF_KIND_DECL_TAG typedef case (Patch 5).
[1] https://reviews.llvm.org/D110127
[2] https://reviews.llvm.org/D112259
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Add BTF_KIND_DECL_TAG typedef support in btf.rst.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211021195649.4020514-1-yhs@fb.com
|
|
Change the value type in progs/tag.c to a typedef with a btf_decl_tag.
With `bpftool btf dump file tag.o`, we get:
...
[14] TYPEDEF 'value_t' type_id=17
[15] DECL_TAG 'tag1' type_id=14 component_idx=-1
[16] DECL_TAG 'tag2' type_id=14 component_idx=-1
[17] STRUCT '(anon)' size=8 vlen=2
'a' type_id=2 bits_offset=0
'b' type_id=2 bits_offset=32
...
The btf_tag selftest also succeeded:
$ ./test_progs -t tag
#21 btf_tag:OK
Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
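For reference, a hedged sketch of roughly what the typedef in progs/tag.c looks
like to produce the dump above (the actual selftest source may differ):

#define __tag1 __attribute__((btf_decl_tag("tag1")))
#define __tag2 __attribute__((btf_decl_tag("tag2")))

/* the DECL_TAGs attach to the typedef itself, matching TYPEDEF 'value_t' above */
typedef struct {
	int a;
	int b;
} value_t __tag1 __tag2;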
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211021195643.4020315-1-yhs@fb.com
|
|
Add unit tests for deduplication of BTF_KIND_DECL_TAG to typedef types.
Also change a few comments from "tag" to "decl_tag" to match the
BTF_KIND_DECL_TAG enum value name.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211021195638.4019770-1-yhs@fb.com
|
|
Test good and bad variants of typedef BTF_KIND_DECL_TAG encoding.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211021195633.4019472-1-yhs@fb.com
|
|
The llvm patches ([1], [2]) added support for attaching btf_decl_tag
attributes to typedef declarations. This patch adds the corresponding
support in the kernel.
[1] https://reviews.llvm.org/D110127
[2] https://reviews.llvm.org/D112259
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211021195628.4018847-1-yhs@fb.com
|
|
Stanislav Fomichev says:
====================
Commit 15669e1dcd75 ("selftests/bpf: Normalize all the rest SEC() uses")
broke flow dissector tests. With the strict section names, bpftool isn't
able to pin all programs of an object (all section names are the
same now). To bring them back to life, let's do the following:
- teach libbpf to pin by func name with LIBBPF_STRICT_SEC_NAME
- enable strict mode in bpftool (breaking cli change)
- fix custom flow_dissector loader to use strict mode
- fix flow_dissector tests to use new pin names (func vs sec)
v5:
- get rid of error when retrying with '/' (Quentin Monnet)
v4:
- fix comment spelling (Quentin Monnet)
- retry progtype without / (Quentin Monnet)
v3:
- clarify program pinning in LIBBPF_STRICT_SEC_NAME,
for real this time (Andrii Nakryiko)
- fix possible segfault in __bpf_program__pin_name (Andrii Nakryiko)
v2:
- add github issue (Andrii Nakryiko)
- remove sec_name from bpf_program.pin_name comment (Andrii Nakryiko)
- add cover letter (Andrii Nakryiko)
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
|
|
- update custom loader to search by name, not section name
- update bpftool commands to use proper pin path
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211021214814.1236114-4-sdf@google.com
|
|
We can't use section names anymore because they are not unique,
and pinning objects with multiple programs of the same
progtype/secname will fail.
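A hedged illustration of the collision being avoided (program names are made up):
with normalized SEC() names, both programs below live in the same
"flow_dissector" section, so under LIBBPF_STRICT_SEC_NAME pinning falls back to
the function name and the pin paths no longer clash:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("flow_dissector")
int dissect_ipv4(struct __sk_buff *skb)
{
	return BPF_OK;	/* pinned as .../dissect_ipv4, not .../flow_dissector */
}

SEC("flow_dissector")
int dissect_ipv6(struct __sk_buff *skb)
{
	return BPF_OK;	/* pinned as .../dissect_ipv6 */
}

char _license[] SEC("license") = "GPL";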
[0] Closes: https://github.com/libbpf/libbpf/issues/273
Fixes: 33a2c75c55e2 ("libbpf: add internal pin_name")
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20211021214814.1236114-2-sdf@google.com
|
|
Bpftool creates a new JSON object for writing program metadata in plain
text mode, regardless of whether metadata is present. Then this writer
is freed if any metadata has been found and printed, but it leaks
otherwise. We cannot destroy the object unconditionally, because the
destructor prints an undesirable line break. Instead, make sure the
writer is created only after we have found program metadata to print.
Found with valgrind.
Fixes: aff52e685eb3 ("bpftool: Support dumping metadata")
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022094743.11052-1-quentin@isovalent.com
|
|
Hengqi Chen says:
====================
Add btf__type_cnt() and btf__raw_data() APIs and deprecate
btf__get_nr_types() and btf__get_raw_data(), since the old APIs
don't follow the libbpf naming convention. Also update tools/selftests
to use these new APIs. This is part of the effort towards libbpf v1.0.
v1->v2:
- Update commit message, deprecate the old APIs in libbpf v0.7 (Andrii)
- Separate changes in tools/ to individual patches (Andrii)
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
|
|
Replace the calls to btf__get_nr_types/btf__get_raw_data in
selftests with new APIs btf__type_cnt/btf__raw_data. The old
APIs will be deprecated in libbpf v0.7+.
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022130623.1548429-6-hengqi.chen@gmail.com
|
|
Replace the call to btf__get_nr_types with new API btf__type_cnt.
The old API will be deprecated in libbpf v0.7+. No functionality
change.
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022130623.1548429-5-hengqi.chen@gmail.com
|
|
Replace the call to btf__get_nr_types with new API btf__type_cnt.
The old API will be deprecated in libbpf v0.7+. No functionality
change.
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022130623.1548429-4-hengqi.chen@gmail.com
|
|
Replace the call to btf__get_raw_data with new API btf__raw_data.
The old APIs will be deprecated in libbpf v0.7+. No functionality
change.
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022130623.1548429-3-hengqi.chen@gmail.com
|
|
Add btf__type_cnt() and btf__raw_data() APIs and deprecate
btf__get_nr_types() and btf__get_raw_data(), since the old APIs
don't follow the libbpf naming convention for getters, which
omit 'get' in the name (see [0]). btf__raw_data() is just an
alias to the existing btf__get_raw_data(). btf__type_cnt()
now returns the number of all types of the BTF object,
including 'void'.
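A hedged sketch of the iteration pattern implied by the new counting (names are
illustrative):

#include <linux/types.h>
#include <bpf/btf.h>

static void walk_types(const struct btf *btf)
{
	__u32 id, cnt = btf__type_cnt(btf);	/* includes the implicit 'void' type, id 0 */

	/* old style was: for (id = 1; id <= btf__get_nr_types(btf); id++) */
	for (id = 1; id < cnt; id++) {
		const struct btf_type *t = btf__type_by_id(btf, id);

		(void)t;	/* inspect t here */
	}
}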
[0] Closes: https://github.com/libbpf/libbpf/issues/279
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022130623.1548429-2-hengqi.chen@gmail.com
|
|
Free btf_dedup if btf_ensure_modifiable() returns an error.
Fixes: 919d2b1dbb07 ("libbpf: Allow modification of BTF and add btf__add_str API")
Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022202035.48868-1-mauricio@kinvolk.io
|
|
A recent change to use tp/syscalls/sys_enter_nanosleep for the perf_buffer
selftest causes it to fail on the 4.9 kernel in libbpf CI ([0]):
libbpf: prog 'handle_sys_enter': failed to attach to perf_event FD 6: Invalid argument
libbpf: prog 'handle_sys_enter': failed to attach to tracepoint 'syscalls/sys_enter_nanosleep': Invalid argument
It's not exactly clear why, because the perf_event itself is created for
this tracepoint, but I can't even compile the 4.9 kernel locally, so it's
hard to figure this out. If anyone has better luck and would like to
help investigate this, I'd really appreciate it.
For now, unblock CI by switching back to raw_syscalls/sys_enter, but reduce
the amount of unnecessary samples emitted by filtering by process ID. Use
an explicit ARRAY map for that to make it work on 4.9 as well, because
global data isn't yet supported there.
Fixes: aa274f98b269 ("selftests/bpf: Fix possible/online index mismatch in perf_buffer test")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022201342.3490692-1-andrii@kernel.org
|
|
When building libbpf sources out of the kernel tree (in the GitHub repo), we
run into a compilation error due to the unknown __aligned attribute. It must
be coming from some kernel header which is not available to the GitHub
sources. Use an explicit __attribute__((aligned(16))) instead.
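A hedged illustration of the change (the union itself is made up):

/* in-tree code can write: union int128_box { ... } __aligned(16);
 * out-of-tree builds lack that kernel macro, so spell the attribute out */
union int128_box {
	__int128 v;
	long long lo;
} __attribute__((aligned(16)));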
Fixes: 961632d54163 ("libbpf: Fix dumping non-aligned __int128")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022192502.2975553-1-andrii@kernel.org
|