author    | David S. Miller <davem@davemloft.net> | 2021-07-10 01:22:45 +0300
committer | David S. Miller <davem@davemloft.net> | 2021-07-10 01:22:45 +0300
commit    | 5d52c906f059b9ee11747557aaaf1fd85a3b6c3d (patch)
tree      | 6d8a1f863940a1cc6dbb6e7e9968cfa956c9abbb /tools/lib/bpf/libbpf.c
parent    | 67a9c94317402b826fc3db32afc8f39336803d97 (diff)
parent    | 1fb5ba29ad0835c5cbfc69a27f9c2733cb65726e (diff)
download  | linux-5d52c906f059b9ee11747557aaaf1fd85a3b6c3d.tar.xz
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf
Daniel Borkmann says:
====================
pull-request: bpf 2021-07-09
The following pull-request contains BPF updates for your *net* tree.
We've added 9 non-merge commits during the last 9 day(s) which contain
a total of 13 files changed, 118 insertions(+), 62 deletions(-).
The main changes are:
1) Fix runqslower task->state access from BPF, from SanjayKumar Jeyakumar.
2) Fix subprog poke descriptor tracking use-after-free, from John Fastabend.
3) Fix sparse complaint from prior devmap RCU conversion, from Toke Høiland-Jørgensen.
4) Fix missing va_end in bpftool JIT json dump's error path (see the sketch after this message), from Gu Shengxian.
5) Fix the tools/bpf install target, which was missing the runqslower install, from Wei Li.
6) Fix xdpsock BPF sample to unload program on shared umem option, from Wang Hai.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
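As an aside on fix (4): the underlying rule is plain C rather than anything bpftool-specific: every va_start() must be matched by a va_end(), including on early-return error paths, which is exactly the spot that tends to get missed. A minimal, hypothetical sketch of that pattern (emit_json() is an invented name for illustration, not the actual bpftool function):

#include <stdarg.h>
#include <stdio.h>

/* Hypothetical helper illustrating the va_end-on-error-path rule. */
static int emit_json(FILE *out, const char *fmt, ...)
{
	va_list ap;
	int ret;

	va_start(ap, fmt);
	ret = vfprintf(out, fmt, ap);
	if (ret < 0) {
		fprintf(stderr, "json dump failed\n");
		va_end(ap);	/* the cleanup an error path can easily forget */
		return ret;
	}
	va_end(ap);
	return 0;
}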
Diffstat (limited to 'tools/lib/bpf/libbpf.c')
-rw-r--r-- | tools/lib/bpf/libbpf.c | 4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 1e04ce724240..6f5e2757bb3c 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -10136,7 +10136,7 @@ int bpf_link__unpin(struct bpf_link *link)

 	err = unlink(link->pin_path);
 	if (err != 0)
-		return libbpf_err_errno(err);
+		return -errno;

 	pr_debug("link fd=%d: unpinned from %s\n", link->fd, link->pin_path);
 	zfree(&link->pin_path);
@@ -11197,7 +11197,7 @@ int perf_buffer__poll(struct perf_buffer *pb, int timeout_ms)

 	cnt = epoll_wait(pb->epoll_fd, pb->events, pb->cpu_cnt, timeout_ms);
 	if (cnt < 0)
-		return libbpf_err_errno(cnt);
+		return -errno;

 	for (i = 0; i < cnt; i++) {
 		struct perf_cpu_buf *cpu_buf = pb->events[i].data.ptr;
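For context on what the two hunks above mean at the API surface: with this change, bpf_link__unpin() and perf_buffer__poll() return a negative errno value directly to the caller on these failure paths. A minimal caller-side sketch under that assumption (drain_events() and the 100 ms timeout are made up for illustration; link and pb are assumed to come from the usual libbpf setup calls):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <bpf/libbpf.h>

/* Illustrative only: consume the negative-errno returns shown in the diff. */
static int drain_events(struct bpf_link *link, struct perf_buffer *pb)
{
	int err;

	/* bpf_link__unpin() propagates -errno from unlink(), e.g. -ENOENT
	 * if the pin path is already gone. */
	err = bpf_link__unpin(link);
	if (err)
		fprintf(stderr, "unpin failed: %s\n", strerror(-err));

	/* perf_buffer__poll() returns -errno when epoll_wait() fails;
	 * -EINTR is usually treated as non-fatal. */
	err = perf_buffer__poll(pb, 100 /* timeout_ms */);
	if (err < 0 && err != -EINTR)
		fprintf(stderr, "poll failed: %s\n", strerror(-err));

	return err < 0 ? err : 0;
}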