path: root/tools/lib
2020-05-28  Merge tag 'v5.7-rc7' into perf/core, to pick up fixes  (Ingo Molnar, 2 files, -2/+4)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-05-14  libbpf: Fix register naming in PT_REGS s390 macros  (Sumanth Korikkar, 1 file, -2/+2)
Fix register naming in PT_REGS s390 macros Fixes: b8ebce86ffe6 ("libbpf: Provide CO-RE variants of PT_REGS macros") Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200513154414.29972-1-sumanthk@linux.ibm.com
2020-05-05  libsymbols kallsyms: Move hex2u64 out of header  (Ian Rogers, 2 files, -15/+0)
hex2u64 is a helper that's out of place in kallsyms.h as not being kallsyms related. Move from kallsyms.h to the only user. Committer notes: Move it out of tools/lib/symbol/kallsyms.c as well, as we had to leave it there in the previous patch lest we break the build. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lore.kernel.org/lkml/20200501221315.54715-4-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-05  libsymbols kallsyms: Parse using io api  (Ian Rogers, 2 files, -45/+51)
'perf record' will call kallsyms__parse 4 times during startup and process megabytes of data. This changes kallsyms__parse to use the io library rather than fgets to improve performance of the user code by over 8%.

Before:
  Running 'internals/kallsyms-parse' benchmark:
  Average kallsyms__parse took: 103.988 ms (+- 0.203 ms)

After:
  Running 'internals/kallsyms-parse' benchmark:
  Average kallsyms__parse took: 95.571 ms (+- 0.006 ms)

For a workload like:

  $ perf record /bin/true

Run under 'perf record -e cycles:u -g' the time goes from:

Before
  30.10%  1.67%  perf  perf  [.] kallsyms__parse
After
  25.55%  20.04%  perf  perf  [.] kallsyms__parse

So a little under 5% of the start-up time is removed. A lot of what remains is on the kernel side, but caching kallsyms within perf would at least impact memory footprint.

Committer notes:

The internals/kallsyms-parse bench is run using:

  [root@five ~]# perf bench internals kallsyms-parse
  # Running 'internals/kallsyms-parse' benchmark:
  Average kallsyms__parse took: 80.381 ms (+- 0.115 ms)
  [root@five ~]#

And this pre-existing test uses these routines to parse kallsyms and then compare with the info obtained from the matching ELF symtab:

  [root@five ~]# perf test vmlinux
   1: vmlinux symtab matches kallsyms : Ok
  [root@five ~]#

Also we can't remove hex2u64() in this patch as this breaks the build:

  /usr/bin/ld: /tmp/build/perf/perf-in.o: in function `modules__parse':
  /home/acme/git/perf/tools/perf/util/symbol.c:607: undefined reference to `hex2u64'
  /usr/bin/ld: /home/acme/git/perf/tools/perf/util/symbol.c:607: undefined reference to `hex2u64'
  /usr/bin/ld: /tmp/build/perf/perf-in.o: in function `dso__load_perf_map':
  /home/acme/git/perf/tools/perf/util/symbol.c:1477: undefined reference to `hex2u64'
  /usr/bin/ld: /home/acme/git/perf/tools/perf/util/symbol.c:1483: undefined reference to `hex2u64'
  collect2: error: ld returned 1 exit status

Leave it there, move it in the next patch.

Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lore.kernel.org/lkml/20200501221315.54715-3-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-05  libperf evlist: Fix a refcount leak  (Ian Rogers, 1 file, -0/+1)
Memory leaks found by applying LLVM's libfuzzer on the tools/perf parse_events function. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: clang-built-linux@googlegroups.com Link: http://lore.kernel.org/lkml/20200319023101.82458-2-irogers@google.com [ Did a minor adjustment due to some other previous patch having already set evlist->all_cpus to NULL at perf_evlist__exit() ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-05  libperf: Add NULL pointer check for cpu_map iteration and NULL assignment for all_cpus  (He Zhe, 2 files, -1/+2)
A NULL pointer may be passed to perf_cpu_map__cpu and then cause a crash, such as the one fixed by commit cb71f7d43ece ("libperf: Setup initial evlist::all_cpus value"). Signed-off-by: He Zhe <zhe.he@windriver.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Kyle Meyer <meyerk@hpe.com> Link: http://lore.kernel.org/lkml/1583665157-349023-1-git-send-email-zhe.he@windriver.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-05  libsubcmd: Introduce OPT_CALLBACK_SET()  (Arnaldo Carvalho de Melo, 1 file, -0/+2)
To register that an option was set, like with the upcoming 'perf record --switch-output-option' one. Acked-by: Jiri Olsa <jolsa@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Song Liu <songliubraving@fb.com> Link: http://lore.kernel.org/lkml/20200429131106.27974-7-acme@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-04-30  libtraceevent: Remove unneeded semicolon  (Zou Wei, 1 file, -1/+1)
Fixes coccicheck warning: tools/lib/traceevent/kbuffer-parse.c:441:2-3: Unneeded semicolon Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Zou Wei <zou_wei@huawei.com> Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Link: http://lore.kernel.org/lkml/1588065121-71236-1-git-send-email-zou_wei@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-04-30  tools api: Add a lightweight buffered reading api  (Ian Rogers, 1 file, -0/+112)
The synthesize benchmark shows the majority of execution time going to fgets and sscanf, necessary to parse /proc/pid/maps. Add a new buffered reading library that will be used to replace these calls in a follow-up CL. Add tests for the library to perf test.

Committer tests:

  $ perf test api
  63: Test api io : Ok
  $

Signed-off-by: Ian Rogers <irogers@google.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andrey Zhizhikin <andrey.z@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lore.kernel.org/lkml/20200415054050.31645-3-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
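As a hedged illustration of the entry above, a parser might use the buffered io API from tools/lib/api/io.h roughly as below. The helper names follow the commit; the include path and the exact return conventions of io__get_hex()/io__get_char() are assumptions and should be checked against the header:

  #include <stdio.h>
  #include <linux/types.h>
  #include <api/io.h>

  /* Count lines of the form "<hex address> <type> <name>" without fgets/sscanf. */
  static int count_symbols(int fd)
  {
          char buf[BUFSIZ];
          struct io io;
          __u64 addr;
          int nr = 0, ch;

          io__init(&io, fd, buf, sizeof(buf));
          for (;;) {
                  /* assumed: consumes hex digits, returns the delimiter, negative at EOF */
                  ch = io__get_hex(&io, &addr);
                  if (ch < 0)
                          break;
                  nr++;
                  /* drain the rest of the line one buffered character at a time */
                  while (ch != '\n' && ch >= 0)
                          ch = io__get_char(&io);
          }
          return nr;
  }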
2020-04-23  libbpf: Only check mode flags in get_xdp_id  (David Ahern, 1 file, -0/+2)
The commit in the Fixes tag changed get_xdp_id to only return prog_id if flags is 0, but there are other XDP flags than the modes - e.g., XDP_FLAGS_UPDATE_IF_NOEXIST. Since the intention was only to look at MODE flags, clear other ones before checking if flags is 0. Fixes: f07cbad29741 ("libbpf: Fix bpf_get_link_xdp_id flags handling") Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrey Ignatov <rdna@fb.com>
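The gist of the fix, as a small hedged sketch (XDP_FLAGS_MODES is the uapi mask from linux/if_link.h covering the SKB/DRV/HW mode bits; the surrounding logic here is illustrative, not the libbpf source):

  __u32 mode_flags = flags & XDP_FLAGS_MODES;  /* drop e.g. XDP_FLAGS_UPDATE_IF_NOEXIST */

  if (!mode_flags) {
          /* no specific mode requested: return the single attached program's id */
          *prog_id = info.prog_id;
          return 0;
  }
  /* otherwise pick the mode-specific id (drv_prog_id / hw_prog_id / skb_prog_id) */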
2020-04-22  perf evlist: Remove duplicate headers  (Jagadeesh Pagadala, 1 file, -2/+0)
Code cleanup: Remove duplicate headers which are included twice. Signed-off-by: Jagadeesh Pagadala <jagdsh.linux@gmail.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Link: http://lore.kernel.org/lkml/1587276836-17088-1-git-send-email-jagdsh.linux@gmail.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-04-22  Merge tag 'perf-core-for-mingo-5.8-20200420' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core  (Ingo Molnar, 3 files, -10/+48)
Pull perf/core fixes and improvements from Arnaldo Carvalho de Melo:

kernel + tools/perf:

  Alexey Budankov:
  - Introduce CAP_PERFMON to kernel and user space.

callchains:

  Adrian Hunter:
  - Allow using Intel PT to synthesize callchains for regular events.

  Kan Liang:
  - Stitch LBR records from multiple samples to get deeper backtraces,
    there are caveats, see the csets for details.

perf script:

  Andreas Gerstmayr:
  - Add flamegraph.py script

BPF:

  Jiri Olsa:
  - Synthesize bpf_trampoline/dispatcher ksymbol events.

perf stat:

  Arnaldo Carvalho de Melo:
  - Honour --timeout for forked workloads.

  Stephane Eranian:
  - Force error in fallback on :k events, to avoid counting nothing when
    the user asks for kernel events but is not allowed to.

perf bench:

  Ian Rogers:
  - Add event synthesis benchmark.

tools api fs:

  Stephane Eranian:
  - Make xxx__mountpoint() more scalable

libtraceevent:

  He Zhe:
  - Handle return value of asprintf.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-04-18  tools lib traceevent: Take care of return value of asprintf  (He Zhe, 1 file, -10/+19)
According to the API, if memory allocation wasn't possible, or some other error occurs, asprintf will return -1, and the contents of strp below are undefined:

  int asprintf(char **strp, const char *fmt, ...);

This patch takes care of the return value of asprintf to make it less error prone and prevent the following build warning:

  ignoring return value of 'asprintf', declared with attribute warn_unused_result [-Wunused-result]

Signed-off-by: He Zhe <zhe.he@windriver.com>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Tzvetomir Stoyanov <tstoyanov@vmware.com>
Cc: hewenliang4@huawei.com
Link: http://lore.kernel.org/lkml/1582163930-233692-1-git-send-email-zhe.he@windriver.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
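A minimal example of the pattern this class of fix applies (generic C, not the libtraceevent code itself):

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <stdlib.h>

  static char *make_label(const char *name, int value)
  {
          char *str;

          /* On failure asprintf() returns -1 and str is undefined:
           * it must not be used or freed. */
          if (asprintf(&str, "%s=%d", name, value) < 0)
                  return NULL;

          return str; /* caller frees */
  }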
2020-04-16  tools api fs: Make xxx__mountpoint() more scalable  (Stephane Eranian, 2 files, -0/+29)
The xxx__mountpoint() interface provided by fs.c finds mount points for common pseudo filesystems. The first time xxx__mountpoint() is invoked, it scans the mount table (/proc/mounts) looking for a match. If found, it is cached. The price to scan /proc/mounts is paid once if the mount is found.

When the mount point is not found, subsequent calls to xxx__mountpoint() scan /proc/mounts over and over again. There is no caching. This causes a scaling issue in perf record with hugetlbfs__mountpoint(). The function is called for each process found in synthesize__mmap_events(). If the machine has thousands of processes and if /proc/mounts has many entries, this could cause major overhead in perf record. We have observed multi-second slowdowns on some configurations.

As an example on a laptop:

Before:

  $ sudo umount /dev/hugepages
  $ strace -e trace=openat -o /tmp/tt perf record -a ls
  $ fgrep mounts /tmp/tt
  285

After:

  $ sudo umount /dev/hugepages
  $ strace -e trace=openat -o /tmp/tt perf record -a ls
  $ fgrep mounts /tmp/tt
  1

One could argue that the non-caching in case the mount point is not found is intentional. That way subsequent calls may discover a mount point if the sysadmin mounts the filesystem. But the same argument could be made against caching the mount point: it could be unmounted, causing errors. It all depends on the intent of the interface. This patch assumes it is expected to scan /proc/mounts once. The patch documents the caching behavior in the fs.h header file.

An alternative would be to just fix perf record, but that would only address hugetlbfs__mountpoint(); there could be similar issues (possibly down the line) with other xxx__mountpoint() calls in perf or other tools.

Signed-off-by: Stephane Eranian <eranian@google.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andrey Zhizhikin <andrey.z@gmail.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lore.kernel.org/lkml/20200402154357.107873-3-irogers@google.com
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
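The caching behaviour described above boils down to remembering not just a successful lookup but also the fact that a lookup already happened. A minimal sketch of that idea (scan_proc_mounts() is a hypothetical helper, not the fs.c code):

  #include <stdbool.h>
  #include <stddef.h>

  static const char *cached_path;
  static bool lookup_done;

  const char *hugetlbfs_mountpoint(void)
  {
          if (!lookup_done) {
                  cached_path = scan_proc_mounts("hugetlbfs"); /* scans /proc/mounts once */
                  lookup_done = true;
          }
          /* may be NULL if nothing was mounted at the time of the first call */
          return cached_path;
  }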
2020-04-15  libbpf: Fix type of old_fd in bpf_xdp_set_link_opts  (Toke Høiland-Jørgensen, 1 file, -1/+1)
The 'old_fd' parameter used for atomic replacement of XDP programs is supposed to be an FD, but was left as a u32 from an earlier iteration of the patch that added it. It was converted to an int when read, so things worked correctly even with negative values, but better change the definition to correctly reflect the intention. Fixes: bd5ca3ef93cd ("libbpf: Add function to set link XDP fd while specifying old program") Reported-by: David Ahern <dsahern@gmail.com> Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: David Ahern <dsahern@gmail.com> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20200414145025.182163-1-toke@redhat.com
2020-04-15  libbpf: Always specify expected_attach_type on program load if supported  (Andrii Nakryiko, 1 file, -44/+82)
For some types of BPF programs that utilize expected_attach_type, libbpf won't set load_attr.expected_attach_type, even if expected_attach_type is known from the section definition. This was done to preserve backwards compatibility with old kernels that didn't recognize the expected_attach_type attribute yet (which was added in 5e43f899b03a ("bpf: Check attach type at prog load time")). But this is problematic for some BPF programs that utilize newer features that require the kernel to know the specific expected_attach_type (e.g., the extended set of return codes for cgroup_skb/egress programs).

This patch makes libbpf specify expected_attach_type by default, but also detect whether the kernel supports this field and, if it doesn't, not set it during program load. This allows having good metadata for bpf_program (e.g., bpf_program__get_expected_attach_type()) while still working with old kernels (for cases where it can work at all).

Additionally, because expected_attach_type is now always set for recognized program types, bpf_program__attach_cgroup doesn't have to do extra checks to determine the correct attach type, so that additional logic is removed. The section_names selftest is also adjusted to account for this change.

More detailed discussion can be found in [0].

[0] https://lore.kernel.org/bpf/20200412003604.GA15986@rdna-mbp.dhcp.thefacebook.com/

Fixes: 5cf1e9145630 ("bpf: cgroup inet skb programs can return 0 to 3")
Fixes: 5e43f899b03a ("bpf: Check attach type at prog load time")
Reported-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Andrey Ignatov <rdna@fb.com>
Link: https://lore.kernel.org/bpf/20200414182645.1368174-1-andriin@fb.com
2020-04-10  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf  (David S. Miller, 1 file, -3/+3)
Daniel Borkmann says:

====================
pull-request: bpf 2020-04-10

The following pull-request contains BPF updates for your *net* tree.

We've added 13 non-merge commits during the last 7 day(s) which contain a total of 13 files changed, 137 insertions(+), 43 deletions(-).

The main changes are:

1) JIT code emission fixes for riscv and arm32, from Luke Nelson and Xi Wang.

2) Disable vmlinux BTF info if GCC_PLUGIN_RANDSTRUCT is used, from Slava Bacherikov.

3) Fix oob write in AF_XDP when meta data is used, from Li RongQing.

4) Fix bpf_get_link_xdp_id() handling on single prog when flags are specified, from Andrey Ignatov.

5) Fix sk_assign() BPF helper for request sockets that can have sk_reuseport field uninitialized, from Joe Stringer.

6) Fix mprotect() test case for the BPF LSM, from KP Singh.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-04-08  libbpf: Fix bpf_get_link_xdp_id flags handling  (Andrey Ignatov, 1 file, -1/+1)
Currently if one of XDP_FLAGS_{DRV,HW,SKB}_MODE flags is passed to bpf_get_link_xdp_id() and there is a single XDP program attached to ifindex, that program's id will be returned by bpf_get_link_xdp_id() in prog_id argument no matter what mode the program is attached in, i.e. flags argument is not taken into account. For example, if there is a single program attached with XDP_FLAGS_SKB_MODE but user calls bpf_get_link_xdp_id() with flags = XDP_FLAGS_DRV_MODE, that skb program will be returned. Fix it by returning info->prog_id only if user didn't specify flags. If flags is specified then return corresponding mode-specific-field from struct xdp_link_info. The initial error was introduced in commit 50db9f073188 ("libbpf: Add a support for getting xdp prog id on ifindex") and then refactored in 473f4e133a12 so 473f4e133a12 is used in the Fixes tag. Fixes: 473f4e133a12 ("libbpf: Add bpf_get_link_xdp_info() function to get more XDP information") Signed-off-by: Andrey Ignatov <rdna@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/0e9e30490b44b447bb2bebc69c7135e7fe7e4e40.1586236080.git.rdna@fb.com
2020-04-07  lib/rbtree: fix coding style of assignments  (chenqiwu, 1 file, -2/+2)
Leave blank space between the right-hand and left-hand side of the assignment to meet the kernel coding style better. Signed-off-by: chenqiwu <chenqiwu@xiaomi.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Michel Lespinasse <walken@google.com> Link: http://lkml.kernel.org/r/1582621140-25850-1-git-send-email-qiwuchen55@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-04-06  libbpf: Initialize *nl_pid so gcc 10 is happy  (Jeremy Cline, 1 file, -2/+2)
Builds of Fedora's kernel-tools package started to fail with "may be used uninitialized" warnings for nl_pid in bpf_set_link_xdp_fd() and bpf_get_link_xdp_info() on the s390 architecture. Although libbpf_netlink_open() always returns a negative number when it does not set *nl_pid, the compiler does not determine this and thus believes the variable might be used uninitialized. Assuage gcc's fears by explicitly initializing nl_pid. Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1807781 Signed-off-by: Jeremy Cline <jcline@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200404051430.698058-1-jcline@redhat.com
2020-04-05  Merge tag 'perf-urgent-2020-04-05' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 1 file, -0/+7)
Pull more perf updates from Thomas Gleixner:
 "Perf updates all over the place:

  core:
   - Support for cgroup tracking in samples to allow cgroup based
     analysis

  tools:
   - Support for cgroup analysis
   - Commandline option and hotkey for perf top to change the sort order
   - A set of fixes all over the place
   - Various build system related improvements
   - Updates of the X86 pmu event JSON data
   - Documentation updates"

* tag 'perf-urgent-2020-04-05' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (55 commits)
  perf python: Fix clang detection to strip out options passed in $CC
  perf tools: Support Python 3.8+ in Makefile
  perf script: Fix invalid read of directory entry after closedir()
  perf script report: Fix SEGFAULT when using DWARF mode
  perf script: add -S/--symbols documentation
  perf pmu-events x86: Use CPU_CLK_UNHALTED.THREAD in Kernel_Utilization metric
  perf events parser: Add missing Intel CPU events to parser
  perf script: Allow --symbol to accept hexadecimal addresses
  perf report/top TUI: Fix title line formatting
  perf top: Support hotkey to change sort order
  perf top: Support --group-sort-idx to change the sort order
  perf symbols: Fix arm64 gap between kernel start and module end
  perf build-test: Honour JOBS to override detection of number of cores
  perf script: Add --show-cgroup-events option
  perf top: Add --all-cgroups option
  perf record: Add --all-cgroups option
  perf record: Support synthesizing cgroup events
  perf report: Add 'cgroup' sort key
  perf cgroup: Maintain cgroup hierarchy
  perf tools: Basic support for CGROUP event
  ...
2020-04-03  Merge tag 'spdx-5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx  (Linus Torvalds, 3 files, -0/+3)
Pull SPDX updates from Greg KH:
 "Here are three SPDX patches for 5.7-rc1.

  One fixes up the SPDX tag for a single driver, while the other two go
  through the tree and add SPDX tags for all of the .gitignore files as
  needed.

  Nothing too complex, but you will get a merge conflict with your
  current tree, that should be trivial to handle (one file modified by
  two things, one file deleted.)

  All three of these have been in linux-next for a while, with no
  reported issues other than the merge conflict"

* tag 'spdx-5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx:
  ASoC: MT6660: make spdxcheck.py happy
  .gitignore: add SPDX License Identifier
  .gitignore: remove too obvious comments
2020-04-03  perf tools: Basic support for CGROUP event  (Namhyung Kim, 1 file, -0/+7)
Implement basic functionality to support cgroup tracking. Each cgroup can be identified by inode number which can be read from userspace too. The actual cgroup processing will come in the later patch. Reported-by: kernel test robot <rong.a.chen@intel.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> [ fix perf test failure on sampling parsing ] Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200325124536.2800725-4-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-04-01  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next  (Linus Torvalds, 11 files, -75/+659)
Pull networking updates from David Miller:
 "Highlights:

  1) Fix the iwlwifi regression, from Johannes Berg.

  2) Support BSS coloring and 802.11 encapsulation offloading in
     hardware, from John Crispin.

  3) Fix some potential Spectre issues in qtnfmac, from Sergey
     Matyukevich.

  4) Add TTL decrement action to openvswitch, from Matteo Croce.

  5) Allow paralleization through flow_action setup by not taking the
     RTNL mutex, from Vlad Buslov.

  6) A lot of zero-length array to flexible-array conversions, from
     Gustavo A. R. Silva.

  7) Align XDP statistics names across several drivers for consistency,
     from Lorenzo Bianconi.

  8) Add various pieces of infrastructure for offloading conntrack, and
     make use of it in mlx5 driver, from Paul Blakey.

  9) Allow using listening sockets in BPF sockmap, from Jakub Sitnicki.

  10) Lots of parallelization improvements during configuration changes
      in mlxsw driver, from Ido Schimmel.

  11) Add support to devlink for generic packet traps, which report
      packets dropped during ACL processing. And use them in mlxsw
      driver. From Jiri Pirko.

  12) Support bcmgenet on ACPI, from Jeremy Linton.

  13) Make BPF compatible with RT, from Thomas Gleixnet, Alexei
      Starovoitov, and your's truly.

  14) Support XDP meta-data in virtio_net, from Yuya Kusakabe.

  15) Fix sysfs permissions when network devices change namespaces, from
      Christian Brauner.

  16) Add a flags element to ethtool_ops so that drivers can more simply
      indicate which coalescing parameters they actually support, and
      therefore the generic layer can validate the user's ethtool
      request. Use this in all drivers, from Jakub Kicinski.

  17) Offload FIFO qdisc in mlxsw, from Petr Machata.

  18) Support UDP sockets in sockmap, from Lorenz Bauer.

  19) Fix stretch ACK bugs in several TCP congestion control modules,
      from Pengcheng Yang.

  20) Support virtual functiosn in octeontx2 driver, from Tomasz
      Duszynski.

  21) Add region operations for devlink and use it in ice driver to dump
      NVM contents, from Jacob Keller.

  22) Add support for hw offload of MACSEC, from Antoine Tenart.

  23) Add support for BPF programs that can be attached to LSM hooks,
      from KP Singh.

  24) Support for multiple paths, path managers, and counters in MPTCP.
      From Peter Krystad, Paolo Abeni, Florian Westphal, Davide Caratti,
      and others.

  25) More progress on adding the netlink interface to ethtool, from
      Michal Kubecek"

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (2121 commits)
  net: ipv6: rpl_iptunnel: Fix potential memory leak in rpl_do_srh_inline
  cxgb4/chcr: nic-tls stats in ethtool
  net: dsa: fix oops while probing Marvell DSA switches
  net/bpfilter: remove superfluous testing message
  net: macb: Fix handling of fixed-link node
  net: dsa: ksz: Select KSZ protocol tag
  netdevsim: dev: Fix memory leak in nsim_dev_take_snapshot_write
  net: stmmac: add EHL 2.5Gbps PCI info and PCI ID
  net: stmmac: add EHL PSE0 & PSE1 1Gbps PCI info and PCI ID
  net: stmmac: create dwmac-intel.c to contain all Intel platform
  net: dsa: bcm_sf2: Support specifying VLAN tag egress rule
  net: dsa: bcm_sf2: Add support for matching VLAN TCI
  net: dsa: bcm_sf2: Move writing of CFP_DATA(5) into slicing functions
  net: dsa: bcm_sf2: Check earlier for FLOW_EXT and FLOW_MAC_EXT
  net: dsa: bcm_sf2: Disable learning for ASP port
  net: dsa: b53: Deny enslaving port 7 for 7278 into a bridge
  net: dsa: b53: Prevent tagged VLAN on port 7 for 7278
  net: dsa: b53: Restore VLAN entries upon (re)configuration
  net: dsa: bcm_sf2: Fix overflow checks
  hv_netvsc: Remove unnecessary round_up for recv_completion_cnt
  ...
2020-03-31  libbpf: Add support for bpf_link-based cgroup attachment  (Andrii Nakryiko, 5 files, -1/+110)
Add bpf_program__attach_cgroup(), which uses BPF_LINK_CREATE subcommand to create an FD-based kernel bpf_link. Also add low-level bpf_link_create() API. If expected_attach_type is not specified explicitly with bpf_program__set_expected_attach_type(), libbpf will try to determine proper attach type from BPF program's section definition. Also add support for bpf_link's underlying BPF program replacement: - unconditional through high-level bpf_link__update_program() API; - cmpxchg-like with specifying expected current BPF program through low-level bpf_link_update() API. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200330030001.2312810-4-andriin@fb.com
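A hedged usage sketch for the new attach API; the cgroup path and section name are made-up examples, and error handling is reduced to the minimum:

  #include <fcntl.h>
  #include <bpf/libbpf.h>

  static struct bpf_link *attach_egress(struct bpf_object *obj)
  {
          int cg_fd = open("/sys/fs/cgroup/unified/mygroup", O_RDONLY);
          struct bpf_program *prog;
          struct bpf_link *link;

          if (cg_fd < 0)
                  return NULL;
          prog = bpf_object__find_program_by_title(obj, "cgroup_skb/egress");
          link = bpf_program__attach_cgroup(prog, cg_fd);
          if (libbpf_get_error(link))
                  return NULL;
          /* later: bpf_link__destroy(link) detaches the program */
          return link;
  }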
2020-03-30  tools/libbpf: Add support for BPF_PROG_TYPE_LSM  (KP Singh, 4 files, -5/+44)
Since BPF_PROG_TYPE_LSM uses the same attaching mechanism as BPF_PROG_TYPE_TRACING, the common logic is refactored into a static function bpf_program__attach_btf_id. A new API call bpf_program__attach_lsm is still added to avoid userspace conflicts if this ever changes in the future. Signed-off-by: KP Singh <kpsingh@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Brendan Jackman <jackmanb@google.com> Reviewed-by: Florent Revest <revest@google.com> Reviewed-by: James Morris <jamorris@linux.microsoft.com> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200329004356.27286-7-kpsingh@chromium.org
2020-03-30  bpf: Introduce BPF_PROG_TYPE_LSM  (KP Singh, 1 file, -0/+1)
Introduce types and configs for bpf programs that can be attached to LSM hooks. The programs can be enabled by the config option CONFIG_BPF_LSM. Signed-off-by: KP Singh <kpsingh@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Brendan Jackman <jackmanb@google.com> Reviewed-by: Florent Revest <revest@google.com> Reviewed-by: Thomas Garnier <thgarnie@google.com> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: James Morris <jamorris@linux.microsoft.com> Link: https://lore.kernel.org/bpf/20200329004356.27286-2-kpsingh@chromium.org
2020-03-30  libbpf: Add setter for initial value for internal maps  (Toke Høiland-Jørgensen, 3 files, -0/+14)
For internal maps (most notably the maps backing global variables), libbpf uses an internal mmaped area to store the data after opening the object. This data is subsequently copied into the kernel map when the object is loaded. This adds a function to set a new value for that data, which can be used before the object is loaded into the kernel. This is especially relevant for RODATA maps, since those are frozen on load.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200329132253.232541-1-toke@redhat.com
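A hedged sketch of how the setter might be used between open and load, assuming an already-opened bpf_object `obj`; the struct layout and the "myprog.rodata" map name are illustrative assumptions (global-variable maps are typically named "<obj>.rodata" / "<obj>.data"):

  struct settings { int debug; int verbose; } cfg = { .debug = 1 };
  struct bpf_map *map = bpf_object__find_map_by_name(obj, "myprog.rodata");

  /* must be called after bpf_object__open() and before bpf_object__load();
   * the size has to match the map's value size */
  if (!map || bpf_map__set_initial_value(map, &cfg, sizeof(cfg)))
          return -1;

  bpf_object__load(obj);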
2020-03-29  libbpf: Add function to set link XDP fd while specifying old program  (Toke Høiland-Jørgensen, 3 files, -1/+42)
This adds a new function to set the XDP fd while specifying the FD of the program to replace, using the newly added IFLA_XDP_EXPECTED_FD netlink parameter. The new function uses the opts struct mechanism to be extendable in the future. Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/158515700857.92963.7052131201257841700.stgit@toke.dk
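A hedged sketch of the atomic-replace usage; XDP_FLAGS_REPLACE is the companion uapi flag added alongside IFLA_XDP_EXPECTED_FD, and the exact error returned on a mismatch is an assumption to be checked against the kernel:

  DECLARE_LIBBPF_OPTS(bpf_xdp_set_link_opts, opts,
          .old_fd = old_prog_fd,          /* only replace if this is still attached */
  );
  int err = bpf_set_link_xdp_fd_opts(ifindex, new_prog_fd,
                                     XDP_FLAGS_REPLACE, &opts);

  if (err)
          /* likely someone else's program is attached; don't clobber it */
          return err;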
2020-03-28  libbpf, xsk: Init all ring members in xsk_umem__create and xsk_socket__create  (Fletcher Dunn, 1 file, -2/+14)
Fix a sharp edge in xsk_umem__create and xsk_socket__create. Almost all of the members of the ring buffer structs are initialized, but the "cached_xxx" variables are not all initialized. The caller is required to zero them. This is needlessly dangerous. The results if you don't do it can be very bad. For example, they can cause xsk_prod_nb_free and xsk_cons_nb_avail to return values greater than the size of the queue. xsk_ring_cons__peek can return an index that does not refer to an item that has been queued. I have confirmed that without this change, my program misbehaves unless I memset the ring buffers to zero before calling the function. Afterwards, my program works without (or with) the memset. Signed-off-by: Fletcher Dunn <fletcherd@valvesoftware.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/bpf/85f12913cde94b19bfcb598344701c38@valvesoftware.com
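For libbpf versions without this fix, the caller-side workaround described above is simply zero-initializing the ring structs before handing them in; a sketch:

  #include <string.h>
  #include <bpf/xsk.h>

  struct xsk_ring_prod fill, tx;
  struct xsk_ring_cons comp, rx;

  /* not needed once the library fix is in place */
  memset(&fill, 0, sizeof(fill));
  memset(&comp, 0, sizeof(comp));
  memset(&rx, 0, sizeof(rx));
  memset(&tx, 0, sizeof(tx));

  /* ... then pass &fill/&comp to xsk_umem__create() and &rx/&tx to xsk_socket__create() */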
2020-03-26  libbpf: Don't allocate 16M for log buffer by default  (Stanislav Fomichev, 2 files, -13/+29)
For each prog/btf load we allocate and free 16 megs of verifier buffer. On production systems it doesn't really make sense because the programs/btf have gone through extensive testing and are (mostly) guaranteed to successfully load. Let's assume the successful case by default and skip buffer allocation on the first try. If there is an error, start with BPF_LOG_BUF_SIZE and double it on each ENOSPC iteration.

v3:
* Return -ENOMEM when can't allocate log buffer (Andrii Nakryiko)

v2:
* Don't allocate the buffer at all on the first try (Andrii Nakryiko)

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200325195521.112210-1-sdf@google.com
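A generic sketch of the retry strategy described above; try_load() is a hypothetical wrapper around the BPF_PROG_LOAD syscall, not libbpf's internal helper:

  int fd = try_load(&attr, NULL, 0);        /* optimistic first attempt, no log buffer */
  size_t log_sz = BPF_LOG_BUF_SIZE;
  char *log = NULL;

  while (fd < 0) {
          char *tmp = realloc(log, log_sz);

          if (!tmp) {
                  free(log);
                  return -ENOMEM;
          }
          log = tmp;
          fd = try_load(&attr, log, log_sz);
          if (fd >= 0 || errno != ENOSPC)
                  break;                    /* loaded, or failed with a complete log */
          log_sz *= 2;                      /* verifier log was truncated, grow and retry */
  }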
2020-03-26  libbpf: Remove unused parameter `def` to get_map_field_int  (Tobias Klauser, 1 file, -10/+6)
Has been unused since commit ef99b02b23ef ("libbpf: capture value in BTF type info for BTF-defined map defs"). Signed-off-by: Tobias Klauser <tklauser@distanz.ch> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200325113655.19341-1-tklauser@distanz.ch
2020-03-25  .gitignore: add SPDX License Identifier  (Masahiro Yamada, 3 files, -0/+3)
Add SPDX License Identifier to all .gitignore files. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-03-17  bpf, libbpf: Fix ___bpf_kretprobe_args1(x) macro definition  (Wenbo Zhang, 1 file, -1/+1)
Use PT_REGS_RC instead of PT_REGS_RET to get ret correctly. Fixes: df8ff35311c8 ("libbpf: Merge selftests' bpf_trace_helpers.h into libbpf's bpf_tracing.h") Signed-off-by: Wenbo Zhang <ethercflow@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200315083252.22274-1-ethercflow@gmail.com
2020-03-14  libbpf: Provide CO-RE variants of PT_REGS macros  (Andrii Nakryiko, 1 file, -0/+103)
Syscall raw tracepoints have struct pt_regs pointer as tracepoint's first argument. After that, reading any of pt_regs fields requires bpf_probe_read(), even for tp_btf programs. Due to that, PT_REGS_PARMx macros are not usable as is. This patch adds CO-RE variants of those macros that use BPF_CORE_READ() to read necessary fields. This provides relocatable architecture-agnostic pt_regs field accesses. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20200313172336.1879637-4-andriin@fb.com
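A hedged sketch of using one of the new CO-RE variants in a tp_btf program; the section name and argument layout follow the usual sys_enter raw tracepoint, and the exact program shape is an illustration rather than code from this commit:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_core_read.h>
  #include <bpf/bpf_tracing.h>

  SEC("tp_btf/sys_enter")
  int BPF_PROG(handle_sys_enter, struct pt_regs *regs, long id)
  {
          /* relocatable read of the first syscall argument, no explicit bpf_probe_read() */
          unsigned long arg1 = PT_REGS_PARM1_CORE(regs);

          bpf_printk("syscall %ld arg1 %lx", id, arg1);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";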
2020-03-14  libbpf: Ignore incompatible types with matching name during CO-RE relocation  (Andrii Nakryiko, 1 file, -0/+4)
When finding target type candidates, ignore forward declarations, functions, and other named types of incompatible kind. Not doing this can cause false errors. See [0] for one such case (due to struct pt_regs forward declaration). [0] https://github.com/iovisor/bcc/pull/2806#issuecomment-598543645 Fixes: ddc7c3042614 ("libbpf: implement BPF CO-RE offset relocation algorithm") Reported-by: Wenbo Zhang <ethercflow@gmail.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20200313172336.1879637-3-andriin@fb.com
2020-03-13  libbpf: Split BTF presence checks into libbpf- and kernel-specific parts  (Andrii Nakryiko, 1 file, -5/+12)
The need for application BTF to be present differs between user-space libbpf and the kernel. Currently, BTF is mandatory in the kernel only when the BPF application is using STRUCT_OPS, while libbpf itself relies more heavily on the presence of BTF:
- for BTF-defined maps;
- for Kconfig externs;
- for STRUCT_OPS as well.

Thus, checks for the presence and validity of bpf_object's BTF need to be performed separately, which is what this patch does.

Fixes: 5327644614a1 ("libbpf: Relax check whether BTF is mandatory")
Reported-by: Michal Rostecki <mrostecki@opensuse.org>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Cc: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200312185033.736911-1-andriin@fb.com
2020-03-05  tools/libbpf: Add support for BPF_MODIFY_RETURN  (KP Singh, 1 file, -0/+4)
Signed-off-by: KP Singh <kpsingh@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200304191853.1529-6-kpsingh@chromium.org
2020-03-04  libbpf: Assume unsigned values for BTF_KIND_ENUM  (Andrii Nakryiko, 1 file, -4/+4)
Currently, BTF_KIND_ENUM type doesn't record whether enum values should be interpreted as signed or unsigned. In Linux, most enums are unsigned, though, so interpreting them as unsigned matches real world better. Change btf_dump test case to test maximum 32-bit value, instead of negative value. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200303003233.3496043-3-andriin@fb.com
2020-03-04  tools lib traceevent: Remove extra '\n' in print_event_time()  (Steven Rostedt (VMware), 1 file, -1/+1)
If the precision of print_event_time() is zero or greater than the timestamp, it uses a different format. But that format had an extra new line at the end, and caused the output to not look right:

  cpus=2
   sleep-3946  [001]111264306005 : function: inotify_inode_queue_event
   sleep-3946  [001]111264307158 : function: __fsnotify_parent
   sleep-3946  [001]111264307637 : function: inotify_dentry_parent_queue_event
   sleep-3946  [001]111264307989 : function: fsnotify
   sleep-3946  [001]111264308401 : function: audit_syscall_exit

Fixes: 38847db9740a ("libtraceevent, perf tools: Changes in tep_print_event_* APIs")
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20200303231852.6ab6882f@oasis.local.home
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-03-04  libperf: Add counting example  (Michael Petlan, 1 file, -0/+83)
Current libperf man pages mention file counting.c "coming with libperf package", however, the file is missing. Add the file then. Fixes: 81de3bf37a8b ("libperf: Add man pages") Signed-off-by: Michael Petlan <mpetlan@redhat.com> Acked-by: Jiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> LPU-Reference: 20200227194424.28210-1-mpetlan@redhat.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
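Since counting.c is being (re)added, here is a minimal counting sketch in the same spirit; the libperf calls below are written from memory and should be checked against the shipped example and man pages:

  #include <linux/perf_event.h>
  #include <perf/evsel.h>
  #include <perf/threadmap.h>

  static long count_instructions(void)
  {
          struct perf_event_attr attr = {
                  .type   = PERF_TYPE_HARDWARE,
                  .config = PERF_COUNT_HW_INSTRUCTIONS,
          };
          struct perf_thread_map *threads = perf_thread_map__new_dummy();
          struct perf_evsel *evsel = perf_evsel__new(&attr);
          struct perf_counts_values counts = { 0 };

          perf_thread_map__set_pid(threads, 0, 0);   /* measure the current process */
          if (perf_evsel__open(evsel, NULL, threads))
                  return -1;
          /* ... run the workload to be measured here ... */
          perf_evsel__read(evsel, 0, 0, &counts);
          perf_evsel__close(evsel);
          perf_evsel__delete(evsel);
          perf_thread_map__put(threads);
          return counts.val;
  }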
2020-03-04  tools lib api fs: Move cgroupsfs_find_mountpoint()  (Namhyung Kim, 3 files, -0/+70)
Move it from tools/perf/util/cgroup.c as it can be used by other places. Note that cgroup filesystem is different from others since it's usually mounted separately (in v1) for each subsystem. I just copied the code with a little modification to pass a name of subsystem. Suggested-by: Jiri Olsa <jolsa@redhat.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20200127100031.1368732-1-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-03-04  libbpf: Fix handling of optional field_name in btf_dump__emit_type_decl  (Andrii Nakryiko, 1 file, -1/+1)
Internal functions, used by btf_dump__emit_type_decl(), assume field_name is never going to be NULL. Ensure it's always the case. Fixes: 9f81654eebe8 ("libbpf: Expose BTF-to-C type declaration emitting API") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200303180800.3303471-1-andriin@fb.com
2020-03-03  libbpf: Add bpf_link pinning/unpinning  (Andrii Nakryiko, 3 files, -27/+114)
With the bpf_link abstraction now supported by the kernel explicitly, add a pinning/unpinning API for links. Also allow creating (opening) a bpf_link from a BPF FS file.

This API allows having "ephemeral" FD-based BPF links (like raw tracepoint or fexit/freplace attachments) survive user process exit, by pinning them in a BPF FS, which is an important use case for long-running BPF programs.

As part of this, expose the underlying FD for a bpf_link. While legacy bpf_links might not have an FD associated with them (which will be expressed as a bpf_link with fd=-1), the kernel's abstraction is based around FD-based usage, so match it closely. This, subsequently, allows a generic pinning/unpinning API for the generalized bpf_link. For some types of bpf_links the kernel might not support pinning, in which case bpf_link__pin() will return an error.

With the FD being part of the generic bpf_link, also get rid of bpf_link_fd in favor of using vanilla bpf_link.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200303043159.323675-3-andriin@fb.com
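A hedged sketch of the pin/reopen flow; the BPF FS path and the attach helper are illustrative, and whether a given link type supports pinning depends on the kernel:

  struct bpf_link *link = bpf_program__attach_trace(prog);   /* e.g. an fentry program */

  if (libbpf_get_error(link))
          return -1;
  if (bpf_link__pin(link, "/sys/fs/bpf/my_fentry_link"))
          return -1;                /* some link types may not support pinning */

  /* the BPF FS pin holds a kernel reference, so the attachment survives
   * after this process closes its link FD */
  bpf_link__destroy(link);

  /* later, possibly from another process: reopen and unpin */
  link = bpf_link__open("/sys/fs/bpf/my_fentry_link");
  if (!libbpf_get_error(link))
          bpf_link__unpin(link);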
2020-03-03  libbpf: Merge selftests' bpf_trace_helpers.h into libbpf's bpf_tracing.h  (Andrii Nakryiko, 1 file, -0/+118)
Move BPF_PROG, BPF_KPROBE, and BPF_KRETPROBE macro into libbpf's bpf_tracing.h header to make it available for non-selftests users. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200229231112.1240137-5-andriin@fb.com
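A hedged example of the macros now available from bpf_tracing.h; the traced kernel function is just an illustration:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  SEC("kprobe/do_unlinkat")
  int BPF_KPROBE(handle_unlinkat, int dfd, struct filename *name)
  {
          /* BPF_KPROBE unpacks the arguments from pt_regs so they can be named directly */
          bpf_printk("unlinkat dfd=%d", dfd);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";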
2020-03-03  libbpf: Fix use of PT_REGS_PARM macros with vmlinux.h  (Andrii Nakryiko, 1 file, -1/+1)
Add detection of vmlinux.h to bpf_tracing.h header for PT_REGS macro. Currently, BPF applications have to define __KERNEL__ symbol to use correct definition of struct pt_regs on x86 arch. This is due to different field names under internal kernel vs UAPI conditions. To make this more transparent for users, detect vmlinux.h by checking __VMLINUX_H__ symbol. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200229231112.1240137-3-andriin@fb.com
2020-02-22  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller, 3 files, -7/+40)
Daniel Borkmann says:

====================
pull-request: bpf-next 2020-02-21

The following pull-request contains BPF updates for your *net-next* tree.

We've added 25 non-merge commits during the last 4 day(s) which contain a total of 33 files changed, 2433 insertions(+), 161 deletions(-).

The main changes are:

1) Allow for adding TCP listen sockets into sock_map/hash so they can be used with reuseport BPF programs, from Jakub Sitnicki.

2) Add a new bpf_program__set_attach_target() helper for adding libbpf support to specify the tracepoint/function dynamically, from Eelco Chaudron.

3) Add bpf_read_branch_records() BPF helper which helps use cases like profile guided optimizations, from Daniel Xu.

4) Enable bpf_perf_event_read_value() in all tracing programs, from Song Liu.

5) Relax BTF mandatory check if only used for libbpf itself e.g. to process BTF defined maps, from Andrii Nakryiko.

6) Move BPF selftests -mcpu compilation attribute from 'probe' to 'v3' as it has been observed that former fails in envs with low memlock, from Yonghong Song.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-02-21  libbpf: Add support for dynamic program attach target  (Eelco Chaudron, 3 files, -4/+36)
Currently when you want to attach a trace program to a bpf program the section name needs to match the tracepoint/function semantics. However the addition of the bpf_program__set_attach_target() API allows you to specify the tracepoint/function dynamically.

The call flow would look something like this:

  xdp_fd = bpf_prog_get_fd_by_id(id);
  trace_obj = bpf_object__open_file("func.o", NULL);
  prog = bpf_object__find_program_by_title(trace_obj, "fentry/myfunc");
  bpf_program__set_expected_attach_type(prog, BPF_TRACE_FENTRY);
  bpf_program__set_attach_target(prog, xdp_fd, "xdpfilt_blk_all");
  bpf_object__load(trace_obj)

Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/158220519486.127661.7964708960649051384.stgit@xdp-tutorial
2020-02-21  libbpf: Bump libbpf current version to v0.0.8  (Eelco Chaudron, 1 file, -0/+3)
A new development cycle starts; bump to v0.0.8. Signed-off-by: Eelco Chaudron <echaudro@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/158220518424.127661.8278643006567775528.stgit@xdp-tutorial
2020-02-20  libbpf: Relax check whether BTF is mandatory  (Andrii Nakryiko, 1 file, -3/+1)
If BPF program is using BTF-defined maps, BTF is required only for libbpf itself to process map definitions. If after that BTF fails to be loaded into kernel (e.g., if it doesn't support BTF at all), this shouldn't prevent valid BPF program from loading. Existing retry-without-BTF logic for creating maps will succeed to create such maps without any problems. So, presence of .maps section shouldn't make BTF required for kernel. Update the check accordingly. Validated by ensuring simple BPF program with BTF-defined maps is still loaded on old kernel without BTF support and map is correctly parsed and created. Fixes: abd29c931459 ("libbpf: allow specifying map definitions using BTF") Reported-by: Julia Kartseva <hex@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200220062635.1497872-1-andriin@fb.com