path: root/tools/perf/util
Age | Commit message | Author | Files | Lines
2025-03-13 | perf ftrace: Use atomic inc to update histogram in BPF | Namhyung Kim | 1 | -1/+1
It should use an atomic instruction to update even if the histogram is keyed by delta as it's also used for stats. Cc: Gabriele Monaco <gmonaco@redhat.com> Link: https://lore.kernel.org/r/20250227191223.1288473-3-namhyung@kernel.org Signed-off-by: Namhyung Kim <namhyung@kernel.org>
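For illustration, a minimal sketch of the pattern described above (not the actual perf ftrace BPF code; the map shape and names below are assumptions): bump the bucket with __sync_fetch_and_add() so concurrent CPUs do not lose updates that the stats side also reads.

```
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Illustrative only: map layout and names are assumptions. */
struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, __u64);
} latency SEC(".maps");

static void update_hist(__u32 key)
{
	__u64 *cnt = bpf_map_lookup_elem(&latency, &key);

	if (cnt)
		__sync_fetch_and_add(cnt, 1); /* atomic add: safe against concurrent CPUs */
}

char LICENSE[] SEC("license") = "GPL";
```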
2025-03-13 | perf ftrace: Remove an unnecessary condition check in BPF | Namhyung Kim | 1 | -2/+1
The bucket_num is set based on the {max,min}_latency already in cmd_ftrace(), so no need to check it again in BPF. Also I found that it didn't pass the max_latency to BPF. :) No functional changes intended. Cc: Gabriele Monaco <gmonaco@redhat.com> Link: https://lore.kernel.org/r/20250227191223.1288473-2-namhyung@kernel.org Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-13 | perf ftrace: Fix latency stats with BPF | Namhyung Kim | 2 | -13/+15
When BPF collects the stats for the latency in usec, it first divides the time by 1000. But that means it would have 0 if the delta is small and won't update the total time properly. Let's keep the stats in nsec always and adjust to usec before printing.

Before:

  $ sudo ./perf ftrace latency -ab -T mutex_lock --hide-empty -- sleep 0.1
  # DURATION | COUNT | GRAPH |
  0 - 1 us | 765 | ############################################# |
  1 - 2 us | 10 | |
  2 - 4 us | 2 | |
  4 - 8 us | 5 | |

  # statistics (in usec)
  total time: 0    <<<--- (here)
  avg time:   0
  max time:   6
  min time:   0
  count:    782

After:

  $ sudo ./perf ftrace latency -ab -T mutex_lock --hide-empty -- sleep 0.1
  # DURATION | COUNT | GRAPH |
  0 - 1 us | 880 | ############################################ |
  1 - 2 us | 13 | |
  2 - 4 us | 8 | |
  4 - 8 us | 3 | |

  # statistics (in usec)
  total time: 268    <<<--- (here)
  avg time:   0
  max time:   6
  min time:   0
  count:    904

Tested-by: Athira Rajeev <atrajeev@linux.ibm.com>
Cc: Gabriele Monaco <gmonaco@redhat.com>
Link: https://lore.kernel.org/r/20250227191223.1288473-1-namhyung@kernel.org
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
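To make the arithmetic concrete, a small self-contained sketch (names assumed, not taken from the patch) that accumulates in nanoseconds and divides only when printing:

```
#include <stdio.h>

/* Sketch: accumulate in ns, convert to us only for display. */
struct latency_stats {
	unsigned long long total_ns;
	unsigned long long count;
};

static void update(struct latency_stats *st, unsigned long long delta_ns)
{
	st->total_ns += delta_ns;	/* no early /1000, so a 600ns delta still counts */
	st->count++;
}

static void print(const struct latency_stats *st)
{
	printf("total time: %llu us\n", st->total_ns / 1000);
	printf("count:      %llu\n", st->count);
}

int main(void)
{
	struct latency_stats st = {0};

	update(&st, 600);	/* would have been lost as 0 us with early division */
	update(&st, 700);
	print(&st);		/* prints: total time: 1 us, count: 2 */
	return 0;
}
```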
2025-03-12 | perf parse-events: Corrections to topdown sorting | Ian Rogers | 1 | -49/+96
In the case of '{instructions,slots},faults,topdown-retiring' the first event that must be grouped, slots, is ignored causing the topdown-retiring event not to be adjacent to the group it needs to be inserted into. Don't ignore the group members when computing the force_grouped_index. Make the force_grouped_index be for the leader of the group it is within and always use it first rather than a group leader index so that topdown events may be sorted from one group into another. As the PMU name comparison applies to moving events in the same group ensure the name ordering is always respected. Change the group splitting logic to not group if there are no other topdown events and to fix cases where the force group leader wasn't being grouped with the other members of its group. Reported-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Closes: https://lore.kernel.org/lkml/20250224083306.71813-2-dapeng1.mi@linux.intel.com/ Closes: https://lore.kernel.org/lkml/f7e4f7e8-748c-4ec7-9088-0e844392c11a@linux.intel.com/ Signed-off-by: Ian Rogers <irogers@google.com> Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://lore.kernel.org/r/20250307023906.1135613-3-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-12 | perf tools: Improve handling of hybrid PMUs in perf_event_attr__fprintf | Ian Rogers | 1 | -49/+75
Support the PMU name from the legacy hardware and hw_cache PMU extended types. Remove some macros and make variables more intention revealing, rather than just being called "value".

Before:
```
$ perf stat -vv -e instructions true
...
------------------------------------------------------------
perf_event_attr:
  type            0 (PERF_TYPE_HARDWARE)
  size            136
  config          0xa00000001
  sample_type     IDENTIFIER
  read_format     TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
  disabled        1
  inherit         1
  enable_on_exec  1
  exclude_guest   1
------------------------------------------------------------
sys_perf_event_open: pid 181636  cpu -1  group_fd -1  flags 0x8 = 5
------------------------------------------------------------
perf_event_attr:
  type            0 (PERF_TYPE_HARDWARE)
  size            136
  config          0x400000001
  sample_type     IDENTIFIER
  read_format     TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
  disabled        1
  inherit         1
  enable_on_exec  1
  exclude_guest   1
------------------------------------------------------------
sys_perf_event_open: pid 181636  cpu -1  group_fd -1  flags 0x8 = 6
...
```

After:
```
$ perf stat -vv -e instructions true
...
------------------------------------------------------------
perf_event_attr:
  type            0 (PERF_TYPE_HARDWARE)
  size            136
  config          0xa00000001 (cpu_atom/PERF_COUNT_HW_INSTRUCTIONS/)
  sample_type     IDENTIFIER
  read_format     TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
  disabled        1
  inherit         1
  enable_on_exec  1
------------------------------------------------------------
sys_perf_event_open: pid 181724  cpu -1  group_fd -1  flags 0x8 = 5
------------------------------------------------------------
perf_event_attr:
  type            0 (PERF_TYPE_HARDWARE)
  size            136
  config          0x400000001 (cpu_core/PERF_COUNT_HW_INSTRUCTIONS/)
  sample_type     IDENTIFIER
  read_format     TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
  disabled        1
  inherit         1
  enable_on_exec  1
------------------------------------------------------------
sys_perf_event_open: pid 181724  cpu -1  group_fd -1  flags 0x8 = 6
...
```
Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Tested-by: Thomas Falcon <thomas.falcon@intel.com>
Tested-by: James Clark <james.clark@linaro.org>
Link: https://lore.kernel.org/r/20250307023906.1135613-1-irogers@google.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
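The PMU names in parentheses come from the extended type that hybrid systems encode in the upper 32 bits of config for legacy hardware events. A self-contained sketch of that decoding (bit layout per the perf_event_open ABI; the printed interpretation is just an example for this machine):

```
#include <stdio.h>
#include <stdint.h>

/* Legacy hardware events on hybrid systems carry the PMU type in
 * config[63:32] and the PERF_COUNT_HW_* id in config[31:0]
 * (see PERF_PMU_TYPE_SHIFT / PERF_HW_EVENT_MASK in the ABI headers). */
int main(void)
{
	uint64_t config = 0xa00000001ULL;		/* value from the dump above */
	uint32_t pmu_type = (uint32_t)(config >> 32);	/* 0xa: e.g. cpu_atom here */
	uint32_t hw_id = (uint32_t)(config & 0xffffffffULL); /* 1 == PERF_COUNT_HW_INSTRUCTIONS */

	printf("pmu type %u, hw event id %u\n", (unsigned)pmu_type, (unsigned)hw_id);
	return 0;
}
```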
2025-03-12 | perf python: Add evlist.config to set up record options | Ian Rogers | 1 | -0/+33
Add access to evlist__config that is used to configure an evlist with record options. Reviewed-by: Howard Chu <howardchu95@gmail.com> Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: https://lore.kernel.org/r/20250228222308.626803-11-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-12 | perf python: Add evlist all_cpus accessor | Ian Rogers | 1 | -0/+16
Add a means to get the reference counted all_cpus CPU map from an evlist in its python form. Reviewed-by: Howard Chu <howardchu95@gmail.com> Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: https://lore.kernel.org/r/20250228222308.626803-10-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-12 | perf python: Avoid duplicated code in get_tracepoint_field | Ian Rogers | 1 | -13/+4
The code replicates computations done in evsel__tp_format, reuse evsel__tp_format to simplify the python C code. Reviewed-by: Howard Chu <howardchu95@gmail.com> Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: https://lore.kernel.org/r/20250228222308.626803-9-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-12 | perf python: Update ungrouped evsel leader in clone | Ian Rogers | 1 | -0/+2
evsels are cloned in the python code as they form part of the Python object pyrf_evsel. The cloning doesn't update the evsel's leader, do this for the case of an evsel being ungrouped. Reviewed-by: Howard Chu <howardchu95@gmail.com> Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: https://lore.kernel.org/r/20250228222308.626803-8-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-12 | perf python: Add optional cpus and threads arguments to parse_events | Ian Rogers | 1 | -2/+8
Used for the evlist initialization. Reviewed-by: Howard Chu <howardchu95@gmail.com> Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: https://lore.kernel.org/r/20250228222308.626803-7-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-12 | perf python: Add member access to a number of evsel variables | Ian Rogers | 1 | -0/+23
Most variables are part of the perf_event_attr, so that they may be queried and modified. Reviewed-by: Howard Chu <howardchu95@gmail.com> Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: https://lore.kernel.org/r/20250228222308.626803-6-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-12 | perf python: Add evlist enable and disable methods | Ian Rogers | 1 | -0/+26
By default the evsels from parse_events will be disabled. Add access to the evlist functions so they can be enabled/disabled. Reviewed-by: Howard Chu <howardchu95@gmail.com> Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: https://lore.kernel.org/r/20250228222308.626803-5-irogers@google.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-12 | perf evsel: tp_format accessing improvements | Ian Rogers | 1 | -1/+15
Ensure evsel__clone copies the tp_sys and tp_name variables. In evsel__tp_format, if tp_sys isn't set, use the config value to find the tp_format. This succeeds in python code where pyrf__tracepoint has already found the format. Reviewed-by: Howard Chu <howardchu95@gmail.com> Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: https://lore.kernel.org/r/20250228222308.626803-4-irogers@google.com Fixes: 6c8310e8380d472c ("perf evsel: Allow evsel__newtp without libtraceevent") Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-12 | perf evlist: Add success path to evlist__create_syswide_maps | Ian Rogers | 1 | -7/+6
Over various refactorings evlist__create_syswide_maps has been made to only ever return with -ENOMEM. Fix this so that when perf_evlist__set_maps is successfully called, 0 is returned. Reviewed-by: Howard Chu <howardchu95@gmail.com> Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: https://lore.kernel.org/r/20250228222308.626803-3-irogers@google.com Fixes: 8c0498b6891d7ca5 ("perf evlist: Fix create_syswide_maps() not propagating maps") Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-12 | perf debug: Avoid stack overflow in recursive error message | Ian Rogers | 1 | -1/+1
In debug_file, pr_warning_once is called on error. As that function calls debug_file the function will yield a stack overflow. Switch the location of the call so the recursion is avoided. Reviewed-by: Howard Chu <howardchu95@gmail.com> Signed-off-by: Ian Rogers <irogers@google.com> Reviewed-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: https://lore.kernel.org/r/20250228222308.626803-2-irogers@google.com Fixes: ec49230cf6dda704 ("perf debug: Expose debug file") Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-11 | perf symbol: Support .gnu_debugdata for symbols | Stephen Brennan | 4 | -2/+110
Fedora introduced a "MiniDebuginfo" feature, in which an LZMA-compressed ELF file is placed inside a section named ".gnu_debugdata". This file contains nothing but a symbol table, which can be used to supplement the .dynsym section which only contains required symbols for runtime. It is supported by GDB for stack traces, but it should be useful for tracing as well. Implement support for loading symbols from .gnu_debugdata. Signed-off-by: Stephen Brennan <stephen.s.brennan@oracle.com> Reviewed-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: https://lore.kernel.org/r/20250307232206.2102440-4-stephen.s.brennan@oracle.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-11 | perf tools: Add LZMA decompression from FILE | Stephen Brennan | 2 | -11/+26
Internally lzma_decompress_to_file() creates a FILE from the filename. Add an API that takes an existing FILE directly. This allows decompressing already-open files and even buffers opened by fmemopen(). It is necessary for supporting .gnu_debugdata in the next patch. Signed-off-by: Stephen Brennan <stephen.s.brennan@oracle.com> Reviewed-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: https://lore.kernel.org/r/20250307232206.2102440-3-stephen.s.brennan@oracle.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
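A minimal sketch of why a FILE-based entry point helps: any in-memory buffer can be presented as a FILE via fmemopen(3), so a FILE-based decompressor also covers buffers such as an extracted .gnu_debugdata section. The decompression call itself is elided below; only the FILE wrapping is shown and the comments are assumptions, not the patch's API.

```
#define _GNU_SOURCE
#include <stdio.h>

int main(void)
{
	/* Pretend this buffer holds the bytes of a .gnu_debugdata section. */
	char buf[] = "compressed-bytes-would-go-here";
	FILE *in = fmemopen(buf, sizeof(buf) - 1, "r");
	char tmp[8];

	if (!in)
		return 1;

	/* The FILE behaves like any other stream; in perf such a FILE could
	 * be handed to the new FILE-based LZMA decompression helper instead
	 * of a path-based one. */
	printf("read %zu bytes via FILE wrapper\n", fread(tmp, 1, sizeof(tmp), in));
	fclose(in);
	return 0;
}
```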
2025-03-11 | perf tools: Add dummy functions for !HAVE_LZMA_SUPPORT | Stephen Brennan | 1 | -0/+12
This allows us to use them without needing to ifdef the calling code. Signed-off-by: Stephen Brennan <stephen.s.brennan@oracle.com> Reviewed-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: https://lore.kernel.org/r/20250307232206.2102440-2-stephen.s.brennan@oracle.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-11 | perf mem: Don't leak mem event names | Ian Rogers | 2 | -28/+42
When preparing the mem events for the argv, copies are intentionally made. These copies are leaked and cause runs of perf using address sanitizer to fail. Rather than leak the memory, allocate a chunk of memory for the mem event names upfront and build the strings in this; the storage is sized larger than the previous buffer size. The caller is then responsible for clearing up this memory.

As part of this change, remove the mem_loads_name and mem_stores_name global buffers, then change perf_pmu__mem_events_name to write to an out argument buffer.

Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Thomas Falcon <thomas.falcon@intel.com>
Reviewed-by: Leo Yan <leo.yan@arm.com>
Link: https://lore.kernel.org/r/20250308012853.1384762-1-irogers@google.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-10 | perf util: Remove unused perf_config__refresh | Dr. David Alan Gilbert | 2 | -7/+0
perf_config__refresh() was added in 2016 by commit 8a0a9c7e9146 ("perf config: Introduce new init() and exit()") but has remained unused. Remove it. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Reviewed-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250305023120.155420-7-linux@treblig.org Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-10 | perf util: Remove unused perf_pmus__default_pmu_name | Dr. David Alan Gilbert | 2 | -30/+0
perf_pmus__default_pmu_name() last use was removed by 2023's commit e3edd6cf6399 ("perf pmu-events: Reduce processed events by passing PMU") Remove it. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Reviewed-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250305023120.155420-6-linux@treblig.org Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-10 | perf util: Remove unused perf_data__update_dir | Dr. David Alan Gilbert | 2 | -21/+0
perf_data__update_dir() was added in 2019's commit e8be135751f2 ("perf data: Add perf_data__update_dir() function") but has never been used. Remove it. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Reviewed-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250305023120.155420-5-linux@treblig.org Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-10 | perf util: Remove unused pstack__pop | Dr. David Alan Gilbert | 2 | -15/+0
The last use of pstack__pop() was removed in 2015 by commit 6422184b087f ("perf hists browser: Simplify zooming code using pstack_peek()") Remove it. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Reviewed-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250305023120.155420-4-linux@treblig.org Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-10 | perf util: Remove unused perf_color_default_config | Dr. David Alan Gilbert | 2 | -16/+0
perf_color_default_config() was added in 2009 by commit 8fc0321f1ad0 ("perf_counter tools: Add color terminal output support") but has remained unused. Remove it. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Reviewed-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250305023120.155420-3-linux@treblig.org Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-08 | perf report: Fix memory leaks in the hierarchy mode | Namhyung Kim | 1 | -0/+10
Ian told me that there are many memory leaks in the hierarchy mode. I can easily reproduce it with the following command.

  $ make DEBUG=1 EXTRA_CFLAGS=-fsanitize=leak
  $ perf record --latency -g -- ./perf test -w thloop
  $ perf report -H --stdio
  ...
  Indirect leak of 168 byte(s) in 21 object(s) allocated from:
    #0 0x7f3414c16c65 in malloc ../../../../src/libsanitizer/lsan/lsan_interceptors.cpp:75
    #1 0x55ed3602346e in map__get util/map.h:189
    #2 0x55ed36024cc4 in hist_entry__init util/hist.c:476
    #3 0x55ed36025208 in hist_entry__new util/hist.c:588
    #4 0x55ed36027c05 in hierarchy_insert_entry util/hist.c:1587
    #5 0x55ed36027e2e in hists__hierarchy_insert_entry util/hist.c:1638
    #6 0x55ed36027fa4 in hists__collapse_insert_entry util/hist.c:1685
    #7 0x55ed360283e8 in hists__collapse_resort util/hist.c:1776
    #8 0x55ed35de0323 in report__collapse_hists /home/namhyung/project/linux/tools/perf/builtin-report.c:735
    #9 0x55ed35de15b4 in __cmd_report /home/namhyung/project/linux/tools/perf/builtin-report.c:1119
    #10 0x55ed35de43dc in cmd_report /home/namhyung/project/linux/tools/perf/builtin-report.c:1867
    #11 0x55ed35e66767 in run_builtin /home/namhyung/project/linux/tools/perf/perf.c:351
    #12 0x55ed35e66a0e in handle_internal_command /home/namhyung/project/linux/tools/perf/perf.c:404
    #13 0x55ed35e66b67 in run_argv /home/namhyung/project/linux/tools/perf/perf.c:448
    #14 0x55ed35e66eb0 in main /home/namhyung/project/linux/tools/perf/perf.c:556
    #15 0x7f340ac33d67 in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
  ...

  $ perf report -H --stdio 2>&1 | grep -c '^Indirect leak'
  93

I found that hist_entry__delete() missed releasing child entries in the hierarchy tree (hroot_{in,out}). It needs to iterate the child entries and call hist_entry__delete() recursively.

After this change:

  $ perf report -H --stdio 2>&1 | grep -c '^Indirect leak'
  0

Reported-by: Ian Rogers <irogers@google.com>
Tested-by: Thomas Falcon <thomas.falcon@intel.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20250307061250.320849-2-namhyung@kernel.org
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
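A hedged sketch of the shape of such a recursive release (not the exact patch; it only builds inside the perf source tree and the hroot_out/rb_node field names are assumptions based on util/hist.c): erase each hierarchy child from its rbtree and recurse before freeing the entry itself.

```
/* Sketch against perf's util/hist.c internals; field names assumed. */
static void hist_entry__delete_recursive(struct hist_entry *he)
{
	struct rb_node *node;

	while ((node = rb_first_cached(&he->hroot_out)) != NULL) {
		struct hist_entry *child = rb_entry(node, struct hist_entry, rb_node);

		rb_erase_cached(node, &he->hroot_out);
		hist_entry__delete_recursive(child);	/* children may have children */
	}
	hist_entry__delete(he);				/* existing per-entry teardown */
}
```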
2025-03-08 | perf report: Use map_symbol__copy() when copying callchains | Namhyung Kim | 1 | -7/+3
It seems there are places that miss updating the refcount of maps. Let's use the map_symbol__copy() helper to properly copy them with the refcounts updated.
Link: https://lore.kernel.org/r/20250307061250.320849-1-namhyung@kernel.org
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-07 | perf annotate: Return errors from disasm_line__parse_powerpc() | Athira Rajeev | 1 | -2/+3
In disasm_line__parse_powerpc(), the return code from disasm_line__parse() is ignored. This gives bad results if disasm_line__parse() fails to disassemble the line. Use the return code to fix this.
Signed-off-by: Athira Rajeev <atrajeev@linux.ibm.com>
Tested-By: Venkat Rao Bagalkote <venkat88@linux.ibm.com>
Link: https://lore.kernel.org/r/20250304154114.62093-2-atrajeev@linux.ibm.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-07 | perf annotate: Add annotation_options.disassembler_used | Athira Rajeev | 2 | -9/+14
When doing "perf annotate", the perf tool provides an option to use a specific disassembler such as llvm, objdump or capstone. The order picked is to use llvm first and, if that fails, fall back to the others, i.e. PERF_DISASM_LLVM, PERF_DISASM_CAPSTONE and then PERF_DISASM_OBJDUMP.

On powerpc, when using "data type" sort keys, the first preferred approach is to read the raw instruction from the DSO. If objdump is specified in the "--objdump" option, it picks the symbol disassembly using objdump. Currently the disasm_line__parse_powerpc() function uses the length of the "line" to determine whether objdump was used. But there are a few cases where, if objdump doesn't recognise the instruction, the disassembled string will be empty.

Example:

  134cdc:  c4 05 82 41   beq  1352a0 <getcwd+0x6e0>
  134ce0:  ac 00 8e 40   bne  cr3,134d8c <getcwd+0x1cc>
  134ce4:  0f 00 10 04   pld  r9,1028308
  ====>134ce8:  d4 b0 20 e5
  134cec:  16 00 40 39   li   r10,22
  134cf0:  48 01 21 ea   ld   r17,328(r1)

So depending on the length of the line gives bad results. Add a new field to the annotation options structure, "struct annotation_options", to save the disassembler used. Use this info to determine whether disassembly is done while parsing the disasm line.

Reported-by: Tejas Manhas <Tejas.Manhas1@ibm.com>
Signed-off-by: Athira Rajeev <atrajeev@linux.ibm.com>
Tested-By: Venkat Rao Bagalkote <venkat88@linux.ibm.com>
Link: https://lore.kernel.org/r/20250304154114.62093-1-atrajeev@linux.ibm.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-07 | perf report: Do not process non-JIT BPF ksymbol events | Namhyung Kim | 1 | -0/+4
The length of PERF_RECORD_KSYMBOL for BPF is the size of the JITed code, so it'd be 0 when the program is not JITed. The ksymbol is needed to symbolize the code when it gets samples in the region, but non-JITed code cannot get samples, thus it's ok to ignore them.

Actually this caused a performance issue in the perf tools on old ARM kernels which can refuse to JIT some BPF programs. It ended up splitting the existing kernel map (kallsyms), and a later lookup for a kernel symbol would create a new kernel map from kallsyms and then split it again and again. :(

Probably there's a bug in the kernel map/symbol handling in perf tools, but I think we need to fix this anyway.

Reported-by: Kevin Nomura <nomurak@google.com>
Acked-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20250305232838.128692-1-namhyung@kernel.org
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
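The guard itself is small; a hedged sketch of the idea follows (the exact placement and struct access in the real patch may differ; the ksymbol struct below is a stand-in for perf's event type).

```
#include <linux/perf_event.h>	/* PERF_RECORD_KSYMBOL_TYPE_BPF */
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for the ksymbol record fields used here; the real struct
 * lives in perf's event headers. */
struct ksymbol_ev {
	uint64_t addr;
	uint32_t len;
	uint16_t ksym_type;
};

/* A BPF ksymbol with len == 0 was not JITed, so no samples can ever
 * land in its region; skip it instead of letting it split the kernel
 * (kallsyms) map. */
static bool skip_non_jit_bpf_ksymbol(const struct ksymbol_ev *ev)
{
	return ev->ksym_type == PERF_RECORD_KSYMBOL_TYPE_BPF && ev->len == 0;
}
```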
2025-03-06 | perf machine: Fix insertion of PERF_RECORD_KSYMBOL related kernel maps | Namhyung Kim | 1 | -1/+1
This was detected at the end of a 'perf record' session when build-id collection was enabled: the BPF programs put in place while the session was running, some even put in place by perf itself, were processed and inserted, and some overlaps related to BPF trampolines and programs took place.

Using maps__fixup_overlap_and_insert() instead of maps__insert() "fixes" the problem, in the sense that overlaps will be dealt with and then the consistency will be kept, but it would be interesting to fully understand why such overlaps take place and how to deal with them when doing symbol resolution.

Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Suggested-by: Ian Rogers <irogers@google.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/lkml/CAP-5=fXEEMFgPF2aZhKsfrY_En+qoqX20dWfuE_ad73Uxf0ZHQ@mail.gmail.com
Link: https://lore.kernel.org/r/20250228211734.33781-7-acme@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-06 | perf maps: Add missing map__set_kmap_maps() when replacing a kernel map | Arnaldo Carvalho de Melo | 1 | -0/+2
In this case __maps__insert_sorted() is not called, and thus doesn't have the opportunity to do the needed map__set_kmap_maps() calls on the new map.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Reviewed-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/lkml/Z7-May5w9VQd5QD0@x1
Link: https://lore.kernel.org/r/20250228211734.33781-6-acme@kernel.org
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-06 | perf maps: Fixup maps_by_name when modifying maps_by_address | Namhyung Kim | 1 | -1/+23
We can't just replace the map in maps_by_address and not touch maps_by_name; that would leave the refcount as 1 and thus trip another consistency check, this one:

  perf: util/maps.c:110: check_invariants: Assertion `refcount_read(map__refcnt(map)) > 1' failed.

  106             /*
  107              * Maps by name maps should be in maps_by_address, so
  108              * the reference count should be higher.
  109              */
  110             assert(refcount_read(map__refcnt(map)) > 1);

Committer notice:

Initialize the newly added 'ni' variable; although it really can't be accessed uninitialized, it trips some gcc versions, like:

  12    20.00 archlinux:base : FAIL gcc version 13.2.1 20230801 (GCC)
  util/maps.c: In function ‘__maps__fixup_overlap_and_insert’:
  util/maps.c:896:54: error: ‘ni’ may be used uninitialized [-Werror=maybe-uninitialized]
    896 |                                 map__put(maps_by_name[ni]);
        |                                                      ^
  util/maps.c:816:25: note: ‘ni’ was declared here
    816 |         unsigned int i, ni;
        |                         ^~
  cc1: all warnings being treated as errors
  make[3]: *** [/git/perf-6.14.0-rc1/tools/build/Makefile.build:138: util] Error 2

Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/lkml/Z79std66tPq-nqsD@google.com
Link: https://lore.kernel.org/r/20250228211734.33781-5-acme@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-06 | perf machine: Fixup kernel maps ends after adding extra maps | Namhyung Kim | 1 | -2/+2
I just noticed it would add extra kernel maps after modules. I think it should fix up the end addresses of the kernel maps after adding all maps first.
Fixes: 876e80cf83d10585 ("perf tools: Fixup end address of modules")
Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/lkml/Z7TvZGjVix2asYWI@x1
Link: https://lore.kernel.org/lkml/Z712hzvv22Ni63f1@google.com
Link: https://lore.kernel.org/r/20250228211734.33781-4-acme@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-06 | perf maps: Set the kmaps for newly created/added kernel maps | Arnaldo Carvalho de Melo | 1 | -0/+3
When using __maps__insert_sorted() the map kmaps field needs to be initialized, as we need kernel maps to work with map__kmap(). Fix it by using the newly introduced map__set_kmap() method. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Reviewed-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/lkml/Z74V0hZXrTLM6VIJ@x1 Link: https://lore.kernel.org/r/20250228211734.33781-3-acme@kernel.org Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-06 | perf maps: Introduce map__set_kmap_maps() for kernel maps | Arnaldo Carvalho de Melo | 1 | -8/+21
We need to set it in other places than __maps__insert(), so that we can have access to the 'struct maps' from a kernel 'struct map'.

When building perf with 'DEBUG=1' we can notice it failing a consistency check done in the check_invariants() function:

  root@number:~# perf record -- perf test -w offcpu
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.040 MB perf.data (23 samples) ]
  perf: util/maps.c:95: check_invariants: Assertion `map__end(prev) <= map__end(map)' failed.
  Aborted (core dumped)
  root@number:~#

The investigation into that was bisected to 876e80cf83d10585 ("perf tools: Fixup end address of modules"), and the following patches will plug the problems found; this patch is just legwork in that direction.

Use the map__set_kmap_maps() name as per a review comment from Ian Rogers; there are further suggestions from him on getting rid of the kmaps variable, see the thread referenced in the Link below.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Reviewed-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/lkml/Z74V0hZXrTLM6VIJ@x1
Link: https://lore.kernel.org/r/20250228211734.33781-2-acme@kernel.org
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-05 | perf arm-spe: Support previous branch target (PBT) address | Leo Yan | 3 | -50/+70
When FEAT_SPE_PBT is implemented, the previous branch target address (named PBT) before the sampled operation will be recorded.

This commit first introduces a 'prev_br_tgt' field in the record for saving the PBT address in the decoder.

If the current operation is a branch instruction, combining it with the PBT creates a chain of two consecutive branches. The branch stack stores branches in descending order, meaning a newer branch is stored in a lower entry in the stack: Arm SPE stores the latest branch in the first entry of the branch stack, and the previous branch coming from the PBT is stored in the second entry. Otherwise, if the current operation is not a branch, only the last branch from the PBT is saved.

The PBT lacks associated information such as branch source address, branch type, and events. The branch entry fills zeros for the corresponding fields and only sets its target address.

After:

  perf script -f --itrace=bl -F flags,addr,brstack

  jcc ffff800080187914 0xffff8000801878fc/0xffff800080187914/P/-/-/1/COND/- 0x0/0xffff8000801878f8/-/-/-/0//-
  jcc ffff8000802d12d8 0xffff8000802d12f8/0xffff8000802d12d8/P/-/-/1/COND/- 0x0/0xffff8000802d12ec/-/-/-/0//-
  jcc ffff8000813fe200 0xffff8000813fe20c/0xffff8000813fe200/P/-/-/1/COND/- 0x0/0xffff8000813fe200/-/-/-/0//-
  jcc ffff8000813fe200 0xffff8000813fe20c/0xffff8000813fe200/P/-/-/1/COND/- 0x0/0xffff8000813fe200/-/-/-/0//-
  jmp ffff800081410980 0xffff800081419108/0xffff800081410980/P/-/-/1//- 0x0/0xffff800081419104/-/-/-/0//-
  return ffff80008036e064 0xffff80008141ba84/0xffff80008036e064/P/-/-/1/RET/- 0x0/0xffff80008141ba60/-/-/-/0//-
  jcc ffff8000803d54f0 0xffff8000803d54e8/0xffff8000803d54f0/P/-/-/1/COND/- 0x0/0xffff8000803d54e0/-/-/-/0//-
  jmp ffff80008015e468 0xffff8000803d46dc/0xffff80008015e468/P/-/-/1//- 0x0/0xffff8000803d46c8/-/-/-/0//-
  jmp ffff8000806e2d50 0xffff80008040f710/0xffff8000806e2d50/P/-/-/1//- 0x0/0xffff80008040f6e8/-/-/-/0//-
  jcc ffff800080721704 0xffff8000807216b4/0xffff800080721704/P/-/-/1/COND/- 0x0/0xffff8000807216ac/-/-/-/0//-

Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Leo Yan <leo.yan@arm.com>
Link: https://lore.kernel.org/r/20250304111240.3378214-13-leo.yan@arm.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-05 | perf arm-spe: Add branch stack | Leo Yan | 1 | -0/+99
Although Arm SPE cannot generate continuous branch records, this commit creates a branch stack with only one branch entry. A single branch info can be used for performance optimization. A branch stack structure is dynamically allocated in the decode queue. The branch stack and stack flags are synthesized based on branch types and associated events.

After:

  # perf script --itrace=bl1 -F flags,addr,brstack
  jcc ffffc0fad9c6b214 0xffffc0fad9c6b234/0xffffc0fad9c6b214/P/-/-/7/COND/-
  jcc/miss,not_taken/ ffffc0fadaaebb30 0xffffc0fadaaebb2c/0xffffc0fadaaebb30/MN/-/-/7/COND/-
  jmp ffffc0fadaaea358 0xffffc0fadaaea5ec/0xffffc0fadaaea358/P/-/-/5//-
  jcc/not_taken/ ffffc0fadaae6494 0xffffc0fadaae6490/0xffffc0fadaae6494/PN/-/-/11/COND/-
  jcc/not_taken/ ffff7f83ab54 0xffff7f83ab50/0xffff7f83ab54/PN/-/-/13/COND/-
  jcc/not_taken/ ffff7f83ab08 0xffff7f83ab04/0xffff7f83ab08/PN/-/-/8/COND/-
  jcc ffff7f83aa80 0xffff7f83aa58/0xffff7f83aa80/P/-/-/10/COND/-
  jcc ffff7f9a45d0 0xffff7f9a43f0/0xffff7f9a45d0/P/-/-/29/COND/-
  jcc/not_taken/ ffffc0fad9ba6db4 0xffffc0fad9ba6db0/0xffffc0fad9ba6db4/PN/-/-/44/COND/-
  jcc ffffc0fadaac2964 0xffffc0fadaac2970/0xffffc0fadaac2964/P/-/-/6/COND/-
  jcc ffffc0fad99ddc10 0xffffc0fad99ddc04/0xffffc0fad99ddc10/P/-/-/72/COND/-
  jcc/not_taken/ ffffc0fad9b3f21c 0xffffc0fad9b3f218/0xffffc0fad9b3f21c/PN/-/-/64/COND/-
  jcc ffffc0fad9c3b604 0xffffc0fad9c3b5f8/0xffffc0fad9c3b604/P/-/-/13/COND/-
  jcc ffffc0fadaad6048 0xffffc0fadaad5f8c/0xffffc0fadaad6048/P/-/-/5/COND/-
  return/miss/ ffff7f84e614 0xffffc0fad98a2274/0xffff7f84e614/M/-/-/13/RET/-
  jcc/not_taken/ ffffc0fadaac4eb4 0xffffc0fadaac4eb0/0xffffc0fadaac4eb4/PN/-/-/5/COND/-
  jmp ffff7f8e3130 0xffff7f87555c/0xffff7f8e3130/P/-/-/5//-
  jcc/not_taken/ ffffc0fad9b3d9b0 0xffffc0fad9b3d9ac/0xffffc0fad9b3d9b0/PN/-/-/14/COND/-
  return ffffc0fad9b91950 0xffffc0fad98c3e28/0xffffc0fad9b91950/P/-/-/12/RET/-

Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Leo Yan <leo.yan@arm.com>
Link: https://lore.kernel.org/r/20250304111240.3378214-12-leo.yan@arm.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-05 | perf arm-spe: Set sample flags with supplement info | Leo Yan | 1 | -0/+20
Based on the supplement information in the record, this commit sets the sample flags for conditional branch, function call, return. It also sets events in flags, such as mispredict, not taken, and in transaction. Reviewed-by: Ian Rogers <irogers@google.com> Reviewed-by: James Clark <james.clark@linaro.org> Signed-off-by: Leo Yan <leo.yan@arm.com> Link: https://lore.kernel.org/r/20250304111240.3378214-11-leo.yan@arm.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-05 | perf arm-spe: Fill branch operations and events to record | Leo Yan | 2 | -2/+26
The newly added branch operations and events are filled into the record; the information will be consumed when synthesizing samples.
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Leo Yan <leo.yan@arm.com>
Link: https://lore.kernel.org/r/20250304111240.3378214-10-leo.yan@arm.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-05 | perf arm-spe: Decode transactional event | Leo Yan | 2 | -0/+3
The bit[16] in an event payload indicates an operation is in transactional state. Decode the bit. Reviewed-by: Ian Rogers <irogers@google.com> Reviewed-by: James Clark <james.clark@linaro.org> Signed-off-by: Leo Yan <leo.yan@arm.com> Link: https://lore.kernel.org/r/20250304111240.3378214-9-leo.yan@arm.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-05 | perf arm-spe: Extend branch operations | Leo Yan | 2 | -6/+17
In the Arm ARM (ARM DDI 0487, L.a), section "D18.2.7 Operation Type packet", the branch subclass is extended for Call Return (CR) and Guarded control stack data access (GCS). This commit adds support for the CR and GCS operations.

The IND (indirect) operation is defined only in bit [1], so its macro is updated accordingly. Move the COND (conditional) macro into the same group as the other operations for better maintenance.

Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Leo Yan <leo.yan@arm.com>
Link: https://lore.kernel.org/r/20250304111240.3378214-8-leo.yan@arm.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-05 | perf arm-spe: Fix load-store operation checking | Leo Yan | 1 | -1/+7
The ARM_SPE_OP_LD and ARM_SPE_OP_ST operations are secondary operation types; they overlap with other second-level operation types belonging to SVE and branch operations. As a result, a non-load-store operation can be parsed for data source and memory samples.

To fix the issue, this commit introduces an is_ldst_op() macro for checking for a LDST operation, and applies the check when synthesizing data source and memory samples.

Fixes: a89dbc9b988f ("perf arm-spe: Set sample's data source field")
Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Link: https://lore.kernel.org/r/20250304111240.3378214-7-leo.yan@arm.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
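A minimal self-contained sketch of such a guard (the bit values below are stand-ins for illustration; the real flag definitions live in arm-spe-decoder.h):

```
#include <stdbool.h>

/* Assumed bit values, for illustration only. */
#define ARM_SPE_OP_LDST	(1 << 0)	/* first-level class: load/store */
#define ARM_SPE_OP_LD	(1 << 1)	/* second-level, only meaningful with LDST */
#define ARM_SPE_OP_ST	(1 << 2)

#define is_ldst_op(op)	(!!((op) & ARM_SPE_OP_LDST))

static bool should_synth_mem_sample(unsigned int op)
{
	/* Gate data-source/memory synthesis on the first-level class, not on
	 * LD/ST bits that alias SVE/branch second-level operation types. */
	return is_ldst_op(op);
}
```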
2025-03-05 | perf script: Add not taken event for branch stack | Leo Yan | 1 | -1/+2
The branch stack has an existing field for printing the mispredict event; extend the field to print events and add support for the not-taken event.
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Leo Yan <leo.yan@arm.com>
Link: https://lore.kernel.org/r/20250304111240.3378214-6-leo.yan@arm.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-05 | perf script: Add not taken event for branches | Leo Yan | 2 | -2/+5
Some hardware (e.g., Arm SPE) can trace the not taken event for branches. Add a flag for this event and support printing it. Reviewed-by: Ian Rogers <irogers@google.com> Reviewed-by: James Clark <james.clark@linaro.org> Signed-off-by: Leo Yan <leo.yan@arm.com> Link: https://lore.kernel.org/r/20250304111240.3378214-5-leo.yan@arm.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-05 | perf script: Separate events from branch types | Leo Yan | 2 | -4/+37
Branch types and events are two different things. A branch type can be a conditional branch, an indirect branch, a procedure call, a return, or an exception taken, etc. The extra event information describes what happens during a branch, e.g. whether a branch is mispredicted or not taken (specific to conditional branches).

To deliver information about branches, this commit separates events from branch types. It parses branch types first, then appends event strings enclosed by the '/' character. If multiple events occur, the events are separated with a comma (,).

Also add a minor improvement by adding the char 'm' to the char array for the branch mispredict event.

Below are extracted sample flags.

Before:

  branch:       br miss
  instructions: br miss

After:

  branch:       jmp/miss/
  instructions: jmp/miss/

Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Leo Yan <leo.yan@arm.com>
Link: https://lore.kernel.org/r/20250304111240.3378214-4-leo.yan@arm.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
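As a self-contained illustration of the formatting rule described above (branch type first, then any events enclosed in '/' and comma-separated); this is not the perf code itself, just a sketch of the string layout:

```
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Sketch: render "type" plus optional events as type/ev1,ev2/. */
static void flags_to_name(char *buf, size_t sz, const char *type,
			  bool mispred, bool not_taken)
{
	char events[32] = "";

	if (mispred)
		strcat(events, "miss");
	if (not_taken)
		strcat(events, events[0] ? ",not_taken" : "not_taken");

	if (events[0])
		snprintf(buf, sz, "%s/%s/", type, events);
	else
		snprintf(buf, sz, "%s", type);
}

int main(void)
{
	char buf[64];

	flags_to_name(buf, sizeof(buf), "jcc", true, true);
	printf("%s\n", buf);	/* prints: jcc/miss,not_taken/ */
	return 0;
}
```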
2025-03-05 | perf script: Refactor sample_flags_to_name() function | Leo Yan | 2 | -31/+59
When generating a string for sample flags, the sample_flags_to_name() function lacks the ability to parse the trace start bit or trace end bit. Therefore, the function is invoked multiple times after clearing its unsupported bits. This commit improves the sample_flags_to_name() function to parse sample flags in one go for three kinds of information: - The prefix info for trace start, trace end, etc. - Branch types. - Extra info for transaction and interrupt related info. As a result, the code is simplified to call the sample_flags_to_name() only once. No expectation for any changes in the perf script output. Reviewed-by: Ian Rogers <irogers@google.com> Reviewed-by: James Clark <james.clark@linaro.org> Signed-off-by: Leo Yan <leo.yan@arm.com> Link: https://lore.kernel.org/r/20250304111240.3378214-3-leo.yan@arm.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-05 | perf script: Make printing flags reliable | Leo Yan | 1 | -0/+2
Add a check for the generated string of flags. Print out the raw number if the string generation fails. Use the SAMPLE_FLAGS_STR_ALIGNED_SIZE macro to replace the value '21'. Reviewed-by: Ian Rogers <irogers@google.com> Reviewed-by: James Clark <james.clark@linaro.org> Signed-off-by: Leo Yan <leo.yan@arm.com> Reviewed-by: Adrian Hunter <adrian.hunter@intel.com> Link: https://lore.kernel.org/r/20250304111240.3378214-2-leo.yan@arm.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-03 | perf stat: Fix non-uniquified hybrid legacy events | James Clark | 1 | -6/+6
Legacy hybrid events have attr.type == PERF_TYPE_HARDWARE, so they look like plain legacy events if we only look at attr.type. But legacy events should still be uniquified if they were opened on a non-legacy PMU. Fix it by checking if the evsel is hybrid and forcing needs_uniquify before looking at attr.type.

This restores PMU names on hybrid systems and also changes the "perf stat metrics (shadow stat) test" from a FAIL back to a SKIP (on hybrid). The test was gated on "cycles" appearing alone, which doesn't happen here.

Before:

  $ perf stat -- true
  ...
  <not counted>   instructions:u                          (0.00%)
        162,536   instructions:u    # 0.58 insn per cycle
  ...

After:

  $ perf stat -- true
  ...
  <not counted>   cpu_atom/instructions/u                 (0.00%)
        162,541   cpu_core/instructions/u   # 0.62 insn per cycle
  ...

Fixes: 357b965deba9 ("perf stat: Changes to event name uniquification")
Suggested-by: Ian Rogers <irogers@google.com>
Signed-off-by: James Clark <james.clark@linaro.org>
Tested-by: Thomas Falcon <thomas.falcon@intel.com>
Link: https://lore.kernel.org/r/20250226145526.632380-1-james.clark@linaro.org
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-02 | perf tools: Skip BPF sideband event for userspace profiling | Namhyung Kim | 2 | -0/+15
The BPF sideband information is tracked using a separate thread and evlist. But it's only useful for profiling kernel and we can skip it when users profile their application only. It seems it already fails to open the sideband event in that case. Let's remove the noise in the verbose output anyway. Reviewed-by: Ian Rogers <irogers@google.com> Acked-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20250226203039.1099131-1-namhyung@kernel.org Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-02-28 | perf lock: Report owner stack in usermode | Chun-Tse Shao | 2 | -7/+71
This patch parses `owner_lock_stat` into an RB tree, enabling ordered reporting of owner lock statistics with stack traces. It also updates the documentation for the `-o` option in contention mode, decouples `-o` from `-t`, and issues a warning to inform users about the new behavior of `-ov`.

Example output:

  $ sudo ~/linux/tools/perf/perf lock con -abvo -Y mutex-spin -E3 perf bench sched pipe
  ...
  contended  total wait  max wait  avg wait  type   caller

  171        1.55 ms     20.26 us  9.06 us   mutex  pipe_read+0x57
        0xffffffffac6318e7  pipe_read+0x57
        0xffffffffac623862  vfs_read+0x332
        0xffffffffac62434b  ksys_read+0xbb
        0xfffffffface604b2  do_syscall_64+0x82
        0xffffffffad00012f  entry_SYSCALL_64_after_hwframe+0x76
  36         193.71 us   15.27 us  5.38 us   mutex  pipe_write+0x50
        0xffffffffac631ee0  pipe_write+0x50
        0xffffffffac6241db  vfs_write+0x3bb
        0xffffffffac6244ab  ksys_write+0xbb
        0xfffffffface604b2  do_syscall_64+0x82
        0xffffffffad00012f  entry_SYSCALL_64_after_hwframe+0x76
  4          51.22 us    16.47 us  12.80 us  mutex  do_epoll_wait+0x24d
        0xffffffffac691f0d  do_epoll_wait+0x24d
        0xffffffffac69249b  do_epoll_pwait.part.0+0xb
        0xffffffffac693ba5  __x64_sys_epoll_pwait+0x95
        0xfffffffface604b2  do_syscall_64+0x82
        0xffffffffad00012f  entry_SYSCALL_64_after_hwframe+0x76

  === owner stack trace ===

  3          31.24 us    15.27 us  10.41 us  mutex  pipe_read+0x348
        0xffffffffac631bd8  pipe_read+0x348
        0xffffffffac623862  vfs_read+0x332
        0xffffffffac62434b  ksys_read+0xbb
        0xfffffffface604b2  do_syscall_64+0x82
        0xffffffffad00012f  entry_SYSCALL_64_after_hwframe+0x76
  ...

Signed-off-by: Chun-Tse Shao <ctshao@google.com>
Tested-by: Athira Rajeev <atrajeev@linux.ibm.com>
Link: https://lore.kernel.org/r/20250227003359.732948-5-ctshao@google.com
Signed-off-by: Namhyung Kim <namhyung@kernel.org>