author      Linus Torvalds <torvalds@linux-foundation.org>  2022-03-27 23:42:32 +0300
committer   Linus Torvalds <torvalds@linux-foundation.org>  2022-03-27 23:42:32 +0300
commit      7b58b82b86c8b65a2b57a4c6cb96a460654f9e09 (patch)
tree        a13e19f216389f16f1cb6641d54751f167482515 /tools/perf/util/session.c
parent      02f9a04d76b76b80b05ddc33ceabe806b84fda3c (diff)
parent      ab0809af0bee88b689ba289ec8c40aa2be3a17ec (diff)
download    linux-7b58b82b86c8b65a2b57a4c6cb96a460654f9e09.tar.xz
Merge tag 'perf-tools-for-v5.18-2022-03-26' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux
Pull perf tools updates from Arnaldo Carvalho de Melo:
"New features:
perf ftrace:
- Add -n/--use-nsec option to the 'latency' subcommand.
Default: usecs:
$ sudo perf ftrace latency -T dput -a sleep 1
# DURATION | COUNT | GRAPH |
0 - 1 us | 2098375 | ############################# |
1 - 2 us | 61 | |
2 - 4 us | 33 | |
4 - 8 us | 13 | |
8 - 16 us | 124 | |
16 - 32 us | 123 | |
32 - 64 us | 1 | |
64 - 128 us | 0 | |
128 - 256 us | 1 | |
256 - 512 us | 0 | |
Better granularity with nsec:
$ sudo perf ftrace latency -T dput -a -n sleep 1
# DURATION | COUNT | GRAPH |
0 - 1 ns | 0 | |
1 - 2 ns | 0 | |
2 - 4 ns | 0 | |
4 - 8 ns | 0 | |
8 - 16 ns | 0 | |
16 - 32 ns | 0 | |
32 - 64 ns | 0 | |
64 - 128 ns | 1163434 | ############## |
128 - 256 ns | 914102 | ############# |
256 - 512 ns | 884 | |
512 - 1024 ns | 613 | |
1 - 2 us | 31 | |
2 - 4 us | 17 | |
4 - 8 us | 7 | |
8 - 16 us | 123 | |
16 - 32 us | 83 | |
perf lock:
- Add -c/--combine-locks option to merge lock instances in the same
class into a single entry.
# perf lock report -c
Name acquired contended avg wait(ns) total wait(ns) max wait(ns) min wait(ns)
rcu_read_lock 251225 0 0 0 0 0
hrtimer_bases.lock 39450 0 0 0 0 0
&sb->s_type->i_l... 10301 1 662 662 662 662
ptlock_ptr(page) 10173 2 701 1402 760 642
&(ei->i_block_re... 8732 0 0 0 0 0
&xa->xa_lock 8088 0 0 0 0 0
&base->lock 6705 0 0 0 0 0
&p->pi_lock 5549 0 0 0 0 0
&dentry->d_lockr... 5010 4 1274 5097 1844 789
&ep->lock 3958 0 0 0 0 0
- Add -F/--field option to customize the list of fields to output:
$ perf lock report -F contended,wait_max -k avg_wait
Name contended max wait(ns) avg wait(ns)
slock-AF_INET6 1 23543 23543
&lruvec->lru_lock 5 18317 11254
slock-AF_INET6 1 10379 10379
rcu_node_1 1 2104 2104
&dentry->d_lockr... 1 1844 1844
&dentry->d_lockr... 1 1672 1672
&newf->file_lock 15 2279 1025
&dentry->d_lockr... 1 792 792
- Add --synth=no option for record, as there is no need to symbolize;
lock names come from the tracepoints.
perf record:
- Threaded recording, opt-in, via the new --threads command line
option.
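An illustrative invocation (a sketch: it assumes '--threads' with no
spec falls back to a default thread layout, and that the result is a
perf.data directory with one data file per recording thread):
$ perf record --threads -a -- sleep 1
$ perf report --stdio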
- Improve AMD IBS (Instruction-Based Sampling) error handling
messages.
perf script:
- Add 'brstackinsnlen' field (use it with -F) for branch stacks.
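For instance (a sketch, assuming a perf.data that already contains
branch stacks, e.g. recorded with 'perf record -b'):
$ perf script -F +brstackinsnlen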
- Output branch sample type in 'perf script'.
perf report:
- Add "addr_from" and "addr_to" sort dimensions.
- Print branch stack entry type in 'perf report --dump-raw-trace'
- Fix symbolization for chrooted workloads.
Hardware tracing:
Intel PT:
- Add CFE (Control Flow Event) and EVD (Event Data) packets support.
- Add MODE.Exec IFLAG bit support.
Explanation about these features from the "Intel® 64 and IA-32
architectures software developer’s manual combined volumes: 1, 2A,
2B, 2C, 2D, 3A, 3B, 3C, 3D, and 4" PDF at:
https://cdrdv2.intel.com/v1/dl/getContent/671200
At page 3951:
"32.2.4
Event Trace is a capability that exposes details about the
asynchronous events, when they are generated, and when their
corresponding software event handler completes execution. These
include:
o Interrupts, including NMI and SMI, including the interrupt
vector when defined.
o Faults, exceptions including the fault vector.
- Page faults additionally include the page fault address,
when in context.
o Event handler returns, including IRET and RSM.
o VM exits and VM entries.¹
- VM exits include the values written to the “exit reason”
and “exit qualification” VMCS fields.
o INIT and SIPI events.
o TSX aborts, including the abort status returned for the RTM
instructions.
o Shutdown.
Additionally, it provides indication of the status of the
Interrupt Flag (IF), to indicate when interrupts are masked"
ARM CoreSight:
- Use the advertised caps/min_interval as the default sample_period on
ARM SPE.
- Update deduction of TRCCONFIGR register for branch broadcast on
ARM's CoreSight ETM.
Vendor Events (JSON):
Intel:
- Update events and metrics for: Alderlake, Broadwell, Broadwell DE,
BroadwellX, CascadelakeX, Elkhartlake, Bonnell, Goldmont,
GoldmontPlus, Westmere EP-DP, Haswell, HaswellX, Icelake, IcelakeX,
Ivybridge, Ivytown, Jaketown, Knights Landing, Nehalem EP,
Sandybridge, Silvermont, Skylake, Skylake Server, SkylakeX,
Tigerlake, TremontX, Westmere EP-SP, and Westmere EX.
ARM:
- Add support for HiSilicon CPA PMU aliasing.
perf stat:
- Fix forked applications enablement of counters.
- The 'slots' event should only be reordered relative to the position
specified on the command line when 'topdown' events are present;
fix it.
Miscellaneous:
- Sync msr-index, cpufeatures header files with the kernel sources.
- Stop using some deprecated libbpf APIs in 'perf trace'.
- Fix some spelling mistakes.
- Refactor the maps pointers usage to pave the way for using refcount
debugging.
- Only offer the --tui option on perf top, report and annotate when
perf was built with libslang.
- Don't mention --to-ctf in 'perf data --help' when not linking with
the required library, libbabeltrace.
- Use ARRAY_SIZE() instead of ad hoc equivalent, spotted by
array_size.cocci.
- Enhance the matching of sub-commands abbreviations:
'perf c2c rec' -> 'perf c2c record'
'perf c2c recport' -> error
- Set build-id using build-id header on new mmap records.
- Fix generation of 'perf --version' string.
perf test:
- Add test for the arm_spe event.
- Add test to check unwinding using frame-pointer (fp) mode on arm64.
- Make metric testing more robust in 'perf test'.
- Add error message for unsupported branch stack cases.
libperf:
- Add API for allocating new thread map array.
- Fix typo in perf_evlist__open() failure error messages in libperf
tests.
perf c2c:
- Replace bitmap_weight() with bitmap_empty() where appropriate"
* tag 'perf-tools-for-v5.18-2022-03-26' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux: (143 commits)
perf evsel: Improve AMD IBS (Instruction-Based Sampling) error handling messages
perf python: Add perf_env stubs that will be needed in evsel__open_strerror()
perf tools: Enhance the matching of sub-commands abbreviations
libperf tests: Fix typo in perf_evlist__open() failure error messages
tools arm64: Import cputype.h
perf lock: Add -F/--field option to control output
perf lock: Extend struct lock_key to have print function
perf lock: Add --synth=no option for record
tools headers cpufeatures: Sync with the kernel sources
tools headers cpufeatures: Sync with the kernel sources
perf stat: Fix forked applications enablement of counters
tools arch x86: Sync the msr-index.h copy with the kernel sources
perf evsel: Make evsel__env() always return a valid env
perf build-id: Fix spelling mistake "Cant" -> "Can't"
perf header: Fix spelling mistake "could't" -> "couldn't"
perf script: Add 'brstackinsnlen' for branch stacks
perf parse-events: Move slots only with topdown
perf ftrace latency: Update documentation
perf ftrace latency: Add -n/--use-nsec option
perf tools: Fix version kernel tag
...
Diffstat (limited to 'tools/perf/util/session.c')
-rw-r--r--   tools/perf/util/session.c   213
1 file changed, 182 insertions(+), 31 deletions(-)
diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
index 498b05708db5..3b8dfe603e50 100644
--- a/tools/perf/util/session.c
+++ b/tools/perf/util/session.c
@@ -39,7 +39,8 @@
 
 #ifdef HAVE_ZSTD_SUPPORT
 static int perf_session__process_compressed_event(struct perf_session *session,
-                                                  union perf_event *event, u64 file_offset)
+                                                  union perf_event *event, u64 file_offset,
+                                                  const char *file_path)
 {
         void *src;
         size_t decomp_size, src_size;
@@ -61,6 +62,7 @@ static int perf_session__process_compressed_event(struct perf_session *session,
         }
 
         decomp->file_pos = file_offset;
+        decomp->file_path = file_path;
         decomp->mmap_len = mmap_len;
         decomp->head = 0;
 
@@ -100,7 +102,8 @@ static int perf_session__process_compressed_event(struct perf_session *session,
 static int perf_session__deliver_event(struct perf_session *session,
                                        union perf_event *event,
                                        struct perf_tool *tool,
-                                       u64 file_offset);
+                                       u64 file_offset,
+                                       const char *file_path);
 
 static int perf_session__open(struct perf_session *session, int repipe_fd)
 {
@@ -182,7 +185,8 @@ static int ordered_events__deliver_event(struct ordered_events *oe,
                                                     ordered_events);
 
         return perf_session__deliver_event(session, event->event,
-                                           session->tool, event->file_offset);
+                                           session->tool, event->file_offset,
+                                           event->file_path);
 }
 
 struct perf_session *__perf_session__new(struct perf_data *data,
@@ -471,7 +475,8 @@ static int process_event_time_conv_stub(struct perf_session *perf_session __mayb
 
 static int perf_session__process_compressed_event_stub(struct perf_session *session __maybe_unused,
                                                        union perf_event *event __maybe_unused,
-                                                       u64 file_offset __maybe_unused)
+                                                       u64 file_offset __maybe_unused,
+                                                       const char *file_path __maybe_unused)
 {
         dump_printf(": unhandled!\n");
         return 0;
@@ -1072,9 +1077,9 @@ static int process_finished_round(struct perf_tool *tool __maybe_unused,
 }
 
 int perf_session__queue_event(struct perf_session *s, union perf_event *event,
-                              u64 timestamp, u64 file_offset)
+                              u64 timestamp, u64 file_offset, const char *file_path)
 {
-        return ordered_events__queue(&s->ordered_events, event, timestamp, file_offset);
+        return ordered_events__queue(&s->ordered_events, event, timestamp, file_offset, file_path);
 }
 
 static void callchain__lbr_callstack_printf(struct perf_sample *sample)
@@ -1154,14 +1159,15 @@ static void branch_stack__printf(struct perf_sample *sample, bool callstack)
                 struct branch_entry *e = &entries[i];
 
                 if (!callstack) {
-                        printf("..... %2"PRIu64": %016" PRIx64 " -> %016" PRIx64 " %hu cycles %s%s%s%s %x\n",
+                        printf("..... %2"PRIu64": %016" PRIx64 " -> %016" PRIx64 " %hu cycles %s%s%s%s %x %s\n",
                                 i, e->from, e->to,
                                 (unsigned short)e->flags.cycles,
                                 e->flags.mispred ? "M" : " ",
                                 e->flags.predicted ? "P" : " ",
                                 e->flags.abort ? "A" : " ",
                                 e->flags.in_tx ? "T" : " ",
-                                (unsigned)e->flags.reserved);
+                                (unsigned)e->flags.reserved,
+                                e->flags.type ? branch_type_name(e->flags.type) : "");
                 } else {
                         printf("..... %2"PRIu64": %016" PRIx64 "\n",
                                 i, i > 0 ? e->from : e->to);
@@ -1277,13 +1283,14 @@ static void sample_read__printf(struct perf_sample *sample, u64 read_format)
 }
 
 static void dump_event(struct evlist *evlist, union perf_event *event,
-                       u64 file_offset, struct perf_sample *sample)
+                       u64 file_offset, struct perf_sample *sample,
+                       const char *file_path)
 {
         if (!dump_trace)
                 return;
 
-        printf("\n%#" PRIx64 " [%#x]: event: %d\n",
-               file_offset, event->header.size, event->header.type);
+        printf("\n%#" PRIx64 "@%s [%#x]: event: %d\n",
+               file_offset, file_path, event->header.size, event->header.type);
 
         trace_event(event);
 
         if (event->header.type == PERF_RECORD_SAMPLE && evlist->trace_event_sample_raw)
@@ -1486,12 +1493,13 @@
 static int machines__deliver_event(struct machines *machines,
                                    struct evlist *evlist,
                                    union perf_event *event,
                                    struct perf_sample *sample,
-                                   struct perf_tool *tool, u64 file_offset)
+                                   struct perf_tool *tool, u64 file_offset,
+                                   const char *file_path)
 {
         struct evsel *evsel;
         struct machine *machine;
 
-        dump_event(evlist, event, file_offset, sample);
+        dump_event(evlist, event, file_offset, sample, file_path);
 
         evsel = evlist__id2evsel(evlist, sample->id);
@@ -1573,7 +1581,8 @@ static int machines__deliver_event(struct machines *machines,
 static int perf_session__deliver_event(struct perf_session *session,
                                        union perf_event *event,
                                        struct perf_tool *tool,
-                                       u64 file_offset)
+                                       u64 file_offset,
+                                       const char *file_path)
 {
         struct perf_sample sample;
         int ret = evlist__parse_sample(session->evlist, event, &sample);
@@ -1590,7 +1599,7 @@ static int perf_session__deliver_event(struct perf_session *session,
                 return 0;
 
         ret = machines__deliver_event(&session->machines, session->evlist,
-                                      event, &sample, tool, file_offset);
+                                      event, &sample, tool, file_offset, file_path);
 
         if (dump_trace && sample.aux_sample.size)
                 auxtrace__dump_auxtrace_sample(session, &sample);
@@ -1600,7 +1609,8 @@
 
 static s64 perf_session__process_user_event(struct perf_session *session,
                                             union perf_event *event,
-                                            u64 file_offset)
+                                            u64 file_offset,
+                                            const char *file_path)
 {
         struct ordered_events *oe = &session->ordered_events;
         struct perf_tool *tool = session->tool;
@@ -1610,7 +1620,7 @@ static s64 perf_session__process_user_event(struct perf_session *session,
 
         if (event->header.type != PERF_RECORD_COMPRESSED ||
             tool->compressed == perf_session__process_compressed_event_stub)
-                dump_event(session->evlist, event, file_offset, &sample);
+                dump_event(session->evlist, event, file_offset, &sample, file_path);
 
         /* These events are processed right away */
         switch (event->header.type) {
@@ -1669,9 +1679,9 @@ static s64 perf_session__process_user_event(struct perf_session *session,
         case PERF_RECORD_HEADER_FEATURE:
                 return tool->feature(session, event);
         case PERF_RECORD_COMPRESSED:
-                err = tool->compressed(session, event, file_offset);
+                err = tool->compressed(session, event, file_offset, file_path);
                 if (err)
-                        dump_event(session->evlist, event, file_offset, &sample);
+                        dump_event(session->evlist, event, file_offset, &sample, file_path);
                 return err;
         default:
                 return -EINVAL;
@@ -1688,9 +1698,9 @@ int perf_session__deliver_synth_event(struct perf_session *session,
         events_stats__inc(&evlist->stats, event->header.type);
 
         if (event->header.type >= PERF_RECORD_USER_TYPE_START)
-                return perf_session__process_user_event(session, event, 0);
+                return perf_session__process_user_event(session, event, 0, NULL);
 
-        return machines__deliver_event(&session->machines, evlist, event, sample, tool, 0);
+        return machines__deliver_event(&session->machines, evlist, event, sample, tool, 0, NULL);
 }
 
 static void event_swap(union perf_event *event, bool sample_id_all)
@@ -1787,7 +1797,8 @@ int perf_session__peek_events(struct perf_session *session, u64 offset,
 }
 
 static s64 perf_session__process_event(struct perf_session *session,
-                                       union perf_event *event, u64 file_offset)
+                                       union perf_event *event, u64 file_offset,
+                                       const char *file_path)
 {
         struct evlist *evlist = session->evlist;
         struct perf_tool *tool = session->tool;
@@ -1802,7 +1813,7 @@ static s64 perf_session__process_event(struct perf_session *session,
         events_stats__inc(&evlist->stats, event->header.type);
 
         if (event->header.type >= PERF_RECORD_USER_TYPE_START)
-                return perf_session__process_user_event(session, event, file_offset);
+                return perf_session__process_user_event(session, event, file_offset, file_path);
 
         if (tool->ordered_events) {
                 u64 timestamp = -1ULL;
@@ -1811,12 +1822,12 @@ static s64 perf_session__process_event(struct perf_session *session,
                 if (ret && ret != -1)
                         return ret;
 
-                ret = perf_session__queue_event(session, event, timestamp, file_offset);
+                ret = perf_session__queue_event(session, event, timestamp, file_offset, file_path);
                 if (ret != -ETIME)
                         return ret;
         }
 
-        return perf_session__deliver_event(session, event, tool, file_offset);
+        return perf_session__deliver_event(session, event, tool, file_offset, file_path);
 }
 
 void perf_event_header__bswap(struct perf_event_header *hdr)
@@ -2043,7 +2054,7 @@ more:
                 }
         }
 
-        if ((skip = perf_session__process_event(session, event, head)) < 0) {
+        if ((skip = perf_session__process_event(session, event, head, "pipe")) < 0) {
                 pr_err("%#" PRIx64 " [%#x]: failed to process type: %d\n",
                        head, event->header.size, event->header.type);
                 err = -EINVAL;
@@ -2140,7 +2151,8 @@ static int __perf_session__process_decomp_events(struct perf_session *session)
                 size = event->header.size;
 
                 if (size < sizeof(struct perf_event_header) ||
-                    (skip = perf_session__process_event(session, event, decomp->file_pos)) < 0) {
+                    (skip = perf_session__process_event(session, event, decomp->file_pos,
+                                                        decomp->file_path)) < 0) {
                         pr_err("%#" PRIx64 " [%#x]: failed to process type: %d\n",
                                decomp->file_pos + decomp->head, event->header.size, event->header.type);
                         return -EINVAL;
@@ -2171,10 +2183,12 @@ struct reader;
 
 typedef s64 (*reader_cb_t)(struct perf_session *session,
                            union perf_event *event,
-                           u64 file_offset);
+                           u64 file_offset,
+                           const char *file_path);
 
 struct reader {
         int fd;
+        const char *path;
         u64 data_size;
         u64 data_offset;
         reader_cb_t process;
@@ -2186,6 +2200,8 @@ struct reader {
         u64 file_pos;
         u64 file_offset;
         u64 head;
+        u64 size;
+        bool done;
         struct zstd_data zstd_data;
         struct decomp_data decomp_data;
 };
@@ -2292,7 +2308,7 @@ reader__read_event(struct reader *rd, struct perf_session *session,
         skip = -EINVAL;
 
         if (size < sizeof(struct perf_event_header) ||
-            (skip = rd->process(session, event, rd->file_pos)) < 0) {
+            (skip = rd->process(session, event, rd->file_pos, rd->path)) < 0) {
                 pr_err("%#" PRIx64 " [%#x]: failed to process type: %d [%s]\n",
                        rd->file_offset + rd->head, event->header.size,
                        event->header.type, strerror(-skip));
@@ -2303,6 +2319,7 @@ reader__read_event(struct reader *rd, struct perf_session *session,
         if (skip)
                 size += skip;
 
+        rd->size += size;
         rd->head += size;
         rd->file_pos += size;
 
@@ -2359,15 +2376,17 @@ out:
 
 static s64 process_simple(struct perf_session *session,
                           union perf_event *event,
-                          u64 file_offset)
+                          u64 file_offset,
+                          const char *file_path)
 {
-        return perf_session__process_event(session, event, file_offset);
+        return perf_session__process_event(session, event, file_offset, file_path);
 }
 
 static int __perf_session__process_events(struct perf_session *session)
 {
         struct reader rd = {
                 .fd              = perf_data__fd(session->data),
+                .path            = session->data->file.path,
                 .data_size       = session->header.data_size,
                 .data_offset     = session->header.data_offset,
                 .process         = process_simple,
@@ -2411,6 +2430,135 @@ out_err:
         return err;
 }
 
+/*
+ * Processing 2 MB of data from each reader in sequence,
+ * because that's the way the ordered events sorting works
+ * most efficiently.
+ */
+#define READER_MAX_SIZE (2 * 1024 * 1024)
+
+/*
+ * This function reads, merge and process directory data.
+ * It assumens the version 1 of directory data, where each
+ * data file holds per-cpu data, already sorted by kernel.
+ */
+static int __perf_session__process_dir_events(struct perf_session *session)
+{
+        struct perf_data *data = session->data;
+        struct perf_tool *tool = session->tool;
+        int i, ret, readers, nr_readers;
+        struct ui_progress prog;
+        u64 total_size = perf_data__size(session->data);
+        struct reader *rd;
+
+        perf_tool__fill_defaults(tool);
+
+        ui_progress__init_size(&prog, total_size, "Sorting events...");
+
+        nr_readers = 1;
+        for (i = 0; i < data->dir.nr; i++) {
+                if (data->dir.files[i].size)
+                        nr_readers++;
+        }
+
+        rd = zalloc(nr_readers * sizeof(struct reader));
+        if (!rd)
+                return -ENOMEM;
+
+        rd[0] = (struct reader) {
+                .fd              = perf_data__fd(session->data),
+                .path            = session->data->file.path,
+                .data_size       = session->header.data_size,
+                .data_offset     = session->header.data_offset,
+                .process         = process_simple,
+                .in_place_update = session->data->in_place_update,
+        };
+        ret = reader__init(&rd[0], NULL);
+        if (ret)
+                goto out_err;
+        ret = reader__mmap(&rd[0], session);
+        if (ret)
+                goto out_err;
+        readers = 1;
+
+        for (i = 0; i < data->dir.nr; i++) {
+                if (!data->dir.files[i].size)
+                        continue;
+                rd[readers] = (struct reader) {
+                        .fd              = data->dir.files[i].fd,
+                        .path            = data->dir.files[i].path,
+                        .data_size       = data->dir.files[i].size,
+                        .data_offset     = 0,
+                        .process         = process_simple,
+                        .in_place_update = session->data->in_place_update,
+                };
+                ret = reader__init(&rd[readers], NULL);
+                if (ret)
+                        goto out_err;
+                ret = reader__mmap(&rd[readers], session);
+                if (ret)
+                        goto out_err;
+                readers++;
+        }
+
+        i = 0;
+        while (readers) {
+                if (session_done())
+                        break;
+
+                if (rd[i].done) {
+                        i = (i + 1) % nr_readers;
+                        continue;
+                }
+                if (reader__eof(&rd[i])) {
+                        rd[i].done = true;
+                        readers--;
+                        continue;
+                }
+
+                session->active_decomp = &rd[i].decomp_data;
+                ret = reader__read_event(&rd[i], session, &prog);
+                if (ret < 0) {
+                        goto out_err;
+                } else if (ret == READER_NODATA) {
+                        ret = reader__mmap(&rd[i], session);
+                        if (ret)
+                                goto out_err;
+                }
+
+                if (rd[i].size >= READER_MAX_SIZE) {
+                        rd[i].size = 0;
+                        i = (i + 1) % nr_readers;
+                }
+        }
+
+        ret = ordered_events__flush(&session->ordered_events, OE_FLUSH__FINAL);
+        if (ret)
+                goto out_err;
+
+        ret = perf_session__flush_thread_stacks(session);
+out_err:
+        ui_progress__finish();
+
+        if (!tool->no_warn)
+                perf_session__warn_about_errors(session);
+
+        /*
+         * We may switching perf.data output, make ordered_events
+         * reusable.
+         */
+        ordered_events__reinit(&session->ordered_events);
+
+        session->one_mmap = false;
+
+        session->active_decomp = &session->decomp_data;
+        for (i = 0; i < nr_readers; i++)
+                reader__release_decomp(&rd[i]);
+        zfree(&rd);
+
+        return ret;
+}
+
 int perf_session__process_events(struct perf_session *session)
 {
         if (perf_session__register_idle_thread(session) < 0)
@@ -2419,6 +2567,9 @@ int perf_session__process_events(struct perf_session *session)
         if (perf_data__is_pipe(session->data))
                 return __perf_session__process_pipe_events(session);
 
+        if (perf_data__is_dir(session->data))
+                return __perf_session__process_dir_events(session);
+
         return __perf_session__process_events(session);
 }