path: root/tools/perf/util/annotate.h
Age | Commit message | Author | Files | Lines
2021-03-06 | perf annotate: Show full source location with 'l' hotkey | Martin Liska | 1 | -0/+2
Right now, when line numbers are displayed, one can't easily find the source file that a line corresponds to. When a source line is selected and 'l' is pressed, the full source file location is displayed in the perf UI footer line. The hotkey works only for source code lines. Signed-off-by: Martin Liška <mliska@suse.cz> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: http://lore.kernel.org/lkml/25a6384f-d862-5dda-4fec-8f0555599c75@suse.cz Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-02-12 | perf annotate: Fix jump parsing for C++ code. | Martin Liška | 1 | -0/+1
Considering the following testcase: int foo(int a, int b) { for (unsigned i = 0; i < 1000000000; i++) a += b; return a; } int main() { foo (3, 4); return 0; } 'perf annotate' displays: 86.52 │40055e: → ja 40056c <foo(int, int)+0x26> 13.37 │400560: mov -0x18(%rbp),%eax │400563: add %eax,-0x14(%rbp) │400566: addl $0x1,-0x4(%rbp) 0.11 │40056a: → jmp 400557 <foo(int, int)+0x11> │40056c: mov -0x14(%rbp),%eax │40056f: pop %rbp and the 'ja 40056c' does not link to the location in the function. It's caused by fact that comma is wrongly parsed, it's part of function signature. With my patch I see: 86.52 │ ┌──ja 26 13.37 │ │ mov -0x18(%rbp),%eax │ │ add %eax,-0x14(%rbp) │ │ addl $0x1,-0x4(%rbp) 0.11 │ │↑ jmp 11 │26:└─→mov -0x14(%rbp),%eax and 'o' output prints: 86.52 │4005┌── ↓ ja 40056c <foo(int, int)+0x26> 13.37 │4005│0: mov -0x18(%rbp),%eax │4005│3: add %eax,-0x14(%rbp) │4005│6: addl $0x1,-0x4(%rbp) 0.11 │4005│a: ↑ jmp 400557 <foo(int, int)+0x11> │4005└─→ mov -0x14(%rbp),%eax On the contrary, compiling the very same file with gcc -x c, the parsing is fine because function arguments are not displayed: jmp 400543 <foo+0x1d> Committer testing: Before: $ cat cpp_args_annotate.c int foo(int a, int b) { for (unsigned i = 0; i < 1000000000; i++) a += b; return a; } int main() { foo (3, 4); return 0; } $ gcc --version |& head -1 gcc (GCC) 10.2.1 20201125 (Red Hat 10.2.1-9) $ gcc -g cpp_args_annotate.c -o cpp_args_annotate $ perf record ./cpp_args_annotate [ perf record: Woken up 2 times to write data ] [ perf record: Captured and wrote 0.275 MB perf.data (7188 samples) ] $ perf annotate --stdio2 foo Samples: 7K of event 'cycles:u', 4000 Hz, Event count (approx.): 7468429289, [percent: local period] foo() /home/acme/c/cpp_args_annotate Percent 0000000000401106 <foo>: foo(): int foo(int a, int b) { push %rbp mov %rsp,%rbp mov %edi,-0x14(%rbp) mov %esi,-0x18(%rbp) for (unsigned i = 0; i < 1000000000; i++) movl $0x0,-0x4(%rbp) ↓ jmp 1d a += b; 13.45 13: mov -0x18(%rbp),%eax add %eax,-0x14(%rbp) for (unsigned i = 0; i < 1000000000; i++) addl $0x1,-0x4(%rbp) 0.09 1d: cmpl $0x3b9ac9ff,-0x4(%rbp) 86.46 ↑ jbe 13 return a; mov -0x14(%rbp),%eax } pop %rbp ← retq $ I.e. works for C, now lets switch to C++: $ g++ -g cpp_args_annotate.c -o cpp_args_annotate $ perf record ./cpp_args_annotate [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.268 MB perf.data (6976 samples) ] $ perf annotate --stdio2 foo Samples: 6K of event 'cycles:u', 4000 Hz, Event count (approx.): 7380681761, [percent: local period] foo() /home/acme/c/cpp_args_annotate Percent 0000000000401106 <foo(int, int)>: foo(int, int): int foo(int a, int b) { push %rbp mov %rsp,%rbp mov %edi,-0x14(%rbp) mov %esi,-0x18(%rbp) for (unsigned i = 0; i < 1000000000; i++) movl $0x0,-0x4(%rbp) cmpl $0x3b9ac9ff,-0x4(%rbp) 86.53 → ja 40112c <foo(int, int)+0x26> a += b; 13.32 mov -0x18(%rbp),%eax 0.00 add %eax,-0x14(%rbp) for (unsigned i = 0; i < 1000000000; i++) addl $0x1,-0x4(%rbp) 0.15 → jmp 401117 <foo(int, int)+0x11> return a; mov -0x14(%rbp),%eax } pop %rbp ← retq $ Reproduced. 
Now with this patch: Reusing the C++ built binary, as we can see here: $ readelf -wi cpp_args_annotate | grep producer <c> DW_AT_producer : (indirect string, offset: 0x2e): GNU C++14 10.2.1 20201125 (Red Hat 10.2.1-9) -mtune=generic -march=x86-64 -g $ And furthermore: $ file cpp_args_annotate cpp_args_annotate: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=4fe3cab260204765605ec630d0dc7a7e93c361a9, for GNU/Linux 3.2.0, with debug_info, not stripped $ perf buildid-list -i cpp_args_annotate 4fe3cab260204765605ec630d0dc7a7e93c361a9 $ perf buildid-list | grep cpp_args_annotate 4fe3cab260204765605ec630d0dc7a7e93c361a9 /home/acme/c/cpp_args_annotate $ It now works: $ perf annotate --stdio2 foo Samples: 6K of event 'cycles:u', 4000 Hz, Event count (approx.): 7380681761, [percent: local period] foo() /home/acme/c/cpp_args_annotate Percent 0000000000401106 <foo(int, int)>: foo(int, int): int foo(int a, int b) { push %rbp mov %rsp,%rbp mov %edi,-0x14(%rbp) mov %esi,-0x18(%rbp) for (unsigned i = 0; i < 1000000000; i++) movl $0x0,-0x4(%rbp) 11: cmpl $0x3b9ac9ff,-0x4(%rbp) 86.53 ↓ ja 26 a += b; 13.32 mov -0x18(%rbp),%eax 0.00 add %eax,-0x14(%rbp) for (unsigned i = 0; i < 1000000000; i++) addl $0x1,-0x4(%rbp) 0.15 ↑ jmp 11 return a; 26: mov -0x14(%rbp),%eax } pop %rbp ← retq $ Signed-off-by: Martin Liška <mliska@suse.cz> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Slaby <jslaby@suse.cz> Link: http://lore.kernel.org/lkml/13e1a405-edf9-e4c2-4327-a9b454353730@suse.cz Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
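For illustration only (this is not the code from the patch itself), a minimal standalone C sketch of why a naive comma split fails on demangled C++ targets such as "40056c <foo(int, int)+0x26>", and how the offset after the '+' in the "<symbol+offset>" part can be used instead; parse_jump_offset() and its parsing strategy are assumptions made for this example:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static long parse_jump_offset(const char *ops)
{
	const char *plus = strchr(ops, '<');	/* start of the "<sym+off>" part */

	plus = plus ? strchr(plus, '+') : NULL;
	if (plus)				/* "<foo(int, int)+0x26>" -> 0x26 */
		return strtol(plus + 1, NULL, 16);
	return strtol(ops, NULL, 16);		/* fall back to the raw address */
}

int main(void)
{
	/* Commas inside the C++ signature never reach the operand parser. */
	printf("%#lx\n", parse_jump_offset("40056c <foo(int, int)+0x26>"));	/* 0x26 */
	return 0;
}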
2020-05-28 | perf tools: Replace zero-length array with flexible-array | Gustavo A. R. Silva | 1 | -2/+2
The current codebase makes use of the zero-length array language extension to the C90 standard, but the preferred mechanism to declare variable-length types such as these ones is a flexible array member[1][2], introduced in C99: struct foo { int stuff; struct boo array[]; }; By making use of the mechanism above, we will get a compiler warning in case the flexible array does not occur last in the structure, which will help us prevent some kind of undefined behavior bugs from being inadvertently introduced[3] to the codebase from now on. Also, notice that, dynamic memory allocations won't be affected by this change: "Flexible array members have incomplete type, and so the sizeof operator may not be applied. As a quirk of the original implementation of zero-length arrays, sizeof evaluates to zero."[1] sizeof(flexible-array-member) triggers a warning because flexible array members have incomplete type[1]. There are some instances of code in which the sizeof operator is being incorrectly/erroneously applied to zero-length arrays and the result is zero. Such instances may be hiding some bugs. So, this work (flexible-array member conversions) will also help to get completely rid of those sorts of issues. This issue was found with the help of Coccinelle. [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html [2] https://github.com/KSPP/linux/issues/21 [3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour") Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Gustavo A. R. Silva <gustavo@embeddedor.com> Cc: Ian Rogers <irogers@google.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200515172926.GA31976@embeddedor Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
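For reference, a minimal before/after sketch of the conversion described above; the struct names follow the generic example in the commit message, not the actual annotate.h members:

#include <stdio.h>

/* Before: GNU zero-length array extension; sizeof(array) "works" and is 0,
 * which can silently hide bugs. */
struct foo_old {
	int stuff;
	int array[0];
};

/* After: C99 flexible array member; it must be last, and applying sizeof to
 * it is a compile-time error, so misuse is caught early. */
struct foo_new {
	int stuff;
	int array[];
};

int main(void)
{
	/* Dynamic allocation is unaffected: both take sizeof(struct) + n * sizeof(elem). */
	printf("%zu %zu\n", sizeof(struct foo_old), sizeof(struct foo_new));
	return 0;
}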
2020-03-24 | perf report: Support interactive annotation of code without symbols | Jin Yao | 1 | -0/+1
For perf report on stripped binaries it is currently impossible to do annotation. The annotation state is all tied to symbols, but there are either no symbols, or symbols are not covering all the code. We should support the annotation functionality even without symbols. This patch fakes a symbol and the symbol name is the string of address. After that, we just follow current annotation working flow. For example, 1. perf report Overhead Command Shared Object Symbol 20.67% div libc-2.27.so [.] __random_r 17.29% div libc-2.27.so [.] __random 10.59% div div [.] 0x0000000000000628 9.25% div div [.] 0x0000000000000612 6.11% div div [.] 0x0000000000000645 2. Select the line of "10.59% div div [.] 0x0000000000000628" and ENTER. Annotate 0x0000000000000628 Zoom into div thread Zoom into div DSO (use the 'k' hotkey to zoom directly into the kernel) Browse map details Run scripts for samples of symbol [0x0000000000000628] Run scripts for all samples Switch to another data file in PWD Exit 3. Select the "Annotate 0x0000000000000628" and ENTER. Percent│ │ │ │ Disassembly of section .text: │ │ 0000000000000628 <.text+0x68>: │ divsd %xmm4,%xmm0 │ divsd %xmm3,%xmm1 │ movsd (%rsp),%xmm2 │ addsd %xmm1,%xmm0 │ addsd %xmm2,%xmm0 │ movsd %xmm0,(%rsp) Now we can see the dump of object starting from 0x628. v5: --- Remove the hotkey 'a' implementation from this patch. It will be moved to a separate patch. v4: --- 1. Support the hotkey 'a'. When we press 'a' on address, now it supports the annotation. 2. Change the patch title from "Support interactive annotation of code without symbols" to "perf report: Support interactive annotation of code without symbols" v3: --- Keep just the ANNOTATION_DUMMY_LEN, and remove the opts->annotate_dummy_len since it's the "maybe in future we will provide" feature. v2: --- Fix a crash issue when annotating an address in "unknown" object. The steps to reproduce this issue: perf record -e cycles:u ls perf report 75.29% ls ld-2.27.so [.] do_lookup_x 23.64% ls ld-2.27.so [.] __GI___tunables_init 1.04% ls [unknown] [k] 0xffffffff85c01210 0.03% ls ld-2.27.so [.] _start When annotating 0xffffffff85c01210, the crash happens. v2 adds checking for ms->map in add_annotate_opt(). If the object is "unknown", ms->map is NULL. Committer notes: Renamed new_annotate_sym() to symbol__new_unresolved(). Use PRIx64 to fix this issue in some 32-bit arches: ui/browsers/hists.c: In function 'symbol__new_unresolved': ui/browsers/hists.c:2474:38: error: format '%lx' expects argument of type 'long unsigned int', but argument 5 has type 'u64' {aka 'long long unsigned int'} [-Werror=format=] snprintf(name, sizeof(name), "%-#.*lx", BITS_PER_LONG / 4, addr); ~~~~~~^ ~~~~ %-#.*llx Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200227043939.4403-3-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
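A tiny standalone sketch of the committer fix mentioned above: formatting a u64 address portably with PRIx64 so 32-bit builds don't warn. The buffer size and surrounding code are illustrative, not the exact symbol__new_unresolved() code:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define BITS_PER_LONG (8 * (int)sizeof(long))

int main(void)
{
	uint64_t addr = 0x628;
	char name[64];

	/* "%-#.*lx" breaks on 32-bit arches where u64 is 'long long';
	 * PRIx64 expands to the right length modifier everywhere. */
	snprintf(name, sizeof(name), "%-#.*" PRIx64, BITS_PER_LONG / 4, addr);
	puts(name);	/* e.g. "0x0000000000000628" on a 64-bit build */
	return 0;
}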
2020-03-04 | perf annotate: Get rid of annotation->nr_jumps | Ravi Bangoria | 1 | -1/+0
The 'nr_jumps' field in 'struct annotation' has not been used since its inception in commit 2402e4a936a0 ("perf annotate browser: Show 'jumpy' functions"). Get rid of it. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Acked-by: Jiri Olsa <jolsa@redhat.com> Cc: Ian Rogers <irogers@google.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Song Liu <songliubraving@fb.com> Link: http://lore.kernel.org/lkml/20200204045233.474937-7-ravi.bangoria@linux.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-02-27 | perf annotate: Simplify disasm_line allocation and freeing code | Ravi Bangoria | 1 | -1/+0
We are allocating the disasm_line object in annotation_line__new() instead of disasm_line__new(). Similarly, annotation_line__delete() is actually freeing the disasm_line object as well. This complexity is because of privsize, but we don't need privsize anymore, so get rid of it and simplify the disasm_line allocation and freeing code. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Acked-by: Jiri Olsa <jolsa@redhat.com> Cc: Ian Rogers <irogers@google.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Song Liu <songliubraving@fb.com> Link: http://lore.kernel.org/lkml/20200204045233.474937-3-ravi.bangoria@linux.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-02-27 | perf annotate: Remove privsize from symbol__annotate() args | Ravi Bangoria | 1 | -1/+1
privsize is passed as 0 from all the symbol__annotate() callers. Remove it from argument list. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Acked-by: Jiri Olsa <jolsa@redhat.com> Cc: Ian Rogers <irogers@google.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Song Liu <songliubraving@fb.com> Link: http://lore.kernel.org/lkml/20200204045233.474937-2-ravi.bangoria@linux.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-02-27 | perf annotate: Make perf config effective | Ravi Bangoria | 1 | -3/+1
perf default config set by user in [annotate] section is totally ignored by annotate code. Fix it. Before: $ ./perf config annotate.hide_src_code=true annotate.show_nr_jumps=true annotate.show_nr_samples=true $ ./perf annotate shash │ unsigned h = 0; │ movl $0x0,-0xc(%rbp) │ while (*s) │ ↓ jmp 44 │ h = 65599 * h + *s++; 11.33 │24: mov -0xc(%rbp),%eax 43.50 │ imul $0x1003f,%eax,%ecx │ mov -0x18(%rbp),%rax After: │ movl $0x0,-0xc(%rbp) │ ↓ jmp 44 1 │1 24: mov -0xc(%rbp),%eax 4 │ imul $0x1003f,%eax,%ecx │ mov -0x18(%rbp),%rax Note that we have removed show_nr_samples and show_total_period from annotation_options because they are not used. Instead of them we use symbol_conf.show_nr_samples and symbol_conf.show_total_period. Committer testing: Using 'perf annotate --stdio2' to use the TUI rendering but emitting the output to stdio: # perf config # # perf config annotate.hide_src_code=true # perf config annotate.hide_src_code=true # # perf config annotate.show_nr_jumps=true # perf config annotate.show_nr_samples=true # perf config annotate.hide_src_code=true annotate.show_nr_jumps=true annotate.show_nr_samples=true # # Before: # perf annotate --stdio2 ObjectInstance::weak_pointer_was_finalized Samples: 1 of event 'cycles', 4000 Hz, Event count (approx.): 830873, [percent: local period] ObjectInstance::weak_pointer_was_finalized() /usr/lib64/libgjs.so.0.0.0 Percent 00000000000609f0 <ObjectInstance::weak_pointer_was_finalized()@@Base>: endbr64 cmpq $0x0,0x20(%rdi) ↓ je 10 xor %eax,%eax ← retq xchg %ax,%ax 100.00 10: push %rbp cmpq $0x0,0x18(%rdi) mov %rdi,%rbp ↓ jne 20 1b: xor %eax,%eax pop %rbp ← retq nop 20: lea 0x18(%rdi),%rdi → callq JS_UpdateWeakPointerAfterGC(JS::Heap<JSObject* cmpq $0x0,0x18(%rbp) ↑ jne 1b mov %rbp,%rdi → callq ObjectBase::jsobj_addr() const@plt mov $0x1,%eax pop %rbp ← retq # After: # perf annotate --stdio2 ObjectInstance::weak_pointer_was_finalized 2> /dev/null Samples: 1 of event 'cycles', 4000 Hz, Event count (approx.): 830873, [percent: local period] ObjectInstance::weak_pointer_was_finalized() /usr/lib64/libgjs.so.0.0.0 Samples endbr64 cmpq $0x0,0x20(%rdi) ↓ je 10 xor %eax,%eax ← retq xchg %ax,%ax 1 1 10: push %rbp cmpq $0x0,0x18(%rdi) mov %rdi,%rbp ↓ jne 20 1 1b: xor %eax,%eax pop %rbp ← retq nop 1 20: lea 0x18(%rdi),%rdi → callq JS_UpdateWeakPointerAfterGC(JS::Heap<JSObject* cmpq $0x0,0x18(%rbp) ↑ jne 1b mov %rbp,%rdi → callq ObjectBase::jsobj_addr() const@plt mov $0x1,%eax pop %rbp ← retq # # perf config annotate.show_nr_jumps annotate.show_nr_jumps=true # perf config annotate.show_nr_jumps=false # perf config annotate.show_nr_jumps annotate.show_nr_jumps=false # # perf annotate --stdio2 ObjectInstance::weak_pointer_was_finalized 2> /dev/null Samples: 1 of event 'cycles', 4000 Hz, Event count (approx.): 830873, [percent: local period] ObjectInstance::weak_pointer_was_finalized() /usr/lib64/libgjs.so.0.0.0 Samples endbr64 cmpq $0x0,0x20(%rdi) ↓ je 10 xor %eax,%eax ← retq xchg %ax,%ax 1 10: push %rbp cmpq $0x0,0x18(%rdi) mov %rdi,%rbp ↓ jne 20 1b: xor %eax,%eax pop %rbp ← retq nop 20: lea 0x18(%rdi),%rdi → callq JS_UpdateWeakPointerAfterGC(JS::Heap<JSObject* cmpq $0x0,0x18(%rbp) ↑ jne 1b mov %rbp,%rdi → callq ObjectBase::jsobj_addr() const@plt mov $0x1,%eax pop %rbp ← retq # Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Changbin Du <changbin.du@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Jin 
Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Leo Yan <leo.yan@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Song Liu <songliubraving@fb.com> Cc: Taeung Song <treeze.taeung@gmail.com> Cc: Thomas Richter <tmricht@linux.ibm.com> Cc: Yisheng Xie <xieyisheng1@huawei.com> Link: http://lore.kernel.org/lkml/20200213064306.160480-6-ravi.bangoria@linux.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-02-27 | perf annotate: Fix --show-total-period for tui/stdio2 | Ravi Bangoria | 1 | -1/+1
perf annotate --show-total-period does not really show the total period. The reason is that we have two separate variables for the same purpose: one is symbol_conf.show_total_period and the other is annotation_options.show_total_period. We save the command line option in symbol_conf.show_total_period but use annotation_options.show_total_period while rendering the tui/stdio2 browsers. We do copy symbol_conf.show_total_period to annotation__default_options.show_total_period, but that is not really effective as we don't use annotation__default_options once we copy the default options to the dynamic variable annotate.opts in cmd_annotate(). Instead of all these complications, keep only one variable and use it all over. symbol_conf.show_total_period is used by perf report/top as well, so let's kill annotation_options.show_total_period. On a side note, I've kept the annotation_options.show_total_period definition because it's still used by the perf-config code. A follow up patch to fix perf-config for annotate will remove annotation_options.show_total_period. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Changbin Du <changbin.du@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Leo Yan <leo.yan@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Song Liu <songliubraving@fb.com> Cc: Taeung Song <treeze.taeung@gmail.com> Cc: Thomas Richter <tmricht@linux.ibm.com> Cc: Yisheng Xie <xieyisheng1@huawei.com> Link: http://lore.kernel.org/lkml/20200213064306.160480-3-ravi.bangoria@linux.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-01-14 | perf tools: Support --prefix/--prefix-strip | Andi Kleen | 1 | -0/+5
The objdump utility has useful --prefix / --prefix-strip options to allow changing source code file names hardcoded into executables' debug info. Add options to 'perf report', 'perf top' and 'perf annotate', which are then passed to objdump. $ mkdir foo $ echo 'main() { for (;;); }' > foo/foo.c $ gcc -g foo/foo.c foo/foo.c:1:1: warning: return type defaults to ‘int’ [-Wimplicit-int] 1 | main() { for (;;); } | ^~~~ $ perf record ./a.out ^C[ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.230 MB perf.data (5721 samples) ] $ mv foo bar $ perf annotate <does not show source code> $ perf annotate --prefix=/home/ak/lsrc/git/bar --prefix-strip=5 <does show source code> Signed-off-by: Andi Kleen <ak@linux.intel.com> Tested-by: Jiri Olsa <jolsa@redhat.com> LPU-Reference: 20200107210444.214071-1-andi@firstfloor.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-12 | perf annotate: Pass a 'map_symbol' in places receiving a pair of 'map' and 'symbol' pointers | Arnaldo Carvalho de Melo | 1 | -14/+8
We are already passing things like: symbol__annotate(ms->sym, ms->map, ...) So shorten the signature of such functions to receive the 'map_symbol' pointer. This also paves the way to having the 'struct map_groups' pointer in the 'struct map_symbol' so that we can get rid of 'struct map'->groups. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-23yx8v1t41nzpkpi7rdrozww@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-10-11 | perf diff: Report noisy for cycles diff | Jin Yao | 1 | -0/+2
This patch prints the stddev and hist for the cycles diff of program block. It can help us to understand if the cycles is noisy or not. This patch is inspired by Andi Kleen's patch: https://lwn.net/Articles/600471/ We create new option '--cycles-hist'. Example: perf record -b ./div perf record -b ./div perf diff -c cycles # Baseline [Program Block Range] Cycles Diff Shared Object Symbol # ........ .......................................................... .... ................. ............................ # 46.72% [div.c:40 -> div.c:40] 0 div [.] main 46.72% [div.c:42 -> div.c:44] 0 div [.] main 46.72% [div.c:42 -> div.c:39] 0 div [.] main 20.54% [random_r.c:357 -> random_r.c:394] 1 libc-2.27.so [.] __random_r 20.54% [random_r.c:357 -> random_r.c:380] 0 libc-2.27.so [.] __random_r 20.54% [random_r.c:388 -> random_r.c:388] 0 libc-2.27.so [.] __random_r 20.54% [random_r.c:388 -> random_r.c:391] 0 libc-2.27.so [.] __random_r 17.04% [random.c:288 -> random.c:291] 0 libc-2.27.so [.] __random 17.04% [random.c:291 -> random.c:291] 0 libc-2.27.so [.] __random 17.04% [random.c:293 -> random.c:293] 0 libc-2.27.so [.] __random 17.04% [random.c:295 -> random.c:295] 0 libc-2.27.so [.] __random 17.04% [random.c:295 -> random.c:295] 0 libc-2.27.so [.] __random 17.04% [random.c:298 -> random.c:298] 0 libc-2.27.so [.] __random 8.40% [div.c:22 -> div.c:25] 0 div [.] compute_flag 8.40% [div.c:27 -> div.c:28] 0 div [.] compute_flag 5.14% [rand.c:26 -> rand.c:27] 0 libc-2.27.so [.] rand 5.14% [rand.c:28 -> rand.c:28] 0 libc-2.27.so [.] rand 2.15% [rand@plt+0 -> rand@plt+0] 0 div [.] rand@plt 0.00% [kernel.kallsyms] [k] __x86_indirect_thunk_rax 0.00% [do_mmap+714 -> do_mmap+732] -10 [kernel.kallsyms] [k] do_mmap 0.00% [do_mmap+737 -> do_mmap+765] 1 [kernel.kallsyms] [k] do_mmap 0.00% [do_mmap+262 -> do_mmap+299] 0 [kernel.kallsyms] [k] do_mmap 0.00% [__x86_indirect_thunk_r15+0 -> __x86_indirect_thunk_r15+0] 7 [kernel.kallsyms] [k] __x86_indirect_thunk_r15 0.00% [native_sched_clock+0 -> native_sched_clock+119] -1 [kernel.kallsyms] [k] native_sched_clock 0.00% [native_write_msr+0 -> native_write_msr+16] -13 [kernel.kallsyms] [k] native_write_msr When we enable the option '--cycles-hist', the output is perf diff -c cycles --cycles-hist # Baseline [Program Block Range] Cycles Diff stddev/Hist Shared Object Symbol # ........ .......................................................... .... ................. ................. ............................ # 46.72% [div.c:40 -> div.c:40] 0 ± 37.8% ▁█▁▁██▁█ div [.] main 46.72% [div.c:42 -> div.c:44] 0 ± 49.4% ▁▁▂█▂▂▂▂ div [.] main 46.72% [div.c:42 -> div.c:39] 0 ± 24.1% ▃█▂▄▁▃▂▁ div [.] main 20.54% [random_r.c:357 -> random_r.c:394] 1 ± 33.5% ▅▂▁█▃▁▂▁ libc-2.27.so [.] __random_r 20.54% [random_r.c:357 -> random_r.c:380] 0 ± 39.4% ▁▁█▁██▅▁ libc-2.27.so [.] __random_r 20.54% [random_r.c:388 -> random_r.c:388] 0 libc-2.27.so [.] __random_r 20.54% [random_r.c:388 -> random_r.c:391] 0 ± 41.2% ▁▃▁▂█▄▃▁ libc-2.27.so [.] __random_r 17.04% [random.c:288 -> random.c:291] 0 ± 48.8% ▁▁▁▁███▁ libc-2.27.so [.] __random 17.04% [random.c:291 -> random.c:291] 0 ±100.0% ▁█▁▁▁▁▁▁ libc-2.27.so [.] __random 17.04% [random.c:293 -> random.c:293] 0 ±100.0% ▁█▁▁▁▁▁▁ libc-2.27.so [.] __random 17.04% [random.c:295 -> random.c:295] 0 ±100.0% ▁█▁▁▁▁▁▁ libc-2.27.so [.] __random 17.04% [random.c:295 -> random.c:295] 0 libc-2.27.so [.] __random 17.04% [random.c:298 -> random.c:298] 0 ± 75.6% ▃█▁▁▁▁▁▁ libc-2.27.so [.] __random 8.40% [div.c:22 -> div.c:25] 0 ± 42.1% ▁▃▁▁███▁ div [.] 
compute_flag 8.40% [div.c:27 -> div.c:28] 0 ± 41.8% ██▁▁▄▁▁▄ div [.] compute_flag 5.14% [rand.c:26 -> rand.c:27] 0 ± 37.8% ▁▁▁████▁ libc-2.27.so [.] rand 5.14% [rand.c:28 -> rand.c:28] 0 libc-2.27.so [.] rand 2.15% [rand@plt+0 -> rand@plt+0] 0 div [.] rand@plt 0.00% [kernel.kallsyms] [k] __x86_indirect_thunk_rax 0.00% [do_mmap+714 -> do_mmap+732] -10 [kernel.kallsyms] [k] do_mmap 0.00% [do_mmap+737 -> do_mmap+765] 1 [kernel.kallsyms] [k] do_mmap 0.00% [do_mmap+262 -> do_mmap+299] 0 [kernel.kallsyms] [k] do_mmap 0.00% [__x86_indirect_thunk_r15+0 -> __x86_indirect_thunk_r15+0] 7 [kernel.kallsyms] [k] __x86_indirect_thunk_r15 0.00% [native_sched_clock+0 -> native_sched_clock+119] -1 ± 38.5% ▄█▁ [kernel.kallsyms] [k] native_sched_clock 0.00% [native_write_msr+0 -> native_write_msr+16] -13 ± 47.1% ▁█▇▃▁▁ [kernel.kallsyms] [k] native_write_msr v8: --- Rebase to perf/core branch v7: --- 1. v6 got Jiri's ACK. 2. Rebase to latest perf/core branch. v6: --- 1. Jiri provides better code for using data__hpp_register() in ui_init(). Use this code in v6. v5: --- 1. Refine the use of data__hpp_register() in ui_init() according to Jiri's suggestion. v4: --- 1. Rename the new option from '--noisy' to '--cycles-hist' 2. Remove the option '-n'. 3. Only update the spark value and stats when '--cycles-hist' is enabled. 4. Remove the code of printing '..'. v3: --- 1. Move the histogram to a separate column 2. Move the svals[] out of struct stats v2: --- Jiri got a compile error, CC builtin-diff.o builtin-diff.c: In function ‘compute_cycles_diff’: builtin-diff.c:712:10: error: taking the absolute value of unsigned type ‘u64’ {aka ‘long unsigned int’} has no effect [-Werror=absolute-value] 712 | labs(pair->block_info->cycles_spark[i] - | ^~~~ Because the result of u64 - u64 is still u64. Now we change the type of cycles_spark[] to s64. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20190925011446.30678-1-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-09-30 | perf annotate: Don't return -1 for error when doing BPF disassembly | Arnaldo Carvalho de Melo | 1 | -0/+2
Return errno when open_memstream() fails and add two new special error codes for when an invalid, non-BPF file or one without BTF is passed to symbol__disassemble_bpf(), so that its callers can rely on symbol__strerror_disassemble() to convert that to a human-readable error message that can help figure out what is wrong, with hints even. Cc: Russell King - ARM Linux admin <linux@armlinux.org.uk> Cc: Song Liu <songliubraving@fb.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org>, Cc: Will Deacon <will@kernel.org> Link: https://lkml.kernel.org/n/tip-usevw9r2gcipfcrbpaueurw0@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
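A hedged sketch of the general pattern described here: returning plain errno values on ordinary failures and reserving a private range above the errnos for annotation-specific conditions, so a single strerror-style helper can render both. The constant names, the base value and the messages below are illustrative assumptions, not the identifiers used in annotate.h:

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Private error space, safely above any errno value (illustrative). */
enum {
	ANNOTATE_ERR__START = 512,
	ANNOTATE_ERR__BPF_INVALID_FILE,		/* not a BPF object */
	ANNOTATE_ERR__BPF_MISSING_BTF,		/* BPF object without BTF */
};

static const char *annotate_strerror(int err, char *buf, size_t size)
{
	switch (err) {
	case ANNOTATE_ERR__BPF_INVALID_FILE:
		snprintf(buf, size, "not a valid BPF file, please pass a BPF program");
		break;
	case ANNOTATE_ERR__BPF_MISSING_BTF:
		snprintf(buf, size, "BPF file lacks BTF, rebuild it with BTF information");
		break;
	default:
		snprintf(buf, size, "%s", strerror(err));	/* plain errno */
		break;
	}
	return buf;
}

int main(void)
{
	char buf[128];

	printf("%s\n", annotate_strerror(ENOMEM, buf, sizeof(buf)));
	printf("%s\n", annotate_strerror(ANNOTATE_ERR__BPF_MISSING_BTF, buf, sizeof(buf)));
	return 0;
}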
2019-09-30 | perf annotate: Fix arch specific ->init() failure errors | Arnaldo Carvalho de Melo | 1 | -0/+2
They are called from symbol__annotate(), and to propagate errors that can help understand the problem, make them return what symbol__strerror_disassemble() knows, i.e. errno codes and other annotation-specific errors in a special range outside the errnos. Reported-by: Russell King - ARM Linux admin <linux@armlinux.org.uk> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org>, Cc: Will Deacon <will@kernel.org> Link: https://lkml.kernel.org/n/tip-pqx7srcv7tixgid251aeboj6@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-30 | libperf: Add nr_entries to struct perf_evlist | Jiri Olsa | 1 | -1/+1
Move the nr_entries count from 'struct perf' into the perf_evlist struct. Committer notes: Fix the tools/perf/arch/s390/util/auxtrace.c case, and also the comment in tools/perf/util/annotate.h. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20190721112506.12306-42-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-30 | perf evsel: Rename struct perf_evsel to struct evsel | Jiri Olsa | 1 | -13/+13
Rename struct perf_evsel to struct evsel, so we don't have a name clash when we add struct perf_evsel in libperf. Committer notes: Added fixes for arm64, provided by Jiri. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20190721112506.12306-5-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-03-20 | perf annotate: Enable annotation of BPF programs | Song Liu | 1 | -0/+1
In symbol__disassemble(), DSO_BINARY_TYPE__BPF_PROG_INFO dso calls into a new function symbol__disassemble_bpf(), where annotation line information is filled based on the bpf_prog_info and btf data saved in given perf_env. symbol__disassemble_bpf() uses binutils's libopcodes to disassemble bpf programs. Committer testing: After fixing this: - u64 *addrs = (u64 *)(info_linear->info.jited_ksyms); + u64 *addrs = (u64 *)(uintptr_t)(info_linear->info.jited_ksyms); Detected when crossbuilding to a 32-bit arch. And making all this dependent on HAVE_LIBBFD_SUPPORT and HAVE_LIBBPF_SUPPORT: 1) Have a BPF program running, one that has BTF info, etc, I used the tools/perf/examples/bpf/augmented_raw_syscalls.c put in place by 'perf trace'. # grep -B1 augmented_raw ~/.perfconfig [trace] add_events = /home/acme/git/perf/tools/perf/examples/bpf/augmented_raw_syscalls.c # # perf trace -e *mmsg dnf/6245 sendmmsg(20, 0x7f5485a88030, 2, MSG_NOSIGNAL) = 2 NetworkManager/10055 sendmmsg(22<socket:[1056822]>, 0x7f8126ad1bb0, 2, MSG_NOSIGNAL) = 2 2) Then do a 'perf record' system wide for a while: # perf record -a ^C[ perf record: Woken up 68 times to write data ] [ perf record: Captured and wrote 19.427 MB perf.data (366891 samples) ] # 3) Check that we captured BPF and BTF info in the perf.data file: # perf report --header-only | grep 'b[pt]f' # event : name = cycles:ppp, , id = { 294789, 294790, 294791, 294792, 294793, 294794, 294795, 294796 }, size = 112, { sample_period, sample_freq } = 4000, sample_type = IP|TID|TIME|CPU|PERIOD, read_format = ID, disabled = 1, inherit = 1, mmap = 1, comm = 1, freq = 1, task = 1, precise_ip = 3, sample_id_all = 1, exclude_guest = 1, mmap2 = 1, comm_exec = 1, ksymbol = 1, bpf_event = 1 # bpf_prog_info of id 13 # bpf_prog_info of id 14 # bpf_prog_info of id 15 # bpf_prog_info of id 16 # bpf_prog_info of id 17 # bpf_prog_info of id 18 # bpf_prog_info of id 21 # bpf_prog_info of id 22 # bpf_prog_info of id 41 # bpf_prog_info of id 42 # btf info of id 2 # 4) Check which programs got recorded: # perf report | grep bpf_prog | head 0.16% exe bpf_prog_819967866022f1e1_sys_enter [k] bpf_prog_819967866022f1e1_sys_enter 0.14% exe bpf_prog_c1bd85c092d6e4aa_sys_exit [k] bpf_prog_c1bd85c092d6e4aa_sys_exit 0.08% fuse-overlayfs bpf_prog_819967866022f1e1_sys_enter [k] bpf_prog_819967866022f1e1_sys_enter 0.07% fuse-overlayfs bpf_prog_c1bd85c092d6e4aa_sys_exit [k] bpf_prog_c1bd85c092d6e4aa_sys_exit 0.01% clang-4.0 bpf_prog_c1bd85c092d6e4aa_sys_exit [k] bpf_prog_c1bd85c092d6e4aa_sys_exit 0.01% clang-4.0 bpf_prog_819967866022f1e1_sys_enter [k] bpf_prog_819967866022f1e1_sys_enter 0.00% clang bpf_prog_c1bd85c092d6e4aa_sys_exit [k] bpf_prog_c1bd85c092d6e4aa_sys_exit 0.00% runc bpf_prog_819967866022f1e1_sys_enter [k] bpf_prog_819967866022f1e1_sys_enter 0.00% clang bpf_prog_819967866022f1e1_sys_enter [k] bpf_prog_819967866022f1e1_sys_enter 0.00% sh bpf_prog_c1bd85c092d6e4aa_sys_exit [k] bpf_prog_c1bd85c092d6e4aa_sys_exit # This was with the default --sort order for 'perf report', which is: --sort comm,dso,symbol If we just look for the symbol, for instance: # perf report --sort symbol | grep bpf_prog | head 0.26% [k] bpf_prog_819967866022f1e1_sys_enter - - 0.24% [k] bpf_prog_c1bd85c092d6e4aa_sys_exit - - # or the DSO: # perf report --sort dso | grep bpf_prog | head 0.26% bpf_prog_819967866022f1e1_sys_enter 0.24% bpf_prog_c1bd85c092d6e4aa_sys_exit # We'll see the two BPF programs that augmented_raw_syscalls.o puts in place, one attached to the raw_syscalls:sys_enter and another to the 
raw_syscalls:sys_exit tracepoints, as expected. Now we can finally do, from the command line, annotation for one of those two symbols, with the original BPF program source coude intermixed with the disassembled JITed code: # perf annotate --stdio2 bpf_prog_819967866022f1e1_sys_enter Samples: 950 of event 'cycles:ppp', 4000 Hz, Event count (approx.): 553756947, [percent: local period] bpf_prog_819967866022f1e1_sys_enter() bpf_prog_819967866022f1e1_sys_enter Percent int sys_enter(struct syscall_enter_args *args) 53.41 push %rbp 0.63 mov %rsp,%rbp 0.31 sub $0x170,%rsp 1.93 sub $0x28,%rbp 7.02 mov %rbx,0x0(%rbp) 3.20 mov %r13,0x8(%rbp) 1.07 mov %r14,0x10(%rbp) 0.61 mov %r15,0x18(%rbp) 0.11 xor %eax,%eax 1.29 mov %rax,0x20(%rbp) 0.11 mov %rdi,%rbx return bpf_get_current_pid_tgid(); 2.02 → callq *ffffffffda6776d9 2.76 mov %eax,-0x148(%rbp) mov %rbp,%rsi int sys_enter(struct syscall_enter_args *args) add $0xfffffffffffffeb8,%rsi return bpf_map_lookup_elem(pids, &pid) != NULL; movabs $0xffff975ac2607800,%rdi 1.26 → callq *ffffffffda6789e9 cmp $0x0,%rax 2.43 → je 0 add $0x38,%rax 0.21 xor %r13d,%r13d if (pid_filter__has(&pids_filtered, getpid())) 0.81 cmp $0x0,%rax → jne 0 mov %rbp,%rdi probe_read(&augmented_args.args, sizeof(augmented_args.args), args); 2.22 add $0xfffffffffffffeb8,%rdi 0.11 mov $0x40,%esi 0.32 mov %rbx,%rdx 2.74 → callq *ffffffffda658409 syscall = bpf_map_lookup_elem(&syscalls, &augmented_args.args.syscall_nr); 0.22 mov %rbp,%rsi 1.69 add $0xfffffffffffffec0,%rsi syscall = bpf_map_lookup_elem(&syscalls, &augmented_args.args.syscall_nr); movabs $0xffff975bfcd36000,%rdi add $0xd0,%rdi 0.21 mov 0x0(%rsi),%eax 0.93 cmp $0x200,%rax → jae 0 0.10 shl $0x3,%rax 0.11 add %rdi,%rax 0.11 → jmp 0 xor %eax,%eax if (syscall == NULL || !syscall->enabled) 1.07 cmp $0x0,%rax → je 0 if (syscall == NULL || !syscall->enabled) 6.57 movzbq 0x0(%rax),%rdi if (syscall == NULL || !syscall->enabled) cmp $0x0,%rdi 0.95 → je 0 mov $0x40,%r8d switch (augmented_args.args.syscall_nr) { mov -0x140(%rbp),%rdi switch (augmented_args.args.syscall_nr) { cmp $0x2,%rdi → je 0 cmp $0x101,%rdi → je 0 cmp $0x15,%rdi → jne 0 case SYS_OPEN: filename_arg = (const void *)args->args[0]; mov 0x10(%rbx),%rdx → jmp 0 case SYS_OPENAT: filename_arg = (const void *)args->args[1]; mov 0x18(%rbx),%rdx if (filename_arg != NULL) { cmp $0x0,%rdx → je 0 xor %edi,%edi augmented_args.filename.reserved = 0; mov %edi,-0x104(%rbp) augmented_args.filename.size = probe_read_str(&augmented_args.filename.value, mov %rbp,%rdi add $0xffffffffffffff00,%rdi augmented_args.filename.size = probe_read_str(&augmented_args.filename.value, mov $0x100,%esi → callq *ffffffffda658499 mov $0x148,%r8d augmented_args.filename.size = probe_read_str(&augmented_args.filename.value, mov %eax,-0x108(%rbp) augmented_args.filename.size = probe_read_str(&augmented_args.filename.value, mov %rax,%rdi shl $0x20,%rdi shr $0x20,%rdi if (augmented_args.filename.size < sizeof(augmented_args.filename.value)) { cmp $0xff,%rdi → ja 0 len -= sizeof(augmented_args.filename.value) - augmented_args.filename.size; add $0x48,%rax len &= sizeof(augmented_args.filename.value) - 1; and $0xff,%rax mov %rax,%r8 mov %rbp,%rcx return perf_event_output(args, &__augmented_syscalls__, BPF_F_CURRENT_CPU, &augmented_args, len); add $0xfffffffffffffeb8,%rcx mov %rbx,%rdi movabs $0xffff975fbd72d800,%rsi mov $0xffffffff,%edx → callq *ffffffffda658ad9 mov %rax,%r13 } mov %r13,%rax 0.72 mov 0x0(%rbp),%rbx mov 0x8(%rbp),%r13 1.16 mov 0x10(%rbp),%r14 0.10 mov 0x18(%rbp),%r15 0.42 add $0x28,%rbp 0.54 
leaveq 0.54 ← retq # Please see 'man perf-config' to see how to control what should be seen, via ~/.perfconfig [annotate] section, for instance, one can suppress the source code and see just the disassembly, etc. Alternatively, use the TUI bu just using 'perf annotate', press '/bpf_prog' to see the bpf symbols, press enter and do the interactive annotation, which allows for dumping to a file after selecting the the various output tunables, for instance, the above without source code intermixed, plus showing all the instruction offsets: # perf annotate bpf_prog_819967866022f1e1_sys_enter Then press: 's' to hide the source code + 'O' twice to show all instruction offsets, then 'P' to print to the bpf_prog_819967866022f1e1_sys_enter.annotation file, which will have: # cat bpf_prog_819967866022f1e1_sys_enter.annotation bpf_prog_819967866022f1e1_sys_enter() bpf_prog_819967866022f1e1_sys_enter Event: cycles:ppp 53.41 0: push %rbp 0.63 1: mov %rsp,%rbp 0.31 4: sub $0x170,%rsp 1.93 b: sub $0x28,%rbp 7.02 f: mov %rbx,0x0(%rbp) 3.20 13: mov %r13,0x8(%rbp) 1.07 17: mov %r14,0x10(%rbp) 0.61 1b: mov %r15,0x18(%rbp) 0.11 1f: xor %eax,%eax 1.29 21: mov %rax,0x20(%rbp) 0.11 25: mov %rdi,%rbx 2.02 28: → callq *ffffffffda6776d9 2.76 2d: mov %eax,-0x148(%rbp) 33: mov %rbp,%rsi 36: add $0xfffffffffffffeb8,%rsi 3d: movabs $0xffff975ac2607800,%rdi 1.26 47: → callq *ffffffffda6789e9 4c: cmp $0x0,%rax 2.43 50: → je 0 52: add $0x38,%rax 0.21 56: xor %r13d,%r13d 0.81 59: cmp $0x0,%rax 5d: → jne 0 63: mov %rbp,%rdi 2.22 66: add $0xfffffffffffffeb8,%rdi 0.11 6d: mov $0x40,%esi 0.32 72: mov %rbx,%rdx 2.74 75: → callq *ffffffffda658409 0.22 7a: mov %rbp,%rsi 1.69 7d: add $0xfffffffffffffec0,%rsi 84: movabs $0xffff975bfcd36000,%rdi 8e: add $0xd0,%rdi 0.21 95: mov 0x0(%rsi),%eax 0.93 98: cmp $0x200,%rax 9f: → jae 0 0.10 a1: shl $0x3,%rax 0.11 a5: add %rdi,%rax 0.11 a8: → jmp 0 aa: xor %eax,%eax 1.07 ac: cmp $0x0,%rax b0: → je 0 6.57 b6: movzbq 0x0(%rax),%rdi bb: cmp $0x0,%rdi 0.95 bf: → je 0 c5: mov $0x40,%r8d cb: mov -0x140(%rbp),%rdi d2: cmp $0x2,%rdi d6: → je 0 d8: cmp $0x101,%rdi df: → je 0 e1: cmp $0x15,%rdi e5: → jne 0 e7: mov 0x10(%rbx),%rdx eb: → jmp 0 ed: mov 0x18(%rbx),%rdx f1: cmp $0x0,%rdx f5: → je 0 f7: xor %edi,%edi f9: mov %edi,-0x104(%rbp) ff: mov %rbp,%rdi 102: add $0xffffffffffffff00,%rdi 109: mov $0x100,%esi 10e: → callq *ffffffffda658499 113: mov $0x148,%r8d 119: mov %eax,-0x108(%rbp) 11f: mov %rax,%rdi 122: shl $0x20,%rdi 126: shr $0x20,%rdi 12a: cmp $0xff,%rdi 131: → ja 0 133: add $0x48,%rax 137: and $0xff,%rax 13d: mov %rax,%r8 140: mov %rbp,%rcx 143: add $0xfffffffffffffeb8,%rcx 14a: mov %rbx,%rdi 14d: movabs $0xffff975fbd72d800,%rsi 157: mov $0xffffffff,%edx 15c: → callq *ffffffffda658ad9 161: mov %rax,%r13 164: mov %r13,%rax 0.72 167: mov 0x0(%rbp),%rbx 16b: mov 0x8(%rbp),%r13 1.16 16f: mov 0x10(%rbp),%r14 0.10 173: mov 0x18(%rbp),%r15 0.42 177: add $0x28,%rbp 0.54 17b: leaveq 0.54 17c: ← retq Another cool way to test all this is to symple use 'perf top' look for those symbols, go there and press enter, annotate it live :-) Signed-off-by: Song Liu <songliubraving@fb.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stanislav Fomichev <sdf@google.com> Link: http://lkml.kernel.org/r/20190312053051.2690567-13-songliubraving@fb.com Signed-off-by: Arnaldo Carvalho de Melo 
<acme@redhat.com>
2019-03-06 | perf annotate: Calculate the max instruction name, align column to that | Arnaldo Carvalho de Melo | 1 | -3/+4
We were hardcoding '6' as the max instruction name, and we have lots that are longer than that, see the diff from two 'P' printed TUI annotations for a libc function that uses instructions with long names, such as 'vpmovmskb' with its 9 chars: --- __strcmp_avx2.annotation.before 2019-03-06 16:31:39.368020425 -0300 +++ __strcmp_avx2.annotation 2019-03-06 16:32:12.079450508 -0300 @@ -2,284 +2,284 @@ Event: cycles:ppp Percent endbr64 - 0.10 mov %edi,%eax + 0.10 mov %edi,%eax - xor %edx,%edx + xor %edx,%edx - 3.54 vpxor %ymm7,%ymm7,%ymm7 + 3.54 vpxor %ymm7,%ymm7,%ymm7 - or %esi,%eax + or %esi,%eax - and $0xfff,%eax + and $0xfff,%eax - cmp $0xf80,%eax + cmp $0xf80,%eax - ↓ jg 370 + ↓ jg 370 - 27.07 vmovdqu (%rdi),%ymm1 + 27.07 vmovdqu (%rdi),%ymm1 - 7.97 vpcmpeqb (%rsi),%ymm1,%ymm0 + 7.97 vpcmpeqb (%rsi),%ymm1,%ymm0 - 2.15 vpminub %ymm1,%ymm0,%ymm0 + 2.15 vpminub %ymm1,%ymm0,%ymm0 - 4.09 vpcmpeqb %ymm7,%ymm0,%ymm0 + 4.09 vpcmpeqb %ymm7,%ymm0,%ymm0 - 0.43 vpmovmskb %ymm0,%ecx + 0.43 vpmovmskb %ymm0,%ecx - 1.53 test %ecx,%ecx + 1.53 test %ecx,%ecx - ↓ je b0 + ↓ je b0 - 5.26 tzcnt %ecx,%edx + 5.26 tzcnt %ecx,%edx - 18.40 movzbl (%rdi,%rdx,1),%eax + 18.40 movzbl (%rdi,%rdx,1),%eax - 7.09 movzbl (%rsi,%rdx,1),%edx + 7.09 movzbl (%rsi,%rdx,1),%edx - 3.34 sub %edx,%eax + 3.34 sub %edx,%eax 2.37 vzeroupper ← retq nop - 50: tzcnt %ecx,%edx + 50: tzcnt %ecx,%edx - movzbl 0x20(%rdi,%rdx,1),%eax + movzbl 0x20(%rdi,%rdx,1),%eax - movzbl 0x20(%rsi,%rdx,1),%edx + movzbl 0x20(%rsi,%rdx,1),%edx - sub %edx,%eax + sub %edx,%eax vzeroupper ← retq - data16 nopw %cs:0x0(%rax,%rax,1) + data16 nopw %cs:0x0(%rax,%rax,1) Reported-by: Travis Downs <travis.downs@gmail.com> LPU-Reference: CAOBGo4z1KfmWeOm6Et0cnX5Z6DWsG2PQbAvRn1MhVPJmXHrc5g@mail.gmail.com Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-89wsdd9h9g6bvq52sgp6d0u4@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
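A minimal sketch of the idea: scan the disassembly once for the longest mnemonic and use that as the printf field width instead of a hardcoded 6. The line array and the padding loop are made up for this example; they are not perf's internal data structures:

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *ins[] = { "mov", "vpcmpeqb", "vpmovmskb", "tzcnt", "retq" };
	size_t i, max_ins_name = 0;

	/* First pass: find the widest instruction name (9 for "vpmovmskb"). */
	for (i = 0; i < sizeof(ins) / sizeof(ins[0]); i++)
		if (strlen(ins[i]) > max_ins_name)
			max_ins_name = strlen(ins[i]);

	/* Second pass: pad every mnemonic to that width so operands line up. */
	for (i = 0; i < sizeof(ins) / sizeof(ins[0]); i++)
		printf("%-*s <operands>\n", (int)max_ins_name, ins[i]);
	return 0;
}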
2019-01-25 | perf annotate: Remove lots of headers from annotate.h | Arnaldo Carvalho de Melo | 1 | -3/+11
To reduce the chances changes trigger tons of rebuilds, more to come. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-ytbykaku63862guk7muflcy4@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17 | perf annotate: Compute average IPC and IPC coverage per symbol | Jin Yao | 1 | -0/+5
Add support to 'perf report' annotate view or 'perf annotate --stdio2' to aggregate the IPC derived from timed LBRs per symbol. We compute the average IPC and the IPC coverage percentage. For example: $ perf annotate --stdio2 Percent IPC Cycle (Average IPC: 2.30, IPC Coverage: 54.8%) Disassembly of section .text: 000000000003aac0 <random@@GLIBC_2.2.5>: 8.32 3.28 sub $0x18,%rsp 3.28 mov $0x1,%esi 3.28 xor %eax,%eax 3.28 cmpl $0x0,argp_program_version_hook@@GLIBC_2.2.5+0x1e0 11.57 3.28 1 ↓ je 20 lock cmpxchg %esi,__abort_msg@@GLIBC_PRIVATE+0x8a0 ↓ jne 29 ↓ jmp 43 11.57 1.10 20: cmpxchg %esi,__abort_msg@@GLIBC_PRIVATE+0x8a0 0.00 1.10 1 ↓ je 43 29: lea __abort_msg@@GLIBC_PRIVATE+0x8a0,%rdi sub $0x80,%rsp → callq __lll_lock_wait_private add $0x80,%rsp 0.00 3.00 43: lea __ctype_b@GLIBC_2.2.5+0x38,%rdi 3.00 lea 0xc(%rsp),%rsi 8.49 3.00 1 → callq __random_r 7.91 1.94 cmpl $0x0,argp_program_version_hook@@GLIBC_2.2.5+0x1e0 0.00 1.94 1 ↓ je 68 lock decl __abort_msg@@GLIBC_PRIVATE+0x8a0 ↓ jne 70 ↓ jmp 8a 0.00 2.00 68: decl __abort_msg@@GLIBC_PRIVATE+0x8a0 21.56 2.00 1 ↓ je 8a 70: lea __abort_msg@@GLIBC_PRIVATE+0x8a0,%rdi sub $0x80,%rsp → callq __lll_unlock_wake_private add $0x80,%rsp 21.56 2.90 8a: movslq 0xc(%rsp),%rax 2.90 add $0x18,%rsp 9.03 2.90 1 ← retq It shows for this symbol the average IPC is 2.30 and the IPC coverage is 54.8%. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1543586097-27632-2-git-send-email-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
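Roughly, per line the IPC is the instruction count divided by the timed-LBR cycle count for the block ending there, and the two summary numbers aggregate that over the symbol. A simplified, hedged sketch of that aggregation, using a made-up per-line data layout rather than perf's internal structs:

#include <stdio.h>

struct line { unsigned long insn, cycles; };	/* per-line timed-LBR data (illustrative) */

int main(void)
{
	/* Lines without cycle samples simply don't contribute. */
	struct line lines[] = { {4, 0}, {5, 2}, {3, 1}, {6, 0}, {2, 1} };
	unsigned long covered_insn = 0, total_insn = 0;
	double ipc_sum = 0.0;
	unsigned int n = 0, i;

	for (i = 0; i < sizeof(lines) / sizeof(lines[0]); i++) {
		total_insn += lines[i].insn;
		if (!lines[i].cycles)
			continue;
		ipc_sum += (double)lines[i].insn / lines[i].cycles;
		covered_insn += lines[i].insn;
		n++;
	}

	/* Average IPC over the covered blocks, and the share of instructions they cover. */
	printf("Average IPC: %.2f, IPC Coverage: %.1f%%\n",
	       n ? ipc_sum / n : 0.0,
	       total_insn ? 100.0 * covered_insn / total_insn : 0.0);
	return 0;
}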
2018-08-30 | perf annotate: Fix parsing aarch64 branch instructions after objdump update | Kim Phillips | 1 | -0/+1
Starting with binutils 2.28, aarch64 objdump adds comments to the disassembly output to show the alternative names of a condition code [1]. It is assumed that commas in objdump comments could occur in other arches now or in the future, so this fix is arch-independent. The fix could have been done with arm64 specific jump__parse and jump__scnprintf functions, but the jump__scnprintf instruction would have to have its comment character be a literal, since the scnprintf functions cannot receive a struct arch easily. This inconvenience also applies to the generic jump__scnprintf, which is why we add a raw_comment pointer to struct ins_operands, so the __parse function assigns it to be re-used by its corresponding __scnprintf function. Example differences in 'perf annotate --stdio2' output on an aarch64 perf.data file: BEFORE: → b.cs ffff200008133d1c <unwind_frame+0x18c> // b.hs, dffff7ecc47b AFTER : ↓ b.cs 18c BEFORE: → b.cc ffff200008d8d9cc <get_alloc_profile+0x31c> // b.lo, b.ul, dffff727295b AFTER : ↓ b.cc 31c The branch target labels 18c and 31c also now appear in the output: BEFORE: add x26, x29, #0x80 AFTER : 18c: add x26, x29, #0x80 BEFORE: add x21, x21, #0x8 AFTER : 31c: add x21, x21, #0x8 The Fixes: tag below is added so stable branches will get the update; it doesn't necessarily mean that commit was broken at the time, rather it didn't withstand the aarch64 objdump update. Tested no difference in output for sample x86_64, power arch perf.data files. [1] https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commit;h=bb7eff5206e4795ac79c177a80fe9f4630aaf730 Signed-off-by: Kim Phillips <kim.phillips@arm.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Anton Blanchard <anton@samba.org> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Taeung Song <treeze.taeung@gmail.com> Cc: linux-arm-kernel@lists.infradead.org Fixes: b13bbeee5ee6 ("perf annotate: Fix branch instruction with multiple operands") Link: http://lkml.kernel.org/r/20180827125340.a2f7e291901d17cea05daba4@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
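A standalone sketch of the comment-stripping idea: cut the operand string at the objdump comment marker so the commas inside the comment never reach the operand parser, while keeping the comment text around for later printing (in the real change it is stored in the new ins_operands->raw_comment field). The helper name and marker handling below are illustrative:

#include <stdio.h>
#include <string.h>

static char *ops_strip_comment(char *raw, const char *marker)
{
	char *comment = strstr(raw, marker);

	if (!comment)
		return NULL;
	*comment = '\0';			/* terminate the operand part */
	comment += strlen(marker);
	return comment + strspn(comment, " ");	/* skip spaces after the marker */
}

int main(void)
{
	char ops[] = "ffff200008133d1c <unwind_frame+0x18c> // b.hs, dffff7ecc47b";
	char *raw_comment = ops_strip_comment(ops, "//");

	printf("operands: '%s'\ncomment : '%s'\n", ops, raw_comment ? raw_comment : "");
	return 0;
}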
2018-08-08 | perf annotate: Add --percent-type option | Jiri Olsa | 1 | -0/+2
Add --percent-type option to set annotation percent type from following choices: global-period, local-period, global-hits, local-hits Examples: $ perf annotate --percent-type period-local --stdio | head -1 Percent | Source code ... es, percent: local period) $ perf annotate --percent-type hits-local --stdio | head -1 Percent | Source code ... es, percent: local hits) $ perf annotate --percent-type hits-global --stdio | head -1 Percent | Source code ... es, percent: global hits) $ perf annotate --percent-type period-global --stdio | head -1 Percent | Source code ... es, percent: global period) The local/global keywords set if the percentage is computed in the scope of the function (local) or the whole data (global). The period/hits keywords set the base the percentage is computed on - the samples period or the number of samples (hits). Signed-off-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/20180804130521.11408-20-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
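The four combinations reduce to which numerator/denominator pair is used for a line: the line's period or hit count, divided by either the function's totals (local) or the whole profile's totals (global). A hedged sketch of that arithmetic, with illustrative field names and numbers:

#include <stdio.h>

/* Per-line and aggregate counters (illustrative layout). */
struct counts { unsigned long long period, hits; };

static double percent(unsigned long long part, unsigned long long total)
{
	return total ? 100.0 * part / total : 0.0;
}

int main(void)
{
	struct counts line = { .period = 120000, .hits = 3 };
	struct counts sym  = { .period = 400000, .hits = 10 };	/* this function only */
	struct counts all  = { .period = 900000, .hits = 25 };	/* whole profile */

	printf("local period : %6.2f%%\n", percent(line.period, sym.period));
	printf("global period: %6.2f%%\n", percent(line.period, all.period));
	printf("local hits   : %6.2f%%\n", percent(line.hits, sym.hits));
	printf("global hits  : %6.2f%%\n", percent(line.hits, all.hits));
	return 0;
}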
2018-08-08 | perf annotate: Add support to toggle percent type | Jiri Olsa | 1 | -0/+16
Add new key bindings to toggle the percent type/base in the annotation UI browser: 'p' to switch between local and global percent type, 'b' to switch between hits and period percent base. Add the following help messages to the UI browser '?' window: ... p Toggle percent type [local/global] b Toggle percent base [period/hits] ... Signed-off-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/20180804130521.11408-17-jolsa@kernel.org [ Moved percent_type to be the last arg to sym_title(), it's an arg to what is being formatted (buf, size) ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-08 | perf annotate: Pass 'struct annotation_options' to map_symbol__annotation_dump() | Jiri Olsa | 1 | -2/+4
Pass 'struct annotation_options' to map_symbol__annotation_dump(), to carry on and pass the percent_type value. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/20180804130521.11408-15-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-08 | perf annotate: Add percent_type to struct annotation_options | Jiri Olsa | 1 | -0/+1
It will be used to carry user selection of percent type for annotation output. Passing the percent_type to the annotation_line__print function as the first step and making it default to current percentage type (PERCENT_HITS_LOCAL) value. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/20180804130521.11408-13-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-08 | perf annotate: Add PERCENT_PERIOD_GLOBAL percent value | Jiri Olsa | 1 | -0/+1
Adding and computing global period percent value for annotation line. Storing it in struct annotation_data percent array under new PERCENT_PERIOD_GLOBAL index. At the moment it's not displayed, it's coming in following patches. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/20180804130521.11408-12-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-08 | perf annotate: Add PERCENT_PERIOD_LOCAL percent value | Jiri Olsa | 1 | -0/+1
Adding and computing local period percent value for annotation line. Storing it in struct annotation_data percent array under new PERCENT_PERIOD_LOCAL index. At the moment it's not displayed, it's coming in following patches. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/20180804130521.11408-11-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-08 | perf annotate: Add PERCENT_HITS_GLOBAL percent value | Jiri Olsa | 1 | -0/+1
Adding and computing global hits percent value for annotation line. Storing it in struct annotation_data percent array under new PERCENT_HITS_GLOBAL index. At the moment it's not displayed, it's coming in following patches. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/20180804130521.11408-10-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-08 | perf annotate: Switch struct annotation_data::percent to array | Jiri Olsa | 1 | -1/+12
So we can hold multiple percent values for an annotation line. The first member of this array is the current local hits percent value (PERCENT_HITS_LOCAL index), so no functional change is expected. Add the annotation_data__percent() function to return the requested percent value from struct annotation_data. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/20180804130521.11408-9-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
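A hedged sketch of the shape this gives the data: one array indexed by the percent type, plus a small accessor. The enum and function names follow the commit texts above and below; the struct layout and values are simplified for illustration:

#include <stdio.h>

enum percent_type {
	PERCENT_HITS_LOCAL,	/* index 0: the existing behaviour, so nothing changes by default */
	PERCENT_HITS_GLOBAL,
	PERCENT_PERIOD_LOCAL,
	PERCENT_PERIOD_GLOBAL,
	PERCENT_MAX,
};

struct annotation_data {
	double percent[PERCENT_MAX];
};

static double annotation_data__percent(struct annotation_data *data,
					unsigned int which)
{
	return which < PERCENT_MAX ? data->percent[which] : 0.0;
}

int main(void)
{
	struct annotation_data data = { .percent = { 12.5, 3.1, 14.0, 3.4 } };

	printf("%.1f%%\n", annotation_data__percent(&data, PERCENT_HITS_LOCAL));
	return 0;
}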
2018-08-08 | perf annotate: Rename struct annotation_line::samples* to data* | Jiri Olsa | 1 | -2/+2
The name 'samples*' is a little confusing because we have a nested 'struct sym_hist_entry' under the annotation_line struct, which holds 'nr_samples' as well. Also, the holding struct's name is 'annotation_data', so the 'data' name fits better. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/20180804130521.11408-5-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-08 | perf annotate: Get rid of annotation__scnprintf_samples_period() | Jiri Olsa | 1 | -7/+0
We have a more current function to get the title for annotation: hists__scnprintf_title(). They both have the same output as far as the annotation's header line goes. They differ in the counting of nr_samples: hists__scnprintf_title() provides a more accurate number based on the setup of the symbol_conf.filter_relative variable. Plus, it also displays any uid/thread/dso/socket filters/zooms if any are set, which annotation__scnprintf_samples_period() does not. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/20180804130521.11408-4-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-08 | perf annotate: Make annotation_line__max_percent static | Jiri Olsa | 1 | -1/+0
There's no outside user of it. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/20180804130521.11408-3-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-08 | perf annotate: Make symbol__annotate_fprintf2() local | Jiri Olsa | 1 | -1/+0
There's no outside user of it. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: https://lkml.kernel.org/r/20180804130521.11408-2-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04 | perf annotate: Move objdump_path to struct annotation_options | Arnaldo Carvalho de Melo | 1 | -0/+1
One more step in grouping annotation options. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-sogzdhugoavm6fyw60jnb0vs@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04perf hists browser: Pass annotation_options from tool to browserArnaldo Carvalho de Melo1-3/+4
So that things changed in the command line may percolate to the browser code without using globals. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-5daawc40zhl6gcs600com1ua@git.kernel.org [ Merged fix for NO_SLANG=1 build provided by Jiri Olsa ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04perf annotate: Move disassembler_style global to annotation_optionsArnaldo Carvalho de Melo1-2/+1
Continuing to group annotation specific stuff into a struct. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-p3cdhltj58jt0byjzg3g7obx@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04perf annotate: Adopt annotation options from symbol_confArnaldo Carvalho de Melo1-1/+3
Continuing to group annotation options in an annotation specific struct. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-astei92tzxp4yccag5pxb2h7@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
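The direction of this series can be pictured with a small standalone sketch (a hypothetical, trimmed-down field set; the real struct in annotate.h carries more members):

  #include <stdbool.h>
  #include <stdio.h>

  /* Hypothetical, trimmed-down view of the per-tool options struct being
   * grown across these commits. */
  struct annotation_options {
          bool use_offset;                /* show offsets rather than raw addresses */
          bool print_lines;               /* interleave source line numbers */
          const char *objdump_path;       /* moved here from a global */
          const char *disassembler_style; /* ditto */
  };

  int main(void)
  {
          /* Each tool fills in its own instance and hands it down the call
           * chain instead of poking at globals or symbol_conf. */
          struct annotation_options opts = {
                  .use_offset = true,
                  .objdump_path = "/usr/bin/objdump",  /* example value */
          };

          printf("objdump: %s, offsets: %d\n", opts.objdump_path, opts.use_offset);
          return 0;
  }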
2018-06-04perf annotate: Pass annotation_options to symbol__annotate()Arnaldo Carvalho de Melo1-0/+1
Now all callers to symbol__disassemble() can hand it the per-tool annotation_options, which will allow us to remove lots of stuff from symbol_conf, the kitchen sink of perf configs, reducing its size and getting annotation specific stuff grouped together. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-vpr7ys7ggvs2fzpg8wbjcw7e@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04perf annotate stdio: Use annotation_options consistentlyArnaldo Carvalho de Melo1-6/+9
Across all the routines, so that we can eventually have a consistent set of defaults for all UIs. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-6qgtixurjgdk5u0n3rw78ges@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04perf annotate: Add comment about annotated_src->nr_histogramsArnaldo Carvalho de Melo1-0/+4
When we have multiple groups in an evlist, say:

  $ perf stat -e '{cycles,instructions},{cache-references,cache-misses}' sleep 1

   Performance counter stats for 'sleep 1':

             343,134      cycles:u
             249,292      instructions:u            #    0.73  insn per cycle
              15,556      cache-references:u
               8,925      cache-misses:u            #   57.373 % of all cache refs

         1.000957550 seconds time elapsed

  $

Then the perf_evsel instances for the two group leaders ("cycles" and "cache-references") will have evsel->nr_members set to 2, while evsel->evlist->nr_entries will be set to 4 for all of them, so we can't use evsel->evlist->nr_entries everywhere, as event groups need to be taken into account. But this probably requires us to audit at least the forced-group code, where we want all of the events to be in a "group", to see them all on the screen, one column for each, even knowing that they were not necessarily scheduled to count at the same time by the kernel perf subsystem. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-2g0vwqnc49wl4ttjk8dvpgcc@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
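A tiny standalone illustration of the counting mismatch (hypothetical numbers and simplified stand-in structs, not the real evlist/evsel API):

  #include <stdio.h>

  /* Simplified stand-in: two groups, each with a leader that knows its members. */
  struct evsel {
          const char *name;
          int nr_members;  /* > 1 only for a group leader */
  };

  int main(void)
  {
          struct evsel evlist[] = {
                  { "cycles",           2 },  /* leader of group 1 */
                  { "instructions",     0 },
                  { "cache-references", 2 },  /* leader of group 2 */
                  { "cache-misses",     0 },
          };
          int nr_entries = sizeof(evlist) / sizeof(evlist[0]);

          /* nr_entries counts every event (4), while each leader's nr_members
           * only covers its own group (2), so the two cannot be used
           * interchangeably when sizing per-event histograms. */
          printf("nr_entries = %d, leader nr_members = %d\n",
                 nr_entries, evlist[0].nr_members);
          return 0;
  }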
2018-06-04perf annotate: Replace symbol__alloc_hists() with symbol__hists()Arnaldo Carvalho de Melo1-1/+1
It's a bit shorter, so ditch the old symbol__alloc_hists() function. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-m7tienxk7dijh5ln62yln1m9@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04perf annotate: __symbol__inc_addr_samples() needs just annotated_sourceArnaldo Carvalho de Melo1-2/+6
It only operates on the histograms, so no need for the encompassing 'struct annotation'. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-2se2v7rrjil0kwqywks04ey2@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04perf annotate: Split allocation of annotated_source structArnaldo Carvalho de Melo1-3/+3
So that we can allocate just the notes->src->cyc_hist, which, unlike notes->src->histograms, is not per event, and in paths where we need to lazily allocate notes->src->cyc_hist we don't have the number of events handy to also allocate ->histograms. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-tsx7dhxzpi0criyx0sio3pz3@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
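A rough sketch of the split described above (simplified types and hypothetical helper names; the real allocators size things from the symbol and the evsel list):

  #include <stdlib.h>

  /* Simplified mirrors of the structs discussed above. */
  struct cyc_hist { unsigned long long cycles, num; };
  struct sym_hist { unsigned long long nr_samples; };

  struct annotated_source {
          struct cyc_hist *cycles_hist;  /* one per offset, not per event */
          struct sym_hist *histograms;   /* one per event, needs nr_events */
          int nr_histograms;
  };

  /* Allocate only the event-independent part; this can be done lazily
   * without knowing how many events there are. */
  static struct annotated_source *annotated_source__new(size_t symbol_size)
  {
          struct annotated_source *src = calloc(1, sizeof(*src));

          if (src)
                  src->cycles_hist = calloc(symbol_size, sizeof(struct cyc_hist));
          return src;
  }

  /* The per-event histograms are allocated separately, once nr_events is known. */
  static int annotated_source__alloc_histograms(struct annotated_source *src, int nr_events)
  {
          src->histograms = calloc(nr_events, sizeof(struct sym_hist));
          src->nr_histograms = nr_events;
          return src->histograms ? 0 : -1;
  }

  int main(void)
  {
          struct annotated_source *src = annotated_source__new(64);

          return src ? annotated_source__alloc_histograms(src, 4) : -1;
  }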
2018-06-04perf annotate: Pass perf_evsel instead of just evsel->idxArnaldo Carvalho de Melo1-2/+2
The code gets shorter and we'll be able to use evsel->evlist in a followup patch. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-t0s7vy19wq5kak74kavm8swf@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-05-19perf annotate: Create hotkey 'c' to show min/max cyclesJin Yao1-1/+6
In the 'perf annotate' view, a new hotkey 'c' is created for showing the min/max cycles. For example, when pressing 'c', the annotate view is:

  Percent│ IPC Cycle(min/max)
         │
         │
         │    Disassembly of section .text:
         │
         │    000000000003aab0 <random@@GLIBC_2.2.5>:
    8.22 │3.92        sub    $0x18,%rsp
         │3.92        mov    $0x1,%esi
         │3.92        xor    %eax,%eax
         │3.92        cmpl   $0x0,argp_program_version_hook@@G
         │3.92 1(2/1) ↓ je     20
         │            lock   cmpxchg %esi,__abort_msg@@GLIBC_P
         │            ↓ jne    29
         │            ↓ jmp    43
         │1.10 20:    cmpxchg %esi,__abort_msg@@GLIBC_PRIVATE+
    8.93 │1.10 1(5/1) ↓ je     43

When pressing 'c' again, the annotate view switches back:

  Percent│ IPC Cycle
         │
         │
         │    Disassembly of section .text:
         │
         │    000000000003aab0 <random@@GLIBC_2.2.5>:
    8.22 │3.92      sub    $0x18,%rsp
         │3.92      mov    $0x1,%esi
         │3.92      xor    %eax,%eax
         │3.92      cmpl   $0x0,argp_program_version_hook@@GLIBC_2.2.5+0x
         │3.92 1    ↓ je     20
         │          lock   cmpxchg %esi,__abort_msg@@GLIBC_PRIVATE+0x8a0
         │          ↓ jne    29
         │          ↓ jmp    43
         │1.10 20:  cmpxchg %esi,__abort_msg@@GLIBC_PRIVATE+0x8a0
    8.93 │1.10 1    ↓ je     43

Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1526569118-14217-3-git-send-email-yao.jin@linux.intel.com [ Rename all maxmin to minmax ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-05-18perf annotate: Record the min/max cyclesJin Yao1-0/+4
Currently perf has a feature to account cycles for LBRs. For example, on Skylake:

  perf record -b ...
  perf report or perf annotate

And then browsing the annotate browser gives average cycle counts for program blocks. For some analysis it would be useful if we could know not only the average cycles but also the min and max cycles. This patch records the min and max cycles. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1526569118-14217-2-git-send-email-yao.jin@linux.intel.com [ Switch from max/min to min/max ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
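The recording step can be pictured with a minimal standalone sketch (simplified struct and helper; the field names are assumptions that only mirror the min/max idea):

  #include <stdio.h>

  /* Simplified per-block cycles accounting, extended with min/max. */
  struct cyc_hist {
          unsigned long long cycles;      /* running sum, for the average */
          unsigned long long num;         /* number of samples */
          unsigned long long cycles_min;  /* new: smallest observed value */
          unsigned long long cycles_max;  /* new: largest observed value */
  };

  static void cyc_hist__update(struct cyc_hist *ch, unsigned long long cycles)
  {
          if (ch->num == 0 || cycles < ch->cycles_min)
                  ch->cycles_min = cycles;
          if (cycles > ch->cycles_max)
                  ch->cycles_max = cycles;
          ch->cycles += cycles;
          ch->num++;
  }

  int main(void)
  {
          struct cyc_hist ch = { 0 };
          unsigned long long samples[] = { 5, 2, 9, 2 };

          for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
                  cyc_hist__update(&ch, samples[i]);

          /* average 4.5, min 2, max 9: what the 'c' hotkey toggles between */
          printf("avg %.1f min %llu max %llu\n",
                 (double)ch.cycles / ch.num, ch.cycles_min, ch.cycles_max);
          return 0;
  }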
2018-04-12perf annotate: Allow showing offsets in more than just jump targetsArnaldo Carvalho de Melo1-0/+9
Jesper wanted to see offsets at callq sites when doing some performance investigation related to retpolines, so save him some time by providing a 'struct annotation_options' to control where offsets should appear: just on jump targets? That + call instructions? All? This puts in place the logic to show the offsets; now we need to wire this up in the TUI browser (next patch) and on the 'perf annotate --stdio2' interface, where we need a more general mechanism to set up the 'annotation_options' struct from the command line. Suggested-by: Jesper Dangaard Brouer <brouer@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Liška <mliska@suse.cz> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com> Cc: Thomas Richter <tmricht@linux.vnet.ibm.com> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-m3jc9c3swobye9tj08gnh5i7@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
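A standalone sketch of the knob described above (the names here are assumptions; in perf the choice sits in struct annotation_options):

  #include <stdbool.h>
  #include <stdio.h>

  /* Where should offsets be shown? Mirrors the three choices in the text. */
  enum offset_level {
          OFFSET_JUMP_TARGETS,  /* just on jump targets */
          OFFSET_CALL,          /* that + call instructions */
          OFFSET_ALL,           /* on every line */
  };

  struct ins { bool is_jump_target, is_call; };

  /* Decide whether a given instruction line gets an offset prefix. */
  static bool show_offset(enum offset_level level, const struct ins *ins)
  {
          if (level == OFFSET_ALL)
                  return true;
          if (level == OFFSET_CALL && ins->is_call)
                  return true;
          return ins->is_jump_target;  /* jump targets always qualify */
  }

  int main(void)
  {
          struct ins callq = { .is_call = true };

          printf("callq shown at OFFSET_CALL: %d\n", show_offset(OFFSET_CALL, &callq));
          return 0;
  }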
2018-04-03perf annotate: Introduce annotation__scnprintf_samples_period() methodArnaldo Carvalho de Melo1-0/+12
To print a string using the total period (nr_events) and the number of samples for a given annotation, i.e. for a given symbol; the counterpart to hists__scnprintf_samples_period(), which is for all the samples in a session (be it a live session, think 'perf top', or a perf.data file, think 'perf report'). Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Martin Liška <mliska@suse.cz> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=196935 Link: https://lkml.kernel.org/n/tip-goj2wu4fxutc8vd46mw3yg14@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
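As a rough picture of what such a formatting helper does, a standalone sketch (assumed signature and output layout; the real routine works off the annotation and the evsel):

  #include <stdio.h>
  #include <inttypes.h>

  /* Build a short "Samples: N of event ..., Event count (approx.): P" header
   * line for a given symbol. */
  static int scnprintf_samples_period(char *bf, size_t size,
                                      unsigned long nr_samples,
                                      uint64_t period, const char *evname)
  {
          int n = snprintf(bf, size,
                           "Samples: %lu of event '%s', Event count (approx.): %" PRIu64,
                           nr_samples, evname, period);

          /* scnprintf semantics: report what was actually written, not what
           * would have been written had the buffer been large enough. */
          return n >= (int)size ? (int)size - 1 : n;
  }

  int main(void)
  {
          char buf[128];

          scnprintf_samples_period(buf, sizeof(buf), 2401, 1234567890ULL, "cycles:u");
          puts(buf);
          return 0;
  }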
2018-03-23perf annotate: Add "_local" to jump/offset validation routinesArnaldo Carvalho de Melo1-3/+11
Because they all really check if we can access data structures/visual constructs where a "jump" instruction targets code in the same function, i.e. things like:

  __pthread_mutex_lock  /usr/lib64/libpthread-2.26.so
    1.95 │      mov    __pthread_force_elision,%ecx
         │    ┌──test   %ecx,%ecx
    0.07 │    ├──je     60
         │    │  test   $0x300,%esi
         │    │↓ jne    60
         │    │  or     $0x100,%esi
         │    │  mov    %esi,0x10(%rdi)
         │ 42:│  mov    %esi,%edx
         │    │  lea    0x16(%r8),%rsi
         │    │  mov    %r8,%rdi
         │    │  and    $0x80,%edx
         │    │  add    $0x8,%rsp
         │    │→ jmpq   __lll_lock_elision
         │    │  nop
    0.29 │ 60:└─→and    $0x80,%esi
    0.07 │      mov    $0x1,%edi
    0.29 │      xor    %eax,%eax
    2.53 │      lock   cmpxchg %edi,(%r8)

And not things like that "jmpq __lll_lock_elision", which instead should behave like a "call" instruction and "jump" to the disassembly of "__lll_lock_elision". Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-3cwx39u3h66dfw9xjrlt7ca2@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
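The test those "_local" helpers boil down to can be sketched standalone (simplified symbol struct, assumed field names and example addresses):

  #include <stdbool.h>
  #include <stdio.h>

  /* Simplified symbol: start/end addresses of the function being annotated. */
  struct symbol { unsigned long long start, end; };

  /* A jump is "local" when its target lands inside the same function, so the
   * TUI can draw the connecting arrow; otherwise it behaves like a call. */
  static bool jump_target_is_local(const struct symbol *sym, unsigned long long target)
  {
          return target >= sym->start && target < sym->end;
  }

  int main(void)
  {
          struct symbol pthread_mutex_lock = { .start = 0x9000, .end = 0x9200 };

          printf("je 0x9060 local: %d\n", jump_target_is_local(&pthread_mutex_lock, 0x9060));
          printf("jmpq 0xa000 local: %d\n", jump_target_is_local(&pthread_mutex_lock, 0xa000));
          return 0;
  }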
2018-03-21perf annotate: Mark jumps to other functions with the call arrowArnaldo Carvalho de Melo1-0/+1
Things like this in _cpp_lex_token (gcc's cc1 program):

  cpp_named_operator2name@@Base+0xa72

point to a place that is after the cpp_named_operator2name boundaries, i.e. in the ELF symbol table for cc1 cpp_named_operator2name is marked as being 32-bytes long, but in fact it is much larger than that, so we seem to need a symbols__find() routine that looks for >= current->start and < next_symbol->start, possibly just for C++ objects? For now let's just make some progress by marking jumps to outside the current function as call-like. Actual navigation will come next, with further understanding of how the symbol searching and disassembly should be done. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-aiys0a0bsgm3e00hbi6fg7yy@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
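The symbols__find() variant being mused about, matching on [current->start, next->start) instead of the ELF-reported size, could look roughly like this (a sketch over a plain sorted array with made-up addresses, not perf's actual rbtree lookup):

  #include <stdio.h>

  struct symbol {
          const char *name;
          unsigned long long start;
  };

  /* Resolve an address against a sorted symbol table, treating each symbol as
   * extending up to the start of the next one rather than trusting its
   * ELF-reported size (which can be too small, as for cpp_named_operator2name). */
  static const struct symbol *find_covering_symbol(const struct symbol *syms,
                                                   int nr, unsigned long long addr)
  {
          for (int i = 0; i < nr; i++) {
                  unsigned long long end = (i + 1 < nr) ? syms[i + 1].start : ~0ULL;

                  if (addr >= syms[i].start && addr < end)
                          return &syms[i];
          }
          return NULL;
  }

  int main(void)
  {
          const struct symbol table[] = {
                  { "cpp_named_operator2name", 0x1000 },  /* ELF says 32 bytes, really bigger */
                  { "_cpp_lex_token",          0x2000 },
          };
          const struct symbol *s = find_covering_symbol(table, 2, 0x1a72);

          printf("0x1a72 -> %s\n", s ? s->name : "?");
          return 0;
  }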